\begin{document} \begin{abstract} In this paper, we show that any compact gradient $k$-Yamabe soliton must have constant $\sigma_{k}$-curvature. Moreover, we provide a necessary and sufficient condition for a compact $k$-Yamabe soliton to be gradient. \end{abstract} \title{TRIVIALITY RESULTS FOR COMPACT $k$-YAMABE SOLITONS} \section{Introduction and main results} \label{intro} The concept of a gradient $k$-Yamabe soliton, introduced in the celebrated work \cite{catino2012global}, is a natural generalization of gradient Yamabe solitons. We recall that a Riemannian manifold $(M^n, g)$ is a \textit{$k$-Yamabe soliton} if it admits a constant $\lambda\in\mathbb{R}$ and a vector field $X\in \mathfrak{X}(M)$ satisfying the equation
\begin{equation}\label{def1} \frac{1}{2}\mathcal{L}_{X}g=2(n-1)(\sigma_{k}-\lambda)g, \end{equation}
where $\mathcal{L}_{X}g$ and $\sigma_{k}$ stand, respectively, for the Lie derivative of $g$ in the direction of $X$ and the $\sigma_{k}$-curvature of $g$. Recall that, if we denote by $\lambda_{1},\lambda_{2},\dots,\lambda_{n}$ the eigenvalues of the symmetric endomorphism $g^{-1}A_{g}$, where $A_{g}$ is the Schouten tensor defined by
\begin{equation*} A_{g}=\frac{1}{n-2}\left(Ric_{g}-\frac{scal_{g}}{2(n-1)}g\right), \end{equation*}
then the $\sigma_{k}$-curvature of $g$ is defined as the $k$-th elementary symmetric function of $\lambda_{1},\dots,\lambda_{n}$, namely
\begin{equation*} \sigma_{k}=\sigma_{k}(g^{-1}A_{g})=\sum_{i_{1}<\dots<i_{k}}\lambda_{i_{1}}\cdots \lambda_{i_{k}}, \quad \text{for}\quad 1\leq k\leq n. \end{equation*}
Since $\sigma_{1}$ is the trace of $g^{-1}A_{g}$, that is, a constant multiple of the scalar curvature, the $1$-Yamabe solitons simply correspond to the Yamabe solitons \cite{chow1992yamabe,daskalopoulos2013classification, di2008yamabe, hamilton1988ricci, ma2012remarks,tokura2018warped}. For simplicity, the soliton will be denoted by $(M^{n}, g, X, \lambda)$. It may happen that $X=\nabla f$ is the gradient field of a smooth real function $f$ on $M$, in which case the soliton $(M^{n}, g, \nabla f, \lambda)$ is referred to as a \textit{gradient $k$-Yamabe soliton}. Equation \eqref{def1} then becomes
\begin{equation}\label{eq fundamental} \nabla^2 f=2(n-1)(\sigma_{k}-\lambda)g, \end{equation}
where $\nabla^2 f$ is the Hessian of $f$. Moreover, when either $f$ is a constant function or $X$ is a Killing vector field, the soliton is called \textit{trivial}; in this case, the metric $g$ has constant $\sigma_{k}$-curvature $\sigma_{k}=\lambda$. In recent years, much effort has been devoted to the study of the geometry of $k$-Yamabe solitons. For instance, Hsu \cite{hsu2012note} showed that any compact gradient $1$-Yamabe soliton is trivial. For $k>1$, extensions of this result were investigated by Catino et al. \cite{catino2012global} and Bo et al. \cite{bo2018k}. In \cite{catino2012global}, the authors proved that any compact gradient $k$-Yamabe soliton with nonnegative Ricci tensor is trivial. On the other hand, the authors in \cite{bo2018k} showed that any compact gradient $k$-Yamabe soliton with constant negative scalar curvature must be trivial. In this paper, we extend the above results as follows. \begin{theorem}\label{T1}Any compact gradient $k$-Yamabe soliton $(M^n,g, \nabla f,\lambda)$ is trivial, i.e., has constant $\sigma_{k}$-curvature $\sigma_{k}=\lambda$. \end{theorem} In the scope of $k$-Yamabe solitons, we provide the following extension of Theorem 1.3 in \cite{bo2018k}.
\begin{theorem}\label{T4}The compact $k$-Yamabe soliton $(M^n,g,X,\lambda)$ is trivial if one of the following conditions holds: \begin{itemize} \item [\textup{(a)}] $k=1$. \item [\textup{(b)}] $k\geq2$ and $(M^n,g)$ is locally conformally flat. \end{itemize} \end{theorem}
The Hodge-de Rham decomposition theorem (see \cite{aquino2011some, warner2013foundations}) shows that any vector field $X$ on a compact oriented Riemannian manifold $M$ can be decomposed as follows:
\begin{equation}\label{Hodge} X = \nabla h + Y, \end{equation}
where $h$ is a smooth function on $M$ and $Y\in \mathfrak{X}(M)$ is a divergence-free vector field. Indeed, it suffices to consider the $1$-form $X^{\flat}$ and apply the Hodge-de Rham theorem to decompose it as \[X^{\flat}=d\alpha+\delta\beta+\gamma,\] where $\gamma$ is a harmonic $1$-form. Taking $Y = (\delta\beta+\gamma)^{\sharp}$ and $\nabla h=(d\alpha)^{\sharp}$, we arrive at the desired decomposition. We now observe that the result obtained in \cite{pirhadi2017almost} for compact almost Yamabe solitons also holds for compact $k$-Yamabe solitons. More precisely, we have the following theorem. \begin{theorem}\label{T2}The compact $k$-Yamabe soliton $(M^n,g,X,\lambda)$ is gradient if, and only if, \[\int_{M^n}Ric(\nabla h,Y)dv_{g} \leq0,\] where $h$ and $Y$ are the Hodge-de Rham decomposition components of $X$. \end{theorem} As a consequence of Theorem \ref{T1} and Theorem \ref{T2}, we derive the following triviality result. \begin{cor}\label{corr}Let $(M^n,g,X,\lambda)$ be a compact $k$-Yamabe soliton $(k\ge2)$ and $X=\nabla h+Y$ the Hodge-de Rham decomposition of $X$. If \[\int_{M^n}Ric(\nabla h,Y)dv_{g}\leq0,\] then $(M^n, g, X, \lambda)$ is a trivial $k$-Yamabe soliton. \end{cor} An immediate consequence of the above corollary is the next result. \begin{cor}\label{cor}Any compact $k$-Yamabe soliton $(M^n,g,X,\lambda)$ with $k\ge2$ and nonpositive Ricci curvature is trivial. \end{cor} Finally, taking into account the $L^{2}(M)$ orthogonality of the Hodge-de Rham decomposition, we obtain the following result. \begin{theorem}\label{1.6}Let $(M^n,g,X,\lambda)$ be a compact $k$-Yamabe soliton $(k\ge2)$ and $X=\nabla h+Y$ the Hodge-de Rham decomposition of $X$. If \[\int_{M^n}g(\nabla h,X)dv_{g}\leq0,\] then $(M^n, g, X, \lambda)$ is a trivial $k$-Yamabe soliton. \end{theorem}
\section{Proofs}
\begin{myproof}{Theorem}{\ref{T4}} If $k=1$, then $(M^n,g)$ is a Yamabe soliton and the result is well known from \cite{di2008yamabe}. Now, consider $k\geq2$ and suppose that $(M^n,g)$ is locally conformally flat. It was proved in \cite{han2006kazdan, viaclovsky2000some} that, on a compact, locally conformally flat Riemannian manifold, one has \[\int_{M^n}g(\nabla \sigma_{k},X)dv_{g}=0,\] for every conformal Killing vector field $X$ on $(M^n,g)$. From the structure equation \eqref{def1}, we know that $X$ is a conformal Killing vector field; hence, it follows that \begin{equation}\label{9090} 0=\int_{M^n}g(\nabla \sigma_{k},X)dv_{g}=-\int_{M^n}\sigma_{k}(div X)dv_{g}=-2n(n-1)\int_{M^n}\sigma_{k}(\sigma_{k}-\lambda)dv_{g}, \end{equation} where in the second equality we have used the divergence theorem and, in the third, the identity $div X=2n(n-1)(\sigma_{k}-\lambda)$ obtained by tracing \eqref{def1}. On the other hand, again from the divergence theorem, we obtain \begin{equation}\label{8080}0=\int_{M^n}div X dv_{g}=2n(n-1)\int_{M^n}(\sigma_{k}-\lambda)dv_{g}. \end{equation} Combining equations \eqref{9090} and \eqref{8080}, we conclude that \[2n(n-1)\int_{M^n}(\sigma_{k}-\lambda)^2 dv_{g}=0,\] which implies that $\sigma_{k}=\lambda$ and $\mathcal{L}_{X}g=0$. Hence $(M^n,g)$ is trivial.
\end{myproof}
\begin{myproof}{Theorem}{\ref{T1}} If $k=1$, then $(M^n,g)$ is a gradient Yamabe soliton and the result is well known from \cite{hsu2012note}. Now, consider $k\geq2$ and suppose, by contradiction, that $f$ is nonconstant. From Theorem 1.1 of \cite{catino2012global}, we obtain that $(M^n,g)$ is rotationally symmetric and $M^{n}\setminus \{N,S\}$ is locally conformally flat. Here $N$ and $S$ correspond to the extremal points of $f$ in $M$. From the structure equation \eqref{eq fundamental}, we know that $\nabla f$ is a conformal Killing vector field; hence, we can apply Theorem 5.2 of \cite{viaclovsky2000some} to deduce \begin{equation}\label{1221} 0=\int_{M^{n}\setminus\{N,S\}}g(\nabla\sigma_{k},\nabla f)dv_{g}=\int_{M^n}g(\nabla\sigma_{k},\nabla f)dv_{g}=-2n(n-1)\int_{M^n}\sigma_{k}(\sigma_{k}-\lambda)dv_{g}, \end{equation} where in the last equality we have used the divergence theorem together with the identity $\Delta f=2n(n-1)(\sigma_{k}-\lambda)$ obtained by tracing \eqref{eq fundamental}. On the other hand, again from the divergence theorem, we get \begin{equation}\label{808080}0=\int_{M^n}\Delta f dv_{g}=2n(n-1)\int_{M^n}(\sigma_{k}-\lambda)dv_{g}. \end{equation} Combining equations \eqref{1221} and \eqref{808080}, we conclude that \[2n(n-1)\int_{M^n}(\sigma_{k}-\lambda)^2 dv_{g}=0,\] which implies that $\sigma_{k}=\lambda$ and that $f$ is harmonic. Since $M^n$ is compact, $f$ is constant, contradicting our assumption. This proves that $f$ is constant. \end{myproof}
\begin{myproof}{Theorem}{\ref{T2}} From the Hodge-de Rham decomposition \eqref{Hodge}, we deduce that \begin{equation}\label{t11111}\frac{1}{2}\mathcal{L}_{Y}g=\frac{1}{2}\mathcal{L}_{X}g-\frac{1}{2}\mathcal{L}_{\nabla h}g=2(n-1)(\sigma_{k}-\lambda)g-\nabla^{2}h. \end{equation} Therefore, to prove that $(M^n,g)$ admits a gradient $k$-Yamabe soliton structure, it is necessary and sufficient to show that $\mathcal{L}_{Y}g=0$. From \eqref{t11111}, we obtain \begin{equation}\label{t12} \begin{split} \frac{1}{4}\int_{M^n}|\mathcal{L}_{Y}g|^2 dv_{g}&=\int_{M^n}\left[4n(n-1)^2(\sigma_{k}-\lambda)^2-4(n-1)g\left(\nabla^2h,(\sigma_{k}-\lambda)g\right)+|\nabla^2 h|^2 \right]dv_{g}\\ &=\int_{M^n}\left[|\nabla^2 h|^2-4n(n-1)^2(\sigma_{k}-\lambda)^2\right]dv_{g}, \end{split} \end{equation} where in the second equality we have used that $\Delta h=div X=2n(n-1)(\sigma_{k}-\lambda)$, since $div Y=0$. We are going to compute the right-hand side of \eqref{t12} using the following identity: \begin{equation}\label{tt1}\int_{M^n}2Ric(\nabla h,Y)dv_{g}=\int_{M^n}\left[Ric(X,X)-Ric(\nabla h,\nabla h)-Ric(Y,Y)\right]dv_{g}. \end{equation} Taking the divergence of \eqref{t11111}, we get \begin{equation}\label{t2} \begin{split} \frac{1}{2}div(\mathcal{L}_{Y}g)(Y)&=\frac{1}{2}div(\mathcal{L}_{X}g)(Y)-\frac{1}{2}div(\mathcal{L}_{\nabla h}g)(Y)\\ &=2(n-1)div\big((\sigma_{k}-\lambda)g\big)(Y)-\frac{1}{2}div(\mathcal{L}_{\nabla h}g)(Y)\\ &=2(n-1)g(\nabla\sigma_{k},Y)-\frac{1}{2}div(\mathcal{L}_{\nabla h}g)(Y). \end{split} \end{equation} Hence, from the Bochner formula (see Lemma 2.1 of \cite{petersen2009rigidity}), we can express \eqref{t2} as follows: \begin{equation}\label{t4} \frac{1}{2}\Delta|Y|^2-|\nabla Y|^2+Ric(Y,Y)=4(n-1)g(\nabla\sigma_{k},Y)-2Ric(\nabla h, Y)-2g(\nabla \Delta h,Y), \end{equation} and, integrating over the compact manifold $M^n$ and using that $div Y=0$, we arrive at the equation \begin{equation}\label{dd} \int_{M^n}2Ric(\nabla h,Y)dv_{g}=\int_{M^n}\left[|\nabla Y|^2-Ric(Y,Y)\right] dv_{g}. \end{equation} On the other hand, the same argument as above shows that \begin{equation}\label{11} \frac{1}{2}\Delta|X|^2-|\nabla X|^2+Ric(X,X)=-2(n-1)(n-2)g(\nabla \sigma_{k},X).
\end{equation} Since \begin{equation*} \begin{split} \int_{M^n}|\nabla X|^2dv_{g}&=\int_{M^n}\big[|\nabla^2 h|^2+|\nabla Y|^2+2g(\nabla^2 h,\nabla Y)\big]dv_{g}\\ &=\int_{M^n}\big[|\nabla^2 h|^2+|\nabla Y|^2-2g(\nabla\Delta h+Ric(\nabla h), Y)\big]dv_{g}\\ &=\int_{M^n}\big[|\nabla^2 h|^2+|\nabla Y|^2-2Ric(\nabla h, Y)\big]dv_{g}, \end{split} \end{equation*} we may integrate \eqref{11} over $M^n$ to deduce \begin{equation}\label{dd2} \begin{split} \int_{M^n}Ric(X,X)dv_{g}&=\int_{M^n}\left[|\nabla X|^2-2(n-1)(n-2)g(\nabla \sigma_{k},X)\right]dv_{g}\\ &=\int_{M^n}\big[|\nabla^2 h|^2+|\nabla Y|^2-2Ric(\nabla h, Y)+4n(n-1)^2(n-2)(\sigma_{k}-\lambda)^2\big]dv_{g}. \end{split} \end{equation} Again, the same argument based on Lemma 2.1 of \cite{petersen2009rigidity} allows us to deduce that \begin{equation}\label{dd3} \int_{M^n}Ric(\nabla h,\nabla h)dv_{g}=\int_{M^n}\left[4n^2(n-1)^2(\sigma_{k}-\lambda)^2-|\nabla^2 h|^2\right]dv_{g}. \end{equation} Now, replacing \eqref{dd}, \eqref{dd2} and \eqref{dd3} back into \eqref{tt1}, we get \[\int_{M^n}\left[|\nabla^2 h|^2-4n(n-1)^2(\sigma_{k}-\lambda)^2\right]dv_{g}=\int_{M^n}Ric(\nabla h,Y)dv_{g},\] which, combined with \eqref{t12}, produces the desired result. \end{myproof}
\begin{myproof}{Theorem}{\ref{1.6}}Since the Hodge-de Rham decomposition is orthogonal in $L^{2}(M)$, we get \begin{equation*} \int_{M^n}g(\nabla h, X)dv_{g}=\int_{M^n}g(\nabla h, \nabla h+Y)dv_{g}=\int_{M^n}|\nabla h|^{2}dv_{g}. \end{equation*} Therefore, if \begin{equation*} \int_{M^n}g(\nabla h, X)dv_{g}\leq0, \end{equation*} we obtain that $\nabla h=0$ and, consequently, $X=Y$. Now, since $Y$ is a divergence-free vector field, we deduce \[0=div Y=div X=2n(n-1)(\sigma_{k}-\lambda),\] which implies that $\sigma_{k}=\lambda$ and $\mathcal{L}_{X}g=0$; hence, the soliton is trivial. \end{myproof} \end{document}
\begin{document} \title{\LARGE \bf Sparse semiparametric canonical correlation analysis for data of mixed types} \begin{abstract} Canonical correlation analysis investigates linear relationships between two sets of variables, but often works poorly on modern data sets due to high-dimensionality and mixed data types such as continuous, binary and zero-inflated. To overcome these challenges, we propose a semiparametric approach for sparse canonical correlation analysis based on Gaussian copula. Our main contribution is a truncated latent Gaussian copula model for data with excess zeros, which allows us to derive a rank-based estimator of the latent correlation matrix for mixed variable types without the estimation of marginal transformation functions. The resulting canonical correlation analysis method works well in high-dimensional settings as demonstrated via numerical studies, as well as in application to the analysis of association between gene expression and micro RNA data of breast cancer patients. \end{abstract} \textbf{Keywords:} BIC; Gaussian copula model; Kendall's $\tau$; Latent correlation matrix; Truncated continuous variable; Zero-inflated data. \section{Introduction}\label{sec:intro} Canonical correlation analysis investigates linear associations between two sets of variables, and is widely used in various fields including biomedical sciences, imaging and genomics \citep{Hardoon:2004, Chi:2013gj, Safo:2018he}. However, sample canonical correlation analysis often performs poorly due to two main challenges: high-dimensionality and non-normality of the data. In high-dimensional settings, sample canonical correlation analysis is known to overfit the data due to the singularity of sample covariance matrices \citep{Hardoon:2004, SuffCCA}. Additional regularization is often used to address this challenge. \citet{RCCA:2008} focus on ridge regularization of sample covariance matrices to avoid singularity, while more recent methods focus on sparsity regularization of canonical vectors \citep{parkhomenko2009sparse, Witten:2009PMD, ChenLiu, Chi:2013gj, FastRegCCA, Wilms:2015wna, Gao:2015_minimax, Safo:2018he}. At the same time, with the advancement in technology, it is common to collect data of different types. For example, the Cancer Genome Atlas Project contains matched data of mixed types such as gene expression (continuous), mutation (binary) and micro RNA (count) data. While regularized canonical correlation methods work well for Gaussian data, they still are based on the sample covariance matrix, and therefore are not appropriate for the analysis in the presence of binary data or data with excess zero values. Several approaches have been proposed to address the non-normality of the data. There are completely nonparametric approaches such as kernel canonical correlation analysis \citep{Hardoon:2004}. Alternatively, there are parametric approaches building upon a probabilistic interpretation of \citet{BachJordan}. For example, \citet{ZohPCAN2016} develop probabilistic canonical correlation analysis for count data based on Poisson distribution. More recently, \citet{semiCCA} utilize a normal semiparametric transformation model for the analysis of mixed types of variables; however, the method requires estimation of marginal transformation functions via nonparametric maximum likelihood. In summary, significant progress has been made in developing regularized variants of sample canonical correlation analysis that work well in high-dimensional settings. 
However, these approaches are not suited for mixed data types. At the same time, several methods have been proposed to account for non-normality of the data; however, they are not designed for high-dimensional settings. More importantly, to our knowledge none of the existing methods explicitly addresses the case of zero-inflated measurements, which, for example, is common for micro RNA and microbiome abundance data. To bridge this major gap, we propose a semiparametric approach for sparse canonical correlation analysis, which allows us to handle high-dimensional data of mixed types via a common latent Gaussian copula framework. Our work has three main contributions. First, we model the zeros in the data as observed due to truncation of an underlying latent continuous variable, and define a corresponding truncated Gaussian copula model. We derive explicit formulas for the bridge functions that connect the Kendall's $\tau$ of the observed data to the latent correlation matrix for different combinations of continuous, binary and truncated data types, and use these formulas to construct a rank-based estimator of the latent correlation matrix for the mixed data. \citet{Fan:2016um} use a similar bridge function approach in the context of graphical models; however, the authors do not consider the truncated variable type. The latter requires derivation of new bridge functions, and those derivations are considerably more involved than the corresponding derivations for the continuous/binary case. The significant advantage of the bridge function technique is that it allows us to estimate the latent correlation structure of a Gaussian copula without estimating marginal transformation functions, in contrast to \citet{semiCCA}. Second, we use the derived rank-based estimator instead of the sample correlation matrix within the sparse canonical correlation analysis framework that is motivated by \citet{Chi:2013gj} and \citet{Wilms:2015wna}. This allows us to take into account the dataset-specific correlation structure in addition to the cross-correlation structure. In contrast, \citet{parkhomenko2009sparse} and \citet{Witten:2009PMD} model the variables within each data set as uncorrelated. We develop an efficient optimization algorithm to solve the corresponding problem. Finally, we propose two types of Bayesian information criteria (\textsc{bic}) for tuning parameter selection, which leads to significant computational savings compared to the commonly used cross-validation and permutation techniques \citep{Witten:SAGMB}. \citet{Wilms:2015wna} also use {\textsc{bic}} in the canonical correlation analysis context; however, only one criterion is proposed. Our two criteria correspond to the cases of the error variance being either known or unknown. We find that both are competitive in our numerical studies; however, one criterion works best for variable selection, whereas the other works best for prediction. \section{Background} \subsection{Canonical correlation analysis} In this section we review both classical canonical correlation analysis and its sparse alternatives. Given two random vectors $\mathbf{X}_1\in \mathbb{R}^{p_1}$ and $\mathbf{X}_2\in \mathbb{R}^{p_2}$, let $\Sigma_1 = \hbox{cov} (\mathbf{X}_1)$, $\Sigma_2 = \hbox{cov}(\mathbf{X}_2)$ and $\Sigma_{12} = \hbox{cov} (\mathbf{X}_1, \mathbf{X}_2)$.
Population canonical correlation analysis \citep{Hotelling:1936} seeks linear combinations $w_1^{\top}\mathbf{X}_1$ and $w_2^{\top}\mathbf{X}_2$ with maximal correlation, that is \begin{equation}\label{eq:pCCA} \maximize_{w_1,w_2} \Big\{ w_1^{\top}\Sigma_{12}w_2 \Big\} \quad \mbox{subject to}\quad w_1^{\top}\Sigma_1 w_1=1, \quad w_2^{\top}\Sigma_2 w_2=1. \end{equation} Problem \eqref{eq:pCCA} has a closed-form solution via the singular value decomposition of $\Sigma_{1}^{-1/2} \Sigma_{12} \Sigma_{2}^{-1/2}$. Given the first pair of singular vectors $(u,v)$, the solutions to~\eqref{eq:pCCA} can be expressed as $w_1 = \Sigma_1^{-1/2}u$ and $w_2 = \Sigma_2^{-1/2}v$. Sample canonical correlation analysis replaces $\Sigma_1$, $\Sigma_2$ and $\Sigma_{12}$ in~\eqref{eq:pCCA} by the corresponding sample covariance matrices $S_1$, $S_2$ and $S_{12}$. In high-dimensional settings, when the sample size is small compared to the number of variables, $S_1$ and $S_2$ are singular, thus leading to non-uniqueness of the solution and poor performance due to overfitting. A common approach to circumvent this challenge is to consider sparse regularization of $w_1$ and $w_2$ via the addition of an $\ell_1$ penalty to the objective function of~\eqref{eq:pCCA} \citep{Witten:2009PMD, parkhomenko2009sparse, Chi:2013gj, Wilms:2015wna}. Sparse canonical correlation analysis is then formulated as \begin{equation}\label{eq:sparseCCA} \maximize_{w_1,w_2} \Big\{ w_1^{\top}S_{12}w_2-\lambda_1\|w_1\|_1-\lambda_2\|w_2\|_1 \Big\} \quad \mbox{subject to}\quad w_1^{\top}S_1 w_1\le1, \quad w_2^{\top}S_2 w_2\le1. \end{equation} In addition to the $\ell_1$ penalties, the equality constraints in~\eqref{eq:pCCA} are replaced with inequality constraints, which define convex sets. This generalization is possible since nonzero solutions to~\eqref{eq:sparseCCA} satisfy the constraints with equality; see Proposition~\ref{prop:kkt} below. While problem~\eqref{eq:sparseCCA} works well in high-dimensional settings, it still relies on sample covariance matrices, and therefore is not well-suited for skewed or non-continuous data, such as binary or zero-inflated. We next review the Gaussian copula models that we propose to use to address these challenges. \subsection{Latent Gaussian copula model for mixed data} In this section we review the Gaussian copula model in~\citet{Liu2009:NPN}, and its extension to mixed continuous and binary data in~\cite{Fan:2016um}. \begin{definition}[Gaussian copula model] A random vector $\mathbf{X}=(X_1,\ldots,X_p)^{\top}$ satisfies a Gaussian copula model if there exists a set of monotonically increasing transformations $f=(f_j)^p_{j=1}$ satisfying $f(\mathbf{X}) = \left\{f_1(X_1),\ldots,f_p(X_p)\right\}^{\top}\sim {\textup{N}}_p(0,\Sigma)$ with $\Sigma_{jj}=1$ for all $j$. We denote $\mathbf{X} \sim {\textup{NPN}}(0, \Sigma, f)$. \end{definition} \begin{definition}[Latent Gaussian copula model for mixed data] Let $\mathbf{X}_1\in \mathbb{R}^{p_1}$ be continuous and $\mathbf{X}_2\in \mathbb{R}^{p_2}$ be binary random vectors with $\mathbf{X}=(\mathbf{X}_1, \mathbf{X}_2)$. Then $\mathbf{X}$ satisfies the latent Gaussian copula model if there exists a $p_2$-dimensional random vector $\mathbf{U}_2=(U_{p_1+1},\ldots,U_{p_1+p_2})^{\top}$ such that $\mathbf{U}:=(\mathbf{X}_1,\mathbf{U}_2)\sim {\textup{NPN}}(0,\Sigma,f)$ and $X_j = I(U_j >C_j)$ for all $j=p_1+1,\ldots,p_1+p_2$, where $I(\cdot)$ is the indicator function and $\mathbf{C}=(C_{p_1+1},\ldots,C_{p_1+p_2})$ is a vector of constants.
We denote this as $\mathbf{X} \sim {\textup{LNPN}}(0,\Sigma, f, \mathbf{C})$, where $\Sigma$ is the latent correlation matrix. \end{definition} \citet{Fan:2016um} consider the problem of estimating $\Sigma$ for the latent Gaussian copula model based on Kendall's $\tau$. Given the observed data $(X_{1j}, X_{1k}), \ldots, (X_{nj}, X_{nk}) $ for variables $X_j$ and $X_k$, Kendall's $\tau$ is defined as \[ \widehat\tau_{jk} = \dfrac{2}{n(n-1)}\sum_{1\le i<i'\le n}\textup{sign}(X_{ij}-X_{i'j})\textup{sign}(X_{ik}-X_{i'k}). \] Since $\widehat \tau_{jk}$ is invariant under monotone transformations of the data, it is well-suited to capture associations in copula models. Let $\tau_{jk} = \mathbb{E}(\widehat \tau_{jk})$ be the population Kendall's $\tau$. The latent correlation matrix $\Sigma$ is connected to Kendall's $\tau$ via the so-called bridge function $F$ such that $\Sigma_{jk} = F^{-1}(\tau_{jk})$ for all variables $j$ and $k$. \citet{Fan:2016um} derive an explicit form of the bridge function for continuous, binary and mixed variable pairs, which allows one to estimate the latent correlation matrix via the method of moments. We summarize these results below. \begin{theorem}[\citet{Fan:2016um}]\label{d:hatR} Let $\mathbf{X}=(\mathbf{X}_1, \mathbf{X}_2)\sim{\textup{LNPN}}(0,\Sigma, f, \mathbf{C})$ with $p_1$-dimensional continuous $\mathbf{X}_1$ and $p_2$-dimensional binary $\mathbf{X}_2$. The rank-based estimator of $\Sigma$ is the symmetric matrix $\widehat R$ with $\widehat R_{jj}=1$ and $\widehat R_{jk} = \widehat R_{kj}=F_{jk}^{-1}(\widehat \tau_{jk})$, where for $r\in (-1,1)$, $$ F_{jk}(r) = \begin{cases} 2\sin^{-1}(r)/\pi& \mbox{if}\quad 1\leq j<k\leq p_1;\\ 2\left\{\Phi_2( \Delta_j, \Delta_{k}; r)-\Phi( \Delta_j)\Phi( \Delta_{k})\right\}& \mbox{if}\quad p_1+1 \leq j<k\leq p_1+p_2;\\ 4\Phi_2( \Delta_k, 0; r/\surd{2})-2\Phi( \Delta_k)&\mbox{if}\quad 1\leq j\leq p_1, p_1+1\leq k\leq p_1+ p_2. \end{cases} $$ Here $\Delta_j = f_j(C_j)$, $\Phi(\cdot)$ is the cdf of the standard normal distribution, and $\Phi_2(\cdot, \cdot; r)$ is the cdf of the standard bivariate normal distribution with correlation $r$. \end{theorem} \begin{remark} Since $\Delta_j = f_j(C_j)$ is unknown in practice, \citet{Fan:2016um} propose to use a plug-in estimator from the moment equation $\mathbb{E}(X_{ij})= 1- \Phi(\Delta_j)$, leading to $\widehat \Delta_j = \Phi^{-1}(1-\bar X_{j})$, where $\bar X_{j} = \sum_{i=1}^{n}X_{ij}/n$. \end{remark} \citet{Fan:2016um} use these results in the context of Gaussian graphical models, and replace the sample covariance matrix with the rank-based estimator $\widehat R$, which allows one to use Gaussian models with skewed continuous and binary data. However, \citet{Fan:2016um} do not consider the case of zero-inflated data, which requires the formulation of a new model and the derivation of new bridge functions. \section{Methodology} \subsection{Truncated latent Gaussian copula model}\label{sec:trunc} Our goal is to model the zero-inflated data through latent Gaussian copula models. Two motivating examples are micro RNA and microbiome data, where it is common to encounter a large number of zero counts. In both examples it is reasonable to assume that zeros are observed due to truncation of underlying latent continuous variables. More generally, one can think of zeros as representing a measurement error due to truncation of values below a certain positive threshold. This intuition leads us to consider the following model.
\begin{definition}[Truncated latent Gaussian copula model]\label{d:trunc}A random vector $\mathbf{X}=(X_1, \ldots, X_p)^{\top}$ satisfies the truncated Gaussian copula model if there exists a $p$-dimensional random vector $\mathbf{U} = (U_1,\ldots, U_p)^{\top} \sim {\textup{NPN}}(0, \Sigma, f)$ such that $$ X_j = I(U_j>C_j)U_j \quad (j=1,\ldots,p), $$ where $I(\cdot)$ is the indicator function and $\mathbf{C}=(C_1,\ldots,C_p)$ is a vector of positive constants. We denote $X \sim \textup{TLNPN}(0,\Sigma, f, \mathbf{C})$, where $\Sigma$ is the latent correlation matrix. \end{definition} The methodology in \citet{Fan:2016um} allows them to estimate the latent correlation matrix in the presence of mixed continuous and binary data. Our Definition~\ref{d:trunc} adds a third type, which we denote as \textit{truncated} for short. To construct a rank-based estimator for $\Sigma$ as in Theorem~\ref{d:hatR} in the presence of truncated variables, below we derive an explicit form of the bridge function for all possible combinations of the data types. Throughout, we use $\Phi(\cdot)$ for the cdf of a standard normal distribution and $\Phi_d(\cdots; \Sigma_d)$ for the cdf of a standard $d$-variate normal distribution with correlation matrix $\Sigma_d$. All the proofs are deferred to the Supplementary Material. \begin{theorem}\label{bridge1} Let $X_j$ be truncated and $X_k$ be binary. Then $\mathbb{E} (\widehat \tau_{jk}) = F_{\rm TB}(\Sigma_{jk};\Delta_j, \Delta_k)$, where \[ F_{\rm TB}(\Sigma_{jk};\Delta_j, \Delta_k) = 2 \{1-\Phi(\Delta_j)\}\Phi(\Delta_k) -2 \Phi_3\left(-\Delta_j, \Delta_k, 0; \Sigma_{3a} \right) -2 \Phi_3\left(-\Delta_j, \Delta_k, 0; \Sigma_{3b} \right), \] $\Delta_j = f_j(C_j)$, $\Delta_k = f_k(C_k)$, \[ \Sigma_{3a}=\begin{pmatrix} 1 & -\Sigma_{jk} & 1/\surd{2} \\ -\Sigma_{jk} & 1 & -\Sigma_{jk}/\surd{2}\\ 1/\surd{2} & -\Sigma_{jk}/\surd{2} & 1 \end{pmatrix}, \quad \Sigma_{3b}=\begin{pmatrix} 1 & 0 & -1/\surd{2} \\ 0 & 1 & -\Sigma_{jk}/\surd{2}\\ -1/\surd{2} & -\Sigma_{jk}/\surd{2} & 1 \end{pmatrix}. \] \end{theorem} \begin{theorem}\label{bridge2} Let $X_j$ be truncated and $X_k$ be continuous. Then $\mathbb{E} (\widehat \tau_{jk}) = F_{\rm TC}(\Sigma_{jk};\Delta_j)$, where \[ F_{\rm TC}(\Sigma_{jk};\Delta_j) = -2 \Phi_2 (-\Delta_j,0; 1/\surd{2} ) +4\Phi_3 \left(-\Delta_j,0,0; \Sigma_3\right), \] $\Delta_j = f_j(C_j)$ and \[ \Sigma_3 = \begin{pmatrix} 1 & 1/\surd{2} & \Sigma_{jk}/\surd{2}\\ 1/\surd{2} & 1 & \Sigma_{jk}\\ \Sigma_{jk}/\surd{2} & \Sigma_{jk} & 1 \end{pmatrix}. \] \end{theorem} \begin{theorem}\label{bridge3} Let both $X_j$ and $X_k$ be truncated. Then $\mathbb{E} (\widehat \tau_{jk}) = F_{\rm TT}(\Sigma_{jk};\Delta_j, \Delta_k)$, where \[ F_{\rm TT}(\Sigma_{jk};\Delta_j, \Delta_k) = ~-2 \Phi_4 (-\Delta_j, -\Delta_k, 0,0; \Sigma_{4a}) + 2 \Phi_4 (-\Delta_j, -\Delta_k, 0,0; \Sigma_{4b}), \] $\Delta_j = f_j(C_j)$, $\Delta_k = f_k(C_k)$ and \[ \Sigma_{4a} = \begin{pmatrix} 1 & 0 & 1/\surd{2} & -\Sigma_{jk}/\surd{2}\\ 0& 1 & -\Sigma_{jk}/\surd{2}& 1/\surd{2}\\ 1/\surd{2}& -\Sigma_{jk}/\surd{2} & 1& -\Sigma_{jk}\\ -\Sigma_{jk}/\surd{2}& 1/\surd{2}& -\Sigma_{jk} & 1 \end{pmatrix} \] and \[ \Sigma_{4b} = \begin{pmatrix} 1 & \Sigma_{jk} & 1/\surd{2} & \Sigma_{jk}/\surd{2}\\ \Sigma_{jk} & 1 & \Sigma_{jk}/\surd{2}& 1/\surd{2}\\ 1/\surd{2}& \Sigma_{jk}/\surd{2} & 1& \Sigma_{jk}\\ \Sigma_{jk}/\surd{2}& 1/\surd{2}& \Sigma_{jk} & 1 \end{pmatrix}. \] \end{theorem} We also show that the inverse bridge function exists for all of the cases. 
\begin{theorem}\label{t:monotone} For any constants $\Delta_j$, $\Delta_k$, the bridge functions $F(\Sigma_{jk})$ in Theorems \ref{bridge1}--\ref{bridge3} are strictly increasing in $\Sigma_{jk} \in(-1,1)$, and thus the corresponding inverse functions $F^{-1}(\tau_{jk})$ exist. \end{theorem} \begin{remark} While the inverse functions exist, they do not have a closed form. In practice we estimate $\widehat R$ element-wise by solving $\widehat R_{jk}=\argmin_{r}\{F(r)-\widehat \tau_{jk}\}^2$. This leads to $O(p^2)$ computations, which can be done in parallel to alleviate the computational burden. \end{remark} Theorems~\ref{bridge1}--\ref{t:monotone} complement the results of \citet{Fan:2016um} summarized in Theorem~\ref{d:hatR} by adding three more cases: continuous/truncated, binary/truncated and truncated/truncated. This allows us to construct a rank-based estimator $\widehat R$ for $\Sigma$ in the presence of mixed variables. \begin{remark}\label{rem:Rtilde} Since $\widehat R$ is not guaranteed to be positive semidefinite, \citet{Fan:2016um} regularize $\widehat R$ by projecting it onto the cone of positive semidefinite matrices. We follow this approach using the \texttt{nearPD} function in the \texttt{Matrix} R package, leading to the estimator $\widehat R_p$. Furthermore, we consider \begin{equation}\label{d:Rtilde} \widetilde R = (1-\nu)\widehat R_p + \nu I \end{equation} with a small value of $\nu>0$, so that $\widetilde R$ is strictly positive definite. Throughout, we fix $\nu = 0.01$. \end{remark} \begin{remark} As in the binary case, $\Delta_j = f_j(C_j)$ is unknown for truncated variables. Similar to \cite{Fan:2016um}, we use a plug-in estimator $\widehat \Delta_j$ based on the moment equation $\mathbb{E} \left\{I(X_{ij}>0) \right\} = \mathbb{P} (X_{j}>0) = \mathbb{P} \left\{f_j(U_{j})>\Delta_j\right\} = 1-\Phi(\Delta_j)$. Let $n_{\text{zero}}=\sum_{i=1}^{n} I(X_{ij}=0)$; then we use $\widehat{\Delta}_j = \Phi^{-1} \left({n_{\text{zero}}}/{n}\right)$. \end{remark} For clarity, we summarize below all the steps in the construction of our rank-based estimator $\widetilde R$ based on the observed data matrix $\mathbf{X}\in \mathbb{R}^{n \times p}$. \begin{enumerate} \item Calculate $\widehat \tau_{jk}$ for all pairs of variables $1\le j < k \le p$. \item Estimate $\widehat{\Delta}_j = \Phi^{-1} \{\sum_{i=1}^{n} I( X_{ij} = 0)/n \}$ for all $j$ of truncated or binary type. \item Compute $\widehat R_{jk} = F^{-1} (\widehat \tau_{jk})$, where $F$ is the bridge function chosen according to the type of variables $j$ and $k$ (with possible dependence on $\widehat \Delta_j$, $\widehat \Delta_k$). \item Project $\widehat R$ onto the cone of positive semidefinite matrices to form $\widehat R_p$. \item Set $\widetilde R = (1-\nu)\widehat R_p + \nu I$ for small $\nu > 0$. \end{enumerate} \subsection{Consistency of rank-based estimator for latent correlation matrix} We next show that our proposed estimator $\widetilde R$ is consistent for $\Sigma$. Similar to \citet{Fan:2016um}, we use the following two assumptions: \begin{itemize}[leftmargin=1in] \item[Assumption 1.] All the off-diagonal elements of $\Sigma$ satisfy $|\Sigma_{jk}|\leq 1-\delta$ for some $\delta > 0$. \item[Assumption 2.] All the thresholds $\Delta_j$ satisfy $|\Delta_j|\leq M$ for some constant $M>0$. \end{itemize} We first prove Lipschitz continuity of the inverse of the bridge function, $F^{-1}(\tau_{jk})$.
\begin{theorem}\label{t:lipschitz}Under Assumptions~1--2, for any constants $\Delta_j$ and $\Delta_k$, the inverses of the bridge functions in Theorems \ref{bridge1}--\ref{bridge3}, $F^{-1}(\cdot)$, satisfy for any $\tau_1$, $\tau_2$ \[ |F^{-1}(\tau_1) - F^{-1}(\tau_2) | \le L |\tau_1 - \tau_2|, \] where $L>0$ is a constant independent of $\tau_1$, $\tau_2$, $\Delta_j$ and $\Delta_k$. \end{theorem} \citet{Fan:2016um} also prove Lipschitz continuity in the continuous/binary case; however, their proof technique cannot be directly used for the truncated case considered here due to the more complex form of the bridge functions. Instead, we develop a new proof technique based on the multivariate chain rule, which also leads to simplified proofs in the continuous/binary case. The full proof is given in the Supplementary Material Section S.1. The Lipschitz continuity of the inverse bridge functions is then used to prove consistency of $\widehat R$. \begin{theorem}\label{t:consistency} Let a random vector $\mathbf{X}=(\mathbf{X}_1,\mathbf{X}_2, \mathbf{X}_3) \in \mathbb{R}^p$ satisfy the latent Gaussian copula model with correlation matrix $\Sigma$, with $\mathbf{X}_1\in \mathbb{R}^{p_1}$ being continuous, $\mathbf{X}_2\in \mathbb{R}^{p_2}$ being binary, and $\mathbf{X}_3\in \mathbb{R}^{p_3}$ being truncated with $p=p_1 + p_2 + p_3$. Let $\widehat R$ be the rank-based estimator for the correlation matrix $\Sigma$ from Section~\ref{sec:trunc} constructed by inverting the corresponding bridge functions element-wise. Under Assumptions~1--2, with probability at least $1-p^{-1}$, for some $C>0$ independent of $n$ and $p$, $$ \|\widehat R - \Sigma\|_{\max} = \max_{j,k}|\widehat R_{jk} - \Sigma_{jk}| \leq C(\log p/n)^{1/2}. $$ \end{theorem} Theorem~\ref{t:consistency} states that $\widehat R$ is consistent in estimating $\Sigma$ with respect to the sup norm, and the consistency rate coincides up to constants with the rate obtained by the sample covariance matrix in the Gaussian case. In practice, we further regularize $\widehat R$ by forming $\widetilde R = (1-\nu)\widehat R_p + \nu I$. By Corollary~2 in \citet{Fan:2016um}, $\widehat R_p$ has the same consistency rate as $\widehat R$; hence Theorem~\ref{t:consistency} implies the consistency of $\widetilde R$ with the same rate as long as $\nu \le (\log p / n)^{1/2}$. \subsection{Semiparametric sparse canonical correlation analysis}\label{sec:optim} Our proposal is based on formulating sparse canonical correlation analysis using the latent correlation matrix from the Gaussian copula model for mixed data. At the population level, let $\Sigma$ be the latent correlation matrix for $(\mathbf{X}_1,\mathbf{X}_2) \sim {\textup{LNPN}}(0,\Sigma, f, \mathbf{C})$, where each of $\mathbf{X}_1$ and $\mathbf{X}_2$ follows one of the three data types: continuous, binary or truncated. In Section~\ref{sec:trunc} we derived a rank-based estimator for $\Sigma$, which we propose to use within the sparse canonical correlation analysis framework~\eqref{eq:sparseCCA}. Given the semiparametric estimator $\widetilde R$ in \eqref{d:Rtilde}, we propose to find the canonical vectors by solving \begin{equation}\label{eq:sparseCCAwithR} \minimize_{w_1,w_2} \Big\{-w_1^{\top}\widetilde{R}_{12}w_2+\lambda_1\|w_1\|_1+\lambda_2\|w_2\|_1\Big\}\quad\mbox{subject to}\quad w_1^{\top}\widetilde{R}_1 w_1\le1,\quad w_2^{\top}\widetilde{R}_2 w_2\le1.
\end{equation} \begin{remark} \citet{qingmaiBiometrics} establish the consistency of estimated canonical vectors from the sparse canonical correlation analysis problem~\eqref{eq:sparseCCA} in the Gaussian case. Their proof relies on the sup norm bound for the sample covariance matrix. Since Theorem~\ref{t:consistency} establishes such a bound for our rank-based estimator, these results can be directly extended to~\eqref{eq:sparseCCAwithR}. \end{remark} While we focus only on the estimation of the first canonical pair, the subsequent canonical pairs can be found sequentially by using a deflation scheme as follows. Let $\widetilde R_{12}^{(1)}= \widetilde{R}_{12}$ and let $\widehat{w}_{1}$, $\widehat{w}_2$ be the $(k-1)$th estimated canonical pair. To estimate the $k$th pair for $k>1$, form $$\widetilde{R}_{12}^{(k)}=\widetilde R_{12}^{(k-1)} - (\widehat w_1^{\top}\widetilde R_{12}^{(k-1)}\widehat w_2)\widetilde R_1\widehat w_{1}\widehat w_{2}^{\top}\widetilde R_2,$$ and solve~\eqref{eq:sparseCCAwithR} using $\widetilde R_{12}^{(k)}$ instead of $\widetilde R_{12}$. While problem~\eqref{eq:sparseCCAwithR} is not jointly convex in $w_1$ and $w_2$, it is biconvex. Therefore, we propose to iteratively optimize over $w_1$ and $w_2$. First, consider optimizing over $w_1$ with $w_2$ fixed. \begin{proposition}\label{prop:kkt} For a fixed $w_2\in \mathbb{R}^{p_2}$, let \begin{equation}\label{eq:prop1_1} \begin{split} \widehat w_1 =\argmin_{w_1} & \Big\{ - w_1^{\top} \widetilde R_{12} w_2 + \lambda_1\|w_1\|_1 \Big\}\quad \mbox{subject to}\quad w_1^{\top}\widetilde R_1 w_1\le 1. \end{split} \end{equation} This problem is equivalent to finding \begin{equation}\label{eq:prop1_2} \begin{split} \widetilde{w}_1 = \argmin_{w_1} & \Big\{(1/2) w_1^{\top} \widetilde R_1 w_1 - w_1^{\top} \widetilde R_{12} w_2 + \lambda_1\|w_1\|_1 \Big\}, \end{split} \end{equation} and then setting $\widehat{w}_1 =0$ if $\widetilde{w}_1=0$, and $\widehat{w}_1 =\widetilde{w}_1/(\widetilde{w}_1^{\top}\widetilde R_1\widetilde{w}_1)^{1/2}$ if $\widetilde{w}_1\neq 0$. \end{proposition} Both problems~\eqref{eq:prop1_1} and~\eqref{eq:prop1_2} are convex, but unlike~\eqref{eq:prop1_1}, problem~\eqref{eq:prop1_2} is unconstrained. Furthermore, problem~\eqref{eq:prop1_2} is of the same form as the well-studied LASSO problem \citep{tibshirani1996regression}, which can be solved efficiently using, for example, a coordinate-descent algorithm. Hence, the proposed optimization algorithm for~\eqref{eq:sparseCCAwithR} can be viewed as a sequence of LASSO problems with rescaling. Given the value of $w_2$ at iteration $t$, the updates at iteration $t+1$ have the form \begin{align*} &\widetilde w_{1}=\argmin_{w_{1}}\Big\{(1/2){w_1^{\top} \widetilde R_1w_1}-w_1^{\top} \widetilde R_{12}\widehat w_2^{(t)}+\lambda_{1}\|w_{1}\|_{1}\Big\};\\ &\widehat w_{1}^{(t+1)}=\widetilde w_{1}/(\widetilde w_{1}^{\top} \widetilde R_1\widetilde w_{1})^{1/2};\\ &\widetilde w_{2}=\argmin_{w_{2}}\Big\{(1/2){w_2^{\top} \widetilde R_2w_2}-w_2^{\top} \widetilde R_{12}^{\top}\widehat w_1^{(t+1)}+\lambda_{2}\|w_{2}\|_{1}\Big\};\\ &\widehat w_{2}^{(t+1)}=\widetilde w_{2}/(\widetilde w_{2}^{\top} \widetilde R_2\widetilde w_{2})^{1/2}. \end{align*} If a zero solution is obtained at any of the steps, the optimization algorithm stops, and both $w_1$ and $w_2$ are returned as zeros. Otherwise, the algorithm proceeds until convergence, which is guaranteed due to the biconvexity of~\eqref{eq:sparseCCAwithR} \citep{Gorski:2007uv}.
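For concreteness, the alternating scheme above can be summarized in the following minimal sketch. It is an illustration only, not the authors' implementation: the helper \texttt{lasso\_step} is a hypothetical name for a cyclic coordinate-descent solver of the penalized problem~\eqref{eq:prop1_2}, of the type described in the next paragraph, and we assume that the blocks $\widetilde R_1$, $\widetilde R_2$ and $\widetilde R_{12}$ of the rank-based estimator are available as \texttt{numpy} arrays with unit diagonals.
\begin{verbatim}
import numpy as np

def soft_threshold(x, lam):
    # S_lambda(x) = sign(x) * (|x| - lambda)_+
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lasso_step(R, b, lam, n_sweeps=100):
    # Minimize (1/2) w'Rw - w'b + lam*||w||_1 by cyclic coordinate descent,
    # assuming diag(R) = 1 (true for a correlation matrix).
    w = np.zeros(len(b))
    if lam >= np.max(np.abs(b)):          # KKT condition: the solution is zero
        return w
    for _ in range(n_sweeps):
        for i in range(len(b)):
            partial = b[i] - R[i, :] @ w + w[i]   # b_i - sum_{j!=i} R_ij w_j
            w[i] = soft_threshold(partial, lam)
    return w

def sparse_cca(R1, R2, R12, lam1, lam2, w2_init, max_iter=50, tol=1e-6):
    # Alternate the two penalized updates, rescaling so that w'Rw = 1 each time.
    w1, w2 = np.zeros(R1.shape[0]), np.asarray(w2_init, dtype=float)
    for _ in range(max_iter):
        w1_new = lasso_step(R1, R12 @ w2, lam1)
        if not w1_new.any():              # zero solution obtained: stop
            return np.zeros_like(w1), np.zeros_like(w2)
        w1_new = w1_new / np.sqrt(w1_new @ R1 @ w1_new)
        w2_new = lasso_step(R2, R12.T @ w1_new, lam2)
        if not w2_new.any():
            return np.zeros_like(w1), np.zeros_like(w2)
        w2_new = w2_new / np.sqrt(w2_new @ R2 @ w2_new)
        if max(np.max(np.abs(w1_new - w1)), np.max(np.abs(w2_new - w2))) < tol:
            return w1_new, w2_new
        w1, w2 = w1_new, w2_new
    return w1, w2
\end{verbatim}
The rescaling step in the sketch mirrors Proposition~\ref{prop:kkt}: a nonzero solution of the unconstrained problem is simply normalized so that the quadratic constraint holds with equality.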
We further describe a coordinate-descent algorithm for~\eqref{eq:prop1_2}. Consider the KKT conditions \citep{Boyd:2004uz} \[ \widetilde R_1 w_1 - \widetilde R_{12}w_2 + \lambda_1 s_1 = 0, \] where $s_1$ is the subgradient of $\|w_1\|_1$. If $\lambda_1 \geq \|\widetilde R_{12}w_2\|_{\infty}$, it follows that $\widetilde w_1 = 0$. Otherwise, the $i$th element of $w_1$ can be expressed through the other coordinates as \[ w_{1i} = S_{\lambda_1}\Big\{(\widetilde R_{12})_i w_2^{(t)} - (\widetilde R_{1})_{i, -i} (w_{1})_{-i}\Big\}, \] where $S_{\lambda} (t) = \textup{sign}(t) \left(| t | - \lambda \right)_{+}$ is the soft-thresholding operator, $(\widetilde R_{12})_i$ denotes the $i$th row of the matrix $\widetilde R_{12}$, and $(\widetilde R_1)_{i, -i}$ denotes the $i$th row of the matrix $\widetilde R_1$ without the $i$th component, that is, $(R)_{i, -i} = (R_{i1}, \ldots, R_{i,i-1}, R_{i,i+1}, \ldots, R_{ip})$. The coordinate-descent algorithm proceeds by using the above formula to update one coordinate at a time until convergence to a global optimum is achieved. This convergence is guaranteed due to the convexity of the objective function and the separability of the penalty with respect to the coordinates \citep{Tseng:1988}. \subsection{Selection of tuning parameters}\label{sec:tuning} Cross-validation is a popular approach to select the tuning parameter in LASSO. In our context, however, it amounts to performing a grid search over both $\lambda_1$ and $\lambda_2$. Moreover, splitting the data as in cross-validation may leave too few test samples to construct the rank-based estimator of the latent correlation matrix. Instead, motivated by \cite{Wilms:2015wna}, we propose to adapt the Bayesian information criterion to canonical correlation analysis, which avoids splitting the data and decreases computational costs. For the Gaussian linear regression model, the Bayesian information criterion (\textsc{bic}) has the form \[ \text{\textsc{bic}} = -2\ell + \text{df} \log n, \] where df indicates the number of parameters in the model, and $\ell$ is the log-likelihood \[ \ell = \log L = -(n/2)\log{\sigma^2} -\sum_{i=1}^{n} \left(y_i - X_i \boldsymbol{\beta} \right)^2/({2\sigma^2}). \] Two cases can be considered depending on whether the variance $\sigma^2$ is known or unknown. \begin{enumerate} \item If $\sigma^2$ is known, and the data are scaled so that $\sigma^2 = 1$, then \[ \text{\textsc{bic}} = n^{-1}\sum_{i=1}^{n} \Big(y_i - X_i \boldsymbol{\widehat\beta} \Big)^2 + \text{df}\dfrac{\log n}{n}. \] \item If $\sigma^2$ is unknown, using $\widehat{\sigma}^2_{\text{MLE}} = n^{-1} \sum_{i=1}^{n} \left(y_i - X_i \boldsymbol{\widehat \beta} \right)^2 $ leads to \[ \text{\textsc{bic}} = n\log \Big\{n^{-1}\sum_{i=1}^{n} \Big(y_i - X_i \boldsymbol{\widehat \beta} \Big)^2 \Big\} + \text{df} \log n. \] \end{enumerate} \cite{Wilms:2015wna} use criterion 2 for canonical correlation analysis by substituting $\| X_1 \widetilde w_1 - X_2 w_2\|_2^2 /n$ instead of $\sum_{i=1}^{n} (y_i - X_i \boldsymbol{\widehat \beta})^2 /n$ for centered $X_1$ and $X_2$. Since $\| X_1 \widetilde w_1 - X_2 w_2\|_2^2 /n = \widetilde w_1^{\top}S_1 \widetilde w_1 - 2 \widetilde w_1^{\top}S_{12}w_2 + w_2^{\top}S_2w_2$, and we use $\widetilde R$ instead of the sample covariance matrix $S$, we substitute \[ f(\widetilde{w}_1) = \widetilde{w}_1^\top \widetilde R_{1} \widetilde{w}_1 - 2 \widetilde{w}_1^\top \widetilde R_{12} w_2 + w_2^\top \widetilde R_{2} w_2 \] instead of the residual sum of squares.
Furthermore, motivated by the performance of the adjusted degrees of freedom variance estimator in \citet{reid2016study}, we also adjust $f(\widetilde w_1)$ for the second criterion, leading to \begin{equation} \begin{split}\nonumber \text{\textsc{bic}}_1 = f(\widetilde{w}_1) + \text{df}_{\widetilde{w}_1} \dfrac{\log n}{n};\quad \text{\textsc{bic}}_2 = \log \Big\{ \dfrac{n}{n-\text{df}_{\widetilde{w}_1}} f(\widetilde{w}_1) \Big\} + \text{df}_{\widetilde{w}_1} \dfrac{\log n}{n}. \end{split} \end{equation} Here df$_{\widetilde w_1}$ coincides with the size of the support of $\widetilde w_1$ \citep{Tibshirani:2012jk}. The \textsc{bic}~criteria for $w_2$ are defined analogously to those for $w_1$. We use both criteria in evaluating our approach. Given the selected criterion (either $\text{\textsc{bic}}_1$ or $\text{\textsc{bic}}_2$), we apply it sequentially at each step of the biconvex optimization algorithm of Section~\ref{sec:optim}, and each time select the tuning parameter corresponding to the smallest value of the criterion. Due to the alternating minimization, the solution will in general depend on the choice of the initial starting point. By default, we initialize the algorithm with the unpenalized solution to \eqref{eq:pCCA} obtained using $\widetilde R + 0.25I$, which corresponds to the canonical ridge solution with a fixed amount of regularization \citep{RCCA:2008}. We find that this initialization works well compared to a random initialization; more details are provided in Section~S3$\cdot$2 of the Supplementary Material. \begin{remark} Sequences of $\lambda$ values for $w_1$ and $w_2$ are generated separately by the algorithm unless otherwise specified. For example, the sequence for $\lambda_1$ is generated as follows. We first calculate $\lambda_{\rm max} = \|\widetilde R_{12} \widehat w^{(0)}_{2}\|_{\infty}$ and set $\lambda_{\rm min} = \epsilon \lambda_{\rm max}$, where $\widehat w^{(0)}_{2}$ is the initial starting point for $w_2$. Then, from $\lambda_{\rm min}$ to $\lambda_{\rm max}$, the sequence is generated to be equally spaced on a logarithmic scale. As a default, we use $20$ values of $\lambda$ for each of $\lambda_1$ and $\lambda_2$, with $\epsilon = 0.01$. The sequence for $\lambda_2$ is defined analogously. \end{remark} \section{Simulation studies}\label{sec:sim} In this section we evaluate the performance of the following methods: (i) Classical canonical correlation analysis based on the sample covariance matrix; (ii) Canonical ridge, available in the R package \texttt{CCA} \citep{RCCA:2008}; (iii) Sparse canonical correlation analysis of \cite{Witten:2009PMD}, available in the R package \texttt{PMA}; (iv) Sparse canonical correlation analysis of \citet{Gao:2017SCCA}, available in the Matlab package \texttt{SCCALab}; (v) Sparse canonical correlation analysis via Kendall's $\tau$ proposed in this paper. For our method, we evaluate both types of \textsc{bic}~criteria as described in Section~\ref{sec:tuning}. We also consider using the Pearson sample correlation instead of $\widetilde R$ within our optimization framework, with the same \textsc{bic}~criteria for parameter selection. For fair comparison with $\widetilde R$, we also apply shrinkage to the Pearson correlation matrix as in~\eqref{d:Rtilde}. A direct comparison of the estimation performance of our rank-based estimator and the Pearson sample correlation as a function of sample size and level of truncation can be found in the Supplementary Material Section S3$\cdot$1.
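Since method (v) relies on the rank-based estimator of Section~\ref{sec:trunc}, we include a minimal sketch of the pairwise computation behind it for a truncated/continuous pair of variables. This is an illustration only, not the authors' implementation: it assumes \texttt{numpy} and \texttt{scipy} are available, evaluates the bridge function of Theorem~\ref{bridge2} directly, and inverts it numerically as in the remark following Theorem~\ref{t:monotone}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

def kendall_tau(x, y):
    # tau_hat = 2/{n(n-1)} * sum_{i<i'} sign(x_i - x_i') sign(y_i - y_i')
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sx = np.sign(x[:, None] - x[None, :])
    sy = np.sign(y[:, None] - y[None, :])
    return np.sum(np.triu(sx * sy, k=1)) * 2.0 / (n * (n - 1))

def bridge_tc(r, delta_j):
    # F_TC(r; delta_j) for a truncated/continuous pair of variables.
    s = 1 / np.sqrt(2)
    sigma3 = [[1.0,   s,     r * s],
              [s,     1.0,   r],
              [r * s, r,     1.0]]
    phi2 = multivariate_normal(cov=[[1.0, s], [s, 1.0]]).cdf([-delta_j, 0.0])
    phi3 = multivariate_normal(cov=sigma3, allow_singular=True).cdf([-delta_j, 0.0, 0.0])
    return -2.0 * phi2 + 4.0 * phi3

def latent_corr_tc(x_truncated, x_continuous):
    # One off-diagonal entry of R_hat: plug-in threshold estimate from the
    # proportion of zeros, then element-wise inversion of the bridge function.
    tau_hat = kendall_tau(x_truncated, x_continuous)
    delta_j = norm.ppf(np.mean(np.asarray(x_truncated) == 0))
    loss = lambda r: (bridge_tc(r, delta_j) - tau_hat) ** 2
    return minimize_scalar(loss, bounds=(-0.99, 0.99), method="bounded").x
\end{verbatim}
The binary/truncated and truncated/truncated entries are handled in the same way, using the bridge functions of Theorems~\ref{bridge1} and~\ref{bridge3}; the resulting matrix $\widehat R$ is then projected and shrunk as in Remark~\ref{rem:Rtilde}.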
We generate $n=100$ independent pairs $(\mathbf{Z}_1, \mathbf{Z}_2) \in \mathbb{R}^{p_1+p_2}$ following \[ \begin{pmatrix} \mathbf{Z}_1\\\mathbf{Z}_2 \end{pmatrix} \sim {\textup{N}} \left\{ \begin{pmatrix} 0\\0 \end{pmatrix}, \begin{pmatrix} \Sigma_1 & \rho \Sigma_1 w_1 w_2^{\top} \Sigma_2\\ \rho \Sigma_2 w_2 w_1^{\top} \Sigma_1 & \Sigma_2 \\ \end{pmatrix} \right\}. \] We consider two settings for the number of variables: low-dimensional ($p_1=p_2=25$) and high-dimensional ($p_1=p_2=100$). Each canonical vector $w_g$ ($g=1, 2$) is defined by taking a vector of ones at the coordinates $(1, 6, 11, 16, 21)$ and zeros elsewhere, and normalizing it such that $w_g^\top \Sigma_g w_g =1$; a similar model is used in \citet{Chen:2013}. We use an autoregressive structure for $\Sigma_{1} = \{\gamma^{|j-k|}\}_{j,k=1}^{p_1}$ and a block-diagonal structure for $\Sigma_2=$block-diag$(\Sigma_\gamma, \ldots, \Sigma_\gamma)$, where $\Sigma_\gamma\in \mathbb{R}^{d\times d}$ is an equicorrelated matrix with value $1$ on the diagonal and $\gamma$ off the diagonal. We use five blocks of sizes $d\in\{6, 6, 3, 7, 3\}$ for the low-dimensional setting, and $d\in\{14, 21, 12, 25, 28\}$ for the high-dimensional setting. We set $\gamma = \text{0$\cdot$7}$ for both $\Sigma_1$ and $\Sigma_2$. We further randomly permute the order of the variables in each $\mathbf{Z}_g$ to remove the covariance-induced ordering. The value of the canonical correlation is set at $\rho = \text{0$\cdot$9}$. We consider transformations $\mathbf{U}_g = f_g(\mathbf{Z}_g +\mathbf{B}_g)$, where the elements of the vector $\mathbf{B}_g$ are 0 or 1 with equal probability. The variation in the shift of $\mathbf{Z}_g$ across the $p_g$ variables due to $\mathbf{B}_g$ leads to variation in the proportion of zeros across the variables in the 5--80\% range for the same choice of truncation constant $C$. We consider three choices for $f_g$: (copula~0) no transformation, $f_g(z) = z$ for $g=1, 2$; (copula 1) exponential transformation for $\mathbf{U}_1$, $f_1(z) = \exp(z)$, and no transformation for $\mathbf{U}_2$, $f_2(z) = z$; (copula 2) exponential transformation for $\mathbf{U}_1$, $f_1(z) = \exp(z)$, and cubic transformation for $\mathbf{U}_2$, $f_2(z) = z^3$. Finally, we set $\mathbf{X}_g$ to be equal to $\mathbf{U}_g$ for the continuous variable type, and dichotomize/truncate $\mathbf{U}_g$ at the same value $C$ for all variables to form binary/truncated $\mathbf{X}_g$. We set $C=\text{1$\cdot$5}$ for exponentially transformed variables, and $C=0$ for the others. For each case, we consider three combinations of variable types for $\mathbf{X}_1$/$\mathbf{X}_2$: truncated/truncated, truncated/continuous and truncated/binary. To compare the methods' performance, we evaluate the expected out-of-sample correlation \begin{equation}\label{def:rhohat} \widehat{\rho} = \left| \dfrac{ \widehat{w}_1^\top \Sigma_{12} \widehat{w}_2 }{ (\widehat{w}_1^\top \Sigma_{1} \widehat{w}_1)^{1/2} (\widehat{w}_2^\top \Sigma_{2} \widehat{w}_2)^{1/2} } \right|, \end{equation} and the predictive loss \begin{equation}\label{def:predloss} L(w_g, \widehat w_g) = 1-\dfrac{|\widehat{w}_g^\top \Sigma_{g} w_g|}{(\widehat{w}_g^\top \Sigma_{g} \widehat{w}_g)^{1/2} } \quad (g=1,2); \end{equation} a similar loss is used in \cite{Gao:2017SCCA}. By definition of the true canonical correlation $\rho$, for any $\widehat w_1$ and $\widehat w_2$ it holds that $\widehat \rho \leq \rho$, with equality when $\widehat w_1 = w_1$ and $\widehat w_2 = w_2$.
Since $w_g^{\top}\Sigma_g w_g=1$, $L(w_g, \widehat w_g)\in[0,1]$ with $L(w_g, \widehat w_g)=0$ if $\widehat w_g = w_g$. We also evaluate the variable selection performance using the selected model size, the true-positive rate and the true-negative rate, defined as $$ \text{TPR}_{g} = \dfrac{\#\{j: \widehat{w}_{gj} \neq 0 \text{ and } w_{gj} \neq 0\}}{\#\{j: w_{gj} \neq 0\}}, \quad \text{TNR}_{g} = \dfrac{\#\{j: \widehat{w}_{gj} = 0 \text{ and } w_{gj} = 0\}}{\#\{j: w_{gj} = 0\}} \quad (g=1,2). $$ \begin{figure} \caption{Truncated/truncated case. \textbf{Left:} The value of $\widehat{\rho}$ from \eqref{def:rhohat}. The horizontal lines indicate the true canonical correlation value $\rho = $ \text{0$\cdot$9}. \textbf{Right:} The value of the predictive loss~\eqref{def:predloss}. Results over 500 replications. CCA:~Sample canonical correlation analysis; RidgeCCA: Canonical ridge of~\citet{RCCA:2008}; WittenCCA: method of~\cite{Witten:2009PMD}; GaoCCA:~method of~\cite{Gao:2017SCCA}; PearsonBIC1, PearsonBIC2: proposed algorithm with the Pearson sample correlation matrix; KendallBIC1, KendallBIC2: proposed algorithm with the rank-based estimator $\widetilde R$; \textsc{bic}$_1$ or \textsc{bic}$_2$ refer to the tuning parameter selection criteria; LD: low-dimensional setting ($p_1 = p_2 = 25$); HD: high-dimensional setting ($p_1 = p_2 = 100$).} \label{fig:TT_rhohatPredloss} \end{figure} \begin{figure} \caption{Truncated/truncated case. \textbf{TopLeft:} True positive rate (TPR); \textbf{TopRight:} True negative rate (TNR); \textbf{BottomMiddle:} Selected model size. Results over 500 replications. WittenCCA: method of~\cite{Witten:2009PMD}; GaoCCA:~method of~\cite{Gao:2017SCCA}; PearsonBIC1, PearsonBIC2: proposed algorithm with the Pearson sample correlation matrix; KendallBIC1, KendallBIC2: proposed algorithm with the rank-based estimator $\widetilde R$; \textsc{bic}$_1$ or \textsc{bic}$_2$ refer to the tuning parameter selection criteria; LD: low-dimensional setting ($p_1 = p_2 = 25$); HD: high-dimensional setting ($p_1 = p_2 = 100$).} \label{fig:TT_selection} \end{figure} The results for the truncated/truncated case over 500 replications are presented in Figures \ref{fig:TT_rhohatPredloss}--\ref{fig:TT_selection}. From Figure~\ref{fig:TT_rhohatPredloss}, the majority of methods achieve higher values of $\widehat \rho$ in the absence of data transformation (copula 0) than in the cases where a transformation is applied (copulas 1 and 2). The only exception is our approach based on Kendall's $\tau$, which, as expected, has comparable performance across the copula types. The performance of all methods deteriorates with increased dimension, leading to smaller values of $\widehat\rho$ and larger predictive losses. Classical canonical correlation analysis performs especially poorly in high-dimensional settings, with $\widehat \rho$ being almost 0 and the predictive loss being close to 1 for both $w_1$ and $w_2$. Canonical ridge works well in the copula 0 setting; however, its performance is strongly affected in the presence of transformations (copulas 1 and 2). Surprisingly, the method of \citet{Gao:2017SCCA}, as implemented in \texttt{SCCALab}, performs poorly compared to the other approaches. Since this method is designed for Gaussian data, the poor performance is likely due to its sensitivity to the presence of copulas and zero truncation (in the copula 0 case, the proportions of zero values for each variable range from $5\%$ to $70\%$).
We also use the default values in \texttt{SCCALab} for all of the parameters, so better performance could possibly be achieved by adjusting those values. Sparse canonical correlation analysis based on Pearson's correlation outperforms all other methods in the low-dimensional setting when no data transformation is applied (copula 0); however, its performance deteriorates when monotone transformations are applied to the data (copulas 1 and 2). It also performs worse than our rank-based approach in the high-dimensional setting. This is likely due to the larger number of zero-inflated variables created by truncation, which Pearson's correlation does not take into account. In low-dimensional settings, the \textsc{bic}$_1$ and \textsc{bic}$_2$ criteria lead to similar values of $\widehat \rho$, with larger variance in the \textsc{bic}$_1$ performance. In high-dimensional settings, \textsc{bic}$_2$ is clearly better than \textsc{bic}$_1$ in predictive performance, and this holds irrespective of the choice of the estimator for the latent correlation matrix (Pearson's correlation matrix or the proposed rank-based correlation matrix). Overall, our method based on Kendall's $\tau$ with the \textsc{bic}$_2$ criterion leads to the highest values of $\widehat \rho$ and the smallest values of the predictive loss across dimensions and copula types. Figure~\ref{fig:TT_selection} illustrates the variable selection performance of each method. Classical canonical correlation analysis and canonical ridge are excluded as they do not perform variable selection. To ensure that the results are consistent with the numerical precision of the optimization algorithm, we treat a variable as nonzero if its loading is above the $10^{-6}$ threshold in absolute value. Unexpectedly, the number of selected variables varies significantly across replications for the method of \cite{Witten:2009PMD} (bottom panel of Figure~\ref{fig:TT_selection}), leading to significant variations in the true positive and true negative rates. We suspect this is due to the use of a permutation approach for the selection of tuning parameters. Our approach based on Kendall's $\tau$ leads to a more favorable combination of true positive and true negative rates compared to the competing methods, especially when data transformations are applied. Furthermore, this advantage is maintained independently of the tuning parameter selection scheme. In Section~S3$\cdot$3 of the Supplementary Material, we compare the true positive versus false positive curves obtained by each method over the range of tuning parameters, and find that our rank-based estimator leads to the highest area under the curve in the copula settings. Comparing the \textsc{bic}$_1$ and \textsc{bic}$_2$ performance in Figure~\ref{fig:TT_selection}, \textsc{bic}$_1$ leads to the sparsest models and the highest true negative rates for both the Pearson correlation and our rank-based correlation, at the expense of missing some true variables in the high-dimensional settings. Given the comparison of the predictive performance of the two selection criteria, we conclude that \textsc{bic}$_1$ is better suited for variable selection, especially when a high true negative rate is desired, whereas \textsc{bic}$_2$ works better for prediction. In addition to the truncated/truncated case, we also consider the truncated/continuous and truncated/binary cases in Section~S3$\cdot$4 of the Supplementary Material. The conclusions of the comparison of methods are similar to the truncated/truncated case.
Overall, all the methods perform best in the truncated/continuous case and worst in the truncated/binary case, which is not surprising, since dichotomization of continuous variable leads to a loss of information, thus reducing the effective sample size. \section{Application to TCGA data}\label{sec:data} The Cancer Genome Atlas (TCGA) project collects data from multiple platforms using high-throughput sequencing technologies. We consider gene expression data ($p_1=891$) and micro RNA data ($p_2=431$) for $n=500$ matched subjects from the TCGA breast cancer database. We treat gene expression data as continuous and micro RNA data as truncated continuous. The range of proportions of zero values contained in each variable in micro RNA data is $0 - \text{49$\cdot$8}\%$. The subjects belong to one of the 5 breast cancer subtypes: Normal, Basal, Her2, LumA and LumB, with 37 subjects having missing subtype information (denoted as NA). The goal of the analysis is to characterize the association between gene expression and micro RNA data, and investigate whether this association is related to breast cancer subtypes. To investigate the performance of our method relative to other approaches, we randomly split the data 500 times. Each time $400$ samples are used for training, and the remaining $100$ test samples are used to assess the association via \[ \widehat{\rho}_{\text{test}} = \left| \dfrac{\widehat{w}_{1, \text{train}}^\top \Sigma_{12, \text{test}} \widehat{w}_{2, \text{train}}}{(\widehat{w}_{1, \text{train}}^\top \Sigma_{1, \text{test}} \widehat{w}_{1, \text{train}})^{1/2} (\widehat{w}_{2, \text{train}}^\top \Sigma_{2, \text{test}} \widehat{w}_{2, \text{train}})^{1/2}} \right|. \] Here $\Sigma_{\text{test}}$ is evaluated based on the test samples, and is either the rank-based estimator $\widetilde R$ for our method, or the sample covariance matrix for other methods. We also compare the number of selected genes and selected micro RNAs, and the results are presented in Table~\ref{tab:TCGA_cv}. We have not considered the method of \citet{Gao:2017SCCA} in this section due to its poor performance in Section~\ref{sec:sim} and high computational cost (it takes around 40 minutes per replication on these data on a Windows 3$\cdot$60GHz Intel Core i7 CPU machine). \begin{table}[!b] \centering \begin{tabular}{lrrrrrrrrrrl} \hline Method & \multicolumn{4}{r}{Selected Genes} & \multicolumn{4}{r}{Selected micro RNAs} & \multicolumn{3}{c}{$\widehat{\rho}_{\text{test}}$}\\ \hline CCA & & & 891$\cdot$00 & (0$\cdot$00) & & & 431$\cdot$00 & (0$\cdot$00) & & 0$\cdot$004 & (0$\cdot$109) \\ RidgeCCA & & & 891$\cdot$00 & (0$\cdot$00) & & & 431$\cdot$00 & (0$\cdot$00) & & 0$\cdot$712 & (0$\cdot$126)\\ WittenCCA & & & 338$\cdot$36 & (194$\cdot$58)& & & 165$\cdot$53 & (100$\cdot$30) & & 0$\cdot$789 & (0$\cdot$041) \\ PearsonBIC1 & & & 9$\cdot$92 & (2$\cdot$62) & & & 16$\cdot$08 & (3$\cdot$18) & & 0$\cdot$813 & (0$\cdot$044) \\ PearsonBIC2 & & & 27$\cdot$68 & (6$\cdot$20) & & & 40$\cdot$50 & (12$\cdot$54) & & 0$\cdot$857 & (0$\cdot$034) \\ KendallBIC1 & & & 18$\cdot$24 & (3$\cdot$51) & & & 9$\cdot$68 & (3$\cdot$15) & & $\mathbf{0}$\mbox{$\cdot$}$\mathbf{880}$ & (0$\cdot$030) \\ KendallBIC2 & & & 38$\cdot$18 & (8$\cdot$47) & & & 31$\cdot$01 & (6$\cdot$74) & & $\mathbf{0}$\mbox{$\cdot$}$\mathbf{913}$ & (0$\cdot$029)\\ \hline \end{tabular} \caption{Mean support sizes and values of $\widehat{\rho}_{\text{test}}$'s over 500 random splits of breast cancer data. 
The standard deviation is given in parentheses}\label{tab:TCGA_cv} \end{table} Of course, neither the sample canonical correlation analysis nor the canonical ridge method performs variable selection. In addition, $\widehat{\rho}_{\text{test}}$ is very close to $0$ for the sample canonical correlation analysis, confirming the poor performance of the method. Canonical ridge leads to significantly higher values of $\widehat \rho_{\text{test}}$, demonstrating the advantage of added regularization; however, its correlation values are still smaller than those of the remaining approaches. The method of~\cite{Witten:2009PMD} leads to higher correlation values than both the sample canonical correlation analysis and canonical ridge, but it still selects a large number of variables, with highly variable model sizes across replications. We suspect this is due to the use of a permutation-based algorithm for tuning parameter selection: similar behaviour is also observed in Section~\ref{sec:sim}. Sparse canonical correlation analysis based on Pearson's correlation selects a much smaller number of genes and micro RNAs but achieves higher values of $\widehat \rho_{\text{test}}$ than the method of \citet{Witten:2009PMD}. This is consistent with the results in Section~\ref{sec:sim}. The highest values of $\widehat \rho_{\text{test}}$ are achieved by our approach based on Kendall's $\tau$ with a smaller number of selected variables, confirming that the association found is not due to over-fitting, as it generalizes well to out-of-sample data. The \textsc{bic}$_1$ criterion consistently leads to sparser models than \textsc{bic}$_2$ for both Pearson and Kendall-based correlation estimates, with the \textsc{bic}$_2$ criterion giving larger out-of-sample correlation values. In light of these results and the results of Section~\ref{sec:sim}, we conclude that \textsc{bic}$_1$ is advantageous for variable selection, due to the sparser models and higher true negative rates observed in simulations, whereas \textsc{bic}$_2$ is advantageous for prediction. \begin{figure} \caption{\baselineskip=12pt Genes and micro RNAs selected in more than 80\% of the 500 repetitions by our approach with the~\textsc{bic}$_2$ criterion are used for the heatmaps. \textbf{Left:} A heatmap of 19 genes: blue indicates positive expression levels, red negative, and white zero. \textbf{Right:} A heatmap of 16 micro RNAs, with color saturation levels assigned based on variable-specific quantiles. For both figures, the dissimilarity measure is set as $1-\widetilde{R}$ with our rank-based correlation $\widetilde{R}$, and Ward linkage is used.} \label{fig:TCGA_heatmapboth} \end{figure} We next investigate possible relationships between the selected variables and breast cancer subtypes. Since the selected variables may change across the random data splits, we consider the selection frequency of each gene and micro RNA across all 500 replications of our method with the \textsc{bic}$_2$ criterion, and choose the variables that are selected in at least 80\% of the splits. Figure~\ref{fig:TCGA_heatmapboth} shows heatmaps of expression levels of the resulting 19 genes and 16 micro RNAs, with samples ordered by their respective cancer subtype. The heatmaps show a clear separation between Basal and the other subtypes, suggesting that the association found is relevant to cancer biology. Many of the selected genes and micro RNAs can be found in recent literature that supports their association with breast cancer.
\citet{ERBB4:kim2016prognostic} indicates that ERBB4 is a prognostic marker for triple-negative breast cancer, a term often used interchangeably with Basal-like breast cancer. In agreement with our results, \citet{VGLL1} reports that VGLL1 and miR-934 are highly correlated with each other, and that both are overexpressed in the Basal-like subtype. They also find that the selected FOXA1 and GATA3 genes, as well as the ESR1 gene (not selected at the 80\% frequency threshold, but still selected with 73$\cdot$4\% frequency), have strong negative correlations with both VGLL1 and miR-934. The expression level of the selected ELF5 gene is shown to play a key role in determining the breast cancer molecular subtype in \citet{ELF5_kalyuga} and \citet{ELF5:piggin2016}. Furthermore, \citet{miRNA18a_505} validate that the selected hsa-miR-18a and hsa-miR-505 miRNAs are significantly correlated with prognostic breast cancer biomarkers, and that high expression of hsa-miR-18a is strongly associated with Basal-like breast cancer features. Finally, the selected hsa-miR-135b is reported to be related to breast cancer cell growth in \citet{miRNA135b_ref1} and \citet{miRNA135b_ref2}. \section{Discussion} One of the main contributions of this work is a truncated Gaussian copula model for zero-inflated data, together with the development of a corresponding rank-based estimator of the latent correlation matrix. While our focus is on canonical correlation analysis, our estimator can be used in conjunction with other covariance-based approaches. For example, it can be used for constructing graphical models as in~\citet{Fan:2016um} in cases where some or all of the variables have an excess of zeros. Micro RNA data is one example that we have explored in this work; another prominent example is microbiome abundance data. It would be of interest to further explore the potential of our modeling approach in different application areas. The R package \texttt{mixedCCA} implementing our method is available from the authors' GitHub page, \url{https://github.com/irinagain/mixedCCA}. \end{document}
\begin{document} \title{The Foliated Weinstein Conjecture} \subjclass[2010]{Primary: 53D10.} \date{May, 2015} \keywords{contact structures, foliations, Weinstein conjecture} \author{\'Alvaro del Pino} \address{Universidad Aut\'onoma de Madrid and Instituto de Ciencias Matem\'aticas -- CSIC. C. Nicol\'as Cabrera, 13--15, 28049, Madrid, Spain.} \email{[email protected]} \author{Francisco Presas} \address{Instituto de Ciencias Matem\'aticas -- CSIC. C. Nicol\'as Cabrera, 13--15, 28049, Madrid, Spain.} \email{[email protected]} \begin{abstract} A foliation is said to admit a foliated contact structure if there is a codimension $1$ distribution in the tangent space of the foliation such that the restriction to any leaf is contact. We prove a version of the Weinstein conjecture in the presence of an overtwisted leaf. The result is shown to be sharp. \end{abstract} \maketitle \section{Introduction} The \textbf{Weinstein conjecture} \cite{Wei} states that the Reeb vector field associated to a contact form $\alpha$ on a closed $(2n+1)$--manifold $M$ always has a closed orbit. Hofer proved in \cite{Ho} that the Weinstein conjecture holds for any $3$--dimensional contact manifold $(M^3, \alpha)$ that is overtwisted or satisfies $\pi_2(M) \neq 0$. The $3$--dimensional case was then proven in full generality by Taubes \cite{Tau} by localising the Seiberg--Witten equations along Reeb orbits. The main theorem of this note -- definitions of the relevant objects will be given in the next section -- reads as follows: \begin{theorem} \label{thm:main} Let $(M^{3+m}, {\mathcal{F}}^3, \xi^2)$ be a contact foliation in a closed manifold $M$. Let $\alpha$ be a defining 1--form for an extension of $\xi$ and let $R$ be its Reeb vector field. Let ${\mathcal{L}}^3 \hookrightarrow M$ be a leaf. \begin{itemize} \item[i.] If $({\mathcal{L}}, \xi|_{{\mathcal{L}}})$ is an overtwisted contact manifold, $R$ possesses a closed orbit in the closure of ${\mathcal{L}}$. \item[ii.] If $\pi_2({\mathcal{L}}) \neq 0$, $R$ possesses a closed orbit in the closure of ${\mathcal{L}}$. \end{itemize} \end{theorem} The case where the leaf ${\mathcal{L}}$ is closed corresponds to the Weinstein conjecture. This result contrasts, just as in the non--foliated case, with the behaviour of \textsl{smooth flows}: it was proven in \cite{CPP15} that any nowhere vanishing vector field tangent to a foliation $(M^{3+m}, {\mathcal{F}}^3)$ can be homotoped, using parametric plugs, to a tangent vector field without periodic orbits. The proof of Theorem \ref{thm:main}, based on Hofer's methods, occupies the last section of the note. Before that, several examples showing the sharpness of the result are discussed. In Subsection \ref{ssec:tight}, Proposition \ref{prop:noGeodesics} constructs a contact foliation in the 4--torus $\mathbb{T}^4$ that has \textsl{all leaves tight} and no closed Reeb orbits. Naturally, in this example all leaves are open. This shows that the \textbf{foliated Weinstein conjecture} does not necessarily hold as soon as we drop the assumption on overtwistedness. Then Proposition \ref{prop:geodesic} presents a more sophisticated example of a contact foliation in $\mathbb{S}^3 \times \mathbb{S}^1$ with all leaves tight and with closed Reeb orbits appearing only in the unique compact leaf of the foliation. In Subsection \ref{ssec:sharp} we construct a foliation in $\mathbb{S}^2 \times \mathbb{S}^1 \times \mathbb{S}^1$ that has two compact leaves, each diffeomorphic to $\mathbb{S}^2 \times \mathbb{S}^1$, on which all other leaves accumulate.
We then endow it with a foliated contact structure that makes all leaves overtwisted but that has closed Reeb orbits only in the compact ones. Theorem \ref{thm:main} is therefore sharp in the sense that an overtwisted leaf might not \textsl{itself} contain a closed Reeb orbit. In Subsection \ref{ssec:nonComplete} we construct Reeb flows with no closed orbits in every \textsl{open contact manifold}. \textsl{Acknowledgements.} Part of this work was carried out while the first author was visiting the group of Andr\'as Stipsicz at the Alfr\'ed R\'enyi Institute. We are very thankful to Klaus Niederkr\"uger for the many hours of insightful conversations about this project. The authors would also like to thank Viktor Ginzburg for his interest in this work and for his suggestions regarding Propositions \ref{prop:openLeaf} and \ref{prop:noGeodesics}. Several other people discussed this work with us while it was in progress: S. Behrens, R. Casals, M. Hutchings, E. Miranda, D. Pancholi, and D. Peralta--Salas. The first author is supported by a La Caixa--Severo Ochoa grant. Both authors are supported by Spanish National Research Project MTM2013--42135 and the ICMAT Severo Ochoa grant SEV--2011--0087 through the V. Ginzburg Lab. \section{The relevant concepts involved} All objects considered henceforth will be smooth. Foliations and distributions will be oriented and cooriented. When these orientability assumptions are dropped, the arguments often go through by passing to suitable double or quadruple covers. \subsection{Contact structures} \begin{definition} Let $W$ be a $(2n+1)$--dimensional manifold. A distribution $\xi^{2n} \subset TW$ is said to be a \textbf{contact distribution} if it is maximally non--integrable. A $1$--form $\alpha \in \Omega^1(W)$ satisfying $\ker(\alpha) = \xi$ is called a \textbf{contact form}. $\xi$ being maximally non--integrable amounts to $\alpha$ satisfying $\alpha \wedge (d\alpha)^n \neq 0$. We say that the pair $(W,\xi)$ is a \textbf{contact manifold}. \end{definition} A map $\phi: (W_1, \xi_1) \to (W_2,\xi_2)$ satisfying $\phi^* \xi_2 = \xi_1$ is a contact map. If $\phi$ is additionally a diffeomorphism we will say that $\phi$ is a \textbf{contactomorphism}. \begin{example} Consider $\mathbb{R}^{2n+1}$ with coordinates $(x_1,y_1,\cdots,x_n,y_n,z)$. The $2n$--distribution $\xi_{st} = \ker(dz-\sum_{i=1}^{n} x_i\,dy_i)$ is called the standard \textbf{tight} contact structure. \end{example} \begin{example} Consider $\mathbb{R}^3$ with cylindrical coordinates $(r,\theta,z)$. The $2$--distribution $\xi_{ot} = \ker(\cos(r)dz+r\sin(r)d\theta)$ is called the contact structure \textbf{overtwisted at infinity}. The disc $\Delta = \{z=0, r \leq \pi\}$ is called the \textbf{overtwisted disc}. \end{example} It was shown by Bennequin in \cite{Be} that the structures $(\mathbb{R}^3, \xi_{st})$ and $(\mathbb{R}^3, \xi_{ot})$, although homotopic as plane fields, are distinct as contact structures. \subsubsection{Overtwisted contact structures in dimension $3$} \begin{definition} Let $(W^3,\xi^2)$ be a contact manifold. $(W,\xi)$ is said to be an \textbf{overtwisted} contact manifold if there is an embedded 2--disc $D \subset W$ and a contactomorphism $\phi: \nu(\Delta) \to \nu(D)$ between a neighbourhood $\nu(\Delta)$ of the overtwisted disc $\Delta \subset \mathbb{R}^3$ and a neighbourhood $\nu(D) \subset W$ of $D$.
\end{definition} The relevance of this notion stems from the following theorem, which states that overtwisted contact manifolds are completely classified by their underlying algebraic topology. \begin{theorem} \emph{(Eliashberg \cite{El89})} \label{thm:El89} Let $W^3$ be a closed $3$--manifold. Any plane field $\eta \subset TW$ is homotopic to an overtwisted contact structure. Further, any two overtwisted contact structures $\xi_1, \xi_2 \subset TW$ homotopic as plane fields are homotopic through overtwisted contact structures. In particular, they are contactomorphic. \end{theorem} Theorem \ref{thm:El89} says that 2--plane fields and contact structures in $3$--manifolds are in one--to--one correspondence at the level of connected components. Eliashberg's result is stronger than what we have stated. Indeed, there is a weak homotopy equivalence if one restricts to the class of plane fields that have a fixed overtwisted disc. Overtwisted contact structures in $\mathbb{R}^3$ were completely classified by Eliashberg in \cite{El93}. In particular, the following proposition will be used in Subsection \ref{ssec:sharp}. \begin{proposition} \emph{(Eliashberg \cite{El93})} \label{prop:El93} Let $\xi$ be a contact structure in $\mathbb{R}^3$ that is overtwisted in the complement of every compact subset. Then $\xi$ is isotopic to $\xi_{ot}$. \end{proposition} Contact structures with the property that they remain overtwisted after removing any compact subset are called \textbf{overtwisted at infinity}. \subsubsection{Overtwisted contact structures in higher dimensions} Overtwisted contact structures have been defined in full generality -- for every dimension -- in \cite{BEM}. In \cite{CMP} it has been shown that the overtwisted disc in higher dimensions can be understood as a stabilisation of the overtwisted disc in dimension $3$. The following lemma will be useful in Subsection \ref{ssec:nonComplete}. Its proof is based on a \textsl{swindling} argument, as found in \cite{El92}. \begin{lemma} \label{coro:BEM} \emph{(\cite[Corollary 1.4]{BEM})} Let $(M^{2n+1}, \xi_M)$ be a connected overtwisted contact manifold and let $(N^{2n+1}, \xi_N)$ be an open contact manifold of the same dimension. Let $f : N \rightarrow M$ be a smooth embedding covered by a contact bundle homomorphism $\Phi : TN \rightarrow TM$ -- that is, $\Phi|_{\xi_N(p)}$ maps $\xi_N(p)$ into $\xi_M(f(p))$ and preserves the conformal symplectic structure -- and assume that $df$ and $\Phi$ are homotopic as injective bundle homomorphisms $TN \rightarrow TM$. Then $f$ is isotopic to a contact embedding $\tilde{f} : (N, \xi_N) \rightarrow (M, \xi_M)$. \end{lemma} \subsubsection{Convex surfaces} \label{sssec:convex} Let $(W^3, \xi^2)$ be a contact manifold. Let $\Sigma^2 \subset W$ be an immersed surface. The intersection $\xi \cap T\Sigma$ yields a singular foliation by lines on $\Sigma$, which is called the \textbf{characteristic foliation}. In the generic case, it can be assumed that the singularities -- the points where $\xi_p = T_p\Sigma$ -- are isolated points, which can then be classified into \textbf{nicely elliptic} and \textbf{hyperbolic}. \begin{example} \label{ex:ot} By our characterisation of overtwistedness, any overtwisted manifold $(W, \xi)$ contains a disc $\Sigma$ with a single singular point, which is nicely elliptic and whose boundary is legendrian. All other leaves spiral around the legendrian boundary at one end and converge to the elliptic point at the other. Such a disc appears as a $C^\infty$--small perturbation of the overtwisted disc $\Delta$.
\end{example} \begin{example} \label{ex:tight} Consider the unit sphere $\mathbb{S}^2$ in $(\mathbb{R}^3, \xi_{st})$. Its singular foliation has two critical points located in the poles, which are nicely elliptic. All other leaves are diffeomorphic to $\mathbb{R}$ and they connect the poles. \end{example} \begin{theorem}[Eliashberg, Giroux, Fuchs] \label{thm:EGF} Let $\Sigma = \mathbb{S}^2$ and let $(W, \xi)$ be tight. Then, after a $C^0$--small perturbation of its embedding, it can be assumed that the characteristic foliation of $\Sigma$ is conjugate to the one of the unit sphere in $\mathbb{R}^3$ tight. \end{theorem} \subsection{Contact foliations} The contents of this section appear in more detail in \cite{CPP}. \begin{definition} A \textbf{contact foliation} is a triple $(M^{2n+1+m}, {\mathcal{F}}^{2n+1}, \xi^{2n})$ where $M$ is a manifold of dimension $2n+1+m$, ${\mathcal{F}}$ is a foliation of codimension $m$, and $\xi \subset T{\mathcal{F}}$ is a distribution of dimension $2n$ that is contact on each leaf of ${\mathcal{F}}$. Often we will say that $\xi$ is a \textbf{foliated contact structure} on the foliation $(M, {\mathcal{F}})$. \end{definition} Contact foliations do exist in abundance as the following result shows: \begin{theorem} $($ \cite{CPP} $)$ Let $(M^{3+m}, {\mathcal{F}}^3)$ be a foliation such that the structure group of $T{\mathcal{F}}$ reduces to $U(1)\oplus 1$. Then ${\mathcal{F}}$ admits a foliated contact structure with all leaves overtwisted. \end{theorem} This result is the foliated counterpart of Eliashberg's result \cite{El89}. We say that a distribution $\Theta^{2n+m}$ satisfying $\xi = \Theta \cap T{\mathcal{F}}$ is an \textbf{extension} of $\xi$, and a regular equation $\alpha$ can be considered for $\Theta = \ker(\alpha)$. It follows that $d\alpha$ is a symplectic form on $\xi$, but not necessarily on $\Theta$. \begin{definition} Let $(M,{\mathcal{F}},\xi)$ be a contact foliation. Let $\Theta$ be an extension of $\xi$ with regular equation $\alpha$. The \textbf{Reeb vector field} $R$ associated to $\alpha$ is the unique vector field satisfying $R \in \Gamma(T{\mathcal{F}})$, $(i_R d\alpha)|_{T{\mathcal{F}}} = 0$, and $\alpha(R) = 1$. \end{definition} Of course this is nothing but the leafwise Reeb vector field induced by the restriction of $\alpha$ to each leaf of ${\mathcal{F}}$. \subsubsection{The space of foliated contact elements} The following concept will be relevant in the subsequent construction. \begin{definition} A \textbf{strong symplectic foliation} is a triple $(M^{m+2n},{\mathcal{F}}^{2n}, \omega)$ where $M$ is a smooth manifold, ${\mathcal{F}}$ a foliation, and $\omega \in \Omega^2(M)$ a closed 2--form that is symplectic on the leaves of ${\mathcal{F}}$. \end{definition} Let $(M^{n+m}, {\mathcal{F}}^n)$ be a smooth foliation. The cotangent space to the foliation $\pi: T^*{\mathcal{F}} \to M$ is an $n$--dimensional bundle over $M$ that carries a natural foliation ${\mathcal{F}}^* = \coprod_{{\mathcal{L}} \in {\mathcal{F}}} \pi^{-1}({\mathcal{L}})$. Additionally, it is endowed with a canonical $1$--form: \[ \lambda_{(p,w)}(v) = w \circ d_{(p,w)}\pi(v), \text{ at a point $(p,w)$, $p \in M$, $w \in T^*_p{\mathcal{F}}$.} \] If ${\mathcal{L}} \subset M$ is a leaf of ${\mathcal{F}}$ this is nothing but the \textbf{Liouville $1$--form} on $T^*{\mathcal{L}}$. Therefore, since $d\lambda$ is a leafwise symplectic form that is globally exact, $(T^*{\mathcal{F}}, {\mathcal{F}}^*, d\lambda)$ is a strong symplectic foliation. 
Fix a leafwise metric $g$ in $M$. Then there is a bundle isomorphism $\#: T^*{\mathcal{F}} \to T{\mathcal{F}}$. This defines a metric in $T^*{\mathcal{F}}$ by setting $g^*(w_1,w_2) = g(\# w_1, \# w_2)$. The presence of $g^*$ allows one to consider the unit cotangent bundle $\mathbb{S}(T^*{\mathcal{F}})$ as a submanifold of $T^*{\mathcal{F}}$ transverse to ${\mathcal{F}}^*$. The intersection of $\mathbb{S}(T^*{\mathcal{F}})$ with a leaf ${\mathcal{L}}$ is by construction the sphere bundle $\mathbb{S}(T^*{\mathcal{L}})$, which endowed with the form $\lambda$ corresponds to the contact manifold which is called the \textbf{space of oriented contact elements}. Therefore $(\mathbb{S}(T^*{\mathcal{F}}), {\mathcal{F}}^* \cap \mathbb{S}(T^*{\mathcal{F}}), \ker(\lambda))$ is a contact foliation. We call it the \textbf{space of foliated oriented contact elements}. \begin{lemma} The Reeb flow in $(\mathbb{S}(T^*{\mathcal{F}}), {\mathcal{F}}^* \cap \mathbb{S}(T^*{\mathcal{F}}), \lambda)$ coincides with the leafwise cogeodesic flow of $g$. \end{lemma} This lemma can be proved just as in the case of contact manifolds (see \cite[Theorem 1.5.2]{Ge}). This construction will be used in Subsection \ref{ssec:tight}. \subsubsection{The symplectisation of a contact foliation} \begin{definition} Let $(M^{2n+1+m}, {\mathcal{F}}^{2n+1}, \xi^{2n})$ be a contact foliation. Let $\Theta^{2n+m} \subset TM$ be an extension of $\xi$, and let $\alpha$ be a defining $1$--form for $\Theta$, $\ker(\alpha) = \Theta$. We say that \[ ({\mathbb{R}} \times M, {\mathcal{F}}_{\mathbb{R}} = \coprod_{{\mathcal{L}} \in {\mathcal{F}}} {\mathbb{R}} \times {\mathcal{L}}, \omega = d(e^t\alpha)), \text{ with $t$ the coordinate in $\mathbb{R}$, }\] is the \textbf{symplectisation} of $(M, {\mathcal{F}}, \xi)$. \end{definition} The symplectisation is another instance of a strong symplectic foliation. Restricted to every individual leaf this is the standard symplectisation of the contact structure on the leaf. We are abusing notation and we are writing $\alpha$ for $\pi^* \alpha$, where $\pi: {\mathbb{R}} \times M \to M$ is the projection onto the second factor. We will also write $\xi$ for the restriction of $(d\pi)^{-1} \xi$ to the level $T(\{t\} \times M)$ and $R$ for the lift of the Reeb vector field $R$ to $\{t\} \times M$. Let us also introduce the projection $\pi_\xi: T({\mathbb{R}} \times M) \to \xi$ along the $\partial_t$ and $R$ directions. \section{Several examples} \subsection{Non--complete Reeb vector fields with no closed orbits} \label{ssec:nonComplete} It is first reasonable to wonder about the Weinstein conjecture for open manifolds in general. In this direction, not much is known. In \cite{vdBPV} and its sequel \cite{vdBPRV} it is shown that the Weinstein conjecture holds for non--compact energy surfaces in cotangent bundles as long as one imposes certain topology conditions on the hypersurface and certain growth conditions on the hamiltonian, which is assumed to be of mechanical type. \begin{proposition} \label{prop:openLeaf} Let $(N^{2n+1}, \xi)$ be an open contact manifold. Then there is a contact form $\alpha$, $\ker(\alpha) = \xi$, whose (possibly non--complete) associated Reeb flow has no periodic orbits. \end{proposition} \begin{proof} Fix some small ball $U \subset N$. Modify $\xi$ within $U$ to introduce an overtwisted disc $\Delta$ in the sense of \cite{BEM}. 
Applying the relative $h$--principle for overtwisted contact structures, we obtain a contact structure $\xi_{ot}$ on $N$ that agrees with $\xi$ outside of $U$ and has $\Delta$ as an overtwisted disc. This new contact structure is homotopic to the original one as almost contact structures. Let $\{N_i\}_{i \in \mathbb{N}}$ be an exhaustion of $N$ by compact sets, $N_i \subset N_{i+1}$. Fix a non--degenerate contact form $\alpha_{ot}$ for the overtwisted structure $\xi_{ot}$. Its closed Reeb orbits are isolated and countable; moreover, we may assume that no closed orbit is fully contained in $\Delta$. We index them as follows: each compact set $N_i$ is intersected by finitely many closed orbits and hence we write $\{\gamma^i_j\}_{j \in I_i}$ for the collection of closed Reeb orbits intersecting $N_i$ but not $N_{i-1}$. Construct a path $\beta: [0,\infty) \to N$, avoiding $\Delta$, that is proper and such that $N \setminus \beta([0,\infty))$ is diffeomorphic to $N$ by a map isotopic to the identity. Then, for each $i$, and each $j \in I_i$, we can construct paths $\beta^i_j: [0,1] \to N_i$ such that the $\beta_j^i$ are all pairwise disjoint, they intersect ${\operatorname{Image}}(\beta)$ only at $\beta^i_j(0) \in {\operatorname{Image}}(\beta)$, they satisfy $\beta^i_j(1) \in \gamma^i_j \cap N_i$, and they avoid $\Delta$. Since the images of $\beta$ and the $\beta^i_j$ avoid $\Delta$, we can fix a closed contractible neighbourhood $V$ of $\Delta$ disjoint from them as well. Construct a path $\beta_{ot}: [0,1] \to N$ with $\beta_{ot}(0) \in \partial V$, $\beta_{ot}(1) \in {\operatorname{Image}}(\beta)$ and otherwise avoiding $V$ and all other paths. Consider the tree $T = \beta \cup \{\cup_{i \in \mathbb{N}, j \in I_i} \beta_j^i \} \cup \beta_{ot}$. Denote by $\nu(T)$ a small closed neighbourhood that deformation retracts onto $T$. We can assume that $N$ is diffeomorphic to $N' = N \setminus (\nu(T) \cup V)$ by a diffeomorphism $f: N \to N'$ that is isotopic to the identity. The embedding $f: (N,\xi) \to (N' \cup V,\xi_{ot})$ has image $N'$ and is covered by a contact bundle homomorphism. This follows because $f$ is isotopic to the identity in $N$ and $\xi$ and $\xi_{ot}$ are homotopic. Now an application of Lemma \ref{coro:BEM} implies that there is an isocontact embedding $\tilde f: (N,\xi) \to (N' \cup V, \xi_{ot})$. By construction, no closed orbit of $\alpha_{ot}$ is entirely contained in $N' \cup V$, and hence the pullback form $\alpha = \tilde f^*\alpha_{ot}$ has no closed orbits either. \end{proof} \begin{remark} A natural open question is whether every open contact manifold can be endowed with a contact form inducing a complete Reeb flow with no closed orbits. \end{remark} \subsection{The Weinstein conjecture does not hold for contact foliations with all leaves tight} \label{ssec:tight} We first construct a contact foliation with all leaves tight whose closed Reeb orbits all lie in the only compact leaf. \begin{proposition} \label{prop:geodesic} Let $(\mathbb{S}^3, {\mathcal{F}}_{Reeb})$ be the Reeb foliation on the $3$--sphere and let $g$ be the round metric on $\mathbb{S}^3$. Consider the contact foliation $(\mathbb{S}^3 \times \mathbb{S}^1, \lambda_{can})$ on the unit cotangent bundle of ${\mathcal{F}}_{Reeb}$. Its only closed Reeb orbits lie in the compact torus leaf. \end{proposition} The proposition is an easy consequence of the following lemma.
\begin{lemma} Consider the Riemannian manifold $(\mathbb{R}^2,g)$, where $g$ is of the form $dr \otimes dr + f(r) d\theta \otimes d\theta$, with $f(r)$ an increasing function satisfying $f(r) = r^2$ close to the origin. Then $(\mathbb{R}^2,g)$ has no closed geodesics. \end{lemma} \begin{proof} Applying the Koszul formula yields the following equations for the Christoffel symbols: \[ g(\nabla_{\partial_r}\partial_\theta, \partial_\theta) = f'/2 = \Gamma_{r\theta}^\theta g(\partial_\theta, \partial_\theta) = \Gamma_{r\theta}^\theta f, \] \[ g(\nabla_{\partial_\theta}\partial_r, \partial_\theta) = f'/2 = \Gamma_{\theta r}^\theta g(\partial_\theta, \partial_\theta) = \Gamma_{\theta r}^\theta f, \] \[ g(\nabla_{\partial_\theta}\partial_\theta, \partial_r) = -f'/2 = \Gamma_{\theta \theta}^r g(\partial_r, \partial_r) = \Gamma_{\theta \theta}^r. \] Hence the geodesic equations read: \[ \overset{..}{r} = \frac{f'}{2}\overset{.}{\theta}^2, \] \[ \overset{..}{\theta} = -\log(f)'\overset{.}{\theta}\overset{.}{r}. \] If at any point $\overset{.}{\theta}=0$, then $\overset{.}{\theta} = 0$ for all times and $\overset{.}{r}$ is a constant. This situation corresponds to radial lines. All other geodesics have $\overset{.}{\theta} \neq 0$ at all times and hence $\overset{..}{r} > 0$. In particular, as soon as a geodesic has $\overset{.}{r} \geq 0$ at some point, it will have $\overset{.}{r} > 0$ at all later times and hence it will not close up. For a geodesic to close up we deduce then that it must have $\overset{.}{r} < 0$ for all times, but then it cannot close up either. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:geodesic}] Consider $\mathbb{S}^3$ lying in $\mathbb{C}^2$, with coordinates $(z_1,z_2) = (r_1,\theta_1,r_2,\theta_2)$. The Reeb foliation can be assumed to have the Clifford torus $|z_1|^2 = |z_2|^2 = 1/2$ as its torus leaf. One of the solid tori, denote it by $T$, corresponds to $\{|z_1|^2 \leq 1/2, |z_2|^2 = 1 - |z_1|^2\}$ and the other one is given by the symmetric equation. Let us cover the solid torus $T$ with the map $\phi: \mathbb{R} \times \mathbb{D}^2_{1/\sqrt{2}} \rightarrow T$ given by $\phi(s,r,\theta) = (r,\theta,\sqrt{1-r^2},s)$. For all purposes we can work in $\mathbb{R} \times \mathbb{D}^2_{1/\sqrt{2}}$, which is the universal cover of the solid torus, and hence we shall do so. The restriction of the flat metric of $\mathbb{C}^2$ \[ g = \sum_{i=1,2} \left( dr_i \otimes dr_i + r_i^2 d\theta_i \otimes d\theta_i \right) \] to $\mathbb{S}^3$ is precisely the round metric. In the parametrisation of $T$ given above it reads: \[ \phi^* g = \dfrac{1}{1-r^2} dr \otimes dr + r^2 d\theta \otimes d\theta + (1-r^2) ds \otimes ds. \] In particular, this readily shows that the metric induced on the Clifford torus is flat. Consider the embeddings \[ \psi_c: \mathbb{R}^2 \rightarrow \mathbb{R} \times \mathbb{D}^2_{1/\sqrt{2}} \] \[ \psi_c(\rho,\theta) = (f(\rho)+c,\dfrac{\rho}{\sqrt{2}(1+\rho)},\theta), \] with $f: \mathbb{R} \to \mathbb{R}$ a smooth increasing function that agrees with $\rho^2$ near the origin and with the identity away from it. They realise the non--compact leaves of the Reeb foliation in $T$. It is clear that the leafwise metric is of the form \[ \psi_c^* \phi^* g = h_1(\rho) d\rho \otimes d\rho + h_2(\rho) d\theta \otimes d\theta \] with $h_2(\rho)$ increasing and converging to $1/2$ as $\rho \rightarrow \infty$ and $h_1(\rho)$ bounded from above and behaving as $O(\rho)$ near the origin.
At every point of $\mathbb{R}^2$ a vector field $X$ pointing radially and of unit length can be defined. The properties of $h_1$ imply that $X$ is complete and following $X$ yields a reparametrisation $\Phi: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ that satisfies: \[ \Phi^*\psi_c^* \phi^* g = d\rho \otimes d\rho + \tilde{h}(\rho) d\theta \otimes d\theta \] with $\tilde{h}(\rho)$ still increasing and converging to $1/2$ as $\rho \rightarrow \infty$. Now the Lemma yields the result. \end{proof} \begin{remark} Taking the universal cover of a leaf yields the standard tight $\mathbb{R}^3$, so all leaves are tight. \end{remark} One can actually construct a contact foliation whose Reeb vector field has no periodic orbits at all. \begin{proposition} \label{prop:noGeodesics} Consider the manifold $\mathbb{T}^3$, endowed with the Euclidean metric $g$, and the foliation ${\mathcal{F}}$ by planes given by two rationally independent slopes. The space of foliated cooriented contact elements $\mathbb{S}(T^*{\mathcal{F}})$ has no closed Reeb orbits. \end{proposition} \begin{proof} Let ${\mathcal{L}}$ be any leaf of ${\mathcal{F}}$; since the slopes are rationally independent, ${\mathcal{L}}$ is a plane. The corresponding leaf $\mathbb{S}(T^*{\mathcal{L}})$ of the contact foliation is diffeomorphic to $\mathbb{R}^2 \times \mathbb{S}^1$ and its universal cover is the standard tight $\mathbb{R}^3$; hence it is a tight contact manifold. Since the restriction of $g$ to ${\mathcal{L}}$ is Euclidean, there are no closed geodesics on ${\mathcal{L}}$ and hence no closed Reeb orbits in $\mathbb{S}(T^*{\mathcal{L}})$. \end{proof} \subsection{A sharp example. Overtwisted leaves with no closed orbits} \label{ssec:sharp} \subsubsection{$\mathbb{R}^3$ overtwisted at infinity with no closed orbits} Consider the following $1$--form in $\mathbb{R}^3$ in cylindrical coordinates: \[ \alpha = \cos(r) dz + (r\sin(r) + f(z)\phi(r))d\theta. \] If $f(z)\phi(r) = 0$ identically, this is the standard form $\alpha_{ot}$ for the contact structure $\xi_{ot}$ that is overtwisted at infinity. We will henceforth assume that $f(z)\phi(r)$ is $C^1$--small, and therefore $\alpha$ will be a contact form as well. In particular, by Proposition \ref{prop:El93}, the contact structure it defines is contactomorphic to $\xi_{ot}$. Let us compute: \[ d\alpha = -\sin(r) dr \wedge dz + [\sin(r) + r\cos(r) + \phi'(r)f(z)]dr \wedge d\theta + f'(z)\phi(r) dz \wedge d\theta, \] whose kernel, away from the $z$--axis, is spanned by: \[ X = -f'(z)\phi(r) \partial_r + [\sin(r) + r\cos(r) + \phi'(r)f(z)] \partial_z + \sin(r) \partial_\theta. \] It is easy to check that $\alpha(X) > 0$ away from the $z$--axis, and hence the Reeb vector field is a positive multiple of $X$ there. Assume that $\phi(r)$ is a monotone function that is identically $0$ close to $0$ and identically $1$ in $[\delta, \infty)$, for $\delta > 0$ small. Then the Reeb vector field at the origin is $\partial_z$, and it remains almost vertical nearby. Assume further that $f$ is strictly decreasing and $C^1$ small. Then the Reeb vector field has a positive radial component wherever $\phi(r) > 0$, while close to the $z$--axis, where $\phi \equiv 0$, it is almost vertical. We conclude that it has no closed orbits. \subsubsection{$\mathbb{S}^2 \times \mathbb{R}$ overtwisted at infinity with no closed orbits} Consider coordinates $(z,\theta; s)$ in $\mathbb{S}^2 \times \mathbb{R}$, with $z \in [0, 2\pi]$, and construct the following $1$--form: \[ \lambda_0 = \cos(z)ds + z(z-2\pi)\sin(z)d\theta. \] It is easy to see that it is a contact form that defines two families of overtwisted discs sharing a common boundary: $\{ z \in [0,\pi], s = s_0\}$ and $\{ z \in [\pi,2\pi], s = s_0\}$. It is therefore overtwisted at infinity.
$\lambda_0$ defines two cylinders consisting of Reeb orbits: $\{z = \pi/2\}$ and $\{z = 3\pi/2\}$. Therefore, proceeding as in the previous example, we will add a small perturbation that gets rid of these orbits. Consider the form: \[ \lambda = \cos(z)ds + [z(z-2\pi)\sin(z) + f(s)\phi(z)]d\theta. \] Here we require $\phi(z)$ to be constant close to the points $0$, $\pi/2$, $\pi$, $3\pi/2$ and $2\pi$, to satisfy: \[ \phi(0) = \phi(\pi) = \phi(2\pi) = 0, \quad \phi(\pi/2) = \phi(3\pi/2) = 1 \] and to be monotone in the subintervals in between. We assume that $f$ is strictly monotone and $C^1$ small. Computing: \[ d\lambda = -\sin(z)dz \wedge ds + [ (z-2\pi)\sin(z) + z\sin(z) + z(z-2\pi)\cos(z) + f(s)\phi'(z)]dz \wedge d\theta + f'(s)\phi(z) ds \wedge d\theta, \] so the Reeb vector field is a multiple of: \[ X = -f'(s)\phi(z) \partial_z + [ (z-2\pi)\sin(z) + z\sin(z) + z(z-2\pi)\cos(z) + f(s)\phi'(z)] \partial_s + \sin(z) \partial_\theta. \] Near $z = 0, \pi, 2\pi$ the Reeb vector field is very close to $\pm \partial_s$. Away from those regions, $\phi(z) > 0$ and hence the $z$--component of the Reeb vector field does not vanish. It follows that it cannot have closed orbits. \subsubsection{Constructing the foliation} Consider $\mathbb{S}^2 \times \mathbb{S}^1 \times \mathbb{S}^1$ with coordinates $(z,\theta;s,t)$, $t \in [0,2]$. It can be endowed with the following $1$--form: \[ \tilde\lambda = \cos(z)ds + [z(z-2\pi)\sin(z) + F(t)\phi(z)]d\theta, \] with $F$ strictly increasing in $(0,1)$, strictly decreasing in $(1,2)$, $C^1$--small, and with vanishing derivatives to all orders at $0$ and $1$. Here $\phi$ is the bump function defined in the previous subsubsection. Let $\Phi: \mathbb{S}^1 \to \mathbb{S}^1$ be a diffeomorphism of the circle that fixes $\{0,1\}$ and no other points, is strictly increasing in $(0,1)$ as a map $(0,1) \to (0,1)$, and is strictly decreasing in $(1,2)$ as a map $(1,2) \to (1,2)$. $\Phi$ defines a foliation ${\mathcal{F}}_\Phi$ on $\mathbb{S}^2 \times \mathbb{S}^1 \times \mathbb{S}^1$ called the \textbf{suspension} of $\Phi$. ${\mathcal{F}}_\Phi$ can be constructed as follows. Find a family of functions $\Phi_s: \mathbb{S}^1 \to \mathbb{S}^1$, $s \in [0,1]$, satisfying: \begin{equation} \begin{cases} \Phi_0 = {\operatorname{Id}}, \quad \Phi_1 = \Phi, \\ \text{ the map } s \to \Phi_s(t) \text{ is strictly increasing for } t \in (0,1) \text{ and strictly decreasing for } t \in (1,2), \\ \left.\dfrac{\partial}{\partial s}\right|_{s=1} \Phi_s(t) = \left.\dfrac{\partial}{\partial s}\right|_{s=0} \Phi_s(\Phi_1(t)) \quad \text{ for all } t. \end{cases} \end{equation} Then the curves $\gamma_t(s) = (s,\Phi_s(t))$ induce a foliation of $[0,1] \times \mathbb{S}^1$ which glues to yield a foliation by curves of the $2$--torus. ${\mathcal{F}}_\Phi$ is the lift of such a foliation. The leaves of the foliation of the $2$--torus are obtained by concatenating the segments $\gamma_t$. $\gamma_0$ and $\gamma_1$ yield closed curves $\tilde\gamma_0$ and $\tilde\gamma_1$. All other curves are diffeomorphic to $\mathbb{R}$, and we denote them by $\tilde\gamma_t(s) = (s,h_t(s))$, $t \in (0,1) \cup (1,2)$. By our assumption on $\Phi_s$, the functions $h_t$ are strictly increasing if $t \in (0,1)$ and strictly decreasing if $t \in (1,2)$. Observe that the non--compact leaves accumulate against the two compact ones. The contact structure on the compact leaves $\mathbb{S}^2 \times \tilde\gamma_t$, $t=0,1$, is given by \[ \cos(z)ds + z(z-2\pi)\sin(z)d\theta. \] In particular, both compact leaves carry infinitely many closed Reeb orbits.
The contact structure in the non compact leaves $\mathbb{S}^2 \times \tilde\gamma_t$, $t \in (0,1) \cup (1,2)$, reads \[ \cos(z)ds + [z(z-2\pi)\sin(z) + F(h_t(s))\phi(z)]d\theta \] Since $F \circ h_t$ is non--zero, strictly monotone and $C^1$--small, it is of the form described in the previous section. It follows that they have no periodic orbits. \begin{remark} In this example all leaves involved are overtwisted. Further, the non--compact leaves are overtwisted at infinity. It would be interesting to construct an example of a contact foliation where the non--compact leaves are overtwisted, the leaves in their closure are tight and the only periodic orbits appear in the tight leaves. \end{remark} \section{$J$--holomorphic curves in the symplectisation of a contact foliation} In this section we generalise the standard setup for moduli spaces of pseudoholomorphic curves to the foliated setting. The main result is Theorem \ref{thm:removalSingularities}, which deals with the removal of singularities. The proof is standard and closely follows that of \cite{Ho}, and indeed the only essential difference lies in the fact that, although the leaves might be open, they live inside a compact ambient manifold, so the Arzel\'a--Ascoli theorem can still be applied when carrying out the bubbling analysis. \subsection{Setup} Consider the contact foliation $(M^{m+2n+1}, {\mathcal{F}}^{2n+1}, \xi^{2n})$, with extension $\Theta^{2n+m}$ given by a $1$--form $\alpha$, and write $({\mathbb{R}} \times M, {\mathcal{F}}_\mathbb{R}, \omega)$ for its symplectisation. \subsubsection{The space of almost complex structures} The symplectic bundle $(\xi, d\alpha)$ can be endowed with a complex structure compatible with $d\alpha$, which we denote by $J_\xi$. The space of such choices is non--empty and contractible. $J_\xi$ induces a unique $\mathbb{R}$--invariant leafwise complex structure, $J \in {\operatorname{End}}(T{\mathcal{F}}_\mathbb{R})$, $J^2 = -{\operatorname{Id}}$, as follows: \[ J|_{\xi} = J_\xi \] \[ J(\partial_t) = R \] Observe that $J$ is \textbf{compatible} with $\omega$, and hence they define a metric, which turns each leaf of the symplectisation into a manifold which is not complete. Instead, we shall consider the better behaved $\mathbb{R}$--invariant leafwise riemannian metric $g$ in ${\mathbb{R}} \times {\mathcal{F}}$ given by: \[ g = dt \otimes dt + \alpha \otimes \alpha + d\alpha(J_\xi \circ \pi_\xi , \pi_\xi ). \] \subsubsection{$J$--holomorphic curves} Let $(S, i)$ be a Riemann surface, possibly with boundary. A map satisfying \begin{equation} \begin{cases} \label{eq:holCurves} F: (S,i) \to ({\mathbb{R}} \times M,J) \\ dF(TS) \subset {\mathcal{F}}_\mathbb{R} \\ J \circ dF = dF \circ i \end{cases} \end{equation} is called a parametrised \textbf{foliated $J$--holomorphic curve}. The second condition implies that $F(S)$ is contained in a leaf ${\mathbb{R}} \times {\mathcal{L}}$ of ${\mathcal{F}}_\mathbb{R}$. Indeed, $J$ is an almost complex structure in the open manifold ${\mathbb{R}} \times {\mathcal{L}}$, and $F$, regarded as a map into ${\mathbb{R}} \times {\mathcal{L}}$, is a \textbf{$J$--holomorphic curve} in the standard sense. By our choice of $J$, there is an $\mathbb{R}$--action on the space of foliated $J$--holomorphic curves given by translation on the $\mathbb{R}$ term of ${\mathbb{R}} \times M$. 
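A direct computation using $J|_\xi = J_\xi$ and $J(\partial_t) = R$ spells out Equation (\ref{eq:holCurves}) in coordinates. Writing $F = (a,u)$, with $a$ the ${\mathbb{R}}$--coordinate, and choosing a local conformal coordinate $z = s + it$ on $S$, the equation is equivalent to \[ \pi_\xi u_s + J_\xi\, \pi_\xi u_t = 0, \qquad a_s = \alpha(u_t), \qquad a_t = -\alpha(u_s), \] together with the condition that $u$ be tangent to ${\mathcal{F}}$; leaf by leaf, these are the familiar equations from the non--foliated case \cite{Ho}.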
\subsubsection{Foliated $J$--holomorphic planes and cylinders} A solution of Equation (\ref{eq:holCurves}) \[ F = (a,u): (\mathbb{C},i) \rightarrow ({\mathbb{R}} \times M,J) \] is called a \textbf{foliated $J$--holomorphic plane}. If we write $\mathcal{M}_J^{\mathcal{F}}$ for the space of such maps, it is clear that the space of complex automorphisms of $\mathbb{C}$ acts on it by its action on the domain. $\mathcal{M}_J^{\mathcal{F}}$ is non--empty. Every Reeb orbit $\gamma: \mathbb{R} \rightarrow M$ has an associated foliated $J$--holomorphic plane given by \[ F(s,t) = (s,\gamma(t))\quad \text{ where $z = s+it$ are the standard complex coordinates in $\mathbb{C}$.}\] We call these the \textsl{trivial} solutions. Similarly, a solution of Equation \ref{eq:holCurves} \[ F = (a,u): (-\infty, \infty) \times \mathbb{S}^1 \rightarrow {\mathbb{R}} \times M \] is called a \textbf{foliated $J$--holomorphic cylinder}. We let $(s,t)$ be the coordinates in the cylinder and take its complex structure to be given by $i(\partial_s) = \partial_t$. A closed Reeb orbit $\gamma: \mathbb{S}^1 \to M$ gives a \textsl{trivial} cylinder $F(s,t) = (s,\gamma(t))$. Recall that the cylinder $(-\infty, \infty) \times \mathbb{S}^1$ is biholomorphic to $\mathbb{C} \setminus \{0\}$ by the exponential, and for convenience we will often consider both domains interchangeably. In particular, given some foliated $J$--holomorphic plane, we could define a foliated $J$--holomorphic cylinder by introducing a puncture in the domain. Therefore, we say that a foliated $J$--holomorphic map \[ F=(a,u): \mathbb{C} \setminus \{0\} \to {\mathbb{R}} \times M \] can be \textbf{extended} over zero (or $\infty$) if there is a foliated $J$--holomorphic map with domain $\mathbb{C}$ (resp. the punctured Riemann sphere $\hat{\mathbb{C}} \setminus \{0\}$) that agrees with $F$ in $\mathbb{C} \setminus \{0\}$. \subsubsection{Energy} Having introduced the \textsl{trivial} foliated $J$--holomorphic curves, we would now like to impose an \textsl{energy constraint} that singles out more interesting solutions of Equation \ref{eq:holCurves}. This leads us to the following definitions. \begin{definition} Consider the space of functions \[ \Gamma = \{ \phi \in C^\infty(\mathbb{R}, [0,1]) \mid \phi' \geq 0 \}. \] Let $F: S \to {\mathbb{R}} \times M$ be a foliated $J$--holomorphic curve. Its \textbf{energy} is defined by: \begin{align} \label{eq:energy} E(F) = \sup_{\phi \in \Gamma} \int_S F^* d(\phi\alpha). \end{align} Its \textbf{horizontal energy} is defined by: \begin{align} \label{eq:horizontalEnergy} E^h(F) = \int_S F^* d\alpha. \end{align} \end{definition} Trivial solutions correspond to the following general phenomenon. \begin{lemma} \label{lem:zeroHorizontalEnergy} Let $F=(a,u): (S,i) \to ({\mathbb{R}} \times M,J)$ be a foliated $J$--holomorphic curve. $E^h(F) = 0$ if and only if ${\operatorname{Image}}(F) \subset {\mathbb{R}} \times \gamma$, where $\gamma$ is a Reeb orbit. \end{lemma} \begin{proof} Given a ball $U \subset S$, find complex coordinates $(s,t)$ on it. Then: \[ \int_U F^* d\alpha = \int_U d\alpha(u_s, u_t) ds \wedge dt = \int_U d\alpha(\pi_\xi u_s, \pi_\xi u_t) ds \wedge dt = \int_U d\alpha(\pi_\xi u_s, J_\xi\, \pi_\xi u_s) ds \wedge dt = \int_U |\pi_\xi u_s|^2 ds \wedge dt, \] and since \[ E^h(F) = \int_S F^* d\alpha = \int_S u^* d\alpha \] the claim follows. \end{proof} The following lemma states that cylinders with finite energy that cannot be extended to planes have to be trivial and hence imply the existence of a closed Reeb orbit.
\begin{lemma} \label{lem:trivialCylinder} Let $F$ be a foliated $J$--holomorphic map \[ F=(a,u): \mathbb{C} \setminus \{0\} \to {\mathbb{R}} \times M \] satisfying $E(F) < \infty$ and $E^h(F) = 0$. If $F$ cannot be extended over its punctures, then $t \to u(e^{2\pi it})$, $t \in [0,1]$, is a parametrised closed Reeb orbit. \end{lemma} \begin{proof} By Lemma \ref{lem:zeroHorizontalEnergy}, we know that there is some Reeb orbit $\gamma$ (not necessarily closed) such that ${\operatorname{Image}}(F) \subset {\mathbb{R}} \times \gamma$. We can identify the universal cover of ${\mathbb{R}} \times \gamma$ with $\mathbb{C}$, endowed with its standard complex structure. We claim that $\gamma$ is a closed orbit and that $F$ is a non--contractible map into ${\mathbb{R}} \times \gamma$. Assume otherwise and regard $F$ as a holomorphic map $f: \mathbb{C} \setminus \{0\} \to \mathbb{C} \subset \hat{\mathbb{C}}$. As such, its punctures are either removable or essential singularities. They cannot be removable singularities with values in $\mathbb{C}$ by assumption. If one of the punctures is a pole, a neighbourhood of the puncture branch covers a neighbourhood of $\infty$ in the Riemann sphere. In particular, there is a band $[a,b] \times {\mathbb{R}} \subset {\operatorname{Image}}(f) \subset \mathbb{C}$, with $a < b$ large enough. This contradicts the assumption that $E(F)$ was finite. If $f$ has an essential singularity, then Picard's great theorem states that every point in $\mathbb{C}$, except possibly one, is contained in ${\operatorname{Image}}(f)$. Again, this contradicts the assumption that $E(F)$ was finite. We deduce that $\gamma$ is a closed orbit and that $F$ is a non--contractible map into the cylinder ${\mathbb{R}} \times \gamma$. The exponential is a biholomorphism between the cylinder and $\mathbb{C} \setminus \{0\}$, so now we regard $F$ as a holomorphic map $h: \mathbb{C} \setminus \{0\} \to \mathbb{C} \setminus \{0\}$. Suppose one of the punctures was an essential singularity for $h$. Since $h$ has no zeroes or poles, Picard's theorem states that all other points in the Riemann sphere have infinitely many preimages by $h$. This contradicts $E(F) < \infty$. Therefore, $h$ can be extended over its punctures to be zero or $\infty$. $h$ is then a meromorphic function over the Riemann sphere, and hence it is nothing but the quotient of two polynomials. By our assumption that there are no other zeroes or poles this implies that $h(z) = az^k$, for some $k \in \mathbb{Z} \setminus \{0\}$ and $a \in \mathbb{C} \setminus \{0\}$. This shows that $t \to u(e^{2\pi it})$ parametrises the $k$--fold cover of $\gamma$. \end{proof} Exactly the same analysis yields the following lemma. \begin{lemma} \label{lem:trivialPlane} Let $F$ be a foliated $J$--holomorphic map \[ F=(a,u): \mathbb{C} \to {\mathbb{R}} \times M \] satisfying $E^h(F) = 0$. Then either $F$ is the constant map or $E(F) = \infty$. \end{lemma} \begin{proof} Let $\gamma$ be the Reeb orbit such that ${\operatorname{Image}}(F) \subset {\mathbb{R}} \times \gamma$. By taking the universal cover of ${\mathbb{R}} \times \gamma$, regard $F$ as a map $\mathbb{C} \to \mathbb{C}$, as in Lemma \ref{lem:trivialCylinder}. Now study the extension problem of $F$ to $\infty$. If it corresponds to a removable singularity with values in $\mathbb{C}$, then $F$ is the constant map. Otherwise, if it is either a pole or a non--removable singularity, it has infinite energy.
\end{proof} \subsubsection{Riemannian and symplectic area} In the case of compact symplectic manifolds, there is an interplay between the symplectic area of a $J$--holomorphic curve and its Riemannian area for the metric given by the symplectic form and the compatible almost complex structure. In our case, $g$ is not of that form. Rather, it is $\mathbb{R}$--invariant, while $\omega$ is not: $\mathbb{R}$--translations of the same $J$--holomorphic curve have different symplectic energy and indeed there are no universal constants relating the $\omega$--area and the $g$--area. However, $E$ and $E^h$ are invariant under the $\mathbb{R}$--action. Given $F$, a foliated $J$--holomorphic curve, let ${\operatorname{area}}_g(F)$ be its Riemannian area in terms of $g$, and let ${\operatorname{area}}_{\omega_\phi}(F)$ be its symplectic area in terms of $\omega_\phi = d(\phi\alpha)$. \begin{lemma} \label{lem:noSphereBubbling} Let $F=(a,u): (S,i) \to ({\mathbb{R}} \times M,J)$ be a parametrised foliated $J$--holomorphic curve. Then, if $a$ is bounded below and above: \[ {\operatorname{area}}_g(F) < C{\operatorname{area}}_\omega(F) < C'\int_{\partial S} \alpha, \] for some constants $C, C'$ depending only on the upper and lower bounds of $a$. \end{lemma} \begin{proof} Consider $a_0$ and $a_1$ satisfying $a_0 < a < a_1$. Let $\phi \in \Gamma$ be a function with $\phi(t) = \frac{t-a_0}{3(a_1-a_0)} + 1/3$ on $[a_0,a_1]$. Then $\omega_\phi$ is a symplectic form in $[a_0,a_1] \times M$ and $J$ is $\omega_\phi$--compatible. Since $0 < D < \phi, \phi' < D' < \infty$ on $[a_0,a_1]$, there are universal constants relating the metrics $g$ and $g_\phi = \omega_\phi(-,J-)$ in $[a_0,a_1] \times M$. Since $J$ is $\omega_\phi$--compatible, $F$ being $J$--holomorphic implies that ${\operatorname{area}}_{g_\phi}(F) = {\operatorname{area}}_{\omega_\phi}(F)$, and the first inequality follows. The second inequality follows by applying Stokes' theorem. \end{proof} An immediate consequence of Lemma \ref{lem:noSphereBubbling} is that there cannot be non--constant \textsl{closed} foliated $J$--holomorphic curves in ${\mathbb{R}} \times M$. \subsection{Bubbling} As we shall see in Section \ref{sec:main}, we will prove the existence of a periodic orbit of the Reeb vector field by constructing a $1$--dimensional moduli space of pseudoholomorphic discs that is necessarily open at one of its ends. The following proposition shows that the reason for it to be open must be that the gradient is not uniformly bounded over all discs in the moduli space. \begin{proposition} \label{prop:ArzelaAscoli} Fix a leaf ${\mathcal{L}}$ of ${\mathcal{F}}$. Let $W \subset {\mathbb{R}} \times {\mathcal{L}}$ be a totally real compact submanifold, possibly with boundary. Let $(S,i)$ be a compact Riemann surface with boundary. Consider a sequence of foliated $J$--holomorphic maps \[ F_k: (S, \partial S) \to ({\mathbb{R}} \times {\mathcal{L}}, W), \quad k \in \mathbb{N}. \] Suppose that there is a uniform bound $||dF_k|| < C < \infty$. Then there is a subsequence $F_{k_i}$, $k_i \to \infty$, convergent in the $C^\infty$--topology to a foliated $J$--holomorphic map \[ F_\infty: (S, \partial S) \to ({\mathbb{R}} \times M, W). \] \end{proposition} \begin{proof} Observe that since we have a uniform gradient bound and $F_k(\partial S) \subset W$, for all $k$, it necessarily follows that the images of all the $F_k$ lie in a compact subset of ${\mathbb{R}} \times {\mathcal{L}}$.
Then one can proceed as in the standard case to prove $C^\infty$ bounds from $C^1$ bounds and then apply Arzel\'a-Ascoli to conclude. \end{proof} \begin{remark} The same statement holds for surfaces without boundary as long as one requires the images of all the $F_k$ to lie in a compact subset of the leaf. \end{remark} Proposition \ref{prop:ArzelaAscoli} suggests that we should study sequences of maps \[ F_k: (S, \partial S) \to ({\mathbb{R}} \times {\mathcal{L}}, W), \quad k \in \mathbb{N} \] in which $||dF_k||$ is not uniformly bounded. We have to consider two separate cases. \subsubsection{Plane bubbling} \begin{proposition} \label{prop:planeBubbling} Consider a sequence of foliated $J$--holomorphic curves \[ F_k: (S, \partial S) \to ({\mathbb{R}} \times {\mathcal{L}}, W), \quad k \in \mathbb{N} \] and a corresponding sequence of points $q_k$ in $S$ having $M_k = ||d_{q_k}F_k|| \to \infty$ and converging to a point $q \in S$. Suppose that there is a uniform bound $E(F_k) < C < \infty$. If ${\operatorname{dist}}(q_k,\partial S)M_k \to \infty$, there is a foliated $J$--holomorphic plane \[ F_\infty: \mathbb{C} \to {\mathbb{R}} \times {\mathcal{L}}' \] with $E(F_\infty) < C$, where ${\mathcal{L}}'$ is a leaf in the closure of ${\mathcal{L}}$. \end{proposition} \begin{proof} After possibly modifying the $q_k$ slightly, there are charts \[ \phi_k: \mathbb{D}^2(R_k) \to S \] \[ \phi_k(z) = q_k + \dfrac{z}{M_k} \] with $R_k < {\operatorname{dist}}(q_k,\partial S)M_k$, $R_k \to \infty$, $R_k/M_k \to 0$ and $||d(F_k \circ \phi_k)|| < 2$ -- this last condition is achieved by the so--called Hofer lemma, see \cite[Lemma 26]{Ho}. The maps $F_k \circ \phi_k$ have $C^1$ bounds by construction, but they have no $C^0$ bounds. By our construction of $J$, the vertical translation of a $J$--holomorphic map is still $J$--holomorphic and hence we can compose with a vertical translation $\tau_k$ guaranteeing that $\tau_k \circ F_k \circ \phi_k$ takes the point $0$ to the level $\{0\} \times {\mathcal{L}}$. Then, for every compact subset $\Omega \subset \mathbb{C}$, the maps $\tau_k \circ F_k \circ \phi_k: \Omega \to {\mathbb{R}} \times M$ are equicontinuous and bounded -- note that this is where we use that ${\mathcal{L}}$ lies inside the compact manifold $M$. Recall that having uniform $C^1$ bounds implies that we have uniform $C^\infty$ bounds. Hence, an application of Arzel\'a--Ascoli shows that a subsequence converges in $C^\infty_{loc}$ to a map $F_\infty: \mathbb{C} \to {\mathbb{R}} \times M$ that must be foliated and $J$--holomorphic, though not necessarily lying in ${\mathbb{R}} \times {\mathcal{L}}$: it may lie in ${\mathbb{R}} \times {\mathcal{L}}'$ for some new leaf ${\mathcal{L}}'$. Note that the energy of the map $\tau_k \circ F_k \circ \phi_k$ is bounded above by that of $F_k$. Since we have uniform bounds for the energy of the $F_k$, we have uniform energy bounds for the maps $\tau_k \circ F_k \circ \phi_k$ and hence for their limit $F_\infty$. Note that $F_\infty$ is necessarily non--constant, since $||d_0F_\infty|| = 1$ by construction. In particular, it has non--zero energy. \end{proof} \begin{remark} The map $F_\infty$ produced in the proof is called a \textbf{plane bubble}. If the map $F_\infty$ could be extended over the puncture at infinity to a map with domain the Riemann sphere $\mathbb{S}^2$, this would yield a contradiction with Lemma \ref{lem:noSphereBubbling}.
\end{remark} \subsubsection{Disc bubbling} \begin{proposition} \label{prop:discBubbling} Consider a sequence of foliated $J$--holomorphic curves \[ F_k: (S, \partial S) \to ({\mathbb{R}} \times {\mathcal{L}}, W), \quad k \in \mathbb{N} \] and a corresponding sequence of points $q_k$ in $S$ having $M_k = ||d_{q_k}F_k|| \to \infty$ converging to a point $q \in S$. Suppose that there is a uniform bound $E(F_k) < C < \infty$. If ${\operatorname{dist}}(q_k,\partial S)M_k$ is uniformly bounded from above, there is a foliated $J$--holomorphic disc \[ F_\infty: (\mathbb{D}^2, \partial \mathbb{D}^2) \to ({\mathbb{R}} \times {\mathcal{L}},W) \] with $E(F_\infty) < C$. \end{proposition} \begin{proof} Since we are assuming that $W$ is compact, the usual rescaling argument for disc bubbling goes through and yields a punctured disc bubble lying in ${\mathbb{R}} \times {\mathcal{L}}$ and having bounded gradient. Then, the standard removal of singularities gives a disc bubble $F_\infty$. \end{proof} \subsection{Removal of singularities} The aim of this subsection is to prove the following result, which is one of the key ingredients for proving Theorem \ref{thm:main}. \begin{theorem}[Removal of singularities] \label{thm:removalSingularities} Let $F=(a,u): \mathbb{D}^2 \setminus \{0\} \to {\mathbb{R}} \times {\mathcal{L}} \subset {\mathbb{R}} \times M$ be a $J$--holomorphic curve with $0 < E(F) < \infty$, ${\mathcal{L}}$ a leaf of ${\mathcal{F}}$. Then, either $F$ extends to a $J$--holomorphic map over $\mathbb{D}^2$ or for every sequence of radii $r_k \to 0$ the curves $\gamma_{r_k}(s) = u(e^{r_k+is})$ converge in $C^\infty$ --possibly after taking a subsequence-- to a parametrised closed Reeb orbit lying in the closure of ${\mathcal{L}}$. \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:removalSingularities}] Let us state the problem in terms of cylinders. Identify $\mathbb{D}^2 \setminus \{0\}$ with $[0, \infty) \times \mathbb{S}^1$ by using the biholomorphism $-\log(z)$, and regard $F$ as a foliated $J$--holomorphic map $[0, \infty) \times \mathbb{S}^1 \to {\mathbb{R}} \times M$. Then, the following maps are foliated $J$--holomorphic: \[ F_k=(a_k,u_k): [-R_k/2, \infty) \times \mathbb{S}^1 \to {\mathbb{R}} \times M \] \[ F_k(s,t) = (a(s+R_k,t) - a(R_k,0),u(s+R_k,t)) \] and by assumption they have a uniform bound $E(F_k) < C < \infty$ and $\lim_{k \to \infty} E^h(F_k) = 0$. Here $R_k = -\log(r_k) \to \infty$. Suppose that the gradient were not uniformly bounded for the family $F_k$. We could then find a sequence of points $q_k \in [0, \infty) \times \mathbb{S}^1$ escaping to infinity and satisfying $||d_{q_k}F|| \to \infty$. Then we are under the assumptions of Proposition \ref{prop:planeBubbling}, and this yields a plane bubble $G: \mathbb{C} \to {\mathbb{R}} \times M$ with $E^h(G) = 0$, which must lie on top of a Reeb orbit by Lemma \ref{lem:zeroHorizontalEnergy}. By our bubbling analysis it cannot be constant, since its gradient at the origin is $1$; by Lemma \ref{lem:trivialPlane}, this contradicts the bound $E(G) < \infty$. We conclude that the family $F_k$ has uniform $C^1$ bounds and hence uniform $C^\infty$ bounds. By construction $F_k(0,0) \in \{0\} \times M$, which means that we have uniform $C^0$ bounds on every compact subset of $(-\infty, \infty) \times \mathbb{S}^1$ --here is where we use the compactness of $M$.
Arzel\'a-Ascoli implies that --after possibly taking a subsequence-- the maps $F_k$ converge in $C^\infty_{loc}$ to a map $F_\infty: (-\infty, \infty) \times \mathbb{S}^1 \to {\mathbb{R}} \times M$ with $E(F_\infty) < \infty$ and $E^h(F_\infty) = 0$. Observe that \[ \lim_{r \to 0} \int_{\gamma_r} \alpha = \int_{\gamma_1} \alpha - \int_{\mathbb{D}^2 \setminus \{0\}} d\alpha. \] If this limit is zero, then the argument above shows that the $\gamma_r$, $r \to 0$, tend to the constant map in the $C^\infty$ sense, and hence $F$ extends to a map over $\mathbb{D}^2$. Assuming otherwise, it is clear that $F_\infty$ cannot be the constant map and hence Lemma \ref{lem:trivialCylinder} implies the conclusion. \end{proof} \section{Existence of contractible periodic orbits in the closure of an overtwisted leaf} \label{sec:main} After setting up the study of foliated $J$--holomorphic curves in the previous section and dealing with the associated compactness issues, we use this machinery to conclude the proof of Theorem \ref{thm:main}. The setting of the theorem is as follows: $(M^{m+3}, {\mathcal{F}}^3, \xi^2)$ is a contact foliation with $\Theta^{2+m}$ an extension given by a $1$--form $\alpha$. We write $({\mathbb{R}} \times M, {\mathcal{F}}_\mathbb{R}, \omega)$ for its symplectisation. ${\mathcal{L}}^3$ is a leaf of ${\mathcal{F}}$. \subsection{The Bishop family} The following results have a local nature and hence do not depend on whether ${\mathcal{L}}$ is compact or not. Their proofs can be found in \cite{Ho}. \subsubsection{The Bishop family at an elliptic point} If $({\mathcal{L}}, \xi)$ is an overtwisted manifold, let $\Sigma$ be an overtwisted disc for $\xi$. Otherwise, if $\pi_2({\mathcal{L}}) \neq 0$, let $\Sigma$ be some sphere realising a non--zero class in $\pi_2$. Assume, after a small perturbation, that the characteristic foliations are as described in Subsection \ref{sssec:convex} in Exercises \ref{ex:ot} and \ref{ex:tight} and Theorem \ref{thm:EGF}. Denote by $\Gamma_\Sigma$ the set of singular points of the characteristic foliation of $\Sigma$. Let $p \in \Gamma_\Sigma$ be a nicely elliptic point. The maps satisfying: \begin{equation} \begin{cases} \label{eq:holDiscs} F = (a, u): (\mathbb{D}^2, \partial \mathbb{D}^2) \to ({\mathbb{R}} \times {\mathcal{L}}, \{0\} \times \Sigma) \\ dF \circ i = J \circ dF, \\ {\operatorname{wind}}(F, p) = \pm 1, \\ {\operatorname{ind}}(F) = 4, \end{cases} \end{equation} will be called the \textbf{Bishop family}. ${\operatorname{wind}}(F, p)$ refers to the winding number of $F(\partial \mathbb{D}^2)$ around the elliptic point $p$. The condition ${\operatorname{ind}}(F) = 4$ is implied by the other assumptions. It means that the linearised Cauchy--Riemann operator at $F$ has index $4$, and hence, if there is transversality, the solutions of Equation \ref{eq:holDiscs} close to $F$ form a smooth $4$--dimensional manifold. Since the M\"obius transformations of the disc have real dimension $3$, this implies that the image of $F$ is part of a $1$--dimensional family of distinct discs. The Bishop family is not empty under some integrability assumptions. \begin{proposition} \label{prop:Bishop} \emph{(\cite{Bi}, \cite[Section 4.2]{Ho})} For a suitable choice of $J_\xi$, $J$ is integrable close to $p$. Then there is a smooth family of maps $F_s$, $s \in [0,\varepsilon)$, with $F_0(z) = p$ and $F_s$, $s>0$, disjoint embeddings satisfying Equation \ref{eq:holDiscs}.
Additionally, there is a small neighbourhood $U$ of $p$ such that any other disc satisfying Equation \ref{eq:holDiscs} and intersecting $U$ is a reparametrisation of one of the $F_s$. \end{proposition} \subsubsection{Continuation of the Bishop family} The following statement shows that transversality always holds for the linearised Cauchy--Riemann operator for maps belonging to the Bishop family. \begin{proposition} \emph{(\cite[Theorem 17]{Ho})} \label{prop:openess} Let $F$ satisfy Equation \ref{eq:holDiscs}. Then there is a smooth family of disjoint embeddings $F_s$, $s \in (-\varepsilon, \varepsilon)$, satisfying Equation \ref{eq:holDiscs}, such that $F_0 = F$. Additionally, any two such families are related by a reparametrisation of the parameter space and a smooth family of M\"obius transformations. \end{proposition} \subsubsection{Properties of the Bishop family} Convexity of $\{0\} \times {\mathcal{L}}$ inside ${\mathbb{R}} \times {\mathcal{L}}$ and an application of the maximum principle yield the following lemma. It will be useful to show that there is no disc bubbling. \begin{lemma} \emph{(\cite[Lemma 19]{Ho})} \label{lem:winding} Let $F: (\mathbb{D}^2, \partial \mathbb{D}^2) \to ({\mathbb{R}} \times {\mathcal{L}}, \{0\} \times \Sigma)$ be a $J$--holomorphic map. Then $F(\partial \mathbb{D}^2)$ is transverse to the characteristic foliation of $\Sigma$ and $F(\mathbb{D}^2)$ is transverse to $\{0\} \times {\mathcal{L}}$. \end{lemma} In order to apply Theorem \ref{thm:removalSingularities} we must have energy bounds, which are provided by the following result. \begin{proposition} \emph{(\cite[Proposition 27]{Ho})} \label{prop:energyBounds} There are uniform energy bounds $0 < C_1 < E(F),E^h(F) < C_2 < \infty$ for every $F$ satisfying Equation \ref{eq:holDiscs} and having \[ {\operatorname{dist}}({\operatorname{Image}}(F), \Gamma_\Sigma) > \varepsilon > 0. \] \end{proposition} \begin{proof} By Stokes' theorem: \[ E(F) = \sup_{\phi \in \Gamma} \int_{\mathbb{D}^2} F^* d(\phi\alpha) = \sup_{\phi \in \Gamma} \int_{\partial \mathbb{D}^2} F^* \phi\alpha = \sup_{\phi \in \Gamma} \phi(0) \int_{\partial \mathbb{D}^2} F^*\alpha = \int_{F(\partial \mathbb{D}^2)} \alpha. \] $F(\partial \mathbb{D}^2)$ winds around the elliptic point exactly once and hence bounds a disc within $\Sigma$. The area of such a disc is always bounded above by a universal constant and is bounded below under the assumption that it has radius at least $\varepsilon$. The claim follows. A similar estimate holds for $E^h$. \end{proof} \subsection{Proof of Theorem \ref{thm:main}} We now tie together all the results discussed so far. \begin{lemma} \label{lem:jump1ot} Let ${\mathcal{L}}$ be a leaf of ${\mathcal{F}}$ and assume that $({\mathcal{L}}, \xi)$ is an overtwisted contact manifold. Then there is a finite energy plane contained in ${\mathbb{R}} \times {\mathcal{L}}' \in {\mathcal{F}}_\mathbb{R}$, with ${\mathcal{L}}'$ lying in the closure of ${\mathcal{L}}$. \end{lemma} \begin{proof} Proposition \ref{prop:Bishop} shows that the set of solutions of Equation \ref{eq:holDiscs} is non--empty and Proposition \ref{prop:openess} shows that, up to M\"obius transformations, it is an open $1$--dimensional manifold. Denote by $\mathcal{M}$ the component that contains the solutions arising from the elliptic point. The boundaries of the maps in $\mathcal{M}$ are pairwise disjoint by Proposition \ref{prop:openess} and they remain transverse to the characteristic foliation by Lemma \ref{lem:winding}.
Hence, they define an open submanifold $D$ of the overtwisted disc. Take a sequence in $\mathcal{M}$ whose distance to the elliptic point is uniformly bounded from below. Then Proposition \ref{prop:ArzelaAscoli} says that either the gradient is unbounded in the family or, after taking a subsequence, the limit is a new solution of Equation \ref{eq:holDiscs}. We conclude that if the gradient does not explode, $D$ is both compact and open. Then $D$ would have a tangency with the boundary of the overtwisted disc, which contradicts Lemma \ref{lem:winding}. Since the gradient explodes, we know by Propositions \ref{prop:planeBubbling} and \ref{prop:discBubbling} that either a plane or a disc bubble appears. In the case of a disc bubble, the standard analysis as in \cite{Ho} shows that bubbles connect, and hence we must have two $J$--holomorphic discs touching at a point and whose winding numbers add up to $1$. This contradicts Lemma \ref{lem:winding}. We conclude that necessarily a plane bubble must appear. \end{proof} \begin{lemma} \label{lem:jump1pi2} Let ${\mathcal{L}}$ be a leaf of ${\mathcal{F}}$ and assume that $\pi_2({\mathcal{L}}) \neq 0$. Then there is a finite energy plane contained in ${\mathbb{R}} \times {\mathcal{L}}' \in {\mathcal{F}}_\mathbb{R}$, with ${\mathcal{L}}'$ lying in the closure of ${\mathcal{L}}$. \end{lemma} \begin{proof} Let us denote by $p_-$ and $p_+$ the two elliptic points of the convex $2$--sphere $\Sigma$ realising a non--trivial element of $\pi_2({\mathcal{L}})$. Proposition \ref{prop:Bishop} gives two different Bishop families, one starting at each point, which we denote by $\mathcal{M}^-$ and $\mathcal{M}^+$, respectively. Assume that the gradient is uniformly bounded in the Bishop family $\mathcal{M}^-$. Then $\mathcal{M}^-$ is open and compact, and an application of Proposition \ref{prop:openess} shows that it can be continued until the boundaries of the discs in the family reach $p_+$. Since we know by Proposition \ref{prop:Bishop} that in a neighbourhood of $p_+$ the only curves are those in $\mathcal{M}^+$, both families must be the same. The evaluation map \[ {\operatorname{ev}}: \mathcal{M}^- \times \mathbb{D}^2 \approx [0,1] \times \mathbb{D}^2 \to {\mathcal{L}} \] \[ {\operatorname{ev}}(F=(a,u), z) = u(z) \] satisfies ${\operatorname{ev}}(\partial (\mathcal{M}^- \times \mathbb{D}^2)) = \Sigma$, which contradicts the fact that $\Sigma$ was non--trivial in $\pi_2({\mathcal{L}})$. Therefore, the gradient must explode, and since a disc bubble cannot appear, the claim follows. \end{proof} We are now ready to prove Theorem \ref{thm:main}. \begin{proof}[Proof of Theorem \ref{thm:main}] Let $({\mathcal{L}}, \xi)$ be overtwisted. Lemma \ref{lem:jump1ot} yields a finite energy plane $F: \mathbb{C} \to {\mathbb{R}} \times {\mathcal{L}}'$, with ${\mathcal{L}}'$ a leaf of ${\mathcal{F}}$ contained in the closure of ${\mathcal{L}}$. By Lemma \ref{lem:noSphereBubbling} this plane cannot be completed to a sphere. Now an application of Theorem \ref{thm:removalSingularities} shows that there is a closed Reeb orbit in some leaf ${\mathcal{L}}''$ lying in the closure of ${\mathcal{L}}'$. Since ${\mathcal{L}}''$ is in the closure of ${\mathcal{L}}$ the claim follows. The same argument goes through by applying Lemma \ref{lem:jump1pi2} if $\pi_2({\mathcal{L}}) \neq 0$.
\end{proof} \begin{remark} As we have seen, Lemmas \ref{lem:jump1ot} and \ref{lem:jump1pi2} yield a finite energy plane in a leaf that might not be the one containing the overtwisted disc or the convex $2$--sphere. Then, an application of Theorem \ref{thm:removalSingularities} shows that the plane is asymptotic to a trivial cylinder that might live in yet another leaf. Our example in Subsection \ref{ssec:sharp} shows that at least one of these two phenomena must take place. Is it possible for a ``\textsl{double jump}'' to actually happen? \end{remark} \begin{remark} Let $(M^{2n+1+m}, {\mathcal{F}}^{2n+1}, \xi)$ be a contact foliation. Let ${\mathcal{L}}$ be a leaf of ${\mathcal{F}}$ and let $({\mathcal{L}}, \xi)$ be overtwisted in the sense of \cite{BEM}. More generally, assume that $({\mathcal{L}}, \xi)$ contains a plastikstufe \cite{Nie}. It is immediate that the Bishop family arising from the plastikstufe can be employed to show that there must be a Reeb orbit, so Theorem \ref{thm:main} also holds true for overtwisted manifolds in all dimensions. \end{remark} \section{The non--degenerate case} In this section we show that under non--degeneracy assumptions none of the \textsl{jumps} between leaves can happen. \begin{definition} Let $(M, {\mathcal{F}}, \xi)$ be a contact foliation and let $\alpha$ be the defining $1$--form for some extension $\Theta$ of $\xi$. A closed orbit of the Reeb vector field associated to $\alpha$ is called \textbf{non--degenerate} if it is isolated among Reeb orbits having the same period and lying in the same leaf of ${\mathcal{F}}$. The form $\alpha$ is called \textbf{non--degenerate} if all the closed orbits of its Reeb vector field are non--degenerate. \end{definition} The statement we want to prove is the following; it is a stronger version of the Removal of Singularities (Theorem \ref{thm:removalSingularities}) in the non--degenerate case. \begin{theorem} \label{thm:nonDegenerateRemovalSingularities} Let $(M, {\mathcal{F}}, \xi)$ be a contact foliation and let $\alpha$ be the defining $1$--form for some extension $\Theta$ of $\xi$. Assume $\alpha$ is non--degenerate. Let $F=(a,u): \mathbb{D}^2 \setminus \{0\} \to {\mathbb{R}} \times {\mathcal{L}} \subset {\mathbb{R}} \times M$ be a $J$--holomorphic curve with $0 < E(F) < \infty$, ${\mathcal{L}}$ a leaf of ${\mathcal{F}}$. Then, either $F$ extends to a $J$--holomorphic map over $\mathbb{D}^2$ or the curves $\gamma_r(s) = u(e^{r+is})$ converge in $C^\infty$ to a closed Reeb orbit $\gamma$ lying in ${\mathcal{L}}$. \end{theorem} \begin{proof} We proceed by contradiction. Assume that $\gamma$, the limit of some $\gamma_{r_i}$, $r_i \to \infty$, is contained in some leaf ${\mathcal{L}}' \neq {\mathcal{L}}$. Denote by $T = \int_\gamma \alpha$ the period of $\gamma$. By our assumption on $\alpha$, we can find a closed foliation chart $U \subset M$ diffeomorphic to $\mathbb{D}^2 \times \mathbb{S}^1 \times [-1,1]$ around $\gamma$ such that the plaque in $U$ containing $\gamma$ intersects no other orbits of period $T$. Write $h: U \to [-1,1]$ for the height function of the chart; we can assume that $h^{-1}(0)$ is the plaque containing $\gamma$. Since the curves $\gamma_{r_i}$ converge in $C^\infty$ to $\gamma$, their images are contained in $U$ for large enough $i$. Assume, by possibly restricting to a subsequence, that each ${\operatorname{Image}}(\gamma_{r_i})$ lies in a different plaque of ${\mathcal{F}} \cap U$.
Then, for each $i$, there is a smallest radius $r_i < R_i < r_{i+1}$ such that ${\operatorname{Image}}(\gamma_r)$ is disjoint from the plaque containing ${\operatorname{Image}}(\gamma_{r_i})$, for all $r > R_i$. In particular, ${\operatorname{Image}}(\gamma_{R_i})$ intersects $\partial U$. Consider the maps \[ F_i: [r_i - R_i, r_{i+1} - R_i] \times \mathbb{S}^1 \to {\mathbb{R}} \times M \] \[ F_i(t,s) = (a(e^{t+R_i+is}) - a(e^{R_i}), u(e^{t+R_i+is})) \] By construction, $F_i(0,0) \in \{0\} \times M$, $F_i(\{0\} \times \mathbb{S}^1) \cap ({\mathbb{R}} \times \partial U) \neq \emptyset$, and $\lim_{i \to \infty} h \circ F_i = 0$. By carrying out the bubbling analysis, we can assume that the $F_i$ have bounded gradient. In particular, $r_{i+1} - r_i$ must be uniformly bounded from below by a non--zero constant. Arzel\'a--Ascoli implies that the $F_i$ converge in $C^\infty_{loc}$ --maybe after taking a subsequence-- to a map $F_\infty$ with $E^h(F_\infty) = 0$ and therefore lying on top of some Reeb orbit. By the properties of the $F_i$, $F_\infty$ must have image contained in ${\mathbb{R}} \times {\mathcal{L}}'$ and intersecting ${\mathbb{R}} \times (h^{-1}(0) \cap \partial U)$. In particular, ${\operatorname{Image}}(F_\infty)$ is not contained in ${\mathbb{R}} \times \gamma$. If $\lim_{i \to \infty} R_i-r_i < \infty$, the curves $s \mapsto F_i(r_i-R_i,s)$ would converge to $\gamma$, which is a contradiction. Similarly we deduce that $\lim_{i \to \infty} r_{i+1} - R_i = \infty$. Since it has finite energy, $F_\infty: (-\infty, \infty) \times \mathbb{S}^1 \to {\mathbb{R}} \times {\mathcal{L}}'$ must yield a periodic orbit of the Reeb vector field. It must be a closed orbit different from $\gamma$, having period $T$ and intersecting the plaque containing $\gamma$, which is a contradiction. We have proved that the limit must lie in ${\mathcal{L}}$. It is then standard that the limit does not depend on the chosen sequence $r_i$. \end{proof} \begin{remark} Theorem \ref{thm:nonDegenerateRemovalSingularities} immediately implies that a finite energy plane is asymptotic to a trivial cylinder lying in the same leaf. Similarly, it shows that the Bishop family always yields a plane bubble in the original leaf ${\mathcal{L}}$: outside of a finite set of points, the Bishop family converges to a foliated $J$--holomorphic curve with boundary in the overtwisted disc and possibly many punctures that are asymptotic at $-\infty$ to a number of Reeb orbits necessarily lying in ${\mathcal{L}}$. \end{remark} \end{document}
\begin{document} \title{Bayesian Inference for Hybrid Discrete-Continuous Stochastic Kinetic Models} \author{Chris Sherlock$^1$, Andrew Golightly$^2$\footnote{[email protected]} ~and Colin S. Gillespie$^2$} \date{{\small $^1$Department of Mathematics and Statistics, Lancaster University, UK\\ $^{2}$School of Mathematics \& Statistics, Newcastle University, UK \\}} \maketitle \begin{abstract} \noindent We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process, computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either ``fast'' or ``slow'' with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through a Markov jump process with time dependent hazards. A linear noise approximation (LNA) of fast reaction dynamics is employed and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also a scheme for performing inference for the underlying discrete stochastic model. \noindent \textbf{Keywords:} Stochastic kinetic model, linear noise approximation, Poisson thinning, particle MCMC \end{abstract} \section{Introduction} A growing realisation of the importance of stochasticity in cell and molecular processes \cite[for example]{mcadams1999,kitano2001,swain2002} has stimulated the need for efficient methods of inferring rate constants in stochastic kinetic models (SKMs) associated with gene regulatory networks. Such inferences are typically required to allow predictive \textit{in silico} experiments. Performing inference for the Markov jump process representation of the SKM is straightforward given observations on all reaction times and types. In this case, it is possible to construct a complete data likelihood, for which a conjugate analysis is possible \cite{wilkinson2012}. In practice, a subset of species may be observed at discrete times. \citeasnoun{boys2008} show that it is possible to construct Metropolis-Hastings schemes for performing inference in this setting. However, the statistical efficiency of such schemes can be poor, and these methods are likely to be more computationally demanding than simulating the process exactly (using, for example, the Gillespie algorithm \cite{gillespie1977}). Therefore, whilst inference in this setting is possible in theory, in practice computational cost precludes analysis of systems of realistic size. Considerable speed-up can be obtained by ignoring discreteness and stochasticity in the inferential model. For example, the macroscopic rate equation (MRE) models the dynamics with a set of coupled ordinary differential equations \cite{kampen2001}. Computational savings can still be made when adopting the diffusion approximation or chemical Langevin equation (CLE) \cite{Gillespie2000} on the other hand, which ignores discreteness but not stochasticity by modelling the biochemical network with a set of coupled stochastic differential equations (SDEs). 
Although the transition density characterising the process under the CLE is typically intractable, it has been shown that basing inference algorithms around this model can work well for some applications \cite{golightly2005,Heron07,Purutcuoglu07,golightly11,picchini13}. Further computational gains can be made by adopting a linear noise approximation (LNA) of the CLE \cite[for example]{kampen2001} which is given by the MRE plus a stochastic term accounting for random fluctuations about the MRE. Under the LNA, the transition density is a tractable Gaussian density (provided that the initial value is fixed or follows a Gaussian distribution). Performing inference for the LNA has been the focus of \citeasnoun{Komorowski09}, \citeasnoun{stathopoulos13} and \citeasnoun{sherlock2012} among others. However, biochemical reactions describing processes such as gene regulation can involve very low concentrations of reactants \cite{guptasarma1995} and ignoring the inherent discreteness in low copy number data traces is clearly unsatisfactory. The aim of this paper is to exploit the computational efficiency of methods such as the CLE and LNA whilst accurately describing the dynamics of low copy number species. Hybrid strategies for simulating from discrete-continuous stochastic kinetic models are reasonably well developed and involve partitioning reactions as fast or slow based on the likely number of occurrences of each reaction over a given time interval and the effect of each reaction on the number of reactants and products. Use of the CLE to model fast reaction dynamics in order to simulate efficiently from an approximation to the system has been the focus of \citeasnoun{haseltine02}, \citeasnoun{burrage04}, \citeasnoun{salis2005} and \citeasnoun{higham2011} amongst others. Discrete/ODE approaches (e.g. \citeasnoun{kiehl2004} and \citeasnoun{alfonsi2005}) are also possible and we refer the reader to \citeasnoun{pahle09} and \citeasnoun{golightly2013} for recent reviews. Since the slow reaction hazards will necessarily depend on species involved in fast reactions, these hazards are typically not constant between slow reaction events, and efficient sampling of these slow event times can be problematic. We propose a novel hybrid simulation strategy that models fast reaction dynamics with the LNA and slow dynamics with a Markov jump process. Moreover, by deriving a probable upper bound for a combination of components that drive the LNA, we obtain a probable upper bound for the total slow reaction hazard. This allows efficient sampling of the slow reaction times via \textit{thinning}, which is a point process variant of rejection sampling \cite{lewis1979}. Related approaches have been proposed by \citeasnoun{casella2011} and \citeasnoun{rao2013}. The former consider simulation for jump-diffusion processes by combining a thinning algorithm with a generalisation of the exact algorithm (for diffusions) developed by \citeasnoun{beskos2005}, whilst the latter assume that an upper bound for the rate matrix governing the MJP is available and use uniformisation \cite{hobolth2009} to simulate the process. We use our approximate model to perform Bayesian inference for the governing kinetic rate constants using noisy data observed at discrete time points. In particular, we focus on a special case of the particle marginal Metropolis Hastings (PMMH) algorithm \cite{andrieu2010} which targets the marginal posterior density of the model parameters and permits exact, simulation-based inference. 
The algorithm requires implementation of a particle filter \cite{carpenter1999,pitt1999,doucet2000,delmoral2002} in the latter step, and we apply the bootstrap filter \cite{gordon1993} which only requires the ability to forward simulate from the model and evaluate the observation densities associated with each data point. Use of our novel hybrid simulator inside the filter therefore avoids the need to evaluate the transition density associated with the hybrid model. We believe that this is the first serious attempt to explore the performance of a hybrid simulator when used as an inferential tool. To validate the methodology, we apply the method to an autoregulatory process with five reactions and two species. This simple application allows comparison of the proposed hybrid inference scheme with a scheme for performing inference for the true underlying discrete stochastic model. Finally, we compare the performance of the proposed hybrid scheme as an inferential tool with an approach based upon the simulation methodology described in \citeasnoun{salis2005}. The remainder of the article is structured as follows. In Section~\ref{kin} we give a brief exposition of the stochastic approach to chemical kinetics before outlining the hybrid simulation technique in Section~\ref{sim}. Section~\ref{particle} describes the particle MCMC scheme for inference. This is then applied in Section~\ref{app} before conclusions are drawn in Section~\ref{disc}. \section{Stochastic Kinetics -- A Brief Review}\label{kin} We consider here the stochastic approach to chemical kinetics and outline a Markov jump process (MJP) description of the dynamics of a system of interest, expressed by a reaction network. Two approximations that can be used in a hybrid modelling approach are outlined. For further details regarding stochastic kinetics we refer the reader to \citeasnoun{wilkinson2012}. \subsection{Stochastic Kinetic Models} A biochemical network is represented with a set of reactions. We have $k$ species $\mathcal{X}_{1},\mathcal{X}_{2},\ldots ,\mathcal{X}_{k}$ and $r$ reactions $R_{1},R_{2},\ldots ,R_{r}$ with a typical reaction $R_{i}$ of the form, \[ \begin{array}{cccc} R_{i}: \, & u_{i1}\mathcal{X}_{1}+\ldots +u_{ik}\mathcal{X}_{k} &\xrightarrow{\phantom{a}c_{i}\phantom{a}} & v_{i1}\mathcal{X}_{1}+\ldots +v_{ik}\mathcal{X}_{k}. \end{array} \] Note that $c_{i}$ is the kinetic rate constant associated with reaction $R_{i}$ and we write the vector of all rate constants as $\mathbf{c}=(c_{1},c_{2},\ldots ,c_{r})'$. Clearly, the effect of reaction $i$ on species $j$ is to change the number of molecules of $\mathcal{X}_{j}$ by an amount $v_{ij}-u_{ij}$. To this end, we may define the $r\times k$ \textit{net effect} matrix $\mathbf{A}$, given by $\mathbf{A}=\left\{a_{ij}\right\}$ where $a_{ij}=v_{ij}-u_{ij}$. To induce a compact notation, let $\mathbf{X}(t)=(X_{1}(t),X_{2}(t),\ldots ,X_{k}(t))'$ denote the number of molecules of each respective species at time $t$. Now, under the assumption of mass action kinetics, the instantaneous hazard of $R_{i}$ is \[ h_i(\mathbf{X}(t),c_i) = c_i\prod_{j=1}^k \binom{X_{j}(t)}{u_{ij}}. \] The \textit{order} of reaction $i$ is $\sum_j u_{ij}$. The evolution of a biochemical network of interest is most naturally modelled as a Markov jump process. Whilst the transition density associated with the process typically does not permit analytic tractability, the process can be exactly simulated forwards in time using a discrete event simulation method. 
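As a concrete illustration of the hazard function above, the following minimal Python sketch evaluates the mass action hazards $h_{i}(\mathbf{X}(t),c_{i})$ from the matrix of reactant stoichiometries $\{u_{ij}\}$ and the current state; the function and variable names are illustrative and are not tied to any particular package.
\begin{verbatim}
from math import comb

def mass_action_hazards(x, c, U):
    """Hazards h_i(x, c_i) = c_i * prod_j C(x_j, u_ij).

    x : current species counts (length k)
    c : rate constants (length r)
    U : r x k matrix of reactant stoichiometries u_ij
    """
    h = []
    for ci, row in zip(c, U):
        prod = 1
        for xj, uij in zip(x, row):
            prod *= comb(xj, uij)   # binomial coefficient, equal to 1 when uij = 0
        h.append(ci * prod)
    return h

# Example: a second order reaction 2 X_1 -> X_2 with c = 0.5 and x = (10, 0)
# has hazard 0.5 * C(10, 2) = 22.5.
print(mass_action_hazards([10, 0], [0.5], [[2, 0]]))
\end{verbatim}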
The most widely used method is known in the stochastic kinetics literature as the \textit{Gillespie algorithm} \cite{gillespie1977} and uses the fact that if the current time and state are $t$ and $\mathbf{X}(t)$ respectively then the time $\tau$ to the next reaction event is \[ \tau\sim \textrm{Exp}\left\{\lambda(\mathbf{X}(t),\mathbf{c})\right\}, \quad \textrm{where \, $\lambda(\mathbf{X}(t),\mathbf{c})=\sum_{i=1}^{r}h_{i}(\mathbf{X}(t),c_{i})$}, \] and the reaction that occurs will be type $R_{i}$ with probability proportional to the reaction hazard $h_{i}(\mathbf{X}(t),c_{i})$. Other exact simulation methods are possible -- Gibson and Bruck's next reaction method \cite{gibson2000} is widely regarded as the most computationally efficient strategy. As these methods capture every reaction occurrence, they can be extremely computationally costly for many systems of interest. \subsection{Chemical Langevin Equation} The CLE \cite{kampen2001,golightly2005} can be constructed by calculating the infinitesimal mean and variance of the Markov jump process and matching these quantities to the drift and diffusion coefficients of an It\^o stochastic differential equation (SDE). If we write $d\mathbf{X}(t)$ for the $k$-vector giving the change in state of each species in the time interval $(t,t+dt]$ then $d\mathbf{X}(t)=\mathbf{A}'d\mathbf{R}(t)$ where $d\mathbf{R}(t)$ is the $r$-vector whose $i$th element is a Poisson random quantity with mean $h_{i}(\mathbf{X}(t),c_{i})dt$. Hence, we arrive at \[ E\left\{d\mathbf{X}(t)\right\}= \mathbf{A}'\mathbf{h}(\mathbf{X}(t),\mathbf{c})dt,\qquad Var\left\{d\mathbf{X}(t)\right\}=\mathbf{A}'\textrm{diag}\left\{\mathbf{h}(\mathbf{X}(t),\mathbf{c})dt\right\}\mathbf{A}, \] where $\mathbf{h}(\mathbf{X}(t),\mathbf{c})=(h_{1}(\mathbf{X}(t),c_{1}),\ldots ,h_{r}(\mathbf{X}(t),c_{r}))'$ is the $r$-vector of hazards. Consequently, the It\^o SDE with the same infinitesimal mean and variance as the true Markov jump process is \begin{equation}\label{da} d\mathbf{X}(t)=\mathbf{A}'\mathbf{h}(\mathbf{X}(t),\mathbf{c})\,dt + \sqrt{\mathbf{A}'\textrm{diag}\left\{\mathbf{h}(\mathbf{X}(t),\mathbf{c})\right\}\mathbf{A}}\,d\mathbf{W}(t), \end{equation} where $d\mathbf{W}(t)$ is the increment of a $k$-dimensional Brownian motion and \linebreak[4] $\sqrt{\mathbf{A}'\textrm{diag}\left\{\mathbf{h}(\mathbf{X}(t),\mathbf{c})\right\}\mathbf{A}}$ is any $k\times k$ matrix square root. Note that ignoring the driving noise term in (\ref{da}) will yield the deterministic ordinary differential equation (ODE) representation of the system. The SDE in (\ref{da}) will typically be analytically intractable and it is therefore natural to work with the Euler-Maruyama approximation \begin{equation}\label{euler} \Delta \mathbf{X}(t)=\mathbf{A}'\mathbf{h}(\mathbf{X}(t),\mathbf{c})\,\Delta t+\sqrt{\mathbf{A}'\textrm{diag}\left\{\mathbf{h}(\mathbf{X}(t),\mathbf{c})\right\}\mathbf{A}}\,\Delta \mathbf{W}(t) \end{equation} where $\Delta \mathbf{W}(t)\sim \textrm{N}(0,\mathbf{I}\Delta t)$. Given the intractability of the CLE, we eschew this approach in favour of a further approximation which generally possesses a greater degree of tractability than the CLE. This linear noise approximation (LNA) is the subject of the next subsection. \subsection{Linear Noise Approximation} The LNA can be viewed as an approximation to either the MJP or the CLE and consequently can be obtained in a number of more or less formal ways.
Here, we derive the LNA as a general approximation to the solution of an arbitrary SDE before considering the specific SDE given by the CLE. For further details of the LNA, we refer the reader to \citeasnoun{Komorowski09} and \citeasnoun{sherlock2012} for recent discussions. Consider now the SDE satisfied by an It\^o process $\{\mathbf{X}(t)\}$ of length $k$, \begin{equation}\label{eqn.full.sde} d\mathbf{X}(t) = \mbox{\boldmath$\alpha$}(\mathbf{X}(t)) \, dt + \epsilon\mbox{\boldmath$\beta$}(\mathbf{X}(t))\, d\mathbf{W}(t), \end{equation} with initial condition $\mathbf{X}\oftime{0}=\mathbf{x}_0$. Let $\mbox{\boldmath$\eta$}(t)$ be the (deterministic) solution to \begin{equation}\label{eqn.deterministic.y} \frac{d\mbox{\boldmath$\eta$}}{dt} = \mbox{\boldmath$\alpha$}(\mbox{\boldmath$\eta$}) \end{equation} with initial value $\mbox{\boldmath$\eta$}_0$. We assume that over the time interval of interest $\Norm{\mathbf{X}-\mbox{\boldmath$\eta$}}$ is $O(\epsilon)$. Set $\mathbf{M}(t)=(\mathbf{X}(t)-\mbox{\boldmath$\eta$}(t))/\epsilon$ and Taylor expand $\mathbf{X}(t)$ about $\mbox{\boldmath$\eta$}(t)$ in (\ref{eqn.full.sde}). Collecting terms of $O(\epsilon)$ gives \begin{equation} \label{eqn.perturb} d\mathbf{M}(t) = \mathbf{F}(t) \mathbf{M}(t)\,dt + \mbox{\boldmath$\beta$}(t) \,d\mathbf{W}(t), \end{equation} where $\mathbf{F}$ is the $k \times k$ matrix with components \[ F_{ij}(t)=\left.\frac{\partial \alpha_i}{\partial x_j}\right|_{\mbox{\boldmath$\eta$}(t)} \quad\text{and}\quad \mbox{\boldmath$\beta$}(t)=\mbox{\boldmath$\beta$}(\mbox{\boldmath$\eta$}(t)). \] The initial condition for (\ref{eqn.perturb}) is $\mathbf{M}(0)=(\mathbf{x}_0-\mbox{\boldmath$\eta$}_0)$, and thereafter $\mathbf{M}(t)$ is Gaussian for all $t$, provided that the initial condition is a fixed point mass or follows a Gaussian distribution. The $\epsilon$ in (\ref{eqn.full.sde}) indicates that the intrinsic noise term $\epsilon\mbox{\boldmath$\beta$}(\mathbf{X}(t))$ is ``small'', but plays no part in the form of \eqref{eqn.perturb}. For simplicity of presentation, therefore, and without loss of generality we henceforth set $\epsilon=1$. Suppose now that $\mathbf{M}\oftime{0} \sim \textrm{N}(\mathbf{m}_0,\mathbf{V}_0)$; in this case the SDE satisfied by $\mathbf{M}(t)$ in equation (\ref{eqn.perturb}) can be solved analytically (see Appendix \ref{lnaSol}) to give \begin{equation} \label{lna.solution1} \mathbf{M}(t) \sim \textrm{N} \left(\mathbf{G}(t) \mathbf{m}_0, \mathbf{G}(t) \mbox{\boldmath$\Psi$}(t) \mathbf{G}(t)' \right). \end{equation} Here $\mathbf{G}$ is the fundamental matrix for the deterministic ODE $d\mathbf{m}/dt = \mathbf{F}(t)\mathbf{m}$, so that \begin{equation} \label{lna.solution2} \frac{d\mathbf{G}}{dt} = \mathbf{F}(t)\mathbf{G}; \quad \mathbf{G}\oftime{0}=\mathbf{I}, \end{equation} and $\mbox{\boldmath$\Psi$}$ satisfies \begin{equation} \label{lna.solution3} \frac{d\mbox{\boldmath$\Psi$}}{dt} = \mathbf{G}^{-1} (t) \mbox{\boldmath$\beta$} (t) \mbox{\boldmath$\beta$}(t)'\left(\mathbf{G}^{-1}(t)\right)'; \quad \mbox{\boldmath$\Psi$}\oftime{0}=\mathbf{V}_0. \end{equation} Hence we obtain \[ \mathbf{X}(t)\sim\textrm{N}\left(\mbox{\boldmath$\eta$}(t)+\mathbf{G}(t)\mathbf{m}_0, \mathbf{G}(t)\mbox{\boldmath$\Psi$}(t)\mathbf{G}(t)'\right). \] In the following, we aim to exploit the analytic tractability of the LNA to build a novel hybrid model allowing both efficient simulation and inference. 
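To make the use of (\ref{lna.solution1})--(\ref{lna.solution3}) concrete, the following minimal Python sketch (based on scipy; all names are illustrative) jointly integrates the ODEs for $\mbox{\boldmath$\eta$}(t)$, $\mathbf{G}(t)$ and $\mbox{\boldmath$\Psi$}(t)$ and returns the mean and covariance of the resulting Gaussian approximation to $\mathbf{X}(t)$. For the CLE of the previous subsection one would take $\alpha(\mathbf{x})=\mathbf{A}'\mathbf{h}(\mathbf{x},\mathbf{c})$ and $\beta(\mathbf{x})$ equal to any square root of $\mathbf{A}'\textrm{diag}\{\mathbf{h}(\mathbf{x},\mathbf{c})\}\mathbf{A}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def lna_transition(alpha, F_jac, beta, eta0, m0, V0, t):
    """Mean and covariance of the LNA approximation to X(t), obtained by
    integrating d(eta)/dt = alpha(eta), dG/dt = F(t) G and
    dPsi/dt = G^{-1} beta beta' (G^{-1})', with eta(0) = eta0, G(0) = I,
    Psi(0) = V0 and M(0) ~ N(m0, V0)."""
    k = len(eta0)

    def rhs(s, z):
        eta = z[:k]
        G = z[k:k + k * k].reshape(k, k)
        Ginv_beta = np.linalg.solve(G, beta(eta))   # G^{-1} beta(eta)
        deta = alpha(eta)
        dG = F_jac(eta) @ G
        dPsi = Ginv_beta @ Ginv_beta.T
        return np.concatenate([deta, dG.ravel(), dPsi.ravel()])

    z0 = np.concatenate([np.asarray(eta0, float),
                         np.eye(k).ravel(),
                         np.asarray(V0, float).ravel()])
    sol = solve_ivp(rhs, (0.0, t), z0, rtol=1e-6, atol=1e-6)
    eta_t = sol.y[:k, -1]
    G_t = sol.y[k:k + k * k, -1].reshape(k, k)
    Psi_t = sol.y[k + k * k:, -1].reshape(k, k)
    mean = eta_t + G_t @ np.asarray(m0, float)   # eta(t) + G(t) m_0
    cov = G_t @ Psi_t @ G_t.T                    # G(t) Psi(t) G(t)'
    return mean, cov
\end{verbatim}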
\section{Hybrid Simulation via the LNA}\label{sim} Hybrid simulation strategies begin by partitioning the reactions into two subsets, ``fast'' and ``slow''. It is helpful at this point to also label any \textit{species} that are changed by one or more fast reactions as fast and the remaining species as slow. In between any two slow reaction events we model the dynamics of each species changed by the action of a fast reaction via the LNA. Since the slow reaction hazards will, in general, depend on species changed by fast reaction occurrences, slow reaction event times will follow an inhomogeneous Poisson process. We simulate slow reaction events via thinning \cite{lewis1979}, which requires an upper bound on the total slow reaction intensity. In the following section, we give a novel dynamic re-partitioning scheme and provide a justification of the approach. In Section~\ref{bound}, we derive a probable bound on a linear combination of LNA components before using this result to give a probable upper bound on the total intensity of all slow reactions in Section~\ref{slowbound}. We describe our hybrid simulation strategy algorithmically in Section~\ref{alg}. \subsection{Choice of reaction type}\label{reactchoice} Consider the general criterion that over some time interval $\Delta t$ the changes brought about by reaction $j$ have a small relative impact on the state vector, $\mathbf{X}$; such changes will also have a small relative impact on the rate of each reaction. We represent a typical number of occurrences of a reaction by its expectation; however even if this expectation is less than one, we do not wish a single occurrence of $j$ to cause a substantial change in the state vector. For a reaction $j$ to be regarded as fast, we therefore require \begin{equation}\label{eqn.hybrid.choice.a} \Abs{a_{ji}} \max\left(1,h_j\Delta t\right)\le \epsilon X_i\quad \end{equation} for all $i$ such that $a_{ji}\neq0$ and for some $\epsilon >0$ which represents ``small''. Our proposed scheme re-evaluates the choice of reactions which can safely be modelled as fast at intervals of at most $\Delta t_{hybrid}$. Clearly this choice must be valid until the next re-evaluation and so, we require \eqref{eqn.hybrid.choice.a} to hold with $\Delta t_{hybrid}$ and $\epsilon$ equal to some $\epsilon_{hybrid}$. Both the CLE and LNA are based upon the Gaussian approximation to the Poisson distribution; let us deem this approximation to be sufficiently accurate provided that the mean of the Poisson distribution is at least $N^*$. We therefore require that, \textit{over the time interval where changes brought about by reaction $j$ start to noticeably affect the rates of at least one reaction} (which may be reaction $j$), \textit{the mean number of occurrences of reaction $j$ should be at least $N^*$}. Let $\Delta t_j$ be the time interval over which changes brought about by reaction $j$ start to have an effect. Now for some suitable choice of $\epsilon=\epsilon^*$, $\Delta t_j$ is the largest value $\Delta t$ which satisfies (\ref{eqn.hybrid.choice.a}). Clearly if $\Abs{a_{ji}} > \epsilon^* X_i$ for at least one $i$ then (\ref{eqn.hybrid.choice.a}) cannot be satisfied and the reaction must be slow. Otherwise $\Delta t_j$ is the largest $\Delta t$ that satisfies $\Abs{a_{ji}} h_j \Delta t\le \epsilon^* X_i~\forall i$; i.e. $h_j \Delta t_j = \epsilon^* \min_i \frac{1}{\Abs{a_{ji}}}X_i$. 
We however need $h_j \Delta t_j \ge N^*$; for an equation to be considered as fast we must therefore require that \begin{equation}\label{eqn.hybrid.choice.b} \Abs{a_{ji}}N^* \le \epsilon^* X_i \end{equation} for all $i$ such that $a_{ji}\neq0$. As might be inferred from the italicised fundamental condition, $\Delta t_j$ does not appear explicitly in this equation. Note also that subject to (\ref{eqn.hybrid.choice.b}), the requirement in (\ref{eqn.hybrid.choice.a}) $\Abs{a_{ji}} \le \epsilon X_i~\forall i$ is automatically satisfied provided $\epsilon\ge\epsilon^*/N^*$. In summary, for reaction $j$ to be classified as fast, we require (\ref{eqn.hybrid.choice.b}) to be satisfied, and (\ref{eqn.hybrid.choice.a}) to be satisfied for $\Delta t=\Delta t_{hybrid}$ and $\epsilon=\epsilon_{hybrid}$. \subsection{Probable bounds on a linear combination of LNA components}\label{bound} An upper bound on the total intensity of all slow reactions can be found by deriving an upper bound on a linear combination of the components that drive the LNA. We therefore require an upper bound of a function of the form $\sum_{i=1}^k b^*_i(t) M_i(r)$, $r \in[0,t]$, where $\mathbf{M}(r)$ satisfies (\ref{eqn.perturb}). The following result provides a bound which holds with probability as close to $1$ as desired. A proof can be found in \ref{boundproof}. \begin{proposition} Let $M_i(t)$, $i=1,\ldots, k$ be the components of the stochastic vector $\mathbf{M}(t)$ which satisfies $\mathbf{M}\oftime{0}=\mathbf{0}$ and evolves according to (\ref{eqn.perturb}). Define \begin{equation}\label{eqn.tau} \tau_i(t):=\int_0^t\sum_{j=1}^{k}\left[\mathbf{G}^{-1}(r)\mbox{\boldmath$\beta$}(r)\right]^2_{ij}dr, \end{equation} where $\mathbf{G}(t)$ is the deterministic matrix defined in (\ref{lna.solution2}). Set $\mathbf{b}(t)=\mathbf{G}(t)'\mathbf{b}^*(t)$, and \begin{equation}\label{eqn.define.bstar} b^{\max}_i:=\max_{r\in [0,t]}\Abs{b_i(r)}, i=1,\ldots,k. \end{equation} For any $\epsilon \in (0,1)$ and every $i$ in $1,\ldots,k$ define \begin{equation}\label{eqn.define.ustar} u^*_i:= -\Phi^{-1}\left(\frac{\epsilon}{4k}\right)\tau_i^{1/2}, \end{equation} where $\Phi(\cdot)$ is the cumulative distribution function of a standard normal distribution. Then \[ \Prob{\max_{r\in[0,t]}\sum_{i=1}^k b^*_i(r) M_i(r) \le \sum_{i=1}^k b^{max}_iu_i^*} \ge 1-\epsilon. \] \end{proposition} \subsection{Maximum intensity over an interval}\label{slowbound} The evolution of species numbers that arises from fast reactions is modelled via the LNA, whereas changes in species numbers that arise from slow reactions are modelled though the Markov Jump process. In order to efficiently simulate slow reaction events we require a relatively tight upper bound on the total hazard (or intensity) of all slow reactions. Consider the time interval between a given slow reaction event and either the next slow reaction or the time ($\Delta t_{hybrid}$ in the future) when reactions may be reclassified. Over this interval the number of molecules of each slow species remains fixed, with changes in reaction hazards depending only on the evolution of the relevant fast species. A first order reaction where the rate depends only on the number of molecules of a single slow species may therefore be treated, over this interval, as zeroth order, but with a different rate constant. Similarly a second order reaction where one or both of the reacting species are slow can be treated as a first or zeroth order reaction over this interval. In common with most reaction models (e.g. 
\citeasnoun{wilkinson2012}) we will assume that any apparent interactions between more than two molecules are built up from reactions of order two or fewer. For this interval we therefore partition the slow reactions into three classes $R_s^{(0)}$, $R_s^{(1)}$ and $R_s^{(2)}$, for reactions which, over this interval can be treated as zeroth, first and second order respectively, and where these classifications are understood to depend on the current classification of reactions into slow and fast. Denoting by $X_k$ the number of molecules of species $k$, we therefore have $h_j\left(t,c_j\right)=c^*_j$ for $j\in R_s^{(0)}$; $h_j\left(t,c_j\right)=c^*_jX_{k_1(j)}$ for $j\in R_s^{(1)}$; and $h_j\left(t,c_j\right)=c^*_jX_{k_1(j)}X_{k_2(j)}$ for $j\in R_s^{(2)}$, where $k_1(j)$ and $k_2(j)$ are the indices of the first and second (if required) reactants involved in reaction $j$, and each coefficient, $c_j^*$, is proportional to the true rate constant, $c_j$, but also takes into account the number of molecules of any slow reactants in reaction $j$. Writing $X_i(t)=\eta_i(t)+M_i(t)$ and neglecting terms in $M_iM_j$, the total intensity of all slow reactions is \begin{align*} \lambda^{(s)}(\mathbf{X}(t)) &\approx \sum_{j \in R_s^{(0)}} c^*_j+ \sum_{j \in R_s^{(1)}} c^*_j \left(\eta_{k_1(j)}(t) + M_{k_1(j)}(t)\right)\\ &+ \sum_{j \in R_s^{(2)}} c^*_j \left(\eta_{k_1(j)}(t)\eta_{k_2(j)}(t) + \eta_{k_1(j)}(t)M_{k_2(j)}(t)+ \eta_{k_2(j)}(t)M_{k_1(j)}(t)\right)\\ &= \lambda^{(s)}(\mbox{\boldmath$\eta$}(t)) + \hspace{-.2cm}\sum_{j \in R_s^{(1)}} c^*_j M_{k_1(j)}(t) \\ &\qquad +\sum_{j \in R_s^{(2)}} c^*_j \left( \eta_{k_1(j)}(t)M_{k_2(j)}(t) + \eta_{k_2(j)}(t)M_{k_1(j)}(t) \right).\\ \end{align*} This can be rewritten as \begin{equation}\label{eqn.total.lambda} \lambda^{(s)}(X(t))\approx\lambda^{(s)}(\mbox{\boldmath$\eta$}(t))+\sum_{i=1}^kb^*_i\left(\mathbf{c}^*,\mbox{\boldmath$\eta$}(t)\right)M_i(t), \end{equation} where \begin{equation}\label{eqn.define.b} b^*_i(\mathbf{c}^*,\mbox{\boldmath$\eta$}(t)) = \sum_{\{j \in R_s^{(1)}:k_1(j)=i\}}\hspace{-.4cm}c^*_j+ \sum_{\{j\in R_s^{(2)}: k_2(j)=i\}}\hspace{-.4cm}c^*_j~\eta_{k_1(j)} + \sum_{ \{j\in R_s^{(2)}: k_1(j)=i\}}\hspace{-.4cm}c^*_j~\eta_{k_2(j)}. \end{equation} Note that the approximation in (\ref{eqn.total.lambda}) is exact if, over the interval, all reactions can be treated as zeroth or first order. Also $b_i=0$ if all reactions whose rate is influenced by species $i$ can be treated as zeroth order reactions over the time interval. Defining $b^{max}_{i}$ and $u^*_{i}$ as in (\ref{eqn.define.bstar}) and (\ref{eqn.define.ustar}) and, given that we choose to make $M_i\oftime{0}=0$, we may therefore provide the following probable upper bound over the interval $[0,T]$ on the total intensity of all slow reactions combined: \begin{equation}\label{eqn.hmax} h^{s}_{\max}:=\lambda^{s}_{max} + \sum_{i=1}^{k}b_{i}^{max}u_{i}^{*}, \end{equation} where \[ \lambda^{s}_{max}:=\max_{t\in[0,T]}\lambda^{s}\left(\mathbf{c}^*,\mbox{\boldmath$\eta$}(t)\right). \] \subsection{Generic Algorithm}\label{alg} We now present a generic algorithm for simulating from a mixture of slow and fast reactions using the Linear Noise Approximation for the fast reactions and allowing the slow reactions to evolve through the ``exact'' Markov jump process. Given a starting state the algorithm chooses a time interval, $\Delta t_{integrate}$, over which to integrate the fast reaction mechanism and hence detect whether or not there has been a potential slow reaction. 
If there is a potential slow reaction in this interval then the fast reactions must be reintegrated up to this potential slow reaction time to simulate the state vector at this time. If the next slow reaction were to occur some considerable time in the future then $[t_{curr},t_{curr}+\Delta t_{integrate}]$ would ideally just fail to include this reaction time, and thereby eliminate the need to re-integrate over such a large time interval. By contrast the penalty to computational efficiency is smaller if there is just a small time interval until the next potential slow reaction. However the upper bound on the total slow intensity, and hence the rate at which potential reactions occur, increases with $\Delta t_{integrate}$. Given the circularity of these constraints, we simply set $\Delta t_{integrate}$ as an arbitrary tuning factor. Furthermore, since we may only re-evaluate the fast/slow status of each reaction at the end of an integration, we require $\Delta t_{integrate} \le \Delta t_{hybrid}$. The algorithm commences at time $t_{curr}=0$ with an initial state vector of $\mathbf{x}_{curr}:=(x_{curr,1},\dots,x_{curr,k})$ and ends at some pre-defined time $t_{end}>0$ with $\mathbf{x}_{curr}$ corresponding to the state vector at $t_{end}$. The rate constants $\mathbf{c}$ are assumed to be known but to simplify our presentation of the algorithm we remove explicit mention of $\mathbf{c}$ from the notation. The algorithm starts with $\Delta t_{integrate}$ and $\Delta t_{hybrid}$ set to their default (user-defined) values. \begin{enumerate} \item \label{step.start}\textbf{If} $t_{curr}\ge t_{end}$ then \textbf{stop}. \item \label{step.nocheck} Set $\Delta t_{hybrid}=\min(\Delta t_{hybrid},t_{end}-t_{curr})$ and $\Delta t_{integrate}=\min(\Delta t_{integrate},t_{end}-t_{curr})$. \item \label{step.classify} \textit{Classify reactions}: given $\mathbf{x}_{curr}$ classify each reaction as either slow or fast. \item \label{step.integrate} \textit{Preliminary integration over full interval}: integrate jointly over\linebreak $[t_{curr}, t_{curr}+\Delta t_{integrate}]$ the $k$-vector ODE for $\mbox{\boldmath$\eta$}(t)$, (\ref{eqn.deterministic.y}), the $k\times k$ matrix ODE for $\mathbf{G}(t)$, (\ref{lna.solution2}), the ODEs for $\mbox{\boldmath$\Psi$}(t)$, (\ref{lna.solution3}), and the integral for $\tau_i(t_{curr},\Delta t_{integrate})$ ($i=1,\dots,k$), (\ref{eqn.tau}). Initial conditions for the ODEs are $\mbox{\boldmath$\eta$}(0)=\mathbf{x}_{curr}$, $\mathbf{G}(0)=\mathbf{I}$ and $\mbox{\boldmath$\Psi$}(0)=\mathbf{0}$. So that only fast reactions contribute to the evolution, for the purposes of this integration set the rate of each slow reaction to zero. \item Keep running maxima over the course of the ODE integration in order to calculate $\lambda^{s}_{max}$ and $b^{max}_{i}$ over the interval $[t_{curr},t_{curr}+\Delta t_{integrate}]$. \item Calculate $u^*_{i}~(i=1,\dots,k)$ from (\ref{eqn.define.ustar}). \item Simulate the first event time $t_*$ from a Poisson process which starts at $t_{curr}$ and has intensity $h^{s}_{max}$ as given in (\ref{eqn.hmax}). \item \textbf{If} $t_*>t_{curr}+\Delta t_{integrate}$ then there is \textit{no potential slow reaction} in $[t_{curr},t_{curr}+\Delta t_{integrate}]$; set $t_{curr}=t_{curr}+\Delta t_{integrate}$ and simulate the state vector at this new time, $\mathbf{x}_{t_{curr}}$; \textbf{go to Step \ref{step.start}}.
\item \textit{Second integration}: integrate the ODEs from Step \ref{step.integrate} (except (\ref{eqn.tau})) forward over the interval $[t_{curr},t_*)$, again with the rate of each slow reaction set to zero. This provides the distribution of the species just before time $t_*$, $\mathbf{X}\left(t_*^-\right)$, given that no slow reactions occurred up until this time. Hence simulate $\mathbf{x}\oftime{t_*^-}$ and set $\mathbf{x}_{curr}\leftarrow \mathbf{x}\oftime{t_*^-}$. \item Calculate the probability that a slow reaction actually occurs at $t_{*}$, $\lambda^{s}\left(\mathbf{x}_{curr}\right)/h^{s}_{max}$, and hence simulate whether or not a slow reaction occurs at $t_*$. \item \textbf{If} \textit{no slow reaction} occurs then set $t_{curr}=t_*$ and \textbf{go to Step \ref{step.nocheck}}. \item \textit{Update from slow reaction:} simulate which slow reaction occurs using the following probabilities for $j \in R_s$. \[ \Prob{\mbox{slow reaction } j|\mbox{slow reaction}} = \frac{h_j\left(\mathbf{x}_{curr}\right)}{\lambda^{s}(\mathbf{x}_{curr})}; \] update $\mathbf{x}_{curr}$ according to the net effects vector for the chosen slow reaction. \item \label{step.housekeep} Set $t_{curr}=t_*$ and \textbf{go to Step \ref{step.nocheck}}. \end{enumerate} \section{Bayesian Inference}\label{particle} We consider here the task of performing inference for the kinetic rate constants $\mathbf{c}$ given noisy measurements on the system state $\mathbf{X}(t)$ at discrete time points. We aim to embed the hybrid simulation method outlined in Section~\ref{sim} inside a recently proposed particle MCMC algorithm to obtain an efficient inference scheme. \subsection{A Particle MCMC approach} Suppose that the process $\mathbf{X}(t)$ is not observed exactly; rather, we have (without loss of generality) noisy measurements $\mathbf{Y}_{0:T}=\{\mathbf{Y}(t):t=0,\ldots ,T\}$ observed on a regular grid. We assume that the true underlying process $\mathbf{X}(t)$ is linked to $\mathbf{Y}(t)$ via the density $\pi(\mathbf{y}(t)|\mathbf{x}(t))$. Moreover, we assume that the observations are conditionally independent given the latent process. Rather than perform inference for the exact Markov jump process, we work with the hybrid model and the kinetic rate constants $\mathbf{c}$ governing this approximate model. Let $\mathbf{X}_{(0,T]}=\{\mathbf{X}(t):t\in(0,T]\}$ denote the complete process path on $(0,T]$ and denote the marginal density of $\mathbf{X}_{(0,T]}$, under the structure of the hybrid model, by $\pi_{h}(\mathbf{x}_{(0,T]}|\mathbf{x}(0),\mathbf{c})$, since it depends on the starting value $\mathbf{x}(0)$ and the rate constants $\mathbf{c}$. Note that this density can be sampled from by executing the algorithm described in Section~\ref{sim}. Let $\pi(\mathbf{x}(0))$ and $\pi(\mathbf{c})$ denote the respective prior densities for $\mathbf{X}(0)$ and $\mathbf{c}$. Fully Bayesian inference may proceed by sampling from \[ \pi\left(\mathbf{c},\mathbf{x}_{[0,T]}|\mathbf{y}_{0:T}\right) \propto \pi\left(\mathbf{c}\right)\pi\left(\mathbf{x}(0)\right) \pi_{h}\left(\mathbf{x}_{(0,T]}|\mathbf{x}(0),\mathbf{c}\right) \prod_{i=0}^{T}\pi\left(\mathbf{y}(i)|\mathbf{x}(i)\right)\,. \] In this work, interest lies in the marginal posterior density \begin{align} \pi\left(\mathbf{c} |\mathbf{y}_{0:T}\right) &= \int \pi\left(\mathbf{c},\mathbf{x}_{[0,T]}|\mathbf{y}_{0:T}\right)\,d\mathbf{x}_{[0,T]}\nonumber \\ &\propto \pi(\mathbf{c})\pi(\mathbf{y}_{0:T}|\mathbf{c}) \label{target}\,.
\end{align} Inference is problematic due to the intractability of the marginal likelihood $\pi(\mathbf{y}_{0:T}|\mathbf{c})$. We generate samples from (\ref{target}) by appealing to a special case of the particle marginal Metropolis Hastings (PMMH) scheme described in \citeasnoun{andrieu2010} and \citeasnoun{andrieu09}. In brief, we propose a new $\mathbf{c}^{*}$ using a suitable proposal kernel $q(\mathbf{c}^{*}|\mathbf{c})$ and run a particle filter targeting $\pi(\mathbf{x}_{[0,T]}|\mathbf{y}_{0:T},\mathbf{c}^{*})$ to obtain the filter's estimate of marginal likelihood, denoted $\hat{\pi}(\mathbf{y}_{0:T}|\mathbf{c}^{*})$. At iteration $i$ the proposed $\mathbf{c}^*$ is accepted with probability \begin{equation}\label{aprob} \min\left\{1,\frac{\hat{\pi}(\mathbf{y}_{0:T}|\mathbf{c}^{*}) \pi(\mathbf{c}^{*})}{\hat{\pi}(\mathbf{y}_{0:T}|\mathbf{c}^{(i-1)}) \pi(\mathbf{c}^{(i-1)})} \times \frac{q(\mathbf{c}^{(i-1)} | \mathbf{c}^{*})}{q(\mathbf{c}^{*} | \mathbf{c}^{(i-1)})} \right\}\,. \end{equation} After initialising the rate constants at iteration $i=0$ with $\mathbf{c}^{(0)}$, the algorithm proceeds as follows for $i\geq 1$: \begin{enumerate} \item Draw $\mathbf{c}^{*}\sim q(\cdot|\mathbf{c}^{(i-1)})$. \item Run a particle filter targeting $\pi(\mathbf{x}_{[0,T]}|\mathbf{y}_{0:T},\mathbf{c}^{*})$, and compute $\hat{\pi}(\mathbf{y}_{0:T}|\mathbf{c}^{*})$, the filter's estimate of marginal likelihood. \item With probability (\ref{aprob}) accept a move to $\mathbf{c}^{*}$; otherwise put $\mathbf{c}^{(i)}=\mathbf{c}^{(i-1)}$. \end{enumerate} The scheme as presented can be seen as a pseudo-marginal Metropolis-Hastings method \cite{beaumont03,andrieu09b}. In particular, provided that the estimator of marginal likelihood is non-negative and unbiased (or has a constant positive multiplicative bias that does not depend on $\mathbf{c}$), it is straightforward to verify that the method targets the marginal $\pi(\mathbf{c} |\mathbf{y}_{0:T})$. We let $\mathbf{u}$ denote all random variables generated by the particle filter and write the estimate of marginal likelihood as $\hat{\pi}(\mathbf{y}_{0:T}|\mathbf{c})=\pi(\mathbf{y}_{0:T}|\mathbf{c},\mathbf{u})$. By augmenting the state space of the Markov chain to include $\mathbf{u}$ the acceptance ratio in (\ref{aprob}) can be rewritten as \[ \frac{\pi(\mathbf{y}_{0:T}|\mathbf{c}^{*},\mathbf{u}^{*})\pi(\mathbf{u}^{*}|\mathbf{c}^{*}) \pi(\mathbf{c}^{*})}{\pi(\mathbf{y}_{0:T}|\mathbf{c}^{(i-1)},\mathbf{u}^{(i-1)})\pi(\mathbf{u}^{(i-1)}|\mathbf{c}^{(i-1)}) \pi(\mathbf{c}^{(i-1)})} \times \frac{q(\mathbf{c}^{(i-1)} | \mathbf{c}^{*})\pi(\mathbf{u}^{(i-1)}|\mathbf{c}^{(i-1)})}{q(\mathbf{c}^{*} | \mathbf{c}^{(i-1)})\pi(\mathbf{u}^{*}|\mathbf{c}^{*}) } \] and we see that the chain targets the joint density \begin{equation}\label{target2} \pi(\mathbf{c},\mathbf{u}|\mathbf{y}_{0:T})\propto \pi(\mathbf{y}_{0:T}|\mathbf{c},\mathbf{u})\pi(\mathbf{u} |\mathbf{c}) \pi(\mathbf{c})\,. \end{equation} Marginalising (\ref{target2}) over $\mathbf{u}$ gives $\pi(\mathbf{c}|\mathbf{y}_{0:T})$ as a marginal density. We note that if interest lies in the joint posterior density of $\mathbf{c}$ and the latent path, the above algorithm can be modified to target $\pi\left(\mathbf{c},\mathbf{x}_{[0,T]}|\mathbf{y}_{0:T}\right)$. Essentially, the ancestors of each particle must be stored to allow sampling of the particle filter's approximation to $\pi(\mathbf{x}_{[0,T]}|\mathbf{y}_{0:T},\mathbf{c}^*)$. We refer the reader to \citeasnoun{andrieu2010} for further details.
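As an illustration of the scheme just described, the following minimal Python sketch gives the PMMH loop with a Gaussian random walk proposal on $\log \mathbf{c}$; the function \texttt{estimate\_log\_marginal\_likelihood} stands for the log of any non-negative unbiased estimate of $\pi(\mathbf{y}_{0:T}|\mathbf{c})$, such as that returned by a particle filter driven by the hybrid simulator, and all names are illustrative.
\begin{verbatim}
import numpy as np

def pmmh(log_c0, n_iters, rw_cov, log_prior, estimate_log_marginal_likelihood,
         rng=None):
    """Particle marginal Metropolis-Hastings with a random walk on log(c).

    log_prior(log_c): log prior density evaluated on the log scale (include
                      the Jacobian term if the prior is specified for c)
    estimate_log_marginal_likelihood(log_c): log of an unbiased estimate of
                      pi(y_{0:T} | c), e.g. from a bootstrap particle filter
    """
    rng = rng or np.random.default_rng()
    log_c = np.asarray(log_c0, dtype=float)
    log_like = estimate_log_marginal_likelihood(log_c)
    chain = [log_c.copy()]
    for _ in range(n_iters):
        prop = rng.multivariate_normal(log_c, rw_cov)      # symmetric proposal
        prop_like = estimate_log_marginal_likelihood(prop)
        log_ratio = (prop_like + log_prior(prop)) - (log_like + log_prior(log_c))
        if np.log(rng.uniform()) < log_ratio:              # accept/reject step
            log_c, log_like = prop, prop_like
        chain.append(log_c.copy())
    return np.array(chain)
\end{verbatim}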
Step 2 of the PMMH scheme requires implementation of a particle filter for the successive generation of samples from $\pi(\mathbf{x}_{[0,j]}|\mathbf{y}_{0:j},\mathbf{c}^{*})$ for each $j=0,1,\ldots ,T$. Note that, up to proportionality and for $j>0$, \[ \pi(\mathbf{x}_{[0,j]}|\mathbf{y}_{0:j})\propto \pi(\mathbf{y}(j)|\mathbf{x}(j))\pi(\mathbf{x}_{[0,j-1]}|\mathbf{y}_{0:j-1})\pi_{h}(\mathbf{x}_{(j-1,j]}|\mathbf{x}(j-1)) \] where we have dropped $\mathbf{c}^{*}$ from the notation. Now suppose that we have an equally weighted sample of points (or \textit{particles}) of size $N$ from $\pi(\mathbf{x}_{[0,j-1]}|\mathbf{y}_{0:j-1})$. Denote this sample by $\big\{\mathbf{x}_{[0,j-1]}^{k},k=1,\ldots ,N\big\}$. The bootstrap particle filter of \citeasnoun{gordon1993} generates an approximate sample from $\pi(\mathbf{x}_{[0,j]}|\mathbf{y}_{0:j})$ with the following importance resampling algorithm: \begin{enumerate} \item For $k=1,2,\ldots ,N$, draw $\mathbf{x}_{(j-1,j]}^{k}\sim \pi_{h}(\cdot |\mathbf{x}(j-1)^{k})$ using the hybrid simulator and construct the extended path, $\mathbf{x}_{[0,j]}^{k}=\left(\mathbf{x}_{[0,j-1]}^{k},\mathbf{x}_{(j-1,j]}^{k}\right)$. \item Construct and normalise the weights, \[ w^{(j)}_{k}= \pi(\mathbf{y}(j)|\mathbf{x}(j)^{k})\,,\quad \tilde{w}^{(j)}_{k}=\frac{w^{(j)}_{k}}{\sum_{l=1}^{N}w^{(j)}_{l}}\,, \] where $k=1,2,\ldots ,N$. \item Resample $N$ times amongst the $\mathbf{x}_{[0,j]}^{k}$ using the normalised weights as probabilities. \end{enumerate} In the case $j=0$, $\pi(\mathbf{x}(0)|\mathbf{y}(0))$ can be sampled by replacing Step 1 in the algorithm above with $N$ iid draws from the prior $\pi(\mathbf{x}(0))$. Hence, after initialising the particle filter with a sample from the prior, the above sequence of steps can be performed as each observation becomes available, with the posterior sample at one time point used as the prior for the next. By using the hybrid simulator to generate proposals inside the importance resampler, evaluation of the associated likelihood is not required when calculating the importance weights, and the only term that needs to be evaluated is the tractable density associated with the measurement error. This setup is flexible and can be used with any forward simulator such as the Gillespie algorithm or the chemical Langevin equation. After all data points have been assimilated, the filter's estimate of the marginal likelihood is \begin{equation}\label{ml} \hat{\pi}(\mathbf{y}_{0:T}) = \hat{\pi}(\mathbf{y}(0)) \prod_{j=0}^{T-1}\hat{\pi}(\mathbf{y}(j+1)|\mathbf{y}_{0:j})=\prod_{j=0}^{T}\frac{1}{N}\sum_{k=1}^{N}w_{k}^{(j)} \end{equation} for which we obtain unbiasedness under mild conditions involving the resampling scheme, satisfied by the bootstrap filter described above \cite{delmoral04}. Note that for the special case of the PMMH algorithm used here, when running the particle filter, we need only store the values of the latent states at each observation time, and each unnormalised weight. \subsubsection{Tuning}\label{tuning} The PMMH scheme requires specification of a number of particles $N$ to be used in the particle filter at Step 2. As noted by \cite{andrieu09b}, the mixing efficiency of the PMMH scheme decreases as the variance of the estimated marginal likelihood increases. This problem can be alleviated at the expense of greater computational cost by increasing $N$. This therefore suggests an optimal value of $N$ and finding this choice is the subject of \citeasnoun{pitt12}, \citeasnoun{doucet13} and \citeasnoun{sherlock2013}.
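As a concrete (and purely illustrative) version of the tuning rule discussed next, the following sketch repeatedly runs a particle filter at a pilot estimate $\hat{\mathbf{c}}$ and doubles $N$ until the empirical variance of the log-likelihood estimator falls below a target value. Here \texttt{run\_filter} stands for any filter of the kind sketched above, and the doubling strategy is our own simplification rather than the procedure of the papers cited.
\begin{verbatim}
import numpy as np

def choose_num_particles(c_hat, y, run_filter, N0=50, target_var=2.0, reps=20):
    """Double N until Var(log marginal likelihood estimate at c_hat) <= target_var."""
    N = N0
    while True:
        log_mls = [run_filter(c_hat, y, N) for _ in range(reps)]
        if np.var(log_mls) <= target_var:
            return N
        N *= 2   # estimator too noisy: double the number of particles
\end{verbatim}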
\citeasnoun{sherlock2013} suggest that $N$ should be chosen so that the variance of the noise in the estimated log-posterior is around 2. \citeasnoun{pitt12} note that the penalty is small for a variance between 0.25 and 2.25. We therefore recommend performing an initial pilot run of the PMMH scheme to obtain an estimate of the posterior mean for the parameters $\mathbf{c}$, denoted $\hat{\mathbf{c}}$. The value of $N$ should then be chosen so that $\textrm{Var}(\log \pi(\mathbf{y}_{0:T}|\hat{\mathbf{c}}))$ is around 2. In our application, we note that the rate constants $\mathbf{c}$ must be strictly positive and we update $\log(\mathbf{c})=(\log(c_{1}),\ldots ,\log(c_{r}))'$ in a single block using a random walk proposal with Gaussian innovations. The innovation variance must be chosen appropriately to maximise statistical efficiency through well-mixing chains. We take the innovation variance to be $\gamma \hat{\textrm{var}}(\mathbf{c})$, where $\hat{\textrm{var}}(\mathbf{c})$ is obtained from a short pilot run of the scheme. Following \citeasnoun{sherlock2013} we tune the scaling parameter $\gamma$ to give an acceptance rate of approximately $10\%$. \section{Application: Autoregulatory Network}\label{app} To assess the performance of the proposed hybrid approach as a simulator and as an inferential model, we consider a simple autoregulatory network with two species, $\mathcal{X}_{1}$ and $\mathcal{X}_{2}$, whose time course behaviour evolves according to the following set of coupled reactions, \begin{align*} R_{1}:\quad \emptyset &\xrightarrow{\phantom{a}c_{1}\phantom{a}} \mathcal{X}_{1} & R_{2}:\quad \emptyset &\xrightarrow{\phantom{a}c_{2}\phantom{a}} \mathcal{X}_{2} \\ R_{3}:\quad \mathcal{X}_{1} &\xrightarrow{\phantom{a}c_{3}\phantom{a}} \emptyset & R_{4}:\quad \mathcal{X}_{2} &\xrightarrow{\phantom{a}c_{4}\phantom{a}} \emptyset \\ R_{5}:\quad \mathcal{X}_{1}+\mathcal{X}_{2} &\xrightarrow{\phantom{a}c_{5}\phantom{a}} 2\mathcal{X}_{2} \end{align*} Essentially, reactions $R_{1}$ and $R_{2}$ represent immigration, reactions $R_{3}$ and $R_{4}$ represent death, and finally $R_{5}$ can be thought of as interaction between the two species. Note that even for this simple system, the transition density associated with the resulting Markov jump process (under an assumption of mass action kinetics) cannot be found in closed form. Throughout this section we take \begin{equation} \label{eqn.rate.formula} \mathbf{c}=(2, sc, 1/50, 1, 1/(50\times sc))', \end{equation} and investigate the performance of our hybrid algorithm (henceforth designated as \textit{Hybrid LNA}) with regard to both the simulated distributions of ${X}_1$ and ${X}_2$ and inference on $\mathbf{c}$ for $sc\in\{1,10,100,1000\}$. The `probable upper bound' of Section \ref{bound} is fixed to hold with probability $1-10^{-6}$, whilst the relative and absolute errors of the stiff ODE solver were set to $10^{-4}$. We use the dynamic repartitioning procedure described in Section~\ref{reactchoice} with $N^{*}=15$ and $\epsilon^{*}=\epsilon=0.25$. Reactions are reclassified as fast or slow every $\Delta t_{hybrid}=\Delta t_{integrate}=0.1$ time units. For this specification, Equation~(\ref{eqn.hybrid.choice.b}) ensures that a reaction will be regarded as slow if the species numbers of the species affected by that reaction are $60$ or fewer. The rates in \eqref{eqn.rate.formula} lead to an equilibrium for the MRE of \[ [{X}_1,~{X}_2]=[50(1+sc-\sqrt{1+sc^2}),~1+\sqrt{1+sc^2}], \] which, for $sc \gg 1$, is approximately $[50-25/sc,~sc]$; a short numerical check of this equilibrium, together with the mass-action hazards for $R_1$--$R_5$, is sketched below.
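The following sketch (illustrative only, and not part of the paper's implementation) encodes the net-effect matrix and mass-action hazards of $R_1$--$R_5$ and verifies that the stated equilibrium satisfies the macroscopic rate equations $\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{A}'\mathbf{h}(\mathbf{x},\mathbf{c})$ for each value of $sc$ considered.
\begin{verbatim}
import numpy as np

# Net-effect (stoichiometry) matrix for R1..R5 acting on (X1, X2).
A = np.array([[ 1,  0],    # R1: immigration of X1
              [ 0,  1],    # R2: immigration of X2
              [-1,  0],    # R3: death of X1
              [ 0, -1],    # R4: death of X2
              [-1,  1]])   # R5: X1 + X2 -> 2 X2

def hazards(x, c):
    """Mass-action hazards for the autoregulatory network."""
    x1, x2 = x
    return np.array([c[0], c[1], c[2] * x1, c[3] * x2, c[4] * x1 * x2])

def mre_rhs(x, c):
    """Right-hand side of the macroscopic rate equations, dx/dt = A' h(x, c)."""
    return A.T @ hazards(x, c)

for sc in (1, 10, 100, 1000):
    c = np.array([2.0, sc, 1 / 50, 1.0, 1 / (50 * sc)])
    x_eq = np.array([50 * (1 + sc - np.sqrt(1 + sc**2)), 1 + np.sqrt(1 + sc**2)])
    print(sc, x_eq, mre_rhs(x_eq, c))   # the residual should be (numerically) zero
\end{verbatim}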
Thus, for $sc\gg 1$, when the system is at equilibrium, ${X}_1$ is typically small, ${X}_2$ is typically large, and reactions $R_2$ and $R_4$ are typically fast. If $R_2$ and $R_4$ were always the only fast reactions and $\mathcal{X}_2$ were always the only fast species, then the LNA for the evolution of ${X}_2$ conditional on no slow reactions taking place would be analytically tractable and, further, there would be no need for dynamic repartitioning. We, however, do not take advantage of this special case as we wish to show the generic applicability of our method. To this end we also start each system away from equilibrium, at $\mathbf{X}(0)=(0,0)'$. For comparison, we also ran the Gillespie algorithm and a discrete/SDE hybrid simulation method in the spirit of the \textit{next reaction hybrid algorithm} of \citeasnoun{salis2005} (henceforth designated as \textit{Hybrid SDE}). Full details of this approach can be found in Appendix~\ref{hybsde}. For \textit{Hybrid SDE} we used the same dynamic partitioning criteria and additionally specified the required Euler time step to be $\Delta t_{Euler}=0.005$, which gave an accuracy comparable with that of \textit{Hybrid LNA}. \subsection{Simulation}\label{comp.sim} Using the autoregulatory network as a test case, we ran each hybrid simulator and the Gillespie algorithm for $20,000$ iterations. \begin{figure} \caption{Median (solid), inter-quartile range (inner shaded region) and 95\% credible region (outer shaded region) of $X_{1,t}$ based on $20,000$ stochastic realisations of the model using Gillespie's direct method, Hybrid LNA and the Hybrid SDE. Model parameters were $(2, sc, 1/50, 1, 1/(50\times sc))'$.} \label{fig:sims} \end{figure} \begin{figure} \caption{Simulator CPU time. Each point is the simulation time (in secs) of a single stochastic simulation, averaged over $1000$ simulations. Model parameters were $(2, sc, 1/50, 1, 1/(50\times sc))'$.} \label{fig:CPU} \end{figure} Figure~\ref{fig:sims} summarises the output of each simulation procedure for species $\mathcal{X}_{1}$, and Figure~\ref{fig:CPU} shows the CPU time of each simulator, averaged over 1000 realisations (and using a much larger set of values for $sc$). We see little difference between the simulators' output. However, when taking into account computational cost, the advantage of either hybrid approach over the Gillespie algorithm is clear. For $sc<500$, reaction events occur relatively infrequently and the computational cost of the hybrid algorithms is dominated by the computational overhead of dynamic repartitioning. However, for $sc>500$, the cost of both hybrid schemes is roughly constant, whereas the cost of the Gillespie algorithm increases linearly with $sc$. \textit{Hybrid LNA} requires minimal tuning, since the LNA solution involves solving a set of ODEs, for which stiff solvers that automatically and adaptively choose the time step so as to maintain a given level of accuracy are readily available. \textit{Hybrid SDE}, however, requires the user to choose a fixed Euler time-step, $\Delta t_{Euler}$, and manually attempt to balance accuracy against computational effort; moreover, since the CLE is stiff and non-deterministic, there is the possibility that any fixed $\Delta t_{Euler}$ might not maintain a desired level of accuracy throughout repeated simulations, especially with different rate constants, $\mathbf{c}$. Furthermore, the slow reaction updating procedure of \textit{Hybrid SDE} can be inefficient in a number of ways.
The algorithm requires that only one slow reaction event occurs in the interval over which the fast species are integrated. If more than one slow reaction is detected, $\Delta t_{hybrid}$ is reduced, the system state is rewound and a reclassification of reactions takes place. Because of the reduction in $\Delta t_{hybrid}$, the system rewind may reclassify some erstwhile fast reactions as slow and so actually increase the chance of multiple slow reaction occurrences. Moreover, there is a subtle error in the algorithm: if a rewind has occurred, the new forward simulation must be conditional on the previously-simulated values of the fast reactants over the old interval of length $\Delta t_{hybrid}$. Strictly speaking, therefore, these values should be stored and re-used, with approximate bridges constructed if it is necessary to fill in between the stored values. However, if some of the previously-fast reactants have now become slow, then it is not at all clear how to condition on the results from the previous attempt at forwards simulation. We therefore did not make any attempt to correct this problem. \subsection{Inference}\label{inf} Data were simulated at integer times on $[0,50]$ via the Gillespie algorithm. This gave four synthetic datasets, which were then corrupted to give observations with a conditional distribution of \[ Y_{i}(t)|X_{i}(t)\sim \begin{cases} \textrm{Poisson}\left(X_{i}(t)\right) & \text{if $X_{i}(t)>0$},\\ \textrm{Bernoulli}(0.1) & \text{if $X_{i}(t)=0$} \end{cases} \] for each component $i=1,2$. The data are plotted in Figure~\ref{fig:data}, in which, and for the remainder of this section, we refer to the PMMH scheme that uses a given simulator by the name of that simulator: \textit{Hybrid LNA}, \textit{Hybrid SDE} and \textit{Gillespie}. \begin{figure} \caption{The four synthetic datasets used. Each data set was generated via the Gillespie algorithm. The true species numbers are represented by a black line. The noisy observations are indicated by dots.} \label{fig:data} \end{figure} To ensure identifiability, $c_{3}$ was fixed at its true value, while independent Uniform $U(-8,8)$ priors were used for the remaining $\log(c_i)$. For each combination of synthetic dataset and scheme we performed a pilot run with 50 particles to obtain an approximate covariance matrix $\hat{\textrm{Var}}(\mathbf{c})$ and approximate posterior mean $\hat{\mathbf{c}}$. Following the practical advice of \citeasnoun{sherlock2013}, further pilot runs were performed with $\mathbf{c}$ fixed at $\hat{\mathbf{c}}$ to determine the number of particles $N$ that gave a variance of the estimator of the log-posterior $\log \pi(\mathbf{y}_{0:T}|\hat{\mathbf{c}})$ of around $2$. Table~\ref{tab:tabpart} shows the number of particles used for each scheme and each dataset. Note that \textit{Hybrid SDE} required more particles than \textit{Hybrid LNA} or \textit{Gillespie}, with nearly an order of magnitude difference when $sc=1$. We found that using fewer particles would result in particle degeneracy around time point 32, with only a few particles able to capture the increase in $R_{5}$ occurrences around this time point.
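The measurement-error density implied by the corruption mechanism above is the only density the particle filter needs to evaluate when computing its weights. A minimal sketch (again illustrative; it supplies the \texttt{obs\_density} placeholder used in the earlier particle filter sketch) is:
\begin{verbatim}
import numpy as np
from scipy.stats import poisson, bernoulli

def obs_density(y, x):
    """pi(y(t) | x(t)): Poisson(X_i) if X_i > 0, Bernoulli(0.1) if X_i = 0,
    independently for each component i = 1, 2."""
    dens = 1.0
    for yi, xi in zip(np.atleast_1d(y), np.atleast_1d(x)):
        dens *= poisson.pmf(yi, xi) if xi > 0 else bernoulli.pmf(yi, 0.1)
    return dens
\end{verbatim}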
\begin{table} \centering \begin{tabular}{@{} l lll @{}} \toprule & \multicolumn{3}{c}{Simulator} \\ \cmidrule(l){2-4} sc & Gillespie & Hybrid$_{\text{{\tiny LNA}}}$ & Hybrid$_{\text{{\tiny SDE}}}$ \\ \midrule $10^0$ & 250 & 250 & 1750 \\ $10^1$ & 800 & 800 & 1500 \\ $10^2$ & $\phantom{0}65$ & $\phantom{0}65$ & $\phantom{0}125$ \\ $10^3$ & $\phantom{0}65$ & $\phantom{0}65$ & $\phantom{00}85$ \\ \bottomrule \end{tabular} \caption{Number of particles used for each scheme and each synthetic dataset.}\label{tab:tabpart} \end{table} We performed $2\times 10^{5}$ iterations of each scheme for $sc=1, 10, 100$ and $2\times 10^{6}$ iterations for $sc=1000$. In all cases, the $\log(c_i)$ were updated in a single block using a Gaussian random walk proposal kernel with an innovation variance matrix given by $\gamma \hat{\textrm{Var}}(\mathbf{c})$, with $\gamma$ tuned to give an acceptance rate of around $10\%$. Figure~\ref{fig:fig_hsim} summarises the posterior output of each scheme. We see that, in general, the sampled parameter values are consistent with the true values that produced the data. There appears to be little difference between the output of the PMMH scheme when using the Gillespie simulator and when using either hybrid scheme, suggesting that little is lost by adopting a hybrid model to perform inference for the autoregulatory network. Figure~\ref{fig:ess} shows the minimum effective sample size (ESS) per second for each scheme. The results are consistent with the timings shown in Figure~\ref{fig:CPU}. For relatively small values of $sc$, reaction events occur relatively infrequently and little is to be gained by running Hybrid SDE or Hybrid LNA over Gillespie. When using $sc=1000$, we see a gain in overall efficiency for the hybrid schemes. We would expect this relative gain to increase with $sc$; however, we found that the computational cost of running the PMMH scheme with the Gillespie simulator precluded comparison under this scenario. \begin{figure} \caption{95\% credible regions and posterior medians (black dot) for each parameter value based on the output of each PMCMC scheme (Gillespie, Hybrid LNA and Hybrid SDE). True values are indicated by a red dot.} \label{fig:fig_hsim} \end{figure} \begin{figure} \caption{Minimum effective sample size (ESS) per second.} \label{fig:ess} \end{figure} \section{Discussion}\label{disc} We have proposed a novel hybrid simulation method for efficiently simulating stochastic kinetic models (SKMs). Our approach models fast reaction dynamics with the LNA and slow dynamics with a Markov jump process. By deriving a probable upper bound for a combination of components that drive the LNA, we obtain a probable upper bound for the total slow reaction hazard, thus allowing exact simulation of the slow reaction events. This exactness is conditional on the accuracy of the upper bound, of the LNA approximation and of the ODE solver used to integrate the LNA. The first and the last of these were set to high accuracy, whilst the LNA itself is expected to be accurate since it is only applied to reactions that are classified as fast. To this end, reliable criteria for the (dynamic) partitioning of reactions were also provided. Unlike existing approaches to hybrid simulation that use the CLE, we avoid the need for a system rewind (and the consequent difficulty in making the algorithm strictly correct).
We also avoid the requirement to specify a fixed Euler time step, which is unlikely to be appropriate across all possible sets of rate parameters with prior support and all possible realisations of the process. We have also considered the task of inferring the rate constants governing SKMs by adopting the hybrid model and performing exact simulation-based Bayesian inference. We employed a recently-proposed particle MCMC scheme that, in its simplest implementation, only requires the ability to forward simulate from the model and evaluate an observation (or measurement error) density. We used this scheme to compare results based on our proposed hybrid simulator with those obtained under a hybrid simulator in the spirit of the work by \citeasnoun{salis2005}, and also with inferences obtained under the ``exact'' Markov jump process representation of the SKM. Both hybrid schemes led to inferences that were almost indistinguishable from those under the true model, with a clear indication of increasing relative efficiency as reaction rates increased. \subsection*{Computing details} All simulations were performed on a machine with 8GB of RAM and with an Intel i7 CPU. The operating system used was Ubuntu 12.04. The simulation code was mainly written in C and compiled with flags: \texttt{-Wall}, \texttt{-O3}, \texttt{-DHAVE\_INLINE} and \texttt{-DGSL\_RANGE\_CHECK\_OFF}. FORTRAN code for the stiff ODE solver came from the \texttt{lsoda} package \cite{petzold83}. Graphics were constructed using R and the ggplot2 R package \cite{R,ggplot2}. The code can be downloaded from \begin{center} https://github.com/csgillespie/hybrid-pmcmc \end{center} \appendix \section{Appendices} \subsection{Solution to the LNA}\label{lnaSol} Recall that $\mathbf{G}$ is the fundamental matrix for the deterministic ODE $d\mathbf{m}/dt=\mathbf{F}(t)\mathbf{m}$, satisfying equation~(\ref{lna.solution2}). Note that \[ \mathbf{0}=\frac{d}{dt}\left(\mathbf{G}\mathbf{G}^{-1}\right)=\mathbf{G} \frac{d\mathbf{G}^{-1}}{dt} +\frac{d\mathbf{G}}{dt}\mathbf{G}^{-1}, ~~\text{so}~\frac{d\mathbf{G}^{-1}}{dt}=-\mathbf{G}^{-1}\mathbf{F}(t). \] Set \[ \mathbf{U}(t):=\mathbf{G}^{-1}(t)\mathbf{M}(t),~~ \text{so}~ \mathbf{U}\oftime{0}=\mathbf{M}\oftime{0}. \] Since $\mathbf{G}$ is deterministic, $d\mathbf{G}^{-1}d\mathbf{M}=\mathbf{0}$ and so by (\ref{eqn.perturb}) \[ d\mathbf{U}(t)=\mathbf{G}^{-1}\mathbf{F}\mathbf{M} dt+\mathbf{G}^{-1}\mbox{\boldmath$\beta$}~d\mathbf{W}(t)-\mathbf{G}^{-1}\mathbf{F}\mathbf{M} dt=\mathbf{G}^{-1}\mbox{\boldmath$\beta$}~d\mathbf{W}(t). \] Thus \[ \mathbf{U}(t)-\mathbf{U}\oftime{0} = \int_0^t\mathbf{G}^{-1}(r)\mbox{\boldmath$\beta$}(r) ~d\mathbf{W}(r). \] Therefore by linearity and It\^o's isometry, \begin{equation}\label{eqn.Udist} \mathbf{U}(t)-\mathbf{U}(0)\sim N\left(\mathbf{0},~\int_0^t\mathbf{G}^{-1}(r) \mbox{\boldmath$\beta$}(r)\mbox{\boldmath$\beta$}(r)'\left(\mathbf{G}^{-1}(r)\right)' ~dr\right). \end{equation} Suppose now that $\mathbf{M}\oftime{0}~(=\mathbf{U}\oftime{0})\sim \textrm{N}(\mathbf{m}_0,\mathbf{V}_0)$; then \begin{align*} \mathbf{M}(t) &\sim N \left( \mathbf{G}(t) \mathbf{m}_0, \mathbf{G}(t) \mbox{\boldmath$\Psi$}(t) \mathbf{G}(t)' \right)\\ \text{where}\quad \mbox{\boldmath$\Psi$}(t) &= \mathbf{V}_0+\int_0^t\mathbf{G}^{-1}(r)\mbox{\boldmath$\beta$}(r)\mbox{\boldmath$\beta$}(r)'\left(\mathbf{G}^{-1}(r)\right)'dr.
\end{align*} \subsection{Proof of Proposition 1}\label{boundproof} Firstly, $\sum_{i=1}^{k}b^*_i(r) M_i(r) = \sum_{i=1}^{k}b_i(r) U_i(r)$, where $U_i$ is the $i^{th}$ component of the vector $\mathbf{U}$ defined in Appendix \ref{lnaSol}, but with $\mathbf{U}(0)=\mathbf{0}$ (since $\mathbf{M}(0)=\mathbf{0}$). From its definition, (\ref{eqn.tau}), $\tau_i$ is the $i^{th}$ diagonal component of the variance in (\ref{eqn.Udist}), so \[ \Prob{U_i(t)\ge u_i^*} = \Phi\left(-u_i^*/\sqrt{\tau_i}\right), \] for currently arbitrary values $u^*_i>0$, $i\in \{1,\dots,k\}$. Next, define the first hitting time $T_i(u^*_i)=\inf\{t:U_i(t)\ge u^*_i\}$. Now $U_i(t)\ge u^*_i$ implies $T_i(u_i^*)\le t$, so \[ \Prob{U_i(t)\ge u_i^*}=\Prob{U_i(t)\ge u_i^*|T_i(u_i^*)\le t}\Prob{T_i(u_i^*)\le t}. \] By the almost sure continuity of $U_i$, $\Prob{U_i(T_i(u_i^*))=u_i^*}=1$ and so by the symmetry of $U_i$, $\Prob{U_i(t)\ge u_i^*|T_i(u_i^*)\le t}=1/2$. However $T_i(u_i^*)\le t \Leftrightarrow \max_{(0,t]}U_i \ge u_i^*$, so \[ \Prob{\max_{(0,t]}U_i \ge u_i^*}=2\Phi\left(-u_i^*/\sqrt{\tau_i}\right). \] Given some $\epsilon>0$, we may therefore choose $u^*_i=-\Phi^{-1}\left({\epsilon}/{4k}\right)\tau_i^{1/2}$, which gives, marginally, \[ \Prob{\max_{(0,t]}U_i \ge u_i^*}=\frac{\epsilon}{2k}. \] By symmetry and the inclusion-exclusion formula, therefore, marginally, \[ \Prob{\max_{(0,t]}\Abs{U_i} \ge u_i^*}\le\frac{\epsilon}{k}. \] Hence \[ \Prob{\Abs{U_i(r)}\le u_i^*:i\in\{1,\dots,k\},r\in(0,t]}= 1-\Prob{\max_{(0,t]}\Abs{U_i} \ge u_i^*~\text{for any}~i}\ge 1-\epsilon. \] Thus with probability at least $1-\epsilon$, for all $r \in [0,t]$ \[ \sum_{i=1}^kb^*_i(r) M_i(r) = \sum_{i=1}^kb_i(r) U_i(r) \le \sum_{i=1}^k\Abs{b_i(r)}u_i^* \le \sum_{i=1}^kb^{max}_iu_i^*. \] \subsection{Hybrid Simulation based on the CLE}\label{hybsde} We consider a hybrid simulation algorithm in the spirit of the \textit{next reaction hybrid algorithm} of \citeasnoun{salis2005}. This approach treats the subset of fast species with the chemical Langevin equation and simulates their dynamics by numerically integrating the corresponding SDE. Let $\mathbf{X}^{f}(t)$ be the state of the fast species at time $t$. Suppose that we have $r^{f}$ fast reactions and $r^s$ slow reactions. We then arrive at \begin{equation}\label{daf} d\mathbf{X}^{f}(t)=\mathbf{A}_{f}'\mathbf{h}^{f}\big(\mathbf{X}(t),\mathbf{c}\big)\,dt+\sqrt{\mathbf{A}_{f}'\textrm{diag} \left\{\mathbf{h}^{f}\big(\mathbf{X}(t),\mathbf{c}\big)\right\}\mathbf{A}_{f}}\,d\mathbf{W}(t) \end{equation} where $\mathbf{A}_{f}$ is the $r^{f}\times k^{f}$ net effect matrix associated with the fast reactions and $\mathbf{h}^{f}\big(\mathbf{X}(t),\mathbf{c}\big)$ is the $r^{f}$-vector of fast reaction hazards which may depend on both fast and slow species numbers. Hence, the fast species numbers can be simulated by recursively iterating the Euler discretisation of (\ref{daf}). It remains to sample the times of the slow reactions. This step can be performed by Monte Carlo, equating the integral of the time-dependent probability density for the time of the $j$th slow reaction to a uniform random number. Since the slow reaction hazards are time varying, we write them as $h_{j}^{s}(t,\mathbf{c})$, $j=1,\ldots ,r^{s}$. Let $p_{j}(\tau_{j};t_{0})$ denote the next reaction probability density for the $j$th slow reaction. Here, $t_{0}$ is the time at which the last slow reaction occurred and $\tau_{j}$ is the time of the $j$th slow reaction.
From \citeasnoun{gibson2000}, $p_{j}(\tau_{j};t_{0})$ is a time-dependent exponential density for which the cumulative distribution function is \begin{equation}\label{cdf} F(\tau_{j};t_{0})=1-\exp\left(-\int_{t_{0}}^{t_{0}+\tau_{j}}h_{j}^{s}(t',\mathbf{c})dt'\right). \end{equation} Hence, setting equation (\ref{cdf}) equal to a uniform random number $r_{j}$ on $(0,1)$ and simplifying gives \begin{equation}\label{jump} \int_{t_{0}}^{t_{0}+\tau_{j}}h_{j}^{s}(t',\mathbf{c})dt'+\log(r_{j})=0. \end{equation} We solve equation (\ref{jump}) by rearranging it in terms of a residual $R_{j}(t)$ and setting the integral upper bound to be a variable so that \begin{equation}\label{jump2} \int_{t_{0}}^{t_{0}+t}h_{j}^{s}(t',\mathbf{c})dt'+\log(r_{j})=R_{j}(t). \end{equation} Plainly, if $R_{j}(t)=0$ then $t=\tau_{j}$; $R_{j}(t)<0$ implies that $t<\tau_{j}$, and similarly if $R_{j}(t)>0$ then $t>\tau_{j}$. Hence, starting with state $\mathbf{X}(t)$ at time $t$, we can compute $\mathbf{X}(t+\Delta t)$ assuming no slow reaction has occurred in $(t,t+\Delta t]$. If the residual $R_{j}(t)$ has performed a \textit{zero crossing} in $(t,t+\Delta t]$ then the $j$th slow reaction has occurred. We monitor $R_{j}(t)$ by writing equation (\ref{jump2}) in differential form, \begin{equation}\label{jump3} \frac{dR_{j}(t)}{dt}=h_{j}^{s}(t,\mathbf{c}), \qquad R_{j}(t_{0})=\log(r_{j}). \end{equation} Equation (\ref{jump3}) can then be solved by using a time discretisation method such as the Euler scheme. Note that the method is restricted to only one slow reaction event in $(t,t+\Delta t]$. If more than one zero crossing occurs in this interval then $\Delta t$ can be reduced, and the state restored to the previous one. Hence, if the $j$th slow reaction occurs, the reaction time $\tau_{j}$ can be found through an It\^o-Taylor series expansion of (\ref{jump3}). If $t'$ is the time just prior to the $j$th slow reaction then \[ \tau_{j}=-\frac{R_{j}(t')}{h_{j}^{s}(t',\mathbf{c})}+t'. \] The scheme provides an accurate way of capturing a slow reaction event provided that over the interval of interest, say $[t_{curr},t_{curr}+\Delta t_{integrate}]$, it is known that only one reaction occurs. Consequently, if more than one zero crossing is recorded, the interval length is reduced until at most one slow event is captured. The algorithm commences at time $t_{curr}=0$ with known rate constants $\mathbf{c}$, a known number of molecules $\mathbf{x}_{curr}$ and $R_{j}(0)=\log(r_{j}),\, j=1,\ldots ,r^{s}$. The algorithm ends with $\mathbf{x}_{curr}$ as the state vector at time $t_{end}>t_{curr}$. For simplicity, we take the length of the time interval over which a slow reaction is detected to be $\Delta t_{integrate}= \Delta t_{hybrid}$. \begin{enumerate} \item \textbf{If} $t_{curr}\ge t_{end}$ then \textbf{stop}. \item Set $\Delta t_{hybrid}=\min(\Delta t_{hybrid},t_{end}-t_{curr})$. \item \textit{Classify reactions}: given $\mathbf{x}_{curr}$ classify each reaction as either slow or fast. \item Calculate the fast reaction hazards. Using an Euler time step of $\Delta t_{Euler}$, numerically integrate the SDE (\ref{daf}) for the fast species over $(t_{curr},t_{curr}+\Delta t_{hybrid}]$, giving a sample path for the fast species over this interval. \item Using the slow reaction hazards, compute each residual $R_{j}(t)$, $j=1,\ldots ,r^{s}$, using an Euler approximation of (\ref{jump3}) and decide whether or not a slow reaction has happened in $(t_{curr},t_{curr}+\Delta t_{hybrid}]$.
\item \textbf{If} \textit{no slow reaction} has occurred, set $t_{curr}:=t_{curr}+\Delta t_{hybrid}$ and update the fast species to their proposed values at $t_{curr}$; \textbf{go to Step 1}. \item \textbf{If} \textit{one slow reaction} has occurred, identify the type $j$ and time $\tau_{j}$, set $t_{curr}=\tau_{j}$ and update the system to $\tau_{j}$ using the same random numbers as in Step 4. Reset the $j$th residual, $R_{j}(t)=\log(r_{j})$. Reset $\Delta t_{hybrid}$ to its initial value if required. \textbf{Go to Step 1.} \item \textbf{If} \textit{more than one slow reaction} has occurred, reduce $\Delta t_{hybrid}$ and \textbf{go to Step 3}. \end{enumerate} Note that in Step 3, for consistency, we use the same decision criteria as outlined in Section~\ref{reactchoice}. \end{document}
\begin{document} \title{Multipartite entanglement, quantum coherence and quantum criticality in triangular and Sierpi\'nski fractal lattices} \author{Jun-Qing Cheng} \author{Jing-Bo Xu} \email[Email: ]{[email protected]} \affiliation{Zhejiang Institute of Modern Physics and Physics Department, Zhejiang University, Hangzhou 310027, China} \date{\today} \begin{abstract} We investigate the quantum phase transitions of the transverse-field quantum Ising model on the triangular lattice and Sierpi\'nski fractal lattices by employing the multipartite entanglement and quantum coherence along with the quantum renormalization group method. It is shown that the quantum criticalities of these high-dimensional models are closely related to the behaviors of the multipartite entanglement and quantum coherence. As the thermodynamic limit is approached, the first derivatives of the multipartite entanglement and quantum coherence exhibit singular behaviors, and the consistent finite-size scaling behaviors for each lattice are also obtained from the first derivatives. The multipartite entanglement and quantum coherence are demonstrated to be good indicators for detecting the quantum phase transitions in the triangular lattice and Sierpi\'nski fractal lattices. Furthermore, the dimensions determine the relations between the critical exponents and the correlation length exponents for these lattices. \begin{description} \item[PACS numbers] \pacs{}05.30.Rt, 03.67.Mn, 75.10.Pq, 64.60.al \end{description} \end{abstract} \maketitle \section{Introduction} Quantum phase transitions (QPTs) are notable manifestations of quantum many-body systems at absolute zero temperature, where quantum fluctuations play a dominant role and no thermal fluctuations exist \cite{Sachdev1999}. QPTs can be achieved by changing the parameters of the Hamiltonian, such as an external magnetic field or the coupling constant. As a control parameter is varied through a critical value, the ground state of a system undergoes an abrupt change that is reflected in the system's properties. How to reveal and characterize the critical phenomena of quantum many-body systems is an important task and has become a hot topic in condensed-matter physics. Traditional methods mainly focus on the identification of the order parameters and the pattern of symmetry breaking. Recent developments in quantum information theory \cite{Nielson2000Quantum} have provided some insights into QPTs. Specifically, quantum entanglement has been successfully used as an effective tool to reveal QPTs without any prior knowledge of the order parameter \cite{Osborne2002,Nature2002Scaling,Wu2004Quantum,Amico2008}. Since the concept of renormalization was introduced from quantum field theory to quantum statistical physics \cite{Wilson1975The}, much progress has been made in the research on QPTs. As a variant of the renormalization group at zero temperature, the quantum renormalization group (QRG) is a tractable method for studying the criticalities of one-dimensional \cite{jafari2007phase,Ma2011Entanglement,PhysRevA.86.042102,Efrati2014Real} and two-dimensional \cite{Usman2015Quantum,PhysRevA.95} many-body systems. This method can be used to evaluate the quantum critical points and scaling behaviors analytically, but has difficulties in quantitative estimation for the transverse-field Ising model \cite{PhysRevB.18.3568,PhysRevB.19.4653}.
Recently, it has been shown that a novel renormalization group (RG) map can not only be used to accurately examine the critical behavior of the one-dimensional quantum transverse-field Ising model, but also to predict the critical behaviors of higher-dimensional models \cite{Miyazaki2011Real,kubica2014precise}. In particular, there have been efforts to study the quantum Ising models on fractal lattices \cite{Yi2013Critical,kubica2014precise,PhysRevE.91.012118}, whose critical behaviors were not clear before. Fractals are self-similar structures in noninteger dimensions and are of both aesthetic and scientific interest. They have been used to interpolate between integer-dimensional regular lattices and to construct networks for quantum computation and communication \cite{markham2013,Michael2016}. The quantum criticalities of fractal lattices therefore attract our attention. On the other hand, the entanglement in the ground state of a many-body system can be utilized as a resource for quantum technologies \cite{Amico2008}. The multipartite entanglement offers significant advantages in quantum tasks compared with bipartite entanglement. For example, it is the main ingredient in measurement-based quantum computation \cite{Browne2009} and various quantum communication protocols \cite{RevModPhys.74.145,PhysRevLett.86.4431,PhysRevLett.86.5188}. Therefore, the entanglement quantification of multipartite quantum states is necessary and essential in quantum information science. The monogamy of entanglement is one of the most important properties in many-body quantum systems \cite{Horodecki2007}, and can be used to characterize the entanglement structure. It has been recently discovered that the squared entanglement of formation obeys the monogamy inequality in an arbitrary $N$-qubit mixed state, and a relevant multipartite entanglement indicator has been proposed \cite{Bai2014General}. The multipartite entanglement provides a global view and more physical insights into the character of a many-body system, and it may have some advantages over bipartite entanglement in revealing the QPTs. Furthermore, the quantum coherence, which arises from the quantum superposition principle, plays a very important role in the fields of quantum optics \cite{Louisell1973Quantum} and quantum information \cite{Nielson2000Quantum}. However, until very recently there was no well-accepted, efficient method for measuring quantum coherence. A rigorous theoretical framework for quantifying the quantum coherence and the necessary constraints for the quantifier have been proposed \cite{PRLcoherence}. It is therefore interesting to investigate the multipartite entanglement and quantum coherence in the QPTs of high-dimensional many-body systems. These developments on QPTs, the QRG method, multipartite entanglement and quantum coherence motivate us to consider the following questions: How do the multipartite entanglement and quantum coherence behave in the QPTs of high-dimensional models? Can the multipartite entanglement and quantum coherence be used to indicate the QPTs of the transverse-field quantum Ising models on the fractal lattices? Can we apply the QRG approach to find the finite-size scaling behaviors proposed in Ref. \cite{Nature2002Scaling} for the cases of fractal lattices? Are the critical exponents of multipartite entanglement consistent with the ones of quantum coherence for the same lattice? What are the relations between the critical exponents and correlation length exponents for high-dimensional systems?
In this paper, we investigate the performances of multipartite entanglement and quantum coherence in the QPTs of the transverse-field quantum Ising model on the triangular lattice and Sierpi\'nski fractal lattices by employing the QRG method. It is found that the quantum criticalities of these models are closely related to the behaviors of the multipartite entanglement and quantum coherence. The singularities for each lattice are observed from the first derivatives of the multipartite entanglement and quantum coherence. The scaling behaviors as introduced in Ref. \cite{Nature2002Scaling} are obtained for these lattices; in particular, the ones that describe how the critical points are approached in the thermodynamic limit have not been discussed before. It is also shown that the multipartite entanglement and quantum coherence obey the universal finite-size scaling laws for the same lattice. Furthermore, the dimensions of the lattices play a decisive role in the relations between the critical exponents and correlation length exponents. The multipartite entanglement and quantum coherence are proven to be good indicators for detecting the QPTs of the transverse-field quantum Ising model on high-dimensional lattices, such as the triangular and Sierpi\'nski fractal lattices. This paper is organized as follows. In Sec. \ref{section2}, we study the QPTs of lattices by employing multipartite entanglement along with the QRG method. In Sec. \ref{section3}, we investigate the quantum coherence and the QPTs of lattices by using the QRG method. Finally, the conclusions are drawn in Sec. \ref{section4}. \section{\label{section2}Multipartite entanglement and Quantum phase transitions in lattices} We consider a set of localized spin-$1/2$ particles in the triangular lattice or Sierpi\'nski fractal lattices coupled through an exchange interaction $J$ and subject to an external magnetic field of strength $h$. The Hamiltonians for such transverse-field quantum Ising models are given by \begin{equation} H=-J \sum_{\left\langle i,j\right\rangle }\sigma_i^z \sigma_j^z -h\sum_{i}\sigma_i^x \end{equation} where $\sigma_i^\alpha (\alpha=x,z)$ are the standard spin-$1/2$ Pauli operators at the site $i$. The sums are over all the nearest neighbor pairs and over all sites, respectively. We mainly focus on the ferromagnetic interactions $J> 0$ and the transverse field $h\geqslant 0$. In this work, three kinds of lattices, as shown in Fig. \ref{figure1}, are considered: the triangular lattice and the Sierpi\'nski fractal lattices. For simplicity, the exchange interaction normalized by the transverse field strength, $g=J/h$, is used throughout our investigation. \begin{figure} \caption{Schematic illustration of QRG transformation for the (a) triangular lattice and (b)(c) Sierpi\'nski fractal lattices.} \label{figure1a} \label{figure1b} \label{figure1c} \label{figure1} \end{figure} Typically, it is not easy to obtain the analytical solutions of these high-dimensional systems. Even if a numerical method such as Monte Carlo simulation is applicable \cite{Hasenbusch2010A}, the calculation is computationally expensive. The QRG method is an analytical treatment for studying QPTs, and it has particular advantages in estimating the quantum critical points and scaling behaviors. The main idea of the QRG method is to eliminate or thin the degrees of freedom of the many-body systems through a recursive procedure until a tractable situation is reached.
According to Kadanoff's block method \cite{Efrati2014Real,jafari2007phase,PhysRevA.86.042102}, a spin chain can be split into blocks, which means that the Hamiltonian is decomposed into the block Hamiltonian and the interacting (interblock) Hamiltonian. The low-lying eigenstates of each block Hamiltonian are applied to construct the basis for the renormalized Hilbert space. In this way, the full Hamiltonian is projected onto the renormalized space to achieve an effective Hamiltonian with structural similarity to the original one. As the thermodynamic limit is approached by increasing the number of RG iterations, the global properties of the system can be captured. In the novel QRG method, for the purpose of preserving the symmetry of the system and the structural similarity of the Hamiltonian, not all the terms inside a block are included in the QRG transformation \cite{kubica2014precise}, which significantly improves the precision of the estimate of the critical point. The procedure of the QRG transformation for the triangular lattice is shown in Fig. \ref{figure1a}. The entire system is covered by blocks of three sites, each of which is renormalized to a single site. After performing the renormalization, two types of coupling strengths $g_a$ and $g_b$ replace the original one $g$, which means that the coupling strength becomes highly anisotropic. In Ref. \cite{kubica2014precise}, the authors proposed that the renormalized coupling strength for the triangular lattice should be a geometric mean of all coupling strengths and chose the hexagon consisting of seven sites as the basic cluster. Then the renormalized coupling strength can be obtained by \begin{equation} \label{RG equation1} g'_\mathrm{t}=g_a^{2/6} g_b^{4/6}=2^{1/3} g^2 (1+g^2)^{1/6} (g+\sqrt{1+g^2})^{2/3}. \end{equation} In this way, the triangular lattice with $ 7\times (\lambda^n)^d $ sites can be effectively represented by a seven-site cluster after completing the $n$th RG iteration step, where $d=2$ is the dimension of the triangular lattice and $\lambda= \sqrt{3}$ is the scale of the length of the side for each RG iteration. The critical point $g_\mathrm{c}^\mathrm{t}$ corresponding to the nontrivial fixed point is obtained by solving $g'=g$, i.e., $g_\mathrm{c}^\mathrm{t}\approx 0.539$. Similarly, we also study the transverse-field quantum Ising model on the Sierpi\'nski fractal lattices with Hausdorff dimension $d_\mathrm{H}=\log(\kappa+1)/\log2$, where $\kappa=2$ or $3$ is the spatial dimension. The procedures of the QRG transformation for the Sierpi\'nski triangular lattice ($d_\mathrm{H}=1.585$) and Sierpi\'nski pyramid lattice ($d_\mathrm{H}=2$) are depicted in Fig. \ref{figure1b} and Fig. \ref{figure1c}, respectively \cite{kubica2014precise}. It can be observed that the basic cluster in the Sierpi\'nski triangular lattice is a triangle containing three sites, and for the Sierpi\'nski pyramid lattice, it is a pyramid containing four sites. It is not difficult to find that after the $n$th RG iteration the Sierpi\'nski triangular (or pyramid) lattice with $3\times\lambda_{\mathrm{f}}^{1.585n}$ (or $4\times\lambda_{\mathrm{f}}^{2n}$) sites is represented by a three (or four)-site cluster, where $\lambda_\mathrm{f}=2$. Therefore, the renormalized coupling strengths for the fractal lattices can be obtained as follows: \begin{equation} \label{RG equation2} g'_\mathrm{f}=g^{(3\kappa+1)/(\kappa+1)}(1+g^2)^{\kappa(\kappa-1)/[2(\kappa+1)]}.
\end{equation} The critical points of the Sierpi\'nski triangular lattice and Sierpi\'nski pyramid lattice are given as $g_\mathrm{c}^\mathrm{St} \approx 0.869$ and $g_\mathrm{c}^\mathrm{Sp} \approx 0.786$, respectively. Moreover, the correlation length exponents $\nu$ for the triangular lattice, Sierpi\'nski triangular lattice and Sierpi\'nski pyramid lattice can be calculated as follows: \begin{eqnarray} \label{correlation length exponent} &&\nu_\mathrm{t}^{-1}=\log_{\sqrt{3}} \frac{\mathrm{d}g'_\mathrm{t}}{\mathrm{d}g} |_{g_\mathrm{c}^\mathrm{t}},\nonumber\\ &&\nu_\mathrm{f}^{-1}=\log_2 \frac{\mathrm{d}g'_\mathrm{f}}{\mathrm{d}g} |_{g_\mathrm{c}}. \end{eqnarray} The results are $\nu_\mathrm{t} \simeq 0.630$, $\nu_\mathrm{St} \simeq 0.720$ and $\nu_\mathrm{Sp} \simeq 0.617$, respectively. Next, we briefly outline the definition of the monogamy of entanglement and the measure of multipartite entanglement used in the present study. For an $N$-qubit system with state space ${\mathcal{H}_{A_1}} \otimes {\mathcal{H}_{A_2}} \otimes \cdots \otimes {\mathcal{H}_{A_N}}$, taking the subsystem $A_1$ as a \textquotedblleft node\textquotedblright \cite{Fanchini2017}, if the entanglement between the particles $A_1$ and $A_2,\cdots,A_N$ satisfies the inequality \begin{eqnarray} E^2_{A_1|A_2,\cdots,A_N}\geq E^2_{A_1 A_2}+E^2_{A_1 A_3}+\cdots + E^2_{A_1 A_N}, \end{eqnarray} with $E_{A_1|A_2,\cdots,A_N}$ quantifying the entanglement in the partition $A_1|A_2,\cdots,A_N$ and $E_{A_1 A_j}$ quantifying the one in the two-qubit system $A_1 A_j$, then the entanglement measure $E$ obeys the monogamy relation \cite{Horodecki2007}. This monogamy property imposes physical restrictions on the unconditional sharability of quantum entanglement among the different parts of a many-body system. According to the Schmidt decomposition \cite{Peres1995}, the subsystem $A_2,\cdots,A_N$ is equivalent to a logical qubit $A_{2,\cdots,N}$ for an $N$-qubit pure state $\left| \psi\right\rangle_{A_1 A_2,\cdots,A_N}$. As an example, the entanglement of formation $E_{\rm{f}} (A_1|A_2,\cdots,A_N)$ can be derived by using the analytical formula for a two-qubit state $\rho_{AB}$ \cite{Caves2001} \begin{eqnarray} \label{bipartite entanglement} E_{\rm{f}} ({\rho_{AB}})=h \left( \frac{1+\sqrt{1-C^2_{AB}}}{2}\right), \end{eqnarray} where $h(x)=-x\log_2{x}-(1-x)\log_2{(1-x)}$ is the binary entropy and $C_{AB}=\max \{ 0, \sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4} \} $ is the concurrence \cite{Hill1997}, where the nonnegative $\lambda_i$, in decreasing order, are the eigenvalues of the matrix $\rho_{AB}(\sigma_y\otimes\sigma_y)\rho_{AB}^\ast(\sigma_y\otimes\sigma_y)$. The squared entanglement of formation has been found to obey the monogamy inequality in an arbitrary $N$-qubit mixed state, and a relevant indicator has been proposed to detect the multiqubit entangled states \cite{Bai2014General}, which reads \begin{eqnarray} \tau_{A_1|A_2, \cdots ,A_N} = E_{\rm{f}}^2({\rho _{A_1|A_2, \cdots ,A_N}}) - \sum\limits^N_{j \ne 1} {E_{\rm{f}}^2({\rho _{A_1 A_j}})} \label{multipartite entanglement}. \end{eqnarray} By utilizing this measure, we can not only explore the QPTs of many-body systems, but also examine the performance of multipartite entanglement in different phases. \subsection{Triangular lattice} Now, we investigate the multipartite entanglement for the transverse-field quantum Ising model on the triangular lattice by employing the QRG method. Since the basic cluster contains seven sites, as shown in Fig.
\ref{figure1a}, we choose the central site (labeled by $1$) as the \textquotedblleft node\textquotedblright \cite{Fanchini2017}, and calculate the seven-partite entanglement $\tau_{1|2, \cdots ,7}$ for studying the performance of the multipartite entanglement in the QPT. The Hamiltonian of the basic cluster can be written as \begin{equation} H=-g(\sum_{i=2}^{7}\sigma_1^z\sigma_i^z + \sum_{i=2}^{6}\sigma_i^z\sigma_{i+1}^z +\sigma_2^z\sigma_7^z)-\sum_{i=1}^{7}\sigma_i^x. \end{equation} The density matrix is given by $\rho=|\psi_0 \rangle \langle \psi_0|$, where $|\psi_0 \rangle $ is the ground state of the Hamiltonian of the basic cluster. Then we can calculate the seven-partite entanglement according to Eq. \ref{multipartite entanglement}, namely, \begin{eqnarray} \tau_{1|2, \cdots ,7} = E_{\rm{f}}^2({\rho _{1|2, \cdots ,7}}) - \sum\limits^7_{j \ne 1} {E_{\rm{f}}^2({\rho _{1 j}})}. \label{seven-partite} \end{eqnarray} \begin{figure}\label{figure2} \end{figure} \begin{figure} \caption{(a) First derivative of multipartite entanglement $\mathrm{d}\tau/\mathrm{d}g$ versus $g$ for different RG iterations on the triangular lattice. (b) The logarithm of the absolute value of the maximum $\ln\left| \mathrm{d} \tau/\mathrm{d}g\right| $ versus the logarithm of the triangular lattice size $\ln(N)$, which is linear and shows a scaling behavior. (c) The scaling behavior of $g_\mathrm{max}$ in terms of system size $N$ for the triangular lattice, where $g_\mathrm{max}$ is the position of the maximum derivative of multipartite entanglement. (d) The finite-size scaling through renormalization treatment with the correlation length exponent $\nu=0.63$ for the multipartite entanglement. The curves corresponding to different system sizes approximately collapse onto a single one for this triangular lattice.} \label{figure3} \end{figure} Based on Eqs. \ref{RG equation1} and \ref{seven-partite}, we compute the seven-partite entanglement among the seven sites in the basic cluster, which can represent different system sizes after completing the corresponding RG iterations. The seven-partite entanglement $\tau$ as a function of $g$ for different RG iterations on the triangular lattice is plotted in Fig. \ref{figure2}. It can be observed that these curves cross each other at the critical point, and that two different saturated values of the multipartite entanglement develop, associated with the two phases: the ferromagnetic phase ($g>g_\mathrm{c}^\mathrm{t}$) and the paramagnetic phase ($g<g_\mathrm{c}^\mathrm{t}$). As the size of the system becomes large, the two phases are distinguished more clearly. In particular, the saturated value of multipartite entanglement in the ferromagnetic phase approaches the maximum. Here, in order to provide a possible physical explanation, we display the bipartite entanglements of formation $E_{\rm{f}}(\rho_{12})$ and $E_{\rm{f}}(\rho_{1|2\cdots7})$ as functions of coupling strength $g$ for the zeroth RG iteration in the inset of Fig. \ref{figure2}. The bipartite entanglement $E_{\rm{f}}(\rho_{12})$ first increases from zero to the maximum and then decreases to zero monotonically, while the other, $E_{\rm{f}}(\rho_{1|2\cdots7})$, increases monotonically with $g$ until the saturated value is reached. As the coupling strength grows from zero, the increased probability of the spin pair being in an entangled state leads to the generation of bipartite entanglement.
Only when the competition between the interaction and the quantum fluctuation reaches a counterbalance at the critical point does the bipartite entanglement $E_{\rm{f}}(\rho_{12})$ reach its maximum \cite{gu2003entanglement}, agreeing with the results of Ref. \cite{PhysRevA.95}. Then the exchange couplings play a dominant role and keep the system in the ferromagnetic phase, which results in the decrease of bipartite entanglement between two neighboring sites. The reason why the multipartite entanglement approaches the maximum in the ferromagnetic phase can be found from Eq. \ref{multipartite entanglement}. On the one hand, the increase of the coupling strength brings the value of the entanglement $E_{\rm{f}}(\rho_{1|2\cdots7})$ up to its maximum. On the other hand, the bipartite entanglements between neighboring sites are very small because of the strong coupling. It can be concluded that the multipartite entanglement is not maximal at the critical point as the bipartite entanglement is, and that the multipartite entanglement is richer in the ferromagnetic phase than in the paramagnetic phase. Although the above interpretation is somewhat superficial, it provides inspiration for further understanding. At zero field, the model exhibits ferromagnetic behavior with net magnetization in the $z$ direction. The ground state is twofold degenerate and a product state with spins pointing in the $z$ direction, i.e. $|+\rangle = |\uparrow \uparrow\cdots\rangle$ or $|-\rangle = | \downarrow\downarrow \cdots \rangle$. In the large-field limit, the ground state is also a product state with all spins polarized in the direction of the field. Although no bipartite entanglement exists in either case, there is another possible solution for the ground state at zero field, namely, the superposition of the degenerate states, which may be a Greenberger-Horne-Zeilinger (GHZ)-like state $|\rm{G}\rangle = 1/\sqrt{2}(|+\rangle + |-\rangle)$ with genuine multipartite entanglement \cite{Montakhab2010}. Therefore, the multipartite entanglement approaches the maximum in the ferromagnetic phase, and these results may provide further insight into the entanglement distribution and the QPT for many-body systems. More information on the location and the order of the QPT can be obtained by consideration of the derivatives of the multipartite entanglement with respect to the coupling strength. We plot the derivatives of multipartite entanglement $\mathrm{d}\tau/\mathrm{d}g$ as a function of $g$ for different RG iterations in Fig. \ref{figure3}(a). It can be seen from Fig. \ref{figure3}(a) that the first derivative of the multipartite entanglement exhibits a nonanalytic behavior, which indicates that the QPT of this system is a second-order QPT. The scaling behavior of the maximum of $\mathrm{d}\tau/\mathrm{d}g$ versus $N$ is displayed in Fig. \ref{figure3}(b), which is a linear behavior of $\ln(|\mathrm{d}\tau/\mathrm{d}g|_{g_\mathrm{max}})$ versus $\ln(N)$. Based on numerical analysis, we can obtain $|\mathrm{d}\tau/\mathrm{d}g|_{g_\mathrm{max}}\sim N^{\mu'_1}$, where the critical exponent $\mu'_1 \simeq 0.790$. It has been found that the correlation length exponent is the inverse of the critical exponent in one-dimensional spin chain systems \cite{kargarian2008the,ma2011quantum}. For this triangular lattice, the relation between the correlation length exponent and the critical exponent has a new form.
The correlation length exponent $\nu$ gives the divergent behavior of the correlation length in the vicinity of $g_\mathrm{c}$, i.e., $\xi \sim |g-g_\mathrm{c}|^{-\nu}$. Under the RG transformations, the correlation length scales as $\xi \rightarrow \xi_n=\xi/\lambda^n $, where $\lambda= \sqrt{3}$ is the scale of the length of the side for each RG iteration and is related to $N$ via $7\times (\lambda^n)^d = N$. For the $n$th RG iteration, the renormalized coupling strength $g_n$ is still a function of the original one, $g$. Since $ \xi_n \sim |g_n-g_\mathrm{c}|^{-\nu}$ and $\left|\mathrm{d}\tau/\mathrm{d}g|_{g_\mathrm{max}}\right| \sim \left| \mathrm{d}g_n/\mathrm{d}g|_{g_\mathrm{c}}\right|$, we can derive that \begin{equation} \left| \frac{\mathrm{d}\tau}{\mathrm{d}g} \right|_{g_\mathrm{max}} \sim N^{\frac{1}{\nu d}}. \end{equation} Comparing with $|\mathrm{d}\tau/\mathrm{d}g|_{g_\mathrm{max}}\sim N^{\mu_1}$, the relation between the critical exponent and the correlation length exponent is obtained, namely $\mu_1=1/(\nu d)$. Furthermore, the value of the coupling strength $g_\mathrm{max}$ corresponding to the maximum of $\mathrm{d}\tau/\mathrm{d}g$ for each RG iteration gradually tends toward the critical point $g_\mathrm{c}$, which indicates another scaling behavior displayed in Fig. \ref{figure3}(c), i.e., $|g_\mathrm{c} -g_\mathrm{max}| \sim N^{-\mu'_2}$, where the critical exponent $\mu'_2\simeq 0.808$. This critical exponent is also related to the correlation length exponent $\nu$ in the vicinity of the critical point. The scaling of the position of the maximum, $g_\mathrm{max}$, comes from the behavior of the correlation length $\xi$ near the critical point. As the thermodynamic limit is approached, the correlation length scales as $\xi \sim N^{1/d}$. Comparing with $\xi \sim |g - g_{c}|^{-\nu}$, the scaling form $|g_\mathrm{c} - g_\mathrm{max} | \sim N^{-1/(\nu d)}$ can be obtained, which implies that $\mu_2=1/(\nu d)$. We can observe that the second critical exponent is in good agreement with the first one, $\mu_1=\mu_2=1/(\nu_\mathrm{t} d) \simeq 0.794$, where $\nu_\mathrm{t}$ is the correlation length exponent for the triangular lattice as shown in Eq. \ref{correlation length exponent}. It is noted that the above relation can also be confirmed by the numerical results $\mu'_1 \simeq 0.790$ and $\mu'_2\simeq 0.808$. Based on the divergence of the derivative of the multipartite entanglement, we plot $(\mathrm{d}\tau/\mathrm{d}g-\mathrm{d}\tau/\mathrm{d}g|_{g_\mathrm{max}})/N^{\frac{1}{\nu d}}$ versus $N^{\frac{1}{\nu d}}(g-g_\mathrm{max})$ for different RG iterations in Fig. \ref{figure3}(d). These curves for different system sizes approximately collapse onto a single one, which is a manifestation of the existence of finite-size scaling for the multipartite entanglement \cite{Nature2002Scaling,Kargarian2007}. We can conclude that the multipartite entanglement is a good indicator to signify the criticality of the transverse-field quantum Ising model on the triangular lattice. \subsection{Sierpi\'nski fractal lattice} \begin{figure} \caption{The evolutions of multipartite entanglement $\tau$ versus $g$ for different RG iterations on the (a) Sierpi\'nski triangular lattice and (b) Sierpi\'nski pyramid lattice.} \label{figure4a} \label{figure4b} \label{figure4} \end{figure} \begin{figure} \caption{(a) The first derivative of multipartite entanglement $\mathrm{d}\tau/\mathrm{d}g$ versus $g$ for different RG iterations on the Sierpi\'nski triangular lattice.
(b) The logarithm of the absolute value of the maximum $\ln\left| \mathrm{d}\tau/\mathrm{d}g\right|$ versus the logarithm of the Sierpi\'nski triangular lattice size $\ln(N)$, which is linear and shows a scaling behavior. (c) The scaling behavior of $g_\mathrm{max}$ in terms of system size $N$ for the Sierpi\'nski triangular lattice, where $g_\mathrm{max}$ is the position of the maximum derivative of multipartite entanglement. (d) The finite-size scaling through renormalization treatment with the correlation length critical exponent $\nu=0.720$ for the multipartite entanglement. The curves corresponding to different system sizes approximately collapse onto a single one for this Sierpi\'nski triangular lattice.} \label{figure5} \end{figure} Next, we consider the transverse-field Ising model on the fractal lattices, which are the generalizations of the Sierpi\'nski pyramid in $\kappa$ spatial dimensions. For $\kappa=2$ and $\kappa=3$, the fractal lattices are the Sierpi\'nski triangle and pyramid lattices, respectively, as depicted in Fig. \ref{figure1b} and Fig. \ref{figure1c}, whose Hausdorff dimensions can be calculated by $d_\mathrm{H} =\log(\kappa+1)/\log2$. Here, we choose the site labeled by $1$ as the \textquotedblleft node\textquotedblright, and according to the numbers of sites in the basic clusters of the fractal lattices, we investigate the tripartite entanglement for the Sierpi\'nski triangle lattice and the four-partite entanglement for the Sierpi\'nski pyramid lattice, respectively. The renormalized tripartite and four-partite entanglements can be obtained from Eqs. \ref{RG equation2} and \ref{multipartite entanglement}. The results for the multipartite entanglement versus the coupling strength $g$ for different RG iterations on the Sierpi\'nski triangular and pyramid lattices are displayed in Fig. \ref{figure4}. It is clearly observed from Fig. \ref{figure4} that the evolutions of multipartite entanglement on the fractal lattices are similar to those on the triangular lattice. As the size of the Sierpi\'nski triangular lattice increases, the tripartite entanglement produces two different saturated values that correspond to two different phases. The paramagnetic order at $g<g_\mathrm{c}^\mathrm{St}$ induces the quantum fluctuation and leads to the destruction of the tripartite entanglement. In contrast, as $g$ becomes large, the ferromagnetic order gradually builds the tripartite entanglement. The performance of the four-partite entanglement in the Sierpi\'nski pyramid lattice is analogous to that of the tripartite entanglement in the Sierpi\'nski triangle lattice; one obvious difference is the positions of the intersection points, since the critical points of these two fractal lattices are different. These two figures reveal that as the thermodynamic limit is approached by increasing the number of RG iterations, the multipartite entanglement can be used to detect the critical points of the fractal lattices. \begin{figure} \caption{(a) The first derivative of multipartite entanglement $\mathrm{d}\tau/\mathrm{d}g$ versus $g$ for different RG iterations on the Sierpi\'nski pyramid lattice. (b) The logarithm of the absolute value of the maximum $\ln\left| \mathrm{d}\tau/\mathrm{d}g\right|$ versus the logarithm of the Sierpi\'nski pyramid lattice size $\ln(N)$, which is linear and shows a scaling behavior. (c) The scaling behavior of $g_\mathrm{max}$ in terms of system size $N$ for the Sierpi\'nski pyramid lattice, where $g_\mathrm{max}$ is the position of the maximum derivative of multipartite entanglement.
(d) The finite-size scaling through renormalization treatment with the correlation length critical exponent $\nu=0.617$ for the multipartite entanglement. The curves corresponding to different system sizes approximately collapse onto a single one for this Sierpi\'nski pyramid lattice.} \label{figure6} \end{figure} The appearance of nonanalytic behavior in some quantity, often accompanied by a scaling behavior, is a feature of a second-order QPT. The nonanalytic behavior of the first derivative of the multipartite entanglement near the critical point and the scaling behaviors for the two-dimensional triangular lattice have been shown in Fig. \ref{figure3}. In the following, we turn our attention to the fractal lattices. The first derivatives of multipartite entanglement $\mathrm{d}\tau/\mathrm{d}g$ versus $g$ for different RG iterations on the Sierpi\'nski triangular and pyramid lattices are displayed in Fig. \ref{figure5}(a) and Fig. \ref{figure6}(a), respectively. The nonanalytic behaviors of the multipartite entanglement near the critical points become more prominent as the system sizes increase, which means that the first derivatives of the multipartite entanglement are singular near the critical points and both systems undergo second-order QPTs. To further understand the relation between the renormalized multipartite entanglement and QPTs, we explore the finite-size scaling behaviors of the multipartite entanglement close to the critical points. The linear behaviors of $\ln(\left| \mathrm{d}\tau/\mathrm{d}g\right|_{g_\mathrm{max}})$ versus $\ln(N)$ are revealed in Figs. \ref{figure5}(b) and \ref{figure6}(b). Numerical analysis confirms that the maximum of $\mathrm{d}\tau/\mathrm{d}g$ obeys the following finite-size scaling behavior: $\left| \mathrm{d}\tau/\mathrm{d}g|_{g_\mathrm{max}}\right| \sim N^{\mu'}$, where the critical exponent for the Sierpi\'nski triangular lattice is $\mu'_3 \simeq 0.880$ as shown in Fig. \ref{figure5}(b) and the one for the Sierpi\'nski pyramid lattice is $\mu'_5 \simeq 0.807$ as shown in Fig. \ref{figure6}(b). The correlation length diverges as a power law near the critical point $g_\mathrm{c}$, i.e., $\xi \sim |g-g_\mathrm{c}|^{-\nu}$. After the $n$th iteration, the correlation length scales as $\xi_n=\xi/\lambda_{\mathrm{f}}^n \sim |g_n-g_\mathrm{c}|^{-\nu}$, where the length rescaling factor of the fractal lattices is $\lambda_{\mathrm{f}}= 2$. For the fractal lattices, $N$ and $\lambda_{\mathrm{f}}$ are related by $N=N_0 \lambda_{\mathrm{f}}^{nd_\mathrm{H}}$, where $N_0 =3$ for $\kappa=2$ and $N_0 =4$ for $\kappa=3$. Since $\left|\mathrm{d}\tau/\mathrm{d}g|_{g_\mathrm{max}}\right| \sim \left| \mathrm{d}g_n/\mathrm{d}g|_{g_\mathrm{c}}\right|$, we can derive that \begin{equation} \left| \frac{\mathrm{d}\tau}{\mathrm{d}g} \right|_{g_\mathrm{max}} \sim N^{\frac{1}{\nu d_\mathrm{H}}}. \end{equation} Comparing with $|\mathrm{d}\tau/\mathrm{d}g|_{g_\mathrm{max}}\sim N^{\mu}$, we obtain the relation between the critical exponent and the correlation length exponent, $\mu=1/(\nu d_\mathrm{H})$. This means that $\mu_3=1/(\nu_\mathrm{St} d_\mathrm{H}^\mathrm{St}) \simeq 0.876$ for the Sierpi\'nski triangular lattice and $\mu_5=1/(\nu_\mathrm{Sp} d_\mathrm{H}^\mathrm{Sp}) \simeq 0.810$ for the Sierpi\'nski pyramid lattice. The numerical results listed above, $\mu'_3 \simeq 0.880$ and $\mu'_5 \simeq 0.807$, are in good agreement with the analytical results, i.e., $\mu'_3 \simeq \mu_3$ and $\mu'_5 \simeq \mu_5$.
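To make this comparison concrete, the short Python sketch below (our own illustration, not part of the original analysis) evaluates the predicted exponents $\mu=1/(\nu d)$ from the quantities quoted above: the Hausdorff dimensions $d_\mathrm{H}=\log(\kappa+1)/\log2$ together with $\nu_\mathrm{St}\simeq 0.720$ and $\nu_\mathrm{Sp}\simeq 0.617$ from the figure captions, and, for the triangular lattice with $d=2$, a value $\nu_\mathrm{t}\simeq 0.630$ inferred from $1/(\nu_\mathrm{t} d)\simeq 0.794$.
\begin{verbatim}
import math

# Correlation length exponents: the fractal values are those quoted in the
# figure captions; nu_t for the triangular lattice (d = 2) is inferred from
# mu_1 = 1/(nu_t d) ~ 0.794.
lattices = {
    "triangular":          {"nu": 0.630, "dim": 2.0},
    "Sierpinski triangle": {"nu": 0.720, "dim": math.log(3) / math.log(2)},
    "Sierpinski pyramid":  {"nu": 0.617, "dim": math.log(4) / math.log(2)},
}

for name, p in lattices.items():
    mu = 1.0 / (p["nu"] * p["dim"])  # predicted critical exponent mu = 1/(nu d)
    print(f"{name:20s}  d = {p['dim']:.3f}  nu = {p['nu']:.3f}  mu = {mu:.3f}")
\end{verbatim}
The printed values, $\mu\simeq 0.794$, $0.876$ and $0.810$, reproduce the analytical exponents quoted above.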
Furthermore, the value of $g_\mathrm{max}$ corresponding to the maximum of $\mathrm{d}\tau/\mathrm{d}g$ for each RG iteration gradually tends toward the critical point $g_\mathrm{c}$. This indicates $|g_\mathrm{c} -g_\mathrm{max}| \sim N^{-\mu'}$, where the critical exponent for the Sierpi\'nski triangular lattice is $\mu'_4 \simeq 0.870$ as shown in Fig. \ref{figure5}(c) and the one for the Sierpi\'nski pyramid lattice is $\mu'_6 \simeq 0.826$ as shown in Fig. \ref{figure6}(c). These critical exponents are directly related to the correlation length exponents in the vicinity of the critical points. The correlation length is related to the size of the system in the thermodynamic limit, i.e., $\xi \sim N^{1/d_\mathrm{H}}$. Since $\xi \sim |g - g_{c}|^{-\nu}$, the scaling form $|g_\mathrm{c} - g_\mathrm{max} | \sim N^{-1/(\nu d_\mathrm{H})}$ can be obtained, which implies that $\mu=1/(\nu d_\mathrm{H})$. That is to say, for the Sierpi\'nski triangular lattice, $\mu_4=1/(\nu_\mathrm{St} d_\mathrm{H}^\mathrm{St}) \simeq 0.876$ and for the Sierpi\'nski pyramid lattice, $\mu_6=1/(\nu_\mathrm{Sp} d_\mathrm{H}^\mathrm{Sp}) \simeq 0.810$. The numerical results are also consistent with the analytical results, i.e., $\mu'_4 \simeq \mu_4$ and $\mu'_6 \simeq \mu_6$. Finally, it is possible to make all the data from different RG iterations collapse onto a single curve by choosing a suitable scaling function and taking into account the distance of the maximum of the derivatives of multipartite entanglement from the critical point \cite{Nature2002Scaling,Kargarian2007}. We display $(\mathrm{d}\tau/\mathrm{d}g-\mathrm{d}\tau/\mathrm{d}g|_{g_\mathrm{max}})/N^{\frac{1}{\nu d_\mathrm{H}}}$ versus $N^{\frac{1}{\nu d_\mathrm{H}}}(g-g_\mathrm{max})$ for different RG iterations on the Sierpi\'nski triangular lattice in Fig. \ref{figure5}(d) and on the Sierpi\'nski pyramid lattice in Fig. \ref{figure6}(d). These curves approximately collapse onto a single universal one, which is a manifestation of the existence of finite-size scaling for the multipartite entanglement. These results confirm that the RG implementation of the multipartite entanglement truly captures the critical behavior of the transverse-field quantum Ising model on the fractal lattices. It is well known that the cornerstone of the theory of critical phenomena is universality, which indicates that the critical behavior depends on the dimension of the system and the symmetry of the chosen order parameter \cite{Sachdev1999,Nature2002Scaling}. From the above discussion, it can be confirmed that the critical behaviors of these lattices depend on their dimensions. Furthermore, we want to point out that the multipartite entanglement may be a better choice than bipartite entanglement or quantum correlations for studying many-body systems. On the one hand, the bipartite entanglement has limited ability to capture the characteristics of many-body systems. A typical example is the $N$-qubit GHZ state $\left| G \right\rangle = 1/\sqrt{2}( \left| \uparrow\uparrow \cdots\uparrow\right\rangle +\left| \downarrow\downarrow \cdots\downarrow \right\rangle)$, which has been proved to be an $N$-partite entangled state. However, its reduced density matrix of two spins ($i$ and $j$), $ \rho_{ij} = 1/2 ( \left| \uparrow\uparrow \right\rangle \left\langle \uparrow\uparrow\right| +\left| \downarrow\downarrow \right\rangle \left\langle \downarrow \downarrow \right|)$, is a separable mixed state and has no entanglement. In Fig.
\ref{figure2}, the bipartite entanglement only reaches its maximum near the critical point, while the multipartite entanglement reaches its maximum over a more extensive region. The multipartite entanglement in the ferromagnetic phase may be a valuable resource for quantum information processing tasks. We may lose this important information, and have less insight into the entanglement distribution of the many-body system, if we only consider the bipartite entanglement. On the other hand, although the bipartite entanglement has been successfully used to capture the quantum critical points of some many-body systems, it has been pointed out that the bipartite entanglement may fail to characterize the real quantum critical points \cite{Osborne2002,Qian2005,Yang2005}. For example, the concurrence may show no special behavior at the real critical point of the one-dimensional frustrated spin-1/2 Heisenberg model \cite{Qian2005}. In this sense, the multipartite entanglement provides a global view and more physical insight into the characteristics of many-body systems, and may have some advantages over bipartite entanglement or quantum correlations for studying many-body systems \cite{Montakhab2010,Hofmann2013}. \section{\label{section3}Quantum coherence and quantum phase transitions in lattices} In this section, we choose the quantum coherence as an indicator to study the QPTs in the transverse-field quantum Ising model on the triangular lattice and fractal lattices by using the QRG method. It is noted that the existence of a QPT is independent of the chosen physical quantity. In order to quantify the amount of quantum coherence, the $l_1$-norm and quantum relative entropy measures of coherence have been proposed in Ref. \cite{PRLcoherence}. Besides, some other effective quantifiers of quantum coherence, such as the quantum coherence based on the trace distance and the quantum Jensen-Shannon divergence (QJSD), have been put forward in later works \cite{Shao2015The,Rana2016Trace,Radhakrishnan2016}. Here, we choose the quantum coherence based on the QJSD \cite{Radhakrishnan2016} to study the QPTs of the lattices. The QJSD is a measure of distinguishability between two quantum states \cite{PhysRevA.72.052310} \begin{equation} J(\rho,\sigma)=S\left(\frac{\rho+\sigma}{2}\right)-\frac{S(\rho)+S(\sigma)}{2}, \end{equation} where $S(\rho)=-\mathrm{Tr}(\rho \log_2 \rho)$ is the von Neumann entropy. In Ref. \cite{PhysRevA.77.052311}, the metric character of the QJSD has been discussed and a true metric based on the square root of the QJSD has been proposed as follows \begin{equation} D(\rho,\sigma)=\sqrt{J(\rho,\sigma)}. \end{equation} This metric satisfies the triangle inequality in addition to the other distance axioms, and its metric properties make it a valuable tool. Moreover, this property has been proven to hold for qubit and qudit systems \cite{Bri2008Properties,Lin2017Spectral,Radhakrishnan2016}, and the measure of quantum coherence based on the square root of the QJSD is given by \begin{equation} C(\rho)=\sqrt{S\left(\frac{\rho+\rho_{\rm{dia}}}{2}\right)-\frac{S(\rho)+S(\rho_{\rm{dia}})}{2}} \end{equation} where $\rho_{\rm{dia}}$ is the incoherent state obtained from $\rho$ by deleting all off-diagonal elements \cite{PRLcoherence}. Quantum coherence is usually ascribed to the off-diagonal elements of a density matrix with respect to a reference basis.
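As a concrete illustration (our own sketch, not part of the original derivation), once a reference basis is fixed the measure $C(\rho)$ can be evaluated directly from the eigenvalues of the density matrices involved; the following Python fragment computes it for the single-qubit state $|+\rangle\langle+|$ written in the $\sigma^z$ eigenbasis.
\begin{verbatim}
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # discard (numerically) zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

def qjsd_coherence(rho):
    # C(rho) based on the square root of the QJSD between rho and its
    # dephased (incoherent) counterpart rho_dia
    rho_dia = np.diag(np.diag(rho))
    mix = 0.5 * (rho + rho_dia)
    j = von_neumann_entropy(mix) - 0.5 * (von_neumann_entropy(rho)
                                          + von_neumann_entropy(rho_dia))
    return float(np.sqrt(max(j, 0.0)))

# Example: |+><+| in the sigma^z basis, a maximally coherent qubit state
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(qjsd_coherence(plus))   # approximately 0.558
\end{verbatim}
For a state that is already diagonal in the reference basis the function returns zero, as expected for an incoherent state.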
We fix the computational basis $\left\lbrace |0\rangle, |1\rangle \right\rbrace $ as the reference basis, where $ |0\rangle$ and $ |1\rangle$ are the eigenvectors of the spin operator $\sigma^z$. \subsection{Triangular lattice} \begin{figure} \caption{The evolution of quantum coherence $C$ versus $g$ for different RG iterations on the triangular lattice.} \label{figure7} \end{figure} First, we investigate the quantum coherence between two nearest-neighbor sites on the triangular lattice. As shown in Fig. \ref{figure1a}, the basic cluster contains seven sites; for simplicity, we use the quantum coherence $C(\rho_{12})$ between sites $1$ and $2$ to study the QPT, where the reduced density matrix $\rho_{12}$ is obtained by tracing over the sites $3,\cdots,7$. The quantum coherence $C$ as a function of $g$ for different RG iterations on the triangular lattice is plotted in Fig. \ref{figure7}. Compared with Fig. \ref{figure2}, it is clear that two different saturated values of quantum coherence develop; however, the behaviors of the quantum coherence in the two phases are completely opposite to those of the multipartite entanglement. This is due to the fact that the quantum coherence is basis-dependent. We choose the eigenvectors of $\sigma^z$ as the reference basis and call it the $\sigma^z$-basis. When the coupling strength $g$ is small enough, the external field induces quantum fluctuations and leads to all the spins being polarized along the direction of the field, i.e., the $x$ axis. This means that the $\sigma^x$ terms contribute to the off-diagonal elements of the density matrix, which leads to the generation of quantum coherence in the $\sigma^z$-basis \cite{PhysRevA.96.012341}. Then a saturated value of quantum coherence is reached in the thermodynamic limit. As $g$ increases, the exchange coupling gradually plays the dominant role and keeps the system in the ferromagnetic phase, so the contribution of the $\sigma^x$ terms to the quantum coherence almost disappears. Therefore, the quantum coherence tends to zero in the ferromagnetic phase after enough RG iterations. \begin{figure}\label{figure8} \end{figure} In order to obtain the precise location of the critical point and the order of the QPT, we examine the derivative of the quantum coherence $\mathrm{d}C/\mathrm{d}g$ as a function of $g$ for different RG iterations in Fig. \ref{figure8}(a). It is quite clear from Fig. \ref{figure8}(a) that the first derivative of the quantum coherence exhibits a nonanalytic behavior in the vicinity of the critical point, which is a feature of a second-order QPT. The position of the minimum of $\mathrm{d}C/\mathrm{d}g$ gradually approaches the critical point as the system size increases. A linear behavior of $\ln(|\mathrm{d}C/\mathrm{d}g|_{g_\mathrm{min}})$ versus $\ln(N)$ is displayed in Fig. \ref{figure8}(b). The corresponding scaling behavior is $|\mathrm{d}C/\mathrm{d}g|_{g_\mathrm{min}}\sim N^{\mu''_1}$, where the critical exponent $\mu''_1 \simeq 0.793$. Another scaling behavior is shown in Fig. \ref{figure8}(c), i.e., $|g_\mathrm{c} -g_\mathrm{min}| \sim N^{-\mu''_2}$, where the critical exponent $\mu''_2\simeq 0.795$. Using an analysis similar to that in the previous section, we can obtain the relation between the correlation length exponent and the critical exponent for the triangular lattice, namely $\mu_1=\mu_2=1/(d\nu_\mathrm{t}) \simeq 0.794$, which indicates that the numerical results are consistent with the analytical one, i.e., $\mu''_1 \simeq \mu''_2 \simeq \mu_1$. In Fig.
\ref{figure8}(d), we plot $(\mathrm{d}C/\mathrm{d}g-\mathrm{d}C/\mathrm{d}g|_{g_\mathrm{min}})/N^{\frac{1}{\nu d}}$ versus $N^{\frac{1}{\nu d}}(g-g_\mathrm{min})$ for different RG iterations. All the data for different $N$ collapse onto a single curve, which provides a manifestation of the existence of finite-size scaling for the quantum coherence. It can be concluded that the quantum coherence is a good indicator of the criticality of the transverse-field quantum Ising model on the triangular lattice. The multipartite entanglement and quantum coherence are both able to capture the characteristics of the ground state and show special behavior at the real critical point of the triangular lattice, which leads to the appearance of similar properties. At the same time, these similar properties enable us to obtain consistent critical exponents for the multipartite entanglement and quantum coherence, i.e., $\mu'_1 \simeq \mu'_2 \simeq \mu''_1 \simeq \mu''_2$, which reflects the universality of the QPT and also demonstrates that the existence of the QPT is independent of the chosen physical quantity. \begin{figure} \caption{The evolution of quantum coherence $C$ versus $g$ for different RG iterations on the (a) Sierpi\'nski triangular lattice and (b) Sierpi\'nski pyramid lattice.} \label{figure9a} \label{figure9b} \label{figure9} \end{figure} \begin{figure}\label{figure10} \end{figure} \subsection{Sierpi\'nski fractal lattice} Next, we turn to the transverse-field quantum Ising model on the fractal lattices and focus on the quantum coherence between the two nearest-neighbor sites labeled by $1$ and $2$ in Figs. \ref{figure1b} and \ref{figure1c}. The results for the quantum coherence $C$ versus the reduced coupling strength $g$ for different RG iterations on the Sierpi\'nski triangular and pyramid lattices are shown in Fig. \ref{figure9}. It is found that the evolutions of quantum coherence on the fractal lattices are similar to those on the triangular lattice. As the size of the lattice increases, the quantum coherence develops two different saturated values corresponding to two different phases. The quantum coherence is stronger in the paramagnetic phase than in the ferromagnetic phase for both fractal lattices. The positions of the intersection points are different since the critical points of these two fractal lattices are not the same. As the thermodynamic limit is approached by increasing the number of RG iterations, the quantum coherence can be used to detect the critical points of the transverse-field quantum Ising model on the fractal lattices. \begin{figure}\label{figure11} \end{figure} The nonanalytic features of the first derivatives of quantum coherence at the critical points of the Sierpi\'nski triangular and pyramid lattices are given in Figs. \ref{figure10}(a) and \ref{figure11}(a), respectively. Both systems exhibit singular behavior as the number of RG iterations increases. We also explore the finite-size scaling behaviors of the renormalized quantum coherence close to the critical points of the fractal lattices. The linear behaviors of $\ln(\left| \mathrm{d}C/\mathrm{d}g\right|_{g_\mathrm{min}})$ versus $\ln(N)$ are revealed in Figs. \ref{figure10}(b) and \ref{figure11}(b). Numerical analysis confirms that the minimum of $\mathrm{d}C/\mathrm{d}g$ obeys the following finite-size scaling behavior: $\left| \mathrm{d}C/\mathrm{d}g|_{g_\mathrm{min}}\right| \sim N^{\mu''}$, where the critical exponent for the Sierpi\'nski triangular lattice is $\mu''_3 \simeq 0.879$ as shown in Fig.
\ref{figure10}(b) and the one for the Sierpi\'nski pyramid lattice is $\mu''_5 \simeq 0.808$ as shown in Fig. \ref{figure11}(b). Figs. \ref{figure10}(c) and \ref{figure11}(c) present the results of our analysis for another kind of scaling behavior of $g_{\rm{min}}$ in terms of the system size $N$, $|g_\mathrm{c} -g_\mathrm{min}| \sim N^{-\mu''}$, where $\mu''_4 \simeq 0.869$ for the Sierpi\'nski triangular lattice and $\mu''_6 \simeq 0.815$ for the Sierpi\'nski pyramid lattice. The relations between the correlation length exponents and the critical exponents of quantum coherence can be obtained analytically as well. In the case of the Sierpi\'nski triangular lattice, $\mu_3=\mu_4=1/(\nu_\mathrm{St} d_\mathrm{H}^\mathrm{St}) \simeq 0.876$, and in the case of the Sierpi\'nski pyramid lattice, $\mu_5=\mu_6=1/(\nu_\mathrm{Sp} d_\mathrm{H}^\mathrm{Sp}) \simeq 0.810$. The numerical results are in agreement with the analytical ones. It can be seen from Figs. \ref{figure10}(d) and \ref{figure11}(d) that the curves corresponding to different system sizes clearly collapse onto a single universal curve. These results confirm that the RG implementation of the quantum coherence truly captures the critical behavior of the transverse-field quantum Ising model on the fractal lattices. We want to emphasize that the study of quantum coherence on fractal lattices provides more insight into the characteristics of the fractal lattices. First, the role of the Hausdorff dimensions in determining the relations between the correlation length exponents and the critical exponents of quantum coherence is confirmed again. Second, the critical exponents of quantum coherence are consistent with those of the multipartite entanglement, namely $\mu''_3 \simeq \mu''_4 \simeq \mu'_3 \simeq \mu'_4$ for the Sierpi\'nski triangular lattice and $\mu''_5 \simeq \mu''_6 \simeq \mu'_5 \simeq \mu'_6$ for the Sierpi\'nski pyramid lattice. These results not only demonstrate that the existence of the QPT is independent of the chosen physical quantity, but also indicate the universality of the QPT. \begin{figure} \caption{The critical exponents of the triangular lattice (red circle), Sierpi\'nski triangular lattice (blue square) and Sierpi\'nski pyramid lattice (black triangle) with error bars denoting the simulation errors.} \label{figure12} \end{figure} Finally, based on the analytical and numerical results obtained for the critical exponents, we display the critical exponents of the triangular lattice and the Sierpi\'nski fractal lattices with error bars denoting the simulation errors in Fig. \ref{figure12}. The error bars originate from the standard deviation and indicate the deviations between the numerical and analytical results. It can be confirmed again that the numerical values of the critical exponents obtained from the multipartite entanglement and quantum coherence are consistent with the analytical ones. \section{\label{section4}Conclusions} We have investigated the behavior of multipartite entanglement and quantum coherence across the quantum phase transitions of the transverse-field quantum Ising model on the triangular lattice and Sierpi\'nski fractal lattices by employing the QRG method. It is shown that the quantum criticality of these high-dimensional models is closely related to the behavior of the multipartite entanglement and quantum coherence. As the thermodynamic limit is approached, the multipartite entanglement and quantum coherence for these models both develop two different values corresponding to two phases, i.e., the ferromagnetic phase and the paramagnetic phase.
However, the behaviors of the multipartite entanglement and quantum coherence in the two phases are completely different. The multipartite entanglement in the ferromagnetic phase is richer than that in the paramagnetic phase, since GHZ-like states are more likely to exist in the ferromagnetic phase. In contrast, the quantum coherence in the paramagnetic phase is stronger than that in the ferromagnetic phase, because the external field induces quantum fluctuations and polarizes the spins along the direction of the field, so that the $\sigma^x$ terms contribute to the off-diagonal elements of the density matrix, which leads to the generation of quantum coherence. Moreover, the singularities and finite-size scaling behaviors for each lattice can be obtained by calculating the first derivatives of the multipartite entanglement and quantum coherence. Although a similar analysis has been performed using the bipartite entanglement in Ref. \cite{PhysRevA.95}, the authors have only given one kind of scaling behavior for each lattice. In contrast, we have obtained all the scaling behaviors mentioned in Ref. \cite{Nature2002Scaling}, especially the ones which characterize how the critical points $g_\mathrm{c}$ of these models are approached as the system size increases. The critical exponents are related to the correlation length exponents and the dimensions of the lattices, which is due to the fact that the universality of a quantum phase transition depends on the effective dimension. The multipartite entanglement and quantum coherence are both able to capture the characteristics of the ground states and show special behavior at the critical points, which leads to the appearance of similar properties and consistent critical exponents. This reflects the universality of the QPT and also demonstrates that the existence of the QPT is independent of the chosen physical quantity. In general, the multipartite entanglement and quantum coherence are both good indicators for detecting the quantum phase transitions in the triangular lattice and Sierpi\'nski fractal lattices. We expect our results to be of interest for a wide range of applications in other high-dimensional lattices with the help of the QRG method. \section{Acknowledgments} This project was supported by the National Natural Science Foundation of China (Grant No. 11274274) and the Fundamental Research Funds for the Central Universities (Grants No. 2017FZA3005 and No. 2016XZZX002-01).
\begin{thebibliography}{53} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Sachdev}(1999)}]{Sachdev1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sachdev}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Phase Transitions}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {address} {England},\ \bibinfo {year} {1999})\BibitemShut {NoStop} \bibitem [{\citenamefont {Nielson}\ and\ \citenamefont {Chuang}(2000)}]{Nielson2000Quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Nielson}}\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Computation and Quantum Information}}}\ (\bibinfo {publisher} {Cambridge University Press,},\ \bibinfo {year} {2000})\BibitemShut {NoStop} \bibitem [{\citenamefont {Osborne}\ and\ \citenamefont {Nielsen}(2002)}]{Osborne2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Osborne}}\ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Nielsen}},\ }\href {\doibase 10.1103/PhysRevA.66.032110} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {66}},\ \bibinfo {pages} {032110} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {A}\ \emph {et~al.}(2002)\citenamefont {A}, \citenamefont {L}, \citenamefont {G},\ and\ \citenamefont {R}}]{Nature2002Scaling} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Osterloh}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Amico}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Falci}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Fazio}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {416}},\ \bibinfo {pages} {608} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wu}\ \emph {et~al.}(2004)\citenamefont {Wu}, \citenamefont {Sarandy},\ and\ \citenamefont {Lidar}}]{Wu2004Quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~A.}\ \bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Sarandy}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Lidar}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {250404} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Amico}\ \emph {et~al.}(2008{\natexlab{a}})\citenamefont {Amico}, \citenamefont {Fazio}, \citenamefont {Osterloh},\ and\ \citenamefont {Vedral}}]{Amico2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Amico}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Fazio}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Osterloh}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Vedral}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages} {517} (\bibinfo {year} {2008}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wilson}(1975)}]{Wilson1975The} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~G.}\ \bibnamefont {Wilson}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {773} (\bibinfo {year} {1975})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jafari}\ and\ \citenamefont {Langari}(2007)}]{jafari2007phase} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jafari}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Langari}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {014412} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ma}\ \emph {et~al.}(2011)\citenamefont {Ma}, \citenamefont {Liu},\ and\ \citenamefont {Kong}}]{Ma2011Entanglement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.-W.}\ \bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {S.-X.}\ \bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-M.}\ \bibnamefont {Kong}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {062309} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yao}\ \emph {et~al.}(2012)\citenamefont {Yao}, \citenamefont {Li}, \citenamefont {Zhang}, \citenamefont {Yin}, \citenamefont {Chen}, \citenamefont {Guo},\ and\ \citenamefont {Han}}]{PhysRevA.86.042102} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yao}}, \bibinfo {author} {\bibfnamefont {H.-W.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {C.-M.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Z.-Q.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {G.-C.}\ \bibnamefont {Guo}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.-F.}\ \bibnamefont {Han}},\ }\href {\doibase 10.1103/PhysRevA.86.042102} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {042102} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Efrati}\ \emph {et~al.}(2014)\citenamefont {Efrati}, \citenamefont {Wang}, \citenamefont {Kolan},\ and\ \citenamefont {Kadanoff}}]{Efrati2014Real} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Efrati}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kolan}}, \ and\ \bibinfo {author} {\bibfnamefont {L.~P.}\ \bibnamefont {Kadanoff}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {647} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Usman}\ \emph {et~al.}(2015)\citenamefont {Usman}, \citenamefont {Ilyas},\ and\ \citenamefont {Khan}}]{Usman2015Quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Usman}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ilyas}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Khan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {032327} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xu}\ \emph {et~al.}(2017)\citenamefont {Xu}, \citenamefont {Kong}, \citenamefont {Liu},\ and\ \citenamefont {Yin}}]{PhysRevA.95} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-L.}\ \bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {X.-M.}\ \bibnamefont {Kong}}, \bibinfo {author} {\bibfnamefont {Z.-Q.}\ \bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont {C.-C.}\ \bibnamefont {Yin}},\ }\href {\doibase 10.1103/PhysRevA.95.042327} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {042327} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jullien}\ \emph {et~al.}(1978)\citenamefont {Jullien}, \citenamefont {Pfeuty}, \citenamefont {Fields},\ and\ \citenamefont {Doniach}}]{PhysRevB.18.3568} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jullien}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Pfeuty}}, \bibinfo {author} {\bibfnamefont {J.~N.}\ \bibnamefont {Fields}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Doniach}},\ }\href {\doibase 10.1103/PhysRevB.18.3568} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
B}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {3568} (\bibinfo {year} {1978})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Penson}\ \emph {et~al.}(1979)\citenamefont {Penson}, \citenamefont {Jullien},\ and\ \citenamefont {Pfeuty}}]{PhysRevB.19.4653} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~A.}\ \bibnamefont {Penson}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jullien}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Pfeuty}},\ }\href {\doibase 10.1103/PhysRevB.19.4653} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {4653} (\bibinfo {year} {1979})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Miyazaki}\ \emph {et~al.}(2011)\citenamefont {Miyazaki}, \citenamefont {Nishimori},\ and\ \citenamefont {Ortiz}}]{Miyazaki2011Real} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Miyazaki}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Nishimori}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ortiz}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {051103} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kubica}\ and\ \citenamefont {Yoshida}(2014)}]{kubica2014precise} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kubica}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yoshida}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv: 1402.0619}\ } (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yi}(2013)}]{Yi2013Critical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Yi}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {014105} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yi}(2015)}]{PhysRevE.91.012118} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Yi}},\ }\href {\doibase 10.1103/PhysRevE.91.012118} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {012118} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {MARKHAM}\ \emph {et~al.}(2013)\citenamefont {MARKHAM}, \citenamefont {ANDERS}, \citenamefont {HAJDUŠEK},\ and\ \citenamefont {VEDRAL}}]{markham2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {MARKHAM}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {ANDERS}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {HAJDUŠEK}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {VEDRAL}},\ }\href {\doibase 10.1017/S0960129512000175} {\bibfield {journal} {\bibinfo {journal} {Math. Structures Comput. 
Sci.}\ }\textbf {\bibinfo {volume} {23}},\ \bibinfo {pages} {441–453} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Siomau}(2016)}]{Michael2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Siomau}},\ }\href {\doibase 10.1063/1.4953138} {\bibfield {journal} {\bibinfo {journal} {AIP Conference Proceedings}\ }\textbf {\bibinfo {volume} {1742}},\ \bibinfo {pages} {030017} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Browne}\ \emph {et~al.}(2009)\citenamefont {Browne}, \citenamefont {Briegel}, \citenamefont {Nest}, \citenamefont {Raussendorf},\ and\ \citenamefont {Dür}}]{Browne2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Browne}}, \bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont {Briegel}}, \bibinfo {author} {\bibfnamefont {M.~V.~D.}\ \bibnamefont {Nest}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Raussendorf}}, \ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Dür}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {19} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2002)\citenamefont {Gisin}, \citenamefont {Ribordy}, \citenamefont {Tittel},\ and\ \citenamefont {Zbinden}}]{RevModPhys.74.145} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ribordy}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tittel}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}},\ }\href {\doibase 10.1103/RevModPhys.74.145} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {145} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {S\o{}rensen}\ and\ \citenamefont {M\o{}lmer}(2001)}]{PhysRevLett.86.4431} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {S\o{}rensen}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {M\o{}lmer}},\ }\href {\doibase 10.1103/PhysRevLett.86.4431} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {4431} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Raussendorf}\ and\ \citenamefont {Briegel}(2001)}]{PhysRevLett.86.5188} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Raussendorf}}\ and\ \bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont {Briegel}},\ }\href {\doibase 10.1103/PhysRevLett.86.5188} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {5188} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Horodecki}\ \emph {et~al.}(2007)\citenamefont {Horodecki}, \citenamefont {Horodecki}, \citenamefont {Horodecki},\ and\ \citenamefont {Horodecki}}]{Horodecki2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Horodecki}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Horodecki}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {865} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bai}\ \emph {et~al.}(2014)\citenamefont {Bai}, \citenamefont {Xu},\ and\ \citenamefont {Wang}}]{Bai2014General} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.~K.}\ \bibnamefont {Bai}}, \bibinfo {author} {\bibfnamefont {Y.~F.}\ \bibnamefont {Xu}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.~D.}\ \bibnamefont {Wang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {100503} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Louisell}(1973)}]{Louisell1973Quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~H.}\ \bibnamefont {Louisell}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Statistical Properties of Radiation}}}\ (\bibinfo {publisher} {Wiley},\ \bibinfo {year} {1973})\BibitemShut {NoStop} \bibitem [{\citenamefont {Baumgratz}\ \emph {et~al.}(2014{\natexlab{b}})\citenamefont {Baumgratz}, \citenamefont {Cramer},\ and\ \citenamefont {Plenio}}]{PRLcoherence} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Baumgratz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cramer}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\ }\href {\doibase 10.1103/PhysRevLett.113.140401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {140401} (\bibinfo {year} {2014}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hasenbusch}(2010)}]{Hasenbusch2010A} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hasenbusch}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {174433} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peres}\ and\ \citenamefont {Ballentine}(1995)}]{Peres1995} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Peres}}\ and\ \bibinfo {author} {\bibfnamefont {L.~E.}\ \bibnamefont {Ballentine}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Theory: Concepts and Methods}}}\ (\bibinfo {publisher} {Kluwer Academic Publishers},\ \bibinfo {year} {1995})\ pp.\ \bibinfo {pages} {131--135}\BibitemShut {NoStop} \bibitem [{\citenamefont {Caves}\ \emph {et~al.}(2001)\citenamefont {Caves}, \citenamefont {Fuchs},\ and\ \citenamefont {Rungta}}]{Caves2001} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Caves}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Fuchs}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Rungta}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Found. Phys. Lett.}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {199} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hill}\ and\ \citenamefont {Wootters}(1997)}]{Hill1997} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Hill}}\ and\ \bibinfo {author} {\bibfnamefont {W.~K.}\ \bibnamefont {Wootters}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {5022} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fanchini}\ \emph {et~al.}(2017)\citenamefont {Fanchini}, \citenamefont {Pinto},\ and\ \citenamefont {Adesso}}]{Fanchini2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H. S. }\ \bibnamefont {Dhar}}, \bibinfo {author} {\bibfnamefont {A. K. }\ \bibnamefont {Pal}}, \bibinfo {author} {\bibfnamefont {D.}\ \bibnamefont {Rakshit}},\bibinfo {author} {\bibfnamefont {A.}\ \bibnamefont {Sen(De)}},\ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Sen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Monogamy of Quantum Correlations- A Review, in Lectures on General Quantum Correlations and their Applications, edited by F. F. Fanchini, D. de O. S. Pinto, and G. Adesso, Quantum Science and Technology Series (Springer, Cham)}\ } (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gu}\ \emph {et~al.}(2003)\citenamefont {Gu}, \citenamefont {Lin},\ and\ \citenamefont {Li}}]{gu2003entanglement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.-J.}\ \bibnamefont {Gu}}, \bibinfo {author} {\bibfnamefont {H.-Q.}\ \bibnamefont {Lin}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.-Q.}\ \bibnamefont {Li}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {68}} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Montakhab}\ and\ \citenamefont {Asadian}(2010)}]{Montakhab2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Montakhab}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Asadian}},\ }\href {\doibase 10.1103/PhysRevA.82.062313} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {062313} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kargarian}\ \emph {et~al.}(2008)\citenamefont {Kargarian}, \citenamefont {Jafari},\ and\ \citenamefont {Langari}}]{kargarian2008the} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kargarian}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jafari}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Langari}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {032346} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ma}\ and\ \citenamefont {Kong}(2011)}]{ma2011quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.-W.}\ \bibnamefont {Ma}}\ and\ \bibinfo {author} {\bibfnamefont {X.-M.}\ \bibnamefont {Kong}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {042302} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kargarian}\ \emph {et~al.}(2007)\citenamefont {Kargarian}, \citenamefont {Jafari},\ and\ \citenamefont {Langari}}]{Kargarian2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kargarian}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jafari}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Langari}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {060304(R)} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Qian}\ \emph {et~al.}(2005)\citenamefont {Qian}, \citenamefont {Shi}, \citenamefont {Li}, \citenamefont {Song},\ and\ \citenamefont {Sun}}]{Qian2005} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-F.}\ \bibnamefont {Qian}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Shi}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Song}}, \ and\ \bibinfo {author} {\bibfnamefont {C.~P.}\ \bibnamefont {Sun}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo {pages} {012333} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yang}(2005)}]{Yang2005} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~F.}\ \bibnamefont {Yang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {030302(R)} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hofmann}\ \emph {et~al.}(2013)\citenamefont {Hofmann}, \citenamefont {Osterloh},\ and\ \citenamefont {Gühne}}]{Hofmann2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hofmann}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Osterloh}}, \ and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Gühne}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {134101} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shao}\ \emph {et~al.}(2015)\citenamefont {Shao}, \citenamefont {Xi}, \citenamefont {Fan},\ and\ \citenamefont {Li}}]{Shao2015The} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.-H.}\ \bibnamefont {Shao}}, \bibinfo {author} {\bibfnamefont {Z.-J.}\ \bibnamefont {Xi}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Fan}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.-M.}\ \bibnamefont {Li}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {042120} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rana}\ \emph {et~al.}(2016)\citenamefont {Rana}, \citenamefont {Parashar},\ and\ \citenamefont {Lewenstein}}]{Rana2016Trace} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Rana}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Parashar}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lewenstein}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {012110} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Radhakrishnan}\ \emph {et~al.}(2016)\citenamefont {Radhakrishnan}, \citenamefont {Parthasarathy}, \citenamefont {Jambulingam},\ and\ \citenamefont {Byrnes}}]{Radhakrishnan2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Radhakrishnan}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Parthasarathy}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Jambulingam}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Byrnes}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages} {150504} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Majtey}\ \emph {et~al.}(2005)\citenamefont {Majtey}, \citenamefont {Lamberti},\ and\ \citenamefont {Prato}}]{PhysRevA.72.052310} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~P.}\ \bibnamefont {Majtey}}, \bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont {Lamberti}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~P.}\ \bibnamefont {Prato}},\ }\href {\doibase 10.1103/PhysRevA.72.052310} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo {pages} {052310} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lamberti}\ \emph {et~al.}(2008)\citenamefont {Lamberti}, \citenamefont {Majtey}, \citenamefont {Borras}, \citenamefont {Casas},\ and\ \citenamefont {Plastino}}]{PhysRevA.77.052311} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont {Lamberti}}, \bibinfo {author} {\bibfnamefont {A.~P.}\ \bibnamefont {Majtey}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Borras}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Casas}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Plastino}},\ }\href {\doibase 10.1103/PhysRevA.77.052311} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {052311} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bri\"et}\ and\ \citenamefont {Harremo\"es}(2009)}]{Bri2008Properties} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Bri\"et}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Harremo\"es}},\ }\href {\doibase 10.1103/PhysRevA.79.052311} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {052311} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lin}\ \emph {et~al.}(2018)\citenamefont {Lin}, \citenamefont {Wang},\ and\ \citenamefont {Chen}}]{Lin2017Spectral} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {J.-M.}\ \bibnamefont {Wang}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.-H.}\ \bibnamefont {Chen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. A}\ } \textbf {\bibinfo {volume} {382}}, \bibinfo {pages} {1516} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Radhakrishnan}\ \emph {et~al.}(2017)\citenamefont {Radhakrishnan}, \citenamefont {Ermakov},\ and\ \citenamefont {Byrnes}}]{PhysRevA.96.012341} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Radhakrishnan}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Ermakov}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Byrnes}},\ }\href {\doibase 10.1103/PhysRevA.96.012341} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {012341} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document} \title{Uniform Ergodicity of the Iterated Conditional SMC and Geometric Ergodicity of Particle Gibbs samplers} \author{Christophe Andrieu, Anthony Lee and Matti Vihola} \maketitle \lyxaddress{School of Mathematics, University of Bristol,} \lyxaddress{Department of Statistics, University of Warwick,} \lyxaddress{Department of Statistics, University of Oxford.} \begin{abstract} We establish quantitative bounds for rates of convergence and asymptotic variances for iterated conditional sequential Monte Carlo (i-cSMC) Markov chains and associated particle Gibbs samplers~\citep{andrieu-doucet-holenstein}. Our main findings are that the essential boundedness of potential functions associated with the i-cSMC algorithm provides necessary and sufficient conditions for the uniform ergodicity of the i-cSMC Markov chain, as well as quantitative bounds on its (uniformly geometric) rate of convergence. Furthermore, we show that in many applications of interest the i-cSMC Markov chain cannot even be geometrically ergodic if this essential boundedness does not hold. Our sufficiency and quantitative bounds rely on a novel non-asymptotic analysis of the expectation of a standard normalizing constant estimate with respect to a ``doubly conditional'' SMC algorithm. In addition, our results for i-cSMC imply that the rate of convergence can be improved arbitrarily by increasing $N$, the number of particles in the algorithm, and that in the presence of mixing assumptions, the rate of convergence can be kept constant by increasing $N$ linearly with the time horizon. We translate the sufficiency of the boundedness condition for i-cSMC into sufficient conditions for the particle Gibbs Markov chain to be geometrically ergodic and quantitative bounds on its geometric rate of convergence, which imply convergence of properties of the particle Gibbs Markov chain to those of its corresponding Gibbs sampler. These results complement recently discovered, and related, conditions for the particle marginal Metropolis--Hastings (PMMH) Markov chain. {} \emph{Keywords:} geometric ergodicity; iterated conditional sequential Monte Carlo; Metropolis-within-Gibbs; particle Gibbs; uniform ergodicity \end{abstract} \section{Introduction\label{sec:Introduction}} Particle Markov chain Monte Carlo (P-MCMC) methods are a set of recently proposed sampling techniques particularly well suited to the Bayesian estimation of static parameters in general state-space models~\citep{andrieu-doucet-holenstein}, although their scope extends beyond this class of models. At an abstract level, once the likelihood function and prior are defined, inference for this class of models relies on a probability distribution $\pi\big({\rm d}\theta\times{\rm d}x\big)$, defined on some measurable space $\left(\Theta\times\mathsf{X},\mathcal{B}(\Theta)\times\mathcal{B}\mathsf{(X)}\right)$, where $\theta$ is generally a low-dimensional static parameter, while $x$, the hidden state of the system, is a large vector with a non-trivial dependence structure. Here, $\mathcal{B}(\,\cdot\,)$ denotes the $\sigma$-algebra associated with the corresponding space. In practice, the complexity of such probability distributions requires the use of sampling techniques to effectively carry out inference. When $\theta$ is known, sequential Monte Carlo (SMC) methods, or particle filters, are particularly suitable for carrying out inference about $x$ by approximately sampling from the conditional distribution $\pi_{\theta}\big({\rm d}x\big)$.
These algorithms rely on interacting particle systems and their performance and accuracy can be improved by increasing the number $N$ of such particles. P-MCMC realises a synthesis of SMC methods and classical Markov chain Monte Carlo (MCMC) methods; that is, it allows the construction of Markov transition probabilities leaving $\pi\big({\rm d}\theta\times{\rm d}x\big)$ at least marginally invariant and from which it is possible to sample realisations $\{(\theta_{i},X_{i}),i\geq0\}$ with attractive efficiency properties. The particle marginal Metropolis--Hastings (PMMH) method is one such algorithm, which takes advantage of the availability of unbiased estimators of the likelihood function to provide an exact approximation of an idealized algorithm which computes the likelihood function exactly. The algorithm simply consists of replacing the true value of the likelihood function required to implement the standard Metropolis--Hastings (MH) algorithm with estimators, but is nevertheless guaranteed to be correct in that it leaves the required distribution of interest marginally invariant. In PMMH, the estimator of the likelihood is a byproduct of an SMC algorithm, whose accuracy can be improved by increasing $N$. In contrast, the particle Gibbs (PGibbs) sampler~\citep{andrieu-doucet-holenstein} involves approximating a Gibbs sampler which constructs a Markov chain $\{(\theta_{i},X_{i}),i\geq0\}$ by repeatedly sampling from $\pi_{\theta}\big({\rm d}x\big)$ and $\pi_{x}\big({\rm d}\theta\big)$ in turn. In practice, sampling from $\pi_{\theta}\big({\rm d}x\big)$ may be particularly difficult, and the conditional SMC (cSMC)~\citep{andrieu-doucet-holenstein} update is a Markov transition probability $P_{N,\theta}$ which leaves $\pi_{\theta}\big({\rm d}x\big)$ invariant, therefore allowing the implementation of a Metropolis-within-Gibbs algorithm, that is, a Markov transition probability leaving $\pi\big({\rm d}\theta\times{\rm d}x\big)$ invariant. The cSMC relies for its construction, as suggested by its name, on an SMC-like procedure and it is expected that as $N$ increases $P_{N,\theta}$ approaches $\pi_{\theta}\big({\rm d}x\big)$. While PMMH methods have been studied in a series of papers~\citep{andrieu-roberts,andrieu2015,lee-latuszynski,sherlock2015,Doucet07032015}, a theoretical study of the PGibbs is still missing. Indeed, it has been shown that as $N$ increases, the performance of the PMMH approaches that of the exact MH algorithm, but the question of the approximation of the Gibbs sampler by PGibbs has not been addressed to date. We note, however, that a study of one of its components, the cSMC update, has recently been undertaken in~\citep{chopin:singh:2013}, in which a coupling argument is central to the analysis. We refer to the Markov chain obtained by iterating the cSMC algorithm for a fixed target distribution as the iterated cSMC (i-cSMC) here in order to distinguish it from that of the PGibbs. The present manuscript addresses questions concerning the i-cSMC similar to those of~\citep{chopin:singh:2013}, but our results differ in many respects and complement their findings in several directions. At a technical level, our approach seems to be more straightforward in the scenario considered, relies on weaker assumptions for uniform convergence, which we prove are necessary and sufficient, and leads to quantitative bounds on performance measures in terms of the number $N$ of particles involved.
We additionally transfer sufficient conditions for uniform ergodicity of the i-cSMC Markov chain into sufficient conditions for geometric ergodicity of the associated PGibbs Markov chain, the main motivation behind our work. This allows us in particular to show that under some conditions the PGibbs is asymptotically as efficient as the Gibbs sampler as the number $N$ of particles increases. Contemporaneously with the first version of the present manuscript~\citep{andrieu2013uniform}, \citep{lindsten_pg} have also provided essentially the same sufficient conditions for the uniform convergence of the i-cSMC Markov chain (Theorem~\ref{thm:THEtheorem}, Section~\ref{sec:The-i-CSMC}), using a different proof technique. Here we have further established that the aforementioned conditions are necessary for uniform convergence in general, and also for geometric ergodicity in many realistic scenarios (Section~\ref{sec:conjectureonboundedness}). Similarly to us, \citep{lindsten_pg} also provide quantitative bounds and associated scaling properties of the i-cSMC, albeit under a different set of specialised conditions (a detailed comparison of the assumptions is provided after Theorem~\ref{thm:quantitative-on-epsilon_N} at the end of Section~\ref{sec:The-i-CSMC}). We have also very recently become aware of the contribution~\citep{del2014feynman} to the analysis of the properties of the cSMC, established using the formalism of~\citep{delmoral:2004}, but the practical implications of those results are unclear. Similarly to~\citep{chopin:singh:2013}, \citep{lindsten_pg} do not attempt to address the practically important question of how uniform ergodicity of the i-cSMC can be translated into geometric ergodicity of the PGibbs sampler, an issue we address in Section~\ref{sec:The-particle-Gibbs}. In Section~\ref{sec:Discussion} we contrast the results obtained in this paper concerning the i-cSMC and PGibbs algorithms with known results concerning other particle MCMC methods and draw final conclusions. Similarly to SMC methods, the cSMC and associated algorithms are complex mathematical objects which require the introduction of sometimes overwhelming notation which may obscure the main ideas. In the next section we attempt to remedy this by presenting our results in a simplified scenario, which captures our main ideas, before moving on to the general scenario. \section{Statement of our results in a simplified scenario} We first explain our results on a particularly simple instance of the i-cSMC algorithm. This should provide the reader with the essence of the results proved later on in the general scenario, while its simple structure will allow us to outline the main idea behind our proof in the general set-up (in Section~\ref{sec:Minorization-and-Dirichlet}). Assume we are interested in sampling from a probability distribution $\pi$ on some measurable space $\bigl(\mathsf{X},\mathcal{B}\bigl(\mathsf{X}\bigr)\bigr)$. We define the probability distribution $\tilde{\pi}$ on $\{1,\ldots,N\}\times\mathsf{X}^{N}$ \begin{equation} \tilde{\pi}\left(k,{\rm d}z^{1:N}\right)=\frac{1}{N}\pi({\rm d}z^{k})\prod_{j=1,j\neq k}^{N}M({\rm d}z^{j})\quad,\label{eq:artificialforiSIR} \end{equation} for some probability distribution $M$ defined on $\bigl(\mathsf{X},\mathcal{B}\bigl(\mathsf{X}\bigr)\bigr)$ such that $M(S)>0$ for any $S\in\mathcal{B}\bigl(\mathsf{X}\bigr)$ with $\pi(S)>0$.
As pointed out in the authors' discussion reply of~\citep{andrieu-doucet-holenstein}, in this simple scenario one can define an MCMC algorithm targeting $\pi$ by iterating the classical sampling importance resampling (SIR) procedure. More specifically, we sample alternately from (a) $Z^{1:N\setminus k}\mid\left(K=k,Z^{k}=z^{k}\right)\sim\prod_{i=1,i\neq k}^{N}M({\rm d}z^{i})$ and (b) $K\mid\left(Z^{1:N}=z^{1:N}\right)\sim\tilde{\pi}(k|z^{1:N})$, where $Z^{1:N\setminus k}:=\bigl(Z^{1},Z^{2},\ldots,Z^{k-1},Z^{k+1},\ldots,Z^{N}\bigr)$. Owing to the fact that this algorithm is a Gibbs sampler on the distribution above and from the standard interlacing property of the two-stage Gibbs sampler, one can check that the sequence $\{Z_{i}^{K_{i}}\}$ defines a Markov chain with invariant distribution $\pi$, and that its transition kernel is, for any $(x,S)\in\mathsf{X}\times\mathcal{B}\bigl(\mathsf{X}\bigr)$, \[ P_{N}(x,S)=\int_{\mathsf{X}^{N-1}}\sum_{k=1}^{N}\frac{G(z^{k})}{\sum_{j=1}^{N}G(z^{j})}\mathbb{I}\{z^{k}\in S\}\prod_{i=2}^{N}M(\mathrm{d}z^{i}) \] with $G(x):=\pi({\rm d}x)/M({\rm d}x)$ and the convention $z^{1}=x$. Our first results are concerned with properties of the homogeneous Markov chain with transition probability $P_{N}$, in terms of $\bar{G}:=\pi-{\rm ess}\sup_{x}G(x)$ and $N$. We refer to the resulting algorithm as iterated SIR (i-SIR). We briefly introduce notions that allow us to make quantitative statements about the Markov chains under study. We use classical Hilbert space techniques for the analysis of reversible Markov chains. Letting $\mu\bigl(\cdot\bigr)$ be a probability distribution defined on some measurable space $\bigl(\mathsf{E},\mathcal{B}\bigl(\mathsf{E}\bigr)\bigr)$, we define the function space \[ L^{2}(\mathsf{E},\mu):=\left\{ f:\mathsf{E}\rightarrow\mathbb{R}:\mu(f^{2})<\infty\right\} , \] where the functions are taken to be measurable; hereafter all functions considered are assumed to be measurable with respect to an appropriate $\sigma$-algebra. Let $\Pi:\mathsf{E}\times\mathcal{B}\bigl(\mathsf{E}\bigr)\rightarrow[0,1]$ be a $\mu$-reversible Markov transition kernel and let $\{\xi_{i},i\geq0\}$ be the stationary Markov chain with transition kernel $\Pi$ (such that $\xi_{0}\sim\mu$). We will use the standard notation for any probability distribution $\nu$ on $\bigl(\mathsf{E},\mathcal{B}\bigl(\mathsf{E}\bigr)\bigr)$ and measurable function $f:\mathsf{E}\rightarrow\mathbb{R}$, \[ \nu\bigl(f\bigr):=\int_{\mathsf{E}}f(x)\nu({\rm d}x)\quad,\quad\Pi f(x):=\int_{\mathsf{E}}f(y)\Pi\bigl(x,{\rm d}y\bigr)\quad, \] and, for $k\geq2$, by induction, \[ \Pi^{k}f(x):=\int_{\mathsf{E}}\Pi\bigl(x,{\rm d}y\bigr)\Pi^{k-1}f(y)\quad. \] We denote $\nu\Pi^{k}f:=\nu\bigl(\Pi^{k}f\bigr)$ and refer to $\nu\Pi^{k}$ as either a probability measure or its corresponding operator on $L^{2}(\mathsf{E},\mu)$. For $f\in L^{2}\bigl(\mathsf{E},\mu\bigr)$, we define the variance of $f$ under $\mu$ as ${\rm var}_{\mu}(f):=\mu(f^{2})-\mu(f)^{2}$ and the ``asymptotic variance'' of $M^{-1}\sum_{i=1}^{M}f\big(\xi_{i}\big)$ for stationary realizations $\{\xi_{i},i\geq0\}$ associated with the homogeneous Markov chain with transition $\Pi$ as \[ \mathrm{var}(f,\Pi):=\lim_{M\rightarrow\infty}\mathrm{var}\left(M^{-1/2}\sum_{i=1}^{M}[f(\xi_{i})-\mu(f)]\right)\quad. \]
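Returning to the i-SIR kernel $P_{N}$ introduced above, a single update is straightforward to simulate, which we illustrate with the following minimal Python sketch. The choices of target and proposal below (a standard normal $\pi$ and a Student-$t$ proposal $M$, so that $G=\pi/M$ is bounded) are purely illustrative and are not part of the algorithms analysed in this paper.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative (hypothetical) choices: pi is N(0,1) and M is a Student-t with
# 3 degrees of freedom, so that G = dpi/dM is bounded (G-bar < infinity).
log_pi = stats.norm(0.0, 1.0).logpdf
log_M = stats.t(df=3).logpdf

def isir_step(x, N):
    """One i-SIR update: keep z^1 = x, draw z^2,...,z^N from M and select an
    index k with probability proportional to G(z^k) = pi(dz^k)/M(dz^k)."""
    z = np.empty(N)
    z[0] = x                                     # conditioned value, slot 1
    z[1:] = stats.t(df=3).rvs(size=N - 1, random_state=rng)
    log_G = log_pi(z) - log_M(z)
    w = np.exp(log_G - log_G.max())              # stabilised multinomial weights
    return z[rng.choice(N, p=w / w.sum())]

# A short run of the chain started at x = 0 with N = 10 particles.
x, chain = 0.0, []
for _ in range(1000):
    x = isir_step(x, N=10)
    chain.append(x)
\end{verbatim}
When $\bar{G}<\infty$, as in this illustration, the quantitative bounds summarised below apply directly to this chain, and increasing $N$ improves the bound on its rate of convergence.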
Some of our results involve norms of signed measures. As in, e.g., \citep{roberts-rosenthal-geometric}, for any signed measure $\nu$ on $\bigl(\mathsf{E},\mathcal{B}\bigl(\mathsf{E}\bigr)\bigr)$ we let \[ \|\nu\|_{TV}:=\frac{1}{2}\sup_{f:\mathsf{E}\rightarrow[-1,1]}\nu\bigl(f\bigr) \] denote the total variation distance and, for $\nu\ll\mu$, we let \begin{equation} \Vert\nu\Vert_{L^{2}(\mathsf{E},\mu)}^{2}:=\int_{\mathsf{E}}\left|\frac{{\rm d}\nu}{{\rm d}\mu}\right|^{2}{\rm d}\mu=\Bigg(\sup_{f\in L^{2}\bigl(\mathsf{E},\mu\bigr),\:\|f\|_{\mu}>0}\frac{|\nu(f)|}{\|f\|_{\mu}}\Bigg)^{2}\quad\label{eq:defL2norm} \end{equation} denote the squared $L^{2}(\mathsf{E},\mu)$ norm, where $\|f\|_{\mu}:=\mu(f^{2})^{1/2}$. Our results can be summarized as follows. \begin{enumerate} \item $P_{N}$ is reversible with respect to $\pi$ and positive, that is, the i-SIR Markov chain has non-negative stationary autocorrelations. \item If $\bar{G}<\infty$ and $N\geq2$, the i-SIR Markov chain is uniformly ergodic with, for any $x\in\mathsf{X}$, \[ \|P_{N}^{n}(x,\cdot)-\pi(\cdot)\|_{TV}\leq\left(1-\frac{N-1}{2\bar{G}+N-2}\right)^{n}\quad. \] \item If $\bar{G}<\infty$, then for any $f\in L^{2}(\mathsf{X},\pi)$, \[ \mathrm{var}{}_{\pi}(f)\leq\mathrm{var}(f,P_{N})\leq\left[2\left(1+\frac{2\bar{G}-1}{N-1}\right)-1\right]\mathrm{var}{}_{\pi}(f)\quad. \] \item If $\bar{G}=\infty$ then the i-SIR Markov chain cannot be geometrically ergodic for any finite $N$. \end{enumerate} The second and third points provide quantitative bounds on standard measures of performance for MCMC algorithms, where the second provides a bound on the uniform (or equivalently uniformly geometric) rate of convergence of the Markov chain. Interest in algorithms such as i-SIR is motivated empirically by observed behaviour in line with the above bounds, as performance improves as $N$ increases, and part of our purpose here is to confirm and quantify theoretically such empirical successes. Moreover, this improvement can often be obtained with little extra computational effort, since on a parallel architecture one can sample from $M$ and evaluate $G$ in parallel, a characteristic of SMC algorithms more generally~\citep{lee-yau-giles-doucet-holmes}. While i-SIR can be used alone to sample from fairly general distributions, it can also be used as a constituent element of more elaborate MCMC schemes. Assume now that we wish to sample from a distribution $\pi$ defined on some measurable space $\left(\Theta\times\mathsf{X},\mathcal{B}(\Theta)\times\mathcal{B}\bigl(\mathsf{X}\bigr)\right)$, often defined for some $S\in\mathcal{B}(\Theta)\times\mathcal{B}(\mathsf{X})$ via (note the different nature of $\pi$ as compared to earlier) \[ \pi(S):=\frac{\int_{S}G_{\theta}(x)M_{\theta}({\rm d}x)\varpi({\rm d}\theta)}{\int_{\Theta\times\mathsf{X}}G_{\theta}(x)M_{\theta}({\rm d}x)\varpi({\rm d}\theta)}\quad, \] where $\{G_{\theta},\theta\in\Theta\}$ is a collection of non-negative potential functions and $\{M_{\theta},\theta\in\Theta\}$ a collection of probability measures which define for each $\theta\in\Theta$ the conditional distributions $\pi_{\theta}\bigl({\rm d}x\bigr):=M_{\theta}\bigl({\rm d}x\bigr)G_{\theta}(x)/\gamma_{\theta}$ with \[ \gamma_{\theta}:=\int_{\mathsf{X}}G_{\theta}(x)M_{\theta}({\rm d}x)\quad. \] The interpretation in a statistical context is that $\varpi$ is the prior distribution for some parameter $\theta$ of interest, whilst $\gamma_{\theta}$ is the likelihood function associated with some observed data and $x$ corresponds to the so-called latent variable(s).
The form of $\gamma_{\theta}$ is often derived from the data being explained by the latent variable $x$ whose \emph{a priori} distribution conditional upon $\theta$ is $M_{\theta}$ and the likelihood function given the data and $x$ is $G_{\theta}(x)$. Assume here that we are able to sample from $\pi_{x}$, the conditional distribution of $\theta$ given $X=x$. For any $\theta\in\Theta$ one can define the i-SIR kernel for any $(x,S)\in\mathsf{X}\times\mathcal{B}\bigl(\mathsf{X}\bigr)$ via \[ P_{N,\theta}(x,S)=\int_{\mathsf{X}^{N-1}}\sum_{k=1}^{N}\frac{G_{\theta}(z^{k})}{\sum_{j=1}^{N}G_{\theta}(z^{j})}\mathbb{I}\{z^{k}\in S\}\prod_{i=2}^{N}M_{\theta}(\mathrm{d}z^{i})\quad, \] with $z^{1}=x$, so that the invariant distribution associated with $P_{N,\theta}$ is $\pi_{\theta}$, the conditional distribution of $X$ given $\theta$. One can sample from $\pi({\rm d}\theta\times{\rm d}x)$ with the following Markov transition, defined for any $\bigl(\theta_{0},x,S\bigr)\in\Theta\times\mathsf{X}\times\big(\mathcal{B}(\Theta)\times\mathcal{B}(\mathsf{X})\big)$ via \[ \Phi{}_{N}(\theta_{0},x;S):=\int_{S}P_{N,\theta}(x,{\rm d}y)\pi_{x}({\rm d}\theta)\quad, \] which can be viewed as an exact approximation of the Gibbs sampler defined via \[ \Gamma(\theta_{0},x;S):=\int_{S}\pi_{\theta}({\rm d}y)\pi_{x}({\rm d}\theta)\quad. \] The term exact approximation refers to the fact that while $P_{N,\theta}$ can be thought of as an approximation of the conditional distribution $\pi_{\theta}$ the resulting algorithm converges to $\pi$ and can be made arbitrarily close to $\Gamma$ as we increase $N$ as explained below -- we will refer to this algorithm and its generalisation as the particle Gibbs (PGibbs) sampler. Throughout the paper we will use the following convention: we will say $f\in L^{2}\bigl(\mathsf{E},\pi\bigr)$ with $\mathsf{E}=\Theta$ (resp. $\mathsf{E}=\mathsf{X}$) to mean that $f:\mathsf{E}\rightarrow\mathbb{R}$ is square integrable under the relevant marginal of $\pi$, or $f:\Theta\times\mathsf{X}\rightarrow\mathbb{R}$ does not depend on $x$ (resp. $\theta$) and is square integrable under the relevant marginal of $\pi$. This should not lead to any possible confusion. Letting $\bar{G}:=\pi-{\rm ess}\sup_{\theta,x}\frac{G_{\theta}(x)}{\gamma_{\theta}}$, our results for the PGibbs sampler, are as follows \begin{enumerate} \item Assume the $\Gamma$ Markov chain is such that there exists $\beta\in(0,1]$ such that for any $f:\mathsf{X}\rightarrow[-1,1]$ and $\nu\ll\pi$ \[ \left|\nu\Gamma^{n}(f)-\pi(f)\right|\leq\Vert\nu-\pi\Vert_{L^{2}(\mathsf{X},\pi)}\left(1-\beta\right)^{n}\quad. \] If $\bar{G}<\infty$, and $N\geq2$, then for any $f:\mathsf{X}\rightarrow[-1,1]$ and $\nu\ll\pi$ \[ \left|\nu\Phi_{N}^{n}(f)-\pi(f)\right|\leq\Vert\nu-\pi\Vert_{L^{2}(\mathsf{X},\pi)}\left(1-\beta_{N}'\right)^{n}\quad, \] where $\beta_{N}'$ satisfies \[ \beta_{N}'\geq\frac{N-1}{2\bar{G}+N-2}\beta\quad. \] \item For any $f\in L^{2}(\mathsf{X},\pi)$ and $N\geq2$, the asymptotic variance ${\rm var}(f,\Phi_{N})$ satisfies \[ {\rm var}\bigl(f,\Gamma\bigr)\leq{\rm var}(f,\Phi_{N})\leq\frac{2\bar{G}-1}{N-1}{\rm var}_{\pi}(f)+\left(1+\frac{2\bar{G}-1}{N-1}\right){\rm var}(f,\Gamma)\,. \] \item For any $f\in L^{2}\bigl(\Theta,\pi\bigr)$ and $N\geq2$, the asymptotic variance ${\rm var}(f,\Phi_{N})$ satisfies \[ {\rm var}\bigl(f,\Gamma\bigr)\leq{\rm var}(f,\Phi_{N})\leq\left(1+\frac{2\bar{G}-1}{N-1}\right){\rm var}\bigl(f,\Gamma\bigr)-\left(\frac{2\bar{G}-1}{N-1}\right){\rm var}_{\pi}(f)\quad. 
\] \end{enumerate} In the sequel, we prove similar results in the more general (and complex) scenario where $P_{N,\theta}$ is defined by a general cSMC algorithm with multinomial resampling, but the key ideas and results are similar (Section~\ref{sec:The-i-CSMC}). The results concerning the general form of the PGibbs sampler, from which its convergence in the sense of points 1--3 above follows, can be found in Section~\ref{sec:The-particle-Gibbs}. \section{The i-cSMC and its properties\label{sec:The-i-CSMC}} We mostly follow the notation of~\citep{delmoral:2004} and use the following conventions for lists, indices and superscripts. For $N\in\mathbb{N}$, we denote $[N]:=\bigl\{1,\ldots,N\bigr\}$, and for any $p\in\mathbb{N}$, $\mathbf{k},\mathbf{l}\in[N]^{p}$ and $u_{k}^{l}:\mathbb{N}^{2}\rightarrow\mathsf{E}$ (for a generic set $\mathsf{E}$ dependent on the context) we will use the notation $u_{\mathbf{k}}^{\mathbf{l}}$ to mean $\bigl(u_{k_{1}}^{l_{1}},u_{k_{2}}^{l_{2}},\dots,u_{k_{p}}^{l_{p}}\bigr)$, and whenever there is no dependence on $l$ (resp. $k$) of $u_{k}^{l}$ we simply ignore this superscript (resp. this index). We will also use the notation, for $k,l\in\mathbb{N}$ such that $l\geq k$, $k:l:=\bigl(k,k+1,\ldots,l\bigr)$. Let $\bigl(\mathsf{Z},\mathcal{B}\bigl(\mathsf{Z}\bigr)\bigr)$ be a measurable space and for some $T\geq1$ define a family of Markov transition probabilities on this space $\bigl\{ M_{t}\bigl(\cdot,\cdot\bigr),t\in[T]\bigr\}$ with the convention that for $t=1$ and any $z\in\mathsf{Z}$, $M_{1}(z,{\rm d}u)=M_{1}({\rm d}u)$ and a family of measurable non-negative functions, the potentials $G_{t}:\mathsf{Z}\rightarrow[0,\infty)$, again for $t\in[T]$. We first define an inhomogeneous Markov chain $\{Z_{1},\ldots,Z_{T}\}$ on $\mathsf{X}:=\mathsf{Z}^{T}$ endowed with the product $\sigma-$algebra $\mathcal{B}\bigl(\mathsf{X}\bigr)=\mathcal{B}(\mathsf{Z})^{T}$ and with probability distribution $\mathbb{P}\bigl(\cdot\bigr)$ and associated expectation $\mathbb{E}\bigl(\cdot\bigr)$ such that for $t=1$, the initial distribution is $\mathbb{P}\left(Z_{1}\in{\rm d}z_{1}\right):=M_{1}({\rm d}z_{1})$, and for $t=2,\ldots,T$ the transition probability is given by $M_{t}$, i.e. \[ \qquad\mathbb{P}\left(Z_{t}\in{\rm d}z_{t}\middle|Z_{t-1}=z_{t-1}\right):=M_{t}(z_{t-1},{\rm d}z_{t})\quad. \] We define for $p\in[T]$ and $f_{p}:\mathsf{Z}^{p}\rightarrow\mathbb{R}$ \[ \gamma_{p}(f_{p}):=\mathbb{E}\left(f_{p}(Z_{1},\ldots,Z_{p})\prod_{t=1}^{p}G_{t}\bigl(Z_{t}\bigr)\right), \] and can define for any $S\in\mathcal{B}(\mathsf{X})$ the probability distribution $\pi$ (which will be the target distribution of interest) \begin{equation} \pi(S):=\frac{\gamma_{T}(\mathbb{I}\bigl\{\,\cdot\,\in S\bigr\})}{\gamma_{T}}\quad,\label{eq:defofpiforSMCframework} \end{equation} where $\mathbb{I}\bigl\{\cdot\bigr\}$ denotes the indicator function and $\gamma_{T}:=\gamma_{T}\bigl(1\bigr)$. For $l>k\ge0$, we define \[ M_{k,l}(z_{k},{\rm d}z_{k+1:l}):=\prod_{t=k+1}^{l}M_{t}(z_{t-1},{\rm d}z_{t})\quad. \] Note in particular that with the convention above, for any $l\geq2$ and $z_{0}\in\mathsf{Z}$, $M_{0,l}\bigl(z_{0},{\rm d}z_{1:l}\bigr):=M_{1}({\rm d}z_{1})\times M_{1,l}\bigl(z_{1},{\rm d}z_{2:l}\bigr)$. The iterated conditional SMC (i-cSMC) is a family of homogeneous Markov chains, with state-space $\bigl(\mathsf{X},\mathcal{B}\bigl(\mathsf{X}\bigr)\bigr)$, indexed by $N\in\mathbb{N}$ (the concrete meaning of $N$ shall become clearer below). 
We denote by $P_{N}\bigl(\cdot,\cdot\bigr):\mathsf{X}\times\mathcal{B}\bigl(\mathsf{X}\bigr)\rightarrow[0,1]$ the corresponding Markov transition kernels, which we now define. To that end, we first detail for any $N\in\mathbb{N}$ the probability distribution of the conditional SMC (cSMC) algorithm, which corresponds to a process defined on the extended space $\mathsf{W}:=\bigl(\mathsf{Z}^{N}\times[N]^{N}\bigr)^{T-1}\times\mathsf{Z}^{N}\times[N]$ endowed with the corresponding product $\sigma$-algebra $\mathcal{B}\bigl(\mathsf{W}\bigr)$, of which $P_{N}$ is a simple by-product. Our focus is on a particular implementation of the algorithm corresponding to ``multinomial resampling''; other schemes are considered in~\citep{chopin:singh:2013}. For any $x\in\mathsf{X}$ and with $\mathbf{1}\in\{1\}^{T}$ we define the process $\{Z_{t},A_{t},t=1,\ldots,T\}$ on $\mathsf{W}$ through \begin{align} \mathbb{P}_{\mathbf{1},x}^{N}\left(Z_{1}\in{\rm d}z_{1}\right): & =\delta_{x_{1}}({\rm d}z_{1}^{1})\prod_{i=2}^{N}M_{1}({\rm d}z_{1}^{i})\label{eq:def_P_=00007B1,x=00007D-time-1} \end{align} and for $t\in\{2,\ldots,T\}$ \begin{eqnarray} & & \mathbb{P}_{\mathbf{1},x}^{N}\big(Z_{t}\in{\rm d}z_{t},A_{t-1}=a_{t-1}\,\big|\,Z_{1:t-1}=z_{1:t-1},A_{1:t-2}=a_{1:t-2}\big)\nonumber \\ & & =\mathbb{P}_{\mathbf{1},x}^{N}\left(Z_{t}\in{\rm d}z_{t},A_{t-1}=a_{t-1}\left|Z_{t-1}=z_{t-1}\right.\right)\nonumber \\ & & =\delta_{x_{t}}({\rm d}z_{t}^{1})\mathbb{I}\{a_{t-1}^{1}=1\}\prod_{i=2}^{N}\Bigg(\sum_{k=1}^{N}\frac{G_{t-1}(z_{t-1}^{k})}{\sum_{j=1}^{N}G_{t-1}(z_{t-1}^{j})}\mathbb{I}\left\{ a_{t-1}^{i}=k\right\} M_{t}(z_{t-1}^{k},{\rm d}z_{t}^{i})\Bigg)\quad,\label{eq:def_P_=00007B1,x=00007D-other-times} \end{eqnarray} where we keep $k$ to emphasize that we are sampling from that mixture. For the last iteration we only require one index and point out that whereas $A_{t}\in[N]^{N}$ for $t=1,\ldots,T-1$, we have $A_{T}\in[N]$ following \[ \mathbb{P}_{\mathbf{1},x}^{N}\left(A_{T}=k\left|Z_{T}=z_{T}\right.\right)=\frac{G_{T}(z_{T}^{k})}{\sum_{j=1}^{N}G_{T}(z_{T}^{j})}\quad. \] The stochastic process defined by $\mathbb{P}_{{\bf 1},x}^{N}$ is referred to as the conditional SMC algorithm because it is closely related to a standard SMC algorithm, but where $x$ is a ``fixed path'' with lineage ${\bf 1}$. However, as remarked in~\citep{andrieu-doucet-holenstein}, $\mathbb{P}_{\mathbf{1},x}^{N}$ is not a conditional distribution of $\mathbb{P}^{N}\bigl(\cdot\bigr)$, the standard SMC algorithm whose definition here is deferred to~\citep[Appendix~\ref{sec:Comparison-with-Particle}]{andrieu2015uniformsupplement}. We note further that, in order to simplify presentation, we have focused here on the scenario where the lineage of $x$ is ${\bf 1}$, but that we could also use, as in~\citep{andrieu2015uniformsupplement}, the cSMC with ${\bf k}\in[N]^{T}$ (with associated symbols $\mathbb{P}_{\mathbf{k},x}^{N}$ and $\mathbb{E}_{\mathbf{k},x}^{N}$) corresponding to the process above, but where $\delta_{x_{t}}({\rm d}z_{t}^{1})\mathbb{I}\{a_{t-1}^{1}=1\}$ in (\ref{eq:def_P_=00007B1,x=00007D-other-times}) is replaced with $\delta_{x_{t}}({\rm d}z_{t}^{k_{t}})\mathbb{I}\{a_{t-1}^{k_{t}}=k_{t-1}\}$ and $\delta_{x_{1}}({\rm d}z_{1}^{1})$ with $\delta_{x_{1}}({\rm d}z_{1}^{k_{1}})$ in (\ref{eq:def_P_=00007B1,x=00007D-time-1}).
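For concreteness, the following Python sketch simulates the process $\mathbb{P}_{\mathbf{1},x}^{N}$ above with multinomial resampling, then draws $A_{T}$ and traces the surviving ancestral lineage backwards; this corresponds to one draw from the kernel $P_{N}(x,\cdot)$ defined in the next paragraph. The model at the bottom (Gaussian transitions $M_{t}$ and Gaussian-type potentials $G_{t}$ with pseudo-observations $y$) is purely illustrative and is not taken from this paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def csmc_step(x, N, sample_M, log_G):
    """One i-cSMC transition with multinomial resampling.

    x keeps the conditioned path (lineage (1,...,1), stored in slot 0 here);
    sample_M(t, z_prev, size) draws from M_{t+1}(z_prev, .), z_prev being
    ignored when t == 0; log_G(t, z) evaluates log G_{t+1}(z) elementwise.
    Time indices are 0-based, i.e. t = 0,...,T-1 corresponds to 1,...,T.
    """
    T = len(x)
    Z = np.empty((T, N))
    A = np.zeros((T - 1, N), dtype=int)   # A[t-1, 0] = 0: slot 0 keeps ancestor 0

    Z[0, 0] = x[0]
    Z[0, 1:] = sample_M(0, None, N - 1)
    for t in range(1, T):
        logw = log_G(t - 1, Z[t - 1])
        w = np.exp(logw - logw.max())
        A[t - 1, 1:] = rng.choice(N, size=N - 1, p=w / w.sum())
        Z[t, 0] = x[t]
        Z[t, 1:] = sample_M(t, Z[t - 1, A[t - 1, 1:]], N - 1)

    # Final index A_T and backward tracing of its ancestral lineage.
    logw = log_G(T - 1, Z[T - 1])
    w = np.exp(logw - logw.max())
    k = rng.choice(N, p=w / w.sum())
    path = np.empty(T)
    for t in range(T - 1, -1, -1):
        path[t] = Z[t, k]
        if t > 0:
            k = A[t - 1, k]
    return path

# Purely illustrative model: AR(1) transitions and Gaussian pseudo-observations y.
T_len, y = 20, np.zeros(20)

def sample_M(t, z_prev, size):
    return rng.normal(0.0 if t == 0 else 0.9 * z_prev, 1.0, size)

def log_G(t, z):
    return -0.5 * (y[t] - z) ** 2

x = np.zeros(T_len)
for _ in range(100):
    x = csmc_step(x, N=50, sample_M=sample_M, log_G=log_G)
\end{verbatim}
As for i-SIR, the $N-1$ proposals and potential evaluations at each time step can be generated in parallel on a suitable architecture.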
For any $\mathbf{i}:=\bigl(i_{1},i_{2},\ldots,i_{T}\bigr)\in[N]^{T}$, $z_{1:T}\in\bigl(\mathsf{Z}^{N}\bigr)^{T}$, $a_{1:T}:=(a_{1},\ldots,a_{T})\in\bigl([N]^{N}\bigr)^{T-1}\times[N]$ and $S\in\mathcal{B}\bigl(\mathsf{X}\bigr)$ define \begin{equation} I_{\mathbf{i}}\bigl(z_{1:T},a_{1:T},S\bigr):=\mathbb{I}\{z_{1:T}^{\mathbf{i}}\in S,i_{T}=a_{T}\}\prod_{t=1}^{T-1}\mathbb{I}\{i_{t}=a_{t}^{i_{t+1}}\}\quad.\label{eq:def_of_I_i} \end{equation} Then the transition kernel of the iterated conditional SMC (i-cSMC), in the multinomial sampling scenario, is given for any $x\in\mathsf{X}$ and $S\in\mathcal{B}\left(\mathsf{X}\right)$ by \begin{equation} P_{N}(x,S):=\mathbb{E}_{\mathbf{1},x}^{N}\left[{\textstyle \sum_{\mathbf{i}\in[N]^{T}}}I_{\mathbf{i}}\bigl(Z_{1:T},A_{1:T},S\bigr)\right]\quad,\label{eq:PN_defn} \end{equation} that is, conditional upon $x$ we consider the probability distribution of those trajectories $Z_{1:T}^{\mathbf{i}}$ generated by the cSMC which form a lineage compatible with the lineages defined by the random variables $A_{1:T}$. Our main results concerning the i-cSMC algorithm are the following (our results concerning the particle Gibbs sampler are provided in Section~\ref{sec:The-particle-Gibbs}). We will denote by $\pi_{t}$ the corresponding marginal distribution of $\pi$ (see (\ref{eq:definitionpi_t:u}) for a precise definition). \begin{thm} \label{thm:THEtheorem}For $N\geq2$ the i-cSMC algorithm with kernel $P_{N}$ \begin{enumerate}[label=(\alph*)] \item is reversible with respect to $\pi$ and defines a positive operator, \item \label{enu:GboundedResults}if for all $t\in\{1,\ldots,T\}$ $\pi_{t}-{\rm ess}\sup_{z_{t}}G_{t}(z_{t})<\infty$ then there exists $\epsilon_{N}>0$ such that \begin{enumerate}[label=(\roman*)] \item \label{enu:minorization} for any $(x,S)\in\mathsf{X}\times\mathcal{B}\bigl(\mathsf{X}\bigr)$, \[ P_{N}(x,S)\geq\epsilon_{N}\pi(S)\quad, \] where $1-\epsilon_{N}=O(1/N)$, \item \label{enu:uniformconvergence}for any probability distribution $\nu\ll\pi$ on $\bigl(\mathsf{X},\mathcal{B}(\mathsf{X})\bigr)$ and $k\geq1$ \[ \|\nu P_{N}^{k}\bigl(\cdot\bigr)-\pi\bigl(\cdot\bigr)\|_{L^{2}(\mathsf{X},\pi)}\leq\|\nu-\pi\|_{L^{2}(\mathsf{X},\pi)}(1-\epsilon_{N})^{k}\quad, \] \item for any $x\in\mathsf{X}$ \[ \|\delta_{x}P_{N}^{k}\bigl(\cdot\bigr)-\pi\bigl(\cdot\bigr)\|_{TV}\leq(1-\epsilon_{N})^{k}\quad, \] \item \label{enu:upperboundasymptvar}for any $f\in L^{2}\bigl(\mathsf{X},\pi\bigr)$ \[ \mathrm{var}{}_{\pi}(f)\leq\mathrm{var}(f,P_{N})\leq\left[2\epsilon_{N}^{-1}-1\right]\mathrm{var}{}_{\pi}(f)\quad. \] \end{enumerate} \item \label{enu:iff-condition}if $\pi_{t}$-${\rm ess}\sup_{z_{t}}G_{t}(z_{t})=\infty$ for some $t\in[T]$, then, the i-cSMC kernel $P_{N}$ is not uniformly ergodic for any $N\in\mathbb{N}$, \item \label{enu:non-geometric-statement}if $\pi_{t}$-${\rm ess}\sup_{z_{t}}G_{t}(z_{t})=\infty$ for some $t\in[T]$ then, the i-cSMC kernel $P_{N}$ cannot be geometrically ergodic for any $N\in\mathbb{N}$ if $\pi$ is equivalent to a Lebesgue or counting measure on $\mathsf{X}$. \end{enumerate} \end{thm} \begin{rem} From Lemma~\ref{lem:full-supp}, statement~\ref{enu:non-geometric-statement} holds under a more abstract assumption, but we have chosen this explicit simplified statement for clarity at this point. 
In fact we suspect that~\ref{enu:non-geometric-statement} holds under the assumption $\pi_{t}-{\rm ess}\sup_{z_{t}}G_{t}(z_{t})=\infty$ for some $t\in[T]$ only, that is essential boundedness is a necessary condition for geometric ergodicity; see Conjecture~\ref{conj:general-necessity}. \end{rem} With additional conditions on $\{M_{t},G_{t},t=1,\ldots\}$ one can characterize $\epsilon_{N}$ in Theorem~\ref{thm:THEtheorem}\ref{enu:GboundedResults} further, and in particular characterize the rate at which $N$ should grow in terms of $T$ in order to maintain a set level of performance. This also requires additional notation and following~\citep{delmoral:2004} we define for any $z\in\mathsf{Z}$, $p,q\in\mathbb{N}$, $p\leq q$ and $f_{q}:\mathsf{Z}\rightarrow\mathbb{R}$, \[ Q_{p,q}\bigl(f_{q}\bigr)\bigl(z\bigr):=\mathbb{E}\left[f_{q}\bigl(Z_{q}\bigr)\prod_{k=p}^{q-1}G_{k}\bigl(Z_{k}\bigr)\,\Bigg|\,Z_{p}=z\right]\quad, \] and with the convention $Q_{0,p}(f_{p})(x)=M_{1}Q_{1,p}(f_{p})$ for any $f_{p}:\mathsf{Z}\rightarrow\mathbb{R}$, and \[ \eta_{p}(f_{p}):=\frac{Q_{0,p}(f_{p})}{Q_{0,p}(1)} \] and $\bar{M}_{p,p+1}\bigl(z,\cdot\bigr)=M_{p+1}\bigl(z,\cdot\bigr)$ and for $q>p\geq0$ we have the recursive definition, for any $z_{p}\in\mathsf{Z}$, \[ \bar{M}_{p,q}\bigl(z_{p},\cdot\bigr)=\int M_{p+1}(z_{p},{\rm d}z_{p+1})\bar{M}_{p+1,q}\bigl(z_{p+1},\cdot\bigr)\quad. \] The first condition is rather abstract, and can be viewed as a condition on the $h$-functions investigated in~\citep{whiteley_stability} in the context of stability properties of standard SMC algorithms. \begin{condition} \label{hyp:mixingabstract}There exists a constant $\alpha>0$ such that for any $p,k\in\mathbb{N}$, \[ \sup_{z\in\mathsf{Z}}\frac{Q_{p,p+k}(1)(z)}{\eta_{p}Q_{p,p+k}(1)}\leq\alpha\quad. \] \end{condition} \noindent One can however show that (A\ref{hyp:mixingabstract}) is implied by the following stronger assumption (see Lemma~\ref{lem:A2impliesA1}). \begin{condition}[Strong mixing conditions] \label{hyp:strongmixingpotentialassumptions}There exists $m\in\mathbb{Z}_{+}$ such that \begin{enumerate}[label=(\alph*)] \item \label{hyp:enu:Mcondition}There exists a constant $1\leq\beta<\infty$ such that for any $p\geq1$ and any $(z,z')\in\mathsf{Z}$ and $S\in\mathcal{B}(\mathsf{Z})$, \[ \bar{M}_{p,p+m}(z,S)\leq\beta\bar{M}_{p,p+m}(z',S)\;. \] \item \label{hyp:enu:Gcondition}The potential functions $G_{p}$ satisfy, for some $\delta<\infty$, \[ 1\leq\sup_{z,z'\in\mathsf{Z}^{2},p\in\{1,\ldots,T\}}\frac{G_{p}(z)}{G_{p}(z')}\leq\delta^{1/m}\quad. \] \end{enumerate} \end{condition} \begin{thm} \label{thm:quantitative-on-epsilon_N}Assume that for all $t\in\mathbb{N}$ $\pi_{t}-{\rm ess}\sup_{z_{t}}G_{t}(z_{t})<\infty$ and (A\ref{hyp:mixingabstract}) (or the stronger assumption (A\ref{hyp:strongmixingpotentialassumptions})) holds. Then with $\epsilon_{N}$ as in Theorem~\ref{thm:THEtheorem}\ref{enu:GboundedResults} for any $N\geq2$, there exists $C,\varepsilon>0$ such that with $N=C\times T$, then for any $T\geq1$, $\epsilon_{N}\geq\varepsilon>0$.\end{thm} \begin{rem} Similar results for the PGibbs sampler are provided in Section~\ref{sec:The-particle-Gibbs}. \end{rem} \begin{proof}[Proof of Theorem~\ref{thm:THEtheorem} ] The proofs of the various results are the subject of the following sections. 
More specifically, statement \begin{enumerate}[label=(\alph*)] \item follows from Lemma~\ref{lem:selfajoint_positive} (the latter property was established in~\citep{chopin:singh:2013} and the former noted/proved in~\citep{andrieu-doucet-holenstein,chopin:singh:2013}), \item all parts follow from Corollary~\ref{cor:convergence_epsilon} and~\citep[Proposition~\ref{prop:boundsspectralgapandvarianceforP_N}]{andrieu2015uniformsupplement}, which gathers generic results on $\pi$-invariant Markov chains satisfying \ref{enu:GboundedResults}\ref{enu:minorization}, \item follows from Proposition~\ref{prop:non-uniform}, \item follows from Proposition~\ref{prop:non-geom-easy} and Lemma~\ref{lem:full-supp}; Remark~\ref{rem:strictly_positive}. \end{enumerate} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:quantitative-on-epsilon_N} ] Follows from Proposition~\ref{prop:mixing_bound}, Corollary~\ref{cor:linear_in_T_epsilon_N_bound} and Lemma~\ref{lem:A2impliesA1}. \end{proof} As pointed out in the introduction, soon after completing this work we became aware of~\citep{lindsten_pg}, where a subset of our results has also been independently discovered. This motivates the following comparison. Result~\ref{enu:GboundedResults}\ref{enu:minorization} of Theorem~\ref{thm:THEtheorem} is identical to Theorem~1 of~\citep{lindsten_pg}, but relies on a different proof. Results~\ref{enu:GboundedResults}\ref{enu:uniformconvergence}--\ref{enu:upperboundasymptvar} rely on standard arguments, although~\ref{enu:upperboundasymptvar} does not seem to be well known and establishes informative quantitative bounds. The necessity of our conditions for uniform or geometric ergodicity is not addressed in~\citep{lindsten_pg}. The result of Theorem~\ref{thm:quantitative-on-epsilon_N} corresponds to Proposition~5 of~\citep{lindsten_pg}. The conditions under which Theorem~\ref{thm:quantitative-on-epsilon_N} holds are rather stringent for some applications, in particular in the state-space model scenario. As discussed by~\citep{lindsten_pg}, in that scenario (A\ref{hyp:strongmixingpotentialassumptions}) will essentially only hold in the case where $\mathsf{X}$ is compact. The condition (A\ref{hyp:mixingabstract}) is weaker and more natural in our analysis, but is not currently easy to verify in applications except through (A\ref{hyp:strongmixingpotentialassumptions}). In an attempt to relax (A\ref{hyp:strongmixingpotentialassumptions}), the authors of~\citep{lindsten_pg} investigate another set of specialised assumptions guaranteeing that the result of Theorem~\ref{thm:quantitative-on-epsilon_N} holds even in some non-compact scenarios, provided the number of particles $N$ grows at a rate $T^{1/\gamma}$ for any $\gamma\in(0,1)$, a result in line with what is obtained with the stronger assumption (A\ref{hyp:strongmixingpotentialassumptions}), for which $\gamma=1$ is permissible. This requires the specification of a ``moment assumption'' which aims at controlling the variations of the various quantities involved under the law of the observation process $\{Y_{t},t\geq0\}$. Their approach, however, does not seem to allow one to consider the scaling properties of the PGibbs sampler (i.e. not just the i-cSMC); see their Theorem~6 and Remark~7. More importantly, we note that their results require the law of the data to coincide with that of the specified model for some $\theta^{\star}\in\Theta$, which, although suggestive of what may happen in practice, is always an idealization.
This delicate work is the main focus of the remainder of their investigation while here, in addition to establishing the necessity of some of the conditions, we have focused on the transference of the results obtained for the i-cSMC to the PGibbs sampler (Section~\ref{sec:The-particle-Gibbs}) with the aim of showing that the PGibbs has performance inferior to that of the Gibbs sampler, but arbitrarily close if we increase $N$. \section{Establishing the uniform minorization condition\label{sec:Minorization-and-Dirichlet}} Before proceeding we turn to the i-SIR which is particularly simple to analyze. The reason for detailing the short analysis of this simple scenario is to provide the reader with an overview of the developments which are to follow -- the remainder of the paper essentially replicates the key steps of the argument below, albeit in the more complex SMC framework. Notice that in this scenario $\mathsf{X}=\mathsf{Z}$ since $T=1$. We let $G(x):=\pi({\rm d}x)/M({\rm d}x)$ for any $x\in\mathsf{X}$ and assume that $\bar{G}:=\sup_{x\in\mathsf{X}}G(x)<\infty$. Then for $(x,S)\in\mathsf{X}\times\mathcal{B}\bigl(\mathsf{X}\bigr)$ we can rewrite \allowdisplaybreaks[4] \begin{eqnarray*} P_{N}(x,S) & = & \sum_{k=1}^{N}\int_{\mathsf{X}^{N}}{\textstyle {\displaystyle \frac{\pi({\rm d}z^{k})/M({\rm d}z^{k})}{\sum_{j=1}^{N}G(z^{j})}}}\mathbb{I}\bigl\{ z^{k}\in S\bigr\}\left({\textstyle \delta_{x}({\rm d}z^{1})\prod_{i=2}^{N}}M(\mathrm{d}z^{i})\right)\\ & = & \int_{\mathsf{X}^{N}}\frac{1}{\sum_{j=1}^{N}G(z^{j})}\frac{\pi({\rm d}z^{1})}{M({\rm d}z^{1})}\mathbb{I}\bigl\{ z^{1}\in S\bigr\}\left({\textstyle \delta_{x}({\rm d}z^{1})\prod_{i=2}^{N}}M(\mathrm{d}z^{i})\right)\\ & & +\sum_{k=2}^{N}\int_{\mathsf{X}^{N}}\frac{1}{\sum_{j=1}^{N}G(z^{j})}\mathbb{I}\bigl\{ z^{k}\in S\bigr\}\pi({\rm d}z^{k})\left({\textstyle \delta_{x}({\rm d}z^{1})\prod_{i=2,i\neq k}^{N}}M(\mathrm{d}z^{i})\right)\\ & = & \sum_{k=1}^{N}\int_{\mathsf{X}}\mathbb{E}_{1,x,k,y}\left[\frac{\mathbb{I}\bigl\{ y\in S\bigr\}}{\sum_{j=1}^{N}G(Z^{j})}\right]\left(\mathbb{I}\{k=1\}\frac{\pi({\rm d}x)}{M({\rm d}x)}\delta_{x}({\rm d}y)+\mathbb{I}\{k\neq1\}\pi({\rm d}y)\right), \end{eqnarray*} where $\mathbb{E}_{1,x,k,y}\bigl(\cdot\bigr)$ defines an expectation for the random variables $Z^{1},\ldots,Z^{N}$ associated to the probability distribution \[ {\textstyle \delta_{x}({\rm d}z^{1})\prod_{i=2}^{N}}M(\mathrm{d}z^{i}) \] for $k=1$ and $x=y\in\mathsf{X}$, and \[ {\textstyle \delta_{x,y}({\rm d}z^{1}\times{\rm d}z^{k})\prod_{i=2,i\neq k}^{N}}M(\mathrm{d}z^{i}) \] for $k\neq1$ and $x,y\in\mathsf{X}$. This auxiliary process turns out to be central to our analysis, and will be generalised to the general scenario and called ``doubly'' cSMC (c$^{2}$SMC). Indeed, omitting the term $k=1$ in the representation of $P_{N}$ and by application of Jensen's inequality to the convex mapping $x\mapsto(x+a)^{-1}$ for $x,a\in\mathbb{R}_{+}$ we obtain \begin{eqnarray*} P_{N}(x,S) & \geq & \sum_{k=2}^{N}\int_{\mathsf{X}}\mathbb{I}\bigl\{ y\in S\bigr\}\mathbb{E}_{1,x,k,y}\left[\frac{1}{G(x)+G(y)+\sum_{j=2,j\neq k}^{N}G(Z^{j})}\right]\pi({\rm d}y)\\ & \geq & \sum_{k=2}^{N}\int_{\mathsf{X}}\frac{\mathbb{I}\bigl\{ y\in S\bigr\}}{G(x)+G(y)+N-2}\pi({\rm d}y)\\ & \geq & \frac{N-1}{2\bar{G}+N-2}\pi(S)\quad. 
\end{eqnarray*} This is a uniform minorization condition which immediately implies uniform geometric convergence (see the outline of our results in Section \ref{sec:Introduction}), but in the present situation the result is even stronger in that, in particular, it provides us with quantitative bounds on the dependence of the performance of the algorithm on $N$. Indeed it is a standard result that the minorization constant \[ \epsilon_{N}=\frac{N-1}{2\bar{G}+N-2}=1-\frac{2\bar{G}-1}{2\bar{G}+N-2}\quad, \] provides the upper bound $1-\epsilon_{N}$ on the (geometric) rate of convergence of the algorithm, which here vanishes at an asymptotic rate $N^{-1}$ as $N$ increases. As we shall see the fact that the minorization measure is the invariant distribution leads to a direct lower bound on associated Dirichlet forms associated to $P_{N}$ which in turn provide quantitative bounds on the spectral gap and the associated asymptotic variance. In the remainder of the section we generalize the representation of $P_{N}$ in terms of the c$^{2}$SMC algorithm and ``the estimator of the normalizing constant'' which suggests applying Jensen's inequality as above. This requires us to consider estimates of the resulting expectation in Section~\ref{sec:Estimates-of-the}. In order to proceed further it is required to define the c$^{2}$SMC process, which is essentially similar to the cSMC process but where conditioning is now upon two trajectories $x,y\in\mathsf{X}$. The definition is therefore similar, but for reasons which will become clearer below the second fixed trajectory is set to have a lineage of the general form $\mathbf{k}:=k_{1:T}\in[N]^{T}$. We will use below the convention that $\delta_{a,b}\bigl({\rm d}z^{1}\times{\rm d}z^{k}\bigr)$ reduces to $\delta_{a}({\rm d}z^{1})$ whenever $k=1$. The definition of this process is similar to that of the cSMC algorithm and the distributions involved are defined for $x,y\in\mathsf{X}$ and $\mathbf{k}\in[N]^{T}$ as follows \[ \mathbb{P}_{\mathbf{1},x,\mathbf{k},y}^{N}\left(Z_{1}\in{\rm d}z_{1}\right)=\delta_{x_{1},y_{1}}\bigl({\rm d}z_{1}^{1}\times{\rm d}z_{1}^{k_{1}}\bigr)\prod_{i=2,i\neq k_{1}}^{N}M_{1}({\rm d}z_{1}^{i})\quad, \] and for $t=2,\ldots,T-1$ (with the convention $a_{t-1}^{k,l}:=(a_{t-1}^{k},a_{t-1}^{l})$) \begin{multline*} \mathbb{P}_{\mathbf{1},x,\mathbf{k},y}^{N}\left(Z_{t}\in{\rm d}z_{t},A_{t-1}=a_{t-1}\left|Z_{t-1}=z_{t-1}\right.\right)=\delta_{x_{t},y_{t}}\bigl({\rm d}z_{t}^{1}\times{\rm d}z_{t}^{k_{t}}\bigr)\\ \times\mathbb{I}\{a_{t-1}^{1,k_{t}}=(1,k_{t-1})\}\prod_{i=2,i\neq k_{t}}^{N}\Bigg(\sum_{l=1}^{N}\frac{G_{t-1}(z_{t-1}^{l})}{\sum_{j=1}^{N}G_{t-1}(z_{t-1}^{j})}\mathbb{I}\left\{ a_{t-1}^{i}=l\right\} M_{t}(z_{t-1}^{l},{\rm d}z_{t}^{i})\Bigg) \end{multline*} and \[ \mathbb{P}_{\mathbf{1},x,\mathbf{k},y}^{N}\left(A_{T}=l\left|Z_{T}=z_{T}\right.\right)={\displaystyle {\textstyle \frac{{\displaystyle G_{T}(z_{T}^{l})}}{{\displaystyle {\textstyle \sum_{j=1}^{N}}G_{T}(z_{T}^{j})}}\quad.}} \] We note that although the transitions and the initial distributions are, by the convention, well defined for $k_{t}=1$ and $x_{t}\neq y_{t}$ the distribution above will never be used in such a context. Just as $\mathbb{P}_{\mathbf{1},x}^{N}$ is not a conditional distribution of $\mathbb{P}^{N}\bigl(\cdot\bigr)$, the law of the SMC algorithm, the same holds between $\mathbb{P}_{\mathbf{1},x,\mathbf{k},y}^{N}\bigl(\cdot\bigr)$ and $\mathbb{P}_{\mathbf{1},x}^{N}\bigl(\cdot\bigr)$. 
However, we now provide an important property relating these two probability distributions, which together with (\ref{eq:PN_defn}) will allow us to decompose this transition into key quantities and establish the sought minorization condition. The proof of the following lemma is in~\citep[Appendix~\ref{sec:Proof-of-Lemma_simplecorrespondence}]{andrieu2015uniformsupplement}. \begin{lem} \label{lem:simple_correspondence}For $\mathbf{i}\in\{2,\ldots,N\}^{T}$ and $x\in\mathsf{X}$, \[ \mathbb{E}_{\mathbf{1},x}^{N}\left[I_{\mathbf{i}}\bigl(Z_{1:T},A_{1:T},S\bigr)\right]=\frac{\gamma_{T}}{N^{T}}\int_{\mathsf{X}}\pi({\rm d}y)\times\mathbb{I}\{y\in S\}\times\mathbb{E}_{\mathbf{1},x,\mathbf{i},y}^{N}\left[\frac{1}{\prod_{t=1}^{T}\frac{1}{N}\sum_{j=1}^{N}G_{t}(Z_{t}^{j})}\right]\quad. \] \end{lem} As we shall see, the concentration properties of the ``estimator of the normalizing constant'', defined for any $z_{1:T}\in\bigl(\mathsf{Z}^{N}\bigr)^{T}$ by \[ \hat{\gamma}_{T}^{N}\bigl(z_{1:T}\bigr):=\prod_{t=1}^{T}\frac{1}{N}\sum_{j=1}^{N}G_{t}(z_{t}^{j})\quad, \] play a central role. We first obtain a uniform minorization condition for the i-cSMC transition probability. This simple result establishes the expectation of $\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)$ with respect to a c$^{2}$SMC algorithm as a key quantity of interest, and motivates the non-asymptotic analysis and bounds of Section~\ref{sec:Estimates-of-the}. \begin{prop} \label{prop:uniformminorizationcrude}For any $(x,S)\in\mathsf{X}\times\mathcal{B}\bigl(\mathsf{X}\bigr)$ and $N\geq2$ we have \[ P_{N}(x,S)\geq\int_{S}\frac{\gamma_{T}\times(1-1/N)^{T}}{\mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)\right]}\pi({\rm d}y)\quad. \] \end{prop} \begin{proof} Using (\ref{eq:PN_defn}), we only keep the trajectories for which there is no coalescence with the first trajectory, i.e., we exclude terms such that $i_{t}=1$ for some $t\in[T]$ and obtain \[ P_{N}(x,S)\geq\sum_{\mathbf{i}\in\{2,\ldots,N\}^{T}}\mathbb{E}_{\mathbf{1},x}^{N}\left[I_{\mathbf{i}}\bigl(Z_{1:T},A_{1:T},S\bigr)\right]\quad. \] Consequently, using Lemma~\ref{lem:simple_correspondence}, \begin{eqnarray*} P_{N}(x,S) & \geq & \sum_{i_{1:T}\in[2:N]^{T}}\frac{\gamma_{T}}{N^{T}}\int_{\mathsf{X}}\pi({\rm d}y)\times\mathbb{I}\{y\in S\}\times\mathbb{E}_{\mathbf{1},x,\mathbf{i},y}^{N}\left[\frac{1}{\prod_{t=1}^{T}\frac{1}{N}\sum_{j=1}^{N}G_{t}(Z_{t}^{j})}\right]\\ & = & \frac{\gamma_{T}(N-1)^{T}}{N^{T}}\int_{S}\mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\frac{1}{\prod_{t=1}^{T}\frac{1}{N}\sum_{j=1}^{N}G_{t}(Z_{t}^{j})}\right]\pi({\rm d}y)\quad, \end{eqnarray*} where we have used that the expectations are identical for all $\mathbf{i}\in\{2,\ldots,N\}^{T}$, by exchangeability of the particles with indices in $\{2,\ldots,N\}$. We conclude by application of Jensen's inequality for the convex function $u\mapsto1/u$ for $u\in\mathbb{R}_{+}$. \end{proof} \begin{cor} \label{cor:uniformminorbymu}Let $N\geq2$ and assume that \[ \epsilon_{N}:=\frac{\gamma_{T}\times(1-1/N)^{T}}{\sup_{x,y\in\mathsf{X}}\mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)\right]}>0\quad. \] Then for any $(x,S)\in\mathsf{X}\times\mathcal{B}\bigl(\mathsf{X}\bigr)$, $P_{N}(x,S)\geq\epsilon_{N}\pi(S)$ and from Proposition~\ref{prop:uniformminorizationcrude} all the properties of~\citep[Proposition~\ref{prop:boundsspectralgapandvarianceforP_N}]{andrieu2015uniformsupplement} apply to the i-cSMC with $\varepsilon=\epsilon_{N}$.
\end{cor} The next section is dedicated to finding a useful expression for the expectation $\mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)\right]$ and establishing explicit bounds on this quantity, and therefore on $\epsilon_{N}$ in Corollary~\ref{cor:uniformminorbymu}, under additional assumptions. Before proceeding to the novel analysis, for completeness we gather two known properties of the i-cSMC (in the general set-up) in the following lemma, which will be exploited throughout the remainder of the paper. Both results are immediate upon noticing that the i-cSMC is a two-stage Gibbs sampler on an artificial joint distribution (see (\ref{eq:definitionartificialdistribution}) in~\citep[Appendix~\ref{sec:Proof-of-Lemma_selfadjoint}]{andrieu2015uniformsupplement}, which is a generalization of (\ref{eq:artificialforiSIR})). The results have also been shown in detail in~\citep{chopin:singh:2013}. A proof is included in~\citep[Appendix~\ref{sec:Proof-of-Lemma_selfadjoint}]{andrieu2015uniformsupplement} for completeness. \begin{lem} \label{lem:selfajoint_positive}$P_{N}$, viewed as an operator on $L^{2}\bigl(\mathsf{X},\pi\bigr)$, is self-adjoint and positive. \end{lem} \section{Quantitative bounds for the doubly conditional i-cSMC expectation\label{sec:Estimates-of-the}} In this section we first find an exact expression for $\mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)\right]$ in terms of quantities underpinning the definition of $\pi$ given in Section~\ref{sec:The-i-CSMC} and then move on to provide estimates of the conditional expectation involved in the minorization established in Proposition~\ref{prop:uniformminorizationcrude}, under various assumptions on the aforementioned quantities. Throughout we use the usual convention that $\sum_{\emptyset}=0$ and $\prod_{\emptyset}=1$. We let $G_{p,q}(z):=Q_{p,q}(1)(z)$ and $G_{p,q}^{1+2}:=G_{p,q}\bigl(x_{p}\bigr)+G_{p,q}\bigl(y_{p}\bigr)$. We note that $G_{p,p+1}(z)=G_{p}(z)$ for $p\in[T]$ and we use the convention throughout that for any $z\in\mathsf{Z}$, $G_{0}(z)=1$ and $Q_{0,p}\bigl(f_{p}\bigr)(z):=M_{1}\left(Q_{1,p}(f_{p})\right)$. We write $G_{0,p}:=M_{1}\left(Q_{1,p}(1)\right)$ since $G_{0,p}(z)$ is independent of $z$. Our first result, whose proof can be found in~\citep[Appendix~\ref{sec:Supplementary-quantitative}]{andrieu2015uniformsupplement}, is \begin{prop} \label{prop:doubleconditionalSMC}Let $x,y\in\mathsf{X}$ and $N\geq2$. Then, \[ \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)\right]=\frac{1}{N^{T}}\sum_{s=1}^{T+1}(N-2)^{T+1-s}\sum_{\mathbf{i}\in\mathcal{I}_{T+1,s}}G_{0,i_{1}}C_{T,s}\bigl(\mathbf{i},x,y\bigr)\quad, \] where for any $s=1,\ldots,k$, \[ \mathcal{I}_{k,s}:=\left\{ i_{1},\ldots,i_{s}\in\mathbb{N}^{s}:T-k+1<i_{1}<\cdots<i_{s}=T+1\right\} \quad, \] and for $\mathbf{i}\in\mathcal{I}_{k,s}$ \[ C_{k,s}\bigl(\mathbf{i},x,y\bigr):=\prod_{m=1}^{s-1}\bigl[G_{i_{m},i_{m+1}}\bigl(x_{i_{m}}\bigr)+G_{i_{m},i_{m+1}}\bigl(y_{i_{m}}\bigr)\bigr]\quad. \] \end{prop} \begin{rem} While the expectation of interest here has been hitherto uninvestigated, the form of Proposition~\ref{prop:doubleconditionalSMC} is reminiscent of non-asymptotic results in~\citep{cerou-delmoral-guyader}, in which second moments of $\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)$ are analyzed with respect to the law of a standard SMC algorithm.
\end{rem} We now turn to estimates of the expectation above, starting with very minimal assumptions which allow us to establish the minorization condition required to apply~\citep[Proposition~\ref{prop:boundsspectralgapandvarianceforP_N}]{andrieu2015uniformsupplement} and deduce most of our results, without the need for assumptions on the dynamic of the system---the number of particles is however required to grow exponentially in order to maintain a set level of performance. We show subsequently that with stronger assumptions on $\{M_{t},G_{t}\}_{t=1}^{T}$ it is possible to show that $N$ should grow linearly with $T$ to ensure that a set level of performance is maintained. \begin{prop} \label{prop:boundepsilonNwithuniformboundG}Assume that for all $t\in\{1,\ldots,T\}$, $\bar{G}_{t}:=\sup_{z\in\mathsf{Z}}G_{t}(z)<\infty$, then for any $N\geq2$ \[ \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)\right]\leq\gamma_{T}\left\{ 1+\left[1-\left(1-\frac{2}{N}\right)^{T}\right]\left[\frac{\prod_{t=1}^{T}\bar{G}_{t}}{\gamma_{T}}-1\right]\right\} \quad. \] \end{prop} \begin{proof} The assumption on the potentials implies that for any $p,q\in\mathbb{N}$ with $p<q$ we have $G_{p,q}\leq\prod_{k=p}^{q-1}\bar{G}_{k}$, and from Proposition~\ref{prop:doubleconditionalSMC} we have \begin{eqnarray*} \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)\right] & = & \sum_{s=1}^{T+1}\left(\frac{N-2}{N}\right)^{T+1-s}\frac{2^{s-1}}{N{}^{s-1}}\sum_{\mathcal{I}_{T+1,s}}G_{0,i_{\text{1}}}\prod_{m=1}^{s-1}\frac{1}{2}G_{i_{m},i_{m+1}}^{1+2}\\ & \leq & \gamma_{T}\left(\frac{N-2}{N}\right)^{T}+\prod_{k=1}^{T}\bar{G}_{k}\times\sum_{s=2}^{T+1}\binom{T}{s-1}\left(\frac{N-2}{N}\right)^{T+1-s}\frac{2^{s-1}}{N{}^{s-1}}\\ & = & \gamma_{T}\left(\frac{N-2}{N}\right)^{T}+\left[1-\left(\frac{N-2}{N}\right)^{T}\right]\prod_{k=1}^{T}\bar{G}_{k}\quad, \end{eqnarray*} and the result follows.\end{proof} \begin{cor} \label{cor:convergence_epsilon}Propositions~\ref{prop:uniformminorizationcrude} and~\ref{prop:boundepsilonNwithuniformboundG} together imply that for any $x,S\in\mathsf{X}\times\mathcal{B}(\mathsf{X})$, \[ P_{N}(x,S)\geq\epsilon_{N}\pi(S)\quad\text{with}\quad\epsilon_{N}=\frac{(1-1/N)^{T}}{1+\left[1-\left(1-\frac{2}{N}\right)^{T}\right]\left[\frac{\prod_{t=1}^{T}\bar{G}_{t}}{\gamma_{T}}-1\right]}\quad \] and $\lim_{N\rightarrow\infty}\epsilon_{N}=1$. \end{cor} It should be clear that despite Corollary~\ref{cor:convergence_epsilon}, the term $\prod_{t=1}^{T}\bar{G}_{t}/\gamma_{T}$ typically grows exponentially fast with $T$ whenever the potentials are not constant functions. Therefore, Proposition~\ref{prop:boundepsilonNwithuniformboundG} suggests that the number of particles $N$ should grow exponentially with $T$ in general. However, stronger assumptions on the system under consideration will allow us to maintain a given lower bound on $\epsilon_{N}$ by increasing $N$ only linearly with $T$. We first state our main result using the abstract condition (A\ref{hyp:mixingabstract}) and then show that classical strong mixing conditions (A\ref{hyp:strongmixingpotentialassumptions}) imply (A\ref{hyp:mixingabstract}). \begin{prop} \label{prop:mixing_bound}Assume (A\ref{hyp:mixingabstract}), then for any $N\geq2$ \[ \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)\right]\leq\gamma_{T}\left(1+\frac{2(\alpha-1)}{N}\right)^{T}\quad. 
\] \end{prop} \begin{proof} First notice that for any $1\leq k\leq n$ \[ Q_{0,n}(1)=Q_{0,k}(1)\frac{Q_{0,n}(1)}{Q_{0,k}(1)}=Q_{0,k}(1)\eta_{k}Q_{k,n}(1)\quad, \] and therefore for any $s\in\{1,\ldots,T\}$ and $0<i_{1}<\cdots<i_{s-1}<i_{s}=T+1$ with the notation defined earlier, \[ Q_{0,T}(1)=Q_{0,i_{1}}(1)\prod_{k=1}^{s-1}\eta_{i_{k}}Q_{i_{k},i_{k+1}}(1)=G_{0,i_{1}}\prod_{k=1}^{s-1}\eta_{i_{k}}G_{i_{k},i_{k+1}}\quad, \] and from (A\ref{hyp:mixingabstract}), with $\bar{G}_{p,q}:=\sup_{z\in\mathsf{Z}}G_{p,q}(z)$, and applying Proposition \ref{prop:doubleconditionalSMC} yields the following upper bound for $\mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)\right]$: \begin{eqnarray*} & {\displaystyle \sum_{s=1}^{T+1}} & \left(\frac{N-2}{N}\right)^{T+1-s}\frac{2^{s-1}}{N{}^{s-1}}\sum_{0<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}G_{0,i_{\text{1}}}\prod_{m=1}^{s-1}\frac{1}{2}G_{i_{m},i_{m+1}}^{1+2}\\ & \leq & \gamma_{T}\left(\frac{N-2}{N}\right)^{T}+\gamma_{T}\sum_{s=2}^{T+1}\left(\frac{N-2}{N}\right)^{T+1-s}\frac{2^{s-1}}{N{}^{s-1}}\sum_{\mathcal{I}_{T+1,s}}\frac{G_{0,i_{\text{1}}}}{G_{0,i_{1}}}\prod_{m=1}^{s-1}\frac{\bar{G}_{i_{m},i_{m+1}}}{\eta_{i_{k}}G_{i_{k},i_{k+1}}}\\ & \le & \gamma_{T}\sum_{s=1}^{T+1}\binom{T}{s-1}\left(\frac{N-2}{N}\right)^{T+1-s}\frac{2^{s-1}}{N{}^{s-1}}\alpha^{s-1}\quad, \end{eqnarray*} and we conclude by an application of the binomial theorem.\end{proof} \begin{cor} \label{cor:linear_in_T_epsilon_N_bound}Propositions~\ref{prop:uniformminorizationcrude} and~\ref{prop:boundepsilonNwithuniformboundG} together imply that for any $(x,S)\in\mathsf{X}\times\mathcal{B}(\mathsf{X})$, \[ P_{N}(x,S)\geq\epsilon_{N}\pi(S)\quad\text{with}\quad\epsilon_{N}=\left(\frac{1-1/N}{1+\frac{2(\alpha-1)}{N}}\right)^{T}\quad. \] Now, let $N-1\geq CT$ for some $C>0$. Then $\epsilon_{N}\geq\exp\left(-\frac{2\alpha-1}{C}\right)$.\end{cor} \begin{proof} Propositions~\ref{prop:uniformminorizationcrude} and~\ref{prop:mixing_bound} together imply that \[ \epsilon_{N}\geq\left(1+\frac{2\alpha-1}{N-1}\right)^{-T}. \] Since $(N-1)\geq CT$ for some $C>0$, and $\log(1+x)\leq x$ for all $x\geq0$, \[ \left(1+\frac{2\alpha-1}{N-1}\right)^{T}\le\left(1+\frac{2\alpha-1}{CT}\right)^{T}\leq\exp\left(\frac{2\alpha-1}{C}\right)\quad. \qedhere \] \end{proof} \begin{rem} \label{rem:tuning}The combination of the upper bound of ${\rm var}(f,P_{N})$ in Theorem~\ref{thm:THEtheorem} with Corollary~\ref{cor:linear_in_T_epsilon_N_bound} suggests a rough rule of thumb to select $N$ for the i-cSMC Markov kernel. In particular, there is generally a tradeoff between iterating a less computationally intensive Markov kernel more times and iterating a more computationally intensive expensive fewer times. This suggests that one should minimize the function $f(N):=N{\rm var}(f,P_{N})$. While an analytic expression for ${\rm var}(f,P_{N})$ is not available we can minimize its upper bound \[ (CT+1)\left\{ 2\exp\left(\frac{2\alpha-1}{C}\right)-1\right\} , \] with respect to $C$. Assuming that we are in the scenario where $N\gg1$ and therefore $CT+1\approx CT$ one then finds the unique minimum \[ C^{*}=\frac{2\alpha-1}{{\rm LambertW}(-\frac{1}{2\exp(1)})+1}\approx1.302\left(2\alpha-1\right)\;, \] (where\textcolor{black}{{} ${\rm Lambert_{W}}$ is the principal branch of the Lambert W }function) or correspondingly \[ \epsilon_{N}^{*}\approx0.464\,. \] \end{rem} Hence, under (A\ref{hyp:mixingabstract}) it is only required for $N$ to scale linearly with $T$ in order to maintain a non-vanishing ergodicity rate. 
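The constants in Remark~\ref{rem:tuning} are easy to reproduce numerically; the short Python check below (illustrative only, relying on SciPy's implementation of the principal branch of the Lambert W function) recovers $C^{*}\approx1.302\,(2\alpha-1)$ and $\epsilon_{N}^{*}\approx0.464$ for an arbitrary choice of $\alpha$.
\begin{verbatim}
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

alpha = 2.0                     # arbitrary illustrative value
a = 2 * alpha - 1

# Closed-form minimiser of C * (2*exp(a/C) - 1), cf. the remark above.
C_star = a / (np.real(lambertw(-1 / (2 * np.e))) + 1)
print(C_star / a)               # approximately 1.302
print(np.exp(-a / C_star))      # epsilon_N^* approximately 0.464 (alpha-free)

# Cross-check by direct numerical minimisation of the same bound.
res = minimize_scalar(lambda C: C * (2 * np.exp(a / C) - 1),
                      bounds=(1e-3, 100.0), method="bounded")
print(res.x / a)                # again approximately 1.302
\end{verbatim}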
Following, e.g.,~\citep{delmoral:2004,cerou-delmoral-guyader} we make the following assumptions on $\{M_{t}\}$ and the potentials $\{G_{t}\}$ which combined define an $m$-step ``strong mixing'' condition which automatically implies (A\ref{hyp:mixingabstract}). The following result relies on classical arguments~\citep[Lemma 4.3]{delmoral:2004,cerou-delmoral-guyader} \begin{lem} \label{lem:A2impliesA1}Assume (A\ref{hyp:strongmixingpotentialassumptions}). Then for any $k\in\mathbb{Z}_{+}$ we have \[ \sup_{z,z'\in\mathsf{Z}^{2}}\frac{Q_{p,p+k}(1)(z)}{Q_{p,p+k}(1)(z')}\leq\beta\delta\quad, \] i.e., (A\ref{hyp:mixingabstract}) is satisfied. \end{lem} \section{Necessity of the boundedness assumption and a conjecture\label{sec:conjectureonboundedness}} Proposition~\ref{prop:boundepsilonNwithuniformboundG} showed that the i-cSMC kernel is uniformly ergodic if the potentials are bounded. We study here the opposite case, where at least one of the potentials is unbounded. We discover that then the algorithm cannot be uniformly ergodic (Proposition~\ref{prop:non-uniform}), and in many cases the algorithm cannot be geometrically ergodic (Proposition~\ref{prop:non-geom-easy} and Lemma~\ref{lem:full-supp}; Remark~\ref{rem:strictly_positive}). We believe that the latter holds in general (Conjecture~\ref{conj:general-necessity}), but a proof has remained elusive. This dichotomy of algorithms which are uniformly ergodic and sub-geometrically ergodic would be in perfect analogy with the behaviour of the independent Metropolis--Hastings~\citep[~Theorem 2.1]{mengersen-tweedie}. We will denote hereafter the marginal densities of $\pi$ by \begin{equation} \pi_{t:u}(A):=\pi(\mathsf{Z}^{t-1}\times A\times\mathsf{Z}^{T-u})\qquad\text{for \ensuremath{A\in\mathcal{B}(\mathsf{Z}^{u-t+1})},}\label{eq:definitionpi_t:u} \end{equation} where $1\le t\le u\le T$ and we use the shorthand $\pi_{t}(A):=\pi_{t:t}(A)$. In this section, we will assume that $\mathsf{S}\in\mathcal{B}(\mathsf{Z})^{T}$ is a fixed set such that for all $x\in\mathsf{S}$, $\prod_{t=1}^{T}G_{t}(x_{t})>0$ and $\pi(\mathsf{S})=1$. Further, $\mathsf{S}$ contains all possible starting points of the algorithm, that is, we assume that the state space of the i-cSMC is $\mathsf{S}$. In the discrete case, the minimal $\mathsf{S}$ consists of the points of positive $\pi$-measure, and in the continuous case where $\pi$ admits a density, the set $\mathsf{S}$ can be taken as the set where the density is positive. Further, we will assume that $\pi_{1}$ is not concentrated on a single point. We can do this without loss of generality, because if $\pi_{1},\ldots,\pi_{t}$ were concentrated on single points of the state space, the algorithm would be deterministic until $\pi_{t+1}$ and we could consider the i-cSMC for $\pi'=\pi_{t+1:T}$. \begin{prop} \label{prop:non-uniform} Suppose $\pi_{t}$-${\rm ess}\sup_{x_{t}}G_{t}(x_{t})=\infty$ for some $t\in[T]$. Then, the i-cSMC kernel $P_{N}$ is not uniformly ergodic for any $N\in\mathbb{N}$. \end{prop} \begin{proof} If the i-cSMC kernel is uniformly ergodic, then there exist $K<\infty$ and $\rho\in(0,1)$ such that \[ \sup_{x\in\mathsf{S}}\|P_{N}^{n}(x,\,\cdot\,)-\pi(\,\cdot\,)\|_{TV}\le K\rho^{n}\qquad\text{for all \ensuremath{n\in\mathbb{N}}.} \] Fix $\epsilon'>0$ and let $n\in\mathbb{N}$ be such that $K\rho^{n}\le\epsilon'$. We will prove that there exists a set $B_{\epsilon'}\in\mathcal{B}(\mathsf{Z})$ such that $\pi_{1}(B_{\epsilon'})>0$ and $\inf_{x\in B_{\epsilon'}}P_{N}^{n}(x,\{x_{1}\}\times\mathsf{Z}^{T-1})\ge1-\epsilon'$. 
For all $x\in B_{\epsilon'}$, we have $|P_{N}^{n}(x,\{x_{1}\}\times\mathsf{Z}^{T-1})-\pi_{1}(\{x_{1}\})|\le K\rho^{n}\le\epsilon'$. This, with $\epsilon'>0$ small enough, will contradict $\pi_{1}(\{x_{1}\})<1$. Lemma~\ref{lem:choose_c1c2} shows that there exists $\phi:\mathbb{R}_{+}\to\mathbb{R}_{+}$ such that $\lim_{g\to\infty}\phi(g)=0$, and \[ P_{N}(x,\{x_{1}\}^{\complement}\times\mathsf{Z}^{T-1})\le\phi(G_{t}(x_{t})). \] Denote the level set $L_{t}(\underline{G}):=\{x_{t}\in\mathsf{Z}\,:\,G_{t}(x_{t})\le\underline{G}\}$. Lemma~\ref{lem:choose_c1c2} shows that there exists $c_{2}=c_{2}(N)\in[1,\infty)$ such that for $G_{t}(x_{t})\ge\underline{G}$ \[ P_{N}(x,\mathsf{Z}^{t-1}\times L_{t}(\underline{G})\times\mathsf{Z}^{T-t})\le c_{2}\underline{G}/G_{t}(x_{t}). \] Let $\epsilon\in(0,1)$ and define $\delta:=\epsilon/c_{2}$ and let $G_{*}$ be large enough so that $\phi(\delta^{n}G_{*})\le\epsilon$. Define the (sub-probability) kernels $\mu_{\bar{G}}(x,{\rm d}y):=P_{N}(x,{\rm d}y)\delta_{x_{1}}(y_{1})\mathbb{I}\left\{ G_{t}(y_{t})\ge\bar{G}\right\} $ on $(\mathsf{S},\mathcal{B}(\mathsf{S}))$ for any $\bar{G}>0$ and observe that we may estimate \begin{align*} & \mathbb{I}\left\{ G_{t}(x_{t})\ge G_{*}\right\} P_{N}^{n}(x,\{x_{1}\}\times\mathsf{Z}^{T-1})\\ & \ge\mathbb{I}\left\{ G_{t}(x_{t})\ge G_{*}\right\} \int\mu_{\delta G_{*}}(x,{\rm d}y^{(2)})\int\mu_{\delta^{2}G_{*}}(y^{(2)},{\rm d}y^{(3)})\cdots\int\mu_{\delta^{n-1}G_{*}}(y^{(n-1)},{\rm d}y^{(n)}). \end{align*} We may estimate for any $i\in[n]$ and all $x\in\mathsf{S}$ such that $G_{t}(x_{t})\ge\delta^{i-1}G_{*}$, \begin{align*} \int\mu_{\delta^{i}G_{*}}(x,{\rm d}y) & \ge1-P_{N}(x,\{x_{1}\}^{\complement}\times\mathsf{Z}^{T-1})-P_{N}(x,\mathsf{Z}^{t-1}\times L_{t}(\delta^{i}G_{*})\times\mathsf{Z}^{T-t})\\ & \ge1-2\epsilon. \end{align*} We conclude that for $x\in\mathsf{S}$ such that $G_{t}(x_{t})\ge G_{*}$, \[ P_{N}^{n}(x,\{x_{1}\}\times\mathsf{Z}^{T-1})\ge(1-2\epsilon)^{n}. \] This proves the claim, as $\epsilon>0$ was arbitrary.\end{proof} \begin{lem} \label{lem:choose_c1c2}For all $x\in\mathsf{S}$ and all $\underline{G}\in\mathbb{R}_{+}$, \begin{enumerate} \item $P_{N}(x,\{x_{1}\}^{\complement}\times\mathsf{Z}^{T-1})\le\phi(G_{t}(x_{t}))$, \item $P_{N}(x,\mathsf{Z}^{t-1}\times L_{t}(\underline{G})\times\mathsf{Z}^{T-t})\le(N-1)^{2}\underline{G}/G_{t}(x_{t})\quad\text{whenever \ensuremath{G_{t}(x_{t})\ge\underline{G}}}$, \end{enumerate} where $\phi:\mathbb{R}_{+}\to\mathbb{R}_{+}$ is a function such that $\lim_{g\to\infty}\phi(g)=0$.\end{lem} \begin{proof} In both cases, we consider the case $t<T$; the special case $t=T$ can be treated similarly. In order to facilitate the theoretical analysis, we introduce a non-standard implementation of the cSMC which relies on the remark that at any time instant a given particle can only have a maximum number $N$ of children. Hence, when implementing the cSMC it is always possible to draw $N$ children first and then decide who is carried forward according to the standard selection mechanism.
It is in fact possible to push this idea further and, given a fixed $x\in\mathsf{S}$, to sample the following $N$-ary tree of random variables first \begin{align*} \hat{Z}_{1}^{1} & =x_{1}, & \hat{Z}_{1}^{i} & \sim M_{1}(\cdot), & i\in[N]\setminus\{1\}\\ \hat{Z}_{2}^{1,1} & =x_{2}, & \hat{Z}_{2}^{i,j} & \sim M_{2}(\hat{Z}_{1}^{i},\cdot), & (i,j)\in[N]^{2}\setminus\{(1,1)\}\\ & & & \vdots\\ \hat{Z}_{T}^{{\bf 1}} & =x_{T}, & \hat{Z}_{T}^{i_{1},\ldots,i_{T}} & \sim M_{T}(\hat{Z}_{T-1}^{i_{1},\ldots,i_{T-1}},\cdot), & (i_{1},\ldots,i_{T})\in[N]^{T}\setminus\{{\bf 1}\}\;, \end{align*} and then prune the tree using the selection mechanism of the cSMC algorithm with fixed path $x\in\mathsf{S}$. As a result, each $Z_{t}^{j}$ in the cSMC is associated with some $\hat{Z}_{t}^{i}$. The construction above permits the bound \[ U:=\sum_{{\bf i}\in[N]^{t}}G_{t}(Z_{t}^{i_{t}})\mathbb{I}\left\{ i_{1}\neq1\right\} \prod_{p=2}^{t}\mathbb{I}\left\{ i_{p-1}=A_{p-1}^{i_{p}}\right\} \leq\sum_{{\bf i}\in\{2,\ldots,N-1\}^{t}}G_{t}(\hat{Z}_{t}^{{\bf i}})=:V, \] where $U$ corresponds to the sum of potentials associated with those $Z_{t}^{j}$ whose ancestral lineage does not contain the value $1$. It therefore follows that \begin{align*} P_{N}(x,\{x_{1}\}^{\complement}\times\mathsf{Z^{T-1})}=\mathbb{E}_{{\bf 1},x}^{N}\left[\frac{U}{G_{t}(x_{t})+\sum_{j=2}^{N}G_{t}(Z_{t}^{j})}\right] & \leq\mathbb{E}_{{\bf 1},x}^{N}\left[\frac{U}{G_{t}(x_{t})+U}\right]\\ & \le\mathbb{E}_{{\bf 1},x}^{N}\left[\frac{V}{G_{t}(x_{t})+V}\right], \end{align*} because $u\mapsto u/(g+u)$ is increasing. Now, $V$ is a finite non-negative random variable independent of $x$. We may define \[ \phi(g):=\mathbb{E}_{{\bf 1},x}^{N}\left[\frac{V}{g+V}\right] \] which satisfies $\lim_{g\to\infty}\phi(g)=0$ by the monotone convergence theorem. For the second inequality, we can show similarly that for $G_{t}(x_{t})\ge\underline{G}$ \[ \mathbb{P}_{{\bf 1},x}^{N}\left[G_{t}(Z_{t}^{A_{t}^{i}})\leq\underline{G}\right]=\mathbb{E}_{{\bf 1},x}^{N}\left[\frac{\sum_{k=2}^{N}G_{t}(Z_{t}^{k})\mathbb{I}\left\{ G_{t}(Z_{t}^{k})\leq\underline{G}\right\} }{G_{t}(x_{t})+\sum_{k=2}^{N}G_{t}(Z_{t}^{k})}\right]\leq\frac{(N-1)\underline{G}}{G_{t}(x_{t})} \] and so \[ P_{N}(x,\mathsf{Z}^{t-1}\!\times L_{t}(\underline{G})\times\mathsf{Z}^{T-t})\!\le\!\sum_{i=2}^{N}\mathbb{P}_{{\bf 1},x}^{N}\left[G_{t}(Z_{t}^{A_{t}^{i}})\leq\underline{G}\right]\!\!=\!(N-1)\mathbb{P}_{{\bf 1},x}^{N}\left[G_{t}(Z_{t}^{A_{t}^{i}})\leq\underline{G}\right].\qedhere \] \end{proof} To establish that $P_{N}$ cannot be even geometrically ergodic whenever $\pi_{t}$-${\rm ess}\sup_{x_{t}}G_{t}(x_{t})=\infty$ for some $t\in[T]$ in many settings, we use Proposition~\ref{prop:sticky-non-geometric}. This allows for the developments of Proposition~\ref{prop:non-geom-easy} and Lemma~\ref{lem:full-supp}, leading to the desired result under assumptions satisfied in many applications; see Remark~\ref{rem:strictly_positive}. \begin{prop} \label{prop:sticky-non-geometric}Suppose $P$ is an ergodic Markov kernel on a state space $\big(\mathsf{X},\mathcal{B}(\mathsf{X})\big)$ with invariant distribution $\pi$. Suppose that for any $\epsilon,\delta>0$ there exists a set $A\in\mathcal{B}(\mathsf{X})$ such that $\pi(A)\in(0,\delta)$ and $\inf_{x\in A}P(x,A)\ge1-\epsilon$. Then $P$ is not geometrically ergodic. 
\end{prop} \begin{proof} The result follows directly by following the proof of~\citep[~Theorem 3.1]{roberts-tweedie}, or by a conductance argument~\citep[~Theorem 1]{lee-latuszynski}.\end{proof} \begin{prop} \label{prop:non-geom-easy}Assume that for at least one $t\in[T]$ \begin{equation} \pi\text{-}{\rm ess}\sup_{x}\,\mathbb{E}_{{\bf 1},x}^{N}\bigg[\frac{G_{t}(x_{t})}{\sum_{k=1}^{N}G_{t}(Z_{t}^{k})}\bigg]=1.\label{eq:suff} \end{equation} Then $P_{N}$ cannot be geometrically ergodic.\end{prop} \begin{proof} Because of Proposition~\ref{prop:sticky-non-geometric} it suffices to establish that \begin{equation} \pi_{1:t}\text{-}{\rm ess}\sup_{x_{1:t}\in\mathsf{S}}\bigg\{\inf_{x_{t+1:T}}P_{N}\left(x_{1:T};\{x_{1:t}\}\times\mathsf{Z}^{T-t}\right)\bigg\}=1.\label{eq:sticky-simple} \end{equation} We note that \begin{eqnarray*} \inf_{x_{t+1:T}}P_{N}\left(x_{1:T},\{x_{1:t}\}\times\mathsf{Z}^{T-t}\right) & \geq & \mathbb{P}_{{\bf 1},x}^{N}\left(A_{t}^{1:N}=1\right)\\ & \ge & 1-\sum_{i=2}^{N}\mathbb{P}_{{\bf 1},x}^{N}\left(A_{t}^{i}\neq1\right), \end{eqnarray*} because $A_{t}^{1}=1$ by construction. We emphasize that $A_{t}^{i}$ are independent of $x_{t+1:T}$. Now (\ref{eq:sticky-simple}) follows directly from (\ref{eq:suff}) because for $i\in\{2,\ldots,N\}$, \[ \mathbb{P}_{{\bf 1},x}^{N}\left(A_{t}^{i}=1\right)=\mathbb{E}_{{\bf 1},x}^{N}\left[\frac{G_{t}(x_{t})}{\sum_{j=1}^{N}G_{t}(Z_{t}^{j})}\right].\qedhere \] \end{proof} \begin{lem} \label{lem:equiv-easy-condition}Assume that for any $\epsilon>0$ \[ \pi\text{-}{\rm ess}\inf_{x}\,\mathbb{P}_{{\bf 1},x}^{N}\bigg(\frac{G_{t}(Z_{t}^{2})}{G_{t}(x_{t})}\ge\epsilon\bigg)=0. \] Then, (\ref{eq:suff}) holds.\end{lem} \begin{proof} For any $\epsilon,\delta>0$ there exists $A_{\epsilon,\delta}$ such that $\pi(A_{\epsilon,\delta})>0$ and for $x\in A_{\epsilon,\delta}$ \[ \mathbb{P}_{{\bf 1},x}^{N}\bigg(\frac{G_{t}(Z_{t}^{2})}{G_{t}(x_{t})}\ge\epsilon\bigg)<\delta. \] Because of exchangeability, for any $x$ and $2\le k\le N$, \[ \mathbb{P}_{{\bf 1},x}^{N}\bigg(\frac{G_{t}(Z_{t}^{k})}{G_{t}(x_{t})}\ge\epsilon\bigg)=\mathbb{P}_{{\bf 1},x}^{N}\bigg(\frac{G_{t}(Z_{t}^{2})}{G_{t}(x_{t})}\ge\epsilon\bigg). \] Denote $B=\big\{\frac{\sum_{k=2}^{N}G_{t}(Z_{t}^{k})}{G_{t}(x_{t})}\ge(N-1)\epsilon\big\}$, then for $x\in A_{\epsilon,\delta}$ also \[ \mathbb{P}_{{\bf 1},x}^{N}(B)\le\sum_{k=2}^{N}\mathbb{P}_{{\bf 1},x}^{N}\bigg(\frac{G_{t}(Z_{t}^{k})}{G_{t}(x_{t})}\ge\epsilon\bigg)<(N-1)\delta. \] We may bound for any $x\in A_{\epsilon,\delta}$, \begin{align*} \mathbb{E}_{{\bf 1},x}^{N}\bigg[\frac{G_{t}(x_{t})}{\sum_{k=1}^{N}G_{t}(Z_{t}^{k})}\bigg] & \ge\mathbb{E}_{{\bf 1},x}^{N}\bigg[\mathbb{I}\left\{ B^{\complement}\right\} \frac{G_{t}(x_{t})}{\sum_{k=1}^{N}G_{t}(Z_{t}^{k})}\bigg]\\ & \ge\mathbb{E}_{{\bf 1},x}^{N}\bigg[\left\{ B^{\complement}\right\} \frac{1}{1+(N-1)\epsilon}\bigg]\\ & \ge\frac{1-(N-1)\delta}{1+(N-1)\epsilon}. \end{align*} Letting $\epsilon,\delta\to0$ completes the proof.\end{proof} \begin{lem} \label{lem:full-supp}Assume that there exists $t\in[T]$ such that $\pi_{t}$-${\rm ess}\sup_{x_{t}}G_{t}(x_{t})=\infty$, and if $t\ge2$, suppose also that for any $A\in\mathcal{B}(\mathsf{Z}^{1:t-1})$ and $B\in\mathcal{B}(\mathsf{Z})$, \[ \pi_{1:t-1}(A)>0\text{ and }\pi_{t}(B)>0\implies\pi_{1:t}(A\times B)>0. 
\]
Then, the assumption of Lemma~\ref{lem:equiv-easy-condition}, and consequently (\ref{eq:suff}), holds for $t$.\end{lem}
\begin{proof}
Assume that $t\in\{2,\ldots,T\}$, and for any $x_{1:t-1}\in\mathsf{Z}^{t-1}$ let $\mu_{x_{1:t-1}}$ denote the distribution of $G_{t}(Z_{t}^{2})$ under $\mathbb{P}_{{\bf 1},x_{1:t-1}}^{N}$. By~\citep[Lemma~\ref{lem:tightness}]{andrieu2015uniformsupplement}, there exists $A\in\mathcal{B}(\mathsf{Z}^{t-1})$ such that $\pi_{1:t-1}(A)\ge1/2$ and the family $\{\mu_{x_{1:t-1}}\}_{x_{1:t-1}\in A}$ is tight. Therefore, for any $\epsilon,\delta>0$ there exists $\bar{G}_{t}<\infty$ such that $\mathbb{P}_{{\bf 1},x_{1:t-1}}^{N}(G_{t}(Z_{t}^{2})/\bar{G}_{t}\ge\epsilon)<\delta$ for all $x_{1:t-1}\in A$. Because $\pi_{t}$-${\rm ess}\sup_{x_{t}}G_{t}(x_{t})=\infty$, the set $A\times\{x_{t}\,:\,G_{t}(x_{t})\ge\bar{G}_{t}\}\times\mathsf{Z}^{T-t}$ is of positive $\pi$-measure, and on this set $\mathbb{P}_{{\bf 1},x}^{N}\bigl(G_{t}(Z_{t}^{2})/G_{t}(x_{t})\ge\epsilon\bigr)\le\mathbb{P}_{{\bf 1},x_{1:t-1}}^{N}\bigl(G_{t}(Z_{t}^{2})/\bar{G}_{t}\ge\epsilon\bigr)<\delta$, so that the assumption of Lemma~\ref{lem:equiv-easy-condition} holds. The case $t=1$ follows similarly because the distribution of $G_{1}(Z_{1}^{2})$ is independent of $x$.\end{proof}
\begin{rem}
\label{rem:strictly_positive}An immediate implication of Propositions~\ref{prop:non-geom-easy} and~\ref{prop:boundepsilonNwithuniformboundG} and Lemma~\ref{lem:full-supp} is that if $\pi$ is equivalent to a Lebesgue or counting measure on $\mathsf{X}$, then $P_{N}$ is geometrically ergodic for any $N\geq2$ if and only if $\pi_{t}$-${\rm ess}\sup_{x_{t}}G_{t}(x_{t})<\infty$ for all $t\in[T]$. This covers many applications in statistics, where often the potentials $G_{t}$ are strictly positive and, for any $x_{t}\in\mathsf{Z}$, the Markov kernel $M_{t}(x_{t},\cdot)$ is equivalent to a Lebesgue or counting measure on $\mathsf{Z}$.
\end{rem}
Proposition~\ref{prop:non-geom-easy} does not characterize all situations in which $P_{N}$ fails to be geometrically ergodic. Indeed, in the following example (\ref{eq:suff}) does not hold, and $P_{N}$ still fails to be geometrically ergodic.
\begin{example*}
\label{ex:not-satisfying}Let $\mathsf{Z}=\mathbb{N}$, $T=2$, $G_{1}(z)\equiv1$ and let $M_{1}$ be any probability distribution supported on $\mathbb{N}$ (e.g.,~a Poisson distribution). Define $M_{2}(z_{1},z_{2})=\frac{1}{2}\delta_{2z_{1}}(z_{2})+\frac{1}{2}\delta_{2z_{1}+1}(z_{2})$ and $G_{2}(z_{2})=z_{2}$. It is not difficult to see that this example does not satisfy (\ref{eq:suff}), but $\pi_{2}$-${\rm ess}\sup_{z_{2}}G_{2}(z_{2})=\infty$. One can also check that the sets $A_{n}:=\{(n,2n),(n,2n+1)\}$ satisfy $\pi(A_{n})>0$ and that $\inf_{x\in A_{n}}P_{N}(x,A_{n})\ge1-\delta_{n}$ where $\delta_{n}\to0$ as $n\to\infty$.
\end{example*}
Our findings above suggest that the essential boundedness of the potentials could in fact be a necessary condition for geometric ergodicity. We have also considered various other examples, and it seems that in any specific scenario it is easy to identify ``sticky'' sets and conclude by Proposition~\ref{prop:sticky-non-geometric}. However, we have yet to identify such sets in general, and so have resorted to stating the following.
\begin{conjecture}
\label{conj:general-necessity}Suppose $\pi_{t}$-${\rm ess}\sup_{x_{t}}G_{t}(x_{t})=\infty$ for some $t\in[T]$. Then, the i-cSMC kernel is not geometrically ergodic for any $N\in\mathbb{N}$.
\end{conjecture} \section{\label{sec:The-particle-Gibbs}The particle Gibbs sampler} In numerous situations of practical interest one is interested in sampling from a probability distribution $\pi\bigl({\rm d}\theta\times{\rm d}x\bigr)$ defined on some measurable space $\bigl(\Theta\times\mathsf{X},\mathcal{B}(\Theta)\times\mathsf{\mathcal{B}(X)}\bigr)$ for which direct sampling is difficult, but sampling from the associated conditional probability distributions $\pi_{\theta}({\rm d}x)$ and $\pi_{x}({\rm d}\theta)$ for any $(\theta,x)\in\Theta\times\mathsf{X}$ turns out to be easier. In fact when sampling exactly from these conditionals is possible one can define the two stage Gibbs sampler~\citep{robert1999monte} which alternately samples from these conditional distributions. More precisely, let us define, for any $(\theta,x)\in\Theta\times\mathsf{X}$ and $S\in\mathcal{B}(\Theta)\times\mathsf{\mathcal{B}(X)}$, \begin{equation} {\rm \Gamma}\bigl(\theta,x;S\bigr):=\int_{S}\pi_{x}\bigl({\rm d}\vartheta\bigr)\pi_{\vartheta}({\rm d}y)\quad.\label{eq:defGibbs} \end{equation} This can be interpreted as a Markov transition probability, and is precisely the Markov kernel underpinning the standard two stage Gibbs sampler. The corresponding Markov chain $\{(\theta_{i},X_{i}),i\geq0\}$ on $\Theta\times\mathsf{X}$ leaves $\pi$ invariant and is ergodic under fairly general and natural conditions. In fact it can be shown that $\{X_{i},i\geq0\}$ and $\{\theta_{i},i\geq0\}$ are themselves Markov chains leaving the marginals $\pi\bigl({\rm d}x\bigr)$ and $\pi({\rm d}\theta)$ invariant respectively. For reasons which will appear clearer below, we define for any $(x_{0},S)\in\mathsf{X}\times\mathcal{B}\bigl(\mathsf{X}\bigr)$ the Markov transition probability $\Gamma_{x}\bigl(x_{0},S\bigr):=\Gamma\bigl(x_{0},\Theta\times S\bigr)$ corresponding to the Markov chain $\{X_{i},i\geq0\}$ (we point out that the index $x$ in this notation is a name, not a variable). In some situations, however, while sampling from the conditional distribution $\pi_{x}\bigl({\rm d}\theta\bigr)$ may be routine, sampling from $\pi_{\theta}({\rm d}x)$ may be difficult and this step is instead replaced by a Markov transition probability $\Pi_{\theta}(x,{\rm d}y)$ leaving $\pi_{\theta}({\rm d}x)$ invariant for any $\theta\in\Theta$. The resulting algorithm, whose transition kernel $\Phi$ is given below, is often referred to as ``Metropolis-within-Gibbs'' in the common situation where $\Pi_{\theta}$ is a Metropolis--Hastings transition kernel---we will however use this name in order to refer to the general scenario. In the particular situation where $\Pi_{\theta}$ is a cSMC transition kernel the resulting algorithm is known as the particle Gibbs (PGibbs) sampler~\citep{andrieu-doucet-holenstein}. We note that in the general scenario, for any $(\theta_{0},x,S)\in\Theta\times\mathsf{X}\times\big(\mathcal{B}(\Theta)\times\mathsf{\mathcal{B}(X)}\big)$ \begin{align} \Phi(x,S)=\Phi(\theta_{0},x;S): & =\int_{S}\pi_{x}\bigl({\rm d}\theta\bigr)\Pi_{\theta}(x,{\rm d}y)\quad.\label{eq:def:met-within-gibbs} \end{align} Similarly to above one can show that $\{X_{i},i\geq1\}$ defines a Markov chain, with transition kernel, for $(x_{0},S)\in\mathsf{X}\times\mathcal{B}\bigl(\mathsf{X}\bigr)$, $\Phi_{x}(x_{0},S):=\Phi(x_{0},\Theta\times S)$ which is $\pi\bigl({\rm d}x\bigr)-$reversible, and positive as soon as $\Pi_{\theta}$ defines a positive operator for any $\theta\in\Theta$. 
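Before verifying these properties, it may be helpful to record the algorithmic content of (\ref{eq:defGibbs}) and (\ref{eq:def:met-within-gibbs}). The following minimal Python sketch is purely illustrative and not part of the formal development: the routines \texttt{sample\_theta\_given\_x} (a draw from $\pi_{x}$) and \texttt{pi\_theta\_kernel} (one application of a $\pi_{\theta}$-invariant kernel $\Pi_{\theta}$, e.g.\ a cSMC kernel in the PGibbs case) are hypothetical, user-supplied functions.
\begin{verbatim}
import numpy as np

def mwg_chain(x0, sample_theta_given_x, pi_theta_kernel, n_iter, seed=0):
    """Simulate n_iter sweeps of the 'Metropolis-within-Gibbs' kernel Phi:
    draw theta ~ pi_x( . ) exactly, then move x with the pi_theta-invariant
    kernel Pi_theta(x, . ).  The x-coordinates form the marginal chain with
    kernel Phi_x; replacing pi_theta_kernel by an exact draw from pi_theta
    recovers the two-stage Gibbs kernel Gamma."""
    rng = np.random.default_rng(seed)
    x = x0
    thetas, xs = [], []
    for _ in range(n_iter):
        theta = sample_theta_given_x(x, rng)   # theta ~ pi_x(d theta)
        x = pi_theta_kernel(theta, x, rng)     # x' ~ Pi_theta(x, d y)
        thetas.append(theta)
        xs.append(x)
    return thetas, xs
\end{verbatim}
We now return to the reversibility and positivity of $\Phi_{x}$.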
Indeed, since for any $f,g\in L^{2}\bigl(\mathsf{X},\pi\bigr)$,
\begin{align*}
\int_{\mathsf{X}}f(x)\pi({\rm d}x)\int_{\Theta\times\mathsf{X}}\pi_{x}\bigl({\rm d}\theta\bigr)\Pi_{\theta}(x,{\rm d}y)g(y) & =\int_{\Theta}\pi({\rm d}\theta)\int_{\mathsf{X}^{2}}f(x)g(y)\pi_{\theta}\bigl({\rm d}x\bigr)\Pi_{\theta}(x,{\rm d}y)\\
 & =\int_{\Theta}\pi({\rm d}\theta)\int_{\mathsf{X}^{2}}f(x)g(y)\pi_{\theta}\bigl({\rm d}y\bigr)\Pi_{\theta}(y,{\rm d}x)\quad,
\end{align*}
we deduce the reversibility from the choice $f(x)=\mathbb{I}\{x\in S_{1}\}$ and $g(x)=\mathbb{I}\{x\in S_{2}\}$ for $S_{1},S_{2}\in\mathcal{B}\bigl(\mathsf{X}\bigr)$, and the positivity by letting $g=f$. This motivates the following simple result, which again draws on the standard Hilbert space techniques outlined in~\citep[Appendix~\ref{sec:Supplementary-material-for_minorization}]{andrieu2015uniformsupplement}, and is to the best of our knowledge not available in the literature. We naturally remark that $\Gamma$ is a particular instance of $\Phi$ corresponding to the case where, for any $(\theta,x)\in\Theta\times\mathsf{X}$, $\Pi_{\theta}(x,\cdot)=\pi_{\theta}(\cdot)$, therefore also implying that $\Gamma_{x}$ is self-adjoint. Our first result, Theorem~\ref{thm:generalMWGresult}, takes advantage of the fact that $\Gamma_{x}$ is reversible, and therefore focuses on the asymptotic variance of functions $f\in L^{2}\bigl(\mathsf{X},\pi\bigr)$. Remark~\ref{rem:pgibbs_geometric_result} follows from this result, providing a sufficient condition for geometric ergodicity of the PGibbs Markov chain. Our second result, Theorem~\ref{thm:MWG-functions-of-theta}, focuses on functions $g\in L^{2}\bigl(\Theta,\pi\bigr)$, but the same technique is not directly applicable in this scenario. Some of our results concern Dirichlet forms: for a generic $\mu$-reversible Markov kernel $\Pi$ and a function $f\in L^{2}(\mathsf{E},\mu)$ we define the Dirichlet form $\mathcal{E}_{\Pi}(f):=\left\langle f,(I-\Pi)f\right\rangle _{\mu}$.
\begin{thm}
\label{thm:generalMWGresult}Let $\pi$ be a probability distribution defined on $\bigl(\Theta\times\mathsf{X},\mathcal{B}(\Theta)\times\mathcal{B}(\mathsf{X})\bigr)$, let $\left\{ \Pi_{\theta},\theta\in\Theta\right\} $ be a family of Markov transition probabilities such that for any $\theta\in\Theta$ the Markov kernel $\Pi_{\theta}$ is reversible with respect to $\pi_{\theta}$, and let $\Gamma$ and $\Phi$ be as in (\ref{eq:defGibbs}) and (\ref{eq:def:met-within-gibbs}).
Define \begin{align} \varrho: & =\inf_{f\in L^{2}\bigl(\mathsf{X},\pi\bigr)}\frac{\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(f\right){\rm Gap}\bigl(\Pi_{\theta}\bigr)}{\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(f\right)}\quad.\label{eq:varrho} \end{align} Then, for any $f\in L^{2}(\mathsf{X},\pi)$ we have the following inequalities, \begin{enumerate}[label=(\alph*)] \item \label{item:dirichlet-order}for the Dirichlet forms,\textup{ \begin{align*} 2\mathcal{E}_{{\rm \Gamma}_{x}}(f)\geq\mathcal{E}_{\Phi_{x}}(f) & \geq\varrho\times\mathcal{E}_{\Gamma_{x}}(f)\quad, \end{align*} } \item for the right spectral gaps\label{item:gap-order} \[ 2{\rm Gap}\left({\rm \Gamma}_{x}\right)\geq{\rm Gap}\bigl(\Phi_{x}\bigr)\geq\varrho\times{\rm Gap}\left(\Gamma_{x}\right)\quad, \] \item if the asymptotic variances,\label{item:var-order} \[ 0\leq\frac{{\rm var}\bigl(f,\Gamma_{x}\bigr)-{\rm var}_{\pi}(f)}{2}\leq{\rm var}\bigl(f,\Phi_{x}\bigr)\leq(\varrho^{-1}-1){\rm var}_{\pi}(f)+{\rm \varrho^{-1}}{\rm var}\left(f,\Gamma_{x}\right)\quad, \] where the latter inequality holds for $\varrho>0.$ \item In addition if\label{item:uniform-pos} \begin{enumerate} \item [(i)]\label{item:uniform-order}there exist $\epsilon>0$ such that for all $\theta\in\Theta$ and all $(x,B)\in\mathsf{X}\times\mathcal{B}\big(\mathsf{X}\big)$, the minorisation inequality $\Pi_{\theta}\big(x,B\big)\geq\epsilon\pi_{\theta}\big(B\big)$ holds, then for any $f\in L^{2}(\mathsf{X},\pi)$ \[ \frac{{\rm var}\bigl(f,\Gamma_{x}\bigr)-(1-\epsilon){\rm var}_{\pi}(f)}{(2-\epsilon)}\leq{\rm var}\bigl(f,\Phi_{x}\bigr)\;, \] \item [(ii)]\label{item:lowerboundvarwithPipositive}for all $\theta\in\Theta$, $\Pi_{\theta}$ is a positive operator then for any $f\in L^{2}(\mathsf{X},\pi)$ \[ {\rm var}\bigl(f,\Gamma_{x}\bigr)\leq{\rm var}\bigl(f,\Phi_{x}\bigr)\;. \] \end{enumerate} \end{enumerate} \end{thm} \begin{proof} We prove the first point. Without loss of generality we consider any $f\in L_{0}^{2}\bigl(\mathsf{X},\pi\bigr)$ and notice that \[ \mathcal{E}_{\Phi_{x}}(f)=\int_{\Theta}\pi({\rm d}\theta)\mathcal{E}_{\Pi_{\theta}}\bigl(f\bigr)\quad, \] since \begin{align*} \int_{\Theta\times\mathsf{X}^{2}}\pi({\rm d}x)\pi_{x}({\rm d}\theta)\Pi_{\theta}(x,{\rm d}y)\left[f(x)-f(y)\right]^{2} & =\int_{\Theta}\pi({\rm d}\theta)\int_{\mathsf{X}^{2}}\pi_{\theta}({\rm d}x)\Pi_{\theta}(x,{\rm d}y)\left[f(x)-f(y)\right]^{2}\quad. 
\end{align*} Now using that $\mathcal{E}_{\Gamma_{x}}(f)=\frac{1}{2}\int_{\Theta\times\mathsf{X}^{2}}\pi({\rm d}x)\pi_{x}({\rm d}\theta)\pi_{\theta}({\rm d}y)\left[f(x)-f(y)\right]^{2}=\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(f\right)$ and letting $\bar{f}_{\theta}:=f-\pi_{\theta}(f)$ for any $\theta\in\Theta$, we obtain \begin{align*} \mathcal{E}_{\Phi_{x}}(f) & =\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(f\right)\frac{\int_{\Theta}\pi({\rm d}\theta)\mathcal{E}_{\Pi_{\theta}}\bigl(f\bigr)}{\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(f\right)}\\ & =\mathcal{E}_{\Gamma_{x}}(f)\times\frac{\int_{\Theta}\pi({\rm d}\theta)\mathbb{I}\{{\rm var}_{\pi_{\theta}}\left(f\right)>0\}{\rm var}_{\pi_{\theta}}\left(f\right)\frac{\mathcal{E}_{\Pi_{\theta}}\bigl(\bar{f}_{\theta}\bigr)}{{\rm var}_{\pi_{\theta}}\left(\bar{f}_{\theta}\right)}}{\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(f\right)}\\ & \geq\mathcal{E}_{\Gamma_{x}}(f)\times\frac{\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(f\right){\rm Gap}\bigl(\Pi_{\theta}\bigr)}{\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(f\right)}\\ & \geq\mathcal{E}_{\Gamma_{x}}(f)\times\inf_{g\in L_{0}^{2}\bigl(\mathsf{X},\pi\bigr)}\frac{\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(g\right){\rm Gap}\bigl(\Pi_{\theta}\bigr)}{\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(g\right)}\quad, \end{align*} where we have used that for any $g\in L_{0}^{2}\bigl(\mathsf{X},\pi\bigr)$, $\mathcal{E}_{\Pi_{\theta}}\bigl(g\bigr)\leq2{\rm var}_{\pi_{\theta}}\left(g\right)$ and that the set $A:=\bigl\{\theta\in\Theta:{\rm var}_{\pi_{\theta}}(\bar{f}_{\theta})=\infty\bigr\}$ satisfies $\pi\bigl(A\times\mathsf{X}\bigr)=0$. The latter result follows from ${\rm var}_{\pi}(f)<\infty$ and the variance decomposition identity: $\|f\|_{\pi}^{2}=\|f-\bar{f}_{\theta}\|_{\pi}^{2}+\|\bar{f}_{\theta}\|_{\pi}^{2}$. We deduce \ref{item:dirichlet-order} from the last inequality. Points \ref{item:gap-order} and \ref{item:var-order} then follow from \citep[Lemma~\ref{lem:minorizationdirichletboundgapvariance}]{andrieu2015uniformsupplement}. We next turn into \ref{item:uniform-pos}. As above, we find that \begin{align*} \mathcal{E}_{\Phi_{x}}(f) & \leq\mathcal{E}_{\Gamma_{x}}(f)\times\frac{\int_{\Theta}\pi({\rm d}\theta)\mathbb{I}\{{\rm var}_{\pi_{\theta}}\left(f\right)>0\}{\rm var}_{\pi_{\theta}}\left(f\right)\sup_{g\in L_{0}^{2}(\mathsf{X},\pi_{\theta})}\frac{\mathcal{E}_{\Pi_{\theta}}\bigl(g\bigr)}{{\rm var}_{\pi_{\theta}}\left(g\right)}}{\int_{\Theta}\pi({\rm d}\theta){\rm var}_{\pi_{\theta}}\left(f\right)}\quad. \end{align*} Under the uniform minorisation condition, we have $\mathcal{E}_{\Pi_{\theta}}\bigl(g\bigr)\le(2-\epsilon)\mathrm{var}_{\pi_{\theta}}(g)$ \citep[Proposition~\ref{prop:boundsspectralgapandvarianceforP_N}]{andrieu2015uniformsupplement}, and consequently $\mathcal{E}_{\Phi_{x}}(f)\leq(2-\varepsilon)\mathcal{E}_{\Gamma_{x}}(f)$. 
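One way to conclude \ref{item:uniform-pos}(i) from here is to apply \citep[Lemma~\ref{lem:minorizationdirichletboundgapvariance}]{andrieu2015uniformsupplement} with the roles of the two kernels interchanged, that is with $\Pi_{1}=\Phi_{x}$, $\Pi_{2}=\Gamma_{x}$ and $\varrho=(2-\epsilon)^{-1}$, which yields
\[
{\rm var}\bigl(f,\Gamma_{x}\bigr)\leq(1-\epsilon)\,{\rm var}_{\pi}(f)+(2-\epsilon)\,{\rm var}\bigl(f,\Phi_{x}\bigr)\quad,
\]
and rearranging gives the claimed inequality; the same argument with $\varrho=1$ yields \ref{item:uniform-pos}(ii) from the Dirichlet form comparison established next.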
When $\Pi_{\theta}$ is a positive operator for any $\theta\in\Theta$, we have $\mathcal{E}_{\Pi_{\theta}}\bigl(g\bigr)\le\mathrm{var}_{\pi_{\theta}}(g)$ and consequently $\mathcal{E}_{\Phi_{x}}(f)\leq\mathcal{E}_{\Gamma_{x}}(f)$.\end{proof} \begin{rem} \label{rem:pgibbs_geometric_result}In relation to Theorem \ref{thm:generalMWGresult} :\end{rem} \begin{enumerate}[label=(\alph*)] \item it may be easier in practice to use the lower bound $\underline{\varrho}:=\inf_{\theta\in\Theta}{\rm Gap}\bigl(\Pi_{\theta}\bigr)\leq\varrho$ which leads to ${\rm Gap}\bigl(\Phi_{x}\bigr)\geq\underline{\varrho}\times{\rm Gap}\left(\Gamma_{x}\right)$ and ${\rm var}\bigl(f,\Phi_{x}\bigr)\leq(\underline{\varrho}^{-1}-1){\rm var}_{\pi}(f)+{\rm \underline{\varrho}^{-1}}{\rm var}\left(f,\Gamma_{x}\right)$ when $\underline{\varrho}>0$, \item one could suggest iterating $\Pi_{\theta}$ sufficiently many times, say $k_{\theta}$ times, in order to ensure that $\Pi_{\theta}^{k_{\theta}}$ satisfies the uniform in $\theta$ properties of the type suggested above. This would require however a computable quantitative bound on the spectral gap of $\Pi_{\theta}$ , \item the lower bound in \ref{item:var-order} is motivated by the fact that $\{\Pi_{\theta},\theta\in\Theta\}$ may be a family with non-positive elements, which may introduce negative correlations. On the contrary in the situation where $\{\Pi_{\theta},\theta\in\Theta\}$ is a collection of positive operators (e.g. cSMC kernels) then \ref{item:gap-order} implies that $\Phi_{x}$ is geometrically ergodic as soon as $\Gamma_{x}$ is geometrically ergodic and $\varrho>0$ (and of course $\Gamma_{x}$ is always positive) and \ref{item:uniform-pos}(ii) that $\Phi_{x}$ is always inferior to $\Gamma_{x}$ in terms of asymptotic variance. In the context of the PGibbs sampler the latter result parallels what is known for pseudo-marginal algorithms~\citep{andrieu2015}, \item we note that from~\citep[Theorem~1; Proposition~1]{roberts2001markov} $\Phi$ is geometrically ergodic as soon as $\Phi_{x}$ is geometrically ergodic. \end{enumerate} Now we show how these results can be transferred to the $\{\theta_{i}\}$ chain. \begin{thm} \label{thm:MWG-functions-of-theta}Let the notation be as in Theorem~\ref{thm:generalMWGresult}. 
Then, \begin{enumerate}[label=(\alph*)] \item \label{enu:G-stuff}assume that for some class of functions $\mathcal{G}\subset\{g:\mathsf{X}\to\mathbb{R}\,:\,\pi(|g|)<\infty\}$ there exists a function $\left|\cdot\right|_{\mathcal{G}}:\mathcal{G}\rightarrow[0,\infty]$ and $\rho\in[0,1)$ such that for any probability distribution $\nu$ on $\bigl(\mathsf{X},\mathcal{B}(\mathsf{X})\bigr)$ there exist $W_{\nu}\in[0,\infty]$ such that for all $g\in\mathcal{G}$ and any $k\geq1$ \[ \left|\nu\Phi_{x}^{k}(g)-\pi(g)\right|\leq\left|g\right|_{\mathcal{G}}W_{\nu}\rho^{k}\quad, \] then for any $f:\Theta\rightarrow\mathbb{R}$ such that $\bar{f}(x):=\pi_{x}\bigl(f\bigr)\in\mathcal{G}$ and any $k\geq2$ \[ \left|\nu\Phi^{k}(f)-\pi(f)\right|\leq\left|\bar{f}\right|_{\mathcal{G}}W_{\nu}\rho^{k-1}\quad, \] \item for any $f\in L^{2}\bigl(\Theta,\pi\bigr)$, letting for any $x\in\mathsf{X}$ $\bar{f}(x):=\pi_{x}\bigl(f\bigr)\in L^{2}\bigl(\mathsf{X},\pi\bigr)$, we have for any $k\geq1$ \[ \left\langle f,\Phi^{k}f\right\rangle _{\pi}=\left\langle \bar{f},\Phi_{x}^{k-1}\bar{f}\right\rangle _{\pi} \] and \[ {\rm {\rm var}}\bigl(f,\Phi\bigr)={\rm var}_{\pi}\bigl(f\bigr)+{\rm var}_{\pi}\bigl(\bar{f}\bigr)+{\rm var}(\bar{f},\Phi_{x})\quad, \] \item if $\varrho>0$ defined in \ref{eq:varrho}, then for $f\in L^{2}\bigl(\Theta,\pi\bigr)$ \begin{align*} {\rm var}(f,\Phi)\leq & {\rm var}_{\pi}\bigl(f\bigr)+\varrho^{-1}{\rm var}_{\pi}\bigl(\bar{f}\bigr)+{\rm \varrho^{-1}}{\rm var}\left(\bar{f},\Gamma_{x}\right)\\ \leq & (1-\varrho^{-1}){\rm var}_{\pi}\bigl(f\bigr)+{\rm \varrho^{-1}}{\rm var}\bigl(f,\Gamma\bigr)\quad, \end{align*} \item if for all $\theta\in\Theta$, $\Pi_{\theta}$ is a positive operator, then for $f\in L^{2}\bigl(\Theta,\pi\bigr)$ ${\rm var}(f,\Phi)\geq{\rm var}(f,\Gamma)$. \end{enumerate} \end{thm} \begin{proof} We remark that without loss of generality we can let $f\in L_{0}^{2}(\Theta,\pi)$ throughout. First note that for $f\in L_{0}^{2}\bigl(\Theta,\pi\bigr)$ and any $\bigl(\theta,x_{0}\bigr)\in\Theta\times\mathsf{X}$ \begin{align*} \Phi\bigl(\theta,x_{0};f\bigr) & =\Phi(x_{0};f)=\pi_{x_{0}}\bigl(f\bigr)=\bar{f}(x_{0})\quad, \end{align*} and for $g\in L^{2}\bigl(\mathsf{X},\pi\bigr)$ and any $p\geq1$, $\Phi^{p}(\theta,x_{0};g)=\Phi_{x}^{p}(x_{0};g)$. The first result is straightforward upon remarking that for $k\geq1$ \[ \Phi^{k+1}(x_{0},f)-\pi(f)=\Phi_{x}^{k}(x_{0},\bar{f})-\pi(\bar{f})\quad. \] For the second and third point, using the remarks above, for $f\in L_{0}^{2}\bigl(\Theta,\pi\bigr)$ and $k\geq1$ \begin{align*} \left\langle f,\Phi^{k}f\right\rangle _{\pi} & =\left\langle f,\Phi^{k-1}\bar{f}\right\rangle _{\pi}=\left\langle \bar{f},\Phi_{x}^{k-1}\bar{f}\right\rangle _{\pi}\;. \end{align*} Now $\|f\|_{\pi}^{2}=\left\langle f-\bar{f}+\bar{f},f-\bar{f}+\bar{f}\right\rangle _{\pi}=\|\bar{f}\|_{\pi}^{2}+\|f-\bar{f}\|_{\pi}^{2}$, which is the variance decomposition identity and by noting that $\pi\bigl(\bar{f}\bigr)=0$ lets us deduce that $f\in L_{0}^{2}\bigl(\Theta,\pi\bigr)$ implies that $\bar{f}\in L_{0}^{2}\bigl(\mathsf{X},\pi\bigr)$. Now, \begin{eqnarray*} {\rm var}(f,\Phi) & = & \|f\|_{\pi}^{2}+2\sum_{k=1}^{\infty}\left\langle f,\Phi^{k}f\right\rangle _{\pi}=\|f\|_{\pi}^{2}+2\sum_{k=1}^{\infty}\left\langle \bar{f},\Phi_{x}^{k-1}\bar{f}\right\rangle _{\pi}\\ & = & \|f\|_{\pi}^{2}+2\|\bar{f}\|_{\pi}^{2}+2\sum_{k=1}^{\infty}\left\langle \bar{f},\Phi_{x}^{k}\bar{f}\right\rangle _{\pi}=\|f\|_{\pi}^{2}+\|\bar{f}\|_{\pi}^{2}+{\rm var}\bigl(\bar{f},\Phi_{x}\bigr)\quad. 
\end{eqnarray*}
We conclude by noting that, for $f\in L^{2}\bigl(\Theta,\pi\bigr)$, ${\rm var}_{\pi}\bigl(f\bigr)=\|f-\pi(f)\|_{\pi}^{2}$ and ${\rm var}_{\pi}\bigl(\bar{f}\bigr)=\|\bar{f}-\pi(f)\|_{\pi}^{2}=\|\overline{f-\pi(f)}\|_{\pi}^{2}$. We will also use the equality above for $\Gamma$ and $\Gamma_{x}$, since, as remarked earlier, $\Gamma$ corresponds to a particular instance of $\Phi$. We can now use the bound from Theorem~\ref{thm:generalMWGresult}, which leads, for $f\in L_{0}^{2}\bigl(\Theta,\pi\bigr)$, to
\begin{align*}
{\rm var}(f,\Phi) & \leq\|f\|_{\pi}^{2}+\|\bar{f}\|_{\pi}^{2}+(\varrho^{-1}-1)\|\bar{f}\|_{\pi}^{2}+{\rm \varrho^{-1}}{\rm var}\left(\bar{f},\Gamma_{x}\right)\\
 & =\|f\|_{\pi}^{2}+\varrho^{-1}\|\bar{f}\|_{\pi}^{2}+{\rm \varrho^{-1}}{\rm var}\left(\bar{f},\Gamma_{x}\right)\quad.
\end{align*}
From the remark above we deduce that
\begin{align*}
\|f\|_{\pi}^{2}+\varrho^{-1}\|\bar{f}\|_{\pi}^{2}+{\rm \varrho^{-1}}{\rm var}\left(\bar{f},\Gamma_{x}\right) & \leq\|f\|_{\pi}^{2}+\varrho^{-1}\|\bar{f}\|_{\pi}^{2}+{\rm \varrho^{-1}}\big[{\rm var}(f,\Gamma)-\|f\|_{\pi}^{2}-\|\bar{f}\|_{\pi}^{2}\big]\\
 & =(1-\varrho^{-1})\|f\|_{\pi}^{2}+{\rm \varrho^{-1}}{\rm var}(f,\Gamma)\quad.
\end{align*}
We conclude as above. The final statement follows from ${\rm var}\bigl(\bar{f},\Phi_{x}\bigr)\geq{\rm var}\bigl(\bar{f},\Gamma_{x}\bigr)$ (see Theorem~\ref{thm:generalMWGresult}) and the equality established above for $\Phi$ and $\Phi_{x}$ and for $\Gamma$ and $\Gamma_{x}$.\end{proof}
\begin{cor}
Consider the PGibbs sampler with $N\geq2$ particles and kernel $\Phi_{N}$ defined as in (\ref{eq:def:met-within-gibbs}), such that for any $\theta\in\Theta$, $\Pi_{\theta}=P_{\theta,N}$ is the i-cSMC kernel as defined in Section~\ref{sec:The-i-CSMC} for the families $\{M_{\theta,t}\}$ and $\{G_{\theta,t}\}$ of kernels and potentials on $\mathsf{Z}\times\mathcal{B}\bigl(\mathsf{Z}\bigr)$ and $\mathsf{Z}$ respectively. For any $\theta\in\Theta$ we let $\gamma_{\theta,T}$ be the corresponding normalizing constant as defined below (\ref{eq:defofpiforSMCframework}). Then, the results of Theorems \ref{thm:generalMWGresult} and \ref{thm:MWG-functions-of-theta} hold as follows:
\begin{enumerate}[label=(\alph*)]
\item if
\[
\pi\text{-}{\rm ess}\sup_{\theta}\frac{\prod_{t=1}^{T}\bar{G}_{\theta,t}}{\gamma_{\theta,T}}<\infty\quad,
\]
then $\varrho\ge\epsilon_{N}$ as defined in Corollary~\ref{cor:convergence_epsilon},
\item if the uniform mixing condition holds, that is, for some $0\leq\alpha<\infty$,
\[
\pi\text{-}{\rm ess}\sup_{\theta,z}\frac{Q_{\theta,p,p+k}(1)(z)}{\eta_{\theta,p}Q_{\theta,p,p+k}(1)}\leq\alpha\quad,
\]
then $\varrho\ge\epsilon_{N}$ as defined in Corollary~\ref{cor:linear_in_T_epsilon_N_bound}.
\end{enumerate}
\noindent In particular, in both cases $\varrho$ converges to one as $N\rightarrow\infty$, implying that the spectral gaps and the asymptotic variances associated with the PGibbs sampler converge to those of the related Gibbs sampler.
\end{cor}
\begin{rem}
It is worth noting that terms related to $\gamma_{\theta,T}$ appear in all these bounds. For example, in part (a) it is not sufficient that the potentials $\{G_{\theta,t}\}$ are essentially bounded, but it is sufficient if, for all $t\in[T]$, $\pi_{t}$-${\rm ess}\sup_{\theta,x_{t}}G_{\theta,t}(x_{t})/\eta_{\theta,t}(G_{t})$ is bounded.
\end{rem}
\section{Discussion\label{sec:Discussion}}
The developments above go some way towards characterizing the behaviour of i-cSMC and associated PGibbs Markov chains, and raise a number of possible future directions for research. We have already embarked upon investigating some potentially practical uses of the minorization conditions and spectral properties for these chains. Of particular interest in practice is how to choose $N$ in the i-cSMC algorithm so as to balance the trade-off between mixing properties of $P_{N}$ and the total number of iterations that can be performed with limited computational resources. Remark~\ref{rem:tuning}, for example, can be used to find approximately good values of $N$ in this spirit, but can only serve as a heuristic. In particular, while Proposition~\ref{prop:uniformminorizationcrude} may provide a fairly accurate bound in the large $N$ regime, it is unclear how much is lost in applying Jensen's inequality, and consequently how accurate estimates such as those in Remark~\ref{rem:tuning} can be. It is possible that results such as those in~\citep{berard2013lognormal} may provide a way to exploit additional structure often found in statistical applications.
The results for the i-cSMC and PGibbs Markov chains developed here can be compared and contrasted with similar results for the Particle Independent Metropolis--Hastings (PIMH) and PMMH Markov chains~\citep{andrieu-doucet-holenstein}. We summarize here the detailed comparison provided in~\citep[Appendix~\ref{sec:Comparison-with-Particle}]{andrieu2015uniformsupplement}. Like the i-cSMC, PIMH is an exact approximation of an independent sampler, but PMMH is an exact approximation of an idealized Metropolis--Hastings kernel rather than of a Gibbs sampler. Just as i-cSMC can be viewed as a constituent element of PGibbs, PIMH can be viewed as playing the same role within PMMH. Central to the analysis of PIMH is the essential supremum, with respect to the law of a standard SMC algorithm, of the normalizing constant estimate $\hat{\gamma}_{T}^{N}\bigl(Z_{1:T}\bigr)$ introduced in Section~\ref{sec:Minorization-and-Dirichlet}: indeed, as a consequence of the characterisation of independent Metropolis--Hastings chains in~\citep{mengersen-tweedie}, the PIMH Markov chain is (uniformly) geometrically ergodic if and only if this supremum is finite. However, it can also be seen that the rate of convergence of PIMH will typically not improve as $N$ increases, in contrast with the convergence for the i-cSMC (see Propositions~\ref{prop:boundepsilonNwithuniformboundG} and~\ref{prop:mixing_bound}). For PMMH, it is shown in~\citep{andrieu2015} that if the essential supremum of the relative normalizing constant estimate $\hat{\gamma}_{\theta,T}^{N}\bigl(Z_{1:T}\bigr)/\gamma_{\theta,T}$ is moreover bounded essentially uniformly in $\theta$, then the existence of a spectral gap of the idealized Metropolis--Hastings Markov kernel it approximates is inherited by PMMH. However, the rate of convergence of the PMMH Markov chain when this occurs does not improve in general as $N$ increases, in contrast to our results for PGibbs Markov chains. In this context, weak convergence in $N$ of the asymptotic variance of estimates of $\pi(f)$ to the corresponding asymptotic variance of the Metropolis--Hastings kernel is nevertheless provided by~\citep[Proposition~19]{andrieu2015} for all $f\in L^{2}(\Theta,\pi)$, but this can be contrasted with the quantitative bounds obtained in Theorem~\ref{thm:MWG-functions-of-theta}.
The one step uniform minorization condition in Corollary~\ref{cor:uniformminorbymu}, where the minorization measure is the invariant distribution of the Markov chain, suggests that it may be possible to apply coupling from the past techniques (see, e.g.,~\citep{propp1996exact,murdoch:green:1998,hobert2004mixture}) in order to produce samples from exactly this distribution. It is, however, not clear how to implement such an algorithm in general, although~\citep{lee2014perfect} provides a perfect simulation algorithm motivated by Theorem~\ref{thm:THEtheorem}. Finally, our analysis has focused mainly on the case where the essential boundedness condition holds. However, a refined analysis may permit characterization of the i-cSMC and hence the PGibbs Markov chains even in the absence of this condition, with parallels to~\citep{andrieu2015}. \begin{acknowledgement*} CA's research was supported by EPSRC EP/K009575/1 Bayesian Inference for Big Data with Stochastic Gradient Markov Chain Monte Carlo and EP/K0\-14463/1 Intractable Likelihood: New Challenges from Modern Applications (ILike). MV was supported by Academy of Finland grant 250575. \end{acknowledgement*} \pagebreak{} \part*{Supplementary material} \appendix \section{Proof of Lemma~\ref{lem:simple_correspondence} \label{sec:Proof-of-Lemma_simplecorrespondence}} The proof of Lemma~\ref{lem:simple_correspondence} is a simple consequence of Lemma~\ref{lem:correspondenceP_1andP_2} \ref{enu:E-I}. We introduce the set of indices $\mathcal{J}_{T}:=\bigcup_{m=0}^{T}\{1\}^{m}\times\{2,\ldots,N\}^{T-m}$, which will allow us to define the lineages coalescing with $\mathbf{1}\in\{1\}^{T}$ at some point in the past, and $m_{\mathbf{i}}:=\max\{k:i_{k}=1\}$ (with the convention that $\max\emptyset=0$) the time at which coalescence occurs. \begin{lem} \label{lem:correspondenceP_1andP_2}For any $x\in\mathsf{X}$, $z_{1:T}\in\mathsf{X}^{T}$ and $a_{1:T}\in[N]^{N(T-1)}\times[N]$, \begin{enumerate}[label=(\alph*)] \item for any\textup{ $y_{2:T}\in\mathsf{Z}^{T-1}$ and }$\mathbf{k}=k_{1:T}\in[N]^{T}$ such that $k^{1}\neq1$ \[ \mathbb{P}_{\mathbf{1},x}^{N}\left(Z_{1}\in{\rm d}z_{1}\right)=\int_{\mathsf{Z}}M_{1}\bigl({\rm d}y_{1}\bigr)\mathbb{P}_{\mathbf{1},x,\mathbf{k},y}^{N}\left(Z_{1}\in{\rm d}z_{1}\right)\quad, \] and for $t\in\{2,\ldots,T\}$, any $(y_{1},\ldots,y_{t-1},y_{t+1},\ldots,y_{T})\in\mathsf{Z}^{T-1}$, $\mathbf{k}\in[N]^{T}$ such that $k_{t}\neq1$ and $a_{t-1}^{k_{t}}=k_{t-1}$ \begin{multline*} \mathbb{P}_{\mathbf{1},x}^{N}\left(Z_{t}\in{\rm d}z_{t},A_{t-1}=a_{t-1}\left|Z_{t-1}=z_{t-1}\right.\right)=\int_{\mathsf{Z}}\frac{G_{t-1}\bigl(z_{t-1}^{k_{t-1}}\bigr)}{\sum_{j=1}^{N}G_{t-1}\bigl(z_{t-1}^{j}\bigr)}M_{t}\bigl(z_{t-1}^{k_{t-1}},{\rm d}y_{t}\bigr)\\ \times\mathbb{P}_{\mathbf{1},x,\mathbf{k},y}^{N}\left(Z_{t}\in{\rm d}z_{t},A_{t-1}=a_{t-1}\left|Z_{t-1}=z_{t-1}\right.\right)\quad. \end{multline*} \item \label{enu:E-I}for $\mathbf{i}\in\mathcal{J}_{T}$ and $y_{1:m_{\mathbf{i}}}=x_{1:m_{\mathbf{i}}}$ we have \textup{ \begin{multline*} \mathbb{E}_{\mathbf{1},x}^{N}\left[I_{\mathbf{i}}\bigl(Z_{1:T},A_{1:T},S\bigr)\right]\\ =\int_{\mathsf{Z}^{T-m_{\mathbf{i}}}}M_{m_{\mathbf{i}},T}(x_{m_{\mathbf{i}}},{\rm d}y_{m_{\mathbf{i}}+1:T})\times\mathbb{E}_{\mathbf{1},x,\mathbf{i},y}^{N}\left[\frac{{\textstyle {\displaystyle {\textstyle \prod_{t=m_{\mathbf{i}}}^{T}}G_{t}(y_{t})}}\times\mathbb{I}\{y\in S\}}{{\displaystyle {\textstyle \prod_{t=m_{\mathbf{i}}}^{T}\sum_{j=1}^{N}}G_{t}(Z_{t}^{j})}}\right]. 
\end{multline*} } \item for $\mathbf{i}\notin\mathcal{J}_{T}$\textup{, $\mathbb{E}_{\mathbf{1},x}^{N}\left[I_{\mathbf{i}}\bigl(Z_{1:T},A_{1:T},S\bigr)\right]=0$.} \end{enumerate} \end{lem} We note that the above is well defined for $m_{\mathbf{i}}=0$ from the definition of $M_{p,l}$ in Section~\ref{sec:The-i-CSMC} and associated remark, and the convention that $x_{1:0}=y_{1:0}$ should be ignored in this case. \begin{proof}[Proof of Lemma~\ref{lem:correspondenceP_1andP_2}] In order to alleviate notation we omit $Z_{t}\in\cdot,Z_{t-1}=\cdot$ and $A_{t-1}=\cdot$ and set $G_{t}^{k}:=G_{t}\bigl(z_{t}^{k}\bigr)$. For the first point we note the independence on $(y_{2},\ldots,y_{T})\in\mathsf{Z}^{T-1}$ of \[ \mathbb{P}_{\mathbf{1},x,\mathbf{k},y}^{N}\left({\rm d}z_{1}\right)=\delta_{x_{1},y_{1}}\bigl({\rm d}z_{1}^{1}\times{\rm d}z_{1}^{k_{1}}\bigr)\prod_{i=2,i\neq k_{1}}^{N}M_{1}({\rm d}z_{1}^{i})\quad, \] and then since $k_{1}\neq1$, \[ \int_{\mathsf{Z}}M_{1}\bigl({\rm d}y_{1}\bigr)\delta_{x_{1},y_{1}}\bigl({\rm d}z_{1}^{1}\times{\rm d}z_{1}^{k_{1}}\bigr)\prod_{i=2,i\neq k_{1}}^{N}M_{1}({\rm d}z_{1}^{i})=\delta_{x_{1}}\bigl({\rm d}z_{1}^{1}\bigr)\prod_{i=2}^{N}M_{1}({\rm d}z_{1}^{i}) \] and conclude from (\ref{eq:def_P_=00007B1,x=00007D-time-1}). Similarly we note the independence on $(y_{1},\ldots,y_{t-1},y_{t+1},\ldots,y_{T})\in\mathsf{Z}^{T-1}$ of \[ \delta_{x_{t},y_{t}}\bigl({\rm d}z_{t}^{1}\times{\rm d}z_{t}^{k_{t}}\bigr)\times\mathbb{I}\{a_{t-1}^{1,k_{t}}=(1,k_{t-1})\}\prod_{i=2,i\neq k_{t}}^{N}\frac{G_{t-1}^{a_{t-1}^{i}}}{\sum_{j=1}^{N}G_{t-1}^{j}}M_{t}(z_{t-1}^{a_{t-1}^{i}},{\rm d}z_{t}^{i}) \] (we note however that we will have $\mathbb{P}_{\mathbf{1},x,\mathbf{k},y}\bigl(Z_{t-1}^{k_{t-1}}\in{\rm d}y_{t-1}\bigr)=1$) and since $k_{t}\neq1$ \begin{multline*} \mathbb{I}\{a_{t-1}^{1,k_{t}}=(1,k_{t-1})\}\int_{\mathsf{Z}}\frac{G_{t-1}^{k_{t-1}}}{\sum_{k=1}^{N}G_{t-1}^{k}}M_{t}\bigl(z_{t-1}^{k_{t-1}},{\rm d}y_{t}\bigr)\delta_{x_{t},y_{t}}\bigl({\rm d}z_{t}^{1}\times{\rm d}z_{t}^{k_{t}}\bigr)\prod_{i=2,i\neq k_{t}}^{N}\frac{G_{t-1}^{a_{t-1}^{i}}}{\sum_{j=1}^{N}G_{t-1}^{j}}M_{t}(z_{t-1}^{a_{t-1}^{i}},{\rm d}z_{t}^{i})\\ =\mathbb{I}\{a_{t-1}^{1}=1\}\delta_{x_{t}}\bigl({\rm d}z_{t}^{1}\bigr)\prod_{i=2}^{N}\frac{G_{t-1}^{a_{t-1}^{i}}}{\sum_{j=1}^{N}G_{t-1}^{j}}M_{t}(z_{t-1}^{a_{t-1}^{i}},{\rm d}z_{t}^{i}) \end{multline*} and we conclude with (\ref{eq:def_P_=00007B1,x=00007D-other-times}). For the second point, let $\mathbf{i}\in\mathcal{J}_{T}$, $a_{1:T}\in\bigl([N]^{N}\bigr)^{T-1}\times[N]$ such that $a_{t-1}^{i_{t}}=i_{t-1}$ for $t=m_{\mathbf{i}}+1,\ldots,T$, $a_{T}=i_{T}$ and $y_{1:m_{\mathbf{i}}}=x_{1:m_{\mathbf{i}}}$ then, with an obvious convention when $m_{\mathbf{i}}=1$ (i.e. 
$a_{0}$ does not exist and should be ignored), we have \begin{align*} {\textstyle \int_{\mathsf{Z}^{T-m_{\mathbf{i}}}}} & \mathbb{I}\{y\in S\}\times{\textstyle {\displaystyle \prod_{t=m_{\mathbf{i}}+1}^{T}}}M_{t}(y_{t-1},{\rm d}y_{t})\frac{G_{t-1}(y_{t-1})}{{\textstyle \sum_{j=1}^{N}}G_{t-1}^{j}}\times\mathbb{P}_{\mathbf{1},x,\mathbf{i},y}^{N}\left({\rm d}z_{m_{\mathbf{i}}+1:T},a_{m_{\mathbf{i}}:T-1}\left|z_{m_{\mathbf{i}}}\right.\right)\\ = & \int_{\mathsf{Z}^{T-m_{\mathbf{i}}}}{\displaystyle \prod_{t=m_{\mathbf{i}}+1}^{T}M_{t}(y_{t-1},{\rm d}y_{t})\frac{G_{t-1}^{i_{t-1}}}{{\textstyle \sum_{j=1}^{N}}G_{t-1}^{j}}}\mathbb{I}\{y\in S,a_{t-1}^{i_{t}}=i_{t-1}\}\mathbb{P}_{\mathbf{1},x,\mathbf{i},y}^{N}\left({\rm d}z_{t},a_{t-1}\left|z_{t-1}\right.\right)\\ = & \mathbb{I}\{(y_{1:m_{\mathbf{i}},}z_{m_{\mathbf{i}}+1:T}^{i_{m_{\mathbf{i}}+1:T}})\in S\}{\textstyle {\displaystyle \prod_{t=m_{\mathbf{i}}+1}^{T}}\mathbb{P}_{\mathbf{1},x}^{N}\left({\rm d}z_{t},a_{t-1}\left|z_{t-1}\right.\right)\mathbb{I}\{a_{t-1}^{i_{t}}=i_{t-1}\}}\\ = & \mathbb{I}\{(y_{1:m_{\mathbf{i}},}z_{m_{\mathbf{i}}+1:T}^{i_{m_{\mathbf{i}}+1:T}})\in S\}{\textstyle {\displaystyle \prod_{t=m_{\mathbf{i}}+1}^{T}}\mathbb{I}\{a_{t-1}^{i_{t}}=i_{t-1}\}}\mathbb{P}_{\mathbf{1},x}^{N}\left({\rm d}z_{m_{\mathbf{i}}+1:T},a_{m_{\mathbf{i}}:T-1}\left|z_{m_{\mathbf{i}}}\right.\right), \end{align*} where we have used the fact that from the structure of $\mathbb{P}_{\mathbf{1},x,\mathbf{i},y}^{N}\bigl(\cdot\bigr)$ we have $z_{m_{\mathbf{i}}+1:T}^{i_{m_{\mathbf{i}}+1:T}}=y_{m_{\mathbf{i}+1}:T}$. We notice that $\mathbb{P}_{\mathbf{1},x,\mathbf{k},y}^{N}\left(A_{T}=k\left|Z_{T}=z_{T}\right.\right)=\mathbb{P}_{\mathbf{1},x}^{N}\left(A_{T}=k\left|Z_{T}=z_{T}\right.\right)$ and conclude from the definition of $\mathbb{P}_{\mathbf{1},x}^{N}\bigl(\cdot\bigr)$. For the third point we remark that for any $z_{1:T},a_{1:T},S\in\bigl(\mathsf{Z}^{N}\bigr)^{T}\times\bigl([N]^{N}\bigr)^{T-1}\times[N]\times\mathcal{B}\bigl(\mathsf{X}\bigr)$ such that $a_{1:T}^{{\bf 1}}\in\{1\}^{T}$ then $I_{{\bf i}}(z_{1:T},a_{1:T},S)=0$ if $\mathbf{i}\notin\mathcal{J}_{T}$ and the result follows from the definition of $\mathbb{E}_{\mathbf{1},x,\mathbf{i},y}^{N}\bigl(\cdot\bigr)$. \end{proof} \section{Proof of Lemma~\ref{lem:selfajoint_positive} \label{sec:Proof-of-Lemma_selfadjoint}} \begin{proof}[Proof of Lemma~\ref{lem:selfajoint_positive}] We can define the artificial joint distribution \begin{equation} \tilde{\pi}({\bf k},{\rm d}z_{1:T},a_{1:T-1}):=\frac{1}{N^{T}}\int_{\mathsf{X}}\pi({\rm d}x)\mathbb{P}_{{\bf k},x}\left(Z\in{\rm d}z_{1:T},A_{1:T-1}=a_{1:T-1}\right).\label{eq:definitionartificialdistribution} \end{equation} This admits as a marginal \[ \tilde{\pi}({\rm d}z_{1:T},a_{1:T-1})=\sum_{{\bf k}\in[N]^{T}}\frac{1}{N^{T}}\int_{\mathsf{X}}\mathbb{P}_{{\bf k},x}\left(Z\in{\rm d}z_{1:T},A_{1:T-1}=a_{1:T-1}\right)\pi({\rm d}x). \] It is straightforward to check that the conditional distribution of ${\bf K}$ given $(z_{1:T},a_{1:T-1})$ can be written \[ \tilde{\pi}_{z_{1:T},a_{1:T-1}}({\bf k})=\frac{G_{T}(z_{T}^{k_{T}})}{\sum_{j=1}^{N}G_{T}(z_{T}^{j})}\prod_{t=2}^{T}\mathbb{I}\left\{ k_{t-1}=a_{t-1}^{k_{t}}\right\} . \] Indeed, we can define the Markov kernel $\tilde{P}_{N}$ \[ \tilde{P}_{N}(x,S):=\sum_{{\bf k}\in[N]^{T}}\frac{1}{N^{T}}\sum_{{\bf i}\in[N]^{T}}\int_{\mathsf{X}^{T}\times[N]^{T-1}}\mathbb{P}_{{\bf k},x}\left(Z_{1:T}\in{\rm d}z_{1:T},A_{1:T-1}=a_{1:T-1}\right)\pi_{z_{1:T},a_{1:T-1}}({\bf i})\mathbb{I}\left\{ z_{1:T}^{{\bf i}}\in S\right\} . 
\] The interpretation of this kernel is that it simulates from the conditional distribution of $\left(Z_{1:T}^{-{\bf k}},A_{1:T-1}\right)$ given $({\bf k},Z_{1:T}^{{\bf k}})$ and then draws ${\bf K}={\bf i}$ conditional upon $\left(Z_{1:T},A_{1:T-1}\right)$, returning $Z_{1:T}^{{\bf i}}$. This provides immediately that $\tilde{P}_{N}$ is a self-adjoint, positive operator on $L^{2}(\mathsf{X},\pi)$ (see Appendix \ref{sec:Supplementary-material-for_minorization}) since \begin{eqnarray*} & & \left\langle \tilde{P}_{N}f,g\right\rangle _{\pi}\\ & = & \int_{\mathsf{X}^{2}}g(x)f(y)\pi({\rm d}x)\tilde{P}_{N}(x,{\rm d}y)\\ & = & \int_{\mathsf{X}^{2}}g(x)f(y)\pi({\rm d}x)\\ & & \sum_{{\bf k}\in[N]^{T}}\frac{1}{N^{T}}\sum_{{\bf i}\in[N]^{T}}\int_{\mathsf{X}^{T}\times[N]^{T-1}}\mathbb{P}_{{\bf k},x}\left(Z\in{\rm d}z_{1:T},A_{1:T-1}=a_{1:T-1}\right)\pi_{z_{1:T},a_{1:T-1}}({\bf i})\delta_{z_{1:T}^{{\bf i}}}({\rm d}y)\\ & = & \sum_{{\bf k}\in[N]^{T}}\sum_{{\bf i}\in[N]^{T}}\int_{\mathsf{X}^{T}\times[N]^{T-1}}g(z_{1:T}^{{\bf k}})f(z_{1:T}^{{\bf i}})\tilde{\pi}({\bf k},{\rm d}z_{1:T},a_{1:T-1})\pi_{z_{1:T},a_{1:T-1}}({\bf i})\\ & = & \sum_{{\bf k}\in[N]^{T}}\sum_{{\bf i}\in[N]^{T}}\int_{\mathsf{X}^{T}\times[N]^{T-1}}g(z_{1:T}^{{\bf k}})f(z_{1:T}^{{\bf i}})\tilde{\pi}({\rm d}z_{1:T},a_{1:T-1})\pi_{z_{1:T},a_{1:T-1}}({\bf k})\pi_{z_{1:T},a_{1:T-1}}({\bf i}). \end{eqnarray*} Self-adjointness of $\tilde{P}_{N}$ follows, since clearly $\left\langle \tilde{P}_{N}f,g\right\rangle _{\pi}=\left\langle f,\tilde{P}_{N}g\right\rangle _{\pi}$ and the positivity follows because \[ \left\langle \tilde{P}_{N}f,f\right\rangle _{\pi}=\int_{\mathsf{X}^{T}\times[N]^{T-1}}\tilde{\pi}({\rm d}z_{1:T},a_{1:T-1})\pi_{z_{1:T},a_{1:T-1}}\left(\tilde{f}_{z_{1:T}}\right)^{2}\geq0, \] where $\tilde{f}_{z_{1:T}}({\bf k}):=f(z_{1:T}^{{\bf k}})$. \\ \\ In fact, when we implement the algorithm, we do not use $\tilde{P}_{N}$. However, we have \begin{eqnarray*} P_{N}(x,S) & = & \mathbb{E}_{{\bf 1},x}\left[\sum_{{\bf i}\in[N]^{T}}I_{{\bf i}}(Z_{1:T},A_{1:T},S)\right]\\ & = & \mathbb{E}_{{\bf k},x}\left[\sum_{{\bf i}\in[N]^{T}}I_{{\bf i}}(Z_{1:T},A_{1:T},S)\right], \end{eqnarray*} for any ${\bf k}\in[N]^{T}$ in the case of multinomial resampling. (see, e.g.,~\citep{chopin:singh:2013}), and as a consequence, $P_{N}(x,S)=\tilde{P}_{N}(x,S)$. \end{proof} \section{Supplementary material for Section~\ref{sec:Minorization-and-Dirichlet} \label{sec:Supplementary-material-for_minorization}} In the next proposition we gather general properties for generic reversible Markov chains satisfying a uniform minorization condition for which the minorization probability is precisely the invariant distribution of the Markov chain. We suspect these results to be widely known, but could not find a relevant reference. Let $L^{2}(\mathsf{E},\mu)$ and $L_{0}^{2}\bigl(\mathsf{E},\mu\bigr):=\bigl\{ f\in L^{2}(\mathsf{E},\mu):\mu\bigl(f\bigr)=0\bigr\}$ both endowed with the inner product defined for any $f,g\in L^{2}(\mathsf{E},\mu)$ as $\left\langle f,g\right\rangle _{\mu}:=\int_{\mathsf{E}}f(x)g(x)\mu({\rm d}x)$, which yields the associated norm $\|f\|_{\mu}:=\sqrt{\left\langle f,f\right\rangle _{\mu}}$. For any $f\in L^{2}(\mathsf{E},\mu)$ we define the Dirichlet forms \begin{align*} \mathcal{E}_{\Pi}(f) & :=\left\langle f,(I-\Pi)f\right\rangle _{\mu}\quad, \end{align*} where $I$ is the identity operator. 
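As a simple illustration, which will reappear in the proof of Proposition~\ref{prop:boundsspectralgapandvarianceforP_N} below, consider the ``independent samples'' kernel $\Pi(x,\cdot)=\mu(\cdot)$: since $\Pi f=\mu(f)$, one finds, for any $f\in L^{2}(\mathsf{E},\mu)$,
\[
\mathcal{E}_{\Pi}(f)=\left\langle f,f-\mu(f)\right\rangle _{\mu}=\|f\|_{\mu}^{2}-\mu(f)^{2}={\rm var}_{\mu}(f)\quad,
\]
which is the quantity denoted $\mathcal{E}_{\mu}(f)$ in that proof.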
The right and left spectral gaps of a generic reversible Markov transition kernel have the following variational representation
\[
{\rm Gap}\left(\Pi\right):=\inf_{f\in L_{0}^{2}(\mathsf{E},\mu)}\frac{\mathcal{E}_{\Pi}(f)}{\|f\|_{\mu}^{2}}\;\text{and}\;{\rm Gap}_{L}\left(\Pi\right):=2-\sup_{f\in L_{0}^{2}(\mathsf{E},\mu)}\frac{\mathcal{E}_{\Pi}(f)}{\|f\|_{\mu}^{2}}\quad.
\]
The conditions ${\rm Gap}\left(\Pi\right)>0$ and ${\rm Gap}_{L}\left(\Pi\right)>0$ together imply geometric ergodicity of the Markov chain. It turns out that convergence is in fact uniformly geometric in the following scenario.
\begin{prop}
\label{prop:boundsspectralgapandvarianceforP_N}Let $\mu$ be a probability distribution on some measurable space $\bigl(\mathsf{E},\mathcal{B}\bigl(\mathsf{E}\bigr)\bigr)$ and let $\Pi:\mathsf{E}\times\mathcal{B}\bigl(\mathsf{E}\bigr)\rightarrow[0,1]$ be a Markov transition kernel reversible with respect to $\mu$. Assume that there exists $\varepsilon>0$ such that for any $(x,A)\in\mathsf{E}\times\mathcal{B}\bigl(\mathsf{E}\bigr)$,
\[
\Pi(x,A)\geq\varepsilon\mu(A)\quad,
\]
then
\begin{enumerate}[label=(\alph*)]
\item \label{enu:Dirichlet-var}the Dirichlet forms satisfy, for any $f\in L^{2}\bigl(\mathsf{E},\mu\bigr)$,
\begin{align*}
\varepsilon\mathrm{var}_{\mu}(f)\leq\mathcal{E}_{\Pi}(f) & \le(2-\varepsilon)\mathrm{var}_{\mu}(f)\;,
\end{align*}
\item \label{enu:the-spectral-gaps}the spectral gaps are lower bounded by
\[
\min\left\{ {\rm Gap}\bigl(\Pi\bigr),{\rm Gap}_{L}\left(\Pi\right)\right\} \geq\varepsilon,
\]
\item \label{enu:rudolfclassicalresult}for any probability distribution $\nu\ll\mu$ and any $k\in\mathbb{N}$,
\[
\|\nu\Pi^{k}\bigl(\cdot\bigr)-\mu\bigl(\cdot\bigr)\|_{L^{2}(\mathsf{E},\mu)}\leq\|\nu-\mu\|_{L^{2}(\mathsf{E},\mu)}(1-\varepsilon)^{k},
\]
\item \label{enu:TVvariationconvergenceKontoMeyn}for any probability distribution $\nu\ll\mu$ and any $k\in\mathbb{N}$ we have
\[
\|\nu\Pi^{k}\bigl(\cdot\bigr)-\mu\bigl(\cdot\bigr)\|_{TV}\leq\frac{1}{2}\|\nu-\mu\|_{L^{2}(\mathsf{E},\mu)}\left(1-\varepsilon\right)^{k}\quad,
\]
\item \label{enu:uniformdoeblin}for any $x\in\mathsf{E}$ and any $k\in\mathbb{N}$,
\[
\|\delta_{x}\Pi^{k}\bigl(\cdot\bigr)-\mu\bigl(\cdot\bigr)\|_{TV}\leq\left(1-\varepsilon\right)^{k}\quad,
\]
\item for any $f\in L^{2}\bigl(\mathsf{E},\mu\bigr)$
\[
\frac{\varepsilon}{2-\varepsilon}{\rm var}_{\mu}\bigl(f\bigr)\leq{\rm var}\bigl(f,\Pi\bigr)\leq\left(2\varepsilon^{-1}-1\right){\rm var}_{\mu}\bigl(f\bigr)\quad,
\]
and, if $\Pi$ is a positive operator, then naturally ${\rm var}\bigl(f,\Pi\bigr)\geq{\rm var}_{\mu}\bigl(f\bigr)$.
\end{enumerate}
\end{prop}
\begin{proof}[Proof of Proposition~\ref{prop:boundsspectralgapandvarianceforP_N}]
First, from the minorization condition one can write $\Pi(x,{\rm d}y)=\varepsilon\mu({\rm d}y)+(1-\varepsilon)R_{\Pi,\varepsilon}(x,{\rm d}y)$, where $R_{\Pi,\varepsilon}(x,A):=\frac{\Pi(x,A)-\varepsilon\mu(A)}{1-\varepsilon}$ is $\mu$-invariant.
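Note that $R_{\Pi,\varepsilon}$ is indeed a Markov kernel, since the minorization condition guarantees $\Pi(x,A)-\varepsilon\mu(A)\ge0$ for every $(x,A)$, and its $\mu$-invariance can be checked directly from the $\mu$-invariance of $\Pi$:
\[
\mu R_{\Pi,\varepsilon}(A)=\frac{\mu\Pi(A)-\varepsilon\mu(A)}{1-\varepsilon}=\frac{\mu(A)-\varepsilon\mu(A)}{1-\varepsilon}=\mu(A)\quad.
\]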
Now for $f\in L_{0}^{2}\big(\mathsf{E},\mu\big)$ \begin{eqnarray*} \left\langle f,\Pi f\right\rangle _{\mu} & = & \varepsilon\left\langle f,\mu(f)\right\rangle _{\mu}+(1-\varepsilon)\left\langle f,R_{\Pi,\varepsilon}f\right\rangle _{\mu}\\ & = & (1-\varepsilon)\left\langle f,R_{\Pi,\varepsilon}f\right\rangle _{\mu} \end{eqnarray*} and therefore with $\mathcal{E}_{\mu}(f)=\left\langle f,f\right\rangle _{\mu}$ the Dirichlet form of the (reversible) ``independent samples'' Markov chain we deduce \begin{align*} \varepsilon\mathcal{E}_{\mu}(f)\leq\mathcal{E}_{\Pi}(f) & \leq(2-\varepsilon)\left\langle f,f\right\rangle _{\mu}=(2-\varepsilon)\mathcal{E}_{\mu}(f)\;, \end{align*} which implies \ref{enu:Dirichlet-var}. The bounds on the spectral gaps \ref{enu:the-spectral-gaps} follow immediately and the results in points~\ref{enu:rudolfclassicalresult} and~\ref{enu:TVvariationconvergenceKontoMeyn} are now a consequence of the resulting property of the spectrum and e.g.~\citep[Proposition 3.12, p. 44]{rudolf2011explicit} and~\citep[Proposition 1.5]{kontoyiannis:meyn:2012}. Result~\ref{enu:uniformdoeblin} is due to Doeblin~\citep{lindvall}, while the two bounds on the asymptotic variance are direct consequences of Lemma~\ref{lem:minorizationdirichletboundgapvariance} and coincide in this case with the ``Kipnis--Varadhan'' upper bound \citep{kipnis1986central}. \end{proof} \begin{lem} \label{lem:minorizationdirichletboundgapvariance}Let $\Pi_{1},\Pi_{2}$ be reversible with respect to $\mu$ and assume that there exists $\varrho\ge0$ such that for any $f\in L_{0}^{2}\bigl(\mathsf{E},\mu\bigr)$ \[ \mathcal{E}_{\Pi_{2}}\bigl(f\bigr)\geq\varrho\mathcal{E}_{\Pi_{1}}\bigl(f\bigr)\quad, \] then \[ {\rm Gap}\bigl(\Pi_{2}\bigr)\geq\varrho{\rm Gap}\bigl(\Pi_{1}\bigr)\quad, \] and if $\varrho>0$ \[ {\rm var}\bigl(f,\Pi_{2}\bigr)\leq(\varrho^{-1}-1){\rm var}_{\pi}(f)+\varrho^{-1}{\rm var}\left(f,\Pi_{1}\right)\quad. \] \end{lem} \begin{proof} The first result is straightforward. For the second result, first notice that \begin{align*} \sup_{g\in L_{0}^{2}\bigl(\mathsf{E},\mu\bigr)}2\bigl\langle f,g\bigr\rangle_{\mu}-\mathcal{E}_{\Pi_{2}}(g) & \leq\sup_{g\in L_{0}^{2}\bigl(\mathsf{E},\mu\bigr)}2\bigl\langle f,g\bigr\rangle_{\mu}-\varrho\mathcal{E}_{\Pi_{1}}(g)\\ & =\varrho^{-1}\Big(\sup_{g\in L_{0}^{2}\bigl(\mathsf{E},\mu\bigr)}2\bigl\langle f,\varrho g\bigr\rangle_{\mu}-\mathcal{E}_{\Pi_{1}}(\varrho g)\Big)\\ & =\varrho^{-1}\Big(\sup_{g\in L_{0}^{2}\bigl(\mathsf{E},\mu\bigr)}2\bigl\langle f,g\bigr\rangle_{\mu}-\mathcal{E}_{\Pi_{1}}(g)\Big)\quad, \end{align*} and since ${\rm var}\bigl(f,\Pi\bigr)=2\bigl[\sup_{g\in L_{0}^{2}\bigl(\mathsf{E},\mu\bigr)}2\bigl\langle f,g\bigr\rangle_{\mu}-\mathcal{E}_{\Pi}(g)\bigr]-\|f\|_{\mu}^{2}$ we conclude that \[ {\rm var}\bigl(f,\Pi_{2}\bigr)\leq(\varrho^{-1}-1){\rm var}_{\mu}(f)+\varrho^{-1}{\rm var}\left(f,\Pi_{1}\right)\quad. \] \end{proof} \section{Supplementary material for Section~\ref{sec:Estimates-of-the} \label{sec:Supplementary-quantitative}} The proof of Proposition~\ref{prop:doubleconditionalSMC} relies on the following technical lemma, and is given after this intermediate result. 
\begin{lem} \label{lem:backwardinduction}Let $x,y\in\mathsf{X}$, then, \begin{enumerate}[label=(\alph*)] \item \label{enu:condexpect_a}\textup{for any $t\geq2$, $z_{1:t-1}\in\mathsf{Z}^{N(t-1)}$ such that $(z_{1:t-\text{1}}^{1},z_{1:t-\text{1}}^{2})=(x_{1:t-1},y_{1:t-1})$ and $f_{t}:\mathsf{Z}\rightarrow\mathbb{R}$ we have \begin{multline*} \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\sum_{m=1}^{N}f_{t}(Z_{t}^{m})\,\Bigg|\,Z_{t-1}=z_{t-1}\right]\\ =f_{t}(x_{t})+f_{t}(y_{t})+\frac{N-2}{\sum_{l=1}^{N}G_{t-1}(z_{t-1}^{l})}\sum_{k=1}^{N}Q_{t-1,t}(f_{t})(z_{t-1}^{k})\quad, \end{multline*} } \item \label{enu:condexpect_b}for any $k=1,\dots,T-1$, any \textup{$z_{1:T-k}\in\mathsf{Z}^{N(T-k)}$ such that $(z_{1:T-k}^{1},z_{1:T-\text{k}}^{2})=(x_{1:T-k},y_{1:T-k})$} \[ \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\prod_{t=T-k+1}^{T}\sum_{j=1}^{N}G_{t}(Z_{t}^{j})\,\Bigg|\,Z_{T-k}=z_{T-k}\right]=A_{T-k}+B_{T-k} \] where \begin{align*} A_{T-k}: & =\sum_{s=1}^{k}(N-2)^{k-s}\sum_{\mathbf{i}\in\mathcal{I}_{k,s}}\left[G_{T-k+1,i_{\text{1}}}\left(x_{T-k+1}\right)+G_{T-k+1,i_{\text{1}}}(y_{T-k+1})\right]C_{k,s}\bigl(\mathbf{i},x,y\bigr)\quad,\\ B_{T-k}: & =\frac{N-2}{\sum_{l=1}^{N}G_{T-k}(z_{T-k}^{l})}\bigg(\sum_{s=1}^{k}(N-2)^{k-s}\sum_{\mathbf{i}\in\mathcal{I}_{k,s}}\sum_{r=1}^{N}G_{T-k,i_{\text{1}}}\bigl(z_{T-k}^{r}\bigr)C_{k,s}\bigl(\mathbf{i},x,y\bigr)\bigg)\quad, \end{align*} and $\mathcal{I}_{k,s}$ and $C_{k,s}$ are as in Proposition~\ref{prop:doubleconditionalSMC}. \end{enumerate} \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:backwardinduction}] The property in~\ref{enu:condexpect_a} is immediate from the linearity of the expectation and the definition of the process. We now prove property~\ref{enu:condexpect_b} by induction on $k=1,\ldots,T-1$. In order to alleviate notation we let $G_{p,q}^{i}:=G_{p,q}\bigl(Z_{p}^{i}\bigr)$ when found inside an expectation and $G_{p,q}^{i}:=G_{p,q}\bigl(z_{p}^{i}\bigr)$ otherwise, $G_{p,q}^{1+2}:=G_{p,q}\bigl(x_{p}\bigr)+G_{p,q}\bigl(y_{p}\bigr)$ and $C_{k,s}(\mathbf{i}):=C_{k,s}(\mathbf{i},x,y)$. The case $k=1$ follows from~\ref{enu:condexpect_a} with $t=T$ by observing that $\mathcal{I}_{1,1}=\bigl\{ T+1\bigr\}$, $C_{1,1}\bigl(\mathbf{i},x,y\bigr)=1$ and that $G_{T-1,T+1}^{r}=Q_{T-1,T}\bigl(G_{T}\bigr)\bigl(z_{T-1}^{r}\bigr)$ : \begin{align*} \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N} & \left[\sum_{m=1}^{N}G_{T}^{m}\,\Bigg|\,Z_{T-1}=z_{T-1}\right]\\ & =G_{T}(x_{T})+G_{T}(y_{T})+\frac{N-2}{\sum_{l=1}^{N}G_{T-1}(z_{T-1}^{l})}\sum_{k=1}^{N}Q_{T-1,T}(G_{T})(z_{T-1}^{k})\\ & =G_{T}(x_{T})+G_{T}(y_{T})+\frac{N-2}{\sum_{l=1}^{N}G_{T-1}(z_{T-1}^{l})}\sum_{k=1}^{N}G_{T-1,T+1}^{k}\quad. \end{align*} Now we assume the property true for some $k\in\{1,\ldots,T-2\}$ and establish it for $k+1$. We have \begin{align*} \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\prod_{t=T-k}^{T}\sum_{j=1}^{N}G_{t}^{j}\,\Bigg|\,Z_{T-k-1}=z_{T-k-1}\right] & =A+B\quad, \end{align*} with \begin{align*} A:= & \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[A_{T-k}\sum_{j=1}^{N}G_{T-k}^{j}\,\Bigg|\,Z_{T-k-1}=z_{T-k-1}\right]\\ B:= & \mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[B_{T-k}\sum_{j=1}^{N}G_{T-k}^{j}\,\Bigg|\,Z_{T-k-1}=z_{T-k-1}\right]\quad, \end{align*} and we deal with the two terms separately. 
Observe that $A_{T-k}$ only depends on $x_{T-k+1:T}$ and $y_{T-k+1:T}$, then by application of the first result of the lemma we obtain \[ A=A_{T-k}\,\left(G_{T-k}^{1+2}+\frac{N-2}{\sum_{l=1}^{N}G_{T-k-1}^{l}}\sum_{l=1}^{N}G_{T-k-1,T-k+1}^{l}\right) \] and, noting that $C_{k,s}(\mathbf{i})$ depends on $x_{T-k+2:T},y_{T-k+2:T}$ only \begin{align*} B & =\sum_{s=1}^{k}(N-2)^{k+1-s}\sum_{\mathcal{I}_{k,s}}\mathbb{E}_{\mathbf{1},x,\mathbf{2},y}^{N}\left[\sum_{r=1}^{N}G_{T-k,i_{\text{1}}}^{r}\,\Bigg|\,Z_{T-k-1}=z_{T-k-1}\right]C_{k,s}\bigl(\mathbf{i}\bigr)\\ & =\sum_{s=1}^{k}(N-2)^{k+1-s}\sum_{\mathcal{I}_{k,s}}\bigg[G_{T-k,i_{\text{1}}}^{1+2}+\frac{N-2}{\sum_{l=1}^{N}G_{T-k-1}^{l}}\sum_{r=1}^{N}G_{T-k-1,i_{\text{1}}}^{r}\bigg]C_{k,s}\bigl(\mathbf{i}\bigr)\quad, \end{align*} where we have again applied the first result of the lemma. Consequently we can group the terms as follows \begin{multline} A+B=A_{T-k}G_{T-k}^{1+2}+\sum_{s=1}^{k}(N-2)^{k+1-s}\sum_{\mathcal{I}_{k,s}}G_{T-k,i_{\text{1}}}^{1+2}C_{k,s}\bigl(\mathbf{i}\bigr)\\ +\frac{N-2}{\sum_{l=1}^{N}G_{T-k-1}^{l}}\left[A_{T-k}\sum_{l=1}^{N}G_{T-k-1,T-k+1}^{l}+\sum_{s=1}^{k}(N-2)^{k+1-s}\sum_{\mathcal{I}_{k,s}}\bigg(\sum_{r=1}^{N}G_{T-k-1,i_{\text{1}}}^{r}\bigg)C_{k,s}\bigl(\mathbf{i}\bigr)\right]\quad.\label{eq:A+B} \end{multline} Now we first focus on the first term on on the RHS on the first line (with the sum now written in extension in order to help and we note that we do not use the double indexing $i_{j}^{s}$ in order to keep notation simple), \begin{align*} A_{T-k}G_{T-k}^{1+2} & =\sum_{s=1}^{k}(N-2)^{k-s}G_{T-k}^{1+2}\sum_{T-k+1<i_{1}\cdots<i_{s-1}<i_{s}=T+1}G_{T-k+1,i_{\text{1}}}^{1+2}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & =\sum_{s=1}^{k}(N-2)^{k-s}\sum_{i_{0}=T-k+1<i_{1}\cdots<i_{s-1}<i_{s}=T+1}G_{T-k,i_{\text{0}}}^{1+2}\prod_{m=0}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & =\sum_{s'=2}^{k+1}(N-2)^{k+1-s'}\sum_{j_{1}=T-k+1<j_{2}\cdots<j_{s'-1}<j_{s'}=T+1}G_{T-k,j_{1}}^{1+2}\prod_{m=1}^{s'-1}G_{j_{m},j_{m+1}}^{1+2}\quad, \end{align*} where we have used the following changes of variables: $j_{m}=i_{m-1}$ for $m=1,\ldots,s+1$ followed by $s=s'-1$. Note that we can extend the sum in order to include the term $s'=1$, since we cannot have $j_{1}=T+1\neq T-k+1=j_{1}$. We examine the second term on the RHS of the first line of (\ref{eq:A+B}) \[ \sum_{s=1}^{k}(N-2)^{k+1-s}\sum_{T-k+1<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}G_{T-k,i_{\text{1}}}^{1+2}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\quad, \] and we notice that we can extend the sum in order to include the term $s=k+1$ because $\sharp\left\{ T-k+2,\ldots,T+1\right\} =k$, which implies that $\mathcal{I}_{k,k+1}=\emptyset$. Consequently we deduce that \[ A_{T-k}G_{T-k}^{1+2}+\sum_{s=1}^{k}(N-2)^{k+1-s}\sum_{\mathcal{I}_{k,s}}G_{T-k,i_{\text{1}}}^{1+2}C_{k,s}\bigl(\mathbf{i}\bigr)=A_{T-(k+1)}\quad. \] We now turn to the second line of (\ref{eq:A+B}) and examine the two terms within the brackets and use similar ideas. 
First we have \begin{align*} \sum_{l=1}^{N} & G_{T-k-1,T-k+1}^{l}A_{T-k}\\ & =\sum_{s=1}^{k}(N-2)^{k-s}\sum_{l=1}^{N}G_{T-k-1,T-k+1}^{l}\sum_{T-k+1<i_{1}\cdots<i_{s-1}<i_{s}=T+1}G_{T-k+1,i_{\text{1}}}^{1+2}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & =\sum_{s=1}^{k}(N-2)^{k-s}\sum_{i_{0}=T-k+1<i_{1}\cdots<i_{s-1}<i_{s}=T+1}\sum_{l=1}^{N}G_{T-k-1,i_{0}}^{l}\prod_{m=0}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & =\sum_{s=2}^{k+1}(N-2)^{k+1-s}\sum_{i_{1}=T-k+1<i_{2}\cdots<i_{s-1}<i_{s}=T+1}\sum_{l=1}^{N}G_{T-k-1,i_{1}}^{l}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2} \end{align*} and the other term is, in extension, \[ \sum_{s=1}^{k}(N-2)^{k+1-s}\sum_{T-k+1<i_{1}\cdots<i_{s-1}<i_{s}=T+1}\bigl[\sum_{r=1}^{N}G_{T-k-1,i_{\text{1}}}^{r}\bigr]\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\quad. \] We therefore conclude that \begin{align*} \frac{N-2}{\sum_{l=1}^{N}G_{T-k-1}^{l}}\left[A_{T-k}\sum_{l=1}^{N}G_{T-k-1,T-k+1}^{l}+\sum_{s=1}^{k}(N-2)^{k+1-s}\sum_{\mathcal{I}_{k,s}}\bigl[\sum_{r=1}^{N}G_{T-k-1,i_{\text{1}}}^{r}\bigr]C_{k,s}\bigl(\mathbf{i}\bigr)\right]\\ =B_{T-(k+1)}\;, \end{align*} which finishes the proof. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:doubleconditionalSMC}] We start with the second result of Lemma~\ref{lem:backwardinduction} for $k=T-1$ and we proceed as in the beginning of the proof of that lemma, using similar notation and arguments. Here we have however \begin{align*} A & =A_{1}\times\left(G_{1}^{1+2}+(N-2)G_{0,2}\right)\\ & =\sum_{s=2}^{T}(N-2)^{T-s}\sum_{i_{1}=2<i_{2}\cdots<i_{s-1}<i_{s}=T+1}G_{1,i_{1}}^{1+2}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & +(N-2)\sum_{s=2}^{T}(N-2)^{T-s}\sum_{i_{1}=2<i_{2}<\cdots<i_{s-1}<i_{s}=T+1}G_{0,i_{\text{1}}}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\quad, \end{align*} and \begin{align*} B & =\sum_{s=1}^{T-1}(N-2)^{T-s}\sum_{\mathcal{I}_{T-1,s}}\bigl[G_{1,i_{\text{1}}}^{1+2}+(N-2)G_{0,i_{\text{1}}}\bigr]\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & =\sum_{s=1}^{T-1}(N-2)^{T-s}\sum_{2<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}G_{1,i_{\text{1}}}^{1+2}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & +\sum_{s=1}^{T-1}(N-2)^{T+1-s}\sum_{2<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}G_{0,i_{\text{1}}}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\quad, \end{align*} and using arguments similar to those of the proof of Lemma~\ref{lem:backwardinduction}, \begin{align*} A+B & =\sum_{s=1}^{T}(N-2)^{T-s}\sum_{1<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}G_{1,i_{\text{1}}}^{1+2}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & +(N-2)\sum_{s=1}^{T}(N-2)^{T-s}\sum_{1<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}G_{0,i_{\text{1}}}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\quad, \end{align*} which can be rewritten as (again we use $s'-1=s$ and the fact that $G_{0,1}=G_{0}=1$ by convention) \begin{align*} A+B & =\sum_{s=1}^{T}(N-2)^{T-s}\sum_{i_{0}=1<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}\prod_{m=0}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & +\sum_{s=1}^{T}(N-2)^{T+1-s}\sum_{1<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}G_{0,i_{\text{1}}}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & =\sum_{s'=2}^{T+1}(N-2)^{T+1-s'}\sum_{i_{1}=1<i_{2}<\cdots<i_{s'-1}<i_{s'}=T+1}\prod_{m=1}^{s'-1}G_{i_{m},i_{m+1}}^{1+2}\\ & +\sum_{s=1}^{T}(N-2)^{T+1-s}\sum_{1<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}G_{0,i_{\text{1}}}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\\ & =\sum_{s=1}^{T+1}(N-2)^{T+1-s}\sum_{0<i_{1}<\cdots<i_{s-1}<i_{s}=T+1}G_{0,i_{\text{1}}}\prod_{m=1}^{s-1}G_{i_{m},i_{m+1}}^{1+2}\quad. \end{align*} We conclude. 
\end{proof} \begin{proof}[Proof of Lemma~\ref{lem:A2impliesA1}] Consider first the case where $k\geq m$, for $z_{p},z'_{p}\in\mathsf{Z}^{2}$, \begin{align*} \frac{Q_{p,p+k}(1)(z_{p})}{Q_{p,p+k}(1)(z'_{p})} & =\frac{G_{p}(z_{p})}{G_{p}(z'_{p})}\frac{M_{p+1}\left(G_{p+1}M_{p+2}Q_{p+2,p+k}(1)\right)(z_{p})}{M_{p+1}\left(G_{p+1}M_{p+2}Q_{p+2,p+k}(1)\right)(z'_{p})}\\ & \leq\delta^{1/m}\frac{\sup_{z\in\mathsf{Z}}G_{p+1}(z)}{\inf_{z'\in\mathsf{Z}}G_{p+1}(z')}\frac{M_{p,p+2}\left(Q_{p+2,p+k}(1)\right)(z_{p})}{M_{p,p+2}\left(Q_{p+2,p+k}(1)\right)(z'_{p})}\\ & \leq\delta\frac{M_{p,p+m}\left(Q_{p+m,p+k}(1)\right)(z_{p})}{M_{p,p+m}\left(Q_{p+m,p+k}(1)\right)(z'_{p})}\quad, \end{align*} by using (A\ref{hyp:strongmixingpotentialassumptions})\ref{hyp:enu:Gcondition} and a straightforward induction. Now we can conclude by using (A\ref{hyp:strongmixingpotentialassumptions})\ref{hyp:enu:Mcondition}. When $k<m$ we simply note that, proceeding as above, for any $z_{p},z'_{p}\in\mathsf{X}^{2}$, \[ \frac{Q_{p,p+k}(1)(z_{p})}{Q_{p,p+k}(1)(z'_{p})}\leq\delta^{k/m}\leq\delta\leq\beta\delta\quad. \] \end{proof} \section{Supplementary material for Section~\ref{sec:conjectureonboundedness}} \begin{lem} \label{lem:tightness}Assume that $\{\mu_{x}\}_{x\in\mathsf{X}}$ is a family of finite measures on $(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))$ such that $x\mapsto\mu_{x}(A)$ is a measurable mapping for each $A\in\mathcal{B}(\mathsf{\mathbb{R}}^{d})$, and suppose that $\xi$ is a probability measure on $(\mathsf{X},\mathcal{B}(\mathsf{X}))$. For any $\epsilon>0$ there exists a set $A\in\mathcal{B}(\mathsf{X})$ such that $\{\mu_{x}\}_{x\in A}$ is tight and $\xi(A)\ge1-\epsilon$.\end{lem} \begin{proof} Denote by $B_{r}$ the closed ball of radius $r$ centred at the origin and define the sets \[ A_{k,r}:=\big\{ x\in\mathsf{X}\,:\,\mu_{x}\big(B_{r}^{\complement}\big)\le k^{-1}\big\}, \] for $k\in\mathbb{N}$ and $r\in\mathbb{R}_{+}$; observe that $A_{k,r}\in\mathcal{B}(\mathsf{X})$. Define the finite constants \[ r_{\epsilon,k}:=\inf\big\{ r\in\mathbb{R}_{+}\,:\,\xi(A_{k,r})\ge1-\epsilon2^{-k}\big\}. \] We may define $A:=\cap_{k\ge1}A_{k,r_{\epsilon,k}}$ which satisfies $\xi(A^{\complement})\le\sum_{k=1}^{\infty}\epsilon2^{-k}=\epsilon$. \end{proof} \section{\label{sec:Comparison-with-Particle}Detailed comparisons with the PIMH and PMMH} In this section we contrast the performance properties of the i-cSMC (resp. PGibbs sampler), as established in Section~\ref{sec:Estimates-of-the} (resp. Section~\ref{sec:The-particle-Gibbs}), with those of the Particle Independent Metropolis--Hastings kernel (PIMH) (resp. particle Marginal Metropolis--Hastings (PMMH)) also proposed in~\citep{andrieu-doucet-holenstein}, which also aims to (indirectly) sample from $\pi$ as defined in Section~\ref{sec:The-i-CSMC} (resp. Section~\ref{sec:The-particle-Gibbs}). We use notation similar to that used in Section~\ref{sec:The-i-CSMC} for the i-cSMC algorithm. 
The Markov kernel of the PIMH can be defined for $(a,z)\in\mathsf{W}$ (with a slight abuse of notation), $W\in\mathcal{B}\bigl(\mathsf{W}\bigr)$ and $N\geq1$ as
\[
\check{P}_{N}(a,z;W):=\mathbb{E}^{N}\left[\mathbb{I}\{(A,Z)\in W\}\left\{ 1\wedge\frac{\hat{\gamma}_{T}^{N}(Z)}{\hat{\gamma}_{T}^{N}(z)}\right\} +\left\{ 1-1\wedge\frac{\hat{\gamma}_{T}^{N}(Z)}{\hat{\gamma}_{T}^{N}(z)}\right\} \mathbb{I}\{(a,z)\in W\}\right]\;,
\]
where $\mathbb{E}^{N}$ is the expectation corresponding to the law $\mathbb{P}^{N}$ of the standard SMC algorithm, defined on $\mathsf{W}\times\mathcal{B}\bigl(\mathsf{W}\bigr)$ via the following conditionals, with $z_{t}\in\mathsf{Z}^{N}$ for $t\in[T]$, $a_{t}\in[N]^{N}$ for $t\in[T-1]$ and $a_{T}\in[N]$,
\[
\mathbb{P}^{N}\left(Z_{1}\in{\rm d}z_{1}\right):=\prod_{i=1}^{N}M_{1}({\rm d}z_{1}^{i})\quad,
\]
and for $t\in\{2,\ldots,T\}$,
\begin{align*}
\mathbb{P}^{N}\big(Z_{t}\in{\rm d}z_{t},A_{t-1}=a_{t-1} & \mid Z_{1:t-1}=z_{1:t-1},A_{1:t-2}=a_{1:t-2}\big)\\
= & \mathbb{P}^{N}\left(Z_{t}\in{\rm d}z_{t},A_{t-1}=a_{t-1}\mid Z_{t-1}=z_{t-1}\right)\\
:= & \prod_{i=1}^{N}\sum_{k=1}^{N}\frac{G_{t-1}(z_{t-1}^{k})}{\sum_{j=1}^{N}G_{t-1}(z_{t-1}^{j})}\mathbb{I}\left\{ a_{t-1}^{i}=k\right\} M_{t}(z_{t-1}^{k},{\rm d}z_{t}^{i})\quad\text{ and}\\
\mathbb{P}^{N}\left(A_{T}=a_{T}\mid Z_{T}=z_{T}\right)= & \frac{G_{T}(z_{T}^{a_{T}})}{\sum_{j=1}^{N}G_{T}(z_{T}^{j})}\quad,
\end{align*}
$A=A_{1:T}$ and $Z:=Z_{1:T}$. We note that $\hat{\gamma}_{T}^{N}(z)$ is not a random quantity in the definition of $\check{P}_{N}(a,z;W)$. The invariant distribution of the Markov chain, which evolves on $\mathsf{W}$, is given for any $W\in\mathcal{B}\bigl(\mathsf{W}\bigr)$ by
\begin{equation}
\check{\pi}^{N}(W):=\sum_{{\bf i}\in[N]^{T}}\frac{1}{N^{T}}\int_{\mathsf{X}}\mathbb{P}_{{\bf i},x}\left((A,Z)\in W\right)\pi({\rm d}x)\quad.\label{eq:def_check_pi}
\end{equation}
As suggested by its name and as shown in~\citep{andrieu-doucet-holenstein}, this algorithm can be interpreted as a standard independent Metropolis--Hastings (IMH) kernel with target distribution $\check{\pi}^{N}$ and proposal distribution the standard SMC law $\mathbb{P}^{N}$. Samples from $\pi$ can be recovered as a byproduct of $A$ and $Z$~\citep{andrieu-doucet-holenstein}: this should not be surprising since $\check{\pi}^{N}$ is the invariant distribution of the i-cSMC algorithm, seen as a Markov chain on the extended space $\mathsf{W}$ rather than on $\mathsf{X}$ alone. The interpretation as an IMH algorithm allows us to use a well-known result of~\citep{mengersen-tweedie} to deduce that the PIMH is (uniformly) geometrically ergodic if and only if $\check{\pi}^{N}-{\rm ess}\sup_{z}\hat{\gamma}_{T}^{N}(z)<\infty$, with rate $r\leq1-\check{\epsilon}_{N}$ where
\[
\check{\epsilon}_{N}:=\frac{\gamma_{T}}{\check{\pi}^{N}-{\rm ess}\sup_{z}\hat{\gamma}_{T}^{N}(z)}\quad.
\]
Clearly $\check{\epsilon}_{N}>0$ whenever $\check{\pi}^{N}-{\rm ess}\sup_{z}\hat{\gamma}_{T}^{N}(z)<\infty$, which is similar to what we have obtained in Propositions~\ref{prop:boundepsilonNwithuniformboundG} and~\ref{prop:mixing_bound} for the i-cSMC. An important difference, which may explain the widely perceived superiority of the i-cSMC, is that the rate of convergence of the PIMH will typically not improve as $N$ increases (and in particular will not converge to $1$), even for bounded potentials, which is in contrast with the corresponding convergence rate of the i-cSMC (see Propositions~\ref{prop:boundepsilonNwithuniformboundG} and~\ref{prop:mixing_bound}).
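For concreteness, the accept--reject mechanism just described can be summarised by the following minimal \texttt{R} sketch. It is an illustration of the structure of the PIMH only, not an implementation accompanying this paper: the function \texttt{run\_smc} is a placeholder for a user-supplied routine returning a sampled trajectory together with the normalising constant estimate $\hat{\gamma}_{T}^{N}$ produced by a standard SMC run with $N$ particles.
\begin{verbatim}
# Sketch of the PIMH loop (illustration only). `run_smc(N)` is assumed to
# return list(traj = <sampled trajectory>, gamma_hat = <estimate of gamma_T>).
pimh <- function(run_smc, n_iter, N) {
  cur <- run_smc(N)                      # initial proposal
  out <- vector("list", n_iter)
  for (it in seq_len(n_iter)) {
    prop <- run_smc(N)                   # independent SMC proposal
    if (runif(1) < min(1, prop$gamma_hat / cur$gamma_hat)) {
      cur <- prop                        # accept the proposed (A, Z)
    }
    out[[it]] <- cur$traj                # retained trajectory at iteration it
  }
  out
}
\end{verbatim}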
We can also compare the results of Section~\ref{sec:The-particle-Gibbs} for the PGibbs sampler with the corresponding results for the PMMH algorithm \citep{andrieu-doucet-holenstein}. The latter algorithm evolves on $\Theta\times\mathsf{W}$ with transition probability
\begin{align*}
\check{\Phi}_{N}\bigl(\theta,a,z;S\times W\bigr)= & \int_{S}\mathbb{E}_{\vartheta}^{N}\left[\mathbb{I}\{(A,Z)\in W\}\left\{ 1\wedge\frac{\varpi({\rm d}\vartheta)q(\vartheta,{\rm d}\theta)}{\varpi({\rm d}\theta)q(\theta,{\rm d}\vartheta)}\frac{\hat{\gamma}_{\vartheta,T}^{N}(Z)}{\hat{\gamma}_{\theta,T}^{N}(z)}\right\} \right]q(\theta,{\rm d}\vartheta)\\
+\int_{\Theta}\mathbb{E}_{\vartheta}^{N} & \left[\left\{ 1-1\wedge\frac{\varpi({\rm d}\vartheta)q(\vartheta,{\rm d}\theta)}{\varpi({\rm d}\theta)q(\theta,{\rm d}\vartheta)}\frac{\hat{\gamma}_{\vartheta,T}^{N}(Z)}{\hat{\gamma}_{\theta,T}^{N}(z)}\right\} \mathbb{I}\left\{ (\theta,a,z)\in S\times W\right\} \right]q(\theta,{\rm d}\vartheta)
\end{align*}
which leaves the distribution $\pi({\rm d}\theta)\check{\pi}_{\theta}^{N}({\rm d}w)$ invariant, where for any $\theta\in\Theta$, $\check{\pi}_{\theta}^{N}$ is as in (\ref{eq:def_check_pi}) but with $\mathbb{P}_{\theta}^{N}$ (and $\mathbb{E}_{\theta}^{N}$) corresponding to the SMC process as defined above for a family of Markov kernels $\{M_{\theta}\}$ and potentials $\{G_{\theta,t},t\in[T]\}$. Just as $\Phi_{N}$ can be viewed as an exact approximation of $\Gamma$, $\check{\Phi}_{N}$ can be viewed as an exact approximation of a Markov kernel $\Phi^{*}$ evolving on $\Theta$ only, given for $(\theta,S)\in\Theta\times\mathcal{B}\bigl(\Theta\bigr)$ by
\begin{align*}
\Phi^{*}\bigl(\theta,S\bigr)= & \int_{S}\left\{ 1\wedge\frac{\varpi({\rm d}\vartheta)q(\vartheta,{\rm d}\theta)}{\varpi({\rm d}\theta)q(\theta,{\rm d}\vartheta)}\frac{\gamma_{\vartheta,T}}{\gamma_{\theta,T}}\right\} q(\theta,{\rm d}\vartheta)\\
 & +\int_{\Theta}\left\{ 1-1\wedge\frac{\varpi({\rm d}\vartheta)q(\vartheta,{\rm d}\theta)}{\varpi({\rm d}\theta)q(\theta,{\rm d}\vartheta)}\frac{\gamma_{\vartheta,T}}{\gamma_{\theta,T}}\right\} \mathbb{I}\left\{ \theta\in S\right\} q(\theta,{\rm d}\vartheta)\;.
\end{align*}
In~\citep{andrieu2015}, it is shown that when
\[
\pi-{\rm ess}\sup_{\theta}\left(\check{\pi}_{\theta}^{N}-{\rm ess}\sup_{z}\frac{\hat{\gamma}_{\theta,T}^{N}(z)}{\gamma_{\theta,T}}\right)<\infty\quad,
\]
${\rm Gap}(\check{\Phi}_{N})>0$ whenever ${\rm Gap}(\Phi^{*})>0$, i.e. the existence of a spectral gap of $\Phi^{*}$ is ``inherited'' by $\check{\Phi}_{N}$. This coincides in many cases with inheritance of geometric ergodicity, for example when $\check{\Phi}_{N}$ is positive. The rate of convergence of a geometrically ergodic PMMH Markov chain does not improve in general as $N$ increases, in contrast to our results for PGibbs Markov chains. In this context, weak convergence in $N$ of the asymptotic variance of estimates of $\pi(f)$ using $\check{\Phi}_{N}$ to that of $\Phi^{*}$ is nevertheless provided by~\citep[Proposition~19]{andrieu2015} for all $f\in L^{2}(\Theta,\pi)$. This can be contrasted with the quantitative bounds obtained in Theorem~\ref{thm:MWG-functions-of-theta}.
\end{document}
\begin{document}
\title{Adjusting the adjusted Rand Index - A multinomial story }
\author{Martina Sundqvist \and Julien Chiquet \and Guillem Rigaill }
\institute{M. Sundqvist \at MIA-Paris, UMR 518 AgroParisTech, INRAE, Université Paris-Saclay, 75005, Paris, France \at Institut Curie - PSL Research University, Translational Research Department, Breast Cancer Biology Group, 26 rue d’Ulm, 75005 Paris, France \email{[email protected]} \and J. Chiquet \at MIA-Paris, UMR 518 AgroParisTech, INRAE, Université Paris-Saclay, 75005, Paris, France \email{[email protected]} \and G. Rigaill \at Université Paris-Saclay, CNRS, INRAE, Univ Evry, Institute of Plant Sciences Paris-Saclay (IPS2), 91405, Orsay, France \at Université de Paris, CNRS, INRAE, Institute of Plant Sciences Paris-Saclay (IPS2), 91405, Orsay, France \at Laboratoire de Mathématiques et Modélisation d'Evry (LaMME), Université d'Evry Val d’Essonne, UMR CNRS 8071, ENSIIE, USC INRA, [email protected] }
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
The Adjusted Rand Index ($ARI$) is arguably one of the most popular measures for cluster comparison. The adjustment of the $ARI$ is based on a hypergeometric distribution assumption which is unsatisfying from a modeling perspective as (i) it is not appropriate when the two clusterings are dependent, (ii) it forces the sizes of the clusters to be fixed, and (iii) it ignores the randomness of the sampling. In this work, we present a new "modified" version of the Rand Index. First, we redefine the Rand Index as the $MRI$ by only counting the pairs consistent by similarity and ignoring the pairs consistent by difference, which increases the interpretability of the score. Second, we base the adjusted version, $MARI$, on a multinomial distribution instead of a hypergeometric distribution. The multinomial model is advantageous as it does not force the sizes of the clusters, properly models randomness, and is easily extended to the dependent case. We show that the $ARI$ is biased under the multinomial model and that the difference between the $ARI$ and $MARI$ can be large for small $n$ but essentially vanishes for large $n$, where $n$ is the number of individuals. Finally, we provide an efficient algorithm to compute all these quantities ($(A)RI$ and $M(A)RI$) by relying on a sparse representation of the contingency table in our \texttt{aricode} package. The space and time complexity is linear in the number of samples and, importantly, does not depend on the number of clusters as we do not explicitly compute the contingency table.
\keywords{ Clustering \and Rand Index \and Multinomial distribution \and Statistical Inference}
\end{abstract}
\section{Introduction}\label{sec:Intro}
With the increasing amount of data available, the development of clustering methods has become crucial in unsupervised learning to explore and find patterns in data sets. Despite the wealth of theoretical research on this subject, in practice selecting and validating a clustering is difficult. To address these issues, one often resorts to a measure of clustering comparison: when the data is labeled, the quality of the clustering is evaluated by measuring the overlap with the original labeling; in the absence of labels, the reliability of the clustering can be assessed by evaluating its stability \citep[see, e.g.][]{von2010clustering}. This can be done by comparing several clusterings obtained by perturbing the initial data set (i.e. with resampling), or by running different clustering methods on the same data set.
The idea of clustering stability is explored further in cluster ensembles \citep{strehl2002cluster} and its variants, which involve measures of clustering comparison in the construction of the clustering itself. Among the many measures proposed for pairwise clustering comparisons \citep[see][for an overview]{vinh2010information} one of the most popular is the Rand index ($RI$) \citep{rand1971} and its adjusted variant \citep{hubert1985comparing, morey1984measurement}. The $RI$ is designed to estimate the probability of having a coherent pair, that is, a pair whose two observations are either in the same group in both compared clusterings or in different groups in both. It is computed from the contingency table of the two classifications. However, the $RI$ depends on the number of groups \citep{morey1984measurement} and is therefore difficult to interpret. To overcome this issue, the Adjusted Rand Index (in short $ARI$) is obtained by subtracting from the $RI$ an estimator of its expected value obtained under the assumption of two independent clusterings. To obtain such an estimator, a population distribution has to be assumed for the two compared clusterings, or more specifically for the marginals of the contingency table of the two clusterings. Depending on whether the cluster sizes are considered fixed or not, the two natural hypotheses that arise are the hypergeometric distribution and the multinomial distribution. In the literature, there is disagreement as to which of these hypotheses to use. The $RI$ and $ARI$ as defined by \cite{brennan1974measuring} and then adapted by \cite{hubert1985comparing} are based on the hypergeometric distribution hypothesis. In fact, considering fixed cluster sizes makes calculations easier and the expected value of the $RI$ deterministic. However, this is a strong assumption that is violated in all clustering studies since no clustering algorithm fixes cluster sizes \citep[see][for a detailed discussion]{wagner2007comparing}. Moreover, from a modeling perspective, it implicitly ignores any randomness of the sampling procedure and considers that the set of individuals that we observed is fixed. Hence under this model the $(A)RI$ are post-hoc quantities for which no inference to a parent population can be done, which limits the interpretation exclusively to the observed data points. Assuming the marginals to be fixed certainly simplifies the calculations under the hypothesis of independence between clusterings. However, modeling dependency between clusterings under this assumption is not straightforward and rather unnatural compared to the multinomial model. Yet one certainly hopes to compare clusterings that are alike or dependent. In comparison, the multinomial model does not assume the size of the clusters to be fixed, as it considers a sample observed from an infinite population. Modeling dependent clusterings and adjusting accordingly is then greatly simplified. For all these reasons we argue that the multinomial model is more natural from a statistical perspective. Note that \cite{morey1984measurement} already studied this model to propose an adjusted version of the $RI$. Nonetheless, as pointed out in \cite{hubert1985comparing, steinley2004properties,steinley2018note}, \citeauthor{morey1984measurement} made an error in their calculation of the expected value of the $RI$, assuming that the expected value of a squared variable is the square of the expected value, which is wrong in general.
We are convinced that this error is the reason for the problem described in \cite{steinley2018note}, which unfairly advocates for the hypergeometric version of the $(A)RI$.
\begin{center}
\S
\end{center}
In this work, we essentially make a rigorous statistical analysis of the $RI$ under the hypothesis of a multinomial distribution. In detail, our contributions are the following:
\begin{enumerate}
\item Define new versions of the $RI$ and the $ARI$, denoted by $MRI$ and $MARI$ (for "modified" $(A)RI$), only counting consistent pairs by similarity. Indeed, we show that counting consistent pairs by dissimilarity is unnecessary and blurs the interpretation. In terms of our newly defined $MARI$, considering those pairs would simply result in a multiplication by 2.
\item Finalise the work of \cite{morey1984measurement} and derive, under the multinomial distribution, an unbiased estimator of the expected value of the $MRI$ under independence, valid for data generated under ${\mathcal{H}_1} $ (dependent clusterings) as well as ${\mathcal{H}_0} $ (independent clusterings).
\item Provide an efficient algorithm to compute all these quantities ($(A)RI$ and $M(A)RI$) by relying on a sparse representation of the contingency table. The complexity is in $\mathcal{O}(n)$ time and space where $n$ is the number of individuals. This is better than the usual $\mathcal{O}(n + KL)$ complexity, where $K$ and $L$ are the numbers of clusters of the two clusterings to compare, which is typically obtained when using the non-sparse contingency table. Our code is available in versions $\geq 1.0.0$ of the \texttt{R} package \texttt{aricode} \citep{aricode}.
\item Investigate the difference with \citeauthor{hubert1985comparing}'s hypergeometric $ARI$ and show that it is biased under the multinomial distribution, even if the difference between the two estimators remains small. This is in contradiction with the results of \cite{steinley2018note} that used the faulty $ARI$ of \cite{morey1984measurement}.
\end{enumerate}
\section{Statistical Model}\label{sec:modelStat}
\subsection{A new Rand Index - counting only pairs consistent by similarity}\label{sec:newdefRI}
The Rand Index ($RI$) proposed by \cite{rand1971} counts all the consistent pairs in two given classifications. In detail, let us consider two classifications $C^1 $ and $C^2 $ of the same $n$ individuals, in respectively $K$ and $L$ classes. The labels of individual $i$ are given by $c^1_{i} \in \{1, \ldots ,K\}$ and $c^2_{i} \in \{1, \ldots, L\}$. The consistent pairs are all pairs where observations $i$ and $j$ are in the same group (consistent by similarity), or in different groups (consistent by difference), in both $C^1 $ and $C^2 $. We introduce the two quantities $c^1_{ij}$ and $c^2_{ij}$ indicating whether $i$ and $j$ are in the same group in classifications $C^1 $ and $C^2 $, respectively:
\begin{equation*}
c^1_{ij} = \left\{ \begin{array}{ll} 1 & \text{if } c^1_{i} = c^1_{j}, \\ 0 & \text{otherwise}, \end{array} \right. \quad \text{and} \quad c^2_{ij} = \left\{ \begin{array}{ll} 1 & \text{if } c^2_{i} = c^2_{j}, \\ 0 & \text{otherwise}. \end{array} \right.
\end{equation*}
Note that $c^1_{ij}$ and $c^2_{ij}$ are the realisations of Bernoulli random variables denoted by $C^1_{ij}$ and $C^2_{ij}$, which will prove useful later in our statistical analysis, when studying the $RI$ and other similar quantities as random variables. Using these two quantities we see that a pair is consistent by similarity if $c^1_{ij} c^2_{ij} = 1$ and consistent by difference if $(1-c^1_{ij})(1-c^2_{ij})= 1$.
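As a toy illustration of these pairwise quantities, the co-membership indicators of two small label vectors can be computed in a few lines of \texttt{R}; the snippet below is a sketch for exposition only and is not part of \texttt{aricode}.
\begin{verbatim}
# Toy illustration (sketch): co-membership indicators c^1_{ij} and c^2_{ij}
# for two label vectors of length n = 4.
c1 <- c(1, 1, 2, 2)
c2 <- c(1, 1, 2, 3)
same1 <- outer(c1, c1, "==")   # c^1_{ij} for all ordered pairs (i, j)
same2 <- outer(c2, c2, "==")   # c^2_{ij}
up <- upper.tri(same1)         # keep unordered pairs with i < j
sum(same1[up] & same2[up])     # number of pairs consistent by similarity: 1
\end{verbatim}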
Now considering all pairs, we get the following formula for the $RI$ as defined by \citeauthor{rand1971}:
\begin{equation}
\begin{split}
RI(C^1 ,C^2 ) & = \frac{1}{{n\choose{2}}} \bigg[\sum_{i<j} c^1_{ij}c^2_{ij} + \sum_{i<j}(1- c^1_{ij})(1- c^2_{ij})\bigg] \\
& = 1 + \frac{1}{{n\choose{2}}}\bigg[2\sum_{i<j} c^1_{ij}c^2_{ij} - \sum_{i<j} c^1_{ij} - \sum_{i<j} c^2_{ij} \bigg].
\end{split}
\label{eq:redef_justif}
\end{equation}
In Equation \eqref{eq:redef_justif}, we remark that only the product $\sum c^1_{ij}c^2_{ij}$ depends on the joint distribution of $C^1 $ and $C^2 $: all other terms, coming exclusively from coherent pairs by difference, depend on the marginal distributions of $C^1 $ and $C^2 $. These terms will thus cancel out in any adjusted version of the $RI$, correcting for what would happen if $C^1 $ and $C^2 $ were drawn independently. Hence, we argue that considering the consistent pairs by difference unnecessarily complicates the reasoning and the probabilistic analysis of the $RI$. For simplicity we thus redefine the index and refer to it as the $MRI$ (for "modified" $RI$):
\begin{equation}\label{eq:RI}
\begin{split}
MRI(C^1 ,C^2 ) & = \frac{1}{{n\choose{2}}} \sum_{i<j} c^1_{ij}c^2_{ij}.
\end{split}
\end{equation}
\begin{remark}
For the derivation of the expected values of the $MRI$, the $RI$ and their adjusted versions $MARI$ and $ARI$, using the definition involving $c^1_{ij}$ and $c^2_{ij}$ (or more exactly $C^1_{ij}$ and $C^2_{ij}$ in a probabilistic perspective) considerably simplifies the calculations compared to their classical combinatorial formulations. These combinatorial formulations are recalled in the next section as they are classically used to compute the $RI$ and its variants.
\end{remark}
\subsection{Computing the Rand Index from the $n_{k\ell}$ contingency table }\label{sec:RICompute}
The information from two observed classifications is usually summarized in a contingency table like Table \ref{tab:n_kl Contigency}, representing the number of observations $n_{k\ell}$ in group $k$ in $C^1 $ and in group $\ell$ in $C^2 $.
\begin{table}[!htbp]
\caption{Contingency Table between clusterings $C^1$ and $C^2$; each entry $n_{k\ell}$ corresponds to the number of observations in group $k$ in $C^1$ and group $\ell$ in $C^2$.}
\begin{center}
\begin{tabular}{c|ccccc|c}
${ \atop C^1 }\!\diagdown\!^{C^2 }$ & $c^2_{1}$& $\cdots$ & $c^2_{\ell}$ & $\cdots$ & $c^2_{L}$ & Sums \\ \hline
$c^1_{1}$ & $n_{11}$ & $\cdots$ & $n_{1\ell}$ & $\cdots$ & $n_{1L}$ & $n_{1.}$ \\
\vdots & \vdots & $\ddots$ & \vdots & $\ddots$ & \vdots & \vdots \\
$c^1_{k}$ & $n_{k1}$ & $\cdots$ &$n_{k\ell}$ & $\cdots$ & $n_{kL}$ & $n_{k.}$ \\
\vdots & \vdots & $\ddots$ & \vdots & $\ddots$ & \vdots & \vdots \\
$c^1_{K}$ & $n_{K1}$ & $\cdots$ & $n_{K\ell}$ & $\cdots$ & $n_{KL}$ & $n_{K.}$ \\\hline
Sums & $n_{.1}$ & $\cdots$ & $n_{.\ell}$ & $\cdots$ & $n_{.L}$ & $\sum_{k\ell} n_{k\ell}=n$\\
\end{tabular}
\end{center}
\label{tab:n_kl Contigency}
\end{table}
Using basic combinatorics we get the following relations between $n_{k\ell}, n_{k.}, n_{.\ell}$ and $c_{ij}^1,c_{ij}^2$:
\begin{equation}\label{eq:cVarCounts}
\sum_{i < j} c_{ij}^1= \sum_k {n_{k.} \choose 2}, \quad \sum_{i < j} c_{ij}^2 = \sum_\ell {n_{.\ell} \choose 2} \text{ and } \sum_{i < j} c_{ij}^1 c_{ij}^2 = \sum_{k,\ell}{n_{k\ell} \choose 2}.
\end{equation}
Expressions \eqref{eq:redef_justif} and \eqref{eq:RI} of the $RI$ and $MRI$ then become
\begin{eqnarray}
\label{eq:formRIM} MRI(C^1 ,C^2 ) & = & \frac{1}{{n\choose{2}}} \sum_{k,\ell} {n_{k\ell}\choose{2}} = \frac{1}{2{n\choose{2}}}\bigg(\sum_{k,\ell} n_{k\ell}^2 - n\bigg) \\[1.5ex]
RI(C^1 ,C^2 ) & = & 1 + \frac{2}{{n\choose{2}}} \sum_{k, \ell} {n_{k\ell}\choose{2}} - \frac{1}{{n\choose{2}}}\bigg[\sum_{k}{n_{k.}\choose{2}} + \sum_{\ell}{n_{.\ell}\choose{2}} \bigg]. \label{eq:formRIOld}
\end{eqnarray}
Using these formulas, one can see that the minimum of the $MRI$ is obtained when all $n_{k\ell}$ are equal, which has a simple and straightforward interpretation (as two perfectly independent and balanced clusterings). On the other hand the minimum of the $RI$ is obtained for an extremely unbalanced table, \textit{i.e.} when one of the two clusterings consists of a single cluster and the other only of clusters containing single points. This makes the interpretation of the $RI$ rather difficult (i.e. the lowest value is not obtained for two perfectly independent and balanced clusterings) and gives more credibility to the definition of the $MRI$, which does not consider consistent pairs by difference.
\subsection{Probabilistic model and properties of the Rand Index}\label{sec:probabilisticRI}
So far, the $(M)RI$ have been computed from the \textit{observed quantities} $c^1_{ij}, c^2_{ij}$, or equivalently from the observed contingency table $n_{k\ell}$. From now on, we aim to study the statistical properties of the $MRI$ and consider its status as a random variable\footnote{By a slight abuse of notation, we use $MRI$ for both its observed value and its definition as a random variable. We think that the context suffices for the reader to remove any ambiguity.}:
\begin{equation} \label{eq:MRI_prob}
MRI(C^1 ,C^2 ) = \frac{1}{{n\choose{2}}} \sum_{i<j} C^1_{ij} \, C^2_{ij},
\end{equation}
where we recall that $C^1_{ij}$ and $C^2_{ij}$ are Bernoulli random variables indicating whether individuals $i$ and $j$ are in the same group in classifications $C^1$ and $C^2$, respectively. To derive the probability of success associated to $C^1_{ij}$ and $C^2_{ij}$, we need a probabilistic model for the classification of a given individual in $C^1$ and $C^2$, that is, a counterpart for generating the two observed clusterings $c^1_i$ and $c^2_i$ for the $n$ data points. We denote by $C^1_i$ and $C^2_i$ the corresponding random variables. A natural model is the multinomial model, which gives the joint distribution of $(C^1_i, C^2_i)$ as follows: for all $(k,\ell) \in \{1,\dots,K\} \times \{1,\dots,L\}$,
\begin{equation*}
\mathbb{P}(C^1_i = k, C^2_i = \ell) = \pi_{k\ell}, \quad \text{s.t. } \sum_{k,\ell}^{K,L}\pi_{k\ell} = 1.
\end{equation*}
The marginal probability of a given group is defined for group $k$ of $C^1 $ by $ \sum_{\ell}^{L}\pi_{k\ell} = \pi_{k.}$ and for group $\ell$ of $C^2 $ by $\sum_{k}^{K}\pi_{k\ell} = \pi_{.\ell}$. See Table \ref{tab:Pi Contigency} for a global picture. Compared to the hypergeometric model, the multinomial model easily deals with dependent classifications and does not force the size of the clusters.
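As an illustration, drawing $n$ individuals from this model simply amounts to sampling $n$ i.i.d. cells $(k,\ell)$ with probabilities $\pi_{k\ell}$ and reading off the two labels; a minimal \texttt{R} sketch (for exposition only, with a hypothetical helper name) is given below.
\begin{verbatim}
# Sketch: draw n individuals from the multinomial model, i.e. sample a cell
# (k, l) with probability pi_kl and return the two label vectors c1 and c2.
draw_multinomial_clusterings <- function(n, pi_kl) {
  K <- nrow(pi_kl)
  cell <- sample.int(length(pi_kl), size = n, replace = TRUE,
                     prob = as.vector(pi_kl))   # column-major cell index
  list(c1 = (cell - 1) %% K + 1,                # row index k of the cell
       c2 = (cell - 1) %/% K + 1)               # column index l of the cell
}
\end{verbatim}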
\begin{table}[!htbp]
\caption{Multinomial model: probabilistic distributions $\pi_{k\ell} = \mathbb{P}(C^1_i = k, C^2_i = \ell)$ and marginal distributions $\pi_{k.} = \mathbb{P}(C^1_i = k)$ and $\pi_{.\ell} = \mathbb{P}(C^2_i = \ell)$}
\begin{center}
\begin{tabular}{c|ccccc|c}
${ \atop C^1 }\!\diagdown\!^{C^2 }$ & $c^2_{i}=1$& $\cdots$ & $c^2_i=\ell$ & $\cdots$ & $c^2_i = L$ & Sums \\ \hline
$c^1_i = 1$ & $\pi_{11}$ & $\cdots$ & $\pi_{1\ell}$ & $\cdots$ & $\pi_{1L}$ & $\pi_{1.}$ \\
\vdots & \vdots & $\ddots$ & \vdots & $\ddots$ & \vdots & \vdots \\
$c^1_i = k$ & $\pi_{k1}$ & $\cdots$ &$\pi_{k\ell}$ & $\cdots$ & $\pi_{kL}$ & $\pi_{k.}$ \\
\vdots & \vdots & $\ddots$ & \vdots & $\ddots$ & \vdots & \vdots \\
$c^1_i = K$ & $\pi_{K1}$ & $\cdots$ & $\pi_{K\ell}$ & $\cdots$ & $\pi_{KL}$ & $\pi_{K.}$ \\\hline
Sums & $\pi_{.1}$ & $\cdots$ & $\pi_{.\ell}$ & $\cdots$ & $\pi_{.L}$ & $\sum_{k\ell} \pi_{k\ell}=1$\\
\end{tabular}
\end{center}
\label{tab:Pi Contigency}
\end{table}
Based on this multinomial model for $C^1_i$ and $C^2_i$, it is then relatively straightforward to derive the joint distribution and marginals of $C^1_{ij} $ and $C^2_{ij} $. In particular we have:
\begin{equation}\label{eq:basicProbsOnC}
\mathbb{P}( C^1_{ij} = 1) = \sum_k \pi_{k.}^2, \quad \mathbb{P}( C^2_{ij} = 1) = \sum_\ell \pi_{.\ell}^2 \ \text{and} \quad \mathbb{P}( C^1_{ij} C^2_{ij} = 1) = \sum_{k,\ell} \pi_{k\ell}^2.
\end{equation}
However, in order to derive the expectation, variance and unbiased adjustment of the $MRI$ under the multinomial model, one needs to characterize events on the classifications $C^1$ and $C^2$ not only for (unordered) pairs of individuals $\{i,j\}$, but also for \textit{pairs of pairs} of individuals $\{i,j\}$ and $\{i',j'\}$, with terms like the expectation of $C^1_{ij} \times C^2_{i'j'}$. The following section derives a couple of technical -- yet simple -- lemmas on events involving such random variables, so that the final calculation of the moments of the $MRI$ under the multinomial model is straightforward.
\begin{remark}
To our knowledge most derivations of the expectation and variance of the $RI$ found in the literature are based on the combinatorial formulation given in Equation \eqref{eq:formRIOld}: these derivations rely on general results on the moments of either the multinomial or the generalized hypergeometric distribution and involve tedious calculations. In contrast, our proofs, found in the next sections, are short, self-contained and easily accessible to any reader with some basic knowledge in probability and statistics. For this reason we argue that our proofs are interesting in their own right.
\end{remark}
\subsubsection{Subsets of Pairs of Pairs - preparing the derivations of the moments of the $MRI$}\label{sec:quadruplets}
Consider the set ${\mathcal{P}} \times {\mathcal{P}}$ of pairs of unordered pairs $\{i, j\}$ and $\{i', j'\}$ of $\{1,\dots,n\}$ such that $i<j$ and $i'<j'$. This set is composed of pairs of pairs, and can equivalently be seen as the set of all quadruplets of $\{1,\dots,n\}^4$ such that $i<j$ and $i'<j'$. We partition this set into the three following subsets:
\begin{enumerate}
\item the unordered pairs ${\mathcal{P}}$,
\item the ordered triplets ${\mathcal{T}},$
\item the ordered quadruplets ${\mathcal{Q}}$.
\end{enumerate}
These three subsets ${\mathcal{P}}, {\mathcal{T}}$ and ${\mathcal{Q}}$ form a partition of ${\mathcal{P}} \times {\mathcal{P}}$ and in particular,
\[
|{\mathcal{P}}|^2 = |{\mathcal{P}}| + |{\mathcal{T}}| + |{\mathcal{Q}}|.
\]
We now study ${\mathcal{P}}$, ${\mathcal{T}}$ and ${\mathcal{Q}}$ in the three following lemmas: we derive their cardinalities and compute some expectations involving these subsets and the variables $C^1_{ij}$, $C^2_{ij}$ under the multinomial model. These three lemmas will be the building blocks for the characterization of the $MRI$.
\begin{lemma}[Subset of unordered pairs ${\mathcal{P}}$] \label{lemma:pairs}
With a slight abuse of notation, we consider ${\mathcal{P}}$ as a subset of ${\mathcal{P}} \times {\mathcal{P}}$:
\begin{equation*}
{\mathcal{P}} = \{\{i, j, i', j'\}: |\{i, j\} \cup \{i', j'\}| = 2\}.
\end{equation*}
The cardinality of ${\mathcal{P}}$ is $|{\mathcal{P}}| = {n \choose 2}$ and
\begin{equation}\label{eq:Pairs_expected}
\mathbb{E}\left(\sum_{\{i, j\} \in {\mathcal{P}}} C^1_{ij} C^2_{ij} \right) = {n \choose 2} \sum_{k\ell} \pi_{k\ell}^2.
\end{equation}
\end{lemma}
\begin{proof}
For any $\{i, j\} \in {\mathcal{P}}$, we have from \eqref{eq:basicProbsOnC} that $\mathbb{E}( C^1_{ij} C^2_{ij} ) = \sum_{k\ell} \pi_{k\ell}^2$. We just need to sum over all possible pairs to get the desired result.
\end{proof}
\begin{lemma}[Subset of ordered triplets ${\mathcal{T}}$] \label{lemma:triplet}
Consider the subset ${\mathcal{T}}$ of ${\mathcal{P}}\times{\mathcal{P}}$
\begin{equation*}
{\mathcal{T}} = \{\{i, j, i', j'\}: |\{i, j\} \cup \{i', j'\}| = 3\}.
\end{equation*}
The cardinality of ${\mathcal{T}}$ is $|{\mathcal{T}}| = n(n-1)(n-2)$ and
\begin{equation} \label{eq:Triplets_expected}
\mathbb{E} \left( \sum_{{\mathcal{T}}} C^1_{ij}C^2_{ij'} \right) = n(n-1)(n-2) \sum_{k\ell} \pi_{k\ell}\pi_{k.}\pi_{.\ell}.
\end{equation}
\begin{equation} \label{eq:Triplets_expected2}
\mathbb{E}\left( \sum_{{\mathcal{T}}} C^1_{ij} C^2_{ij} C^1_{ij'}C^2_{ij'}\right) = n(n-1)(n-2) \sum_{k\ell} \pi_{k\ell}^3.
\end{equation}
\end{lemma}
\begin{proof}
For the cardinality of ${\mathcal{T}}$, one can map ${\mathcal{T}}$ to the set of arrangements of three distinct elements of $\{1, \ldots, n\}$. For \eqref{eq:Triplets_expected}, remark that $C^1_{ij} C^2_{ij'}$ is a Bernoulli variable equal to 1 if and only if $i$ and $j$ are in the same cluster $k$ in $C^1$ and $i$ and $j'$ are in the same cluster $\ell$ in $C^2$. Note that $j$ can be in any cluster $\ell'$ in $C^2 $ and $j'$ can be in any cluster $k'$ in $C^1 $. From there one easily gets its expectation,
\begin{equation*}
\mathbb{E}(C^1_{ij}C^2_{ij'} ) = \sum_{k\ell k'\ell'}\pi_{k\ell}\pi_{k'\ell}\pi_{k\ell'} = \sum_{k\ell}\pi_{k\ell}\sum_{k'}\pi_{k'\ell}\sum_{\ell'}\pi_{k\ell'} = \sum_{k\ell}\pi_{k\ell}\pi_{.\ell}\pi_{k.}
\end{equation*}
and we get the desired result by summing over all triplets. For \eqref{eq:Triplets_expected2}, remark that $C^1_{ij} C^2_{ij} C^1_{ij'}C^2_{ij'}$ is a Bernoulli variable equal to 1 if and only if $i, j$ and $j'$ are in the same clusters for both classifications. Summing over all ${\mathcal{T}}$ we get \eqref{eq:Triplets_expected2}.
\end{proof}
\begin{lemma}[Subset of ordered quadruplets ${\mathcal{Q}}$]\label{lemma:quad}
Consider the following subset ${\mathcal{Q}}$ of ${\mathcal{P}}\times{\mathcal{P}}$:
\begin{equation*}
{\mathcal{Q}} = \{\{i, j, i', j'\}: |\{i, j\} \cup \{i', j'\}| = 4\}.
\end{equation*}
The cardinality is $|{\mathcal{Q}}| = 6 {n \choose 4}$ and
\begin{equation}\label{eq:Quadruplets_expected}
\mathbb{E} \left( \sum_{{\mathcal{Q}}} C^1_{ij}C^2_{i'j'} \right) = 6 {n \choose 4} \sum_{k,\ell} \pi_{k.}^2\pi_{.\ell}^2.
\end{equation}
\end{lemma}
\begin{proof}
There are ${n \choose 4}$ ways to pick 4 distinct elements of $\{1, \ldots, n\}$.
We can then arrange those in ${4 \choose 2} = 6$ ways to get an element of ${\mathcal{Q}}$. Hence, altogether there are $6 {n \choose 4}$ quadruplets. We get $\mathbb{E}( C^1_{ij}C^2_{i'j'} )$ using the fact that $i, j, i', j'$ are all different and that their classes are drawn independently. We then sum over ${\mathcal{Q}}$.
\end{proof}
\subsubsection{Expectation and Variance of the Rand Index}\label{sec:RIpropChar}
With Lemmas~\ref{lemma:pairs}, \ref{lemma:triplet} and \ref{lemma:quad}, we are now equipped to easily derive the moments of the $MRI$. We use $\mathbb{E}$ for the expectation under the multinomial model in general. Under the additional assumption of independence between the classifications, which we refer to as the \textit{null hypothesis}, we use $\mathbb{E}_{\mathcal{H}_0}$. This term is classically used for adjusting the Rand index.
\begin{prop}[Expectations of the $MRI$]
Let $\theta$ denote the expectation of the $MRI$ and $\theta_0$ the expectation under $\mathcal{H}_0$. Then,
$$ \theta = \mathbb{E}(MRI) = \sum_{k\ell} \pi_{k\ell}^2 , \qquad \qquad \theta_0 = \mathbb{E}_{\mathcal{H}_0} (MRI) = \sum_{k\ell} \pi_{k.}^2 \pi_{.\ell}^2 $$
\end{prop}
\begin{proof}
By Definition \eqref{eq:MRI_prob} and Lemma \ref{lemma:pairs} we obtain $\theta$. For $\theta_0$, it suffices to replace $\pi_{k\ell}$ by $\pi_{k.}\pi_{.\ell}$ in the previous formula.
\end{proof}
Similarly, we derive the expectation of the "usual" $RI$.
\begin{prop}\label{prop:ExOldRI}
Let $\theta^{RI}$ denote the expectation of the $RI$ and $\theta_0^{RI}$ the expectation under $\mathcal{H}_0$. Then,
\begin{eqnarray*}
\theta^{RI} & = & \mathbb{E}(RI) = 1 + 2\sum_{k\ell} \pi_{k\ell}^2 - \sum_{k} \pi_{k.}^2 - \sum_{\ell}\pi_{.\ell}^2 \\
\theta_0^{RI} & = & \mathbb{E}_{\mathcal{H}_0} (RI)= 1 + 2\sum_{k\ell} \pi_{k.}^2 \pi_{.\ell}^2 - \sum_{k} \pi_{k.}^2 - \sum_{\ell}\pi_{.\ell}^2
\end{eqnarray*}
\end{prop}
\begin{proof}
Compared to the $MRI$, the only additional terms in \eqref{eq:redef_justif} are the constant $1$ and the two sums $\sum_{{\mathcal{P}}} C^1_{ij}$ and $\sum_{{\mathcal{P}}} C^2_{ij}$. Using \eqref{eq:basicProbsOnC} and summing over all pairs of ${\mathcal{P}}$ we get the desired results.
\end{proof}
We now continue with the variance of the $MRI$.
\begin{prop}
Let $\sigma^2= \mathbb{V}(MRI)$ be the variance of the $MRI$. Then,
\begin{equation*}
\sigma^2 = \frac{1}{{{n}\choose{2}}} \ \left(\sum_{k,\ell} \pi_{k\ell}^2 - \left[\sum_{k,\ell} \pi_{k\ell}^2\right]^2 \right) + \frac{n(n-1)(n-2)}{{{n}\choose{2}}^2} \left( \sum_{k,\ell} \pi_{k\ell}^3 - \left[\sum_{k,\ell} \pi_{k\ell}^2\right]^2 \right)
\end{equation*}
\end{prop}
\begin{proof}
To obtain the variance of the $MRI$, first rewrite the variance in terms of covariances:
\begin{eqnarray*}
\sigma^2 & = & \frac{1}{{{n}\choose{2}}^2} \ \mathbb{V} \Big( \sum_{{\mathcal{P}}} C^1_{ij} C^2_{ij}\Big) \\
& = & \frac{1}{{{n}\choose{2}}^2} \ {\mathrm{Cov}} \Big( \sum_{{\mathcal{P}}} C^1_{ij} C^2_{ij}, \sum_{{\mathcal{P}}} C^1_{i'j'} C^2_{i'j'} \Big) \\
& = & \frac{1}{{{n}\choose{2}}^2} \ \sum_{{\mathcal{P}}\times{\mathcal{P}}} {\mathrm{Cov}} \Big( C^1_{ij} C^2_{ij}, C^1_{i'j'} C^2_{i'j'} \Big)
\end{eqnarray*}
We then split this final sum using our partition of ${\mathcal{P}} \times {\mathcal{P}}$.
Also, noticing that ${\mathrm{Cov}}(C^1_{ij} C^2_{ij}, C^1_{i'j'} C^2_{i'j'}) = 0$ for all $(\{i,j\}, \{i', j'\}) \in {\mathcal{Q}}$, we get,
\begin{eqnarray*}
\sigma^2 & = & \frac{1}{{{n}\choose{2}}^2} \left[ \sum_{{\mathcal{P}}} {\mathrm{Cov}}\Big(C^1_{ij} C^2_{ij}, C^1_{ij} C^2_{ij}\Big) + \sum_{{\mathcal{T}}} \ {\mathrm{Cov}}\Big(C^1_{ij} C^2_{ij}, C^1_{ij'} C^2_{ij'}\Big)\right] \\
& = & \frac{1}{{{n}\choose{2}}} \ \mathbb{V}\Big(C^1_{ij} C^2_{ij}\Big) + \frac{n(n-1)(n-2)}{{{n}\choose{2}}^2} \ {\mathrm{Cov}} \Big(C^1_{ij} C^2_{ij}, C^1_{ij'} C^2_{ij'} \Big) \\
& = & \frac{1}{{{n}\choose{2}}} \ \left(\sum_{k,\ell} \pi_{k\ell}^2 - \left[\sum_{k,\ell} \pi_{k\ell}^2\right]^2 \right)+ \frac{n(n-1)(n-2)}{{{n}\choose{2}}^2} \left( \sum_{k,\ell} \pi_{k\ell}^3 - \left[\sum_{k,\ell} \pi_{k\ell}^2\right]^2 \right)
\end{eqnarray*}
We get the second line by enumerating the elements of ${\mathcal{P}}$ and ${\mathcal{T}}$. We get the third line using the definition of the covariance (for any two variables $X$ and $Y$: ${\mathrm{Cov}}(X, Y) = \mathbb{E}(XY) - \mathbb{E}(X)\mathbb{E}(Y)$) and Lemmas \ref{lemma:pairs} and \ref{lemma:triplet}.
\end{proof}
\begin{remark}
Importantly, for a fixed $\pi_{k\ell}$, $\sigma^2$ goes to $0$ when $n$ grows to infinity: the larger $n$, the better the estimation of $\theta$.
\end{remark}
\subsubsection{The Rand Index depends on the number of groups}\label{sec:RIdepOnNbGrs}
In the multinomial model with uniform clusters (equal cluster size), \cite{morey1984measurement} showed that $\theta^{RI}_0$ depends on the number of groups in $C^1 $ and $C^2 $. This is also true for the $MRI$, and it is easier to prove since the $MRI$ does not include the marginal terms of coherence by difference. We also prove the following lemma, showing that if one splits a cluster of $C^1 $ or $C^2 $ into two, the $MRI$ always decreases. Note that this latter lemma does not assume independence between classifications.
\begin{lemma}
Consider two classifications $C^1 $ and $C^2 $ in $K+1$ and $L$ clusters, respectively. Let $C^1 {'}$ be the classification obtained by fusing two clusters of $C^1 $. Then,
\[MRI(C^1 , C^2 ) \leq MRI(C^1 {'}, C^2 ).\]
Also, for any distribution on $C^1 $ and $C^2 $ we have
\[\theta(C^1 , C^2 ) \leq \theta(C^1 {'}, C^2 ).\]
\end{lemma}
\begin{proof}
Assuming without loss of generality that clusters $1$ and $2$ are merged, we get
\begin{eqnarray*}
MRI(C^1 , C^2 ) - MRI(C^1 {'}, C^2 ) & = & \frac{1}{2{n \choose 2}} \sum_{\ell} \Big( n_{1\ell}^2 + n_{2\ell}^2 - (n_{1\ell} + n_{2\ell})^2\Big) \\
& = & -\frac{1}{{n \choose 2}} \sum_{\ell} n_{1\ell}n_{2\ell} \leq 0.
\end{eqnarray*}
Since this inequality holds pointwise, taking expectations on both sides under any distribution on $C^1 $ and $C^2 $ gives the second claim.
\end{proof}
\subsection{The Adjusted version of the Rand Index }\label{sec:ARI}
Since the $(M)RI$ depends on the number of groups, it needs to be adjusted for chance. A way to do so is to subtract its expectation under the null hypothesis ${\mathcal{H}_0} $ \citep[as motivated in][]{brennan1974measuring,hubert1985comparing,morey1984measurement}. Ideally one would like to compute $\theta - \theta_0$ from the true values. Under our multinomial model this quantity is
\[
\theta - \theta_0 = \sum_{k\ell}\pi_{k\ell}^2 - \sum_{k\ell}\pi_{k.}^2\pi_{.\ell}^2
\]
which is equal to zero under ${\mathcal{H}_0} $ (independence of the classifications), that is, when $\pi_{k.} \pi_{.\ell} = \pi_{k\ell}$ for all $k,\ell$.
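As a small numerical illustration (a sketch with an arbitrary choice of $\pi_{k\ell}$, for exposition only), these two population quantities can be computed in \texttt{R} as follows.
\begin{verbatim}
# Sketch: theta and theta_0 for an arbitrary 2 x 2 joint distribution pi_kl
# with dependent classifications.
pi_kl  <- matrix(c(0.4, 0.1,
                   0.1, 0.4), nrow = 2, byrow = TRUE)
theta  <- sum(pi_kl^2)                                   # E(MRI)      = 0.34
theta0 <- sum(rowSums(pi_kl)^2) * sum(colSums(pi_kl)^2)  # E_{H0}(MRI) = 0.25
theta - theta0                                           # 0.09 > 0 here
\end{verbatim}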
In practice, one can only estimate the quantities $\theta - \theta_0$ from observed classifications. Our goal is therefore to get an unbiased estimator of $\theta - \theta_0$. The $MRI$ being by definition an unbiased estimator of $\theta$, we only need an unbiased estimator of $\theta_0$, that is $\sum_{k\ell} \pi_{k.}^2 \pi_{.\ell}^2$. However, under the alternative ${\mathcal{H}_1} $ (i.e. when the compared classifications are not independent, the most natural case), deriving an unbiased estimator of $\theta_0$ is trickier and depends on the model assumption. \cite{morey1984measurement} proposed a plug-in estimator for the multinomial model, but as pointed out by \cite{hubert1985comparing, steinley2004properties,steinley2018note}, they made errors in their calculations. In the next section we continue their work by proposing an unbiased estimator for $\theta_0$. We also show that the hypergeometric estimator of \cite{hubert1985comparing} for $\theta_0$, used as correction in the "traditional" $ARI$, is biased under our multinomial ${\mathcal{H}_1} $. \paragraph*{A new Adjusted Rand Index.} We now define our own adjusted version of the $MRI$ that we denote $MARI$: \begin{equation} \label{eq:MARI} MARI = \widehat{\theta} - \widehat{\theta}_0. \end{equation} with $$\widehat{\theta} = \sum_{{\mathcal{P}}} C_{ij}^1 C_{ij}^2 \Big/ {n \choose 2} \qquad \widehat{\theta}_0 = \sum_{{\mathcal{Q}}} C_{ij}^1 C_{i'j'}^2 \Big/ 6 {n \choose 4}. $$ and its observed value, $$MARI^{obs} = \sum_{{\mathcal{P}}} c_{ij}^1 c_{ij}^2 \Big/ {n \choose 2} - \sum_{{\mathcal{Q}}} c_{ij}^1 c_{i'j'}^2 \Big/ 6 {n \choose 4}.$$ where we recall that $c_{ij}^1$ and $c_{ij}^2$ are the observed counterparts of $C^1_{ij}, C^2_{ij}$ and ${\mathcal{P}}, {\mathcal{Q}}$ are defined in Section \ref{sec:quadruplets}. \begin{lemma} Under the multinomial model, the $MARI$ is unbiased, that is, $$\mathbb{E}(MARI) = \theta - \theta_0.$$ \end{lemma} \begin{proof} The proof is straightforward using Lemma \ref{lemma:quad}. \end{proof} \paragraph*{Computing the $MARI$ from a contingency table.} In practice, the comparison of two classifications is given as a contingency table as Table \ref{tab:n_kl Contigency}, and we thus need a formulation of the $MARI$ defined in \eqref{eq:MARI} as a function of $n_{k\ell}$. We already gave in \eqref{eq:formRIM} an expression of $\widehat{\theta}$ as a function of $n_{k\ell}$. As we will see, $\widehat{\theta}_0$ can as well be computed from the $n_{k\ell}$ contingency Table \ref{tab:n_kl Contigency} even if summing over all elements of ${\mathcal{Q}}$ rather than ${\mathcal{P}}$ is a bit less straightforward. To get $\sum_{ {\mathcal{Q}} } c^1_{ij} c^2_{i'j'}$, we will use the term $\sum_{k\ell}n_{k.}^2n_{.\ell}^2$ from which we will, as a direct result of Definition \eqref{eq:cVarCounts}, derive the $(\sum_{{\mathcal{P}}}c^1_{ij} )(\sum_{{\mathcal{P}}} c^2_{i'j'} )$ terms. These latter can be decomposed as follows: \begin{equation} \label{eqn:allNtupltsSimilar} (\sum_{{\mathcal{P}}} c^1_{ij} )(\sum_{{\mathcal{P}}} c^2_{i'j'} ) = \sum_{ {\mathcal{P}}}c^1_{ij}c^2_{ij} + \sum_{{\mathcal{T}}} c^1_{ij} c^2_{ij'} + \sum_{ {\mathcal{Q}} }c^1_{ij}c^2_{i'j'} . \end{equation} It is then sufficient to subtract the terms of ${\mathcal{P}}$ and ${\mathcal{T}}$ from the left side of Equation \eqref{eqn:allNtupltsSimilar} to get $\sum_{ {\mathcal{Q}} } c^1_{ij} c^2_{i'j'}$. All terms summing over ${\mathcal{P}}$ are easy to recover (see Definition~\ref{eq:cVarCounts}). 
However, the terms involving elements of ${\mathcal{T}}$ are more tedious to obtain and are derived in Lemma~\ref{lemma:counttriplets}. The terms over ${\mathcal{Q}}$ are then derived in Lemma~\ref{lemma:countquadruplets}.
\begin{lemma}\label{lemma:counttriplets}
We have the following expression of $\sum_{{\mathcal{T}}} c^1_{ij} c^2_{ij'}$ in terms of $n_{k\ell}$:
\begin{eqnarray*}
\sum_{{\mathcal{T}}} c^1_{ij} c^2_{ij'} = 2n + \sum_{k,\ell} n_{k.}n_{k\ell}n_{.\ell} - \sum_{k, \ell} n_{k\ell}^2 - \sum_k n_{k.}^2 - \sum_{\ell} n_{.\ell}^2
\end{eqnarray*}
\end{lemma}
\begin{proof}
We need to consider all $i$ in $\{1, \ldots, n\}$. Assuming for now that $i$ is in classes $(k,\ell)$, that is $c^1_i=k$ and $c^2_i=\ell$, let us consider all $j, j'$ such that $c^1_{ij}c^2_{ij'}=1$. The term $c^1_{ij}c^2_{ij'}$ is equal to one if $c^1_j=k$ and $c^2_{j'} = \ell$. We then get different scenarios according to whether $c^1_{j'} = k$ or not and whether $c^2_{j} = \ell$ or not. Those scenarios are enumerated in Table \ref{tab:scenario}.
\begin{table}[ht!]
\centering
\caption{Four scenarios to be considered for $j$ and $j'$ in the calculation of the terms in $\sum_{{\mathcal{T}}} c^1_{ij}c^2_{ij'}$ when $i$ is in class $(k, \ell).$}\label{tab:scenario}
\begin{tabular}{|c|c|c|}
\hline
 & $j$ in $\ell$ & $j$ not in $\ell$ \\ \hline
$j'$ in $k$ & $(n_{k\ell}-1)(n_{k\ell}-2)$ & $(n_{k.} - n_{k\ell})(n_{k\ell}-1) $ \\ \hline
$j'$ not in $k$ & $(n_{k\ell}-1)(n_{.\ell} - n_{k\ell})$ & $(n_{k.} - n_{k\ell})(n_{.\ell} - n_{k\ell})$ \\ \hline
\end{tabular}
\end{table}
Summing all terms of Table \ref{tab:scenario} we get $n_{k.}n_{.\ell} + 2 - n_{k\ell} - n_{k.} - n_{.\ell}$. To account for all $i$ belonging to class $(k, \ell)$ we then multiply by $n_{k\ell}$. Finally we sum over all $k, \ell$ to recover
\begin{eqnarray*}
\sum_{{\mathcal{T}}} c^1_{ij} c^2_{ij'} & = & \sum_{k, \ell} n_{k\ell} ( 2 + n_{k.}n_{.\ell} - n_{k\ell} - n_{k.} - n_{.\ell} ) \\
& = & 2n + \sum_{k,\ell} n_{k.}n_{k\ell}n_{.\ell} - \sum_{k, \ell} n_{k\ell}^2 - \sum_k n_{k.}^2 - \sum_{\ell} n_{.\ell}^2
\end{eqnarray*}
\end{proof}
\begin{lemma} \label{lemma:countquadruplets}
We have the following expression of $\sum_{{\mathcal{Q}}} c^1_{ij} c^2_{i'j'}$ in terms of $n_{k\ell}$:
\begin{multline*}
\sum_{{\mathcal{Q}}} c^1_{ij} c^2_{i'j'} = \\ \bigg[\sum_{k\ell} n_{k.}^2n_{.\ell}^2 - \bigg(4 \sum_{k\ell}{n_{k\ell}\choose{2}} + 4(2n + \sum_{k,\ell} n_{k.}n_{k\ell}n_{.\ell} - \sum_{k, \ell} n_{k\ell}^2 - \sum_k n_{k.}^2 - \sum_{\ell} n_{.\ell}^2) \\ \qquad\qquad\qquad + 2n \big(\sum_{k} {n_{k.} \choose{2}} + \sum_{\ell} {n_{.\ell} \choose{2}} \big) + n^2\bigg)\bigg]\bigg/4
\end{multline*}
\end{lemma}
\begin{proof}
From Equation \eqref{eq:cVarCounts} we can derive $\sum_{k\ell}n_{k.}^2n_{.\ell}^2$ as a function of $\sum_{{\mathcal{P}}\times{\mathcal{P}}} c^1_{ij} c^2_{i'j'}$ and $n$, since $\sum_k n_{k.}^2 = n + 2 \sum_{\mathcal{P}} c^1_{ij} $ and $\sum_\ell n_{.\ell}^2 = n + 2 \sum_{{\mathcal{P}}}c^2_{ij} $, so that
\begin{equation}
\begin{split}
\sum_{k\ell}n_{k.}^2 n_{.\ell}^2 & = (2 \sum_{i < j} c^1_{ij} + n)(2 \sum_{i' < j'} c^2_{i'j'} + n) \\
& = 4 \sum_{{\mathcal{P}}\times{\mathcal{P}}} c^1_{ij}c^2_{i'j'} + 2n \bigg(\sum_{{\mathcal{P}}} c^1_{ij} + \sum_{{\mathcal{P}}} c^2_{i'j'} \bigg) + n ^2 \\
\end{split}
\end{equation}
Using Equation \eqref{eqn:allNtupltsSimilar}, we decompose $\sum_{{\mathcal{P}}\times{\mathcal{P}}} c^1_{ij}c^2_{i'j'} $ into terms of ${\mathcal{P}}$, ${\mathcal{T}}$ and ${\mathcal{Q}}$ and get,
\begin{multline*}
\sum_{\mathcal{Q}}
c^1_{ij}c^2_{i'j'} = \bigg[\sum_{k\ell} n_{k.}^2n_{.\ell}^2 - \bigg(4 \sum_{{\mathcal{P}}}c^1_{ij}c^2_{ij} + 4\sum_{{\mathcal{T}}}c^1_{ij}c^2_{ij'} + 2n \big(\sum_{{\mathcal{P}}} c^1_{ij} + \sum_{{\mathcal{P}}} c^2_{ij} \big) + n ^2\bigg)\bigg]\bigg/4\\
\quad\quad\quad\quad = \bigg[\sum_{k\ell} n_{k.}^2n_{.\ell}^2 - \bigg(4 \sum_{k\ell}{n_{k\ell}\choose{2}} + 4(2n + \sum_{k,\ell} n_{k.}n_{k\ell}n_{.\ell} - \sum_{k, \ell} n_{k\ell}^2 - \sum_k n_{k.}^2 - \sum_{\ell} n_{.\ell}^2)\\ + 2n \big(\sum_{k} {n_{k.} \choose{2}} + \sum_{\ell} {n_{.\ell} \choose{2}} \big) + n ^2\bigg)\bigg]\bigg/4
\end{multline*}
\end{proof}
\section{Implementation - package \texttt{aricode}}\label{sec:aricode}
We implemented code for fast computation of the $MRI$ and its adjusted version the $MARI$, as well as a number of other clustering comparison measures, in the \texttt{R/C++} package \texttt{aricode}, which is available on CRAN. Computing these measures is straightforward by means of the whole $K\times L$ contingency table. However, the time and space complexity is then in $\mathcal{O}(n + KL)$, which is somewhat inefficient when $K$ and $L$ are large. Our implementation in \texttt{aricode} is in $\mathcal{O}(n)$: the key idea is that, given $n$ observations, at most $n$ elements of the $n_{k\ell}$ contingency matrix can be non-zero. To recover these non-zero elements one can proceed in two simple steps: first, all observations are sorted in lexicographical order in terms of their first and second cluster index. This can be done in $\mathcal{O}(n)$ using \texttt{bucket sort} \citep{Cormen2001introduction} or \texttt{radix sort} (as implemented in \texttt{R} \citep{Rproject}). Note that once the observations are sorted, all $i$ that are in clusters $k$ and $\ell$ appear one after the other in the data table. Thus, in a second step \texttt{aricode} counts all non-zero $n_{k\ell}$ in a single pass over the data table. Internally this is done using \texttt{Rcpp} \citep{eddelbuettel2011rcpp}. In Figure \ref{fig:TimingAricodeVsMclust} we compare our implementation of the standard ARI with the implementation of \texttt{mclust} \citep{mclust} (which uses the whole contingency table). As can be noted, the cost of the latter can be prohibitive for large vectors.
\begin{figure}
\caption{Timings comparing the cost of computing the ARI with \texttt{aricode} or with the commonly used function \texttt{adjustedRandIndex} of the \texttt{mclust} package.}
\label{fig:TimingAricodeVsMclust}
\end{figure}
\section{\citeauthor{hubert1985comparing}'s ARI}\label{sec:HubertARI}
In this section we study the expectation of the 'standard' $RI$ of \cite{brennan1974measuring} (by contrast with our $MRI$), the expression of which results from the hypergeometric model. This expression was used by \cite{hubert1985comparing} for adjusting the $RI$ and producing the usual $ARI$. We study this expected value when the expectation is taken under the multinomial distribution. We show that this estimator is biased in general under the alternative hypothesis, that is, when the two compared clusterings are not independent.
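In passing, note that the sparse-counting strategy of Section~\ref{sec:aricode} can be emulated in a few lines of plain \texttt{R} and is convenient for the numerical comparisons of this section. The sketch below is an illustration only (the function names are ours, and \texttt{aricode} itself performs this step in \texttt{C++} via \texttt{Rcpp}): it recovers the non-zero $n_{k\ell}$ as the run lengths of the lexicographically sorted label pairs.
\begin{verbatim}
# Sketch of the sparse counting idea (illustration only): the non-zero n_{kl}
# are the run lengths of the lexicographically sorted (c1, c2) label pairs.
sparse_nkl <- function(c1, c2) {
  o <- order(c1, c2)                          # lexicographic (radix) sort
  rle(paste(c1[o], c2[o], sep = "\r"))$lengths
}
mri_sparse <- function(c1, c2) {
  sum(choose(sparse_nkl(c1, c2), 2)) / choose(length(c1), 2)
}
\end{verbatim}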
\subsection{Expectation of \citeauthor{hubert1985comparing}'s $ARI$}\label{sec:HubertARIExp} Consider the observed value of the $ARI$ proposed by \cite{brennan1974measuring, hubert1985comparing}: in order to analyse this quantity in our multinomial setup, we first give its definition in terms of $c^1_{ij}$ and $c^2_{ij}$, that is \begin{eqnarray*} ARI^{\text{obs}} & = & \frac{2}{{n\choose{2}}} \sum^{KL}_{kl} {n_{kl}\choose{2}} -\frac{2}{{n\choose{2}}^2} \sum^{KL}_{kl}{n_{k.}\choose{2}}{n_{.l}\choose{2}} \\ & = & \frac{2}{{n\choose{2}}} \sum_{{\mathcal{P}}} c^1_{ij}c^2_{ij} -\frac{2}{{n\choose{2}}^2} \sum_{{\mathcal{P}}} c^1_{ij} \sum_{{\mathcal{P}}} c^2_{ij}, \end{eqnarray*} where we recall that $c^1_{ij}$,$c^2_{ij}$ are realisations of the Bernoulli variables $C^1_{ij}, C^2_{ij}$. In a probabilistic perspective, we consider the $ARI$ as a random variable: \begin{equation} ARI = \underbrace{\frac{2}{{n\choose{2}}} \sum_{{\mathcal{P}}} C^1_{ij}C^2_{ij}}_{\widehat{\theta}^{RI}} -\underbrace{\frac{2}{{n\choose{2}}^2} \sum_{{\mathcal{P}}} C^1_{ij} \sum_{{\mathcal{P}}} C^2_{ij}}_{\widehat{\theta}^{RI}_0}, \end{equation} where, as for the $MRI$, we ignored the marginal terms in our definitions of $\widehat{\theta}^{RI}$ and $\widehat{\theta}^{RI}_0$ that cancel in the $ARI$. We now claim the following proposition. \begin{prop} Under the multinomial model we have $$ \mathbb{E}(ARI) = \mathbb{E}(\hat{\theta}^{RI}) - \mathbb{E}(\hat{\theta}^{RI}_{0}), $$ with \begin{gather*} \mathbb{E}(\hat{\theta}^{RI}) = 2\sum_{k\ell}^{KL}\pi_{k\ell}^2 \text{ and }\\ \mathbb{E}(\hat{\theta}^{RI}_{0}) = \frac{2}{{n\choose{2}}^2}\bigg[{n\choose 2}\sum_{k\ell}^{KL}\pi_{k\ell}^2 + n(n-1)(n-2)\sum_{k\ell}^{KL}\pi_{k\ell}\pi_{k.}\pi_{.\ell} + 6{n\choose 4} \sum_{k\ell}^{KL}\pi_{k.}^2\pi_{.\ell}^2 \bigg] \end{gather*} Assuming we are under the null this simplifies so that $\mathbb{E}_{{\mathcal{H}_0} }(ARI) = 0$. \end{prop} \begin{proof} Using Lemma~\ref{lemma:pairs}, we have $$\mathbb{E}(\sum_{{\mathcal{P}}} C^1_{ij} C^2_{ij}) = {n\choose{2}}\sum_{k\ell}^{KL}\pi_{k\ell}^2.$$ Using Definition~\eqref{eqn:allNtupltsSimilar} and Lemmas \ref{lemma:pairs}, \ref{lemma:triplet}, \ref{lemma:quad} we obtain $$\mathbb{E}(\sum_{{\mathcal{P}}} C^1_{ij} \sum_{{\mathcal{P}}} C^2_{ij}) = {n\choose 2}\sum_{k\ell}^{KL}\pi_{k\ell}^2 + n(n-1)(n-2)\sum_{k\ell}^{KL}\pi_{k\ell}\pi_{k.}\pi_{.\ell} + 6{n\choose 4} \sum_{k\ell}^{KL}\pi_{k.}^2\pi_{.\ell}^2. $$ Under the null we have $\pi_{k\ell}^2=\pi_{k.}^2\pi_{.\ell}^2$ and we get $$\mathbb{E}_{\mathcal{H}_0} (\sum_{{\mathcal{P}}} C^1_{ij} C^2_{ij}) = {n\choose 2}\sum_{k\ell}^{KL}\pi_{k.}^2\pi_{.\ell}^2$$ \begin{eqnarray*} \mathbb{E}_{\mathcal{H}_0} (\sum_{{\mathcal{P}}} C^1_{ij} \sum_{{\mathcal{P}}} C^2_{ij}) & = & \sum_{k\ell}^{KL}\pi_{k.}^2\pi_{.\ell}^2\bigg[{n\choose 2} + n(n-1)(n-2) + 6{n\choose 4} \bigg] \\ &=& {n\choose 2}^2\sum_{k\ell}^{KL}\pi_{k.}^2\pi_{.\ell}^2. \end{eqnarray*} The expectations $\mathbb{E}(\hat{\theta}^{RI})$ and $\mathbb{E}(\hat{\theta}^{RI}_0)$ are obtained by scaling respectively with $2/{n\choose{2}}$ and $2/{n\choose{2}}^2$; $\mathbb{E}(ARI)$ is their difference. \end{proof} From these results we conclude that \citeauthor{hubert1985comparing}'s $ARI$ is biased under the multinomial model in general, since the term used for the adjustment is biased as $\mathbb{E}(\widehat{\theta}^{RI}_0) \neq \theta^{RI}_0$. Note, however, that this estimator is not biased under the null ${\mathcal{H}_0} $. 
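The difference between the two adjustment terms can also be explored numerically. The sketch below (a dense computation given for illustration only, with a helper name of our own) returns both the multinomial adjustment $\widehat{\theta}_{0}$ of the $MARI$ and \citeauthor{hubert1985comparing}'s adjustment $\widehat{\theta}^{RI}_{0}$ for two observed label vectors, following Equation \eqref{eqn:allNtupltsSimilar}, Lemma~\ref{lemma:counttriplets} and the definitions above; averaging these quantities over repeated draws from a fixed $\pi_{k\ell}$ gives Monte Carlo estimates of $\mathbb{E}(\widehat{\theta}_{0})$ and $\mathbb{E}(\widehat{\theta}^{RI}_{0})$.
\begin{verbatim}
# Sketch (dense computation, illustration only): the two adjustment terms
# for observed label vectors c1 and c2.
adjustments <- function(c1, c2) {
  n   <- length(c1)
  nkl <- table(c1, c2); nk <- rowSums(nkl); nl <- colSums(nkl)
  P12 <- sum(choose(nkl, 2))                     # sum over P of c1_ij * c2_ij
  P1  <- sum(choose(nk, 2)); P2 <- sum(choose(nl, 2))
  Tsum <- 2 * n + sum(outer(nk, nl) * nkl) -     # sum over T (triplet counts)
    sum(nkl^2) - sum(nk^2) - sum(nl^2)
  Qsum <- P1 * P2 - P12 - Tsum                   # sum over Q (quadruplets)
  c(theta0_MARI = Qsum / (6 * choose(n, 4)),     # multinomial adjustment
    theta0_RI   = 2 * P1 * P2 / choose(n, 2)^2)  # Hubert's adjustment
}
\end{verbatim}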
\subsection{Study of the bias of \citeauthor{hubert1985comparing}'s $ARI$}\label{section:bias} The quantity that we study in this section is \begin{equation*} \label{eqn:HubertBias} \begin{split} \text{bias}_n(\theta^{RI}_0) & = \theta^{RI}_0 - \mathbb{E}(\widehat{\theta}^{RI}_0)\\ & = \sum_{k,\ell}^{K,L}\pi_{k.}^2\pi_{.\ell}^2 - \Big[{n\choose{2}} \sum_{k\ell}^{K,L}\pi_{k\ell}^2 + 6{n\choose{3}}\sum_{k\ell}^{K,L}\pi_{k\ell}\pi_{.\ell}\pi_{k.} + 6{n\choose{4}}\sum_{k\ell}^{K,L}\pi_{k.}^2\pi_{.\ell}^2 \Big] \bigg/{n\choose{2}}^2 \end{split} \end{equation*} \paragraph{The bias vanishes as $n$ goes to infinity.} The bias can be rewritten as \begin{equation*} \label{eq:biasRewritten} \begin{split} \text{bias}_n(\theta^{RI}_0) = \frac{4n - 6}{n(n-1)}\sum_{k,\ell}^{K,L}\pi_{k.}^2\pi_{.\ell}^2 - \frac{2}{n(n-1)}\sum_{k\ell}^{K,L}\pi_{k\ell}^2 - \frac{4(n-2)}{n(n-1)}\sum_{k\ell}^{K,L}\pi_{k\ell}\pi_{.\ell}\pi_{k.} \end{split} \end{equation*} From this expression we get \begin{lemma}\label{lemma:boundbias} \begin{equation*} |\text{bias}_n(\theta^{RI}_0)| \leq \frac{8}{n} \end{equation*} \begin{equation*} |\text{bias}_n(\theta^{RI}_0)| = O(1/n), \quad \text{and} \quad \lim_{n \to +\infty}\text{bias}_n(\theta^{RI}_0) = 0. \end{equation*} \end{lemma} \begin{proof} As seen in Equation~\eqref{eq:biasRewritten}, the bias consists of three terms. The absolute value of their sum is bounded by the sum of their absolute values. Then, using that $\sum_{k, \ell} \pi_{k\ell} = 1$ and all $\pi_{k\ell} \geq 0$, we bound $\sum_{k,\ell}\pi_{k.}^2\pi_{.\ell}^2$, $\sum_{k\ell}\pi_{k\ell}^2$ and $\sum_{k\ell}\pi_{k\ell}\pi_{.\ell}\pi_{k.}$ by $1$ and we get $|\text{bias}_n(\theta^{RI}_0)| \leq \frac{4(2n-3)}{n(n-1)}$. Since $(2n-3) < 2(n-1)$, the result follows. \end{proof} \paragraph{Empirical bias.} In the case of independence the bias is zero. In the case of dependence, Lemma \ref{lemma:boundbias} shows that the bias is smaller than $0.04$ for $n$ larger than $200$. Following the work of \cite{steinley2018note}, we study the magnitude of the difference empirically for small values of $n$ in the next paragraph. In summary, for $n$ larger than $64$ we observe a small bias, typically smaller than $10^{-2}$. For smaller values of $n$ the bias can be larger. \paragraph*{Simulation setting.} We study the evolution of the bias by comparing two classifications with an equal number of groups ($K = L$), with values varying in $K \in \{2, 4, 8, 16, 32, 64, 128 \}$, and a growing number of individuals. The two compared classifications are drawn under the multinomial model of Table~\ref{tab:Pi Contigency}. We consider the three scenarios described below, where we tune the level of difficulty by controlling the balance between group sizes with the parameter $\epsilon$. \begin{description} \item{Scenario 1.} In the first scenario we investigate a $\pi_{k\ell}$ distribution with a disproportionate diagonal, all other entries being zero. \begin{equation*} \pi_{k\ell} = \begin{pmatrix} 1-\epsilon & 0 & \cdots & 0 \\ 0 & \frac{\epsilon}{K-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{\epsilon}{K-1} \end{pmatrix} \end{equation*} \item{Scenario 2.} In the second scenario we investigate a $\pi_{k\ell}$ distribution with a proportional diagonal and an extra-diagonal dependency, all other entries being zero.
\begin{equation*} \pi_{k\ell} = \begin{pmatrix} (1-\epsilon)/K & \epsilon/K & \cdots & 0 \\ 0 & (1-\epsilon)/K & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \epsilon/K & 0 & \cdots & (1-\epsilon)/K \end{pmatrix} \end{equation*} \item{Scenario 3.} In the third scenario we investigate a $\pi_{k\ell}$ distribution in which one row and one column are disproportionate and all other entries are zero. \begin{equation*} \pi_{k\ell} = \begin{pmatrix} 1-\epsilon & \frac{\epsilon}{K+L-2} & \cdots & \frac{\epsilon}{K+L-2} \\ \frac{\epsilon}{K+L-2} & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\epsilon}{K+L-2} & 0 & \cdots & 0 \end{pmatrix} \end{equation*} \end{description} \paragraph{Results.} The results are shown in Figure~\ref{fig:BiasHubert}, where the absolute value of the bias is displayed on $\log_2/\log_{10}$ scales. For the different scenarios, the imbalance parameter $\epsilon$ is fixed to $0.3$ and $0.8$. In all scenarios, the bias remains moderate for most values of $K$ and $n$. When the number of individuals is small, however, the difference becomes more important and using the $(A)RI$ can lead to misleading conclusions. \begin{figure} \caption{\citeauthor{hubert1985comparing}'s ARI bias for different scenarios of $\pi_{k\ell}$-distribution} \label{fig:BiasHubert} \end{figure} \section{Conclusion} As a conclusion, we argue that one should always prefer our $M(A)RI$ to the $(A)RI$. There are four main reasons for this. \begin{itemize} \item The adjustment of the $RI$ is based on a hypergeometric distribution, which is unsatisfying from a modeling perspective. In particular, it forces the sizes of the clusters to be fixed and it ignores the randomness of the sampling (see the introduction). The multinomial model behind the $MARI$ does not fix the cluster sizes and properly models this randomness. Furthermore, the model easily extends to the dependent case. \item The difference between the $ARI$ and the $MARI$ can be large for small $n$ but essentially vanishes for large $n$ (see Section \ref{section:bias}). \item The $M(A)RI$ can be computed just as fast as the $(A)RI$, in only $O(n)$ rather than $O(n + KL)$, using our \texttt{aricode} package. \item The $M(A)RI$ does not take into account pairs coherent by difference, which -- as argued in Section \ref{sec:newdefRI} -- unnecessarily complicate the analysis and interpretation of the $(A)RI$. \end{itemize} \section*{Conflict of interest} We declare that we have no conflict of interest. \eject \end{document}
\begin{document} \title{"Physical quantity" and " Physical reality" in Quantum Mechanics: an epistemological path.} \author{David Vernette} \author{Michele Caponigro} \affiliation{} \emph{\date{\today}} \begin{abstract} We reconsider briefly the \textbf{relation} between "\textbf{physical quantity}" and "\textbf{physical reality}" in the light of recent interpretations of Quantum Mechanics. We argue, that these interpretations are conditioned from the epistemological relation between these two fundamental concepts. In detail, the choice as ontic level of the concept affect, the relative interpretation. We note, for instance, that the informational view of quantum mechanics ( primacy of the subjectivity) is due mainly to the evidence of the "random" physical quantities as ontic element. We will analyze four positions: Einstein, Rovelli, d'Espagnat and Zeilinger. \end{abstract} \maketitle \section{Introduction} What do we mean with physical quantities? In quantum mechanics they play a central role, specifically in a measurement process. Physical quantities give us information on the state of a physical system. What do we mean instead with physical reality? We have not any clear definition. There are many hypothesis on their relation, most important (Einstein position), was the tentative to establish a perfect "isomorphism". We are interested to analyze this possible relation. We retain fundamental this debate, because the evolution of the two concepts are strictly linked with the foundations of physical laws. We will utilize the foundations of Quantum Theory as useful tool to go at the heart of the problem. \subsection{Realism} We need to give a "general" definition of "realism". There are many forms of realism, stronger and weaker. Realism, roughly speaking, is the belief that there exists an objective world “out there” independent of our observations. The doctrines of realism are divided into a number of varieties: ontological, semantical, epistemological, axiological, methodological. Ontological studies the nature of reality, especially problems concerning existence, semantical is interested in the relation between language and reality. Epistemological investigates the possibility nature and scope of human knowledge. The question of the aims of enquiry is one of the subject of axiology, while methodological studies the best, or most effective means of attaining knowledge. In synthesis: \begin{itemize} \item (ontological):Which entities are real? Is there a mind-independent world?. \item (semantical):Is truth an objective language-world relation?. \item (epistemological):Is knowledge about the world possible?. \item (axiological): Is truth one of the aims of enquiry? \item (methodological): What are the best methods for pursuing knowledge. \end{itemize} In this paper, we are interested to "ontological realism", specifically the ontological realism in quantum mechanics. We will analyze four significative positions: Einstein, Rovelli, d'Espagnat and Zeilinger. In advance we can say that, starting from Einstein to Zeilinger, we will assist to a gradual disappearance of the physical reality (and their relative isomorphism). 
\section{Physical quantity and physical reality in Einstein, Rovelli, d'Espagnat and Zeilinger} \subsection{Einstein's position\cite{Ei}:} \emph{If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity}.\\ \\ {\footnotesize This was the basic conjecture of the EPR argument, whose primary objective was to prove the incompleteness of QM. The original paper used entangled pairs of particles whose wave function cannot be written as a tensor product. Instead of this quite general configuration, one usually considers an entangled pair of spin-$\frac{1}{2}$ particles prepared, following Bohm\cite{Bo}, in the so-called \emph{singlet state}, which is rotation invariant and given along any direction by \[\Psi(x_1,x_2)=\frac{1}{\sqrt{2}}(| +\rangle _1\otimes| -\rangle_2-| -\rangle_1\otimes| +\rangle_2)\,.\] } The above citation leads us to analyze the two fundamental concepts mentioned in it: (i) physical quantity and (ii) physical reality. We regard the debate on these two notions as completely open, because we have no univocal and deep definition of either. The importance of the above statement, to us, lies in the following strong epistemological affirmation:\\ \emph{[..]\textbf{then there exists an element of physical reality corresponding to this physical quantity}}.\\ We note a forced "isomorphism" between the two concepts. Along this line, in our view, the genuine differences between the various interpretations of quantum theory begin. Is it correct to "force" this isomorphic relation? The relation could be much more complex.\\ We can make some theoretical considerations. First, in a "realist's world view" there exist physical quantities with "objective properties", which are independent of any act of observation or measurement; but we cannot exclude the existence of other elements of physical reality, with definite values, which do not depend on measurement. We summarize below the possible theoretical conjectures: \begin{itemize} \item A perfect "isomorphism" between the two concepts (e.g. Einstein's position). \item A measurable physical quantity without correspondence in physical reality (e.g. Zeilinger's position). {\footnotesize\item A measurable physical quantity with a \emph{"veiled"} correspondence in physical reality (e.g. d'Espagnat's position).} \item An unmeasurable physical quantity with a possible existence in physical reality. \item An unmeasurable physical quantity without any existence in physical reality. \end{itemize} Of course, philosophers can ascribe these epistemological positions to philosophical schools. Here, we can easily raise many questions, for instance: (i) what is an unmeasurable physical reality? (ii) is it possible that all physical quantities are measurable? (iii) what is a physical quantity without a corresponding physical reality? \\ How can we get out of this impasse? There are some interesting proposals, for instance relational quantum mechanics. \subsection{Rovelli's position\cite{Rov}:} \emph{Rovelli departs radically from such strict Einstein realism: \textbf{physical reality} is taken to be formed by the \textbf{individual quantum events} through which interacting systems (objects) affect one another. \textbf{Quantum events exist only in interactions}, and the reality of each quantum event is only relative to the system involved in the interaction. In Relational QM, the preferred observer is abandoned.
Indeed, it is a fundamental assumption of this approach that nothing distinguishes, a priori, systems and observers: any physical system provides a potential observer, and physics concerns what can be said about nature on the basis of the information that any physical system can, in principle, have. Different observers can of course exchange information, but we must not forget that such information exchange is itself a quantum mechanical interaction. An exchange of information is therefore a quantum measurement performed by one observing system $A$ upon another observing system $B$.} \\ These considerations are based on the following basic concepts\cite{Rov}:\\ \emph{The physical theory is concerned with relations between physical systems. In particular, it is concerned with the description that observers give of observed systems. Following our hypothesis (i.e. all systems are equivalent: nothing a priori distinguishes observer systems from quantum systems; if the observer O can give a description of the system S, then it is also legitimate for an observer O' to give a quantum description of the system formed by the observer O), we reject any fundamental or metaphysical distinction such as: system/observer, quantum system/classical system, physical system/consciousness. We assume the existence of an ensemble of systems, each of which can be equivalently considered as an observing system or as an observed system. A system (observing system) may have information about another system (observed system). Information is exchanged via physical interactions. The actual process through which information is collected and stored is not of particular interest here, but can be physically described in any specific instance.} \\ Rovelli's position leads us to the following epistemological implications: \begin{itemize} \item (i) the rejection of the individual object; \item (ii) the rejection of individual intrinsic properties. \end{itemize} Some consequences: (a) it is not possible to define an \textbf{individual} object at a spatio-temporal location; (b) it is not possible to characterize the properties of objects so as to distinguish them from one another. In other words, if we adopt \textbf{interaction} as the basic level of physical reality, we accept the philosophy of \textbf{relations}, and: \begin{itemize} \item (i) we renounce the possible existence of intrinsic properties; \item (ii) we accept relational properties (mathematical models). \end{itemize} {\footnotesize We recall, for instance, that a mathematical model based on the relationist principle accepts that the position of an object can only be defined with respect to other matter. We do not venture into the philosophical implications of relationalism (i.e. the monism which affirms that there is no a priori distinction between physical entities). An important advantage of these approaches is the possibility of eliminating the privileged role of the observer. This is the importance of Rovelli's approach to quantum mechanics. In detail, Rovelli\cite{Rov} claims that QM itself drives us to the relational perspective, and the founding postulate of relational quantum mechanics is to stipulate that we shall not talk about \textbf{properties of systems in the abstract}, but only of properties of systems relative to one system; we can never juxtapose properties relative to different systems.
Relational QM is not the claim that reality is described by the collection of all properties relative to all systems; rather, reality admits one description per (observing) system, and each such description is internally consistent. Just as Einstein's original motivation with EPR was not to question locality, but rather to question the completeness of QM, so the relational interpretation can be read as the discovery of the incompleteness of the description of reality that any \textbf{single observer} can give: in this particular sense, relational QM can be said to show the "incompleteness" of single-observer Copenhagen QM.} Rovelli's approach, however, does not venture into the clarification of the two notions of physical quantity and physical reality. As we have seen, he takes as fundamental the relation \textbf{between} systems. The \textbf{mathematical nature} of this relation is the real problem. Of course, we can ask: a mathematical law of what? \subsection{d'Espagnat's position\cite{Es}:} \emph{"[He] defines his philosophical view as "open realism"; existence precedes knowledge; something exists independently of us even if it cannot be described".} According to d'Espagnat, we are unable to describe physical reality, but he admits its \textbf{existence}. For this reason, with respect to our analysis, this position is not entirely clear: according to d'Espagnat, we can trust only physical quantities, yet we have no tool to verify their correspondence with the physical world. \subsection{Zeilinger's position\cite{Ze}:} The notion of \textbf{individuality} has recently introduced a radical interpretation of quantum mechanics. Here the forced equivalence is between \textbf{information and individuality} (and not between physical quantity and physical reality); this is Zeilinger's\cite{Ze} view. He put forward an idea which connects the concept of information with the notion of elementary systems:\\ \emph{First we note that our description of the physical world is represented by propositions. Any physical object can be described by a set of true propositions. Second, we have knowledge or information about an object only through observations. It does not make any sense to talk about reality without the information about it. Any complex object which is represented by numerous propositions can be decomposed into constituent systems which need fewer propositions to be specified. The process of subdividing reaches its limit when the individual subsystems only represent a single proposition, and such a system is denoted as an elementary system (the qubit of modern quantum physics).}\\ In short, the \textbf{random physical quantity} becomes the fundamental element that fixes \textbf{any} correspondence with physical reality, which is the opposite of Einstein's position. \section{Conclusion} We have analyzed how, starting from genuine realism, we have reached a genuine subjectivism. Physical reality \textbf{gradually disappears}, step by step, and is replaced by the subject. We ascribe this evolution to the unclear epistemological relation between physical quantity and physical reality; hence the interpretation of quantum mechanics is not only a matter of analyzing the formalism. Finally, we conclude with a paradoxical question: was Einstein a realist? As we have seen, he was the only \textbf{real "idealist"}, because he did not give up the search for physical reality.
{\footnotesize------------------\\ $\diamond$ David Vernette: Quantum Philosophy Theories, www.qpt.org.uk\\ david.vernette.org.uk \\ $\diamond$ Michele Caponigro: University of Camerino, Physics Department, [email protected]} \end{document}
\begin{document} \begin{center} {\bf On Relatively Prime Subsets and Supersets} \vskip 20pt {\bf Mohamed El Bachraoui\footnote{Supported by RA at UAEU, grant: 02-01-2-11/09}} \\ \emph{Dept. Math. Sci., United Arab Emirates University, P.O.Box 17551, Al-Ain, UAE} \\ {\tt [email protected]}\\ \vskip 10pt \end{center} \vskip 30pt \vskip 30pt \centerline{\bf Abstract} \noindent A nonempty finite set of positive integers $A$ is relatively prime if $\gcd(A)=1$ and it is relatively prime to $n$ if $\gcd(A\cup \{n\})=1$. The number of nonempty subsets of $A$ which are relatively prime to $n$ is $\Phi(A,n)$ and the number of such subsets of cardinality $k$ is $\Phi_k(A,n)$. Given positive integers $m_1$, $l_2$, $m_2$, and $n$ such that $m_1\leq l_2\leq m_2$, we give $\Phi( [1,m_1]\cup [l_2, m_2],n)$ along with $\Phi_k( [1,m_1]\cup [l_2, m_2],n)$. Given positive integers $l$, $m$, and $n$ such that $l\leq m$, we count, for any subset $A$ of $\{l,l+1,\ldots,m\}$, the number of its supersets in $[l,m]$ which are relatively prime, and the number of such supersets which are relatively prime to $n$. Formulas are also obtained for corresponding supersets having fixed cardinalities. Intermediate consequences include a formula for the number of relatively prime sets with a nonempty intersection with some fixed set of positive integers. \pagestyle{myheadings} \thispagestyle{empty} \baselineskip=15pt \vskip 30pt \noindent \textbf{Keywords:}\quad Relatively prime sets, Phi function, M\"obius inversion. \\ \\ \noindent \textbf{Subject Class:}\quad 11A25, 11B05, 11B75. \section*{\normalsize 1. Introduction} Throughout let $k, l, m, n$ be positive integers such that $l \leq m$, let $[l,m] = \{l,l+1,\ldots,m\}$, let $\mu$ be the M\"obius function, and let $\lfloor x \rfloor$ be the floor of $x$. If $A$ is a set of integers and $d\not= 0$, then $\frac{A}{d}= \{ a/d:\ a \in A\}$. A nonempty set of positive integers $A$ is called \emph{relatively prime} if $\gcd(A)=1$ and it is called \emph{relatively prime to $n$} if $\gcd(A\cup \{n\}) = \gcd(A,n) = 1$. Unless otherwise specified $A$ and $B$ will denote nonempty sets of positive integers. We will need the following basic identity on binomial coefficients stating that for nonnegative integers $L\leq M \leq N$ \begin{equation}\label{binomial} \sum_{j=M}^{N}\binom{j}{L} = \binom{N+1}{L+1}-\binom{M}{L+1}. \end{equation} \noindent {\bf Definition 1.} Let \[ \begin{split} \Phi(A,n) &= \# \{X\subseteq A:\ X\not= \emptyset\ \text{and\ } \gcd(X,n) = 1 \}, \\ \Phi_k (A,n) &= \# \{X\subseteq A:\ \# X= k \ \text{and\ } \gcd(X,n) = 1 \}, \\ f(A) &= \# \{X\subseteq A:\ X\not= \emptyset\ \text{and\ } \gcd(X) = 1 \}, \\ f_k (A) &= \# \{X\subseteq A:\ \# X= k \ \text{and\ } \gcd(X) = 1 \}. \end{split} \] \noindent Nathanson in \cite{Nathanson} introduced $f(n)$, $f_k(n)$, $\Phi(n)$, and $\Phi_k(n)$ (in our terminology $f([1,n])$, $f_k([1,n])$, $\Phi([1,n],n)$, and $\Phi_k([1,n],n)$ respectively) and gave their formulas along with asymptotic estimates. Formulas for $f([m,n])$, $f_k([m,n])$, $\Phi([m,n],n)$, and $\Phi_k([m,n],n)$ are found in \cite{ElBachraoui1, Nathanson-Orosz} and formulas for $\Phi([1,m],n)$ and $\Phi_k([1,m],n)$ for $m\leq n$ are obtained in \cite{ElBachraoui2}. Recently Ayad and Kihel in \cite{Ayad-Kihel2} considered phi functions for sets which are in arithmetic progression and obtained the following more general formulas for $\Phi([l,m],n)$ and $\Phi_k ([l,m],n)$.
\noindent {\bf Theorem 1.}\ We have \[ \begin{split} \text{(a)\quad } &\ \Phi([l,m],n) = \sum_{d|n}\mu(d) 2^{\lfloor m/d \rfloor- \lfloor (l-1)/d \rfloor}, \\ \text{(b)\quad } &\ \Phi_k ([l,m],n) = \sum_{d|n} \mu(d) \binom{\lfloor m/d \rfloor- \lfloor (l-1)/d \rfloor}{k}. \end{split} \] \section*{\normalsize 2. Relatively prime subsets for $[1,m_1]\cup [l_2,m_2]$} If $[1,m_1]\cap[l_2,m_2]= \emptyset$, then phi functions for $[1,m_1]\cup[l_2,m_2]= [1,m_2]$ are obtained by Theorem 1. So we may assume that $1 \leq m_1 < l_2 \leq m_2$. \noindent {\bf Lemma 1.}\label{lem:psi} Let \[ \Psi(m_1,l_2,m_2, n)= \# \{X \subseteq [1,m_1]\cup [l_2,m_2]:\ l_2\in X\ \text{and\ } \gcd(X,n)=1 \}, \] \[ \Psi_k(m_1,l_2,m_2,n)=\# \{X \subseteq [1,m_1]\cup [l_2,m_2]:\ l_2\in X,\ |X| = k,\ \text{and\ } \gcd(X,n)=1 \}. \] Then \[ \text{(a)\quad } \Psi(m_1,l_2,m_2, n) = \sum_{d|(l_2,n)}\mu(d) 2^{\lfloor m_1/d \rfloor + \lfloor m_2/d \rfloor- l_2/d}, \] \[ \text{(b)\quad } \Psi_k (m_1,l_2,m_2, n) = \sum_{d|(l_2,n)}\mu(d) \binom{\lfloor m_1/d \rfloor + \lfloor m_2/d \rfloor - l_2/d }{k-1}. \] \begin{proof} (a) Assume first that $m_2\leq n$. Let $\mathcal{P}(m_1,l_2,m_2)$ denote the set of subsets of $[1,m_1]\cup[l_2,m_2]$ containing $l_2$ and let $\mathcal{P}(m_1,l_2,m_2,d)$ be the set of subsets $X$ of $[1,m_1]\cup[l_2,m_2]$ such that $l_2\in X$ and $\gcd(X,n) = d$. It is clear that the set $\mathcal{P}(m_1,l_2,m_2)$ of cardinality $2^{m_1+m_2-l_2}$ can be partitioned using the equivalence relation of having the same $\gcd$ (dividing $l_2$ and $n$). Moreover, the mapping $A \mapsto \frac{1}{d} X$ is a one-to-one correspondence between $\mathcal{P}(m_1,l_2,m_2,d)$ and the set of subsets $Y$ of $[1, \lfloor m_1/d \rfloor ]\cup [l_2/d,\lfloor m_2/d \rfloor]$ such that $l_2/d \in Y$ and $\gcd(Y,n/d)= 1$. Then \[ \# \mathcal{P}(m_1,l_2,m_2,d) = \Psi(\lfloor m_1/d \rfloor,l_2 /d,\lfloor m_2/d \rfloor,n/d). \] Thus \[ 2^{m_1+m_2-l_2} = \sum_{d|(l_2,n)} \# \mathcal{P}(m_1,l_2,m_2,d)= \sum_{d|(l_2,n)} \Psi (\lfloor m_1/d \rfloor,l_2 /d,\lfloor m_2/d \rfloor,n/d), \] which by the M\"obius inversion formula extended to multivariable functions \cite[Theorem 2]{ElBachraoui1} is equivalent to \[ \Psi(m_1,l_2,m_2,n) = \sum_{d|(l_2,n)}\mu(d) 2^{\lfloor m_1/d \rfloor + \lfloor m_2/d \rfloor - l_2/d}. \] Assume now that $m_2 >n$ and let $a$ be a positive integer such that $m_2 \leq n^a$. As $\gcd(X,n)=1$ if and only if $\gcd(X,n^a)=1$ and $\mu(d) =0$ whenever $d$ has a nontrivial square factor, we have \[ \begin{split} \Psi(m_1,l_2,m_2,n) &= \Psi(m_1,l_2,m_2,n^a) \\ &= \sum_{d|(l_2,n^a)}\mu(d) 2^{\lfloor m_1/d \rfloor + \lfloor m_2/d \rfloor - l_2/d} \\ &= \sum_{d|(l_2,n)}\mu(d) 2^{\lfloor m_1/d \rfloor + \lfloor m_2/d \rfloor - l_2/d}. \end{split} \] (b) For the same reason as before, we may assume that $m_2 \leq n$. Noting that the correspondence $X\mapsto \frac{1}{d} X$ defined above preserves the cardinality and using an argument similar to the one in part (a), we obtain the following identity \[ \binom{m_1+m_2-l_2}{k-1}= \sum_{d|(l_2,n)}\Psi_k (\lfloor m_1/d \rfloor,l_2 /d,\lfloor m_2/d \rfloor, n/d ) \] which by the M\"obius inversion formula \cite[Theorem 2]{ElBachraoui1} is equivalent to \[ \Psi_k (m_1,l_2,m_2,n) = \sum_{d|(l_2,n)}\mu(d)\binom{\lfloor m_1/d \rfloor + \lfloor m_2/d \rfloor - l_2/d }{k-1}, \] as desired. 
\end{proof} \noindent {\bf Theorem 2.}\label{thm:main2} We have \[ \begin{split} \text{(a)\quad } \Phi([1,m_1]\cup [l_2,m_2],n) &= \sum_{d|n}\mu(d) 2^{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor - \lfloor\frac{l_2 -1}{d} \rfloor}, \\ \text{(b)\quad } \Phi_k ([1,m_1]\cup [l_2,m_2],n) &= \sum_{d|n} \mu(d) \ \binom{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor - \lfloor\frac{l_2 -1}{d} \rfloor}{k}. \end{split} \] \begin{proof} (a) Clearly \begin{equation}\label{help1} \begin{split} \Phi([1,m_1]\cup [l_2,m_2],n) & = \Phi([1,m_1]\cup [l_2 -1,m_2],n) - \Psi(m_1,l_2 -1,m_2,n) \\ &= \Phi([1,m_1]\cup [m_1+1,m_2],n) - \sum_{i=m_1 +1}^{l_2 -1}\Psi(m_1,i,m_2,n) \\ &= \Phi([1,m_2] - \sum_{i=m_1 +1}^{l_2 -1}\Psi(m_1,i,m_2,n) \\ &= \sum_{d|n} \mu(d) 2^{\lfloor m_2/d \rfloor} - \sum_{i=m_1 +1}^{l_2 -1}\sum_{d|(n, i)} \mu(d) 2^{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor - \frac{i}{d}}, \end{split} \end{equation} where the last identity follows by Theorem 1 for $l=1$ and Lemma 1. Rearranging the last summation in (\ref{help1}) gives \begin{equation}\label{help2} \begin{split} \sum_{i=m_1 +1}^{l_2- 1}\sum_{d|(n, i)} \mu(d) 2^{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor - \frac{i}{d}} &= \sum_{d|n}\sum_{\substack{i=m_1+1\\ d|i}}^{l_2-1} \mu(d) 2^{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor - \frac{i}{d}} \\ &= \sum_{d|n}\mu(d) 2^{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor} \sum_{j=\lfloor \frac{m_1}{d} \rfloor +1}^{\lfloor \frac{l_2-1}{d} \rfloor} 2^{-j} \\ &= \sum_{d|n}\mu(d) 2^{\lfloor \frac{m_2}{d} \rfloor} \left(1- 2^{-\lfloor \frac{l_2-1}{d}\rfloor +\lfloor \frac{m_1}{d}\rfloor} \right). \end{split} \end{equation} Now combining identities (\ref{help1}, \ref{help2}) yields the result. \noindent (b) Proceeding as in part (a) we find \begin{equation}\label{help3} \begin{split} \Phi_k ([1,m_1]\cup [l_2,m_2],n) &= \sum_{d|n} \mu(d) \binom{\lfloor \frac{m_2}{d}\rfloor}{k} - \sum_{i=m_1 +1}^{l_2-1}\sum_{d|(n, i)} \mu(d) \binom{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor - \frac{i}{d}}{k-1}. \end{split} \end{equation} Rearranging the last summation on the right of (\ref{help3}) gives \begin{equation}\label{help4} \begin{split} \sum_{i=m_1 +1}^{l_2 -1}\sum_{d|(n, i)} \binom{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor - \frac{i}{d}}{k-1} &= \sum_{d|n}\mu(d) \sum_{j=\lfloor \frac{m_1}{d} \rfloor +1}^{\lfloor \frac{l_2-1}{d} \rfloor} \binom{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor- j}{k-1}\\ &= \sum_{d|n}\mu(d) \sum_{i=\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor-\lfloor \frac{l_2-1}{d} \rfloor}^{ \lfloor \frac{m_2}{d} \rfloor -1} \binom{i}{k-1} \\ &= \sum_{d|n}\mu(d) \left( \binom{\lfloor \frac{m_2}{d} \rfloor}{k}- \binom{\lfloor \frac{m_1}{d} \rfloor +\lfloor \frac{m_2}{d} \rfloor-\lfloor \frac{l_2-1}{d} \rfloor}{k} \right), \end{split} \end{equation} where the last identity follows by formula (\ref{binomial}). Then identities (\ref{help3}, \ref{help4}) yield the desired result. \end{proof} \noindent {\bf Definition 2.} Let \[ \begin{split} \varepsilon(A,B,n) &= \# \{ X \subseteq B:\ X \not=\emptyset,\ X \cap A= \emptyset,\ \text{and\ } \gcd(X,n)=1 \}, \\ \varepsilon_k(A,B,n) &= \# \{ X \subseteq B:\ \# X = k,\ X \cap A= \emptyset,\ \text{and\ } \gcd(X,n)=1 \}. \end{split} \] If $B= [1,n]$ we will simply write $\varepsilon(A,n)$ and $\varepsilon_k(A,n)$ rather than $\varepsilon(A,[1,n],n)$ and $\varepsilon_k(A,[1,n],n)$ respectively. 
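\noindent Before proceeding, here is a quick numerical check of Theorem 2 (our own example): for $m_1=1$, $l_2=3$, $m_2=4$, and $n=2$, formula (a) gives \[ \Phi([1,1]\cup[3,4],2)= 2^{1+4-2}-2^{0+2-1}=8-2=6, \] in agreement with a direct count: the nonempty subsets of $\{1,3,4\}$ which are relatively prime to $2$ are exactly those containing an odd element, and there are $6$ of them (all $7$ nonempty subsets except $\{4\}$).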
\noindent {\bf Theorem 3.} If $l \leq m < n$, then \[ \text{(a)\ } \varepsilon([l,m],n) = \sum_{d|n} \mu(d) 2^{\lfloor (l-1)/d \rfloor + n/d - \lfloor m/d \rfloor}, \] \[ \text{(b)\ } \varepsilon_k([l,m],n) = \sum_{d|n} \mu(d) \binom{\lfloor (l-1)/d \rfloor + n/d - \lfloor m/d \rfloor}{k} . \] \begin{proof} Immediate from Theorem 2 since \[ \varepsilon([l,m],n) = \Phi([1,l-1]\cup [m+1,n],n)\ \text{and\ } \varepsilon_k([l,m],n) = \Phi_k([1,l-1]\cup [m+1,n],n). \] \end{proof} \section*{\normalsize 3. Relatively prime supersets} In this section the sets $A$ and $B$ are not necessary nonempty. \noindent {\bf Definition 3.} If $A\subseteq B$ let \[ \begin{split} \overline{\Phi}(A,B,n) &= \# \{X\subseteq B:\ X \not= \emptyset,\ A\subseteq X,\ \text{and\ } \gcd(X,n)=1 \}, \\ \overline{\Phi}_k(A,B,n) &= \# \{X\subseteq B:\ A\subseteq X,\ \# X=k,\ \text{and\ } \gcd(X,n)=1 \}, \\ \overline{f}(A,B) &= \# \{X\subseteq B:\ X \not= \emptyset,\ A\subseteq X,\ \text{and\ } \gcd(X)=1 \}, \\ \overline{f}_k (A,B) &= \# \{X\subseteq B:\ \# X = k,\ A\subseteq X,\ \text{and\ } \gcd(X)=1 \}. \end{split} \] \noindent The purpose of this section is to give formulas for $\overline{f}(A,[l,m])$, $\overline{f}_k(A,[l,m])$, $\overline{\Phi}(A,[l,m],n)$, and $\overline{\Phi}_k(A,[l,m],n)$ for any subset $A$ of $[l,m]$. We need a lemma. \noindent {\bf Lemma 2.} If $A \subseteq [1,m]$, then \[ \text{(a)\quad } \overline{\Phi}(A,[1,m],n) = \sum_{d| (A,n)}\mu(d) 2^{\lfloor m/d \rfloor - \# A}, \] \[ \text{(b)\ } \overline{\Phi}_k(A,[1,m],n) = \sum_{d| (A,n)}\mu(d) \binom{ \lfloor m/d \rfloor - \# A}{k- \# A}\ \text{whenever\ } \# A \leq k \leq m. \] \begin{proof} If $A = \emptyset$, then clearly \[ \overline{\Phi}(A,[1,m],n) = \Phi([1,m],n) \ \text{and\ } \overline{\Phi}_k (A,[1,m],n) = \Phi_k ([1,m],n) \] and the identities in (a) and (b) follow by Theorem 1 for $l=1$. Assume now that $A \not= \emptyset$. If $m\leq n$, then \[ 2^{m- \# A} = \sum_{d|(A,n)} \overline{\Phi}(\frac{A}{d},[1,\lfloor m/d \rfloor],n/d) \] and \[ \binom{m- \# A}{k- \# A} = \sum_{d| (A,n)}\mu(d) \overline{\Phi}_k(\frac{A}{d},[1,\lfloor m/d \rfloor],n/d) \] which by M\"obius inversion \cite[Theorem 2]{ElBachraoui1} are equivalent to the identities in (a) and in (b) respectively. If $m >n$, let $a$ be a positive integer such that $m \leq n^a$. As $\gcd(X,n)=1$ if and only if $\gcd(X,n^a)=1$ and $\mu(d) =0$ whenever $d$ has a nontrivial square factor we have \[ \begin{split} \overline{\Phi}(A,[1,m],n) &= \overline{\Phi}(A,[1,m],n^a) \\ &= \sum_{d| (A,n^a)}\mu(d) 2^{\lfloor m/d \rfloor - \# A} \\ &= \sum_{d| (A,n)}\mu(d) 2^{\lfloor m/d \rfloor - \# A}. \end{split} \] The same argument gives the formula for $\overline{\Phi}_k(A,[1,m],n)$. \end{proof} \noindent {\bf Theorem 4.}\label{thm:main3} If $A\subseteq [l,m]$, then \[ \text{(a)\quad } \overline{\Phi}(A,[l,m],n)= \sum_{d| (A,n)}\mu(d) 2^{\lfloor m/d \rfloor - \lfloor (l-1)/d \rfloor -\# A}, \] \[ \text{(b)\quad } \overline{\Phi}_k (A,[l,m],n)= \sum_{d| (A,n)}\mu(d) \binom{ \lfloor m/d \rfloor - \lfloor (l-1)/d \rfloor -\# A}{k- \# A}\ \text{whenever\ } \# A \leq k \leq m-l+1. \] \begin{proof} If $A= \emptyset$, then clearly \[ \overline{\Phi}(A,[l,m],n)= \Phi ([l,m],n) \] and \[ \overline{\Phi}_k (A,[l,m],n)= \Phi_k ([l,m],n) \] and the identities in (a) and (b) follow by Theorem 1. \\ Assume now that $A\not= \emptyset$. Let \[ \Psi (A,l,m,n) = \# \{ X\subseteq [l,m]:\ A\cup\{l\} \subseteq X, \text{and\ } \gcd(X,n)=1 \}. 
\] Then \[ 2^{m-l- \# A}= \sum_{d|(A,l,n)} \Psi (\frac{A}{d}, l/d, \lfloor m/d \rfloor, n/d), \] which by M\"obius inversion \cite[Theorem 2]{ElBachraoui1} means that \begin{equation} \label{eq:one} \Psi (A,l,m,n) = \sum_{d|(A,l,n)} \mu(d) 2^{\lfloor m/d \rfloor -l/d - \# A}. \end{equation} Then combining identity (\ref{eq:one}) with Lemma 2 gives \begin{equation} \begin{split} \overline{\Phi}(A,[l,m],n) &= \overline{\Phi}([A,[1,m],n) - \sum_{i=1}^{l-1} \Psi(i,m,A,n) \\ &= \sum_{d|(A,n)} \mu(d) 2^{ \lfloor m/d \rfloor - \# A} - \sum_{i=1}^{l-1} \sum_{d|(A,i,n)} \mu(d) 2^{\lfloor m/d \rfloor -i/d - \# A} \\ &= \sum_{d|(A,n)} \mu(d) 2^{ \lfloor m/d \rfloor - \# A} - \sum_{d|(A,n)} \mu(d) 2^{ \lfloor m/d \rfloor - \# A} \sum_{j=1}^{\lfloor (l-1)/d \rfloor} 2^{-j} \\ &= \sum_{d|(A,n)} \mu(d) 2^{ \lfloor m/d \rfloor - \# A} - \sum_{d|(A,n)} \mu(d) 2^{ \lfloor m/d \rfloor - \# A}(1 - 2^{- \lfloor (l-1)/d \rfloor}) \\ &= \sum_{d| (A,n)}\mu(d) 2^{\lfloor m/d \rfloor - \lfloor (l-1)/d \rfloor -\# A}. \end{split} \end{equation} This completes the proof of (a). Part (b) follows similarly. \end{proof} \noindent As to $\overline{f}(A,[l,m])$ and $\overline{f}_k (A,[l,m])$ we similarly have: \noindent {\bf Theorem 5.}\label{thm:main4} If $A \subseteq [l,m]$, then \[ \text{(a)\quad } \overline{f}(A,[l,m])= \sum_{d| \gcd(A)}\mu(d) 2^{\lfloor \frac{m}{d} \rfloor - \lfloor \frac{l-1}{d} \rfloor -\# A}, \] \[ \text{(b)\quad } \overline{f}_k (A,[l,m])= \sum_{d| \gcd(A)}\mu(d) \binom{ \lfloor \frac{m}{d}\rfloor - \lfloor \frac{l-1}{d} \rfloor -\# A}{k- \# A},\ \text{whenever\ } \# A \leq k\leq m-l+1. \] We close this section by formulas for relatively prime sets which have a nonempty intersection with $A$. \noindent {\bf Definition 4.} Let \[ \begin{split} \overline{\varepsilon}(A,B,n) &= \# \{ X \subseteq B:\ X \cap A\not= \emptyset\ \text{and\ } \gcd(X,n)=1 \}, \\ \overline{\varepsilon}_k(A,B,n) &= \# \{ X \subseteq B:\ \# X =k,\ X \cap A\not= \emptyset,\ \text{and\ } \gcd(X,n)=1 \}, \\ \overline{\varepsilon}(A,B) &= \# \{ X \subseteq B:\ X \cap A\not= \emptyset\ \text{and\ } \gcd(X)=1 \}, \\ \overline{\varepsilon}_k(A,B) &= \# \{ X \subseteq B:\ \# X =k,\ X \cap A\not= \emptyset,\ \text{and\ } \gcd(X)=1 \}. \end{split} \] \noindent {\bf Theorem 6.} We have \[ \text{(a)\quad } \overline{\varepsilon}(A,[l,m],n)= \sum_{\emptyset\not= X \subseteq A} \sum_{d|(X,n)} \mu(d) 2^{\lfloor \frac{m}{d} \rfloor - \lfloor \frac{l-1}{d} \rfloor -\# X}, \] \[ \text{(b)\quad } \overline{\varepsilon}_k(A,[l,m],n)= \sum_{\substack{\emptyset \not= X\subseteq A \\ \# X \leq k}} \sum_{d|(X,n)} \mu(d) \binom{ \lfloor \frac{m}{d}\rfloor - \lfloor \frac{l-1}{d} \rfloor -\# X}{k- \# X}, \] \[ \text{(c)\ } \overline{\varepsilon}(A,B) = \sum_{\emptyset\not= X \subseteq A} \sum_{d|\gcd(X)} \mu(d) 2^{\lfloor \frac{m}{d} \rfloor - \lfloor \frac{l-1}{d} \rfloor -\# X}, \] \[ \text{(d)\ } \overline{\varepsilon}_k(A,B) = \sum_{\substack{\emptyset \not= X\subseteq A \\ \# X \leq k}} \sum_{d|\gcd(X)} \mu(d) \binom{ \lfloor \frac{m}{d}\rfloor - \lfloor \frac{l-1}{d} \rfloor -\# X}{k- \# X}. 
\] \begin{proof} These formulas follow from Theorems 4 and 5 and the facts that \[ \overline{\varepsilon}(A,[l,m],n)= \sum_{\emptyset\not= X \subseteq A} \overline{\Phi}(X,[l,m],n), \] \[ \overline{\varepsilon}_k(A,[l,m],n)= \sum_{\substack{\emptyset \not= X\subseteq A \\ \# X \leq k}} \overline{\Phi}_k (X,[l,m],n), \] \[ \overline{\varepsilon}(A,[l,m]) = \sum_{\emptyset\not= X \subseteq A}\overline{f}(X,[l,m]), \] \[ \overline{\varepsilon}_k(A,[l,m]) = \sum_{\substack{\emptyset\not= X \subseteq A \\ \# X \leq k}} \overline{f}_k(X,[l,m]). \] \end{proof} \end{document}
\begin{document} \title{Restricted Partition Functions as \\ Bernoulli and Euler Polynomials of Higher Order} \author{Boris Y. Rubinstein${}^{\dag}$ and Leonid G. Fel${}^{\ddag}$\\ \\ ${}^{\dag}$Department of Mathematics, University of California, Davis, \\One Shields Dr., Davis, CA 95616, U.S.A. \\ and \\ ${}^{\ddag}$Department of Civil and Environmental Engineering,\\ Technion, Haifa 32000, Israel} \date{\today} \maketitle \begin{abstract} Explicit expressions for the restricted partition function $W(s,{\bf d}^m)$ and its quasiperiodic components $W_j(s,{\bf d}^m)$ (called {\em Sylvester waves}) for a set of positive integers ${\bf d}^m = \{d_1, d_2, \ldots, d_m\}$ are derived. The formulas are represented in the form of a finite sum over Bernoulli and Euler polynomials of higher order with periodic coefficients. A novel recursive relation for the Sylvester waves is established. An application to counting algebraically independent homogeneous polynomial invariants of finite groups is discussed. \end{abstract} \section{Introduction} \label{intro} The problem of partitions of positive integers has a long history, starting from the work of Euler, who laid the foundation of the theory of partitions \cite{GAndrews} by introducing the idea of generating functions. Many prominent mathematicians contributed to the development of the theory using Euler's idea. J.J. Sylvester provided new insight and made remarkable progress in this field. He found \cite{Sylv1,Sylv2} a procedure for determining the {\it restricted} partition function, and described symmetry properties of such functions. The restricted partition function $W(s,{\bf d}^m) \equiv W(s,\{d_1,d_2,\ldots,d_m\})$ is the number of partitions of $s$ into the positive integers $\{d_1,d_2,\ldots,d_m\}$, each not greater than $s$. The generating function for $W(s,{\bf d}^m)$ has the form \begin{equation} F(t,{\bf d}^m)=\prod_{i=1}^m\frac{1}{1-t^{d_{i}}} =\sum_{s=0}^{\infty} W(s,{\bf d}^m)\;t^s\;, \label{genfunc} \end{equation} where $W(s,{\bf d}^m)$ satisfies the basic recursive relation \begin{equation} W(s,{\bf d}^m) - W(s-d_m,{\bf d}^m) = W(s,{\bf d}^{m-1})\;. \label{SW_recursion} \end{equation} Sylvester also proved a statement about the splitting of the partition function into periodic and non-periodic parts and showed that the restricted partition function may be presented as a sum of "waves", which we call the {\em Sylvester waves}, \begin{equation} W(s,{\bf d}^m) = \sum_{j=1} W_j(s,{\bf d}^m)\;, \label{SylvWavesExpand} \end{equation} where the summation runs over all distinct factors in the set ${\bf d}^m$. The wave $W_j(s,{\bf d}^m)$ is a quasipolynomial in $s$ closely related to the primitive $j$-th roots of unity $\rho_j$. Namely, Sylvester showed in \cite{Sylv2} that the wave $W_j(s,{\bf d}^m)$ is the coefficient of ${t}^{-1}$ in the series expansion in ascending powers of $t$ of \begin{equation} F_j(s,t)=\sum_{\rho_j} \frac{\rho_j^{-s} e^{st}}{\prod_{k=1}^{m} \left(1-\rho_j^{d_k} e^{-d_k t}\right)}\;. \label{generatorWj} \end{equation} The summation is made over all primitive $j$-th roots of unity $\rho_j=\exp(2\pi i n/j)$, for $n$ relatively prime to $j$ (including unity) and smaller than $j$. This result is a recipe for the calculation of the partition function, but it does not provide an explicit formula. Using the Sylvester recipe we find an explicit formula for the Sylvester wave $W_j(s,{\bf d}^m)$ in the form of a finite sum of Bernoulli polynomials of higher order \cite{bat53,NorlundMemo} multiplied by a periodic function of integer period $j$.
The periodic factor is expressed through the generalized Euler polynomials of higher order \cite{Carlitz1960}. A special symbolic technique is developed in the theory of polynomials of higher order, which significantly simplifies computations performed with these polynomials. A short description of this technique required for better understanding of this paper is given in Appendix \ref{appendix1}. \section{Sylvester wave $W_1(s,{\bf d}^m)$ and Bernoulli polynomials \\ of higher order} \label{1} Consider a polynomial part of the partition function corresponding to the wave $W_1(s,{\bf d}^m)$. It may be found as a residue of the generator \begin{equation} F_1(s,t) = \frac{e^{st}}{\prod_{i=1}^m (1-e^{-d_i t})}\;. \label{generatorW1} \end{equation} Recalling the generating function for the Bernoulli polynomials of higher order \cite{bat53}: \begin{equation} \frac{e^{st} t^m \prod_{i=1}^m d_i}{\prod_{i=1}^m (e^{d_it}-1)} = \sum_{n=0}^{\infty} B^{(m)}_n(s|{\bf d}^m) \frac{t^{n}}{n!}\;, \label{genfuncBernoulli0} \end{equation} and a transformation rule $$ B^{(m)}_n(s|-{\bf d}^m) = B^{(m)}_n(s+\sum_{i=1}^m d_i|{\bf d}^m)\;, $$ we obtain the relation \begin{equation} \frac{e^{st}}{\prod_{i=1}^m (1-e^{-d_it})} = \frac{1}{\pi_m} \sum_{n=0}^{\infty} B^{(m)}_n(s+s_m|{\bf d}^m) \frac{t^{n-m}}{n!}\;, \label{genfuncBernoulli} \end{equation} where $$ s_m = \sum_{i=1}^m d_i, \ \ \pi_m = \prod_{i=1}^m d_i\;. $$ It is immediately seen from (\ref{genfuncBernoulli}) that the coefficient of $1/t$ in (\ref{generatorW1}) is given by the term with $n=m-1$ \begin{equation} W_1(s,{\bf d}^m) = \frac{1}{(m-1)!\;\pi_m} B_{m-1}^{(m)}(s + s_m | {\bf d}^m)\;. \label{W_1} \end{equation} The polynomial part also admits a symbolic form frequently used in theory of higher order polynomials \begin{equation} W_1(s,{\bf d}^m) = \frac{1}{(m-1)!\;\pi_m} \left(s+s_m + \sum_{i=1}^m d_i \;{}^i\! B\right)^{m-1}\;, \label{W_1symb} \end{equation} where after expansion powers $r_i$ of ${}^i\! B$ are converted into orders of the Bernoulli numbers \begin{equation} {}^i \! B^{r_i} \Rightarrow B_{r_i}\;. \label{replacement_rule} \end{equation} It is easy to recognize in (\ref{W_1}) the explicit formula reported recently in \cite{Beck}, which was obtained by a straightforward computation of the complex residue of the generator (\ref{generatorW1}). Note that basic recursive relation for the Bernoulli polynomials \cite{NorlundMemo} \begin{equation} B_{n}^{(m)}(s + d_m | {\bf d}^m) - B_{n}^{(m)}(s | {\bf d}^m) = n d_m B_{n-1}^{(m-1)}(s | {\bf d}^{m-1}) \label{Bernoulli_recursion} \end{equation} naturally leads to the basic recursive relation for the polynomial part of the partition function: \begin{equation} W_1(s,{\bf d}^m) - W_1(s-d_m,{\bf d}^m) = W_1(s,{\bf d}^{m-1})\;, \label{SW1_recursion} \end{equation} which coincides with (\ref{SW_recursion}). This indicates that the Bernoulli polynomials of higher order represent a natural basis for expansion of the partition function and its waves. 
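\noindent As a simple illustration of (\ref{W_1}) (our example, not needed for the sequel): for ${\bf d}^2=\{1,2\}$ one has $\pi_2=2$, $s_2=3$ and $B^{(2)}_{1}(x|\{1,2\})=x-\frac{3}{2}$, so that \[ W_1(s,\{1,2\})=\frac{1}{2}\Big(s+3-\frac{3}{2}\Big)=\frac{s}{2}+\frac{3}{4}, \] which is the familiar polynomial part of $W(s,\{1,2\})=\lfloor s/2\rfloor+1$.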
\section{Sylvester wave $W_2(s,{\bf d}^m)$ and Euler numbers \\ of higher order} \label{2} In order to compute the Sylvester wave with period $j>1$ we note, that the summand in the expression (\ref{generatorWj}) can be rewritten as a product \begin{equation} F_j(s,t) = \sum_{\rho_j} \frac{e^{st}}{\prod_{i=1}^{\omega_j} (1-e^{-d_it})} \times \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i}e^{-d_i t})}\;, \label{generator_product} \end{equation} where the elements in ${\bf d}^m$ are sorted in a way that $j$ is a divisor for first $\omega_j$ elements (we say that $j$ has weight $\omega_j$), and the rest elements in the set are not divisible by $j$. Then a 2-periodic Sylvester wave $W_2(s,{\bf d}^m)$ is a residue of the generator \begin{equation} F_2(s,t) = \frac{e^{st}}{\prod_{i=1}^{\omega_2} (1-e^{-d_it})} \times \frac{(-1)^{s}}{\prod_{i=\omega_2+1}^m (1+e^{-d_i t})}\;, \label{generatorW2} \end{equation} where first $\omega_2$ integers $d_i$ are even, and the summation is omitted being trivially restricted to the only value $\rho_2=-1$. Recalling the generating function for the Euler polynomials of higher order \cite{bat53, NorlundMemo} and corresponding recursive relation \begin{eqnarray} \frac{2^m e^{st}}{\prod_{i=1}^m (e^{d_i t}+1)} = \sum_{n=0}^{\infty} E_n^{(m)}(s | {\bf d}^m) \frac{t^n}{n!}\;, \label{Euler_GF} \\ E_n^{(m)}(s+d_m | {\bf d}^m)+E_n^{(m)}(s | {\bf d}^m)= 2E_n^{(m-1)}(s | {\bf d}^{m-1})\;,\nonumber \end{eqnarray} we may rewrite (\ref{generatorW2}) as double infinite sum \begin{equation} \frac{(-1)^{s}}{2^{m-\omega_2} \pi_{\omega_2}} \sum_{n=0}^{\infty} B^{(\omega_2)}_n(s+s_{\omega_2}|{\bf d}^{\omega_2}) \frac{t^{n-\omega_2}}{n!} \sum_{l=0}^{\infty} E_l^{(m-\omega_2)}(s_m-s_{\omega_2} | {\bf d}^{m-\omega_2}) \frac{t^l}{l!}\;. \label{g2sum} \end{equation} The coefficient of $1/t$ in the above series is found for $n+l=\omega_2-1$, so that we end up with a finite sum: \begin{equation} W_2(s,{\bf d}^m) = \frac{(-1)^{s}}{(\omega_2-1)!\; 2^{m-\omega_2} \pi_{\omega_2}} \sum_{n=0}^{\omega_2-1} \binom{\omega_2-1}{n} B^{(\omega_2)}_n(s+s_{\omega_2}|{\bf d}^{\omega_2}) E_{\omega_2-1-n}^{(m-\omega_2)}(s_m-s_{\omega_2} | {\bf d}^{m-\omega_2}). \label{W2} \end{equation} This expression may be rewritten as a symbolic power similar to (\ref{W_1symb}): \begin{equation} W_2(s,{\bf d}^m) = \frac{(-1)^{s}}{(\omega_2-1)! \; 2^{m-\omega_2} \pi_{\omega_2}} \left( s+s_m + \sum_{i=1}^{\omega_2} d_i \;{}^i\! B + \sum_{i=\omega_2+1}^{m} d_i \;{}^i\! E(0) \right)^{\omega_2-1}, \label{W2symb} \end{equation} where the rule for the Euler polynomials at zero $E_n(0)$ similar to (\ref{replacement_rule}) is applied. It is easy to rewrite formula (\ref{W2symb}) in a form \begin{equation} W_2(s,{\bf d}^m) = \frac{(-1)^{s}}{(\omega_2-1)!\; 2^{m-\omega_2} \pi_{\omega_2}} \sum_{n=0}^{\omega_2-1} \binom{\omega_2-1}{n} B^{(\omega_2)}_n(s+s_{m}|{\bf d}^{\omega_2}) E_{\omega_2-1-n}^{(m-\omega_2)}(0|{\bf d}^{m-\omega_2}), \label{W2last} \end{equation} where $E_{n}^{(m)}(0|{\bf d}^{m})$ denote the Euler polynomials of higher orders computed at zero as follows: \begin{equation} E_{n}^{(m)}(0|{\bf d}^{m}) = \left[ \sum_{i=1}^{m} d_i \;{}^i\! E(0) \right]^{n}. \label{Enumbers} \end{equation} The formula (\ref{W2last}) shows that the wave $W_2(s,{\bf d}^{m})$ can be written as an expansion over the Bernoulli polynomials of higher order with constant coefficients, multiplied by a 2-periodic function $(-1)^s$. 
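\noindent Continuing the example at the end of Section~\ref{1} (ours): for $\{1,2\}$, ordered as ${\bf d}^2=\{2,1\}$ so that the element divisible by $2$ comes first, one has $\omega_2=1$, $\pi_{\omega_2}=2$, and the sum in (\ref{W2}) reduces to the single term $n=0$ with $B^{(1)}_0=E^{(1)}_0=1$, giving \[ W_2(s,\{1,2\})=\frac{(-1)^s}{2^{2-1}\cdot 2}=\frac{(-1)^s}{4}. \] Together with $W_1(s,\{1,2\})=\frac{s}{2}+\frac{3}{4}$ this recovers $W(s,\{1,2\})=\lfloor s/2\rfloor +1$.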
\section{Sylvester waves $W_j(s,{\bf d}^m) \ (j>2)$ and Euler \\ polynomials of higher order} \label{j} Frobenius \cite{Frobenius} studied in great detail the polynomials $H_n(s,\rho)$ satisfying the generating function \begin{equation} \frac{(1-\rho) e^{st}}{e^t-\rho} = \sum_{n=0}^{\infty} H_n(s,\rho) \frac{t^n}{n!}, \ \ (\rho \ne 1), \label{EulerRegGFnew} \end{equation} which reduces to definition of the Euler polynomials at fixed value of the parameter $\rho$ $$ E_n(s) = H_n(s,-1). $$ The polynomials $H_n(\rho) \equiv H_n(0,\rho)$ satisfy the symbolic recursion ($H_0(\rho)=1$) \begin{equation} \rho H_n(\rho) = (H(\rho)+1)^n, \ \ \ n>0. \label{EulerSymbolic} \end{equation} The generalization of (\ref{Euler_GF}) is straightforward \begin{equation} \frac{ e^{st} \prod_{i=1}^m (1-\rho^{d_i})} {\prod_{i=1}^m (e^{d_i t}-\rho^{d_i})} = \sum_{n=0}^{\infty} H_n^{(m)}(s, \rho | {\bf d}^m) \frac{t^n}{n!}, \ \ (\rho^{d_i} \ne 1), \label{Euler_GFnew} \end{equation} where the corresponding recursive relation for $H_n^{(m)}(s, \rho | {\bf d}^m)$ has the form \begin{equation} H_n^{(m)}(s+d_m,\rho | {\bf d}^m)-\rho^{d_m}H_n^{(m)}(s,\rho | {\bf d}^m)= \left(1-\rho^{d_m}\right)H_n^{(m-1)}(s,\rho | {\bf d}^{m-1})\;. \label{Euler_GFnewrecur} \end{equation} The {\em generalized Euler polynomials of higher order} $H_n^{(m)}(s, \rho | {\bf d}^m)$ introduced by L. Carlitz in \cite{Carlitz1960} can be defined through the symbolic formula \begin{equation} H_n^{(m)}(s, \rho | {\bf d}^m) = \left(s + \sum_{i=1}^{m} d_i \;{}^i\! H(\rho^{d_i}) \right)^n, \label{EulerNewSymbolic} \end{equation} where $H_n(\rho)$ computed from the relation $$ \frac{1-\rho}{e^t-\rho} = \sum_{n=0}^{\infty} H_n(\rho) \frac{t^n}{n!}, $$ or using the recursion (\ref{EulerSymbolic}). Using the polynomials $H_n^{(m)}(s, \rho | {\bf d}^m)$ we can compute Sylvester wave of arbitrary period. Consider a $j$-periodic Sylvester wave $W_j(s,{\bf d}^m)$, and rewrite the summand in (\ref{generator_product}) as double infinite sum \begin{equation} \frac{\rho_j^{-s}}{\pi_{\omega_j} \; \prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} \sum_{n=0}^{\infty} B^{(\omega_j)}_n(s+s_{\omega_j}|{\bf d}^{\omega_j}) \frac{t^{n-\omega_j}}{n!} \sum_{l=0}^{\infty} H_l^{(m-\omega_j)}(s_m-s_{\omega_j}, \rho_j | {\bf d}^{m-\omega_j}) \frac{t^l}{l!}. \label{gjsum} \end{equation} The coefficient of $1/t$ in the above series is found for $n+l=\omega_j-1$, so that we have a finite sum: \begin{eqnarray} W_j(s,{\bf d}^m) & = & \frac{1}{(\omega_j-1)! \; \pi_{\omega_j}} \sum_{\rho_j} \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} \times \nonumber \\ &&\sum_{n=0}^{\omega_j-1} \binom{\omega_j-1}{n} B^{(\omega_j)}_n(s+s_{\omega_j}|{\bf d}^{\omega_j}) H_{\omega_j-1-n}^{(m-\omega_j)}(s_m-s_{\omega_j}, \rho_j | {\bf d}^{m-\omega_j})\;. \label{Wj} \end{eqnarray} This expression may be rewritten as a symbolic power similar to (\ref{W2symb}): \begin{equation} W_j(s,{\bf d}^m) = \frac{1}{(\omega_j-1)! \; \pi_{\omega_j}} \sum_{\rho_j} \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} \left( s+s_m + \sum_{i=1}^{\omega_j} d_i \;{}^i\! B + \!\!\! \sum_{i=\omega_j+1}^{m} \!\!\! d_i \;{}^i\! H(\rho_j^{d_i}) \right)^{\omega_j-1} \!\!\!\!\!\!\!\!\;, \label{Wjsymb} \end{equation} which is equal to \begin{eqnarray} W_j(s,{\bf d}^m) & = & \frac{1}{(\omega_j-1)! 
\; \pi_{\omega_j}} \sum_{n=0}^{\omega_j-1} \binom{\omega_j-1}{n} B^{(\omega_j)}_n(s+s_{m}|{\bf d}^{\omega_j}) \times \nonumber \\ && \sum_{\rho_j} \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} H_{\omega_j-1-n}^{(m-\omega_j)}[\rho_j |{\bf d}^{m-\omega_j}]\;, \label{WjBern} \end{eqnarray} where \begin{equation} H_{n}^{(m)}[\rho |{\bf d}^{m}] = H_{n}^{(m)}(0,\rho |{\bf d}^{m}) = \left[\sum_{i=1}^{m} d_i \;{}^i\! H(\rho^{d_i})\right]^n\;, \label{Zrhonumbers} \end{equation} are generalized Euler numbers of higher order and it is assumed that $$ H_{0}^{(0)}[\rho | \emptyset] = 1, \ H_{n}^{(0)}[\rho | \emptyset] = 0\;, \ n>0\;. $$ It should be underlined that the presentation of the Sylvester wave as a finite sum of the Bernoulli polynomials with periodic coefficients (\ref{WjBern}) is not unique. The symbolic formula (\ref{Wjsymb}) can be cast into a sum of the generalized Euler polynomials \begin{eqnarray} W_j(s,{\bf d}^m) & = & \frac{1}{(\omega_j-1)! \; \pi_{\omega_j}} \sum_{n=0}^{\omega_j-1} \binom{\omega_j-1}{n} B^{(\omega_j)}_n[{\bf d}^{\omega_j}] \times \nonumber \\ && \sum_{\rho_j} \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} H_{\omega_j-1-n}^{(m-\omega_j)}(s+s_m, \rho_j |{\bf d}^{m-\omega_j})\;, \label{WjEuler} \end{eqnarray} where $$ B^{(m)}_n[{\bf d}^{m}] = B^{(m)}_n(0|{\bf d}^{m}) $$ are the Bernoulli numbers of higher order. Substitution of the expression (\ref{WjBern}) into the expansion (\ref{SylvWavesExpand}) immediately produces the partition function $W(s,{\bf d}^{m})$ as finite sum of the Bernoulli polynomials of higher order multiplied by periodic functions with period equal to the least common multiple of the elements in ${\bf d}^m$ \begin{eqnarray} W(s,{\bf d}^m) & = & \sum_j \frac{1}{(\omega_j-1)! \; \pi_{\omega_j}} \sum_{n=0}^{\omega_j-1} \binom{\omega_j-1}{n} B^{(\omega_j)}_n(s+s_{m}|{\bf d}^{\omega_j}) \times \nonumber \\ && \sum_{\rho_j} \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} H_{\omega_j-1-n}^{(m-\omega_j)}[\rho_j | {\bf d}^{m-\omega_j}]\;. \label{WBern} \end{eqnarray} The partition function $W(s,{\bf d}^m)$ has several interesting properties. Analysis of the generating function (\ref{genfunc}) shows that the partition function is a homogeneous function of zero order with respect to all its arguments, i.e., \begin{equation} W(k s,k {\bf d}^m) = W(s,{\bf d}^m). \label{homogeneity} \end{equation} This property appears very useful for computation of the partition function in case when the elements $d_i$ have a common factor $k$, then \begin{equation} W(s, k {\bf d}^m) = W\left(\frac{s}{k},{\bf d}^m\right). \label{factor} \end{equation} The case of $m$ identical elements ${\bf p}^m=\{p,\ldots,p\}$ appears to be the simplest and is reduced to the known formula for Catalan partitions \cite{catal838}: {\it the Diophantine equation $x_1+x_2+\;.\;.\;.\;+x_m=s$ has ${s+m-1\choose s}$ sets of non-negative solutions.} Using (\ref{factor}) for $s$ divisible by $p$ we arrive at $$ W(s,{\bf p}^m) = W\left(\frac{s}{p},{\bf 1}^m\right) = W_1\left(\frac{s}{p},{\bf 1}^m\right) = \frac{B_{m-1}^{(m)}(s/p + m | {\bf 1}^m)}{(m-1)!}. $$ The straightforward computation shows that $$ B_{m-1}^{(m)}(s + m | {\bf 1}^m) = \prod_{k=1}^{m-1} (s+k) = \frac{(s+m-1)!}{s!}, $$ so that \begin{equation} W(s,{\bf p}^m) = \left\{ \begin{array}{ll} \prod_{k=1}^{m-1} \left(1+\frac{s}{kp} \right), & s=0 \pmod p,\\ 0 \;, & s \ne 0 \pmod p. \end{array}\right. 
\label{identical_d} \end{equation} In the end of this Section we consider a special case of the tuple $\{p_1,p_2,\ldots p_m\}$ of primes $p_j$ which leads to essential simplification of the formula (\ref{WBern}). The first Sylvester wave $W_1$ is given by (\ref{W_1}) while all higher waves arising are purely periodic \begin{equation} W_{p_{i}}(s;\{p_1,p_2,\ldots,p_m\})= \frac{1}{p_i}\sum_{k=1}^{p_{i}-1}\frac{\rho_{p_{i}}^{-ks}} {\prod_{j\neq i}^m\left(1-\rho_{p_{i}}^{kp_{j}}\right)}\;. \label{prim2} \end{equation} The further simplification $m=2,\,s=ap_1p_2$ makes it possible to verify the partition identity \begin{eqnarray} W(a p_1p_2,\{p_1,p_2\})=a+1\;, \label{aa1} \end{eqnarray} which follows from the recursion relation (\ref{SW_recursion}) for the restricted partition function and its definition \begin{eqnarray} &&W(ap_1p_2,\{p_1,p_2\}) - W(ap_1p_2-p_1,\{p_1,p_2\}) = W(ap_1p_2,\{p_2\})\;,\nonumber\\ &&W(ap_1p_2,\{p_2\})=1\;,\;\;\;W(ap_1p_2-p_1,\{p_1,p_2\})= W((a-l)p_1p_2+(lp_2-1)p_1,\{p_1,p_2\})=a\;,\nonumber \end{eqnarray} where $a$ solutions of the Diophantine equation $p_1X+p_2Y=(a-l)p_1p_2+(lp_2-1)p_1$ correspond to $l=1,\ldots,a$. The relation (\ref{aa1}) has an important geometrical interpretation, namely, a line $p_1X+p_2Y=ap_1p_2$ in the $XY$ plane passes exactly through $a+1$ points with non-negative integer coordinates. The verification of (\ref{aa1}) is straightforward (see Appendix B for details): \begin{eqnarray} &&W_1(ap_1p_2,\{p_1,p_2\})=a+\frac{1}{2}\left(\frac{1}{p_1}+ \frac{1}{p_2}\right)\;,\nonumber\\ &&W_{p_1}(ap_1p_2,\{p_1,p_2\})=\frac{1}{2}-\frac{1}{2p_1}\;,\;\;\; W_{p_2}(ap_1p_2,\{p_1,p_2\})=\frac{1}{2}-\frac{1}{2p_2}\;, \label{sylvp1p2} \end{eqnarray} which produces the required result. A generalization of (\ref{aa1}) is possible using the explicit form of the partition function \begin{equation} W(s,\{p_1,p_2\}) = \frac{1}{p_1p_2}\left(s+\frac{p_1+p_2}{2}\right)+ \frac{1}{p_1} \sum_{\rho_{p_1}} \frac{\rho_{p_1}^{-s}}{1-\rho_{p_1}^{p_2}} + \frac{1}{p_2} \sum_{\rho_{p_2}} \frac{\rho_{p_2}^{-s}}{1-\rho_{p_2}^{p_1}}. \label{2primes} \end{equation} Setting here $s=ap_1p_2+b, \, 0 \le b < p_1p_2$ and noting that the value of two last terms in (\ref{2primes}) don't depend on the integer $a$, one can easily see that \begin{equation} W(ap_1p_2+b,\{p_1,p_2\}) = a + W(b,\{p_1,p_2\}), \label{reduct} \end{equation} which reduces the procedure to computation of the first $p_1p_2$ values of $W(s,\{p_1,p_2\})$. Recalling that $W(0,\{p_1,p_2\}) = 1$ we immediately recover (\ref{aa1}) as a particular case of (\ref{reduct}). \section{Recursive Relation for Sylvester Waves} \label{recurs} In this Section we prove that the recursive relation similar to (\ref{SW_recursion}) holds not only for the entire partition function $W(s,{\bf d}^m)$ and its polynomial part $W_1(s,{\bf d}^m)$ but also for each Sylvester wave \begin{equation} W_j(s,{\bf d}^m) - W_j(s-d_m,{\bf d}^m) = W_j(s,{\bf d}^{m-1})\;. \label{SWj_recursion} \end{equation} When $j$ is not a divisor of $d_m$, the weight $\omega_j$ doesn't change in transition from ${\bf d}^{m-1}$ to ${\bf d}^{m}$. Denoting for brevity $$ A(s) = s+s_{m-1} + \sum_{i=1}^{\omega_j}d_i \;{}^i\! B + \!\!\! \sum_{i=\omega_j+1}^{m-1} \!\! d_i \;{}^i\! H(\rho_j^{d_i})\;, \ \ B_{\omega_j} = \frac{1}{(\omega_j-1)! 
\; \pi_{\omega_j}}\;, $$ we have \begin{eqnarray} W_j(s,{\bf d}^m) & = & B_{\omega_j} \sum_{\rho_j} \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} \left( A(s) + d_m[1 + H(\rho_j^{d_m})] \right)^{\omega_j-1} \nonumber \\ &=& B_{\omega_j} \sum_{\rho_j} \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} \sum_{l=0}^{\omega_j-1} \binom{\omega_j-1}{l} A^{\omega_j-1-l}(s) d_m^l [1 + H(\rho_j^{d_m})]^l\;. \nonumber \end{eqnarray} Now using (\ref{EulerSymbolic}) we have \begin{eqnarray} W_j(s,{\bf d}^m) & = & B_{\omega_j} \sum_{\rho_j} \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} \left\{A^{\omega_j-1}(s) + \rho_j^{d_m} \sum_{l=1}^{\omega_j-1} \binom{\omega_j-1}{l} A^{\omega_j-1-l}(s) d_m^l H_l(\rho_j^{d_m}) \right\} \nonumber \\ &=& B_{\omega_j} \sum_{\rho_j}\frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} \left\{(1-\rho_j^{d_m})A^{\omega_j-1}(s) + \rho_j^{d_m}\left( A(s) + d_m H(\rho_j^{d_m})\right)^{\omega_j-1}\right\} \nonumber \\ &=& B_{\omega_j} \sum_{\rho_j} \frac{\rho_j^{-(s-d_m)}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} \left( A(s) + d_m H(\rho_j^{d_m}) \right)^{\omega_j-1} + B_{\omega_j} \sum_{\rho_j} \frac{\rho_j^{-s}A^{\omega_j-1}(s)} {\prod_{i=\omega_j+1}^{m-1} (1-\rho_j^{d_i})} \nonumber \\ &=& W_j(s-d_m,{\bf d}^m) + W_j(s,{\bf d}^{m-1})\;. \end{eqnarray} In case of $j$ being divisor of $d_m$ the weight of $j$ for the set ${\bf d}^{m-1}$ is equal to $\omega_j-1$, and we have \begin{equation} W_j(s,{\bf d}^{m-1}) = \frac{(\omega_j-1) d_m}{(\omega_j-1)! \; \pi_{\omega_j}} \sum_{\rho_j} \frac{\rho_j^{-s}}{\prod_{i=\omega_j}^{m-1} (1-\rho_j^{d_i})} \left( s+s_{m-1} + \sum_{i=1}^{\omega_j-1} d_i \;{}^i\! B + \!\!\! \sum_{i=\omega_j}^{m-1} \!\!\! d_i \;{}^i\! H(\rho_j^{d_i}) \right)^{\omega_j-2} \!\!\!\!\!\!\!\!\;. \label{Wm-1} \end{equation} Denoting $$ A(s) = s+s_{m-1} + \sum_{i=1}^{\omega_j-1} d_i \;{}^i\! B + \!\! \sum_{i=\omega_j}^{m-1} \! d_i \;{}^i\! H(\rho_j^{d_i}), \ \ \ D(s,\rho_j) = \frac{\rho_j^{-s}}{\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})}\;, $$ and using the symbolic formula for the Bernoulli numbers \cite{NorlundMemo} $$ (B+1)^n = B^n = B_n \ \ (n \ne 1)\;, $$ we obtain \begin{eqnarray} W_j(s,{\bf d}^m) & = & B_{\omega_j} \sum_{\rho_j} D(s,\rho_j) [A(s) + d_m(B+1)]^{\omega_j-1} \nonumber \\ &=& B_{\omega_j} \sum_{\rho_j} D(s,\rho_j) \sum_{l=0}^{\omega_j-1} \binom{\omega_j-1}{l} A^{\omega_j-1-l}(s) d_m^l (B+1)^l \\ &=& B_{\omega_j} \sum_{\rho_j} D(s,\rho_j) [A(s) + d_m B]^{\omega_j-1} + B_{\omega_j} d_m (\omega_j-1) \sum_{\rho_j} D(s,\rho_j) A^{\omega_j-2}(s) \nonumber \\ &=& W_j(s-d_m,{\bf d}^m) +W_j(s,{\bf d}^{m-1})\;, \nonumber \end{eqnarray} which completes the proof. \section{Partition function $W\left(s,\{\overline{m}\}\right)$ for a set of natural \\numbers \label{Sm}} Sylvester waves for a set of consecutive natural numbers $\{1,2,\dots,m\}=\{\overline{m}\}$ was under special consideration in \cite{Rama}. An importance of this case based on its relation to the invariants of symmetric group $S_m$ (see next Section) and, second, $W(s,\{\overline{m}\})$ form a natural basis to utilize the partition functions for every subsets of $\{1,2,\dots,m\}$. This case is also important due to the famous Rademacher formula \cite{Radem37} for {\it unrestricted partition function} $W(s,\{\overline{s}\})$, but the latter already belongs to the analytical number theory. 
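Throughout this Section it is convenient to have an independent numerical check of the partition function. The short sketch below is an added illustration rather than part of the original treatment: it evaluates $W(s,{\bf d}^m)$ directly from the generating function (\ref{genfunc}) by dynamic programming and verifies the recursion (\ref{SW_recursion}) for the set $\{\overline{7}\}$ together with the two-prime identity (\ref{aa1}); the particular sets and primes used are arbitrary choices.
\begin{verbatim}
# A minimal numerical check (illustration only): W(s, d^m) is computed as the
# coefficient of t^s in prod_i 1/(1 - t^{d_i}) by dynamic programming.

def restricted_partitions(s_max, d):
    """Return [W(0,d), ..., W(s_max,d)] for the tuple d."""
    W = [1] + [0] * s_max              # series of the empty product
    for di in d:
        for s in range(di, s_max + 1):
            W[s] += W[s - di]          # multiply the series by 1/(1 - t^di)
    return W

s_max = 60
W_full = restricted_partitions(s_max, range(1, 8))   # d = {1,...,7}
W_red  = restricted_partitions(s_max, range(1, 7))   # the element d_m = 7 removed

# recursion W(s,d^m) - W(s-d_m,d^m) = W(s,d^{m-1})
assert all(W_full[s] - W_full[s - 7] == W_red[s] for s in range(7, s_max + 1))

# two-prime identity W(a*p1*p2, {p1,p2}) = a + 1
p1, p2 = 3, 5
Wp = restricted_partitions(4 * p1 * p2, (p1, p2))
assert all(Wp[a * p1 * p2] == a + 1 for a in range(5))
\end{verbatim}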
The representation for $W(s,\{\overline{m}\})$ in terms of higher Bernoulli polynomials is obtained by substituting into (\ref{WBern}) \begin{equation} \omega_j=\left[\frac{m}{j}\right]\;,\;\;\;\; \pi_{\omega_j}=\omega_j!\;j^{\omega_j}\;,\;\;\;\; s_{\omega_j}=\frac{\omega_j(\omega_j+1)}{2}\;, \label{symmet} \end{equation} where $[x]$ denotes the integer part of $x$. The partition function in this case reads \begin{eqnarray} W(s,\{\overline{m}\}) & = & \sum_{j=1}^m \frac{j^{-\omega_j}}{(\omega_j-1)!\; \omega_j!} \sum_{n=0}^{\omega_j-1} \binom{\omega_j-1}{n} B^{(\omega_j)}_n\left(s+\frac{m(m+1)}{2}|{\bf d}^{\omega_j}\right) \times \nonumber \\ && \sum_{\rho_j} \frac{\rho_j^{-s}} {\prod_{i=\omega_j+1}^m (1-\rho_j^{d_i})} H_{\omega_j-1-n}^{(m-\omega_j)}[\rho_j|{\bf d}^{m-\omega_j}]\;, \label{WSmWODenom} \end{eqnarray} where ${\bf d}^{\omega_j} = j\,\{\overline{\omega_j}\}$, so that for each $j$ the first $\omega_j$ elements are divisible by $j$. The expression for the Sylvester wave of the maximal period $m$ looks particularly simple: \begin{equation} W_m(s,\{\overline{m}\}) = \frac{1}{m^2} \sum_{\rho_m} \rho_m^{-s}\;. \label{WmSm} \end{equation} A straightforward calculation shows that the expression (\ref{WSmWODenom}) reproduces, for $m=1,2,\ldots,12$, exactly the formulas obtained in \cite{Rama}. \begin{figure} \caption{Plots of the partition function $W(s,\{\overline{21}\})$ ({\it black curve}) and its first Sylvester wave $W_1(s,\{\overline{21}\})$ ({\it white curve}), showing that the polynomial part provides important information about the behavior of the partition function.} \label{W21approx} \end{figure} Note that the argument $s$ in all formulas derived above is assumed to take integer values; however, all results can be extended to real values of $s$, although such an extension is not unique. Continuous values of the argument provide a convenient way to analyze the behavior of the partition function and its waves. In this work we choose the natural extension scheme based on the trigonometric functions $$ \rho_j^s = e^{2 \pi i n s/j} = \cos \frac{2 \pi n s}{j} + i \sin \frac{2 \pi n s}{j}. $$ We finish this Section with a brief discussion of a phenomenon that is better observed in plots of $W(s,\{\overline{m}\})$ for large $m$ than in the explicit expressions (see formulas (52) and the Figures of restricted partition functions in \cite{Rama}). In the range $[-\frac{m(m+1)}{2},0]$, where $W(s,\{\overline{m}\})$ has all its zeroes, it is natural to conjecture the existence of a function $\widetilde{W}(s,\{\overline{m}\})$ which envelopes $W(s,\{\overline{m}\})$ or approximates it in some sense. The decomposition of $W(s,\{\overline{m}\})$ into the Sylvester waves shows that this role may be assigned to the wave $W_1(s,\{\overline{m}\})$. Figures \ref{W21approx} and \ref{W21diff} show that $W_1(s,\{\overline{21}\})$ serves as a good approximant for $W(s,\{\overline{21}\})$ in this range as well as for large $s$. \begin{figure} \caption{Plot of the normalized difference $[W(s,\{\overline{21}\})/W_1(s,\{\overline{21}\})-1]$, showing that the polynomial part $W_1(s,\{\overline{21}\})$ gives a very accurate approximation to the partition function $W(s,\{\overline{21}\})$ at large values of the argument $s$.} \label{W21diff} \end{figure} \section{Application to invariants of finite groups} \label{finitegroup} The restricted partition function $W(s,{\bf d}^m)$ is closely related to the invariants of finite reflection groups $G$ acting on a vector space $V$ over the field of complex numbers. If $M^G(t)$ is the Molien function of the finite group, and $d_{r}$ and $m$ denote the degrees and the number of the basic homogeneous invariants, respectively, then its series expansion in $t$ gives the number $P(s,G)$ of algebraically independent invariants of degree $s$. The set of natural numbers $\{\overline{m}\}$ corresponds to the symmetric group $S_m$: $W(s,\{\overline{m}\})=P(s,S_m)$.
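The identification $W(s,\{\overline{m}\})=P(s,S_m)$ is also easy to confirm numerically. The sketch below is an added illustration (the choice $m=4$, $s\leq 15$ is arbitrary): it averages $1/\det(I-t\,g)$ over the defining permutation representation of $S_m$, expands the average as a power series in $t$, and compares the coefficients with the dynamic-programming values of $W(s,\{1,\ldots,m\})$.
\begin{verbatim}
# Illustration only: Molien series of S_m in its natural permutation
# representation versus the restricted partition function W(s, {1,...,m}).
from fractions import Fraction
from itertools import permutations
from math import factorial
import numpy as np

def restricted_partitions(s_max, d):
    W = [1] + [0] * s_max
    for di in d:
        for s in range(di, s_max + 1):
            W[s] += W[s - di]
    return W

def molien_series_Sm(m, s_max):
    """Coefficients of (1/m!) * sum over permutations of 1/det(I - t*P)."""
    total = [Fraction(0)] * (s_max + 1)
    for perm in permutations(range(m)):
        P = np.zeros((m, m))
        for i, j in enumerate(perm):
            P[i, j] = 1.0
        # det(xI - P) = x^m + c_1 x^{m-1} + ... + c_m   implies
        # det(I - tP) = 1 + c_1 t + ... + c_m t^m
        q = [int(round(c)) for c in np.poly(P)]
        r = [Fraction(1)] + [Fraction(0)] * s_max   # 1/det(I - tP) as a series
        for n in range(1, s_max + 1):
            r[n] = -sum(Fraction(q[k]) * r[n - k]
                        for k in range(1, min(n, m) + 1))
        total = [a + b for a, b in zip(total, r)]
    return [c / factorial(m) for c in total]

m, s_max = 4, 15
assert molien_series_Sm(m, s_max) == restricted_partitions(s_max, range(1, m + 1))
\end{verbatim}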
The list of $P(s,G)$ for all indecomposable reflections groups $G$ acting over the field of real numbers and known as {\it Coxeter groups} is presented in \cite{Rama}. It is easy to extend these formulas over indecomposable pseudoreflections groups acting over the field of complex numbers using the list of 37 groups given by Shepard and Todd \cite{Shepar54}. In this Section we extend the results of Section \ref{j} to all finite groups. First, we recall an algebraic setup of the problem. The fundamental problem of the invariant theory consists in determination of an algebra ${\sf R}^G$ of invariants. Its solution is given by the Noether theorem \cite{Benson93}: ${\sf R}^G$ is generated by a polynomial $\vartheta_k(x_j)$ as an algebra due to action of finite group $G\subset GL(V^q)$ on the $q$-dimensional vector space $V^q(x_j)$ over the field of complex numbers by not more than ${|G|+q\choose q}$ homogeneous invariants, of degrees not exceeding the order $|G|$ of group \begin{equation} k\leq {|G|+q\choose q}\;,\;\;\;j\leq \dim V^q=q\;,\;\;\; \deg \vartheta_k(x_j)\leq |G|\;. \label{molien0} \end{equation} To enumerate the invariants explicitly, it is convenient to classify them by their degrees (as polynomials). A classical theorem of Molien \cite{Benson93} gives an explicit expression for a number $P(s,G)$ of all homogeneous invariants of degree $s$ \begin{equation} M^G(t)=\frac{1}{|G|}\sum_{l=1}^{|G|} \frac{\widetilde{\chi}({\widehat g}_l)} {\det({\hat I} -t\; {\widehat g}_l)}= \sum_{s=0}^{\infty} P(s,G) t^s\;,\;\;\;\;P(0,G)=1\;, \label{molien1} \end{equation} where ${\widehat g}_l$ are non--singular $(n\times n)$--permutation matrices with entries, which form the regular representation of $G$, ${\widehat I}$ is the identity matrix and $\widetilde{\chi}$ is the complex conjugate to character $\chi$. The further progress is due to Hilbert and his {\it syzygy theorem} \cite{Benson93}. For our purpose it is important that $M^G(t)$ is a rational polynomial \begin{equation} M^G(t)=\frac{N^G(t)}{\prod_{l=1}^n \left(1-t^{d_l}\right)}\;,\;\;\;\; N^G(t)=\sum_{k=0}Q(k,G)\;t^k. \label{molien2a} \end{equation} The formula (\ref{molien2a}) is very convenient to express the function $P(s,G)$ through the Sylvester waves $W(s,{\bf d}^m)$. Recalling the definition (\ref{genfunc}) of the generating function $F(t,{\bf d}^m)$ consider a general term $t^k F(t,{\bf d}^m)$ of the Molien function (\ref{molien2a}) \begin{eqnarray} t^k F(t,{\bf d}^m) = \sum_{s=0}^{\infty} W(s,{\bf d}^m) t^{s+k} = \sum_{s=k}^{\infty} W(s-k,{\bf d}^m) t^{s}, \label{gen_term_Molien} \end{eqnarray} so that the corresponding partition function is $W(s-k,{\bf d}^m)$, which implies that the number $P(s,G)$ of all homogeneous invariants of degree $s$ for the finite group $G$ can be expressed through the simple relation \begin{equation} P(s,G)=\sum_{k=0}^{s}Q(k,G) W(s-k,{\bf d}^m)\;. \label{fin2} \end{equation} We consider several instructive examples for which the explicit expression of the Molien function $M^G(t)$ and the corresponding number of homogeneous invariants $P(s,G)$ are given. 1. Alternating group ${\sf A}_n$ generated by its natural $n$--dimensional representation, $|{\sf A}_n|=n!/2$. \begin{eqnarray} M_{{\sf A}_n}(t) & = & \left[1+t^{\binom{n}{2}}\right] \prod_{k=1}^n\frac{1}{1-t^k}\;. \nonumber \\ P(s,{\sf A}_n) & = & W(s,\{\overline{n}\}) + W\left(s-\frac{n(n-1)}{2},\{\overline{n}\}\right). \label{altern} \end{eqnarray} The group ${\sf A}_n$ is acting on Euclidean vector space ${\mathbb R}^n$. 2. 
Group ${\sf G}_2$ generated by matrix {\footnotesize $ \left(\begin{array}{cc} \rho_{n} & 0 \\ 0 & \rho_{n}^{-1}\end{array}\right)$}, where $\rho_{n}=e^{2\pi i/n}$ is a primitive $n$--th root of unity, $|{\sf G}_2|=n$. \begin{eqnarray} M_{{\sf G}_2}(t)&=& \frac{1+t^n}{(1-t^2)(1-t^n)}\;. \nonumber \\ P(s,{\sf G}_2) & = & W(s,\{2,n\}) + W(s-n,\{2,n\}). \label{rotation} \end{eqnarray} ${\sf G}_2$ is isomorphic as an abstract group to the cyclic group ${\sf Z}_n$ acting on Euclidean vector space ${\mathbb R}^2$. 3. Group ${\sf G}_3$ generated by the matrices {\footnotesize $\left(\begin{array}{cc} \rho_{n} & 0 \\ 0 & \rho_{n}^{-1}\end{array}\right)$} and {\footnotesize $\left(\begin{array}{cc} 0 & 1 \\ 1 & 0\end{array}\right)$}, $|{\sf G}_3|=2n$. \begin{eqnarray} M_{{\sf G}_3}(t)=\frac{1}{(1-t^2)(1-t^n)}\;,\;\;\; P(s,{\sf G}_3)=W(s,\{2,n\}) \label{dihedr} \end{eqnarray} ${\sf G}_3$ is isomorphic as an abstract group to the dihedral group ${\sf I}_n$ acting on Euclidean vector space ${\mathbb R}^2$. 4. Group ${\sf G}_4$ generated by $(n\times n)$--diagonal matrix {\sf diag}$(-1,-1,\dots,-1)$, $|{\sf G}_4|=2$. \begin{eqnarray} M_{{\sf G}_4}(t)&=& \frac{1}{(1-t^2)^n}\sum_{k=0}^{\left[\frac{n}{2}\right]} \binom{n}{2k}t^{2k}\;. \nonumber \\ P(s,{\sf G}_4) & = & \left\{ \begin{array}{ll} \sum_{k=0}^{\left[\frac{n}{2}\right]} \binom{n}{2k} W(s-2k,{\bf 2}^n) = W(s,{\bf 1}^n), & s=0 \pmod 2,\\ 0, & s \ne 0 \pmod 2. \end{array}\right. \label{groupG} \end{eqnarray} ${\sf G}_4$ is isomorphic as an abstract group to the cyclic group ${\sf Z}_2$ acting on Euclidean vector space ${\mathbb R}^n$. It is easy to see that both groups ${\sf G}_2$ and ${\sf G}_4$ acting on ${\mathbb R}^2$ give rise to the same Molien function and corresponding number of invariants \begin{eqnarray} M_{{\sf Z}_2}(t)=\frac{1+t^2}{(1-t^2)^2}\;,\;\;\; P(s,{\sf Z}_2)=\left\{ \begin{array}{ll} W(s,{\bf 1}^2), & s=0 \pmod 2,\\ 0, & s \ne 0 \pmod 2. \end{array}\right. \label{groupPR} \end{eqnarray} 5. Group ${\sf Q}_{4n}$ generated by the matrices {\footnotesize $\left(\begin{array}{cc} \rho_{2n} & 0 \\ 0 & \rho_{2n}^{-1}\end{array}\right)$} and {\footnotesize $\left(\begin{array}{cc} 0 & i \\ i & 0\end{array}\right)$}, $|{\sf Q}_{4n}|=4n$. \begin{eqnarray} M_{{\sf Q}_{4n}}(t) &= & \frac{1+t^{2n+2}}{(1-t^4)(1-t^{2n})}\;, \label{groupQ4n} \\ P(s,{\sf Q}_{4n})&=&\left\{ \begin{array}{ll} W(\frac{s}{2},\{2,n\}) + W(\frac{s}{2}-n-1,\{2,n\}), & s=0 \pmod 2,\\ 0, & s \ne 0 \pmod 2. \nonumber \end{array}\right. \end{eqnarray} In the case of quaternion group ${\sf Q}_8$ formula (\ref{groupQ4n}) is reduced to \begin{eqnarray} M_{{\sf Q}_8}(t) &= & \frac{1+t^6}{(1-t^4)^2}\;,\;\;\; P(s,{\sf Q}_8) =\left\{ \begin{array}{ll} W(s,{\bf 1}^2)/2, & s=0 \pmod 4,\\ 0, & s \ne 0 \pmod 4. \end{array}\right. \label{groupQ8} \end{eqnarray} More sophisticated examples of the finite groups one can find in Appendices A, B of the book \cite{Benson93}. \section{Conclusion} 1. The explicit expression for restricted partition function $W(s,{\bf d}^m)$ and its quasiperiodic components $W_j(s,{\bf d}^m)$ ({\em Sylvester waves}) for a set of positive integers ${\bf d}^m = \{d_1, d_2, \ldots, d_m\}$ is derived. The formulas are represented as a finite sum over Bernoulli and Euler polynomials of higher order with periodic coefficients. \noindent 2. Every Sylvester wave $W_j(s,{\bf d}^m)$ satisfies the same recursive relation as the whole partition function $W(s,{\bf d}^m)$. \noindent 3. 
The application of restricted partition function to the problem of counting all algebraically independent invariants of the degree $s$ which arise due to action of finite group $G$ on the vector space $V$ over the field of complex numbers is discussed. \section*{Appendices} \appendix \renewcommand{\thesection\arabic{equation}}{\thesection\arabic{equation}} \section{Symbolic Notation \label{appendix1}} \setcounter{equation}{0} The symbolic technique for manipulating sums with binomial coefficients by expanding polynomials and then replacing powers by subscripts was developed in nineteenth century by Blissard. It has been known as symbolic notation and the classical umbral calculus \cite{Roman1978}. This notation can be used \cite{Gessel} to prove interesting formulas not easily proved by other methods. An example of this notation is also found in \cite{bat53} in section devoted to the Bernoulli polynomials $B_k(x)$. The well-known formulas $$ B_n(x+y) = \sum_{k=0}^{n} {n\choose k} B_k(x) y^{n-k}, \ \ B_n(x) = \sum_{k=0}^{n} {n\choose k} B_k x^{n-k}, $$ are written symbolically as $$ B_n(x+y) = (B(x)+y)^n, \ \ B_n(x) = (B+x)^n. $$ After the expansion the exponents of $B(x)$ and $B$ are converted into the orders of the Bernoulli polynomial and the Bernoulli number, respectively: \begin{equation} [B(x)]^k \Rightarrow B_k(x), \ \ \ B^k \Rightarrow B_k. \label{conv_rule} \end{equation} We use this notation in its extended version suggested in \cite{NorlundMemo} in order to make derivation more clear and intelligible. N\"orlund introduced the Bernoulli polynomials of higher order defined through the recursion \begin{equation} B_{n}^{(m)}(x|{\bf d}^m) = \sum_{k=0}^n \binom{n}{k} d^k B_k(0) B_{n-k}^{(m-1)}(x|{\bf d}^{m-1}), \label{Bern_poly_HO_def} \end{equation} starting from $B_{n}^{(1)}(x|d_1) = d_1^n B_n(\frac{x}{d_1})$. In symbolic notation it takes form $$ B_{n}^{(m)}(x) = \left( d_m B(0) + B^{(m-1)}(x) \right)^n, $$ and recursively reduces to more symmetric form \begin{equation} B_{n}^{(m)}(x|{\bf d}^m) = \left( x + d_1 \;{}^1\! B(0) + d_2 \;{}^2\! B(0) + \ldots + d_m \;{}^m\! B(0) \right)^n = \left( x + \sum_{i=1}^m d_i \;{}^i\! B(0) \right)^n, \label{Bern_poly_HO_symm} \end{equation} where each $[{}^i \! B(0)]^k$ is converted into $B_k(0)$. \label{appendix2} \section{Partition function for two primes} \setcounter{equation}{0} The polynomial part is computed according to (\ref{W_1}) \begin{equation} W_1(ap_1p_2,\{p_1,p_2\}) = \frac{1}{p_1p_2} B_{1}^{(2)}(ap_1p_2 + p_1+p_2 | \{p_1,p_2\})=a+\frac{1}{2}\left(\frac{1}{p_1}+ \frac{1}{p_2}\right)\;. \label{w1p1p2} \end{equation} Two other waves read \begin{equation} W_{p_1}(ap_1p_2,\{p_1,p_2\})=\frac{1}{p_1} \sum_{r=1}^{p_1-1} \frac{1}{1-\rho_{p_1}^{r}}\;,\;\;\; W_{p_2}(ap_1p_2,\{p_1,p_2\})=\frac{1}{p_2} \sum_{r=1}^{p_2-1} \frac{1}{1-\rho_{p_2}^{r}}\;. \label{w12} \end{equation} where we use trivial identity $\rho_{p_1}^{ap_1p_2}=\rho_{p_2}^{ap_1p_2}=1$. Computation of the sums in (\ref{w12}) we start with the identity (see \cite{Vandiver1942}) \begin{equation} \prod_{r=0}^{m-1} (x-\rho_m^r) = x^m-1, \label{identity1} \end{equation} and differentiation it with respect to $x$, and division by $x^m-1$ \begin{equation} \sum_{r=0}^{m-1} \frac{1}{x-\rho_m^r} = \frac{mx^{m-1}}{x^m-1}. \label{identity1diff1} \end{equation} Subtracting $1/(x-1)$ from both sides of (\ref{identity1diff1}) and taking a limit at $x \rightarrow 1$ we obtain \begin{equation} \sum_{r=1}^{m-1} \frac{1}{1-\rho_m^r} = \frac{m-1}{2}. 
\label{form2} \end{equation} Using this result we have for the periodic waves in (\ref{w12}) \begin{equation} W_{p_1}(ap_1p_2,\{p_1,p_2\})=\frac{p_1-1}{2p_1}\;,\;\;\; W_{p_2}(ap_1p_2,\{p_1,p_2\})=\frac{p_2-1}{2p_2}\;. \label{w12f} \end{equation} \section*{Acknowledgment} We thank I. M. Gessel for information about Ref. \cite{Gessel}. The research was supported in part (LGF) by the Gileadi Fellowship program of the Ministry of Absorption of the State of Israel. \end{document}
\begin{document} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \theoremstyle{remark} \newtheorem*{remark}{Remark} \newtheorem{example}{Example} \title{Physical realization of realignment criteria using structural physical approximation} \author{Shruti Aggarwal, Anu Kumari, Satyabrata Adhikari} \email{[email protected], [email protected], [email protected]} \affiliation{Delhi Technological University, Delhi-110042, Delhi, India} \begin{abstract} Entanglement detection is an important problem in quantum information theory because quantum entanglement is a key resource in quantum information processing. Realignment criteria is a powerful tool for detection of entangled states in bipartite and multipartite quantum system. It works well not only for negative partial transpose entangled states (NPTES) but also for positive partial transpose entangled states (PPTES). Since the matrix corresponding to realignment map is indefinite so the experimental implementation of the map is an obscure task. In this work, firstly, we have approximated the realignment map to a positive map using the method of structural physical approximation (SPA) and then we have shown that the structural physical approximation of realignment map (SPA-R) is completely positive. Positivity of the constructed map is characterized using moments which can be physically measured. Next, we develop a separability criterion based on our SPA-R map in the form of an inequality and have shown that the developed criterion not only detect NPTES but also PPTES. Further we have shown that for a special class of states called Schmidt symmetric states, the SPA-R separability criteria reduces to the original form of realignment criteria. We have provided some examples to support the results obtained. Moreover, we have analysed the error that may occur because of approximating the realignment map. \end{abstract} \pacs{03.67.Hk, 03.67.-a} \maketitle \section{Introduction} Entanglement \cite{horodeckirev} is a key ingredient in quantum physics and the future of quantum technologies. It has advantages in various quantum information processing tasks such as quantum communication\cite{bennett,bennett2,ekert}, quantum computation\cite{nielsen}, remote state preparation\cite{pati}, quantum simulation\cite{lioyd} and thus, detection of entanglement is an important problem in quantum information theory. Detection of entanglement is also important because even if an experiment is carried out to generate entangled state in bipartite or multipartite quantum system, the generated state may not be entangled due to the presence of noise in the environment and it is quite difficult to check whether the generated state is entangled or not. Despite many efforts, a complete solution for the separability problem is still not known. Positive maps are strong detectors of entanglement. However, not every positive map can be regarded as physical, for example, in case of describing a quantum channel or the reduced dynamics of an open system, a stronger positivity condition is required \cite{kraus}. Completely positive maps play an important role in quantum information theory, since a positive map is physical whenever it is completely positive. 
Completely positive maps were introduced by Stinespring in the study of dilation problems for operators \cite{stein}. Let $B(\mathcal{H}_A)$ and $B(\mathcal{H}_B)$ denote the set of bounded operators on the Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively. If the Hilbert space $\mathcal{H}= \mathcal{H}_A \otimes\mathcal{H}_B$ has dimension $k$, we identify $B(\mathcal{H})$ with $M_k(\mathbb{C})$, the space of $k \times k$ matrices in $\mathbb{C}$. A linear map $\Phi: B(\mathcal{H}_A) \longrightarrow B(\mathcal{H}_B)$ is positive if $\Phi(\rho)$ is positive for each positive $\rho \in B(\mathcal{H}_A) $. The map $\Phi$ is completely positive, if for each positive integer $k$, the map $I_k \otimes \Phi : M_k (B(\mathcal{H}_A)) \longrightarrow M_k (B(\mathcal{H}_B))$ is positive. In quantum information theory, completely positive maps are important because they are used to characterize quantum operations \cite{nielsen}. Choi described the operator sum representation of completely positive maps in \cite{choi}. In \cite{poon}, necessary and sufficient conditions for the existence of completely positive maps are given. A physical way by which positive maps can be approximated by completely positive maps is called structural physical approximation (SPA) \cite{horodecki3,jaro, augusiak11,hakye,augusiak14,jbae}. The idea is to mix a positive map $\Phi$ with maximally mixed state, making the mixture $\tilde{\Phi}$ completely positive \cite{horodecki3}. The resulting map can then be physically realized in a laboratory and its action characterizes entanglement of the states detected by $\Phi$. In addition, the resulting map keeps the structure of the output of the non physical map $\Phi$ since the direction of generalized Bloch vector of the output state remaims same as the output state of the original nonphysical map, only the length of the vector is rescaled by some factor \cite{horo2003}. The SPA to the map $\Phi$ in $d\otimes d$ dimensional space is given by \begin{equation} \tilde{\Phi}(\rho) = \frac{p^*}{d^2} I_{d^2} + (1-p^*)\Phi(\rho) \end{equation} where $I_{d^2}$ denotes the identity matrix of order $d^2$ and $p^*$ is the minimum value of the probability $p$ for which the approximated map $\tilde{\Phi}$ is completely positive \cite{adh}.\\ Although, there exist various methods in literature for detection of entangled states, the first solution to this problem is connected to the theory of positive maps. It was proposed by Peres in the form of partial transposition (PT) criteria \cite{peres}. Later, Horodecki proved that this criteria is necessary and sufficient for $2\otimes 2$ and $2\otimes 3$ dimensional quantum system \cite{horodecki2}. Although this criteria is one of the most important and widely used criteria but it suffers from serious drawbacks. One of the major drawback is that it is based on the negative eigenvalues of the partially transposed matrix and thus used to detect negative partial transpose entangled states (NPTES) only. Another drawback is that the partial transposition map is positive but not a completely positive map and hence, may not be implemented in an experiment. In order to make it experimentally implementable, partial transposition map have been approximated to a completely positive map using the method of SPA \cite{horodecki3}. A lot of work have been done on the structural physical approximation of partial transposition (SPA-PT) \cite{horodecki3,adh,hlim1, hlim3, kumari1}. 
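To make the idea concrete, the following sketch (an illustration added here, not a computation taken from the cited works) implements the partial transposition map for two qubits, shows that it detects the entanglement of a Bell state through a negative eigenvalue, and then increases the mixing probability $p$ in the SPA prescription until the Choi matrix of the approximated map becomes positive semi-definite, i.e., until the map becomes completely positive; for two qubits the scan stops near $p\approx 0.89$.
\begin{verbatim}
# Illustration only: the PT map on two qubits and the SPA that mixes it with
# the depolarizing (white-noise) map; complete positivity is tested through
# positivity of the Choi matrix.
import numpy as np

d = 2              # local dimension; the maps act on M_{d^2}
n = d * d

def partial_transpose(X):
    """Transpose the second d-dimensional subsystem of a d^2 x d^2 matrix."""
    return X.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(n, n)

def spa_map(X, p):
    """p * Tr[X] I/d^2 + (1 - p) * (I (x) T)(X)."""
    return p * np.trace(X) * np.eye(n) / n + (1 - p) * partial_transpose(X)

def choi_matrix(Lam):
    """sum_{ij} |i><j| (x) Lam(|i><j|); the map is CP iff this matrix is PSD."""
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n)); E[i, j] = 1.0
            C[i * n:(i + 1) * n, j * n:(j + 1) * n] = Lam(E)
    return C

# PPT criterion: the Bell state acquires a negative eigenvalue under PT.
bell = np.zeros((n, 1)); bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = bell @ bell.T
print(np.linalg.eigvalsh(partial_transpose(rho_bell)).min())   # -0.5 < 0

# SPA: smallest p for which the approximated map is completely positive.
for p in np.linspace(0, 1, 1001):
    if np.linalg.eigvalsh(choi_matrix(lambda X: spa_map(X, p))).min() > -1e-12:
        print("SPA of the PT map becomes completely positive near p =", round(p, 3))
        break
\end{verbatim}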
The SPA-PT has been used to detect and quantify entanglement \cite{horodecki3,adhikari1}, but so far it can be used to detect and quantify NPTES only. \\ Since the PPT criterion fails to detect bound entanglement in higher dimensions, other criteria have been proposed in the literature which can detect some positive partial transpose entangled states (PPTES). These include the realignment criterion, or computable cross-norm criterion (CCNR) \cite{chen, rudolph5}, the range criterion \cite{terhal}, and the covariance matrix criterion \cite{hyllus}. Moreover, it has been shown that the PPT criterion and the CCNR criterion are equivalent under permutations of the density matrix's indices \cite{rudolph2004}. Generalizations of the CCNR criterion were investigated in \cite{chen2}. Symmetric functions of the Schmidt coefficients have been used to improve the CCNR criterion in \cite{lupo}. Separability criteria based on the realignment of density matrices and reduced density matrices have been proposed in \cite{shen}. In \cite{shruti1,shruti2}, witness operators based on the realignment map were constructed which efficiently detect and quantify PPT entangled states. In \cite{XQi}, the rank of the realigned matrix is used to obtain a necessary and sufficient product criterion for quantum states. Recently, methods for detecting bipartite entanglement based on estimating moments of the realignment matrix have been proposed \cite{tzhang,shruti4}. The realignment criterion is powerful in the sense that it may be used to detect NPTES as well as PPTES. PPTES, also known as bound entangled states, are weakly entangled states that cannot be distilled by local operations and classical communication (LOCC). Although the realignment criterion is one of the best criteria for the detection of PPTES, it may not be used to detect entanglement in practice, because the realignment map corresponds to a non-positive map and non-positive maps are not experimentally implementable. On the other hand, it is known that completely positive maps may be realized in an experiment \cite{korbicz}. The defect that the realignment map may not be realized in an experiment may be overcome by approximating the non-positive realignment map by a completely positive map. Our work is significant because, although there has been considerable progress in entanglement detection using the SPA of the partial transposition map, the SPA of the realignment operation is still unexplored. \\ In this work we approximate the non-positive realignment map by a completely positive map. To achieve this goal, we first approximate the non-positive realignment map with a positive map and then show that the obtained positive map is also completely positive. We estimate the eigenvalues of the realignment matrix using moments, which may be measured in an experiment \cite{brun,tanaka,sougato,imai}. Further, we formulate a separability criterion, which we call the SPA-R criterion, based on our approximated map; it detects not only NPT entangled states but also PPT entangled states. Next, we show that the SPA-R criterion reduces to the original formulation of the realignment criterion for a class of states called Schmidt-symmetric states. Moreover, we discuss the accuracy of our approximated realignment (SPA-R) map by calculating the error of the approximation in trace norm. We also introduce an error inequality which holds for all separable states. This paper is organized as follows: In Sec.
II, we will revisit realignment criteria and review some preliminary results that we will use in further sections. In Sec. III, we approximate the non-positive realignment map to a positive map and further, we will show that the approximated positive map is completely positive. In Sec. IV, we develop our separability criteria called SPA-R criteria based on approximated realignment map. Furthermore, we show that the SPA-R criteria and the original form of realignment criteria will become same for Schmidt symmetric states. In Sec. V, we investigate the error generated due to the approximation procedure. In Sec. VI, we illustrate some examples to support the results obtained in this work. In Sec. VII, we discuss the efficiency of SPA-R criteria. Finally, we conclude in Sec. VIII. \section{Preliminaries} In this section, we stated the realignment criteria and some results which are discussed in the literature. We will use these results in the subsequent section to obtain the modified form of realignment criteria that may be realizable in the experiment.\\ \subsection{Realignment Criteria} First, let us recall the definition of the realignment operation. For any $m \times m$ block matrix $X$ with each block $X_{ij}$ of size $n \times n$, $i, j = 1, \; . \;.\;., m$, the realigned matrix $R(X)$ is defined by \begin{eqnarray} R(X)= [vec(X_{11}), . . ., vec(X_{m1}), . . ., vec(X_{1m}), . . ., vec(X_{mm})]^t \nonumber\\ \end{eqnarray} where for any $n \times n$ matrix $X_{ij}$ with entries $x_{ij}$, $vec(X_{ij})$ is defined as \begin{eqnarray} vec(X_{ij})= [x_{11}, . . ., x_{n1}, x_{12} . . ., x_{n2}, . . ., x_{1n}, . . ., x_{nn}]^t \end{eqnarray} \\ Let us consider a bipartite quantum system described by a density operator $\rho$ in $H_A^{d_1}\otimes H_B^{d_2}$ dimensional quantum system. The density operator $\rho$ may be expressed as \begin{eqnarray} \rho=\sum_{i,j,k,l}p_{ij,kl}|ij\rangle \langle kl| \end{eqnarray} where $d_1$ and $d_2$ are the dimensions of the Hilbert spaces $H_A$ and $H_B$ respectively. After applying realignment operation on $\rho$, the realigned matrix $R(\rho)$ may be expressed as \begin{eqnarray} R(\rho)=\sum_{i,j,k,l}p_{ij,kl}|ik\rangle \langle jl| \end{eqnarray} Then realignment criteria may be stated as: If $\rho$ represent a separable state then $||R(\rho)||_1 \leq 1$. Here $||.||_1$ denotes the trace norm and it may be defined as $||T||_1=Tr(\sqrt{TT^{\dagger}})$ \cite{chen}.\\ \subsection{A few well-known results} In this section, we mention about a few important results that may require in the following section. To proceed with, we employ a useful theorem by Weyl \cite{weyl}, that connects the eigenvalues of the sum of Hermitian matrices to those of the individual matrices. We use this theorem to prove the positivity of our approxiamted map. For convenience, Weyl's theorem can be stated as follows:\\ \textbf{Result 1:} (Weyl's Inequality \cite{weyl}) Let $A,B\in M_n$ be two Hermitian matrices and let $\{\lambda_i[A]\}_{i=1}^n$, $\{\lambda_i[B]\}_{i=1}^n$ and $\{\lambda_i[A+B]\}_{i=1}^n$ be the eigenvalues of $A,B$ and $A+B$ respectively, arranged in ascending order, i.e., $\lambda_1 \leq \lambda_2 \leq. . . \leq \lambda_n$. 
Then \begin{eqnarray} (i)~~\lambda_k[A+B]\leq \lambda_{k+j}[A]+\lambda_{n-j}[B],~~j=0,...,n-k \end{eqnarray} \begin{eqnarray} (ii)~~\lambda_{k-j+1}[A]+\lambda_j[B]\leq \lambda_k[A+B],~~j=1,...,k \label{weyl} \end{eqnarray} It may not be an easy task to directly compute the eigenvalues of a matrix, thus bounds for eigenvalues are of great importance. Bounds for eigenvalues using traces have been studied in \cite{wolko}. Further, the bound of the eigenvalues expressed in terms of moments may be useful for the experimentalist to estimate eigenvalues in the laboratory. We now state the result \cite{wolko} given below that determine a lower bound for the minimum eigenvalue of a matrix in terms of first and second order moments of the matrix. We will use the following result 2 in the subsequent section to prove the positivity of our approximated map.\\ \textbf{Result 2 \cite{wolko}}: Let $A \in M_n(\mathbb{C})$ be any matrix with real eigenvalues and $\lambda_{min}^{lb}[A]$ denotes the lower bound of the minimum eigenvalue of $A$. Then \begin{eqnarray} \lambda_{min}^{lb}[A] \leq \lambda_{min}[A] \ \end{eqnarray} where the lower bound is given by \begin{eqnarray} \lambda_{min}^{lb}[A] = \frac{Tr[A]}{n} - \sqrt{(n-1)\left(\frac{Tr[A^2]}{n}- (\frac{Tr[A]}{n})^2\right)}\nonumber\\ \label{lb} \end{eqnarray} The useful conditions for the existence of completely positive maps have been studied in \cite{poon}. We have exploited the conditions to prove the completely positivity of the introduced approximated map. The conditions may be expressed as the Result 3 below.\\ \textbf{Result 3 \cite{poon}:} Consider a map $\Phi: M_n(\mathbb{C}) \longrightarrow M_m(\mathbb{C})$. Let $A \in M_n$ and $B \in M_m$ be Hermitian matrices such that $\Phi(A)=B$. Then the map $\Phi$ is completely positive iff there exist non-negative real numbers $\gamma_1$ and $\gamma_2$ such that the following conditions hold: \begin{eqnarray} \lambda_{min}[B]&\geq& \gamma_1 \lambda_{min}[A]\\ \lambda_{max}[B]&\leq& \gamma_2 \lambda_{max}[A] \end{eqnarray} \section{Structural Physical Approximation of Realignment Map: Positivity and Completely positivity} \noindent In this section, we employ the method of structural physical approximation to approximate the realignment map. To proceed toward our aim, let us first recall the depolarizing map which may be defined in the following way:\\ A map $\Phi_d: M_n \longrightarrow M_n$ is said to be depolarizing if \begin{eqnarray} \Phi(A)= \frac{Tr[A]}{n} I_{n} \label{dep} \end{eqnarray} In the method of structural physical approximation, we mix an appropriate proportion of realignment map with a depolarizing map in such a way that the resulting map will be positive. This may happen because the lowest negative eigenvalues generated by the realignment map can be offset by the eigenvalues of the maximally mixed state generated by the depolarizing map.\\ Consider any quantum state $\rho$ in $d \otimes d$ dimensional system $\mathcal{D} \subset \mathcal{H}_A \otimes \mathcal{H}_B$ such that $\mathcal{D}$ contains the states $\rho$ whose realignment matrix $R(\rho)$ have real eigenvalues and positive trace. 
The structural physical approximation of the realignment map may be defined as $\widetilde{R}: M_{d^2} (\mathbb{C}) \longrightarrow M_{d^2}(\mathbb{C})$ such that \begin{eqnarray} \widetilde{R}(\rho)=\frac{p}{d^2}I_{d\otimes d}+\frac{(1-p)}{Tr[R(\rho)]}{R(\rho)},~~0 \leq p \leq 1 \label{sparealign} \end{eqnarray} \subsection{Positivity of structural physical approximation of realignment map} In general, $R(\rho)$ is an indefinite matrix: its eigenvalues may be negative or positive. Let us first consider the case when all the eigenvalues of $R(\rho)$ are non-negative. In this case, by the definition of $\widetilde{R}$ given in (\ref{sparealign}), $\widetilde{R}(\rho)$ is positive for all $ p \in [0,1]$ and hence $\widetilde{R}$ defines a positive map. On the other hand, if $R(\rho)$ has negative eigenvalues, then $\widetilde{R}(\rho)$ may be positive under some conditions. But since the realignment operation is not physically realizable, it is not feasible to compute the eigenvalues of $R(\rho)$ directly. To overcome this challenge, we find the range of $p$ in terms of $\lambda_{min}^{lb}[R(\rho)]$ defined in (\ref{lb}), which can be expressed in terms of $Tr[R(\rho)]$ and $Tr[(R(\rho))^2]$. The first and second moments of $R(\rho)$ may be measured experimentally \cite{sougato}. Now the problem is: how do we determine the sign of the real eigenvalues of $R(\rho)$ experimentally without directly computing them? The method we develop to tackle this problem is described below in detail. \subsubsection{Method for determining the sign of real eigenvalues of $R(\rho)$} Let $\rho \in \mathcal{D}$ be a $d \otimes d$ dimensional state such that $R(\rho)$ has real eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{d^2}$. The characteristic polynomial of $R(\rho)$ is given as \begin{eqnarray} f(x) = \prod_{i=1}^{d^2} (x - \lambda_i) = \sum_{k=0}^{d^2} (-1)^k a_k x^{d^2-k} \end{eqnarray} where $a_0 = 1$ and $\{a_k\}_{k=1}^{d^2}$ are functions of the eigenvalues of $R(\rho)$.\\ One may also consider the polynomial $f(-x)$, whose roots are the eigenvalues of $R(\rho)$ with reversed signs, so that Descartes' rule applied to $f(-x)$ counts the negative eigenvalues of $R(\rho)$. For a polynomial with real roots, Descartes' rule of signs states that the number of positive roots is given by the number of sign changes between consecutive elements in the ordered list of its nonzero coefficients \cite{descartes}. In particular, the number of sign changes in the ordered list of non-zero coefficients of $f(x)$ equals the degree of $f(x)$ if and only if all eigenvalues of $R(\rho)$ are positive. These non-zero coefficients can be determined in terms of moments of the matrix $R(\rho)$. The coefficients $a_k$ are related to the moments of $R(\rho)$ by the recursive formula (Newton's identities) \cite{newton} \begin{eqnarray} a_k = \frac{1}{k} \sum_{i=1}^k (-1)^{i-1} a_{k-i} m_i (R(\rho)) \label{rec} \end{eqnarray} where $ m_i (R(\rho)) = Tr[(R(\rho))^i]$ denotes the $i$th order moment of the matrix $R(\rho)$. For convenience, we write $ m_i (R(\rho))$ as $m_i$. The $i$th order moment can be explicitly expressed as \begin{eqnarray} m_i = (-1)^{i-1} i a_i + \sum_{k=1}^{i-1} (-1)^{i-1+k} a_{i-k} m_k \end{eqnarray} Using (\ref{rec}), we get \begin{eqnarray} a_1 &=& m_1\\ a_2 &=& \frac{1}{2} (m_1^2 - m_2) \\ a_3 &=& \frac{1}{6} ( m_1^3 - 3 m_1 m_2 + 2m_3) \end{eqnarray} and so on.\\ Therefore, the matrix $R(\rho)$ is positive semi-definite iff $a_i \geq 0$ for all $i = 1,\ldots, d^2$.
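The moment-based test described above is straightforward to simulate. The sketch below is an illustrative example only (the two-qubit input state is an arbitrary choice, not taken from the references): it constructs $R(\rho)$, evaluates the moments $m_i$, recovers the coefficients $a_k$ from (\ref{rec}), evaluates the lower bound (\ref{lb}), and compares the outcome with a direct eigenvalue computation.
\begin{verbatim}
# Illustration only: moments of R(rho), Newton's identities and the lower
# bound on the minimal eigenvalue, compared with direct diagonalization.
import numpy as np

d = 2
n = d * d

def realign(rho):
    """R(rho)_{(i,k),(j,l)} = rho_{(i,j),(k,l)} for a state on C^d (x) C^d."""
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(n, n)

def newton_coefficients(m_list):
    """a_k from m_1,...,m_n via a_k = (1/k) sum_{i=1}^k (-1)^(i-1) a_{k-i} m_i."""
    a = [1.0]
    for k in range(1, len(m_list) + 1):
        a.append(sum((-1) ** (i - 1) * a[k - i] * m_list[i - 1]
                     for i in range(1, k + 1)) / k)
    return a[1:]

# arbitrary test state: a noisy two-qubit Bell state (entangled for b > 1/3)
b = 0.6
phi = np.zeros((n, 1)); phi[0] = phi[3] = 1 / np.sqrt(2)
rho = b * (phi @ phi.T) + (1 - b) * np.eye(n) / n

R = realign(rho)
m_list = [np.trace(np.linalg.matrix_power(R, i)).real for i in range(1, n + 1)]
a = newton_coefficients(m_list)
lb = m_list[0] / n - np.sqrt((n - 1) * (m_list[1] / n - (m_list[0] / n) ** 2))

eigs = np.linalg.eigvalsh(R)               # R(rho) is real symmetric here
print("moment test (all a_k >= 0):", all(ak >= -1e-10 for ak in a))
print("lower bound vs true minimum:", lb, eigs.min())
print("||R(rho)||_1 =", np.linalg.norm(R, 'nuc'))   # > 1, so rho is entangled
\end{verbatim}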
\\ \subsubsection{Positivity of $\widetilde{R}(\rho)$:} In this section, we derive the condition for which the approximated map $\tilde{R}(\rho)$ will be positive when (i) $R(\rho)$ is positive; and when (ii) $R(\rho)$ is indefinite. The obtained conditions are stated in the following theorem.\\ \textbf{Theorem-1} Let $\rho$ be a $d \otimes d$ dimensional state such that its realignment matrix $R(\rho)$ has real eigenvalues. The structural physical approximation of realignment map $\widetilde{R}(\rho)$ is a positive operator for $p \in [l, 1]$, where $l$ is given by \begin{eqnarray} l= \left\{ \begin{array}{lrr} 0 & when & \lambda_{min} [R(\rho)] \geq 0 \\ \frac{d^2k}{Tr[R(\rho)]+d^2k} \leq p \leq 1 & when & \lambda_{min} [R(\rho)] < 0 \end{array}\right\} \label{thm1} \end{eqnarray} where $k=max[0,-\lambda_{min}^{lb}[R(\rho)]]$ and $\lambda_{min}^{lb}[R(\rho)])$ denotes the lower bound of the minimum eigenvalue of $R(\rho)$ defined in (\ref{lb}).\\ \textbf{Proof:} Recalling the definition (\ref{sparealign}) of the SPA of the realignment map, the minimum eigenvalue of $\widetilde{R}(\rho)$ is given by \begin{eqnarray} \lambda_{min}[\widetilde{R}(\rho)]=\lambda_{min}[\frac{p}{d^2}I_{d\otimes d}+\frac{(1-p)}{Tr[R(\rho)]}{R(\rho)}] \label{minr} \end{eqnarray} where $\lambda_{min}(.)$ denote the minimum eigenvalue of $[.]$. Using Weyl's inequality given in (\ref{weyl}) on RHS of (\ref{minr}), it reduces to \begin{eqnarray} \lambda_{min}[\widetilde{R}(\rho)]&\geq& \lambda_{min}[\frac{p}{d^2}I_{d\otimes d}]+\lambda_{min}[\frac{(1-p)}{Tr[R(\rho)]}{R(\rho)}]\nonumber\\&=& \frac{p}{d^2}+\frac{(1-p)}{Tr[R(\rho)]}\lambda_{min}{[R(\rho )]} \label{lambmin} \end{eqnarray} \noindent Now our task is to find the range of $p$ for which $\widetilde{R}$ defines a positive map. Based on the sign of $\lambda_{min}[R(\rho )]$, we consider the following two cases.\\ \textbf{Case-I:} When $\lambda_{min}[R(\rho )]\geq 0$, the RHS of the inequality in (\ref{lambmin}) is positive for every $0 \leq p\leq 1$ and hence $\widetilde{R}(\rho)$ represent a positive map for all $p \in [0,1]$.\\ \textbf{Case-II:} If $\lambda_{min}[R(\rho)]<0$, then (\ref{lambmin}) may be rewritten as \begin{eqnarray} \lambda_{min}[\widetilde{R}(\rho)] &\geq& \frac{p}{d^2}+\frac{(1-p)}{Tr[R(\rho)]}\lambda_{min}^{lb}[R(\rho )] \label{lambminneg} \end{eqnarray} where $\lambda_{min}^{lb}[R(\rho)]$ is given in (\ref{lb}) and may be re-expressed in terms of moments as \begin{eqnarray} \lambda_{min}^{lb}[R(\rho)]= \frac{m_1}{d^2} - \sqrt{(d^{2}-1)\left(\frac{m_2}{d^2}- \left(\frac{m_1}{d^2}\right)^2\right)}\nonumber\\ \label{lbr} \end{eqnarray} where $m_1 = Tr[R(\rho)]$ and $m_2 = Tr[(R(\rho))^2] $.\\ Taking $\lambda_{min}^{lb}[R(\rho)]=-k,~~k(>0)\in \mathbb{R}$, (\ref{lambminneg}) reduces to \begin{eqnarray} \lambda_{min}[\widetilde{R}(\rho)] &\geq& \frac{p}{d^2}-k\frac{(1-p)}{Tr[R(\rho)]} \label{lambminneg1} \end{eqnarray} Now, if we impose the condition on the parameter $p$ as $p\geq \frac{d^2k}{Tr[R(\rho)]+d^2k}=l$ then $\lambda_{min}[\widetilde{R}(\rho)]\geq 0$. Thus combining the above discussed two case, we can say that the approximated map $\widetilde{R}(\rho)$ represent a positive map when (\ref{thm1}) holds. 
Hence the theorem is proved.\\ \subsection{Completely positivity of structural physical approximation of realignment map} In order to show that the approximated map $\widetilde{R}(\rho)$ defined in (\ref{sparealign}) may be realized in an experiment, it is not enough to show that $\widetilde{R}(\rho)$ is positive but also we need to show that it is completely positive.\\ When $l \leq p \leq 1$ there exist non-negative real numbers $\gamma_1$ and $\gamma_2$ such that the following conditions hold \begin{eqnarray} \lambda_{min} [\widetilde{R}(\rho)] &\geq& \gamma_1 \lambda_{min}[\rho] \label{26}\\ \lambda_{max} [\widetilde{R}(\rho)] &\leq& \gamma_2 \lambda_{max}[\rho] \label{27} \end{eqnarray} Hence, using Result-3, $\widetilde{R}(\rho)$ is a completely positive operator for $p \in [l,1]$. \section{Detection using the Experimental Implementable form of Realignment criteria} \noindent In this section, we will derive a separability condition for the detection of NPTES and PPTES that may be implemented in the laboratory. The separability condition obtained depends on the structural physical approximation of Realignment criterion and thus the condition may be termed as SPA-R criterion. We will then further identify a class of states known as Schmidt-symmetric state for which the SPA-R criterion is equivalent to original form of realignment criterion \cite{rudolph2004,chen2} and weak form of realignment criterion \cite{hertz}. \subsection{SPA-R Criterion} We are now in a position to derive the laboratory-friendly (for clarification, see Appendix-II) separability criterion that may detect the NPTES and PPTES. The proposed entanglement detection criterion is based on the structural physical approximation of Realignment criterion and it may be stated in the following theorem.\\ \textbf{Theorem 2:} If any quantum system described by a density operator $\rho_{sep}$ in $d\otimes d$ system is separable then \begin{eqnarray} ||\widetilde{R}(\rho_{sep})||_1 \leq \frac{p[Tr[R(\rho_{sep})]-1]+1}{Tr[R(\rho_{sep})]}=\widetilde{R}(\rho_{sep})_{UB} \label{thm2} \end{eqnarray} \textbf{Proof:} Let us consider a two-qudit bipartite separable state described by the density matrix $\rho_{sep}$, then the approximated realignment map (\ref{sparealign}) may be recalled as \begin{eqnarray} \widetilde{R}(\rho_{sep}) = \frac{p}{d^2}I_{d\otimes d}+\frac{1-p}{Tr[R(\rho_{sep})]}R(\rho_{sep}) \label{spa1} \end{eqnarray} Taking trace norm on both sides of (\ref{spa1}) and using triangular inequality on norm, it reduces to \begin{eqnarray} ||\widetilde{R}(\rho_{sep})||_1 &\leq& ||\frac{p}{d^2}I_{d\otimes d}||_1+||\frac{1-p}{Tr[R(\rho_{sep})]}R(\rho_{sep})||_1\nonumber\\ &=& p+\frac{1-p}{Tr[R(\rho_{sep})]}||R(\rho_{sep})||_1 \label{spa3} \end{eqnarray} Since $\rho_{sep}$ denote a separable state so using realignment criteria, we have $||R(\rho_{sep})||_1\leq 1$ \cite{chen,rudolph5}. 
Therefore, (\ref{spa3}) further reduces to \begin{eqnarray} ||\widetilde{R}(\rho_{sep})||_1 &\leq& p+\frac{1-p}{Tr[R(\rho_{sep})]}\nonumber\\ &=& \frac{p[Tr[R(\rho_{sep})]-1]+1}{Tr[R(\rho_{sep})]} \end{eqnarray} Hence proved.\\ \textbf{Corollary-1:} If for any two-qudit bipartite state $\rho$, the inequality \begin{eqnarray} ||\widetilde{R}(\rho)||_1 > \frac{p[Tr[R(\rho)]-1]+1}{Tr[R(\rho)]}=\widetilde{R}(\rho)_{UB} \label{cor1} \end{eqnarray} holds then the state $\rho$ is an entangled state.\\ We should note an important fact that the $\widetilde{R}(\rho_{sep})_{UB}$ given in (\ref{thm2}) and (\ref{cor1}) depends on $Tr(R(\rho))$, which can be considered as the first moment of $R(\rho)$ and it may be measured in an experiment \cite{sougato} (See Appendix II).\\ \textbf{Corollary-2:} If for any separable state $\rho_{sep}^{(1)}$, $Tr[R(\rho_{sep}^{(1)})]=1$ holds then (\ref{thm2}) reduces to \begin{eqnarray} ||\widetilde{R}(\rho_{sep}^{(1)})||_1 \leq 1 \label{cor2} \end{eqnarray} \subsection{Schmidt-symmetric states} Let us consider a class of states known as Schmidt-symmetric states which may be defined as \cite{hertz} \begin{eqnarray} \rho_{sc}=\sum_{i} \lambda_{i} A_{i}\otimes A_{i}^{*} \label{sc} \end{eqnarray} where $A_{i}$ represent the orthonormal bases of the operator space and $\lambda_{i}$ denote non-negative real numbers known as Schmidt coefficients.\\ We are considering this particular class of states because we will show in this section that the separability criteria using SPA-R map becomes equivalent to the original form of realignment criteria for such class of state. Hertz et.al. studied the Schmidt-symmetric states and proved that a bipartite state $\rho_{sc}$ is Schmidt-symmetric if and only if \begin{eqnarray} ||R(\rho_{sc})||_1 = Tr[R(\rho_{sc})] \label{sch} \end{eqnarray} For any Schmidt-symmetric state described by the density operator $\rho_{sc}$, the realignment matrix $R(\rho_{sc})$ defines a positive semi-definite matrix. Hence, using $Theorem-1$, $\widetilde{R}(\rho_{sc})$ is positive $ \forall \; p\in [0,1]$. Also, using (\ref{26}) and (\ref{27}), $\widetilde{R}(\rho_{sc})$ can be shown as a completely positive. To achieve the motivation of this section, let us start with the following lemma.\\ \textbf{Lemma 1:} For any Schmidt-symmetric state $\rho_{sc}$, \begin{eqnarray} ||\widetilde{R}(\rho_{sc})||_1 = 1 \label{lemma1} \end{eqnarray} \textbf{Proof:} Let us recall (\ref{sparealign}), which may provide the structural physical approximation of the realignment of Schmidt-symmetric state. The SPA-R of $\rho_{sc}$ is denoted by $\widetilde{R}(\rho_{sc})$ and it is given by \begin{eqnarray} \widetilde{R}(\rho_{sc}) = \frac{p}{d^2}I_{d\otimes d}+\frac{(1-p)}{Tr[R(\rho_{sc})]}{R(\rho_{sc})} \end{eqnarray} Taking trace norm on both sides and using triangle inequality we have, \begin{eqnarray} ||\widetilde{R}(\rho_{sc})||_1 \leq p + \frac{(1-p)}{Tr[R(\rho_{sc})]} ||R(\rho_{sc}) ||_1 \label{m1} \end{eqnarray} Using (\ref{sch}), the inequality (\ref{m1}) reduces to \begin{eqnarray} ||\widetilde{R}(\rho_{sc})||_1 \leq 1 \label{r1} \end{eqnarray} Again, using (\ref{sparealign}), the trace of the approximated map $\widetilde{R}(\rho_{sc})$ is given by \begin{eqnarray} Tr[\widetilde{R}(\rho_{sc})] = Tr[\frac{p}{d^2}I_{d\otimes d}+\frac{(1-p)}{Tr[R(\rho_{sc})]}{R(\rho_{sc})}] = 1 \label{trace} \end{eqnarray} Moreover, it is known that the trace norm of an operator is greater than or equal to its trace. 
Therefore, applying this result on $\widetilde{R}(\rho_{sc})$, we get \begin{eqnarray} Tr[\widetilde{R}(\rho_{sc})] \leq ||\widetilde{R}(\rho_{sc})||_1 \label{r11} \end{eqnarray} Using (\ref{trace}), the inequality (\ref{r11}) reduces to \begin{eqnarray} ||\widetilde{R}(\rho_{sc})||_1 \geq 1 \label{ineq1} \end{eqnarray} Both (\ref{r1}) and (\ref{ineq1}) holds only when \begin{eqnarray} ||\widetilde{R}(\rho_{sc})||_1 = 1 \label{r2} \end{eqnarray} holds. Thus proved.\\ We are now in a position to show that SPA-R criteria may reduce to original form of realignment criteria for Schmidt-symmetric states. It may be expressed in the following theorem.\\ \textbf{Theorem 3:} For Schmidt-symmetric state, SPA-R separability criterion reduces to the original form of realignment criterion.\\ \textbf{Proof:} Let $\rho_{sc}^{sep}$ be any separable Schmidt-symmetric state. The SPA-R separability criterion for $\rho_{sc}^{sep}$ is given by \begin{eqnarray} ||\widetilde{R}(\rho_{sc}^{sep})||_1 \leq \widetilde{R}(\rho_{sc}^{sep})_{UB} \label{f1} \end{eqnarray} Using (\ref{lemma1}), the inequality (\ref{f1}) reduces to \begin{eqnarray} &&\widetilde{R}(\rho_{sc}^{sep})_{UB} = \frac{p[Tr[R(\rho_{sc}^{sep})]-1]+1}{Tr[R(\rho_{sc}^{sep})]} \geq 1 \nonumber\\ &\implies& \;\; p[Tr[R(\rho_{sc}^{sep})]-1] + 1 \geq Tr[R(\rho_{sc}^{sep})]\nonumber\\ &\implies&\;\; Tr[R(\rho_{sc}^{sep})] (p-1) \geq (p-1)\nonumber\\ &\implies&\;\; Tr[R(\rho_{sc}^{sep})] \leq 1 \nonumber\\ &\implies&\;\; ||R(\rho_{sc}^{sep})||_1 \leq 1 \end{eqnarray} The last step follows from (\ref{sch}). Hence the theorem. \section{Error in the approximated map} \noindent In this section, we have studied and analysed the error generated when $R(\rho)$ is approximated by its SPA. In the approximated map, we have added an appropriate proportion of maximally mixed state such that the approximated map has no negative eigenvalue. The error between the approximated map $\widetilde{R}(\rho)$ and the realignment map $R(\rho)$ may be calculated as: \begin{eqnarray} ||\widetilde{R}(\rho)-R(\rho)||_1&=&||\frac{p}{d^2}I_{d\otimes d}+\frac{(1-p)}{Tr[R(\rho)]}R(\rho)-R(\rho)||_1\nonumber\\ &=&||\frac{p}{d^2}I_{d\otimes d}+[\frac{(1-p)}{Tr[R(\rho)]}-1]R(\rho)||_1\nonumber\\ \label{rrho1} \end{eqnarray} Using triangular inequality for trace norm, (\ref{rrho1}) reduces to \begin{eqnarray} ||\widetilde{R}(\rho)-R(\rho)||_1\leq p+{\frac{1-p-Tr[R(\rho)]}{Tr[R(\rho)]}}||R(\rho)||_1 \label{error} \end{eqnarray} The inequality (\ref{error}) may be termed as error inequality. The error inequality holds for any two-qudit bipartite state.\\ \textbf{Proposition 1:} The equality relation \begin{eqnarray} ||\widetilde{R}(\rho_{sep})-R(\rho_{sep})||_1= \frac{(1-p)(1-Tr[R(\rho_{sep})])}{Tr[R(\rho_{sep})]} \label{errorsep1} \end{eqnarray} holds for separable state described by the density operator $\rho_{sep}$ such that $||R(\rho_{sep})||_1=1$.\\ \textbf{Proof:} Equality in (\ref{error}) holds if and only if \begin{eqnarray} \frac{p}{d^2}I_{d\otimes d}=[\frac{(1-p)}{Tr[R(\rho)]}-1]R(\rho) \end{eqnarray} i.e. 
equality in (\ref{error}) holds when the realigned matrix takes the form \begin{eqnarray} R(\rho)=\frac{pTr[R(\rho)]}{1-p-Tr[R(\rho)]}\frac{I}{d^2},~~0\leq p\leq 1 \label{eq22} \end{eqnarray} Taking the trace norm, (\ref{eq22}) reduces to \begin{eqnarray} ||R(\rho)||_1=\frac{pTr[R(\rho)]}{1-p-Tr[R(\rho)]},~~0\leq p\leq 1 \label{eq23} \end{eqnarray} For a separable state $\rho_{sep}$ with $||R(\rho_{sep})||_1=1$, (\ref{eq23}) reduces to \begin{eqnarray} 1=\frac{pTr[R(\rho_{sep})]}{1-p-Tr[R(\rho_{sep})]},~~0\leq p\leq 1 \label{eq24} \end{eqnarray} Simplifying (\ref{eq24}), the values of $p$ and $1-p$ may be expressed as \begin{eqnarray} p=\frac{1-Tr[R(\rho_{sep})]}{1+Tr[R(\rho_{sep})]},~~1-p=\frac{2Tr[R(\rho_{sep})]}{1+Tr[R(\rho_{sep})]} \end{eqnarray} Substituting the values of $p$ and $1-p$ in (\ref{eq22}), the realigned matrix for the separable state, $R(\rho_{sep})$, takes the form \begin{eqnarray} R(\rho_{sep})=\frac{1}{d^2}I \label{mes} \end{eqnarray} Therefore, (\ref{mes}) holds only for separable states. This means that there exists a separable state $\rho_{sep}$ with $||R(\rho_{sep})||_{1}=1$ for which the equality condition in the error inequality (\ref{error}) holds.\\ \textbf{Result-4} If any quantum system described by a density operator $\rho$ in a $d\otimes d$ system is separable, then the error inequality is given by \begin{eqnarray} ||\widetilde{R}(\rho)-R(\rho)||_1&\leq& \frac{(1-p)[1-Tr[R(\rho)]]}{Tr[R(\rho)]} \label{errorsep2} \end{eqnarray} \textbf{Proof:} Let us consider a separable state $\rho_{sep}$. Thus, we have $||R(\rho_{sep})||_{1}\leq 1$. Therefore, the error inequality (\ref{error}) reduces to \begin{eqnarray} ||\widetilde{R}(\rho_{sep})-R(\rho_{sep})||_1&\leq& p+{\frac{1-p-Tr[R(\rho_{sep})]}{Tr[R(\rho_{sep})]}}\nonumber\\ &=& \frac{(1-p)[1-Tr[R(\rho_{sep})]]}{Tr[R(\rho_{sep})]} \label{errorsep} \end{eqnarray} Hence proved.\\ \textbf{Corollary 3:} If inequality (\ref{errorsep2}) is violated by any bipartite $d \otimes d$ dimensional quantum state, then the state under investigation is entangled.\\ \section{Illustrations} \textbf{Example 1:} Consider the family of two-qubit states $\rho(r,s,t)$ discussed in \cite{rudolph3}. For $r=\frac{1}{4}$ and $s=\frac{1}{2}$, the family is represented by \begin{eqnarray} \rho_t=\frac{1}{2} \begin{pmatrix} \frac{5}{4} & 0 & 0 & t\\ 0 & 0 & 0 & 0\\ 0 & 0 & \frac{1}{4} & 0\\ t & 0 & 0 & \frac{1}{2} \end{pmatrix} \label{rhot} \end{eqnarray} $\rho_{t}$ is a valid quantum state when $|t| \leq \frac{\sqrt{\frac{5}{2}}}{2} \approx 0.790569$. By the PPT criterion, $\rho_t$ is entangled when $t \neq 0$.
The realignment criterion detects the entangled states for $|t| > 0.116117$.\\ Using the prescription given in (\ref{sparealign}), we construct the SPA-R map $\widetilde{R}: M_4 (\mathbb{C}) \longrightarrow M_4 (\mathbb{C})$ as \begin{eqnarray} \widetilde{R}(\rho_t) = \frac{p}{4} I_4 + \frac{(1-p)}{Tr[R(\rho_t)]} R(\rho_t) \end{eqnarray} where $0 \leq p \leq 1$.\\ Using Descartes' rule of signs, we find that $R(\rho_t)$ is positive semi-definite for $t \geq 0$ (the detailed calculation is given in the Appendix).\\ Applying $Theorem-1$, it can be shown that the approximated map $\widetilde{R}(\rho_t)$ is positive as well as completely positive for $l \leq p \leq 1$ where \begin{eqnarray} l = \left\{ \begin{array}{lrr} p_1(t) &\text{if }& -0.790569 \leq t < 0\\ 0 & \text{if }& 0 \leq t \leq 0.790569 \end{array} \right\} \end{eqnarray} where \begin{eqnarray} p_1(t) = \frac{2(13-24t+8t^2)-\sqrt{3(67-112t+64t^2)}}{(-5+4t)^2} \label{p1} \end{eqnarray} Thus, the SPA-R map $\widetilde{R}(\rho_t)$, which is a completely positive map, may be suitable for detecting the entanglement in the family of states described by the density operator $\rho_{t}$. Now we apply our separability criterion discussed in $Theorem-2$, which involves the comparison of $||\widetilde{R}(\rho_t)||_1$ and the upper bound $\widetilde{R}(\rho_t)_{UB}$ defined in (\ref{thm2}). After a few steps of simple calculation, we obtain \begin{eqnarray} ||\widetilde{R}(\rho_t)||_1 > \widetilde{R}(\rho_t)_{UB} \label{rhoteq} \end{eqnarray} for\\ $\begin{array}{lrl} t \in (-0.790569, -0.665506] &\text{when}& p_1(t) \leq p < p_2(t) \\ t \in (0.116117, 0.125] &\text{when}& 0 \leq p < p_3(t) \\ t \in (0.125, 0.790569] &\text{when}& 0 \leq p\leq 1 \end{array}$ where \begin{eqnarray} p_2(t) &=& \frac{(-91 - 48 t - 64 t^2) - \sqrt{u(t)}}{2 (-7 + 48 t)^2}\\ p_3(t) &=& \frac{(14 - 128 t + 64 t^2)}{(7 - 80 t + 128 t^2)} \end{eqnarray} The function $u(t)$ is given by $u= 8673 + 9632 t - 8832 t^2 - 6144 t^3 + 4096 t^4$.\\ Thus, the inequality (\ref{thm2}) is violated when $t > 0.116117$ or $t<-0.665506$, which implies that the state $\rho_{t}$ is entangled for $t \in [-0.790569,-0.665506) \cup (0.116117, 0.790569] $.\\ The comparison of $||\widetilde{R}(\rho_t)||_1$ and $\widetilde{R}(\rho_t)_{UB}$ for the two-qubit state $\rho_t$ has been studied in Fig-1 for the different ranges of $t$ given in (\ref{rhoteq}). From Fig-1 it can be observed that the inequality (\ref{rhoteq}) holds for $t > 0.116117$, which implies that the entanglement of $\rho_t$ is detected in this region. \begin{figure} \caption{The comparison between the $||\widetilde{R}(\rho_t)||_1$ and $\widetilde{R}(\rho_t)_{UB}$ for the two-qubit state $\rho_{t}$ has been displayed. In Fig 1a., one can observe that the inequality (\ref{thm2}) obtained in Theorem-2 is violated when $-0.790569 \leq t < -0.665506$ for $p \in [p_1(t),p_2(t)]$ whereas in Fig 1b. the inequality is violated when $0.116117< t \leq 0.125$ and $p$ lies in the interval $[0,p_3(t))$. Fig. 1c.
shows the violation of inequality (\ref{thm2}) when $t > 0.125$ and $0 \leq p \leq 1$.} \end{figure}\\ \textbf{Example 2:} Consider a two-qutrit state defined in \cite{swapan}, which is described by the density operator \begin{eqnarray} \rho_a = \frac{1}{5+2a^2}\sum_{i=1}^{3}{|\psi_i\rangle \langle \psi_i|},~~\frac{1}{\sqrt{2}}\leq a \leq 1 \label{eg2} \end{eqnarray} where $|\psi_i\rangle=|0i\rangle-a|i0\rangle$ for $i=1,2$, and\\ $|\psi_3\rangle=\sum_{i=0}^{2}{|ii\rangle}$.\\ The state described by the density operator $\rho_a$ is NPTES \cite{swapan}. Using the prescription given in (\ref{sparealign}), we construct the SPA-R map $\widetilde{R}: M_9 (\mathbb{C}) \longrightarrow M_9 (\mathbb{C})$ as \begin{eqnarray} \widetilde{R}(\rho_a) = \frac{p}{9} I_9 + \frac{(1-p)}{Tr[R(\rho_a)]}{R(\rho_a)},~ 0 \leq p \leq 1 \end{eqnarray} Using Descartes' rule of signs, we find that $R(\rho_a)$ is not a positive semi-definite operator (detailed calculation given in the Appendix). Using $Theorem-1$, the approximated map $\widetilde{R}(\rho_a)$ is positive as well as completely positive for $l_{1} \leq p \leq 1$ where \begin{eqnarray} l_{1} = \frac{-1+15\sqrt{2}w+6\sqrt{2}a^2w}{3\sqrt{2}(5+2a^2)w},~w=\sqrt{\frac{1}{56+9a^2(5+a^2)}} \end{eqnarray} Thus, the SPA-R map $\widetilde{R}(\rho_a)$ is suitable for detecting the entanglement in the state $\rho_a$ experimentally. Now we apply our separability criterion discussed in $Theorem-2$, which involves the comparison of $||\widetilde{R}(\rho_a)||_1$ and the upper bound $\widetilde{R}(\rho_a)_{UB}$ defined in (\ref{thm2}). For $\frac{1}{\sqrt{2}}\leq a \leq 1$, we find that \begin{eqnarray} ||\widetilde{R}(\rho_a)||_1 > \widetilde{R}(\rho_a)_{UB} \end{eqnarray} The comparison of $||\widetilde{R}(\rho_a)||_1$ and $\widetilde{R}(\rho_a)_{UB}$ for the two-qutrit state $\rho_a$ has been studied in Fig-2. From Fig-2, it is evident that the inequality (\ref{thm2}) obtained in $Theorem-2$ is violated. Thus, the state described by the density operator $\rho_a$ is an entangled state.\\ \begin{figure} \caption{The comparison between $||\widetilde{R}(\rho_a)||_1$ and $\widetilde{R}(\rho_a)_{UB}$ for the two-qutrit state $\rho_a$ has been displayed. It has been observed that the inequality (\ref{thm2}) is violated for $\rho_a$ in the whole range of $a$ and for $p \in [l_{1}, 1]$.} \end{figure} \textbf{Example 3:} Let us consider a two-qutrit isotropic state described by the density operator $\rho_{\beta}$ \cite{iso} \begin{eqnarray} \rho_{\beta}=\beta|\phi_{+}\rangle \langle \phi_{+}|+\frac{1-\beta}{9}I_9, ~~-\frac{1}{8}\leq \beta \leq 1 \label{betastate} \end{eqnarray} where $I_9$ denotes the identity matrix of order 9 and the state $|\phi_{+}\rangle$ represents a Bell state in a two-qutrit system and may be expressed as \begin{eqnarray} |\phi_{+}\rangle=\frac{1}{\sqrt{3}}(|11\rangle +|22\rangle +|33\rangle) \label{bellstate} \end{eqnarray} Using the realignment criterion, the state $\rho_{\beta}$ is an entangled state for $\frac{1}{3}< \beta \leq 1$. Using Descartes' rule of signs, we find that the realignment matrix $R(\rho_{\beta})$ is positive semi-definite. The comparison between $||\widetilde{R}(\rho_{\beta})||_1$ and $\widetilde{R}(\rho_{\beta})_{UB}$ has been studied in Fig-3. \begin{figure} \caption{The comparison between $||\widetilde{R}(\rho_{\beta})||_1$ and $\widetilde{R}(\rho_{\beta})_{UB}$ for the two-qutrit state $\rho_{\beta}$ has been displayed.
It has been observed that the inequality (\ref{thm2}) is violated for all values of $\beta \in (1/3, 1]$ and for any $p \in [0,1]$.} \end{figure} From Fig-3, it can be observed that the inequality (\ref{thm2}) is violated for $\frac{1}{3}< \beta \leq 1$ and $p\in [0,1]$. Thus, using $Theorem-2$, the state described by the density operator $\rho_{\beta}$ is an entangled state.\\ \textbf{Example 4:} Consider the $\alpha$-state for $0\leq \alpha \leq 1$, described by the density operator \begin{eqnarray} \rho_{\alpha}=\frac{1}{8\alpha+1} \begin{pmatrix} \alpha & 0 & 0 & 0 & \alpha & 0 & 0 & 0 & \alpha \\ 0 & \alpha & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \alpha & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \alpha & 0 & 0 & 0 & 0 & 0 \\ \alpha & 0 & 0 & 0 & \alpha & 0 & 0 & 0 & \alpha \\ 0 & 0 & 0 & 0 & 0 & \alpha & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1+\alpha}{2} & 0 & \frac{\sqrt{1-\alpha^2}}{2} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha & 0 \\ \alpha & 0 & 0 & 0 & \alpha & 0 & \frac{\sqrt{1-\alpha^2}}{2} & 0 & \frac{1+\alpha}{2} \\ \end{pmatrix} \label{eg4} \end{eqnarray}\\ It has been shown that this state is PPTES for $0<\alpha<1$ \cite{horodecki7}. Using Descartes' rule of signs, we find that the realignment matrix $R(\rho_{\alpha})$ is positive semi-definite (see the Appendix for the detailed calculation). Further, using $Result-2$, it can be easily shown that the SPA-R map $\widetilde{R}(\rho_{\alpha})$ is completely positive for any $p \in[0,1]$. It has been observed that the inequality (\ref{thm2}) is violated over different ranges of $p$ for some values of $\alpha$, as shown in the table given below. \begin{table}[h!] \begin{center} \begin{tabular}{ p{2.0cm} p{2.8cm} p{1.8cm} } \hline $\alpha$ & $Range~of~p$ & $Theorem-2$ \\ \hline $0.1$ &$0\leq p \leq 0.019383$ & $Violated$ \\ $0.2$ &$0\leq p \leq 0.022143$ & $Violated$\\ $0.3$ &$0\leq p \leq 0.021903$ & $Violated$\\ $0.4$ &$0\leq p \leq 0.020444$ & $Violated$\\ $0.5$ &$0\leq p \leq 0.018284$ & $Violated$\\ $0.6$ &$0\leq p \leq 0.015611$ & $Violated$\\ $0.7$ &$0\leq p \leq 0.012488$ & $Violated$\\ $0.8$ &$0\leq p \leq 0.008904$ & $Violated$\\ $0.9$ &$0\leq p \leq 0.004791$ & $Violated$\\ \hline \end{tabular} \end{center} \caption{The table shows the range of the probability $p$ for which the inequality (\ref{thm2}) is violated for different values of the state parameter $\alpha$.} \label{table1} \end{table} Thus, we have shown that the criterion given by $Theorem-2$ is violated by $\rho_{\alpha}$, and thus our criterion detects the bound entangled state given by (\ref{eg4}). \section{Efficiency of SPA-R criterion} In this section, we show how the SPA-R criterion performs in comparison to other entanglement detection criteria. In particular, we consider two entanglement detection criteria, namely (a) the separability criterion based on realignment moments \cite{tzhang} and (b) the partial realigned moment criterion \cite{shruti4}, for comparing the efficiency of the SPA-R criterion. \subsection{Comparing SPA-R and moment based criterion (a)} To compare the SPA-R criterion with the moment based criterion (a), we will use example-1 and example-4.\\ (i) Let us recall example-1, where the family of states is described by the density operator $\rho_t$. Interestingly, for this family of states when $t>0$, our SPA-R criterion detects entanglement in the region $t \in (0.116117, 0.790569]$. But the realignment moment based criterion given in \cite{tzhang} detects the entangled states in the range $t\in (0.370992, 0.790569]$.
Clearly, the SPA-R criterion detects the NPTES $\rho_t$ for $t>0$ in a wider range than the moment based criterion (a).\\ (ii) Let us consider the BES studied in example-4, which is described by the density operator $\rho_{\alpha}$, $0< \alpha <1$. The realignment moments for a bipartite state $\rho_{\alpha}$ may be defined as \cite{tzhang} \begin{eqnarray} r_k (R(\rho_{\alpha})) = Tr\left[\left(R(\rho_{\alpha}) (R(\rho_{\alpha}))^{\dagger}\right)^{k/2}\right],\; k = 1, 2, 3, . . ., n \label{zhangdef} \end{eqnarray} where $n$ denotes the order of the matrix $R(\rho_{\alpha})$.\\ The separability criterion based on the realignment moments $r_2$ and $r_3$ may be stated as \cite{tzhang}: If a quantum state $\rho_{\alpha}$ is separable, then \begin{eqnarray} Q_1 = \left(r_2(R(\rho_{\alpha}))\right)^2 - r_3(R(\rho_{\alpha})) \leq 0 \label{rzhang} \end{eqnarray} $Q_1 > 0$ certifies that the given state is entangled.\\ Fig-\ref{q2al} shows that the inequality (\ref{rzhang}) is not violated for the BES $\rho_{\alpha}$ in the whole range $0 < \alpha < 1$. Hence the BES $\rho_{\alpha}$ is undetected by this realignment moment based criterion. \begin{figure} \caption{The red curve represents $Q_1$ for the state $\rho_{\alpha}$ and the \textit{x}-axis depicts the state parameter $\alpha$.} \label{q2al} \end{figure} \subsection{Comparing SPA-R and moment based criterion (b)} Let us again recall example-1 and example-4 to compare the SPA-R criterion with the moment based criterion (b).\\ (i) In example-1, the family of states is described by the density operator $\rho_t$. By the $R$-moment criterion given in \cite{shruti4}, $\rho_t$ is detected when $t \in (0.214312, 0.790569] \subset (0.116117, 0.790569] $. Therefore, the SPA-R criterion detects more entangled states than the $R$-moment criterion.\\ (ii) Let us now consider the BES studied in example-4. Applying the $R$-moment criterion \cite{shruti4} to the BES described by the density operator $\rho_{\alpha}$, $0<\alpha<1$, we get \begin{eqnarray} Q_2 \equiv 56 D_8^{1/8} + T_1 - 1 \leq 0 \;\; \forall \; \alpha \in (0,1) \label{ineqal} \end{eqnarray} where $D_8 = \prod_{i=1}^8 \sigma_i^2(\rho_{\alpha})$ and $T_1 = Tr[R(\rho_{\alpha})]$. Here $\sigma_i(\rho_{\alpha})$ represents the $i$-th singular value of $\rho_{\alpha}$. Since the above inequality is not violated for any $\alpha$, the BES $\rho_{\alpha}$ is undetected by the $R$-moment based criterion. This is shown in Fig-\ref{q1al}. \begin{figure} \caption{The red curve represents $Q_2$ for the state $\rho_{\alpha}$ and the \textit{x}-axis depicts the state parameter $\alpha$.} \label{q1al} \end{figure} \section{Conclusion} \noindent To summarize, we have developed a separability criterion by approximating the realignment operation via structural physical approximation (SPA). Since the partial transposition (PT) operation is limited to detecting only NPTES, we have studied here the realignment operation, which may detect both NPTES and PPTES. But since the realignment map is not a positive map, and hence not a completely positive map, it is difficult to implement in a laboratory. Therefore, in order to make the realignment map completely positive, we have first approximated it by a positive map using the method of SPA and then shown that this approximated map is also completely positive. We have shown that the positivity of the SPA-R map can be verified in an experiment because the lower bound of the fraction $p$ can be expressed in terms of the first and second moments of the realignment matrix.
Interestingly, we have shown that the separability criterion derived in this work using the approximated (SPA-R) map detects NPT as well as PPT bipartite entangled states. Some examples are cited to support the results obtained in this work. Although there are other criteria that may detect NPTES and PPTES, our result is interesting in the sense that it may be realized in an experiment. Our results may be realized in an experiment, but to achieve this aim we pay a price in terms of a shorter detection range. This fact can be observed in Example 1, where the range of the state parameter for which entanglement is detected is smaller than the range obtained by the realignment operation (without approximation). We have also analyzed the error that occurs during the structural physical approximation of the realignment map; it is described by an inequality known as the error inequality. Lastly, we have obtained an inequality which is satisfied by all bipartite $d\otimes d$ dimensional separable states, and the violation of this inequality guarantees that the state under probe is entangled. Interestingly, the SPA-R criterion coincides with the original realignment criterion for Schmidt-symmetric states.\\ \section*{Appendix I} \subsection{Example 1} Consider the two-qubit state $\rho_t$ defined in (\ref{rhot}). The characteristic polynomial of the matrix $R(\rho_t)$ can be expressed as \begin{eqnarray} f_1(x) = x^4 - a_1(t) x^3 + a_2(t) x^2 - a_3(t) x + a_4(t) \end{eqnarray} Using (\ref{rec}), we get \begin{eqnarray} &&a_1(t) = m_1 = t + \frac{7}{8} \label{a1t}\\ &&a_2(t) =\frac{1}{2} (m_1^2 - m_2) = \frac{1}{32}(8t^2 +28 t + 5)\\ &&a_3(t) = \frac{1}{6} ( m_1^3 - 3 m_1 m_2 + 2m_3)= \frac{1}{32}(7t^2 + 5t)\\ &&a_4(t) = \frac{1}{24} (m_1^4 - 6 m_1^2 m_2 + 8 m_1 m_3 + 3 m_2^2 - 6m_4) = \frac{5}{128}t^2 \nonumber\\ \end{eqnarray} where $m_k=Tr[(R(\rho_t))^k]$. $R(\rho_t)$ is positive semi-definite iff $a_i(t) \geq 0$ for all $i = 1$ to $4$. After a simple calculation, we get \begin{eqnarray*} &&a_1(t) > 0 \;\; \text{for} \;\; t \in [-0.790569, 0.790569] \\ &&a_2(t) \geq 0 \;\; \text{for} \;\; t \in [-0.188751, 0.790569]\\ &&a_3(t) \geq 0 \;\; \text{for} \;\; t \in [-0.790569, -0.714286] \cup [0, 0.790569] \nonumber\\ &&a_4(t) \geq 0 \;\; \text{for} \;\; t \in [-0.790569, 0.790569] \end{eqnarray*} From the above calculations, we observe the following:\\ \textbf{Case 1:} If $t\geq 0$ then all the coefficients of the characteristic polynomial $f_1(-x)$ are positive, i.e., there is no sign change in the ordered list of coefficients of $f_1(-x)$. Thus $R(\rho_t)$ has no negative eigenvalue for $t\geq 0$.\\ \textbf{Case 2:} If $t < 0$ then (i) $a_2(t) < 0$ for $t \in [-0.790569, -0.188751)$ and (ii) $a_3(t) < 0$ for $t \in (-0.714286, 0) $. Hence, for every $t<0$, at least one coefficient of $f_1(x)$ is negative. Hence $R(\rho_t)$ has at least one negative eigenvalue, i.e., $R(\rho_t)$ is not positive semi-definite (PSD) for $t<0$.\\ From the above analysis, it follows that $\widetilde{R}(\rho_{t})$ defines a positive map when $t \geq 0$.
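As a small numerical cross-check of the sign analysis above, the following sketch (Python with \texttt{numpy} assumed; the function names are illustrative only) recovers the coefficients $a_i$ from the moments $m_k=Tr[(R(\rho_t))^k]$ via the Newton--Girard recursion underlying (\ref{rec}) and inspects their signs.
\begin{verbatim}
import numpy as np

def char_poly_coeffs(M):
    # coefficients a_1,...,a_n of f(x) = x^n - a_1 x^{n-1} + a_2 x^{n-2} - ...
    n = M.shape[0]
    m = [np.trace(np.linalg.matrix_power(M, k)).real for k in range(1, n + 1)]
    a = [1.0]                                   # a_0 = 1
    for k in range(1, n + 1):
        s = sum((-1) ** (j - 1) * a[k - j] * m[j - 1] for j in range(1, k + 1))
        a.append(s / k)                         # Newton-Girard identity
    return a[1:]

def realign(rho, d):
    # same realignment convention as in the earlier sketch
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

def rho_t(t):
    return 0.5 * np.array([[1.25, 0, 0, t], [0, 0, 0, 0],
                           [0, 0, 0.25, 0], [t, 0, 0, 0.5]])

for t in (0.3, -0.3):
    a = char_poly_coeffs(realign(rho_t(t), 2))
    print("t =", t, "  a_i =", [round(c, 5) for c in a],
          "  all a_i >= 0:", all(c >= -1e-12 for c in a))
\end{verbatim}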
Now, by $Theorem-1$, for $t < 0$, $\widetilde{R}(\rho_{t}) > 0$ for $p \geq l$, where the lower bound $l$ of the proportion $p$ is given as \begin{eqnarray} l = \frac{4k}{Tr[R(\rho_t)] + 4k} = p_1(t) \end{eqnarray} where $Tr[R(\rho_t)]=a_{1}(t)$ is given in (\ref{a1t}), $p_1 (t)$ is defined in (\ref{p1}) and $k$ is given by \begin{eqnarray} k &=& -\lambda_{min}^{lb}[R(\rho_t)] \nonumber\\&=& \frac{-1}{32} (7+ 8t - \sqrt{3(67 - 112 t + 64 t^2)}) \end{eqnarray} \subsection{Example 2} Consider the two-qutrit state defined in (\ref{eg2}). Let $f_2(x)$ be the characteristic polynomial of $R(\rho_{a})$, given as \begin{eqnarray} f_2(x) &=& x^9 - a_1(a) x^8 +a_2(a) x^7 - a_3(a) x^6 + a_4(a) x^5 \nonumber \\&& - a_5(a) x^4 + a_6(a) x^3 - a_7(a) x^2 + a_8(a) x - a_{9}(a)\nonumber \end{eqnarray} where the coefficients $a_i(a)$ calculated in terms of moments using (\ref{rec}) are given as \begin{eqnarray} a_1(a)&=&\frac{9}{5+2a^2},\nonumber\\a_2(a) &=&- \frac{4(-9+a^2)}{(5+2a^2)^2}, \nonumber\\ a_3(a) &=& -\frac{28(-3+a^2)}{(5+2a^2)^3},\nonumber\\a_4(a) &=& \frac{126-84a^2+5a^4}{(5+2a^2)^4}, \nonumber\\ a_5(a) &=& \frac{126-140a^2+25a^4}{(5+2a^2)^5}, \; \;\nonumber\\ a_6(a) &=& -\frac{2(-42+70a^2-25a^4+a^6)}{(5+2a^2)^6}, \nonumber\\ a_7(a) &=&-\frac{2(-18+42a^2-25a^4+3a^6)}{(5+2a^2)^7},\nonumber\\ a_8(a) &=&- \frac{-9+28a^2-25a^4+6a^6}{(5+2a^2)^8} \end{eqnarray} and \begin{eqnarray} a_9(a) =-\frac{(-1+a^2)^2(-1+2a^2)}{(5+2a^2)^9} \end{eqnarray} From the coefficients of $f_2(x)$, it can be observed that at least one coefficient of $f_2(x)$ is negative. This means $R(\rho_a)$ has at least one negative eigenvalue, i.e., $R(\rho_a)$ is not PSD. Using $Theorem-1$, the approximated map $\widetilde{R}(\rho_a)$ is positive as well as completely positive for $p \geq l$, where the lower bound $l$ of the proportion $p$ is given as \begin{eqnarray} l = \frac{9k}{Tr[R(\rho_a)] + 9k} \end{eqnarray} where $Tr[R(\rho_a)]$ is the trace of $R(\rho_a)$ and \begin{eqnarray} k &=& -\lambda_{min}^{lb}[R(\rho_a)] \nonumber\\&=& -\frac{1}{5+2a^2}+3\sqrt{2}\sqrt{\frac{1}{56+45a^2+9a^4}} \end{eqnarray} Substituting the values of $k$ and $Tr[R(\rho_a)]$, the lower bound $l_1$ may be expressed as \begin{eqnarray} l_1 = \frac{-1+15\sqrt{2}w+6\sqrt{2}a^2w}{3\sqrt{2}(5+2a^2)w} \end{eqnarray} where $w=\sqrt{\frac{1}{56+9a^2(5+a^2)}}$. \subsection{Example 3} Let us consider the two-qutrit isotropic state described by the density operator $\rho_{\beta}$ in (\ref{betastate}). Let $f_3(x)$ be the characteristic polynomial of $R(\rho_{\beta})$, given as \begin{eqnarray} f_3(x) &=& x^9 - a_1(\beta) x^8 +a_2(\beta) x^7 - a_3(\beta) x^6 + a_4(\beta) x^5 \nonumber \\&& - a_5(\beta) x^4 + a_6(\beta) x^3 - a_7(\beta) x^2 + a_8(\beta) x - a_{9}(\beta)\nonumber \end{eqnarray} where the coefficients $a_i(\beta)$, in terms of moments, may be expressed as \begin{eqnarray} &a_1(\beta) = \frac{1}{3}(1+8\beta), \; \; &a_2(\beta) = \frac{4}{9}\beta(2+7\beta), \nonumber\\& a_3(\beta) = \frac{28}{27}\beta^2(1+2\beta), \; \; &a_4(\beta) = \frac{14}{81}\beta^3(4+5\beta), \nonumber\\ &a_5(\beta) = \frac{14}{243}\beta^4(5+4\beta), \; \; &a_6(\beta) = \frac{28}{729}\beta^5(2+\beta), \nonumber\\ &a_7(\beta) = \frac{4\beta^6(7+2\beta)}{2187}, \; \; &a_8(\beta) = \frac{\beta^7(8+\beta)}{6561} \end{eqnarray} and $a_9(\beta) =\frac{\beta^8}{19683}$. Since all the coefficients $a_i(\beta),~~i=1~ \text{to} ~9$, of $f_3(x)$ are positive, the realignment matrix $R(\rho_{\beta})$ is positive semi-definite. Thus $\widetilde{R}(\rho_\beta)$ is completely positive for $0 \leq p \leq 1$. \subsection{Example 4} Consider the $\alpha$-state defined in (\ref{eg4}).
Let $f_4(x)$ be the characteristic polynomial of $R(\rho_{\alpha})$, given as \begin{eqnarray} f_4(x) &=& x^9 - a_1(\alpha) x^8 +a_2(\alpha) x^7 - a_3(\alpha) x^6 + a_4(\alpha) x^5 \nonumber \\&& - a_5(\alpha) x^4 + a_6(\alpha) x^3 - a_7(\alpha) x^2 + a_8(\alpha) x - a_{9}(\alpha)\nonumber \end{eqnarray} where the coefficients $a_i(\alpha)$ calculated in terms of moments using (\ref{rec}) are given as \begin{eqnarray} &a_1(\alpha) = \frac{1 + 17\alpha}{2(1 + 8\alpha)}, \; \; &a_2(\alpha) = \frac{\alpha(7+ 59\alpha)}{2(1+8\alpha)^2}, \nonumber\\& a_3(\alpha) = \frac{\alpha^2(21 + 109\alpha)}{2(1+8\alpha)^3}, \; \; &a_4(\alpha) = \frac{5\alpha^3(7+ 23\alpha)}{2(1+8\alpha)^4}, \nonumber\\ &a_5(\alpha) = \frac{\alpha^4(35 + 67\alpha)}{2(1+8\alpha)^5}, \; \; &a_6(\alpha) = \frac{\alpha^5(21 + 17\alpha)}{2(1+8\alpha)^6}, \nonumber\\ &a_7(\alpha) = \frac{\alpha^6(7 - \alpha)}{2(1+8\alpha)^7}, \; \; &a_8(\alpha) = \frac{\alpha^7(1-\alpha)}{2(1+8\alpha)^8} \end{eqnarray} and $a_9(\alpha) =0$. Now, since $a_i(\alpha) \geq 0$ for $i=1$ to $9$, $R(\rho_{\alpha})$ is PSD. Hence, by $Theorem-1$, $\widetilde{R}(\rho_{\alpha})$ defines a positive map for $0 \leq p \leq 1$ and for all $\alpha \in (0,1)$. \section*{Appendix II: Estimation of first moment of $R(\rho)$} Let $\rho_{AB}$ be a $d\otimes d$ dimensional state. In \cite{sougato}, it has been shown that the measurement of moments of a partially transposed matrix is technically possible using $m$ copies of the state $\rho_{AB}$ and SWAP operations. In this process, the matrix power is written as the expectation value of a permutation operator. We can apply the same method adopted in the references \cite{horodecki8,ekert}, but on a single copy of the realigned matrix, as \begin{eqnarray} m_1 &=& Tr[R(\rho_{AB})P] \end{eqnarray} where $P$ is the normalized permutation operator. Now, since $R(\rho_{AB})$ is not physically realizable, we need to express the first moment $m_1$ of $R(\rho_{AB})$ in terms of a physically realizable operator. From the definition (\ref{sparealign}) of the SPA of the realigned matrix, we can write \begin{eqnarray} R(\rho_{AB}) \propto \widetilde{R}(\rho_{AB}) - \frac{p}{d^2} I_{d \otimes d} \end{eqnarray} Therefore, the first moment of $R(\rho_{AB})$ may be expressed as \begin{eqnarray} m_1 &\simeq& Tr [(\widetilde{R}(\rho_{AB}) - \frac{p}{d^2} I_{d \otimes d} )P]\nonumber\\ &=& Tr [\widetilde{R}(\rho_{AB})P] - \frac{p}{d^2} Tr[P]\nonumber\\ &=& Tr [\widetilde{R}(\rho_{AB})P] - \frac{p}{d^2}\nonumber\\ &\leq& Tr [\widetilde{R}(\rho_{AB})P] - \frac{ k}{m_1+ d^2k} \label{m11} \end{eqnarray} In the last line, we have used $p \geq \frac{ d^2k}{m_1+ d^2k} $ and $k=\max[0,-\lambda_{min}^{lb}[R(\rho_{AB})]]$, which is defined in Theorem 1.
The equality is obtained when all the eigenvalues of $R(\rho_{AB})$ are positive.\\ The inequality (\ref{m11}) may be re-expressed as \begin{eqnarray} m_1 + \frac{ k}{m_1+ d^2k} \leq Tr [\widetilde{R}(\rho_{AB})P] := s \label{s1} \end{eqnarray} Since $\widetilde{R}(\rho_{AB})$ is a positive semi-definite operator with unit trace, $s = Tr[\widetilde{R}(\rho_{AB})P]$ can be measured using controlled swap operations [42].\\ The inequality (\ref{s1}) can be re-expressed as \begin{eqnarray} m_1^2 +m_1(d^2k -s) + k(1-d^2s) \leq 0 \label{quad1} \end{eqnarray} Solving the above quadratic inequality for $m_1$, we have \begin{eqnarray} \frac{ -(d^2k - s) - \sqrt{(d^2k-s)^2 - 4k(1-d^2s)}}{2} \leq m_1 \nonumber\\ \leq \frac{ -(d^2k - s) + \sqrt{(d^2k-s)^2 - 4k(1-d^2s)}}{2} \end{eqnarray} For these bounds on $m_{1}$ to be real, we require \begin{eqnarray} (d^2k-s)^2 - 4k(1-d^2s) \geq 0 \label{ineq1} \end{eqnarray} Let us also assume $1-d^2s \geq 0$. The inequality (\ref{ineq1}) may be further simplified to \begin{eqnarray} (d^2k-s)^2 - 4k(1-d^2s) \geq 0\nonumber\\ \Rightarrow d^4k^2 + 2k(d^2s -2) + s^2 \geq 0 \label{quad2} \end{eqnarray} Inequality (\ref{quad2}) holds when either $k \geq \frac{2 -d^2s+ 2\sqrt{1-d^2s}}{d^4}$ or $k \leq \frac{2 -d^2s - 2\sqrt{1-d^2s}}{d^4}$.\\ \textbf{Case 1:} If $ 2-d^2s+ 2\sqrt{1-d^2s} \leq d^4k \leq d^4 $ then \begin{eqnarray} f_l(s)\leq m_1 \leq f_u(s) \label{case1} \end{eqnarray} \textbf{Case 2:} If $0 \leq d^ 4k \leq 2 -d^2s - 2\sqrt{1-d^2s}$ then \begin{eqnarray} g_l(s)\leq m_1 \leq g_u(s) \label{case2} \end{eqnarray} The functions $f_l(s)$, $f_u(s)$, $g_l(s)$, $g_u(s)$ are given as follows: \begin{eqnarray} f_l(s) &=& \frac{1}{2} (-d^2 + s) \nonumber\\&& -\frac{1}{2d^2} \sqrt{d^8 + 2d^6s + 4d^2s + d^4s^2 -8(1+\sqrt{x})}\nonumber\\ \label{e1} \end{eqnarray} \begin{eqnarray} f_u(s) &=& \frac{-1}{d^2}(x + \sqrt{x}) + \nonumber\\&& \frac{1}{2d^2}\left(\sqrt{d^8 + 2d^6s + 4d^2s +d^4s^2 -8 (1+ \sqrt{x})}\right)\nonumber\\ \label{e2} \end{eqnarray} \begin{eqnarray} g_l(s) &=& \frac{1}{d^2}\left( -x + \sqrt{x} - \sqrt{1+x-2\sqrt{x}} \right) \label{e3} \end{eqnarray} \begin{eqnarray} g_u(s) &=& \frac{s}{2} + \frac{1}{d^2} \sqrt{1+x-2\sqrt{x}} \label{e4} \end{eqnarray} where $x = 1 - d^2s$.\\ Hence, the first moment of $R(\rho_{AB})$ may be estimated using (\ref{case1}), (\ref{case2}) and (\ref{e1})--(\ref{e4}). Since the functions $f_{l}$, $f_{u}$, $g_{l}$ and $g_{u}$ are expressed in terms of $s= Tr[\widetilde{R}(\rho_{AB})P]$, the first moment of $R(\rho_{AB})$ can be estimated experimentally. \end{document}
\begin{document} \title{Robust Variable Selection Criteria for the Penalized Regression} \begin{abstract} We propose a robust variable selection procedure using a divergence based M-estimator combined with a penalty function. It produces robust estimates of the regression parameters and simultaneously selects the important explanatory variables. An efficient algorithm based on the quadratic approximation of the estimating equation is constructed. The asymptotic distribution and the influence function of the regression coefficients are derived. The widely used model selection procedures based on Mallows's $C_p$ statistic and the Akaike information criterion (AIC) often show very poor performance in the presence of heavy-tailed errors or outliers. For this purpose, we introduce robust versions of these information criteria based on our proposed method. The simulation studies show that the robust variable selection technique outperforms the classical likelihood-based techniques in the presence of outliers. The performance of the proposed method is also explored through a real data analysis. \end{abstract} \noindent{\textbf{MSC2010 subject classifications}}: 62J07, 62F35. \noindent{\textbf{Keywords}}: Penalized Variable Selection, Robust Regression, Robust Information Criterion, M-estimator, Degrees of Freedom. \section{Introduction} We address the development of a robust method for modeling and analyzing high-dimensional data in the presence of outliers. Due to advanced technology and wide sources of data collection, high-dimensional data are available in several fields, including healthcare, bioinformatics, medicine, epidemiology, economics, finance, sociology and climatology. In such data-sets, outliers are commonly encountered, generally due to heterogeneous sources or the effect of confounding variables. The standard approaches often fail to model such data and produce misleading information. The modeling approaches can also be challenged by model misspecification and heavy-tailed error distributions. Thus, a suitable robust statistical method, which can properly eliminate the effect of outliers, is essential to analyze such data. In the initial stage of modeling, generally, a large number of predictors are included to get maximum information from the data. However, in practice, very few predictors contain relevant information about the response variable. Thus, variable selection is an important topic in regression analysis when there is a large number of predictors. It enhances the predictive power of the model and reduces the chance of over-fitting. It also provides a better understanding of the underlying process that generated the data, and gives faster and more cost-effective predictors. Including many predictors in the final model unnecessarily adds noise to the estimation of the main quantities that we are interested in. Classical regression analysis is badly affected by multi-collinearity when too many variables try to do the same job in explaining the response variable. Therefore, to explore the data in the simplest way, one needs to remove redundant predictors. As a powerful tool for selecting the subset of important predictors associated with the responses, penalization plays a significant role in high-dimensional statistical modeling.
Methods that have been proposed include the bridge estimator \citep{frank1993statistical}, the least absolute shrinkage and selection operator or LASSO \citep{MR1379242}, the smoothly clipped absolute deviation or SCAD \citep{MR1946581}, the elastic net \citep{MR2137327}, the adaptive LASSO \citep{MR2279469} and the minimax concave penalty or MCP \citep{MR2604701}. The statistical properties of these methods have been extensively studied in the literature; however, most of the existing methods, such as penalized least-squares or penalized likelihood \citep{fan2011nonconcave}, are designed for light-tailed distributions. Not only do these methods break down in the presence of outliers, but the effect of outliers is also not well studied for many variable selection techniques (\citealp{MR2836768}). Therefore, robust variable selection, which can withstand the effect of outliers, is essential to model and analyze such data. In the literature, robust regularization methods such as the least absolute deviation (LAD) regression and quantile regression have been widely used for variable selection \citep{MR2424800, MR2418651}. \cite{MR2797841, MR2949353} studied the penalized quantile regression in high-dimensional sparse models where the dimensionality could be larger than the sample size. \cite{MR3025129} obtained bounds on the prediction error of a large class of $L_1$-penalized estimators, including quantile regression. \cite{MR3189488, MR2815779} introduced the penalized quantile regression with the weighted $L_1$-penalty for robust regularization. Variable selection methods based on M-estimators are addressed in (\citealp{MR2836768, MR2796868,kawashima2017robust}). In this paper, we propose a variable selection method based on the density power divergence (DPD) measure \citep{MR1665873}. A penalized variable selection method uses a regularization parameter in the penalty function, which controls the complexity of the model. A commonly used method is the cross validation technique, where the model parameters are estimated from the training data, and then the regularization parameter is selected from the remaining test data \citep{golub1979generalized}. However, if there are outliers in the data, both the estimation and testing processes may be severely affected. Therefore, the classical cross validation technique may not work properly in the presence of outliers. Moreover, the cross validation technique is computationally intensive. For the same reason, the bootstrap based methods may also fail in the presence of outliers. Another widely used approach is the use of information based criteria for model selection. Mallows's $C_p$ statistic \citep{mallows1973some}, the Akaike information criterion (AIC) \citep{MR0483125} and the Bayesian information criterion (BIC) \citep{MR0468014} play an important role in high-dimensional data analysis. Unfortunately, as most selection criteria are developed based on the ordinary least-squares (OLS) estimates, their performance under heavy-tailed errors is very poor. \cite{MR784761, MR1294082} modified the classical selection criteria using Huber's M-estimator. Subsequently, \cite{MR1045193} derived a set of useful model selection criteria based on the LAD estimates. Despite their usefulness, these LAD-based variable selection criteria have some limitations, the major one being the computational burden \citep{MR2380753}.
To address the deficiencies of traditional model selection methods, we propose two information criteria using robust estimators based on the density power divergence. The detailed theoretical derivations are provided for these methods, and their performance is verified through simulation studies and a real data example. The rest of the paper is organized as follows. Section \ref{sec:intro} gives the background of the classical penalized regression analysis. Our proposed method for the robust penalized regression is introduced in Section \ref{sec:robust_reg}. In Sections \ref{sec:algo} and \ref{sec:asymp}, we present the computation algorithm and the asymptotic distribution, respectively, of the proposed estimator. The robustness properties of the estimator are discussed from the viewpoint of the influence function analysis in Section \ref{sec:inf}. Then, in Section \ref{sec:selection}, two information criteria for model selection are proposed as robust versions of Mallows's $C_p$ statistic and the Akaike information criterion (AIC). An extensive simulation study and a real data analysis are presented to explore the effectiveness of the proposed method in Sections \ref{sec:simulatoin} and \ref{sec:data}, respectively. Some concluding remarks are given in Section \ref{sec:conc}, and the proofs and theoretical derivations are provided in the supplementary materials. \section{Classical Penalized Regression} \label{sec:intro} Suppose the pair $(y_i, {\boldsymbol{x}}_i)$ denotes the observation from the $i$-th subject, where $y_i \in \mathbb{R}$ is the response variable and ${\boldsymbol{x}}_i \in \mathbb{R}^{p+1}$ is the set of linearly independent predictors with the first element of ${\boldsymbol{x}}_i$ being one for the intercept parameter. Consider the following linear regression model: \begin{equation} y_i = {\boldsymbol{x}}_i^T {\boldsymbol{\beta}} + \epsilon_i, \ \ \ i = 1, 2, \cdots, n, \label{reg_model} \end{equation} where ${\boldsymbol{\beta}} = (\beta_0, \beta_1, \cdots, \beta_p)^T$ is the vector of regression coefficients, and $\epsilon_i$ is the random error. We assume that the error term $\epsilon_i \overset{iid}\sim N(0, \sigma^2)$. So, we have $ y_i \sim N({\boldsymbol{x}}_i^T {\boldsymbol{\beta}}, \sigma^2), \ i = 1, 2, \cdots, n. $ We define the response vector as ${\boldsymbol{y}} = (y_1, y_2, \cdots, y_n)^T$ and the design matrix as ${\boldsymbol{X}}=({\boldsymbol{x}}_1, {\boldsymbol{x}}_2, \cdots, {\boldsymbol{x}}_n)^T$. Under the classical setup when $n>p$, the OLS estimate of ${\boldsymbol{\beta}}$ is obtained by minimizing the squared error loss function $||{\boldsymbol{y}} - {\boldsymbol{X}} {\boldsymbol{\beta}}||^2$, where $||\cdot||$ is the $L_2$ norm. The solution is $\hat{{\boldsymbol{\beta}}} = ({\boldsymbol{X}}^T{\boldsymbol{X}})^{-1} {\boldsymbol{X}}^T {\boldsymbol{y}}$, which is also the maximum likelihood estimator (MLE) of ${\boldsymbol{\beta}}$. Let $\mathcal{A} = \{j: 0\leq j \leq p, \beta_j \neq 0\}$ be the set of indices where ${\boldsymbol{\beta}}$ has non-zero coefficients. In the true model, if there are $p_0$ non-zero coefficients, then $p_0 = |\mathcal{A}|$, the cardinality of $\mathcal{A}$. Without loss of generality, we assume that $\beta_j\neq 0$ for $j\leq p_0$ and $\beta_j= 0$ for $j> p_0$. The OLS estimator is unbiased for ${\boldsymbol{\beta}}$, but in small or moderate sample sizes when $p_0 < p$, it often has a large variance. On the other hand, shrinking or setting some regression coefficients to zero may improve the prediction accuracy.
In this case, we incur a small bias, but a greater reduction in the variance term is achieved. Thus, it often improves the overall mean squared error (MSE). We assume that the design matrix ${\boldsymbol{X}}$ is standardized so that $\sum_i x_{ij}/n=0$ and $\sum_i x^2_{ij}/n =1$ for all $j=2,3,\cdots, p+1$, where $x_{ij}$ is the $(i,j)$-th element of ${\boldsymbol{X}}$. Parameter shrinkage is imposed by considering a penalized loss function \begin{equation} L({\boldsymbol{\beta}}| \lambda_n) = \frac{1}{2n} ||{\boldsymbol{y}} - {\boldsymbol{X}} {\boldsymbol{\beta}} ||^2 + \sum_{j=1}^p P_{\lambda_n}(|\beta_j|), \label{loss_penelty} \end{equation} where the penalty function $P_{\lambda_n}(\cdot)$, indexed by the regularization parameter $\lambda_n>0$, controls the model complexity. We assume that $P_{\lambda_n}(t)$ is a non-decreasing function of $t$ and has a continuous derivative $P'_{\lambda_n}(t)=(\partial/\partial t) P_{\lambda_n}(t)$ in $(0,\infty)$. \cite{MR1157714} have shown that, under the further assumption $P'_{\lambda_n}(0^+)>0$, the minimizer of Equation (\ref{loss_penelty}) has the variable selection feature, i.e., some of its components are exactly zero. In general, $P_{\lambda_n}(\cdot) = \lambda_n P(\cdot)$, where $\lambda_n$ balances the bias and the variance of the estimators. For example, in the LASSO, $P_{\lambda_n}(|\beta|) = \lambda_n |\beta|$. More predictors are included as $\lambda_n \rightarrow 0^+$, producing smaller bias, but higher variance. For $\lambda_n=0$, we get the OLS estimate. On the other hand, fewer predictors stay in the model as $\lambda_n$ increases, and finally, only the intercept parameter remains when $\lambda_n$ is larger than a threshold, say $\lambda_n > \lambda_0$. Therefore, with a properly tuned $\lambda_n$, the optimum prediction accuracy is achieved. \section{Proposed Robust Penalized Regression} \label{sec:robust_reg} The density power divergence (DPD) measure between the model density $f_{\boldsymbol{\theta}}$ with parameter ${\boldsymbol{\theta}} \in \Theta$ and the empirical (or true) density $g$ is defined as \begin{equation} d_\alpha(f_{\boldsymbol{\theta}}, g) = \left\{ \begin{array}{ll} \int_y\left\{ f^{1+\alpha}_{\boldsymbol{\theta}}(y)-\left( 1+\frac{1}{\alpha}\right) f^{\alpha }_{\boldsymbol{\theta}}(y)g(y)+ \frac{1}{\alpha}g^{1+\alpha}(y)\right\} dy, & \text{for } \alpha>0, \\% [2ex] \int_y g(y)\log\left( \displaystyle\frac{g(y)}{f_{\boldsymbol{\theta}}(y)}\right) dy, & \text{for } \alpha=0, \end{array} \right. \label{dpd} \end{equation} where $\alpha$ is a tuning parameter \citep{MR1665873}. For $\alpha=0$, the DPD is obtained as the limiting case $\alpha \rightarrow 0^+$, and the measure coincides with the Kullback-Leibler divergence. Given a parametric model, we estimate ${\boldsymbol{\theta}}$ by minimizing the DPD measure with respect to ${\boldsymbol{\theta}}$ over its parameter space $\Theta$. We call this estimator the minimum density power divergence estimator (MDPDE). For $\alpha=0$, minimizing the DPD is equivalent to maximizing the log-likelihood function; thus, the MLE is a special case of the MDPDE. The tuning parameter $\alpha$ controls the trade-off between efficiency and robustness of the MDPDE: robustness increases as $\alpha$ increases, but at the same time efficiency decreases. Let ${\boldsymbol{\theta}} = ({\boldsymbol{\beta}}^T, \sigma^2)^T$ be the parameter of the regression model defined in Equation (\ref{reg_model}).
The probability density function (pdf) of $y_i$, denoted by $f_{\boldsymbol{\theta}}(y_i|{\boldsymbol{x}}_i)$ or in short $f_i$, is given by \begin{equation} f_i \equiv f_{\boldsymbol{\theta}}(y_i|{\boldsymbol{x}}_i) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left\{-\frac{1}{2\sigma^2} (y_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}} )^2 \right\}, \ \ \ i = 1, 2, \cdots, n. \label{fi} \end{equation} Suppose the data are centered and scaled in a pre-processing step. Although the penalty function does not involve $\sigma$, for notational simplicity we denote it by $P_{\lambda_n}({\boldsymbol{\theta}}) = \sum_{j=1}^p P_{\lambda_n}(|\beta_j|)$. It is obvious that the classical penalized regression analysis does not produce robust estimators due to the squared error loss function in Equation (\ref{loss_penelty}). Therefore, we propose a modified penalized loss function using the DPD measure as \begin{equation} L_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n) = \frac{1}{n}\sum_{i=1}^n d_\alpha(f_i, g_i) + P_{\lambda_n}({\boldsymbol{\theta}}), \label{cont} \end{equation} where $g_i$, $i=1, 2, \cdots, n$, are the empirical probability density functions. As we are concerned about the robustness properties of the estimators, the data should be centered and scaled using robust statistics, such as the median and the median absolute deviation (MAD). For $\alpha>0$, the loss function in Equation (\ref{cont}) is simplified as \begin{equation} L_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n) = \frac{1}{n}\sum_{i=1}^n V_i({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n, \alpha) + P_{\lambda_n}({\boldsymbol{\theta}}) + c(\alpha) , \label{cont1} \end{equation} where $c(\alpha) = \frac{1}{\alpha} \int_y g^{1+\alpha}(y) dy$, the third term of Equation (\ref{dpd}), is free of ${\boldsymbol{\theta}}$ and \begin{equation} V_i({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda,\alpha) = \frac{1}{(2\pi)^{\frac{\alpha}{2}} \sigma^\alpha \sqrt{1 + \alpha}} - \frac{1+\alpha}{ \alpha} f_i^\alpha . \label{vi} \end{equation} The MDPDEs of ${\boldsymbol{\beta}}$ and $\sigma$ are obtained by minimizing $L_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n)$ over ${\boldsymbol{\beta}} \in \mathbb{R}^{p+1}$ and $\sigma >0$. If the $i$-th observation is an outlier, the value of $f_i$ is very small compared to the other samples. When $\alpha>0$, the second term of Equation (\ref{vi}) is negligible for that $i$; thus the resulting MDPDE becomes robust against outliers. On the other hand, when $\alpha=0$, we have $V_i({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n,\alpha) = -\log(f_i)$, which diverges as $f_i \rightarrow 0$. So, the MLE breaks down in the presence of outliers as they dominate the loss function. \section{Computation Algorithm} \label{sec:algo} Let us define \begin{equation} \nabla V({\boldsymbol{\beta}}) = \frac{1}{n}\sum_{i=1}^n \frac{\partial }{\partial {\boldsymbol{\beta}}} V_i({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n,\alpha) = - \frac{1+\alpha}{n}\sum_{i=1}^n {\boldsymbol{u}}_i f_i^\alpha, \label{2nd_diff} \end{equation} where the score function $ {\boldsymbol{u}}_i = \frac{\partial}{\partial {\boldsymbol{\beta}}} \log f_i = \frac{(y_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}} )}{\sigma^2} {\boldsymbol{x}}_i, $ and $f_i$ is given in Equation (\ref{fi}). Note that $\nabla V({\boldsymbol{\beta}})$ depends on ${\boldsymbol{X}}, \lambda_n,\alpha, {\boldsymbol{\beta}}$ and $\sigma$, but for simplicity in the notation, we write it as a function of ${\boldsymbol{\beta}}$ only.
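As a minimal numerical sketch of the objective in Equation (\ref{cont1}) and the gradient in Equation (\ref{2nd_diff}) for the normal linear model with a LASSO-type penalty (assuming Python with \texttt{numpy}; the function and variable names are illustrative only, not part of a released implementation), the snippet below also shows how the factor $f_i^\alpha$ downweights gross outliers in the gradient.
\begin{verbatim}
import numpy as np

def dpd_loss(beta, sigma, y, X, alpha, lam):
    # (1/n) sum_i V_i + lambda * sum_j |beta_j|   (alpha > 0; intercept unpenalized)
    r = y - X @ beta
    f = np.exp(-0.5 * (r / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)   # f_i
    V = 1.0 / ((2 * np.pi) ** (alpha / 2) * sigma ** alpha * np.sqrt(1 + alpha)) \
        - (1 + alpha) / alpha * f ** alpha                               # V_i
    return V.mean() + lam * np.abs(beta[1:]).sum()

def dpd_grad_beta(beta, sigma, y, X, alpha):
    # gradient of the first term: -(1+alpha)/n * sum_i u_i f_i^alpha
    r = y - X @ beta
    f = np.exp(-0.5 * (r / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    u = (r / sigma ** 2)[:, None] * X                                    # score u_i
    return -(1 + alpha) / len(y) * (f ** alpha) @ u

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
beta_true = np.array([1.0, 2.0, 0.0, -1.5])
y = X @ beta_true + 0.5 * rng.standard_normal(n)
y[:3] += 20.0                                   # three gross outliers
print("loss    :", dpd_loss(beta_true, 0.5, y, X, alpha=0.5, lam=0.1))
print("gradient:", np.round(dpd_grad_beta(beta_true, 0.5, y, X, alpha=0.5), 4))
# the outliers contribute almost nothing to the gradient because f_i^alpha ~ 0
\end{verbatim}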
The estimating equations for the MDPDEs of ${\boldsymbol{\beta}}$ and $\sigma$ are given by: \begin{align} - \frac{1+\alpha}{n}\sum_{i=1}^n {\boldsymbol{u}}_i f_i^\alpha + P'_{\lambda_n}({\boldsymbol{\beta}}) &= 0, \label{est_beta}\\ -\frac{\alpha}{(2\pi)^{\alpha/2} \sigma^\alpha \sqrt{1+\alpha}} + \frac{1+\alpha}{n} \sum_{i=1}^n &\left\{1 - \frac{(y_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}} )^2}{\sigma^2} \right\} f_i^\alpha = 0, \label{est_sigma} \end{align} where $P'_{\lambda_n}({\boldsymbol{\beta}}) = \frac{\partial}{\partial {\boldsymbol{\beta}}} P_{\lambda_n}({\boldsymbol{\theta}})$ is the vector with components $P'_{\lambda_n}(|\beta_j|)\,\mathrm{sgn}(\beta_j)$, $j=1,\cdots,p$. Equations (\ref{est_beta}) and (\ref{est_sigma}) form a system of $(p+2)$ non-linear equations, which may be difficult to solve. Following \cite{MR2395832}, we approximate the first term of the loss function in Equation (\ref{cont1}) by a quadratic function of ${\boldsymbol{\beta}}$. Differentiating Equation (\ref{2nd_diff}), we get \begin{equation} \nabla^2 V({\boldsymbol{\beta}}) = - \frac{1+\alpha}{n}\sum_{i=1}^n \left( \alpha {\boldsymbol{u}}_i {\boldsymbol{u}}_i^T f_i^\alpha + \nabla {\boldsymbol{u}}_i f_i^\alpha \right), \label{2nd} \end{equation} where $ \nabla {\boldsymbol{u}}_i = - \frac{1}{\sigma^2} {\boldsymbol{x}}_i {\boldsymbol{x}}_i^T $. Now, $\nabla^2 V({\boldsymbol{\beta}})$ is a positive semi-definite matrix and can be decomposed as ${\boldsymbol{Z}}^T {\boldsymbol{Z}}$, where ${\boldsymbol{Z}}$ is a $(p+1)\times (p+1)$ matrix. Let us define $\boldsymbol{Y}^* = ({\boldsymbol{Z}}^T)^{-1} ( \nabla^2 V({\boldsymbol{\beta}}) {\boldsymbol{\beta}} - \nabla V({\boldsymbol{\beta}}) )$. Then, the loss function in Equation (\ref{cont1}) can be approximated by: \begin{equation} L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) \approx \frac{1}{2n} ||\boldsymbol{Y}^* - {\boldsymbol{Z}}{\boldsymbol{\beta}}||^2 + P_{\lambda_n}({\boldsymbol{\theta}}) + c(\alpha) . \label{cont2} \end{equation} Therefore, the MDPDE of ${\boldsymbol{\beta}}$ can be obtained iteratively using the existing package of the corresponding penalized regression, e.g., we may use the LARS algorithm in the case of the LASSO penalty \citep{MR2060166}. The estimation procedure is given in Algorithm \ref{algo:MDPDE}. \begin{algorithm} \caption{Computation of MDPDEs of ${\boldsymbol{\beta}}$ and $\sigma$} \begin{algorithmic}[1] \State Choose tuning parameters $\alpha$ and $\lambda_n$. \State {\bf Pre-processing:} Center and scale data using robust statistics. \State {\bf Initialization:} Initialize ${\boldsymbol{\beta}}$ and $\sigma$ by the OLS or any robust estimator. \While {the estimators of ${\boldsymbol{\beta}}$ and $\sigma$ have not converged} \State Compute $\nabla V({\boldsymbol{\beta}}), \nabla^2 V({\boldsymbol{\beta}}), {\boldsymbol{Z}}$ and $\boldsymbol{Y}^*$. \State Update ${\boldsymbol{\beta}}$ by minimizing Equation (\ref{cont2}). \State Update $\sigma$ by solving Equation (\ref{est_sigma}) or minimizing Equation (\ref{cont1}). \EndWhile \State {\bf Post-processing:} Unstandardize ${\boldsymbol{\beta}}$ and $\sigma$ by inverting the pre-processing step. \end{algorithmic} \label{algo:MDPDE} \end{algorithm} \section{Asymptotic Distribution of the MDPDE} \label{sec:asymp} Suppose $g$ is the true data generating distribution, whereas $f_{\boldsymbol{\theta}}$ with ${\boldsymbol{\theta}} \in \Theta$ is the family containing the model distributions. We define $f_i = f_{{\boldsymbol{\theta}}}(\cdot|{\boldsymbol{x}}_i)$ and $g_i = g(\cdot|{\boldsymbol{x}}_i)$ for $i=1,2, \cdots, n$.
Let ${\boldsymbol{\theta}}_g= ({\boldsymbol{\beta}}_g^T, \sigma_g^2)^T$ be the value of ${\boldsymbol{\theta}}$ that minimizes $\sum_i d_\alpha(f_i, g_i)$ over ${\boldsymbol{\theta}} \in \Theta$. In Section \ref{sec:robust_reg}, $g_i$ is referred to as an empirical pdf, and the resulting minimizer of $\sum_i d_\alpha(f_i, g_i)$ produces the non-penalized MDPDE. Notice that ${\boldsymbol{\theta}}_g$ is the true value of the parameter if the model is correctly specified. However, it is not necessary that $g$ is a member of the model family. In that case, $f_{{\boldsymbol{\theta}}_g}$ is the closest density function to $g$ with respect to the DPD measure $\sum_i d_\alpha(f_i, g_i)$ over ${\boldsymbol{\theta}} \in \Theta$. We assume that ${\boldsymbol{\beta}}_g$ is sparse, and the set corresponding to the non-zero elements is given by $\mathcal{A} = \{j: 0\leq j \leq p, \beta_{gj} \neq 0\}$, where $|{\mathcal{A}}| = p_1 \leq p+1$. Let us define ${\boldsymbol{\beta}}_\mathcal{A}$ as the vector obtained from ${\boldsymbol{\beta}}_g$ by selecting the elements corresponding to the set $\mathcal{A}$. The remaining part of ${\boldsymbol{\beta}}_g$ is called ${\boldsymbol{\beta}}_{\bar{{\mathcal{A}}}}$. So, ${\boldsymbol{\beta}}_{\bar{{\mathcal{A}}}}=\mathbf{0}$, the $(p+1-p_1)$-dimensional zero vector. Suppose $\hat{{\boldsymbol{\theta}}} = (\hat{{\boldsymbol{\beta}}}^T, \hat{\sigma}^2)^T$ is the MDPDE of ${\boldsymbol{\theta}}$ obtained by minimizing the loss function defined in Equation (\ref{cont1}). We also partition $\hat{{\boldsymbol{\beta}}}$ as $\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}$ and $\hat{{\boldsymbol{\beta}}}_{\bar{{\mathcal{A}}}}$, where $\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}$ is a $p_1$-dimensional vector. Similarly, ${\boldsymbol{X}}$ is partitioned as ${\boldsymbol{X}}_{\mathcal{A}}$ and ${\boldsymbol{X}}_{\bar{{\mathcal{A}}}}$, where ${\boldsymbol{X}}_{\mathcal{A}}$ is a matrix of dimension $n\times p_1$. Let us define $\mathbf{\Sigma}_{\mathcal{A}} = \displaystyle { \lim_{n\rightarrow \infty} \frac{1}{n} {\boldsymbol{X}}_{\mathcal{A}}^T {\boldsymbol{X}}_{\mathcal{A}}}$ and \begin{equation} \xi_\alpha = (2\pi)^{-\frac{\alpha}{2}} \sigma^{-(\alpha+2)} (1+\alpha)^{-\frac{3}{2}} \mbox{ and } \eta_\alpha = \frac{1}{4}(2\pi)^{-\frac{\alpha}{2}} \sigma^{-(\alpha+4)} \frac{2 + \alpha^2}{(1+\alpha)^{\frac{5}{2}}} . \label{xi} \end{equation} The asymptotic distribution of the non-penalized MDPDE is derived in \cite{MR3117102} without assuming a sparse representation of the true model. \cite{MR2796868} derived the asymptotic distribution of an M-estimator where the dimension ($p_n$) of the predictors increases with the sample size. The MDPDE is a special case of an M-estimator, but here we assume that the dimension of the predictors is fixed. To derive the asymptotic distribution of the MDPDE, we require assumptions (A1)--(A7) of \cite{MR3117102} and some selected assumptions from \cite{MR2796868} as follows: \begin{itemize} \item[(C1)] $\max_j\{P'_{\lambda_n}(\beta_j): j \in {\mathcal{A}}\} = O(n^{-1/2})$, where ${\boldsymbol{\beta}}_{\mathcal{A}}=(\beta_1, \beta_2, \cdots, \beta_{p_1})^T$. \item[(C2)] $\max_j\{P''_{\lambda_n}(\beta_j): j \in {\mathcal{A}}\} \rightarrow 0$ as $n \rightarrow \infty$. \item[(C3)] $\displaystyle \liminf_{n\rightarrow \infty} \liminf_{\beta \rightarrow 0+} P'_{\lambda_n}(\beta)/\lambda_n > 0.$ \item[(C4)] There exist two constants $C$ and $D$ such that $|P''_{\lambda_n}(\beta_1) - P''_{\lambda_n}(\beta_2)| \leq D|\beta_1 - \beta_2|$, if $\beta_1, \beta_2 > C\lambda_n$.
\item[(C5)] Let $d_n^2 = \max_i {\boldsymbol{x}}_i^T {\mathbf S}_n^{-1} {\boldsymbol{x}}_i$ where ${\mathbf S}_n = {\boldsymbol{X}}^T {\boldsymbol{X}}$. For large $n$, there exists a constant $s>0$ such that $d_n \leq s n^{-1/2}$. \end{itemize} \begin{theorem} Assume that the regularity conditions (A1)--(A7) of \cite{MR3117102} and (C1)--(C5) hold. Then, the asymptotic distributions of the MDPDEs $\hat{{\boldsymbol{\beta}}} = (\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}, \hat{{\boldsymbol{\beta}}}_{\bar{{\mathcal{A}}}})^T$ and $\hat{\sigma}^2$ have the following properties. \begin{enumerate} \item Sparsity: $\hat{{\boldsymbol{\beta}}}_{\bar{{\mathcal{A}}}} = \mathbf{0}$ with probability tending to 1. \item Asymptotic Normality of $\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}$: $\sqrt{n} (\hat{{\boldsymbol{\beta}}}_{\mathcal{A}} - {\boldsymbol{\beta}}_{\mathcal{A}} ) \overset{a}{\sim} N\left(\bo{b}, \frac{\xi_{2\alpha}}{\xi_{\alpha}^2} \mathbf{\Sigma}_{\mathcal{A}}^{-1}\right)$, where $\bo{b} = \frac{\sqrt{\xi_{2\alpha}}}{\xi_{\alpha}} \mathbf{\Sigma}_{\mathcal{A}}^{-1/2} \lim_{n\rightarrow \infty} P'_{\lambda_n}({\boldsymbol{\beta}}_{\mathcal{A}})$. \item Asymptotic Normality of $\hat{\sigma}^2$: $ \sqrt{n} (\hat{\sigma}^2 - \sigma_g^2) \overset{a}{\sim} N(0, \sigma_\alpha^2), \mbox{ where } \sigma_\alpha^2 = \frac{\eta_{2\alpha} - \frac{\alpha^2}{4} \xi_\alpha^2}{ \eta_\alpha^2} . $ \item Independence: $\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}$ and $\hat{\sigma}^2$ are asymptotically independent. \end{enumerate} \label{theorem:asymp} \end{theorem} The theorem ensures that, for large sample sizes, our procedure correctly drops the variables that do not have any significant contribution to the true model. So, the method selects variables consistently. Moreover, the estimators of the nonzero coefficients $(\hat{{\boldsymbol{\beta}}}_{\mathcal{A}})$ have the same asymptotic distribution as they would if the zero coefficients $({\boldsymbol{\beta}}_{\bar{{\mathcal{A}}}})$ were known in advance. But the penalized MDPDE is a biased estimator. This feature is also observed in other penalized estimators. An asymptotically unbiased estimator of ${\boldsymbol{\beta}}_{\mathcal{A}}$ is $\hat{{\boldsymbol{\beta}}}_{\mathcal{A}} - \bo{b}/\sqrt{n}$. One important use of the asymptotic distribution of the penalized MDPDE is in selecting the optimum value of the DPD parameter $\alpha$. In practice, $\alpha$ is chosen by the user depending on the desired level of robustness at the cost of efficiency. Alternatively, following \cite{Warjones}, one may minimize the mean squared error (MSE) of $\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}$ to obtain the optimum value of $\alpha$ adaptively. The empirical estimate of the MSE, as a function of a pilot estimator ${\boldsymbol{\beta}}^P_{\mathcal{A}}$, is given by \begin{equation} \widehat{MSE}(\alpha) = (\hat{{\boldsymbol{\beta}}}_{\mathcal{A}} - {\boldsymbol{\beta}}^P_{\mathcal{A}})^T (\hat{{\boldsymbol{\beta}}}_{\mathcal{A}} - {\boldsymbol{\beta}}^P_{\mathcal{A}}) +\frac{\xi_{2\alpha}}{\xi_{\alpha}^2}\tr({\boldsymbol{X}}_{\mathcal{A}}^T {\boldsymbol{X}}_{\mathcal{A}})^{-1}. \label{adaptive_alpha} \end{equation} In particular, we recommend that a robust estimator, such as Huber's or Tukey's M-estimator, be used as the pilot estimator. This method is implemented to calculate the optimum value of $\alpha$ for the real data example in Section \ref{sec:data}.
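A short sketch of this adaptive selection of $\alpha$ via Equation (\ref{adaptive_alpha}) is given below (assuming Python with \texttt{numpy}; the pilot estimate and the per-$\alpha$ estimate are hypothetical stand-ins, since in practice $\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}$ would be recomputed by Algorithm \ref{algo:MDPDE} for each candidate $\alpha$).
\begin{verbatim}
import numpy as np

def xi(alpha, sigma):
    # xi_alpha of Equation (xi)
    return (2 * np.pi) ** (-alpha / 2) * sigma ** (-(alpha + 2)) * (1 + alpha) ** (-1.5)

def mse_hat(beta_hat_A, beta_pilot_A, X_A, alpha, sigma):
    diff = beta_hat_A - beta_pilot_A
    var_term = xi(2 * alpha, sigma) / xi(alpha, sigma) ** 2 \
               * np.trace(np.linalg.inv(X_A.T @ X_A))
    return diff @ diff + var_term

rng = np.random.default_rng(1)
X_A = rng.standard_normal((100, 4)); sigma = 1.0
beta_pilot = np.array([1.0, 2.0, -1.5, 0.5])     # e.g., a Huber/Tukey M-fit
beta_hat = beta_pilot + 0.02                     # stand-in for the MDPDE at each alpha
alphas = np.linspace(0.05, 1.0, 20)
scores = [mse_hat(beta_hat, beta_pilot, X_A, a, sigma) for a in alphas]
print("alpha minimizing the estimated MSE:", alphas[int(np.argmin(scores))])
# with a fixed stand-in the bias term is constant, so only the variance factor
# xi_{2a}/xi_a^2 drives the choice; refitting beta_hat(alpha) changes this.
\end{verbatim}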
\section{Influence Function}\label{sec:inf} In this section, we present the influence function following the approach of Huber \citep{MR606374}. It measures the effect of extreme outliers on the estimator. Let $f_{{\boldsymbol{\theta}}}, {\boldsymbol{\theta}} \in \Theta$ be the family of the target densities, where ${\boldsymbol{\theta}}_g$ is the true value of ${\boldsymbol{\theta}}$. We denote $f_i = f_{\boldsymbol{\theta}}(\cdot|{\boldsymbol{x}}_i)$ for $i=1,2,\cdots, n$. Suppose the true data generating distribution $g_\tau$ has a proportion $\tau$ of contamination from $T$, where $T$ is either a fixed point or a random variable. Then, the true density given ${\boldsymbol{x}}_i$ is written as $g_{\tau, i} = (1-\tau)f_i + \tau \delta_{t_i}$, where $\delta_{t_i}$ is the point-mass density function of $T$ at $t_i$. Here, $t_i$ is a realization of $T$ for $i=1, 2, \cdots, n$. Suppose the true value of the parameter ${\boldsymbol{\theta}}_g$ is now shifted to ${\boldsymbol{\theta}}_{\tau, g}$ due to the contamination. So, for large sample sizes, $f_{{\boldsymbol{\theta}}_{\tau, g}}$ is the closest model density to the contaminated distribution with respect to the DPD measure $\sum_i d_\alpha(f_i, g_{\tau, i})$ over ${\boldsymbol{\theta}} \in \Theta$. The influence function is defined by $IF({\boldsymbol{\theta}}_g, \boldsymbol{t}) = \frac{\partial{\boldsymbol{\theta}}_{\tau, g}}{\partial \tau}|_{\tau = 0}$, where $\boldsymbol{t}=(t_1, t_2, \cdots, t_n)^T$. It gives the rate of change of the asymptotic bias of an estimator under infinitesimal contamination of the distribution. A bounded influence function suggests that the corresponding estimator is robust against extreme outliers. Let us define \begin{equation} \Psi_n = \left( \begin{array}{c c} \frac{\xi_\alpha}{n} {\boldsymbol{X}}^T {\boldsymbol{X}} & 0\\ 0 & \eta_\alpha \end{array} \right), \ \ \Psi = \left( \begin{array}{c c} \xi_\alpha \bo{\Sigma} & 0\\ 0 & \eta_\alpha \end{array} \right), \label{omega} \end{equation} where $\mathbf{\Sigma} = \displaystyle { \lim_{n\rightarrow \infty} \frac{1}{n} {\boldsymbol{X}}^T {\boldsymbol{X}}}$ and $\xi_{\alpha}$ and $\eta_\alpha$ are defined in Equation (\ref{xi}). The following theorem gives the influence function of the MDPDE. \begin{theorem} The influence function of the MDPDE for $\alpha>0$ is given by \begin{equation} IF({\boldsymbol{\theta}}_g, \boldsymbol{t}) = \left[\Psi_n + \frac{1}{1+\alpha} P''_{\lambda_n}({\boldsymbol{\theta}}_g) \right]^{-1} \frac{1}{n}\sum_{i=1}^n \left( \begin{array}{c} \frac{t_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}}_g }{(2\pi)^{\alpha/2}\sigma_g^{\alpha +2}} \exp\left[ - \frac{(t_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}}_g )^2}{2\sigma_g^2}\right] {\boldsymbol{x}}_i\\ \frac{(t_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}}_g )^2 - \sigma_g^2}{2(2\pi)^{\alpha/2}\sigma_g^{\alpha +4}} \exp\left[ - \frac{(t_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}}_g )^2}{2\sigma_g^2}\right] -\frac{\alpha }{2} \eta_\alpha \end{array} \right). \end{equation} \label{theorem:influence} \end{theorem} Under assumption (C2), $\displaystyle {\lim_{n\rightarrow \infty} \left[\Psi_n + \frac{1}{1+\alpha} P''_{\lambda_n}({\boldsymbol{\theta}}_g)\right] = \Psi}$. So, for large sample sizes, the penalty function does not play any role in the robustness of the MDPDE. We observe that $IF({\boldsymbol{\theta}}_g, \boldsymbol{t})$ is bounded for all $\alpha>0$ as $\exp(-x^2)$, $x\exp(-x^2)$ and $x^2\exp(-x^2)$ are bounded functions for $x \in \mathbb{R}$.
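A quick numerical check of this boundedness (a sketch assuming Python with \texttt{numpy}; the residual values are arbitrary illustrative inputs) shows that the contamination enters the influence function only through redescending terms that vanish as the residual of the contaminating point grows.
\begin{verbatim}
import numpy as np

sigma = 1.0
r = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 50.0])   # residual t_i - x_i^T beta_g
w = np.exp(-r ** 2 / (2 * sigma ** 2))
print("r * exp(-r^2/(2s^2))        :", np.round(r * w, 6))
print("(r^2 - s^2) exp(-r^2/(2s^2)):", np.round((r ** 2 - sigma ** 2) * w, 6))
# both sequences decay to 0, so IF(theta_g, t) remains bounded for alpha > 0
\end{verbatim}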
For this reason, the penalized MDPDEs of ${\boldsymbol{\beta}}$ and $\sigma$ are robust against outliers. On the other hand, it is well known that the OLS estimator (corresponding to $\alpha=0$) is non-robust as its influence function is unbounded. In the simulation study, we further explore the robustness properties of the penalized MDPDE. \section{Robust Model Selection Criteria}\label{sec:selection} The model selection criterion plays a key role in choosing the best model for high-dimensional data analysis. In a regression setting, it is well known that omitting an important explanatory variable may produce severe bias in the parameter estimates and prediction results. On the other hand, including unnecessary predictors may degrade the efficiency of the resulting estimation and yield less accurate predictions. Hence, selecting the best model based on a finite sample is always a problem of interest for both theory and application in this field. There are several important and widely used selection criteria, e.g., Mallows's $C_p$ statistic \citep{mallows1973some} and the Akaike information criterion (AIC) \citep{MR0483125}. However, those selection criteria are based on the classical estimators, so they show very poor performance in the presence of heavy-tailed errors and outliers. To overcome this deficiency, we propose robust versions of those methods to select the best sub-model by choosing the optimum value of the regularization parameter $\lambda_n$. \subsection{Robust $C_p$ Statistic and Degrees of Freedom} \label{sec_df} Suppose that, for the sub-model, the true selection set is given by $\mathcal{A} = \{j: 0\leq j \leq p, \beta_j \neq 0\}$. Let us define ${\boldsymbol{X}}_\mathcal{A}$ as the matrix obtained from ${\boldsymbol{X}}$ by selecting the columns corresponding to the set $\mathcal{A}$. Similarly, $\hat{{\boldsymbol{\beta}}}_\mathcal{A}$ and ${\boldsymbol{\beta}}_\mathcal{A}$ are defined based on the set ${\mathcal{A}}$. We further define \begin{equation} J_\mathcal{A} = (\hat{{\boldsymbol{\beta}}}_\mathcal{A} - {\boldsymbol{\beta}}_\mathcal{A})^T {\boldsymbol{X}}_\mathcal{A}^T {\boldsymbol{X}}_\mathcal{A} (\hat{{\boldsymbol{\beta}}}_\mathcal{A} - {\boldsymbol{\beta}}_\mathcal{A}). \end{equation} Following \cite{mallows1973some}, we consider $\frac{1}{\sigma^2} E[J_\mathcal{A}]$ as a measure of prediction adequacy. Let $RSS_\mathcal{A}$ be the residual sum of squares for the sub-model.
Then, if the sub-model is true, we have \begin{align} E(RSS_\mathcal{A}) &= E\left[({\boldsymbol{y}} - {\boldsymbol{X}}_\mathcal{A} \hat{{\boldsymbol{\beta}}}_\mathcal{A})^T ({\boldsymbol{y}} - {\boldsymbol{X}}_\mathcal{A} \hat{{\boldsymbol{\beta}}}_\mathcal{A})\right]\nonumber\\ &= E\left[({\boldsymbol{y}} - {\boldsymbol{X}}_\mathcal{A} {\boldsymbol{\beta}}_\mathcal{A})^T ({\boldsymbol{y}} - {\boldsymbol{X}}_\mathcal{A} {\boldsymbol{\beta}}_\mathcal{A})\right] - 2 E[({\boldsymbol{y}} - {\boldsymbol{X}}_\mathcal{A} {\boldsymbol{\beta}}_\mathcal{A})^T {\boldsymbol{X}}_\mathcal{A} (\hat{{\boldsymbol{\beta}}}_\mathcal{A} - {\boldsymbol{\beta}}_\mathcal{A})] \nonumber\\ & \ \ \ \ \ \ \ \ + E[(\hat{{\boldsymbol{\beta}}}_\mathcal{A} - {\boldsymbol{\beta}}_\mathcal{A})^T {\boldsymbol{X}}_\mathcal{A}^T {\boldsymbol{X}}_\mathcal{A} (\hat{{\boldsymbol{\beta}}}_\mathcal{A} - {\boldsymbol{\beta}}_\mathcal{A})]\nonumber\\ & = n\sigma^2 - 2 \sigma^2 df + E(J_\mathcal{A}), \label{rcp} \end{align} where $df = \frac{1}{\sigma^2} E[({\boldsymbol{y}} - {\boldsymbol{X}}_\mathcal{A} {\boldsymbol{\beta}}_\mathcal{A})^T {\boldsymbol{X}}_\mathcal{A} (\hat{{\boldsymbol{\beta}}}_\mathcal{A} - {\boldsymbol{\beta}}_\mathcal{A})]$ is called the degrees of freedom or the ``effective number of parameters'' of the regression model. \begin{lemma} If the sub-model is true, the degrees of freedom is expressed as \begin{equation} df = \frac{\xi_\alpha }{n} \tr ( {\boldsymbol{X}}_\mathcal{A} {\boldsymbol{S}_{{\mathcal{A}} n}}^{-1} {\boldsymbol{X}}^T_\mathcal{A}) + o(1), \label{df} \end{equation} where \begin{equation} {\boldsymbol{S}_{{\mathcal{A}} n}} = \frac{\xi_\alpha }{n} {\boldsymbol{X}}^T_\mathcal{A} {\boldsymbol{X}}_\mathcal{A} + \frac{1}{1 + \alpha} P''_{\lambda_n}({\boldsymbol{\beta}}_\mathcal{A}), \label{A} \end{equation} and $\xi_\alpha$ is given in Equation (\ref{xi}). \label{lemma_cp} \end{lemma} The proof of the lemma is given in the supplementary materials. Using this lemma, we estimate $\frac{1}{\sigma^2} E[J_\mathcal{A}]$ from Equation (\ref{rcp}). We denote it by $RC_p$, the robust $C_p$ statistic. So, the $RC_p$ is given by \begin{equation} RC_p = \frac{1}{\hat{\sigma}_u^2} RSS_\mathcal{A} - n + \frac{2 \hat{\xi_\alpha} }{n} \tr ( {\boldsymbol{X}}_\mathcal{A} {\boldsymbol{S}_{{\mathcal{A}} n}}^{-1} {\boldsymbol{X}}^T_\mathcal{A}), \end{equation} where $\hat{\xi_\alpha}$ is the estimate of $\xi_\alpha$ obtained from Equation (\ref{xi}). Here, $\hat{\sigma}_u$ is a robust and unbiased estimator of $\sigma$, preferably obtained from the full model with $\lambda_n=0$. The optimum value of the penalty parameter $\lambda_n$ is obtained by minimizing $RC_p$ using an iterative algorithm. Note that $RC_p$ is a function of $\lambda_n$, as $\lambda_n$ controls the selection set $\mathcal{A}$ and the estimates of the parameters. More specifically, $\hat{\xi_\alpha}$ and $RSS_\mathcal{A}$ are computed using the penalized MDPDE that involves $\lambda_n$. Now, $RSS_\mathcal{A}$ is outlier sensitive, so, using Theorem \ref{theorem:asymp}, it is replaced by a consistent estimator $n \hat{\sigma}^2$, where $\hat{\sigma}$ is the penalized MDPDE of $\sigma$ under the sub-model. Using assumption (C2), we have $\lim_{n\rightarrow \infty} {\boldsymbol{S}_{{\mathcal{A}} n}} = \xi_{\alpha} \bo{\Sigma}_\mathcal{A}$, where $\bo{\Sigma}_\mathcal{A}= \lim_{n\rightarrow \infty} \frac{1 }{n} {\boldsymbol{X}}^T_\mathcal{A} {\boldsymbol{X}}_\mathcal{A}$. So, asymptotically the degrees of freedom simplifies to $df=|{\mathcal{A}}|$, the number of non-zero regression coefficients.
Therefore, for large sample sizes, $ RC_p = \frac{n \hat{\sigma}^2}{\hat{\sigma}_u^2} - n + 2|{\mathcal{A}}|, $ which has the same form as the classical $C_p$ statistic, but with the classical estimators replaced by suitable penalized MDPDEs.
\subsection{Robust AIC} \label{sec_raic}
In this section, we assume that the true density $g$ belongs to the family of model densities, i.e.\ $g = f_{{\boldsymbol{\theta}}_g}$ for some ${\boldsymbol{\theta}}_g \in \Theta$. Suppose the penalty function $P_{\lambda_n}(\cdot)$ in Equation (\ref{cont}) creates a sub-model where the set of non-zero elements of ${\boldsymbol{\beta}}$ is given by $\mathcal{A} = \{j: 0\leq j \leq p, \beta_j \neq 0\}$ and $|{\mathcal{A}}| = p_1 \leq p+1$. Let ${\boldsymbol{\theta}}_{\mathcal{A}}$ be the value of ${\boldsymbol{\theta}} \in \Theta_{\mathcal{A}}$ under the restricted sub-model that minimizes $E(L_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n))$. In this section, all expectations are calculated under the generating model $g$. Suppose $\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}$ is the penalized MDPDE of ${\boldsymbol{\theta}}_{\mathcal{A}}$ for a given value of $\alpha$. For $\alpha=0$, $\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}$ is the MLE of ${\boldsymbol{\theta}}_{\mathcal{A}}$ and $d_0(f_{\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}}, f_{{\boldsymbol{\theta}}_g})$ becomes the Kullback-Leibler distance between the two densities $f_{\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}}$ and $f_{{\boldsymbol{\theta}}_g}$. The classical AIC minimizes the estimate of $E[d_0(f_{\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}}, f_{{\boldsymbol{\theta}}_g})]$ assuming that ${\boldsymbol{\theta}}_{\mathcal{A}}$ lies very close to ${\boldsymbol{\theta}}_g$ (\citealp{MR1458291}). To make the procedure robust against outliers, we minimize the estimate of $E[d_\alpha(f_{\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}}, f_{{\boldsymbol{\theta}}_g})]$ using a suitable value of $\alpha$. For a random sample of size $n$, our goal is to find the optimum value of $\lambda_n$ that produces the best sub-model. Let us define
\begin{equation}
{\boldsymbol{\Sigma}}^*_{\mathcal{A}} = \left( \begin{array}{c c}
\frac{\xi_{2\alpha}}{\xi_{\alpha}^2} \mathbf{\Sigma}_{\mathcal{A}}^{-1} & 0\\
0 & \sigma_\alpha^2
\end{array} \right), \ \
\Psi_{\mathcal{A}} = \left( \begin{array}{c c}
\xi_\alpha \bo{\Sigma}_{\mathcal{A}} & 0\\
0 & \eta_\alpha
\end{array} \right), \ \
\bo{b}^* = \left( \begin{array}{c }
\bo{b}\\
0
\end{array} \right),
\label{sigma_star}
\end{equation}
where $\bo{\Sigma}_{\mathcal{A}}, \ \sigma_\alpha, \ \xi_\alpha, \ \eta_\alpha$ and $\bo{b}$ are given in Theorem \ref{theorem:asymp}. Note that ${\boldsymbol{\Sigma}}^*_{\mathcal{A}}$ and $\bo{b}^*$ are, respectively, the asymptotic variance-covariance matrix and the asymptotic bias of $\sqrt{n}\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}$. The following theorem gives the expression of the robust AIC.
\begin{theorem}
Suppose the regularity conditions of Theorem \ref{theorem:asymp} hold true.
Then, the robust Akaike information criterion (AIC) is defined by
\begin{equation}
RAIC = \left\{
\begin{array}{ll}
- \frac{1+\alpha}{ \alpha} \sum_{i=1}^n f_i^\alpha + \tr\left[(\hat{{\boldsymbol{\Sigma}}}_{\mathcal{A}}^* + \hat{\bo{b}}^{*} \hat{\bo{b}}^{*T}) \left\{ \hat{\Psi}_{\mathcal{A}} + \frac{1}{1+\alpha} P''_{\lambda_n}(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}) \right\}\right], & \text{for } \alpha>0, \\
- \sum_{i=1}^n \log(f_i) + \tr\left[(\hat{{\boldsymbol{\Sigma}}}_{\mathcal{A}}^* + \hat{\bo{b}}^{*} \hat{\bo{b}}^{*T}) \left\{ \hat{\Psi}_{\mathcal{A}} + P''_{\lambda_n}(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}) \right\}\right], & \text{for } \alpha=0,
\end{array}
\right.
\end{equation}
where $\hat{{\boldsymbol{\Sigma}}}^*_{\mathcal{A}}$, $\hat{\bo{b}}^*$, $\hat{\Psi}_{\mathcal{A}}$, $\hat{\xi}_{\alpha}$, and $\hat{\xi}_{2\alpha}$ are the estimates of ${\boldsymbol{\Sigma}}^*_{\mathcal{A}}$, $\bo{b}^*$, $\Psi_{\mathcal{A}}$, $\xi_{\alpha}$, and $\xi_{2\alpha}$, respectively.
\label{theorem:aic}
\end{theorem}
The derivation of the robust AIC is given in the supplementary materials. For a given $\alpha$, we fit a set of candidate models over a grid of $\lambda_n$ values, taken on the log-scale over a suitable interval. The optimum value of the penalty parameter $\lambda_n$ is then selected as the one that minimizes the RAIC.
\iffalse
\subsection{Robust BIC}
The robust Bayesian information criterion (BIC) is defined by
\begin{equation}
RBIC = - n \widehat{d}_\alpha(f_{\hat{{\boldsymbol{\theta}}}}, f_{{\boldsymbol{\theta}}_0}) + \frac{p^*}{2} \log(2\pi) - \frac{1}{2} \log(\det(S)) - \frac{p^*}{2} \log n,
\label{GBIC}
\end{equation}
where $p^*$ is the number of non-zero regression coefficients, $\widehat{d}_\alpha(f_{\hat{{\boldsymbol{\theta}}}}, f_{{\boldsymbol{\theta}}_0})$ is defined in Equation (\ref{d_hat}) and $det(S)$ is the determinant of $S$, defined in Equation (\ref{A}). For large $n$, the first and the last term dominate in Equation (\ref{GBIC}), so the RBIC is simplified as
\begin{equation}
RBIC = - n \widehat{d}_\alpha(f_{\hat{\theta}}, f_{\theta_0}) - \frac{p^*}{2} \log n.
\label{GBIC1}
\end{equation}
For $\alpha=0$, the RBIC turns out to be the classical BIC.
\fi
\section{Simulation Study} \label{sec:simulatoin}
We present an extensive simulation study to demonstrate the advantages of our proposed method. We simulated data from the regression model given in Equation (\ref{reg_model}) with $p=25$ predictors. A sparse regression coefficient ${\boldsymbol{\beta}}$ with 60\% null components (i.e.\ $\beta_j=0$) is considered for this study. The value of the intercept parameter is fixed at $\beta_0=1$. At first, around half of the non-null regression coefficients are independently generated from the uniform distribution $U(1,2)$ and the other half are taken from $U(-2,-1)$. The regression coefficients are then kept fixed for all simulations. The regressor variables are generated from a multivariate normal distribution, where each individual component $X_j$ follows the standard normal distribution. To introduce dependency among the regressor variables, the structure of the covariance matrix is taken as the first-order auto-regressive model, AR(1), with correlation coefficient $\rho = 0.5$. The values of $\sigma$, the standard deviation of the error term in Model (\ref{reg_model}), are chosen in such a way that the signal-to-noise ratios (SNR) are 1 and 10 in Figure \ref{fig:mse} for plots (a) and (b), respectively.
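To make the simulation design concrete, the data-generating mechanism described above can be sketched as follows. This is our own illustrative Python/NumPy code, not code from the paper; in particular, the helper name and the convention $\mathrm{SNR} = \mathrm{Var}({\boldsymbol{x}}^T{\boldsymbol{\beta}})/\sigma^2$ used to set $\sigma$ are assumptions.
\begin{verbatim}
import numpy as np

def simulate_design(n, p=25, rho=0.5, null_frac=0.6, snr=1.0, seed=0):
    """One training sample from the sparse regression design (illustrative)."""
    rng = np.random.default_rng(seed)
    # sparse coefficients: 60% zeros, half of the rest from U(1,2), half from U(-2,-1)
    n_nonnull = p - int(null_frac * p)
    beta = np.zeros(p)
    idx = rng.choice(p, n_nonnull, replace=False)
    half = n_nonnull // 2
    beta[idx[:half]] = rng.uniform(1, 2, half)
    beta[idx[half:]] = rng.uniform(-2, -1, n_nonnull - half)
    beta0 = 1.0                                    # intercept
    # AR(1) correlation among regressors: Sigma_jk = rho^|j-k|
    Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    # sigma chosen so that Var(x^T beta) / sigma^2 equals the target SNR
    sigma = np.sqrt(beta @ Sigma @ beta / snr)
    y = beta0 + X @ beta + rng.normal(0.0, sigma, n)
    return X, y, beta0, beta, sigma
\end{verbatim}
In the study itself the coefficient vector is drawn once and then held fixed across all replications; the sketch regenerates it on each call only for brevity.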
We generated samples of sizes $n=50$ to $n=200$, and replicated the process 500 times for each $n$. In Equation (\ref{cont}), we considered the penalized DPD with $\alpha=0.2$ and the $L_1$ penalty function as used in the LASSO. It should be mentioned here that, although we used only the $L_1$ penalty for illustrative purposes, any other penalty function could be used in this example. Our method is very general, and all the theoretical results are derived for an arbitrary penalty function provided the standard regularity conditions are satisfied. We also tested several values of $\alpha$, but for simplicity of presentation, only the results for $\alpha=0.2$ are reported here. The optimum regularization parameter $\lambda$ is calculated based on the robust $C_p$ and AIC as discussed in Sections \ref{sec_df} and \ref{sec_raic}, respectively. These penalized MDPDEs are denoted by $RCp(0.2)$ and $RAIC(0.2)$, respectively. Our proposed method is compared with the classical LASSO estimators, where the optimum $\lambda$ is selected using the classical $C_p$ and AIC. For comparison, in addition to the OLS, we also include two robust (non-penalized) regression methods based on the Huber and Tukey M-estimators \citep{MR606374}. The default settings of the `rlm' function in R are used for these two estimators. Once the seven sets of estimators were calculated on a training sample, we simulated another 1,000 test observations to compare their performance using the prediction error $E[(\hat{y} - x^T{\boldsymbol{\beta}})^2]$, reported relative to that of the OLS and referred to as the relative prediction error (RPE). For each estimator and each $n$, the medians of the empirical RPE relative to the OLS are plotted in Figure \ref{fig:mse} (a) and (b), where $SNR=1$ and 10, respectively. Both figures show that the penalization technique improves the performance of these estimators in the case of sparse regression. All the penalized estimators perform equally well in these situations; however, the methods using the AIC or RAIC are slightly better. This simulation study also demonstrates the well-known fact that the prediction error of the classical robust estimators is higher than that of the OLS for pure data.
\begin{figure}
\caption{The median of the estimated RPE relative to OLS for different estimators over 500 replications in the uncontaminated data (a and b), and data with (c) 1\% outliers at $\mu_c=10\sigma$ and (d) 5\% outliers at $\mu_c=5\sigma$. In (a) and (c) SNR=1 and in (b) and (d) SNR=10.}
\label{fig:mse}
\end{figure}
In the next simulation setup, we contaminate $\tau\%$ of the data with outliers, for which the error term is generated from $\epsilon \sim N(\mu_c, 0.01)$ with $\mu_c$ set to a large value. The remaining $(100-\tau)\%$ of the data are generated as in the first setup, where $\epsilon \sim N(0, \sigma^2)$. Thus, Model (\ref{reg_model}) is effectively generated with a heavy-tailed error distribution. Apart from the outliers, all parameters were kept unchanged in this simulation. In the first contaminated case, where $SNR=1$, we have taken $\mu_c=10\sigma$ and 1\% outliers, i.e.\ $\tau=1$. The plot of the estimated relative RPE is given in Figure \ref{fig:mse}(c). We observe that the performance of our robust estimators is far better than that of the other methods. In fact, the raw RPEs of all robust methods considered here are almost unchanged in the presence of outliers, whereas the performance of the OLS and LASSO estimators deteriorates significantly, which creates a large difference in the relative performance.
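The contamination mechanism used here can be mimicked by a small helper in the same illustrative style (again our own sketch; the value $0.01$ in $N(\mu_c, 0.01)$ is read as a variance, i.e.\ a standard deviation of $0.1$):
\begin{verbatim}
import numpy as np

def contaminated_response(X, beta0, beta, sigma, tau, mu_c, rng):
    """Responses where a fraction tau of the errors come from N(mu_c, 0.1^2)
    and the remaining errors come from N(0, sigma^2)."""
    n = X.shape[0]
    eps = rng.normal(0.0, sigma, n)                  # clean errors
    n_out = int(round(tau * n))                      # number of outliers
    idx = rng.choice(n, n_out, replace=False)
    eps[idx] = rng.normal(mu_c, 0.1, n_out)          # outlying errors, e.g. mu_c = 10*sigma
    return beta0 + X @ beta + eps
\end{verbatim}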
The plot also shows that our estimators have much smaller prediction errors than the Huber and Tukey estimators. Moreover, these classical robust methods produce non-sparse coefficient estimates, whereas the penalized MDPDEs achieve specificity of around 80\%--100\% for RCp and 40\%--80\% for RAIC, and sensitivity of around 40\% for RCp and 60\% for RAIC. A similar result is obtained in the second contaminated case with $SNR=10$, where there are 5\% outliers at $\mu_c=5\sigma$; see Figure \ref{fig:mse}(d). Overall, the simulation study shows that our penalized DPD based estimators outperform the classical robust methods and the LASSO in the presence of outliers in the data. In pure data without any outliers, their performance is quite competitive with that of the LASSO. It should be noted here that, for the $L_1$ penalty, Algorithm \ref{algo:MDPDE} uses the LASSO iteratively; thus it can also handle high-dimensional data with $n\ll p$.
\section{Real Data Analysis} \label{sec:data}
The data-set was obtained from a pilot grant funded by the Blue Cross Blue Shield of Michigan Foundation. The primary goal was to develop a pilot computer system (Intelligent Sepsis Alert) aimed towards increased automated sepsis detection capacity. We analyzed a subset of the data that is available to us. It contains 51 variables from 8,975 cases admitted to the Detroit Medical Center (DMC) from 2014--2015. Both demographic and clinical data are available for the first six hours of the patients' emergency department stay. The outcome variable ($y$) is the length of hospital stay. A few variables were deleted because they had more than 75\% missing values. Some variables were also excluded as they contain free text, mostly notes from the doctor or nurse. We randomly partitioned the data-set into four equal parts, where one sub-sample is used as the test set and the remaining three sub-samples form the training set. We calculated the seven different estimators used in the simulation study in Section \ref{sec:simulatoin}. For the penalized MDPDE, we considered several values of $\alpha$; however, only the results for $\alpha=0.4$ are presented, as the optimum $\alpha$ is around 0.4 for the full data. More precisely, we calculated the optimum value of $\alpha$ by minimizing the MSE in Equation (\ref{adaptive_alpha}). When the optimum penalty parameter $\lambda_n$ is obtained from the robust $C_p$ statistic, the optimum value of $\alpha$ becomes $0.3750$, and using the robust AIC it is $\alpha =0.3875$. The performance of the estimators is compared by the mean absolute prediction error (MAPE) in the test data, where $\mbox{MAPE} = {\frac {1}{n_t}}\sum _{{i=1}}^{n_t}|(y_i -{\boldsymbol{x}}_i^T\hat{{\boldsymbol{\beta}}})/y_i|$, $n_t$ being the number of observations in the test set. The process was replicated 100 times using different random partitions of the data-set, and the mean MAPE relative to the OLS and the median percentage of dimension reduction are reported in Table \ref{tab:sepsis}. The results show that our proposed methods reduce the MAPE by around 38\% compared to the OLS and the classical LASSO. At the same time, unlike the Huber or Tukey estimators, the penalized MDPDEs considerably reduce the dimension of the predictor variables.
\begin{table}
\centering
\tabcolsep=0.1cm
\small
\begin{tabular}{l|ccccccc}
\hline
Estimators & OLS & Huber & Tukey & LASSO(Cp) & RCp(0.4) & LASSO(AIC) & RAIC(0.4) \\
\hline
Relative MAPE & 1.00 & 0.75 & 0.66 & 1.02 & 0.62 & 1.02 & 0.62 \\
Dim.\ Reduction & 0 & 0 & 0 & 22.86 & 28.57 & 25.71 & 34.29 \\
\hline
\end{tabular}
\caption{The mean MAPE relative to the OLS and the median percentage of dimension reduction over 100 random resamples for different estimators for the sepsis data.}
\label{tab:sepsis}
\end{table}
The increased efficiency of the robust methods clearly indicates that the data-set contains a significant number of outliers, and that they are not easy to detect in high-dimensional data. For this reason, the multiple $R^2$ from the OLS fit is just 11.42\%, but the value increases dramatically to 82.50\% and 85.32\% for the Huber and Tukey M-estimators, respectively. On the other hand, the LASSO removes 22.86\% and 25.71\% of the variables when the $C_p$ and the AIC are used, respectively. This reveals that several regressor variables do not contain any significant information for predicting the outcome variable $y$. Our proposed method successfully combines these two important properties: robustness and a sparse model representation. In our future research, we would like to extend the robust penalized regression to the generalized linear model (GLM) so that it can be used to model the binary outcome of sepsis. These methods could also be extended to obtain a suitable imputation method to deal with missing values.
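For reference, the evaluation protocol used above (a random partition into four equal parts, with the held-out part scored by MAPE) amounts to the following sketch; as before, this is our own illustrative Python/NumPy code and the helper names are hypothetical.
\begin{verbatim}
import numpy as np

def mape(y_test, X_test, beta_hat):
    """MAPE = (1/n_t) * sum |(y_i - x_i^T beta_hat) / y_i| over the test set,
    where X_test is assumed to contain the intercept column."""
    resid = y_test - X_test @ beta_hat
    return np.mean(np.abs(resid / y_test))

def random_four_fold_split(n, rng):
    """One random partition into four equal parts: the first part is the test
    set and the remaining three parts form the training set."""
    folds = np.array_split(rng.permutation(n), 4)
    return folds[0], np.concatenate(folds[1:])   # (test indices, train indices)
\end{verbatim}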
\section{Conclusion} \label{sec:conc}
We have developed a robust penalized regression method that can perform regression shrinkage and selection like the LASSO or SCAD, while being resistant to outliers and heavy-tailed errors like the LAD or quantile regression. The basic idea is to use an M-estimator based on the DPD measure to estimate the model parameters, and then to select the best model using a suitable information criterion modified with the same robust estimators. A fast algorithm for the regression estimators is proposed that can be successfully applied in high-dimensional data analysis. The asymptotic distribution and the influence function of the estimator are derived. Two robust information criteria are introduced by modifying Mallows's $C_p$ and the AIC to make the variable selection procedure stable against outliers. All the theoretical results are derived for a general penalty function, so this procedure can be used to robustify the classical penalized regression methods, such as the LASSO, adaptive LASSO, SCAD, MCP, and elastic net. The simulation studies as well as the real data example show improved performance of the proposed method over the classical procedures. Thus, the new procedure is expected to improve prediction power significantly for high-dimensional data, where the presence of outliers is very common.
\appendix
\setcounter{page}{1}
\setcounter{section}{1}
\setcounter{equation}{0}
\makeatletter\@addtoreset{equation}{section}\makeatother
\def\theequation{\thesection.\arabic{equation}}
\begin{center}
{\Huge{\bf Supplementary Materials}}
\end{center}
\subsection*{Proof of Theorem \ref{theorem:asymp}}
The sparsity of the regression coefficient estimate follows directly from Lemma 1 of \cite{MR2796868}. So, $\hat{{\boldsymbol{\beta}}}_{\bar{{\mathcal{A}}}}=\mathbf{0}$ with probability tending to 1. At the same time, for sufficiently large sample sizes, $\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}$ stays away from zero. Now, we derive the asymptotic distribution of $(\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}, \hat{\sigma}^2)^T$ from the estimating equations (\ref{est_beta}) and (\ref{est_sigma}). Suppose the corresponding equations are written together as $ M_n(\hat{{\boldsymbol{\theta}}}) = 0$, where the first $p_1$ equations are obtained from (\ref{est_beta}) by taking the equations corresponding to the set ${\mathcal{A}}$, and the last equation is Equation (\ref{est_sigma}). Let $M_n^j(\hat{{\boldsymbol{\theta}}})$ be the $j$-th element of $M_n(\hat{{\boldsymbol{\theta}}}), \ j = 1, 2, \cdots,p_1+1$. We define ${\mathcal{A}}^* = \{{\mathcal{A}}, p+2\}$. Using a Taylor series expansion of $M_n^j({\boldsymbol{\theta}})$, we write
\begin{equation}
M_n^j(\hat{{\boldsymbol{\theta}}}) = M_n^j({\boldsymbol{\theta}}_g) + \sum_{k \in {\mathcal{A}}^*} (\hat{\theta}_k - \theta_{g,k}) M_n^{jk}({\boldsymbol{\theta}}_g) + \frac{1}{2} \sum_{k \in {\mathcal{A}}^*} \sum_{l \in {\mathcal{A}}^*} (\hat{\theta}_k - \theta_{g,k}) (\hat{\theta}_l - \theta_{g,l}) M_n^{jkl}(\tilde{{\boldsymbol{\theta}}}_g),
\label{taylor}
\end{equation}
where $\tilde{{\boldsymbol{\theta}}}_g$ belongs to the line segment connecting $\hat{{\boldsymbol{\theta}}}$ and ${\boldsymbol{\theta}}_g$. Here, $M_n^{jk}$ and $M_n^{jkl}$ are, respectively, the first and second order partial derivatives of $M_n^{j}$ with respect to the indicated components of ${\boldsymbol{\theta}}$. As $M_n^j(\hat{{\boldsymbol{\theta}}})=0$, we get
\begin{equation}
\sum_{k \in {\mathcal{A}}^*} (\hat{\theta}_k - \theta_{g,k}) \left[ M_n^{jk}({\boldsymbol{\theta}}_g) + \frac{1}{2} \sum_{l \in {\mathcal{A}}^*} (\hat{\theta}_l - \theta_{g,l}) M_n^{jkl}(\tilde{{\boldsymbol{\theta}}}_g) \right]= - M_n^j({\boldsymbol{\theta}}_g).
\label{taylor1}
\end{equation}
Let us define $\hat{{\boldsymbol{\theta}}}^*=(\hat{{\boldsymbol{\beta}}}_{\mathcal{A}}^T, \hat{\sigma}^2)^T$ and ${\boldsymbol{\theta}}_g^*=({\boldsymbol{\beta}}_{\mathcal{A}}^T, \sigma_g^2)^T$. Combining terms for $j =1, 2, \cdots, p_1+1$, the above equation is written as
\begin{equation}
\bo{A}_n (\hat{{\boldsymbol{\theta}}}^* - {\boldsymbol{\theta}}_g^*) =- M_n({\boldsymbol{\theta}}_g),
\label{taylor11}
\end{equation}
where the $(j, k)$-th element of $\bo{A}_n$ is
\begin{equation}
A_{n, j, k} = M_n^{jk}({\boldsymbol{\theta}}_g) + \frac{1}{2} \sum_{l \in {\mathcal{A}}^*} (\hat{\theta}_l - \theta_{g,l}) M_n^{jkl}(\tilde{{\boldsymbol{\theta}}}_g).
\end{equation}
We define
\begin{equation}
\Psi_{\mathcal{A}} = \left( \begin{array}{c c}
\xi_\alpha {\boldsymbol{\Sigma}}_{\mathcal{A}} & 0\\
0 & \eta_\alpha
\end{array} \right), \ \
\Omega_{\mathcal{A}} = \left( \begin{array}{c c}
\xi_{2\alpha}{\boldsymbol{\Sigma}}_{\mathcal{A}} & 0\\
0 & \eta_{2\alpha} - \frac{\alpha^2}{4} \xi_\alpha^2
\end{array} \right),
\label{omega_new}
\end{equation}
where $\xi_{\alpha}$ and $\eta_\alpha$ are defined in Equation (\ref{xi}).
A direct calculation shows that $E(M_n({\boldsymbol{\theta}}_g))=\bo{b}_1$ and $n V(M_n({\boldsymbol{\theta}}_g)) = (1+\alpha)^2 \Omega_{\mathcal{A}} + o_p(\mathbf{1})$, where $\bo{b}_1=(\bo{b}_2^T, 0)^T$ and $\bo{b}_2 = \lim_{n\rightarrow \infty} P'_{\lambda_n}({\boldsymbol{\beta}}_{\mathcal{A}})$. Moreover, using the central limit theorem (CLT), we get
\begin{equation}
\frac{\sqrt{n}}{1+\alpha} \Omega_{\mathcal{A}}^{-\frac{1}{2}} M_n({\boldsymbol{\theta}}_g) \overset{a}{\sim} N(\bo{b}_1, I_{p_1+1}).
\end{equation}
Therefore, from (\ref{taylor11}), we have
\begin{equation}
\frac{\sqrt{n}}{1+\alpha} \Omega_{\mathcal{A}}^{-\frac{1}{2}} \bo{A}_n (\hat{{\boldsymbol{\theta}}}^* - {\boldsymbol{\theta}}_g^*) \overset{a}{\sim} N(\bo{b}_1, I_{p_1+1}).
\label{conv}
\end{equation}
Using the weak law of large numbers (WLLN), it can be shown that, under condition (C2),
\begin{equation}
\frac{1}{1+\alpha} \bo{A}_n - \Psi_{\mathcal{A}} \overset{P}{\rightarrow} 0,
\label{an_conv}
\end{equation}
where $\Psi_{\mathcal{A}}$ is defined in Equation (\ref{omega_new}). So, from Equation (\ref{conv}), we find
\begin{equation}
\sqrt{n} \Omega_{\mathcal{A}}^{-\frac{1}{2}} \Psi_{\mathcal{A}} (\hat{{\boldsymbol{\theta}}}^* - {\boldsymbol{\theta}}_g^*) \overset{a}{\sim} N(\bo{b}_1, I_{p_1+1}).
\end{equation}
Thus, the theorem is proved by collecting the corresponding components of $\hat{{\boldsymbol{\theta}}}^*$.
\subsection*{Proof of Theorem \ref{theorem:influence}}
The loss function in Equation (\ref{cont}) is written as
\begin{align}
L_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n) &= \frac{1}{n}\sum_{i=1}^n d_\alpha(f_i, g_{\tau, i}) + P_{\lambda_n}({\boldsymbol{\theta}})\\
&= \frac{1}{n}\sum_{i=1}^n \int \left[f_i^{1+\alpha} - \left(1 + \frac{1}{\alpha}\right) f_i^\alpha g_{\tau, i}\right]dy_i + P_{\lambda_n}({\boldsymbol{\theta}}) + c(\alpha),
\label{cont_inf}
\end{align}
where $c(\alpha)$ is given in Equation (\ref{cont1}). $L_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n)$ is minimized at ${\boldsymbol{\theta}}={\boldsymbol{\theta}}_{\tau, g}$. So, for $\alpha>0$, the estimating equation at ${\boldsymbol{\theta}}={\boldsymbol{\theta}}_{\tau, g}$ becomes
\begin{equation}
\frac{1}{n}\sum_{i=1}^n \int \left[f_i^{1+\alpha} {\boldsymbol{u}}_i^* - f_i^\alpha {\boldsymbol{u}}_i^* g_{\tau, i}\right]dy_i + \frac{1}{1+\alpha} P'_{\lambda_n}({\boldsymbol{\theta}}) = 0,
\label{inf_est}
\end{equation}
where
\begin{equation}
{\boldsymbol{u}}_i^* = \frac{\partial}{\partial {\boldsymbol{\theta}}} \log f_i = \left( \begin{array}{c}
\frac{y_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}} }{\sigma^2} {\boldsymbol{x}}_i\\
\frac{(y_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}} )^2}{2\sigma^4} -\frac{1}{2\sigma^2}
\end{array} \right).
\label{ui_star}
\end{equation}
Note that both $f_i$ and ${\boldsymbol{u}}_i^*$ are functions of ${\boldsymbol{\theta}}$.
Differentiating Equation (\ref{inf_est}) with respect to $\tau$, we get \begin{align} \frac{1}{n}\sum_{i=1}^n \int &\Big[\left\{(1+\alpha) f_i^{1+\alpha} {\boldsymbol{u}}_i^* {\boldsymbol{u}}_i^{*T} + f_i^{1+\alpha} \nabla {\boldsymbol{u}}_i^* - \alpha f_i^\alpha {\boldsymbol{u}}_i^* {\boldsymbol{u}}_i^{*T} g_{\tau, i} - f_i^\alpha \nabla {\boldsymbol{u}}_i^* g_{\tau, i}\right\}\frac{\partial{\boldsymbol{\theta}}}{\partial \tau} \\ & + f_i^\alpha {\boldsymbol{u}}_i^* g_i - f_i^\alpha {\boldsymbol{u}}_i^* \delta_{t_i} \Big]dy_i \Bigg|_{{\boldsymbol{\theta}}={\boldsymbol{\theta}}_{\tau, g}} + \frac{1}{1+\alpha} P''_{\lambda_n}({\boldsymbol{\theta}}) \frac{\partial{\boldsymbol{\theta}}}{\partial \tau}\Bigg|_{{\boldsymbol{\theta}}={\boldsymbol{\theta}}_{\tau, g}} = 0, \label{inf} \end{align} where $\nabla{\boldsymbol{u}}_i^* = \frac{\partial^2}{\partial {\boldsymbol{\theta}}^T \partial {\boldsymbol{\theta}}} \log f_i$. Now, it can be shown that \begin{equation} \Psi_n = \lim_{\tau \rightarrow 0}\frac{1}{n}\sum_{i=1}^n \int \left[(1+\alpha) f_i^{1+\alpha} {\boldsymbol{u}}_i^* {\boldsymbol{u}}_i^{*T} + f_i^{1+\alpha} \nabla {\boldsymbol{u}}_i^* - \alpha f_i^\alpha {\boldsymbol{u}}_i^* {\boldsymbol{u}}_i^{*T} g_i - f_i^\alpha \nabla {\boldsymbol{u}}_i^* g_i\right]dy_i, \end{equation} where $\Psi_n$ is given in Equation (\ref{omega}). Therefore, rearranging the terms of Equation (\ref{inf}) and taking limit as $\tau \rightarrow 0$, we get \begin{equation} IF({\boldsymbol{\theta}}_g, \boldsymbol{t}) = \left[\Psi_n + \frac{1}{1+\alpha} P''_{\lambda_n}({\boldsymbol{\theta}}_g) \right]^{-1} \frac{1}{n}\sum_{i=1}^n \left[f_i^\alpha {\boldsymbol{u}}_i^*|_{y_i = t_i} - \int f_i^\alpha {\boldsymbol{u}}_i^* g_i dy_i \right], \end{equation} where $f_i$ and ${\boldsymbol{u}}_i^*$ are evaluated at ${\boldsymbol{\theta}}= {\boldsymbol{\theta}}_g$. A direct calculation shows that \begin{equation} \int f_i^\alpha {\boldsymbol{u}}_i^* g_i dy_i = \left( \begin{array}{c} \boldsymbol{0}\\ -\frac{\alpha }{2} \eta_\alpha \end{array} \right), \end{equation} where $\boldsymbol{0}$ is a null vector of length $p+1$. Thus, the final form of $IF({\boldsymbol{\theta}}_g, \boldsymbol{t})$ is obtained using the expressions of $f_i$ and ${\boldsymbol{u}}_i^*$ from Equation (\ref{fi}) and (\ref{ui_star}), respectively. \subsection*{Proof of Lemma \ref{lemma_cp}} In the proof of Theorem \ref{theorem:asymp}, we used a Taylor series expansion of the estimating Equations (\ref{est_beta}) and (\ref{est_sigma}) with respect to ${\boldsymbol{\beta}}_{\mathcal{A}}$ and $\sigma^2$. Here, we expand only Equation (\ref{est_beta}) with respect to ${\boldsymbol{\beta}}_{\mathcal{A}}$ treating $\sigma$ as constant. For notational simplicity, we define $\nabla V({\boldsymbol{\beta}}_{\mathcal{A}})$ as the vector obtained by taking elements from Equation (\ref{2nd_diff}) corresponding to set ${\mathcal{A}}$. Similarly, $\nabla^2 V({\boldsymbol{\beta}}_{\mathcal{A}})$ is defined from Equation (\ref{2nd}). Then, similar to Equation (\ref{taylor11}), we write \begin{equation} \bo{A}_n^* (\hat{{\boldsymbol{\beta}}}_{\mathcal{A}} - {\boldsymbol{\beta}}_{\mathcal{A}}) = M_n^*({\boldsymbol{\beta}}_{\mathcal{A}}), \label{taylor111} \end{equation} where \begin{align} M_n^*({\boldsymbol{\beta}}_{\mathcal{A}}) &= -\nabla V({\boldsymbol{\beta}}_{\mathcal{A}}) - P'_{\lambda_n}({\boldsymbol{\beta}}_{\mathcal{A}}),\\ \bo{A}_n^* &= E\left[\nabla^2 V({\boldsymbol{\beta}}_{\mathcal{A}})\right] + P''_{\lambda_n}({\boldsymbol{\beta}}_{\mathcal{A}}) + o_p(1). 
\label{an}
\end{align}
Suppose ${\boldsymbol{x}}_{{\mathcal{A}} i}$ is obtained from ${\boldsymbol{x}}_i$ by selecting the elements corresponding to ${\mathcal{A}}$. From Model (\ref{reg_model}), we have $ \epsilon_i = {\boldsymbol{y}}_i - {\boldsymbol{x}}_i^T {\boldsymbol{\beta}}_g = {\boldsymbol{y}}_i - {\boldsymbol{x}}_{{\mathcal{A}} i}^T {\boldsymbol{\beta}}_{\mathcal{A}}$. Let us define $ {\boldsymbol{u}}_{{\mathcal{A}} i} = \frac{\partial}{\partial {\boldsymbol{\beta}}_{\mathcal{A}}} \log f_i = \frac{(y_i - {\boldsymbol{x}}_{{\mathcal{A}} i}^T {\boldsymbol{\beta}}_{\mathcal{A}} )}{\sigma^2} {\boldsymbol{x}}_{{\mathcal{A}} i}, $ for $i=1,2, \cdots, n$. So, we write
\begin{align}
E[\nabla^2 V({\boldsymbol{\beta}}_{\mathcal{A}})] &= - \frac{1+\alpha}{n}\sum_{i=1}^n E\left[\left( \alpha {\boldsymbol{u}}_{{\mathcal{A}} i} {\boldsymbol{u}}_{{\mathcal{A}} i}^T + \nabla {\boldsymbol{u}}_{{\mathcal{A}} i} \right)f_i^\alpha\right]\\
&= - \frac{1+\alpha}{n}\sum_{i=1}^n E\left[\left( \frac{ \alpha\epsilon_i^2}{\sigma^4} - \frac{1}{\sigma^2} \right) {\boldsymbol{x}}_{{\mathcal{A}} i} {\boldsymbol{x}}_{{\mathcal{A}} i}^Tf_i^\alpha\right]\\
&= - \frac{1+\alpha}{n} \left(\frac{1}{\sqrt{2\pi}\sigma}\right)^\alpha {\boldsymbol{X}}_{\mathcal{A}}^T {\boldsymbol{X}}_{\mathcal{A}} E\left[\left( \frac{ \alpha\epsilon^2}{\sigma^4} - \frac{1}{\sigma^2} \right) \exp\left(-\frac{\alpha\epsilon^2}{2\sigma^2}\right)\right]\\
&= - \frac{1+\alpha}{n} \left(\frac{1}{\sqrt{2\pi}\sigma}\right)^\alpha {\boldsymbol{X}}_{\mathcal{A}}^T {\boldsymbol{X}}_{\mathcal{A}} \int_{\mathbb{R}}\left[\left( \frac{ \alpha\epsilon^2}{\sigma^4} - \frac{1}{\sigma^2} \right) \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(1+\alpha)\epsilon^2}{2\sigma^2}\right)\right]d\epsilon \\
&= - \frac{\sqrt{1+\alpha}}{n} \left(\frac{1}{\sqrt{2\pi}\sigma}\right)^\alpha {\boldsymbol{X}}_{\mathcal{A}}^T {\boldsymbol{X}}_{\mathcal{A}} \left( \frac{ \alpha}{\sigma^2 (1+\alpha)} - \frac{1}{\sigma^2} \right) \\
&= \frac{(1+\alpha)\xi_\alpha}{n}{\boldsymbol{X}}_{\mathcal{A}}^T {\boldsymbol{X}}_{\mathcal{A}},
\label{2nd1}
\end{align}
where $\xi_\alpha$ is defined in Equation (\ref{xi}). So, from Equation (\ref{an}), we write $\bo{A}_n^* = (1 + \alpha) {\boldsymbol{S}_{{\mathcal{A}} n}} + o_p(1).$ Now, from Equation (\ref{taylor111}), we get
\begin{equation}
(\hat{{\boldsymbol{\beta}}}_{\mathcal{A}} - {\boldsymbol{\beta}}_{\mathcal{A}}) =- \frac{1}{1 + \alpha} {\boldsymbol{S}_{{\mathcal{A}} n}}^{-1} M_n^*({\boldsymbol{\beta}}_{\mathcal{A}}) + o_p\left(1\right).
\label{beta}
\end{equation}
Let us define $\boldsymbol{\epsilon} = (\epsilon_1, \epsilon_2, \cdots, \epsilon_n)^T$ and $\boldsymbol{B}_{\mathcal{A}} ={\boldsymbol{X}}_{\mathcal{A}} {\boldsymbol{S}_{{\mathcal{A}} n}}^{-1}$. Suppose $b_{ij}$ is the $(i,j)$-th element of $\boldsymbol{B}_{\mathcal{A}}$, for $i=1,2,\cdots, n$ and $j = 1, 2, \cdots, p_1$, and $m_j$ is the $j$-th element of $M_n^*({\boldsymbol{\beta}}_{\mathcal{A}})$.
Then, from Equation (\ref{beta}), we get
\begin{align}
E[({\boldsymbol{y}} & - {\boldsymbol{X}}_{\mathcal{A}} {\boldsymbol{\beta}}_{\mathcal{A}})^T {\boldsymbol{X}}_{\mathcal{A}} (\hat{{\boldsymbol{\beta}}}_{\mathcal{A}} - {\boldsymbol{\beta}}_{\mathcal{A}})] =- \frac{1}{1 + \alpha} E[\boldsymbol{\epsilon}^T {\boldsymbol{X}}_{\mathcal{A}} {\boldsymbol{S}_{{\mathcal{A}} n}}^{-1} M_n^*({\boldsymbol{\beta}}_{\mathcal{A}})] + o(1)\\
&=- \frac{1}{1 + \alpha} E\left[\sum_{i,j} \epsilon_i b_{ij} m_j \right]+ o(1)\\
&= \frac{1}{n\sigma^2}E\left[\sum_{i,j} \epsilon_i b_{ij} \left(\sum_{k} \epsilon_k x_{kj} f_k^\alpha + P'_{\lambda_n}({\boldsymbol{\beta}}_{\mathcal{A}})\right) \right]+ o(1)\\
&= \frac{1+\alpha}{n\sigma^2} E\left[\sum_{i,j} \epsilon_i^2 b_{ij} x_{ij} f_i^\alpha \right]+ o(1)\\
&= \frac{1}{n\sigma^2} E(\epsilon^2f^\alpha)\sum_{i,j} b_{ij} x_{ij} + o(1)\\
&= \frac{1}{n\sigma^2} \tr(\boldsymbol{B}_{\mathcal{A}} {\boldsymbol{X}}_{\mathcal{A}}^T) \left(\frac{1}{\sqrt{2\pi}\sigma}\right)^\alpha \int_{\mathbb{R}} \frac{\epsilon^2}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(1+\alpha)\epsilon^2}{2\sigma^2}\right) d\epsilon + o(1)\\
&= \frac{1}{n\sigma^2} \tr({\boldsymbol{X}}_{\mathcal{A}} {\boldsymbol{S}_{{\mathcal{A}} n}}^{-1} {\boldsymbol{X}}_{\mathcal{A}}^T) \left(\frac{1}{\sqrt{2\pi}\sigma}\right)^\alpha \frac{1}{\sqrt{1+\alpha}} \frac{\sigma^2}{1+\alpha} + o(1)\\
&= \frac{\xi_\alpha \sigma^2}{n} \tr({\boldsymbol{X}}_{\mathcal{A}} {\boldsymbol{S}_{{\mathcal{A}} n}}^{-1} {\boldsymbol{X}}_{\mathcal{A}}^T) + o(1).
\end{align}
\subsection*{Proof of Theorem \ref{theorem:aic}}
Let us define $D_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n) = \frac{1}{n}\sum_{i=1}^n d_\alpha(f_i, g_i)$. Note that
\begin{equation}
E[d_\alpha(f_{\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}}, f_{{\boldsymbol{\theta}}_g})] = E[E[D_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) | {\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}]].
\end{equation}
Now, from Equation (\ref{cont}), we get
\begin{equation}
L_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n) =D_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) + P_{\lambda_n}({\boldsymbol{\theta}}).
\label{cont_new}
\end{equation}
As $L_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n)$ is minimized at ${\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}$ for ${\boldsymbol{\theta}} \in \Theta_R$, we have
\begin{equation}
\left[\frac{\partial}{\partial {\boldsymbol{\theta}}} L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) \right]_{{\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}} =0.
\label{est_der}
\end{equation}
Moreover, $E[L_\alpha({\boldsymbol{\theta}}|{\boldsymbol{X}}, \lambda_n)]$ is minimized at ${\boldsymbol{\theta}} = {\boldsymbol{\theta}}_{\mathcal{A}}$ for ${\boldsymbol{\theta}} \in \Theta_R$, so
\begin{equation}
E\left[\left[\frac{\partial}{\partial {\boldsymbol{\theta}}} L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n)\right]_{{\boldsymbol{\theta}} = {\boldsymbol{\theta}}_{\mathcal{A}}}\right] =0.
\label{true_der} \end{equation} Using Theorem \ref{theorem:asymp}, we obtain from Equation (\ref{an_conv}) \begin{equation} \left[\frac{\partial^2}{\partial {\boldsymbol{\theta}}^T \partial {\boldsymbol{\theta}}} L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) \right]_{{\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}} = \left[\frac{\partial^2}{\partial {\boldsymbol{\theta}}^T \partial {\boldsymbol{\theta}}} L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) \right]_{{\boldsymbol{\theta}} = {\boldsymbol{\theta}}_{\mathcal{A}}} + o_p(1) = \Psi_{\mathcal{A}} + \frac{1}{1 + \alpha} P''_{\lambda_n}({\boldsymbol{\theta}}_{\mathcal{A}}) + o_p(1). \label{2nd_der} \end{equation} Now, $nE[E[D_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) | {\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}]]$ is written as follows \begin{align} nE[E[D_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) | {\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}]] &= nE[D_\alpha(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) ] + n\left\{E[D_\alpha({\boldsymbol{\theta}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n)] - E[D_\alpha(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) ] \right\}\\ & \hspace{.4in} + n\left\{E[E[D_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) | {\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}]] - E[D_\alpha({\boldsymbol{\theta}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) ] \right\}\\ & = nE[D_\alpha(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) ] + n\left\{E[L_\alpha({\boldsymbol{\theta}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n)] - E[L_\alpha(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) ] \right\}\\ & \hspace{.4in} + n\left\{E[E[L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) | {\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}]] - E[L_\alpha({\boldsymbol{\theta}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) ] \right\}. 
\label{expr}
\end{align}
Taking a Taylor series expansion of $n E[L_\alpha({\boldsymbol{\theta}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n)]$ about ${\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}$ and using Equations (\ref{est_der}) and (\ref{2nd_der}), we get
\begin{align}
n & E[L_\alpha({\boldsymbol{\theta}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n)] - nE[L_\alpha(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) ] \\
&= nE\left[({\boldsymbol{\theta}}_{\mathcal{A}} - \hat{{\boldsymbol{\theta}}}_{\mathcal{A}})^T \left[\frac{\partial}{\partial {\boldsymbol{\theta}}} L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) \right]_{{\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}} \right]\\
& \hspace{1in} + \frac{n}{2}E\left[({\boldsymbol{\theta}}_{\mathcal{A}} - \hat{{\boldsymbol{\theta}}}_{\mathcal{A}})^T \left[\frac{\partial^2}{\partial {\boldsymbol{\theta}}^T \partial {\boldsymbol{\theta}}} L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) \right]_{{\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}} ({\boldsymbol{\theta}}_{\mathcal{A}} - \hat{{\boldsymbol{\theta}}}_{\mathcal{A}})\right] + o(1)\\
&= \frac{n}{2}E\left[({\boldsymbol{\theta}}_{\mathcal{A}} - \hat{{\boldsymbol{\theta}}}_{\mathcal{A}})^T \left\{ \Psi_{\mathcal{A}} + \frac{1}{1 + \alpha} P''_{\lambda_n}({\boldsymbol{\theta}}_{\mathcal{A}}) \right\} ({\boldsymbol{\theta}}_{\mathcal{A}} - \hat{{\boldsymbol{\theta}}}_{\mathcal{A}})\right] + o(1)\\
&= \frac{1}{2} \tr\left[({\boldsymbol{\Sigma}}_{\mathcal{A}}^* + \bo{b}^{*} \bo{b}^{*T}) \left\{ \Psi_{\mathcal{A}} + \frac{1}{1 + \alpha} P''_{\lambda_n}({\boldsymbol{\theta}}_{\mathcal{A}}) \right\}\right] + o(1).
\label{taylor_1}
\end{align}
The final step follows from the asymptotic distribution of $\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}$ given in Theorem \ref{theorem:asymp}.
A Taylor series expansion of $n E[E[L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) | {\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}]]$ about ${\boldsymbol{\theta}} = {\boldsymbol{\theta}}_{\mathcal{A}}$ and using Equations (\ref{true_der}) and (\ref{2nd_der}), we get \begin{align} n E[E[L_\alpha({\boldsymbol{\theta}} &|{\boldsymbol{X}}, \lambda_n) | {\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}]] - nE[L_\alpha({\boldsymbol{\theta}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n)] \\ & = nE\left[(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} - {\boldsymbol{\theta}}_{\mathcal{A}})E\left[\left[ \frac{\partial}{\partial {\boldsymbol{\theta}}} L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) \right]_{{\boldsymbol{\theta}} = {\boldsymbol{\theta}}_{\mathcal{A}}}\right]\right] \\ & \ \ \ \ + \frac{n}{2}E\left[(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} - {\boldsymbol{\theta}}_{\mathcal{A}})^T E\left[ \left[\frac{\partial^2}{\partial {\boldsymbol{\theta}}^T \partial {\boldsymbol{\theta}}} L_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) \right]_{{\boldsymbol{\theta}} = {\boldsymbol{\theta}}_{\mathcal{A}}} \right](\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} - {\boldsymbol{\theta}}_{\mathcal{A}})\right] + o(1)\\ &= \frac{n}{2}E\left[({\boldsymbol{\theta}}_{\mathcal{A}} - \hat{{\boldsymbol{\theta}}}_{\mathcal{A}})^T \left\{ \Psi_{\mathcal{A}} + \frac{1}{1 + \alpha} P''_{\lambda_n}({\boldsymbol{\theta}}_{\mathcal{A}}) \right\} ({\boldsymbol{\theta}}_{\mathcal{A}} - \hat{{\boldsymbol{\theta}}}_{\mathcal{A}})\right] + o(1)\\ &= \frac{1}{2}\tr\left[({\boldsymbol{\Sigma}}_{\mathcal{A}}^* + \bo{b}^{*} \bo{b}^{*T}) \left\{ \Psi_{\mathcal{A}} + \frac{1}{1 + \alpha} P''_{\lambda_n}({\boldsymbol{\theta}}_{\mathcal{A}}) \right\}\right] + o(1). \label{taylor_2} \end{align} Substituting (\ref{taylor_1}) and (\ref{taylor_2}) in Equation (\ref{expr}), we find \begin{align} nE[E[D_\alpha({\boldsymbol{\theta}} |{\boldsymbol{X}}, \lambda_n) | {\boldsymbol{\theta}} = \hat{{\boldsymbol{\theta}}}_{\mathcal{A}}]] =& nE[D_\alpha(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) ] \nonumber\\ &+ \tr\left[({\boldsymbol{\Sigma}}_{\mathcal{A}}^* + \bo{b}^{*} \bo{b}^{*T}) \left\{ \Psi_{\mathcal{A}} + \frac{1}{1 + \alpha} P''_{\lambda_n}({\boldsymbol{\theta}}_{\mathcal{A}}) \right\}\right] + o(1). \end{align} Now, $E[D_\alpha(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) ]$ is estimated by $D_\alpha(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}} |{\boldsymbol{X}}, \lambda_n) $. From Equation (\ref{vi}), we observe that the first term in $V_i$ remains unchanged for all sub-models. Therefore, for $\alpha>0$, the robust AIC is expressed as \begin{equation} RAIC = - \frac{1+\alpha}{ \alpha} \sum_{i=1}^n f_i^\alpha + \tr\left[(\hat{{\boldsymbol{\Sigma}}}_{\mathcal{A}}^* + \hat{\bo{b}}^{*} \hat{\bo{b}}^{*T}) \left\{ \hat{\Psi}_{\mathcal{A}} + \frac{1}{1 + \alpha} P''_{\lambda_n}(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}) \right\}\right], \end{equation} where $\hat{{\boldsymbol{\Sigma}}}_{\mathcal{A}}, \ \hat{\bo{b}}^*$, $\hat{\Psi}_{\mathcal{A}}$, $\hat{\xi}_{\alpha}$ and $\hat{\xi}_{2\alpha}$ are the estimates of ${\boldsymbol{\Sigma}}_{\mathcal{A}}, \ \bo{b}^*$ and $\Psi_{\mathcal{A}}$, $\xi_{\alpha}$ and $\xi_{2\alpha}$ respectively. 
For $\alpha=0$, it becomes \begin{equation} RAIC = - \sum_{i=1}^n \log(f_i) + \tr\left[(\hat{{\boldsymbol{\Sigma}}}_{\mathcal{A}}^* + \hat{\bo{b}}^{*} \hat{\bo{b}}^{*T}) \left\{ \hat{\Psi}_{\mathcal{A}} + P''_{\lambda_n}(\hat{{\boldsymbol{\theta}}}_{\mathcal{A}}) \right\}\right]. \end{equation} \end{document}
\begin{document}
\thispagestyle{empty}
\date{}
\title{\textsf{\bf\huge A comparison of functional summary statistics to detect anisotropy of three-dimensional point patterns}}
\begin{abstract}
The growing availability of three-dimensional point process data calls for the development of suitable analysis techniques. In this paper, we focus on two recently developed summary statistics, the conical and the cylindrical $K$-function, which may be used to detect anisotropies in 3D point patterns. We give some recommendations on choosing their arguments and investigate their ability to detect two special types of anisotropy. Finally, both functions are compared on real data sets from neuroscience and glaciology.

\noindent\textit{Key words: conical $K$-function, cylindrical $K$-function, Poisson line cluster point processes, Mat\'{e}rn hard core point processes, random ball packing, polar ice, minicolumn hypothesis}
\end{abstract}
\section{Introduction}\label{sec:intro}
In some situations, the spatial correlation between the points in a point pattern is not only a function of the distances between the points, but also of the direction of the vector connecting them. Classical functional summary statistics such as Ripley's $K$-function or the nearest neighbor distance distribution function fail to detect such anisotropies. Hence, there is some interest in developing methods which allow for the detection and characterization of the degree of anisotropy in a spatial point pattern.

In the literature, the case of two-dimensional point patterns has been the main focus of interest so far. Various approaches for anisotropy analysis have been introduced, including spectral methods \citep{bartlett-64,MuggRen-96,Renshaw-02}, wavelet transformations \citep{ErcoleMateu-13-1,ErcoleMateu-13-2,ErcoleMateu-14}, and an anisotropy test based on the asymptotic joint normality of the sample second-order intensity function \citep{Guanetal-06}. In addition, directional versions of functional summary statistics have been introduced in \citet{OhSt-81}, \citet{StoyanBenes-91}, \citet{Stoyan-91}, and \citet{StoyStoy-94}. Moreover, \citet{MoellerHokan-14} introduced geometric anisotropic pair correlation functions. At least some of these methods may, in theory, be transferred to the three-dimensional case. In practice, however, their application might be hampered, e.g.\ by problems in finding a suitable partition of the unit sphere. Furthermore, the visualization and verification of results are more challenging.

Recently, two directional counterparts of Ripley's $K$-function for the analysis of three-dimensional point patterns were introduced. The common idea of both approaches is to replace the ball used in the definition of the $K$-function by a structuring element which is sensitive to direction. The motivation of the work in \citet{Redenbachetal-09} was to detect anisotropy introduced by the compression of a regular point pattern. For this purpose, the mean numbers of points contained in cones centered in the typical point and pointing in different directions were investigated. The motivating data sets were point patterns of bubble centers extracted from tomographic images of polar ice cores. In contrast, \citet{plcpp-15} studied data from neuroscience where the points are believed to be organized in linear columns. Hence, they decided to use a cylinder instead of a cone. These examples illustrate how the development of methods may be triggered by the particular shape of anisotropy that should be detected.
In the current paper, we want to investigate the generality of the two directional versions of the $K$-function. For this purpose, we will apply them to both real and simulated point patterns with different sources and various degrees of anisotropy. In Section~\ref{sec:data} we introduce the data sets used throughout the paper. Section~\ref{sec:summaries} defines the conical and cylindrical $K$-functions. Based on a non-parametric isotropy test, some simulation-based recommendations on the parametrization of these summary statistics are given in Section~\ref{sec:volume}. Finally, in Section~\ref{sec:examples}, we apply these recommendations when comparing the ability of the two functions to detect the anisotropy in the real pyramidal cell and ice data sets as well as in realizations of models mimicking the structure of these data.
\section{Data sets}\label{sec:data}
In this section, we introduce the data sets used for the subsequent analyses. We start by presenting the real data studied in \citet{plcpp-15} and \citet{Redenbachetal-09}, which provided the motivation for the development of the two versions of the directional $K$-function. To allow for an investigation of the performance of the methods under varying degrees of anisotropy, our analysis is extended to simulated data sets. The models are chosen to reproduce the type of anisotropy present in the neuroscience data and the ice data, respectively.
\subsection{Real data sets}\label{sec:realdata}
\subsubsection{Pyramidal cell point patterns} \label{sec:pyramidalData}
The first set of data consists of four samples containing the locations of the pyramidal cells from the Brodmann area 4 of the gray matter of the human brain, collected by the Center for Stochastic Geometry and Advanced Bioimaging, Denmark. According to the minicolumn hypothesis in neuroscience (see e.g.\ \citet{Mountcastle-57} and \citet{RSDRMN-15}), the point patterns are expected to be anisotropic due to the linear arrangement of the cells in a direction perpendicular to the pial surface of the brain, i.e., the $xy$-plane here. For more details on these data sets, see \citet{RSDRMN-15}. A visualization of one sample is shown in Figure \ref{fig1:dataAll}.
\subsubsection{Ice data} \label{sec:IceData}
The second set of data consists of a subset of the samples investigated in \cite{Redenbachetal-09}. The point patterns consist of the center locations of air bubbles extracted from tomographic images of the Talos Dome ice core. The data were provided by the Alfred-Wegener-Institute for Polar and Marine Research, Bremerhaven, Germany. Details on the acquisition and the processing of the data can be found in \cite{Redenbachetal-09}. Here, we consider 14 samples taken from a depth of 505 m where the anisotropy is most prominent. The point patterns can be interpreted as realizations of a regular point process. Anisotropy is introduced by a compression of the point pattern along the $z$-axis. Due to the location of the drilling site for this ice core, isotropy within the $xy$-plane can be assumed. A visualization of one sample is shown in Figure \ref{fig1:dataAll}.
\begin{figure}
\caption{Visualizations of one sample from each data set: the pyramidal cell data, the ice data, a realization of a PLCPP model with a high degree of linearity, and a ball packing compressed by a factor $c = 0.7$.}
\label{fig1:dataAll}
\end{figure}
\subsection{Simulated datasets}\label{sec:simdata}
\subsubsection{Poisson line cluster point processes} \label{sec:data-plcpp}
Motivated by the pyramidal cell data, \citet{plcpp-15} developed a Cox process model for anisotropic spatial point processes called the Poisson line cluster point process (PLCPP). The anisotropy of the realizations of this model is caused by the linear arrangement of the points.
For this purpose, we start with an anisotropic Poisson line process with intensity $\rho_L$ and a given directional distribution of lines. On each line $l_i$ contained in this process, a homogeneous Poisson process $Y_i$ with intensity $\alpha$ is independently generated. Finally, the points of the $Y_i$ are displaced within the plane orthogonal to $l_i$ using, e.g., a zero-mean normal distribution with standard deviation $\sigma$, yielding independent Poisson processes $X_i$ whose superposition forms the PLCPP model $X$. The parameter $\sigma$ controls the distances between the points and the lines. The intensity of $X$, i.e.\ the parameter $\rho$, is equal to the product of the intensity $\rho_L$ of the Poisson line process and the intensity $\alpha$ of the Poisson processes $Y_i$ on the lines. Our investigations are based on PLCPP models with intensity $\rho = 500$, $\alpha = 2.5$, and $\rho_L = 200$, where the lines are parallel to the $z$-axis. We consider a high ($\sigma = 0.001$), medium ($\sigma = 0.01$), low ($\sigma = 0.02$), and very low ($\sigma = 0.04$) degree of linearity. Figure~\ref{fig1:dataAll} shows a realization of a PLCPP model with a high degree of linearity. For the simulation study reported in the following, $m=1000$ realizations were generated for each set of parameters.
\subsubsection{Compressed regular point patterns}\label{sec:regular}
As discussed in \citet{Redenbachetal-09}, the structure of the ice data can be modelled via compression of isotropic regular point processes. To represent different degrees of regularity, we consider both a Mat\'{e}rn hard-core process (low regularity, \cite[Section 6.5.2]{Illianetal-08}) and the center locations of balls in a dense packing simulated using the force-biased algorithm (high regularity, \cite[Section 6.5.5]{Illianetal-08}). In both cases, the intensity was chosen as $\rho = 500$ and the hard-core radius was $R = 0.05$. Anisotropy was then introduced by applying the volume-preserving linear transformation $T_c = \operatorname{diag}(1/\sqrt{c}, 1/\sqrt{c}, c)$, $c\in (0,1]$, to these isotropic regular point patterns. This means that the data are compressed by a factor $0<c<1$ in the $z$-direction while they are isotropically stretched by a factor $1/\sqrt{c}$ in the $xy$-plane. As in the case of the PLCPP models, $m = 1000$ realizations for each model and each set of parameters were generated within the unit cube. Different degrees of compression were realized by choosing $c=0.7,$ $0.8$, and $0.9$. Figure~\ref{fig1:dataAll} shows a realization of a point pattern obtained from a ball packing compressed by a factor $c = 0.7$.
\section{Conical and cylindrical $K$-functions} \label{sec:summaries}
Ripley's $K$-function is a well-known summary statistic which is defined as the mean number of further points within a circle/sphere of radius $r$ centered at a typical point of the point pattern, divided by the intensity. Naturally, anisotropy cannot be detected using this function due to its rotationally symmetric structuring element. \citet{Redenbachetal-09} generalized the 2D directional $K$-function (see e.g.\ \citet{StoyStoy-94}) to the three-dimensional case by replacing the sector of a circle by a double cone.
For a unit vector $\mathbf u$, the conical $K$-function is defined as
\[
K_{\mathbf u, \text{cn}}(r_\text{cn}, \theta)=\frac{1}{\rho^2 |W|}\,\mathrm E\sum_{\mathbf x_1,\mathbf x_2\in \mathbf X}^{\not=}\mathbf1[\mathbf x_1\in W,\mathbf x_2-\mathbf x_1\in C_{\mathbf u}(r_\text{cn},\theta)],\quad 0<r_\text{cn},\,0\leq\theta\leq \frac{\pi}{2},
\]
where $\rho$ is the intensity, and $C_{\mathbf u}(r_\text{cn},\theta)$ denotes a double spherical cone in the direction $\mathbf u$ with a slant height of length $r_\text{cn}$ and an apex angle of size $2\theta$, centered in $0$ (see Figure~\ref{fig2:strucElmnt}). Briefly speaking, ${\rho}K_{\mathbf u, \text{cn}}(r_\text{cn}, \theta)$ is the mean number of further points within a cone $x_0+ C_{\mathbf u}(r_\text{cn},\theta)$ centered at a typical point $x_0$ of the point pattern. In \citet{Redenbachetal-09} the function $K_{\mathbf u, \text{cn}}$ was called the directional $K$-function. Here, we will call it the conical $K$-function to distinguish it from the cylindrical $K$-function introduced below.
\begin{figure}
\caption{Structuring elements of the conical (the double cone) and the cylindrical (the cylinder) $K$-functions. }
\label{fig2:strucElmnt}
\end{figure}
\citet{plcpp-15} introduced a summary statistic, called the cylindrical $K$-function, to detect anisotropy of point patterns with columnar structure. It is a version of the space-time $K$-function \citep{Diggleetal-95} and is defined via
\[
K_{\mathbf u, \text{cl}}(r_\text{cl},h)=\frac{1}{\rho^2|W|}\,\mathrm E\sum_{\mathbf x_1,\mathbf x_2\in \mathbf X}^{\not=}\mathbf1[\mathbf x_1\in W,\mathbf x_2-\mathbf x_1\in Z_{\mathbf u}(r_\text{cl},h)],\quad r_\text{cl},h>0,
\]
where $Z_{\mathbf u}(r_\text{cl},h)$ denotes a cylinder with center $0$, base radius $r_\text{cl}$, and height $2h$ in the direction $\mathbf u$ (see Figure~\ref{fig2:strucElmnt}). Briefly speaking, $\rho K_{\mathbf u, \text{cl}}(r_\text{cl},h)$ is the mean number of further points within a cylinder $x_0+Z_{\mathbf u}(r_\text{cl},h)$ centered at a typical point $x_0$ of the point pattern. For more details on the cylindrical $K$-function, see \citet{plcpp-15}.

Ratio-unbiased non-parametric estimates of the two functions are, respectively, given by
\begin{equation}\label{e:conKest}
\hat K_{\mathbf u, \text{cn}}(r_\text{cn}, \theta) = \frac{1}{\widehat{\rho^2}}\sum_{\mathbf x_1,\mathbf x_2\in \mathbf X\cap W}^{\not=} w(\mathbf x_1,\mathbf x_2) \mathbf1[\mathbf x_2-\mathbf x_1\in C_{\mathbf u}(r_\text{cn},\theta)]
\end{equation}
and
\begin{equation}\label{e:cylKest}
\hat K_{\mathbf u, \text{cl}}(r_\text{cl},h) =\frac{1}{\widehat{\rho^2}}\sum_{\mathbf x_1,\mathbf x_2\in \mathbf X\cap W}^{\not=} w(\mathbf x_1,\mathbf x_2) \mathbf1[\mathbf x_2-\mathbf x_1\in Z_{\mathbf u}(r_\text{cl},h)],
\end{equation}
where $n$ is the number of points in the point pattern, $\widehat{\rho^2}=n(n-1)/|W|^2$ is an unbiased estimate of $\rho^2$ (see e.g.\ \citet{Illianetal-08}), and $w$ is the translation edge correction factor defined as
$$w(\mathbf x_1,\mathbf x_2) = 1/|W\cap W_{\mathbf x_2-\mathbf x_1}|,$$
in which $W_{\mathbf x}$ denotes the translation of the observation window $W$ by the vector $\mathbf x$ (see e.g.\ \citet{StoyStoy-94}). Both estimators can easily be evaluated using a spherical or cylindrical coordinate system, respectively. However, unlike in the two-dimensional case, it is impossible to partition the unit sphere into equally sized cones or cylinders pointing in different directions.
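To fix ideas, the estimator in \eqref{e:cylKest} can be written out directly for a pattern observed in an axis-aligned box window, for which the translation correction $|W\cap W_{\mathbf x_2-\mathbf x_1}|$ has a simple product form. The following Python/NumPy sketch is our own illustration and is not taken from any existing software package; the conical estimator \eqref{e:conKest} is obtained by replacing the cylinder indicator with the double-cone indicator $\|\mathbf d\|\leq r_\text{cn}$ and $|\langle\mathbf d,\mathbf u\rangle|\geq \|\mathbf d\|\cos\theta$.
\begin{verbatim}
import numpy as np

def cylindrical_K_estimate(points, u, r_cl, h, window_lengths):
    """Translation-corrected estimate of the cylindrical K-function.

    points         : (n, 3) array of point coordinates observed in the box W
    u              : unit vector giving the cylinder direction
    r_cl, h        : base radius and half-height of the cylinder
    window_lengths : side lengths (Lx, Ly, Lz) of the axis-aligned box W
    """
    points = np.asarray(points, dtype=float)
    L = np.asarray(window_lengths, dtype=float)
    n = points.shape[0]
    rho2_hat = n * (n - 1) / np.prod(L) ** 2       # n(n-1)/|W|^2
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = points[j] - points[i]
            t = d @ u                              # component along u
            r_perp = np.linalg.norm(d - t * u)     # distance to the cylinder axis
            if abs(t) <= h and r_perp <= r_cl:
                # |W cap W_d| = prod_k (L_k - |d_k|) for an axis-aligned box
                total += 1.0 / np.prod(L - np.abs(d))
    return total / rho2_hat
\end{verbatim}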
However, unlike in the two-dimensional case, it is impossible to partition the unit sphere into equally sized cones or cylinders pointing in different directions. In practice, the directional $K$-functions can be evaluated for a set of directions evenly distributed on the unit sphere. Approaches for deriving such sets of directions are discussed in \cite{Alt11}. If possible, the choice of the number of directions should be based on prior knowledge of the main directions of anisotropy. While the classical summary statistics for point processes, e.g.\ Ripley's $K$-function, depend on one parameter, the summary statistics introduced above depend on two parameters, which makes the investigations more challenging. For the cone, it seems natural to fix the parameter $\theta$ in advance such that the conical $K$-function only depends on the parameter $r_\text{cn}$. In practice, $\theta$ should be chosen depending on the number of directions to be investigated and the intensity of the point pattern. In \cite{Redenbachetal-09}, an angle of $\theta=\frac \pi 4$ was chosen when considering only coordinate directions. For larger sets of directions, $\theta$ should be reduced to avoid overlap of the cones for different directions. Additionally, the angle should be large enough to observe a reasonable number of points within the cones. For the cylinder, the situation is more complicated as there are three ways to expand a cylinder (see Figure~\ref{fig3:cylinders}) depending on the two parameters $r_\text{cl}$ and $h$. A priori, none of these methods seems more natural than the others. In \cite{RSDRMN-15}, the height of the cylinder was fixed while expanding its radius. In the present study, we are interested in a comparison of the cylindrical and the conical $K$-function. Hence, the expansion scenario should be chosen such that both functions behave similarly in some sense. Two possible approaches are discussed in the following section. \begin{figure} \caption{Three ways of expanding a cylinder when detecting the anisotropy in the point patterns: expanding $h$ given a fixed $r_\text{cl}$ (left panel), expanding $r_\text{cl}$ given a fixed $h$ (middle panel), or expanding both $r_\text{cl}$ and $h$ (right panel).} \label{fig3:cylinders} \end{figure} \section{Choice of parametrization}\label{sec:volume} \subsection{Equal volume} The first parametrization is based on the fact that sets of equal volume will, on average, contain a similar number of points. Hence, we suggest parametrizing the functions such that the volumes of the cone and the cylinder are equal. The details are as follows. Recall that $r_{\text{cn}}$ and $r_{\text{cl}}$ refer to the radius of the cone (or the radius of the circumscribed sphere) and the radius of the cylinder, respectively (see Figure~\ref{fig2:strucElmnt}).
Knowing that the volumes of a cylinder (base radius $r_{\text{cl}}$, height $2h$), a cone (base radius $r_{\text{cl}}$, height $h$), and a spherical cap are, respectively, given by \begin{equation*} V_{\text{cl}} = \pi r_{\text{cl}}^2\, 2h,\quad V_{\text{cone}} = \frac{1}{3}\pi r_{\text{cl}}^2 h,\quad V_{\text{cap}} = \frac{\pi d^2}{3}(3r_{\text{cn}} - d), \end{equation*} where $d = r_{\text{cn}} - h$ is the height of the cap, the volume of the double cone (used as the structuring element of the conical $K$-function) is given by \begin{align*} V_{\text{cn}} & = 2\left[V_{\text{cone}} + V_{\text{cap}}\right]\\ & = 2\left[\frac{1}{3}\pi r_{\text{cl}}^2 h + \frac{\pi}{3}(r_{\text{cn}} - h)^2(3r_{\text{cn}} - (r_{\text{cn}} - h))\right]\\ & = \frac{1}{3}\pi r_{\text{cl}}^2\, 2h + \frac{2 \pi}{3}(r_{\text{cn}} - h)^2(2r_{\text{cn}} + h). \end{align*} Using the above formula, those values of $r_{\text{cn}}$, $r_{\text{cl}}$, and $h$ satisfying \begin{equation}\label{volume} r_{\text{cl}}^2\, 2h = (r_{\text{cn}} - h)^2(2r_{\text{cn}} + h) \end{equation} lead to $V_{\text{cn}} = V_{\text{cl}}$, i.e., to equal volumes of the structuring elements of the two functions. Equation \eqref{volume} leaves two degrees of freedom. In practice, it can be accompanied by further constraints such as the choice of an aspect ratio for the cylinder (see below). \subsection{Equal shape} An alternative approach is to require that similar regions of the data are scanned in the sense that the shapes of the structuring elements are similar. This is achieved by placing the cone inside the cylinder as shown in Figure~\ref{fig2:strucElmnt}. In this case, the following equations hold: \begin{equation} \label{AR} \cot(\theta)=\frac{h}{r_{\text{cl}}} \end{equation} and \begin{equation} \label{rrh} r^2_{\text{cn}} = h^2 + r^2_\text{cl}. \end{equation} Following the recommendation given in \citet{plcpp-15} on using an elongated cylinder, i.e.\ where $h>r_\text{cl}$, the right-hand side of equation \eqref{AR} can be considered as an aspect ratio. It is clear that when this ratio is equal to one, no anisotropy is expected to be detected by this function. Taking an aspect ratio $\cot(\theta)=a > 1$ and using equations~\eqref{AR} and \eqref{rrh} results in \begin{equation} \label{rh} h = ar_{\text{cl}} \end{equation} and \begin{equation} \label{rr} r_{\text{cn}} = r_{\text{cl}}\sqrt{a^2 + 1}, \end{equation} which provides us with an alternative relationship between the three parameters $r_{\text{cn}}$, $r_{\text{cl}}$ and $h$. In the following, we will use the parametrization based on equations~\eqref{rh} and \eqref{rr}. \subsection{Isotropy test}\label{sec:isotropytest} \citet{Redenbachetal-09} introduced a non-parametric method to detect anisotropies in the point patterns as follows. Assuming isotropy in the $xy$-plane and knowing that the anisotropy is directed along the $z$-axis, the isotropy test for $m$ replicated point patterns is based on the statistics given by \begin{equation*} T_{xy,i}= \int_{r_1}^{r_2} | \hat{S}_{x,i}(r)-\hat{S}_{y,i}(r)| dr \end{equation*} and \begin{equation*} T_{z,i}= \min \left( \int_{r_1}^{r_2} | \hat{S}_{x,i}(r)-\hat{S}_{z,i}(r)| dr, \int_{r_1}^{r_2} | \hat{S}_{y,i}(r)-\hat{S}_{z,i}(r)| dr \right), \end{equation*} where $[r_1, r_2]$ is a given interval, and $\hat S_x$, $\hat S_y$, and $\hat S_z$ are estimates of a summary statistic (here, either the conical or the cylindrical $K$-function) in the directions of the $x$-, $y$-, and $z$-axis, respectively.
Here, $r_\text{cl}$ or $r_\text{cn}$ is chosen as the integration variable, while the remaining parameters $h$ and $\theta$ are determined by any of the approaches discussed above. In the case of isotropy, these three estimates should behave similarly, while $\hat{S}_z$ should be clearly different from $\hat{S}_x$ and $\hat{S}_y$ if the anisotropy is directed along the $z$-axis. Hence, the null hypothesis of isotropy will be rejected at significance level $\alpha$ if the value of $T_{z,i}$ corresponding to the $i$-th point pattern is larger than $100(1-\alpha)\%$ of the estimated values $T_{xy,j}$, $j=1,\dots,m$. The performance of the test is evaluated using its power, estimated by the proportion of rejections of the null hypothesis in the $1000$ repetitions of the test. Note that the values of $r_1$ and $r_2$ should be chosen depending on the type of anisotropy. We fix $r_1=0$ and will investigate the effect of different choices of $r_2$ on the power of the test (see also \citet{Redenbachetal-09}). When using the equations obtained in the above sections, one should also decide on an appropriate aspect ratio $a$. Figure~\ref{fig4:Power} shows plots of the power of the isotropy test at level 5\% versus the parameter $r_2$ for the cylindrical $K$-function (and the corresponding $r_2$ for the conical $K$-function obtained using \eqref{rr}) for the aspect ratios $a$ from $1.5$ to $3$ with an increment of size $0.5$, based on $m = 1000$ realizations under the PLCPP model introduced in Section~\ref{sec:data-plcpp}. The results indicate that the use of longer cylinders results in larger powers of the isotropy tests. This supports the recommendation given in \citet{plcpp-15} on using an elongated cylinder. In each plot, the maximum is obtained for approximately the same $r_2$ value, no matter which $h$ is chosen. For higher degrees of linearity, the power of the test is higher in general. Furthermore, it is less sensitive to the choice of $r_2$. \begin{figure} \caption{The power of the isotropy test at level 5\% versus the parameter $r_2$ for the realizations of the PLCPP model with (from top left to bottom right) very low, low, medium, and high degree of linearity based on the cylindrical $K$-function. The curves from bottom (black) to top (brown) correspond to the aspect ratios from $a = 1.5$ to $a = 3$ with an increment of size $0.5$.} \label{fig4:Power} \end{figure} \section{Application} \label{sec:examples} Even though the findings presented in the previous section suggest using a cylinder as long as possible, we have chosen an aspect ratio of $a=2$ for the subsequent analyses. The reasons are as follows: When using a very long cylinder, serious edge effects may occur already for small values of $r_{\text{cl}}$, resulting in poor estimates of the cylindrical $K$-function. In addition, increasing the length of the cylinder would mean reducing the angle used for the cone. As we already mentioned, one should make sure that the cone is not too narrow, as in this case it will only contain very few points. \begin{figure} \caption{Means of the estimated values of the cylindrical (black) and conical (red) $K$-functions for the realizations of the PLCPP with $\sigma = 0.04, 0.02, 0.01, 0.001$ (first four panels from top left to bottom right), Mat\'{e}rn hard-core (third row), and random packing of balls (last row), in the direction of the $z$ (solid), $x$ (dotted), and $y$ (dashed) axis. The order in the last two rows corresponds to the factors $c = 0.9, 0.8, 0.7$, from left to right, respectively.
} \label{fig5:MeanCylKDirK} \end{figure} Hence, in the applications, we chose $\theta = 0.4636476$, which corresponds to $a = 2$, i.e.\ the case where the height of the cylinder is twice the diameter of its base. Figure~\ref{fig5:MeanCylKDirK} shows the means of the estimated values of the conical and the cylindrical $K$-functions for 1000 realizations of the simulated data sets introduced in Sections~\ref{sec:data-plcpp} and \ref{sec:regular}. The mean values are obtained using the ratio estimation method described in \citet{badRep-93}. The $x$-axis of the plot shows the values for $r_\text{cl}$ (and the corresponding parameters $r_\text{cn}$ and $h$ are obtained from equations~\eqref{rh} and \eqref{rr} to get comparable scales). With the exception of the Mat\'{e}rn case, where the anisotropy is only weakly pronounced, both functions are able to detect the anisotropy. However, it is not easy to see which function is more sensitive to the structure of the anisotropy. Therefore, we made a comparison based on the power of the isotropy tests as follows. \begin{figure} \caption{The powers of the isotropy test at level 5\% as a function of $r_2$ with respect to $r_\text{cl}$ when using the cylindrical (black) and conical (red) $K$-function, for the realizations of the PLCPP with $\sigma = 0.04, 0.02, 0.01, 0.001$ (first four panels from top left to bottom right), Mat\'{e}rn hard-core (third row), and random packing of balls (last row). The last two rows correspond to the factors $c = 0.9, 0.8, 0.7$, from left to right, respectively.} \label{fig6:power_both} \end{figure} The first four panels of Figure~\ref{fig6:power_both} show the plots of the powers of the isotropy test at a $5\%$ significance level using $m = 1000$ simulations under the PLCPP models with four degrees of linearity as mentioned in Section~\ref{sec:data-plcpp}. The plots indicate that the power of the isotropy test is slightly higher when using the cylindrical $K$-function than when using the conical one. The shape of the two curves is similar in all plots. In contrast, the last two rows of this figure show that the conical $K$-function is more powerful than the cylindrical one in detecting the anisotropy caused by compression of the regular point patterns when choosing $r_2$ close to the hard-core radius, while the cylindrical $K$-function is better for large $r_2$. A further observation is that, for point patterns with columnar structure, the power of the test attains its maximum when the structuring element captures a whole column. As an example, in the fourth panel, which corresponds to the PLCPP model with $\sigma = 0.001$, the maximum is attained close to the expected diameter of a cylindrical cluster of points, approximately $4\sigma = 0.004$ (by definition of the PLCPP models). The same pattern is observed for the other three values of $\sigma$ as well. In the case of the regular point patterns, the maximum is obtained for $r_2$ close to the hard-core radius $R=0.05$, which is in line with the findings in \cite{Redenbachetal-09}. Figure~\ref{fig7:KestReal} shows the estimated $K$-functions for samples of the pyramidal cell and the ice data sets introduced in Sections~\ref{sec:IceData} and \ref{sec:pyramidalData} using the parametrization obtained in equations~\eqref{rh} and \eqref{rr}. As expected based on the power of the isotropy test, the conical $K$-function is more powerful than the cylindrical one in detecting the anisotropy in the ice data.
On the other hand, the cylindrical $K$-function is stronger than the conical one in detecting the anisotropy caused by the linear arrangement of the pyramidal cells. Note that we obtained the same behavior when using the remaining samples. \begin{figure} \caption{Estimates of the cylindrical (black) and the conical (red) $K$-function in the direction of the $z$ (solid), $x$ (dotted), and $y$ (dashed) axis for a sample of the pyramidal cell data (left panel) and the ice data (right panel).} \label{fig7:KestReal} \end{figure} \section{Discussion}\label{sec:discussion} In this paper, we have presented a comparison of two directional versions of Ripley's $K$-function using a cone or a cylinder as structuring element. We derived a parametrization to make both functions comparable. Then, both functions were applied to data sets with different sources of anisotropy. The cylindrical $K$-function is generally more powerful than the conical one in the case of columnar anisotropy and vice versa in the case of compression. In situations where the anisotropy is clearly pronounced, it can be detected by both functions; nevertheless, the cylindrical $K$-function remains clearly more powerful than the conical $K$-function in detecting columnarity. Our application examples show quite different model geometries: points clustered in linear patterns in the minicolumn data and compressed regular point patterns in the ice data. In order to get a comparable testing scenario, we decided to use the nonparametric setting suggested in \cite{Redenbachetal-09}. While this approach is quite general, it requires replicated data, which are not always available in practice. Nevertheless, an investigation of plots of the directional $K$-functions for different directions may give an indication of existing anisotropies. In cases where a suitable model for the data is available, the test could be replaced by a model-based Monte Carlo test. The examples given in this paper emphasize the importance of an appropriate choice of the combination of the parameters $(r_\text{cl}, h)$ and $(r_\text{cn}, \theta)$ as well as the integration interval in the test. An unfavorable choice may result in a poor performance of the functions in detecting the anisotropy of a point pattern. In practical situations, prior information on the construction of the anisotropy, e.g.\ the diameter of the clusters of points in the case of the pyramidal cells or the hard-core radius in the regular data, can be used to determine interesting ranges of $r$ values. Throughout this paper, we assume that the main anisotropy directions are known and fixed. An approach for estimating the main directions in the case of the cylindrical $K$-function was discussed in \citet{plcpp-15}. A similar investigation for the ice data has been carried out in \citet{Rajalaetal-16}. \subsection*{Acknowledgments} This project was supported by the Danish Council for Independent Research | Natural Sciences, grant 12-124675, `Mathematical and Statistical Analysis of Spatial Data', by the Centre for Stochastic Geometry and Advanced Bioimaging, funded by a grant from the Villum Foundation, and by the Center for Mathematical and Computational Modelling (CM)$^\text{2}$ funded by the state of Rhineland-Palatinate, Germany. We thank Jens Randel Nyengaard, Karl-Anton Dorph Petersen, and Ali H.\ Rafati at the Center for Stochastic Geometry and Bioimaging (CSGB), Denmark, for collecting the 3D pyramidal cell data, and Johannes Freitag, Alfred-Wegener Institute Bremerhaven, for providing the polar ice data. \end{document}
\begin{document} \title{Vertical slice modelling of nonlinear Eady waves using a compatible finite element method} \author[1,*]{Hiroe Yamazaki} \author[1]{Jemma Shipton} \author[2]{Michael J. P. Cullen} \author[1,3]{Lawrence Mitchell} \author[1]{Colin J. Cotter} \affil[1]{Department of Mathematics, Imperial College London, UK} \affil[2]{Met Office, Exeter, UK} \affil[3]{Department of Computing, Imperial College London, UK} \affil[*]{Correspondence to: \texttt{[email protected]}} \maketitle \begin{abstract} A vertical slice model is developed for the Euler-Boussinesq equations with a constant temperature gradient in the direction normal to the slice (the Eady-Boussinesq model). The model is a solution of the full three-dimensional equations with no variation normal to the slice, which is an idealized problem used to study the formation and subsequent evolution of weather fronts. A compatible finite element method is used to discretise the governing equations. To extend the Charney-Phillips grid staggering in the compatible finite element framework, we use the same node locations for buoyancy as the vertical part of velocity and apply a transport scheme for a partially continuous finite element space. For the time discretisation, we solve the semi-implicit equations together with an explicit strong-stability-preserving Runge-Kutta scheme applied to all of the advection terms. The model reproduces several quasi-periodic lifecycles of fronts despite the presence of strong discontinuities. An asymptotic limit analysis based on the semi-geostrophic theory shows that the model solutions are converging to a solution in cross-front geostrophic balance. The results are consistent with previous results using finite difference methods, indicating that the compatible finite element method is performing as well as finite difference methods for this test problem. We observe dissipation of kinetic energy of the cross-front velocity in the model due to the lack of resolution at the fronts, even though the energy loss is not likely to account for the large gap in the strength of the fronts between the model result and the semi-geostrophic limit solution. \end{abstract} \noindent \textbf{keywords:} mixed finite elements; frontogenesis; Eady model; asymptotic convergence; semi-geostrophic; numerical weather prediction \section{Introduction} In the last two decades, finite element methods have become a popular discretisation approach for numerical weather prediction (NWP). The main focus has been on spectral element or discontinuous Galerkin (DG) methods \citep{fournier2004spectral, thomas2005ncar, dennis2012cam, kelly2012continuous, kelly2013implicit, marras2013simulations, brdar2013comparison, bao2015horizontally, marras2015review}. Another track of research, which we continue here, has been on compatible finite element methods \citep[e.g.][]{cotter2012mixed, staniforth2013analysis, cotter2014finite, mcrae2014energy, natale2016compatible}. This work is motivated by the need to move away from the conventional latitude-longitude grids, whilst retaining properties of the Arakawa C-grid staggered finite difference method. The compatible finite element method is a family of mixed finite element methods where different finite element spaces are selected for different variables. Compatible finite element methods are built from finite element spaces that have differential operators such as grad and curl that map from one space to another.
This embedded property makes the compatible finite element method analogous to the Arakawa C-grid staggered finite difference method \citep[][]{arakawa1977computational} with extra flexibility in the choice of discretisation to optimize the ratio between global velocity degrees of freedom (DoFs) and global pressure DoFs for the sake of avoiding spurious modes \citep[][]{cotter2012mixed}. In addition, it allows the use of arbitrary grids with no requirement of orthogonality. In this study, the compatible finite element method is applied to the Euler-Boussinesq equations with a constant temperature gradient in the $y$-direction: the Eady-Boussinesq vertical slice model \citep[][]{hoskins1972atmospheric}. The model is a solution of the full three-dimensional equations with no variation normal to the slice. As the domain of the vertical slice model consists of a two-dimensional slice, it can be run much more quickly on a workstation than a full three-dimensional model. This makes the model ideal for numerical studies and test problems for NWP models. There have been many studies on this idealized problem to examine the formation and subsequent evolution of weather fronts \citep[e.g.][]{williams1967atmospheric, nakamura1989nonlinear, nakamura1994nonlinear, snyder1993frontal, budd2013monge, visram2014framework, visram2014asymptotic}. From a mathematical perspective, the connections with optimal transportation have been exposed, leading to new numerical methods and analytical insight (see \citet{cullen2007modelling} for a review), whilst \citet{cotter2013variational} considered the geometric structure and conservation laws of this slice model. \citet{visram2014framework} presented a framework for evaluating model error in terms of asymptotic convergence in the Eady model. The framework is based on the semi-geostrophic (SG) theory in which hydrostatic balance and geostrophic balance of the out-of-slice component of the wind are imposed on the equations. The SG equations provide a suitable limit for asymptotic convergence in the Eady model as the Rossby number decreases to zero \citep[][]{cullen2008comparison}. We use this framework to validate the numerical implementation and assess the long term performance of the model developed using the compatible finite element method. The main goal of this paper is to demonstrate the compatible finite element approach for NWP in the context of this frontogenesis test case. A challenge in using the compatible finite element method in NWP models is the implementation of (the finite element version of) the Charney-Phillips grid staggering in the vertical direction that is used in many current operational forecasting models, such as the Met Office Unified Model \citep[][]{wood2014inherently}. This requires the temperature space to be a tensor product of discontinuous functions in the horizontal direction and continuous functions in the vertical direction \citep[][]{cotter2016embedded}. Therefore we propose a new advection scheme for a partially continuous finite element space and use it to discretise the temperature equation. The other key features of the model are: i) an upwind DG method is applied for the momentum equations; ii) the semi-implicit equations are solved together with an explicit strong-stability-preserving Runge-Kutta (SSPRK) scheme applied to all of the advection terms; iii) a balanced initialisation is introduced to enforce hydrostatic and geostrophic balances in the initial fields. The rest of the paper is structured as follows.
Section \ref{model_description} provides the model description, including the formulation of the Eady problem, discretisation of the governing equations in time and space, and settings of the frontogenesis experiment. In section \ref{results}, we present the results of the frontogenesis experiments using the developed model. Here we evaluate model error in terms of the SG limit analysis as well as energy dynamics. Finally, in section \ref{conclusion} we provide a summary and outlook. \section{The incompressible Euler-Boussinesq Eady slice model}\label{model_description} \subsection{Governing equations} In this section, we describe the model equations for the vertical slice Eady problem. The equations are as described in \citet{visram2014framework}, but we repeat them here to establish notation. To derive a set of equations for the vertical slice Eady problem, we start from the three-dimensional incompressible Euler-Boussinesq equations with rigid-lid conditions on the upper and lower boundaries, \begin{eqnarray} \frac{\partial \MM{u}}{\partial t} + (\MM{u} \cdot \nabla)\MM{u} + f \MM{\hat{z}} \times \MM{u} &=& -\frac{1}{\rho_{0}} \nabla p + \frac{g}{\theta_{0}}\theta \MM{\hat{z}},\\ \frac{\partial \theta}{\partial t} + (\MM{u} \cdot \nabla)\theta &=& 0,\\ \nabla \cdot \MM{u} &=& 0, \end{eqnarray} where $\MM{u} = (u, v, w)$ is the velocity vector, $\nabla = (\partial_x, \partial_y, \partial_z)$ is the gradient operator, and $\MM{\hat{z}}$ is a unit vector in the $z$-direction; $p$ is the pressure, $\theta$ is the potential temperature, and $g$ is the acceleration due to gravity; $\rho_{0}$ and $\theta_{0}$ are reference density and potential temperature values at the surface, respectively. The rotation frequency $f$ is constant. In the Eady slice model, we consider perturbations to prescribed background temperature and pressure profiles, $\bar{\theta}(y,z)$ and $\bar{p}(y,z)$, respectively. All perturbation variables are then assumed to be independent of $y$ and are denoted with primes as follows, \begin{eqnarray} \theta &=& \bar{\theta} (y, z) + \theta'(x, z, t), \\ p &=& \bar{p}(y,z) + p'(x, z, t). \end{eqnarray} Following \citet{snyder1993frontal}, we select a background profile in geostrophic balance: \begin{eqnarray} \bar{\theta}(y, z) &=& \frac{\theta_{0}}{g}(-f \Lambda y + N^{2}z), \label{background_theta}\\ -\frac{1}{\rho_{0}}\frac{\partial \bar{p}}{\partial y} &=& f\Lambda \left(z-\frac{H}{2}\right), \end{eqnarray} where $\Lambda$ is the constant vertical shear, $N$ is the Brunt-V\"ais\"al\"a frequency, and $H$ is the height of the domain. The variation in $y$ of the background pressure is therefore written as \begin{eqnarray} \frac{\partial \bar{p}}{\partial y} &=& \frac{\rho_{0}g}{\theta_{0}} \frac{\partial \bar{\theta}}{\partial y}\left(z-\frac{H}{2}\right), \label{dpdy} \end{eqnarray} where \begin{eqnarray} \frac{\partial \bar{\theta}}{\partial y} &=& -\frac{\theta_{0}f\Lambda}{g} = \mbox{const}.
\end{eqnarray} Similarly, the variation in $z$ of the background pressure is given by the hydrostatic balance as \begin{eqnarray} \frac{\partial \bar{p}}{\partial z} &=& \frac{\rho_{0}g}{\theta_{0}}\bar{\theta}.\label{dpdz} \end{eqnarray} Substituting in the background profiles \eqref{dpdy} and \eqref{dpdz} and $\partial/\partial y = 0$ for all perturbation variables, we obtain the nonhydrostatic, incompressible Euler-Boussinesq Eady equations in the vertical slice with rigid-lid conditions on the upper and lower boundaries, \begin{eqnarray} \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + w\frac{\partial u}{\partial z} - fv &=& -\frac{1}{\rho_{0}}\frac{\partial p'}{\partial x},\\ \frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + w\frac{\partial v}{\partial z} + fu &=& -\frac{g}{\theta_{0}}\frac{\partial \bar{\theta}} {\partial y}\left(z-\frac{H}{2}\right),\\ \frac{\partial w}{\partial t} + u\frac{\partial w}{\partial x} + w\frac{\partial w}{\partial z} &=& - \frac{1}{\rho_{0}}\frac{\partial p'}{\partial z} + \frac{g}{\theta_0}\theta' ,\\ \frac{\partial \theta'}{\partial t} + u\frac{\partial \theta'}{\partial x} + w\frac{\partial \theta'}{\partial z} + v\frac{\partial \bar{\theta}}{\partial y} + w\frac{\partial \bar{\theta}}{\partial z} &=& 0,\\ \frac{\partial u}{\partial x} + \frac{\partial w}{\partial z} &=& 0, \end{eqnarray} where all variables $u, v, w, \theta'$ and $p'$ are functions of $(x, z, t)$. Now, we redefine the velocity vector and the gradient operator as those in the vertical slice, \begin{eqnarray} \MM{u} = (u, w), \ \ \ \ \nabla = (\partial_x, \partial_z), \end{eqnarray} and introduce the in-slice buoyancy and the background buoyancy, \begin{eqnarray} b' = \frac{g}{\theta_{0}}\theta', \ \ \ \ \bar{b}=\frac{g}{\theta_{0}}\bar{\theta}, \label{def_b} \end{eqnarray} respectively. Finally, dropping the primes gives the vector form of the Eady slice model equations as \begin{eqnarray} \frac{\partial \MM{u}}{\partial t} + (\MM{u} \cdot \nabla)\MM{u} - fv\hat{\MM{x}} &=& -\frac{1}{\rho_{0}}\nabla p + b\hat{\MM{z}}, \label{ueq}\\ \frac{\partial v}{\partial t} + \MM{u} \cdot \nabla v + f\MM{u} \cdot \hat{\MM{x}} &=& -\frac{\partial \bar{b}} {\partial y}\left(z-\frac{H}{2}\right),\label{veq}\\ \frac{\partial b}{\partial t} + \MM{u} \cdot \nabla b + \frac{\partial \bar{b}}{\partial y} v + N^{2} \MM{u} \cdot \hat{\MM{z}} &=& 0,\label{beq}\\ \nabla \cdot \MM{u} &=& 0 \label{peq}, \end{eqnarray} where $\MM{\hat{x}}$ is a unit vector in the $x$-direction, and in \eqref{beq} we used the relationship \begin{eqnarray} N^{2} = \frac{g}{\theta_{0}}\frac{\partial \bar{\theta}}{\partial z} = \frac{\partial \bar{b}}{\partial z}, \end{eqnarray} obtained from \eqref{background_theta} and \eqref{def_b}. The solutions to the slice model equations \eqref{ueq} to \eqref{peq} are equivalent to a $y$-independent solution of the full three dimensional equations. The slice model conserves the total energy \begin{eqnarray} E = K_{u} + K_{v} + P, \label{total_energy} \end{eqnarray} where \begin{eqnarray} K_{u} &=& \rho_{0} \int_{\Omega} \frac{1}{2} |\MM{u}|^{2} \ \diff x, \label{kinetic_u}\\ K_{v} &=& \rho_{0} \int_{\Omega} \frac{1}{2} v^{2} \ \diff x,\label{kinetic_v}\\ P &=& -\rho_{0} \int_{\Omega} b \left(z-\frac{H}{2}\right) \ \diff x, \label{potential} \end{eqnarray} are the kinetic energy from the in-slice velocity components, kinetic energy from the out-of-slice velocity component, and potential energy, respectively. 
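As an aside, once discrete fields are available the energy integrals \eqref{kinetic_u} to \eqref{potential} can be assembled directly from their definitions. The following Python sketch illustrates this for a Firedrake-style set-up (an illustrative sketch only, not the code used in this study, and it assumes that \texttt{u}, \texttt{v} and \texttt{b} are \texttt{Function} objects on the spaces introduced in the next subsection):
\begin{verbatim}
from firedrake import *  # illustrative sketch assuming a Firedrake set-up

def energy_diagnostics(mesh, u, v, b, rho0=1.0, H=10.0e3):
    """Assemble K_u, K_v and P from their definitions (sketch only)."""
    z = SpatialCoordinate(mesh)[1]          # vertical coordinate of the slice
    K_u = assemble(Constant(rho0) * 0.5 * inner(u, u) * dx)
    K_v = assemble(Constant(rho0) * 0.5 * v * v * dx)
    P = assemble(-Constant(rho0) * b * (z - Constant(H) / 2) * dx)
    return K_u, K_v, P, K_u + K_v + P       # last entry is E
\end{verbatim}
Monitoring $E = K_{u} + K_{v} + P$ in this way provides a simple diagnostic for the energy dynamics discussed in section \ref{results_energy}.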
\subsection{Finite element discretisation} \subsubsection{Compatible finite element spaces} In this study, a compatible finite element method is used to discretise the governing equations. First we take our computational domain, denoted by $\Omega$, to be a rectangle in the vertical plane with a periodic boundary condition in the $x$-direction, and rigid-lid conditions on the upper and lower boundaries. We refer to the combination of the upper and lower boundaries as $\partial\Omega$. Next we choose finite element spaces with the following properties: \begin{equation} \begin{CD} \underbrace{\mathbb{V}_0(\Omega)}_{\mbox{\small Continuous}} @> \nabla^\perp >> \underbrace{ \mathbb{V}_1(\Omega)}_{\mbox{\small Continuous\ normal\ components}} @>\nabla\cdot >> \underbrace{ \mathbb{V}_2(\Omega)}_{\mbox{\small Discontinuous}}, \end{CD} \label{spaces} \end{equation} where $\nabla^\perp = (-\partial_{z}, \partial_{x})$, $ \mathbb{V}_0$ contains scalar-valued continuous functions, $ \mathbb{V}_1$ contains vector-valued functions with continuous normal components across element boundaries, and $ \mathbb{V}_2$ contains scalar-valued functions that are discontinuous across element boundaries. The use of these spaces may be considered as an extension of the Arakawa-C horizontal grid staggering in finite difference methods. On quadrilateral elements, \citet{cotter2012mixed} advocated the choice ($ \mathbb{V}_0$, $ \mathbb{V}_1$, $ \mathbb{V}_2$) = (CG$_k$, RT$_{k-1}$, DG$_{k-1}$) for $k > 0$, where CG$_k$ denotes the continuous finite element space of polynomial degree $k$, RT$_k$ denotes the quadrilateral Raviart-Thomas space of polynomial degree $k$, and DG$_k$ denotes the discontinuous finite element space of polynomial degree $k$. This set of spaces ensures the ratio between global velocity DoFs and global pressure DoFs to be exactly 2:1, which helps to avoid spurious modes. Figure~\ref{fig:space-nodes} provides diagrams in the vertical plane showing the nodes for the three spaces for the cases $k = 1$ and $k = 2$. \begin{figure} \caption{Diagrams showing the nodes for the finite element spaces ($ \mathbb{V}_0$, $ \mathbb{V}_1$, $ \mathbb{V}_2$) = (CG$_k$, RT$_{k-1}$, DG$_{k-1}$) on quadrilaterals in the vertical plane. Circles denote scalar nodes, whilst arrows denote normal and tangential components of a vector. Normal components are continuous across element boundaries. Since tangential components are not required to be continuous, these values are not shared by neighbouring elements. (a) From left to right: $ \mathbb{V}_0$, $ \mathbb{V}_1$ and $ \mathbb{V}_2$ with $k$ = 1. (b) From left to right: $ \mathbb{V}_0$, $ \mathbb{V}_1$ and $ \mathbb{V}_2$ with $k$ = 2.} \label{fig:space-nodes} \end{figure} We then restrict the model variables to suitable function spaces. First the velocity and pressure are defined as \begin{eqnarray} \quad \MM{u}\in \mathring{\mathbb{V}}_1, \quad p \in \mathbb{V}_2, \end{eqnarray} where $\mathring{\mathbb{V}}_1$ is the subspace defined by \begin{equation} \mathring{\mathbb{V}}_1 = \left\{ \MM{u}\in \mathbb{V}_1:\MM{u}\cdot\MM{n}=0 \mbox{ on }\partial\Omega\right\}. \end{equation} To be consistent with the three-dimensional Arakawa-C grid staggering, we can choose $v$ from the same space as $p$ as \begin{eqnarray} \quad v \in \mathbb{V}_2. 
\end{eqnarray} There are two main options for arranging the temperature/buoyancy in finite difference models: the Lorenz grid (temperature collocated with pressure), and the Charney-Phillips grid (temperature collocated with vertical velocity). To mimic the Lorenz grid, we can simply choose $b$ from the pressure space $\mathbb{V}_2$. In this study, we use the Charney-Phillips grid since it avoids spurious hydrostatic pressure modes. For this purpose, we introduce a scalar space $\mathbb{V}_b$ which is obtained by the tensor product of the DG$_{k-1}$ space in the horizontal direction and the CG$_{k}$ space in the vertical direction. As shown in Figure~\ref{fig:vertical-nodes}, $\mathbb{V}_b$ has the same node locations as the vertical part of $ \mathbb{V}_1$. Then we choose the buoyancy as \begin{eqnarray} \quad b \in \mathbb{V}_b. \end{eqnarray} This constructs the extension of the Charney-Phillips staggering to compatible finite element spaces. \citet{natale2016compatible} showed that this choice of finite element spaces leads to a one-to-one mapping between pressure and buoyancy in the hydrostatic balance equation and therefore, as in the finite difference models using the Charney-Phillips grid, avoids spurious hydrostatic pressure modes. Since the space $ \mathbb{V}_b$ is discontinuous in the horizontal direction and continuous in the vertical direction, we need a transport scheme for a partially continuous finite element space, which we detail in the next subsection. \begin{figure} \caption{Diagrams showing the nodes for (a) the vertical part of $ \mathbb{V}_1$ (left) and $ \mathbb{V}_b$ (right) with $k$ = 1, and (b) those with $k$ = 2. Circles denote scalar nodes, whilst arrows denote normal and tangential components of a vector.} \label{fig:vertical-nodes} \end{figure} \subsubsection{Spatial discretisation}\label{space} We now use the compatible finite element spaces introduced above to discretise the model equations \eqref{ueq} to \eqref{peq}. Here we start with the discretisation of the in-slice velocity equation \eqref{ueq}. First we rewrite the advection term as \begin{equation} (\MM{u}\cdot\nabla)\MM{u} = (\nabla^{\perp}\cdot\MM{u})\MM{u}^{\perp}+\frac{1}{2}\nabla|\MM{u}|^{2}. \label{invariant} \end{equation} Then, taking \eqref{ueq}, dotting with a test function $\MM{w} \in \mathring{\mathbb{V}}_1$, and integrating over the domain gives \begin{eqnarray} \int_\Omega\MM{w}\cdot \frac{\partial \MM{u}}{\partial t} \diff x + \int_\Omega\MM{w}\cdot(\nabla^{\perp}\cdot\MM{u})\MM{u}^{\perp}\diff x = \int_\Omega\MM{w}\cdot fv\MM{\hat{x}} \diff x - \int_\Omega \MM{w} \cdot \nabla \left(\frac{p}{\rho_{0}} + \frac{1}{2}|\MM{u}|^{2}\right) \diff x + \int_\Omega \MM{w}\cdot b\hat{\MM{z}} \diff x, \ \ \forall\MM{w}\in \mathring{\mathbb{V}}_1. \label{udis} \end{eqnarray} Recalling that $\MM{u}$ is in $ \mathring{\mathbb{V}}_1$, $\nabla^{\perp}\cdot\MM{u}$ in the second term is not generally defined, since the tangential component of $\MM{u}$ is not continuous across element boundaries in general. We resolve this by integrating the term by parts. For the contribution to the integral from each element $e$ we obtain \begin{equation} \int_e \MM{w}\cdot(\nabla^\perp\cdot\MM{u})\MM{u}^\perp \diff x = -\int_e \nabla^\perp(\MM{w}\cdot\MM{u}^\perp)\cdot\MM{u} \diff x +\int_{\partial e}\MM{n}^\perp\cdot\tilde{\MM{u}}\MM{w}\cdot\MM{u}^\perp \diff S, \end{equation} where $\tilde{\MM{u}}$ is the upwind value of $\MM{u}$ on the element boundary $\partial e$. 
Summing over all elements, the advection term becomes \begin{equation} \int_\Omega \MM{w}\cdot(\nabla^\perp\cdot\MM{u})\MM{u}^\perp \diff x = -\int_\Omega \nabla^\perp(\MM{w}\cdot\MM{u}^\perp)\cdot\MM{u} \diff x +\int_\Gamma \jump{\MM{w}\cdot\MM{u}^\perp}^{\perp}\cdot\tilde{\MM{u}} \diff S, \label{uadv_int} \end{equation} where $\Gamma$ is the set of interior facets in the finite element mesh with the two sides of each facet arbitrarily labelled by + and −, the jump operator is defined by \begin{eqnarray} \jump{q} &=& q^{+}\MM{n}^{+} + q^{-}\MM{n}^{-}, \\ \jump{\MM{v}} &=& \MM{v}^{+}\cdot\MM{n}^{+} + \MM{v}^{-}\cdot\MM{n}^{-}, \end{eqnarray} for any scalar $q$ and vector $\MM{v}$, and $\tilde{\MM{u}}$ is evaluated on the upwind side as \begin{eqnarray} \tilde{\MM{u}} = \left\{ \begin{array}{l} \MM{u}^{+} \ \ \ \ \mbox{if} \ \MM{u}\cdot\MM{n}^{+} < 0, \\ \MM{u}^{-} \ \ \ \ \mbox{otherwise.} \end{array} \right. \end{eqnarray} A variational derivation and numerical analysis of the discretisation of this term can be found in \citet{natale2016variational}. Turning attention to the pressure gradient term $\nabla \left(\frac{p}{\rho_{0}} + \frac{1}{2}|\MM{u}|^{2}\right)$, we also integrate by parts. The discrete form of the in-slice velocity equation becomes \begin{eqnarray} \int_{\Omega}\MM{w}\cdot \frac{\partial \MM{u}}{\partial t} \diff x &=& \int_\Omega \nabla^\perp(\MM{w}\cdot \MM{u}^{\perp})\cdot\MM{u} \diff x + \int_{\Omega} \nabla\cdot\MM{w} \left(\frac{p}{\rho_{0}} + \frac{1}{2}|\MM{u}|^{2}\right) \diff x \nonumber \\ &+& \int_{\Omega}\MM{w}\cdot fv\MM{\hat{x}} \diff x + \int_{\Omega} \MM{w}\cdot b\hat{\MM{z}} \diff x - \int_\Gamma \jump{\MM{w}\cdot\MM{u}^\perp}^\perp\cdot\tilde{\MM{u}} \diff S, \ \ \forall\MM{w}\in \mathring{\mathbb{V}}_1. \label{ueq_int} \end{eqnarray} The out-of-slice velocity space $\mathbb{V}_2$ is discontinuous. An upwind DG treatment of \eqref{veq} leads to \begin{eqnarray} \int_{\Omega}\phi\frac{\partial v}{\partial t} \diff x - \int_\Omega \nabla \cdot (\phi \MM{u}) v \diff x + \int_\Gamma \jump{\phi \MM{u}} \tilde{v} \diff S + \int_{\Omega} \phi f \MM{u} \cdot \hat{\MM{x}} \diff x + \int_{\Omega} \phi \frac{\partial \bar{b}} {\partial y} \left(z-\frac{H}{2}\right) \diff x = 0, \ \ \forall\phi\in \mathbb{V}_2, \label{veq_int} \end{eqnarray} where $\tilde{v}$ denotes the upwind value of $v$. We now describe how we discretise the buoyancy equation \eqref{beq}. Recall that $b$ is in the finite element space $ \mathbb{V}_b$, which is obtained by the tensor product of a discontinuous finite element space in the horizontal direction with a continuous finite element space in the vertical direction. Therefore we propose a blend of an upwind DG method in the horizontal direction and a Streamline Upwind Petrov-Galerkin (SUPG) method in the vertical direction. First, multiplying the equation \eqref{beq} by a test function $\gamma \in \mathbb{V}_b$ and integrating it over each column $C$ gives \begin{eqnarray} \int_{C} \gamma \frac{\partial b}{\partial t} \diff x + \int_{C} \gamma \MM{u} \cdot \nabla b \diff x + \int_{C} \gamma \frac{\partial \bar{b}}{\partial y} v \diff x + \int_{C} \gamma N^{2}w \diff x = 0, \ \ \forall\gamma\in \mathbb{V}_b.
\end{eqnarray} To obtain the DG formulation, we apply integration by parts in each column, \begin{eqnarray} \int_{C} \gamma \frac{\partial b}{\partial t} \diff x - \int_{C} \nabla \cdot (\gamma \MM{u}) b \diff x + \int_{C} \gamma \frac{\partial \bar{b}}{\partial y} v \diff x + \int_{C} \gamma N^{2} w \diff x + \int_{\partial C} \gamma \MM{u} \cdot \MM{n} \tilde{b} \diff S = 0, \ \ \forall\gamma\in \mathbb{V}_b, \end{eqnarray} where $\tilde{b}$ denotes the upwind value of $b$ on the column boundary $\partial C$. Integrating by parts again gives \begin{eqnarray} \int_{C} \gamma \frac{\partial b}{\partial t} \diff x + \int_{C} \gamma \MM{u} \cdot \nabla b \diff x + \int_{C} \gamma \frac{\partial \bar{b}}{\partial y} v \diff x + \int_{C} \gamma N^{2} w \diff x + \int_{\partial C} \gamma \MM{u} \cdot \MM{n} \tilde{b} - \gamma \MM{u} \cdot \MM{n} b \diff S = 0, \ \ \forall\gamma\in \mathbb{V}_b, \end{eqnarray} where the new boundary term contains the value of $b$ on the interior of the column. Summing over the whole domain, we obtain \begin{eqnarray} \int_{\Omega} \gamma \frac{\partial b}{\partial t} \diff x + \int_{\Omega} \gamma \MM{u} \cdot \nabla b \diff x + \int_{\Omega} \gamma \frac{\partial \bar{b}}{\partial y} v \diff x + \int_{\Omega} \gamma N^{2} w \diff x + \int_{\Gamma_v} \jump{\gamma \MM{u}} \tilde{b} - \jump{\gamma \MM{u} b} \diff S = 0, \ \ \forall\gamma\in \mathbb{V}_b, \end{eqnarray} where $\Gamma_v$ denotes the set of vertical facets of each column. We then apply the SUPG method in the vertical direction by replacing the test function $\gamma$ with $\gamma + \tau \gamma_z$, where $\gamma_z$ denotes the vertical derivative of $\gamma$: \begin{eqnarray} \int_{\Omega} (\gamma + \tau \gamma_z) \frac{\partial b}{\partial t} \diff x + \int_{\Omega} (\gamma + \tau \gamma_z) \MM{u} \cdot \nabla b \diff x + \int_{\Omega} \gamma \frac{\partial \bar{b}}{\partial y} v \diff x + \int_{\Omega} (\gamma + \tau \gamma_z) N^{2} w \diff x \nonumber \\ + \int_{\Gamma_v} \jump{(\gamma + \tau \gamma_z) \MM{u}} \tilde{b} - \jump{(\gamma + \tau \gamma_z) \MM{u} b} \diff S = 0, \ \ \forall\gamma\in \mathbb{V}_b. \label{beq_int} \end{eqnarray} Here $\tau$ is an upwinding coefficient \begin{eqnarray} \tau = c \Delta t \MM{u} \cdot \hat{\MM{z}}, \end{eqnarray} where $\Delta t$ is the time step and the constant $c$ is set at $1/15^{\frac{1}{2}}$ following \citet{raymond1976selective}. Finally we discretise the continuity equation \eqref{peq}. Multiplying by a test function $ \sigma \in \mathbb{V}_2$ and integrating it over the domain $\Omega$ gives \begin{eqnarray} \int_\Omega \sigma \nabla \cdot \MM{u} \diff x = 0, \forall \sigma\in \mathbb{V}_2. \label{cont_int} \end{eqnarray} Since $\nabla \cdot \MM{u}$ can be defined globally in $ \mathbb{V}_2$, the projection of \eqref{cont_int} is trivial, i.e., the incompressible condition is satisfied exactly under this discretisation. \subsubsection{Time discretisation}\label{time_discretisation} Now we discretise the equations \eqref{ueq_int}, \eqref{veq_int} and \eqref{beq_int} in time using a semi-implicit time-discretisation scheme. This is most easily described as a fixed number of iterations for a Picard iteration scheme applied to a fully implicit time integration scheme. The implicit time integration scheme is obtained by applying a (possibly off-centred) implicit time discretisation average to all of the forcing terms, as well as the advecting velocity in all of the equations. We then apply an explicit SSPRK scheme to all of the advection terms. 
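Before writing the scheme out in full, we note that the three-stage SSPRK update applied to the advection terms (spelled out in the next display) has a simple generic form; the following Python sketch (illustrative only, not the model code) applies one such step for any increment operator \texttt{L} for which \texttt{y + L(y)} is a forward Euler step.
\begin{verbatim}
def ssprk3_step(y, L):
    """Third-order, three-stage SSPRK update y^n -> A y^n for a generic
    increment operator L (illustrative sketch; y may be a float, a numpy
    array, or any object supporting addition and scalar multiplication)."""
    phi1 = y + L(y)
    phi2 = 0.75 * y + 0.25 * (phi1 + L(phi1))
    return (1.0 / 3.0) * y + (2.0 / 3.0) * (phi2 + L(phi2))

# Example: one step of the scalar decay problem dy/dt = -c*y, with the
# increment operator L(y) = -c*dt*y and c*dt = 0.1:
# ssprk3_step(1.0, lambda y: -0.1 * y)
\end{verbatim}
In the model, \texttt{L} plays the role of the operators $L_{\MM{u}}$, $L_{v}$ and $L_{b}$ defined below, which contain the advection terms together with the forcing terms evaluated at the time-averaged states.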
To write down the scheme, we first define operators $L_{\MM {u}}$, $L_{v}$ and $L_{b}$ in the following implicit time-stepping formulation, \begin{eqnarray} \int_{\Omega}\MM{w}\cdot L_{\MM {u}}\MM{u} \diff x &=& \Delta t \int_\Omega \nabla^\perp(\MM{w}\cdot\MM{u}^{*\perp})\cdot\MM{u} \diff x + \Delta t \int_{\Omega} \nabla\cdot\MM{w} \left(\frac{p^*}{\rho_{0}} + \frac{1}{2}|\MM{u}^*|^{2}\right) \diff x + \Delta t \int_{\Omega}\MM{w}\cdot fv^*\MM{\hat{x}} \diff x \nonumber \\ &+& \Delta t \int_{\Omega} \MM{w}\cdot b^*\hat{\MM{z}} \diff x - \Delta t \int_\Gamma \jump{\MM{w}\cdot\MM{u}^{*\perp}}^\perp\cdot\tilde{\MM{u}}^* \diff S, \ \ \forall\MM{w}\in \mathring{\mathbb{V}}_1, \label{ueq_L} \\ \int_{\Omega}\phi L_{v} v \diff x &=& \Delta t \int_\Omega \nabla \cdot (\phi \MM{u}^*) v \diff x - \Delta t \int_{\Omega} \phi f \MM{u}^* \cdot \hat{\MM{x}} \diff x - \Delta t \int_{\Omega} \phi \frac{\partial \bar{b}} {\partial y} \left(z-\frac{H}{2}\right) \diff x \nonumber \\ &-& \Delta t \int_\Gamma \jump{\phi \MM{u}^*} \tilde{v}^* \diff S , \ \ \forall\phi\in \mathbb{V}_2, \label{veq_L} \\ \int_{\Omega} (\gamma + \tau \gamma_z) L_{b} b \diff x &=& - \Delta t \int_{\Omega} (\gamma + \tau \gamma_z) \MM{u}^* \cdot \nabla b \diff x - \Delta t \int_{\Omega} (\gamma + \tau \gamma_z) \frac{\partial \bar{b}}{\partial y} v^* \diff x - \Delta t \int_{\Omega} (\gamma + \tau \gamma_z) N^{2} w^* \diff x \nonumber \\ &-& \Delta t \int_{\Gamma_v} \jump{(\gamma + \tau \gamma_z) \MM{u}^*} \tilde{b}^* - \jump{(\gamma + \tau \gamma_z) \MM{u}^* b^*} \diff S, \ \ \forall\gamma\in \mathbb{V}_b, \label{beq_L} \end{eqnarray} where the star denotes $y^{*} = (1-\alpha)y^{n} + \alpha y^{n+1}$ with a time-centring parameter $\alpha$. A 3rd order 3 step SSPRK time-stepping method \citep{shu1988efficient} is then applied as \begin{eqnarray} \varphi^{1}_{y} &=& y^{n}+L_{y}y^{n}, \\ \varphi^{2}_{y} &=& \frac{3}{4}y^{n}+\frac{1}{4}(\varphi^{1}_{y}+L_{y} \varphi^{1}_{y}), \\ Ay^{n} &=& \frac{1}{3}y^{n}+\frac{2}{3}(\varphi^{2}_{y}+L_{y} \varphi^{2}_{y}), \label{eq:Advection_step} \end{eqnarray} for each variable $y = \MM{u}, v$ and $b$, where $A$ is the advection operator. Finally, we solve for $\MM{u}^{n+1}$, $v^{n+1}$, $b^{n+1}$ and $p^{n+1}$ iteratively using a Picard iteration method, \begin{eqnarray} \int_{\Omega}\MM{w}\cdot \Delta \MM{u} \diff x - \alpha \Delta t \int_{\Omega} \nabla\cdot\MM{w} \left(\frac{\Delta p}{\rho_{0}} \right) \diff x - \alpha \Delta t \int_{\Omega}\MM{w}\cdot f \Delta v \MM{\hat{x}} \diff x \nonumber \\ - \alpha \Delta t \int_{\Omega} \MM{w}\cdot \Delta b\hat{\MM{z}} \diff x &=& -R_u[\MM{w}], \ \ \forall\MM{w}\in \mathring{\mathbb{V}}_1, \\ \int_{\Omega}\phi \Delta v \diff x + \alpha \Delta t \int_{\Omega} \phi f \Delta u \diff x &=& -R_v[\phi], \ \ \forall\phi\in \mathbb{V}_2, \\ \int_{\Omega} \gamma \Delta b \diff x + \alpha \Delta t \int_{\Omega} \gamma N^{2} \Delta w \diff x &=& -R_b[\gamma], \ \ \forall\gamma\in \mathbb{V}_b, \\ \int_{\Omega} \sigma \nabla \cdot \Delta \MM{u} \diff x &=& -R_p[ \sigma], \ \ \forall \sigma\in \mathbb{V}_2. 
\end{eqnarray} Here $R_u[\MM{w}]$, $R_v[\phi]$, $R_b[\gamma]$ and $R_p[\sigma]$ are the residuals for the implicit system, \begin{eqnarray} R_u[\MM{w}] &=& \int_{\Omega} (\MM{u}^{n+1} - A\MM{u}^{n})\cdot\MM{w} \diff x, \ \ \forall\MM{w}\in \mathring{\mathbb{V}}_1, \label{eq:R_u_w}\\ R_v[\phi] &=& \int_{\Omega} (v^{n+1} - Av^{n})\phi \diff x, \ \ \forall\phi\in \mathbb{V}_2, \label{eq:R_v_phi}\\ R_b[\gamma] &=& \int_{\Omega} (b^{n+1} - Ab^{n})\gamma \diff x, \ \ \forall\gamma\in \mathbb{V}_b, \label{eq:R_b_gamma} \\ R_p[\sigma] &=& \int_{\Omega} \nabla \cdot \MM{u}^{n+1} \, \sigma \diff x, \ \ \forall \sigma\in \mathbb{V}_2, \label{eq:R_p_sigma} \end{eqnarray} where $A$ is as defined in \eqref{eq:Advection_step}. After obtaining $\Delta \MM{u}$, $\Delta v$, $\Delta b$ and $\Delta p$, we replace $\MM{u}^{n+1}$, $v^{n+1}$, $b^{n+1}$ and $p^{n+1}$ with $\MM{u}^{n+1} + \Delta \MM{u}$, $v^{n+1} + \Delta v$, $b^{n+1} + \Delta b$ and $p^{n+1} + \Delta p$, respectively. We then repeat the iterative procedure a fixed number of times, which is set to 4 in this study. Figure \ref{fig:pseudocode} provides pseudocode for the timestepping procedure. As the 3rd-order SSPRK scheme is stable for both the DG and SUPG methods, the system is well conditioned for stable Courant numbers, and can be solved with a few iterations of preconditioned GMRES applied to the full coupled system of 4 variables. \begin{figure} \caption{Pseudocode for the timestepping procedure. The variable $y$ represents the model variables $\MM{u}, v, b$ and $p$, and $J[\Delta y]$ denotes the Jacobian from the linear system. The constant $i_\mathrm{max}$ denotes the fixed number of Picard iterations, and $k_\mathrm{max}$ denotes the total number of time steps.} \label{fig:pseudocode} \end{figure} We use a block diagonal ``Riesz-map'' preconditioner \citep{mardal2011preconditioning} for the GMRES iterations. This operator has an $H(\text{div})$ inner product in the velocity block and mass matrices in the other diagonal blocks; see \citet{natale2016compatible} for details of $H(\text{div})$ finite element spaces. Inverting the mass matrices is straightforward, requiring only a few iterations of a stationary iteration such as Jacobi; the $H(\text{div})$ block is more challenging due to the non-trivial kernel. We use an LU factorisation, provided by MUMPS -- the MUltifrontal Massively Parallel sparse direct Solver \citep{MUMPS01,MUMPS02} -- to invert it, since we do not currently have access to a suitable preconditioner for this operator. \subsection{Experimental settings} \subsubsection{Constants}\label{constants} In the frontogenesis experiments, the model constants are set to the values below, following \citet{nakamura1989nonlinear}, \citet{cullen2008comparison}, \citet{visram2014framework}, and \citet{visram2014asymptotic}: \begin{eqnarray} L &=& 1000\ \mathrm{km},\ H = 10\ \mathrm{km},\ f = 10^{-4}\ \mathrm{s}^{-1}, \ \nonumber\\ g &=& 10\ \mathrm{m\ s}^{-2},\ \rho_{0} = 1\ \mathrm{kg\ m}^{-3},\ \theta_{0} = 300\ \mathrm{K}, \nonumber\\ \Lambda &=& 10^{-3}\ \mathrm{s}^{-1},\ N^{2} = 2.5 \times 10^{-5}\ \mathrm{s}^{-2}, \nonumber \end{eqnarray} where $L$ and $H$ determine the model domain $\Omega = [-L, L] \times [0,H]$. The $y$-gradient of the background buoyancy in \eqref{veq} and \eqref{beq} is therefore calculated as \begin{eqnarray} \frac{\partial \bar{b}}{\partial y} = -f\Lambda = -10^{-7}\ \mathrm{s}^{-2}.
\end{eqnarray} The Rossby and Froude numbers are given in the model as \begin{eqnarray} \mathrm{Ro} = \frac{u_{0}}{fL} = 0.05, \label{Rossby} \\ \mathrm{Fr} = \frac{u_{0}}{NH} = 0.1, \end{eqnarray} where $u_{0} = 5\ \mathrm{m\ s}^{-1}$ is a representative velocity. The ratio of the Rossby number to the Froude number defines the Burger number, \begin{eqnarray} \mathrm{Bu} = \mathrm{Ro}/\mathrm{Fr} = 0.5, \end{eqnarray} which is used when initialising the model. \subsubsection{Initialisation}\label{initialisation} The model field is initialised with a small perturbation with the wavelength corresponding to the most unstable mode. In this study, the following form of the small perturbation is applied to the in-slice buoyancy, \begin{eqnarray} \label{perturbation} b(x, z) &=& aN \left\{-\left[1-\frac{\mathrm{Bu}}{2}\coth\left(\frac{\mathrm{Bu}}{2}\right)\right] \sinh Z \cos \left( \frac{\pi x}{L} \right) -n \mathrm{Bu} \cosh Z \sin \left( \frac{\pi x}{L} \right) \right\}, \label{init_buoyancy} \end{eqnarray} which is the structure of the normal mode taken from \citet{williams1967atmospheric}. The constant $a$ corresponds to the amplitude of the perturbation, and the constant $n$ takes the form \begin{eqnarray} n = \frac{1}{\mathrm{Bu}}\left\{ \left[\frac{\mathrm{Bu}}{2} - \tanh \left(\frac{\mathrm{Bu}}{2} \right) \right] \left[\coth \left( \frac{\mathrm{Bu}}{2} \right) - \frac{\mathrm{Bu}}{2} \right] \right\}^{\frac{1}{2}}. \end{eqnarray} The modified vertical coordinate $Z$ is defined as \begin{eqnarray} Z = \mathrm{Bu}\left(\frac{z}{H} - \frac{1}{2} \right). \end{eqnarray} Next we initialise the pressure $p$. Given the $b$ in \eqref{perturbation}, we seek a pressure in hydrostatic balance, \begin{equation} \label{pvbalance} \frac{\partial p}{\partial z} = \rho_{0}b. \end{equation} Since we have rigid-lid conditions at the upper and lower boundaries, we require a symmetry condition for the pressure, \begin{equation} \label{sym} \int_{z=0}^{z=H} p(x,z) \diff z = 0, \quad \forall x. \end{equation} This boundary condition is hard to enforce in the solver, so we first solve for a hydrostatic pressure $\hat{p}$ with a free surface boundary condition on the top, \begin{eqnarray} \frac{\partial \hat{p}}{\partial z} - \rho_{0} b = 0, \quad \hat{p}(z=H) = 0. \end{eqnarray} The finite element approximation can be found with a test function $\gamma \in \mathbb{V}_b$ as \begin{equation} \int_\Omega \gamma \frac{\partial \hat{p}}{\partial z} \diff x - \int_\Omega \gamma \rho_{0} b \diff x = - \int_\Omega \frac{\partial\gamma}{\partial z} \hat{p} \diff x - \int_\Omega \gamma \rho_{0} b \diff x = 0, \ \ \forall\gamma\in \mathbb{V}_b, \end{equation} where we have integrated the pressure gradient term by parts in the second equality. We then add a function of $x$ to the solution $\hat{p}$ so that $p$ satisfies the symmetry condition \eqref{sym}. Next we initialise the out-of-slice velocity $v$ by seeking a velocity in geostrophic balance with the initialised $p$ as \begin{eqnarray} \frac{\partial p}{\partial x} = \rho_{0} fv.
\end{eqnarray} As $\partial p/\partial x$ is not defined in our finite element framework, first we find $\MM{s} = \nabla p$ in $\mathring{\mathbb{V}}_1$ as \begin{eqnarray} \int_{\Omega} \MM{w} \cdot \MM{s} \diff x = \int_{\Omega} \MM{w} \cdot \nabla p \diff x = -\int_{\Omega} \nabla \cdot \MM{w} \, p \diff x + \int_{\Gamma} \MM{w} \cdot \MM{n} p \diff S, \ \ \forall\MM{w}\in \mathring{\mathbb{V}}_1, \end{eqnarray} where we have integrated the pressure gradient term by parts in the last equality. Then we solve for the initial $v$ as \begin{eqnarray} \int_{\Omega} \phi \rho_{0} f v \diff x = \int_{\Omega} \phi \MM{s}\cdot\hat{\MM{x}} \diff x, \ \ \forall\MM{\phi}\in \mathbb{V}_2. \label{init_v} \end{eqnarray} To initialise the in-slice velocity $\MM{u} = (u, w)$, we seek a solution to the linear equations for $v$ and $b$, \begin{eqnarray} \pp{v_g}{t} &=& -fu - \pp{\bar{b}}{y} \left(z-\frac{H}{2}\right), \\ \pp{b_g}{t} &=& -\pp{\bar{b}}{y}{v_g} - N^2w, \end{eqnarray} where we make the SG approximation that $v_g$ and $b_g$ are given by the geostrophic and hydrostatic balance. If the pressure is in geostrophic and hydrostatic balance then we have \begin{eqnarray} \nabla p = \rho_{0}\begin{pmatrix} fv_g \\ b_g \\ \end{pmatrix}, \end{eqnarray} and therefore we have \begin{eqnarray} \nabla \dot{p} = \rho_{0} \begin{pmatrix} f\dot{v}_g \\ \dot{b}_g \\ \end{pmatrix} = \rho_{0} \begin{pmatrix} -f^2 u - f\pp{\bar{b}}{y} \left(z-\frac{H}{2}\right)\\ -\pp{\bar{b}}{y}{v_g} - N^2w \\ \end{pmatrix}, \end{eqnarray} where the dot denotes $\dot{y} = \pp{y}{t}$. We rewrite this in a vector form as \begin{eqnarray} \rho_{0} \begin{pmatrix} f^2 & 0 \\ 0 & N^2 \\ \end{pmatrix} \MM{u} + \nabla \dot{p} = \rho_{0} \pp{\bar{b}}{y} \begin{pmatrix} -f \left(z-\frac{H}{2}\right)\\ -{v_g} \\ \end{pmatrix}. \end{eqnarray} The finite element approximation is then \begin{eqnarray} \int_{\Omega} \MM{w} \cdot \rho_{0} \begin{pmatrix} f^2 & 0 \\ 0 & N^2 \\ \end{pmatrix} \MM{u} \diff x - \int_\Omega \nabla \cdot \MM{w} \dot{p} \diff x = \int_{\Omega} \MM{w}\cdot \rho_{0}\pp{\bar{b}}{y} \begin{pmatrix} -f \left(z-\frac{H}{2}\right)\\ -{v_g} \\ \end{pmatrix}\diff x, \quad \forall \MM{w} \in \mathring{\mathbb{V}}_1, \label{balance_int} \end{eqnarray} where we have integrated the pressure gradient term by parts in the first line. Here we introduce $\psi \in \mathbb{V}_0$, with $\nabla^\perp \psi=\MM{u}$, and choose $\MM{w}=\nabla^\perp \xi$, then we have \begin{align} \int_{\Omega} \nabla \xi \cdot \begin{pmatrix} N^2 & 0 \\ 0 & f^2 \\ \end{pmatrix} \nabla \psi \diff x = \int_{\Omega} \nabla\xi\cdot \pp{\bar{b}}{y} \begin{pmatrix} -{v_g} \\ f\left(z-\frac{H}{2}\right)\\ \end{pmatrix}\diff x, \quad \forall \xi \in \mathbb{V}_0. \end{align} With boundary conditions $\psi=0$ on top and bottom, and substituting the initialised $v$ from \eqref{init_v} for $v_g$, we obtain the balanced $\psi$. We then solve for the initial $\MM{u}$ from $\psi$ to complete the initialisation of the model field. Finally, we introduce a breeding procedure used by \citet{visram2014framework} and \citet{visram2014asymptotic} to remove any remaining unbalanced modes in the initial condition. In the experiments performed in section \ref{results}, the model field is initialised with a small perturbation by choosing $a$ = -7.5 in \eqref{init_buoyancy}. 
The simulation is then advanced for three computational days until the maximum amplitude of $v$ reaches 3 m s$^{-1}$, at which point the time is reset to zero to match the amplitude of the initial perturbation with that of \citet{nakamura1989nonlinear} and \citet{visram2014framework} as closely as possible. \subsubsection{Asymptotic limit analysis}\label{settings_sg} To validate the numerical implementation and assess the long term performance of the model, we introduce the asymptotic limit analysis based on the SG theory. The test outlined here follows that described in \citet{cullen2008comparison}, and used in \citet{visram2014framework} and \citet{visram2014asymptotic}. First we apply the SG approximation to the governing equations \eqref{ueq} to \eqref{peq} by imposing the hydrostatic balance and the geostrophic balance of the $v$ component of the wind, \begin{eqnarray} - fv \hat{\MM{x}} &=& -\frac{1}{\rho_{0}}\nabla p + b\hat{\MM{z}}. \label{ueq_sg} \end{eqnarray} Equation \eqref{ueq_sg}, together with equations \eqref{veq} to \eqref{peq}, forms the SG equations of the Eady slice model. Now, the solutions of the SG equations are invariant under the change of variables \begin{equation} x \rightarrow \beta x,\ \ u \rightarrow \beta u,\ \ f \rightarrow \frac{f}{\beta}, \label{rescaling} \end{equation} where $\beta$ is a rescaling parameter and all other variables are invariant. Recalling the definition of the Rossby number \eqref{Rossby}, the rescaling \eqref{rescaling} converts $\mathrm{Ro} \rightarrow \beta \mathrm{Ro}$. The SG solution therefore provides an asymptotic limit of the model as the Rossby number tends to zero. With the rescaling parameter $\beta$, the limit of $\mathrm{Ro} \rightarrow 0$ is equivalent to $\beta \rightarrow 0$. Thus the convergence of the model to the SG solution can be tested by performing a sequence of simulations with decreasing $\beta$. Here we follow \citet{cullen2008comparison} in defining the out-of-slice geostrophic imbalance as \begin{equation} \eta = v - \frac{1}{\rho_{0}f}\frac{\partial p}{\partial x}, \label{gi} \end{equation} which is expected to converge to zero at a rate proportional to $\mathrm{Ro}^{2}$, i.e.\ $\beta^{2}$. To calculate $\eta$ in our finite element framework, first we find the vector of geostrophic and hydrostatic imbalance $\MM{q}$ in $\mathring{\mathbb{V}}_1$ defined as \begin{eqnarray} \MM{q} = \rho_{0} \begin{pmatrix} fv\\ b\\ \end{pmatrix} - \nabla p, \end{eqnarray} so that \begin{eqnarray} \eta = \frac{1}{\rho_{0}f} \MM{q}\cdot\hat{\MM{x}}. \label{gi_v} \end{eqnarray} The finite element approximation is obtained as \begin{eqnarray} \int_{\Omega} \MM{w} \cdot \MM{q} \diff x = \int_{\Omega} \MM{w} \cdot \rho_{0} \begin{pmatrix} fv\\ b\\ \end{pmatrix} \diff x - \int_{\Omega} \MM{w} \cdot \nabla p \diff x = \int_{\Omega} \MM{w} \cdot \rho_{0} \begin{pmatrix} fv\\ b\\ \end{pmatrix} \diff x + \int_{\Omega} \nabla \cdot \MM{w} \, p \diff x, \ \ \forall\MM{w}\in \mathring{\mathbb{V}}_1, \end{eqnarray} where we have integrated the pressure gradient term by parts in the last equality. We then calculate $\eta$ from $\MM{q}$ using \eqref{gi_v} and assemble it over the domain. If the geostrophic imbalance in the model tends to zero with decreasing $\beta$, then the limit is a solution of the SG equations. This analysis is performed in section \ref{results_sg}.
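As an illustration of how this diagnostic can be evaluated in the compatible finite element setting, the following Python sketch (illustrative only, not the model code) solves the weak problem above for $\MM{q}$ and returns an $L^{2}$ measure of $\eta$; here \texttt{V1} stands for the velocity space $\mathring{\mathbb{V}}_1$, and the condition $\MM{w}\cdot\MM{n}=0$ is assumed to be built into the space or applied elsewhere.
\begin{verbatim}
from firedrake import *  # illustrative sketch assuming a Firedrake set-up

def geostrophic_imbalance(V1, v, b, p, f=1.0e-4, rho0=1.0):
    """Solve the weak problem for q in V1 and return an L2 measure of eta
    (sketch only)."""
    w = TestFunction(V1)
    q = TrialFunction(V1)
    a = inner(w, q) * dx
    L = inner(w, as_vector([rho0 * f * v, rho0 * b])) * dx + div(w) * p * dx
    qh = Function(V1)
    solve(a == L, qh)
    eta = qh[0] / (rho0 * f)        # out-of-slice geostrophic imbalance
    return assemble(eta * eta * dx) ** 0.5
\end{verbatim}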
\section{Results}\label{results} In this section, we present the results of the frontogenesis experiments using the Eady vertical slice model developed in this study, implemented using the finite element code generation library Firedrake \citep{rathgeber2016firedrake}. The constants used to set up the experiments are shown in section \ref{constants}. At the beginning of each experiment, the model is initialised in the way described in section \ref{initialisation}, then integrated for 25 days. The model resolution is given by \begin{eqnarray} \Delta x = \frac{2L}{N_{x}},\ \Delta z = \frac{H}{N_{z}}, \label{grid_space} \end{eqnarray} where $N_x$ and $N_z$ are the number of quadrilateral elements in the $x$- and $z$-directions, respectively. Unless stated otherwise, we use a resolution of $N_{x}$ = 60 and $N_{z}$ = 30, which is comparable in terms of DoFs to that used in the previous work of \citet{visram2014framework} and \citet{visram2014asymptotic}: $N_{x}$ = 121 and $N_{z}$ = 61. Note that in our model the effective grid spacings are half the size of the lengths given by \eqref{grid_space}, as we use higher-order finite element spaces with $k$ = 2, as shown in Figure \ref{fig:space-nodes}b and Figure \ref{fig:vertical-nodes}b, for all experiments. \citet{nakamura1989nonlinear} used a lower resolution of $N_{x}$ = 100 and $N_{z}$ = 20. They repeated the experiment with twice the horizontal and vertical resolution (results not shown) and found very small differences from the low-resolution results. Therefore we use their result with $N_{x}$ = 100 and $N_{z}$ = 20 as the point of comparison for our control-run result in this section. For the control run, the time-centring parameter $\alpha$ and the rescaling parameter $\beta$ are set to 0.5 and 1, respectively, and a time step of $\Delta t$ = 50 s is used based on stability requirements. In section \ref{results_general}, we investigate the general results of frontogenesis from the control run. Then the asymptotic convergence of the model to the SG limit is examined in section \ref{results_sg}, by repeating the experiment with various $\beta$. We also assess the effect of off-centring on the long-term performance of the model by increasing $\alpha$. Finally, we discuss the model results in terms of energy dynamics in section \ref{results_energy}. \subsection{General results of frontogenesis}\label{results_general} Figures \ref{fig:velocity-contours} and \ref{fig:buoyancy-contours} show snapshots of the out-of-slice velocity $v$ and buoyancy $b$ fields, respectively, from the control run. At day 2, both fields show very similar structures to those from the simulation using the linearised equations in \citet{visram2014asymptotic}. This suggests that at this early stage the motion is well described by the linearised equations. The model shows some early signs of front formation at day 4. The general shape of the $v$-field is similar to that of day 2. However, the gradient in the cyclonic region is now larger than that in the anticyclonic region, which indicates the beginning of front formation. The maximum gradient in the $v$-field is found at the upper and lower boundaries. In the $b$-field, the region of warm air occupies a smaller area at the lower boundary than it does at the upper boundary. As with $v$, the $b$-field shows the largest gradients near the upper and lower boundaries. \begin{figure} \caption{Out-of-slice velocity field. 
The contour intervals are as specified in each panel.} \label{fig:velocity-contours} \caption{In-slice buoyancy field. The contour intervals are as specified in each panel.} \label{fig:buoyancy-contours} \caption{Snapshots of out-of-slice velocity and in-slice buoyancy in the control run at days 2, 4, 7, and 11.} \label{fig:slice-contours} \end{figure} The frontal discontinuity becomes most intense around day 7. Strong gradients are found in both $v$ and $b$ fields. The frontal zone tilts westward with height in both fields. In the $b$-field, the warm region is now lifted off the surface, showing that the front is occluded. Day 11 corresponds to the first minimum after the initial frontogenesis. At this stage the vertical tilt in the $v$-field reverses, which is a sign of energy conversion from kinetic back to potential energy. In the $b$-field, the discontinuity vanishes and the solution looks almost vertically stratified. Overall, these results are qualitatively consistent with the early studies \citep[e.g.][]{williams1967atmospheric, nakamura1989nonlinear, nakamura1994nonlinear, cullen2007modelling, budd2013monge, visram2014framework, visram2014asymptotic}. Now, as the out-of-slice velocity is the dominant source of the kinetic energy in the Eady problem, we take it as the quantity with which to compare the strength of the fronts reproduced in the models. The thin solid curve in Figure \ref{fig:rmsv-velocity-comparison} shows the time evolution of the root mean square of $v$ (RMSV) in our model. The result shows that the model reproduces several further quasi-periodic lifecycles after the first frontogenesis. Also shown in Figure~\ref{fig:rmsv-velocity-comparison} in black are the nonlinear results of \citet{nakamura1989nonlinear} and \citet{visram2014framework}, the linear result of \citet{visram2014asymptotic}, and the SG limit solution from \citet{cullen2007modelling}. All results show good agreement up to around day 5, during which the RMSV of the nonlinear results grows exponentially following the growth of the linear mode. After day 5, at which point the front is close to the grid scale, the nonlinear effects become significant and begin to reduce the growth rate. \begin{figure} \caption{Comparison of the root mean square of the out-of-slice velocity in Eady models. Thin dark solid line shows the result from the control run of this study. Gray line shows the result using twice the resolution of the control run. Dashed and dotted lines show the nonlinear results of \citet{nakamura1989nonlinear} and \citet{visram2014framework}, respectively, and the thick line shows the semi-geostrophic limit solution given by \citet{cullen2007modelling}, based on data from Figure 4 of \citet{visram2014framework}. Dot-dashed line shows the linear result of \citet{visram2014asymptotic}, based on data from Figure 5.2a of \citet{visram2014asymptotic}.} \label{fig:rmsv-velocity-comparison} \end{figure} For the period of the first frontogenesis, our result is reasonably close to the result of \citet{visram2014framework}, who applied a finite difference method with semi-implicit time-stepping and semi-Lagrangian transport to the same governing equations as in this study. The two solutions then diverge for the subsequent lifecycles. Compared to the result of \citet{nakamura1989nonlinear}, who used hydrostatic primitive equations with a viscous Eulerian method, both our result and the result of \citet{visram2014framework} show larger peak amplitudes of the fronts. 
However, compared to the SG limit solution given by \citet{cullen2007modelling}, our result, and the results of \citet{nakamura1989nonlinear} and \citet{visram2014framework}, are all much smaller in amplitude. We believe this to be because \citet{cullen2007modelling} used a Lagrangian discretisation that resolves fronts even at very coarse resolution, and very fine resolution of the front is required to allow this additional transfer of potential to balanced kinetic energy. \citet{visram2014framework} showed that the Lagrangian conservation properties were badly violated in the Eulerian calculations, even at higher resolution, concluding that \cite{cullen2007modelling} is capturing the correct solution after the front is formed. To evaluate the effect of resolution on our model result, we repeated the experiment using two times higher resolution than that of the control run: ($N_x$, $N_z$) = (120, 60). A time step of $\Delta t$ = 25 s is used for the high-resolution run. The evolution of RMSV in the high-resolution run is shown by the gray curve in Figure \ref{fig:rmsv-velocity-comparison}. Only a slight increase in the peak amplitudes of RMSV is found in the high-resolution run compared to that of the control run. It indicates that, due to the rapid formation of the frontal discontinuity, the front reaches the grid scale very quickly even when double the resolution is used. As a result, the use of the high resolution can only slightly delay the collapse of the fronts, and thus makes very little contribution to filling the gap between the RMSV values of the model and the SG limit. This result suggests that we would need resolution of several orders of magnitude greater than presently used to reach the RMSV of the SG limit. \begin{figure} \caption{$\alpha = 0.5$} \label{fig:geostrophic-imbalance-a} \caption{$\alpha = 0.55$} \label{fig:geostrophic-imbalance-b} \caption{Comparison of the geostrophic imbalance in the results of the rescaling tests using (a) $\alpha$ = 0.5, and (b) $\alpha$ = 0.55. Black dashed and solid lines correspond to the first- and second-order convergence rates, respectively. Colored lines show the variations of the geostrophic imbalance with rescaling parameter $\beta$ at day 2, 4, 6, 8 and 10.} \label{fig:geostrophic-imbalance} \end{figure} \begin{figure} \caption{$\alpha = 0.5$} \label{fig:rmsv-velocity-rescaling-a} \caption{$\alpha = 0.55$} \label{fig:rmsv-velocity-rescaling-b} \caption{Comparison of the root mean square of the out-of-slice velocity in the results of the rescaling tests using (a) $\alpha$ = 0.5, and (b) $\alpha$ = 0.55. Thick lines show the results with $\beta$ = 1. The other lines show the results with different values of $\beta$ as shown in the legend.} \label{fig:rmsv-velocity-rescaling} \end{figure} \subsection{Asymptotic convergence to the SG solution}\label{results_sg} In this section, the validation test of the asymptotic convergence outlined in section \ref{settings_sg} is performed. First, the frontogenesis experiment is repeated using eight different values of the rescaling parameter: $\beta$ = 2$^{2}$, 2, 2$^{-1}$, 2$^{-2}$, 2$^{-3}$, 2$^{-4}$, 2$^{-5}$, and 2$^{-6}$. The time step $\Delta t$ is set to 50 s for the experiments using 2$^{-2}$ $\leq$ $\beta$ $\leq$ 2$^{2}$, 25 s for $\beta$ = 2$^{-3}$ and 2$^{-4}$, and 12.5 s for $\beta$ = 2$^{-5}$ and 2$^{-6}$. The other settings are the same as the control run. We calculated the geostrophic imbalance $\eta$ defined by \eqref{gi} in each experiment. 
We then plotted it over the period of the initial frontogenesis, together with $\eta$ in the control run where $\beta$ is unity. Figure \ref{fig:geostrophic-imbalance-a} shows the variation of the geostrophic imbalance $\eta$ with rescaling parameter $\beta$ alongside the theoretical first- and second-order convergence rates. The convergence rate starts at around second order for $\beta \geq 2^{-3}$ and first order for $\beta < 2^{-3}$, as shown by the slope at day 2. It improves to second order for $\beta \geq 2^{-5}$ at day 4. However, there is no convergence at all at day 6, at which point a strong discontinuity has formed in the model. After the peak of the initial frontogenesis at around day 7, the convergence rate recovers a little but stays at less than first order at days 8 and 10. For the results in Figure \ref{fig:geostrophic-imbalance-a}, the time-centring parameter $\alpha$ is set to 0.5, as in the control run. To test the effect of off-centring, the rescaling test was repeated using $\alpha$ = 0.55. This result is shown in Figure \ref{fig:geostrophic-imbalance-b}. It shows that increasing the implicitness of the scheme gives more balanced solutions. In particular, a reduction of the imbalance is found throughout the initial frontogenesis for $\beta < 2^{-3}$. As a result, overall second-order convergence is achieved at day 2, and first-order convergence is recovered at day 10 in Figure \ref{fig:geostrophic-imbalance-b}. This result is comparable to the result of \citet{visram2014framework} (see their Figure 2), where $\alpha =$ 0.55 was used, indicating that the compatible finite element method is performing as well as a finite difference method for this test problem. Note that \citet{visram2014framework} used the range $2^{-3} \leq \beta \leq 2^{2}$, whereas we show the convergence of the geostrophic imbalance for smaller $\beta$ as well. The result is also consistent with the results reported by \citet{cullen2007modelling} with compressible equations. Figures \ref{fig:rmsv-velocity-rescaling-a} and \ref{fig:rmsv-velocity-rescaling-b} show the evolution of RMSV in each experiment, corresponding to Figures \ref{fig:geostrophic-imbalance-a} and \ref{fig:geostrophic-imbalance-b}, respectively. In Figure \ref{fig:rmsv-velocity-rescaling-b}, there is some increase in the peak amplitudes of the fronts compared to Figure \ref{fig:rmsv-velocity-rescaling-a}, especially in the second peak of the results for $\beta \leq 2^{-2}$. This indicates that damping out some of the unbalanced motion with off-centring improves the predictability of the quasi-periodic lifecycles. Compared to the off-centred results of \citet{visram2014framework} (see their Figure 4), which very quickly began to diverge as they decreased the Rossby number, our model shows good predictability throughout the range of $\beta$. However, in both panels of Figure \ref{fig:rmsv-velocity-rescaling}, decreasing $\beta$ does not make a big difference to the peak amplitudes of the fronts, leaving the large gap between the model result and the SG limit solution in the strength of the fronts regardless of the value of $\beta$. \subsection{Energy dynamics}\label{results_energy} In sections \ref{results_general} and \ref{results_sg}, we showed that our model reproduces results that are consistent with the early model studies based on finite difference methods, and that the model solutions converge to a solution in geostrophic balance as the Rossby number is decreased. 
In this section, we look again at the result of the control run, with a focus on the energy dynamics. Figure \ref{fig:time-energy-control} shows the time evolution of the total energy $E$, the kinetic energies $K_{u}$ and $K_{v}$, and the potential energy $P$, which are defined by equations \eqref{total_energy} to \eqref{potential}. The kinetic energy $K_{v}$ reaches its maximum amplitude at around day 7, then decreases to its first minimum at around day 11, followed by smaller-amplitude lifecycles, just as RMSV does in Figure \ref{fig:rmsv-velocity-comparison}. The time evolution of the potential energy $P$ shows the same behaviour with opposite sign, demonstrating the exchange from potential to kinetic energy over several lifecycles. The amplitude of the kinetic energy $K_u$ is very small compared to that of $K_v$ and $P$ throughout the experiment. As a result, the total energy $E$ can be interpreted as the difference between the amplitude of $K_{v}$ and that of $P$, and shows a gradual decrease with time. \begin{figure} \caption{Time evolution of energy of the control run. Thick line represents the evolution of total energy. Dotted and solid lines represent the evolutions of the in-slice and out-of-slice components of the kinetic energy. Dot-dashed line represents the evolution of potential energy.} \label{fig:time-energy-control} \end{figure} \begin{figure} \caption{Enlarged view of the evolution of total energy shown in Figure \ref{fig:time-energy-control}. The vertical scale is one order of magnitude less than that of Figure \ref{fig:time-energy-control}.} \label{fig:time-total-energy} \end{figure} Figure \ref{fig:time-total-energy} provides an enlarged view of the time evolution of the total energy. Note that the thick lines in Figure \ref{fig:time-energy-control} and Figure \ref{fig:time-total-energy} show the same evolution of the total energy of the control run, and the vertical scale of Figure \ref{fig:time-total-energy} is one order of magnitude less than that of Figure \ref{fig:time-energy-control}. The total energy stays constant up until day 5, then starts decreasing. It becomes quasi-constant from around day 10 to day 13, then decreases again. By comparing this with the lifecycles of fronts shown as the evolution of RMSV in Figure \ref{fig:rmsv-velocity-comparison}, it appears that the model starts losing energy every time the discontinuity reaches the grid scale. In addition, the reduction in the total energy starts at almost the same time as the RMSV of the control run diverges from the SG solution. Therefore we consider that the lack of resolution is a significant cause of the loss of energy, and that it also stops the growth of RMSV too early. Now, to estimate the potential loss in the kinetic energy $K_{v}$ caused by our advection scheme for $v$, we perform a test using a dummy velocity $v_d$, which obeys the advection-only equation \begin{eqnarray} \frac{\partial v_d}{\partial t} + \MM{u} \cdot \nabla v_d &=& 0, \label{dummyv} \end{eqnarray} and the dummy kinetic energy $K_{v_d}$ defined by \begin{eqnarray} K_{v_d} &=& \rho_{0} \int_{\Omega} \frac{1}{2} v_d^{2} \ \diff x.\label{kinetic_vd} \end{eqnarray} In this test, we solve the equation \eqref{dummyv} in parallel with the governing equations \eqref{ueq} to \eqref{peq}. The same experimental settings as in the control run, including the constants, the initial and boundary conditions, and the resolution, are used in this test. 
Here we apply the same advection scheme used for $v$ to $v_d$, \begin{eqnarray} \int_{\Omega}\phi\frac{\partial v_d}{\partial t} \diff x - \int_\Omega \nabla \cdot (\phi \MM{u}) v_d \diff x + \int_\Gamma \jump{\phi \MM{u}} \tilde{v_d} \diff S = 0, \ \ \forall\phi\in \mathbb{V}_2, \label{dummyv_dis} \end{eqnarray} together with the same semi-implicit time-stepping method described in section \ref{time_discretisation} applied to \eqref{dummyv_dis}. After every time step, we calculate the difference between the dummy kinetic energy at the new time level and the kinetic energy $K_v$ at the previous time level, \begin{eqnarray} \epsilon = \rho_{0} \int_{\Omega} \frac{1}{2} \{(v_d^{n+1})^{2} - (v^{n})^{2}\} \ \diff x. \end{eqnarray} Then we replace $v_{d}^{n+1}$ with $v^{n+1}$ and repeat the time integration. By accumulating the energy difference $\epsilon$ at every time step, we can estimate the potential loss of kinetic energy caused by the discretisation of the advection term in the $v$ equation \eqref{veq}. This result is shown by the dashed line in Figure \ref{fig:energy-dummy}. Also shown in Figure \ref{fig:energy-dummy} as the thick line is the loss of total energy in the control run from the initial state. The two curves are almost identical, showing that almost all of the energy loss in the model arises in the advection of $v$. \begin{figure} \caption{Result of the energy analysis using the dummy velocity. Thick line represents the loss of total energy from the initial state of the control run. Dashed line shows the accumulated difference between the kinetic energy $K_v$ and the dummy kinetic energy $K_{v_d}$.} \label{fig:energy-dummy} \end{figure} To investigate the cause of the energy loss in the $v$ advection, we calculated the residual in the out-of-slice velocity field, which is defined as the difference between the left- and right-hand sides of equation \eqref{veq}, \begin{eqnarray} r_{v} = \frac{\partial v}{\partial t} + \MM{u} \cdot \nabla v + f\MM{u} \cdot \hat{\MM{x}} + \frac{\partial \bar{b}} {\partial y}\left(z-\frac{H}{2}\right). \end{eqnarray} With $\phi \in \mathbb{V}_2$, we calculated $r_{v}$ by solving \begin{eqnarray} \int_{\Omega} \phi r_{v}^{n+1} \diff x = \int_{\Omega} \phi \frac{v^{n+1}-v^{n}}{\Delta t} \diff x + \int_{\Omega} \phi \MM{u}^{n+\frac{1}{2}} \cdot \nabla v^{n+\frac{1}{2}} \diff x + \int_{\Omega} \phi f\MM{u}^{n+\frac{1}{2}} \cdot \hat{\MM{x}} \diff x + \int_{\Omega} \phi \frac{\partial \bar{b}} {\partial y}\left(z-\frac{H}{2}\right) \diff x, \ \ \forall\phi\in \mathbb{V}_2. \label{vtres} \end{eqnarray} Figure \ref{fig:velocity-slice-residual} shows the residual at day 7, which is when the front reaches its first peak. Compared with the $v$-field at day 7 in Figure \ref{fig:velocity-contours}, we can see a large increase in the residual occurring along the frontal discontinuity. In particular, the maximum amplitude of the residual is found near the upper and lower boundaries, where the discontinuity is most intense. Figure \ref{fig:time-residual} shows the time evolution of the maximum amplitude of $r_v$. It shows the largest peak during the first lifecycle, followed by smaller peaks during the second and third lifecycles. These results indicate that our advection scheme for $v$ does not converge well enough at the fronts due to the strong discontinuity. This then leads to the dissipation of the kinetic energy $K_{v}$ in the model every time the discontinuity reaches the grid scale. 
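To make the bookkeeping explicit, the following schematic shows one way the accumulation of $\epsilon$ could be organised around the time loop. The helpers \texttt{advance\_model} and \texttt{advance\_dummy} are hypothetical stand-ins for the model update and for the discrete advection step \eqref{dummyv_dis}; they are not part of the actual code, and the snippet is only a sketch of the procedure described above.
\begin{verbatim}
from firedrake import *

def accumulated_advection_energy_error(nsteps, v, v_d, u, rho0,
                                        advance_model, advance_dummy):
    """Accumulate eps = rho0 * int 0.5*((v_d^{n+1})^2 - (v^n)^2) dx over nsteps,
    resetting the dummy field to v after each step (hypothetical helpers)."""
    total = 0.0
    for step in range(nsteps):
        v_old = v.copy(deepcopy=True)   # v at time level n
        advance_model()                 # advances v (and u, b, p) to level n+1
        advance_dummy(v_d, u)           # advection-only update of the dummy field
        total += assemble(rho0*0.5*(v_d**2 - v_old**2)*dx)
        v_d.assign(v)                   # replace v_d^{n+1} with v^{n+1}
    return total
\end{verbatim}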
\begin{figure} \caption{Residual in the out-of-slice velocity field calculated at day 7 of the control run.} \label{fig:velocity-slice-residual} \end{figure} \begin{figure} \caption{Time evolution of the maximum amplitude of the residual in the out-of-slice velocity field of the control run.} \label{fig:time-residual} \end{figure} \begin{figure} \caption{Comparison of the loss of total energy from the initial state in experiments with different settings. Thick line represents the same loss of total energy of the control run as in Figure \ref{fig:energy-dummy}. Dashed line shows the loss when using $\alpha$ = 0.55. Thin line represents the loss in the high-resolution run.} \label{fig:compare-total-energy} \end{figure} Finally, Figure \ref{fig:compare-total-energy} provides a comparison of the loss of total energy in the experiments performed in the previous sections. Note that for all three results shown in Figure \ref{fig:compare-total-energy} the rescaling parameter $\beta$ is unity. The thick line shows the same loss of total energy from the initial state in the control run as in Figure \ref{fig:energy-dummy}. The dashed line shows the loss of total energy in the experiment with off-centring, which was performed in section \ref{results_sg}. The use of off-centring is seen to have an insignificant effect on the loss of energy. This result, together with the results in Figures \ref{fig:geostrophic-imbalance-b} and \ref{fig:rmsv-velocity-rescaling-b}, suggests that off-centring does not have a big impact on the large-scale dynamics, but only on the unbalanced motion. The thin solid curve in Figure \ref{fig:compare-total-energy} represents the loss of total energy in the high-resolution run, which was performed in section \ref{results_general}. There is a clear improvement in energy conservation with the use of the higher resolution; the total loss at day 25 is about 25\% less than that of the control run. This supports our assumption that the lack of resolution is a significant cause of the loss of energy. However, as shown in Figure \ref{fig:rmsv-velocity-comparison}, the high-resolution run gives only a slight increase in the peak amplitudes of RMSV despite the improvement in energy conservation. Therefore we have concluded that the energy loss in the model does not account for the large gap between the model result and the SG limit solution in RMSV. \section{Conclusion}\label{conclusion} A new vertical slice model of nonlinear Eady waves was developed using a compatible finite element method. To extend the Charney-Phillips grid staggering in the compatible finite element framework, the buoyancy is chosen from the function space that has the same degrees of freedom as the vertical part of the velocity space. As the buoyancy space is discontinuous in the horizontal direction and continuous in the vertical direction, we proposed a blend of an upwind DG method in the horizontal direction and an SUPG method in the vertical direction. The model reproduced several quasi-periodic lifecycles of fronts despite the presence of strong discontinuities. The general results of frontogenesis are consistent with the early studies. To validate the numerical implementation and assess the long-term performance of the model, the asymptotic convergence to the SG limit solution was examined. Despite the large difference from the SG solution in RMSV, the solutions of the vertical slice model converged to a solution in geostrophic balance as the Rossby number was reduced. 
With off-centring, the model showed the expected second-order convergence rate from the early stage up to the formation of the discontinuity, and showed a first-order rate for some time afterwards. This result is comparable to the previous results using a finite difference semi-implicit semi-Lagrangian method \citep{visram2014framework}, indicating that the compatible finite element method is performing well for this test problem. In particular, the use of Eulerian advection schemes rather than semi-Lagrangian schemes does not degrade the results. The energy analysis showed that the model suffers from dissipation of kinetic energy of the cross-front velocity due to the lack of resolution at the fronts in the $v$ advection scheme. However, the energy loss is very small compared to the amplitudes of potential energy and kinetic energy of the cross-front velocity, and is unlikely to account for the large gap between the RMSV values of the model and the SG limit. The large gap corresponds more likely to the fact that the lack of resolution shuts off the growth of the RMSV of the model about two days early compared to that of the SG limit. As the frontal discontinuity reaches the grid scale very quickly, we would need resolution of several orders of magnitude greater than presently used to reach the RMSV of the SG solution. The frontogenesis test case shown in this paper demonstrates several aspects of the compatible finite element framework including the treatment of advection terms in the velocity equation, and an advection scheme for the vertically-staggered temperature space proposed here. In concurrent research we are incorporating these techniques into a discretisation of the compressible Euler equations for NWP, and the formulation and test cases will be reported in a future paper. For a scalable solution approach for the implicit linear system in \eqref{eq:R_u_w}--\eqref{eq:R_p_sigma}, we are currently developing a hybridisation capability within the Firedrake package and intend to use this in future versions of the code. As an extension of the vertical slice modelling of nonlinear Eady waves, we are also considering a development of a parameterisation scheme which could prevent the model shutting off the growth of the RMSV too early, so that we could increase the peak amplitudes of the fronts without using unrealistically high resolution. \section*{\large Acknowledgement} We would like to acknowledge NERC grants NE/K012533/1 and NE/K006789/1, EPSRC Platform grant EP/L000407/1, and the Firedrake Project. LM additionally acknowledges support from EPSRC grant EP/M011054/1. The source code for this project is located at \url{https://bitbucket.org/colinjcotter/slicemodels} with the tag \url{paper.20161110}; the simulation code itself is located within this repository under \url{paper_examples/}. This version of the code is also archived on Zenodo: \cite{zenodo_eady_code}. All numerical experiments in this paper were performed with the following versions of software, archived on Zenodo: \cite{zenodo_firedrake}; \cite{zenodo_pyop2}; \cite{zenodo_tsfc}; \cite{zenodo_fiat}; \cite{zenodo_ufl}; \cite{zenodo_coffee}; \cite{zenodo_petsc}; \cite{zenodo_petsc4py}. \end{document}
\begin{document} \dedicatory{To Nigel Hitchin on the occasion of his 70th birthday} \title[Involutions of rank 2 Higgs bundle moduli spaces] {Involutions of rank 2 Higgs bundle moduli spaces} \author[Oscar Garc{\'\i}a-Prada]{Oscar Garc{\'\i}a-Prada} \address{Instituto de Ciencias Matem\'aticas \\ CSIC \\ Nicol\'as Cabrera, 13--15 \\ 28049 Madrid \\ Spain} \email{[email protected]} \author[S. Ramanan]{S. Ramanan} \address{Chennai Mathematical Institute\\ H1, SIPCOT IT Park, Siruseri\\ Kelambakkam 603103\\ India} \email{[email protected]} \thanks{ Partially supported by the European Commission Marie Curie IRSES MODULI Programme PIRSES-GA-2013-61-25-34. } \subjclass[2000]{Primary 14H60; Secondary 57R57, 58D29} \begin{abstract} We consider the moduli space $\mathcal{H}(2,\delta)$ of rank 2 Higgs bundles with fixed determinant $\delta$ over a smooth projective curve $X$ of genus $g\geqslant 2$ over $\mathbb{C}$, and study involutions defined by tensoring the vector bundle with an element $\alpha$ of order 2 in the Jacobian of the curve, combined with multiplication of the Higgs field by $\pm 1$. We describe the fixed points of these involutions in terms of the Prym variety of the covering of $X$ defined by $\alpha$, and give an interpretation in terms of the moduli space of representations of the fundamental group. \end{abstract} \maketitle \section{Introduction} Let $X$ be a smooth projective curve of genus $g\geqslant 2$ over $\mathbb{C} $. A {\it Higgs bundle} $(E, \varphi )$ on $X$ consists of a vector bundle $E$ and a twisted endomorphism $\varphi :E \to E\otimes K$, where $K$ is the canonical bundle of $X$. The {\it slope} of $E$ is the rational number defined as $$\mu (E) = {\deg E}/{\rank E}. $$ A Higgs bundle is said to be {\it stable} (resp. {\it semistable}) if $$\mu (F) < ({\rm resp. } \leqslant )~ \mu (E)$$ for every proper subbundle $F$ of $E$ invariant under $\varphi $ in the sense that $\varphi (F) \subset F\otimes K$. Also, a Higgs bundle $(E, \varphi )$ is {\it polystable} if $(E, \varphi ) = \oplus_i (E_i, \varphi _i)$ where all the $(E_i, \varphi _i)$ are stable and all $E_i$ have the same slope as that of $E$. Let $\delta $ be a line bundle on $X$. We are interested in the moduli space $\mathcal{H}(n,\delta )$ of isomorphism classes of polystable Higgs bundles $(E, \varphi )$ of rank $n$ with determinant $\delta $ and traceless $\varphi $. This moduli space was constructed analytically by Hitchin \cite{hitchin} and later algebraically via geometric invariant theory by Nitsure \cite{nitsure}. This space is a normal quasi-projective variety of dimension $2(n^2-1)(g-1)$. If the degree of ${\delta }$ and $n$ are coprime, $\mathcal{H}(n,\delta )$ is smooth. Let $M(n,\delta )$ be the moduli space of polystable vector bundles of rank $n$ and determinant $\delta $. The set of points corresponding to stable bundles forms a smooth open set, and its cotangent bundle is a smooth, open, dense subvariety of $\mathcal{H}(n,\delta)$. In this paper, we focus on vector bundles and Higgs bundles of rank $2$, leaving the study of those of higher rank (and indeed of $G$-principal bundles with $G$ reductive) for \cite{garcia-prada-ramanan}. There are two kinds of involutions that we consider. Firstly, the subgroup $J_2$ of the Jacobian $J$ consisting of elements of order $2$ acts on $\mathcal{H}(2,\delta )$ by tensor product. We also consider the involutions where, in addition, the sign of the Higgs field is changed. 
More explicitly, for $\alpha\in J_2$ we consider the involutions \begin{equation}\label{involutions} \begin{aligned} \iota(\alpha)^\pm: \mathcal{H}(2,\delta) & \to \mathcal{H}(2,\delta) \\ (E,\varphi) & \mapsto (E\otimes\alpha,\pm \varphi). \end{aligned} \end{equation} We determine the fixed point varieties in all these cases, and their corresponding subvarieties of the moduli space of representations of the fundamental group of $X$ (and its universal central extension) under the correspondence between this moduli space and the moduli space of Higgs bundles, established by Hitchin \cite{hitchin} and Donaldson \cite{donaldson}. The case of the involution $(E,\varphi)\mapsto (E,-\varphi)$ is already covered in the beautiful paper of Hitchin \cite{hitchin}. \section{Line bundles} To start with, we consider involutions in the case of line bundles. The moduli space of line bundles of degree $d$ is the {\it Jacobian variety} $J^d$. There is a universal line bundle (called a Poincar{\'e} bundle) on $J^d\times X$ which is unique up to tensoring by a line bundle pulled back from $J^d$. We will denote $J^0$ simply by $J$. The involution $\iota :L \to L^{-1}$ of $J$ has obviously the finite set $J_2$ of elements of order $2$, as its fixed point variety. The Higgs moduli space of line bundles consists of pairs $(L, \varphi )$ where $L$ is a line bundle of fixed degree and $\varphi $ is a section of $K$. The moduli space of rank 1 Higgs bundles of degree $d$ is thus isomorphic to $J^d\times H^0(X,K)$. There are a few involutions to consider even in this case. Firstly on the Higgs moduli space of line bundles of degree $d$, one may consider the involution $(L, \varphi ) \to (L, -\varphi )$. The fixed point variety is just $J^d$ imbedded in the Higgs moduli space by the map $L \mapsto (L, 0)$ since any automorphism of $L$ induces identity on the set of Higgs fields on $L$. When $d = 0$, one may also consider the involution $(L, \varphi ) \mapsto (L^{-1}, \varphi )$. This has as fixed points the set $\{ (L, \varphi ): L \in J_2 \;\;\mbox{and}\;\; \varphi\in H^0(X,K)\} $. Also, we may consider the composite of the two actions, namely $(L, \varphi ) \mapsto (L^{-1}, -\varphi )$. Again it is obvious that the fixed points are just points of $J_2$ with Higgs fields $0$. Finally, translations by elements of $J_2\smallsetminus \{ 0 \}$ are involutions without fixed points. \section{Fixed Points of $\iota(\alpha)^{-}$}\label{triples} We wish now to look at involutions of $M = M(2, \delta )$ and $\mathcal{H} = \mathcal{H}(2, \delta )$. We will often assume that $\delta $ is either ${\mathcal{O}}$ or a line bundle of degree $1$. There is no loss of generality, since the varieties $M$ and $\mathcal{H}$ for any $\delta $ are isomorphic (on tensoring with a suitable line bundle) to ones with $\delta $ as above. In general, we denote by $d$ the degree of $\delta $. If $d$ is odd, the spaces $M$ and $\mathcal{H}$ are smooth and the points correspond to stable bundles and stable Higgs bundles, respectively. If $d$ is even (and $\delta $ trivial), there is a natural morphism $J \to M$ which takes $L$ to $L\oplus L^{-1}$ and imbeds the quotient of $J$ by the involution $\iota$ on $J$, namely the Kummer variety, in $M$. This is the non-stable locus (which is also the singular locus if $g > 2$) of $M$ and has $J_2$ as its own singular locus. \begin{remark} If $(E, \varphi )\in \mathcal{H}$, but $E$ is not semi-stable, then there is a line sub-bundle $L$ of $E$ which is of degree $>d/2$. 
Moreover, it is the unique sub-bundle with degree $\geqslant d/2$. Clearly, since $(E, \varphi )$ is semi-stable, $\varphi $ does not leave $L$ invariant. Hence $(E, \varphi )$ is actually a stable Higgs bundle. In particular, it is a smooth point of $\mathcal{H}$. \end{remark} Before we take up the study of the involutions (\ref{involutions}) in general, we note that even when $\alpha$ is trivial, the involution $\iota^-:=\iota(\mathcal{O})^{-}$ is non-trivial and is of interest. In this case, the fixed point varieties were determined by Hitchin \cite{hitchin} and we recall the results with some additions and clarifications. \begin{proposition} Polystable Higgs bundles $(E, \varphi )$ fixed by the involution $\iota^{-}:(E, \varphi ) \mapsto (E, -\varphi )$ fall under the following types: \begin{itemize} \item[(i)] $E\in M = M(2, \delta )$ and $\varphi = 0$. \item[(ii)] For every integer $a$ satisfying $ 0 < 2a - d \leqslant 2g - 2$, consider the set $T_a$ of triples $(L, \beta,\gamma )$ consisting of a line bundle $L$ of degree $a$ and homomorphisms $\beta : L^{-1}\otimes \delta \to L\otimes K$ and $\gamma : L\to L^{-1}\otimes \delta \otimes K$, with $\gamma \neq 0$. \item[(iii)] Same as in (ii), but with $2a = d$ if $d$ is even. To every triple as in (ii) or (iii), associate the Higgs bundle $(E, \varphi )$ where \begin{equation}\label{higgs-bundle} E = L\oplus (L^{-1}\otimes \delta ) \;\;\;\; \;\; \mbox{and}\;\;\;\;\;\; \varphi = \begin{pmatrix} 0 & \beta \\ \gamma & 0 \end{pmatrix}. \end{equation} \end{itemize} Any type {\em (ii)} Higgs bundle $(E, \varphi)$ is stable whereas $E$ is not even semi-stable. In type {\em (iii)}, if $L^2$ is not isomorphic to $\delta $, and $\beta $ and $\gamma $ are both non-zero, then $(E, \varphi )$ is stable. If $L^2\cong \delta$ and $\beta $ and $\gamma $ (both of which are then sections of $K$) are linearly independent, then $(E, \varphi )$ is stable. \end{proposition} \begin{proof} Firstly, if $E\in M$ and $\varphi = 0$, it is obvious that it is fixed under the above involution. On the other hand, it is clear that if $(E, \varphi )$ is of type (ii) or (iii), then the automorphism of $E$ \begin{equation}\label{i-matrix} \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, \end{equation} \noindent takes $\varphi $ to $-\varphi $. In type (ii), since $2a - d > 0$, it follows that $L$ is the only line sub-bundle of $E$ of degree $\geqslant d/2$. Since $(E, \varphi )$ is semi-stable, $L$ is not invariant under $\varphi $ (which is the case if and only if $\gamma $ is non-zero). Therefore, $(E, \varphi )$ is stable. Type (iii) is relevant only when $d$ is even and so we will assume that $\delta $ is trivial. If $L^2$ is not trivial, then every line subbundle of $E$ of degree $0$ is either $L$ or $L^{-1}$. Since we have assumed that $(E, \varphi )$ is poly-stable, either $\varphi $ leaves both $L$ and $L^{-1}$ invariant or neither, i.e. $\beta $ and $\gamma $ are both zero or both non-zero. The former case is covered under type (i), and in the latter case $(E, \varphi )$ is stable. Finally, if $L^2$ is trivial, then every line sub-bundle of degree $0$ is isomorphic to $L$, and all imbeddings of $L$ in $E = L \oplus L$ are given by $v \mapsto (\lambda v , \mu v)$, with $(\lambda , \mu ) \neq 0$. The restriction of $\varphi $ to $L$ composed with the projection of $E\otimes K$ to $(E/L) \otimes K = (L\otimes K)$ is given by $\lambda \gamma + \mu \beta $. 
Hence this imbedding of $L$ is invariant under $\varphi $ if and only if $\lambda \gamma + \mu \beta = 0$, proving that if $\beta $ and $\gamma $ are linearly independent, then $(E, \varphi )$ is stable. Otherwise, $(L,0)$ is a (Higgs) subbundle of $(E, \varphi )$ and hence it is covered again in i). Conversely, let $(E, \varphi )$ be a {\it stable} Higgs bundle fixed by the involution. Then there exists an automorphism $f$ of $E$ (of determinant 1) which takes $\varphi $ to $-\varphi $. If $E$ is a stable vector bundle, all its automorphisms are scalar multiplications which take $\varphi $ into itself. Hence $\varphi = 0$ in this case. Let $E$ be nonstable. Obviously, then $\varphi $ is non-zero. Since $f^2$ is an automorphism of the stable Higgs bundle $(E, \varphi )$, we have $f^2 = \pm \Id_E$. This implies that $f_x$ is semi-simple for all $x \in X$. If $f^2 = \Id_E$, the eigenvalues of $f_x$ are $\pm 1$ and since $\det(f_x) = 1 $ we have $f = \pm \Id_E$ which would actually leave $\varphi $ invariant. So $f_x$ has $\pm i$ as eigenvalues at all points. We conclude that $E$ is a direct sum of line bundles corresponding to the eigenvalues $\pm i$. Thus $f^2 =-\Id_E$ and $E = L \oplus (L^{-1}\otimes \delta )$ with $f|L = i.\Id_E$, and $f|(L^{-1}\otimes \delta ) = -i.\Id_E$. We may assume that $\deg L = a \geqslant d/2$, replacing $L$ by $L^{-1}\otimes \delta $ (and $f$ by $-f$) if necessary. If $ a > d/2$, it also follows that the composite of $\varphi |L $ and the projection $E\otimes K \to L^{-1}\otimes \delta \otimes K$ is nonzero (since $(E, \varphi )$ is semi-stable) which implies that $a \leqslant -a + d +2g -2$ , i.e . $2a - d \leqslant 2g - 2$. Moreover, from the fact that $f$ takes $\varphi $ to $-\varphi $, one deduces that $\varphi $ is of the form claimed. If $(E, \varphi )$ is not stable, in which case we may assume $\delta $ is trivial, $(E, \varphi )$ is a direct sum of $(L, \psi )$ and $(L^{-1}, -\psi )$ with $\deg L = 0$. If $\psi $ is nonzero, then $(E, \varphi )$ is isomorphic to $(E, -\varphi )$ if and only if $L\cong L^{-1}$. If then $L\cong L^{-1}$ we may take $g =1/\sqrt 2\begin{pmatrix}1&1\\-1&1\end{pmatrix}$ and change the decomposition of $E$ to $g(L) \oplus g(L)$ and see that $(E, \varphi )$ falls under type (iii). \end{proof} \subsection{The set of triples} The above proposition leads us to consider the set of triples as in type (ii) and type (iii) above with $d \leqslant 2 a \leqslant d + 2g - 2$. Set $m = 2a - d$. To such a triple, we have associated the Higgs bundle $(E,\varphi )$ given by $E = L \oplus (L^{-1}\otimes \delta )$ and $\varphi $ by the matrix in (\ref{higgs-bundle}). Notice however that this triple and the triple $(L, \lambda ^{-1}\beta,\lambda \gamma)$ give rise to isomorphic Higgs bundles. So we consider the set of triples $(L, \beta , \gamma )$ as above, make $\mathbb{C} ^*$ act on it, in which $\lambda \in \mathbb{C} ^*$ takes $(L, \beta , \gamma )$ to $(L, \lambda ^{-1}\beta, \lambda \gamma)$ and pass to the quotient. We have thus given an injective map of this quotient into the $\iota ^{-}$-fixed subvariety of Higgs bundles. We will equip this quotient with the structure of a variety. \subsection{Construction of the space of triples.} Take any line bundle ${{\mathscr{L}}}$ on $T \times X$, where $T$ is any parameter variety. For any $t\in T$, denote by ${{\mathscr{L}}}_t$ the line bundle ${\mathscr{L}}|{{t}\times X}$. Assume that $\deg ({{\mathscr{L}}}_t) = r$ for all $t\in T$. 
Then we get a (classifying) morphism $c_{{\mathscr{L}}}:T\to J^r$ mapping $t$ to the isomorphism class of ${{\mathscr{L}}}_t$. There is a natural morphism $S = S^r(X) \to J^r$ since $S\times X$ has a universal divisor, giving rise to a family of line bundles on $X$ of degree $r$, parametrised by $S$. The pull back of any Poincar{\'e} bundle on $J^r\times X$ to $S\times X$ is the tensor product of the line bundle given by the universal divisor on $S \times X$ and a line bundle pulled back from $S$. The composite of the projection of this line bundle $U$ to $S$ and the morphism $S\to J^r$ blows down the zero section of the line bundle and actually yields an affine morphism; the fibre over any $L\in J^r$ can be identified with $H^0(X,L)$, and the image of the zero section provides a section $Z$ of this affine morphism. Notice that if $r > 2g -2$, this is actually a vector bundle over $J^r$ of rank $r + 1 - g$ and $Z$ is its zero section. If ${\mathscr{L}}$ is a family of line bundles of degree $r$ on $X$, parametrised by $T$ as above, the pull back of the morphism $U \to J^r$ by $c_{{\mathscr{L}}}:T \to J^r$ will be denoted $A({{\mathscr{L}}})$. If $m >0$, let $V$ be the pull back of the above vector bundle by the map $J^a \to J^{2g - 2 + m}$ given by $L\to K\otimes L^2 \otimes \delta ^{-1}$. On the other hand, the map $L \to K \otimes L^{-2} \otimes \delta $ of $J^a \to J^{2g - 2 - m}$ pulls back the symmetric product $S^{2g - 2 -m}$ and gives a $2^{2g}$-sheeted \'etale covering. The inverse image of $V$ tensored with a line bundle on $S^{2g -2 -m}$ thus gives the required structure on the quotient of $T_a$ by $\mathbb{C} ^*$. \begin{proposition} For each $m$ with $0 < m < 2g -2$, consider the pull-back of the map $S^{2g - 2 - m} \to J^{2g - 2 - m}$ by the map $L \to K \otimes L^{-2}\otimes \delta $. A vector bundle over this of rank $g-1 + m$ is isomorphic to a subvariety of Higgs bundles which are all fixed by $\iota^-$. \end{proposition} We have seen that $M$, imbedded in $\mathcal{H}$ by $E \mapsto (E, 0)$, is a fixed point variety. It is of course closed, and in fact compact as well. The set of type (ii) fixed points is the disjoint union of the $T_a$ with $d/2 < a < g - 1 + d$ (and is disjoint from $M$ as well). Each of these gives an injective morphism of a vector bundle on a $2^{2g}$-sheeted \'etale covering of $S^a$ into the fixed point subvariety. Since the subvariety of $\mathcal{H}$ corresponding to nonstable vector bundles is smooth and closed, this morphism is an isomorphism onto the image. We need to describe the image of the subvariety $T_a$ when $a = d/2$. We will assume $d =0$ and $\delta $ is trivial. Consider the natural map of $S^{2g -2}$ onto $J^{2g -2}$. Pull it back to $J$ by the two maps $L \to K\otimes L^2$ and $L\to K\otimes L^{-2}$. Take their fibre product and the quotient by the involution which interchanges the two factors. There is a natural map of this quotient into $\mathbb{P}H^0(K^2)$. Pull back the line bundle $\mathcal{O}(1)$ on $\mathbb{P}H^0 (K^2)$ to this quotient. It is easy to check that this is irreducible and closed. There are other irreducible components of type (iii) in the case of $g = 2$. Take any line bundle $L$ of order 2 and consider \begin{equation} \begin{pmatrix} 0 & \beta \\ \gamma & 0 \end{pmatrix} \end{equation} as a Higgs field on $L \oplus L$. Consider the tensor product map $\beta \otimes \gamma$ into $H^0 (K^2)$. This is surjective and can be identified with the quotient by $\mathbb{C} ^*$ and $\mathbb{Z}/2$ of the fixed point set given by $(L, \beta , \gamma)$ with $L\in J_2$. 
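For concreteness, we note the elementary identities behind these identifications: the Higgs field associated to such a triple has determinant $-\beta\gamma \in H^0(X,K^2)$, and this product is unchanged by the $\mathbb{C}^*$-action on triples, so it is well defined on the quotients considered above,
$$
\det\begin{pmatrix} 0 & \beta \\ \gamma & 0 \end{pmatrix} = -\beta\gamma,
\qquad
(\lambda ^{-1}\beta)(\lambda \gamma ) = \beta\gamma, \quad \lambda \in \mathbb{C}^*.
$$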
\subsection{An Alternative point of view} Note that both in type (ii) and type (iii) we have a natural morphism of these components into $H^0(X, K^2)$ given by $(\beta , \gamma )\mapsto -\beta\gamma $. Clearly this is the restriction of the Hitchin map. Given a (non-zero) section of $H^0 (K^2)$ we can partition its divisor into two sets of cardinality $2g -2 -m $ and $2g -2 + m$. They yield elements of $J^{2g -2 -m}$ and $J^{2g -2 +m}$ together with non-zero sections $\beta $ and $\gamma $ which are defined up to the action of $\mathbb{C}^*$ as we have defined above. Passing to a $2^{2g}$-sheeted \'etale covering we get the required set. In particular, it follows that, except in case (i), where the Hitchin map is $0$, the Hitchin map is finite and surjective. \section{Prym varieties and rank 2 bundles}\label{prym} Let now $\alpha \in J_2\smallsetminus \{0\}$. To start with, we will determine the fixed points of the involution of $M$ defined by tensoring by $\alpha$. \begin{proposition} Let $E$ be a vector bundle of rank $2$ on $X$, and let $\alpha $ be a non-trivial line bundle of order $2$ such that $(E\otimes \alpha )\cong E$. Then $E$ is polystable. Moreover if $E$ is not stable it is of the form $L \oplus (L\otimes \alpha )$ with $L^2 \cong\alpha $. \end{proposition} \begin{proof} Assume that $(E\otimes \alpha )\cong E$. If $E$ is not poly-stable, then it has a unique line subbundle $L$ of maximal degree. This implies that $(L\otimes \alpha )\cong L$, which is absurd. If $E$ is of the form $L \oplus M$, then under our assumption, it follows that $M \cong L\otimes \alpha $. \end{proof} We recall \cite{mumford,narasimhan-ramanan} the relation between the Prym variety of a two-sheeted {\'e}tale cover of $X$ and vector bundles of rank 2 on $X$. If $\alpha $ is a non-trivial element of $J_2(X)$, there is associated to it a canonical $2$-sheeted {\'e}tale cover $\pi :X_{\alpha } \to X$, namely $\Spec({\mathcal{O}} \oplus \alpha )$ with the obvious algebra structure on this locally free sheaf. Let $\iota$ be the Galois involution. For every line bundle $L$ of degree $d$ on $X_\alpha$, the line bundle $L\otimes \iota^*L$ of degree $2d$ with the natural lift of $\iota$ clearly descends to a line bundle $\Nm(L)$ of degree $d$ on $X$. This gives the {\it norm homomorphism} $\Nm:\operatorname{Pic}(X_\alpha)\to \operatorname{Pic}(X)$. Its kernel consists of two components and the one which contains the trivial line bundle is the {\it Prym variety} $P_{\alpha }$ associated to $\alpha $. If $L$ is a line bundle on $X_\alpha$, its direct image $\pi _*(L)$ on $X$ is a vector bundle of rank $2$. Note that $\det(\pi _*({\mathcal{O}})) = \det({\mathcal{O}}\oplus \alpha) = \alpha$, and more generally that $\det(\pi _*(L)) = \Nm(L) \otimes \alpha $ for all $L$. The fibres of $\Nm$ consist of two cosets $F_\alpha$ of $P_{\alpha }$ and the Galois involution interchanges the two if the degree is odd and leaves each component invariant if the degree is even. In particular, it acts on $P_{\alpha }$, and indeed as $L\mapsto L^{-1}$ on it. \begin{proposition} For any line bundle $L$ on $X_\alpha$, the direct image $E = \pi _*L$ is a polystable vector bundle of rank $2$ on $X$ such that $E\otimes \alpha \cong E $. If $E$ is not stable, it is of the form $\xi \oplus (\xi \otimes \alpha)$. 
\end{proposition} \begin{proof} Indeed, if $\xi $ is any line subbundle of $E$, its inclusion in $E$ gives rise to a nonzero homomorphism $\pi ^*\xi \to L$, and hence $2\deg \xi = \deg (\pi ^*\xi ) \leqslant \deg(L) = \deg(E)$, proving that $E$ is semi-stable. If $\deg \xi = \deg E/2$, the homomorphism $\pi ^*\xi \to L$ is an isomorphism. But then $\pi _*L = \pi _*(\pi ^*\xi ) = \xi \otimes \pi _*{\mathcal{O}} = \xi \otimes ({\mathcal{O}} \oplus \alpha )$, proving our assertion. \end{proof} We thus have a morphism of $\Nm^{-1}(\delta \otimes \alpha )$ into $M(2, \delta )$ which maps $L$ to $\pi _*L$. Let $E$ be stable such that $E\otimes \alpha \cong E$. We may then choose an isomorphism $f:E \to E\otimes {\alpha }$ such that its iterate $(f\otimes \Id_{\alpha})\circ f: E\to E$ is the identity. Indeed this composite is an automorphism of $E$ and hence a non-zero scalar. We can then replace the isomorphism by a scalar multiple so that this composite is $\Id_E$. Now the locally free sheaf ${\mathcal{E}}$ associated to $E$ can be provided a module structure over ${\mathcal{O}}\oplus \alpha $ by using the above isomorphism. This means that it is the direct image of an invertible sheaf on $X_\alpha$. On the other hand, if $E$ is poly-stable but not stable, it is isomorphic to $L\oplus M$. If $E\otimes \alpha $ is isomorphic to $E$, it follows that $L\cong M \otimes \alpha $. Hence we deduce that the above morphism $\Nm^{-1}(\delta\otimes \alpha )\to M(2, \delta )$ is onto the fixed point variety under the action of tensoring by $\alpha$ on $M(2, \delta )$. If $\pi _*L \cong \pi _*L'$, then by applying $\pi ^*$ to it, we see that $L'$ is isomorphic either to $L$ or to $\iota^*L$. In other words, the above map descends to an isomorphism of the quotient of $\Nm^{-1}(\delta\otimes\alpha )$ by the Galois involution onto the $\alpha $-fixed subvariety of $M(2, \delta)$. Since the fibres of $\Nm$ are interchanged by the Galois involution when $\delta$ is of odd degree, this fixed point variety is isomorphic to a coset of the Prym variety. When $\delta $ is of even degree, the $\alpha $-fixed variety has two connected components, each isomorphic to the quotient of the Prym variety by the involution $L\to L^{-1}$, that is to say, to the Kummer variety of the Prym variety. We collect these facts in the following. \begin{theorem}\label{fixed-points-M} Let $\alpha $ be a non-trivial element of $J_2(X)$. It acts on $M(2, \delta )$ by tensor product: $\iota(\alpha)(E):=E\otimes \alpha$. The fixed point variety $F_\alpha(\delta)$ is isomorphic to the Prym variety of the covering $\pi:X_{\alpha }\to X$ given by $\alpha $ if $d = \deg \delta$ is odd, and is isomorphic to the union of two irreducible components, each isomorphic to the Kummer variety of the Prym variety, if $d$ is even. \end{theorem} \begin{remarks} (1) If $L$ is a line bundle on $X_{\alpha }$ and $E = \pi _*L$, then since $E\otimes \alpha \cong E$, $\alpha $ is a line sub-bundle of $\ad(E)$. Indeed, since $E$ is poly-stable, $\alpha $ is actually a direct summand. To see this, interpret $\ad(E)$ as $S^2(E)\otimes \det(E)^{-1}$ and notice that there is a natural surjection of $S^2(\pi_* L)$ onto $\pi_*(L^2)$. It follows that $\pi_*(L^2)\otimes \det(E)^{-1}$ is contained in $\ad(E)$. 
Thus we see that $$\ad (\pi _*L) \cong \alpha \oplus (\pi_*(L^2)\otimes \alpha \otimes \Nm(L^{-1})).$$ (2) As we have seen above, in the case where $\delta $ is trivial, the fixed point variety intersects the non-stable locus, namely the Kummer variety of the Jacobian, at bundles of the form $\xi \oplus (\xi \otimes \alpha )$, where $\xi $ is a line bundle with $\xi ^2 \cong \alpha $. Clearly, $\xi $ and $\xi \otimes \alpha $ give the same bundle. Thus the intersection of the two copies of the Prym Kummer variety (corresponding to any non-trivial $\alpha \in J_2$) with the Jacobian--Kummer variety is an orbit of smooth points, under the action of $J_2$. This geometric fact can be stated in the context of principally polarised abelian varieties and is conjectured to be characteristic of Jacobians. Analytically expressed, this is the Schottky equation. \end{remarks} \section{Fixed Points of $\iota(\alpha )^{\pm }$ when $d$ is odd.}\label{fix-odd} If $(E, \varphi )$ is a polystable Higgs bundle fixed under either of the involutions $\iota(\alpha )^{\pm }$, we observe that $E$ is isomorphic to $E\otimes \alpha $. This implies that $E$ is itself polystable. Hence if $d$ is odd, we have only to consider the action of $\alpha$ on $M = M(2, \delta )$, given by $E\mapsto E\otimes \alpha$, and look at its action on the cotangent bundle. Let $F_\alpha$ be the fixed point variety in $M$ under the action of $\alpha$ (see Theorem \ref{fixed-points-M}). We then have the exact sequence $$0\to N(F_\alpha, M)^* \to T^*(M)|_{F_\alpha} \to T^*(F_\alpha) \to 0,$$ where $N(F_\alpha,M)$ is the normal bundle of $F_\alpha$ in $M$. This sequence splits canonically since $\alpha $ acts on the restriction of the tangent bundle of $M$ to $F_\alpha$ and splits it into eigen-bundles corresponding to the eigen-values $\pm 1$. Clearly the subbundle corresponding to the eigen-value $+1$ (resp. $-1$) is $T(F_\alpha)$ (resp. $N(F_\alpha, M)$). Since $E\otimes \alpha \cong E$ and $d$ is odd, $E$ is stable, and we have the following. \begin{theorem} If $\deg \delta$ is odd the fixed point subvariety $\mathcal{F}_\alpha^+$ (resp. $\mathcal{F}_\alpha^-$) of the action of $\iota(\alpha )^+$ (resp. $\iota(\alpha )^{-}$) on $\mathcal{H}(2,\delta)$ is the cotangent bundle $T^*(F_\alpha)$ of $F_\alpha$ (resp. the conormal bundle $N(F_\alpha,M)^*$ of $F_\alpha$). \end{theorem} \section{Fixed points of $\iota(\alpha )^\pm$ when $d$ is even.}\label{fix-even} We may assume that the determinant is trivial in this case. If $(E, \varphi )$ is fixed by either of the involutions $\iota(\alpha )^{\pm }$, with $E$ stable, the above discussion is still valid so that we have \begin{itemize} \item[(i)] The sub-variety of fixed points of $\iota(\alpha )^+$ is $T^*(F_\alpha^{stable})$. \item[(ii)] The sub-variety of fixed points of $\iota(\alpha )^{-}$ is $N^*(F_\alpha^{stable}, M)$. \end{itemize} Assume then that $(E,\varphi)$ is a fixed point of $\iota(\alpha )^{\pm }$, where $E$ is polystable of the form $L \oplus L^{-1}$. We have $L^{-1}\cong L \otimes \alpha $ and $\varphi $ is of the form \begin{equation}\label{higgs-field} \varphi= \begin{pmatrix} \omega & \beta \\ \gamma & -\omega \end{pmatrix}, \end{equation} \noindent with $\beta,\gamma \in H^0(K\otimes \alpha )$ and $\omega \in H^0(K)$. Since the summands of $E$ are distinct, any isomorphism $f: E \otimes \alpha \to E$ has to be of the form $$ \begin{pmatrix} 0 & \lambda \\ -\lambda^{-1} & 0 \end{pmatrix}, $$ \noindent with $\lambda \in \mathbb{C}^*$. 
Also, $f$ takes $\varphi $ to $\pm \varphi $ if and only if $$ \begin{pmatrix} 0 & \lambda \\ -\lambda^{-1} & 0 \end{pmatrix} \begin{pmatrix} \omega & \beta \\ \gamma & -\omega \end{pmatrix} \begin{pmatrix} 0 & -\lambda \\ \lambda^{-1} & 0 \end{pmatrix} =\pm \begin{pmatrix} \omega & \beta \\ \gamma & -\omega \end{pmatrix}. $$ In other words, \begin{equation}\label{condition} \begin{pmatrix} -\omega & \lambda^{-2}\gamma \\ \lambda^{2}\beta & \omega \end{pmatrix} =\pm \begin{pmatrix} \omega & \beta \\ \gamma & -\omega \end{pmatrix}. \end{equation} We analyse the cases $\iota(\alpha)^+$ and $\iota(\alpha)^-$ separately. \subsection{Fixed points of $\iota(\alpha)^+$} In the case of $\iota(\alpha )^{+}$, (\ref{condition}) implies that $\omega = 0$ and $\lambda ^2 \beta = \gamma $. If $\beta $ or $\gamma $ is $0$, the Higgs bundle is $S$-equivalent to $(L \oplus (L\otimes \alpha ), 0)$. Hence this fixed point variety is isomorphic to the product of $J/\alpha $ and the space of decomposable tensors in $H^0 (K) \otimes H^0(K)$. \begin{remark} Since $E$ is of the form $\pi _*(L)$, we conclude that the tangent space at $E$ to $M$ (assuming that $E$ is stable) is $$H^1(\ad (E)) = H^1(\alpha ) \oplus H^1(\pi_* (L^2)\otimes \alpha ).$$ It is clear that the first summand here is the tangent space to the Prym variety while the second is the space normal to Prym in $M$. \end{remark} \subsection{Fixed points of $\iota(\alpha)^-$} It is clear that \begin{equation} \begin{pmatrix} \omega & \beta \\ \gamma & -\omega \end{pmatrix}. \end{equation} is taken to its negative under the action of \begin{equation} \begin{pmatrix} 0 & \lambda \\ - \lambda^{-1} & 0 \end{pmatrix} \end{equation} if and only if $\beta $ and $\gamma $ are multiples of each other, in which case we may as well assume that $\beta = \gamma $. In other words, $E$ belongs to the Prym variety and $\varphi $ belongs to $H^0(K) \oplus H^0(K \otimes \alpha )$. \section{Higgs bundles and representations of the fundamental group} Let $G$ be a reductive Lie group, and let $\pi_1(X)$ be the fundamental group of $X$. A representation $\rho:\pi_1(X)\longrightarrow G$ is said to be {\em reductive} if the composition of $\rho$ with the adjoint representation of $G$ in its Lie algebra is completely reducible. When $G$ is algebraic, this is equivalent to the Zariski closure of the image of $\rho$ being a reductive group. If $G$ is compact or abelian every representation is reductive. We thus define the \emph{moduli space of representations} of $\pi_1(X)$ in $G$ to be the orbit space $$ \mathcal{R}(G) = \Hom^{red}(\pi_1(X),G) / G $$ of reductive representations. With the quotient topology, $\mathcal{R}(G)$ has the structure of an algebraic variety. In this section we briefly review the relation between rank 1 and rank 2 Higgs bundles, and representations of the fundamental group of the surface and its universal central extension in $\mathbb{C}^\ast$, $\mathrm{U}(1)$, $\mathbb{R}^*$, $\mathrm{SL}(2,\mathbb{C})$, $\mathrm{SU}(2)$ and $\mathrm{SL}(2,\mathbb{R})$. For more details, see \cite{hitchin,donaldson,corlette,narasimhan-seshadri}. \subsection{Rank 1 Higgs bundles and representations} As is well-known $\mathcal{R}(U(1))$ is in bijective correspondence with the space $J$ of isomorphism classes of line bundles of degree $0$. Also, if we identify $\mathbb{Z} /2$ with the subgroup $\pm 1$ in $U(1)$ we get a bijection of $\mathcal{R}(\mathbb{Z} /2)$ with the set $J_2$ of line bundles of order $2$. 
By Hodge theory one shows that $\mathcal{R}(\mathbb{C}^*)$ is in bijection with $T^*J\cong J\times H^0(X,K)$, the moduli space of Higgs bundles of rank 1 and degree $0$. The subvariety of fixed points of the involution $(L,\varphi)\to (L^{-1},\varphi)$ in this moduli space is $J_2\times H^0(X,K)$ and corresponds to the subvariety $\mathcal{R}(\mathbb{R}^*)\subset \mathcal{R}(\mathbb{C}^\ast)$. \subsection{Rank 2 Higgs bundles and representations} The notion of stability of a Higgs bundle $(E,\varphi)$ emerges as a condition for the existence of a Hermitian metric on $E$ satisfying the Hitchin equations. More precisely, Hitchin \cite{hitchin} proved the following. \begin{theorem} \label{hk} An $\mathrm{SL}(2,\mathbb{C})$-Higgs bundle $(E,\varphi)$ is polystable if and only if $E$ admits a Hermitian metric $h$ satisfying $$ F_h +[\varphi,\varphi^{*_h}]= 0, $$ where $F_h$ is the curvature of the Chern connection defined by $h$. \end{theorem} Combining Theorem \ref{hk} with a theorem of Donaldson \cite{donaldson} about the existence of a harmonic metric on a flat $\mathrm{SL}(2,\mathbb{C})$-bundle with reductive holonomy representation, one has the following non-abelian generalisation of the Hodge correspondence explained above for the rank 1 case \cite{hitchin}. \begin{theorem}\label{correspondence} The varieties $\mathcal{H}(2,\mathcal{O})$ and $\mathcal{R}(\mathrm{SL}(2,\mathbb{C}))$ are homeomorphic. \end{theorem} The representation $\rho$ corresponding to a polystable Higgs bundle is the holonomy representation of the flat $\mathrm{SL}(2,\mathbb{C})$-connection given by \begin{equation}\label{higgs-connection} D=\bar{\partial}_E+\partial_h+\varphi+\varphi^{*_h}, \end{equation} where $h$ is the solution to the Hitchin equations and $\bar{\partial}_E+\partial_h$ is the $\mathrm{SU}(2)$-connection defined by $\bar{\partial}_E$, the Dolbeault operator of $E$, and $h$. \begin{remark} Notice that the complex structures of $\mathcal{H}(2,\mathcal{O})$ and $\mathcal{R}(\mathrm{SL}(2,\mathbb{C}))$ are different. The complex structure of $\mathcal{H}(2,\mathcal{O})$ is induced by the complex structure of $X$, while that of $\mathcal{R}(\mathrm{SL}(2,\mathbb{C}))$ is induced by the complex structure of $\mathrm{SL}(2,\mathbb{C})$. \end{remark} Higgs bundles with fixed determinant $\delta$ of odd degree can also be interpreted in terms of representations. For this we need to consider the universal central extension of $\pi_1(X)$ (see \cite{atiyah-bott,hitchin}). Recall that the fundamental group $\pi_1(X)$ of $X$ is generated by $2g$ generators, say $A_{1},B_{1}, \ldots, A_{g},B_{g}$, subject to the single relation $\prod_{i=1}^{g}[A_{i},B_{i}] = 1$. It has a universal central extension \begin{equation}\label{eq:gamma} 0\longrightarrow\mathbb{Z}\longrightarrow\Gamma\longrightarrow\pi_1(X)\longrightarrow 1 \ \end{equation} \noindent generated by the same generators as $\pi_1(X)$, together with a central element $J$ subject to the relation $\prod_{i=1}^{g}[A_{i},B_{i}] = J$. Representations of $\Gamma$ into $\mathrm{SL}(2,\mathbb{C})$ are of two types depending on whether the central element $1\in \mathbb{Z}\subset \Gamma$ goes to $+I$ or $-I$ in $\mathrm{SL}(2,\mathbb{C})$. In the first case the representation is simply obtained from a homomorphism from $\Gamma/\mathbb{Z}=\pi_1(X)$ into $\mathrm{SL}(2,\mathbb{C})$. The $+I$ case corresponds to Higgs bundles with trivial determinant as we have seen. The $-I$ case corresponds to Higgs bundles with odd degree determinant.
Namely, let \begin{equation}\label{rgamma} \mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{C})) = \{\rho\in \Hom^{red}(\Gamma,\mathrm{SL}(2,\mathbb{C})) / \mathrm{SL}(2,\mathbb{C}) \;\;: \;\;\rho(J)=\pm I\}. \end{equation} Here a reductive representation of $\Gamma$ is defined as at the beginning of the section, replacing $\pi_1(X)$ by $\Gamma$. Note that $\mathcal{R}^+(\Gamma,\mathrm{SL}(2,\mathbb{C}))=\mathcal{R}(\mathrm{SL}(2,\mathbb{C}))$. We then have the following \cite{hitchin}. \begin{theorem}\label{correspondence} Let $\delta$ be a line bundle over $X$. Then there are homeomorphisms (i) $\mathcal{H}(2,\delta)\cong \mathcal{R}^+(\Gamma,\mathrm{SL}(2,\mathbb{C}))$ if $\deg \delta$ is even, (ii) $\mathcal{H}(2,\delta)\cong \mathcal{R}^-(\Gamma,\mathrm{SL}(2,\mathbb{C}))$ if $\deg \delta$ is odd. \end{theorem} \subsection{Fixed points of $\iota(\mathcal{O})^{-}$ and representations of $\Gamma$} For any reductive subgroup $G\subset \mathrm{SL}(2,\mathbb{C})$ containing $-I$ we consider \begin{equation}\label{rgamma} \mathcal{R}^\pm(\Gamma,G) = \{\rho\in \Hom^{red}(\Gamma,G) / G \;\;: \;\;\rho(J)=\pm I\}. \end{equation} In particular we have $\mathcal{R}^\pm(\Gamma,\mathrm{SU}(2))$ and $\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{R}))$. Note that, since $\mathrm{SU}(2)$ is compact, every representation of $\Gamma$ in $\mathrm{SU}(2)$ is reductive. We can define the subvarieties $\mathcal{R}_k^\pm(\Gamma,\mathrm{SL}(2,\mathbb{R}))$ of $\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{R}))$ given by the representations of $\Gamma$ in $\mathrm{SL}(2,\mathbb{R})$ with Euler class $k$. By this, we mean that the corresponding flat $\mathrm{PSL}(2,\mathbb{R})$ bundle has Euler class $k$. If the $\mathrm{PSL}(2,\mathbb{R})$ bundle can be lifted to an $\mathrm{SL}(2,\mathbb{R})$ bundle, then $k=2d$, otherwise $k=2d-1$. The Milnor inequality \cite{milnor} says that the Euler class $k$ of any flat $\mathrm{PSL}(2,\mathbb{R})$ bundle satisfies $$ |k|\leqslant 2g-2, $$ where $g$ is the genus of $X$. Hitchin proves the following \cite{hitchin}. \begin{theorem} Consider the involution $\iota(\mathcal{O})^-$ of $\mathcal{H}(2,\delta)$. We have the following. (i) The fixed point subvariety of $\iota(\mathcal{O})^-$ of points $(E,\varphi)$ with $\varphi=0$ is homeomorphic to the image of $\mathcal{R}^\pm(\Gamma,\mathrm{SU}(2))$ in $\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{C}))$, where we have $\mathcal{R}^+$ if the degree of $\delta$ is even and $\mathcal{R}^-$ if the degree of $\delta$ is odd. (ii) The fixed point subvariety of $\iota(\mathcal{O})^-$ of points $(E,\varphi)$ with $\varphi\neq 0$ is homeomorphic to the image of $\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{R}))$ in $\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{C}))$, where we have $\mathcal{R}^+$ if the degree of $\delta$ is even and $\mathcal{R}^-$ if the degree of $\delta$ is odd. (iii) More precisely, the subvariety of triples $\mathcal{H}_a\subset \mathcal{H}(2,\delta)$ defined in Section \ref{triples} is homeomorphic to the image of $\mathcal{R}_{2a}^+(\Gamma,\mathrm{SL}(2,\mathbb{R}))$ in $\mathcal{R}^+(\Gamma,\mathrm{SL}(2,\mathbb{C}))$ if the degree of $\delta$ is even or to the image of $\mathcal{R}_{2a-1}^-(\Gamma,\mathrm{SL}(2,\mathbb{R}))$ in $\mathcal{R}^-(\Gamma,\mathrm{SL}(2,\mathbb{C}))$ if the degree of $\delta$ is odd.
\end{theorem} \begin{proof} The conjugations with respect to both real forms, $\mathrm{SU}(2)$ and $\mathrm{SL}(2,\mathbb{R})$, of $\mathrm{SL}(2,\mathbb{C})$ are inner equivalent and hence they induce the same antiholomorphic involution of the moduli space $\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{C}))$, where we recall that the complex structure of this variety is the one naturally induced by the complex structure of $\mathrm{SL}(2,\mathbb{C})$. To be more precise, at the level of Lie algebras, the conjugation with respect to the real form $\mathfrak{su}(2)$ is given by the $\mathbb{C}$-antilinear involution \begin{equation}\nonumber \begin{aligned} \tau:\mathfrak{sl}(2,\mathbb{C}) & \to \mathfrak{sl}(2,\mathbb{C}) \\ A &\mapsto -\overline{A}^t, \end{aligned} \end{equation} while the conjugation with respect to the real form $\mathfrak{sl}(2,\mathbb{R})$ is given by the $\mathbb{C}$-antilinear involution \begin{equation}\nonumber \begin{aligned} \sigma:\mathfrak{sl}(2,\mathbb{C}) & \to \mathfrak{sl}(2,\mathbb{C}) \\ A &\mapsto \overline{A}. \end{aligned} \end{equation} Now, $$ \sigma(A)=J\tau(A)J^{-1} $$ for $J\in \mathfrak{sl}(2,\mathbb{R})$ given by $$ J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. $$ This is simply because for every $A\in\mathfrak{sl}(2,\mathbb{R})$, one has that \begin{equation}\label{n=2} JA=-A^tJ. \end{equation} Under the correspondence $\mathcal{H}(2,\delta)\cong\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{C}))$, the antiholomorphic involution of $\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{C}))$ defined by $\tau$ and $\sigma$ becomes the holomorphic involution $\iota(\mathcal{O})^-$ of $\mathcal{H}(2,\delta)$ \begin{equation}\label{involution} (E,\varphi)\mapsto (E,-\varphi), \end{equation} where we recall that the complex structure of $\mathcal{H}(2,\delta)$ is that induced by the complex structure of $X$. This follows basically from the fact that the $\mathrm{SL}(2,\mathbb{C})$-connection $D$ corresponding to $(\bar{\partial}_E,\varphi)$ under Theorem \ref{correspondence} is given by (\ref{higgs-connection}) and hence $$ \tau(D)=\ast_{h}(\bar{\partial}_E)+\bar{\partial}_E +(\varphi)^{\ast_h}-\varphi, $$ from which we deduce that $\tau(D)$ is in correspondence with $(E,-\varphi)$. Notice also that $\tau(D)\cong \sigma(D)$. The proof of (i) follows now from the fact that if $\varphi=0$ in (\ref{higgs-connection}) the connection $D$ is an $\mathrm{SU}(2)$-connection. Note that this reduces to the theorem of Narasimhan and Seshadri for $\mathrm{SU}(2)$ \cite{narasimhan-seshadri}. To prove (ii) and (iii), one easily checks that the connection $D$ defined by a Higgs bundle in $\mathcal{H}_a(\delta)$ is $\sigma$-invariant and hence defines an $\mathrm{SL}(2,\mathbb{R})$-connection. Now, the Euler class $k$ of the $\mathrm{PSL}(2,\mathbb{R})$ bundle is $k=2d$ if $E=L\oplus L^{-1}$, or $k=2d-1$ if $E=L\oplus L^{-1}\delta$, where $d=\deg L$. \end{proof} \section{Fixed points of $\iota(\alpha)^\pm$ with $\alpha\neq \mathcal{O}$ and representations of $\Gamma$} Consider the normalizer $N\mathrm{SO}(2)$ of $\mathrm{SO}(2)$ in $\mathrm{SU}(2)$. This is generated by $\mathrm{SO}(2)$ and $J=\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$. The group generated by $J$ is isomorphic to $\mathbb{Z}/4$ and fits in the exact sequence \begin{equation}\label{z4} 0\longrightarrow \mathbb{Z}/2\longrightarrow \mathbb{Z}/4\longrightarrow \mathbb{Z}/2\longrightarrow 1, \end{equation} where the subgroup $\mathbb{Z}/2\subset \mathbb{Z}/4$ is $\{\pm I\}$.
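For the reader's convenience, we record the elementary computations behind these statements (routine checks which are not needed later): $$ J^2=\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}^2=-I, \qquad J\begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}J^{-1}= \begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix}, $$ so the subgroup generated by $J$ is $\{\pm I, \pm J\}\cong \mathbb{Z}/4$, and conjugation by $J$ acts on $\mathrm{SO}(2)$ by inversion; in particular $J$ normalises $\mathrm{SO}(2)$ without centralising it.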
We thus have an exact sequence \begin{equation}\label{normalizer} 1\longrightarrow \mathrm{SO}(2)\longrightarrow N\mathrm{SO}(2)\longrightarrow \mathbb{Z}/2\longrightarrow 1. \end{equation} The normalizer $N\mathrm{SO}(2,\mathbb{C})$ of $\mathrm{SO}(2,\mathbb{C})$ in $\mathrm{SL}(2,\mathbb{C})$ fits also in an extension \begin{equation}\label{c-normalizer} 1\longrightarrow \mathrm{SO}(2,\mathbb{C})\longrightarrow N\mathrm{SO}(2,\mathbb{C})\longrightarrow \mathbb{Z}/2\longrightarrow 1, \end{equation} which is, of course, the complexification of (\ref{normalizer}). Similarly, we also have that $N\mathrm{SL}(2,\mathbb{R})$, the normalizer of $\mathrm{SL}(2,\mathbb{R})$ in $\mathrm{SL}(2,\mathbb{C})$, is given by \begin{equation}\label{normalizer=sl2r} 1\longrightarrow \mathrm{SL}(2,\mathbb{R})\longrightarrow N\mathrm{SL}(2,\mathbb{R})\longrightarrow \mathbb{Z}/2\longrightarrow 1. \end{equation} Note that $N\mathrm{SO}(2)$ is a maximal compact subgroup of $N\mathrm{SL}(2,\mathbb{R})$. Given a representation $\rho:\Gamma \longrightarrow N\mathrm{SO}(2)$ there is a topological invariant $\alpha\in H^1(X,\mathbb{Z}/2)$, which is given by the map $$ H^1(X,N\mathrm{SO}(2))\longrightarrow H^1(X,\mathbb{Z}/2) $$ induced by (\ref{normalizer}). Let $$ \mathcal{R}_{\alpha}^\pm(\Gamma,N\mathrm{SO}(2)):=\{\rho\in \mathcal{R}^\pm(\Gamma,N\mathrm{SO}(2))\;\;:\;\; \mbox{with invariant} \;\;\alpha\in H^1(X,\mathbb{Z}/2)\}. $$ Similarly, we have this $\alpha$-invariant for representations of $\Gamma$ in $N\mathrm{SO}(2,\mathbb{C})$ and in $N\mathrm{SL}(2,\mathbb{R})$, and we can define $\mathcal{R}_\alpha(\Gamma,N\mathrm{SO}(2,\mathbb{C}))$ and $\mathcal{R}_{\alpha}^\pm(\Gamma,N\mathrm{SL}(2,\mathbb{R}))$. \begin{theorem} Let $\alpha\in J_2(X)=H^1(X,\mathbb{Z}/2)$. Then we have the following. (i) The subvariety $F_\alpha$ of fixed points of the involution $\iota(\alpha)$ in $M(\delta)$ defined by $E\mapsto E\otimes \alpha$ is homeomorphic to the image of $\mathcal{R}^\pm_\alpha(\Gamma,N\mathrm{SO}(2))$ in $\mathcal{R}^\pm(\Gamma,\mathrm{SU}(2))$, where we have $\mathcal{R}^+$ if the degree of $\delta$ is even and $\mathcal{R}^-$ if the degree of $\delta$ is odd. (ii) The subvariety $\mathcal{F}_\alpha^+$ of fixed points of the involution $\iota(\alpha)^+$ of $\mathcal{H}(\delta)$ is homeomorphic to the image of $\mathcal{R}^\pm_\alpha(\Gamma,N\mathrm{SO}(2,\mathbb{C}))$ in $\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{C}))$, where we have $\mathcal{R}^+$ if the degree of $\delta$ is even and $\mathcal{R}^-$ if the degree of $\delta$ is odd. (iii) The subvariety $\mathcal{F}_\alpha^-$ of fixed points of the involution $\iota(\alpha)^-$ of $\mathcal{H}(\xi)$ is homeomorphic to the image of $\mathcal{R}^\pm_\alpha(\Gamma,N\mathrm{SL}(2,\mathbb{R}))$ in $\mathcal{R}^\pm(\Gamma,\mathrm{SL}(2,\mathbb{C}))$, where we have $\mathcal{R}^+$ if the degree of $\delta$ is even and $\mathcal{R}^-$ if the degree of $\delta$ is odd. \end{theorem} \begin{proof} The element $\alpha\in J_2(X)=H^1(X,\mathbb{Z}/2)$ defines a $\mathbb{Z}/2$ \'etale covering $\pi: X_\alpha\longrightarrow X$. The strategy of the proof is to lift to $X_\alpha$ and apply a $\mathbb{Z}/2$-invariant version of the correspondence between Higgs bundles on $X_\alpha$ and representations of $\Gamma_\alpha$ --- the universal central extension of $\pi_1(X_\alpha)$. We have a sequence \begin{equation}\label{gamma-alpha} 1\longrightarrow \Gamma_\alpha\longrightarrow \Gamma \longrightarrow \mathbb{Z}/2\longrightarrow 1. 
\end{equation} Here $\Gamma_\alpha$ is the kernel of the homomorphism $\alpha:\Gamma\to \mathbb{Z}/2$ determined by the class $\alpha\in H^1(X,\mathbb{Z}/2)$. For convenience, let $G$ be any of the subgroups $\mathrm{SO}(2)\subset \mathrm{SU}(2)$, $\mathrm{SO}(2,\mathbb{C})\subset \mathrm{SL}(2,\mathbb{C})$ and $\mathrm{SL}(2,\mathbb{R})\subset \mathrm{SL}(2,\mathbb{C})$, and let $NG$ be its normalizer in the corresponding group. We then have an extension \begin{equation}\label{ng} 1\longrightarrow G\longrightarrow NG\longrightarrow \mathbb{Z}/2\longrightarrow 1. \end{equation} Let $\Hom_\alpha(\Gamma,NG)$ be the subset of $\Hom(\Gamma,NG)$ consisting of representations $\rho: \Gamma\to NG$ such that the following diagram is commutative: \begin{equation}\label{commu} \begin{matrix} 1 & \longrightarrow & \Gamma_\alpha&\longrightarrow &\Gamma & \stackrel{\alpha}{\longrightarrow} & \mathbb{Z}/2 &\longrightarrow & 1\\ && \Big\downarrow && ~\Big\downarrow\rho && \Vert\\ 1 & \longrightarrow & G &\longrightarrow & NG & \longrightarrow & \mathbb{Z}/2 &\longrightarrow & 1 \end{matrix} \end{equation} The group $NG$ is a disconnected group with $\mathbb{Z}/2$ as the group of connected components and $G$ as the connected component containing the identity. If $G$ is abelian ($G=\mathrm{SO}(2)$ or $\mathrm{SO}(2,\mathbb{C})$), then $\mathbb{Z}/2$ acts on $G$; since $\mathbb{Z}/2$ also acts on $X_\alpha$ (as the Galois group), and hence on $\Gamma_\alpha$, there is an induced action of $\mathbb{Z}/2$ on $\Hom(\Gamma_\alpha,G)$. A straightforward computation shows that \begin{equation}\label{reps-inv-reps} \Hom_\alpha(\Gamma, NG)\cong \Hom(\Gamma_\alpha,G)^{\mathbb{Z}/2}. \end{equation} If $G$ is not abelian (which is the case for $G=\mathrm{SL}(2,\mathbb{R})$), the extension (\ref{ng}) still defines a homomorphism $\mathbb{Z}/2 \to \Out(G)=\Aut(G)/\Int(G)$. We can then take a splitting of the sequence \begin{equation}\label{aut-g} 1\longrightarrow \Int(G)\longrightarrow \Aut(G) \longrightarrow \Out(G)\longrightarrow 1, \end{equation} which always exists \cite{de-siebenthal}. This defines an action on $\Hom(\Gamma_\alpha,G)$. However, only the action on $\Hom(\Gamma_\alpha,G)/G$ is independent of the splitting. In particular, as a consequence of (\ref{reps-inv-reps}), we have homeomorphisms $$ \mathcal{R}_\alpha^\pm(\Gamma,NG)\cong \mathcal{R}^\pm(\Gamma_\alpha,G)^{\mathbb{Z}/2}. $$ The result follows now from the usual correspondences between representations of $\Gamma_\alpha$ and vector bundles or Higgs bundles on $X_\alpha$, combined with the fact that the fixed point subvarieties $F_\alpha$, $\mathcal{F}^\pm_\alpha$ described in Sections \ref{prym}, \ref{fix-odd} and \ref{fix-even} are push-forwards to $X$ of objects on $X_\alpha$ that satisfy the $\mathbb{Z}/2$-invariance condition (see \cite{garcia-prada-ramanan} for more details). \end{proof} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \end{document}
\begin{document} \title[Algebraic central division algebras]{On algebraic central division algebras over Henselian fields of finite absolute Brauer $p$-dimensions and residually arithmetic type} \keywords{Division LBD-algebra, Henselian field, field of arithmetic type, absolute Brauer $p$-dimension, tamely ramified extension, $p$-splitting field\\ 2020 MSC Classification: 16K40, 12J10 (primary), 12E15, 12G05, 12F10 (secondary).} \author{I.D. Chipchakov} \address{Institute of Mathematics and Informatics\\Bulgarian Academy of Sciences\\1113 Sofia, Bulgaria: E-mail address: [email protected]} \begin{abstract} Let $(K, v)$ be a Henselian field with a residue field $\widehat K$ and a value group $v(K)$, and let $\mathbb{P}$ be the set of prime numbers. This paper finds conditions on $K$, $v(K)$ and $\widehat K$ under which every algebraic associative central division $K$-algebra $R$ contains a $K$-subalgebra $\widetilde R$ decomposable into a tensor product of central $K$-subalgebras $R _{p}$, $p \in \mathbb{P}$, of finite $p$-primary degrees, such that each finite-dimensional $K$-subalgebra $\Delta $ of $R$ is isomorphic to a $K$-subalgebra $\widetilde \Delta $ of $\widetilde R$. \end{abstract} \maketitle \section{\bf Introduction} All algebras considered in this paper are assumed to be associative with a unit. Let $E$ be a field, $E _{\rm sep}$ its separable closure, Fe$(E)$ the set of finite extensions of $E$ in $E _{\rm sep}$, $\mathbb{P}$ the set of prime numbers, and for each $p \in \mathbb{P}$, let $E(p)$ be the maximal $p$-extension of $E$ in $E _{\rm sep}$, i.e. the compositum of all finite Galois extensions of $E$ in $E _{\rm sep}$ whose Galois groups are $p$-groups. It is known, by the Wedderburn-Artin structure theorem (cf. \cite{He}, Theorem~2.1.6), that an Artinian $E$-algebra $\mathcal{A}$ is simple if and only if it is isomorphic to the full matrix ring $M _{n}(\mathcal{D}_{\mathcal{A}})$ of order $n$ over a division $E$-algebra $\mathcal{D}_{\mathcal{A}}$. When this holds, $n$ is uniquely determined by $\mathcal{A}$, and so is $\mathcal{D}_{\mathcal{A}}$, up-to isomorphism; $\mathcal{D}_{\mathcal{A}}$ is called an underlying division $E$-algebra of $\mathcal{A}$. The $E$-algebras $\mathcal{A}$ and $\mathcal{D}_{\mathcal{A}}$ share a common centre $Z(\mathcal{A})$; we say that $\mathcal{A}$ is a central $E$-algebra if $Z(\mathcal{A}) = E$. \par Denote by Br$(E)$ the Brauer group of $E$, by $s(E)$ the class of finite-dimensional central simple algebras over $E$, and by $d(E)$ the subclass of division algebras $D \in s(E)$. For each $A \in s(E)$, let deg$(A)$, ind$(A)$ and exp$(A)$ be the degree, the Schur index and the exponent of $A$, respectively. It is well-known (cf. \cite{P}, Sect. 14.4) that exp$(A)$ divides ind$(A)$ and shares with it the same set of prime divisors; also, ind$(A) \mid {\rm deg}(A)$, and deg$(A) = {\rm ind}(A)$ if and only if $A \in d(E)$. Note that ind$(B _{1} \otimes _{E} B _{2}) = {\rm ind}(B _{1}){\rm ind}(B _{2})$ whenever $B _{1}, B _{2} \in s(E)$ and g.c.d.$\{{\rm ind}(B _{1}), {\rm ind}(B _{2})\} = 1$; equivalently, $B _{1} ^{\prime } \otimes _{E} B _{2} ^{\prime } \in d(E)$, if $B _{j} ^{\prime } \in d(E)$, $j = 1, 2$, and g.c.d.$\{{\rm deg}(B _{1} ^{\prime }), {\rm deg}(B _{2} ^{\prime })\}$ $= 1$ (see \cite{P}, Sect. 13.4). 
Since Br$(E)$ is an abelian torsion group, and ind$(A)$, exp$(A)$ are invariants both of $A$ and its equivalence class $[A] \in {\rm Br}(E)$, these results prove the classical primary tensor product decomposition theorem, for an arbitrary $D \in d(E)$ (see \cite{P}, Sect. 14.4). They also indicate that the study of the restrictions on the pairs ind$(A)$, exp$(A)$, $A \in s(E)$, reduces to the special case of $p$-primary pairs, for an arbitrary $p \in \mathbb{P}$. The Brauer $p$-dimensions Brd$_{p}(E)$, $p \in \mathbb P$, contain essential information on these restrictions. We say that Brd$_{p}(E) = n < \infty $, for a given $p \in \mathbb P$, if $n$ is the least integer $\ge 0$, for which ind$(A _{p}) \mid {\rm exp}(A _{p}) ^{n}$ whenever $A _{p} \in s(E)$ and $[A _{p}]$ lies in the $p$-component Br$(E) _{p}$ of Br$(E)$; if no such $n$ exists, we put Brd$_{p}(E) = \infty $. For instance, Brd$_{p}(E) \le 1$, for all $p \in \mathbb P$, if and only if deg$(D) = {\rm exp}(D)$, for each $D \in d(E)$; Brd$_{p'}(E) = 0$, for some $p ^{\prime } \in \mathbb P$, if and only if Br$(E) _{p'}$ is trivial. \par The absolute Brauer $p$-dimension of $E$ is defined to be the supremum abrd$_{p}(E)$ of Brd$_{p}(R)\colon R \in {\rm Fe}(E)$. It is a well-known consequence of Albert-Hochschild's theorem (cf. \cite{S1}, Ch. II, 2.2) that abrd$_{p}(E) = 0$, $p \in \mathbb{P}$, if and only if $E$ is a field of dimension $\le 1$, i.e. Br$(R) = \{0\}$, for every finite extension $R/E$. When $E$ is perfect, we have dim$(E) \le 1$ if and only if the absolute Galois group $\mathcal{G}_{E} = \mathcal{G}(E _{\rm sep}/E)$ is a projective profinite group, in the sense of \cite{S1}. Also, by class field theory, Brd$_{p}(E) = {\rm abrd}_{p}(E) = 1$, $p \in \mathbb{P}$, if $E$ is a global or local field. \par This paper is devoted to the study of locally finite-dimensional (abbr., LFD) subalgebras of algebraic central division algebras over a field $K$ with abrd$_{p}(K)$ finite, for every $p \in \mathbb{P}$. Our research is motivated by the following conjecture: \par \begin{conj} \label{conj1.1} Assume that $K$ is a field with {\rm abrd}$_{p}(K) < \infty $, $p \in \mathbb{P}$, and let $R$ be an algebraic central division $K$-algebra. Then $R$ possesses a $K$-subalgebra $\widetilde R$ with the following properties: \par {\rm (a)} $\widetilde R$ is $K$-isomorphic to the tensor product $\otimes _{p \in \mathbb P} R _{p}$, where $\otimes = \otimes _{K}$ and $R _{p} \in d(K)$ is a $K$-subalgebra of $R$ of $p$-primary degree $p ^{k(p)}$, for each $p \in \mathbb P$; \par {\rm (b)} Every $K$-subalgebra $\Delta $ of $R$, which is {\rm LFD} of at most countable dimension, is embeddable in $\widetilde R$ as a $K$-subalgebra; in particular, $K$ equals the centralizer $C _{R}(\widetilde R) = \{c \in R\colon c\tilde r = \tilde rc, \tilde r \in \widetilde R\}$, and for each $p \in \mathbb{P}$, $k(p)$ is the maximal integer for which there is $\rho _{p} \in R$ such that $p ^{k(p)}$ divides $[K(\rho _{p})\colon K]$. \end{conj} \par For technical reasons, we restrict our considerations almost exclusively to the special case where $K$ is a virtually perfect field. By definition, this means that char$(K) = q$, and in case $q > 0$, the degree $[K\colon K ^{q}]$ is finite, where $K ^{q} = \{\alpha ^{q}\colon \alpha \in K\}$ is the subfield of $K$ formed by the $q$-th powers of its elements. 
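\par For orientation, we recall a standard example, included here only as an illustration: if $q \in \mathbb{P}$ and $K = \mathbb{F} _{q}((t))$ is the formal Laurent power series field over the field with $q$ elements, then $K ^{q} = \mathbb{F} _{q}((t ^{q}))$ and $1, t, \dots , t ^{q-1}$ form a basis of $K$ over $K ^{q}$, so $[K\colon K ^{q}] = q$; thus $K$ is virtually perfect, although it is not perfect. \par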
Our main results, stated as Theorems \ref{theo3.1} and \ref{theo3.2}, prove Conjecture \ref{conj1.1}, under the hypothesis that $K$ lies in a large class of noncountable Henselian fields with virtually perfect residue fields of arithmetic type (see Definition~2), including higher local fields and maximally complete equicharacteristic fields. \par \section{\bf Background and further motivation} \par Let $E$ be a field of characteristic $q$ with Brd$_{p}(E) < \infty $, for all $p \in \mathbb{P}$. It follows from well-known general properties of the basic types of algebraic extensions (cf. \cite{L}, Ch. V, Sects.~4 and 6) that Brd$_{p}(E ^{\prime }) \le {\rm abrd}_{p}(E)$, for any algebraic extension $E ^{\prime }/E$, and any $p \in \mathbb{P}$ not equal to $q$ (see also \cite{Ch4}, (1.3), and \cite{P}, Sect.~13.4). The question of whether algebraic extensions $E ^{\prime }/E$ satisfy the same inequality in case $p = q$ seems to be difficult. Fortunately, it is not an obstacle to our considerations if the field $E$ is virtually perfect. When this holds, finite extensions of $E$ are virtually perfect fields as well; in case $q > 0$, this is implied by the following result (cf. \cite{BH}, Lemma~2.12, or \cite{L}, Ch. V, Sect. 6): \par \noindent (2.1) $[E ^{\prime }\colon E ^{\prime q}] = [E\colon E ^{q}]$, for every finite extension $E ^{\prime }/E$. \par \noindent Statement (2.1) enables one to deduce from Albert's theorem (cf. \cite{A1}, Ch. VII, Theorem~28) and the former conclusion of \cite{Ch6}, Lemma~4.1, that if $[E\colon E ^{q}] = q ^{\kappa }$, where $\kappa \in \mathbb{N}$, then Brd$_{q}(E ^{\prime }) \le \kappa $. It is easily verified (cf. \cite{Ch3}, Proposition~2.4) that every virtually perfect field $K$ with abrd$_{p}(K) < \infty $, for all $p \in \mathbb{P}$, is an FC-field, in the sense of \cite{Ch1} and \cite{Ch3}. As shown in \cite{Ch1} (see also \cite{Ch3}), this sheds light on the structure of central division LFD-algebras over $K$, as follows: \par \begin{prop} \label{prop2.1} Let $K$ be a virtually perfect field with {\rm abrd}$_{p}(K) < \infty $, for each $p \in \mathbb{P}$, and suppose that $R$ is a central division {\rm LFD}-algebra over $K$, i.e. finitely-generated $K$-subalgebras of $R$ are finite-dimensional. Then $R$ possesses a $K$-subalgebra $\widetilde R$ with the following properties: \par {\rm (a)} $\widetilde R$ is $K$-isomorphic to the tensor product $\otimes _{p \in \mathbb P} R _{p}$, where $\otimes = \otimes _{K}$ and $R _{p} \in d(K)$ is a $K$-subalgebra of $R$ of $p$-primary degree $p ^{k(p)}$, for each $p$; \par {\rm (b)} Every $K$-subalgebra $\Delta $ of $R$ of at most countable dimension is embeddable in $\widetilde R$ as a $K$-subalgebra; hence, for each $p \in \mathbb{P}$, $k(p)$ is the greatest integer for which there exists $r _{p} \in R$ of degree $[K(r _{p})\colon K]$ divisible by $p ^{k(p)}$; \par {\rm (c)} $\widetilde R$ is isomorphic to $R$ if the dimension $[R\colon K]$ is at most countable. \end{prop} \par By the main result of \cite{Ch1}, the conclusion of Proposition \ref{prop2.1} remains valid whenever $K$ is an FC-field; in particular, this holds if char$(K) = q$, abrd$_{p}(K) < \infty $, $p \in \mathbb{P} \setminus \{q\}$, and in case $q > 0$, there exists $\mu \in \mathbb{N}$, such that Brd$_{q}(K ^{\prime }) \le \mu $, for every finite extension $K ^{\prime }/K$. As already noted, the latter condition is satisfied if $q > 0$ and $[K\colon K ^{q}] = q ^{\mu }$. 
It is worth mentioning, however, that the existence of an upper bound $\mu $ as above is sometimes possible in case $[K\colon K ^{q}] = \infty $. More precisely, for each $q \in \mathbb{P}$, there are fields $E _{n}$, $n \in \mathbb{N}$, with the following properties, for each $n$ (see Proposition \ref{prop4.9}): \par \noindent (2.2) char$(E _{n}) = q$, $[E _{n}\colon E _{n} ^{q}] = \infty $, Brd$_{p}(E _{n}) = {\rm abrd}_{p}(E _{n}) = [n/2]$, for all $p \in \mathbb{P} \setminus \{q\}$, and Brd$_{q}(E _{n} ^{\prime }) = n - 1$, for every finite extension $E _{n} ^{\prime }/E _{n}$. \par \noindent In particular, FC-fields of characteristic $q > 0$ need not be virtually perfect. Therefore, it should be pointed out that if $F/E$ is a finitely-generated extension of transcendency degree $\nu > 0$, where $E$ is a field of characteristic $q > 0$, then Brd$_{q}(F) < \infty $ if and only if $[E\colon E ^{q}] < \infty $ \cite{Ch6}, Theorem~2.2; when $[E\colon E ^{q}] = q ^{u} < \infty $, we have $[F\colon F ^{q}] = q ^{u+\nu }$, which means that abrd$_{q}(F) \le \nu + u$. This attracts interest in the following open problem: \par \begin{prob} \label{prob2.2} Let $E$ be a field with {\rm abrd}$_{p}(E) < \infty $, for some $p \in \mathbb{P}$ different from {\rm char}$(E)$. Find whether {\rm abrd}$_{p}(F) < \infty $, for any finitely-generated transcendental field extension $F/E$. \end{prob} \par Global fields and local fields are virtually perfect (cf. \cite{Ef}, Example~4.1.3) of absolute Brauer $p$-dimensions one, for all $p \in \mathbb{P}$, so they satisfy the conditions of Proposition \ref{prop2.1}. In view of a more recent result of Matzri \cite{Mat}, Proposition \ref{prop2.1} also applies to any field $K$ of finite Diophantine dimension, that is, to any field $K$ of type $C _{m}$, in the sense of Lang, for some integer $m \ge 0$. By type $C _{m}$, we mean that every nonzero homogeneous polynomial $f$ of degree $d$ and with coefficients from $K$ has a nontrivial zero over $K$, provided that $f$ depends on $n > d ^{m}$ algebraically independent variables over $K$. For example, algebraically closed fields are of type $C _{0}$; finite fields are of type $C _{1}$, by the Chevalley-Warning theorem (cf. \cite{GiSz}, Theorem~6.2.6). Complete discrete valued fields with algebraically closed residue fields are also of type $C _{1}$ (Lang's theorem, see \cite{S1}, Ch. II, 3.3), and by Koll\'{a}r's theorem (see \cite{FJ}, Remark~21.3.7), so are pseudo algebraically closed (abbr., PAC) fields of characteristic zero. Perfect PAC fields of characteristic $q > 0$ are of type $C _{2}$ (cf. \cite{FJ}, Theorem~21.3.6). \par The present research is essentially a continuation of \cite{Ch3}. Since the class of fields of finite Diophantine dimensions consists of virtually perfect fields and it is closed under the formation of both field extensions of finite transcendency degree (by the Lang-Nagata-Tsen theorem \cite{Na}) and formal Laurent power series fields in one variable \cite{Gr}, the above-noted result of \cite{Mat} significantly extends the scope of applicability of the main result of \cite{Ch1}. 
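\par By way of illustration, we note the following standard instances of the facts just quoted (recorded only for orientation): a finite field $\mathbb{F}$ is of type $C _{1}$, by the Chevalley-Warning theorem; hence, by the Lang-Nagata-Tsen theorem, the rational function field $\mathbb{F}(t _{1}, \dots , t _{r})$ is of type $C _{1+r}$, and by the result of \cite{Gr} quoted above, the iterated formal Laurent power series field $\mathbb{F}((t _{1})) \dots ((t _{m}))$ is of type $C _{1+m}$. All of these fields are therefore of finite Diophantine dimension, so they satisfy the conditions of Proposition \ref{prop2.1}. \par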
This gives rise to the expectation, expressed by Conjecture \ref{conj1.1}, that it is possible to reduce the study of noncommutative algebraic central division algebras over finitely-generated extensions $E$ of fields $E _{0}$ with interesting arithmetic, algebraic, diophantine, topological, or other specific properties to that of their finite-dimensional $E$-subalgebras (see \cite{Ch3}, Theorem~4.2 and Sect.~5, for examples of such a reduction). In view of Problem \ref{prob2.2}, it is presently unknown whether one may take as $E _{0}$ any virtually perfect field with abrd$_{p}(E _{0}) < \infty $, $p \in \mathbb{P}$. Therefore, it should be noted that the suggested approach to the study of algebraic central division $K$-algebras can be followed whenever $K$ is a virtually perfect field with abrd$_{p}(K) < \infty $, $p \in \mathbb{P}$, over which Conjecture \ref{conj1.1} holds in general. \par One may clearly restrict the consideration of the main aspects of our conjecture to the case where $[R\colon K] = \infty $. For reasons clarified in the sequel, in this paper we assume further that $R$ belongs to the class of $K$-algebras of linearly bounded degree, in the sense of Amitsur \cite{Am}. This class is defined as follows: \par \noindent {\bf Definition 1.} An algebraic algebra $\Psi $ over a field $F$ is said to be an algebra of linearly (or locally) bounded degree (briefly, an LBD-algebra), if the following condition holds, for any finite-dimensional $F$-subspace $V$ of $\Psi $: there exists $n(V) \in \mathbb{N}$, such that $[F(v)\colon F] \le n(V)$, for each $v \in V$. \par It is not known whether every algebraic associative division algebra $\Psi $ over a field $F$ is LFD. This problem has been posed by Kurosh in \cite{Ku} as a division ring-theoretic analogue of the Burnside problem for torsion (periodic) groups. Evidently, if the stated problem is solved affirmatively, then Conjecture \ref{conj1.1} will turn out to be a restatement of Proposition \ref{prop2.1}, in case $K$ is virtually perfect. The problem will be solved in the same direction if and only if the answers to the following two questions are positive: \par \noindent {\bf Questions 2.3.} {\it Let $F$ be a field. \par {\rm (a)} Find whether algebraic division $F$-algebras are {\rm LBD}-algebras over $F$. \par {\rm (b)} Find whether division {\rm LBD}-algebras over $F$ are {\rm LFD}.} \par Although Questions~2.3~(a) and (b) are closely related to the Kurosh problem, each of them is of interest in its own right. For example, the main results of \cite{Ch3} indicate that an affirmative answer to Question~2.3~(a) would prove Conjecture \ref{conj1.1} in the special case where $K$ is a global or local field, or more generally, a virtually perfect field of arithmetic type, in the following sense: \par \noindent {\bf Definition 2.} A field $K$ is said to be of arithmetic type, if abrd$_{p}(K)$ is finite and abrd$_{p}(K(p)) = 0$, for each $p \in \mathbb{P}$. \par \noindent It is of primary importance for the present research that the answer to Question~2.3~(a) is affirmative when $F$ is a noncountable field. Generally, by Amitsur's theorem \cite{Am}, algebraic associative algebras over such $F$ are LBD-algebras. Furthermore, it follows from Amitsur's theorem that if $A$ is an arbitrary LBD-algebra over any field $E$, then the tensor product $A \otimes _{E} E ^{\prime }$ is an LBD-algebra over any extension $E ^{\prime }$ of $E$ (see \cite{Am}).
These results are repeatedly used (without an explicit reference) for proving the main results of this paper. \par \section{\bf Statement of the main result} \par Assume that $K$ is a virtually perfect field with abrd$_{p}(K) < \infty $, for all $p \in \mathbb{P}$, and let $R$ be an algebraic central division $K$-algebra. Evidently, if $R$ possesses a $K$-subalgebra $\widetilde R$ with the properties described by Conjecture \ref{conj1.1}, then there is a sequence $k(p)$, $p \in \mathbb{P}$, of integers $\ge 0$, such that $p ^{k(p)+1}$ does not divide $[K(r)\colon K]$, for any $r \in R$, $p \in \mathbb{P}$. The existence of such a sequence is guaranteed if $R$ is an LBD-algebra over $K$ (cf. \cite{Ch3}, Lemma~3.9). When $k(p) = k(p) _{R}$ is the minimal integer satisfying the stated condition, it is called a $p$-power of $R/K$. In this setting, the notion of a $p$-splitting field of $R/K$ is defined as follows: \par \noindent {\bf Definition 3.} Let $K ^{\prime }$ be a finite extension of $K$, $R ^{\prime }$ the underlying (central) division $K ^{\prime }$-algebra of $R \otimes _{K} K ^{\prime }$, and $\gamma (p)$ the integer singled out by the Wedderburn-Artin $K ^{\prime }$-isomorphism $R \otimes _{K} K ^{\prime } \cong M _{\gamma (p)}(R ^{\prime })$. We say that $K ^{\prime }$ is a $p$-splitting field of $R/K$ if $p ^{k(p)}$ divides $\gamma (p)$. \par \noindent Note that the class of $p$-splitting fields of a central division LBD-algebra $R$ over a virtually perfect field $K$ with abrd$_{p'}(K) < \infty $, $p' \in \mathbb{P}$, is closed under the formation of finite extensions. Indeed, it is well-known (cf. \cite{He}, Lemma~4.1.1) that $R \otimes _{K} K ^{\prime }$ is a central simple $K ^{\prime }$-algebra, for any field extension $K ^{\prime }/K$. This algebra is a left (and right) vector space over $R$ of dimension equal to $[K ^{\prime }\colon K]$, which implies it is Artinian whenever $[K ^{\prime }\colon K]$ is finite. As $R \otimes _{K} K _{2}$ and $(R \otimes _{K} K _{1}) \otimes _{K _{1}} K _{2}$ are isomorphic $K _{2}$-algebras, for any tower of field extensions $K \subseteq K _{1} \subseteq K _{2}$ (cf. \cite{P}, Sect. 9.4, Corollary~(a)), these observations enable one to deduce our assertion about $p$-splitting fields of $R/K$ from the Wedderburn-Artin theorem (and well-known properties of tensor products of matrix algebras, see \cite{P}, Sect. 9.3, Corollary~b). Further results on $k(p)$ and the $p$-power of the underlying division $K ^{\prime }$-algebra of $R \otimes _{K} K ^{\prime }$, obtained in the case where $K ^{\prime }/K$ is a finite extension, are presented at the beginning of Section~5 (see Lemma \ref{lemm5.1}). They have been proved in \cite{Ch3}, Sect. 3, under the extra hypothesis that dim$(K _{\rm sol}) \le 1$, where $K _{\rm sol}$ is the compositum of finite Galois extensions of $K$ in $K _{\rm sep}$ with solvable Galois groups. These results partially generalize well-known facts about finite-dimensional central division algebras over arbitrary fields, leaving open the question of whether the validity of the derived information depends on the formulated hypothesis (see Remark \ref{rema5.2}). \par The results of Section 5, combined with Amitsur's theorem referred to in Section 2, form the basis of the proof of the main results of the present research. Our proof also relies on the theory of Henselian fields and their finite-dimensional division algebras (cf. \cite{JW}). 
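\par To illustrate Definition~3 in the simplest situation (a reformulation of classical notions, recorded only for orientation): if $R \in d(K)$ is of $p$-primary degree $p ^{k}$, then $k(p) = k$, since each element of $R$ lies in a maximal subfield of $R$, which is of degree $p ^{k}$ over $K$, and a maximal separable subfield of $R$ is generated over $K$ by a single element; moreover, a finite extension $K ^{\prime }/K$ is a $p$-splitting field of $R/K$ if and only if $R \otimes _{K} K ^{\prime } \cong M _{p ^{k}}(K ^{\prime })$, that is, if and only if $K ^{\prime }$ is a splitting field of $R$ in the usual sense. For any $p ^{\prime } \in \mathbb{P}$ not dividing ${\rm deg}(R)$, we have $k(p ^{\prime }) = 0$, so every finite extension of $K$ is a $p ^{\prime }$-splitting field of $R/K$. \par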
Taking into consideration the generality of Amitsur's theorem, we recall that the class $\mathcal{HNF}$ of Henselian noncountable fields contains every maximally complete field, i.e. any nontrivially valued field $(K, v)$ which does not admit a valued proper extension with the same value group and residue field. For instance, $\mathcal{HNF}$ contains the generalized formal power series field $K _{0}((\Gamma ))$ over a field $K _{0}$, where $\Gamma $ is a nontrivial ordered abelian group, and $v$ is the standard valuation of $K _{0}((\Gamma ))$ trivial on $K _{0}$ (see \cite{Ef}, Example~4.2.1 and Theorem~18.4.1). Moreover, for each $m \in \mathbb{N}$, $\mathcal{HNF}$ contains every complete $m$-discretely valued field with respect to its standard $\mathbb{Z} ^{m}$-valued valuation, where $\mathbb{Z} ^{m}$ is viewed as an ordered abelian group by the inverse-lexicographic ordering. \par By a complete $1$-discretely valued field, we mean a complete discrete valued field, and when $m \ge 2$, a complete $m$-discretely valued field with an $m$-th residue field $K _{0}$ means a field $K _{m}$ which is complete with respect to a discrete valuation $w _{0}$, such that the residue field $\widehat K _{m} := K _{m-1}$ of $(K _{m}, w _{0})$ is a complete $(m - 1)$-discretely valued field with an $(m - 1)$-th residue field $K _{0}$. If $m \ge 2$ and $v _{m-1}$ is the standard $\mathbb{Z} ^{m-1}$-valued valuation of $K _{m-1}$, then the composite valuation $v _{m} = v _{m-1} {\ast }w _{0}$ is the standard $\mathbb{Z} ^{m}$-valued valuation of $K _{m}$. It is known that $v _{m}$ is Henselian (cf. \cite{TW}, Proposition~A.15) and $K _{0}$ equals the residue field of $(K _{m}, v _{m})$. This applies to the important special case where $K _{0}$ is a finite field, i.e. $K _{m}$ is an $m$-dimensional local field, in the sense of Kato and Parshin. \par The purpose of this paper is to prove Conjecture \ref{conj1.1} for two types of Henselian fields. Our first main result can be stated as follows: \par \begin{theo} \label{theo3.1} Let $K = K _{m}$ be a complete $m$-discretely valued field with a virtually perfect $m$-th residue field $K _{0}$, for some integer $m > 0$, and let $R$ be an algebraic central division $K$-algebra. Suppose that {\rm char}$(K _{0}) = q$ and $K _{0}$ is of arithmetic type. Then $R$ possesses a $K$-subalgebra $\widetilde R$ with the properties claimed by Conjecture \ref{conj1.1}. \end{theo} \par When char$(K _{m}) = {\rm char}(K _{0})$, $(K _{m}, w _{m})$ is isomorphic to the iterated formal Laurent power series field $\mathcal{K} _{m} := K _{0}((X _{1})) \dots ((X _{m}))$ in $m$ variables, considered with its standard $\mathbb{Z} ^{m}$-valued valuation, say $\tilde w _{m}$, acting trivially on $K _{0}$; in particular, this is the case where char$(K _{0}) = 0$. It is known that $(\mathcal{K} _{m}, \tilde w _{m})$ is maximally complete (cf. \cite{Ef}, Sect. 18.4). As $K _{0}$ is a virtually perfect field, whence, so is $\mathcal{K} _{m}$, this enables one to prove the assertion of Theorem \ref{theo3.1} by applying our second main result to $(\mathcal{K} _{m}, \tilde w _{m})$: \par \begin{theo} \label{theo3.2} Let $(K, v)$ be a Henselian field with $\widehat K$ of arithmetic type. Assume also that {\rm char}$(K) = {\rm char}(\widehat K) = q$, $K$ is virtually perfect and {\rm abrd}$_{p}(K)$ is finite, for each $p \in \mathbb{P} \setminus \{q\}$. Then every central division {\rm LBD}-algebra $R$ over $K$ has a central $K$-subalgebra $\widetilde R$ admissible by Conjecture \ref{conj1.1}. 
\end{theo} \par The assertions of Theorems \ref{theo3.1} and \ref{theo3.2} are known in case $R \in d(K)$. When $[R\colon K] = \infty $, they can be deduced from the following lemma, by the method of proving Theorem~4.1 of \cite{Ch3} (see Remark \ref{rema5.7} and Section 9). \par \begin{lemm} \label{lemm3.3} Assume that $K$ is a field and $R$ is a central division $K$-algebra, which satisfy the conditions of Theorem \ref{theo3.1} or Theorem \ref{theo3.2}. Then, for each $p \in \mathbb{P}$, $K$ has a finite extension $E _{p}$ in $K(p)$ that is a $p$-splitting field of $R/K$; equivalently, $p$ does not divide $[E _{p}(\rho _{p})\colon E _{p}]$, for any element $\rho _{p}$ of the underlying division $E _{p}$-algebra $\mathcal{R} _{p}$ of $R \otimes _{K} E _{p}$. \end{lemm} \par The fulfillment of the conditions of Lemma \ref{lemm3.3} ensures that dim$(K _{\rm sol}) \le 1$ (see Lemmas \ref{lemm6.4} and \ref{lemm6.5} below). This plays an essential role in the proof of Theorems \ref{theo3.1} and \ref{theo3.2}, which also relies on the following known result: \par \noindent (3.1) Given an HDV-field (i.e. a Henselian discrete valued field) $(K, v)$, the scalar extension map Br$(K) \to {\rm Br}(K _{v})$, where $K _{v}$ is a completion of $K$ with respect to the topology of $v$, is an injective homomorphism which preserves Schur indices and exponents (cf. \cite{Cohn}, Theorem~1); hence, Brd$_{p'}(K) \le {\rm Brd}_{p'}(K _{v})$, for every $p' \in \mathbb P$. \par The earliest draft of this paper is contained in the manuscript \cite{Ch2}. Here we extend the scope of the results of \cite{Ch2}, which were obtained before the theory of division algebras over Henselian fields developed in \cite{JW} and the progress in absolute Brauer $p$-dimensions achieved in \cite{Mat}, \cite{PS} and other papers made it possible to consider the topic of the present research in the desired generality (including, for example, $m$-dimensional local fields which are not of arithmetic type, in the sense of Definition~2, for any $m \ge 2$; see Lemmas \ref{lemm4.8}, \ref{lemm7.4} and Remark \ref{rema7.5}). \par The basic notation, terminology and conventions kept in this paper are standard and virtually the same as in \cite{He}, \cite{L}, \cite{P} and \cite{Ch6}. Throughout, Brauer groups, value groups and ordered abelian groups are written additively, Galois groups are viewed as profinite with respect to the Krull topology, and by a profinite group homomorphism, we mean a continuous one. For any algebra $A$, we consider only subalgebras containing its unit. Given a field $E$, $E ^{\ast }$ denotes its multiplicative group, $E ^{\ast n} = \{a ^{n}\colon \ a \in E ^{\ast }\}$, for each $n \in \mathbb N$, and for any $p \in \mathbb P$, $_{p}{\rm Br}(E)$ stands for the maximal subgroup $\{b _{p} \in {\rm Br}(E)\colon \ pb _{p} = 0\}$ of Br$(E)$ of period dividing $p$. We denote by $I(E ^{\prime }/E)$ the set of intermediate fields of any field extension $E ^{\prime }/E$, and by Br$(E ^{\prime }/E)$ the relative Brauer group of $E ^{\prime }/E$ (the kernel of the scalar extension map Br$(E) \to {\rm Br}(E ^{\prime })$). In case char$(E) = q > 0$, we write $[a, b) _{E}$ for the $q$-symbol $E$-algebra generated by elements $\xi $ and $\eta $, such that $\eta \xi = (\xi + 1)\eta $, $\xi ^{q} - \xi = a \in E$ and $\eta ^{q} = b \in E ^{\ast }$. \par Here is an overview of the rest of this paper. Section 4 includes preliminaries on Henselian fields used in the sequel.
It also shows that a Henselian field $(K, v)$ satisfies the condition abrd$_{p}(K) < \infty $, for some $p \in \mathbb{P}$ not equal to char$(\widehat K)$, if and only if abrd$_{p}(\widehat K) < \infty $ and the subgroup $pv(K)$ of the value group $v(K)$ is of finite index. When $(K, v)$ is maximally complete with char$(K) = q > 0$, we prove in addition that abrd$_{q}(K) < \infty $ if and only if $K$ is virtually perfect. These results fully characterize generalized formal power series fields (and, more generally, maximally complete equicharacteristic fields, see \cite{Ka}, page~320, and \cite{Ef}, Theorem~18.4.1) of finite absolute Brauer $p$-dimensions, for all $p \in \mathbb{P}$, and so prove their admissibility by Proposition \ref{prop2.1}. Section 5 presents the main ring-theoretic and Galois cohomological ingredients of the proofs of Lemma \ref{lemm3.3} and our main results. Most of them have been extracted from \cite{Ch3}, wherefore, they are stated here without proof. As noted above, we also show in Section~5 how to deduce Theorems \ref{theo3.1} and \ref{theo3.2} from Lemma \ref{lemm3.3} by the method of proving \cite{Ch3}, Theorem~4.1. In Section 6 we prove that Henselian fields $(K, v)$ with char$(K) = q > 0$ satisfy abrd$_{q}(K(q)) = 0$, and so do HDV-fields of residual characteristic $q$. Section 7 collects valuation-theoretic ingredients of the proof of Lemma \ref{lemm3.3}; these include a tame version of the noted lemma, stated as Lemma \ref{lemm7.6}. Sections~8 and 9 are devoted to the proof of Lemma \ref{lemm3.3}, which is done by adapting to Henselian fields the method of proving \cite{Ch3}, Lemma~8.3. Specifically, our proof relies on Lemmas \ref{lemm7.3} and \ref{lemm7.6}, as well as on results of Sections~5 and 6. In the setting of Theorem \ref{theo3.1}, when $m \ge 2$ and char$(K _{m}) = 0 < q$, we use Lemma \ref{lemm4.3} at a crucial point of the proof. \par \section{\bf Preliminaries on Henselian fields and their finite-dimensional division algebras and absolute Brauer $p$-dimensions} \par Let $K$ be a field with a nontrivial valuation $v$, $O _{v}(K) = \{a \in K\colon \ v(a) \ge 0\}$ the valuation ring of $(K, v)$, $M _{v}(K) = \{\mu \in K\colon \ v(\mu ) > 0\}$ the maximal ideal of $O _{v}(K)$, $O _{v}(K) ^{\ast } = \{u \in K\colon \ v(u) = 0\}$ the multiplicative group of $O _{v}(K)$, $v(K)$ and $\widehat K = O _{v}(K)/M _{v}(K)$ the value group and the residue field of $(K, v)$, respectively; put $\nabla _{0}(K) = \{\alpha \in K\colon \alpha - 1 \in M _{v}(K)\}$. We say that $v$ is Henselian if it extends uniquely, up-to equivalence, to a valuation $v _{L}$ on each algebraic extension $L$ of $K$. This holds, for example, if $K = K _{v}$ and $v(K)$ is an ordered subgroup of the additive group $\mathbb R$ of real numbers (cf. \cite{L}, Ch. XII). Maximally complete fields are also Henselian, since Henselizations of valued fields are their immediate extensions (see, e.g., \cite{Ef}, Proposition~15.3.7, or \cite{TW}, Corollary~A.28). In order that $v$ be Henselian, it is necessary and sufficient that any of the following two equivalent conditions is fulfilled (cf. 
\cite{Ef}, Theorem~18.1.2, or \cite{TW}, Theorem~A.14): \par \noindent (4.1) (a) Given a polynomial $f(X) \in O _{v}(K) [X]$ and an element $a \in O _{v}(K)$, such that $2v(f ^{\prime }(a)) < v(f(a))$, where $f ^{\prime }$ is the formal derivative of $f$, there is a zero $c \in O _{v}(K)$ of $f$ satisfying the equality $v(c - a) = v(f(a)/f ^{\prime }(a))$; \par (b) For each normal extension $\Omega /K$, $v ^{\prime }(\tau (\mu )) = v ^{\prime }(\mu )$ whenever $\mu \in \Omega $, $v ^{\prime }$ is a valuation of $\Omega $ extending $v$, and $\tau $ is a $K$-automorphism of $\Omega $. \par \noindent When $v$ is Henselian, so is $v _{L}$, for any algebraic field extension $L/K$. In this case, we put $O _{v}(L) = O _{v _{L}}(L)$, $M _{v}(L) = M _{v_{L}}(L)$, $v(L) = v _{L}(L)$, and denote by $\widehat L$ the residue field of $(L, v _{L})$. Clearly, $\widehat L/\widehat K$ is an algebraic extension and $v(K)$ is an ordered subgroup of $v(L)$; the index $e(L/K)$ of $v(K)$ in $v(L)$ is called a ramification index of $L/K$. By Ostrowski's theorem (see \cite{Ef}, Sects.~17.1, 17.2) if $[L\colon K]$ is finite, then $[\widehat L\colon \widehat K]e(L/K)$ divides $[L\colon K]$, and the integer $[L\colon K][\widehat L\colon \widehat K] ^{-1}e(L/K) ^{-1}$ is not divisible by any $p \in \mathbb P$, $p \neq {\rm char}(\widehat K)$. The extension $L/K$ is defectless, i.e. $[L\colon K] = [\widehat L\colon \widehat K]e(L/K)$, in the following three cases: \par \noindent (4.2) (a) If char$(\widehat K) \nmid [L\colon K]$ (apply Ostrowski's theorem); \par (b) If $(K, v)$ is HDV and $L/K$ is separable (see \cite{TW}, Theorem~A.12); \par (c) When $(K, v)$ is maximally complete (cf. \cite{Wa}, Theorem~31.22). \par \noindent Assume that $(K, v)$ is a Henselian field and $R/K$ is a finite extension. We say that $R/K$ is inertial if $[R\colon K] = [\widehat R\colon \widehat K]$ and $\widehat R/\widehat K$ is a separable extension; $R/K$ is said to be totally ramified if $e(R/K) = [R\colon K]$. Inertial extensions of $K$ have the following useful properties (see \cite{TW}, Theorem~A.23, Proposition~A.17 and Corollary~A.25): \par \begin{lemm} \label{lemm4.1} Let $(K, v)$ be a Henselian field. Then: \par {\rm (a)} An inertial extension $R ^{\prime }/K$ is Galois if and only if $\widehat R ^{\prime }/\widehat K$ is Galois. When this holds, the Galois groups $\mathcal{G}(R ^{\prime }/K)$ and $\mathcal{G}(\widehat R ^{\prime }/\widehat K)$ are isomorphic. \par {\rm (b)} The compositum $K _{\rm ur}$ of inertial extensions of $K$ in $K _{\rm sep}$ is a Galois extension of $K$ with $\mathcal{G}(K _{\rm ur}/K) \cong \mathcal{G}_{\widehat K}$. \par {\rm (c)} Finite extensions of $K$ in $K _{\rm ur}$ are inertial, and the natural mapping of $I(K _{\rm ur}/K)$ into $I(\widehat K _{\rm sep}/\widehat K)$ is bijective. \par {\rm (d)} For each $K _{1} \in {\rm Fe}(K)$, the intersection $K _{0} = K _{1} \cap K _{\rm ur}$ equals the maximal inertial extension of $K$ in $K _{1}$; in addition, $\widehat K _{0} = \widehat K _{1}$. \end{lemm} \par \noindent When $(K, v)$ is Henselian, the finite extension $R/K$ is called tamely ramified, if char$(\widehat K) \nmid e(R/K)$ and $\widehat R/\widehat K$ is separable (this holds if char$(\widehat K) \nmid [R\colon K]$). 
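\par To fix ideas, we recall a standard example, which is not used in the sequel: take $K = \mathbb{Q} _{p}$ with its $p$-adic valuation; this valuation is Henselian, being complete and discrete. For any $f \in \mathbb{N}$, adjoining to $\mathbb{Q} _{p}$ a primitive $(p ^{f} - 1)$-th root of unity yields an inertial extension of degree $f$; for $n \in \mathbb{N}$ not divisible by $p$, the extension $\mathbb{Q} _{p}(p ^{1/n})/\mathbb{Q} _{p}$ is totally and tamely ramified of degree $n$; and $\mathbb{Q} _{p}(p ^{1/p})/\mathbb{Q} _{p}$ is totally ramified of degree $p$ but not tamely ramified. \par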
The next lemma gives an account of some basic properties of tamely ramified extensions of $K$ in $K _{\rm sep}$ (see \cite{MT}, and \cite{TW}, Theorems~A.9 (i),(ii) and A.24): \par \begin{lemm} \label{lemm4.2} Let $(K, v)$ be a Henselian field with {\rm char}$(\widehat K) = q$, $K _{\rm tr}$ the compositum of tamely ramified extensions of $K$ in $K _{\rm sep}$, $\mathbb{P} ^{\prime } = \mathbb{P} \setminus \{q\}$, and let $\hat \varepsilon _{p}$ be a primitive $p$-th root of unity in $\widehat K _{\rm sep}$, for each $p \in \mathbb{P} ^{\prime }$. Then $K _{\rm tr}/K$ is a Galois extension with $\mathcal{G}(K _{\rm tr}/K _{\rm ur})$ abelian, and the following holds: \par {\rm (a)} All finite extensions of $K$ in $K _{\rm tr}$ are tamely ramified. \par {\rm (b)} There is $T(K) \in I(K _{\rm tr}/K)$ with $T(K) \cap K _{\rm ur} = K$ and $T(K).K _{\rm ur} = K _{\rm tr}$; hence, finite extensions of $K$ in $T(K)$ are tamely and totally ramified. \par {\rm (c)} The field $T(K)$ singled out in {\rm (b)} is isomorphic as a $K$-algebra to \par\noindent $\otimes _{p \in \mathbb{P}'} T _{p}(K)$, where $\otimes = \otimes _{K}$, and for each $p$, $T _{p}(K) \in I(T(K)/K)$ and every finite extension of $K$ in $T _{p}(K)$ is of $p$-primary degree; in particular, $T(K)$ equals the compositum of the fields $T _{p}(K)$, $p \in \mathbb{P} ^{\prime }$. \par {\rm (d)} With notation being as in {\rm (c)}, $T _{p}(K) \neq K$, for some $p \in \mathbb{P} ^{\prime }$, if and only if $v(K) \neq pv(K)$; when this holds, $T _{p}(K) \in I(K(p)/K)$ if and only if $\hat \varepsilon _{p} \in \widehat K$ (equivalently, if and only if $K$ contains a primitive $p$-th root of unity). \end{lemm} \par The Henselian property of $(K, v)$ guarantees that $v$ extends to a unique, up-to equivalence, valuation $v _{D}$ on each $D \in d(K)$ (cf. \cite{TW}, Sect. 1.2.2). Put $v(D) = v _{D}(D)$ and denote by $\widehat D$ the residue division ring of $(D, v _{D})$. It is known that $\widehat D$ is a division $\widehat K$-algebra, $v(D)$ is an ordered abelian group and $v(K)$ is an ordered subgroup of $v(D)$ of finite index $e(D/K)$ (called the ramification index of $D/K$). Note further that $[\widehat D\colon \widehat K] < \infty $, and by the Ostrowski-Draxl theorem (cf. \cite{Dr2} and \cite{TW}, Propositions~4.20, 4.21), $[\widehat D\colon \widehat K]e(D/K) \mid [D\colon K]$ and $[D\colon K][\widehat D\colon \widehat K] ^{-1}e(D/K) ^{-1}$ has no prime divisor $p \neq {\rm char}(\widehat K)$. The division $K$-algebra $D$ is called inertial if \par\noindent $[D\colon K] = [\widehat D\colon \widehat K]$ and $\widehat D \in d(\widehat K)$; it is called totally ramified if \par\noindent $[D\colon K] = e(D/K)$. We say that $D/K$ is defectless if $[D\colon K] = [\widehat D\colon \widehat K]e(D/K)$; this holds in the following two cases: \par \noindent (4.3) (a) If char$(\widehat K) \nmid [D\colon K]$ (apply the Ostrowski-Draxl theorem); \par (b) If $(K, v)$ is an HDV-field (see \cite{TY}, Proposition~2.2). \par \noindent The algebra $D \in d(K)$ is called nicely semi-ramified (abbr., NSR), in the sense of \cite{JW}, if $e(D/K) = [\widehat D\colon \widehat K] = {\rm deg}(D)$ and $\widehat D/\widehat K$ is a separable field extension. As shown in \cite{JW}, when this holds, $\widehat D/\widehat K$ is a Galois extension, $\mathcal{G}(\widehat D/\widehat K)$ is isomorphic to the quotient group $v(D)/v(K)$, and $D$ decomposes into a tensor product of cyclic NSR-algebras over $K$ (see also \cite{TW}, Propositions~8.40 and 8.41). 
The result referred to allows to prove our next lemma, stated as follows: \par \begin{lemm} \label{lemm4.3} Let $(K, v)$ be a Henselian field, such that {\rm abrd}$_{p}(\widehat K(p)) = 0$, for some $p \in \mathbb{P}$ not equal to {\rm char}$(\widehat K)$. Then every $\Delta _{p} \in d(K)$ of $p$-primary degree has a splitting field that is a finite extension of $K$ in $K(p)$. \end{lemm} \par Lemma \ref{lemm4.3} shows that if $R$ is a central division LBD-algebra over a field $K$ satisfying the conditions of some of the main results of the present paper, and if there is a $K$-subalgebra $\widetilde R$ of $R$ with the properties claimed by Conjecture \ref{conj1.1}, then for each $p \in \mathbb{P}$ with at most one exception, $K$ has a finite extension $E _{p}$ in $K(p)$ that is a $p$-splitting field of $R/K$ (see also Lemma \ref{lemm5.3} (c) below). This leads to the idea of proving Theorems \ref{theo3.1} and \ref{theo3.2} on the basis of Lemma \ref{lemm3.3} (for further support of the idea and a step to its implementation, see Theorem \ref{theo6.3}). \par \vskip0.4truecm\noindent {\it Proof of Lemma \ref{lemm4.3}.} The assertion is obvious if $\Delta _{p}$ is an NSR-algebra over $K$, or more generally, if $\Delta _{p}$ is Brauer equivalent to the tensor product of cyclic division $K$-algebras of $p$-primary degrees. When the $K$-algebra $\Delta _{p}$ is inertial, we have $\widehat \Delta _{p} \in d(\widehat K)$ (cf. \cite{JW}, Theorem~2.8), so our conclusion follows from the fact that abrd$_{p}(\widehat K(p)) = 0$ and $\widehat {K(p)} = \widehat K(p)$, which ensures \par\vskip0.048truecm\noindent that $[\Delta _{p}] \in {\rm Br}(K _{\rm ur} \cap K(p)/K)$. Since, by \cite{JW}, Lemmas~5.14 and 6.2, \par\vskip0.04truecm\noindent $[\Delta _{p}] = [I _{p} \otimes _{K} N _{p} \otimes _{K} T _{p}]$, for some inertial $K$-algebra $I _{p}$, an NSR-algebra $N _{p}/K$, and a tensor product $T _{p}$ of totally ramified cyclic division $K$-algebras, \par\vskip0.04truecm\noindent such that $[I _{p}], [N _{p}]$ and $[T _{p}] \in {\rm Br}(K) _{p}$, these observations prove Lemma \ref{lemm4.3}. \par The following two lemmas give a valuation-theoretic characterization of those Henselian virtually perfect fields $(K, v)$ with char$(K) = {\rm char}(\widehat K)$, which satisfy the condition abrd$_{p}(K) < \infty $, for some $p \in \mathbb{P}$. \par \begin{lemm} \label{lemm4.4} Let $(K, v)$ be a Henselian field. Then {\rm abrd}$_{p}(K) < \infty $, for a given $p \in \mathbb{P}$ different from {\rm char}$(\widehat K)$, if and only if {\rm abrd}$_{p}(\widehat K) < \infty $ and the quotient group $v(K)/pv(K)$ is finite. \end{lemm} \par \begin{proof} We have abrd$_{p}(\widehat K) \le {\rm abrd}_{p}(K)$ (by \cite{JW}, Theorem~2.8, and \cite{TW}, Theorem~A.23), so our assertion can be deduced from \cite{Ch8}, Proposition~6.1, Theorem~5.9 and Remark~6.2 (or \cite{Ch8}, (3.3) and Theorem~2.3). \end{proof} \par Lemma \ref{lemm4.4} and our next lemma show that a maximally complete field $(K, v)$ with char$(K) = {\rm char}(\widehat K)$ satisfies abrd$_{p}(K) < \infty $, $p \in \mathbb{P}$, if and only if $\widehat K$ is virtually perfect and for each $p \in \mathbb{P}$, abrd$_{p}(\widehat K) < \infty $ and $v(K)/pv(K)$ is finite. When this holds, $K$ is virtually perfect as well (see \cite{Ch8}, Lemma~3.2). \par \begin{lemm} \label{lemm4.5} Let $(K, v)$ be a Henselian field with {\rm char}$(\widehat K) = q > 0$. 
Then: \par {\rm (a)} $[\widehat K\colon \widehat K ^{q}]$ and $v(K)/qv(K)$ are finite, provided that {\rm Brd}$_{q}(K) < \infty $; \par {\rm (b)} The inequality {\rm abrd}$_{q}(K) < \infty $ holds, in case $\widehat K$ is virtually perfect and some of the following two conditions is satisfied: \par {\rm (i)} $v$ is discrete; \par {\rm (ii)} {\rm char}$(K) = q$ and $K$ is virtually perfect; in particular, this occurs if {\rm char}$(K) = q$, $v(K)/qv(K)$ is finite and $(K, v)$ is maximally complete. \end{lemm} \par \begin{proof} Statement (a) is implied by \cite{Ch7}, Proposition~3.4, so one may assume that $[\widehat K\colon \widehat K ^{q}] = q ^{\mu }$ and $v(K)/qv(K)$ has order $q ^{\tau }$, for some integers $\mu \ge 0$, $\tau \ge 0$. We prove statement (b) of the lemma. Suppose first that $v$ is discrete. Then Brd$_{q}(K) \le {\rm Brd}_{q}(K _{v})$, by (3.1), so it is sufficient to prove that abrd$_{q}(K) < \infty $, provided that $K = K _{v}$. If char$(K) = 0$, this is contained in \cite{PS}, Theorem~2, and when char$(K) = q$, the finitude of abrd$_{q}(K)$ is obtained as a special case of Lemma \ref{lemm4.5} (b) (ii) and the fact that $(K, v)$ is maximally complete (cf. \cite{L}, Ch. XII, page~488). It remains for us to prove Lemma \ref{lemm4.5} (b) (ii). Our former assertion follows from \cite{A1}, Ch. VII, Theorem~28, statement (2.1) and \cite{Ch6}, Lemma~4.1 (which ensure that abrd$_{q}(K) \le {\rm log}_{q}[K\colon K ^{q}]$). Observe finally that if $(K, v)$ is maximally complete with char$(K) = q$, then $[K\colon K ^{q}] = q ^{\mu + \tau }$. This can be deduced from (4.2) (c), since $(K ^{q}, v _{q})$ is maximally complete, $v _{q}(K ^{q}) = qv(K)$ and $\widehat K ^{q}$ is the residue field of $(K ^{q}, v _{q})$, where $v _{q}$ is the valuation of $K ^{q}$ induced by $v$. More precisely, it follows from (4.2) (c) and the noted properties of $(K ^{q}, v _{q})$ that the degrees of finite extensions of $K ^{q}$ in $K$ are at most equal to $q ^{\mu + \tau }$, which yields $[K\colon K ^{q}] \le q ^{\mu + \tau }$ and so allows to conclude that $[K\colon K ^{q}] = q ^{\mu + \tau }$ (whence, abrd$_{q}(K) \le \mu + \tau $). Thus Lemma \ref{lemm4.5} (b) (ii) is proved. \end{proof} \par \begin{coro} \label{coro4.6} Let $K _{0}$ be a field and $\Gamma $ a nontrivial ordered abelian group. Then the formal power series field $K = K _{0}((\Gamma ))$ satisfies the inequalities {\rm abrd}$_{p}(K) < \infty $, $p \in \mathbb{P}$, if and only if $K _{0}$ is virtually perfect, {\rm abrd}$_{p}(K _{0}) < \infty $ whenever $p \in \mathbb{P} \setminus \{{\rm char}(K _{0})\}$, and the quotient groups $\Gamma /p\Gamma $ are finite, for all $p \in \mathbb{P}$. \end{coro} \par \begin{proof} Let $v _{\Gamma }$ be the standard valuation of $K$ inducing on $K _{0}$ the trivial valuation. Then $(K, v _{\Gamma })$ is maximally complete (cf. \cite{Ef}, Theorem~18.4.1) with $v(K) = \Gamma $ and $\widehat K = K _{0}$ (see \cite{Ef}, Sect. 2.8 and Example~4.2.1), so Corollary \ref{coro4.6} can be deduced from Lemmas \ref{lemm4.4} and \ref{lemm4.5}. 
\end{proof} \par \begin{rema} \label{rema4.7} Given a field $K _{0}$ and an ordered abelian group $\Gamma \neq \{0\}$, the standard realizability of the field $K _{1} = K _{0}((\Gamma ))$ as a maximally complete field, used in the proof of Corollary \ref{coro4.6}, allows us to determine the sequence $(b _{p}, a _{p}) = ({\rm Brd}_{p}(K _{1}), {\rm abrd}_{p}(K _{1}))$, $p \in \mathbb{P}$, in the following two cases: {\rm (i)} $K _{0}$ is a global or local field (see \cite{Ch8}, Proposition~5.1, and \cite{Ch7}, Corollary~3.6 and Sect. 4, respectively); {\rm (ii)} $K _{0}$ is perfect and dim$(K _{0}) \le 1$ (see \cite{Ch7}, Proposition~3.5, and \cite{Ch8}, Propositions~5.3, 5.4). In both cases, $(b _{p}, a _{p})$ depends only on $K _{0}$ and $\Gamma $. Moreover, if $(K, v)$ is Henselian with $\widehat K = K _{0}$ and $v(K) = \Gamma $, then: {\rm (a)} $(b _{p}, a _{p}) = ({\rm Brd}_{p}(K), {\rm abrd}_{p}(K))$, $p \in \mathbb{P}$, provided that $(K, v)$ is maximally complete, $K _{0}$ is perfect and char$(K) = {\rm char}(K _{0})$; {\rm (b)} {\rm Brd}$_{p}(K) = {\rm Brd}_{p}(K _{1})$ and {\rm abrd}$_{p}(K) = {\rm abrd}_{p}(K _{1})$, for each $p \neq {\rm char}(K _{0})$. When $K _{0}$ is finite and $p \neq {\rm char}(K _{0})$, Brd$_{p}(K)$ has been computed also in \cite{Br}, Sect. 7, by a method independent of \cite{Ch7} and \cite{Ch8}. \end{rema} \par Next we show that abrd$_{p}(K _{m}) < \infty $, $p \in \mathbb{P}$, if $K _{m}$ is a complete $m$-discretely valued field with $m$-th residue field admissible by Theorem \ref{theo3.2}. \par \begin{lemm} \label{lemm4.8} Let $K _{m}$ be a complete $m$-discretely valued field with an $m$-th residue field $K _{0}$. Then {\rm abrd}$_{p}(K _{m}) < \infty $, for all $p \in \mathbb{P}$, if and only if $K _{0}$ is virtually perfect with {\rm abrd}$_{p}(K _{0}) < \infty $, for every $p \in \mathbb{P} \setminus \{{\rm char}(K _{0})\}$. \end{lemm} \par \begin{proof} Clearly, it suffices to consider the case where abrd$_{p}(K _{m-1}) < \infty $, for all $p \in \mathbb{P}$. Then our assertion follows from Lemmas \ref{lemm4.4} and \ref{lemm4.5}. \end{proof} \par The concluding result of this Section proves (2.2) and leads to the following open question: given a field $E$ with char$(E) = q > 0$, $[E\colon E ^{q}] = \infty $ and abrd$_{q}(E) < \infty $, does there exist an integer $\mu (E)$, such that Brd$_{q}(E ^{\prime }) \le \mu (E)$, for every finite extension $E ^{\prime }/E$? An affirmative answer to this question would imply that the removal of the condition that $K$ is a virtually perfect field does not affect the validity of the assertion of Proposition \ref{prop2.1}. \par \begin{prop} \label{prop4.9} Let $F _{0}$ be an algebraically closed field of nonzero characteristic $q$ and $F _{n}$: $n \in \mathbb{N}$, be a tower of extensions of $F _{0}$ defined inductively as follows: when $n > 0$, $F _{n} = F _{n-1}((T _{n}))$ is the formal Laurent power series field in a variable $T _{n}$ over $F _{n-1}$. Then the following holds, for each $n \in \mathbb{N}$: \par {\rm (a)} $F _{n}$ possesses a subfield $\Lambda _{n}$ that is a purely transcendental extension of infinite transcendence degree over the rational function field $F _{n-1}(T _{n})$.
\par {\rm (b)} The maximal separable (algebraic) extension $E _{n}$ of $\Lambda _{n}$ in $F _{n}$ satisfies the equalities $[E _{n}\colon E _{n} ^{q}] = \infty $, {\rm Brd}$_{p}(E _{n}) = {\rm abrd}_{p}(E _{n}) = [n/2]$, for all \par\noindent $p \in \mathbb{P} \setminus \{q\}$, and {\rm Brd}$_{q}(E _{n} ^{\prime }) = n - 1$, for every finite extension $E _{n} ^{\prime }/E _{n}$. \end{prop} \par \begin{proof} The assertion of Proposition \ref{prop4.9} (a) is known (cf., e.g., \cite{BlKu}), and it implies $[E _{n}\colon E _{n} ^{q}] = \infty $. Let $w _{n}$ be the natural discrete valuation of $F _{n}$ trivial on $F _{n-1}$, and $v _{n}$ be the valuation of $E _{n}$ induced by $w _{n}$. Then $(F _{n}, w _{n})$ is complete and $E _{n}$ is dense in $F _{n}$, which yields $v _{n}(E _{n}) = w _{n}(F _{n})$ and $F _{n-1}$ is the residue field of $(E _{n}, v _{n})$ and $(F _{n}, w _{n})$; hence, $v _{n}$ is discrete. Similarly, if $n \ge 2$, then the natural $\mathbb{Z} ^{n}$-valued valuation $\theta _{n} '$ of $F _{n}$ (trivial on $F _{0}$) is Henselian and induces on $E _{n}$ a valuation $\theta _{n}$. Also, $v _{n}$ is Henselian (cf. \cite{Ef}, Corollary~18.3.3), and $\theta _{n}$ extends the natural $\mathbb{Z} ^{n-1}$-valued valuation $\theta _{n-1}'$ of $F _{n-1}$. As $\theta _{n-1}'$ is Henselian and $F _{n-1}(T _{n}) \subset E _{n}$, this ensures that so is $\theta _{n}$ (see \cite{TW}, Proposition~A.15), $F _{0}$ is the residue field of $(E _{n}, \theta _{n})$, and $\theta _{n}(E _{n}) = \mathbb{Z} ^{n}$. At the same time, it follows from (3.1) and the Henselian property of $v _{n}$ that Brd$_{p}(E _{n}) \le {\rm Brd}_{p}(F _{n})$, for each $p$. In addition, $(F _{n}, \theta _{n}')$ is maximally complete with a residue field $F _{0}$ (cf. \cite{Ef}, Theorem~18.4.1), \par\vskip0.057truecm\noindent whence, by \cite{Ch7}, Proposition~3.5, Brd$_{q}(F _{n}) = {\rm abrd}_{q}(F _{n}) = n - 1$. Since \par\vskip0.057truecm\noindent $\theta _{n}(E _{n} ^{\prime }) \cong \mathbb{Z} ^{n} \cong \theta _{n}'(F _{n} ^{\prime })$ and $F _{0}$ is the residue field of $(E _{n} ^{\prime }, \theta _{n,E _{n}'})$ and $(F _{n}, \theta _{n,F _{n}'}')$ whenever $E _{n} ^{\prime }/E _{n}$ and $F _{n} ^{\prime }/F _{n}$ are finite extensions, one obtains from \cite{Ch8}, Proposition~5.3 (b), and \cite{Ch6}, Lemma~4.2, that Brd$_{q}(E _{n} ^{\prime }) \ge n - 1$ and \par\vskip0.057truecm\noindent Brd$_{p}(E _{n} ^{\prime }) = {\rm Brd}_{p}(F _{n} ^{\prime }) = [n/2]$, for each $p \neq q$. Note finally that \par\vskip0.057truecm\noindent $v _{n}(E _{n} ^{\prime }) \cong \mathbb{Z} \cong w _{n}(F _{n} ^{\prime })$, and the completion of $(E _{n} ^{\prime }, v _{n,E _{n}'})$ is a finite \par\vskip0.063truecm\noindent extension of $F _{n}$ (cf. \cite{L}, Ch. XII, Proposition~3.1), so it follows from (3.1) and the preceding observations that Brd$_{q}(E _{n} ^{\prime }) = n - 1$, for all $n$. \end{proof} \par \section{\bf On $p$-powers and finite-dimensional central subalgebras of division {\rm LBD}-algebras} \par Let $R$ be a central division LBD-algebra over a virtually perfect field $K$ with abrd$_{p}(K) < \infty $, $p \in \mathbb{P}$. The existence of finite $p$-powers $k(p)$ of $R/K$, $p \in \mathbb{P}$, imposes essential restrictions on a number of algebraic properties of $R$, especially, on those extensions of $K$ which are embeddable in $R$ as $K$-subalgebras. 
For example, it turns out that if $K(p) \neq K$, for some $p > 2$, then $K(p)/K$ is an infinite extension (the additive group $\mathbb{Z} _{p}$ of $p$-adic integers, endowed with its natural topology, is a homomorphic image of $\mathcal{G}(K(p)/K)$, see \cite{Wh}), whence, $K(p)$ is not isomorphic to a $K$-subalgebra of $R$. In this Section we present results on $p$-powers and $p$-splitting fields, obtained in the case of dim$(K _{\rm sol}) \le 1$. These results form the basis for the proofs of Theorems \ref{theo3.1}, \ref{theo3.2} and Lemma \ref{lemm3.3}. The first one is an immediate consequence of \cite{Ch3}, Lemmas~3.12 and 3.13, and can be stated as follows: \par \begin{lemm} \label{lemm5.1} Assume that $R$ is a central division {\rm LBD}-algebra over a virtually perfect field $K$ with {\rm dim}$(K _{\rm sol}) \le 1$ and {\rm abrd}$_{p}(K) < \infty $, $p \in \mathbb{P}$. Let $K ^{\prime }/K$ be a finite extension, $R ^{\prime }$ the underlying (central) division $K ^{\prime }$-algebra of the {\rm LBD}-algebra $R \otimes _{K} K ^{\prime }$, $\gamma $ the integer for which $R \otimes _{K} K ^{\prime }$ and the full matrix ring $M _{\gamma }(R ^{\prime })$ are isomorphic as $K ^{\prime }$-algebras, and for each $p \in \mathbb{P}$, let $k(p)$ and $k(p)'$ be the $p$-powers of $R/K$ and $R ^{\prime }/K ^{\prime }$, respectively. Then: \par {\rm (a)} The greatest integer $\mu (p) \ge 0$ for which $p ^{\mu (p)} \mid \gamma $ is equal to $k(p) - k(p)'$; hence, $k(p) \ge k(p)'$ and $p ^{1+k(p)} \nmid \gamma $, for any $p \in \mathbb{P}$; \par {\rm (b)} The equality $k(p) = k(p)'$ holds if and only if $p \nmid \gamma $; specifically, if \par\noindent $k(p) = 0$, then $k(p)'= 0$ and $p \nmid \gamma $. \par {\rm (c)} $K ^{\prime }$ is a $p$-splitting field of $R/K$ if and only if $k(p)' = 0$, that is, \par\noindent $p \nmid [K ^{\prime }(r')\colon K ^{\prime }]$, for any $r' \in R ^{\prime }$. \end{lemm} \par As a matter of fact, Lemma \ref{lemm5.1} (a) is identical in content with \cite{Ch3}, Lemmas~3.12 and 3.13, and it also implies Lemma \ref{lemm5.1} (b) and (c). \par \begin{rema} \label{rema5.2} The proofs of \cite{Ch3}, Lemmas~3.12 and 3.13, rely essentially on the condition that dim$(K _{\rm sol}) \le 1$, more precisely, on its restatement that abrd$_{p}(K _{p}) = 0$, for each $p \in \mathbb{P}$, where $K _{p}$ is the fixed field of a Hall pro-$(\mathbb{P} \setminus \{p\})$-subgroup $H _{p}$ of $\mathcal{G}(K _{\rm sol}/K)$. It is not known whether the assertions of Lemma \ref{lemm5.1} remain valid if this condition is dropped; also, the question of whether {\rm dim}$(E _{\rm sol}) \le 1$, for every field $E$ (posed in \cite{Koe}) is open. Here we note that the conclusion of Lemma \ref{lemm5.1} holds if the assumption that dim$(K _{\rm sol}) \le 1$ is replaced by the one that $R \otimes _{K} K ^{\prime }$ is a division $K ^{\prime }$-algebra. Then it follows from \cite{Ch3}, Proposition~3.3, that $k(p) = k(p)'$, for every $p \in \mathbb{P}$. \end{rema} \par \begin{lemm} \label{lemm5.3} Assuming that $K$ and $R$ satisfy the conditions of Lemma \ref{lemm5.1}, let $D \in d(K)$ be a $K$-subalgebra of $R$, and let $k(p)$ and $k(p)'$, $p \in \mathbb{P}$, be the $p$-powers of $R/K$ and $C _{R}(D)/K$, respectively. 
Then: \par {\rm (a)} For each $p \in \mathbb{P}$, $k(p) - k(p)'$ equals the power of $p$ in the primary decomposition of {\rm deg}$(D)$; in particular, $k(p) \ge k(p)'$; \par {\rm (b)} $k(p) = k(p)'$ if and only if $p \nmid {\rm deg}(D)$; in this case, a finite extension $K ^{\prime }$ of $K$ is a $p$-splitting field of $R/K$ if and only if so is $K ^{\prime }$ for $C _{R}(D)/K$; \par {\rm (c)} If $k(p)' = 0$, for some $p \in \mathbb{P}$, then a finite extension $K ^{\prime }$ of $K$ is a $p$-splitting field of $R/K$ if and only if $p \nmid {\rm ind}(D \otimes _{K} K ^{\prime })$. \end{lemm} \par \begin{proof} It is known (cf. \cite{P}, Sect. 13.1, Corollary~b) that if $K _{1}$ is a maximal subfield of $D$, then $K _{1}/K$ is a field extension, $[K _{1}\colon K] = {\rm deg}(D) := d$ and $D \otimes _{K} K _{1} \cong M _{d}(K _{1})$ as $K _{1}$-algebras. Also, by the Double Centralizer Theorem (see \cite{He}, Theorems~4.3.2 and 4.4.2), $R = D \otimes _{K} C _{R}(D)$ and $C _{R}(D) \otimes _{K} K _{1}$ is a central division $K _{1}$-algebra equal to $C _{R}(K _{1})$. In view of \cite{Ch3}, Propositions~3.1 and 3.3, this ensures that $k(p)'$ equals the $p$-power of $(C _{R}(D) \otimes _{K} K _{1})/K _{1}$, for each $p \in \mathbb{P}$. Applying now Lemma \ref{lemm5.1}, one proves Lemma \ref{lemm5.3} (a). Lemma \ref{lemm5.3} (b) and (c) follow from Lemmas \ref{lemm5.1} and \ref{lemm5.3} (a), combined with \cite{Ch3}, Lemma~3.5, and \cite{P}, Sect. 9.3, Corollary~b. \end{proof} \par The following lemma (for a proof, see \cite{Ch3}, Lemma~7.4) can be viewed as a generalization of the uniqueness part of the primary tensor product decomposition theorem for algebras $D \in d(K)$ over an arbitrary field $K$. \par \begin{lemm} \label{lemm5.4} Let $\Pi $ be a finite subset of $\mathbb P$, and let $S _{1}$, $S _{2}$ be central division {\rm LBD}-algebras over a field $K$ with {\rm abrd}$_{p}(K) < \infty $, for all $p \in \mathbb P$. Assume that $k(p)_{S _{1}} = k(p)_{S _{2}} = 0$, $p \in \Pi $, the $K$-algebras $R _{1} \otimes _{K} S _{1}$ and $R _{2} \otimes _{K} S _{2}$ are $K$-isomorphic, where $R _{i} \in s(K)$, $i = 1, 2$, and {\rm deg}$(R _{1}){\rm deg}(R _{2})$ is not divisible by any $\bar p \in \mathbb P \setminus \Pi $. Then $R_{1} \cong R _{2}$ as $K$-algebras. \end{lemm} \par For a proof of our next lemma, we refer the reader to \cite{Ch3}, Lemma~8.3, which has been proved under the assumption that $R$ is a central division LBD-algebra over a field $K$ of arithmetic type. We note, however, that the proof in \cite{Ch3} remains valid if the assumption on $K$ is replaced by the one that abrd$_{p}(K) < \infty $, $p \in \mathbb{P}$, dim$(K _{\rm sol}) \le 1$, $K$ is virtually perfect, and there exist $p$-splitting fields $E _{p}: p \in \mathbb{P}$, of $R/K$ with $E _{p} \subseteq K(p)$, for each $p$. \par \begin{lemm} \label{lemm5.5} Let $K$ be a field with {\rm dim}$(K _{\rm sol}) \le 1$, $R$ a central division {\rm LBD}-algebra over $K$, and $k(p)\colon p \in \mathbb{P}$, the sequence of $p$-powers of $R/K$. Assume that, for each $p \in \mathbb{P}$, $E _{p}$ is a finite extension of $K$ in $K(p)$, which is a $p$-splitting field of $R/K$.
Then: \par {\rm (a)} The full matrix ring $M _{\gamma (p)}(R)$, where $\gamma (p) = [E _{p}\colon K].p ^{-k(p)}$, is an \par\noindent Artinian central simple {\rm LBD}-algebra over $K$, which possesses a subalgebra \par\noindent $\Delta _{p} \in s(K)$, such that {\rm deg}$(\Delta _{p}) = [E _{p}\colon K]$ and $E _{p}$ is isomorphic to a $K$-subalgebra of $\Delta _{p}$. Moreover, if $[E _{p}\colon K] = p ^{k(p)}$, i.e. $E _{p}$ is embeddable in $R$ as a $K$-subalgebra, then $\Delta _{p}$ is a $K$-subalgebra of $R$. \par {\rm (b)} The centralizer of $\Delta _{p}$ in $M _{\gamma (p)}(R)$ is a central division $K$-algebra of $p$-power zero. \end{lemm} \par The following lemma generalizes \cite{Ch3}, Lemma~8.5, to the case where $K$ and $R$ satisfy the conditions of Lemma \ref{lemm5.5}. For this reason, we take into account that the proof of the lemma referred to, given in \cite{Ch3}, remains valid under the noted weaker conditions. Our next lemma can also be viewed as a generalization of the well-known fact that, for any field $E$, $D _{1} \otimes _{E} D _{2} \in d(E)$ whenever $D _{i} \in d(E)$, $i = 1, 2$, and $\gcd \{{\rm deg}(D _{1}), {\rm deg}(D _{2})\} = 1$ (see \cite{P}, Sect. 13.4). Using this lemma and the uniqueness part of the Wedderburn-Artin theorem, one obtains that, in the setting of Lemma \ref{lemm5.5}, the underlying central division $K$-algebra of $\Delta _{p}$ is embeddable in $R$ as a $K$-subalgebra. \par \begin{lemm} \label{lemm5.6} Let $K$ be a field, $R$ a central division {\rm LBD}-algebra over $K$, and $E _{p}$, $p \in \mathbb{P}$, be extensions of $K$ satisfying the conditions of Lemma \ref{lemm5.5}. Also, let $D \in d(K)$ be a division $K$-algebra such that $\gcd \{{\rm deg}(D), [K(\alpha )\colon K]\} = 1$, for each $\alpha \in R$. Then $D \otimes _{K} R$ is a central division {\rm LBD}-algebra over $K$. \end{lemm} \par \begin{rema} \label{rema5.7} Assume that $K$ is a field and $R$ is a central division $K$-algebra satisfying the conditions of Theorem \ref{theo3.1} or Theorem \ref{theo3.2}, and let $E _{p}\colon p \in \mathbb{P}$, be $p$-splitting fields of $R/K$ with the properties required by Lemma \ref{lemm3.3}. Then it follows from Lemmas \ref{lemm5.5} and \ref{lemm5.6} that, for each $p \in \mathbb{P}$, there exists a unique, up-to $K$-isomorphism, $K$-subalgebra $R _{p} \in d(K)$ of $R$ of degree deg$(R _{p}) = p ^{k(p)}$, where $k(p)$ is the $p$-power of $R/K$. Moreover, Lemma \ref{lemm5.3} implies $R _{p}$, $p \in \mathbb{P}$, can be chosen so that $R _{p'} \subseteq C _{R}(R _{p''})$ whenever $p', p'' \in \mathbb{P}$ and $p' \neq p''$. Therefore, there exist $K$-subalgebras $T _{n}$, $n \in \mathbb{N}$, of $R$, such that $T _{n} \cong \otimes _{j=1} ^{n} R _{p _{j}}$ and $T _{n} \subseteq T _{n+1}$, for each $n$; here $\otimes = \otimes _{K}$ and $\mathbb{P}$ is presented as a sequence $p _{n}\colon n \in \mathbb{N}$. Hence, the union $\widetilde R = \cup _{n=1} ^{\infty } T _{n} := \otimes _{n=1} ^{\infty } R _{p _{n}}$ is a \par\vskip0.048truecm\noindent central $K$-subalgebra of $R$. Note further that $R = T _{n} \otimes _{K} C _{R}(T _{n})$, for every \par\vskip0.04truecm\noindent $n \in \mathbb{N}$, which enables one to deduce from Lemmas \ref{lemm5.1}, \ref{lemm5.4}, \ref{lemm5.6}, and \cite{Ch3}, Lemma~3.5, that a finite-dimensional $K$-subalgebra $T$ of $R$ is embeddable in $T _{n}$ as a $K$-subalgebra in case $p _{n'} \nmid [T\colon K]$, for any $n' > n$. 
One also sees that $K = \cap _{n=1} ^{\infty } C _{R}(T _{n}) = C _{R}(\widetilde R)$, and by \cite{Ch3}, Lemma~9.3, every LFD-subalgebra of $R$ (over $K$) of countable dimension is embeddable in $\widetilde R$. \end{rema} \par The following two lemmas are used at crucial points of our proof of Lemma \ref{lemm3.3}. The former one has not been formally stated in \cite{Ch3}. However, special cases of it have been used in the proof of \cite{Ch3}, Lemma~8.3. \par \begin{lemm} \label{lemm5.8} Let $K$ and $R$ satisfy the conditions of Lemma \ref{lemm5.1}, and let $K _{1}$, $K _{2}$ be finite extensions of $K$ in an algebraic closure of $K _{\rm sep}$. Denote by $R _{1}$ and $R _{2}$ the underlying division algebras of $R \otimes _{K} K _{1}$ and $R \otimes _{K} K _{2}$, respectively, and suppose that there exist $D _{i} \in d(K _{i})$, $i = 1, 2$, such that $D _{i}$ is a $K _{i}$-subalgebra of $R _{i}$ and {\rm deg}$(D _{i}) = p ^{k(p)}$, for a given $p \in \mathbb{P}$ and each index $i$, where $k(p)$ is the $p$-power of $R/K$. Then: \par {\rm (a)} The underlying division $K _{1}K _{2}$-algebras of $R _{1} \otimes _{K _{1}} K _{1}K _{2}$, $R _{2} \otimes _{K _{2}} K _{1}K _{2}$ and $R \otimes _{K} K _{1}K _{2}$ are isomorphic; \par {\rm (b)} $p$ does not divide $[K _{i}(c _{i})\colon K _{i}]$, for any $c _{i} \in C _{R _{i}}(D _{i})$, and $i = 1, 2$. \par {\rm (c)} If $p \nmid [K _{1}K _{2}\colon K]$, then $D _{1} \otimes _{K _{1}} K _{1}K _{2}$ and $D _{2} \otimes _{K _{2}} K _{1}K _{2}$ are isomorphic central division $K _{1}K _{2}$-algebras; for example, this holds in case $p \nmid [K _{i}\colon K]$, $i = 1, 2$, and $\gcd \{[K _{1}\colon K _{0}], [K _{2}\colon K _{0}]\} = 1$, where $K _{0} = K _{1} \cap K _{2}$. \end{lemm} \par \begin{proof} Note that $R \otimes _{K} K _{1}K _{2}$ and $(R \otimes _{K} K _{i}) \otimes _{K _{1}} K _{1}K _{2}$, $i = 1, 2$, are isomorphic $K _{1}K _{2}$-algebras. These algebras are central simple and Artinian, which enables one to deduce Lemma \ref{lemm5.8} (a) from Wedderburn-Artin's theorem and \cite{P}, Sect. 9.3, Corollary~b. In addition, it follows from Lemma \ref{lemm5.1}, the assumptions on $D _{1}$ and $D _{2}$, and the Double Centralizer Theorem that $k(p)$ equals the $p$-powers of $R _{i}/K _{i}$, and $C _{R _{i}}(D _{i})$ is a central division $K _{i}$-subalgebra of $R _{i}$, for each $i$. Hence, by Lemma \ref{lemm5.3} (a), $C _{R _{1}}(D _{1})/K _{1}$ and $C _{R _{2}}(D _{2})/K _{2}$ are of $p$-power zero, which proves Lemma \ref{lemm5.8} (b). \par We turn to the proof of Lemma \ref{lemm5.8} (c). Assume that $p \nmid [K _{1}K _{2}\colon K]$ and denote by $R ^{\prime }$ the underlying division $K _{1}K _{2}$-algebra of $R \otimes _{K} K _{1}K _{2}$. Then, by Lemma \ref{lemm5.1}, $k(p)$ equals the $p$-power of $R ^{\prime }/K _{1}K _{2}$. Applying \cite{Ch3}, Lemma~3.5 \par\vskip0.054truecm\noindent (or results of \cite{P}, Sect. 13.4), one also obtains that $D _{i} \otimes _{K _{i}} K _{1}K _{2} \in d(K _{1}K _{2})$ \par\vskip0.05truecm\noindent and $D _{i} \otimes _{K _{i}} K _{1}K _{2}$ are embeddable in $R ^{\prime }$ as $K _{1}K _{2}$-subalgebras, for $i = 1, 2$. \par\vskip0.07truecm\noindent Let $D _{1} ^{\prime }$ and $D _{2} ^{\prime }$ be $K _{1}K _{2}$-subalgebras of $R ^{\prime }$ isomorphic to $D _{1} \otimes _{K _{1}} K _{1}K _{2}$ and \par\vskip0.06truecm\noindent $D _{2} \otimes _{K _{2}} K _{1}K _{2}$, respectively. 
As above, then it follows that, for each index $i$, \par\vskip0.06truecm\noindent $R ^{\prime }$ coincides with $D _{i} ^{\prime } \otimes _{K _{1}K _{2}} C _{R'}(D _{i} ^{\prime })$, $C _{R'}(D _{i} ^{\prime })$ is a central division algebra \par\vskip0.064truecm\noindent over $K _{1}K _{2}$, and $C _{R'}(D _{i} ^{\prime })/K _{1}K _{2}$ is of zero $p$-power; thus $p \nmid [K _{1}K _{2}(c')\colon K _{1}K _{2}]$, \par\vskip0.064truecm\noindent for any $c' \in C _{R'}(D _{1} ^{\prime }) \cup C _{R'}(D _{2} ^{\prime })$. Therefore, by Lemma \ref{lemm5.4}, $D _{1} ^{\prime } \cong D _{2} ^{\prime }$, \par\vskip0.064truecm\noindent whence, $D _{1} \otimes _{K _{1}} K _{1}K _{2} \cong D _{2} \otimes _{K _{2}} K _{1}K _{2}$ as $K _{1}K _{2}$-algebras. The latter part \par\vskip0.06truecm\noindent of our assertion is obvious, so Lemma \ref{lemm5.8} is proved. \end{proof} \par \begin{lemm} \label{lemm5.9} Let $D$ be a finite-dimensional simple algebra over a field $K$. Suppose that the centre $B$ of $D$ is a compositum of extensions $B _{1}$ and $B _{2}$ of $K$ of relatively prime degrees, and the following conditions are fulfilled: \par {\rm (a)} $[D\colon B] = n ^{2}$ and $D$ possesses a maximal subfield $E$ such that \par\noindent $[E\colon B] = n$ and $E = B\widetilde E$, for some separable extension $\widetilde E/K$ of degree $n$; \par {\rm (b)} $p > n$, for every $p \in \mathbb P$ dividing $[B\colon K]$; \par {\rm (c)} $D \cong D _{i} \otimes _{B _{i}} B$ as a $B$-algebra, for some $D _{i} \in s(B _{i})$, $i = 1, 2$. \par\noindent Then there exist $\widetilde D \in s(K)$ with $[\widetilde D\colon K] = n ^{2}$, and isomorphisms of \par\noindent $B _{i}$-algebras $\widetilde D \otimes _{K} B _{i} \cong D _{i}$, $i = 1, 2, 3$, where $B _{3} = B$ and $D _{3} = D$. \end{lemm} \par Lemma \ref{lemm5.9} has been proved as \cite{Ch3}, Lemma~8.2 (see also \cite{Ch1}). It has been used for proving \cite{Ch3}, Lemma~8.3, and the main result of \cite{Ch1}. In the present paper, the application of Lemma \ref{lemm5.9} in the proof of Lemma \ref{lemm8.2} gives us the possibility to deduce Lemma \ref{lemm3.3} by the method of proving \cite{Ch3}, Lemma~8.3. \par \section{\bf Henselian fields $(K, v)$ with char$(\widehat K) = q > 0$ and {\rm abrd}$_{q}(K(q)) \le 1$} \par The question of whether abrd$_{q}(\Phi (q)) = 0$, for every field $\Phi $ of characteristic $q > 0$ seems to be open. This Section gives a criterion for a Henselian field $(K, v)$ with char$(\widehat K) = q$ and $\widehat K$ of arithmetic type to satisfy the equality abrd$_{q}(K(q)) = 0$. To prove this criterion we need the following two lemmas. \par \begin{lemm} \label{lemm6.1} Let $(K, v)$ be a Henselian field with {\rm char}$(\widehat K) = q > 0$ and \par\noindent $\widehat K \neq \widehat K ^{q}$, and in case {\rm char}$(K) = 0$, suppose that $v$ is discrete and \par\noindent $v(q) \in qv(K)$. Let also $\widetilde \Lambda /\widehat K$ be an inseparable extension of degree $q$. Then there is $\Lambda \in I(K(q)/K)$, such that $[\Lambda \colon K] = q$ and $\widehat \Lambda $ is $\widehat K$-isomorphic to $\widetilde \Lambda $. \end{lemm} \par \begin{proof} The assumption on $\widetilde \Lambda /\widehat K$ shows that $\widetilde \Lambda = \widehat K(\sqrt[q]{\hat a})$, for some $\hat a \in \widehat K \setminus \widehat K ^{q}$. 
Hence, by the Artin-Schreier theorem, one may take as $\Lambda $ the extension of $K$ in $K _{\rm sep}$ obtained by adjunction of a root of the polynomial $X ^{q} - X - a\pi ^{-q}$ (equivalently, of the polynomial $X ^{q} - \pi ^{q-1}X - a$), where $a \in O _{v}(K)$ is a preimage of $\hat a$ under the residue map and $\pi \in K ^{\ast }$ is any fixed element with $v(\pi ) > 0$. When char$(K) = 0$, our assertion is contained in \cite{Ch9}, Lemma~5.4, so Lemma \ref{lemm6.1} is proved. \end{proof} \par \begin{lemm} \label{lemm6.2} Let $(K, v)$ be a Henselian field, $L/K$ an inertial extension, and $N(L/K)$ the norm group of $L/K$. Then $\nabla _{0}(K)$ is a subgroup of $N(L/K)$. \end{lemm} \par \begin{proof} This is a special case of \cite{Er}, Proposition~2. \end{proof} \par Next we show that a Henselian field $(K, v)$ with char$(\widehat K) = q > 0$ satisfies abrd$_{q}(K(q)) = 0$, provided that char$(K) = q$ or the valuation $v$ is discrete. Note here that, by the Albert-Hochschild theorem, the equality abrd$_{q}(K(q)) = 0$ ensures that Br$(K(q) ^{\prime }) _{q} = \{0\}$, for every finite extension $K(q) ^{\prime }/K(q)$. \par \vskip0.32truecm \begin{theo} \label{theo6.3} Let $(K, v)$ be a Henselian field with {\rm char}$(\widehat K) = q > 0$, and in case {\rm char}$(K) = 0$, let $v$ be discrete. Then $v(K(q)) = qv(K(q))$, the residue field $\widehat {K(q)} = \widehat K(q)$ of $(K(q), v _{K(q)})$ is perfect, and {\rm abrd}$_{q}(K(q)) = 0$. \end{theo} \par Since the proof of Theorem \ref{theo6.3} relies on the presentability of cyclic $K$-algebras of degree $q$ as $q$-symbol algebras over $K$, we recall some basic facts related to such algebras over any field $E$ with char$(E) = q > 0$. Firstly, for each pair $a \in E$, $b \in E ^{\ast }$, $[a, b) _{E} \in s(E)$ and deg$([a, b) _{E}) = q$ (cf. \cite{GiSz}, Corollary~2.5.5). Secondly, if $[a, b) _{E} \in d(E)$, then the polynomial $f _{a}(X) = X ^{q} - X - a \in E[X]$ is irreducible over $E$. This follows from the Artin-Schreier theorem (see \cite{L}, Ch. VI, Sect. 6), which also shows that if $f _{a}(X)$ is irreducible over $E$ and $\xi $ is a root of $f _{a}$ in a separable closure of $E$, then $E(\xi )/E$ is a cyclic field extension, $[E(\xi )\colon E] = q$, and $[a, b) _{E}$ is isomorphic to the cyclic $E$-algebra $(E(\xi )/E, \sigma , b)$, where $\sigma $ is the $E$-automorphism of $E(\xi )$ mapping $\xi $ into $\xi + 1$; hence, by \cite{P}, Sect. 15.1, Proposition~b, $[a, b) _{E} \in d(E)$ if and only if $b \notin N(E(\xi )/E)$. \par \vskip0.48truecm\noindent {\it Proof of Theorem \ref{theo6.3}.} It is clear from Galois theory, the definition of $K(q)$ and the closedness of the class of pro-$q$-groups under the formation of profinite group extensions that $\widetilde K(q) = K(q)$, for every $\widetilde K \in I(K(q)/K)$; in particular, $K(q)(q) = K(q)$, which means that $K(q)$ does not admit cyclic extensions of degree $q$. As $(K(q), v_{K(q)})$ is Henselian, this allows us to deduce from \cite{Ch6}, Lemma~4.2, and \cite{Ch9}, Lemma~2.3, that $v(K(q)) = qv(K(q))$. We show that the field $\widehat {K(q)} = \widehat K(q)$ is perfect. It follows from Lemma \ref{lemm6.1} and \cite{Ch9}, Lemma~2.3, that in case char$(K) = 0$ (and $v$ is discrete), one may assume without loss of generality that $v(q) \in qv(K)$. Denote by $\Sigma $ the set of those fields $U \in I(K(q)/K)$, for which $v(U) = v(K)$, $\widehat U \neq \widehat K$ and $\widehat U/\widehat K$ is a purely inseparable extension. In view of Lemma \ref{lemm6.1}, our extra hypothesis ensures that $\Sigma \neq \emptyset $.
Also, $\Sigma $ is a partially ordered set with respect to set-theoretic inclusion, so it follows from Zorn's lemma that it contains a maximal element, say $U ^{\prime }$. Using again Lemma \ref{lemm6.1}, one proves by assuming the opposite that $\widehat U ^{\prime }$ is a perfect field. Since $(K(q), v _{K(q)})/(U ^{\prime }, v _{U'})$ is a valued extension and $\widehat K(q)/\widehat {U'}$ is an algebraic extension, this implies $\widehat K(q)$ is perfect as well. \par It remains to be seen that abrd$_{q}(K(q)) = 0$. Suppose first that \par\noindent char$(K) = q$, fix an algebraic closure $\overline K$ of $K _{\rm sep}$, and put $\bar v = v _{\overline K}$. It is known \cite{A1}, Ch. VII, Theorem~22, that if $K$ is perfect, then Br$(K ^{\prime }) _{q} = \{0\}$, for every finite extension $K ^{\prime }/K$. We assume further that $K$ is imperfect and $K _{\rm ins}$ is the perfect closure of $K$ in $\overline K$. It is easily verified that $K _{\rm ins}$ equals the union $\cup _{\nu =1} ^{\infty } K ^{q^{-\nu }}$ of the fields $K ^{q^{-\nu }} = \{\beta \in \overline K\colon \beta ^{q^{\nu }} \in K\}$, $\nu \in \mathbb{N}$, and $[K ^{q^{-\nu }}\colon K] \ge q ^{\nu }$, for each index $\nu $. To prove the equality abrd$_{q}(K(q)) = 0$ it suffices to show that Br$(L ^{\prime }) _{q} = \{0\}$, for an arbitrary $L ^{\prime } \in {\rm Fe}(K(q))$. Clearly, Br$(L ^{\prime }) _{q}$ coincides with the union of the images of Br$(L _{0} ^{\prime }) _{q}$ under the scalar extension maps Br$(L _{0} ^{\prime }) \to {\rm Br}(L ^{\prime })$, where $L _{0} ^{\prime }$ runs across the set of finite extensions of $K$ in $L ^{\prime }$. Moreover, one may restrict to the set $\mathcal{L}$ of those finite extensions $L _{0} ^{\prime }$ of $K$ in $L ^{\prime }$, for which $L _{0}'.K(q) = L ^{\prime }$ (evidently, $\mathcal{L} \neq \emptyset $). These observations, together with basic results on tensor products (cf. \cite{P}, Sect. 9.4, Corollary~a), indicate that the concluding assertion of Theorem \ref{theo6.3} can be deduced from the following statement: \par \noindent (6.1) Br$(L) _{q} = {\rm Br}(L.K(q)/L)$, for an arbitrary $L \in {\rm Fe}(K)$. \par \noindent We prove (6.1) by showing that, for any fixed $L$-algebra $D \in d(L)$ of \par\noindent $q$-primary degree, there is a finite extension $K _{1}$ of $K$ in $K(q)$ (depending on $D$), such that $[D] \in {\rm Br}(LK _{1}/L)$, i.e. the compositum $LK _{1}$ is a splitting field of $D$. Our proof relies on the fact that $K _{\rm ins}$ is perfect. This ensures that Br$(K _{\rm ins} ^{\prime }) _{q} = \{0\}$ whenever $K _{\rm ins} ^{\prime } \in I(\overline K/K _{\rm ins})$, which implies Br$(L _{1}) _{q} = {\rm Br}(L _{1}K _{\rm ins}/L _{1})$, for every $L _{1} \in {\rm Fe}(L)$. Thus it turns out that $[D] \in {\rm Br}(L.J'/L)$, for some finite extension $J ^{\prime }$ of $K$ in $K _{\rm ins}$. In particular, $J'$ lies in the set, say $\mathcal{D}$, of finite extensions $I ^{\prime }$ of $K$ in $K _{\rm ins}$, for which $K$ has a finite extension $\Lambda _{I'}$ in $K(q)$, such that $[D \otimes _{L} L\Lambda _{I'}] \in {\rm Br}(L\Lambda _{I'}I'/L\Lambda _{I'})$. Choose $J \in \mathcal{D}$ to be of minimal degree over $K$. We prove that $J = K$, by assuming the opposite. For this purpose, we use the following fact: \par \noindent (6.2) For each $\beta \in K _{\rm ins} ^{\ast }$ and any nonzero element $\pi \in M _{v}(K(q))$, there exists $\beta ^{\prime } \in K(q) ^{\ast }$, such that $\bar v(\beta ^{\prime } - \beta ) > v(\pi )$. 
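\par \noindent In other words, (6.2) asserts that every element of $K _{\rm ins}$ can be approximated by elements of $K(q)$ to within any prescribed value $v(\pi )$, $\pi \in M _{v}(K(q)) \setminus \{0\}$; in particular, $K _{\rm ins}$ is contained in the closure of $K(q)$ in $\overline K$ with respect to the topology induced by $\bar v$.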
\par \noindent To prove (6.2) it is clearly sufficient to consider only the special case of $\bar v(\beta ) \ge 0$. Note also that if $\beta \in K$, then one may put $\beta ^{\prime } = \beta (1 + \pi ^{2})$, so we assume further that $\beta \notin K$. A standard inductive argument leads to the conclusion that one may assume, for our proof, that $[K(\beta )\colon K] = q ^{n}$ and the assertion of (6.2) holds for any pair $\beta _{1} \in K _{\rm ins} ^{\ast }$, $\pi _{1} \in M _{v}(K(q)) \setminus \{0\}$ satisfying $[K(\beta _{1})\colon K] < q ^{n}$. Since $[K(\beta ^{q})\colon K] = q ^{n-1}$, our extra hypothesis ensures the existence of an element $\tilde \beta \in K(q)$ with $\bar v(\tilde \beta - \beta ^{q}) > qv(\pi )$. Applying Artin-Schreier's theorem to the polynomial $X ^{q} - X - \tilde \beta \pi ^{-q^{3}}$, one proves that the polynomial $X ^{q} - \pi ^{q^{2}(q-1)}X - \tilde \beta \in K(q)[X]$ has a root $\beta ^{\prime } \in K(q)$. In view of the inequality $\bar v(\tilde \beta ) \ge 0$, this implies consecutively that $\bar v(\beta ^{\prime }) \ge 0 $ and $\bar v(\beta ^{\prime q} - \tilde \beta ) \ge q ^{2}(q-1).v(\pi )$. As $\bar v(\tilde \beta - \beta ^{q}) > qv(\pi )$, it is now easy to see that $\bar v(\beta ^{\prime q} - \beta ^{q}) > qv(\pi )$, whence, $\bar v(\beta ^{\prime } - \beta ) > v(\pi )$, as claimed by (6.2). \par We continue with the proof of (6.1). The assumption that $J \neq K$ shows that there exists $I \in I(J/K)$ with $[I\colon K] = [J\colon K]/q$; this means that $I \notin \mathcal{D}$. Take an element $b \in I$ so that $J = I(\sqrt[q]{b})$ and $\bar v(b) \ge 0$, and put $\Lambda = \Lambda _{J}.I$, $\Lambda ^{\prime } = L\Lambda $. As $\widehat K(q)$ is a perfect field (i.e. $\widehat K(q) ^{q} = \widehat K(q)$) and $v(K(q)) = qv(K(q))$, one may assume, for our proof, that $\Lambda _{J}$ is chosen so that $b = b _{1} ^{q}.\tilde b$, for some $b _{1} \in O _{v}(\Lambda _{J})$ and $\tilde b \in \nabla _{0}(\Lambda _{J})$. \par Let now $\Delta $ be the underlying division $\Lambda ^{\prime }$-algebra of $D \otimes _{L} \Lambda ^{\prime }$. Then it follows from \cite{P}, Sect. 13.4, Corollary, and the choice of $J$ that $\Delta \neq \Lambda ^{\prime }$ and $[\Delta ] \in {\rm Br}(\Lambda ^{\prime }J/\Lambda ^{\prime })$. This implies that $\Delta \cong [a, b) _{\Lambda '}$ as $\Lambda ^{\prime }$-algebras, for some $a \in \Lambda ^{\prime \ast }$ (see, for instance, the end of the proof of \cite{He}, Theorem~3.2.1). It is therefore clear that the polynomial $h _{a}(X) = X ^{q} - X - a \in \Lambda ^{\prime }[X]$ has no root in $\Lambda ^{\prime }$, so it follows from the Artin-Schreier theorem (see \cite{L}, Ch. VI, Sect. 6) that $h _{a}$ is irreducible over $\Lambda ^{\prime }$, and the field $W _{a} = \Lambda ^{\prime }(\xi _{a})$ is a degree $q$ cyclic extension of $\Lambda ^{\prime }$, where $\xi _{a} \in \overline K$ and $h _{a}(\xi _{a}) = 0$. One also sees that $W _{a}$ is embeddable in $\Delta $ as a $\Lambda ^{\prime }$-subalgebra, and $\Delta $ is isomorphic to the cyclic $\Lambda ^{\prime }$-algebra $(W _{a}/\Lambda ^{\prime }, \sigma , b)$, for a suitably chosen generator $\sigma $ of $\mathcal{G}(W _{a}/\Lambda ^{\prime })$. Because of the above-noted presentation $b = b _{1} ^{q}\tilde b$, this indicates that $\Delta \cong (W _{a}/\Lambda ^{\prime }, \sigma , \tilde b)$.
Note further that the extension $W _{a}/\Lambda ^{\prime }$ is not inertial. Assuming the opposite, one obtains from Lemma \ref{lemm6.2} that $\tilde b \in N(W _{a}/K)$ which means that $[\Delta ] = 0$ (cf. \cite{P}, Sect. 15.1, Proposition~b). Since $\Delta \in d(\Lambda ^{\prime })$ and $\Delta \neq \Lambda ^{\prime }$, this is a contradiction, proving our assertion. In view of Ostrowski's theorem and the equality $[W _{a}\colon\Lambda ^{\prime }] = q$, the considered assertion can be restated by saying that $\widehat W _{a}/\widehat \Lambda ^{\prime }$ is a purely inseparable extension of degree $q$ unless $\widehat W _{a} = \widehat \Lambda ^{\prime }$. \par Next we observe, using (4.1) (b), that $\eta = (\xi _{a} + 1)\xi _{a} ^{-1}$ is a primitive element of $W _{a}/\Lambda ^{\prime }$ and $\eta \in O _{v}(W _{a}) ^{\ast }$; also, we denote by $f _{\eta }(X)$ the minimal polynomial of $\eta $ over $\Lambda^{\prime }$, and by $D(f _{\eta })$ the discriminant of $f _{\eta }$. It is easily verified that $f _{\eta }(X) \in O _{v}(W _{a})[X]$, $f _{\eta }(0) = (-1) ^{q}$, $D(f _{\eta }) \neq 0$, and \par\vskip0.041truecm\noindent $\bar v(D(f _{\eta })) = q\bar v(f _{\eta } ^{\prime }(\eta )) > 0$ (the inequality is strict, since $[W _{a}\colon K] = q$ and \par\vskip0.04truecm\noindent $W _{a}/\Lambda ^{\prime }$ is not inertial). Moreover, it follows from Ostrowski's theorem that \par\vskip0.044truecm\noindent there exists $\pi _{0} \in O _{v}(K)$ of value $v(\pi _{0}) = [K(D(f _{\eta }))\colon K]\bar v(D(f _{\eta }))$. Note also that $b ^{q ^{n-1}} \in K ^{\ast }$ (whence, $q ^{n-1}\bar v(b) \in v(K)$), put $\pi '= \pi _{0}b ^{q ^{n-1}}$, and let \par\vskip0.032truecm\noindent $b'$ be the $q$-th root of $b$ lying in $K _{\rm ins}$. Applying (6.2) to $b'$ and $\pi '$ (which is allowed because $v(\pi ') \ge v(\pi _{0}) > 0$), one obtains that there is $\lambda \in K(q) ^{\ast }$ \par\vskip0.032truecm\noindent with $\bar v(\lambda ^{q} - b) > qv(\pi ')$. Consider now the fields $\Lambda _{J}(\lambda )$, $\Lambda (\lambda )$ and $\Lambda ^{\prime }(\lambda )$ \par\vskip0.032truecm\noindent instead of $\Lambda _{J}$, $\Lambda $, and $\Lambda ^{\prime }$, respectively. Clearly, $\Lambda _{J}(\lambda )$ is a finite extension of \par\vskip0.032truecm\noindent $K$ in $K(q)$, $\Lambda (\lambda ) = \Lambda _{J}(\lambda ).I$ and $\Lambda ^{\prime }(\lambda ) = L.\Lambda (\lambda )$, so our choice of $J$ indicates that one may assume, for the proof of (6.1), that $\lambda \in \Lambda _{J}$. \par We can now rule out the possibility that $J \neq K$, by showing that \par\noindent $[a, b) _{\Lambda '} \notin d(\Lambda ^{\prime })$ (in contradiction with the choice of $J$ which requires that $I \notin \mathcal{D}$). Indeed, the norm $N _{\Lambda '}^{W _{a}}(\lambda \eta )$ is equal to $\lambda ^{q}$, and it follows from the equality $\pi '= \pi _{0}b ^{q ^{n-1}}$ that $v(\pi ') \ge v(\pi _{0}) + \bar v(b)$. Thus it turns out that $$\bar v(\lambda ^{q}b ^{-1} - 1) > qv(\pi ') - v(b) > v(\pi _{0}) \ge \bar v(D(f _{\eta })) = q\bar v(f _{\eta } ^{\prime }(\eta )).$$ Therefore, applying (4.1) to the polynomial $f _{\eta }(X) + (-1) ^{q}(\lambda ^{q}b ^{-1} - 1)$ and the element $\eta $, one obtains that $\lambda ^{q}b ^{-1}$ and $b$ are contained in $N(W _{a}/\Lambda ^{\prime })$, which means that $[a, b) _{\Lambda '} \notin d(\Lambda ^{\prime })$, as claimed. 
Hence, $J = K$, and by the definition of the set $\mathcal{D}$, there exists a finite extension $\Lambda _{K}$ of $K$ in $K(q)$, such that $[D \otimes _{L} L\Lambda _{K}] \in {\rm Br}(L\Lambda _{K}/L\Lambda _{K}) = \{0\}$. In other words, $[D] \in {\rm Br}(L\Lambda _{K}/L)$, so (6.1) and the equality abrd$_{q}(K(q)) = 0$ are proved in case char$(K) = q$. \par Our objective now is to prove Theorem \ref{theo6.3} in the special case where $v$ is discrete. Clearly, one may assume, for our proof, that char$(K) = 0$. Note that there exist fields $\Psi _{\nu } \in I(K(q)/K)$, $\nu \in \mathbb{N}$, such that $\Psi _{\nu }/K$ is a totally ramified Galois extension with $[\Psi _{\nu }\colon K] = q ^{\nu }$ and $\mathcal{G}(\Psi _{\nu }/K)$ abelian of period $q$, for each index $\nu $, and $\Psi _{\nu '} \cap \Psi _{\nu ''} = K$ whenever $\nu ', \nu '' \in \mathbb{N}$ and $\nu '\neq \nu ''$. This follows from \cite{Ch9}, Lemma~2.3 (and Galois theory, which ensures that each finite separable extension has finitely many intermediate fields). Considering, if necessary, $\Psi _{1}$ instead of $K$, one obtains further that it is sufficient to prove Theorem \ref{theo6.3} under the extra hypothesis that $v(q) \in qv(K)$. In addition, the proof of the $q$-divisibility of $v(K(q))$ shows that, for the proof of Theorem \ref{theo6.3}, one may consider only the special case where $\widehat K$ is perfect. \par Let now $\Phi $ be a finite extension of $K$ in $K _{\rm sep}$, and $\Omega \in d(\Phi )$ a division algebra, such that $[\Omega ] \in {\rm Br}(\Phi ) _{q}$ and $[\Omega ] \neq 0$. We complete the proof of Theorem \ref{theo6.3} by showing that $[\Omega ] \in {\rm Br}(\Psi _{\nu }\Phi /\Phi )$, for every sufficiently large $\nu \in \mathbb{N}$. As $v$ is discrete and Henselian with $\widehat K$ perfect, the prolongation of $v$ on $\Phi $ (denoted also by $v$) and its residue field $\widehat \Phi $ preserve the same properties, so it follows from the assumptions on $\Omega $ that it is a cyclic NSR-algebra over $\Phi $, in the sense of \cite{JW}. In other words, there exists an inertial cyclic extension $Y$ of $\Phi $ in $K _{\rm sep}$ of degree $[Y\colon \Phi ] = {\rm deg}(\Omega )$, as well as an element $\tilde \pi \in \Phi ^{\ast }$ and a generator $y$ of $\mathcal{G}(Y/\Phi )$, such that $v(\tilde \pi ) \notin qv(\Phi )$ and $\Omega $ is isomorphic to the cyclic $\Phi $-algebra $(Y/\Phi , y, \tilde \pi )$. It follows from Galois theory and our assumptions on the fields $\Psi _{\nu }$, $\nu \in \mathbb{N}$, that $\Psi _{\nu } \cap Y = K$, for all $\nu $, with, possibly, finitely many exceptions. Fix $\nu $ so that $\Psi _{\nu } \cap Y = K$ and deg$(\Omega ).q ^{\mu } \le q ^{\nu }$, where $\mu $ is the greatest integer for which $q ^{\mu } \mid [\Phi \colon K]$. Put $\Omega _{\nu } = \Omega \otimes _{\Phi } \Psi _{\nu }\Phi $ and denote by $v _{\nu }$ the valuation of $\Psi _{\nu }\Phi $ extending $v$. It is easily obtained from Galois theory and the choice of $\nu $ (cf. \cite{L}, Ch. VI, Theorem~1.12) that $\Psi _{\nu }Y/\Psi _{\nu }\Phi $ is a cyclic extension, $[\Psi _{\nu }Y\colon \Psi _{\nu }\Phi ] = [Y\colon \Phi ] = {\rm deg}(\Omega )$, $y$ extends uniquely to a $\Psi _{\nu }\Phi $-automorphism $y _{\nu }$ of $\Psi _{\nu }Y$, $y _{\nu }$ generates $\mathcal{G}(\Psi _{\nu }Y/\Psi _{\nu }\Phi )$, and $\Omega _{\nu }$ is isomorphic to the cyclic $\Psi _{\nu }\Phi $-algebra $(\Psi _{\nu }Y/\Psi _{\nu }\Phi , y _{\nu }, \tilde \pi )$. 
Also, the assumptions on $\Psi _{\nu }$ show that $v _{\nu }(\tilde \pi ) \in q ^{\nu -\mu }v _{\nu }(\Psi _{\nu }\Phi )$. Therefore, by the theory of cyclic algebras (cf. \cite{P}, Sect. 15.1), and the divisibility deg$(\Omega ) \mid q ^{\nu -\mu }$, $\Omega _{\nu }$ is $\Psi _{\nu }\Phi $-isomorphic to $(\Psi _{\nu }Y/\Psi _{\nu }\Phi , y _{\nu }, \lambda _{\nu })$, for some $\lambda _{\nu } \in O _{v _{\nu }}(\Psi _{\nu }\Phi ) ^{\ast }$. Since $\widehat K$ is perfect (that is, $\widehat K = \widehat K ^{q ^{\ell }}$, for each $\ell \in \mathbb{N}$), a similar argument shows that $\lambda _{\nu }$ can be chosen to be an element of $\nabla _{0}(\Psi _{\nu }\Phi )$. Taking also into account that $\Psi _{\nu }Y/\Psi _{\nu }\Phi $ is inertial, one obtains from Lemma \ref{lemm6.2} that $\lambda _{\nu } \in N(\Psi _{\nu }Y/\Psi _{\nu }\Phi )$. Hence, by the cyclicity of $\Psi _{\nu }Y/\Psi _{\nu }\Phi $, $[\Omega _{\nu }] = 0$, i.e. $[\Omega ] \in {\rm Br}(\Psi _{\nu }\Phi /\Phi )$. As $\Psi _{\nu } \in I(K(q)/K)$ and $\Omega \in d(\Phi )$ represents an arbitrary nonzero element of Br$(\Phi ) _{q}$, now it is clear that Br$(\Phi ) _{q} = {\rm Br}(K(q)\Phi /\Phi )$, for each $\Phi \in {\rm Fe}(K)$, so Theorem \ref{theo6.3} is proved. \par At the end of this Section, we prove two lemmas which show that dim$(K _{\rm sol}) \le 1$ whenever $K$ is a field satisfying the conditions of Lemma \ref{lemm3.3}. \par \begin{lemm} \label{lemm6.4} Let $(K, v)$ be a Henselian field with {\rm char}$(\widehat K) = q$ and {\rm dim}$(\widehat K _{\rm sol})$ $\le 1$, and in case {\rm char}$(K) = 0 < q$, let $v$ be discrete. Then {\rm dim}$(K _{\rm sol}) \le 1$. \end{lemm} \par \begin{proof} Put $\mathbb{P} ^{\prime } = \mathbb{P} \setminus \{q\}$, and for each $p \in \mathbb{P} ^{\prime }$, fix a primitive $p$-th root of unity $\varepsilon _{p} \in K _{\rm sep}$ and a field $T _{p}(K) \in I(K _{\rm tr}/K)$ in accordance with Lemma \ref{lemm4.2} (b)-(c). Note first that the compositum $T(K)$ of the fields $T _{p}(K)$, $p \in \mathbb{P} ^{\prime }$, is a subfield of $K _{\rm sol}$. Indeed, $T _{p}(K) \in I(K(\varepsilon _{p})(p)/K)$, for each $p \in \mathbb{P} ^{\prime }$, so our assertion follows from Galois theory, the cyclicity of the extension $K(\varepsilon _{p})/K$, and the fact that finite solvable groups form a closed class under taking subgroups, quotient groups and group extensions. Secondly, Lemma \ref{lemm4.1} implies that the field $K _{\rm ur} \cap K _{\rm sol} := U$ satisfies $\widehat U = \widehat K _{\rm sol}$. Observing also that $v(T(K)) = pv(T(K))$, $p \in \mathbb{P} ^{\prime }$, and dim$(\widehat K _{\rm sol}) \le 1$, and using (4.2)~(a), (4.3)~(a) and \cite{JW}, Theorem~2.8, one obtains that $v(K ^{\prime }) = pv(K ^{\prime })$, Br$(K ^{\prime }) _{p} \cong {\rm Br}(\widehat K ^{\prime }) _{p}$ and Brd$_{p}(\widehat K ^{\prime }) = {\rm Brd}_{p}(K ^{\prime }) = 0$, for each $p \in \mathbb{P} ^{\prime }$ and every finite extension $K ^{\prime }/K_{\rm sol}$. When $q = 0$, this proves Lemma \ref{lemm6.4}, and when $q > 0$, our proof is completed by applying Theorem \ref{theo6.3}. \end{proof} \par \begin{lemm} \label{lemm6.5} Let $K _{m}$ be a complete $m$-discretely valued field with {\rm dim}$(K _{0,{\rm sol}})$ $\le 1$, $K _{0}$ being the $m$-th residue field of $K _{m}$. Then {\rm dim}$(K _{m,{\rm sol}}) \le 1$.
\end{lemm} \par \begin{proof} In view of Lemma \ref{lemm6.4}, one may consider only the case where $m \ge 2$. Denote by $K _{m-j}$ the $j$-th residue field of $K _{m}$, for $j = 1, \dots , m$. Suppose first that char$(K _{m}) = {\rm char}(K _{0})$. Using repeatedly Lemma \ref{lemm6.4}, one obtains that dim$(K _{m,{\rm sol}}) \le 1$, which allows us to assume, for the rest of our proof, that char$(K _{m}) = 0$ and char$(K _{0}) = q > 0$. Let $\mu $ be the maximal integer for which char$(K _{m-\mu }) = 0$. Then $0 \le \mu < m$, char$(K _{m-\mu -1}) = q$, and in case $\mu < m - 1$, $K _{m-\mu -1}$ is a complete $(m - \mu - 1)$-discretely valued field with last residue field $K _{0}$; also, $K _{m-\mu }$ is a complete discretely valued field with residue field $K _{m-\mu -1}$. Therefore, Lemma \ref{lemm6.4} yields dim$(K _{m-\mu ',{\rm sol}}) \le 1$, for $\mu ' = \mu , \mu + 1$. Note finally that if $\mu > 0$, then $K _{m}$ is a complete $\mu $-discretely valued field with $\mu $-th residue field $K _{m-\mu }$, and by Lemma \ref{lemm6.4} (used repeatedly), dim$(K _{m-m',{\rm sol}}) \le 1$, $m' = 0, \dots , \mu -1$, as required. \end{proof} \par \section{\bf Tame version of Lemma \ref{lemm3.3} for admissible Henselian fields} \par Let $(K, v)$ be a Henselian field with $\widehat K$ of arithmetic type and characteristic $q$, put $\mathbb{P} _{q} = \mathbb{P} \setminus \{q\}$, and suppose that abrd$_{p}(K) < \infty $, $p \in \mathbb{P}$, and $R$ is a central division LBD-algebra over $K$. Our main objective in this Section is to prove a modified version of Lemma \ref{lemm3.3}, where the fields $E _{p}$, $p \in \mathbb{P}$, are replaced by tamely ramified extensions $V _{p}$, $p \in \mathbb{P} _{q}$, of $K$ in $K _{\rm sep}$, chosen so as to satisfy the following conditions, for each $p \in \mathbb{P} _{q}$: $V _{p}$ is a $p$-splitting field of $R/K$, $[V _{p}\colon K]$ is a $p$-primary number, and $V _{p} \cap K _{\rm ur} \subseteq K(p)$. The desired modification is stated as Lemma \ref{lemm7.6}, and is also called a tame version of Lemma \ref{lemm3.3}. Our first step towards this goal can be formulated as follows: \par \begin{lemm} \label{lemm7.1} Let $(K, v)$ be a Henselian field and let $T/K$ be a tamely and totally ramified extension of $p$-primary degree $[T\colon K] > 1$, for some $p \in \mathbb{P}$. Then there exists a degree $p$ extension $T _{1}$ of $K$ in $T$. Moreover, $T _{1}/K$ is a Galois extension if and only if $K$ contains a primitive $p$-th root of unity. \end{lemm} \par \begin{proof} Our assumptions show that $v(T)/v(K)$ is an abelian $p$-group of order equal to $[T\colon K]$, whence, there is $\theta \in T$ with $v(\theta ) \notin v(K)$ and $pv(\theta ) \in v(K)$. Therefore, it follows that $K$ contains elements $\theta _{0}$ and $a$, such that $v(\theta _{0}) = pv(\theta ) = v(\theta ^{p})$, $v(a) = 0$ and $v(\theta ^{p} - \theta _{0}a) > 0$. This implies the existence of an element $\theta ' \in T$ satisfying $v(\theta ') > 0$ and $\theta ^{p} = \theta _{0}a(1 + \theta ')$. Note further that, by the assumption on $T/K$, $p \neq {\rm char}(\widehat T)$ and $\widehat T = \widehat K$; hence, by (4.1) (a), applied to the binomial $X ^{p} - (1 + \theta ')$, $1 + \theta ' \in T ^{\ast p}$. More precisely, $1 + \theta ' = (1 + \theta _{1}) ^{p}$, for some $\theta _{1} \in T$ of value $v(\theta _{1}) > 0$.
Observing \par\vskip0.044truecm\noindent now that $v(\theta _{0}a) \notin pv(K)$ and $(\theta (1 + \theta _{1}) ^{-1}) ^{p} = \theta _{0}a$, one obtains that the \par\vskip0.044truecm\noindent field $T _{1} = K(\theta (1 + \theta _{1}) ^{-1})$ is a degree $p$ extension of $K$ in $T$. Suppose finally that $\varepsilon $ is a primitive $p$-th root of unity lying in $T _{\rm sep}$. It is clear from the noted properties of $T _{1}$ that $T _{1}(\varepsilon )$ is the Galois closure of $T _{1}$ (in $T _{\rm sep}$) over $K$. Since $[K(\varepsilon )\colon K] \mid p - 1$ (see \cite{L}, Ch. VI, Sect. 3), this ensures that $T _{1}/K$ is a Galois extension if and only if $\varepsilon \in K$, so Lemma \ref{lemm7.1} is proved. \end{proof} \par The fields $V _{p}(K)$, $p \in \mathbb{P} \setminus \{{\rm char}(\widehat K)\}$, singled out by the next lemma play the same role in our tame version of Lemma \ref{lemm3.3} as the role of the maximal $p$-extensions $K(p)$, $p \in \mathbb{P}$, in the original version of Lemma \ref{lemm3.3}. \par \begin{lemm} \label{lemm7.2} Let $(K, v)$ be a Henselian field with {\rm abrd}$_{p}(\widehat K(p)) = 0$, for some $p \in \mathbb{P}$ different from {\rm char}$(\widehat K)$. Fix $T _{p}(K) \in I(T(K)/K)$ in accordance with Lemma \ref{lemm4.2} {\rm (c)}, and put $K _{0}(p) = K(p) \cap K _{\rm ur}$. Then {\rm abrd}$_{p}(V _{p}(K)) = 0$, where $V _{p}(K) = K _{0}(p).T _{p}(K)$. \end{lemm} \par \begin{proof} It follows from Lemma \ref{lemm4.2} (b) and (c) that $v(T ^{\prime }) = pv(T ^{\prime })$, for any $T ^{\prime } \in I(K _{\rm sep}/T _{p}(K))$; therefore, if $D ^{\prime } \in d(T ^{\prime })$ is of $p$-primary degree $\ge p$, then it is neither totally ramified nor NSR over $T ^{\prime }$. As $p \neq {\rm char}(\widehat K)$, this implies in conjunction with Decomposition Lemmas~5.14 and 6.2 of \cite{JW}, that $D ^{\prime }/T ^{\prime }$ is inertial. Thus it turns out that Br$(\widehat T ^{\prime }) _{p}$ must be nontrivial. Suppose now that $T ^{\prime } \in I(K _{\rm sep}/V _{p}(K))$. Then $\widehat T ^{\prime }/\widehat K(p)$ is a separable field extension, so the condition that abrd$_{p}(\widehat K(p)) = 0$ requires that Br$(\widehat T ^{\prime }) _{p} = \{0\}$. It is now easy to see that Br$(T ^{\prime }) _{p} = \{0\}$, i.e. Brd$_{p}(T ^{\prime }) = 0$. Since the field $T ^{\prime }$ is an arbitrary element of $I(K _{\rm sep}/V _{p}(K))$, this proves Lemma \ref{lemm7.2}. \end{proof} \par The following lemma presents the main properties of finite extensions of $K$ in $V _{p}(K)$, which are used for proving Lemma \ref{lemm3.3}. \par \begin{lemm} \label{lemm7.3} In the setting of Lemma \ref{lemm7.2}, let $V$ be an extension of $K$ in $V _{p}(K)$ of degree $p ^{\ell } > 1$. Then there exist fields $\Sigma _{0}, \dots , \Sigma _{\ell } \in I(V/K)$, such that $[\Sigma _{j}\colon K] = p ^{j}$, $j = 0, \dots , \ell $, and $\Sigma _{j-1} \subset \Sigma _{j}$ for every index $j > 0$. \end{lemm} \par \begin{proof} By Lemma \ref{lemm4.1} (d), the field $K$ has an inertial extension $V _{0}$ in $V$ with $\widehat V _{0} = \widehat V$. Moreover, it follows from (4.2) (a) and the inequality $p \neq {\rm char}(\widehat K)$ that $V/V _{0}$ is totally ramified. Considering the extensions $V _{0}/K$ and $V/V _{0}$, one concludes that it is sufficient to prove Lemma \ref{lemm7.3} in the special case where $V _{0} = V$ or $V _{0} = K$. If $V _{0} = V$, then our assertion follows from Lemma \ref{lemm4.1} (c), Galois theory and the subnormality of proper subgroups of finite $p$-groups (cf. 
\cite{L}, Ch. I, Sect. 6). When $V _{0} = K$, by Lemma \ref{lemm7.1}, there is a degree $p$ extension $V _{1}$ of $K$ in $V$. As $V/V _{1}$ is totally ramified, this allows us to complete the proof of Lemma \ref{lemm7.3} by a standard inductive argument. \end{proof} \par Theorem \ref{theo6.3} and our next lemma characterize the fields of arithmetic type among all fields admissible by some of Theorems \ref{theo3.1} and \ref{theo3.2}. These results show that an $m$-dimensional local field is of arithmetic type if and only if $m = 1$. They also prove that if $(K, v)$ is a Henselian field with char$(K) = {\rm char}(\widehat K)$, then $K$ is a field of arithmetic type, provided that it is virtually perfect, $\widehat K$ is of arithmetic type, $v(K)/pv(K)$ are finite groups, for all $p \in \mathbb{P}$, and $\widehat K$ contains a primitive $p$-th root of unity, for each $p \in \mathbb{P} \setminus \{{\rm char}(\widehat K)\}$. \par \begin{lemm} \label{lemm7.4} Assume that $(K, v)$ and $p$ satisfy the conditions of Lemma \ref{lemm7.2}, $\hat \varepsilon \in \widehat K _{\rm sep}$ is a primitive $p$-th root of unity, and $\tau (p)$ is the dimension of the group $v(K)/pv(K)$, viewed as a vector space over the field $\mathbb{Z}/p\mathbb{Z}$. Then {\rm abrd}$_{p}(K(p)) = 0$ unless $\hat \varepsilon \notin \widehat K$ and $\tau (p) + {\rm cd}_{p}(\mathcal{G}_{\widehat K(p)}) \ge 2$. \end{lemm} \par \begin{proof} It follows from (4.2) (a) and Lemma \ref{lemm4.1} that if $v(K) = pv(K)$, then $K(p) \subseteq K _{\rm ur}$, whence, $K(p) = K _{0}(p) = V _{p}(K)$, and by Lemma \ref{lemm7.2}, abrd$_{p}(K(p)) = 0$. This agrees with the assertion of Lemma \ref{lemm7.4} in case $v(K) = pv(K)$, since $p \neq {\rm char}(\widehat K)$ and, by Galois cohomology, we have abrd$_{p}(\widehat K(p)) = 0$ if and only if cd$_{p}(\mathcal{G}_{\widehat K(p)}) \le 1$ (see \cite{GiSz}, Theorem~6.1.8, or \cite{S1}, Ch. II, 3.1). Therefore, we assume in the rest of the proof that $v(K) \neq pv(K)$. Fix a primitive $p$-th root of unity $\varepsilon \in K _{\rm sep}$, and as in Lemma \ref{lemm7.3}, consider a finite extension $V$ of $K$ in $V _{p}(K)$. It is easily verified that $\varepsilon \in K$ if and only if $\hat \varepsilon \in \widehat K$, and this holds if and only if $\varepsilon \in V _{p}(K)$. Suppose that $[V\colon K] = p ^{\ell } > 1$ and take fields $\Sigma _{j} \in I(V/K)$, $j = 0, 1, \dots , \ell $, as required by Lemma \ref{lemm7.3}. Observing that $K(p) = K _{1}(p)$, for any $K _{1} \in I(K(p)/K)$, and using Galois theory and the normality of maximal subgroups of nontrivial finite $p$-groups, one obtains that $V \in I(K(p)/K)$ if and only if $\Sigma _{j}/\Sigma _{j-1}$ is a Galois extension, for every $j > 0$. In view of Lemma \ref{lemm7.1}, this occurs if and only if $\varepsilon \in K$ or $V \in I(K _{0}(p)/K)$. It is now easy to see that $K(p) = V _{p}(K)$ if $\varepsilon \in K$, and $K(p) = K _{0}(p)$, otherwise. Hence, by Lemma \ref{lemm7.2}, abrd$_{p}(K(p)) = 0$ in case $\varepsilon \in K$, as claimed by Lemma \ref{lemm7.4}. \par Assume finally that $v(K) \neq pv(K)$ and $\varepsilon \notin K$ (in this case, $p > 2$). It is easy to see that if cd$_{p}(\mathcal{G}_{\widehat K(p)}) = 1$, then there is a finite extension $Y$ of $K _{0}(p)$ in $K _{\rm ur}$, such that $\widehat Y(p) \neq \widehat Y$.
Therefore, there exists a degree $p$ cyclic extension $Y ^{\prime }$ of $Y$ in $Y _{\rm ur} = Y.K _{\rm ur}$, which ensures the existence of a nicely semi-ramified $Y$-algebra $\Lambda \in d(Y)$, in the sense of \cite{JW}, of degree $p$; this yields abrd$_{p}(K _{0}(p)) \ge {\rm Brd}_{p}(Y) \ge 1$. The inequality abrd$_{p}(K _{0}(p)) \ge 1$ also holds if $\tau (p) \ge 2$, i.e. $v(K)/pv(K)$ is noncyclic. Indeed, then Brd$_{p}(K _{0}(p)(\varepsilon )) \ge 1$; this follows from the fact that $v(K _{0}(p)(\varepsilon )) = v(K)$, which implies that the symbol $K _{0}(p)(\varepsilon )$-algebra $A _{\varepsilon }(a _{1}, a _{2}; K _{0}(p)(\varepsilon ))$ (defined, e.g., in \cite{Mat}) is a division algebra whenever $a _{1}$ and $a _{2}$ are elements of $K ^{\ast }$ chosen so that the cosets $v(a _{i}) + pv(K)$, $i = 1, 2$, generate a subgroup of $v(K)/pv(K)$ of order $p ^{2}$. \par In order to complete the proof of Lemma \ref{lemm7.4} it remains to be seen that abrd$_{p}(K _{0}(p)) = 0$ in case cd$_{p}(\mathcal{G}_{\widehat K(p)}) = 0$ and $\tau (p) = 1$. Since $p \neq {\rm char}(\widehat K)$, this is the same as to prove that cd$_{p}(\mathcal{G}_{K _{0}(p)}) \le 1$. As $K _{0}(p) = K _{\rm ur} \cap K(p)$, we have $v(K _{0}(p)) = v(K)$ and $\widehat {K _{0}(p)} = \widehat K(p)$, so it follows from \cite{Ch5}, Lemma~1.2, that cd$_{p}(\mathcal{G}_{K _{0}(p)}) = {\rm cd}_{p}(\mathcal{G}_{\widehat K(p)}) + \tau (p) = 1$, as claimed. \end{proof} \par \begin{rema} \label{rema7.5} Summing up Lemmas \ref{lemm4.4}, \ref{lemm4.5} and \ref{lemm7.4}, one obtains a complete valuation-theoretic characterization of the fields of arithmetic type among the maximally complete fields $(K, v)$ with abrd$_{p}(K) < \infty $, for every $p \in \mathbb{P}$. As demonstrated in the proof of Corollary \ref{coro4.6}, this fully describes the class $\mathcal{C} _{0}$ of those fields of arithmetic type which lie in the class $\mathcal{C}$ of generalized formal power series fields of finite absolute Brauer $p$-dimensions. Note that $\mathcal{C}$ is considerably larger than $\mathcal{C} _{0}$. For example, if $K _{0}$ is a finite field and $\Gamma $ is an ordered abelian group with finite quotients $\Gamma /p\Gamma $, for all $p \in \mathbb{P}$, then $K _{0}((\Gamma )) \in \mathcal{C} \setminus \mathcal{C} _{0}$ in case the quotients $\Gamma /p\Gamma $ are noncyclic, for infinitely many $p$. \end{rema} \par \label{tame} The conclusion of Lemma \ref{lemm7.3} remains valid if $K$ is an arbitrary field, $p \in \mathbb{P}$, and $V$ is a finite extension of $K$ in $K(p)$ of degree $p ^{\ell } > 1$; then the extensions $\Sigma _{j}/\Sigma _{j-1}$, $j = 1, \dots , \ell $, are Galois of degree $p$ (see \cite{L}, Ch. I, Sect. 6). Considering the proof of Lemma \ref{lemm7.4}, one also sees that, in the setting of Lemma \ref{lemm7.3}, $V _{p}(K) \subseteq K(p)$ if and only if $\widehat K$ contains a primitive $p$-th root of unity or $v(K) = pv(K)$. These observations and Lemma \ref{lemm7.4} allow us to view the following result as a tame version of Lemma \ref{lemm3.3}: \par \begin{lemm} \label{lemm7.6} Assume that $K$, $q$ and $R$ satisfy the conditions of Theorem \ref{theo3.1} or Theorem \ref{theo3.2}. Put $\mathbb{P} ^{\prime } = \mathbb{P} \setminus \{q\}$, and for each $p \in \mathbb{P} ^{\prime }$, denote by $k(p)$ the $p$-power of $R/K$, and by $V _{p}(K)$ the extension of $K$ in $K _{\rm sep}$ singled out by Lemma \ref{lemm7.2}.
Then there exist finite extensions $V _{p}$ of $K$, $p \in \mathbb P ^{\prime }$, with the following properties, for each $p$: \par {\rm (c)} $V _{p}$ is a $p$-splitting field of $R/K$, i.e. $p$ does not divide $[V _{p}(\delta _{p})\colon V _{p}]$, for any element $\delta _{p}$ of the underlying central division $V _{p}$-algebra $\Delta _{p}$ of $R \otimes _{K} V _{p}$; \par {\rm (cc)} $V _{p} \in I(V _{p}(K)/K)$, so $[V _{p}\colon K] = p ^{\ell (p)}$, for some integer $\ell (p) \ge k(p)$, and the maximal inertial extension $U _{p}$ of $K$ in $V _{p}$ is a subfield of $K(p)$. \end{lemm} \par \begin{proof} It is clearly sufficient to show that $R \otimes _{K} V _{p}(K) \cong M _{p^{k(p)}}(R ^{\prime })$, for some central division $V _{p}(K)$-algebra $R ^{\prime }$. Our proof relies on the inclusion $V _{p}(K) \subseteq K _{\rm sol}$. In view of the $V _{p}(K)$-isomorphism $R \otimes _{K} V _{p}(K) \cong (R \otimes _{K} Y _{p}) \otimes _{Y_{p}} V _{p}(K)$, for each $Y _{p} \in I(V _{p}(K)/K)$, this enables one to obtain from Lemma \ref{lemm5.1} that $R \otimes _{K} V _{p}(K)$ is $V _{p}(K)$-isomorphic to $M _{s(p)}(R _{p})$, for some central division $V _{p}(K)$-algebra $R _{p}$ and some $s(p) \in \mathbb{N}$ dividing $p ^{k(p)}$. In order to complete the proof of Lemma \ref{lemm7.6} we show that $p ^{k(p)} \mid s(p)$. Since, by Lemma \ref{lemm7.2}, abrd$_{p}(V _{p}(K)) = 0$, it can be deduced from \cite{Ch3}, Lemma~3.6, that for any finite extension $Y ^{\prime }$ of $V _{p}(K)$, $R _{p} \otimes _{V _{p}(K)} Y ^{\prime }$ is isomorphic as an $Y ^{\prime }$-algebra to $M _{y'}(R ^{\prime })$, for some $y' \in \mathbb{N}$ not divisible by $p$, and some central division LBD-algebra $R ^{\prime }$ over $Y ^{\prime }$. Note further that there is an $Y ^{\prime }$-isomorphism \par\vskip0.031truecm\noindent $R \otimes _{K} Y ^{\prime } \cong (R \otimes _{K} Y) \otimes _{Y} Y ^{\prime }$, for any $Y \in I(Y ^{\prime }/K)$. This, applied to the case \par\vskip0.031truecm\noindent where $Y = V _{p}(K)$, and together with the Wedderburn-Artin theorem and \par\vskip0.031truecm\noindent \cite{P}, Sect. 9.3, Corollary~b, leads to the conclusion that $R \otimes _{K} Y ^{\prime } \cong M _{s(p).y'}(R ^{\prime })$ \par\vskip0.031truecm\noindent as $Y ^{\prime }$-algebras. Considering again an arbitrary $Y \in I(Y ^{\prime }/K)$, one obtains \par\vskip0.023truecm\noindent similarly that if $R _{Y}$ is the underlying division $Y$-algebra of $R \otimes _{K} Y$, then \par\vskip0.027truecm\noindent there exists an $Y$-isomorphism $R \otimes _{K} Y \cong M _{y}(R _{Y})$, for some $y \in \mathbb{N}$ dividing \par\vskip0.027truecm\noindent $s(p).y'$. Suppose now that $Y ^{\prime } = V _{p}(K)Y$, for some finite extension $Y$ of $K$ in an algebraic closure of $V _{p}(K)$, such that $p ^{k(p)} \mid [Y\colon K]$ and $Y$ embeds in $R$ as a $K$-subalgebra. Then, by the previous observation, $p ^{k(p)} \mid s(p).y'$; since $p \nmid y'$, this implies $p ^{k(p)} \mid s(p)$ and so completes the proof of Lemma \ref{lemm7.6}. \end{proof} \par Lemmas \ref{lemm7.2}, \ref{lemm7.3}, \ref{lemm7.6} and the results of Sections 4 and 5 give us the possibility to deduce Lemma \ref{lemm3.3} by the method of proving \cite{Ch3}, Lemma~8.3. This is done in the following two Sections in two steps. 
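\par Let us also record, for the reader's convenience, the elementary matrix-size computation used in the final step of the proof of Lemma \ref{lemm7.6}; it is a routine consequence of the associativity of tensor products and of the Wedderburn-Artin theorem, and is included here only as an illustration: since $R \otimes _{K} V _{p}(K) \cong M _{s(p)}(R _{p})$ and $R _{p} \otimes _{V _{p}(K)} Y ^{\prime } \cong M _{y'}(R ^{\prime })$, one has $$R \otimes _{K} Y ^{\prime } \cong (R \otimes _{K} V _{p}(K)) \otimes _{V _{p}(K)} Y ^{\prime } \cong M _{s(p)}(R _{p} \otimes _{V _{p}(K)} Y ^{\prime }) \cong M _{s(p)}(M _{y'}(R ^{\prime })) \cong M _{s(p).y'}(R ^{\prime }).$$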
\par \section{\bf A special case of Lemma \ref{lemm3.3}} \par Let $K$ be a field and $R$ a central division LBD-algebra over $K$ satisfying the conditions of Theorem \ref{theo3.1} or Theorem \ref{theo3.2}, and put $q = {\rm char}(K _{0})$ in the former case, $q = {\rm char}(K)$ in the latter one. This Section gives a proof of Lemma \ref{lemm3.3} in the case where $q$ does not divide $[K(r)\colon K]$, for any $r \in R$. In order to achieve this goal we need the following two lemmas: \par \begin{lemm} \label{lemm8.1} Let $(K, v)$ be a field with {\rm dim}$(K _{\rm sol}) \le 1$ and {\rm abrd}$_{\ell }(K) < \infty $, for all $\ell \in \mathbb{P}$. Fix $p \in \mathbb{P}$ and a field $M \in I(M ^{\prime }/K)$, for some finite Galois extension $M ^{\prime }$ of $K$ in $K _{\rm sep}$ with $\mathcal{G}(M ^{\prime }/K)$ nilpotent and $[M ^{\prime }\colon K]$ not divisible by $p$. Assume that $R$ is a central division {\rm LBD}-algebra over $K$, $R _{M}$ is the underlying division $M$-algebra of $R \otimes _{K} M$, and there is an $M$-subalgebra $\Delta _{M}$ of $R _{M}$, such that the following (equivalent) conditions hold: \par {\rm (c)} $M$ is a $p'$-splitting field of $R/K$, for every $p' \in \mathbb{P}$ dividing $[M\colon K]$; \par\noindent $\Delta _{M} \in d(M)$ and {\rm deg}$(\Delta _{M}) = p ^{k(p)}$, where $k(p)$ is the $p$-power of $R/K$; \par {\rm (cc)} $\gcd \{p[M\colon K], [M(z _{M})\colon M]\} = 1$, for every $z _{M} \in C _{R _{M}}(\Delta _{M})$. \par \noindent Then $\Delta _{M} \cong \Delta \otimes _{K} M$ as $M$-algebras, for some subalgebra $\Delta \in d(K)$ of $R$. \end{lemm} \par \begin{proof} The equivalence of conditions (c) and (cc) follows from Lemmas \ref{lemm5.1} and \ref{lemm5.3}. Note further that if $M \neq K$, then $M$ contains as a subfield a cyclic extension $M _{0}$ of $K$ of degree $p' \neq p$. This is a consequence of the normality of maximal subgroups of nilpotent finite groups (established by the Burnside-Wielandt theorem, see \cite{KM}, Theorem~17.1.4) and Galois theory. Considering $M _{0}$ and the underlying division $M _{0}$-algebra $R _{0}$ of $R \otimes _{K} M _{0}$, instead of $K$ and $R$, respectively, and taking into account that the $M$-algebras $R \otimes _{K} M$ and $(R \otimes _{K} M _{0}) \otimes _{M _{0}} M$ are isomorphic, one concludes that conditions (c) and (cc) of Lemma \ref{lemm8.1} are fulfilled again. Therefore, a standard inductive argument shows that it suffices to prove Lemma \ref{lemm8.1} under the extra hypothesis that there exists a subalgebra $\Delta _{0} \in d(M _{0})$ of $R _{0}$, such that $\Delta _{0} \otimes _{M _{0}} M \cong \Delta _{M}$ as $M$-algebras. Let $\varphi $ be a $K$-automorphism of $M _{0}$ of order $p'$, and let $\bar \varphi $ be the unique $K$-automorphism of $R \otimes _{K} M _{0}$ extending $\varphi $ and acting on $R$ as the identity. Then it follows from the Skolem-Noether theorem (cf. \cite{He}, Theorem~4.3.1) and from the existence of an $M _{0}$-isomorphism $R \otimes _{K} M _{0} \cong M _{p*}(M _{0}) \otimes _{M _{0}} R _{0}$ (where $p* = p$ or $p* = 1$ depending on whether or not $M _{0}$ is embeddable in $R$ as a $K$-subalgebra) that $R _{0}$ has a $K$-automorphism $\tilde \varphi $ extending $\varphi $. Note also that $p \nmid [M _{0}(z _{0})\colon M _{0}]$, for any $z _{0} \in C _{R _{0}}(\Delta _{0})$. 
This is implied by Lemma \ref{lemm5.1}, condition (cc) of Lemma \ref{lemm8.1}, and the fact that $C _{R _{M}}(\Delta _{M})$ is the underlying division $M$-algebra of $C _{R _{M _{0}}}(\Delta _{0}) \otimes _{M _{0}} M$. Hence, by Lemma \ref{lemm5.4}, $\Delta _{0}$ is $M _{0}$-isomorphic to its image $\Delta _{0} ^{\prime }$ under $\tilde \varphi $, so it follows from the Skolem-Noether theorem that $\varphi $ extends to a $K$-automorphism of $\Delta _{0}$. As $p \nmid [M _{0}\colon K]$ and deg$(\Delta _{0}) = {\rm deg}(\Delta ) = p ^{k(p)}$, this enables one to deduce from Teichm\"{u}ller's theorem (cf. \cite{Dr1}, Sect. 9, Theorem~4) and \cite{Ch3}, Lemma~3.5, that there exists an $M _{0}$-isomorphism $\Delta _{0} \cong \Delta \otimes _{K} M _{0}$, for some central $K$-subalgebra $\Delta $ of $R$. \end{proof} \par \begin{lemm} \label{lemm8.2} Let $(K, v)$ be a Henselian field with {\rm abrd}$_{\ell }(K) < \infty $, $\ell \in \mathbb{P}$, and $\widehat K$ of arithmetic type, and let $R$ be a central division {\rm LBD}-algebra over $K$. Fix a primitive $p$-th root of unity $\varepsilon \in K _{\rm sep}$, for some $p \in \mathbb{P}$, $p \neq {\rm char}(\widehat K)$, and suppose that {\rm dim}$(K _{\rm sol}) \le 1$ and $R$ satisfies the following conditions: \par {\rm (i)} $p ^{2}$ and {\rm char}$(\widehat K)$ do not divide the degree $[K(\delta )\colon K]$, for any $\delta \in R$; \par {\rm (ii)} There is a $K$-subalgebra $\Theta $ of $R$, which is a totally ramified extension of $K$ of degree $[\Theta \colon K] = p$. \par\noindent Then there exists a central $K$-subalgebra $\Delta $ of $R$, such that {\rm deg}$(\Delta ) = p$ and $\Delta $ possesses a $K$-subalgebra isomorphic to $\Theta $. Moreover, if $\varepsilon \notin K$, then $\Delta $ contains as a $K$-subalgebra an inertial cyclic extension of $K$ of degree $p$. \end{lemm} \par \begin{proof} As in the proof of Lemma \ref{lemm7.1}, one obtains from the assumption on $\Theta /K$ that $\Theta = K(\xi )$, where $\xi $ is a $p$-th root of an element $\theta \in K ^{\ast }$ of value $v(\theta ) \notin pv(K)$. Suppose first that $\varepsilon \in K$. Then $\Theta /K$ is a cyclic extension, so it follows from the Skolem-Noether theorem that there exists $\eta \in R ^{\ast }$, such that $\eta \xi \eta ^{-1} = \varepsilon \xi $. As a first step towards our proof, we show that $\eta $ can be chosen so as to satisfy the following: \par \noindent (8.1) The field extension $K(\eta ^{p})/K$ is inertial. \par \noindent Put $\eta ^{p} = \rho $, $B = K(\rho )$, and $r = [B\colon \mathcal{B}]$, where $\mathcal{B}$ is the maximal inertial extension of $K$ in $B$. It is easily verified that $\xi \rho = \rho \xi $. Since $\xi \eta \neq \eta \xi $ and $\varepsilon \in K$, this means that $\eta \notin B$ and $[K(\eta )\colon K] = p$. Observing that $[K(\eta )\colon K] = [K(\eta )\colon B].[B\colon K]$, and by assumption, $p ^{2} \nmid [K(\eta )\colon K]$, one also obtains that $p \nmid [B\colon K]$. Therefore, $p \nmid r$, whence, the pairs $\xi , \eta $ and $\xi , \eta ^{r}$ generate the same $K$-subalgebra of $R$. Similarly, condition (i) of Lemma \ref{lemm8.2} shows that char$(\widehat K) \nmid r$, which leads to the conclusion that the set of those $b \in B$, for which $v(b) \in rv(B)$ equals the inner group product $\mathcal{B} ^{\ast }.\nabla _{0}(B)$. 
Since, by the Henselian property of $(B, v _{B})$, $\nabla _{0}(B) \subset B ^{\ast pr}$, this observation indicates that there exists a pair $\rho _{0} \in \mathcal{B} ^{\ast }$, $\rho _{1} \in B ^{\ast }$, such that $\rho ^{r} = \rho _{0}\rho _{1} ^{pr}$. Putting $\eta _{1} = (\eta \rho _{1} ^{-1}) ^{r.r'}$, for a fixed $r' \in \mathbb{N}$ satisfying $r.r' \equiv 1 ({\rm mod} \ p)$, one obtains that $\eta _{1}\xi \eta _{1} ^{-1} = \varepsilon \xi $ and $\eta _{1} ^{p} = \rho _{0} ^{r'} \in \mathcal{B}$, which proves (8.1). \par Our objective now is to prove the existence of a $K$-subalgebra $\Delta $ of $R$ with the properties required by Lemma \ref{lemm8.2}. Let $\mathbb{P} ^{\prime } = \mathbb{P} \setminus \{{\rm char}(\widehat K), p\}$, and for each $p' \in \mathbb{P} ^{\prime }$, take an extension $V _{p'}$ of $K$ in $K _{\rm tr}$ in accordance with Lemma \ref{lemm7.6}, and put $U _{p'} = V _{p'} \cap K _{\rm ur}$. Consider a sequence $\Pi _{n}$, $n \in \mathbb{N}$, of pairwise distinct finite subsets of $\mathbb{P} ^{\prime }$, such that $\cup _{n=1} ^{\infty } \Pi _{n} = \mathbb{P} ^{\prime }$ and $\Pi _{n} \subset \Pi _{n+1}$, for each index $n$. Denote by $W _{n}$ the compositum of the fields $V _{p _{n}}$, $p _{n} \in \Pi _{n}$, and by $R _{n}$ the underlying division $W _{n}$-algebra of $R \otimes _{K} W _{n}$, for any $n$. We show that $W _{n}$, $R _{n}$ and the $W _{n}$-algebra $\Theta _{n} = \Theta \otimes _{K} W _{n}$ satisfy conditions (i) and (ii) of Lemma \ref{lemm8.2}. It is easily verified that $[W _{n}\colon K] = \prod _{p _{n} \in \Pi _{n}} [V _{p _{n}}\colon K]$; in particular, $p \nmid [W _{n}\colon K]$ which ensures that $\Theta _{n}$ is a field. Moreover, it follows from (4.2) (a), that $\Theta _{n}/W _{n}$ is a totally ramified extension of degree $p$. Using the fact that $R \otimes _{K} \Theta _{n}$ is isomorphic to the $W _{n}$-algebras $(R \otimes _{K} \Theta ) \otimes _{\Theta } \Theta _{n}$ and $(R \otimes _{K} W _{n}) \otimes _{W _{n}} \Theta _{n}$ (cf. \cite{P}, Sect. 9.4, Corollary~a), one obtains from \cite{Ch3}, Lemma~3.5, and the uniqueness part of the Wedderburn-Artin theorem, that $\Theta _{n}$ embeds in $R _{n}$ as a $W _{n}$-subalgebra. Note also that $W _{n}$ and $R _{n}$ satisfy condition (i) of Lemma \ref{lemm8.2}; since $p$ and char$(\widehat K)$ do not divide $[W _{n}\colon K]$, this follows from Lemma \ref{lemm5.1} and \cite{Ch3}, Lemma~3.5. \par The next step towards our proof of the lemma can be stated as follows: \par \noindent (8.2) When $n$ is sufficiently large, $R _{n}$ has a $W _{n}$-subalgebra $\Delta _{n} \in d(W _{n})$, such that deg$(\Delta _{n}) = p$ and $\Theta _{n}$ embeds in $\Delta _{n}$ as a $W _{n}$-subalgebra. \par \noindent Our proof of (8.2) relies on Lemma \ref{lemm5.1} and the choice of the fields $W _{\nu }$, $\nu \in \mathbb{N}$, which indicate that, for any $\nu $, the degrees $[W _{\nu }(\delta _{\nu })\colon W _{\nu }]$, $\delta _{\nu } \in R _{\nu }$, are not divisible by any $p _{\nu } \in \Pi _{\nu }$. 
Arguing as in the proof of Lemma \ref{lemm5.5}, given in \cite{Ch3}, Sect.~8, one obtains from (8.1) the existence of a finite-dimensional \par\noindent $W _{\nu }$-subalgebra $\Lambda _{\nu }$ of $R _{\nu }$ satisfying the following: \par \noindent (8.3) (i) The centre $B _{\nu }$ of $\Lambda _{\nu }$ is an inertial extension of $W _{\nu }$ of degree not divisible by char$(\widehat K)$, $p$ and any $p _{n} \in \Pi _{n}$; moreover, by (4.2) (a) and Lemma \ref{lemm4.1} (d), $B _{\nu } = \mathcal{B} _{\nu }W _{\nu }$ and $[B _{\nu }\colon W _{\nu }] = [\mathcal{B} _{\nu }\colon \mathcal{W} _{\nu }]$, where $\mathcal{B} _{\nu }$ and $\mathcal{W} _{\nu }$ are the maximal inertial extensions of $K$ in $B _{\nu }$ and $W _{\nu }$, respectively. \par (ii) $\Lambda _{\nu }$ has degree $p$ as an algebra in $d(B _{\nu })$, $C _{R _{\nu }}(\Lambda _{\nu })$ is a central division $B _{\nu }$-algebra, and $C _{R _{\nu }}(\Lambda _{\nu })/B _{\nu }$ is of $p$-power zero (see Lemmas \ref{lemm5.1} and \ref{lemm5.3}). \par \noindent It is easy to see that the field $\mathcal{W} _{\nu }$ defined in (8.3) (i) equals the compositum of the fields $U _{p _{\nu }}$, $p _{\nu } \in \Pi _{\nu }$, for each $\nu \in \mathbb{N}$. Let now $\mathcal{W} _{\nu } ^{\prime }$ be the Galois closure of $\mathcal{W} _{\nu }$ in $K _{\rm sep}$ over $K$, and let $B _{\nu } ^{\prime }$ be a $W _{\nu }$-isomorphic copy of $B _{\nu }$ in $K _{\rm sep}$. Then it follows from Galois theory and Lemma \ref{lemm7.6} (cc) that $\mathcal{G}(\mathcal{W} _{\nu } ^{\prime }/K)$ decomposes into a direct product of finite $p _{\nu }$-groups, indexed by $\Pi _{\nu }$; hence, $p \nmid [\mathcal{W} _{\nu } ^{\prime }\colon K]$, and by the Burnside-Wielandt theorem, $\mathcal{G}(\mathcal{W} _{\nu } ^{\prime }/K)$ is nilpotent, for every $\nu $. Next, observing that, by the same theorem, maximal subgroups of nilpotent finite groups are normal of prime indices, and using Galois theory and (8.3) (i), one obtains that: \par \noindent (8.4) For any pair of indices $\nu , \nu '$ with $\nu < \nu '$, $B _{\nu } ^{\prime }W _{\nu '}/W _{\nu '}$ is a field extension of degree dividing $[B _{\nu } ^{\prime }\colon W _{\nu }] = [B _{\nu }\colon W _{\nu }]$. \par \noindent It is clear from (8.3) (i) and the assumptions on $\Pi _{\nu }$, $\nu \in \mathbb{N}$, that there exists an index $\nu _{0}$, such that all prime divisors of $[B _{\nu }\colon W _{\nu }]$ are greater than $p$, for each $\nu > \nu _{0}$. Similarly, for any $\nu $, one can find $\xi (\nu ) \in \mathbb{N}$ satisfying the condition $\gcd \{[B _{\nu '}\colon W _{\nu '}], [B _{\nu }\colon W _{\nu }]\} = 1$ whenever $\nu ' \in \mathbb{N}$ and $\nu ' > \nu + \xi (\nu )$. Thus it follows that, for each pair $\nu , n \in \mathbb{N}$ with $\nu _{0} < \nu < \xi (\nu ) < n - \nu $, we have $\gcd \{[B _{\nu }\colon W _{\nu }], [B _{n}\colon W _{n}]\} = 1$ and $\gcd \{[B _{\nu }\colon W _{\nu }][B _{n}\colon W _{n}], \tilde p\} = 1$, for \par\vskip0.04truecm\noindent every $\tilde p \in \mathbb{P}$ less than or equal to $p$. \par We show that $R _{n}$ possesses a central $W _{n}$-subalgebra $\Delta _{n}$ with the properties required by (8.2). Take a generator $\varphi $ of $\mathcal{G}(\Theta /K)$, and for each $\xi \in \mathbb{N}$, let $\varphi _{\xi }$ be the unique $W _{\xi }$-automorphism of $\Theta _{\xi }$ extending $\varphi $. Fix an embedding $\psi _{\xi }$ of $B _{\xi }$ in $K _{\rm sep}$ as a $W _{\xi }$-algebra, and denote by $B _{\xi } ^{\prime }$ the image of $B _{\xi }$ under $\psi _{\xi }$. 
Clearly, $\psi _{\xi }$ gives rise to a canonical bijection of $s(B _{\xi })$ upon $s(B _{\xi } ^{\prime })$, which in turn induces an isomorphism $\psi _{\xi }'\colon {\rm Br}(B _{\xi }) \to {\rm Br}(B _{\xi } ^{\prime })$. Denote by $\Sigma _{\xi }$ and $\Sigma _{\xi } ^{\prime }$ the underlying division algebras of $R _{\xi } \otimes _{W _{\xi }} B _{\xi }$ and $R _{\xi } \otimes _{W _{\xi }} B _{\xi } ^{\prime }$, respectively, and let $\widetilde B _{\xi }$ be a $W _{\xi }$-isomorphic copy of $B _{\xi }$ in the full matrix $W _{\xi }$-algebra $M _{b _{\xi }}(W _{\xi })$, where $b _{\xi } = [B _{\xi }\colon W _{\xi }]$. Using the fact that $M _{b _{\xi }}(R _{\xi }) \cong M _{b _{\xi }}(W _{\xi }) \otimes _{W _{\xi }} R _{\xi }$ over $W _{\xi }$, and applying the Skolem-Noether theorem to $B _{\xi }$ and $\widetilde B _{\xi }$, one obtains that $R _{\xi } \otimes _{W _{\xi }} B _{\xi }$ and $M _{b _{\xi }}(C _{R _{\xi }}(B _{\xi }))$ are isomorphic as $B _{\xi }$-algebras. Hence, by the Wedderburn-Artin theorem, so are $\Sigma _{\xi }$ and $C _{R _{\xi }}(B _{\xi })$. These observations allow to identify the $B _{\xi } ^{\prime }$-algebras $R _{\xi } \otimes _{W _{\xi }} B _{\xi } ^{\prime }$ and $M _{b _{\xi }}(\Sigma _{\xi } ^{\prime })$ and to prove the following fact: \par \noindent (8.5) There exists a $W _{\xi }$-isomorphism $\tilde \psi _{\xi }\colon M _{b _{\xi }}(C _{R _{\xi }}(B _{\xi })) \to (R _{\xi } \otimes _{W _{\xi }} B _{\xi } ^{\prime })$, which extends $\psi _{\xi }$ and maps $C _{R _{\xi }}(B _{\xi })$ upon $\Sigma _{\xi } ^{\prime }$. The image $\Lambda _{\xi } ^{\prime }$ of $\Lambda _{\xi }$ under $\tilde \psi _{\xi }$ is a central $B _{\xi } ^{\prime }$-subalgebra of $\Sigma _{\xi } ^{\prime }$ of degree $p$, which is a representative of the equivalence class $\psi _{\xi }'([\Lambda _{\xi }]) \in {\rm Br}(B _{\xi } ^{\prime })$. \par \noindent Now fix a pair $\nu , n$ so that $\nu _{0} < \nu < \xi (\nu ) < n - \nu $. Retaining notation as in (8.5), we turn to the proof of the following assertion: \par \vskip0.22truecm \noindent (8.6) The tensor products $\Lambda _{\nu } ^{\prime } \otimes _{B _{\nu }'} (B _{\nu } ^{\prime }B _{n} ^{\prime })$, $(\Lambda _{\nu } ^{\prime } \otimes _{B _{\nu }'} B _{\nu } ^{\prime }W _{n}) \otimes _{B _{\nu }'W _{n}} (B _{\nu } ^{\prime }B _{n} ^{\prime })$ and $\Lambda _{n} ^{\prime } \otimes _{B _{n}'} (B _{\nu } ^{\prime }B _{n} ^{\prime })$ are isomorphic central division $B _{\nu } ^{\prime }B _{n} ^{\prime }$-algebras. \par \vskip0.22truecm\noindent The statement that $\Lambda _{\nu } ^{\prime } \otimes _{B _{\nu }'} (B _{\nu } ^{\prime }B _{n} ^{\prime }) \cong (\Lambda _{\nu } ^{\prime } \otimes _{B _{\nu }'} B _{\nu } ^{\prime }W _{n}) \otimes _{B _{\nu }'W _{n}} (B _{\nu } ^{\prime }B _{n} ^{\prime })$ as \par \vskip0.032truecm\noindent $B _{\nu } ^{\prime }B _{n} ^{\prime }$-algebras is known (cf. \cite{P}, Sect. 9.4, Corollary~a), so it suffices to \par \vskip0.032truecm\noindent show that $\Lambda _{n} ^{\prime } \otimes _{B _{n}'} (B _{\nu } ^{\prime }B _{n} ^{\prime }) \cong (\Lambda _{\nu } ^{\prime } \otimes _{B _{\nu }'} B _{\nu } ^{\prime }W _{n}) \otimes _{B _{\nu }'W _{n}} (B _{\nu } ^{\prime }B _{n} ^{\prime })$ over $B _{\nu } ^{\prime }B _{n} ^{\prime }$. 
\par \vskip0.07truecm\noindent Denote by $\Sigma _{\nu ,n}$ and $\Sigma _{\nu ,n} ^{\prime }$ the underlying division $B _{\nu } ^{\prime }B _{n} ^{\prime }$-algebras of \par \vskip0.05truecm\noindent $\Sigma _{\nu } ^{\prime } \otimes _{B _{\nu }'} (B _{\nu } ^{\prime }B _{n} ^{\prime })$ and $\Sigma _{n} ^{\prime } \otimes _{B _{n}'} (B _{n} ^{\prime }B _{\nu } ^{\prime })$, respectively. Using Lemma \ref{lemm5.8}, one \par \vskip0.05truecm\noindent obtains that $\Sigma _{\nu ,n}$ and $\Sigma _{\nu ,n} ^{\prime }$ are isomorphic to the underlying division \par \vskip0.048truecm\noindent $B _{\nu } ^{\prime }B _{n} ^{\prime }$-algebra of $R \otimes _{K} B _{\nu } ^{\prime }B _{n} ^{\prime }$. Note also that $p \nmid [(B _{\nu } ^{\prime }B _{n} ^{\prime })\colon K]$; since \par\vskip0.056truecm\noindent $[B _{\nu } ^{\prime }B _{n} ^{\prime }\colon K] = [B _{\nu } ^{\prime }B _{n} ^{\prime }\colon W _{n}].[W _{n}\colon K]$ and $B _{\nu } ^{\prime }B _{n} ^{\prime } = B _{\nu } ^{\prime }W _{n}.B _{n} ^{\prime }$, the assertion \par \vskip0.052truecm\noindent follows from (8.3) (i), (8.4) and the fact that $p \nmid [W _{n}\colon K]$. In view of \par\vskip0.034truecm\noindent (8.3) (ii) and \cite{Ch3}, Lemma~3.5, this ensures that $\Lambda _{\nu } ^{\prime } \otimes _{B _{\nu }'} (B _{\nu } ^{\prime }B _{n} ^{\prime })$ and \par\vskip0.038truecm\noindent $\Lambda _{n} ^{\prime } \otimes _{B _{n}'} (B _{\nu } ^{\prime }B _{n} ^{\prime })$ are central division $B _{\nu } ^{\prime }B _{n}$-algebras which are embeddable in \par\vskip0.034truecm\noindent $\Sigma _{\nu ,n} ^{\prime }$ as $B _{\nu } ^{\prime }B _{n} ^{\prime }$-subalgebras. At the same time, it follows from Lemma \ref{lemm5.1} and the observation on $[B _{\nu } ^{\prime }B _{n} ^{\prime }\colon K]$ and $p$ that $\Sigma _{\nu ,n} ^{\prime }/(B _{\nu } ^{\prime }B _{n} ^{\prime })$ is of $p$-power one. Now the proof of (8.6) is completed by applying Lemma \ref{lemm5.8}. Since, by (8.4) and the choice of the indices $\nu , n$, we have $\gcd \{[B _{\nu } ^{\prime }W _{n}\colon W _{n}], [B _{n} ^{\prime }\colon W _{n}]\} = 1$, statement (8.6) and Lemma \ref{lemm5.9} imply the following: \par \noindent (8.7) There exists $\Delta _{n} \in d(W _{n})$, such that $\Delta _{n} \otimes _{W _{n}} B _{n} ^{\prime } \cong \Lambda _{n} ^{\prime }$ and \par \noindent $\Delta _{n} \otimes _{W _{n}} (B _{\nu } ^{\prime }W _{n}) \cong \Lambda _{\nu } ^{\prime } \otimes _{B _{\nu }'} (B _{\nu } ^{\prime }W _{n})$ (over $B _{n} ^{\prime }$ and $B _{\nu } ^{\prime }W _{n}$, respectively). \par \noindent It is clear from (8.7) and the $W _{n}$-isomorphism $B _{n} \cong B _{n} ^{\prime }$ that the $B _{n}$-algebras $\Delta _{n} \otimes _{W _{n}} B _{n}$ and $\Lambda _{n}$ are isomorphic, which proves (8.2). Applying (8.1), one obtains that $\Delta _{n,0} \otimes _{\mathcal{W} _{n}} W _{n} \cong \Delta _{n}$ as $W _{n}$-algebras, for some $\Delta _{n,0} \in d(\mathcal{W} _{n})$ (here $\mathcal{W} _{n} = K _{\rm ur} \cap W _{n}$). Since $\mathcal{G}(\mathcal{W} _{n} ^{\prime }/K)$ is nilpotent and $p \nmid [\mathcal{W} _{n} ^{\prime }/K]$ (see the observations proving (8.4)), this allows to deduce the former assertion of Lemma \ref{lemm8.2} from Lemma \ref{lemm8.1}, in case $\varepsilon \in K$. \par Let now $\varepsilon \notin K$, $[K(\varepsilon )\colon K] = m$, and $R _{\varepsilon }$ be the underlying division algebra of the central simple $K(\varepsilon )$-algebra $R \otimes _{K} K(\varepsilon )$. 
Then $K(\varepsilon )/K$ is a cyclic field extension and $m \mid p - 1$, which implies $\Theta (\varepsilon )/K(\varepsilon )$ is a totally ramified Kummer extension of degree $p$. Observing also that $R _{\varepsilon }$ is a central LBD-algebra over $K(\varepsilon )$, one obtains that $\Theta (\varepsilon )$ embeds in $R _{\varepsilon }$ as a $K(\varepsilon )$-subalgebra. At the same time, it follows from Lemma \ref{lemm5.1} that the $p$-power $k(p)_{\varepsilon }$ of $R _{\varepsilon }/K(\varepsilon )$ is less than $2$, i.e. $p ^{2} \nmid [K(\varepsilon , \delta ')\colon K(\varepsilon )]$, for any $\delta ' \in R _{\varepsilon }$. Hence, $k(p)_{\varepsilon } = 1$, and by the already considered special case of our lemma, $R _{\varepsilon }$ possesses a central $K(\varepsilon )$-subalgebra $\Delta _{\varepsilon }$, such that deg$(\Delta _{\varepsilon }) = p$ and there exists a $K(\varepsilon )$-subalgebra of $\Delta _{\varepsilon }$ isomorphic to $\Theta (\varepsilon )$. Let now $\varphi $ be a generator of $\mathcal{G}(K(\varepsilon )/K)$. Then $\varphi $ extends to an automorphism $\bar \varphi $ of $R _{\varepsilon }$ (as a $K$-algebra), so Lemma \ref{lemm5.4} ensures that $\Delta _{\varepsilon }$ is $K(\varepsilon )$-isomorphic to its image under $\bar \varphi $. Together with the Skolem-Noether theorem, this shows that $\bar \varphi $ can be chosen so that $\bar \varphi (\Delta _{\varepsilon }) = \Delta _{\varepsilon }$. Now it follows from Teichm\"{u}ller's theorem (and the equality $\gcd \{m, p\} = 1$) that there is a $K(\varepsilon )$-isomorphism $\Delta _{\varepsilon } \cong \Delta \otimes _{K} K(\varepsilon )$, for some $\Delta \in d(K)$ with deg$(\Delta ) = p$. Moreover, it can be deduced from \cite{Ch3}, Lemma~3.5, that $\Delta $ is isomorphic to a $K$-subalgebra of $R$, which in turn has a $K$-subalgebra isomorphic to $\Theta $. Hence, by Albert's criterion (see \cite{P}, Sect. 15.3), $\Delta $ is a cyclic $K$-algebra. Observe finally that cyclic degree $p$ extensions of $K$ are inertial. Since $p \neq {\rm char}(\widehat K)$ and $\varepsilon \notin K$, this is implied by (4.2) (a) and Lemma \ref{lemm7.1}, so Lemma \ref{lemm8.2} is proved. \end{proof} \par The main lemma of the present Section can be stated as follows: \par \begin{lemm} \label{lemm8.3} Assume that $(K, v)$ is a Henselian field with $\widehat K$ of arithmetic type, {\rm dim}$(K _{\rm sol}) \le 1$, and {\rm abrd}$_{p}(K) < \infty $, $p \in \mathbb{P}$, and let $R$ be a central division {\rm LBD}-algebra over $K$, such that {\rm char}$(\widehat K) \nmid [K(\delta )\colon K]$, for any $\delta \in R$. Then, for any $p \in \mathbb{P}$ not equal to {\rm char}$(\widehat K)$, there exists a $p$-splitting field $E _{p}$ of $R/K$, that is included in $K(p)$. \end{lemm} \par \begin{proof} Fix an arbitrary $p \in \mathbb{P} \setminus \{{\rm char}(\widehat K)\}$, take a primitive $p$-th root of unity $\varepsilon = \varepsilon _{p}$ in $K _{\rm sep}$, suppose that $T _{p}(K)$ is defined as in Lemma \ref{lemm4.1} (c), and put $V _{p}(K) = K _{0}(p).T _{p}(K)$, where $K _{0}(p) = K(p) \cap K _{\rm ur}$. For each $z \in \mathbb{P}$, denote by $k(z)$ the $z$-power of $R/K$, and let $\ell $ be the minimal integer $\ell (p) \ge 0$, for which there exists an extension $V _{p}$ of $K$ in $V _{p}(K)$ satisfying conditions (c) and (cc) of Lemma \ref{lemm7.6}. As shown in the proof of Lemma \ref{lemm7.4}, $K(p) = V _{p}(K)$ if $\varepsilon \in K$ or $v(K) = pv(K)$, and $K(p) = K _{0}(p)$, otherwise. 
In the former case, $V _{p}$ clearly has the properties claimed by Lemma \ref{lemm8.3}, so we suppose, for the rest of our proof, that $\varepsilon \notin K$, $v(K) \neq pv(K)$ and $V _{p}/K$ is chosen so that $[V _{p}\colon K] = p ^{\ell }$ and the ramification index $e(V _{p}/K)$ be minimal. Let $E _{p}$ be the maximal inertial extension of $K$ in $V _{p}$. Then it follows from Lemma \ref{lemm4.1}~(d) and the inequality $p \neq {\rm char}(\widehat K)$ that $\widehat E _{p} = \widehat V _{p}$; using also (4.2) (a), one sees that $V _{p}/E _{p}$ is totally ramified and $[V _{p}\colon E _{p}] = e(V _{p}/K)$. Note further that $E _{p} \subseteq K _{0}(p)$, by Lemma \ref{lemm7.6}, so it suffices for the proof of Lemma \ref{lemm8.3} to show that $V _{p} = E _{p}$ (i.e. $e(V _{p}/K) = 1$). Assuming the opposite and using Lemma \ref{lemm7.3}, with its proof, one obtains that there is an extension $\Sigma $ of $E _{p}$ in $V _{p}$, such that $[\Sigma \colon K] = p ^{\ell -1}$. \par The main step towards the proof of Lemma \ref{lemm8.3} is to show that $p$, the underlying division $\Sigma $-algebra $R _{\Sigma }$ of $R \otimes _{K} \Sigma $, and the field extension $V _{p}/\Sigma $ satisfy the conditions of Lemma \ref{lemm8.2}. Our argument relies on the assumption that dim$(K _{\rm sol}) \le 1$. In view of Lemma \ref{lemm5.1}, it guarantees that, for each $z \in \mathbb{P} \setminus \{p\}$, $k(z)$ is the $z$-power of $R _{\Sigma }/\Sigma $. Thus it turns out that char$(\widehat K) \nmid [\Sigma (\rho ')\colon \Sigma ]$, for any $\rho ' \in R _{\Sigma }$. At the same time, it follows from the Wedderburn-Artin theorem and the choice of $V _{p}$ and $\Sigma $ that there exist isomorphisms $R \otimes _{K} \Sigma \cong M _{\gamma }(R _{\Sigma })$ and $R \otimes _{K} V _{p} \cong M _{\gamma '}(R _{V_{p}})$ (as algebras over $\Sigma $ and $V _{p}$, respectively), where $\gamma '= p ^{k(p)}$, $\gamma \mid p ^{k(p)-1}$ and $R _{V_{p}}$ is the underlying division $V _{p}$-algebra of $R \otimes _{K} V _{p}$. Note further that the $\Sigma $-algebras $M _{\gamma }(R _{\Sigma })$ and $M _{\gamma }(\Sigma ) \otimes _{\Sigma } R _{\Sigma }$ are isomorphic, which enables one to deduce from the existence of a $V _{p}$-isomorphism $R \otimes _{K} V _{p} \cong (R \otimes _{K} \Sigma ) \otimes _{\Sigma } V _{p}$ (cf. \cite{P}, Sect. 9.4, Corollary~a) that $M _{\gamma '}(R _{V_{p}}) \cong M _{\gamma }(V _{p}) \otimes _{V_{p}} (R _{\Sigma } \otimes _{\Sigma } V _{p})$ as $V _{p}$-algebras; hence, by Wedderburn-Artin's theorem and the inequality $\gamma < \gamma '$, $R _{\Sigma } \otimes _{\Sigma } V _{p}$ is not a division algebra. This, combined with \cite{Ch3}, Lemma~3.5, and the equality $[V _{p}\colon \Sigma ] = p$, proves that $R _{\Sigma } \otimes _{\Sigma } V _{p} \cong M _{p}(R _{V_{p}} ^{\prime })$, for some central division $V _{p}$-algebra $R _{V_{p}} ^{\prime }$ (which means that $V _{p}$ is embeddable in $R _{\Sigma }$ as a $\Sigma $-subalgebra). It is now easy to see that $$M _{\gamma '}(R _{V_{p}}) \cong M _{\gamma }(V _{p}) \otimes (M _{p}(V _{p}) \otimes _{V _{p}} R _{V_{p}} ^{\prime }) \cong (M _{\gamma }(V _{p}) \otimes _{V _{p}} M _{p}(V _{p})) \otimes _{V_{p}} R _{V_{p}} ^{\prime }$$ $$\cong M _{\gamma p}(V _{p}) \otimes _{V _{p}} R _{V_{p}} ^{\prime } \cong M _{p\gamma }(R _{V _{p}} ^{\prime }).$$ Using Wedderburn-Artin's theorem, one obtains that $\gamma = \gamma '/p = p ^{k(p)-1}$ \par\vskip0.07truecm\noindent and $R _{V_{p}} \cong R _{V_{p}} ^{\prime }$ over $V _{p}$. 
Therefore, by Lemma \ref{lemm5.1}, $p ^{2} \nmid [\Sigma (\rho ')\colon \Sigma ]$, for any $\rho ' \in R _{\Sigma }$, which completes the proof of the fact that $p$, $R _{\Sigma }$ and $V _{p}/\Sigma $ satisfy the conditions of Lemma \ref{lemm8.2}. Furthermore, it follows that a finite extension of $\Sigma $ is a $p$-splitting field of $R _{\Sigma }/\Sigma $ if and only if it is a such a field for $R/K$. \par We are now in a position to complete the proof of Lemma \ref{lemm8.3} in the case where $\varepsilon \notin K$ and $v(K) \neq pv(K)$. By Lemma \ref{lemm8.2}, there exists a central $\Sigma $-subalgebra $\Delta $ of $R _{\Sigma }$, such that deg$(\Delta ) = p$ and $V _{p}$ is embeddable in $\Delta $ as a $\Sigma $-subalgebra; hence, by \cite{He}, Theorem~4.4.2, $R _{\Sigma } = \Delta \otimes _{\Sigma } C(\Delta )$, where $C(\Delta )$ is the centralizer of $\Delta $ in $R _{\Sigma }$. In addition, $C(\Delta )$ is a central division $\Sigma $-algebra, and since $p ^{2} \nmid [\Sigma (\rho ')\colon \Sigma ]$, for any $\rho ' \in R _{\Sigma }$, it follows from Lemma \ref{lemm5.3} that $p \nmid [\Sigma (c)\colon \Sigma ]$, for any $c \in C(\Delta )$. Note also that $\gcd \{[K(\varepsilon )\colon K], [\Sigma \colon K]\} = 1$, whence, $K(\varepsilon ) \cap \Sigma = K$ and $\varepsilon \notin \Sigma $. Therefore, Lemma \ref{lemm8.2} requires the existence of a degree $p$ cyclic extension $\Sigma ^{\prime }$ of $\Sigma $ in $K _{\rm sep}$, which is inertial over $\Sigma $ (by Lemma \ref{lemm7.1}) and embeds in $\Delta $ as a $\Sigma $-subalgebra. This implies $\Sigma ^{\prime }$ is a $p$-splitting field of $R _{\Sigma }/\Sigma $ and $R/K$ (see Lemma \ref{lemm5.3} (c) and \cite{P}, Lemma~13.4 and Corollary~13.4), $[\Sigma ^{\prime }\colon K] = p ^{\ell }$, $e(\Sigma ^{\prime }/K) = e(V _{p}/K)/p$, and $\widehat \Sigma ^{\prime }/\widehat \Sigma $ is a cyclic extension of degree $p$. Taking finally into consideration that $\widehat \Sigma \in I(\widehat K(p)/\widehat K)$, and using Lemma \ref{lemm4.1}, one obtains consecutively that $\widehat \Sigma ^{\prime } \in I(\widehat K(p)/\widehat K)$ and $E _{p}$ has a degree $p$ extension $E ^{\prime }$ in $\Sigma ^{\prime } \cap K _{0}(p)$. It is now easy to see that $\Sigma ^{\prime } = E ^{\prime }\Sigma $ and $\Sigma ^{\prime } \in I(V _{p}(K)/K)$. The obtained properties of $\Sigma ^{\prime }$ show that it satisfies conditions (c) and (cc) of Lemma \ref{lemm7.6}. As $e(\Sigma ^{\prime }/K) < e(V _{p}/K)$, this contradicts our choice of $V _{p}$ and thereby yields $e(V _{p}/K) = 1$, i.e. $V _{p} = E _{p}$, so Lemma \ref{lemm8.3} is proved. \end{proof} \par \section{\bf Proof of Lemma \ref{lemm3.3} and the main results} \par We begin this Section with a lemma which shows how to deduce Lemma \ref{lemm3.3} in general from its validity in the case where $q > 0 = k(q)$ ($q$ is defined at the beginning of Section 8, and $k(q)$ is the $q$-power of $R/K$). \par \begin{lemm} \label{lemm9.1} Let $(K, v)$ be a Henselian field with $\widehat K$ of arithmetic type, \par\noindent {\rm char}$(\widehat K) = q$, {\rm dim}$(K _{\rm sol}) \le 1$ and {\rm abrd}$_{p}(K) < \infty $, $p \in \mathbb{P}$. Put $\mathbb{P} ^{\prime } = \mathbb{P} \setminus \{q\}$, take a central division {\rm LBD}-algebra $R$ over $K$, and in case $q > 0$, assume that $K$ has an extension $E _{q}$ in $K(q)$ that is a $q$-splitting field of $R/K$. Then, for each $p \in \mathbb{P} ^{\prime }$, there is a $p$-splitting field $E _{p}$ of $R/K$, lying in $I(K(p)/K)$. 
\end{lemm} \par \begin{proof} Our assertion is contained in Lemma \ref{lemm8.3} if $q = 0$, so we assume that $q > 0$. Let $\mathcal{R} _{q}$ be the underlying division $E _{q}$-algebra of $R \otimes _{K} E _{q}$, and for each $p \in \mathbb{P}$, let $k(p)'$ be the $p$-power of $\mathcal{R} _{q}/E _{q}$. Lemma \ref{lemm5.1} (c) and the assumption on $E _{q}$ ensure that $k(q)' = 0$, and $k(p)'$ equals the $p$-power of $R/K$ whenever $p \in \mathbb{P} ^{\prime }$. Therefore, by Lemma \ref{lemm8.3}, for each $p \in \mathbb{P} ^{\prime }$, there is an extension $E _{p} ^{\prime }$ of $E _{q}$ in $E _{q}(p)$, which is a $p$-splitting field of $\mathcal{R} _{q}/E _{q}$. This enables one to deduce from Lemmas \ref{lemm5.5} and \ref{lemm5.6} that there exist $E _{q}$-algebras $\Delta _{p} ^{\prime } \in d(E _{q})$, $p \in \mathbb{P} ^{\prime }$, embeddable in $\mathcal{R} _{q}$, and such that deg$(\Delta _{p} ^{\prime }) = p ^{k(p)'}$, for every $p \in \mathbb{P} ^{\prime }$. Hence, by Lemma \ref{lemm8.1}, $R$ possesses central $K$-subalgebras $\Delta _{p} \in d(K)$, $p \in \mathbb{P} ^{\prime }$, with $\Delta _{p} \otimes _{K} E _{q} \cong \Delta _{p} ^{\prime }$ as $E _{q}$-algebras, for each index $p$. In view of Lemmas \ref{lemm4.3} and \ref{lemm5.3} (c), this proves Lemma \ref{lemm9.1}. \end{proof} \par We are now prepared to complete the proof of Lemma \ref{lemm3.3} in general, and thereby to prove Theorems \ref{theo3.1} and \ref{theo3.2}. If $(K, v)$ and $\widehat K$ satisfy the conditions of Theorem \ref{theo3.2}, then the conclusion of Lemma \ref{lemm3.3} follows from Theorem \ref{theo6.3} and Lemma \ref{lemm9.1}. As noted in Remark \ref{rema5.7}, this leads to a proof of Theorem \ref{theo3.2}. \par \begin{rema} \label{rema9.2} Let $(K, v)$ be an HDV-field with $\widehat K$ of arithmetic type and virtually perfect, and let $R$ be a central division {\rm LBD}-algebra over $K$. Then it follows from Theorem \ref{theo6.3} and Lemma \ref{lemm9.1} that, for each $p \in \mathbb{P}$, there exists a finite extension $E _{p}$ of $K$ in $K(p)$, which is a $p$-splitting field of $R/K$. Therefore, as in Remark \ref{rema5.7}, one concludes that $R$ has a central $K$-subalgebra $\widetilde R$ subject to the restrictions of Conjecture \ref{conj1.1}. This proves Theorem \ref{theo3.1} in case $m = 1$. \end{rema} \par In the rest of the proof of Lemma \ref{lemm3.3}, we assume that $m \ge 2$, $K = K _{m}$ is a complete $m$-discretely valued field whose $m$-th residue field $K _{0}$ is virtually perfect of characteristic $q$ and arithmetic type, $v$ is the standard Henselian $\mathbb{Z} ^{m}$-valued valuation of $K _{m}$ with $\widehat K _{m} = K _{0}$, and $K _{m-m'}$ is the $m'$-th residue field of $K _{m}$, for $m' = 1, \dots , m$. Recall that $K _{m-m'+1}$ is complete with respect to a discrete valuation $w _{m'-1}$ with a residue field $K _{m-m'}$, for each $m'$, and $v$ equals the composite valuation $w _{m-1} \circ \dots \circ w _{0}$. Considering $(K, v)$ and $(K, w _{0})$, and arguing as in the proof of Theorem \ref{theo3.2}, one obtains that the conclusions of Theorem \ref{theo3.1} and Lemma \ref{lemm3.3} hold if either $q = 0$ or char$(K _{m-1}) = q > 0$. It remains for us to prove Theorem \ref{theo3.1} under the hypothesis that $q > 0$ and char$(K _{m-1}) = 0$ (so char$(K _{m}) = 0$). 
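\par For orientation, a typical example of this remaining configuration, recorded here only as an illustration, is the $m$-dimensional local field $K _{m} = \mathbb{Q} _{q}((t _{1})) \dots ((t _{m-1}))$: in this case $K _{1} = \mathbb{Q} _{q}$ and $K _{0} = \mathbb{F} _{q}$, so ${\rm char}(K _{m-1}) = 0$ while ${\rm char}(K _{0}) = q > 0$, and $v$ is the composite $w _{m-1} \circ \dots \circ w _{0}$ of the corresponding standard discrete valuations.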
Denote by $\mu $ the maximal index for which char$(K _{m-\mu }) = 0$, fix a primitive $q$-th root of unity $\varepsilon \in K _{\rm sep}$, put $K ^{\prime } = K(\varepsilon )$, $v'= v _{K'}$, and denote by $R ^{\prime }$ the underlying division $K ^{\prime }$-algebra of $R \otimes _{K} K ^{\prime }$. It is clear from Lemma \ref{lemm7.4}, applied to $R ^{\prime }$ and $(K ^{\prime }, v')$, that $R ^{\prime }$ satisfies the condition of Lemma \ref{lemm9.1}, whence, for each $p \in \mathbb{P}$, there exists a finite extension $E _{p} ^{\prime }$ of $K ^{\prime }$ in $K ^{\prime }(p)$, which is a $p$-splitting field of $R ^{\prime }/K ^{\prime }$. Similarly to the proof of Lemma \ref{lemm9.1}, this allows to show that, for each $p \in \mathbb{P}$, $R ^{\prime }$ possesses a $K ^{\prime }$-subalgebra $\Delta _{p} ^{\prime } \in d(K ^{\prime })$ of degree $p ^{k(p)'}$, where $k(p)'$ is the $p$-power of $R ^{\prime }/K ^{\prime }$. Using the fact that $K ^{\prime }/K$ is a cyclic field extension with $[K ^{\prime }\colon K] \mid q - 1$, and applying Lemma \ref{lemm8.1} to $K ^{\prime }$, $R ^{\prime }$ and $\Delta _{q} ^{\prime }$, one concludes that $R$ has a $K$-subalgebra $\Delta _{q} \in d(K)$, such that $\Delta _{q} \otimes _{K} K ^{\prime } \cong \Delta _{q} ^{\prime }$ as a $K ^{\prime }$-subalgebra. Obviously, deg$(\Delta _{q}) = q ^{k(q)'}$, and it follows from Lemma \ref{lemm5.1} and the divisibility $[K ^{\prime }\colon K] \mid q - 1$ that $k(q)'$ \par\vskip0.032truecm\noindent equals the $q$-power $k(q)$ of $R/K$. Observe now that abrd$_{q}(K _{m-\mu }(q)) \le 1$ and abrd$_{q}(K _{m-\mu }) < \infty $. As char$(K _{m-\mu -1}) = q$, the former inequality is implied by Theorem \ref{theo6.3} and the fact that $(K _{m-\mu }, w _{\mu })$ is an HDV-field with a residue field $K _{m-\mu -1}$. The latter one can be deduced from \cite{PS}, Corollary~2.5, since \par\vskip0.032truecm\noindent $(K _{m-\mu }, w _{\mu })$ is complete and $[K _{m-\mu -1}\colon K _{m-\mu -1} ^{q}] < \infty $. Note also that the \par\vskip0.032truecm\noindent composite valuation $\kappa _{\mu } = w _{\mu -1} \circ \dots \circ w _{0}$ of $K$ is Henselian with a residue \par\vskip0.032truecm\noindent field $K _{m-\mu }$ and $\kappa _{\mu }(K) \cong \mathbb{Z} ^{\mu }$. Hence, by Lemma \ref{lemm4.4}, abrd$_{q}(K _{m}) < \infty $. Applying finally Lemma \ref{lemm4.3} to $\Delta _{q}/K$ and $\kappa _{\mu }$, as well as Lemma \ref{lemm5.3} to $R$ and $\Delta _{q}$, one concludes that $(K, v)$, $q$ and $R/K$ satisfy the conditions of Lemma \ref{lemm9.1}. Therefore, for each $p \in \mathbb{P}$, $K$ has a finite extension $E _{p}$ in $K(p)$, which is a $p$-splitting field of $R/K$. Thus Lemma \ref{lemm3.3} is proved. As explained in Remark \ref{rema5.7}, this enables one to complete the proof of Theorem \ref{theo3.1}. \par Note finally that, in the setting of Conjecture \ref{conj1.1}, it is unknown whether there exists a sequence $E _{p}$, $p \in \mathbb{P}$, of $p$-splitting fields of $R/K$, such that \par\noindent $E _{p} \subseteq K(p)$, for each $p$. In view of Proposition \ref{prop2.1} and \cite{Me}, Conjecture~1 (see also the end of \cite{P}, Ch. 15), and since Questions~2.3~(a) and (b) are open, the answer is affirmative in all presently known cases. When $R$ is an LFD-algebra and $K$ contains a primitive $p$-th root of unity, for every $p \in \mathbb{P}$, $p \neq {\rm char}(K)$, such an answer follows from Proposition \ref{prop2.1}, combined with \cite{A1}, Ch. 
VII, Theorem~28, and the Merkur'ev-Suslin theorem \cite{MS}, (16.1) (see also \cite{GiSz}, Theorem~9.1.4 and Ch. 8, respectively). This supports the idea of making further progress in the study of Conjecture \ref{conj1.1} by finding a generalization of Lemma \ref{lemm3.3} for more fields $K$ with abrd$_{p}(K) < \infty $, $p \in \mathbb{P}$, than those singled out by Theorems \ref{theo3.1} and \ref{theo3.2}. To conclude, it would surely be of interest to learn whether a proof of Conjecture \ref{conj1.1}, for a field $K$ admissible by Proposition \ref{prop2.1}, could lead to an answer to Question~2.3~(b), for central division LBD-algebras over $K$. \par \emph{Acknowledgement.} I am grateful to A.A. Panchishkin, P.N. Siderov (1952-2016), A.V. Mikhal\"{e}v (1939-2022), V.N. Latyshev (1934-2020), N.I. Dubrovin, and V.I. Yanchevskij for useful discussions on a number of aspects of valuation theory and associative simple rings concerning the topic of this paper, which stimulated my work on its earliest draft in \cite{Ch2}. The present research has been partially supported by Grant KP-06 N 32/1 of the Bulgarian National Science Fund. \par \vskip0.04truecm \end{document}
\begin{document} \title{Splitting-Particle Methods for Structured Population Models: Convergence and Applications} \begin{abstract} We propose a new numerical scheme designed for a wide class of structured population models based on the idea of operator splitting and particle approximations. This scheme is related to the Escalator Boxcar Train (EBT) method commonly used in biology, which is in essence an analogue of particle methods used in physics. Our method exploits the split-up technique, thanks to which the transport step and the nonlocal integral terms in the equation can be treated separately. The order of convergence of the proposed method is obtained in the natural space of finite nonnegative Radon measures equipped with the flat metric. This convergence is studied even when reconstruction and approximation steps are added in the particle simulation to keep the number of approximation particles under control. We validate our scheme in several test cases, confirming the theoretically predicted convergence order. Finally, we use the scheme in situations in which the EBT method does not apply, showing the flexibility of this new method to cope with the different terms in general structured population models. \end{abstract} \noindent Key words: structured population models, particle methods, measure-valued solutions, Radon measures, flat metric. \ \noindent AMS Classification: 92D25, 65M12, 65M75. \section{Introduction} The main purpose of population dynamics models is to describe the evolution of a population, which changes its size, structure, or trait due to birth, growth, death, selection, and mutation processes. Historically, the first models were based on linear ordinary differential equations, and as a consequence exponentially growing solutions were typically obtained. However, in many cases this is not realistic, since the exponential growth can be inhibited by environmental limitations such as lack of nutrients, space, partners for reproduction, etc. Additionally, these models leave out of consideration the individual's stage of development, which strongly influences its vital functions. For example, fertility and death rates depend heavily on the age of human beings, the process of cell mitosis can be influenced by the age, size, or maturity level of the cell, and the trait of an offspring may depend on the parents' traits. Taking into account the population structure usually leads to first-order hyperbolic equations. Finally, subsequent generations of individuals produce slight changes in their traits due to small mutations. Selection-mutation models typically lead to nonlocal terms, since the traits of the offspring differ from those of the parents. This paper is devoted to the numerical analysis of such equations written in general as \begin{align}\label{eq1} \frac{\partial}{\partial t} \mu + \frac{\partial}{\partial x}(b(t,\mu)\mu) + c(t,\mu) \mu & = \int_{{\mathbb{R}}^{+}}(\eta(t,\mu) )(y)\d \mu(y), \\ \nonumber \mu(0) & = \mu_o, \end{align} where $t \in [0,T]$ and $x \geq 0$ denote time and a structural variable, respectively, $b, c, \eta$ are vital functions depending on $x\geq 0$, and $\mu$ is a Radon measure describing the distribution of individuals with respect to the trait/variable $x$. The function $b(t, \mu)$ describes the dynamics of the transformation of the individual's state. More precisely, the individual changes its state according to the following ODE $$ \dot{x} = b(t, \mu)(x). $$ By $c(t,\mu)(x)$ we denote a rate of evolution (growth or death rate).
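\par For orientation, we recall a classical special case of \eqref{eq1}, included only as an illustration: choosing $b(t,\mu) \equiv 1$, $c(t,\mu)(x) = d(x)$ for a given mortality rate $d$, and $\eta(t,\mu)(y) = \beta(y)\, \delta_{x = 0}$ for a given fertility rate $\beta$, so that all newborns enter the population at the state $x = 0$, equation \eqref{eq1} reduces, for measures $\mu(t)$ with density $n(t,\cdot)$, to the linear age-structured (McKendrick--von Foerster) model $$ \frac{\partial}{\partial t} n(t,x) + \frac{\partial}{\partial x} n(t,x) + d(x)\, n(t,x) = 0, \qquad n(t,0) = \int_{{\mathbb{R}}^{+}} \beta(y)\, n(t,y)\, \d y. $$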
The integral on the right-hand side accounts for the influx of new individuals into the system. We assume that the measure-valued function $\eta$ is of the form \begin{equation}\label{form_of_eta} \eta(t,\mu)(y) = \sum_{p=1}^{r}\beta_p(t,\mu)(y) \delta_{x = \bar x_p(y)}, \end{equation} which means that an individual at the state $y$ gives rise to offspring at the states $\{\bar x_p(y)\}$, $p = 1, \dots, r$. The integral on the right-hand side has to be understood in the Bochner sense, that is, by duality against test functions $\varphi\in \mathbf{C}_0({\mathbb{R}}^+)$ as \begin{equation}\label{form_of_eta2} \int_{{\mathbb{R}}^+} \int_{{\mathbb{R}}^+}\varphi(x)[\d\eta(t,\mu)(y)](x) \d\mu(y) = \sum_{p=1}^{r} \int_{{\mathbb{R}}^+} \beta_p(t,\mu)(y) \varphi(\bar x_p(y)) \d\mu(y)\,. \end{equation} In case all newborn individuals have the same physiological state $x^b$, then \begin{equation}\label{form_of_eta3} \eta(t,\mu)(y) = \beta(t,\mu)(y) \delta_{x = x^b}\,, \end{equation} and the integral in \eqref{form_of_eta2} transforms into a boundary condition. We restrict ourselves to integral operators of the form \eqref{form_of_eta} for the sake of simplicity. In fact, the continuous dependence of solutions of \eqref{eq1} with respect to $\eta$ established in \cite{CCC} allows the general case to be approximated by integral operators of the form \eqref{form_of_eta}, and thus this restriction comes without loss of generality, see Remark \ref{aproxeta}. In the present paper, we develop a numerical scheme, which is based on results obtained in \cite{Spop}, for the equation \eqref{eq1}. It turns out that the measure setting used in the latter paper is convenient not only from the analytical but also from the practical numerical simulation viewpoint. Note that the result of a measurement or an observation is usually the number of individuals whose state is within a specific range. For example, demographic data provide the number of humans within certain age cohorts. A natural way of translating such data into mathematical language is to make use of Dirac Deltas. This intuitive idea was the basis for a numerical scheme called the {Escalator Boxcar Train} (EBT) method, developed in \cite{Roos}. This method approximates, in some sense, a solution at time $t$ by a sum of Dirac measures $\sum_{i\in I}m^i(t)\delta_{x^i(t)}$. In the first step, an initial distribution is divided into $M$ cohorts characterized by pairs $(m^i, x^i)$, for $i=1,\dots, M$. For the $i$-th cohort, $m^i(t)$ denotes its weight at time $t$, which is the number of individuals within the cohort, and $x^i(t)$ is its location at time $t$, that is, an average value of the structural variable within this cohort. The mass $m^i(t)$ changes its value due to the process of evolution (growth or death), while $x^i(t)$ evolves according to the characteristic lines defined by the transport term. A boundary cohort $(m_B, x_B)$, that is, the cohort which accounts for the influx of new individuals into the system, evolves in a slightly different way, since its weight changes additionally due to the birth process. Incorporating the boundary cohort into the system, which occurs at certain time moments, is called the internalization process. The power of the described method lies in its simplicity and in the clear biological meaning of the output. Indeed, integrals of a population's distribution over specified domains, which are the output, are more meaningful than the values of a density at nodal points.
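\par To fix ideas, the cohort bookkeeping just described can be sketched in a few lines of Python. The snippet below is purely schematic: the vital rates \texttt{b}, \texttt{c}, \texttt{beta} and the birth state \texttt{x\_b} are placeholders rather than model data used later, the rates are taken to depend on the state only, and a frozen-coefficient explicit Euler step replaces the exact characteristic flow. It is included solely to illustrate how a measure $\sum_{i}m^i\delta_{x^i}$ is stored and integrated against test functions.
\begin{verbatim}
import numpy as np

# A measure  sum_i m[i] * delta_{x[i]}  is stored as two arrays x, m.
def integrate(phi, x, m):
    """Integral of a test function phi against sum_i m[i] * delta_{x[i]}."""
    return np.sum(m * phi(x))

def ebt_like_step(x, m, dt, b, c, beta, x_b):
    """One schematic EBT-like step on [t_k, t_k + dt].

    b, c, beta are placeholder vital rates (functions of the state x);
    x_b is the state at birth.  Explicit Euler is used both for the
    transport of the locations and for the update of the masses.
    """
    x_new = x + dt * b(x)                    # transport along characteristics
    m_new = m - dt * c(x_new) * m            # growth/death of existing cohorts
    m_birth = dt * np.sum(beta(x_new) * m)   # influx into the boundary cohort
    return np.append(x_new, x_b), np.append(m_new, m_birth)
\end{verbatim}
In the scheme analysed in Section \ref{overview}, the coefficients may in addition depend on time and on the measure itself, and general kernels of the form \eqref{form_of_eta} are allowed; the snippet is meant only to illustrate the data structure and the role of the cohorts.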
Originally, the EBT method was designed for equations of the form \eqref{eq1} with the simplest form \eqref{form_of_eta3} of the integral kernel, and since its invention in \cite{Roos} it has been widely used by biologists, see e.g. \cite{EBT1,EBT2,EBT3,EBT4}. Similar mesh-free methods, called particle methods, are commonly used in problems where one has to model the behaviour of large groups of particles or individuals that interact with each other. In contrast to the EBT, particle methods were originally designed for problems where the number of individuals is preserved and thus a mass conservation law holds. These methods have been successfully used for solving numerically such problems as the Euler equation in fluid mechanics \cite{Euler2,Euler} and the Vlasov equation in plasma physics \cite{BirdLang,CR,GV}. Recently, they have also been used in problems related to crowd dynamics and pedestrian flow \cite{PT,piccoli} or to the collective motion of large groups of agents \cite{Dorsogna,kinetic,KSUB}. As stated above, in structured population models conservation laws do not hold in general. One has to deal with new particles, which appear due to the birth process or mutations. Depending on the model, new individuals may appear only on the boundary or can be distributed over the whole domain. Therefore, one cannot directly exploit natural distances for probability measures such as the Wasserstein distances. The measure approach, which rigorously deals with Dirac Deltas in models coming from biology, is relatively new \cite{GwiazdaThomasEtAl,GwiazdaMarciniak,CCC}, and thus the convergence of particle-based schemes for these models was difficult to establish for a long time. One of the first steps in this direction was made for the equation \eqref{eq1} in \cite{GwiazdaThomasEtAl,GwiazdaMarciniak}, where existence, uniqueness, and Lipschitz dependence of solutions on the initial data and model parameters in the space of Radon measures were proved. By a proper choice of metric, the authors overcame the nonconservative character of the problem. Namely, they employed a modified Wasserstein distance and the flat metric, known also as the bounded Lipschitz distance. This framework provided the theoretical foundation for the very recent proof \cite{EBT} of the convergence of the EBT method, without any explicit error estimates, for \eqref{eq1}--\eqref{form_of_eta3}. In this work, we shall explicitly show how the method used for proving the well-posedness of \eqref{eq1} in \cite{Spop} can be translated into an applicable numerical scheme. We provide estimates on the order of convergence for the general models \eqref{eq1}, covering in particular the case \eqref{eq1}--\eqref{form_of_eta}. The novelty of this paper also concerns the problem of the increasing number of Dirac measures that appears due to birth and/or mutation processes. We provide a procedure to construct an approximation of a sum of Dirac Deltas by a smaller number of Dirac Deltas, called the measure reconstruction procedure, together with an estimate of the approximation error. This paper is organized as follows. In Section \ref{overview}, we describe the algorithm and the measure reconstruction procedure. In Section \ref{conv_res}, we present the proof of the convergence of the scheme together with the convergence-order error analysis. In Section \ref{simulations}, we validate our numerical scheme and implementation by checking the convergence order in some test cases with explicit solutions.
We also use this new proposed scheme in several examples to show the flexibility and the accurate approximation of the evolution of the density in structured population models even for long-time asymptotics including cases that are not amenable for the EBT method. \section{Particle Method}\label{overview} \subsection{General Description}\label{GD} The main idea of the particle method is to approximate a solution at each time by a sum of Dirac measures. Note that even if the initial data in \eqref{eq1} is a sum of Dirac Deltas, the integral term possibly produces a continuous distribution at $t > 0$. This phenomenon can be avoided due to the splitting algorithm, which allows to separate the transport operator from the integral one and simulate the corresponding problems successively. This is essentially the reason why we have exploited this technique in our scheme. To proceed with a description of the method, assume that the approximation of the solution at time $t_k = k\Delta t$ is provided as a sum of Dirac measures, that is, \begin{equation}\label{initial_freeze} \mu_{t_k} = \sum_{i=1}^{M_k}m^i_k\; \delta_{x^i_k},\quad M_k \in {\mathbb{N}}. \end{equation} The procedure of calculating the approximation of the solution at time $t_{k+1}$ is divided into three main steps. In the first step one calculates the characteristic lines for the cohorts $(m^i, x^i)$ given by \eqref{initial_freeze}, which is equivalent to solving the following ODE's system on a time interval $[t_k, t_{k+1}]$: \begin{equation}\label{b} \frac{d}{ds} x^i(s) = b_k(x^i(s)),\quad x^i(t_k) = x_k^i,\quad i=1,\dots,M_k, \end{equation} where \begin{equation}\label{b_freeze} b_k(x) = b(t_{k}, \mu_{t_k})(x). \end{equation} In other words, each Dirac Delta is transported along its characteristic to the new location $x^i_{k+1}$ without changing its mass. The second step consists in creating new Dirac Deltas due to the influx of new individuals and recalculating the mass of each Dirac Delta. We have already mentioned in the introduction above that for each $(t,\nu) \in [0,T] \times {\mathcal M}_{+}({\mathbb{R}}^+)$, $\eta$ is given by \begin{equation}\label{eta_def} \eta(t, \nu)(y) = \sum_{p=1}^{r}\beta_p(t, \nu)(y) \; \delta_{x = \bar x_p(y)}. \end{equation} From this form of $\eta$, it follows that the set of possible new states $x^l_{k+1}$ at time $t_k$ is $$\{x^l_{k+1},\; l=M_k+1, \dots,M_{k+1} \} := \{ \bar x_p(x_{k+1}^i),\; i = 1, \dots, M_k,\; p=1,\dots, r \}.$$ Let us define \begin{eqnarray} \nonumber \mu_{k}^1 &=& \sum_{i=1}^{M_{k+1}} m^i_k\; \delta_{x^i_{k+1}}, \\[1mm] \label{c_freeze} c_k(x) &=& c \left(t_k, \mu_{k}^1\right)(x), \\[2mm] \label{eta_freeze} \eta_k(y) &=& \sum_{p=1}^{r}\beta_p(t_k, \mu_{k}^1)(y) \; \delta_{x = \bar x_p(y)} \end{eqnarray} and for $i,j \in \{1, \dots, M_{k+1}\}$ $$ \alpha(x_{k+1}^i, x_{k+1}^j) = \left\{ \begin{array}{l l} \beta_p(t_k, \mu_{k}^1)(x_{k+1}^j), & \quad \text{if $p$ is such that } \bar x_p(x_{k+1}^j) = x_{k+1}^i,\\ 0, & \quad \text{otherwise.} \end{array} \right. $$ We cannot solve an ODE system for the masses directly, since new states will be created at any time $t_k<t<t_{k+1}$. 
Therefore, we approximate it by the following explicit Euler scheme \begin{eqnarray}\label{c_eta} \frac{m^i_{k+1} - m^i_k}{t_{k+1} - t_k} &=& -c_k(x^i_{k+1}) m^i_k + \sum_{j=1}^{M_{k+1}} \alpha_k(x^i_{k+1},x^j_{k+1})m^j_k, \\[1mm] \nonumber m^i_k &=& 0, \;\; \mathrm{for}\;\; i = M_{k}+1,\dots, M_{k+1}. \end{eqnarray} The resulting measure \begin{eqnarray}\label{nu} \mu_{k}^2= \sum_{i=1}^{M_{k+1}}m^i_{k+1} \delta_{x^i_{k+1}} \end{eqnarray} consists of $M_{k+1} \geq M_k$ Dirac Deltas. In some cases, it is necessary to approximate the measure \eqref{nu} by a smaller number of Dirac Deltas (see Subsection \ref{app}). If so, we define $\mu_{t_{k+1}} = \mathcal R (\mu_k^2) $, where $\mathcal R(\mu_k^2)$ is the result of this approximation. Otherwise we let $\mu_{t_{k+1}} = \mu_k^2$. \begin{remark}\label{2approaches} In the particular case where only one new state $x^b$ is allowed, we can use the continuous-in-time ODE system \begin{eqnarray}\label{c_eta_bound} \frac{d}{ds} m^i(s) &=& -c_k(x^i_{k+1}) m^i(s),\quad\mathrm{for}\;\; i\neq b, \\ \nonumber \frac{d}{ds} m^b(s) &=& -c_k(x^b) m^b(s) + \sum_{j=1}^{M_{k+1}} \alpha_k(x^b,x^j_{k+1})m^j(s), \end{eqnarray} instead of the Euler approximation \eqref{c_eta}. \end{remark} \noindent In the method presented above, one has to deal with an increasing number of Dirac measures, which is an important issue from the point of view of numerical simulation. In the simplest case, when all new individuals have the same size $x^b$ at birth, just one additional Dirac Delta is created at the boundary at each time step. Unfortunately, in many models the number of new particles increases so fast that after several steps the computational cost becomes unacceptable. For example, in the case of the equation describing the process of equal cell mitosis, the number of Dirac Deltas doubles at each time step. This growth forces us to approximate the numerical solution by a smaller number of Dirac measures after several iterations. This procedure is called measure reconstruction. We propose different methods for this reconstruction, which are discussed in the next subsection. In order to rigorously introduce this reconstruction procedure and to discuss the convergence of the particle method above, we first need to introduce several distances between measures which are relevant and useful for these purposes. \subsection{Distances between measures} Throughout this paper ${\mathcal M}_{+}({\mathbb{R}}^{+})$ denotes the space of nonnegative Radon measures with bounded total variation on ${\mathbb{R}}^{+} = \{x \in {\mathbb{R}} \;\colon x \geq 0\}$. We define a metric on ${\mathcal M}_{+}({\mathbb{R}}^{+})$ as \begin{equation} \label{distance} \rho_F(\mu_1, \mu_2) = \sup \left\{ \int_{{\mathbb{R}}^{+}} \varphi \, \d(\mu_1-\mu_2) \colon \varphi \in \C1({\mathbb{R}}^{+};{\mathbb{R}}) \mbox{ and } \norma{\varphi}_{\W{1}{\infty}} \leq 1 \right\} , \end{equation} where $\norma{\varphi}_{\W{1}{\infty}} = \max \left\{ \norma{\varphi}_{\L\infty}, \norma{\partial_x \varphi}_{\L\infty} \right\}$. \noindent $\rho_F$ is known as the \textit{flat} metric or the \textit{bounded Lipschitz distance}. The condition $\varphi \in \C1({\mathbb{R}}^{+}; {\mathbb{R}})$ in \eqref{distance} can be replaced by $\varphi \in \W{1}{\infty}({\mathbb{R}}^{+}; {\mathbb{R}})$ through a standard mollifying sequence argument applied to the test function $\varphi$, since the value of the integral does not involve its derivative; in other words, $\rho_F$ is the metric induced by the dual norm $\norma{\cdot}_{({\W{1}{\infty}})^{*}}$.
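For purely atomic measures the supremum in \eqref{distance} can be evaluated exactly by a small linear programme: the integral depends only on the values of $\varphi$ at the finitely many support points, and in one dimension any vector of values satisfying $\modulo{\varphi_k}\leq 1$ and the pairwise Lipschitz constraints extends to an admissible test function. The following minimal sketch (in Python; the routine name and data layout are ours and purely illustrative, not part of the scheme) shows this computation.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def flat_metric_atomic(x1, m1, x2, m2):
    """Flat metric rho_F between mu1 = sum_i m1_i delta_{x1_i}
    and mu2 = sum_j m2_j delta_{x2_j} on the real line."""
    # signed masses on the union of the supports
    z = np.concatenate([np.asarray(x1, float), np.asarray(x2, float)])
    a = np.concatenate([np.asarray(m1, float), -np.asarray(m2, float)])
    n = len(z)
    # maximize a . phi  <=>  minimize -a . phi,
    # subject to |phi_k| <= 1 and |phi_k - phi_l| <= |z_k - z_l|
    rows, rhs = [], []
    for k in range(n):
        for l in range(k + 1, n):
            r = np.zeros(n); r[k], r[l] = 1.0, -1.0
            rows += [r, -r]
            rhs += [abs(z[k] - z[l])] * 2
    res = linprog(c=-a, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(-1.0, 1.0)] * n, method="highs")
    return -res.fun

# sanity check: rho_F(delta_0, delta_5) = min{2, |0-5|} = 2
print(flat_metric_atomic([0.0], [1.0], [5.0], [1.0]))
\end{verbatim}
This brute-force computation is only practical for moderate numbers of particles and is used here solely to illustrate the definition; the error computations later in the paper rely on the computable surrogate introduced in the next lemma.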
Note that in this paper the space ${\mathcal M}_{+}({\mathbb{R}}^{+})$ is equipped with the metric $\rho_F$, and this convention remains in force unless stated otherwise. The space $({\mathcal M}_{+}({\mathbb{R}}^{+}),\rho_F)$ is complete and separable. In the following lemma we introduce a quantity $\rho$, related to $\rho_F$, which turns out to be useful for computational purposes. Since \cite[Theorem 6.0.2]{AmbrosioGigliSavare} gives an explicit formula for the Wasserstein distance between two probability measures in terms of their cumulative distribution functions, we shall exploit this result and relate it to the flat metric. In particular, all error estimates calculated in Section \ref{simulations} are given in terms of $\rho$. \begin{lemma} \label{met_eq} Let $\mu_1, \mu_2 \in {\mathcal M}_{+}({\mathbb{R}}^{+})$ be such that $M_{\mu_i} = \int_{{\mathbb{R}}^{+}} \d \mu_i \neq 0$ and $\tilde \mu_i = \mu_i / M_{\mu_i}$ for $i=1,2$. Define $\rho : {\mathcal M}_{+}({\mathbb{R}}^{+}) \times {\mathcal M}_{+}({\mathbb{R}}^{+}) \rightarrow {\mathbb{R}}^{+}$ by \begin{eqnarray}\label{rho*} \rho(\mu_1,\mu_2) = \min\left\{M_{\mu_1}, M_{\mu_2}\right\} W_1(\tilde \mu_1, \tilde \mu_2)+ \modulo{M_{\mu_1} - M_{\mu_2}}, \end{eqnarray} where $W_1$ is the $1$-Wasserstein distance. Then, there exists a constant $C_K = \frac{1}{3}\min\left\{1, \frac{2}{\modulo{K}}\right\}$ such that $$ C_K\rho(\mu_1, \mu_2) \leq \rho_F(\mu_1, \mu_2) \leq \rho(\mu_1, \mu_2), $$ where $K$ is the smallest interval such that $\mathrm{supp}(\mu_1), \mathrm{supp}(\mu_2) \subseteq K$ and $\modulo{K}$ is the length of the interval $K$. If $K$ is unbounded we set $C_K = 0$. \end{lemma} \begin{remark}\label{rem_w} For $\tilde \mu_1$, $\tilde \mu_2$ defined as in the lemma above, it holds that $$ W_1(\tilde \mu_1, \tilde \mu_2) = \int_{0}^{1} \modulo{ F^{-1}_{\tilde \mu_1}(t) - F^{-1}_{\tilde \mu_2}(t)} \d t = \int_{{\mathbb{R}}^+} \modulo{F_{\tilde \mu_1}(x) - F_{\tilde \mu_2}(x)} \d x, $$ which follows from {\rm\cite[Section 2.2.2]{villani2}}. Since a cumulative distribution function $F_{\mu}$ does not have to be continuous or strictly increasing, we set $$ F^{-1}_{\mu} (s) = \sup\{x \in {\mathbb{R}}^{+}\; : \; F_{\mu}(x) \leq s\},\quad s \in [0, 1]. $$ \end{remark} \begin{remark} Let $\mu \in {\mathcal M}_{+}({\mathbb{R}}^+)$ be a probability measure and $M_1, M_2 > 0$. Then, \begin{equation}\label{mu_diff_masses} \rho_F(M_1 \mu, M_2 \mu) \leq \modulo{M_1 - M_2}. \end{equation} Indeed, let $\varphi \in \C1({\mathbb{R}}^+; {\mathbb{R}})$ be such that $\norma{\varphi}_{\W{1}{\infty}} \leq 1$. Then, \begin{eqnarray*} \int_{{\mathbb{R}}^+} \varphi(x) \d (M_1 \mu - M_2 \mu)(x) \leq \modulo{M_1 - M_2} \int_{{\mathbb{R}}^+} \norma{\varphi}_{\L\infty} \d \mu(x) \leq \modulo{M_1 - M_2}. \end{eqnarray*} Taking the supremum over all admissible functions $\varphi$ proves the assertion. \end{remark} \begin{proofof}{Lemma \ref{met_eq}} Let $\mu, \nu \in {\mathcal M}_{+}({\mathbb{R}}^{+})$ be probability measures. Assume for the moment that $K$ is bounded, so that $\modulo{K} < + \infty$. Note that in the definition of $W_1$ \begin{equation*} W_1(\mu, \nu) = \sup \left\{ \int_{{\mathbb{R}}^{+}} \varphi \, \d(\mu-\nu)\; \colon\; \mathop\mathbf{Lip}(\varphi) \leq 1 \right\}\,, \end{equation*} we can assume without loss of generality that $\norma{\varphi}_{\L\infty} \leq \modulo{K}/2$.
Indeed, for any $\varphi$ such that $\mathop\mathbf{Lip}(\varphi) \leq 1$, there exist a constant $a$ and a function $\tilde \varphi$ such that $\mathop\mathbf{Lip}(\tilde \varphi) \leq 1$, $\norma{\tilde \varphi}_{\L\infty} \leq \modulo{K}/2$ and $\varphi = a + \tilde \varphi$. Observe that by taking $a$ to be the value of $\varphi$ at the middle point of the interval $K$, and taking into account that $\mathop\mathbf{Lip}(\tilde \varphi) \leq 1$ and that the supports of the measures are included in $K$, we obtain $\norma{\tilde \varphi}_{\L\infty} \leq \modulo{K}/2$ on $K$. Since the values of $\tilde \varphi$ outside $K$ do not affect the integral and can be modified accordingly, we can assume that $\norma{\tilde \varphi}_{\L\infty} \leq \modulo{K}/2$ without loss of generality. As a consequence, we deduce $$ \int_{{\mathbb{R}}^{+}}\varphi(x) \d (\mu - \nu)(x) = a \int_{{\mathbb{R}}^{+}} \d (\mu - \nu)(x) + \int_{{\mathbb{R}}^{+}}\tilde \varphi(x) \d (\mu - \nu)(x) = \int_{{\mathbb{R}}^{+}}\tilde \varphi(x) \d (\mu - \nu)(x), $$ since $\int_{{\mathbb{R}}^{+}} \d (\mu - \nu)(x)$ is equal to zero due to the fact that $\mu$ and $\nu$ have the same mass. Therefore, we infer that \begin{eqnarray*} W_1(\mu,\nu) &=& \sup \left\{ \int_{{\mathbb{R}}^{+}} \varphi \, \d(\mu-\nu)\; \colon\; \norma{\varphi}_{\L{\infty}}\leq \modulo{K}/2,\; \mathop\mathbf{Lip}(\varphi) \leq 1 \right\} \\ &\leq& \sup \left\{ \int_{{\mathbb{R}}^{+}} \varphi \, \d(\mu-\nu)\; \colon\; \norma{\varphi}_{\W{1}{\infty}}\leq \max\{ 1, \modulo{K}/2 \} \right\} = \max\left\{1, \frac{\modulo{K}}{2}\right\} \rho_F(\mu, \nu). \end{eqnarray*} Now, let $\mu_1, \mu_2$ be as in the statement of the Lemma. Then, \begin{eqnarray*} \rho_F (\mu_1, \mu_2) &=& M_{\mu_1} \rho_F \left( \frac{\mu_1}{M_{\mu_1}}, \frac{\mu_2}{M_{\mu_1}} \right) \leq M_{\mu_1} \rho_F \left( \frac{\mu_1}{M_{\mu_1}}, \frac{\mu_2}{M_{\mu_2}} \right) + M_{\mu_1} \rho_F \left( \frac{\mu_2}{M_{\mu_2}}, \frac{\mu_2}{M_{\mu_1}} \right) \\ &\leq& M_{\mu_1} \rho_{F}(\tilde \mu_1, \tilde \mu_2) + M_{\mu_1}M_{\mu_2} \modulo{\frac{1}{M_{\mu_1}} - \frac{1}{M_{\mu_2}}} \\ &\leq& M_{\mu_1}{W_1}(\tilde \mu_1, \tilde \mu_2) + \modulo{M_{\mu_1} - M_{\mu_2}}, \end{eqnarray*} where we used the triangle inequality, inequality \eqref{mu_diff_masses} and the fact that $\rho_F(\tilde \mu_1, \tilde \mu_2) \leq W_1(\tilde \mu_1, \tilde \mu_2)$. Analogously, we obtain $$ \rho_F (\mu_1, \mu_2) \leq M_{\mu_2}{W_1}(\tilde \mu_1, \tilde \mu_2) + \modulo{M_{\mu_1} - M_{\mu_2}} $$ and thus, $$ \rho_F (\mu_1, \mu_2) \leq \min\left\{M_{\mu_1}, M_{\mu_2}\right\} {W_1}(\tilde \mu_1, \tilde \mu_2) + \modulo{M_{\mu_1} - M_{\mu_2}} = \rho(\mu_1, \mu_2). $$ Note that this estimate does not depend on $\modulo{K}$. For the reverse inequality, recall that $K$ is bounded, so that the bound on $W_1$ derived above applies. Using $\varphi = \pm 1$ as a test function in \eqref{distance}, we obtain that $\modulo{M_{\mu_1} - M_{\mu_2}} \leq \rho_F(\mu_1, \mu_2)$.
Then, \begin{eqnarray*} \rho(\mu_1,\mu_2) &\leq& M_{\mu_1} W_1(\tilde \mu_1, \tilde \mu_2) + \modulo{M_{\mu_1} - M_{\mu_2}} \\ &\leq& M_{\mu_1} \max\{1, \modulo{K}/2\}\rho_F(\tilde \mu_1, \tilde \mu_2) + \modulo{M_{\mu_1} - M_{\mu_2}} \\ &\leq& \max\{1, \modulo{K}/2\} \rho_F\left( \mu_1, \frac{M_{\mu_1}}{M_{\mu_2}} \mu_2\right) + \rho_F(\mu_1, \mu_2) \\ &\leq& \max\{1, \modulo{K}/2\} \left(\rho_F\left( \mu_1, \mu_2 \right) + \rho_F\left( \mu_2, \frac{M_{\mu_1}}{M_{\mu_2}} \mu_2\right)\right) + \rho_F(\mu_1, \mu_2) \\ &\leq& 2\max\{1, \modulo{K}/2\} \rho_F(\mu_1, \mu_2) + \max\{1, \modulo{K}/2\} M_{\mu_2} \modulo{1 - \frac{M_{\mu_1}}{M_{\mu_2}}} \\ &\leq& 3\max\{1, \modulo{K}/2\} \rho_F(\mu_1, \mu_2), \end{eqnarray*} which implies that $$ \frac{1}{3}\min\left\{1, \frac{2}{\modulo{K}}\right\} \rho(\mu_1,\mu_2) \leq \rho_F(\mu_1,\mu_2). $$ In the case $\modulo{K} = +\infty$ we set $C_K = 0$, obtaining the trivial inequality $0 \leq \rho_F(\mu_1, \mu_2)$. \end{proofof} \begin{remark} The dependence of the constant $C_K$ on the length of the interval $K$ expresses the low sensitivity of the flat metric when the distance between the supports of the measures is large. In particular, the flat distance between two Dirac measures $\delta_{x = a}$ and $\delta_{x=b}$ is equal to $$ \rho_F(\delta_{x=a}, \delta_{x=b}) = \min\{2, \modulo{a-b}\}. $$ \end{remark} Now we can discuss precisely the measure reconstruction, that is, the approximation of continuous distributions, or of distributions consisting of a larger number of particles, by a fixed number of particles. \subsection{Measure Reconstruction}\label{app} Due to Lemma \ref{met_eq}, we restrict our analysis to probability measures. Let $\mu = \sum_{i=1}^{M} m_i \delta_{x_i}$ be a probability measure with a compact support $K=[k_1,k_2]$. The aim of the reconstruction is to approximate $\mu$ by a smaller number $\bar M<M$ of Dirac Deltas, namely \begin{eqnarray*} \mathcal R_o(\mu) := \mathrm{argmin}\;\; W_1 \left(\mu, \sum_{j=1}^{\bar M} n_j \delta_{y_j} \right),\quad \mathrm{where}\;\; \sum_{j=1}^{\bar M} n_j = 1 \;\;\mathrm{and}\;\; n_j \geq 0,\ y_j \in {\mathbb{R}}^+. \end{eqnarray*} This minimisation procedure is essentially a linear programming problem which, under some particular assumptions on cycles, can be solved by the simplex algorithm, providing the global minimum. This choice is optimal for the reconstruction procedure. However, its complexity is at least cubic. For that reason, we exploit less costly methods of reconstruction (of linear cost in the size of the problem), which provide an error of order $\mathcal O(1/\bar M)$. Note that the cubic cost is unacceptable in our case, since the total cost of the method is already quadratic if the number of particles grows linearly with the number of time steps. \ \noindent \textbf{A) Fixed-location reconstruction:} The idea of the fixed-location reconstruction is to divide the support of the measure $\mu$ into $\bar M$ equal intervals and put a Dirac Delta with a proper mass in the middle of each interval. The mass of this Dirac Delta is equal to the mass of $\mu$ contained in this particular interval. Let $\Delta x = \modulo{K}/\bar M$ and define $$ \tilde{x}_j = k_1 + \left(j - \frac{1}{2} \right)\Delta x, \quad \tilde m_j = \left\{ \begin{array}{l l} \mu\left(\left[\tilde{x}_j - \Delta x/2,\tilde{x}_j +\Delta x/2\right)\right),\; & \text{for}\; j=1,\dots, \bar M-1, \\[1mm] \mu\left(\left[\tilde{x}_{\bar M} - \Delta x/2,\tilde x_{\bar M} +\Delta x/2\right]\right),\; & \text{for}\; j=\bar M, \end{array} \right.
$$ and $$ \mathcal R_l(\mu) := \sum_{j=1}^{\bar M} \tilde m_j \delta_{\tilde{x}_j}\,. $$ To estimate the error between $\mu$ and $\mathcal R_l(\mu)$, consider the transportation plan $\gamma$ between both measures that moves the mass of $\mu$ contained in each subinterval to the Dirac Delta located at its midpoint. Then, according to \cite[Introduction]{villani2}, we have \begin{eqnarray}\label{e1} {W_1} \left(\mu, \mathcal R_l(\mu) \right) \leq \int_{{\mathbb{R}}_{+}^2} \modulo{\; x - y \;} \d \gamma(x,y) \leq \int_{{\mathbb{R}}_{+}^2} \frac{\Delta x}{2} \d \gamma(x,y) \leq \frac{\Delta x}{2} = \frac{\modulo{K} }{ 2\bar M}. \end{eqnarray} The second inequality follows from the fact that each particle is shifted by a distance not greater than half the length of an interval of length $\Delta x$, while the third one is a consequence of the fact that $\gamma$ is a probability measure on ${\mathbb{R}}_{+}^{2}$. \ \textbf{B) Fixed-equal mass reconstruction:} The aim of the fixed-equal mass reconstruction is to distribute Dirac Deltas of equal masses over the support of a given measure in a proper way. In our particular case, we want to reduce the number of Dirac Deltas from $M$ to $\bar M$, and thus we need an algorithm that allows for splitting a Dirac Delta into two parts. The definition of the reconstruction operator $\mathcal R_m(\mu)$ is as follows: we set $$ \tilde{m}_j = \frac1{\bar M},\;\;\mathrm{for}\;\; j=1,\dots,\bar M. $$ The scheme for determining $\tilde{x}_j$ is the following. We first look for an index $n_1$ such that $$ \sum_{i=1}^{n_1 - 1}m_i < \frac1{\bar M} \leq \sum_{i=1}^{n_1}m_i. $$ We set $$ \tilde{x}_1 = \bar M\left(\sum_{i=1}^{n_1 - 1} m_i x_i + m'_{n_1} x_{n_1}\right), \;\;\mathrm{where}\;\; m'_{n_1} = \frac1{\bar M} - \sum_{i=1}^{n_1 - 1} m_i, $$ that is, $\tilde x_1$ is the barycenter of the first portion of mass equal to $1/\bar M$. Namely, the mass located in $x_{n_1}$ is split into two parts -- the amount of mass equal to $m'_{n_1}$ is shifted to $\tilde{x}_1$ and the rest, that is, $m_{n_1} - m'_{n_1}$, stays in $x_{n_1}$. For simplicity, we redefine $m_{n_1} : = m_{n_1} - m'_{n_1}$ and repeat the procedure described above until the last point $\tilde{x}_{\bar M}$ is found, to get the final form of the reconstruction $$ \mathcal R_m(\mu) := \sum_{j=1}^{\bar M} \tilde m_j \delta_{\tilde{x}_j}\,. $$ Note that in each step of the procedure one changes the locations of Dirac Deltas whose joint mass equals $1/\bar M$. Using an analogous argument as in the previous case, we conclude that in the $j$-th step we commit an error not greater than $\modulo{x_{n_j} - x_{n_{j-1}}}/\bar M$, where $x_{n_o} = k_1$. Since $k_1 = x_{n_{o}} \leq x_{n_{1}}\leq\dots \leq x_{n_{\bar M}} \leq k_2$, the total error can be bounded by \begin{eqnarray}\label{e2} W_1(\mu, \mathcal R_m(\mu) ) \leq \frac{\modulo{K}}{\bar M}. \end{eqnarray} \ The findings above can be summarized in the following corollary. \begin{corollary}\label{correcon} The error of the fixed-location $\mathcal R_l(\mu)$ and fixed-equal mass $\mathcal R_m(\mu)$ reconstructions is of the order of $\mathcal{O}(1/\bar M)$, where $\bar M$ is the number of Dirac Deltas approximating the original measure $\mu$. \end{corollary} These reconstructions can be used at $t=0$, if the initial datum in \eqref{eq1} is not a sum of Dirac Deltas, or at $t>0$ in order to deal with the increasing number of Dirac Deltas produced by the birth and/or mutation processes. We introduce the following notation: \begin{itemize} \item $E_I(\bar M_o)$ is the upper bound for the error of the initial data reconstruction defined in terms of the $W_1$ distance.
More specifically, for a measure $\mu$ such that $M_{\mu} := \int_{{\mathbb{R}}^+} \d \mu(x)> 0$, it holds that $$ W_1\left( \frac{\mu}{M_{\mu}}, \frac{\mathcal R(\mu)}{M_{\mu}}\right) \leq E_I(\bar M_o). $$ Here, the reconstruction operator $\mathcal R(\mu)$ refers to either $\mathcal R_l(\mu)$ or $\mathcal R_m(\mu)$. \item $E_R(\bar M)$ is the upper bound for the error of the measure reconstruction at time $t > 0$ defined in terms of $W_1$ distance as above. \end{itemize} We are now ready to state and prove the main convergence result. \section{Convergence Results}\label{conv_res} \subsection{Assumptions and theoretical results on splitting} For the sake of the reader, we recall the theoretical results on splitting for the equation \eqref{eq1} obtained in \cite{Spop}. The assumptions on the parameter functions $b,c$ and $\beta_p$, $p=1, \dots, r$, are the following \begin{eqnarray} \label{eqAssumptions} b,c, \beta_p \; : \; [0, T] \times \mathcal{M}^+({\mathbb{R}}^+) &\to& \W{1}{\infty}({\mathbb{R}}^+;{\mathbb{R}}), \\ \label{eqAssumptions_xp} \bar x_p \; : \; {\mathbb{R}}^+ &\to& {\mathbb{R}}^+, \end{eqnarray} where $ b(t,\mu)(0)\geq 0$ for $(t,\mu)\in [0,T] \times \mathcal{M}^+({\mathbb{R}}^+)$ and $p=1, \dots, r$. We require the following regularity \begin{eqnarray}\label{assumptions:nonlinear} b,c, \beta_p & \in & {\mathop\mathbf{BC}}^{\mathbf{\alpha,1}} \left( [0,T] \times {\mathcal M}^{+}({\mathbb{R}}^+); \; \W{1}{\infty}({\mathbb{R}}^+;{\mathbb{R}}) \right), \\ \label{assumptions:nonlinear_xp} \bar x_p &\in & \mathop\mathbf{Lip}({\mathbb{R}}^+; {\mathbb{R}}^+). \end{eqnarray} Here, ${\mathop\mathbf{BC}}^{\mathbf{\alpha,1}}([0,T]\times{\mathcal M}^+({\mathbb{R}}^+); \W{1}{\infty}({\mathbb{R}}^+;{\mathbb{R}}))$ is the space of $\W{1}{\infty}({\mathbb{R}}^+;{\mathbb{R}})$ valued functions which are bounded in the $\norma{\cdot}_{\W{1}{\infty}}$ norm, H\"older continuous with exponent $0<\alpha\leq 1$ with respect to time and Lipschitz continuous in $\rho_F$ with respect to the measure variable. This space is equipped with the $\norma{\cdot}_{{\mathop\mathbf{BC}}^{\mathbf{\alpha,1}}}$ norm defined by \begin{equation} \label{norma_bc} \norma{f}_{\mathbf{\mathop\mathbf{BC}^{\alpha,1}}} = \sup_{t\in[0,T], \mu\in{\mathcal{M}^+({\mathbb{R}}^+)}} \left( \norma{f(t, \mu)}_{\W{1}{\infty}} + \mathop\mathbf{Lip}\left(f(t,\cdot)\right) + \mathrm{H}_\alpha\left(f(\cdot,\mu)\right) \right), \end{equation} where $\mathop\mathbf{Lip}(f)$ is the Lipschitz constant of a function $f$ and \begin{displaymath} \mathrm{H}_\alpha(f(\cdot,\mu)) := \sup_{s_1,s_2\in[0,T]} \frac{\norma{f(s_1,\mu) - f(s_2,\mu)}_{\W{1}{\infty}}}{\modulo{s_1 - s_2}^{\alpha}} . \end{displaymath} For any $f \in {\mathop\mathbf{BC}}^{\mathbf{\alpha,1}}([0,T]\times{\mathcal M}^+({\mathbb{R}}^+);\W{1}{\infty}({\mathbb{R}}^+;{\mathbb{R}}))$ and any $\mu:[0,T] \to \mathcal M^+({\mathbb{R}}^+)$, we define \begin{equation*} \norma{f}_{\mathop\mathbf{BC}} = \sup_{t\in[0,T]}\norma{f(t, \mu(t))}_{\L\infty}. \end{equation*} Regularity of $\beta_p$ and $x_p$ imposed in \eqref{eqAssumptions}--\eqref{assumptions:nonlinear_xp} guarantees that $\eta$ defined by \eqref{form_of_eta} fulfills the assumptions of \cite[Theorem 2.11]{Spop} and thus, \eqref{eq1} is well posed. We recall this result next. \begin{theorem}\label{thm:Main} Let \eqref{eqAssumptions}--\eqref{assumptions:nonlinear_xp} hold. 
Then, there exists a unique solution $$ \mu \in (\mathop\mathbf{BC}\cap \mathop\mathbf{Lip})\left([0,T] ;{\mathcal M}^+({\mathbb{R}}^+)\right)$$ to~\eqref{eq1}. Moreover, the following properties are satisfied: \begin{enumerate} \item For all $0 \leq t_1\leq t_2 \leq T$ there exist constants $K_1$ and $K_2$, such that \begin{equation*} \rho_F\left(\mu(t_1),\mu(t_2)\right) \leq K_1 \mathinner{\mathrm{e}}^{K_2({t_2-t_1})} \mu_o({\mathbb{R}}^+)({t_2 - t_1}). \end{equation*} \item Let $\mu_1(0), \mu_2(0) \in {\mathcal M}^{+}({\mathbb{R}}^+)$ and $b_i$, $c_i$, $\beta_i = (\beta^i_1, \dots, \beta^i_r)$ satisfy assumptions \eqref{eqAssumptions} - \eqref{assumptions:nonlinear_xp} for $i = 1, 2$, $p=1,\dots, r$. Let $\mu_i$ solve~\eqref{eq1} with initial datum $\mu_i(0)$ and coefficients $(b_i,c_i,\beta_i)$. Then, there exist constants $C_1$, $C_2$ and $C_3$ such that for all $t\in [0,T]$ \begin{eqnarray*} \rho_F\left(\mu_1(t),\mu_2(t)\right) \leq \mathinner{\mathrm{e}}^{C_1t} \rho_F\left(\mu_1(0),\mu_2(0)\right) + C_2 \mathinner{\mathrm{e}}^{C_3t}t\; \norma{(b_1,c_1,\beta_1) - (b_2,c_2, \beta_2)}_{\mathop\mathbf{BC}}. \end{eqnarray*} where $$ \norma{(b,c,\beta)}_{\mathop\mathbf{BC}} = \norma{b}_{\mathop\mathbf{BC}} + \norma{c}_{\mathop\mathbf{BC}} + \sum_{p=1}^{r}\norma{\beta_p}_{\mathop\mathbf{BC}} \,. $$ \end{enumerate} \end{theorem} \subsection{Error estimates in $\rho_F$} The aim of this subsection is to obtain an estimate on the error between the numerical solution $\mu_t$ and the exact solution $\mu(t)$. Let $[0,T]$ be a time interval, $N$ be a number of time steps, $\Delta t = T / N$ be the time step. We define the time mesh $\{t_k\}_{k=0}^N$, where $t_k = k \Delta t$. Let $\bar M_k$, $k = 0,1,\dots N$, be parameters of the measure reconstruction. In particular, $\bar M_o$ is the number of Dirac Deltas approximating the initial condition and $\bar M_k$ stands for the number of Dirac measures approximating the numerical solution at $t > 0$ after a reconstruction, if performed. We assume that reconstructions are done every $n$ steps, which means that there are $\mathcal K = N/n$ reconstructions, each at time $t_{jn}$, where $j = 1,\dots,\mathcal K$. Let $\bar M$ be the number of Dirac Deltas after the reconstruction that will not depend on time. \begin{theorem}\label{rec_error} Let $\mu$ be a solution to \eqref{eq1} with initial data $\mu_o$. Assume that $\mu_{t_m}$ is defined by the numerical scheme described in Subsection {\rm \ref{GD}} and $m = jn$ for some $j \in \{1,\dots, \mathcal K\}$, i.e., that $t_m$ is the time after $j$ reconstructions. Then, there exists $C$ depending only on the parameter functions, the initial data, and $T$ such that \begin{equation}\label{ppp} \rho_F\left( \mu_{t_m},\mu(t_m) \right) \leq C\left( \Delta t + (\Delta t)^{\alpha} + E_I(\bar M_o) + E_R(\bar M) j\right). \end{equation} \end{theorem} \begin{remark}\label{rmerror} The error estimate \eqref{ppp} accounts for different error sources. More specifically, the error of the order $\mathcal{O}(\Delta t)$ is a consequence of the splitting algorithm. The term of order $\mathcal{O}((\Delta t)^{\alpha})$ follows from the fact that we solve \eqref{b}--\eqref{c_eta} with parameter functions independent of time, while $b,c$ and $\eta$ are in fact of $\C{\alpha}$ regularity with respect to time. Finally, $E_I$ and $E_R$ are the errors coming from the measure reconstruction procedure that are of the order $1/\bar{M_o}$ and $1/\bar{M}$ respectively as proven in subsection {\rm 2.3}. 
Thinking about $1/\bar{M}$, with $\bar{M}=\bar{M}_o$, as the spatial discretization $\Delta x$ and for $\alpha=1$, we obtain that the method is of order one both in space and in time. \end{remark} \begin{proofof}{ Theorem \ref{rec_error}} The proof is divided into several steps. For simplicity, in all estimates below, we will use a generic constant $C$, without specifying its exact form that may change from line to line. \\ \textbf{Step 1: The auxiliary scheme.} \quad Let us define the auxiliary semi-continuous scheme, which consists in solving subsequently the following problems: \begin{equation} \left\{ \begin{array}{rcl} \label{exact1} \partial_t \mu + \partial_x ( \bar b_k(x)\mu )&=& 0,\quad \mathrm{on}\;\;\; [t_k, t_{k+1}] \times {\mathbb{R}}^+, \\ \mu(t_{k}) &=& \bar \mu_k \end{array} \right. \end{equation} and \begin{equation} \label{exact2} \left\{ \begin{array}{rcl} \partial_t \mu &=& -\bar{\bar c}_k(x)\mu + \int_{{\mathbb{R}}^+} \bar{ \bar{\eta}}_k(y) \d \mu(y),\quad \mathrm{on}\;\;\; [t_k, t_{k+1}] \times {\mathbb{R}}^+, \\ \displaystyle \mu(t_{k}) &=& \bar \mu_k^1, \end{array} \right. \end{equation} where $\bar \mu_k \in \mathcal M^+({\mathbb{R}}^+)$, $\bar \mu_k^1$ is the solution to \eqref{exact1} at time $t_{k+1}$ and $\bar b_k$, ${\bar{\bar c}}_k$, and $\bar{\bar \eta}_k$ are defined as \begin{eqnarray} \label{bar_b_freeze} \bar b_k(x) &=& b \left(t_k, \bar \mu_{k}\right)(x), \\[2mm] \nonumber {\bar{\bar c}}_k(x) &=& c \left(t_k, \bar \mu_{k}^1\right), \quad \bar{\bar {\eta}}_k(y) = \sum_{p=1}^{r}\beta_p(t_k, \bar \mu_{k}^1)(y) \; \delta_{x = \bar x_p(y)}. \end{eqnarray} A solution to the second equation at time $t_{k+1}$ is denoted by $\bar \mu^2_k$. The output of one time step of our scheme is defined as $\bar \mu_{k+1} =\mathcal R (\bar \mu_k^2)$. \\ \textbf{Step 2: Error of the reconstruction.} \quad Since $\bar \mu_{k+1}$ arises from $\bar \mu_k^2$ through the reconstruction, masses of both measures are equal. Therefore, application of Lemma \ref{met_eq} yields \begin{eqnarray}\label{z1} \rho_F(\bar \mu_{k+1},\bar \mu_k^2) \leq \rho(\bar \mu_{k+1}, \bar \mu_k^2) = M_{\bar \mu_k^2} W_1 \left( \frac{\bar \mu_{k+1}}{M_{\bar \mu_k^2}}, \frac{\bar \mu_k^2}{M_{\bar \mu_k^2}}\right) \leq M_{\bar \mu_k^2} E_R(\bar M), \end{eqnarray} where $M_{\bar \mu_k^2} =\bar \mu_{k+1}({\mathbb{R}}^{+}) = \bar \mu_k^2({\mathbb{R}}^{+})$ and $E_R(\bar M)$ is the error of the reconstruction introduced in Subsection \ref{app}. As stated in Corollary \ref{correcon}, $E_R(\bar M)$ is of order $1/\bar M$ for both reconstructions. Note that $M_{\bar \mu_k^2} $ can be bounded independently on $k$. Indeed, on each time interval $[t_k, t_{k+1}]$ mass grows at most exponentially, which follows from \cite[Theorem 2.10, (i)]{Spop}, and reconstructions, if performed, do not change the mass. Thus, there exists a constant $C = C(T, b, c, \eta, \mu_o)$ such that $M_{\bar \mu_k^2} \leq C$. \textbf{Step 3: Error of splitting.} \quad Let $\nu(t)$ be a solution to \eqref{eq1} on a time interval $[t_{k}, t_{k+1}]$ with initial datum $\bar \mu_{k}$ and parameter functions $\bar b_k$, $\bar c_k$, $\bar \eta_k$, where $\bar b_k$ is defined by \eqref{bar_b_freeze}, \begin{eqnarray} \label{bar_c_freeze} {{\bar c}}_k(x) &=& c \left(t_k, \bar \mu_{k}\right), \\ \label{bar_eta_freeze} \bar{ {\eta}}_k(y) &=& \sum_{p=1}^{r}\bar{\beta}_{p,k}(y) \; \delta_{x = \bar x_p(y)},\quad \mathrm{where}\;\;\; \bar{\beta}_{p,k}(y) = \beta_p(t_k, \bar \mu_{k})(y). 
\end{eqnarray} According to \cite[Proposition 2.7]{ColomboGuerra2009} and \cite[Proposition 2.7]{Spop}, the distance between $\bar \mu_{k}^2$ and $\nu(t_{k+1})$, that is, the error coming from the application of the splitting algorithm can be estimated as \begin{equation}\label{est_split} \rho_F(\bar \mu_{k}^2,\nu(t_{k+1})) \leq C (\Delta t)^2. \end{equation} To estimate a distance between $\nu(t_{k+1})$ and $\mu(t_{k+1})$ consider $\zeta(t)$, which is a solution to \eqref{eq1} on a time interval $[t_k, t_{k+1}]$ with initial data $\mu(t_k)$ and coefficients $\bar b_k$, $\bar c_k$, $\bar \eta_k$. By triangle inequality $$ \rho_F(\nu(t_{k+1}), \mu(t_{k+1})) \leq \rho_F(\nu(t_{k+1}), \zeta(t_{k+1})) + \rho_F(\zeta(t_{k+1}), \mu(t_{k+1})). $$ The first term of the inequality above is a distance between solutions to \eqref{eq1} with different initial data, that is, $\bar \mu_{k}$ and $\mu(t_k)$ respectively. The second term is equal to a distance between solutions to \eqref{eq1} with coefficients $(\bar b_k, \bar c_k,\bar \eta_k)$ defined by \eqref{bar_b_freeze}, \eqref{bar_c_freeze}, and \eqref{bar_eta_freeze}, and $(b(t,\mu(t)), c(t,\mu(t)), \eta(t,\mu(t)))$. By the continuity of solutions to \eqref{eq1} with respect to the initial datum and coefficients in Theorem \ref{thm:Main}, we obtain \begin{eqnarray}\label{main1} \rho_F(\nu(t_{k+1}), \zeta(t_{k+1})) \leq \mathinner{\mathrm{e}}^{ C \Delta t} \rho_F(\bar \mu_{k}, \mu(t_{k})), \end{eqnarray} and \begin{eqnarray}\label{main2} \rho_F(\zeta(t_{k+1}), \mu(t_{k+1})) \leq C \Delta t \mathinner{\mathrm{e}}^{ C\Delta t} \left( \norma{\bar b_k - b}_{\overline{\mathop\mathbf{BC}}} + \norma{\bar c_k - c}_{\overline{\mathop\mathbf{BC}}} + \sum_{p=1}^{r} \norma{\bar \beta_{p,k} - {\beta_p}}_{\overline{\mathop\mathbf{BC}}} \right), \end{eqnarray} where \begin{eqnarray} \nonumber \norma{\bar b_k - b}_{\overline{\mathop\mathbf{BC}}} &=& \sup_{t \in [t_k, t_{k+1}]}\norma{ \bar b_k - b(t,\mu(t))}_{\L\infty}, \\ \label{111} \norma{\bar c_k - c}_{\overline{\mathop\mathbf{BC}}} &=& \sup_{t \in [t_k, t_{k+1}] }\norma{\bar c_k - c(t,\mu(t))}_{\L\infty}, \\ \label{222} \norma{\bar \beta_{p,k} - \beta_p}_{\overline{\mathop\mathbf{BC}}} &=& \sup_{t \in [t_k, t_{k+1}]} \norma{\bar \beta_{p,k} - \beta_p(t, \mu(t))}_{\L\infty}. \end{eqnarray} Due to the assumptions \eqref{eqAssumptions}--\eqref{assumptions:nonlinear_xp} and the definition of $\bar b_k$, $\bar c_k$, $\bar \eta_k$ we obtain \begin{eqnarray} \nonumber \norma{\bar b_k - b(t,\mu(t))}_{\L\infty} &\leq& \norma{ b(t_{k},\bar \mu_{k}) - b(t_{k},\mu(t))}_{\L\infty} + \norma{b(t_{k},\mu(t)) - b(t,\mu(t))}_{\L\infty} \\ \nonumber &\leq& \mathop\mathbf{Lip}(b(t_k, \cdot))\; \rho_F(\bar \mu_{k}, \mu(t)) + \norma{b}_{\mathop\mathbf{BC}^{\mathbf {\alpha,1}}} \modulo{t - t_k}^{\alpha} \\ \label{333} &\leq& \norma{b}_{\mathop\mathbf{BC}^{\mathbf {\alpha,1}}}\left[ \rho_F(\bar \mu_{k}, \mu(t)) + (\Delta t)^{\alpha}\right]\,. \end{eqnarray} Using Lipschitz continuity of the solution $\mu(t)$, see \cite[Theorem 2.11]{Spop}, we obtain $$ \rho_F(\bar \mu_{k}, \mu(t))\leq \rho_F(\bar \mu_{k}, \mu(t_k)) + \rho_F(\mu(t_k), \mu(t)) \leq \rho_F(\bar \mu_{k}, \mu(t_k)) + C \Delta t \mathinner{\mathrm{e}}^{ C \Delta t}. $$ Substituting the latter expression into \eqref{333} yields $$ \norma{b_k - b(t,\mu(t))}_{\L\infty} \leq \norma{b}_{\mathop\mathbf{BC}^{\alpha,1}} \left( \rho_F(\bar \mu_{k}, \mu(t_k)) + C \Delta t \mathinner{\mathrm{e}}^{ C \Delta t} \right) + \norma{b}_{\mathop\mathbf{BC}^{\alpha,1}}(\Delta t)^{\alpha}. 
$$ Bounds for \eqref{111} and \eqref{222} can be proved analogously. From the assumptions it holds that $$ \norma{(b,c,\beta)}_{\mathop\mathbf{BC}^{\mathbf{\alpha,1}}} = \norma{b}_{\mathop\mathbf{BC}^{\mathbf{\alpha,1}}} + \norma{c}_{\mathop\mathbf{BC}^{\mathbf{\alpha,1}}} + \sum_{p=1}^{r}\norma{\beta_p}_{\mathop\mathbf{BC}^{\mathbf{\alpha,1}}} < +\infty, $$ and as a consequence, we obtain $$ \norma{\bar b_k - b}_{\overline{\mathop\mathbf{BC}}} + \norma{\bar c_k - c}_{\overline{\mathop\mathbf{BC}}} +\sum_{p=1}^{r} \norma{\bar \beta_{p,k} - {\beta_p}}_{\overline{\mathop\mathbf{BC}}} \leq \norma{(b,c,\beta)}_{\mathop\mathbf{BC}^{\mathbf{\alpha,1}}} \left[ \rho_F(\bar \mu_{k}, \mu(t_k)) + C\mathinner{\mathrm{e}}^{ C T} \Delta t + (\Delta t)^{\alpha}\right]\,. $$ Using this inequality in \eqref{main2} yields \begin{eqnarray*} \rho_F(\zeta(t_{k+1}), \mu(t_{k+1})) &\leq& C \Delta t \mathinner{\mathrm{e}}^{ C\Delta t} \left[ \rho_F(\bar \mu_{k}, \mu(t_k)) + \Delta t + (\Delta t)^{\alpha} \right] \\ &\leq& C \Delta t \mathinner{\mathrm{e}}^{ C\Delta t} \rho_F(\bar \mu_{k}, \mu(t_k)) + C \mathinner{\mathrm{e}}^{ C T} (\Delta t)^2 + C \mathinner{\mathrm{e}}^{ C T} (\Delta t)^{1+\alpha}\,. \end{eqnarray*} Combining the inequality above with \eqref{main1} and redefining $C$ leads to \begin{eqnarray}\label{z2} \rho_F(\nu(t_{k+1}), \mu(t_{k+1})) &\leq& \mathinner{\mathrm{e}}^{ C \Delta t}(1 + C\Delta t) \rho_F(\bar \mu_{k}, \mu(t_{k})) + C (\Delta t)^2 + C (\Delta t)^{1+\alpha} \\[2mm] \nonumber &\leq& \mathinner{\mathrm{e}}^{ 2C \Delta t} \rho_F(\bar \mu_{k}, \mu(t_{k})) + C (\Delta t)^2 + C (\Delta t)^{1+\alpha}. \end{eqnarray} Finally, putting together \eqref{z2} and \eqref{est_split}, we conclude that \begin{equation}\label{znew} \rho_F(\bar \mu_k^2, \mu(t_{k+1})) \leq \mathinner{\mathrm{e}}^{ 2C \Delta t} \rho_F(\bar \mu_{k}, \mu(t_{k})) + C (\Delta t)^2 + C (\Delta t)^{1+\alpha}. \end{equation} \textbf{Step 4: Adding the errors.} \quad Now, let $w = j n$, $v = (j-1)n$, $j \in \{1,\dots,\mathcal K\}$, that is, $t_w$ and $t_{v}$ are the time points in which the measure reconstruction occurs. Since for $t_i$ such that $t_v < t_i < t_w$ it holds that $\bar \mu_{i} = \mathcal R (\bar \mu_{i-1}^2) = \bar \mu_{i-1}^2$, i.e., the measure reconstruction is not performed, the application of the discrete Gronwall's inequality to \eqref{znew} yields \begin{eqnarray*} \rho_F(\bar \mu_w^2, \mu(t_w)) &\leq& \mathinner{\mathrm{e}}^{nC\Delta t} \rho_F(\bar \mu_v, \mu(t_v)) + C \frac{\mathinner{\mathrm{e}}^{n C \Delta t}-1}{\mathinner{\mathrm{e}}^{C\Delta t} - 1} \left( (\Delta t)^2 + (\Delta t)^{1+\alpha}\right). \end{eqnarray*} There exists $C^*$ depending only on $T$ such that $\mathinner{\mathrm{e}}^{n C \Delta t}-1 < n C^* \Delta t$, for each $n \Delta t \in [0,T]$. Therefore, we deduce $$ \frac{\mathinner{\mathrm{e}}^{nC \Delta t}-1}{\mathinner{\mathrm{e}}^{C\Delta t} - 1} \leq \frac{nC^*\Delta t}{C \Delta t} = \frac{C^*}{C}n $$ and thus, \begin{equation*} \rho_F(\bar \mu_w^2, \mu(t_w)) \leq \mathinner{\mathrm{e}}^{nC\Delta t} \rho_F(\bar \mu_v, \mu(t_v)) + n C \left( (\Delta t)^2 + (\Delta t)^{1+\alpha}\right), \end{equation*} for some constant $C$. Combining this inequality with \eqref{z1} in Step 2 of the proof yields \begin{eqnarray*} \rho_F(\bar \mu_w, \mu(t_{w})) &\leq& \mathinner{\mathrm{e}}^{n C \Delta t} \rho_F(\bar \mu_v, \mu(t_v)) + nC ((\Delta t)^2 + (\Delta t)^{1+\alpha}) + C E_R(\bar M). 
\end{eqnarray*} \textbf{Step 5: Final estimate for the auxiliary scheme.} An analogous argument using the discrete Gronwall inequality again results in the following estimate \begin{eqnarray} \nonumber \rho_F(\bar \mu_w, \mu(t_{w})) &\leq&\!\! \mathinner{\mathrm{e}}^{jn C\Delta t} \rho_F(\mathcal R(\mu_o), \mu_o) + C \frac{ \mathinner{\mathrm{e}}^{jn C\Delta t} - 1}{ \mathinner{\mathrm{e}}^{n C\Delta t} - 1} \left[n( (\Delta t)^2 + (\Delta t)^{1+\alpha}) + E_R(\bar M) \right] \\ \nonumber &\leq& C \mathinner{\mathrm{e}}^{C t_w} E_I(\bar M_o) + Cj \left[n ( (\Delta t)^2 + (\Delta t)^{1+\alpha}) + E_R(\bar M) \right] \\ &\leq& C\mathinner{\mathrm{e}}^{C t_w} E_I(\bar M_o) + C(jn \Delta t) \left(\Delta t + (\Delta t)^{\alpha}\right) + Cj E_R(\bar M) \label{final} \end{eqnarray} and since $jn \Delta t = t_w \leq T$ the assertion is proved for the auxiliary scheme. \\ \noindent \textbf{Step 6: Full error estimate.} The full error estimate \eqref{ppp} additionally takes into account the error arising from the numerical approximation of the auxiliary scheme \eqref{exact1}--\eqref{exact2}. This additional source of error is nothing else than the error of the Euler method for ODEs. According to \cite[(515.62)]{butcher}, the error committed is of order $\Delta t$ when solving \eqref{exact1}--\eqref{exact2} using its Euler approximation \eqref{b}--\eqref{c_eta}. Combining this contribution with \eqref{final} yields the full estimate \eqref{ppp}. \end{proofof} \begin{remark}\label{aproxeta} In this work, we have assumed that $\eta$ is given as a sum of Dirac Deltas \eqref{form_of_eta}. If $\eta(t,\mu)(y)$ is not of such a form, one has to use a proper approximation by atomic measures in order to apply our scheme. One of the possibilities for this approximation is through the measure reconstruction described in Subsection {\rm\ref{app}}. Assume that there exists a bounded interval $K$ such that for all $(t,\mu) \in [0,T] \times \mathcal M^+({\mathbb{R}}^+)$, we have \begin{equation}\label{ttt} \mathrm{supp}(\eta(t,\mu)(y)) \subseteq K. \end{equation} Fix $r \in {\mathbb{N}}$ and let $\{K_p\}_{p=1}^r$ be a family of intervals such that $$ \bigcup_{p=1}^r K_p = K, \quad K_i \cap K_j = \emptyset,\; \mathrm{for}\; i \neq j \quad\mathrm{and} \quad \modulo{K_p} = \frac{\modulo{K}}{r},\;\mathrm{where}\; p=1,\dots, r. $$ Namely, we divide $K$ into $r$ disjoint intervals of equal length. Denote the center of each interval by $\bar x_p(y)$ and define \begin{equation}\label{mmm2} \beta_p(t,\mu)(y) = \int_{K_p} \d (\eta(t,\mu)(y)) (x). \end{equation} The approximation of $\eta(t,\mu)(y)$ is thus given by \begin{equation}\label{mmm} \sum_{p=1}^{r} \beta_p(t,\mu)(y) \delta_{x = \bar x_p(y)}. \end{equation} If $\eta$ is regular enough, then the assumptions \eqref{eqAssumptions}--\eqref{assumptions:nonlinear_xp} on $\beta_p$ and $\bar x_p$ are fulfilled for all $r$, and the numerical scheme we propose applies. In order to prove the convergence towards the solution of \eqref{eq1} with the parameter function $\eta$, we observe that the distance between $\eta$ and its approximation \eqref{mmm}, expressed in terms of the proper norm, can be bounded by $C/r$, where $C$ does not depend on $t, \mu$ and $y$, due to \eqref{mmm2}--\eqref{mmm}. Thus, the most general version of the stability result in {\rm\cite[Theorem 2.11]{Spop}} guarantees that if $r$ tends to $+ \infty$, then the numerical solution obtained for the approximated $\eta$ converges towards a solution to \eqref{eq1} with the parameter function $\eta$. For all technical details, we refer to {\rm\cite{Spop}}.
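For illustration only, the construction \eqref{mmm2}--\eqref{mmm} may be realized numerically as in the following sketch (in Python; the kernel \texttt{eta\_density}, the quadrature rule and all names are placeholders chosen by us, not part of the model):
\begin{verbatim}
import numpy as np

def discretize_kernel(eta_density, K, r, quad_nodes=50):
    """Approximate a kernel given by a density eta_density(x, y) on K=[k1,k2]
    by r Dirac Deltas placed at the midpoints of equal subintervals.
    Returns the midpoints xbar_p and a function y -> (beta_p(y))_{p=1..r}."""
    k1, k2 = K
    edges = np.linspace(k1, k2, r + 1)
    xbar = 0.5 * (edges[:-1] + edges[1:])   # centers of the intervals K_p
    def beta(y):
        # beta_p(y) ~ int_{K_p} eta_density(x, y) dx, trapezoidal quadrature
        out = np.empty(r)
        for p in range(r):
            xs = np.linspace(edges[p], edges[p + 1], quad_nodes)
            out[p] = np.trapz(eta_density(xs, y), xs)
        return out
    return xbar, beta

# usage with an illustrative Gaussian-like mutation density
xbar, beta = discretize_kernel(
    lambda x, y: np.exp(-(x - y) ** 2 / 0.01), K=(0.0, 1.0), r=10)
\end{verbatim}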
\end{remark} \section{Simulation Results}\label{simulations} This section is devoted to presenting the results of numerical simulations for several test cases. In all examples presented in this paper, we used the fourth-order Runge-Kutta method for solving \eqref{b} and the explicit Euler scheme for solving \eqref{c_eta}, as described in Subsection \ref{GD}. The error of the numerical solution with parameters $(\Delta t, \bar M_o, \bar M)$ at time $T>0$ is defined as \begin{equation}\label{error:def} \mathrm{Err}(T; \Delta t, \bar M_o, \bar M) := \rho(\mu(t_{\bar{k}}), \mu_{\bar{k}} )\,, \end{equation} with $\bar{k}$ such that $\bar{k}\Delta t=T$. The order of the method $q$ is given by \begin{equation} q := \lim_{\Delta t \to 0}\frac{\log \left (\mathrm{Err}{(T; 2\Delta t, \bar M_o/2, \bar M/2)} / \mathrm{Err}{(T; \Delta t, \bar M_o, \bar M)} \right)}{\log 2}. \end{equation} We also define $\Delta x := \modulo{K}/ \bar M_o$, where $K$ is the minimal bounded closed interval containing the support of the initial measure. We will not distinguish between measures and their densities whenever the measures are absolutely continuous with respect to the Lebesgue measure. \subsection{Example 1 (McKendrick-von Foerster equation)} In this subsection, we validate the convergence result for our numerical scheme by means of the well-known McKendrick-von Foerster type equation \cite{McK}. This is a linear model describing the evolution of a size-structured population. We set \begin{eqnarray*} b(x) = 0.2(1-x), \;\; c(x) = 0.2, \;\; \eta(y)= 2.4(y^2 - y^3)\delta_{x = 0} , \;\; \mathrm{and} \;\; \mu_o = \chi_{[0,1]}(x), \end{eqnarray*} and solve \eqref{eq1} for $x \in [0,1]$, see also \cite{angulo1}. The solution is stationary and given by $\mu(t,x) = \chi_{[0,1]}(x)$. In Table \ref{error:ex1:T}, we present the relative error and the order of the scheme, where we used just one measure reconstruction in order to approximate the initial data. In Table \ref{error:ex1:T2}, we present results for the scheme with the measure reconstruction performed at $t = 0, 1, \dots, 10$ and $\bar M_o = \bar M$. In all cases, we see that the observed order of convergence approaches one as $\Delta t\to 0$, as proven in Theorem \ref{rec_error} and Remark \ref{rmerror}. \begin{table}[ht]{\tiny \begin{center} \begin{tabular}{ r > {$\quad\quad\quad} c < {$} > {$} c < {$} } \hline $\Delta t$ = $\Delta x$ & \mathrm{Err}(10, \Delta t, \bar M_o, \bar M) & q \\ \hline $1.000000 \cdot 10^{-1} $ &1.2532 \cdot10^{-2} & - \\ \hline $5.000000 \cdot 10^{-2} $ &5.0543 \cdot10^{-3}& 1.31006 \\ \hline $2.500000 \cdot 10^{-2} $ &2.2225\cdot10^{-3} & 1.18533 \\ \hline $1.250000 \cdot 10^{-2}$ &1.0349\cdot10^{-3} & 1.10272 \\ \hline $6.250000 \cdot 10^{-3} $ & 4.9832\cdot10^{-4} & 1.05431 \\ \hline $3.125000 \cdot 10^{-3} $ & 2.4438\cdot10^{-4} & 1.02796 \\ \hline $1.562500 \cdot 10^{-3} $ & 1.2099\cdot10^{-4} & 1.01419 \\ \hline $7.812500 \cdot 10^{-4} $ & 6.0198\cdot10^{-5} & 1.00715 \\ \hline $3.906250 \cdot 10^{-4} $ & 3.0024\cdot10^{-5} & 1.00359 \\ \hline $1.953125 \cdot 10^{-4} $ & 1.4993\cdot10^{-5} & 1.00180 \\ \hline $9.765625 \cdot 10^{-5} $ & 7.4920\cdot10^{-6} & 1.00090 \\ \hline \end{tabular} \caption{(Example 1) The relative error and order of the scheme at $T = 10$.
One reconstruction performed at $t = 0$, $\bar M = \bar M_o$.}\label{error:ex1:T} \end{center}} \end{table} \begin{table}[ht]{\tiny \begin{center} \begin{tabular}{ r > {$\quad\quad} c < {$} > {$} c < {$} > {$} c < {$} > {$} c < {$} } \hline $\Delta t$ = $\Delta x$ & \mathrm{Err}(10, \Delta t, \bar M_o, \bar M)& q&\mathrm{Err}(10, \Delta t, \bar M_o, \bar M)& q \\ & \mbox{(Fixed-location)} & & \mbox{(Fixed-equal mass)} & \\ \hline $1.000000 \cdot 10^{-1} $ & 3.4657 \cdot10^{-1} & - & 8.8838 \cdot10^{-2} & - \\ \hline $5.000000\cdot 10^{-2} $ &1.1670 \cdot10^{-1}& 1.5703 & 2.9437 \cdot 10^{-2} & 1.5935 \\ \hline $2.500000\cdot 10^{-2} $ &3.4080\cdot10^{-2} & 1.7759 & 1.0879 \cdot 10^{-2} & 1.4361 \\ \hline $1.250000\cdot 10^{-2} $ &1.1863\cdot10^{-2} & 1.5224 & 4.4725 \cdot 10^{-3} & 1.2824 \\ \hline $6.250000\cdot 10^{-3} $ & 3.6874\cdot10^{-3} & 1.6858 & 1.9907 \cdot 10^{-3} & 1.1678 \\ \hline $3.125000\cdot 10^{-3} $ & 1.6866\cdot10^{-3} & 1.1285 & 9.3351 \cdot 10^{-4} & 1.0926 \\ \hline $1.562500\cdot 10^{-3} $ & 6.8067\cdot10^{-4} & 1.3091 & 4.5131 \cdot 10^{-4} & 1.0486 \\ \hline $7.812500 \cdot 10^{-4} $ & 3.3212\cdot10^{-4} & 1.0352 & 2.2178 \cdot 10^{-4} & 1.0250 \\ \hline $3.906250 \cdot 10^{-4} $ & 1.5814\cdot10^{-4} & 1.0705 & 1.0992 \cdot 10^{-4} & 1.0127 \\ \hline $1.953125 \cdot 10^{-4} $ & 7.4507\cdot10^{-5} & 1.0858 & 5.4719 \cdot 10^{-5} & 1.0063 \\ \hline $9.765625 \cdot 10^{-5} $ & 3.6414\cdot10^{-5} & 1.0329 & 2.7299 \cdot 10^{-5} & 1.0032 \\ \hline \end{tabular} \caption{(Example 1) The relative error and order of the scheme at $T = 10$. Reconstruction performed at $t = 0, 1, \dots, T$, $\bar M = \bar M_o$.} \label{error:ex1:T2} \end{center}} \end{table} \subsection{Example 2 (nonlinear growth term)} In this subsection, we present results for a model where $b$ and $\eta$ are equal to zero. Thus, we have conservation of the number of approximated Dirac Deltas, and consequently, there is no need for reconstructions. We consider a nonlinear growth function $c$ as in \cite{des} of the form \begin{eqnarray*} c(t,\mu)(x) = a(x) - \int_{{\mathbb{R}}}\alpha(x,y)\d \mu(y), \end{eqnarray*} where \begin{displaymath} a(x) = A - x^2,\;\;A > 0 \;\;\;\;\; \mathrm{and} \;\;\;\;\; \alpha(x,y) = \frac{1}{1 + (x-y)^2}. \end{displaymath} According to \cite[Remark 2.3, Lemma 4.8]{Spop}, one can consider \eqref{eq1} on the whole ${\mathbb{R}}$, so that the result concerning well posedness still holds if all parameter functions verify the regularity properties \eqref{eqAssumptions}--\eqref{assumptions:nonlinear_xp} on the whole line. However, $a(x)$ is not globally Lipschitz on ${\mathbb{R}}$. Nevertheless, the global well-posedness theory still applies if we reduce to measures whose support lies in a fixed compact interval. Note that the support of the solution is invariant in time. \begin{figure} \caption{(Example 2) Long time behaviour of numerical solutions. The three subplots show the evolution of the numerical solution on the time interval $[0, 10000]$ for $A = 0.5, 1.5$ and $2.5$, respectively. For simulations, we set $\Delta t = 0.1$, $\bar M_o = 1000$ and $\mu_o = \sum_{i=1}^{\bar M_o} ({1}/{\bar M_o}) \delta_{x^i_o}$, where $x^i_o := -2 + (i-\frac{1}{2})/\bar M_o$. No measure reconstruction has been performed.} \label{figure2} \end{figure} If $\modulo{x} > \sqrt{A}$, then the solution decreases exponentially to zero, since $\alpha(x,y) \geq 0$, for all $x,y \in {\mathbb{R}}$. 
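Since $b$ and $\eta$ vanish in this example, one step of the particle scheme reduces to evaluating the nonlocal rate $c(t,\mu)(x^i) = a(x^i) - \sum_j \alpha(x^i,x^j)m^j$ at the (fixed) particle locations and updating the masses by the explicit Euler rule. A minimal vectorized sketch of this update (in Python; all names and the choice of initial support are ours and purely illustrative) reads:
\begin{verbatim}
import numpy as np

def step_example2(x, m, dt, A):
    """One explicit Euler step for Example 2: positions x are fixed and
    the masses evolve according to dm_i/dt = -c(mu)(x_i) m_i."""
    a = A - x ** 2
    alpha = 1.0 / (1.0 + (x[:, None] - x[None, :]) ** 2)
    c = a - alpha @ m          # c(mu)(x_i) = a(x_i) - sum_j alpha(x_i, x_j) m_j
    return m * (1.0 - dt * c)

# uniformly distributed Dirac Deltas with equal mass (illustrative choice)
M0 = 400
x = np.linspace(-2.0, 2.0, M0)
m = np.full(M0, 1.0 / M0)
for _ in range(1000):
    m = step_example2(x, m, dt=0.1, A=1.5)
\end{verbatim}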
This equation can describe a population structured with respect to the trait $x$, and its asymptotic behaviour then reflects the speciation process. Typically, after a long time period only a few traits are observable, since the rest of the population has become extinct. Under some assumptions, there exists a linearly stable steady solution $\bar \mu$, given by a sum of Dirac Deltas, as shown in \cite{des}. The number of Dirac measures depends on the parameter $A$, and some of these stationary solutions are explicit. Figures \ref{figure2} and \ref{figure2b} present the evolution and long time behaviour of solutions for different choices of the parameter $A$. These results are consistent with the findings in \cite{des}. In all cases, we assumed that the initial datum is given as a sum of uniformly distributed Dirac Deltas with equal masses. \begin{figure} \caption{(Example 2) Stationary State as a function of $A>0$. We show the numerical solution at time $t=10000$ depending on the parameter $A \in [0,3]$. For simulations, we set $\Delta t = 0.05$, $\bar M_o = 320$ and $\mu_o = \sum_{i=1}^{\bar M_o} ({1}/{\bar M_o}) \delta_{x^i_o}$, where $x^i_o := -2 + (i-\frac{1}{2})/\bar M_o$. No measure reconstruction has been performed.} \label{figure2b} \end{figure} \subsection{Example 3 (size structure - equal fission)} In this subsection, we shall concentrate on a size-structured cell population model, in which a cell reproduces itself by fission into two equal parts. We assume that the cell divides after it has reached a minimal size $x_o > 0$. Therefore, there exists a minimal possible cell size, whose value is $x_o / 2$. Moreover, cells have to divide before they reach a maximal size, which is normalized to $x_{max} = 1$. Similarly as in \cite{AnguloFission}, we set $$ x_o = \frac{1}{4}, \;\; b(x) = 0.1(1-x), \;\; c(x) = \beta(x),\;\;\eta(t,\mu)(y) = 2\beta(y)\delta_{x = y/2},\;\;\mathrm{and}\;\; \mu_o(x) = (1- x)(x - x_o/2)^3, $$ where \[ \beta(y) = \left\{ \begin{array}{c l} 0, & \quad \text{for $y \in ({\mathbb{R}}^{+} \backslash\; [x_o,1])$,}\\[3mm] \displaystyle\frac{b(y) \varphi(y)}{1 - \int_{x_o}^{y} \varphi(x) \d x}, & \quad \text{for $y \in [x_o,1]$,}\\ \end{array} \right. \] and \[ \varphi(y) = \left\{ \begin{array}{l l} \frac{160}{117}\left( -\frac{2}{3} + \frac{8}{3} y \right)^3, & \quad \text{for $y \in [x_o,(x_o + 1)/2]$,}\\ \frac{32}{117}\left( -20 + 40 y + \frac{320}{3}\left(y - \frac{5}{8} \right)^2 \right) + \frac{5120}{9}\left(y - \frac{5}{8} \right)^3\left( \frac{8}{3} y - \frac{11}{3} \right), & \quad \text{for $y \in ((x_o +1)/2,1]$.}\\ \end{array} \right. \] \begin{figure} \caption{(Example 3) Numerical solution at $t = 0, 1, 5, 10, 50, 500$, calculated for $\Delta t = 0.0125$, $\bar M_o = \bar M = 2800$. Fixed-equal mass reconstruction has been performed every $4$ time steps. We show the numerical solution after the fixed-location reconstruction with parameter $\bar M = 70$ and normalization.} \label{figure23} \end{figure} Figure \ref{figure23} shows the long time behaviour of a numerical solution for a particular choice of parameters. We observe the convergence towards a stationary profile once normalized, since the mass grows exponentially in time, as discussed in \cite{DHT,AnguloFission}. We remark that this structured population model cannot be discretized using the standard EBT method since particles divide at different sizes and the nonlocal term cannot be understood as a boundary condition.
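In this example the birth step of the scheme doubles the number of Dirac Deltas at every time step: each Dirac Delta located at $y$ loses mass at rate $\beta(y)$, while new mass is created at $y/2$ at rate $2\beta(y)$. A purely illustrative sketch of one time step (in Python; the forward Euler transport stands in for the Runge-Kutta step used in the actual simulations, and all names are ours) could read:
\begin{verbatim}
import numpy as np

def fission_step(x, m, dt, b, beta):
    """One splitting step for equal mitosis (Example 3): transport by b,
    then explicit Euler mass update; each Delta at y spawns a daughter
    at y/2, so the number of Deltas doubles."""
    x_new = x + dt * b(x)                         # illustrative Euler transport
    m_mothers = m * (1.0 - dt * beta(x_new))      # here c(x) = beta(x)
    m_daughters = dt * 2.0 * beta(x_new) * m      # alpha(y/2, y) = 2 beta(y)
    return (np.concatenate([x_new, x_new / 2.0]),
            np.concatenate([m_mothers, m_daughters]))
\end{verbatim}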
In order to keep the number of Dirac Deltas under control, we perform the reconstruction procedure as discussed in Subsection \ref{app}. Let us point out that the convergence towards normalized stationary states for similar models in the framework of Lebesgue spaces has been proved in \cite{PR,LP,CCM,BCG}. Finding the properties of these stationary states numerically is a relevant question that will be discussed elsewhere. \subsection{Example 4 (selection-mutation)} The last test case concerns a simple selection-mutation model in which the population is structured with respect to an evolutionary trait, as in \cite{CCDR}. We assume that $x \in [0,1]$ and set the parameters as $$ b(x) = 0, \;\; c(\mu)(x) = - (1-\varepsilon)B(x) + m(\mu),\;\;\; \mathrm{and}\;\;\; \eta(y) = \varepsilon \sum_{p=1}^{r}B(y) \beta_p(y)\delta_{x = \bar x_p(y)}. $$ Here, $B(x)$ represents the trait-specific birth rate, $m(\mu)$ is the death rate depending on the population distribution, and $\beta_p$ represents the mutation probability density, i.e., the probability that a parent with trait $y$ has a newborn with trait $\bar x_p(y)$. Finally, the parameter $\varepsilon$ is the mutation rate; accordingly, the right-hand side consists of two parts, corresponding to offspring that are a faithful reproduction of their parents and to offspring that mutate their trait, slightly with high probability. Let us point out that the mutation term in this model is an approximation, in the sense of Remark \ref{aproxeta}, of a continuous nonlocal term of the form $$ \int_0^1 B(y) \beta(x,y) \,\d\mu(y)\,\qquad \mbox{with}\qquad \int_0^1 \beta(x,y) \d x = 1\,, $$ and, in practice, we can assume that $\beta$ has a Gaussian shape concentrated around the diagonal $x=y$. The approximated nonlocal term is constructed by substituting the mutation probability density $\beta(x,y)$ at each $y$ by an approximation with $r$ Dirac Delta points $\{\bar x_p(y)\}_{p=1}^r$, leading to the form of $\eta(y)$ above. More precisely, the approximated $\eta(y)$ is defined by duality against test functions $\varphi\in \mathbf{C}_0({\mathbb{R}}^+)$ as \begin{align*} \int_{{\mathbb{R}}^+} \int_{{\mathbb{R}}^+}\varphi(x) B(y) \beta(x,y) \d x \d \mu(y) &\approx \sum_{p=1}^{r} \int_{{\mathbb{R}}^+} B(y) \beta_p(y) \varphi(\bar x_p(y)) \d\mu(y) \\ &= \int_{{\mathbb{R}}^+} \int_{{\mathbb{R}}^+}\varphi(x)[\d\eta(t,\mu)(y)](x) \d\mu(y) \,. \end{align*} \begin{figure} \caption{(Example 4) The subplots show the function $\eta(y)$ for $y = 0.15$, $y=0.5$ and $y=0.99$, respectively, and parameters $r = 40$, $a = 0.4$. } \label{eta} \end{figure} In our simulations and based on the previous considerations, we consider $B(x) = x(1-x)$, the death rate is assumed to depend increasingly on the total population with a saturation of the form $m(\mu) = 1 - \exp\left\{-\int_{0}^{1} \d \mu\right\}$, and the approximation of the mutation kernel is chosen with $r = 10$, \[ \bar x_p(y) = \left\{ \begin{array}{l l} (y-a) + \frac{a}{r}\left(2p - 1\right), & \quad \text{if $0 \leq (y-a) + \frac{a}{r}\left(2p - 1\right) \leq 1$,}\\ 0, & \quad \text{otherwise},\\ \end{array} \right. \] and \[ \beta_p(y) = \frac{\check \beta_p(y)}{\sum_{p=1}^{r} {\check \beta_p(y)}},\;\; \text{where}\;\; \check \beta_p(y) = \left\{ \begin{array}{l l} \mathrm{exp}\left( -\frac{a^2}{a^2 - (\bar x_p(y) - y)^2} \right) , & \quad \text{if $p$ is s.t. $ 0 \leq \bar x_p(y) \leq 1$,}\\ 0, & \quad \text{otherwise}.\\ \end{array} \right..
\] The parameter $a$ is related to the mutation strength in the sense that a distance between a parent and its offspring is not greater than $a$, set in our simulations to $a = 0.4$. Figure \ref{ex4_1} shows the convergence towards stationary states for different values of the mutation rate $\varepsilon$. We observe that the stabilization rate depends on $\varepsilon$, being slower as $\varepsilon$ gets smaller and smaller. The existence of these stationary states with the full mutation kernel $\eta$ was proved in \cite{CCDR} without information about their stability. \begin{figure} \caption{(Example 4) Long time behaviour of numerical solutions. The plots show the evolution of a numerical solution on the time interval $[0, 2000]$ for $\varepsilon = 0.1, 0.05, 0.025$, and $0.0125$, respectively. For simulations, we set $\Delta t = 0.025$, $\bar M_o = \bar M= 100$, and $\mu_o = \sum_{i=1}^{\bar M_o} ({1}/{\bar M_o}) \delta_{x^i_o}$, where $x^i_o := (i-\frac{1}{2})/\bar M_o$. Fixed location reconstruction has been performed every $2$ time steps.} \label{ex4_1} \end{figure} \section*{Acknowledgments} JAC acknowledges support from the Royal Society by a Wolfson Research Merit Award and by the Engineering and Physical Sciences Research Council grant with references EP/K008404/1. JAC was partially supported by the project MTM2011-27739-C04-02 DGI (Spain) and 2009-SGR-345 from AGAUR-Generalitat de Catalunya. PG is the coordinator and AU is a Ph.D student in the International Ph.D. Projects Programme of Foundation for Polish Science operated within the Innovative Economy Operational Programme 2007-2013 (Ph.D. Programme: Mathematical Methods in Natural Sciences). PG is supported by the grant of National Science Centre no 6085/B/H03/2011/40. AU is supported by the grant of National Science Centre no 2012/05/N/ST1/03132. \end{document}
\begin{document} \begin{frontmatter} \title{A finite element method by patch reconstruction for the Stokes problem using mixed formulations} \author[add1]{Ruo Li} \ead{[email protected]} \author[add2]{Zhiyuan Sun} \ead{[email protected]} \author[add2]{Fanyi Yang} \ead{[email protected]} \author[add3]{Zhijian Yang} \ead{[email protected]} \address[add1]{CAPT, LMAM and School of Mathematical Sciences, Peking University, Beijing 100871, P. R. China} \address[add2]{School of Mathematical Sciences, Peking University, Beijing 100871, P. R. China} \address[add3]{School of Mathematics and Statistics, Wuhan University} \begin{abstract} In this paper, we develop a patch reconstruction finite element method for the Stokes problem. The weak formulation of the interior penalty discontinuous Galerkin method is employed. The proposed method offers great flexibility in the choice of velocity-pressure space pairs, whose stability properties are confirmed by inf-sup tests. Numerical examples show the applicability and efficiency of the proposed method. \end{abstract} \begin{keyword} Stokes problem $\cdot$ Reconstructed basis function $\cdot$ Discontinuous Galerkin method $\cdot$ Inf-sup test \MSC[2010] 49N45\sep 65N21 \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:introduction} We are concerned in this paper with the incompressible Stokes problem, which has a wide range of applications in the approximation of low Reynolds number flows and in the time discretization of the Oseen or Navier-Stokes equations. One of the major difficulties in finite element discretizations of the Stokes problem is the incompressibility constraint, which leads to a saddle-point problem. The stability condition, often referred to as the inf-sup (LBB) condition, requires that the approximation spaces for velocity and pressure be carefully chosen \cite{boffi2013mixed}. We refer to \cite{girault1986finite, taylor1973numerical} for some specific spaces used in traditional finite element methods to solve the Stokes problem. In recent years, discontinuous Galerkin (DG) methods have achieved great success in computational fluid dynamics; see the state-of-the-art survey \cite{cockburn2000development}. Hansbo and Larson propose and analyze an interior penalty DG method for incompressible and nearly incompressible linear elasticity on triangular meshes in \cite{hansbo2002discontinuous}, where polynomial spaces of degree $k$ and $k-1$ are employed to approximate velocity and pressure, respectively. In \cite{toselli2002hp}, Toselli considers $hp$-approximations for the Stokes problem using piecewise polynomial spaces. Uniform divergence stability and error estimates with respect to $h$ and $p$ are proven for this DG formulation when the velocity is approximated with polynomials one or two degrees higher than the pressure. Numerical results show that using equal-order spaces for velocity and pressure can also work well. Sch{\"o}tzau et al. improve the estimates on tensor product meshes in \cite{schotzau2002mixed}. A local discontinuous Galerkin (LDG) method for the Stokes problem is proposed in \cite{cockburn2002local}. The LDG method can be considered as a stabilized method when the approximation spaces for velocity and pressure are chosen with the same order. { Hybrid discontinuous Galerkin methods are also of interest due to their capability of providing a superconvergent post-processing; we refer to \cite{carrero2006hybridized, Lederer2018Hybrid, Nguyen2010Hybridizable, Cockburn2009Hybridization} for more discussion.
} Some special finite element spaces can be adapted to the Stokes problem in the DG framework. Karakashian and his coworkers \cite{baker1990piecewise, karakashian1998nonconforming} propose a DG method with piecewise solenoidal vector fields which are locally divergence-free. Cockburn et al. \cite{cockburn2005incompressiblei, cockburn2005incompressibleii, carrero2006hybridized, cockburn2007note} develop the LDG method with solenoidal vector fields. By introducing a hybrid pressure, the pressure and the globally divergence-free velocity can be obtained by post-processing the LDG solution. Montlaur et al.\ \cite{montlaur2008discontinuous} present two DG formulations for incompressible flow: the first formulation is derived from an interior penalty method such that the computation of the velocity and the pressure is decoupled, while the second formulation follows the methodology in \cite{baker1990piecewise}. With an inconsistent penalty, the velocity can be computed in the absence of pressure terms. Liu \cite{liu2011penalty} presents a penalty-factor-free DG formulation for the Stokes problem with optimal error estimates. However, one of the limitations of DG methods is that the computational cost is higher than that of the continuous Galerkin method \cite{zienkiewicz2003discontinuous, montlaur2009high}, because of the duplication of degrees of freedom at interelement boundaries, especially in the three-dimensional case. In this paper, we follow the methodology in \cite{li2012efficient, 2018arXiv180300378L} to apply the patch reconstruction finite element method to the Stokes problem. Piecewise polynomial spaces built by the patch reconstruction procedure are taken to approximate velocity and pressure. The new space is a subspace of the common approximation space used in the DG framework, which allows us to employ the interior penalty formulation directly to solve the Stokes problem. As we mentioned before, it is important to verify the inf-sup condition for a mixed formulation in order to guarantee stability, and this condition is often a severe restriction for a specific discretization \cite{bathe2000inf}. We carry out a series of numerical inf-sup tests proposed in \cite{chapelle1993inf, boffi2013mixed} to show that this method is numerically stable. The proposed method has several merits. First, the number of degrees of freedom (DOFs) of the system is determined entirely by the mesh partition and is independent of the interpolation order. Second, the method is easy to implement on arbitrary polygonal meshes because the construction of the approximation space is independent of the geometric structure of the mesh. Third, we emphasize that the spaces approximating velocity and pressure can be chosen with great flexibility. The results of the numerical inf-sup tests exhibit the robustness of our method even in some extreme cases. The outline of this paper is as follows. In Section \ref{sec:reconopreator}, we briefly introduce the patch reconstruction procedure and the finite element space. Then the scheme of the mixed interior penalty DG method and its error analysis for the Stokes problem are presented in Section \ref{sec:weakform}. In Section \ref{sec:infsuptest}, we briefly review the inf-sup test and carry out a series of numerical inf-sup tests in several situations to show that the proposed method satisfies the inf-sup condition. Finally, two-dimensional numerical examples are presented in Section \ref{sec:numericalresults} to illustrate the accuracy and efficiency of the proposed approach and to verify our theoretical results.
\section{Reconstruction operator} \label{sec:reconopreator} In this section, we introduce a reconstruction operator, which can be constructed on arbitrary polygonal meshes, and its approximation properties. Let $\Omega\subset\mathbb R^d, d=2, 3$, be a convex polygonal domain with boundary $\partial\Omega$. We denote by $\mathcal{T}_h$ a subdivision that partitions $\Omega$ into polygonal elements, and let $\mathcal E_h$ be the set of $(d-1)$-dimensional interfaces (edges) of all elements in $\mathcal T_h$, $\mathcal E_h^i$ the set of interior faces and $\mathcal E_h^b$ the set of the faces on the domain boundary $\partial\Omega$. We set \begin{displaymath} h=\max_{K\in\mathcal T_h} h_K,\quad h_K=\text{diam}(K),\quad h_e=\text{diam}(e), \end{displaymath} for all $K\in \mathcal T_h$ and $e \in \mathcal E_h$. Further, we assume that the partition $\mc{T}_h$ admits the following shape regularity conditions ~\cite{Brezzi:2009,DaVeiga2014}: \begin{enumerate} \item[{\bf H1}\;]There exists an integer $N$ independent of $h$ such that any element $K$ admits a sub-decomposition $\wt{\mathcal T}_{h | K}$ made of at most $N$ triangles. \item[{\bf H2}\;] $\wt{\mc{T}_h}$ is a compatible sub-decomposition, in the sense that any triangle $T\in\wt{\mc{T}_h}$ is shape-regular in the sense of Ciarlet-Raviart~\cite{ciarlet:1978}: there exists a real positive number $\sigma$ independent of $h$ such that $h_T/\rho_T\le\sigma$, where $\rho_T$ is the radius of the largest ball inscribed in $T$. \end{enumerate} Many useful properties for the analysis in the DG framework can be derived from the above assumptions, such as the Agmon inequality and the inverse inequality \cite{DaVeiga2014, antonietti2013hp, 2018arXiv180300378L}: \begin{enumerate} \item[{\bf M1}\;][Agmon inequality] There exists a constant $C$ that depends on $N$ and $\sigma$ but is independent of $h_K$ such that \begin{displaymath} \|v\|_{L^2(\partial K)}^2 \leq C\left( h_K^{-1}\|v\|_{L^2(K)}^2 + h_K\|\nabla v\|_{L^2(K)}^2 \right), \quad \forall v \in H^1(K). \end{displaymath} \item [{\bf M2}\;][Inverse inequality] There exists a constant $C$ that depends on $N$ and $\sigma$ but is independent of $h_K$ such that \begin{displaymath} \|\nabla v\|_{L^2(K)} \leq Cm^2/h_K\|v\|_{L^2(K)}, \quad \forall v \in \mathbb P_m(K). \end{displaymath} \end{enumerate} { Given the partition $\mc{T}_h$, we define the reconstruction operator as follows. First, in each element $K \in \mc{T}_h$ we specify a point $\bm x_K \in K$ as the collocation point; here we simply let $\bm x_K$ be the barycenter of $K$. Then for each $K \in \mc{T}_h$ we construct an element patch $S(K)$, which is a set consisting of $K$ itself and some elements around $K$. Specifically, we construct $S(K)$ in a recursive manner: we set $S(K) = \left\{ K \right\}$ first, and we enlarge $S(K)$ by recursively adding all the von Neumann neighbours (adjacent edge-neighbouring elements) of the elements in $S(K)$ until enough elements have been collected into the element patch. We denote by $\# S(K)$ the cardinality of $S(K)$; an example of the construction of $S(K)$ with $\# S(K) = 12$ is shown in Fig \ref{fig:buildpatch}.
} \begin{figure} \caption{Step 1} \caption{Step 2} \caption{Step 3} \caption{Step 4} \caption{Build patch for $\# S(K) = 12$} \label{fig:buildpatch} \end{figure} For element $K$, we collect all collocation points in a set $\mathcal I_K$: \begin{displaymath} \mathcal I_K\triangleq\left\{\bm x_{\widetilde K}\ |\ \bm x_{\widetilde K}\ \text{is the barycenter of}\ \widetilde K,\ \forall\widetilde K\in S(K)\right\}. \end{displaymath} Let $U_h$ be the space consisting of piecewise constant functions: \begin{displaymath} U_h\triangleq\left\{ v\in L^2(\Omega)\ \big |\ v|_K \in \mathbb P_0(K),\ \forall K\in \mathcal T_h\right\}, \end{displaymath} where $\mathbb P_n$ is the polynomial space of degree not greater than $n$. For any $v\in U_h$, we reconstruct a $m$th-order polynomial denoted by $\mathcal R_K^m v$ on $S(K)$ by the following least squares problem: \begin{equation} \mathcal R_K^mv= \mathop{\arg\min}_{p\in\mathbb P_m(S(K))} \ \sum_{\bm x\in \mathcal I_K}|v(\bm x)-p(\bm x)|^2. \label{eq:lsproblem} \end{equation} The uniqueness condition for the problem \eqref{eq:lsproblem} is provided by the condition $\# S(K)\geq\text{dim}(\mathbb P_m) $ and the following assumption \cite{li2012efficient, li2016discontinuous}: \begin{assumption} For $\forall K\in\mathcal T_h$ and $\forall p \in \mathbb P_m(S(K))$, problem \eqref{eq:lsproblem} satisfies \begin{displaymath} p|_{\mathcal I(K)}=\bm0\quad\Longrightarrow\quad p|_{S(K)}\equiv0. \end{displaymath} \label{as:unique} \end{assumption} Hereafter, we assume the uniqueness condition for \eqref{eq:lsproblem} always holds. For any $g \in U_h$, we restrict the definition domain of the polynomial $\mathcal R_K^m g$ on element $K$ to define a global reconstruction operator which is denoted by $\mathcal R^m$: \begin{displaymath} \mathcal R^mg|_K = (\mathcal R_K^mg)|_K,\quad \forall K\in \mathcal T_h. \end{displaymath} Then we extend the reconstruction operator to an operator defined on $C^0(\Omega)$, still denoted as $\mathcal R^m$: \begin{displaymath} \mathcal R^mu=\mathcal R^m\tilde u,\quad \tilde u\in U_h, \quad \tilde u(\bm x_K)=u(\bm x_K), \quad \forall u\in C^0(\Omega). \end{displaymath} We note that $\mathcal R^m$ is a linear operator whose image is actually a piecewise $m$th-order polynomial space which is denoted as \begin{displaymath} V_{h}^{m} = \mathcal R^m U_h. \end{displaymath} { Further, we give a group of basis functions of the space $V_h^m$. We define $w_K(\bm x) \in C^0(\Omega)$ such that \begin{displaymath} w_K(\bm x) = \begin{cases} 1, \quad &\bm x = \bm x_K, \\ 0, \quad &\bm x \in \widetilde K, \quad \widetilde K \neq K. \end{cases} \end{displaymath} Then we denote $\left\{ \lambda_K\ |\ \lambda_K = \mc R^m w_K \right\}$ as a group of basis functions. Given ${\lambda_K}$, we may write the reconstruction operator in an explicit way: \begin{equation} \mc R^m g = \sum_{K \in \mc{T}_h} g(\bm x_K) \lambda_K(x), \quad \forall g \in C^0(\Omega). \label{eq:explicit} \end{equation} From \eqref{eq:explicit}, it is clear that the degrees of freedom of $\mc R^m$ are the values of the unknown function at the collocation points of all elements in partition. We present a 2D example in Section \ref{sec:2dexample} to demonstrate the reconstruction process and the implementation of basis functions. We note that} $\mathcal R^m u(\forall u\in C^0(\Omega))$ may be discontinuous across the inter-element boundaries. The fact inspires us to share some well-developed theories of DG methods and enjoy its advantages. 
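To make the reconstruction procedure above concrete, the following minimal sketch (in Python, using hypothetical data structures, namely a list of element barycentres and an edge-neighbour table, neither of which is specified in the paper) builds an element patch $S(K)$ by recursively adding von Neumann neighbours and then solves the least squares problem \eqref{eq:lsproblem} for a polynomial of degree $m$ in two dimensions.
\begin{verbatim}
import numpy as np

def build_patch(K, edge_neighbours, target_size):
    # Grow S(K) by recursively adding von Neumann (edge) neighbours of the
    # elements already collected, until the patch has `target_size` elements.
    patch, frontier = [K], [K]
    while len(patch) < target_size and frontier:
        new_frontier = []
        for E in frontier:
            for N in edge_neighbours[E]:
                if N not in patch:
                    patch.append(N)
                    new_frontier.append(N)
        frontier = new_frontier
    return patch

def reconstruction_matrix(collocation_points, m):
    # Least squares fit of a 2D polynomial of degree m on the patch: returns
    # (A^T A)^{-1} A^T, whose product with the vector of collocation values
    # gives the coefficients of the reconstructed polynomial.
    pts = np.asarray(collocation_points)            # shape (#S(K), 2)
    powers = [(i, j) for i in range(m + 1) for j in range(m + 1 - i)]
    A = np.column_stack([pts[:, 0]**i * pts[:, 1]**j for i, j in powers])
    return np.linalg.solve(A.T @ A, A.T)            # shape (dim P_m, #S(K))
\end{verbatim}
Multiplying the returned matrix by the vector of collocation values $(v(\bm x_{\widetilde K}))_{\widetilde K\in S(K)}$ yields the coefficients of $\mathcal R_K^m v$; feeding in the values of $w_{K'}$ (a unit vector) recovers the coefficients of the basis function $\lambda_{K'}$ restricted to $K$, in the spirit of \eqref{eq:explicit}.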
We first introduce the traditional average and jump notations in DG method. Let $e$ be an interior edge shared by two adjacent elements $e=\partial K^{+} \cap \partial K^{-}$ with the unit outward normal vector $\bm{\mathrm n}^{+}$ and $\bm{\mathrm n}^{-}$, respectively. Let $v$ and $\bm{v}$ be the scalar-valued and vector-valued functions on $\mathcal T_h$, respectively, we define the $average$ operator $\{ \cdot \}$ as follows: \begin{displaymath} \{v\}=\frac{1}{2}(v^{+} + v^{-}), \quad \{ \bm{v} \} = \frac{1}{2}(\bm{v}^{+} + \bm{v}^{-}) , \quad\text{on }\ e\in\mathcal E_h^i, \end{displaymath} with $v^+=v|_{K^+},\ v^-=v|_{K^-},\ \bm v^+=\bm v|_{K^+},\ \bm v^-=\bm v|_{K^-}$. Further, we set the $jump$ operator $[ \hspace{-2pt} [ \cdot ] \hspace{-2pt} ]$ as \begin{displaymath} \begin{aligned} [ \hspace{-2pt} [ v ] \hspace{-2pt} ] =v^{+} \bm{\mathrm n}^{+} + v^{-} \bm{\mathrm n}^{-}, \quad [ \hspace{-2pt} [ \bm{v} ] \hspace{-2pt} ] =\bm{v}^{+}\cdot \bm{\mathrm n}^{+}+\bm{v}^{-}\cdot \bm{\mathrm n}^{-}, \\ [ \hspace{-2pt} [ \bm{v} \otimes\bm{\mathrm n} ] \hspace{-2pt} ] =\bm{v}^{+}\otimes \bm{\mathrm n}^{+}+\bm{v}^{-}\otimes \bm{\mathrm n}^{-},\quad \text{on}\ e\in\mathcal E_h^i. \\ \end{aligned} \end{displaymath} For $e \in \mathcal E^b_h$, we set \begin{displaymath} \begin{aligned} \{v\}=v&,\quad \{\bm v\}=\bm v,\quad [ \hspace{-2pt} [ v ] \hspace{-2pt} ]=v\bm{\mathrm n}, \\ [ \hspace{-2pt} [ \bm v ] \hspace{-2pt} ]=\bm v\cdot\bm{\mathrm n}&, \quad [ \hspace{-2pt} [ \bm{v}\otimes \bm{\mathrm n} ] \hspace{-2pt} ] = \bm{v}\otimes \bm{\mathrm n} ,\quad\text{on}\ e \in \mathcal E^b_h.\\ \end{aligned} \end{displaymath} Now we will present the error analysis of $\mathcal R^m$. We begin by defining broken Sobolev spaces of composite order $\bm{\mathrm s}=\{ s_K\geq0: \forall K\in\mathcal T_h\}$: \begin{displaymath} \begin{aligned} H^{\bm{\mathrm s}}(\Omega,\mathcal T_h)\triangleq\{u\in L^2(\Omega)&: u|_K \in H^{s_K}(K),\forall K\in\mathcal T_h\}, \\ \end{aligned} \end{displaymath} where $H^{s_K}(K)$ is the standard Sobolev spaces on element $K$. The associated broken norm is defined as \begin{displaymath} \begin{aligned} \|u\|_{H^{\bm{\mathrm s}}(\Omega,\mathcal T_h)}^2= \sum_{K\in\mathcal T_h}\|u\|_{H^{s_K}(K)}^2, \end{aligned} \end{displaymath} where $\|\cdot\|_{H^{s_K}(K)}$ is the standard Sobolev norm on element $K$. For $\bm u\in [H^{\bm{\mathrm s}}(\Omega, \mathcal T_h)]^d$, the norm is defined as \begin{displaymath} \|\bm u\|_{H^{\bm{\mathrm s}}(\Omega,\mathcal T_h)}^2= \sum_{i=1}^d \|\bm u_i\|_{H^{\bm{\mathrm s}}(\Omega, \mathcal T_h)}^2. \end{displaymath} When $s_K=s$ for all elements in $\mathcal T_h$, we simply write $H^s(\Omega,\mathcal T_h)$ and $[H^s(\Omega, \mathcal T_h)]^d$. Then we define a constant $\Lambda(m,\mathcal{I}_K)$ for $K\in \mathcal T_h$: \begin{equation} \label{eq:constant} \Lambda(m, \mathcal{I}_K) \triangleq \max_{p\in \mathbb{P}_m(S(K))} \frac{\max_{\bm{x} \in S(K)} |p(\bm{x})|}{\max_{\bm{x} \in \mathcal{I}_K} |p(\bm{x})|}, \end{equation} the Assumption \ref{as:unique} is equivalent to \begin{displaymath} \Lambda(m, \mathcal{I}_K) < \infty. \end{displaymath} The uniform upper bound of $\Lambda(m, \mathcal I_K)$ exists if element patches are convex and the triangulation is quasi-uniform \cite{li2012efficient}. We also refer to \cite{li2016discontinuous} for the estimate of $\Lambda(m, \mathcal I_K)$ in more general cases such as polygonal partition and non-convex element patch. We denote by $\Lambda_m$ the uniform upper bound of $\Lambda(m, \mathcal I_K)$. 
For convenience, the symbols $\lesssim$ and $\gtrsim$ will be used in this paper: $X_1\lesssim Y_1$ and $X_2\gtrsim Y_2$ mean that $X_1\leq C_1Y_1$ and $X_2\geq C_2Y_2$ for some positive constants $C_1$ and $C_2$ which are independent of the mesh size $h$. With $\Lambda_m$, we have the following estimates. \begin{lemma} Let $g\in H^{m+1}(\Omega)$ $(m\geq0)$ and $K\in\mathcal T_h$; then \begin{equation} \|g-\mathcal{R}^m g\|_{L^2(K)}\lesssim \Lambda_{m} h^{m+1}\|g\|_{H^{m+1}(S(K))}. \label{eq:reconL2error} \end{equation} \label{le:reconL2error} \end{lemma} \begin{lemma} Let $g\in H^{m+1}(\Omega)$ $(m\geq0)$ and $K\in\mathcal T_h$; then \begin{equation} \|g-\mathcal R^m g\|_{L^2(\partial K)}\lesssim \Lambda_{m} h^{m+\frac12}\|g\|_{H^{m+1}(S(K))}. \label{eq:recontraceinequality} \end{equation} \label{th:recontraceinequality} \end{lemma} For the standard Sobolev norm, we have the following estimate: \begin{lemma} Let $g\in H^{m+1}(\Omega)$ $(m\geq0)$ and $K\in\mathcal T_h$; then \begin{equation} \|g-\mathcal R^mg\|_{H^1(K)}\lesssim \Lambda_{m} h^{m}\|g\|_{H^{m+1}(S(K))}. \label{eq:reconSobleverror} \end{equation} \label{th:reconSobleverror} \end{lemma} We refer to \cite{li2012efficient, li2016discontinuous} for detailed proofs and for further discussion of $S(K)$ and $\# S(K)$. Here we note that one of the conditions guaranteeing the uniform upper bound $\Lambda_m$ is that $\# S(K)$ should be much larger than $\text{dim}(\mathbb P_m)$. In Section \ref{sec:infsuptest} we list the values of $\# S(K)$ used in all numerical experiments. Finally, we derive the estimate in the DG energy norm. For scalar-valued functions, the DG energy norm is defined as \begin{displaymath} \begin{aligned} \|u\|_{\mathrm{DG}}^2&\triangleq\sum_{K\in\mathcal T_h}|u|_{H^1(K)}^2 + \sum_{e\in\mathcal E_h} \frac1{h_e}\|[ \hspace{-2pt} [ u ] \hspace{-2pt} ]\|_{L^2(e)}^2,\quad \forall u\in H^1(\Omega, \mathcal T_h).\\ \end{aligned} \end{displaymath} \begin{theorem} Let $g\in H^{m+1}(\Omega)$ $(m\geq0)$; then \begin{equation} \|g-\mathcal R^mg\|_{\mathrm{DG}}\lesssim \Lambda_{m} h^{m} \|g\|_{H^{m+1}(\Omega)}. \label{eq:interpolation} \end{equation} \label{th:interpolationerrorDG} \end{theorem} \begin{proof} From Lemma \ref{th:reconSobleverror}, we have \begin{displaymath} \begin{aligned} \sum_{K\in\mathcal T_h}|g-\mathcal R^mg|_{H^1(K)}&\lesssim \sum_{K\in\mathcal T_h}\Lambda_m h^{m}\|g\|_{H^{m+1}(S(K))}\\ &\lesssim \Lambda_m h^{m}\|g\|_{H^{m+1}(\Omega)}.\\ \end{aligned} \end{displaymath} For any $e\in\mathcal E_h^i$ shared by elements $K_1$ and $K_2$, we have \begin{displaymath} \frac1{h_e}\|[ \hspace{-2pt} [ g-\mathcal R^m g] \hspace{-2pt} ] \|_{L^2(e)}^2\lesssim \frac{1}{h_e}\left( \|g-\mathcal R^m g\|_{L^2(\partial K_1)}^2 +\|g-\mathcal R^m g \|_{L^2(\partial K_2)}^2\right).
\end{displaymath} From Lemma \ref{th:recontraceinequality}, we get \begin{displaymath} \begin{aligned} \frac1{h_e}\|g-\mathcal R^m g\|_{L^2(\partial K_1)}^2&\lesssim\Lambda_m h^{2m}\|g\|_{H^{m+1}(K_1)}^2,\\ \frac1{h_e}\|g-\mathcal R^m g\|_{L^2(\partial K_2)}^2&\lesssim\Lambda_m h^{2m}\|g\|_{H^{m+1}(K_2)}^2.\\ \end{aligned} \end{displaymath} For any $e\in\mathcal E_h^b$, assume $e$ is a face of element $K$, we have \begin{displaymath} \begin{aligned} \frac1{h_e}\|[ \hspace{-2pt} [ g-\mathcal R^m g] \hspace{-2pt} ] \|_{L^2(e)}^2&\leq \frac{1}{h_e}\|g-\mathcal R^m g\|_{L^2(\partial K)}^2\\ &\lesssim \Lambda_m h^{2m}\|g\|_{H^{m+1}(K)}^2.\\ \end{aligned} \end{displaymath} Combining the above inequalities gives the estimate \eqref{eq:interpolation}, which completes the proof. \end{proof} For the vector-valued function, the DG energy norm is defined as: \begin{displaymath} \|\bm u\|_{\mathrm{DG}}^2\triangleq\sum_{i=1}^d\|\bm u_i\|_{ \mathrm{DG}}^2,\quad\forall \bm u\in [H^1(\Omega,\mathcal T_h)]^d, \end{displaymath} and the reconstruction operator is defined component-wisely for $[U_h]^d$, still denoted by $\mathcal R^m$: \begin{displaymath} \begin{aligned} {\mathcal R}^m\bm v&=[\mathcal R^m\bm v_i]^d, \quad 1\leq i\leq d,\quad \forall \bm v \in [U_h]^d.\\ \end{aligned} \end{displaymath} Then the operator can be extended on $[C^0(\Omega)]^d$ and the corresponding estimate is written as: \begin{theorem} let $\bm g\in [H^{m+1}(\Omega)]^d(m\geq0)$, then \begin{displaymath} \|\bm g-{\mathcal R}^m\bm g\|_{\mathrm{DG}}\lesssim \Lambda_m h^{m}\|\bm g\|_{H^{m+1}(\Omega)}. \end{displaymath} \end{theorem} \begin{proof} It is a direct extension from Theorem \ref{th:interpolationerrorDG}. \end{proof} \section{The weak form of the stokes problem} \label{sec:weakform} In this section, we consider the incompressible Stokes problem with Dirichlet boundary condition, which seeks the velocity field $\bm u$ and its associated pressure $p$ satisfying \begin{equation} \begin{aligned} -\Delta\bm u+\nabla p&=\bm f \qquad \text{in} \ \Omega,\\ \nabla\cdot\bm u&=0 \qquad\text{in}\ \Omega,\\ \bm u&=\bm g \qquad\text{on}\ \partial\Omega,\\ \end{aligned} \label{eq:stokes} \end{equation} where $\bm f$ is the given source term and $\bm g$ is a Dirichlet boundary condition that satisfies the compatibility condition \begin{displaymath} \int_{\partial \Omega} \bm g\cdot\bm{\mathrm n}\mathrm ds=0. \end{displaymath} For positive integer $k,k'$, we define the following finite element spaces to approximate velocity and pressure: \begin{displaymath} \begin{aligned} \bm V_h^k=[V_h^k]^d,\quad Q_{h}^{k'}=V_{h}^{k'}.\\ \end{aligned} \end{displaymath} We note that finite element spaces $\bm V_{h}^{k}$ and $Q_{h}^{k'}$ are the subspace of the common discontinuous Galerkin finite element spaces, which implies that the interior penalty discontinuous Galerkin method \cite{hansbo2002discontinuous,montlaur2008discontinuous} can be directly applied to the Stokes problem \eqref{eq:stokes}. For a vector $\bm u$, we define the second-order tensor $\nabla\bm u$ by \begin{displaymath} (\nabla\bm u)_{i,j}=\frac{\partial\bm u_i}{\partial x_j}, \quad 1\leq i,j\leq d. 
\end{displaymath} The discrete problem for the Stokes problem \eqref{eq:stokes} is as: find $(\bm u_h, p_h)\in \bm V_{h}^{k} \times Q_{h}^{k'}$ such that \begin{equation} \begin{aligned} a(\bm u_h, \bm v_h)+b(\bm v_h, p_h)&=l(\bm v_h),\quad \forall \bm v_h\in \bm V_{h}^{k},\\ b(\bm u_h, q_h)&=(q_h,\bm{\mathrm n}\cdot\bm g)_{\partial \Omega},\quad \forall q_h\in Q_{h}^{k'},\\ \end{aligned} \label{eq:weakform} \end{equation} where symmetric bilinear form $a(\cdot, \cdot)$ is given by \begin{equation} \begin{aligned} a(\bm{u}, \bm{v})&=\int_{\Omega}\nabla \bm{u} : \nabla \bm{v} \,\mathrm{d}x\\ &-\int_{\mathcal E_h} (\{\nabla \bm{u}\} : [ \hspace{-2pt} [ \bm{v} \otimes \bm{n} ] \hspace{-2pt} ]+[ \hspace{-2pt} [ \bm{u} \otimes \bm{n} ] \hspace{-2pt} ] : \{\nabla \bm{v}\})\mathrm ds \\ &+\int_{\mathcal E_h}\eta[ \hspace{-2pt} [ \bm{u} \otimes \bm{n} ] \hspace{-2pt} ] : [ \hspace{-2pt} [ \bm{v} \otimes \bm{n} ] \hspace{-2pt} ] \mathrm ds, \quad \forall \bm{u},\bm{v}\in [H^1(\Omega, \mathcal T_h)]^d.\\ \end{aligned} \label{eq:ellipticform} \end{equation} The term $\eta$ is referred to as the penalty parameter which is defined on $\mathcal E_h$ by \begin{displaymath} \eta|_e = \eta_e,\quad \forall e\in \mathcal E_h, \end{displaymath} and will be specified later. The bilinear form $b(\cdot, \cdot)$ and the linear form $l(\cdot)$ are defined as \begin{equation} \begin{aligned} b(\bm v, p)&=-\int_{\Omega} p\nabla\cdot\bm v\,\mathrm{d}x + \int_{\mathcal E_h} \{p\}[ \hspace{-2pt} [ \bm{v} ] \hspace{-2pt} ]\mathrm ds,\\ l(\bm v)&=\int_{\Omega}\bm f \cdot \bm v\,\mathrm{d}x-\int_{\mathcal E_h^b}\bm g \cdot (\nabla \bm v\cdot\bm{\mathrm n}) \mathrm ds + \int_{\mathcal E_h^b}\eta \bm g\cdot \bm v\mathrm ds,\\ \end{aligned} \label{eq:divergenceform} \end{equation} for $\forall \bm v\in [H^1(\Omega,\mathcal T_h)]^d$ and $p\in L^2(\Omega)$. Now we present the standard continuity and coercivity properties of the bilinear form $a(\cdot. \cdot)$. Actually the bilinear form $a(\cdot, \cdot)$ is a direct extension from the interior penalty bilinear form used for solving the elliptic problems \cite{arnold1982interior}. It is easy to extend the theoretical results of solving the elliptic problems to $a(\cdot, \cdot)$. \begin{lemma} The bilinear form $a(\cdot, \cdot)$, defined in \eqref{eq:ellipticform}, is continuous when $\eta\geq0$. The following inequality holds: \begin{displaymath} |a(\bm u, \bm v)|\lesssim\|\bm u\|_{\mathrm{\mathrm{DG}}}\|\bm v\|_{\mathrm{\mathrm{DG}}},\quad \forall \bm{u,v}\in [H^1(\Omega, \mathcal T_h)]^d. \end{displaymath} \label{le:ellipticcontinuous} \end{lemma} \begin{lemma} Let \begin{displaymath} \eta|_e=\frac{\mu}{h_e},\quad \forall e\in \mathcal E_h, \end{displaymath} where $\mu$ is a positive constant. With sufficiently large $\mu$, the following inequality holds: \begin{displaymath} |a(\bm u_h, \bm u_h)|\gtrsim \|\bm u_h\|_{\mathrm{\mathrm{DG}}}^2, \quad \forall \bm u_h\in \bm V_{h}^{k}. \end{displaymath} \label{le:ellipticcoercivity} \end{lemma} The detailed proofs of Lemma \ref{le:ellipticcontinuous} and Lemma \ref{le:ellipticcoercivity} could be found in \cite{arnold2002unified,hansbo2002discontinuous, montlaur2008discontinuous}. We also refer to \cite{arnold2002unified} where a unified method is employed to analyse the choices of the penalty parameter $\eta$. For $b(\cdot,\cdot)$, we have the analogous continuity property. \begin{lemma} The bilinear form $b(\cdot, \cdot)$, defined in \eqref{eq:divergenceform}, is continuous. 
The following inequality holds: \begin{displaymath} |b(\bm v, q)|\lesssim\|\bm v\|_{\mathrm{DG}} \| q \|_{L^2(\Omega)},\quad \forall \bm{v}\in [H^1(\Omega, \mc{T}_h)]^d,\ \forall q\in L^2(\Omega). \end{displaymath} \label{le:divergencecontinuous} \end{lemma} Besides the continuity of $a(\cdot, \cdot)$, $b(\cdot,\cdot)$ and the coercivity of $a(\cdot, \cdot)$, { the existence of a stable finite element approximation $(\bm u_h, p_h)$ depends on choosing a pair of spaces $\bm V_{h}^{k}$ and $Q_h^{k'}$ such that the following inf-sup condition holds \cite{boffi2013mixed} }: \begin{equation} \mathop{\inf\qquad\sup}_{q_h\in Q_{h}^{k'}\ \bm v_h\in \bm V_{h}^{k}} \frac{b(\bm v_h, q_h)}{ \|\bm v_h\|_{\mathrm{DG}}\|q_h\|_{L^2(\Omega)}}\geq \beta, \label{eq:infsupcondition} \end{equation} where $\beta$ is a positive constant. Since the finite element space we build depends on the collocation points and the element patches, the theoretical verification of the inf-sup condition for the pair $\bm V_{h}^{k} \times Q_{h}^{k'}$ is very difficult in general. Chapelle and Bathe \cite{chapelle1993inf} propose a numerical test to check whether the inf-sup condition is satisfied for a given finite element discretization. In the next section, we carry out a series of numerical evaluations for different $k$ and $k'$ to give an indication that the inf-sup condition is satisfied. If the inf-sup condition holds, we can state a standard a priori error estimate for the mixed method \eqref{eq:weakform}. \begin{theorem} Let the exact solution $(\bm u, p)$ to the Stokes problem \eqref{eq:stokes} belong to $[H^{k+1}(\Omega)]^d \times H^{k'+1}(\Omega)$ with $k\geq 1$ and $k'\geq 0$, let $(\bm u_h, p_h)$ be the numerical solution to \eqref{eq:weakform}, and assume that the inf-sup condition \eqref{eq:infsupcondition} holds and the penalty parameter $\eta$ is set properly. Then the following estimate holds: \begin{equation} \|\bm u- \bm u_h\|_{\mathrm{DG}}+\|p - p_h\|_{L^2(\Omega)}\lesssim h^s\left( \|\bm u\|_{H^{k+1}( \Omega)}+\|p\|_{H^{k'+1}(\Omega)}\right), \label{eq:prioriestimate} \end{equation} \label{th:prioriestimate} where $s=\min(k,k'+1)$. \end{theorem} \begin{proof} We define $\bm Z (\bm g)\subset \bm V_h^{k}$ by \begin{equation}\label{eq:kernel_space} \bm Z(\bm g)=\{\bm v \in \bm V_h^{k}:b(\bm v ,q )= \int_{\mathcal E_h^b} \bm g\cdot \bm n q \mathrm{d}s, \ \forall q\in Q_h^{k'} \}. \end{equation} Consider $\bm w \in \bm Z (\bm g)$ and $q\in Q_h^{k'}$. By Lemma \ref{le:ellipticcoercivity}, we have \begin{align*} \|\bm w -\bm u_h\|_{\mathrm{DG}}^2 &\lesssim a(\bm w -\bm u_h, \bm w- \bm u_h) \\ &= a(\bm w -\bm u, \bm w- \bm u_h)+ a(\bm u -\bm u_h, \bm w- \bm u_h)\\ &=a(\bm w -\bm u, \bm w- \bm u_h) - b(\bm w -\bm u_h, p- p_h). \end{align*} Since $\bm w -\bm u_h\in \bm Z(0)$, the term $p_h$ can be replaced by any $q\in Q_h^{k'}$, and we obtain \begin{displaymath} \|\bm w -\bm u_h\|_{\mathrm{DG}}^2\lesssim a(\bm w -\bm u, \bm w- \bm u_h) - b(\bm w -\bm u_h, p- q). \end{displaymath} Using Lemma \ref{le:ellipticcontinuous}, Lemma \ref{le:divergencecontinuous} and the triangle inequality gives \begin{equation}\label{eq:kernel_ineq} \|\bm u -\bm u_h\|_{\mathrm{DG}} \lesssim \|\bm u- \bm w\|_{\mathrm{DG}} + \| p- q\|_{L^2(\Omega)}, \ \bm w\in \bm Z(\bm g), q\in Q_h^{k'}. \end{equation} Then we deal with an arbitrary function in $\bm V_h^k$.
For a fixed $\bm v \in \bm V_h^k$, we consider the problem of finding $\bm z(\bm v) \in \bm V_h^k$ such that \begin{displaymath} b(\bm z(\bm v) , q)= b(\bm u-\bm v,q), \ \forall q\in Q_h^{k'}. \end{displaymath} Thanks to the inf-sup condition \eqref{eq:infsupcondition} and \cite[Proposition 5.1.1, p.270]{boffi2013mixed}, we can find a solution $\bm z(\bm v)\in \bm V_h^k$ such that \begin{equation}\label{eq:inf_sup_ineq} \| \bm z(\bm v) \|_{\mathrm{DG}} \lesssim \sup_{0\neq q\in Q_{h}^{k'}} \frac{b(\bm z(\bm v), q)}{\|q\|_{L^2(\Omega)}}= \sup_{0 \neq q\in Q_{h}^{k'}} \frac{b(\bm u -\bm v, q)}{\|q\|_{L^2(\Omega)}} \lesssim \|\bm u -\bm v\|_{\mathrm{DG}}, \end{equation} where the last step uses Lemma \ref{le:divergencecontinuous}. Since $\nabla\cdot\bm u=0$ and $\bm u=\bm g$ on $\partial\Omega$, \begin{displaymath} b(\bm z(\bm v) +\bm v,q)=b (\bm u, q)=\int_{\mathcal E_h^b} \bm g\cdot \bm n q \mathrm{d}s, \ \forall q\in Q_h^{k'}, \end{displaymath} and hence $\bm z(\bm v) +\bm v \in \bm Z (\bm g)$. Taking $\bm w=\bm z(\bm v) +\bm v$ in \eqref{eq:kernel_ineq} yields \begin{equation}\label{eq:kernel_app} \|\bm u -\bm u_h\|_{\mathrm{DG}} \lesssim \|\bm u -\bm v\|_{\mathrm{DG}} + \|\bm z(\bm v)\|_{\mathrm{DG}} + \|p-q\|_{L^2(\Omega)}. \end{equation} Together with \eqref{eq:inf_sup_ineq}, taking the infimum over $\bm v$ and $q$ and using the approximation estimates of Section \ref{sec:reconopreator}, this gives \begin{equation}\label{eq:velocity_app} \begin{split} \|\bm u -\bm u_h\|_{\mathrm{DG}} &\lesssim \inf_{\bm v \in \bm V_h^k} \|\bm u -\bm v\|_{\mathrm{DG}} + \inf_{q\in Q_h^{k'}}\|p-q\|_{L^2(\Omega)}\\ &\lesssim h^{k}\|\bm u\|_{H^{k+1}(\Omega)} + h^{k'+1} \|p\|_{H^{k'+1}(\Omega)}. \end{split} \end{equation} Next we consider the pressure term. Let $q\in Q_h^{k'}$. Using the inf-sup condition \eqref{eq:infsupcondition} we have \begin{equation}\label{eq:pressure_ineq} \begin{split} \|q-p_h\|_{L^2(\Omega)}& \lesssim \sup_{0\neq \bm v \in \bm V_h^k} \frac{b(\bm v, q-p_h)}{\|\bm v\|_{\mathrm{DG}}}\\ &=\sup_{0\neq \bm v \in \bm V_h^k} \frac{b(\bm v, q-p)+b(\bm v, p-p_h)}{\|\bm v\|_{\mathrm{DG}}}\\ &=\sup_{0\neq \bm v \in \bm V_h^k} \frac{b(\bm v, q-p)- a(\bm u -\bm u_h ,\bm v)}{\|\bm v\|_{\mathrm{DG}}} \\ &\lesssim \|p-q\|_{L^2(\Omega)} + \|\bm u -\bm u_h\|_{\mathrm{DG}}. \end{split} \end{equation} From the triangle inequality and \eqref{eq:pressure_ineq}, we obtain \begin{equation}\label{eq:pressure_app} \begin{split} \|p-p_h\|_{L^2(\Omega)} &\lesssim \|\bm u -\bm u_h\|_{\mathrm{DG}} + \inf_{q\in Q_h^{k'}}\|p-q\|_{L^2(\Omega)} \\ &\lesssim h^{k}\|\bm u\|_{H^{k+1}(\Omega)} + h^{k'+1} \|p\|_{H^{k'+1}(\Omega)}, \end{split} \end{equation} and the proof is concluded by combining \eqref{eq:velocity_app} and \eqref{eq:pressure_app}. \end{proof} \section{Inf-sup test} \label{sec:infsuptest} In this section, we perform inf-sup tests for several velocity-pressure finite element space pairs to validate the inf-sup condition numerically. After discretization, the matrix form of the problem \eqref{eq:weakform} is obtained: \begin{displaymath} \begin{bmatrix} A & B^T\\ B & 0 \\ \end{bmatrix} \begin{bmatrix} \bm{\mathrm U}\\ \bm{\mathrm P}\\ \end{bmatrix} = \begin{bmatrix} \bm{\mathrm F}\\ \bm{\mathrm G}\\ \end{bmatrix}, \end{displaymath} where the matrices $A$ and $B$ are associated with the bilinear forms $a(\cdot, \cdot)$ and $b(\cdot, \cdot)$, respectively. The vectors $\bm{\mathrm U}$ and $\bm{\mathrm P}$ are the solution vectors corresponding to $\bm u_h$ and $p_h$, and $\bm{\mathrm F}$ and $\bm{\mathrm G}$ are the right-hand sides corresponding to $\bm f$ and $\bm g$. Then the numerical inf-sup test is based on the following lemma.
\begin{lemma} Let $S$ and $T$ be the symmetric matrices associated with the norms $\|\cdot\|_{\mathrm{DG}}$ in $\bm V_h^k$ and $\|\cdot\|_{L^2(\Omega)}$ in $Q_{h}^{k'}$, respectively, and let $\mu_{\min}^2$ be the smallest nonzero eigenvalue of the generalized eigenvalue problem \begin{displaymath} BS^{-1}B^T\bm{V}=\mu_{\min}^2T\bm{V}; \end{displaymath} then the value of $\beta$ is simply $\mu_{\min}$. \label{le:numericalinfsup} \end{lemma} The proof of this lemma can be found in \cite{boffi2013mixed, malkus1981eigenproblems}. In the numerical tests, we consider a sequence of successively refined meshes and monitor $\mu_{\min}$ on each mesh. If a sharp decrease of $\mu_{\min}$ is observed as the mesh size approaches zero, we predict that the pair of approximation spaces violates the inf-sup condition. Otherwise, if $\mu_{\min}$ stabilizes as the mesh is refined, we can conclude that the inf-sup test is passed. The numerical tests are conducted with the following settings: $\Omega$ is the unit square domain in two dimensions, and we consider two groups of quasi-uniform meshes generated by the software \emph{gmsh} \cite{geuzaine2009gmsh}. The first ones are triangular meshes (see Fig \ref{fig:infsuptesttriangularmesh}) and the second ones consist of triangular and quadrilateral elements (see Fig \ref{fig:infsuptestmixedmesh}). In both cases, the mesh size $h$ is taken as $h=\frac1n,\ n=10,20,30,\dots,80$. \begin{figure} \caption{The triangular meshes, $h=\frac1{10}$ (left) / $h=\frac1{20}$ (right).} \label{fig:infsuptesttriangularmesh} \end{figure} \begin{figure} \caption{The mixed meshes, $h=\frac1{10}$ (left) / $h=\frac1{20}$ (right).} \label{fig:infsuptestmixedmesh} \end{figure} With the given mesh partition, the finite element space can be constructed. As mentioned before, for each element $K$, $\# S(K)$ should be large enough to ensure the uniform upper bound $\Lambda_m$. For simplicity, $\# S(K)$ is taken to be uniform, and for each order $k$ we list the reference values of $\# S(K)$ used for both meshes in Table \ref{tab:patchnumber2d}. \begin{table}[htp] \centering \caption{Choices of $\# S(K)$ for $1\leq k \leq 5$} \label{tab:patchnumber2d} \scalebox{1.10}{ \begin{tabular}{|l|l|p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}|} \hline \multicolumn{2}{|l|}{order $k$}& 1 & 2 & 3 & 4 & 5\\ \hline \multirow{2}{*}{$\# S(K)$} & triangular mesh & 5 & 9 & 18 & 25 & 32\\ \cline{2-7} & mixed mesh& 6 & 10 & 20 & 28 & 35\\ \hline \end{tabular} } \end{table} \newcommand{\br}[1]{\uppercase\expandafter{\romannumeral#1}} We consider three choices of velocity-pressure pairs: \begin{itemize} \item \textbf{Method \br 1.} $ (\bm u_h, p_h)\in\bm V_{h}^{k}\times Q_{h}^{k-1},\ 1\leq k \leq 5.$ \item \textbf{Method \br 2.} $ (\bm u_h, p_h)\in\bm V_{h}^{k}\times Q_{h}^{k},\ 1\leq k \leq 5.$ \item \textbf{Method \br 3.} $ (\bm u_h, p_h)\in\bm V_{h}^{k}\times Q_{h}^{0},\ 1\leq k \leq 5.$ \end{itemize} Here the space $Q_h^0$ is just the piecewise constant space. These methods correspond to the choices $k'=k-1,\ k$ and $0$, respectively. \textbf{Method \br 1.} This combination of polynomial degrees for the velocity and pressure approximation spaces is common in traditional FEM and DG methods for $k\geq 2$, known as Taylor-Hood elements. Numerical results for method \br 1 are shown in Fig \ref{fig:m1infsuptest}. The value of $\mu_{\min}$ appears to be bounded from below in every case, which clearly indicates that method \br 1 passes the inf-sup test.
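As a concrete illustration of the test in Lemma \ref{le:numericalinfsup}, the following sketch computes $\mu_{\min}$ from the matrices $B$, $S$ and $T$; the assembly of these matrices is assumed to be done elsewhere, and dense linear algebra is used purely for exposition (in practice the matrices are sparse and only a few of the smallest eigenvalues are needed).
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def inf_sup_value(B, S, T, tol=1e-10):
    # mu_min from the generalized eigenproblem  B S^{-1} B^T V = mu^2 T V,
    # where B is the pressure-velocity block of the saddle-point system,
    # S the velocity DG-norm matrix and T the pressure L2-norm matrix.
    M = B @ np.linalg.solve(S, B.T)             # B S^{-1} B^T (symmetric, PSD)
    mu2 = eigh(M, T, eigvals_only=True)         # eigenvalues in ascending order
    mu2 = mu2[mu2 > tol * max(mu2.max(), 1.0)]  # drop (numerically) zero modes
    return float(np.sqrt(mu2.min()))
\end{verbatim}
Monitoring this value on the sequence of refined meshes described above, and checking that it does not drift towards zero, is exactly the test reported in the figures for the three methods.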
It is also noticeable that, for method \br 1 with $k=1$, $\bm V_{h}^{1}\times Q_{h}^{0}$ is a stable pair here, although the analogous pair leads to the locking phenomenon in traditional FEM. \begin{figure} \caption{Inf-sup tests for method \br 1 on triangular meshes (left) / mixed meshes (right)} \label{fig:m1infsuptest} \end{figure} \textbf{Method \br 2.} We consider equal polynomial degrees for both approximation spaces. This method is more efficient because the reconstruction procedure is carried out only once. Fig \ref{fig:m2infsuptest} displays the history of $\mu_{\min}$. Similar to method \br 1, the values of $\mu_{\min}$ stabilize as $h$ decreases to zero. Surprisingly, this method remains stable even for $\bm V_{h}^{1}\times Q_{h}^{1}$, a pair which is unstable in traditional FEM due to spurious pressure modes. \begin{figure} \caption{Inf-sup tests for method \br 2 on triangular meshes (left) / mixed meshes (right)} \label{fig:m2infsuptest} \end{figure} \textbf{Method \br 3.} We note that the number of DOFs of our finite element space, which is always equal to the number of elements in the partition, is independent of the order of approximation accuracy. In this sense, for all $k$, the high-order space $V_{h}^{k}$ has the same size as the piecewise constant space $Q_{h}^{0}$. Thus, we take $\bm V_{h}^{k}$ as the velocity approximation space while selecting $Q_{h}^{0}$ for the pressure. Fig \ref{fig:m3infsuptest} summarizes the results of this inf-sup test, which show that the inf-sup condition holds. \begin{figure} \caption{Inf-sup tests for method \br 3 on triangular meshes (left) / mixed meshes (right)} \label{fig:m3infsuptest} \end{figure} The satisfaction of the inf-sup condition has been checked in this section by numerical tests. All experiments show that the inf-sup value $\mu_{\min}$ remains bounded from below. In fact, the combination of the two approximation spaces can be even more flexible, for example $\bm V_{h}^{k}\times Q_{h}^{k+1}$ or $\bm V_{h}^{k}\times Q_{h}^{k+2}$; see Fig \ref{fig:m4infsuptest} and Fig \ref{fig:m5infsuptest}. Both cases pass the inf-sup test. The numerical results demonstrate that our finite element spaces possess more robust stability properties than traditional finite element spaces. An analytical proof of the inf-sup condition is left for future work. \begin{figure} \caption{Inf-sup tests for $\bm V_{h}^{k}\times Q_{h}^{k+1}$ on triangular meshes (left) / mixed meshes (right)} \label{fig:m4infsuptest} \end{figure} \begin{figure} \caption{Inf-sup tests for $\bm V_{h}^{k}\times Q_{h}^{k+2}$ on triangular meshes (left) / mixed meshes (right)} \label{fig:m5infsuptest} \end{figure} \section{Numerical Results} \label{sec:numericalresults} In this section, we give { some implementation details and} some numerical examples in two dimensions to verify the theoretical error estimates in Theorem \ref{th:prioriestimate}. The numerical settings remain the same as in the previous section. The resulting sparse system is solved with a direct sparse solver. { \subsection{Implementation} \label{sec:2dexample} We present a 2D example on the domain $[0, 1]\times[0, 1]$ to illustrate the implementation of our method. The key point is to calculate the basis functions. We consider a quasi-uniform triangular mesh; see Fig \ref{fig:2dquasi_uniform}. \begin{figure} \caption{The triangulation with the element patch $S(K_0)$ and the collocation point set $\mc I_{K_0}$ (left) / the basis function $\lambda_{K_0}$ (right)} \label{fig:2dquasi_uniform} \end{figure} Here we consider a linear reconstruction.
The barycenters of all elements are assigned as the collocation points. For any element $K$, we let $S(K)$ consist of $K$ itself and all its edge-neighboring elements. Then we obtain the basis functions by solving the least squares problem on every element. We take $K_0$ as an example (see Fig \ref{fig:2dquasi_uniform}); the element patch $S(K_0)$ is chosen as \begin{displaymath} S(K_0) = \left\{ K_{0}, K_{1}, K_{2},K_{3} \right\}, \end{displaymath} and the corresponding collocation points are \begin{displaymath} \mc I_{K_0} = \left\{ (x_{K_{0}}, y_{K_{0}}),(x_{K_{1}}, y_{K_{1}}),(x_{K_{2}}, y_{K_{2}}),(x_{K_{3}}, y_{K_{3}}) \right\}, \end{displaymath} where $(x_{K_i}, y_{K_i})$ is the barycenter of $K_i$. For a continuous function $g$, the least squares problem reads \begin{displaymath} \mc R_{K_0}^1 g = a + b x + c y, \quad (a, b, c) = \mathop{\arg \min}_{(a, b, c) \in \mathbb R^3} \sum_{(x_{K'},y_{K'}) \in \mc{I}_{K_0}} |g(x_{K'},y_{K'}) - (a + bx_{K'} + cy_{K'})|^2. \end{displaymath} By Assumption \ref{as:unique}, we obtain the unique solution \begin{displaymath} [a, b, c]^T = (A^TA)^{-1}A^Tq, \end{displaymath} where \begin{displaymath} A = \begin{bmatrix} 1 & x_{K_{0}}& y_{K_{0}} \\ 1 & x_{K_{1}}& y_{K_{1}} \\ 1 & x_{K_{2}} & y_{K_{2}}\\ 1 & x_{K_{3}} & y_{K_{3}} \end{bmatrix}, \quad q = \begin{bmatrix} g(x_{K_{0}},y_{K_{0}}) \\g(x_{K_{1}},y_{K_{1}}) \\g(x_{K_{2}},y_{K_{2}}) \\g(x_{K_{3}},y_{K_{3}}) \end{bmatrix}. \end{displaymath} Thus the matrix $(A^TA)^{-1}A^T$ contains all the information needed to evaluate the basis functions $\lambda_{K_0}, \lambda_{K_1}, \lambda_{K_2}, \lambda_{K_3}$ on $K_0$, and we simply store it to represent the basis functions. All the basis functions can be obtained by solving the least squares problem on every element. The basis function $\lambda_{K_0}$ is presented in Fig \ref{fig:2dquasi_uniform}; we point out that the support of a basis function is not always equal to the element patch, and vice versa. } \subsection{2D smooth problem} We first consider a 2D example on $\Omega=[0,1]^2$ with a smooth analytical solution to investigate the convergence properties. The exact solution is taken as \begin{displaymath} \bm u(x,y)=\begin{bmatrix} \sin(2\pi x)\cos(2\pi y) \\ -\cos(2\pi x)\sin(2\pi y) \\ \end{bmatrix}, \quad p(x,y)=x^2+y^2, \end{displaymath} and the source term $\bm f$ and the boundary condition $\bm g$ are chosen accordingly. We consider the three methods of Section \ref{sec:infsuptest} and solve the Stokes problem on the triangular meshes and the mixed meshes, respectively, with mesh sizes $h=\frac1n,\ n=10, 20, 40, 80$. In Fig \ref{fig:m1L2error} and Fig \ref{fig:m1DGerror}, we present the $L^2$ norm and the DG energy norm of the error in the approximation to the exact velocity on both meshes when using method \br 1, and Fig \ref{fig:m1Pressureerror} shows the pressure error in the $L^2$ norm. We observe that the optimal convergence rates for $\|\bm u - \bm u_h\|_{L^2( \Omega)}$, $\|\bm u - \bm u_h\|_{\mathrm{DG}}$ and $\|p - p_h\|_{L^2(\Omega)}$ are obtained, which are $O(h^{k+1})$, $O(h^{k})$ and $O(h^k)$, respectively. The numerical results confirm the estimate \eqref{eq:prioriestimate}.
\begin{figure} \caption{Velocity $L^2$ norm error with method \br 1 for the smooth case on triangular meshes (left) / mixed meshes (right)} \label{fig:m1L2error} \end{figure} \begin{figure} \caption{Velocity DG energy norm error with method \br 1 for the smooth case on triangular meshes (left) / mixed meshes (right)} \label{fig:m1DGerror} \end{figure} \begin{figure} \caption{Pressure $L^2$ norm error with method \br 1 for the smooth case on triangular meshes (left) / mixed meshes (right)} \label{fig:m1Pressureerror} \end{figure} Now we consider method \br 2; the convergence rates are displayed in Fig \ref{fig:m2L2error}, \ref{fig:m2DGerror} and \ref{fig:m2Pressureerror}. All convergence orders are identical to those of method \br{1}, which agrees with the developed theory. For this method, the approximation to the pressure converges at a suboptimal rate, but the approximation space is built only once, which makes method \br{2} more efficient. \begin{figure} \caption{Velocity $L^2$ norm error with method \br 2 for the smooth case on triangular meshes (left) / mixed meshes (right)} \label{fig:m2L2error} \end{figure} \begin{figure} \caption{Velocity DG energy norm error with method \br 2 for the smooth case on triangular meshes (left) / mixed meshes (right)} \label{fig:m2DGerror} \end{figure} \begin{figure} \caption{Pressure $L^2$ norm error with method \br 2 for the smooth case on triangular meshes (left) / mixed meshes (right)} \label{fig:m2Pressureerror} \end{figure} Finally, we investigate the numerical performance of method \br 3. The errors are plotted in Fig \ref{fig:m3L2error}, \ref{fig:m3DGerror} and \ref{fig:m3Pressureerror}. Here the theoretical convergence rates for $\|\bm u - \bm u_h\|_{\mathrm{DG}}$ and $\|p-p_h\|_{L^2(\Omega)}$ are both $O(h)$. We observe that the numerical results do not coincide with the theory exactly, which results from the fact that the numerical error in the approximation to the pressure is much larger than the interpolation error. The apparent superconvergence is spurious, and the convergence orders will drop to the expected values as the mesh size $h$ approaches zero. However, this does not imply that high order is not worthwhile in method \br{3}: {the results show that using $\bm V_{h}^{k}$ with larger $k$ gives a more accurate approximation to both the velocity and the pressure.} \begin{figure} \caption{Velocity $L^2$ norm error with method \br 3 for the smooth case on triangular meshes (left) / mixed meshes (right)} \label{fig:m3L2error} \end{figure} \begin{figure} \caption{Velocity DG energy norm error with method \br 3 for the smooth case on triangular meshes (left) / mixed meshes (right)} \label{fig:m3DGerror} \end{figure} \begin{figure} \caption{Pressure $L^2$ norm error with method \br 3 for the smooth case on triangular meshes (left) / mixed meshes (right)} \label{fig:m3Pressureerror} \end{figure} \subsection{Driven cavity problem} The driven cavity problem is a standard benchmark test for incompressible flow. It models the plane flow of an isothermal fluid in a unit square lid-driven cavity. The domain $\Omega$ is $[0,1]^2$ and the boundary condition and the source term are given by \begin{displaymath} \bm g(x,y)=\begin{cases} (1,0)^T,\quad 0<x<1,\ y=1,\\ (0,0)^T,\quad\text{otherwise},\\ \end{cases}\quad \bm f(x,y)=\begin{bmatrix} 0\\0\\ \end{bmatrix}. \end{displaymath} The domain is partitioned by a triangular mesh with mesh size $h=\frac1{60}$.
Fig \ref{fig:m1liddriven} shows the velocity vectors and the streamlines of the flow for the discretization $\bm V_{h}^3\times Q_{h}^2$. Fig \ref{fig:m2liddriven} and \ref{fig:m3liddriven} present the results for the pairs $\bm V_{h}^3\times Q_{h}^3$ and $\bm V_h^3\times Q_h^0$, respectively. \begin{figure} \caption{Velocity vectors (left) and the streamlines of the flow (right) for $\bm V_h^3\times Q_h^2$} \label{fig:m1liddriven} \end{figure} \begin{figure} \caption{Velocity vectors (left) and the streamlines of the flow (right) for $\bm V_h^3\times Q_{h}^3$} \label{fig:m2liddriven} \end{figure} \begin{figure} \caption{Velocity vectors (left) and the streamlines of the flow (right) for $\bm V_{h}^3\times Q_{h}^0$} \label{fig:m3liddriven} \end{figure} \subsection{Non-smooth problem} In this example, we investigate the performance of our method on a Stokes problem with a corner singularity in the analytical solution. Let $\Omega$ be the L-shaped domain $[-1,1]\times[-1,1]\backslash [0,1)\times(-1,0]$; the meshes we use, which are generated by \emph{gmsh}, are refinements of a triangular mesh of 250 triangles (see Fig \ref{fig:Lshape}). The exact solution (from \cite{verfurth1996review, hansbo2008piecewise}) is given by \begin{displaymath} \bm u(r, \theta)=r^\lambda\begin{bmatrix} (1+\lambda)\sin(\theta)\psi(\theta)+\cos(\theta)\psi'(\theta) \\ \sin(\theta)\psi'(\theta)-(1+\lambda)\cos(\theta)\psi(\theta) \\ \end{bmatrix}, \end{displaymath} in polar coordinates, where \begin{displaymath} \begin{aligned} \psi(\theta)=&\frac1{1+\lambda}\sin( (1+\lambda)\theta) \cos(\lambda\omega)-\cos( (1+\lambda)\theta)\\ &-\frac1{1-\lambda}\sin( (1-\lambda)\theta) \cos(\lambda\omega) + \cos( (1-\lambda)\theta),\\ \end{aligned} \end{displaymath} with $\omega=\frac32\pi$ and $\lambda\approx0.5444837$ the smallest positive root of \begin{displaymath} \sin(\lambda\omega) + \lambda\sin\omega=0. \end{displaymath} At the corner $(0,0)$, the exact solution contains a singularity, which implies that $\bm u(r, \theta)$ does not belong to $H^{2}(\Omega)$. \begin{figure} \caption{The triangular meshes of the L-shaped domain, 250 elements (left) / 1000 elements (right)} \label{fig:Lshape} \end{figure} The values of $\# S(K)$ are again chosen as shown in Table \ref{tab:patchnumber2d}. In Table \ref{tab:Lshapeerror} we list the $L^2$ norm error of the velocity against the degrees of freedom for different pairs of approximation spaces. We observe that all convergence orders are about 1, which is consistent with the results in \cite{hansbo2008piecewise}, where a piecewise divergence-free discontinuous Galerkin method is developed to solve this problem.
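The observed orders reported in Table \ref{tab:Lshapeerror} can be recovered directly from the tabulated errors: each refinement quadruples the number of elements, so the mesh size is roughly halved, and the order is the base-2 logarithm of the ratio of successive errors. A minimal sketch (the error values below are those of the first row of the table):
\begin{verbatim}
import math

def observed_orders(errors, refinement_ratio=2.0):
    # Convergence orders from errors on successively refined meshes, assuming
    # the mesh size is divided by `refinement_ratio` at each refinement.
    return [math.log(coarse / fine, refinement_ratio)
            for coarse, fine in zip(errors, errors[1:])]

errs = [5.57e-2, 2.40e-2, 9.56e-3, 4.61e-3, 2.13e-3]   # first row of the table
print([round(r, 2) for r in observed_orders(errs)])
# orders close to 1, matching the table up to rounding of the tabulated errors
\end{verbatim}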
\begin{table}[!htp] \newcommand\me{\mathrm e} \renewcommand\arraystretch{1.8} \centering \caption{Convergence orders of nonsmooth example in L-shaped domain} \label{tab:Lshapeerror} \scalebox{0.76}{ \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Method} & 250 DOFs & \multicolumn{2}{|l|}{1000 DOFs} & \multicolumn{2}{|l|}{4000 DOFs} & \multicolumn{2}{|l|}{16000 DOFs}& \multicolumn{2}{|l|}{64000 DOFs}\\ \cline{2-10} & $L^2$ error & $L^2$ error & order& $L^2$ error & order& $L^2$ error & order& $L^2$ error & order\\ \hline $V_{h}^2\times Q_{h}^1$ & $5.57\mathrm{E}{-2}$ & $2.40\mathrm{E}{-2}$& 1.21 & $9.56\mathrm{E}{-3}$& 1.33& $4.61\mathrm{E}{-3}$& 1.05& $2.13\mathrm{E}{-3}$& 1.10\\ \hline $V_{h}^3\times Q_{h}^2$ & $4.97\mathrm{E}{-2}$ & $1.89\mathrm{E}{-2}$& 1.33 & $9.29\mathrm{E}{-3}$& 1.09& $4.51\mathrm{E}{-3}$& 1.04& $2.21\mathrm{E}{-3}$& 1.03\\ \hline $V_{h}^2\times Q_{h}^2$ & $4.44\mathrm{E}{-2}$ & $1.58\mathrm{E}{-2}$& 1.49 & $7.46\mathrm{E}{-3}$& 1.08& $3.61\mathrm{E}{-3}$& 1.05& $1.73\mathrm{E}{-3}$& 1.06\\ \hline $V_{h}^3\times Q_{h}^3$ & $4.83\mathrm{E}{-2}$ & $1.87\mathrm{E}{-2}$& 1.37 & $8.66\mathrm{E}{-3}$& 1.11& $4.11\mathrm{E}{-3}$& 1.07& $1.96\mathrm{E}{-3}$& 1.07\\ \hline $V_{h}^2\times Q_{h}^0$ & $6.59\mathrm{E}{-2}$ & $2.22\mathrm{E}{-2}$& 1.57 & $1.03\mathrm{E}{-2}$& 1.11& $5.10\mathrm{E}{-3}$& 1.01& $2.49\mathrm{E}{-3}$& 1.03\\ \hline $V_{h}^3\times Q_{h}^0$ & $6.88\mathrm{E}{-2}$ & $2.71\mathrm{E}{-2}$& 1.34 & $9.75\mathrm{E}{-3}$& 1.47& $4.58\mathrm{E}{-3}$& 1.09& $2.18\mathrm{E}{-3}$& 1.07\\ \hline \end{tabular} } \end{table} \section{Conclusion} In this paper, we have introduced a new discontinuous Galerkin method to solve the Stokes problem. A novelty of this method is the new piecewise polynomial space that is reconstructed by solving local least squares problem. A variety of numerical inf-sup tests demonstrate the stability of this method. The optimal error estimates in $L^2$ norm and DG energy norm are presented and the numerical results are reported to show good agreement with the theoretical predictions. \end{document}
\begin{document} \title{On ideal dynamic climbing ropes} \author{D. Harutyunyan, G.W. Milton, T.J. Dick and J. Boyer\\ \textit{Department of Mathematics, The University of Utah}} \date{\today} \maketitle \begin{abstract} We consider the rope climber fall problem in two different settings. The simplest formulation of the problem is when the climber falls from a given altitude and is attached to one end of the rope, while the other end of the rope is attached to the rock at a given height. The problem is then to find the properties of the rope for which the peak force felt by the climber during the fall is minimal. The second problem we consider is again minimizing the same quantity, now in the presence of a carabiner. We will call such ropes \textit{mathematically ideal.} Given the height of the carabiner, the initial height and the mass of the climber, the length of the unstretched rope, and the distance between the belayer and the carabiner, we find the optimal (in the sense of minimizing the peak force for a given elongation) dynamic rope in the framework of nonlinear elasticity. Wires of shape memory materials have some of the desired features of the tension-strain relation of a mathematically ideal dynamic rope, namely a plateau in the tension over a range of strains. With a suitable hysteresis loop, they also absorb essentially all the energy from the fall, thus making them an ideal rope in this sense too. \end{abstract} \textbf{Keywords}\\ Dynamic climbing ropes, shape memory alloys, hysteresis, nonlinear elasticity\\ \section{Introduction} \label{sec1} Climber fall is a central problem in rock climbing, and an important factor, which this paper addresses, is minimizing the peak force felt by the climber as he/she falls, while allowing a given maximum elongation of the rope. Minimizing the peak force is also important for decreasing the likelihood that the anchor will be dislodged, and for minimizing the stress on the rope and on the bolts, carabiners and all kinds of other protection, thus increasing their lifetime. Some comprehensive analyses of the maximal forces and rope elongation, and of the design of optimal ropes, with optimality meant in various senses, already exist. In [\ref{bib:Leu1.},\ref{bib:Leu2.}] Leuth\"auser studies the above general problem in the setting of viscoelasticity. When a climber falls, some of the energy is converted to heat during the fall, and it is assumed in both [\ref{bib:Leu1.},\ref{bib:Leu2.}] that the coefficient of conversion, i.e., the ratio of the energy converted to heat to the total energy, is equal to $0.5.$ In [\ref{bib:Sporri.}] Sp\"orri carries out a numerical analysis of the forces acting on the climber, and of the resultant motion of the climber, assuming Pavier's 3-parameter viscoelastic rope model [\ref{bib:Pav1.}]. There are other problems, too, that concern climber fall. One is the durability of the rope and a safe lower bound on the number of falls the rope can withstand before tearing, which has been studied numerically by Bedogni and Manes [\ref{bib:Bed.Man.}], and in combination with experiment by Pavier [\ref{bib:Pav1.}]. A second problem concerns the stiffness or stretchiness of the rope (which controls the total extension of the rope during a fall), and the maximal force that bolts and carabiners can handle; see [\ref{bib:Att.},\ref{bib:Bla.Cus.Gra.Oka}]. A third problem is mostly medical and concerns the most probable injuries of the climber [\ref{bib:Pai.Fio.Hou.}].
There are two kinds of ropes used in rope climbing: static and dynamic. A dynamic rope stretches and is the rope connecting the climber to the belayer or anchor, while a static rope has little stretch and is often used in ascending using jumars, hauling, and for rappelling, equivalently called abseiling. In addition, static ropes or webbing are used to affix via carabiners the dynamic rope to the climbing wall. The problem we consider is to find and minimize the maximal force acting on the climber during a fall over a fixed distance. This peak force is the maximal value of the tension of the dynamic rope at the point attached to the climber. In other words, the first goal is to identify the properties of a "mathematically ideal dynamic rope" that is designed so the peak force on the climber is minimized during a fall with a specified elongation of the rope. Of course many factors influence whether a dynamic rope is ideal in practice (such as durability, cost, weight, knotability and redundancy so breaks of single filaments do not affect the overall strength), and for this reason we use the term ``mathematically ideal'' to mean ideal in the restricted sense of minimizing the peak force for a given total elongation. A second goal is to obtain a rope which absorbs essentially all the energy of the fall, so that the climber does not rebound after the fall in contrast to the oscillations that bungy cords produce during bungy jumping. To achieve the first goal, we begin by assuming the rope is purely elastic, rather than viscoelastic. Later we consider such ropes with suitable hysteresis loops which absorb all the energy from the fall, and thus which are ideal from the viewpoint of both goals. The question is then how the nonlinear elastic response of the rope should be tailored to minimize the maximal force acting on the climber during a fall. Intuitively, it makes sense that the rope should be designed to provide a constant braking force on the falling climber. While it may be the case that this result is known, we were unable to find a reference which addressed this condition. So a mathematical proof of this fact is provided. We draw attention to the fact that shape memory material wires have the desired characteristic of a plateau in the stress as the strain is varied. Furthermore, they can exhibit large hysteresis loops, and we observe that this feature is exactly what is needed to achieve the second goal, i.e., to absorb all the energy of the fall without oscillation. A similar problem has been studied by Reali and Stefanini in [\ref{bib:Rea.Ste.}] in the setting of linear elasticity, but we believe nonlinear elasticity is more appropriate to the analysis of dynamic climbing ropes. We also remark that the problem of minimizing the maximal force during deceleration over a fixed distance is also appropriate to pilots landing on an aircraft carrier, but there the desired response can be controlled through the hydraulic dampers attached to the braking cable. \section{Rope without a carabiner} \label{sec2} \subsection{Notation and problem setting} \label{subsec2.1} In this section, we set up the notation for the climber fall problem in its simplest formulation. We direct the $x$ axis towards the direction of gravity: see Figure~\ref{Fig1}. We shall choose the gravitational potential to be zero at the position $x=0$ for convenience. Let us mention, that the variable $x$ does not show the position of the climber, but rather it is the coordinate variable used in the undeformed configuration. 
There will be a climber affixed to a rock wall by means of a rope and the climber will fall vertically downwards from a point directly above or below where the rope is attached to the wall, so that the problem is one-dimensional. The given rope of length $L$ will be attached to the rock at the point $x=0$ and the climber of mass $m$ will be attached to the other end at some arbitrary height $x=h_{0}$, where \ $|h_{0}|\leq|L|$. Note that $h_0$ denotes the distance below where the rope is anchored, and is negative if the climber is above the anchoring point. The rope will be assumed to be massless. \begin{figure} \caption{Before and after the fall. The figure is schematic, in that the climber before the fall, as illustrated on the left, should be directly below or above the point where the rope is attached.} \label{Fig1} \end{figure} We assume that the climber is allowed to fall without hitting the ground up to the point $L+\Delta L,$ i.e., the maximal admissible stretching of the rope is $\Delta L.$\\ \textit{We are interested in finding the properties of the rope so that given the initial data $L, \Delta L, m$ and $h_0$ it minimizes the maximal force felt by the climber during a fall. Such a rope will be called \textbf{mathematically ideal}.}\\ We reformulate the problem as follows: instead of considering what happens before the rope becomes taut, we can set our clock so $t=0$ marks the instant in time when the rope becomes taut; and at that time the gravitational potential energy $mg(L-h_0)$ lost by the climber in falling a distance $L-h_0$ will have been converted into kinetic energy so that the climber will have a velocity $v_0=\sqrt{2g(L-h_0)}$ in the direction of increasing $x$ (downwards) at time $t=0$. Thus the deformation of the rope begins at time $t=0$. We denote by $y(x,t)$ the position of a point on the rope at time $t$ that was initially located at $x$ at time $t=0;$ thus a point on the rope at $x$ at $t=0$ gets displaced by a distance $u(x,t)=y(x,t)-x$ (here $y(x,t)$ will be assumed to be continuous and differentiable in $x$). Once the rope begins to deform, its deformation is assumed to be described by nonlinear elasticity theory (ignoring viscosity). \begin{figure} \caption{The reformulated problem.} \label{Fig2} \end{figure} The elastic properties of the rope are given by a function $W$, representing the elastic energy density, of the one-dimensional strain $$ \varepsilon(x)=\frac{\partial u(x,t)}{\partial x}=\frac{\partial y(x,t)}{\partial x}-1. $$ We assume the elastic energy does not depend on higher order derivatives of the deformation $y(x,t)$, that there is no air resistance, and (to begin with, as it will be something we will reconsider later in section 4) that there is no energy absorption or dissipation during a fall. Thus, a fall will be a periodic phenomenon as the climber oscillates between the heights $x=h_0$ and $x=L+\triangle L.$ We denote the time the climber reaches the critical point $x=L+\triangle L$ by $t=T.$ We denote furthermore by $E_{el}$ the elastic energy of the rope, and by $E_{total}$ the total energy of the system. The total elastic energy for the rope, treated as a one dimensional body, is then: \begin{equation} \label{2.2} E_{el}(t)=\int \limits_{0}^{L} W\left(\varepsilon(x)\right) \, dx. \quad \end{equation} In other words, the total elastic energy is the sum (integral) of the elastic energies associated with the stretching of each rope segment. 
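As a small numerical illustration of the quantities just introduced (the values of $L$, $h_0$ and $\Delta L$ below are sample values, not taken from the paper), the velocity $v_0=\sqrt{2g(L-h_0)}$ at the instant the rope becomes taut and the total elastic energy (\ref{2.2}) can be evaluated as follows; for a homogeneous stretch with a tension that is constant in the strain (the type of energy density derived later in Section \ref{subsec2.3}), the stored energy is simply the tension times the total elongation.
\begin{verbatim}
import numpy as np

g = 9.81                          # m/s^2
L, h0, Delta_L = 10.0, 4.0, 1.5   # sample values (metres), not from the paper

v0 = np.sqrt(2 * g * (L - h0))    # climber velocity when the rope becomes taut

def elastic_energy(W, y, n=2000):
    # Approximate E_el = int_0^L W(dy/dx - 1) dx by the trapezoidal rule,
    # for a given deformation y(x) of the rope at a fixed time.
    x = np.linspace(0.0, L, n)
    strain = np.gradient(y(x), x) - 1.0
    return np.trapz(W(strain), x)

b0 = 500.0                                    # a sample constant tension (N)
W = lambda eps: b0 * np.maximum(eps, 0.0)     # tension independent of strain
y = lambda x: x * (L + Delta_L) / L           # homogeneous stretch by Delta_L
print(v0, elastic_energy(W, y))               # the energy equals b0 * Delta_L
\end{verbatim}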
Note that we do {\it not} assume that $W$ is a quadratic function of $\varepsilon$, and hence it is not appropriate to associate an elastic modulus with the rope: the elastic modulus is an appropriate descriptor when the tension in the rope is proportional to the overall strain (and it is the elastic modulus which gives this constant of proportionality). However, in our {\it mathematically ideal} rope, we will see that it is best to have a rope where the tension is independent of the strain. From the theory of nonlinear elasticity, e.g., [\ref{bib:Gur.}], the elastic energy density $W$ has to satisfy the following properties: \begin{itemize} \item[(\textbf{P1})] $W(\varepsilon)\geq 0$ \ for all \ $\varepsilon\in\mathbb R$ (deformations store elastic energy) \item[(\textbf{P2})] $W(0)=0$ (no deformation, no energy) \item[(\textbf{P3})] $W(\varepsilon)$ is a quasiconvex function of $\varepsilon+1$, which is equivalent to being a convex function of $\varepsilon$ in the one-dimensional case under consideration. Recall that a function $f\colon[a,b]\to\mathbb R$ is convex if it satisfies the inequality $$f(\lambda x+(1-\lambda)y)\leq \lambda f(x)+(1-\lambda)f(y),$$ for all $x,y\in[a,b],\lambda\in [0,1].$ Geometrically, convexity means that the entire graph of the function lies above the tangent line at any point $x\in[a,b].$ Physically, convexity is important for stability: if $W$ is not convex, then microscopic oscillations in the deformation of the rope are energetically favorable, and macroscopically the behavior of the rope will be given by an energy function which is again convex, being the convexification of the local nonconvex energy function. (Such oscillations do occur in shape memory wires, a point which we will return to later.) \end{itemize} We aim to find under which conditions on $W$ the rope is \textit{mathematically ideal.} Our goal is to mitigate the force on the climber during the fall. In the more general version of the problem, the only new feature will be that the rope also passes through one carabiner, attached to the rock, that has a friction coefficient $k$. The single carabiner case will be sufficient to allow a generalization to an arbitrary number of carabiners, with frictional coefficients $k_{i}$.\\ \subsection{Optimal bounds via energy conservation} \label{subsec2.2} Denote by $b(t)$ the rope tension at the point $x=L$ at time $t,$ and by $a(x,t)$ the acceleration of the rope point $x$ at time $t,$ for $x\in[0,L]$ and $t\in\mathbb R.$ We have to solve the minimization problem \begin{equation} \label{2.3} \min(\max_{t\geq 0}|b(t)|). \end{equation} It is clear that, since the motion of the climber is periodic, we can consider the minimization problem on the restricted time interval $[0,T]$ rather than on the whole time interval $[0,\infty).$ It is evident that the tension $b(t)$ imposes an upwards force on the climber for all $t\in[0,T].$ The other force acting on the climber is the gravitational force $mg$, pointing downwards and thus positive. So, the net force on the climber is $mg-b(t)$, and by Newton's second law the mass times the downward acceleration is given by \begin{equation} \label{2.4} ma(L,t)=mg-b(t). \end{equation} Next, we prove the key inequality \begin{equation} \label{2.5} \max_{t\in[0,T]}|b(t)| \geq \frac{mg(L+\Delta L-h_0)}{\Delta L}. \end{equation} It is actually a direct consequence of energy conservation.
Namely, as is well-known, we have that the work done on the climber in the time interval $[0,T]$ equals on one hand $\int_{L}^{L+\Delta L}ma(L,t)dx,$ and on the other hand it is the change in kinetic energy, i.e., $-\frac{mv_0^2}{2}.$ Therefore we get \begin{equation} \label{2.6} \int_{L}^{L+\Delta L}ma(L,t)dx=-\frac{mv_0^2}{2}. \end{equation} Integrating (\ref{2.4}) in $x$ over the interval $[L,L+\Delta L],$ we obtain $$ -\int_{L}^{L+\Delta L}ma(L,t)dx=\int_{L}^{L+\Delta L}b(t)dx-mg\Delta L,$$ and thus taking into account (\ref{2.6}) and the formula for the initial velocity, $v_0=\sqrt{2g(L-h_0)}$, we get \begin{equation} \label{2.7} \int_{L}^{L+\Delta L}b(t)dx=mg\Delta L+\frac{mv_0^2}{2}=mg(L+\Delta L-h_0). \end{equation} Finally applying the inequality $$\left|\int_{L}^{L+\Delta L}b(t)dx\right|\leq \Delta L\max_{t\in[0,T]}|b(t)|$$ to (\ref{2.7}), we arrive at (\ref{2.5}). Observe that equality in (\ref{2.5}) holds if and only if the tension $b(t)$ and thus the acceleration $a(L,t)$ is constant in the interval $[0,T],$ which is the scenario that a \textit{mathematically ideal rope} must realize. Thus in this case we get \begin{align} \label{2.8} a(L,t)=a_0&\equiv \frac{g(h_0-L)}{\Delta L},\\ \nonumber b(t)=b_0&\equiv\frac{mg(L+\Delta L-h_0)}{\Delta L}. \end{align} This \textit{mathematically ideal rope} causes the force at the point where the rope is attached to the climber to step from zero to a constant value as the rope becomes taut. In practice, one would want a more gradual transition to allow the climber's body time to respond to the force. Certainly the effects of the force cannot propagate through the climber's body faster than the speed of elastic waves; but more significantly, if we consider the climber's body as a viscoelastic object, then one would not want the transition time to be faster than the typical viscoelastic relaxation time. The undesirable instantaneous step function in the force will be mollified if we replace the \textit{mathematically ideal rope} by an approximation to it, such as the shape memory material rope, suggested later, which has a transition region before the stress plateau. \subsection{The Optimal Elastic Energy Density Function} \label{subsec2.3} In this section, we find a formula for the elastic energy density function $W$ associated with a {\it mathematically ideal rope}. Under our assumption that the rope is massless, equilibrium of forces implies that the tension $b$ in the rope must be constant along the rope. Also if the rope is optimal in the sense that the maximal possible force felt by the climber is minimized, then $b$ must be independent of the strain and given by (\ref{2.8}). Suppose a small segment of rope extending from $x=x_0$ to $x=x_0+\ell$ in the undeformed state at $t=0$ gets extended under the deformation at some time $t>0$ less than $T$ to the length $\ell+\delta\ell$. The work done on the rope $b_0\delta\ell$ (being the force times the distance) must go into the elastic energy stored in this rope segment, implying \begin{equation} \label{2.14.5} b_0\delta\ell=\int \limits_{x_0}^{x_0+\ell} W\left(\frac{\partial y(x,t)}{\partial x}-1\right) \, dx. \end{equation} Using the fact that the energy density is zero in the undeformed state at $t=0$ when $y(x,t)=x$, this will hold if \begin{equation} \label{2.15} W(\varepsilon)=b_0\varepsilon=\frac{mg(L+\Delta L-h_0)}{\Delta L}\varepsilon. \end{equation} This defines $W$ for values $\varepsilon\geq 0$.
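As a quick numerical illustration of (\ref{2.8}) and (\ref{2.15}), the following Python sketch computes $v_0$, $a_0$, $b_0$ and the corresponding energy density for hypothetical input data $m$, $L$, $\Delta L$ and $h_0$; the numerical values are invented for illustration and are not taken from any measured fall.
\begin{verbatim}
import math

def ideal_rope(m, L, dL, h0, g=9.81):
    """Constant deceleration a0, plateau tension b0 and speed v0 at the moment
    the rope becomes taut, following (2.8)."""
    v0 = math.sqrt(2.0 * g * (L - h0))   # v0 = sqrt(2 g (L - h0))
    a0 = g * (h0 - L) / dL               # constant (negative) acceleration
    b0 = m * g * (L + dL - h0) / dL      # constant tension felt by the climber
    return v0, a0, b0

# Hypothetical fall: 80 kg climber, 10 m of rope, 1.5 m admissible stretch,
# climber 4 m above the anchor point (h0 = -4 m).
v0, a0, b0 = ideal_rope(m=80.0, L=10.0, dL=1.5, h0=-4.0)
W = lambda eps: b0 * max(eps, 0.0)       # energy density of the ideal rope, cf. (2.15)
print(v0, a0, b0, b0 / (80.0 * 9.81))    # last value: force in units of body weight
# Sanity check of the work balance (2.7): b0 * dL equals m * g * (L + dL - h0).
\end{verbatim}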
(Equivalently, one could use the fact that the local tension $b=b_0$, which is the one-dimensional stress, is $\partial W(\varepsilon)/\partial \varepsilon$, to deduce that $W(\varepsilon)$ necessarily has the form (\ref{2.15})). Moreover, it is clear that $W$ satisfies the properties (P1) and (P2) for $\varepsilon\geq 0.$ The case $\varepsilon<0$ corresponds to rope compression, which is assumed to require no energy, thus we must take $W(\varepsilon)=0$ for $\varepsilon<0.$ In conclusion, we get a final formula for the energy density function: \begin{eqnarray} \label{2.16} W(\varepsilon) & = & \frac{mg(L+\Delta L-h_0)}{\Delta L}\varepsilon ~~\text{if} \ \ \ \varepsilon\geq 0, \\ \nonumber & = & 0 ~ \text{if} \ \ \ \varepsilon<0, \end{eqnarray} which clearly satisfies all properties (P1-P3). Note however that $W(\varepsilon)$ is not a strictly convex function of $\varepsilon$. If it were strictly convex, we could use Jensen's inequality, \begin{equation} \label{2.16.1} \frac{1}{L}\int \limits_{0}^{L} W\left(\frac{\partial y(x,t)}{\partial x}-1\right) \, dx\geq W\left(\frac{y(L,t)}{L}-1\right), \end{equation} with equality holding only when $\partial y(x,t)/\partial x$ is independent of $x$, to deduce that the deformation of the rope which minimizes the elastic energy is necessarily homogeneous\footnote{A function $f(x,t)\colon\mathbb R^2\to\mathbb R$ is homogeneous in $x$ for each $t\in\mathbb R,$ if $f(\lambda x,t)=\lambda f(x,t)$ for all $\lambda,x,t\in\mathbb R.$ In other words, its derivative in the $x$ variable does not depend on $x.$} at each time $t$. Recall that Jensen's inequality asserts the following: \textit{If a function $f\colon\mathbb R\to\mathbb R$ is convex, then for any interval $[a,b]\subset\mathbb R$ and any continuous function $g\colon[a,b]\to\mathbb R$ the following inequality holds:} \begin{equation} \label{2.16.5} \frac{1}{b-a}\int_a^b f(g(x))dx\geq f\left(\frac{1}{b-a}\int_a^b g(x)dx\right). \end{equation} In the absence of this strict convexity, there is no reason to assume homogeneity of the deformation: the deformation could be any function $y(x,t)$ with prescribed values of $y(0,t)=0$ and $y(L,t)$ (given below in (\ref{2.11a})) and with $\partial y(x,t)/\partial x>1$, where this last condition is required by constancy of the tension $b$ along the rope. The actual deformation which is selected could depend on higher order gradient terms in the energy function which we have neglected to include, and could be very sensitive to slight inhomogeneities in the rope. If we assume the selected deformation is homogeneous, then we have \begin{equation} \label{2.10} y(x,t)=\frac{x}{L}y(L,t). \end{equation} We can integrate $a(L,t)$ in $t$ to get \begin{equation} \label{2.11} v(L,t)=\int_0^t a(L,s)\,ds+v_0=a_{0}t+v_0. \end{equation} Further integration gives \begin{equation} \label{2.11a} y(L,t)=L+\int_0^t v(L,s)\,ds=\frac{a_{0}}{2}t^{2}+v_{0}t+L. \end{equation} Thus owing to (\ref{2.10}), we finally arrive at \begin{equation} \label{2.12} y(x,t)=\frac{x}{L}\left(\frac{a_{0}}{2}t^{2}+v_{0}t+L\right). \end{equation} \section{The rope with a carabiner} \label{sec3} In this section, we consider the climber fall problem in the presence of a carabiner with a friction coefficient $k$. \begin{figure} \caption{The configuration after the fall at the instant the rope tightens, in the presence of a carabiner. } \label{Fig3} \end{figure} Namely, we assume that the rope passes through a carabiner attached to the rock wall, with one end of the rope attached to the climber, while the other end is fastened to the belayer.
The rope line from the belayer to the carabiner forms a given angle $\alpha$ with the vertical axis $x,$ see Figure \ref{Fig3}. Then we are again seeking a mathematically ideal rope. As was done in Section~\ref{sec2}, we reformulate the problem as follows:\\ \textit{Assume that the rope is vertical and is attached to the rock at the origin $x=0$ and the carabiner is attached to the rock at the point $x=L_1,$ while the rope length between the carabiner and the climber is $L_2$ and the climber has an initial velocity $v_0,$ see Figure \ref{Fig3}. In addition, due to the carabiner, the rope tension jumps from $b(t)$ below the carabiner to $\mu b(t)$ above the carabiner, where by the capstan equation (also known as the Euler-Eytelwein formula) $\mu=e^{-(\pi-\alpha)k}$ and $k$ is the coefficient of friction between the metal surface of the carabiner and the rope,} see Figure \ref{Fig4}. \\ \begin{figure} \caption{The reformulated problem for the case with a carabiner.} \label{Fig4} \end{figure} The problem is again finding a rope that solves the problem \begin{equation} \label{3.1} \min(\max_{t\in[0,T]}|b(t)|). \end{equation} The setting of the reformulated problem makes it clear that problem~(\ref{3.1}) is solved when $b(t)=b_0$ as well as $a(L,t)=a_0$ for $t\in[0,T]$, where $a_0$ and $b_0$ are given by (\ref{2.8}). We now have to find a rope that develops a constant resultant force acting on the climber during a fall cycle. It turns out that the type of rope found in the previous section works also for this case. The idea is to take $L=L_2$ in formula (\ref{2.16}) and take $y(x,t)=x$ for $x<L_1$ so that the rope develops no deformation between the rock and the carabiner and thus fulfills the desired conditions. To see this, let us plot the tension-strain diagram corresponding to the rope given by (\ref{2.16}) with $L$ replaced by $L_2,$ see Figure \ref{Fig5}. Because the tension above the carabiner is strictly smaller than $b_0,$ it is then clear from the diagram that the rope develops no stretching in the segment $[0,L_1]$, as we wanted to establish. Note that this conclusion remains valid even if the Euler-Eytelwein formula is questioned, as it may be, since the rope diameter is comparable to the diameter of the metal of the carabiner, as observed by Weber and Ehrmann in [\ref{bib:Weber.}]: all we require for the analysis is that the tension in the rope between the carabiner and the belayer be less than that between the carabiner and the climber (i.e. $\mu<1$). The carabiner is needed to lessen the force on the rope attached to the belayer, to allow the belayer to be positioned in various places, and to prevent the climber from falling off the wall. \begin{figure} \caption{The tension-strain relation for a mathematically ideal dynamic rope, shown in red. } \label{Fig5} \end{figure} \section{Realizability of the mathematically ideal rope} \begin{figure} \caption{The black curve is the microscopic energy $W_{mic}$ as a function of the strain $\varepsilon$, which when convexified gives the macroscopic strain energy, which is the red curve $W(\varepsilon)$.} \label{Fig6} \end{figure} Here we provide some arguments which suggest that a rope approximately realizing the condition (\ref{2.16}) is not beyond the realm of possibility.
The characteristic feature of the tension-strain diagram of Figure \ref{Fig5} is the \textit{plateau in the tension as the strain is varied.} Wires of shape memory materials such as Nitinol (an alloy of nickel and titanium, see https://en.wikipedia.org/wiki/Nickel\_titanium) have such plateaus, although in currently available shape memory wires the \textit{plateau in the tension} occurs only for strains less than about $8\%$ (see [\ref{bib:Sittner.}] and references therein). As Fig. 14 in [\ref{bib:Att.}] shows, normal dynamic climbing ropes can have strains of up to $15\%$. The reason for the plateau is illustrated in Figure \ref{Fig6}. At a microscopic scale the elastic energy might not be a convex function of $\varepsilon$ but may instead be given by the function $W_{mic}(\varepsilon)$, shown in black in Figure \ref{Fig6}. A strain having the value $\varepsilon$ in the region of non-convexity is energetically unstable and on a macroscopic scale the material \textit{phase separates} into a collection of segments having microscopic strains $\varepsilon_1$ or $\varepsilon_2$ in proportions $\theta$ and $1-\theta$ where $\theta=(\varepsilon_2-\varepsilon)/(\varepsilon_2-\varepsilon_1)$. The elastic energy density of this mixture is \begin{eqnarray} \label{6.0a} W(\varepsilon) & = & \theta W_{mic}(\varepsilon_1)+(1-\theta)W_{mic}(\varepsilon_2) \\ \nonumber & = & W_{mic}(\varepsilon_1)+\frac{\varepsilon-\varepsilon_1}{\varepsilon_2-\varepsilon_1}[W_{mic}(\varepsilon_2)-W_{mic}(\varepsilon_1)], \end{eqnarray} which depends linearly on $\varepsilon$ for strains between $\varepsilon_1$ and $\varepsilon_2$: thus the wire tension is independent of the macroscopic strain in this interval. (We remark in passing that the treatment for the deformation of two or three dimensional shape memory materials, rather than wires, has also been developed but is more complicated and involves quasiconvexification rather than convexification: see, for example, [\ref{bib:Ball.}].) \begin{figure} \caption{The tension-strain hysteresis loop for a close to ideal dynamic rope, shown in red. The path with the single red arrow denotes the trajectory when the strain is increased, the double red arrow denotes the trajectory when the strain is decreased.} \label{Fig7} \end{figure} In reality, the tension-strain diagrams of shape memory wires have a hysteresis loop, which is not encompassed by our purely elastic formulation. Although hysteresis can be minimized in shape memory materials [\ref{bib:Song.}], a hysteresis loop in the response of a rope would actually be a necessity: it would be good if the tension in the rope were just slightly more than $mg$ when the rope retracts after it reaches its maximum extension, as then the climber would slowly rebound from the fall, returning close to $x=L$. This close-to-ideal tension-strain relation with a hysteresis loop is sketched in Figure \ref{Fig7}. In some circumstances it might be best if the rope remained close to the maximum extension $x=L+\Delta L$ after the fall: this will be the case if the return path in the hysteresis loop is below the line $b=mg$. \section{Conclusions} \label{sec5} We do not expect this paper to have an immediate effect on the climbing community, but by providing a prescription for a \textit{mathematically ideal rope,} the work may help guide the development of new ropes. Also, it may motivate research into alternate shape memory materials which may be more suitable for use in ropes than Nitinol.
It is worth mentioning that there are recently developed shape memory materials that are not alloys but polymers, e.g., [\ref{bib:Eis.Rhe.},\ref{bib:Xie.Rou.}], although these do not yet have the required strength to be suitable in ropes. The suggestion of using materials with shape memory wire characteristics appears to be novel. However, in reality the tension-strain hysteresis loop trajectory in shape memory materials depends on the rate of deformation, and the plateaus disappear if the rate of deformation is too fast (see, for example, Figure 14 in [\ref{bib:Heller.}]). In order for the plateaus to be retained at a desired rate of deformation, the \textit{phase separation} must occur on an appropriately fast time scale. Another unwanted characteristic is that the shape and position of the hysteresis loop in general vary according to the number of deformation cycles the wire has undergone. Shape memory wires only have a stress plateau extending up to 8\% strain, while dynamic ropes can have strains of up to 15\%; however, this may actually be an advantage, as pointed out to us by Wendy Crone at the University of Wisconsin. The reason is that with a \textit{mathematically ideal rope,} one can have the same peak force with less overall elongation than in a standard dynamic rope, and less elongation minimizes the chance of collision with a rock outcrop or another climber. Of course a ``rope'' built from shape memory material wires may have disadvantages not considered here, such as being too heavy (although titanium is comparatively light among metals), too expensive, not easily coiled, not easily knotted, or having properties which are temperature-dependent. Cables have been built from shape memory wires and their properties have been studied [\ref{bib:Reedlunn.}, \ref{bib:Ozbulut.}]. Significantly, Reedlunn, Daly, and Shaw [\ref{bib:Reedlunn.}] find that the hysteresis loop can extend up to a strain of about 12\% (see their Figure 6a), depending on the placement of the wires in the cable, but in this case the stress plateau is lost: an alternate cable design has (as shown in the same figure) a hysteresis loop up to a strain of about 8\% while retaining the stress plateau. Cables, like wires, have a hysteresis loop that varies according to the number of deformation cycles: see Figure 9 of Ozbulut, Daghash, and Sherif [\ref{bib:Ozbulut.}] where the hysteresis loop after 100 cycles is much narrower than after one cycle. It may be an advantage to combine fibers or wires with shape memory characteristics with other rope elements, but we have not studied this possibility. We remark that recently developed shape memory alloys can reliably go through 10 million transformation cycles without fatigue [\ref{bib:SMfat}], although in this case the hysteresis loop is quite small. There may be other applications where this analysis is useful. For example, suppose one wants to drop cargo from a plane or a helicopter, without deploying parachutes (which may easily be seen by an enemy). Then, to lessen the impact of the fall on the cargo, it may be useful to attach a tether to the cargo which becomes taut at a certain distance above the ground: the distance to the ground sets the maximal elongation. One would again want to minimize the peak tension in the tether, thus minimizing the forces on the plane or the helicopter and on the cargo.
In such applications, to achieve the desired \textit{mathematically ideal tether} which produces a constant braking force, one could use a tether with sacrificial elements which tear and dissipate energy as it is stretched. A one dimensional model with these sacrificial elements was studied by Cherkaev, Cherkaev and Slepyan [\ref{bib:Cherkaev}] and shows an approximate plateau in the tension versus elongation (with oscillations): see their Figure 3. Such sacrificial elements are believed to be a mechanism giving bone its strength [\ref{bib:Thompson.}] and account for the resilience of double network hydrogels [\ref{bib:Gong}]; see also http://imechanica.org/node/13088.

\appendix
\noindent\textit{\textbf{Appendix I: Notation}}

\noindent
\begin{tabular}{lp{0.75\linewidth}}
$a_0$ & constant acceleration/deceleration of the climber\\
$a(x,t)$ & acceleration at time $t$ of the rope point with initial (at $t=0$) coordinate $x$\\
$b(t)$ & rope tension at the point $x=L$ at time $t$\\
$b_0$ & constant rope tension at the point $x=L$ at constant acceleration\\
$E_{el}$ & stored elastic energy of the rope\\
$E_{total}$ & total energy of the system\\
$\varepsilon(x)$, $\varepsilon_i$ & elastic strain\\
$g$ & gravitational acceleration\\
$h_0$ & initial coordinate of the climber\\
$k$ & friction coefficient between the rope and the carabiner\\
$L$ & length of the undeformed rope\\
$\Delta L$ & maximal stretch of the rope in a fall cycle\\
$m$ & mass of the climber\\
$t$ & time\\
$T$ & time at which the rope reaches its maximal stretch for the first time\\
$u(x,t)$ & the rope displacement function\\
$v(x,t)$ & velocity of the rope point $x$ at time $t$\\
$v_0$ & initial velocity of the climber in the reformulated problem\\
$W$ & rope elastic energy density\\
$x$ & coordinate variable of the system\\
$y(x,t)$ & the rope deformation function\\
\end{tabular}

\end{document}
\begin{document} \title{\bf Tree-based Particle Smoothing Algorithms in a Hidden Markov Model} \author{Dong Ding \hspace{.2cm}\\ Department of Mathematics, Imperial College London\\ Axel Gandy \\ Department of Mathematics, Imperial College London} \maketitle \begin{abstract} We provide a new strategy built on the divide-and-conquer approach by Lindsten et al. (2017, Journal of Computational and Graphical Statistics) to investigate the smoothing problem in a hidden Markov model. We employ this approach to decompose a hidden Markov model into sub-models with intermediate target distributions based on an auxiliary binary tree structure, and to propagate independent samples from the sub-models at the leaf nodes towards the original model of interest at the root. We review the target distributions in the sub-models suggested by Lindsten et al. and propose two new classes of target distributions, namely estimates of the (joint) filtering distributions and of the (joint) smoothing distributions. The first proposed type is straightforwardly constructible by running a filtering algorithm in advance. The algorithm using the second type of target distributions has the advantage of keeping the marginals of all random variables roughly invariant at all levels of the tree, at the cost of having to approximate the marginal smoothing distributions in advance. We further propose parametric and non-parametric ways of constructing these target distributions using pre-generated Monte Carlo samples. We show empirically that the algorithms with the proposed intermediate target distributions give stable results comparable to those of conventional smoothing methods in a linear Gaussian model and a non-linear model. \end{abstract} \noindent {\it Keywords: Algorithms; Bayesian methods; Monte Carlo simulations; Particle filters} \section{Introduction} \label{intro} A hidden Markov model (HMM) is a discrete-time stochastic process $\{X_{t}, Y_{t}\}_{t \geq 0}$ where $\{X_{t}\}_{t \geq 0}$ is an unobserved Markov chain. We only have access to $\{Y_{t}\}$, whose distribution depends on $\{X_{t}\}$. We make the following assumptions throughout the article: the density of the initial state $X_{0}$, the transition density of $X_{t+1}$ given $X_{t} = x_{t}$ and the emission density of $Y_{t}$ given $X_{t} = x_{t}$, taken with respect to some dominating measure, exist and are denoted as follows: \begin{alignat*}{2} X_{0} &\sim p_{0}(x_{0}) &&\\ X_{t+1}| \{ X_{t} = x_{t} \} &\sim p(x_{t+1} | x_{t}) && \text{~~for } t = 0, \ldots, T-1,\\ Y_{t} | \{X_{t} = x_{t}\} &\sim p(y_{t}|x_{t}) && \text{~~for } t = 0, \ldots, T, \end{alignat*} where $T$ is the final time step of the process. We are interested in the (marginal) smoothing distributions $\{p(x_{t}|y_{0:T})\}_{t = 0, \ldots, T}$ or the joint smoothing distribution $p(x_{0:T}|y_{0:T})$ where $x_{0:T}$ and $y_{0:T}$ are abbreviations of $(x_{0}, \ldots, x_{T})$ and $(y_{0}, \ldots, y_{T})$, respectively. Exact solutions are available for a linear Gaussian HMM using a Rauch--Tung--Striebel smoother (RTSs) \citep{rauch1965maximum} and for a HMM with finite-space Markov chains \citep{baum1966statistical}. In most other cases, the smoothing distributions are not analytically tractable. A large body of work uses Monte Carlo methods to approximate the smoothing distributions $\{p(x_{t}|y_{0:T})\}_{t = 0, \ldots, T}$ or the joint smoothing distribution $p(x_{0:T}|y_{0:T})$.
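As a concrete counterpart to the notation above, the following Python sketch simulates a generic HMM forward in time from user-supplied samplers for $p_{0}(x_{0})$, $p(x_{t+1}|x_{t})$ and $p(y_{t}|x_{t})$; the Gaussian samplers in the example are placeholders only (they correspond to the linear Gaussian model revisited in Section \ref{linear_simulation}) and are not part of the general model specification.
\begin{verbatim}
import numpy as np

def simulate_hmm(T, sample_p0, sample_trans, sample_emit, rng):
    """Draw (x_{0:T}, y_{0:T}) from a HMM given samplers for p_0, the
    transition density and the emission density."""
    x = np.empty(T + 1)
    y = np.empty(T + 1)
    x[0] = sample_p0(rng)
    y[0] = sample_emit(x[0], rng)
    for t in range(T):
        x[t + 1] = sample_trans(x[t], rng)
        y[t + 1] = sample_emit(x[t + 1], rng)
    return x, y

# Placeholder choices: X_0 ~ N(0,1), X_{t+1} = 0.8 X_t + V_t, Y_t = X_t + W_t.
rng = np.random.default_rng(0)
x, y = simulate_hmm(
    T=50,
    sample_p0=lambda rng: rng.normal(0.0, 1.0),
    sample_trans=lambda xt, rng: 0.8 * xt + rng.normal(0.0, 1.0),
    sample_emit=lambda xt, rng: xt + rng.normal(0.0, 1.0),
    rng=rng,
)
\end{verbatim}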
Sequential Monte Carlo (SMC) methods \citep{de2001sequential} are commonly used to sequentially update the filtering distributions $\{p(x_{t}|y_{0:t})\}_{t = 0, \ldots, T}$. SMC can in principle be used to estimate the joint smoothing density $p(x_{0:T}|y_{0:T})$ by updating the entire history of the random samples in each resampling step. However, the performance can be poor, as path degeneracy will occur in many settings \citep{arulampalam2002tutorial}. Advanced sequential Monte Carlo methods with desirable theoretical and practical results have been developed in recent years, including sequential Quasi-Monte Carlo (SQMC) \citep{gerber2015sequential}, divide-and-conquer sequential Monte Carlo (D\&C SMC) \citep{lindsten2017divide}, multilevel sequential Monte Carlo (MSMC) \citep{beskos2017multilevel} and variational sequential Monte Carlo (VSMC) \citep{naesseth2017variational}. Other smoothing algorithms have been suggested previously. \citet{doucet2000sequential} develop the forward filtering backward smoothing algorithm (FFBSm) for sampling from $\{p(x_{t}|y_{0:T})\}_{t = 0, \ldots, T}$ based on the formula proposed by \citet{kitagawa1987non}. \citet{godsill2004monte} propose the forward filtering backward simulation algorithm (FFBSi) which generates samples from the joint smoothing distribution $p(x_{0:T}|y_{0:T})$. \citet{briers2010smoothing} propose a two-filter smoother (TFS) which employs a standard forward particle filter and a backward information filter to sample from $\{p(x_{t}|y_{0:T})\}_{t = 0, \ldots, T}$. Typically, these algorithms have quadratic complexities in $N$ for generating $N$ samples. \citet{fearnhead2010sequential} and \citet{klaas2006fast} propose two smoothing algorithms with lower computational complexity, but their methods do not provide unbiased estimates. In this article, we suggest using the divide-and-conquer sequential Monte Carlo (D\&C SMC) \citep{lindsten2017divide} approach to address the smoothing problem. The D\&C SMC algorithm performs statistical inference in probabilistic graphical models. It splits the random variables of the target distribution into multiple levels of disjoint sets based upon an auxiliary tree $\mathcal{T}$. An intermediate target distribution needs to be assigned to each set of random variables, yielding sub-models for each non-leaf node. The choice of these intermediate target distributions is key for a good overall performance of the algorithm. By sampling independently at the leaf nodes and gradually propagating, merging and resampling the particles towards the root, the D\&C SMC algorithm eventually produces samples from the target distribution. The merging step involves importance sampling. Using the idea of D\&C SMC, we aim to estimate the joint smoothing distribution $p(x_{0:T}|y_{0:T})$ and thus call the algorithm the `tree-based particle smoothing algorithm' (TPS). The key difference between TPS and other smoothing algorithms lies in its non-sequential and more adaptive merging of the samples. Our main contribution is the proposal and investigation of three classes of intermediate target distributions to be used in the algorithm. We denote a leaf node corresponding to a single random variable $X_{j}$ by $\mathcal{T}_{j} \in \mathcal{T}$ and a non-leaf node corresponding to the random variables $X_{j:l}$ by $\mathcal{T}_{j:l} \in \mathcal{T}$ $(j < l)$.
The first class, suggested by \cite{lindsten2017divide}, has density proportional to the product of all transition and emission densities associated with the target variable $X_{j}$ (resp. $X_{j:l}$) in the sub-model. This is equivalent to the unnormalised likelihood of a new HMM starting at time $j$ (resp. from time $j$ to $l$) given the observations of the same time interval, with an uninformative prior on $X_{j}$ if $j \neq 0$. The second class uses an estimate of the filtering distribution $p(x_{j} | y_{0:j})$ at $\mathcal{T}_{j} \in \mathcal{T}$ and an estimate of the joint filtering distribution $p(x_{j:l} | y_{0:l})$ at $\mathcal{T}_{j:l} \in \mathcal{T}$. Working with this estimate involves tuning a preliminary particle filter. The third class uses estimates of the marginal smoothing distribution $p(x_{j} | y_{0:T})$ at $\mathcal{T}_{j} \in \mathcal{T}$ and of the joint smoothing distribution $p(x_{j:l} | y_{0:T})$ at $\mathcal{T}_{j:l} \in \mathcal{T}$. We will see that this class of intermediate distributions is optimal in a certain sense. Furthermore, under this construction, the marginal distributions of all single random variables $\{X_{j}\}_{j = 0}^{T}$ are kept approximately invariant, equal to the marginal smoothing distributions $\{ p(x_{j} | y_{0:T}) \}_{j = 0}^{T}$, at every level of the tree. The price of implementing TPS with this third class is that it relies on estimates of both the filtering and the (marginal) smoothing distributions, though not necessarily of the joint smoothing distribution. We then propose some parametric and non-parametric approaches to construct these intermediate distributions based on pre-generated Monte Carlo samples, considering both efficiency and accuracy. The article is structured as follows. We first describe the divide-and-conquer approach for particle smoothing in Section \ref{TPS_intro}. We discuss the intermediate target distributions and the constructions of the initial sampling distributions at the leaf nodes in Section \ref{target_distr}. In Section \ref{simulation}, we conduct simulation studies in a linear Gaussian and non-linear non-Gaussian HMM to compare TPS with other smoothing algorithms. The article finishes with a discussion in Section \ref{conclusion}. \section{Tree-based Particle Smoothing Algorithm (TPS)} \label{TPS_intro} This section outlines an algorithm we call the `tree-based particle smoothing algorithm' (TPS). \cite{lindsten2017divide} describe the construction of an auxiliary tree for general probabilistic graphical models. We demonstrate a particular construction of an auxiliary binary tree from a HMM, with intermediate target distributions specified at each node. We then illustrate the sampling procedure for the target distributions at the nodes. We present an algorithm which can be applied recursively from the leaf nodes towards the root and eventually generates the target samples. \subsection{Construction of an auxiliary tree} \label{Aux_tree} TPS splits a HMM into sub-models based upon a binary tree decomposition. It first divides the random variables $X_{0:T}$ into two disjoint subsets and recursively applies binary splits to the resulting subsets until each resulting subset consists of only a single random variable. Each generated subset corresponds to a tree node and is assigned an intermediate target distribution. The root characterises the complete model with the target distribution $p(x_{0:T}|y_{0:T})$. Initial samples are generated at the leaf nodes, independently between leaves.
These samples are recursively merged using importance sampling until the root of the tree is reached. We propose one intuitive way of implementing the binary splits which ensures that the left subtree is always a complete binary tree and contains at least as many nodes as the right subtree. We split a non-leaf node with the variables $X_{j:l}$, where $0 \leq j < l \leq T$, into two children $\mathcal{T}_{j:k-1}$ and $\mathcal{T}_{k:l}$ with the random variables $X_{j:k-1}$ and $X_{k:l}$, where \begin{eqnarray} \label{cut_point} k = j + 2^{p}, \end{eqnarray} and $p = \ceil {\frac{\log(l-j+1)}{\log 2}} - 1.$ The auxiliary tree when $T = 5$ is shown in Figure \ref{RVoT}. This construction has several advantages. The random variables within each node have consecutive time indices. The left subtree is a complete binary tree with $2^{\ceil {\frac{\log(T+1)}{\log 2}}-1}$ leaf nodes; this is convenient if further observations $\{y_{T+1}, y_{T+2}, \ldots\}$ become available, as the samples from this complete subtree would not need to be updated. Moreover, the tree has a height of $\big(\ceil {\frac{\log(T+1)}{\log 2}} + 1\big)$ levels, which implies a maximum number of $\ceil {\frac{\log(T+1)}{\log 2}}$ updates of the samples corresponding to a single random variable with different target distributions at different levels of the tree. Usually, more updates potentially indicate more resampling steps, which may cause more serious degeneracy problems. In Figure \ref{RVoT}, the samples corresponding to $X_{0}, \ldots, X_{3}$ need to be updated three times from the leaf nodes and those of $X_{4}, X_{5}$ need to be updated twice. When running a bootstrap particle filter to solve the smoothing problem, the samples at time step $t = 0$ need to be updated $T$ times and thus the maximum number of updates becomes $T$, which is no less than $ \ceil {\frac{\log(T+1)}{\log 2}}$. \cite{lindsten2017divide} also propose a general way of constructing the auxiliary tree in a self-similar model family, to which a HMM belongs. Their construction in the context of a HMM may not be identical to ours, as it places no restriction on the choice of the cutting point. \begin{figure} \caption{An auxiliary binary tree consisting of random variables when $T = 5$} \label{RVoT} \end{figure} \subsection{Sampling procedure in the sub-models of the tree} We describe the sampling approach for the target distribution at a leaf and at a non-leaf node of the constructed binary tree $\mathcal{T}$ described in Section \ref{Aux_tree}. We denote by $f_{j}$ a target density which can be straightforwardly sampled from at a leaf node $\mathcal{T}_{j} \in \mathcal{T}$, and by $h_{j:l}$ a proper importance density and by $f_{j:l}$ a target density, respectively, at a non-root tree node $\mathcal{T}_{j:l} \in \mathcal{T}$ where $0 < l - j < T$. At the root, the target density is always $f_{0:T} = p(x_{0:T}|y_{0:T})$. At a leaf node $\mathcal{T}_{j}$, we sample from $f_{j}$ directly. At a non-leaf node $\mathcal{T}_{j:l}$, we employ an importance sampling step with the proposal $h_{j:l} = f_{j:k-1} f_{k:l}$ being the product of the target densities from the two children of $\mathcal{T}_{j:l}$. Practically, we merge the samples from $\mathcal{T}_{j:k-1}$ and $\mathcal{T}_{k:l}$ respectively and reweight them. \begin{algorithm}[tbp] \SetAlgoLined \eIf{j = l}{ Simulate $x_{j}^{(i)} \sim f_{j}(x_{j})$ for $i = 1,2, \ldots, N$. Return $\{ x_{j}^{(i)}, w_{j}^{(i)} = \frac{1}{N} \}_{i = 1}^{N}.$ } {Let $p = \ceil{\frac{\log(l-j+1)}{\log 2}} - 1$ and $k = j+ 2^{p}$.
\\ $\{ \tilde{x}_{j:k-1}^{(i)}, \tilde{w}_{j:k-1}^{(i)} \}_{i = 1}^{N} \leftarrow \texttt{TS}(j,k-1)$ from $\mathcal{T}_{j:k-1}$ and $\{ \tilde{x}_{k:l}^{(i)}, \tilde{w}_{k:l}^{(i)} \}_{i = 1}^{N} \leftarrow \texttt{TS}(k,l)$ from $\mathcal{T}_{k:l}$.\\ Denote the combined particles by $\{ \tilde{x}_{j:l}^{(i)} = (\tilde{x}_{j:k-1}^{({i})}, \tilde{x}_{k:l}^{({i})}), \tilde{w}_{j:l}^{(i)} = \tilde{w}_{j:k-1}^{({i})} \tilde{w}_{k:l}^{({i})} \}_{i = 1}^{N}$. \\ Update the unnormalised weights for $i = 1, \ldots, N$: \begin{eqnarray} \label{weight_formula} \hat{w}^{(i)}_{j:l} = \tilde{w}_{j:l}^{(i)} \frac{f_{j:l} (\tilde{x}_{j:l}^{(i)})}{ f_{j:k-1} ( \tilde{x}^{({i})}_{j:k-1}) f_{k:l}(\tilde{x}^{({i})}_{k:l})}. \end{eqnarray} Resample $\big\{ \tilde{x}_{j:l}^{(i)}, \hat{w}^{(i)}_{j:l} \big\}_{i = 1}^{N}$ to obtain the normalised weighted particles $\big\{ x_{j:l}^{(i)}, w^{(i)}_{j:l} \big\}_{i = 1}^{N}$. \\ Return $\big\{ x_{j:l}^{(i)}, w^{(i)}_{j:l} \big\}_{i = 1}^{N}.$ } \caption{Algorithm \texttt{TS}($j, l$) which generates weighted samples from the target $f_{j:l}$} \label{RecursiveAlg} \end{algorithm} \begin{figure} \caption{Computational flow of $\texttt{TS}$ (see Algorithm \ref{RecursiveAlg}) in a HMM for $T = 5$. Each non-root node contains the weighted samples from the intermediate target distributions. The generation of the samples starts from the leaves following the branches towards the root of the auxiliary binary tree.} \label{CF} \end{figure} Algorithm \ref{RecursiveAlg} demonstrates the generation of $N$ weighted samples $\big\{ x_{j:l}^{(i)}, w^{(i)}_{j:l} \big\}$ from the target $f_{j:l}$ at $\mathcal{T}_{j:l}$. It uses the pre-stored weighted particles $\big\{ \tilde{x}_{j:k-1}^{(i)}, \tilde{w}^{(i)}_{j:k-1} \big\}_{i = 1}^{N}$ from $\mathcal{T}_{j:k-1}$ and $\big\{\tilde{x}_{k:l}^{(i)}, \tilde{w}^{(i)}_{k:l} \big\}_{i = 1}^{N}$ from $\mathcal{T}_{k:l}$, where $k$ is the cutting point defined in Equation \eqref{cut_point}. The algorithm first merges the weighted particles $\big \{\tilde{x}_{j:l}^{(i)} = \big( \tilde{x}_{j:k-1}^{({i})}, \tilde{x}_{k:l}^{({i})} \big) \big\}_{i = 1}^{N}$ from the children, which form an approximation of the distribution with density $f_{j:k-1} f_{k:l}$. The algorithm then reweights the combined samples using importance sampling to target the new distribution $f_{j:l}$. We retain the weight notation in the algorithm since some resampling schemes, such as the Chopthin algorithm \citep{gandy2016chopthin}, return unequal weights, while others, including multinomial resampling, residual resampling \citep{liu1998sequential} and systematic resampling \citep{kitagawa1996monte}, return equal weights. We apply the algorithm recursively from the leaf nodes to the root of the auxiliary binary tree, which yields samples from the final target $f_{0:T} = p(x_{0:T} | y_{0:T})$. The computational flow is shown in Figure \ref{CF} when $T = 5$. The setting of the algorithm is the same as in the paper by \cite{lindsten2017divide}, with additional attention to the form of the proposals and the intermediate target distributions associated with the tree nodes. According to Propositions 1 and 2 in \cite{lindsten2017divide}, the unbiasedness of the normalising constant estimate and the consistency of the algorithm can be verified under some regularity conditions, given valid proposals and an exchangeable resampling procedure.
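For readers who prefer working code to pseudo-code, the following Python sketch mirrors Algorithm \ref{RecursiveAlg} for a generic choice of intermediate targets; the interface (a leaf sampler and an unnormalised log-density for each node), the use of logarithms for numerical stability and the choice of multinomial resampling are our own simplifying assumptions rather than requirements of the algorithm.
\begin{verbatim}
import numpy as np

def split_point(j, l):
    """Cutting point: k = j + 2^p with p = ceil(log2(l - j + 1)) - 1."""
    p = int(np.ceil(np.log2(l - j + 1))) - 1
    return j + 2 ** p

def ts(j, l, N, leaf_sampler, log_f, rng):
    """Weighted samples targeting f_{j:l}, following Algorithm TS(j, l).
    leaf_sampler(j, N, rng) draws N values from f_j; log_f(j, l, X) returns
    the unnormalised log target at particles X of shape (N, l - j + 1)."""
    if j == l:
        X = leaf_sampler(j, N, rng).reshape(N, 1)
        return X, np.full(N, 1.0 / N)
    k = split_point(j, l)
    X_left, w_left = ts(j, k - 1, N, leaf_sampler, log_f, rng)
    X_right, w_right = ts(k, l, N, leaf_sampler, log_f, rng)
    X = np.hstack([X_left, X_right])                 # merge the child particles
    log_w = (np.log(w_left) + np.log(w_right)
             + log_f(j, l, X)                        # target at the current node
             - log_f(j, k - 1, X_left)               # targets at the two children
             - log_f(k, l, X_right))
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                 # multinomial resampling
    return X[idx], np.full(N, 1.0 / N)
\end{verbatim}
Calling \texttt{ts(0, T, N, ...)} with \texttt{log\_f} encoding one of the classes of intermediate target distributions of Section \ref{target_distr} returns equally weighted particles approximating $f_{0:T} = p(x_{0:T}|y_{0:T})$.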
\section{Intermediate target distributions in TPS} \label{target_distr} Given an auxiliary tree $\mathcal{T}$ constructed as described in Section~\ref{Aux_tree}, we define the intermediate target distributions of the sub-models associated with the nodes in the tree. We apply \cite{lindsten2017divide}'s method to build one class of intermediate target distributions $\{f_{j:l}\}_{\mathcal{T}_{j:l} \in \mathcal{T}}$ and develop two new classes, based on the filtering and the smoothing distributions, respectively. \subsection{Target suggested by \cite{lindsten2017divide}} \cite{lindsten2017divide} recommend a class of intermediate target distributions with densities proportional to the product of the factors within the probabilistic graphical model. We apply the method to a HMM which bears binary and unary factors. A binary factor refers to a transition density of two consecutive hidden states. A unary factor refers to a prior density of a hidden state or the emission density between a hidden state and its observation. We call the tree-based particle smoothing algorithm using this class of target distributions, as suggested by \cite{lindsten2017divide}, TPS-L. At a leaf node $\mathcal{T}_{j}$ where the sub-model only contains a single random variable $X_{j}$ given the observation $Y_{j} = y_{j}$, the target distribution contains no binary factor and is defined as $f_{0}(x_{0}) \propto p_{0}(x_{0})p(y_{0} | x_{0})$ when $j = 0$ and $f_{j}(x_{j}) \propto p(y_{j} | x_{j})$ when $j \neq 0$. At a non-leaf node $\mathcal{T}_{j:l}$, the target density is proportional to the product of all transition and emission densities containing the hidden states in the sub-model: \begin{eqnarray*} f_{j:l}(x_{j:l}) &\propto& p(y_{j} | x_{j}) \prod^{l-1}_{i = j} \bigg\{ p(x_{i+1} | x_{i}) p(y_{i+1} | x_{i+1}) \bigg\}. \end{eqnarray*} When $j = 0$, the prior density of $X_{0}$ is included as an additional factor. Assume $\mathcal{T}_{j:l}$ has two children $\mathcal{T}_{j:k-1}, \mathcal{T}_{k:l} \in \mathcal{T}$ carrying the pre-generated particles $\{ \tilde{x}^{(i)}_{j:k-1}, \tilde{w}^{(i)}_{j:k-1} \}_{i = 1}^{N} \sim f_{j:k-1}$ at $\mathcal{T}_{j:k-1} \in \mathcal{T}$ and $\{ \tilde{x}^{(i)}_{k:l}, \tilde{w}^{(i)}_{k:l} \}_{i = 1}^{N} \sim f_{k:l}$ at $\mathcal{T}_{k:l} \in \mathcal{T}$. The unnormalised importance weight $\hat{w}^{(i)}_{j:l}$ of the combined particle $\tilde{x}^{(i)}_{j:l} = (\tilde{x}_{j:k-1}^{({i})}, \tilde{x}_{k:l}^{({i})})$ in Equation \eqref{weight_formula} becomes: \begin{eqnarray} \hat{w}^{(i)}_{j:l} = \tilde{w}_{j:l}^{(i)} p(\tilde{x}^{(i)}_{k} | \tilde{x}^{(i)}_{k-1}), \end{eqnarray} where $\tilde{x}^{(i)}_{k-1}$ is the last element in $\tilde{x}_{j:k-1}^{({i})}$ and $\tilde{x}^{(i)}_{k}$ is the first element in $\tilde{x}_{k:l}^{({i})}$. The tree-based sampling algorithm employing this type of intermediate target distributions is simple to implement, as it does not involve any of the estimation techniques used in the algorithms discussed in Sections~\ref{inter_EF} and \ref{inter_ES}. TPS-L only requires the initial sampling of the particles from $f_{j}$ and applies importance sampling with a straightforward weight formula to merge them towards the root of the tree. The initial sampling distribution $f_{j}$ for $j \neq 0$ is equivalent to the posterior given a single observation $y_{j}$ from an uninformative prior. Correspondingly, the target distribution at $\mathcal{T}_{j:l}$ only incorporates the observations from time $j$ to $l$ with no information beforehand or afterward.
We will see in the simulation section that, with only one observation conditioned on, the initial sampling distribution may be vastly different from the marginal smoothing distribution, thus resulting in poor estimation results. \subsection{Estimates of filtering distributions as target} \label{inter_EF} The second class of target distributions is based on estimates of filtering distributions, and we thus name the algorithm TPS-EF. At the root, the target distribution is \begin{eqnarray*} f_{0:T}(x_{0:T}) = p(x_{0:T}|y_{0:T}) = p_{0}(x_{0}) p(y_{0}|x_{0}) \prod^{T-1}_{i = 0} \bigg\{ p(x_{i+1} | x_{i}) p(y_{i+1} | x_{i+1}) \bigg\}. \end{eqnarray*} At a leaf node $\mathcal{T}_{j} \in \mathcal{T}$, we use an estimate of the filtering distribution $f_{j}(x_{j}) = \hat{p}(x_{j} | y_{0:j}) \approx p(x_{j} | y_{0:j})$, whose exact form and sampling process will be discussed in Section \ref{ini_target}. At a non-leaf and non-root node $\mathcal{T}_{j:l} \in \mathcal{T}$, we define the intermediate target distribution: \begin{eqnarray*} f_{j:l}(x_{j:l}) &\propto& \hat{p}(x_{j} | y_{0:j}) \prod^{l-1}_{i = j} \bigg\{ p(x_{i+1} | x_{i}) p(y_{i+1} | x_{i+1}) \bigg\} \approx p(x_{j:l} | y_{0:l}). \end{eqnarray*} The weight of the merged sample $\tilde{x}^{(i)}_{j:l} = (\tilde{x}_{j:k-1}^{({i})}, \tilde{x}_{k:l}^{({i})})$ in Equation \eqref{weight_formula} becomes: \begin{eqnarray} \label{weight_1} \hat{w}^{(i)}_{j:l} = \tilde{w}^{(i)}_{j:l} \frac{ p( \tilde{x}_{k}^{({i})} | \tilde{x}_{k-1}^{({i})} ) p(y_{k} | \tilde{x}_{k}^{({i})}) }{ \hat{p}_{k} (\tilde{x}^{({i})}_{k} | y_{0:k}) }. \end{eqnarray} Under such constructions of the intermediate target distributions, the particles at the leaf nodes are initially generated from (an estimate of) the filtering distribution. Whilst moving up the tree, their empirical marginal distributions gradually shift towards the smoothing distributions. One downside is that this may eliminate a large proportion of the particles, as the transition is accomplished via importance sampling, particularly if the discrepancy between the filtering and smoothing distribution is large. \subsection{Kullback--Leibler divergence between the target and proposal distribution} \label{KLdiv} Before proposing the second new class of intermediate target distributions, we present an optimal type of proposal attaining the minimum Kullback--Leibler (KL) divergence \citep{cover2012elements}, assuming the random variables $X_{j:k-1}$ and $X_{k:l}$ from the sibling nodes $\mathcal{T}_{j:k-1}$ and $\mathcal{T}_{k:l}$ to be independent. Given that the proposal $h_{j:l} = f_{j:k-1} f_{k:l}$ is the product of the densities of two independent random variables, the minimum KL divergence is attained when the two densities are the marginals of the target density with respect to the corresponding random variables. For notational simplicity, we denote the target density at a non-leaf node by $f(\mathbf{x_{1}}, \mathbf{x_{2}})$, where $\mathbf{X_{1}}, \mathbf{X_{2}}$ are the random variables with the same time indices as in the two children, but not necessarily the same probability measure. A valid proposal density $h_{1}( {\mathbf{x_{1}}}) h_{2}({ \mathbf{x_{2}}})$ satisfies $h_{1}( {\mathbf{x_{1}}}) h_{2}({ \mathbf{x_{2}}}) > 0$ whenever $f(\mathbf{x_{1}}, \mathbf{x_{2}}) > 0$, where we assume $h_{1}$ and $h_{2}$ are the probability densities of two independent (joint) random variables $\mathbf{X_{1}}$ and $\mathbf{X_{2}}$.
We claim that the proposal $f_{1}({\mathbf{x_{1}}}) f_{2}({ \mathbf{x_{2}}})$ has the smallest KL divergence among all proposals of the form $h_{1}(\mathbf{x_{1}})h_{2}({\mathbf{x_{2}}})$, where $f_{1}({\mathbf{x_{1}}})$ and $f_{2}({ \mathbf{x_{2}}})$ are the marginal densities of $f(\mathbf{x_{1}, x_{2}})$ with respect to $\mathbf{X_{1}}$ and $\mathbf{X_{2}}$, respectively. \begin{theorem} \label{thm:KL_dst} Let $f$ be a probability density function defined on $\mathbb{R}^{n_{1} + n_{2}}$, and let $h_{1}$ and $h_{2}$ be probability density functions on $\mathbb{R}^{n_{1}}$ and $\mathbb{R}^{n_{2}}$, respectively. If $h_{1}(\mathbf{x_{1}}) h_{2} (\mathbf{x_{2}}) > 0$ whenever $f(\mathbf{x_{1}, x_{2}}) >0,$ then $$ \int_{\mathbb{R}^{n_{2}}} \int_{\mathbb{R}^{n_{1}}} f( \mathbf{x_{1}}, \mathbf{x_{2}}) \log \bigg(\frac{f(\mathbf{x_{1}}, \mathbf{x_{2}})}{h_{1}(\mathbf{x_{1}}) h_{2}(\mathbf{x_{2}}) } \bigg) \mathrm{d} \mathbf{x_{1}} \mathrm{d} \mathbf{x_{2}} \geq \int_{\mathbb{R}^{n_{2}}} \int_{\mathbb{R}^{n_{1}}} f(\mathbf{x_{1}}, \mathbf{x_{2}}) \log \bigg(\frac{f(\mathbf{x_{1}}, \mathbf{x_{2}})}{f_{1}(\mathbf{x_{1}}) f_{2}(\mathbf{x_{2}}) } \bigg) \mathrm{d} \mathbf{x_{1}} \mathrm{d} \mathbf{x_{2}},$$ where $f_{1}({ \mathbf{x_{1}}}) = \int_{\mathbb{R}^{n_{2}}} f({\mathbf{x_{1}}}, \mathbf{{x_{2}}}) \mathrm{d} { \mathbf{x_{2}}}$ and $f_{2}({ \mathbf{x_{2}}}) = \int_{\mathbb{R}^{n_{1}}} f(\mathbf{{x_{1}}}, \mathbf{{x_{2}}}) \mathrm{d} {\mathbf{x_{1}}}$ are the densities of the marginal distributions of $f(\mathbf{x_{1}}, \mathbf{x_{2}})$. \end{theorem} The proof of Theorem \ref{thm:KL_dst} is in the Appendix. \subsection{Estimates of smoothing distributions as target} \label{inter_ES} We provide an alternative way of constructing the intermediate target distributions using the marginal smoothing distributions, motivated by Theorem \ref{thm:KL_dst}. Since closed-form solutions to the marginal smoothing distributions are not available in general, we employ estimates of these distributions at the nodes. At the root, we still use $f_{0:T} = p(x_{0:T}|y_{0:T})$. At a leaf node $\mathcal{T}_{j} \in \mathcal{T}$, we define $f_{j}(x_{j}) = \hat{p}(x_{j} | y_{0:T}) \approx p(x_{j} | y_{0:T})$, which requires estimating the marginal smoothing distribution. We thus name the algorithm TPS-ES. At a non-leaf and non-root node $\mathcal{T}_{j:l}$, we define the target distribution $f_{j:l}$: \begin{eqnarray*} f_{j:l}(x_{j:l}) &\propto& \hat{p}(x_{j} | y_{0:j}) \frac{ \hat{p}(x_{l} | y_{0:T})}{ \hat{p}(x_{l} | y_{0:l})} \prod^{l-1}_{i = j} \bigg\{ p(x_{i+1} | x_{i}) p(y_{i+1} | x_{i+1}) \bigg\} \\ &\approx& p(x_{j} | y_{0:j}) \frac{p(x_{l} | y_{0:T}) }{p(x_{l} | y_{0:l}) } \prod^{l-1}_{i = j} \bigg\{ p(x_{i+1} | x_{i}) p(y_{i+1} | x_{i+1}) \bigg\}\\ & = & p(x_{j:l} | y_{0:T}), \end{eqnarray*} where $\hat{p}(x_{j} | y_{0:j})$ denotes a probability density approximating the filtering density at the $j$th time step. Hence, given the estimated smoothing densities $\{ \hat{p}(x_{j} | y_{0:T}) \}_{j = 0, \ldots, T}$ and the estimated filtering densities $\{ \hat{p}(x_{j} | y_{0:j}) \}_{j = 0, \ldots, T}$, we build an estimate of the distribution $p(x_{j:l} | y_{0:T})$ at $\mathcal{T}_{j:l} \in \mathcal{T}$. Merging the particles at $\mathcal{T}_{j:l}$ from its children at $\mathcal{T}_{j:k-1} \in \mathcal{T}$ and $\mathcal{T}_{k:l} \in \mathcal{T}$ amounts to correlating the two sets of samples while roughly preserving their marginal distributions.
The weight of the merged sample $\tilde{x}^{(i)}_{j:l} = (\tilde{x}_{j:k-1}^{({i})}, \tilde{x}_{k:l}^{({i})})$ in Equation \eqref{weight_formula} becomes: \begin{eqnarray} \label{weight_2} \hat{w}^{(i)}_{j:l} = \tilde{w}^{(i)}_{j:l} \frac{\hat{p}(\tilde{x}^{({i})}_{k-1}|y_{0:k-1})}{ \hat{p}(\tilde{x}^{({i})}_{k-1}|y_{0:T}) \hat{p}(\tilde{x}^{({i})}_{k}|y_{0:k})} p(\tilde{x}^{({i})}_{k} | \tilde{x}^{({i})}_{k-1}) p(y_{k}|\tilde{x}^{({i})}_{k}). \end{eqnarray} Applying TPS-ES demands the constructions of $\{ \hat{p}(x_{j} | y_{0:j}) \}_{j = 0, \ldots, T}$ and $\{ \hat{p}(x_{j} | y_{0:T}) \}_{j = 0, \ldots, T}$ in advance. Compared with Equation \eqref{weight_1}, the weight formula in Equation \eqref{weight_2} additionally incorporates the ratio between the estimated filtering and smoothing densities of $x_{k-1}$. TPS-ES exhibits a sound property regarding the Kullback--Leibler divergence discussed in Section \ref{KLdiv}. Given the target distribution $f_{j:l}(x_{j:l}) = \hat{p}(x_{j:l}|y_{0:T})$ estimating $p(x_{j:l}|y_{0:T})$ at $\mathcal{T}_{j:l}$, the proposal $h_{j:l}(x_{j:l}) = f_{j:k-1}(x_{j:k-1}) f_{k:l}(x_{k:l})$ estimates $p(x_{j:k-1} | y_{0:T}) p(x_{k:l} | y_{0:T})$. We notice that $p(x_{j:k-1} | y_{0:T})$ and $p(x_{k:l} | y_{0:T})$ are the marginal distributions, and their product forms a proposal attaining the minimum KL divergence from $p(x_{j:l}|y_{0:T})$. Hence, the distribution that the proposal density $h_{j:l}(x_{j:l})$ estimates attains the minimum KL divergence from the smoothing density that our target distribution $f_{j:l}(x_{j:l})$ estimates. Moreover, TPS-ES can be practically useful in some extreme models where the empirical marginal densities from other Monte Carlo smoothing algorithms may miss modes because of poor proposals. Since TPS-ES leaves the marginal distributions of all random variables roughly invariant at all levels of the tree, we can diagnose each importance sampling step by inspecting the empirical marginals of the corresponding variables. If there is a substantial difference between the empirical marginal distributions, we need to examine the combination step. \subsection{Initial sampling distribution at leaf nodes} \label{ini_target} We illustrate the constructions of the univariate distributions $\{\hat{p}(x_{j} | y_{0:j})\}_{j = 0, \ldots, T}$ and $\{\hat{p}(x_{j} | y_{0:T})\}_{j = 0, \ldots,T}$ mentioned in Section \ref{inter_EF} and Section \ref{inter_ES}, which are used as the initial sampling distributions at the leaf nodes. In general, the solutions of the filtering and smoothing distributions of a HMM are analytically intractable and need to be estimated from Monte Carlo samples, with some exceptions including linear Gaussian and discrete HMMs. We aim to generate a probability density $\hat{f}$ estimating a target density $f$ given the weighted samples $\{x_{i}, w_{i}\}_{i = 1}^{n}$ from $f$. In the context of $f$ being a filtering or smoothing distribution, we can obtain the weighted samples by running a filtering algorithm or a smoothing algorithm. We are not interested in the empirical distribution since it is discrete and generally does not cover the full support of the random variable of interest. We first consider some parametric approaches. We can fit the data with some common probability distributions including a normal distribution and Student's $t$-distribution. We can also accommodate a mixture model to fit multiple modes of the target densities.
The parameters of the distributions can be estimated in various ways including moment matching, the maximum likelihood method and the EM algorithm. The parametric approaches are reasonably quick and simple. For instance, assuming a Gaussian distribution requires only the mean and variance, which can be easily obtained from the samples using moment matching. The generation of the new particles and the evaluation of their densities are straightforward and fast to implement. Nevertheless, the target distribution may not be well approximated under the parametric assumption. Alternatively, we can employ some non-parametric approaches, for instance a kernel density estimator (KDS). We need to select the type of kernel and the bandwidth in advance. The complexity of generating $N$ new samples is $O\big(\log(n)N\big)$ and the evaluation of the densities is more computationally expensive with complexity $O(nN)$. We propose another non-parametric approximation method using piecewise constant functions with a lower computational effort than a KDS. We first build a uniform grid consisting of the points $x_{1} < x_{2} < \ldots < x_{n}$ with densities $d_{1}, \ldots, d_{n}$ estimated by a KDS, such that $x_{i+1} - x_{i} = \Delta > 0$ for $i = 1, \ldots, n-1$. The resulting probability density function formed by these grid points using piecewise constant functions is: \begin{align} \label{interpolate} f(x) = \sum_{i = 1}^{n} \mathbbm{1}_{x \in [x_{i} - \Delta/2, x_{i} + \Delta/2)} d_{i}. \end{align} The evaluation of the sample densities reduces significantly from $O(nN)$ to $O(N)$ compared to a KDS. Such piecewise constant probability density functions have several disadvantages, though they enjoy fast computation of the estimated densities. Firstly, the estimator is biased since the proposal density generally does not cover the full support of the target density. Moreover, in TPS-ES, if the estimated filtering and smoothing distributions are both generated using piecewise constant functions with different samples, there is no guarantee that their densities have the same support, which may cause zero or infinite weights in Equation \eqref{weight_2}. To avoid this, we consider mixture probability distributions using piecewise constant functions which accommodate the samples from both the filtering and smoothing distributions. Assume that at time step $j$ the first uniform grid consists of the points $x^{f}_{1} < x^{f}_{2} < \ldots < x^{f}_{n^{f}}$ such that $x^{f}_{i+1} - x^{f}_{i} = \Delta^{f}$ for $i = 1, \ldots, n^{f}-1$, with estimated filtering densities $d^{f}_{1}, \ldots, d^{f}_{n^{f}}$ from a KDS, and assume the second uniform grid consists of the points $x^{s}_{1} < x^{s}_{2} < \ldots < x^{s}_{n^{s}}$ such that $x^{s}_{i+1} - x^{s}_{i} = \Delta^{s}$ for $i = 1, \ldots, n^{s}-1$, with estimated smoothing densities $d^{s}_{1}, \ldots, d^{s}_{n^{s}}$ from another KDS. Then the resulting estimated filtering density $\hat{p}(x | y_{0:j})$ is given by \begin{align} \label{filter_combine_smoother} \hat{p}(x | y_{0:j}) = \alpha^{f} \sum_{i = 1}^{n^{f}} \mathbbm{1}_{x \in [x^{f}_{i} - \Delta^{f}/2, x^{f}_{i} + \Delta^{f}/2)} d^{f}_{i} + (1-\alpha^{f}) \sum_{i = 1}^{n^{s}} \mathbbm{1}_{x \in [x^{s}_{i} - \Delta^{s}/2, x^{s}_{i} + \Delta^{s}/2)} d^{s}_{i}, \end{align} where $0 < \alpha^{f} < 1$.
Similarly, the estimated smoothing density $\hat{p}(x | y_{0:T})$ is given by \begin{align} \label{smoother_combine_filter} \hat{p}(x | y_{0:T}) = \alpha^{s} \sum_{i = 1}^{n^{s}} \mathbbm{1}_{x \in [x^{s}_{i} - \Delta^{s}/2, x^{s}_{i} + \Delta^{s}/2)} d^{s}_{i} + (1-\alpha^{s}) \sum_{i = 1}^{n^{f}} \mathbbm{1}_{x \in [x^{f}_{i} - \Delta^{f}/2, x^{f}_{i} + \Delta^{f}/2)} d^{f}_{i}, \end{align} where $0 < \alpha^{s} < 1$. We have no definitive guidance on the values of $\alpha^{f}$ and $\alpha^{s}$ so far and choose values close to 1. The resulting grid with the set of points $\{x^{f}_{1}, x^{f}_{2}, \ldots, x^{f}_{n^{f}}, x^{s}_{1}, x^{s}_{2}, \ldots, x^{s}_{n^{s}}\}$ is generally not uniform, but this ensures that the estimated filtering and smoothing densities have the same (though still bounded) support. \section{Simulations} \label{simulation} We conduct simulations in a linear Gaussian HMM and a non-linear non-Gaussian HMM in this section. We implement TPS-EF and other smoothing algorithms with roughly the same computational effort. In the second example, we further compare TPS-EF and TPS-ES. \subsection{Gaussian Linear Model} \label{linear_simulation} We consider a simple linear Gaussian HMM similar to that of \citet{doucet2000sequential}: \begin{alignat*}{2} X_{t} &= 0.8 X_{t-1} + V_{t} ~~~&& t = 1, \ldots, T,\\ Y_{t} &= X_{t} + W_{t} ~~~&& t = 0, \ldots, T, \end{alignat*} where $T = 127$ and $X_0,V_1,\dots,V_T,W_0,\dots,W_T$ are independent with $X_{0} \sim \mathcal{N}(0,1)$, $V_{t}\sim \mathcal{N}(0,1)$ and $W_{t}\sim \mathcal{N}(0,1)$. We implement the following smoothing algorithms. We run TPS using normal distributions as the initial sampling distributions (TPS-N), whose means and variances are estimated using moment matching from the samples of a bootstrap particle filter. The choice of a normal distribution is motivated by the fact that in this case the true smoothing distribution is a normal distribution. We also implement the tree-based particle smoothing algorithm suggested by \cite{lindsten2017divide} (TPS-L), the Rauch--Tung--Striebel smoother (RTSs) \citep{rauch1965maximum} yielding the closed-form solutions, the bootstrap particle filter (BPF) which updates the entire history of the particles in each step, the forward filtering backward smoothing algorithm (FFBSm) \citep{doucet2000sequential} and the forward filtering backward simulation algorithm (FFBSi) \citep{godsill2004monte}. We have implemented the above methods in \texttt{R} ourselves. We set the required sample size $N = 10000$ in TPS-N as a benchmark and denote by $n = 10000$ the number of samples pre-generated from a bootstrap particle filter in FFBSm, FFBSi and TPS-N. We adjust the number of particles in the other algorithms to roughly keep the same running time. As the running times are not deterministic, we allow a 10\% tolerance in the running time of the other algorithms compared to TPS-N. We run each algorithm $M = 500$ times with the same set of observations $\{y_{t}\}_{t = 0}^{127}$.
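For reference, the Kalman filter and Rauch--Tung--Striebel backward recursion for this scalar model can be sketched in Python as follows; it reproduces the closed-form smoothing means and variances used as the ground truth in the error criteria below. Our experiments use \texttt{R}, so this sketch (including its variable names and default arguments) is purely illustrative.
\begin{verbatim}
import numpy as np

def rts_smoother(y, a=0.8, q=1.0, r=1.0, m0=0.0, p0=1.0):
    """Exact smoothing means/variances for X_t = a X_{t-1} + V_t, Y_t = X_t + W_t
    with V_t ~ N(0, q), W_t ~ N(0, r), X_0 ~ N(m0, p0); y = (y_0, ..., y_T)."""
    T = len(y) - 1
    mf, pf = np.empty(T + 1), np.empty(T + 1)    # filtering moments
    mp, pp = np.empty(T + 1), np.empty(T + 1)    # one-step predictive moments
    mp[0], pp[0] = m0, p0
    for t in range(T + 1):
        if t > 0:
            mp[t] = a * mf[t - 1]
            pp[t] = a * a * pf[t - 1] + q
        gain = pp[t] / (pp[t] + r)               # Kalman gain
        mf[t] = mp[t] + gain * (y[t] - mp[t])
        pf[t] = (1.0 - gain) * pp[t]
    ms, ps = mf.copy(), pf.copy()                # smoothing moments
    for t in range(T - 1, -1, -1):
        c = pf[t] * a / pp[t + 1]                # RTS gain
        ms[t] = mf[t] + c * (ms[t + 1] - mp[t + 1])
        ps[t] = pf[t] + c * c * (ps[t + 1] - pp[t + 1])
    return ms, ps
\end{verbatim}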
As a criterion for comparison, we define the mean square error of means (MSEm) and variances (MSEv) in the $m$th simulation: \begin{eqnarray*} \text{MSEm}_{m} &=& \frac{1}{T+1} \sum^{T}_{t = 0} \big( \widehat{\mathbb{E}}^{m}[X_{t}| Y_{0:T}] - \mathbb{E}[X_{t}| Y_{0:T}]\big)^{2}, \\ \text{MSEv}_{m} &=& \frac{1}{T+1} \sum^{T}_{t = 0} \big( \widehat{\text{Var}}^{m}[X_{t}| Y_{0:T}] - \text{Var}[X_{t}| Y_{0:T}]\big)^{2}, \end{eqnarray*} where $\widehat{\mathbb{E}}^{m}[X_{t}| Y_{0:T}]$ and $\widehat{\text{Var}}^{m}[X_{t}| Y_{0:T}]$ are the Monte Carlo estimates of the mean and variance of the smoothing distribution at time step $t$ in the $m$th simulation, and $\mathbb{E}[X_{t}| Y_{0:T}]$ and $\text{Var}[X_{t}| Y_{0:T}]$ are the true smoothing means and variances from a Rauch--Tung--Striebel smoother \citep{rauch1965maximum}. The simulation results are shown in Table \ref{tablelinear}. When $N = n$, the two tree-based sampling algorithms, TPS-L and TPS-N, enjoy the same complexity $O(N)$ as BPF, and can generate far more particles than FFBSm and FFBSi, which have quadratic complexities. TPS-L has the smallest mean of MSEm and MSEv, followed by TPS-N; both outperform FFBSm and FFBSi significantly in terms of MSEm. \begin{table}[ht] \centering \caption{Simulation errors in the linear model} \label{tablelinear} \begin{tabular}{rrrrr} \hline & $N$ & $n$ & Mean of MSEm (s.e.) & Mean of MSEv (s.e.) \\ \hline BPF & 44000 & NA & 0.0020 (0.0000147) & 0.0019 (0.000013) \\ FFBSm & 410 & 410 & 0.0065 (0.0000550) & 0.0047 (0.000037) \\ FFBSi & 450 & 450 & 0.0059 (0.0000563) & 0.0044 (0.000031) \\ TPS-N & 10000 & 10000 & 0.0014 (0.0000096) & 0.0018 (0.000014) \\ TPS-L & 13000 & NA & 0.0008 (0.0000061) & 0.0007 (0.000005) \\ \hline \end{tabular} \end{table} \subsection{Non-linear Model} \label{non-linear_simulation} We consider a well-known non-linear model \citep{gordon1993novel, andrieu2010particle}: \begin{alignat*}{2} X_{t} &= \frac{1}{2} X_{t-1} + 25 \frac{X_{t-1}}{1 + X_{t-1}^2} + 8 \cos (1.2 t) + V_{t},~~&&t = 1,2, \ldots, T,\\ Y_{t} &= \frac{X^{2}_{t}}{20} + W_{t},~~&&t = 0,1, \ldots, T, \end{alignat*} where $T = 511$ and $X_0, V_1,\ldots,V_T, W_0,\ldots,W_T$ are independent with $X_{0} \sim \mathcal{N}(0, 1)$, $V_{t} \sim \mathcal{N}(0, \tau^2)$ and $W_{t} \sim \mathcal{N}(0,\sigma^2).$ We run the same algorithms BPF, FFBSm, FFBSi and TPS-L as in Section \ref{linear_simulation}. In TPS-EF, we use piecewise constant functions defined in Equation \eqref{interpolate} for the approximation of the initial sampling distributions. We call the algorithm TPS-EFP and set $N = n = 10000$ as a benchmark. As before, we correspondingly adjust the sample sizes in the other algorithms to achieve roughly the same computational effort. We calculate the mean and standard deviation of the MSE of means (MSEm) in $M = 500$ simulations with the same set of observations. Given no closed-form solutions to the true smoothing distributions, we use a discrete analogue of the distribution of the initial hidden state $p_{0}(x_{0})$ and of the transition distributions $\{p(x_{t+1}|x_{t})\}_{t = 0, \ldots, T-1}$. We then approximate the smoothing distributions of the original HMM using the solutions of the discrete-space HMM.
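To fix ideas about the baseline, a minimal Python sketch of the bootstrap particle filter for this model follows, storing and resampling the entire particle histories as described for BPF above; the multinomial resampling at every step and the variable names are illustrative choices rather than a prescription (our experiments use \texttt{R}).
\begin{verbatim}
import numpy as np

def bootstrap_pf(y, N, tau=1.0, sigma=1.0, seed=0):
    """Bootstrap particle filter for the non-linear model; full particle
    histories are resampled at every step, so the rows of the output
    approximate draws from p(x_{0:T} | y_{0:T}) (subject to path degeneracy)."""
    rng = np.random.default_rng(seed)
    T = len(y) - 1
    paths = rng.normal(0.0, 1.0, size=(N, 1))             # x_0 ~ N(0, 1)
    for t in range(T + 1):
        if t > 0:
            xp = paths[:, -1]
            mean = 0.5 * xp + 25.0 * xp / (1.0 + xp ** 2) + 8.0 * np.cos(1.2 * t)
            paths = np.column_stack([paths, mean + rng.normal(0.0, tau, size=N)])
        logw = -0.5 * ((y[t] - paths[:, -1] ** 2 / 20.0) / sigma) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        paths = paths[rng.choice(N, size=N, p=w)]          # resample whole histories
    return paths
\end{verbatim}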
The MSEm of the $m$th simulation in the non-linear model is defined as: $$ \text{MSEm}_{m} = \frac{1}{T+1} \sum^{T}_{t = 0} \big(\widehat{\mathbb{E}}^{m}[X_{t}| Y_{0:T}] - \mathbb{E} [\hat{X}_{t} | y_{0:T}] \big)^{2}, $$ where $\mathbb{E} [\hat{X}_{t} | y_{0:T}]$ is the mean of the smoothing distribution at time step $t$ of the discrete-space HMM. We additionally perform the Kolmogorov--Smirnov (KS) test \citep{massey1951kolmogorov}, which measures a distance between an empirical distribution and a target probability distribution. In the context of the smoothing problem in a non-linear hidden Markov model, the Kolmogorov--Smirnov statistic can be defined as $$ D = \sup _{x} | F^{(t)}_{1,N}(x) - F^{(t)}_{2}(x) |, $$ where $F^{(t)}_{1,N}$ is the empirical cumulative distribution function generated by $N$ samples at time step $t$ from a smoothing algorithm and $F^{(t)}_{2}$ is the cumulative distribution function at time step $t$ of the smoothing distribution from a discrete-space HMM derived from the true model. We denote by KS$_{m}$ the sum of the KS statistics over all time steps in the $m$th simulation. \begin{table}[tbp] \centering \caption{Simulation errors in the non-linear model} \label{tablenon-linear} \begin{tabular}{rrrrrr} \hline & Parameter Values & $N$ & $n$ & Mean of MSEm (s.e.) & Mean of KS \\ \hline \multicolumn{1}{c|}{BPF} & \multicolumn{1}{c|}{\multirow{4}{*}{$\tau = 1, \sigma = 1$}} & 40000 & NA & 0.0239 (0.00085) & 80.04 \\ \multicolumn{1}{c|}{FFBSm} & \multicolumn{1}{c|}{} & 315 & 315 & 0.0944 (0.01657) & 77.20 \\ \multicolumn{1}{c|}{FFBSi} & \multicolumn{1}{c|}{} & 320 & 320 & 0.1399 (0.02291) & 76.65 \\ \multicolumn{1}{c|}{TPS-EFP} & \multicolumn{1}{c|}{} & 10000 & 10000 & 0.0050 (0.00007) & 34.51 \\ \multicolumn{1}{c|}{TPS-L} & \multicolumn{1}{c|}{} & 13000 & NA & 0.3020 (0.00042) & 109.13 \\ \cline{2-2} \multicolumn{1}{c|}{BPF} & \multicolumn{1}{c|}{\multirow{4}{*}{$\tau = 1, \sigma = 5$}} & 40000 & NA & 0.2096 (0.03064) & 55.10 \\ \multicolumn{1}{c|}{FFBSm} & \multicolumn{1}{c|}{} & 315 & 315 & 0.6785 (0.02850) & 67.71 \\ \multicolumn{1}{c|}{FFBSi} & \multicolumn{1}{c|}{} & 320 & 320 & 0.6071 (0.04981) & 66.12 \\ \multicolumn{1}{c|}{TPS-EFP} & \multicolumn{1}{c|}{} & 10000 & 10000 & 0.3998 (0.01174) & 47.36 \\ \multicolumn{1}{c|}{TPS-L} & \multicolumn{1}{c|}{} & 13000 & NA & 14.4847 (0.01790) & 261.34 \\ \cline{2-2} \multicolumn{1}{c|}{BPF} & \multicolumn{1}{c|}{\multirow{4}{*}{$\tau = 5, \sigma = 1$}} & 40000 & NA & 1.2182 (0.05684) & 119.33 \\ \multicolumn{1}{c|}{FFBSm} & \multicolumn{1}{c|}{} & 315 & 315 & 3.4342 (0.22357) & 94.57 \\ \multicolumn{1}{c|}{FFBSi} & \multicolumn{1}{c|}{} & 320 & 320 & 3.2161 (0.20196) & 93.60 \\ \multicolumn{1}{c|}{TPS-EFP} & \multicolumn{1}{c|}{} & 10000 & 10000 & 0.1034 (0.00544) & 28.19 \\ \multicolumn{1}{c|}{TPS-L} & \multicolumn{1}{c|}{} & 13000 & NA & 0.4599 (0.00149) & 67.69 \\\cline{2-2} \hline \end{tabular} \end{table} The simulation results with different values of $\tau$ and $\sigma$ are shown in Table \ref{tablenon-linear}. In the first two situations, TPS-L shows the largest error and KS statistic, especially when $\tau = 1$ and $\sigma = 5$. This can be explained by the poor proposals from the initial sampling distributions constructed by the algorithm. We examine this by plotting the cumulative distribution function (CDF) of the initial sampling distribution $f_{j}$ in TPS-L, the filtering distribution $p(x_{j}| y_{0:j})$ and the marginal smoothing distribution $p(x_{j}| y_{0:T})$ at a particular time step $j = 271$.
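Before discussing the results, we sketch how this statistic can be evaluated against the grid-based reference. The Python fragment below is again illustrative only (the function names are ours) and assumes weighted or equally weighted samples from a smoothing algorithm together with a reference distribution supported on the grid.
\begin{verbatim}
import numpy as np

def weighted_ecdf(points, weights):
    """Return a vectorised function x -> F(x) for a weighted empirical CDF."""
    order = np.argsort(points)
    s, w = points[order], weights[order] / weights.sum()
    cw = np.cumsum(w)
    def F(x):
        idx = np.searchsorted(s, x, side="right")
        return np.where(idx > 0, cw[np.clip(idx - 1, 0, len(cw) - 1)], 0.0)
    return F

def ks_statistic(samples, weights, grid, grid_probs):
    """D = sup_x |F1(x) - F2(x)| for two step-function CDFs; the supremum is
    attained at one of the jump points, so it suffices to check those."""
    F1 = weighted_ecdf(samples, weights)
    F2 = weighted_ecdf(grid, grid_probs)
    pts = np.concatenate([samples, grid])
    return float(np.max(np.abs(F1(pts) - F2(pts))))

# Example at one time step t, with unweighted samples against the reference:
# D_t = ks_statistic(samples_t, np.ones(len(samples_t)), grid, probs[t])
\end{verbatim}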
In Figure \ref{fig:ecdf272}, the CDF of the initial sampling distribution is far more dissimilar to that of the marginal smoothing distribution than to that of the filtering distribution, which leads to very ineffective importance sampling steps during the build-up of the tree. \begin{figure} \caption{CDF of the smoothing, filtering and initial sampling distribution at time step $j = 271$ of TPS-L in the non-linear model when $\tau = 1, \sigma = 5$.} \label{fig:ecdf272} \end{figure} The other algorithms show different behaviour across the three parameter settings. When $\tau = 1, \sigma = 1$, TPS-EFP shows a much smaller MSEm and KS than the other algorithms, followed by BPF in terms of MSEm; among the remaining algorithms, however, BPF has the largest mean of KS. When $\tau = 1, \sigma = 5$, TPS-EFP has a larger mean of MSEm than BPF. In terms of the KS statistic, TPS-EFP outperforms the other smoothing algorithms. When $\tau = 5, \sigma = 1$, TPS-EFP and TPS-L produce dominant results with vastly smaller MSEm. They also exhibit the smallest means of KS among the smoothing algorithms, whereas BPF gives the largest value despite generating the most samples. To conclude, TPS-EFP and TPS-L perform well when the ratio between the standard deviations of the transition and emission densities, i.e.\ $\tau / \sigma$, is large. TPS-EFP has a more stable and appreciable performance, providing a low MSEm and consistently the smallest KS among the five smoothing algorithms. In contrast, the result of TPS-L may be misleading due to its instability. BPF works well regarding MSEm in some situations, but poorly in terms of the Kolmogorov--Smirnov statistic. FFBSm and FFBSi produce less accurate results due to their higher computational complexity. \subsection{Comparing TPS-EF and TPS-ES in the non-linear model} In this section, we conduct simulations in the same non-linear model using the tree-based particle smoothing algorithm with estimated filtering (TPS-EF) and estimated smoothing (TPS-ES) distributions as the intermediate target distributions. As TPS-ES is not a good competitor given the relatively small sample sizes in Section \ref{non-linear_simulation}, we compare its performance with TPS-EF under a larger computational budget. We now describe the implementations of the two algorithms. We apply the same smoothing algorithm TPS-EFP as described in Section \ref{non-linear_simulation}, which utilises piecewise constant functions to estimate the filtering distributions. As TPS-ES requires estimated smoothing distributions as the initial sampling distributions, we obtain them by fitting piecewise constant functions to the samples from an initial run of TPS-EFP, and we thus call the algorithm TPS-ESP. We now specify the parameters in the simulations of TPS-EFP and TPS-ESP. We denote the sample size by $N$ and set the parameters $\alpha^{s} = \alpha^{f} = 0.95$ appearing in Equations \eqref{filter_combine_smoother} and \eqref{smoother_combine_filter}. In TPS-EFP and TPS-ESP, the estimated filtering distributions are both constructed from $n$ samples from the particle filters. Additionally, in TPS-ESP, the estimated smoothing distributions are constructed from TPS-EFP with $n'$ samples. We run TPS-ESP in two different situations: the first one has the same sample size $N$ as TPS-EFP and requires more computational effort to estimate the initial sampling distributions based on $n'$ Monte Carlo samples. The second one has roughly the same computational effort as TPS-EFP and therefore generates fewer Monte Carlo samples both for the estimation of the initial sampling distributions and as target samples.
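To make the construction of these piecewise constant estimates concrete, the sketch below shows one simplified variant (a uniform-bin histogram rather than the sample-quantile grid described earlier, with hypothetical function names of our own) of the mixture in Equation \eqref{smoother_combine_filter} used to form the initial sampling densities of TPS-ESP.
\begin{verbatim}
import numpy as np

def piecewise_constant_density(samples, n_bins=50):
    """Histogram-type estimate: bin midpoints x_i, common width Delta and
    normalised heights d_i, i.e. a piecewise constant density."""
    heights, edges = np.histogram(samples, bins=n_bins, density=True)
    midpoints = 0.5 * (edges[:-1] + edges[1:])
    return midpoints, edges[1] - edges[0], heights

def mixture_density(x, smoother_samples, filter_samples, alpha=0.95):
    """Mixture of the smoother-based and filter-based piecewise constant
    estimates with weights alpha and 1 - alpha."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    def evaluate(samples):
        mid, delta, d = piecewise_constant_density(samples)
        idx = np.floor((x - (mid[0] - delta / 2)) / delta).astype(int)
        inside = (idx >= 0) & (idx < len(d))
        out = np.zeros_like(x)
        out[inside] = d[idx[inside]]
        return out
    return alpha * evaluate(smoother_samples) + (1 - alpha) * evaluate(filter_samples)
\end{verbatim}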
We compare TPS-EFP and TPS-ESP with respect to the mean square error and Kolmogorov--Smirnov statistic defined in Section \ref{non-linear_simulation}. We run TPS-EFP and TPS-ESP $M = 200$ times with different values of $\tau$ and $\sigma$; the results are shown in Table \ref{TPS_compare}. TPS-ESP shows an evident improvement in the Kolmogorov--Smirnov (KS) statistic in most situations, whereas the comparisons in terms of MSEm vary. When generating the same number of samples as TPS-EFP, TPS-ESP reduces the MSEm in two of the three parameter settings. However, TPS-ESP does not provide convincing results under roughly the same computational effort. Overall, the performance of TPS-ESP depends on the computational budget. Given the same sample size as TPS-EFP, TPS-ESP can potentially decrease both the MSEm and the KS statistic. This may not be true when the algorithm is restricted to the same overall effort as TPS-EFP. \begin{table}[tbp] \centering \caption{Simulation errors between TPS-EF and TPS-ES in the non-linear model} \label{TPS_compare} \begin{tabular}{rrrrrrr} \hline & Parameter Values & $N$ & $n$ & $n'$ & Mean of MSEm (s.e.) & Mean of KS \\ \hline \multicolumn{1}{c|}{TPS-EFP} & \multicolumn{1}{c|}{\multirow{3}{*}{$\tau = 1,\sigma = 1$}} & 50000 & 50000 & NA & 0.00123 (0.00032) & 17.56 \\ \multicolumn{1}{c|}{TPS-ESP} & \multicolumn{1}{c|}{} & 50000 & 50000 & 50000 & 0.00051 (0.00007) & 11.91 \\ \multicolumn{1}{c|}{TPS-ESP} & \multicolumn{1}{c|}{} & 18000 & 50000 & 25000 & 0.00169 (0.00037) & 15.17 \\ \cline{2-2} \multicolumn{1}{c|}{TPS-EFP} & \multicolumn{1}{c|}{\multirow{3}{*}{$\tau = 1,\sigma = 5$}} & 50000 & 50000 & NA & 0.09136 (0.02758) & 24.27 \\ \multicolumn{1}{c|}{TPS-ESP} & \multicolumn{1}{c|}{} & 50000 & 50000 & 50000 & 0.10297 (0.01128) & 19.51 \\ \multicolumn{1}{c|}{TPS-ESP} & \multicolumn{1}{c|}{} & 18000 & 50000 & 25000 & 0.19861 (0.01954) & 26.56\\ \cline{2-2} \multicolumn{1}{c|}{TPS-EFP} & \multicolumn{1}{c|}{\multirow{3}{*}{$\tau = 5,\sigma = 1$}} & 50000 & 50000 & NA & 0.01420 (0.01193) & 14.63 \\ \multicolumn{1}{c|}{TPS-ESP} & \multicolumn{1}{c|}{} & 50000 & 50000 & 50000 & 0.01261 (0.00269) & 11.87 \\ \multicolumn{1}{c|}{TPS-ESP} & \multicolumn{1}{c|}{} & 18000 & 50000 & 25000 & 0.02599 (0.00509) & 14.80 \\ \cline{2-2} \hline \end{tabular} \end{table} \section{Conclusion} \label{conclusion} This article introduces a Monte Carlo sampling method, which we call TPS, built on D\&C SMC \citep{lindsten2017divide} to estimate the joint smoothing distribution $p(x_{0:T}|y_{0:T})$ in a hidden Markov model. The method decomposes the model into sub-models with intermediate target distributions using a binary tree structure. TPS samples independently at the leaves of the tree and gradually merges and resamples the particles to target the new distributions defined on the auxiliary tree. We propose one generic way of constructing a binary tree which sequentially splits the joint random variables $X_{0:T}$. Furthermore, we discuss the sampling procedure for the target samples at a non-leaf node, which combines the samples from its children using importance sampling. The computational effort is adjustable and can be reduced to linear in the sample size. Using the above settings, we investigate three algorithms with different types of intermediate target distributions at the non-root nodes. TPS-L \citep{lindsten2017divide} constructs intermediate target distributions conditional on the observations from the same time interval as the target variables and imposes an uninformative prior.
TPS-L is very simple to implement and requires no additional tuning algorithms. The algorithm, however, risks providing very poor initial sampling distributions, since they are based on little information from the observations. TPS-EF employs intermediate target distributions that estimate the (joint) filtering distributions, which condition on the observations up to the last time step of the target variables. It is straightforward to implement given an initial run of a filtering algorithm. Nevertheless, the proposal in the importance sampling step may still not be satisfactory when the marginal filtering and smoothing distributions are vastly different. TPS-ES builds intermediate target distributions that estimate the (joint) smoothing distributions, which condition on all the observations. It roughly retains the marginal smoothing distributions in the intermediate target distributions at all levels of the auxiliary tree, at the cost of more intensive computations. We further propose constructions of the estimated filtering and smoothing distributions based on Monte Carlo samples. Considering both accuracy and computational effort, we recommend parametric approaches such as normal assumptions in a linear Gaussian model and non-parametric approaches such as piecewise constant functions in a non-linear model. In the simulation studies, TPS-L has the smallest error in the linear model, but very unstable results in the different settings of the non-linear model. TPS-EF exhibits more desirable simulation outcomes. It is computationally less expensive than most smoothing algorithms with quadratic complexity. It also produces among the smallest mean square errors in the linear Gaussian model and consistently the smallest average Kolmogorov--Smirnov statistic in the different situations under the non-linear model. In particular, it outperforms other algorithms substantially when the variance of the transition density is much larger than that of the emission density. TPS-ES, in turn, provides a better approximation of the smoothing distribution with respect to the Kolmogorov--Smirnov statistic than TPS-EF, at the cost of an additional run of a smoothing algorithm. To conclude, TPS with the two proposed choices of the intermediate target distributions presents a new approach to the smoothing problem with the following advantages: we have flexibility in choosing and constructing the intermediate target distributions, which can potentially produce better proposals in the importance sampling steps, and TPS can escape the quadratic complexity with respect to the sample size, producing more particles and more accurate simulation results than some smoothing algorithms. Nevertheless, its performance depends on the implementation of other filtering or smoothing algorithms and the estimation of the target distributions. Due to its flexible and relatively fast implementation and its stable and competitive simulation results, we regard it as a serious competitor to other smoothing algorithms.
\appendix \section{Proof of Theorem \ref{thm:KL_dst}} By Jensen's inequality applied to the convex function $-\log$, \begin{eqnarray*} && \int_{\mathbb{R}^{n_{1}}} f_{1}(\mathbf{x_{1}}) \log \big( f_{1}(\mathbf{x_{1}}) \big) \mathrm{d} \mathbf{x_{1}} - \int_{\mathbb{R}^{n_{1}}} f_{1}(\mathbf{x_{1}}) \log \big( h_{1}(\mathbf{x_{1}}) \big) \mathrm{d}\mathbf{x_{1}}\\ &=& \int_{\mathbb{R}^{n_{1}}} f_{1} (\mathbf{x_{1}}) \log \bigg( \frac{f_{1}(\mathbf{x_{1}})} {h_{1} (\mathbf{x_{1}})} \bigg) \mathrm{d} \mathbf{x_{1}} = \mathbb{E} \bigg[ \log \bigg( \frac{f_{1}(\mathbf{X_{1}})}{ h_{1}(\mathbf{X_{1}}) } \bigg) \bigg] = \mathbb{E} \bigg[ - \log \bigg( \frac{h_{1}(\mathbf{X_{1}})}{ f_{1}(\mathbf{X_{1}}) } \bigg) \bigg] \\ &\geq& - \log \bigg\{ \mathbb{E} \bigg[ \frac{h_{1}(\mathbf{X_{1}})}{f_{1}(\mathbf{X_{1}})} \bigg] \bigg\} = 0, \end{eqnarray*} where $\mathbf{X_{1}}$ has density $f_{1}$ and the last equality holds because $h_{1}$ is a probability density, so that $\mathbb{E} [ h_{1}(\mathbf{X_{1}})/f_{1}(\mathbf{X_{1}}) ] = \int_{\mathbb{R}^{n_{1}}} h_{1}(\mathbf{x_{1}}) \mathrm{d}\mathbf{x_{1}} = 1$. Using this and the definition of the marginal distribution, \begin{eqnarray} \label{eq:entropy1} && \int_{\mathbb{R}^{n_{2}}} \int_{\mathbb{R}^{n_1}} f(\mathbf{x_{1}}, \mathbf{x_{2}}) \log \big( f_{1}(\mathbf{x_{1}}) \big) \mathrm{d} \mathbf{x_{1}} \mathrm{d} \mathbf{x_{2}} = \int_{\mathbb{R}^{n_{1}}} f_{1}(\mathbf{x_{1}}) \log \big( f_{1}(\mathbf{x_{1}}) \big) \mathrm{d} \mathbf{x_{1}} \nonumber \\ &\geq& \int_{\mathbb{R}^{n_{1}}} f_{1}(\mathbf{x_{1}}) \log \big( h_{1}(\mathbf{x_{1}}) \big) \mathrm{d} \mathbf{x_{1}} = \int_{\mathbb{R}^{n_{2}}} \int_{\mathbb{R}^{n_1}} f(\mathbf{x_{1}}, \mathbf{x_{2}}) \log \big( h_{1}(\mathbf{x_{1}}) \big) \mathrm{d} \mathbf{x_{1}}\mathrm{d} \mathbf{x_{2}}. \end{eqnarray} Similarly, \begin{eqnarray} \label{eq:entropy2} \int_{\mathbb{R}^{n_{2}}} \int_{\mathbb{R}^{n_{1}}} f(\mathbf{x_{1}}, \mathbf{x_{2}}) \log \big( f_{2}(\mathbf{x_{2}}) \big) \mathrm{d} \mathbf{x_{1}} \mathrm{d} \mathbf{x_{2}} \geq \int_{\mathbb{R}^{n_{2}}} \int_{\mathbb{R}^{n_{1}}} f(\mathbf{x_{1}}, \mathbf{x_{2}}) \log \big( h_{2}(\mathbf{x_{2}}) \big) \mathrm{d} \mathbf{x_{1}}\mathrm{d} \mathbf{x_{2}}. \end{eqnarray} Multiplying \eqref{eq:entropy1} and \eqref{eq:entropy2} by $-1$ and adding them, we have \begin{eqnarray*} \int_{\mathbb{R}^{n_{2}}} \int_{\mathbb{R}^{n_{1}}} f( \mathbf{x_{1}}, \mathbf{x_{2}}) \log \bigg( \frac{1}{f_{1}(\mathbf{x_{1}}) f_{2}(\mathbf{x_{2}})} \bigg) \mathrm{d} \mathbf{x_{1}} \mathrm{d}\mathbf{x_{2}} \leq \int_{\mathbb{R}^{n_{2}}} \int_{\mathbb{R}^{n_{1}}} f( \mathbf{x_{1}}, \mathbf{x_{2}}) \log \bigg(\frac{1}{h_{1}(\mathbf{x_{1}}) h_{2}(\mathbf{x_{2}})} \bigg) \mathrm{d} \mathbf{x_{1}} \mathrm{d} \mathbf{x_{2}}. \end{eqnarray*} Adding $ \displaystyle \int_{\mathbb{R}^{n_{2}}} \int_{\mathbb{R}^{n_{1}}} f( \mathbf{x_{1}}, \mathbf{x_{2}}) \log \big( f(\mathbf{x_{1}}, \mathbf{x_{2}}) \big) \mathrm{d} \mathbf{x_{1}} \mathrm{d} \mathbf{x_{2}} $ to both sides yields the result. \end{document}
\begin{document} \pagenumbering{arabic} \title{\huge \bf Self-adjoint sub-classes of third and fourth-order evolution equations} \author{\rm \large Igor Leite Freire \\ \\ \it Centro de Matemática, Computação e Cognição\\ \it Universidade Federal do ABC - UFABC\\ \it Rua Catequese, $242$, Jardim, $09090-400$\\\it Santo André, SP - Brasil\\ \rm E-mail: [email protected]\\} \date{\ } \maketitle \begin{abstract} In this work a class of self-adjoint quasilinear third-order evolution equations is determined. Some conservation laws for them are established and a generalization to a self-adjoint class of fourth-order evolution equations is presented. \end{abstract} \vskip 1cm \begin{center} {2000 AMS Mathematics Classification numbers: \\ 76M60, 58J70, 35A30, 70G65 \\ Key words: Evolution equations, self-adjoint equation, conservation laws for evolution equations} \end{center} \pagenumbering{arabic} \section{Introduction} In this paper we consider the problem of the self-adjointness condition for equation \begin{equation}\label{1.1.1} u_{t}=r(u)u_{xxx}+p(u)u_{xx}+q(u)u_{x}^{2}+a(u)u_{x}+b(u), \end{equation} where $r,\,p,\,q,\,a,\,b:\mathbb{R}\rightarrow\mathbb{R}$ are arbitrary smooth functions. Equation (\ref{1.1.1}) includes important evolution equations employed in mathematical physics and in mathematical biology, for instance, the inviscid Burgers equation, the Burgers equation, the potential Burgers equation, the Fisher equation, the Korteweg--de Vries (KdV) equation, the Gardner equation and so on, see \cite{cher, gun, igor1}. It can be used to describe shallow water waves, collisionless-plasma magnetohydrodynamics waves and ion acoustic waves, among other physical or biological phenomena, see also \cite{ott, zabu}. Similar work has been performed by Bruzón, Gandarias and Ibragimov \cite{ib1} regarding equation \begin{equation}\label{1.1.3} u_{t}+f(u)u_{xxxx}+g(u)u_{x}u_{xxx}+h(u)u_{xx}^{2}+d(u)u_{x}^{2}u_{xx}-p(u)u_{xx}-q(u)u_{x}^{2}=0. \end{equation} However, in (\ref{1.1.3}) source terms and nonlinearities of the type $a(u)u_{x}$ and $r(u)u_{xxx}$ were not taken into account. So we shall complement the results previously obtained by them by including these terms. Ibragimov \cite{ib2} has recently established a new conservation theorem for equations without Lagrangians. If (\ref{1.1.1}) is self-adjoint it is possible to construct conservation laws $D_{t}C^{0}+D_{x}C^{1}=0$ for it, where the components $C^{0}$ and $C^{1}$ depend on $t,x,u$ and the derivatives of $u$. The purpose of this paper is to determine the self-adjoint equations of type (\ref{1.1.1}) and, by using the recent result \cite{ib2}, to establish some nontrivial conservation laws for some of these equations. The results on the self-adjointness condition for equation (\ref{1.1.3}) obtained in \cite{ib1} are also generalized by including dispersive, convective and source terms. The paper is organized as follows: in Section \ref{review} we revisit some results regarding Lie point symmetries and conservation laws for differential equations. Section \ref{self} is devoted to finding the self-adjoint equations of type (\ref{1.1.1}). We comment on some results presented in \cite{ib1} in Section \ref{comment}. \section{Preliminaries}\label{review} This section contains a brief discussion on the space of differential functions ${\cal A}$, Lie-Bäcklund operators, self-adjoint equations and conservation laws for differential equations. For more details, see \cite{ib0, ib1, ib2, ib3}. In the following the summation over repeated indices is understood.
Let $x=(x^{1},\cdots,x^{n})$ be $n$ independent variables and $u=(u^{1},\cdots,u^{m})$ be $m$ dependent variables with partial derivatives $u^{\alpha}_{i}=\frac{\partial u^{\alpha}}{\partial x^{i}},\,u^{\alpha}_{ij}=\frac{\partial^{2}u^{\alpha}}{\partial x^{i}\partial x^{j}}$, etc. The total differentiation operators are given by $$ D_{i}=\frac{\partial}{ \partial x^{i}}+u^{\alpha}_{i}\frac{\partial}{\partial u^{\alpha}}+u^{\alpha}_{ij}\frac{\partial}{\partial u^{\alpha}_{j}}+\cdots,\,\,i=1,\cdots,n,\,\alpha=1,\cdots,m. $$ Observe that $u^{\alpha}_{i}=D_{i}u^{\alpha},\,u^{\alpha}_{ij}=D_{i}D_{j}u^{\alpha}$, etc. The variables $u^{\alpha}$, together with their successive derivatives $u^{\alpha}_{i_{1}\cdots i_{k}},\,k\in\mathbb{N}$, are known as the differential variables. \begin{definition} A locally analytic function of a finite number of the variables $x,\,u$ and the derivatives of $u$ is called a differential function. The highest order of derivatives appearing in the differential function is called the order of this function. The vector space of all differential functions of finite order is denoted by ${\cal A}$. \end{definition} {\bf Example}: Let us consider the differential function \begin{equation}\label{2.1.1} F=u_{t}-r(u)u_{xxx}-p(u)u_{xx}-q(u)u_{x}^{2}-a(u)u_{x}-b(u), \end{equation} where $p,\,q,\,r,\,a,\,b:\mathbb{R}\rightarrow\mathbb{R}$ are arbitrary smooth functions. Supposing $r(u)\neq0$, the order of $F$ is three. \begin{definition} A Lie-Bäcklund operator is a formal sum \begin{equation}\label{2.1.2} X=\xi^{i}\frac{\partial}{\partial x^{i}}+\eta^{\alpha}\frac{\partial}{\partial u^{\alpha}}+\eta^{\alpha}_{i}\frac{\partial}{\partial u^{\alpha}_{i}}+\eta^{\alpha}_{ij}\frac{\partial}{\partial u^{\alpha}_{ij}}+\cdots, \end{equation} where $\xi^{i},\,\eta^{\alpha}\in{\cal A},\, \eta^{\alpha}_{i}=D_{i}(\eta^{\alpha}-\xi^{j}u^{\alpha}_{j})+\xi^{j}u^{\alpha}_{ij},\,\eta^{\alpha}_{ij}=D_{i}D_{j}(\eta^{\alpha}-\xi^{k}u^{\alpha}_{k})+\xi^{k}u^{\alpha}_{kij}$, etc. The Lie-Bäcklund operator is often written as \begin{equation}\label{2.1.3} X=\xi^{i}\frac{\partial}{\partial x^{i}}+\eta^{\alpha}\frac{\partial}{\partial u^{\alpha}} \end{equation} understanding the prolonged form $(\ref{2.1.2})$. If $\xi^{i}=\xi^{i}(x,u)$ and $\eta^{\alpha}=\eta^{\alpha}(x,u)$ in $(\ref{2.1.3})$, then $X$ is a generator of a Lie point symmetry group. \end{definition} {\bf Example:} The field \begin{equation}\label{4.1.1'} X=t\frac{\partial}{\partial t}-u\frac{\partial }{\partial u} \end{equation} is a Lie point symmetry generator of the inviscid Burgers equation \begin{equation}\label{4.1.1} u_{t}=uu_{x}. \end{equation} The set of all Lie-Bäcklund operators endowed with the commutator $$ [X,Y]=(X(\zeta^{i})-Y(\xi^{i}))\frac{\partial}{\partial x^{i}}+(X(\omega^{\alpha})-Y(\eta^{\alpha}))\frac{\partial}{\partial u^{\alpha}}+\cdots, $$ where $X$ is given by $(\ref{2.1.3})$ and $$Y=\zeta^{i}\frac{\partial}{\partial x^{i}}+\omega^{\alpha}\frac{\partial}{\partial u^{\alpha}},$$ is an infinite-dimensional Lie algebra. \begin{definition} The Euler-Lagrange operator $\frac{\delta}{\delta u^{\alpha}}:{\cal A}\rightarrow{\cal A}$ is defined by the formal sum \begin{equation}\label{2.1.4} \frac{\delta}{\delta u^{\alpha}}=\frac{\partial}{\partial u^{\alpha}}+\sum_{j=1}^{\infty}(-1)^{j}D_{i_{1}}\cdots D_{i_{j}}\frac{\partial}{\partial u^{\alpha}_{i_{1}\cdots i_{j}}}. \end{equation} \end{definition} \begin{definition} Let $F_{\alpha}\in{\cal A}$.
We define the adjoint system of differential functions $F_{\alpha}^{\ast}$ to $F_{\alpha}$ by the expression $$ F^{\ast}_{\alpha}=\frac{\delta}{\delta u^{\alpha}}(v^{\beta}F_{\beta}), $$ where $v^{\beta}$ is a new dependent variable. We say that $F_{\alpha}^{\ast}$ is a self-adjoint system of differential functions to $F_{\alpha}$ if there exists $\phi\in{\cal A}$ such that $$ \left.F^{\ast}_{\alpha}\right|_{v=u}=\phi F_{\alpha}. $$ \end{definition} We observe that a system of differential equations can be viewed as $F_{\alpha}=0$, for some $F_{\alpha}\in{\cal A}$. \begin{definition} An adjoint system of differential equations $F^{\ast}_{\alpha}=0$ to a system of differential equations $F_{\alpha}=0$ is given by $$ F^{\ast}_{\alpha}=\frac{\delta}{\delta u^{\alpha}}(v^{\beta}F_{\beta})=0, $$ where $v^{\beta}$ is a new dependent variable. We say that $F^{\ast}_{\alpha}=0$ is a self-adjoint equation to $F_{\alpha}=0$ if \begin{equation}\label{ast} \left.F^{\ast}_{\alpha}\right|_{v=u}=\phi F_{\alpha}, \end{equation} for some differential function $\phi\in{\cal A}$. So $\left.F^{\ast}_{\alpha}\right|_{v=u}=0$ if and only if $F_{\alpha}=0$. \end{definition} {\bf Example}: The adjoint differential function $F^{\ast}$ to (\ref{2.1.1}) is $$F^{\ast}=\frac{\delta}{\delta u}(vF)=v[-r'u_{xxx}-p'u_{xx}-q'u_{x}^{2}-a'u_{x}-b']-D_{t}(v)-D_{x}[-v(2qu_{x}+a)]+D_{x}^{2}(-vp)-D_{x}^{3}(-vr).$$ Setting $v=u$ and after a tedious calculation we obtain \begin{equation}\label{adj} \begin{array}{lcl} F^{\ast}&=&-u_{t}-ub'(u)+a(u)u_{x}+[uq'(u)-2p'(u)-up''(u)+2q(u)]u_{x}^{2}+(3r''+ur''')u_{x}^{3}\\ \\ &&+[2uq(u)-2up'(u)-p(u)]u_{xx}+(6r'+3ur'')u_{x}u_{xx}+ru_{xxx}. \end{array}\end{equation} If an equation possesses variational structure, it is well known that the Noether Theorem can be employed in order to establish conservation laws for the respective equation, {\it e.g.}, see \cite{yi1, yi2, igor2, naz}. However, the Noether Theorem cannot be applied to evolution equations in order to obtain conservation laws, since this class of equations does not possess variational structure. Fortunately there are some other alternative methods to establish conservation laws for equations without Lagrangians, see \cite{naz}. One of them is a recent result \cite{ib2}, due to Ibragimov. Let \begin{equation}\label{2.1.7} X=\tau(t,x,u)\frac{\partial}{\partial t}+\xi(t,x,u)\frac{\partial}{\partial x}+\eta(t,x,u)\frac{\partial}{\partial u} \end{equation} be a Lie point symmetry generator of (\ref{1.1.1}) and \begin{equation}\label{2.1.8} {\cal L}=v F, \end{equation} where $F$ is given by $(\ref{2.1.1})$. From the new conservation theorem \cite{ib2}, the conservation law for the system given by equation $(\ref{2.1.1})$ and by its adjoint equation $F^{\ast}=0$, where $F^{\ast}$ is given by $(\ref{adj})$, is $Div(C)=D_{t}C^{0}+D_{x}C^{1}=0$, where \begin{equation}\label{2.1.10} \begin{array}{lcl} C^{0}&=&\displaystyle {\tau {\cal L}+W\,\frac{\partial {\cal L}}{\partial u_{t}}},\\ \\ C^{1}&=&\displaystyle {\xi {\cal L}+W\left[\frac{\partial {\cal L}}{\partial u_{x}}-D_{x}\frac{\partial {\cal L}}{\partial u_{xx}}+D_{x}^{2}\frac{\partial {\cal L}}{\partial u_{xxx}}\right]+D_{x}(W)\left[\frac{\partial {\cal L}}{\partial u_{xx}}-D_{x}\frac{\partial {\cal L}}{\partial u_{xxx}}\right]+D_{x}^{2}(W)\frac{\partial {\cal L}}{\partial u_{xxx}}} \end{array} \end{equation} and $W=\eta-\tau u_{t}-\xi u_{x}$.
In particular, whenever $(\ref{2.1.1})$ is self-adjoint, substituting $v=u$ into $(\ref{2.1.10})$, $C=(C^{0},C^{1})$ provides a conserved vector for $(\ref{2.1.1})$. \section{Self-adjoint equations of type $(\ref{1.1.1})$}\label{self} \subsection{The class of self-adjoint equations of type (\ref{1.1.1})} By applying the Euler-Lagrange operator (\ref{2.1.4}) to (\ref{2.1.8}), where $F$ is given by (\ref{2.1.1}), and equating the result to $0$, we obtain the adjoint equation to (\ref{1.1.1}), that is, $F^{\ast}=0$, where $F^{\ast}$ is given by (\ref{adj}). Supposing that $F$ is self-adjoint, equation (\ref{ast}) holds for some $\phi\in{\cal A}$. Comparing the coefficients of $u_{t}$, we obtain $\phi=-1$ and \begin{equation}\label{3.1.1} \begin{array}{lcl} 3r''+ur'''=0,\,\,\,\, 3ur''+6r'=0,\,\,\,\, ur'''+3r''=0,\\ \\ -up'-p=-uq,\,\,\,\, uq'-2p'-up''+q=0,\,\, \,\, ub'=-b. \end{array} \end{equation} Solving the system (\ref{3.1.1}), we obtain \begin{equation}\label{3.1.1'} r=a_{1}+\frac{a_{2}}{u}, \end{equation} \begin{equation}\label{3.1.2} q=\frac{(up)'}{u} \end{equation} and \begin{equation}\label{3.1.3} b=\frac{a_{3}}{u}, \end{equation} where $a_{1},\,a_{2}$ and $a_{3}$ are arbitrary constants. The following theorem is thus proved. \begin{theorem}\label{teo1} Equation $(\ref{1.1.1})$ is self-adjoint if and only if it has the form \begin{equation}\label{3.2.1} u_{t}=\left(a_{1}+\frac{a_{2}}{u}\right)u_{xxx}+p(u)u_{xx}+\frac{(up)'}{u}u_{x}^{2}+a(u)u_{x}+\frac{a_{3}}{u}, \end{equation} where $a_{1},\,a_{2}$ and $a_{3}$ are constants. \end{theorem} \subsection{Conservation laws for equations of type (\ref{3.2.1})}\label{examples} Here we shall illustrate Theorem \ref{teo1} by using it and the results due to Ibragimov in order to establish some conservation laws for self-adjoint equations of type (\ref{3.2.1}). \subsubsection{Inviscid Burgers equation} The vector field (\ref{4.1.1'}) is a Lie point symmetry of the inviscid Burgers equation (\ref{4.1.1}) (for more details, see \cite{igor1}). Since it is a self-adjoint equation, taking $v=u$ in (\ref{2.1.10}), the conservation law $D_{t}C^{0}+D_{x}C^{1}=0$ is obtained, where $$ C^{0}=-u^{2}-tu^{2}\,u_{x},\,\,\,\,C^{1}=u^{3}+tu^{2}\, u_{t}. $$ However, $$C^{0}=-u^{2}+D_{x}\left(-\frac{tu^{3}}{3}\right),\,\,\,\, C^{1}=u^{3}+tD_{t}\left(\frac{u^{3}}{3}\right)$$ and $$ \begin{array}{lcl} D_{t}C^{0}+D_{x}C^{1}&=&\displaystyle {-D_{x}\left(\frac{u^{3}}{3}\right)-tD_{t}D_{x}\left(\frac{u^{3}}{3}\right)-D_{t}(u^{2})+D_{x}(u^{3})+tD_{x}D_{t}\left(\frac{u^{3}}{3}\right)}\\ \\ &=&\displaystyle {D_{t}(-u^{2})+D_{x}\left(\frac{2}{3}u^{3}\right)}. \end{array} $$ Then $C=(-u^{2},\frac{2}{3}u^{3})$ provides a conserved vector for (\ref{4.1.1}). This conservation law was established in \cite{igor1}. \subsubsection{Singular second order evolution equation} A Lie point symmetry generator of the singular equation \begin{equation}\label{5.1.10} u_{t}=\frac{u_{xx}}{u} \end{equation} is given by \begin{equation}\label{5.1.11} X=t\frac{\partial}{\partial t}+u\frac{\partial}{\partial u}. \end{equation} From (\ref{2.1.10}) $$ \begin{array}{lcl} C^{0}&=&\displaystyle {vu-\frac{tvu_{xx}}{u}},\\ \\ C^{1}&=&\displaystyle {(u-tu_{t})\left(\frac{v_{x}}{u}-\frac{vu_{x}}{u^{2}}\right)-\frac{v}{u}D_{x}(u-tu_{t})}. \end{array} $$ Setting $v=u$ in $C^{0}$ and $C^{1}$ and after reckoning we obtain $C=(u^{2},-2u_{x})$ as a conserved vector for equation (\ref{5.1.10}).
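This conserved vector can be verified directly: on solutions of (\ref{5.1.10}) one has $$D_{t}(u^{2})+D_{x}(-2u_{x})=2uu_{t}-2u_{xx}=2u\left(u_{t}-\frac{u_{xx}}{u}\right)=0,$$ so that the conservation law indeed holds along the flow of the equation.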
Considering the Lie point symmetry generator $$Y=x\frac{\partial}{\partial x}+2t\frac{\partial}{\partial t},$$ the components of the conserved vector are \begin{equation}\label{5.1.1.x} \begin{array}{lcl} C^{0}&=&\displaystyle {-\frac{2tv}{u}u_{xx}-xvu_{x}},\\ \\ C^{1}&=&\displaystyle {xvu_{t}-\frac{xv}{u}u_{xx}-\frac{xv_{x}u_{x}}{u}+\frac{xvu_{x}^{2}}{u^{2}}-\frac{2tv_{x}u_{t}}{u}+\frac{2tvu_{t}u_{x}}{u^{2}}+\frac{v}{u}D_{x}(xu_{x}+2tu_{t})}. \end{array} \end{equation} After a tedious calculation and substituting $v=u$ into (\ref{5.1.1.x}) we obtain the vector $C=(u^{2}/2,-u_{x})$. Then $Y$ does not give a new conservation law for (\ref{5.1.10}). \subsubsection{The Korteweg--de Vries equation} Let us now consider the Korteweg--de Vries equation \begin{equation}\label{4.2.1} u_{t}=u_{xxx}+uu_{x}. \end{equation} It is clear that $$ X=t\frac{\partial}{\partial x}-\frac{\partial}{\partial u} $$ is a Lie point symmetry generator of (\ref{4.2.1}). Since (\ref{4.2.1}) is self-adjoint, from (\ref{2.1.10}) and setting $v=u$, we obtain $$ \begin{array}{lcl} C^{0}&=&\displaystyle {-u-tuu_{x}=-u+D_{x}\left(-t\frac{u^{2}}{2}\right)},\\ \\ C^{1}&=& \displaystyle {tuu_{t}+u_{xx}+u^{2}=tD_{t}\left(\frac{u^{2}}{2}\right)+u^{2}+u_{xx}}. \end{array} $$ Then $$D_{t}C^{0}+D_{x}C^{1}=D_{t}(-u)+D_{x}\left(\frac{u^{2}}{2}+u_{xx}\right),$$ and we conclude that $C=(-u,\frac{u^{2}}{2}+u_{xx})$ is a conserved vector for (\ref{4.2.1}). This example was presented in the seminal work \cite{ib2}. \subsubsection{Generalized Korteweg--de Vries equation} Let us consider the following generalization of the Korteweg--de Vries equation (\ref{4.2.1}), \begin{equation}\label{4.2.2} u_{t}=u_{xxx}+u^{\mu}u_{x}, \end{equation} where $\mu\neq 0$ is a constant. Here we shall present in more detail the conservation law for (\ref{4.2.2}) arising from the Lie point symmetry generator $$ X_{\mu}=\frac{2}{\mu}u\frac{\partial}{\partial u}-3t\frac{\partial}{\partial t}-x\frac{\partial}{\partial x}. $$ From (\ref{2.1.10}) we obtain \begin{equation}\label{4.2.4} \begin{array}{lcl} A^{0}&=&\displaystyle {v \left(3tu_{xxx}+3tu^{\mu}u_{x}+xu_{x}+\frac{2}{\mu}u\right)},\\ \\ A^{1}&=&\displaystyle {-v\left(\frac{2}{\mu}u^{\mu+1}+xu_{t}+3tu^{\mu}u_{t}+2\frac{\mu+1}{\mu}u_{xx}+3tu_{txx}\right)+v_{x}\left(\frac{2+\mu}{\mu}u_{x}+3tu_{tx}+xu_{xx}\right)}\\ \\ &&\displaystyle {-v_{xx}\left(\frac{2}{\mu}u+3tu_{t}+xu_{x}\right)}. \end{array} \end{equation} Setting $v=u$ in (\ref{4.2.4}) and after reckoning, we obtain $$D_{t}A^{0}+D_{x}A^{1}=D_{t}\left(\frac{4-\mu}{2\mu}u^{2}\right)+D_{x}\left[\frac{\mu-4}{\mu(\mu+2)}u^{\mu+2}+\frac{\mu-4}{\mu}uu_{xx}-\frac{\mu-4}{2\mu}u_{x}^{2}\right].$$ Then $C=(C^{0},C^{1})$ provides a conserved vector for the generalized Korteweg--de Vries equation (\ref{4.2.2}), where $$ C^{0}=u^{2},\,\,\,\,C^{1}=u_{x}^{2}-2uu_{xx}-\frac{2}{\mu+2}u^{\mu+2}. $$ In particular, whenever $\mu=1$, $C=(u^{2},u_{x}^{2}-2uu_{xx}-\frac{2}{3}u^{3})$ is another conserved vector for the KdV equation (\ref{4.2.1}), see \cite{ib2}.
Choosing $\mu=2$, we find that $C=(u^{2},u_{x}^{2}-2uu_{xx}-\frac{1}{2}u^{4})$ is a conserved vector for the modified KdV equation $$u_{t}=u_{xxx}+u^{2}u_{x}.$$ \section{Self-adjoint equations of fourth order}\label{comment} Concerning equation (\ref{1.1.3}), it is proved in \cite{ib1} that equation $(\ref{1.1.3})$ is self-adjoint if and only if \begin{equation}\label{5.1.1} g=h+\frac{1}{u}(uf)',\,\,\,\,\,\,d=\frac{c_{1}}{u}+\frac{1}{u}(uh)' \end{equation} and \begin{equation}\label{5.1.2} q=\frac{1}{u}[c_{2}+(up)'], \end{equation} where $f,\,h$ and $p$ are arbitrary functions of $u$ (see \cite{ib1}, Theorem 3.2, p. 310). From Theorem \ref{teo1} we conclude that equations (\ref{5.1.2}) and (\ref{3.1.2}) cannot be compatible whenever $c_{2}\neq 0$. In fact, the correct statement is \begin{theorem}\label{teo5.2} Equation $(\ref{1.1.3})$ is self-adjoint if and only if $g$ and $d$ are given by $(\ref{5.1.1})$ and $q$ is given by $(\ref{3.1.2})$, where $f,\,h$ and $p$ are arbitrary functions of $u$ and $c_{1}$ is an arbitrary constant. \end{theorem} \begin{proof} From the self-adjointness condition (\ref{ast}) we obtain the following system of equations \begin{equation}\label{5.1.3} (uf)'-ug+uh=0, \end{equation} \begin{equation}\label{5.1.4} (uf)''-(ug)'+(uh)'=0, \end{equation} \begin{equation}\label{5.1.5} 3(uf)'''-3(ug)''+(uh)''+2(ud)'=0, \end{equation} \begin{equation}\label{5.1.6} (up)'-uq=0, \end{equation} \begin{equation}\label{5.1.7} (up)''-(uq)'=0 \end{equation} and \begin{equation}\label{5.1.8} (uf)''''-(ug)'''+(ud)''=0. \end{equation} From (\ref{5.1.3}) and (\ref{5.1.5}) we obtain (\ref{5.1.1}). Equation (\ref{5.1.4}) is a consequence of (\ref{5.1.3}). Equation (\ref{5.1.8}) is a consequence of (\ref{5.1.3}) and (\ref{5.1.5}). From (\ref{5.1.7}) we obtain (\ref{5.1.2}). However, substituting (\ref{5.1.2}) into (\ref{5.1.6}) we conclude that $c_{2}=0$. Thus we obtain (\ref{3.1.2}). \end{proof} From theorems \ref{teo5.2} and \ref{teo1}, we have the following generalization of Theorem \ref{teo5.2} (and Theorem 3.2 of \cite{ib1}): \begin{theorem}\label{teo5.3} Equation \begin{equation}\label{5.1.9} \begin{array}{l} u_{t}+f(u)u_{xxxx}+g(u)u_{x}u_{xxx}-r(u)u_{xxx}+h(u)u_{xx}^{2}\\ \\ +d(u)u_{x}^{2}u_{xx}-p(u)u_{xx}-q(u)u_{x}^{2}-a(u)u_{x}+b(u)=0 \end{array} \end{equation} is self-adjoint if and only if $g$ and $d$ are given by $(\ref{5.1.1})$, and $r,\,q$ and $b$ are given by $(\ref{3.1.1'})$, $(\ref{3.1.2})$ and $(\ref{3.1.3})$, respectively, where $a_{1},\,a_{2},\,a_{3}$ and $c_{1}$ are arbitrary constants and $f,\,h$ and $p$ are arbitrary functions of $u$. \end{theorem} \section{Conclusion} In this paper the self-adjoint subclasses of equation (\ref{1.1.1}) were obtained. Thanks to the conservation theorem recently proposed by Ibragimov, some conservation laws for particular self-adjoint equations of type (\ref{3.2.1}) were established. Further examples can be found in \cite{ib1, ib2,igor1, ya}. A comment on a recently published result (see \cite{ib1}, Theorem 3.2) was given in Section \ref{comment}. In particular, the self-adjointness condition obtained by Bruzón, Gandarias and Ibragimov for equation (\ref{1.1.3}) was generalized to equation (\ref{5.1.9}). Equation (\ref{5.1.9}) covers a wider list of equations, for instance, all equations mentioned in the present paper, the thin film equation and so on, see \cite{qu, ib1}. The main results are summarized by Theorem \ref{teo1} and Theorem \ref{teo5.3}.
In particular, by using Theorem \ref{teo5.3} and the new conservation theorem presented in \cite{ib2}, conservation laws for the lubrication equation, the Korteweg--de Vries equation and the inviscid Burgers equation, among others, can be established, as pointed out in \cite{ib1, igor1, ya}. \end{document}
\begin{document} \begin{abstract} From the mid-1970s, Eberhard Kirchberg undertook a remarkably extensive study of $C^*$-algebra exactness whose applications spread out to many branches of analysis. In this review we focus on the case of groupoid $C^*$-algebras, for which the notion of exactness needs to be better understood. In particular some versions of exactness play an important role in the study of the weak containment problem (WCP), that is, whether the coincidence of the full and reduced groupoid $C^*$-algebras implies the amenability of the groupoid or not. \end{abstract} \title{Amenability, exactness and weak containment property for groupoids } \section*{Introduction} The study of $C^*$-algebra exactness was initiated by Kirchberg as early as the mid-1970s. In the short note \cite{Ki77Proc} he announced several results whose proofs were published along with many other major contributions in the 1990s \cite{Ki91, Ki93, Ki94}. An unexpected link was found at the end of the 1990s between the Novikov higher signatures conjecture for a discrete group $\Gamma$ and the exactness of its reduced $C^*$-algebra $C^*_{r}(\Gamma)$. This follows from the discovery by Higson and Roe \cite{HR} that a finitely generated group $\Gamma$ has Yu's property A \cite{Yu} if and only if $\Gamma$ admits an amenable action on a compact space $X$ (then we say that the group is amenable at infinity). Such a group satisfies the above-mentioned Novikov conjecture \cite{Yu, HR, H00} and moreover $C^*_{r}(\Gamma)$ is exact since it is a sub-$C^*$-algebra of the nuclear crossed product $C(X)\rtimes \Gamma$. At the same time extensive studies of amenable actions (and amenable groupoids) \cite{AD-R} and of reduced group $C^*$-algebra exactness \cite{KW99} had just appeared. This series of results was finally crowned by the proof of the fact that if $C_{r}^*(\Gamma)$ is exact then $\Gamma$ admits an amenable action on a compact space \cite{GK, Oza, AD02}. All this raised a renewed interest in various notions of exactness for locally compact groups. It turned out to be potentially interesting to extend these notions to the case of locally compact groupoids. A first attempt was presented in \cite{AD00}. A detailed presentation was made available in \cite{AD16}. Nowadays, some versions of groupoid exactness appear to be essential in the study of the weak containment problem (WCP). The purpose of this paper is to describe some of the history of this subject. It is organized as follows. In Section 1, after pointing out the fact, due to Kirchberg \cite{Ki93}, that many full group $C^*$-algebras are not exact, we focus on the exactness of reduced group $C^*$-algebras, compared with two other notions of exactness, namely KW-exactness, introduced by Kirchberg and Wassermann in \cite{KW99}, and amenability at infinity. The three notions are equivalent for discrete groups and we describe what is known in general for locally compact groups. In Section 2 we recall some facts about measured and locally compact groupoids and their operator algebras, and Section 3 provides a short summary about amenable groupoids. In Section 4 we introduce the notion of amenability at infinity for a locally compact groupoid. Here, compact spaces have to be replaced by locally compact spaces that are fibred over the space of units of the groupoid in such a way that the projection is a proper map.
When the groupoid ${\mathcal G}$ is \'etale, it has a universal fibrewise compactification $\beta_r{\mathcal G}$, called its Stone-\v Cech fibrewise compactification. Then, amenability at infinity of ${\mathcal G}$ is equivalent to the fact that the canonical action of ${\mathcal G}$ on $\beta_r{\mathcal G}$ is amenable, and can be expressed in terms of positive type kernels, exactly as for groups. In Section 5 we describe the relations between the various notions of exactness that are defined for \'etale groupoids. They are equivalent when we assume the inner amenability of the groupoid. Inner amenability is well understood in the group case, in particular all discrete groups are inner amenable in our sense, but this notion remains mysterious for groupoids. In this section we also introduce a weak notion of exactness for groupoids, that we call inner exactness. It is automatically fulfilled for transitive groupoids and in particular for all locally compact groups, but it has proven useful in other contexts. From 2014 many remarkable results have been obtained in the study of the (WCP) for groupoids, which crucially involve exactness. We review them in Section 6. Finally, in the last section we recap some open problems. \section{Exact group $C^*$-algebras} Unlike the functor $(\cdot)\otimes_{\hbox {max}} A$ (maximal tensor product with the $C^*$-algebra $A$), the minimal tensor product functor $(\cdot)\otimes A$ is not necessarily exact, that is, given a short exact sequence of $C^*$-algebras $0 \rightarrow I \rightarrow B \rightarrow B/I \rightarrow 0,$ the sequence $$0 \rightarrow I\otimes A \rightarrow B\otimes A \rightarrow (B/I) \otimes A\rightarrow 0$$ is not always exact in the middle. When the functor $(\cdot)\otimes A$ is exact, one says that the $C^*$-algebra $A$ is {\it exact}. This notion, so named in the pioneering paper \cite{Ki77Proc}, has been the subject of Kirchberg's major contributions from the end of the 1980s. As early as 1976, Simon Wassermann showed in \cite{Wass76} that the full $C^*$-algebra $C^*({\mathbb{F}}_2)$ of the free group ${\mathbb{F}}_2$ on two generators is not exact. In fact, when $\Gamma$ is a finitely generated residually finite group, the full group $C^*$-algebra $C^*(\Gamma)$ is exact if and only if the group $\Gamma$ is amenable\footnote{To the author's knowledge, there is so far no example of a non-amenable group $G$ such that $C^*(G)$ is exact.}(see \cite{Ki77Proc}, and \cite[Proposition 7.1]{Ki93} for a more general result). This is in sharp contrast with the behaviour of the reduced group $C^*$-algebras. For instance, let $G$ be a locally compact group having a closed amenable subgroup $P$ with $G/P$ compact and let $H$ be any closed subgroup of $G$. Then the full crossed product $C^*$-algebras $C(G/P)\rtimes H$ and $C_0(H\setminus G)\rtimes P$ are Morita equivalent \cite{Rie76} and $C_0(H\setminus G)\rtimes P$ is nuclear since $P$ is amenable. It follows that $C(G/P)\rtimes H$ is nuclear and that the reduced crossed product $C^*$-algebra $C(G/P)\rtimes_r H$ is nuclear too. But the reduced group $C^*$-algebra $C^*_{r}(H)$ embeds into $C(G/P)\rtimes_r H$ since $G/P$ is compact and therefore $C^*_{r}(H)$ is an exact $C^*$-algebra. This well-known argument applies for instance to closed subgroups of almost connected groups. 
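For instance, one may take $G = SL(2,\mathbb{R})$ and $P$ the closed subgroup of upper triangular matrices: $P$ is solvable, hence amenable, and $G/P$ is the real projective line, hence compact. Any closed subgroup $H$ of $SL(2,\mathbb{R})$, for example the free group $\mathbb{F}_2$ embedded as a discrete subgroup, therefore has an exact reduced $C^*$-algebra $C^*_{r}(H)$, even though, as recalled above, the full $C^*$-algebra $C^*({\mathbb{F}}_2)$ is not exact.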
A locally compact group is said to be $C^*$-{\em exact}\footnote{This differs from the terminology used in \cite{Ki77Proc} where a group was called $C^*$-exact if its full $C^*$-algebra was exact.} if its reduced group $C^*$-algebra is exact. Most familiar groups are known to be $C^*$-exact. The first examples of discrete groups that are not $C^*$-exact are Gromov monsters \cite{Gro}. Osajda has given other examples \cite{Osa}, and he even built residually finite groups that are not $C^*$-exact \cite{Osa18}. Clearly, an easy way to show that the reduced group $C^*$-algebra $C^*_{r}(G)$ of a locally compact group $G$ is exact is to exhibit a continuous action of $G$ on a compact space $X$ such that the reduced crossed product $C(X)\rtimes_r G$ is nuclear. When this property is fulfilled with $G$ a discrete group, it follows that the $G$-action on $X$ is (topologically) amenable \cite[Theorem 4.5]{AD87}, \cite[Theorem 5.8]{AD02}. This notion plays an important role in the study of reduced group $C^*$-algebra exactness. It will be recalled in the subsequent sections in the more general context of groupoids. It is called {\it amenability at infinity} \cite[Definition 5.2.1]{AD-R} (or boundary amenability). When a locally compact group $G$ has an amenable action on a compact space, it has a property stronger than $C^*$-exactness, which was introduced by Kirchberg and Wassermann \cite{KW99} and is now often called KW-exactness (see for instance \cite[Definition 5.1.9]{BO}). \begin{defn}\label{KWexact:group} A locally compact group $G$ is KW-{\it exact} if the functor $A \mapsto A\rtimes_r G$ is exact, that is, for every short exact sequence of $G$-$C^*$-algebras $0 \rightarrow I \rightarrow A \rightarrow A/I \rightarrow 0$, the sequence $$0 \rightarrow I \rtimes_r G \rightarrow A\rtimes_rG \rightarrow(A/I)\rtimes_r G \rightarrow 0$$ is also exact. \end{defn} The theorem below, which presents the currently known relations between the different definitions of exactness for a locally compact group, involves in particular the notion of {\em inner amenability}. Following \cite[page 84]{Pat88}, we say that a locally compact group $G$ is {\it inner amenable} if there exists an inner invariant mean on $L^\infty(G)$, that is, a state $m$ such that $m(sfs^{-1}) = m(f)$ for every $f\in L^\infty(G)$ and $s\in G$, where $(sfs^{-1})(y) = f(s^{-1}ys)$. This is a quite weak notion (which in fact should deserve the name of weak inner amenability) since for instance every discrete group is inner amenable in this sense (whereas Effros \cite{Effros} excludes the trivial inner invariant mean in his definition). Note that a locally compact group $G$ is amenable if and only if $G$ is inner amenable and $C^*_{r}(G)$ is nuclear \cite{LP}. The importance of inner amenability when studying the relations between properties of groups and their $C^*$-algebras has also been highlighted by Kirchberg in \cite[\S 7]{Ki93}, where inner amenability is called Property (Z). \begin{thm}\label{equivexact:group} Let $G$ be a locally compact group and consider the following conditions: \begin{itemize} \item[(1)] $G$ has an amenable action on a compact space; \item[(2)] $G$ is KW-exact; \item[(3)] $G$ is $C^*$-exact. \end{itemize} Then (1) $\Leftrightarrow$ (2) ${\Rightarrow}$ (3) and the three conditions are equivalent when $G$ is an inner amenable group or when $C^*_{r}(G)$ has a tracial state. \end{thm} That (1) ${\Rightarrow}$ (2) ${\Rightarrow}$ (3) is easy (see for instance \cite[Theorem 7.2]{AD02}).
When $G$ is a discrete group the equivalence between (2) and (3) is proved in \cite[Theorem 5.2]{KW99} and the fact that (3) implies (1) is proved in \cite{Oza}. That (2) implies (1) for any locally compact group is proved in \cite[Theorem 5.6]{BCL}, \cite[Proposition 2.5]{OS20}. The fact that (3) implies (1) in the case of an inner amenable locally compact group $G$ was treated in \cite[Theorem 7.3]{AD02}, where we used a property of $G$ that we called Property (W). Subsequently, it was proved in \cite{CT} that this property (W) is the same as inner amenability. The fact that (3) implies (2) when $C^*_{r}(G)$ has a tracial state is proved in \cite{Man}. Let us point out that this latter property is equivalent to the existence of an open amenable normal subgroup in $G$, as shown in \cite{KR}. Whether (3) implies (2) for any locally compact group is still open. Note that if KW-exactness and $C^*$-exactness are equivalent for all unimodular totally disconnected second countable groups then they are equivalent for all locally compact second countable groups \cite{CZ}. \section{Background on groupoids} We assume that the reader is familiar with the basic definitions about groupoids. We use the terminology and the notation of \cite{AD-R}. The unit space of a groupoid ${\mathcal G}$ is denoted by ${\mathcal G}^{(0)}$ and is often renamed as $X$. We implicitly identify ${\mathcal G}^{(0)}$ with a subset of ${\mathcal G}$. The structure of ${\mathcal G}$ is defined by the range and source maps $r,s: {\mathcal G}\to{\mathcal G}^{(0)}$, the inverse map $\gamma \mapsto \gamma^{-1}$ from ${\mathcal G}$ to ${\mathcal G}$ and the multiplication map $(\gamma,\gamma') \mapsto \gamma\gamma'$ from ${\mathcal G}^{(2)} = \set{(\gamma,\gamma')\in {\mathcal G}\times {\mathcal G} : s(\gamma) = r(\gamma')}$ to ${\mathcal G}$. For $x\in {\mathcal G}^{(0)}$ we set ${\mathcal G}^x = r^{-1}(x)$, ${\mathcal G}_x = s^{-1}(x)$ and ${\mathcal G}(x) = {\mathcal G}^x \cap {\mathcal G}_x$. Given $E\subset {\mathcal G}^{(0)}$, we write ${\mathcal G}(E) = r^{-1}(E)\cap s^{-1}(E)$. One important example is given by the left action of a group $G$ on a set $X$. The corresponding {\em semidirect product groupoid} ${\mathcal G} = X\rtimes G$ is $X\times G$ as a set. Its unit set is $X$, the range and source maps are given respectively by $r(x,g) = x$ and $s(x,g)= g^{-1}x$. The product is given by $(x,g)(g^{-1}x,h) = (x,gh)$, and the inverse by $(x,g)^{-1} = (g^{-1}x, g^{-1})$. Equivalence relations on $X$ are also an interesting family of examples. If ${\mathcal R}\subset X\times X$ is an equivalence relation, it is viewed as a groupoid with $X$ as set of units, $r(x,y) = x$ and $s(x,y) = y$ as range and source maps respectively. The product is given by $(x,y)(y,z) = (x,z)$ and the inverse by $(x,y)^{-1} = (y,x)$. \subsection{Measured groupoids} A {\em Borel groupoid} ${\mathcal G}$ is a groupoid endowed with a Borel structure such that the range, source, inverse and product maps are Borel, where ${\mathcal G}^{(2)}$ has the Borel structure induced by ${\mathcal G}\times {\mathcal G}$ and ${\mathcal G}^{(0)}$ has the Borel structure induced by ${\mathcal G}$.
A {\em Borel Haar system} $\lambda$ on ${\mathcal G}$ is a family $(\lambda^x)_{x\in {\mathcal G}^{(0)}}$ of measures on the fibres ${\mathcal G}^x$, which is Borel (in the sense that for every non-negative Borel function $f$ on ${\mathcal G}$ the function $x\mapsto \lambda(f)(x) =\int f{\,\mathrm d}\lambda^x$ is Borel), left invariant (in the sense that for all $\gamma\in{\mathcal G}, \quad\gamma\lambda^{s(\gamma)} = \lambda^{r(\gamma)}$), and proper (in the sense that there exists a non-negative Borel function $f$ on ${\mathcal G}$ such that $ \lambda(f)(x) = 1$ for all $x\in {\mathcal G}^{(0)}$). Given a measure $\mu$ on ${\mathcal G}^{(0)}$, one can integrate the measures $\lambda^x$ with respect to $\mu$ to get a measure $\mu\circ\lambda$ on ${\mathcal G}$. The measure $\mu$ is quasi-invariant with respect to the Haar system if the inverse map preserves the $(\mu\circ\lambda)$-negligible sets. A {\em measured groupoid} is a triple $({\mathcal G},\lambda,\mu)$ satisfying the above properties. All measure spaces are assumed to be standard and the measures are $\sigma$-finite. \begin{exs}\label{exs:mesgroupoid} (a) {\it Semidirect product measured groupoids.} Let $G$ be a second countable locally compact group with a left Haar measure $\lambda$, and $X$ a standard Borel space. A Borel left action of $G$ on $X$ is a left action such that the map $(x,s)\mapsto sx$ from $X\times G$ to $X$ is Borel. Then ${\mathcal G}=X\rtimes G$ is a Borel groupoid with a canonical Haar system, also denoted by $\lambda$. Indeed, identifying ${\mathcal G}^x$ with $G$, we take $\lambda^x = \lambda$. Let $\mu$ be a measure on $X$. Then $\mu\circ\lambda = \mu\otimes\lambda$. Moreover $\mu$ is quasi-invariant with respect to the $G$-action if and only if $({\mathcal G},\lambda,\mu)$ is a measured groupoid. (b) {\em Discrete measured equivalence relations.} Let ${\mathcal R}$ be an equivalence relation on a standard Borel space $X$ which has countable equivalence classes and such that ${\mathcal R}$ is a Borel subset of $X\times X$. This groupoid has a canonical Haar system: $\lambda^x$ is the counting measure on the equivalence class of $x$, identified with ${\mathcal R}^x$. A measure $\mu$ on $X$ is quasi-invariant if for every Borel subset $A\subset X$, the saturation of $A$ with respect to ${\mathcal R}$ has measure $0$ when $\mu(A) = 0$. Then $({\mathcal R},\mu)$ is a measured groupoid, called a {\em discrete measured equivalence relation}. \end{exs} \subsection{Topological groupoids} A {\it locally compact groupoid} is a groupoid ${\mathcal G}$ equipped with a locally compact\footnote{By convention a locally compact space will be Hausdorff.} topology such that the structure maps are continuous, where ${\mathcal G}^{(2)}$ has the topology induced by ${\mathcal G}\times{\mathcal G}$ and ${\mathcal G}^{(0)}$ has the topology induced by ${\mathcal G}$. A {\em continuous Haar system} is a family $ \lambda=(\lambda^x)_{x\in {\mathcal G}^{(0)}}$ of measures on ${\mathcal G}$ such that $\lambda^x$ has exactly ${\mathcal G}^x$ as support for every $x\in {\mathcal G}^{(0)}$, is left invariant and is continuous in the sense that for every $f\in {\mathcal C}_c({\mathcal G})$ (the space of continuous complex valued functions with compact support on ${\mathcal G}$) the function $x\mapsto \lambda(f)(x)= \int f{\,\mathrm d}\lambda^x$ is continuous. Note that the existence of a continuous Haar system implies that the range (and therefore the source) map is open \cite[Chap. I, Proposition 2.4]{Ren_book}.
\begin{exs}\label{exs:groupoids} (a) {\it Semidirect products.} Let us consider a locally compact group $G$ with Haar measure $\lambda$ acting continuously to the left on a locally compact space $X$. Then ${\mathcal G} = X\rtimes G$ is a locally compact groupoid and the Haar system defined in Example \ref{exs:mesgroupoid} (a) is continuous. (b) {\it Group bundle groupoids.} A group bundle groupoid is a locally compact groupoid such that the range and source maps are equal and open. By \cite[Lemma 1.3]{Ren91}, one can choose, for $x\in {\mathcal G}^{(0)}$, a left Haar measure $\lambda^x$ on the group ${\mathcal G}^x ={\mathcal G}_x$ in such a way that $(\lambda^x)_{x\in X}$ forms a Haar system on ${\mathcal G}$. An explicit example will be given in Section \ref{sec:HLS}. (c) {\it \'Etale groupoids.} A locally compact groupoid is called {\it \'etale} when its range (and therefore its source) map is a local homeomorphism from ${\mathcal G}$ into ${\mathcal G}^{(0)}$. Then ${\mathcal G}^x$ and ${\mathcal G}_x$ are discrete and ${\mathcal G}^{(0)}$ is open in ${\mathcal G}$. Moreover the family of counting measures $\lambda^x$ on ${\mathcal G}^x$ forms a Haar system (see \cite[Chap. I, Proposition 2.8]{Ren_book}). This will implicitly be our choice of Haar system. Groupoids associated with actions of discrete groups are \'etale. \end{exs} \subsection{Groupoid operator algebras} For the representation theory of measured groupoids we refer to \cite[\S 6.1]{AD-R}. The von Neumann algebra $VN({\mathcal G},\lambda,\mu)$ associated to such a groupoid is defined by its left regular representation. For a semidirect product $(X\rtimes G, \mu)$ it is the von Neumann crossed product $L^\infty(X,\mu)\rtimes G$. For a discrete measured equivalence relation $({\mathcal R},\mu)$, it is the von Neumann algebra defined in \cite{FMII}. We will now focus on the operator algebras associated with a locally compact groupoid\footnote{Throughout this text a locally compact groupoid will be implicitly endowed with a Haar system $\lambda$ which, concerning the examples given in Examples \ref{exs:groupoids}, will be the Haar systems described there.} ${\mathcal G}$. We set $X = {\mathcal G}^{(0)}$. The space ${\mathcal C}_c({\mathcal G})$ of continuous functions with compact support on ${\mathcal G}$ is an involutive algebra with respect to the following operations for $f,g\in {\mathcal C}_c({\mathcal G})$: \begin{align*} (f*g)(\gamma) &= \int f(\gamma_1)g(\gamma_{1}^{-1}\gamma) d\lambda^{r(\gamma)}(\gamma_1)\\ f^*(\gamma) & =\overline{f(\gamma^{-1})}. \end{align*} We define a norm on ${\mathcal C}_c({\mathcal G})$ by $$\norm{f}_I = \max \set{\sup_{x\in X} \int \abs{f(\gamma)}{\,\mathrm d}\lambda^x(\gamma), \,\,\sup_{x\in X} \int \abs{f(\gamma^{-1})}{\,\mathrm d}\lambda^x(\gamma)}.$$ The {\it full $C^*$-algebra $C^*({\mathcal G})$ of the groupoid} ${\mathcal G}$ is the enveloping $C^*$-algebra of the Banach $*$-algebra obtained by completion of ${\mathcal C}_c({\mathcal G})$ with respect to the norm $\norm{\cdot}_I$. In order to define the reduced $C^*$-algebra of ${\mathcal G}$ we need the notion of (right) Hilbert $C^*$-module ${\mathcal H}$ over a $C^*$-algebra $A$ (or Hilbert $A$-module), for which we refer to \cite{Lance_book}. We shall denote by ${\mathcal B}_A({\mathcal H})$ the $C^*$-algebra of $A$-linear adjointable maps from ${\mathcal H}$ into itself.
Let ${\mathcal E}$ be the Hilbert $C^*$-module\footnote{When ${\mathcal G}$ is \'etale, we shall use the notation $\ell^2_{{\mathcal C}_0(X)}({\mathcal G})$ rather than $L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda)$.} $L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda)$ over ${\mathcal C}_0(X)$ (the algebra of continuous functions on $X$ vanishing at infinity) obtained by completion of ${\mathcal C}_c({\mathcal G})$ with respect to the ${\mathcal C}_0(X)$-valued inner product $$\scal{\xi,\eta}(x) = \int_{{\mathcal G}^x} \overline{\xi(\gamma)}\eta(\gamma){\,\mathrm d}\lambda^x(\gamma).$$ The ${\mathcal C}_0(X)$-module structure is given by $$(\xi f)(\gamma) = \xi(\gamma)f\circ r(\gamma).$$ Let us observe that $L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda)$ is the space of continuous sections vanishing at infinity of a continuous field of Hilbert spaces with fibre $L^2({\mathcal G}^x,\lambda^x)$ at $x\in X$. We let ${\mathcal C}_c({\mathcal G})$ act on ${\mathcal E}$ by the formula $$(\Lambda(f)\xi)(\gamma) = \int f(\gamma^{-1}\gamma_1) \xi(\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1).$$ Then, $\Lambda$ extends to a representation of $C^*({\mathcal G})$ in the Hilbert ${\mathcal C}_0(X)$-module ${\mathcal E}$, called the {\it regular representation of} $({\mathcal G},\lambda)$. Its range is denoted by $C^*_{r}({\mathcal G})$ and called the {\it reduced $C^*$-algebra}\footnote{Very often, the Hilbert ${\mathcal C}_0(X)$-module $L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda^{-1})$ is considered in order to define the reduced $C^*$-algebra (see for instance \cite{KS02,KS04}). We pass from this setting to ours (which we think is more convenient for our purpose) by considering the isomorphism $U: L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda^{-1})\to L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda)$ such that $(U\xi)(\gamma) = \xi(\gamma^{-1})$.}{\it of the groupoid} ${\mathcal G}$. Note that $\Lambda(C^*({\mathcal G}))$ acts fibrewise on the corresponding continuous field of Hilbert spaces with fibres $L^2({\mathcal G}^x,\lambda^x)$ by the formula $$(\Lambda_x(f)\xi)(\gamma) = \int_{{\mathcal G}^x} f(\gamma^{-1}\gamma_1) \xi(\gamma_1) {\,\mathrm d}\lambda^x(\gamma_1)$$ for $f\in {\mathcal C}_c({\mathcal G})$ and $\xi\in L^2({\mathcal G}^x,\lambda^x)$. Moreover, we have $\norm{\Lambda(f)} = \sup_{x\in X} \norm{\Lambda_x(f)}$. For a semidirect product groupoid ${\mathcal G} = X\rtimes G$ as in Example \ref{exs:groupoids} (a) we get the usual crossed products $C^*({\mathcal G}) = C_0(X)\rtimes G$ and $C^*_{r}({\mathcal G}) = C_0(X)\rtimes_r G$.
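To make the last identification plausible, let us unwind the convolution and the involution of ${\mathcal C}_c({\mathcal G})$ for ${\mathcal G} = X\rtimes G$, with ${\mathcal G}^x$ identified with $G$ as in Example \ref{exs:mesgroupoid} (a), so that $r(x,s) = x$ and $(x,s)^{-1} = (s^{-1}x,s^{-1})$: for $f,g\in{\mathcal C}_c(X\rtimes G)$,
$$(f*g)(x,s) = \int_G f(x,t)\,g(t^{-1}x,t^{-1}s){\,\mathrm d}\lambda(t), \qquad f^*(x,s) = \overline{f(s^{-1}x,s^{-1})},$$
which, up to the usual normalisation conventions, are the familiar crossed product operations. The same identification of ${\mathcal C}_c(X\times G)$ with a subspace of ${\mathcal C}_c(G,{\mathcal C}_0(X))$ will be used again below, when discussing the approximation property.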
In \cite{AEG}, this characterization was extended to the case of any second countable locally compact group. It also holds in the case of discrete measured equivalence relations. Clearly, the right framework that unifies this notion of amenability is that of measured groupoids. \begin{defn}\label{amen:measgroupoid}\cite[Definition 3.2.8]{AD-R} A measured groupoid $({\mathcal G},\lambda,\mu)$ is said to be {\it amenable} if there exists a norm one projection $m: L^\infty({\mathcal G},\mu\circ\lambda) \to L^\infty({\mathcal G}^{(0)},\mu)$ such that $m(\psi*f) = \psi* m(f)$ for every $f\in L^\infty({\mathcal G},\mu\circ\lambda)$ and every Borel function $\psi$ on ${\mathcal G}$ such that $\sup_{x\in {\mathcal G}^{(0)}}\lambda^x(\abs{\psi} )<\infty$. \end{defn} Recall that $(\psi*f)(\gamma) = \int \psi(\eta)f(\eta^{-1}\gamma) {\,\mathrm d} \lambda^{r(\gamma)}(\eta)$ for $f\in L^\infty({\mathcal G},\mu\circ\lambda)$ and that we have $(\psi*f)(x) = \int \psi(\eta)f(\eta^{-1}x) {\,\mathrm d} \lambda^{x}(\eta)$ for $f\in L^\infty(X,\mu)$. The first definition of amenability for a measured groupoid is due to Renault \cite[Chap. II, \S 3]{Ren_book}. It was expressed in different terms: as a generalisation of the classical Day condition or, equivalently, as generalisations of the Reiter condition or of the Godement condition for groups. \begin{thm}\label{thm:caractAmenMeas}\cite[Proposition 3.2.14]{AD-R} Let $({\mathcal G},\lambda,\mu)$ be a measured groupoid. We endow ${\mathcal G}$ with the measure $\mu\circ\lambda$. The following conditions are equivalent: \begin{itemize} \item[(i)] $({\mathcal G},\lambda,\mu)$ is amenable; \item[(ii)] $[$Weak Day condition$]$ There exists a sequence $(g_n)$ of non-negative Borel functions on ${\mathcal G}$ such that $\lambda(g_n) =1$ and $\lim_n \big(f*g_n - (\lambda(f)\circ r )g_n\big) = 0$ in the weak topology of $L^1({\mathcal G})$ for all $f\in L^1({\mathcal G})$; \item[(iii)] $[$Weak Reiter condition$]$ There exists a sequence $(g_n)$ of non-negative Borel functions on ${\mathcal G}$ such that $\lambda(g_n) =1$ and $\lim_n \int\abs{g_n(\gamma^{-1}\gamma_1) - g_n(\gamma_1)}{\,\mathrm d} \lambda^{r(\gamma)}(\gamma_1) = 0$ in the weak*-topology of $L^\infty({\mathcal G})$; \item[(iv)] $[$Weak Godement condition$]$ There exists a sequence $(\xi_n)$ of Borel functions on ${\mathcal G}$ such that $\lambda(\abs{\xi_n}^2) = 1$ for all $n$ and $\lim_n \int\overline{\xi_n(\gamma_1)}\xi_n(\gamma^{-1}\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1) = 1$ in the weak*-topology of $L^\infty({\mathcal G})$. \end{itemize} \end{thm} \subsection{Amenability of locally compact groupoids} The (topological) amenability\footnote{From now on, amenability will implicitly mean topological amenability.} of a locally compact groupoid ${\mathcal G}$ has been introduced by Renault in \cite{Ren_book}. In \cite{AD-R} it is defined as follows. \begin{defn} \cite[Definition 2.2.1]{AD-R} We say that a locally compact groupoid ${\mathcal G}$ is {\em amenable} if there exists a net (or a sequence when ${\mathcal G}$ is $\sigma$-compact) $(m_i)$, where $m_i = (m_i^{x})_{x\in {\mathcal G}^{(0)}}$ is a family of probability measures $m_i^{x}$ on ${\mathcal G}^x$, continuous in the sense that $x\mapsto m_i^{x}(f)$ is continuous for every $f\in {\mathcal C}_c({\mathcal G})$, and such that $\lim_i\norm{\gamma m_i^{s(\gamma)} - m_i^{r(\gamma)}}_1 = 0$ uniformly on every compact subset of ${\mathcal G}$.
\end{defn} This notion has many equivalent definitions: \begin{thm}\label{thm:caractAmen}\cite[Proposition 2.2.13]{AD-R} Let ${\mathcal G}$ be a $\sigma$-compact locally compact groupoid. The following conditions are equivalent: \begin{itemize} \item[(i)] ${\mathcal G}$ is amenable; \item[(ii)] $[$Reiter condition$]$ There exists a sequence $(g_n)$ in ${\mathcal C}_c({\mathcal G})^+$ such that $\lim_n\lambda(g_n) = 1$ uniformly on every compact subset of ${\mathcal G}^{(0)}$ and $\lim_n \int\abs{g_n(\gamma^{-1}\gamma_1) - g_n(\gamma_1)}{\,\mathrm d} \lambda^{r(\gamma)}(\gamma_1) = 0$ uniformly on every compact subset of ${\mathcal G}$; \item[(iii)] There exists a sequence $(h_n)$ of continuous positive definite functions with compact support on ${\mathcal G}$ whose restrictions to ${\mathcal G}^{(0)}$ are bounded by $1$ and such that $\lim_n h_n = 1$ uniformly on every compact subset of ${\mathcal G}$; \item[(iv)] $[$Godement condition$]$ There exists a sequence $(\xi_n)$ in ${\mathcal C}_c({\mathcal G})$ such that $\lambda(\abs{\xi_n}^2) \leq 1$ for all $n$ and $\lim_n \int\overline{\xi_n(\gamma_1)}\xi_n(\gamma^{-1}\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1) = 1$ uniformly on every compact subset of ${\mathcal G}$. \end{itemize} \end{thm} Recall that a function $h$ on ${\mathcal G}$ is {\em positive definite} or of {\em positive type} if for every $x\in {\mathcal G}^{(0)}$, $n\in {\mathbb{N}}$ and $\gamma_1,\dots, \gamma_n \in {\mathcal G}^x$, the $n \times n$ matrix $[h(\gamma_i^{-1}\gamma_j)]$ is non-negative. For instance, given a function $\xi$ on ${\mathcal G}$ such that $\lambda(\abs{\xi}^2)$ is bounded on ${\mathcal G}^{(0)}$, the function $\gamma \mapsto \int\overline{\xi(\gamma_1)}\xi(\gamma^{-1}\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1)$ is positive definite. \begin{rems}\label{rem:Godement} (a) In \cite{AD-R} it is assumed that ${\mathcal G}$ is second countable but the proof of the above theorem holds as well when ${\mathcal G}$ is $\sigma$-compact. This observation will be useful later when working with the groupoid $\beta_r{\mathcal G} \rtimes {\mathcal G}$. (b) In the above characterizations, the boundedness conditions for the sequences $(h_n)$ and $(\xi_n)$ are not necessary (see \cite[Proposition 2.2.13]{AD-R}). \end{rems} \begin{defn}\cite[Chap. II, Definition 3.6]{Ren_book},\cite[Definition 3.3.1]{AD-R} One says that a second countable locally compact groupoid with Haar system $({\mathcal G},\lambda)$ is {\em measurewise amenable} if for every quasi-invariant measure $\mu$ on ${\mathcal G}^{(0)}$ the measured groupoid $({\mathcal G},\lambda,\mu)$ is amenable. \end{defn} Topological amenability is closely related to measurewise amenability. It is not hard to see for instance that the Reiter condition of Theorem \ref{thm:caractAmen} implies the weak Reiter condition of Theorem \ref{thm:caractAmenMeas} for every quasi-invariant measure $\mu$. Therefore topological amenability implies measurewise amenability. It is a long-standing open question whether the converse is true in general; the converse has been proved for \'etale groupoids \cite[Corollary 3.3.8]{AD-R} and recently for locally compact second countable semidirect product groupoids \cite[Corollary 3.29]{BEW20}. \begin{rem} Let us consider the case of a locally compact semidirect product groupoid\footnote{For these groupoids the $\sigma$-compactness assumption is not needed \cite[Proposition 2.5]{AD02}.} ${\mathcal G}= X\rtimes G$.
Then, topological amenability is for instance spelled out as the existence of a net $(m_i)$ of weak*-continuous maps $m_i: x\mapsto m_i^{x}$ from $X$ into the space of probability measures on $G$, such that $\lim_i \norm{gm_i^{x} - m_i^{gx}}_1 = 0$ uniformly on every compact subset of $X\times G$. In this case we also say that {\em the $G$-action on $X$ is amenable}. We set $A = {\mathcal C}_0(X)$ and for every $f\in {\mathcal C}_c(X\times G)$ we set $\tilde f(s)(x) = f(x,s)$. Then $\tilde f$ is in the space ${\mathcal C}_c(G,A)$ of continuous functions with compact support from $G$ into $A$. It is also an element of the Hilbert $A$-module $L^2(G,A)$ given as the completion of ${\mathcal C}_c(G,A)$ with respect to the $A$-valued inner product $\scal{\xi,\eta} = \int_G \xi(s)^*\eta(s) {\,\mathrm d}\lambda(s)$. Finally, for $\xi\in {\mathcal C}_c(G,A)$, we set $(\tilde\alpha_t\xi)(s)(x) = \xi(t^{-1}s)(t^{-1}x)$ and we denote by the same symbol the continuous extension of $\tilde\alpha_t$ to $L^2(G,A)$. If $h_i(\gamma)= \int \overline{\xi_i(\gamma_1)}\xi_i(\gamma^{-1}\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1)$ with $\xi_i\in {\mathcal C}_c(X\times G)$, we have $$\tilde h_i(t)(x) = \int_G \overline{\xi_i(x,s)} \xi_i(t^{-1}x,t^{-1}s){\,\mathrm d}\lambda(s) = \scal{\tilde\xi_i,\tilde\alpha_t(\tilde\xi_i)}(x).$$ It follows that the Godement condition characterizing the amenability of $X\rtimes G$ may be interpreted as the existence of a bounded net $(\eta_i)$ in $L^2(G,A)$ such that $\scal{\eta_i, \tilde\alpha_t(\eta_i)} \to 1$ uniformly on compact subsets of $G$ in the strict topology of $A$. The first attempt to define an amenable action of a group on a non-commutative $C^*$-algebra $A$ was presented in \cite{AD87}. The solution was not satisfactory since it was limited to discrete groups and involved the bidual of $A$. Since the end of the 2010s, a new interest in the subject has led to major advances \cite{BC, BEW, BEW20, OS20} and resulted in very nice equivalent definitions of amenability. One of the definitions is the following extension of the commutative case described above, called the approximation property (AP), first introduced in \cite{Exe, ENg} in the setting of Fell bundles over locally compact groups. An action $\alpha: G\curvearrowright A$ of a locally compact group on a $C^*$-algebra $A$ has the {\em approximation property} (AP) if there exists a bounded net (or a sequence in the separable case) $(\eta_i)$ in ${\mathcal C}_c(G,A) \subset L^2(G,A)$ such that $\scal{\eta_i, a \tilde\alpha_t(\eta_i)} \to a$ in norm, uniformly on compact subsets of $G$, for every $a\in A$. Here one sets again $\tilde\alpha_t(\eta)(s) = \alpha_t(\eta(t^{-1}s))$ for $\eta\in {\mathcal C}_c(G,A)$, $s,t\in G$. For interesting properties of amenable actions of locally compact groups on $C^*$-algebras we refer to \cite{BEW20, OS20}. \end{rem} \section{Amenable at infinity groupoids} As already said, the property for a locally compact group $G$ to be KW-exact is equivalent to the existence of an amenable $G$-action on a compact space. In order to try to extend this fact to a locally compact groupoid ${\mathcal G}$ we need some preparation. \subsection{First definitions} Let $X$ be a locally compact space. A {\it fibre space} over $X$ is a pair $(Y,p)$ where $Y$ is a locally compact space and $p$ is a continuous surjective map from $Y$ onto $X$. For $x\in X$ we denote by $Y^x$ the {\it fibre} $p^{-1}(x)$.
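A basic example, which will be used repeatedly below, is ${\mathcal G}$ itself, fibred over $X = {\mathcal G}^{(0)}$ by its range map:
$$({\mathcal G},r), \qquad {\mathcal G}^x = r^{-1}(x) \quad \text{for } x\in X.$$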
We say that $(Y,p)$ is {\em fibrewise compact} if the map $p$ is proper in the sense that $p^{-1}(K)$ is compact for every compact subset $K$ of $X$. Note that this property is stronger than requiring each fibre to be compact. Let $(Y_i,p_i)$, $i=1,2$, be two fibre spaces over $X$. We denote by $Y_{1} \,_{p_1}\!\!*_{p_2} Y_2$ (or $Y_1*Y_2$ when there is no ambiguity) the {\it fibred product}\index{fibred product} $\set{(y_1,y_2)\in Y_1\times Y_2: p_1(y_1) = p_2(y_2)}$ equipped with the topology induced by the product topology. We say that a continuous map $\varphi: Y_1\to Y_2$ is a {\it morphism of fibre spaces} if $p_2\circ \varphi = p_1$. \begin{defn}\label{def:Gspace} Let ${\mathcal G}$ be a locally compact groupoid. A {\it left} ${\mathcal G}$-{\it space} \index{${\mathcal G}$-space} is a fibre space $(Y,p)$ over $X = {\mathcal G}^{(0)}$, equipped with a continuous map $(\gamma, y) \mapsto \gamma y$ from ${\mathcal G}\,_s\!*_pY$ into $Y$, satisfying the following conditions: \begin{itemize} \item $p(\gamma y) = r(\gamma)$ for $(\gamma, y) \in {\mathcal G}\,_s\!*_pY$, and $p(y)y = y$ for $y\in Y$; \item if $(\gamma_1, y) \in {\mathcal G}\,_s\!*_pY$ and $(\gamma_2,\gamma_1)\in {\mathcal G}^{(2)}$, then $(\gamma_2\gamma_1)y = \gamma_2(\gamma_1 y)$. \end{itemize} \end{defn} Given such a ${\mathcal G}$-space $(Y,p)$, we associate a groupoid $Y\rtimes {\mathcal G}$, called the semidirect product groupoid of $Y$ by ${\mathcal G}$. It is defined as in the case of group actions except that as a topological space it is the fibred product $Y\!_p\!*_r {\mathcal G}$ over $X = {\mathcal G}^{(0)}$. Although $p$ is not assumed to be an open map, the range map $(y,\gamma) \mapsto y$ from $Y\rtimes {\mathcal G}$ onto $Y$ is open since the range map $r:\gamma \mapsto r(\gamma)$ is open. Moreover, if ${\mathcal G}$ has a Haar system $(\lambda^x)_{x\in X}$, then $Y\rtimes {\mathcal G}$ has the canonical Haar system $y\mapsto \delta_y\times \lambda^{p(y)}$ (identified with $\lambda^{p(y)}$ on ${\mathcal G}^{p(y)}$) (see \cite[Proposition 1.4]{AD16}). Note that $Y\rtimes {\mathcal G}$ is an \'etale groupoid when ${\mathcal G}$ is \'etale. We say that the {\em ${\mathcal G}$-space $(Y,p)$ is amenable} if the semidirect product groupoid $Y\rtimes {\mathcal G}$ is amenable. Note that if ${\mathcal G}$ is an amenable groupoid, every ${\mathcal G}$-space is amenable \cite[Corollary 2.2.10]{AD-R}. There is a subtlety about the definition of amenability at infinity which leads us to introduce two notions. We do not know whether they are equivalent in general. \begin{defn}\label{def:ameninf} Let ${\mathcal G}$ be a locally compact groupoid and let $X= {\mathcal G}^{(0)}$. We say that \begin{itemize} \item[(i)] ${\mathcal G}$ is {\em strongly amenable at infinity} if there exists an amenable fibrewise compact ${\mathcal G}$-space $(Y,p)$ with a continuous section $\sigma : X\to Y$ of $p$; \item[(ii)] ${\mathcal G}$ is {\em amenable at infinity} if there exists an amenable fibrewise compact ${\mathcal G}$-space. \end{itemize} \end{defn} \begin{exs}\label{rem:invGE} (a) Every locally compact amenable groupoid ${\mathcal G}$ is strongly amenable at infinity since the left action of ${\mathcal G}$ on its unit space is amenable. (b) It is easily seen that the semidirect product groupoid ${\mathcal G} = X\rtimes G$ relative to an action of a KW-exact (hence amenable at infinity) locally compact group $G$ on a locally compact space $X$ is strongly amenable at infinity \cite[Proposition 4.3]{AD16}.
This is also true for partial actions\footnote{For the definition see \cite[\S I.2, \S I.5]{Exel_Book}.} of exact discrete groups \cite[Proposition 4.23]{AD16}. \end{exs} It is useful to have a criterion of amenability at infinity which does not involve $Y$ but only ${\mathcal G}$. Before proceeding further we need to introduce some notation and definitions. We set ${\mathcal G}*_r {\mathcal G} = \set{(\gamma,\gamma_1) \in {\mathcal G}\times {\mathcal G} : r(\gamma) = r(\gamma_1)}$. A subset of ${\mathcal G}*_r {\mathcal G}$ will be called a {\it tube} if its image by the map $(\gamma,\gamma_1) \mapsto \gamma^{-1}\gamma_1$ is relatively compact in ${\mathcal G}$. We denote by ${\mathcal C}_t({\mathcal G}*_r{\mathcal G})$ the space of continuous bounded functions on ${\mathcal G}*_r{\mathcal G}$ with support in a tube. We say that a function $k: {\mathcal G} *_r {\mathcal G} \to {\mathbb{C}}$ is a {\it positive definite kernel} if for every $x\in X$, $n \in {\mathbb{N}}$ and $\gamma_1,\dots,\gamma_n \in {\mathcal G}^x$, the matrix $[k(\gamma_i,\gamma_j)]$ is non-negative, that is, $$\sum_{i,j=1}^n \overline{\alpha_i}\alpha_j k(\gamma_i,\gamma_j) \geq 0$$ for $\alpha_1,\dots,\alpha_n \in {\mathbb{C}}$. In the case of groups (for which amenability at infinity coincides with strong amenability at infinity) let us recall the following result: \begin{thm}\label{caractinfgroup} A (second countable) locally compact group $G$ is amenable at infinity if and only if there exists a net $(k_i)$ of continuous positive definite kernels $k_i: G\times G \to {\mathbb{C}}$ with support in tubes such that $\lim_i k_i = 1$ uniformly on tubes. \end{thm} When $G$ is any discrete group this is proved in \cite{Oza} and when $G$ is a locally compact second countable group this is proved in \cite[Theorem 2.3, Corollary 2.9]{DL} which improves \cite[Proposition 3.5]{AD02}. One important ingredient in the proof of the above theorem is the use of a universal compact $G$-space, namely the Stone-\v Cech compactification $\beta G$ of $G$ if $G$ is discrete and an appropriate variant of it in general. \subsection{Fibrewise compactifications of ${\mathcal G}$-spaces} In order to extend Theorem \ref{caractinfgroup} to the case of groupoids we first need some information about fibrewise compactifications of fibre spaces. \begin{defn}\label{def:fibComp} A {\it fibrewise compactification} of a fibre space $(Y,p)$ over a locally compact space $X$ is a triple $(Z, \varphi, q)$ where $Z$ is a locally compact space, $q: Z \rightarrow X$ is a continuous {\it proper} map and $\varphi : Y \rightarrow Z$ is a homeomorphism onto an open dense subset of $Z$ such that $p = q \circ \varphi$. \end{defn} We denote by ${\mathcal C}_0(Y,p)$ the $C^*$-algebra of continuous bounded functions $g$ on $Y$ such that for every $\varepsilon >0$ there exists a compact subset $K$ of $X$ satisfying $\abs{g(y)} \leq \varepsilon$ if $y \notin p^{-1}(K)$. We denote by $\beta_pY$ the Gelfand spectrum of ${\mathcal C}_0(Y,p)$. The inclusion $f\mapsto f\circ p$ from ${\mathcal C}_0(X)$ into ${\mathcal C}_0(Y,p)$ defines a surjection $p_\beta$ from $\beta_pY$ onto $X$. It is easily checked that $(\beta_pY,p_\beta)$ is fibrewise compact. We call it the {\em Stone-\v Cech fibrewise compactification of} $(Y,p)$. Note that when $X$ is compact, then ${\mathcal C}_0(Y,p)$ is the $C^*$-algebra of continuous bounded functions on $Y$ and $\beta_pY$ is the usual Stone-\v Cech compactification $\beta Y$ of $Y$.
We observe that even if $p: Y\to X$ is open, its extension $p_\beta : \beta_pY\to X$ is not always open. Consider for instance $Y = ([0,1]\times \set{0}) \sqcup (]1/2, 1]\times \set{1}) \subset {\mathbb{R}}^2$ and let $p$ be the first projection on $X= [0,1]$. Then $\beta_p Y = \beta Y = ([0,1]\times \set{0}) \sqcup (\beta]1/2, 1]\times \set{1})$. The fibres of $\beta_p Y$ are the same as those of $Y$ except $(\beta_p Y)^{1/2} = (\set{1/2}\times \set{0})\sqcup \big((\beta]1/2,1]\setminus ]1/2,1])\times\set{1}\big)$. Then $\beta_pY \setminus ([0,1]\times \set{0})$ is open and its image by $p_\beta$ is $[1/2,1]$, which is not open in $X$; hence $p_\beta$ is not open. The next proposition shows that $(\beta_p Y, p_\beta)$ is the solution of a universal problem. \begin{prop}\label{prop:universal}\cite[Proposition A.4]{AD16} Let $(Y,p)$ and $(Y_1,p_1)$ be two fibre spaces over $X$, where $(Y_1,p_1)$ is fibrewise compact. Let $\varphi_1: (Y,p) \to (Y_1,p_1)$ be a morphism. There exists a unique continuous map $\Phi_1 : \beta_p Y \to Y_1$ which extends $\varphi_1$. Moreover, $\Phi_1$ is proper and is a morphism of fibre spaces, that is, $p_\beta = p_1\circ \Phi_1$. \end{prop} We assume now that $(Y,p)$ is a ${\mathcal G}$-space. A {\it ${\mathcal G}$-equivariant fibrewise compactification} of the ${\mathcal G}$-space $(Y,p)$ is a fibrewise compactification $(Z,\varphi, q)$ of $(Y,p)$ such that $(Z,q)$ is a ${\mathcal G}$-space satisfying $\varphi(\gamma y) = \gamma\varphi(y)$ for every $(\gamma, y)\in {\mathcal G}\,_s\!*_pY$. We need to extend the ${\mathcal G}$-action on $(Y,p)$ to a continuous ${\mathcal G}$-action on $(\beta_p Y,p_\beta)$. Even in the case of a non-discrete group action $G\curvearrowright Y$ this is not possible in general: we have to replace $\beta Y$ by the spectrum of the $C^*$-algebra of bounded left-uniformly continuous functions on $G$ \cite{AD02}. In the groupoid case it is more complicated, and {\bf we will limit ourselves to the case of \'etale groupoids}. \begin{prop}\label{prop:max_min} \cite[Proposition 2.5]{AD16} Let $(Y,p)$ be a ${\mathcal G}$-space, where ${\mathcal G}$ is an \'etale groupoid. The structure of ${\mathcal G}$-space of $(Y,p)$ extends in a unique way to the Stone-\v Cech fibrewise compactification $(\beta_p Y, p_\beta)$ and makes it a ${\mathcal G}$-equivariant fibrewise compactification. \end{prop} \begin{prop}\label{prop:universal1}\cite[Proposition 2.6]{AD16} Let ${\mathcal G}$ be an \'etale groupoid and $(Y,p)$, $(Y_1,p_1)$ be two ${\mathcal G}$-spaces. We assume that $(Y_1,p_1)$ is fibrewise compact. Let $\varphi_1: (Y,p) \to (Y_1,p_1)$ be a ${\mathcal G}$-equivariant morphism. The unique continuous map $\Phi_1 : \beta_p Y \to Y_1$ which extends $\varphi_1$ is ${\mathcal G}$-equivariant. \end{prop} \subsection{Amenability at infinity for \'etale groupoids} We view the fibre space $({\mathcal G},r)$ over ${\mathcal G}^{(0)}$ in an obvious way as a left ${\mathcal G}$-space. Its ${\mathcal G}$-equivariant fibrewise compactification $(\beta_r {\mathcal G}, r_\beta)$ will play an important role in the sequel because of the following observation. \begin{prop}\label{prop:SC} An \'etale groupoid ${\mathcal G}$ is strongly amenable at infinity if and only if the Stone-\v Cech fibrewise compactification $(\beta_r {\mathcal G}, r_\beta)$ is an amenable ${\mathcal G}$-space.
\end{prop} \begin{proof} In one direction, we note that the inclusions ${\mathcal G}^{(0)}\subset {\mathcal G}\subset \beta_r {\mathcal G}$ provide a continuous section for $r_\beta$ and therefore the amenability of the ${\mathcal G}$-space $\beta_r{\mathcal G}$ implies the strong amenability at infinity of ${\mathcal G}$. Conversely, assume that $(Y,p,\sigma)$ satisfies the conditions of Definition \ref{def:ameninf}. We define a continuous ${\mathcal G}$-equivariant morphism $\varphi : ({\mathcal G},r)\to (Y,p)$ by $$\varphi(\gamma) = \gamma \sigma\circ s(\gamma).$$ Then, by Proposition \ref{prop:universal1}, $\varphi$ extends in a unique way to a continuous ${\mathcal G}$-equivariant morphism $\Phi$ from $(\beta_r{\mathcal G}, r_\beta)$ into $(Y,p)$. Note that $\Phi(\beta_r{\mathcal G})$ is a closed ${\mathcal G}$-invariant subset of $Y$. Now, it follows from \cite[Proposition 2.2.9 (i)]{AD-R} that ${\mathcal G}\curvearrowright \beta_r{\mathcal G}$ is amenable, since ${\mathcal G}\curvearrowright \Phi(\beta_r{\mathcal G})$ is amenable. \end{proof} The space $\beta_r{\mathcal G}$ has the serious drawback that it is not second countable in most cases; however, it is $\sigma$-compact when ${\mathcal G}$ is second countable. On the other hand, it has the advantage of being intrinsic. Moreover it is possible to build a {\it second countable} amenable fibrewise compact ${\mathcal G}$-space out of any amenable fibrewise compact ${\mathcal G}$-space, when ${\mathcal G}$ is second countable and \'etale \cite[Lemma 4.9]{AD16}. \begin{thm}\label{prop:amen_inf} Let ${\mathcal G}$ be a second countable \'etale groupoid. The following conditions are equi\-valent: \begin{itemize} \item[(i)] ${\mathcal G}$ is strongly amenable at infinity; \item[(ii)] there exists a sequence $(k_n)$ of bounded positive definite continuous kernels on ${\mathcal G} *_r {\mathcal G}$ supported in tubes such that \begin{itemize} \item[(a)] for every $n$, the restriction of $k_n$ to the diagonal of ${\mathcal G} *_r {\mathcal G}$ is uniformly bounded by $1$; \item[(b)] $\lim_n k_n = 1$ uniformly on tubes. \end{itemize} \end{itemize} \end{thm} \begin{proof} By Theorem \ref{thm:caractAmen}, the groupoid $\beta_r{\mathcal G}\rtimes {\mathcal G}$ is amenable if and only if there exists a sequence $(h_n)$ of continuous positive definite functions in ${\mathcal C}_c(\beta_r{\mathcal G}\rtimes {\mathcal G})$, whose restrictions to the set of units are bounded by $1$, such that $\lim_n h_n = 1$ uniformly on every compact subset of $\beta_r{\mathcal G}\rtimes {\mathcal G}$. For $(\gamma_1,\gamma_2)\in {\mathcal G}*_r{\mathcal G}$ we set $k_n(\gamma_1,\gamma_2) = h_n(\gamma_1^{-1}, \gamma_1^{-1}\gamma_2)$. Then we check that $k_n$ is a positive definite kernel bounded by $1$ on the diagonal, supported in a tube, and that $\lim_n k_n = 1$ uniformly on tubes. The converse is proved similarly (see \cite[Theorem 4.13, Theorem 4.15]{AD16} for details). \end{proof} As observed in \cite[Remark 3.4, Remark 4.16]{AD16} it suffices in (ii) (a) above to require that each $k_n$ is bounded. \section{About exactness for groupoids} \subsection{Equivalence of several definitions of exactness for \'etale groupoids} \begin{defn} Let ${\mathcal G}$ be a locally compact groupoid.
We say that ${\mathcal G}$ is KW{\it-exact} if for every ${\mathcal G}$-equivariant exact sequence $0\to I \to A \to B \to 0$ of ${\mathcal G}$-$C^*$-algebras, the corresponding sequence $$0 \to C^*_{r}({\mathcal G},I) \to C^*_{r}({\mathcal G},A) \to C^*_{r}({\mathcal G},B)\to 0$$ of reduced crossed products is exact. We say that ${\mathcal G}$ is {\it $C^*$-exact} if $C^*_{r}({\mathcal G})$ is an exact $C^*$-algebra. \end{defn} For the definition of actions of locally compact groupoids on $C^*$-algebras and the construction of the corresponding crossed products we refer for instance to \cite{KS04} or \cite[\S 6.2]{AD16}. As in the case of groups we easily see that amenability at infinity implies KW-exactness which in turn implies $C^*$-exactness (see for instance \cite[Theorem 7.2]{AD02} for groups and \cite[\S 7]{AD16} for groupoids). The main problem is to see whether $C^*$-exactness of an \'etale groupoid implies its amenability at infinity, as is the case for discrete groups. In this section we will adapt to the \'etale groupoid case our proof of the fact that an inner amenable locally compact $C^*$-exact group is amenable at infinity \cite[Theorem 7.3]{AD02}. We first need to define inner amenability for groupoids. \begin{defn} Let ${\mathcal G}$ be a locally compact groupoid. Following \cite[Definition 2.1]{Roe}, we say that a closed subset $A$ of ${\mathcal G} \times {\mathcal G}$ is {\it proper} if for every compact subset $K$ of ${\mathcal G}$, the sets $(K\times {\mathcal G}) \cap A$ and $({\mathcal G} \times K) \cap A$ are compact. We say that a function $f: {\mathcal G}\times {\mathcal G} \to {\mathbb{C}}$ is {\it properly supported} if its support is proper. \end{defn} Given a groupoid ${\mathcal G}$, let us observe that the product ${\mathcal G}\times {\mathcal G}$ has an obvious structure of groupoid, with $X\times X$ as set of units, where $X= {\mathcal G}^{(0)}$. Observe that a map $f:{\mathcal G}\times {\mathcal G}\to {\mathbb{C}}$ is positive definite if and only if, given an integer $n$, $(x,y)\in X\times X$ and $\gamma_1,\dots, \gamma_n\in {\mathcal G}^x$, $\eta_1,\dots,\eta_n\in {\mathcal G}^y$, the matrix $[f(\gamma_i^{-1}\gamma_j, \eta_i^{-1}\eta_j)]_{i,j}$ is non-negative. \begin{defn}\label{def:wia} We say that a locally compact groupoid ${\mathcal G}$ is {\it inner amenable}\index{inner amenable l. c. groupoid} if for every compact subset $K$ of ${\mathcal G}$ and for every $\varepsilon >0$ there exists a continuous positive definite function $f$ on the product groupoid ${\mathcal G}\times {\mathcal G}$, properly supported, such that $f(x,y)\leq 1$ for all $x,y\in {\mathcal G}^{(0)}$ and such that $|f(\gamma,\gamma) - 1| < \varepsilon$ for all $\gamma \in K$. \end{defn} This terminology is justified by the fact that for a locally compact group the above property is equivalent to the notion of inner amenability introduced in Section 1. That this property for groups implies inner amenability is proved in \cite{CT}; the reverse is almost immediate \cite[Proposition 4.6]{AD02}. Every amenable locally compact groupoid ${\mathcal G}$ is inner amenable since the groupoid ${\mathcal G}\times {\mathcal G}$ is amenable and therefore Theorem \ref{thm:caractAmen} applies to this groupoid. Every closed subgroupoid of an inner amenable groupoid is inner amenable \cite[Corollary 5.6]{AD16}. Every semidirect product groupoid $X\rtimes G$ is inner amenable as soon as $G$ is an inner amenable locally compact group \cite[Corollary 5.9]{AD16}.
We do not know whether every \'etale groupoid is inner amenable. \begin{thm}\label{thm:equiv} Let ${\mathcal G}$ be a second countable inner amenable \'etale groupoid. Then the following conditions are equivalent: \begin{itemize} \item[(1)] ${\mathcal G}$ is strongly amenable at infinity. \item[(2)] ${\mathcal G}$ is amenable at infinity. \item[(3)] $C^*_{r}(\beta_r {\mathcal G}\rtimes {\mathcal G})$ is nuclear. \item[(4)] $C^*_{r}(\beta_r {\mathcal G}\rtimes {\mathcal G})$ is exact. \item[(5)] ${\mathcal G}$ is KW-exact. \item[(6)] $C^*_{r}({\mathcal G})$ is exact. \end{itemize} \end{thm} The following implications are immediate or already known: $$\xymatrix{(1) \ar@{=>}[r] \ar@{=>}[d] & (2)\ar@{=>}[r] &(5)\ar@{=>}[d]\\ (3) \ar@{=>}[r] & (4) \ar@{=>}[r] & (6)} $$ The implication (1) ${\Rightarrow}$ (3) is proved in \cite[Corollary 6.2.14]{AD-R} for second countable locally compact groupoids, but this result extends to the groupoid $\beta_r {\mathcal G}\rtimes {\mathcal G}$ when ${\mathcal G}$ is second countable locally compact and \'etale (see \cite[Proposition 7.2]{AD16}). It remains to show that (6) implies (1). We give below an idea of the proof, which is detailed in \cite[\S 8]{AD16}. $\blacktriangleright$ The first step is to extend Kirchberg's characterization of exact $C^*$-algebras as being nuclearly embeddable into some ${\mathcal B}(H)$ as follows. \begin{lem}\cite[Lemma 8.1]{AD16}\label{lem:Kirch} Let $A$, $B$ be two separable $C^*$-algebras, where $B$ is nuclear. Let ${\mathcal E}$ be a countably generated Hilbert $C^*$-module over $B$. Let $\iota : A \to {\mathcal B}_B({\mathcal E})$ be an embedding of $C^*$-algebras. Then $A$ is exact if and only if $\iota$ is nuclear. \end{lem} The two main ingredients of the proof of this lemma are the Kasparov absorption theorem and the Kasparov-Voiculescu theorem \cite[Theorem 2, Theorem 6]{Kasp} that allow us to reduce the situation to the case of Hilbert spaces. $\blacktriangleright$ The second step is the following approximation theorem. Recall that a completely positive contraction $\Phi:A\to B$ between two $C^*$-algebras is {\it factorable} if there exists an integer $n$ and completely positive contractions $\psi: A \to M_{n}({\mathbb{C}})$, $\varphi : M_{n}({\mathbb{C}}) \to B$ such that $\Phi = \varphi \circ \psi$. A map $\Psi :C^*_{r}({\mathcal G}) \to B$ is said to have {\it compact support} if there exists a compact subset $K$ of ${\mathcal G}$ such that $\Psi(f) = 0$ for every $f \in {\mathcal C}_c({\mathcal G})$ with $(\hbox{Supp}\, f) \cap K = \emptyset$. \begin{thm}\cite[Corollary 8.4]{AD16}\label{cor:approx} Let $B$ be a $C^*$-algebra and let $\Phi : C^{*}_{r}({\mathcal G}) \to B$ be a nuclear completely positive contraction. Then for every $\varepsilon > 0$ and every $a_1,\dots,a_k \in C^{*}_{r}({\mathcal G})$ there exists a factorable completely positive contraction $\Psi : C^{*}_{r}({\mathcal G}) \to B$, with compact support, such that $$\| \Psi(a_i) - \Phi(a_i) \| \leq \varepsilon \,\,\, \hbox{for}\,\,\, i = 1,\dots,k.$$ \end{thm} $\blacktriangleright$ Finally we need the following result due to Jean Renault (private communication). Given $f:{\mathcal G}\times {\mathcal G} \to {\mathbb{C}}$, we set $f_\gamma(\gamma') = f(\gamma, \gamma')$. \begin{lem}\cite[Lemma 8.5]{AD16} Let ${\mathcal G}$ be a locally compact groupoid. \begin{itemize} \item[(a)] Let $f\in {\mathcal C}_c({\mathcal G})$ be a continuous positive definite function. Then, $f$ viewed as an element of $C^*_{r}({\mathcal G})$ is a positive element.
\item[(b)] Let $f:{\mathcal G}\times {\mathcal G} \to {\mathbb{C}}$ be a properly supported positive definite function. Then $\gamma\mapsto f_\gamma$ is a continuous positive definite function from ${\mathcal G}$ into $C^*_{r}({\mathcal G})$. \end{itemize} \end{lem} $\blacktriangleright$ We can now proceed to the proof of (6) ${\Rightarrow}$ (1) in Theorem \ref{thm:equiv}. \begin{proof}[Proof of (6) ${\Rightarrow}$ (1)] We fix a compact subset $K$ of ${\mathcal G}$ and $\varepsilon >0$. We want to find a continuous bounded positive definite kernel $k\in {\mathcal C}_t({\mathcal G}*_r{\mathcal G})$ such that $k(\gamma,\gamma) \leq 1$ for all $\gamma\in {\mathcal G}$ and $\abs{k(\gamma,\gamma_1) - 1} \leq \varepsilon$ whenever $\gamma^{-1}\gamma_1 \in K$ (see Theorem \ref{prop:amen_inf}). We set ${\mathcal E} = \ell^2_{{\mathcal C}_0(X)}({\mathcal G})$ with $X = {\mathcal G}^{(0)}$. Recall that $\lambda^x$ is the counting measure on ${\mathcal G}^x$. We first choose a bounded, continuous positive definite function $f$ on ${\mathcal G}\times {\mathcal G}$, properly supported, such that $|f(\gamma,\gamma) - 1| \leq \varepsilon/2$ for $\gamma \in K$ and $f(x,y)\leq 1$ for $(x,y)\in X\times X$. By Lemma \ref{lem:Kirch} the regular representation $\Lambda$ is nuclear. Then, using Theorem \ref{cor:approx}, we find a compactly supported completely positive contraction $\Phi: C^*_{r}({\mathcal G}) \to {\mathcal B}_{{\mathcal C}_0(X)}({\mathcal E})$ such that\footnote{We write $f_\gamma$ instead of $\Lambda(f_\gamma)$ for simplicity of notation.} $$\norm{\Phi(f_\gamma) - f_\gamma} \leq \varepsilon/2$$ for $\gamma \in K$. We also choose a continuous function $\xi : X \to [0,1]$ with compact support such that $\xi(x) = 1$ if $x\in s(K)\cup r(K)$. Let $(\gamma,\gamma_1)\in {\mathcal G}*_r{\mathcal G}$. We choose an open bisection $S$ such that $\gamma\in S$ and a continuous function $\varphi: X\to [0,1]$ with compact support in $r(S)$ such that $\varphi = 1$ on a neighborhood of $r(\gamma)$. We denote by $\xi_\varphi$ the continuous function on ${\mathcal G}$ with compact support (and thus $\xi_\varphi \in {\mathcal E}$) such that $$\xi_\varphi(\gamma') = 0 \,\,\hbox{if}\,\, \gamma'\notin S,\quad \xi_\varphi(\gamma') = \varphi\circ r(\gamma')\xi\circ s(\gamma') \,\,\hbox{if}\,\,\gamma'\in S.$$ Note that $\norm{\xi_\varphi}_{\mathcal E} \leq 1$. We define $\xi_{\varphi_1}$ similarly with respect to $\gamma_1$, using an open bisection $S_1$ containing $\gamma_1$ and a function $\varphi_1$. Then we set \begin{align*} k(\gamma,\gamma_1) &= \langle \xi_\varphi, \Phi(f_{\gamma^{-1}\gamma_{1}})\xi_{\varphi_1}\rangle (r(\gamma))\\ &= \xi\circ s(\gamma)\big(\Phi(f_{\gamma^{-1}\gamma_{1}})\xi_{\varphi_1}\big)(\gamma). \end{align*} We observe that $k(\gamma,\gamma_1)$ does not depend on the choices of $S,\varphi, S_1,\varphi_1$. Since $\gamma \mapsto f_\gamma$ is a continuous positive definite function from ${\mathcal G}$ into $C^*_{r}({\mathcal G})$ and since $\Phi$ is completely positive, we see that $k$ is a continuous and positive definite kernel. Moreover, there is a compact subset $K_1$ of ${\mathcal G}$ such that $\Phi(f_\gamma) = 0$ when $\gamma\notin K_1$, because $\Phi$ is compactly supported and $f$ is properly supported. It follows that $k$ is supported in a tube. We fix $(\gamma,\gamma_1)\in {\mathcal G}*_r{\mathcal G}$ such that $\gamma^{-1}\gamma_1 \in K$.
Then we have $$\abs{k(\gamma,\gamma_1) -1}\leq \varepsilon/2 + \abs{ \langle \xi_\varphi,f_{\gamma^{-1}\gamma_{1}}\xi_{\varphi_1} \rangle(r(\gamma)) -1},$$ and $$\langle \xi_\varphi, f_{\gamma^{-1}\gamma_{1}}\xi_{\varphi_1}\rangle(r(\gamma)) = \xi\circ s(\gamma)\,\xi\circ s(\gamma_1)\,f(\gamma^{-1}\gamma_{1},\gamma^{-1}\gamma_{1}).$$ Observe that $s(\gamma)\in r(K)$ and $s(\gamma_1)\in s(K)$ and therefore $\xi\circ s(\gamma) = 1= \xi\circ s(\gamma_1)$. It follows that $$\abs{k(\gamma,\gamma_1) -1}\leq \varepsilon/2 + \abs{f(\gamma^{-1}\gamma_{1},\gamma^{-1}\gamma_{1})-1}\leq \varepsilon.$$ To end the proof it remains to check that $k$ is a bounded kernel. Since this kernel is positive definite, it suffices to show that $\gamma\mapsto k(\gamma,\gamma)$ is bounded on ${\mathcal G}$. We have $$k(\gamma,\gamma) = \langle \xi_\varphi,\Phi(f_{s(\gamma)})\xi_{\varphi}\rangle (r(\gamma)) \leq \norm{\Phi(f_{s(\gamma)})}.$$ Our claim follows, since $\Phi(f_{s(\gamma)})= 0$ when $s(\gamma)\notin K_1\cap X$ and $x\mapsto f_x$ is continuous from the compact set $K_1\cap X$ into $C_r^{*}({\mathcal G})$. \end{proof} \begin{rem}\label{rem:partial} Let $\alpha : \Gamma\curvearrowright X$ be an action of a discrete group on a locally compact space $X$. Since the groupoid $X\rtimes \Gamma$ is inner amenable, Theorem \ref{thm:equiv} applies. Therefore $C_0(X)\rtimes_r \Gamma$ is exact if and only if the groupoid $X\rtimes \Gamma$ is KW-exact. More generally, this holds for any partial action such that the domains of the partial homeomorphisms $\alpha_t$ are closed (in addition to being open). Indeed it is not difficult to show that the groupoid $X\rtimes \Gamma$ is inner amenable (directly, or using the fact that such partial actions admit a Hausdorff globalisation \cite[Proposition 5.7]{Exel_Book}). For general partial actions of $\Gamma$ the situation is not clear. We do not know whether $X\rtimes \Gamma$ is inner amenable in this case. If $\Gamma$ is exact the semidirect product groupoid $X\rtimes \Gamma$ is strongly amenable at infinity \cite[Proposition 4.23]{AD16} and therefore $C^*_{r}(X\rtimes \Gamma)$ is exact. This had been previously shown in \cite[Corollary 2.2]{AEK} by using Fell bundles. \end{rem} \subsection{Inner exactness} We now introduce a very weak notion of exactness. Let us first recall some facts. Let ${\mathcal G}$ be a locally compact groupoid. Recall that a subset $E$ of $X= {\mathcal G}^{(0)}$ is said to be {\it invariant} if, for every $\gamma\in{\mathcal G}$, we have $s(\gamma)\in E$ if and only if $r(\gamma)\in E$. Let $F$ be a closed invariant subset of $X$ and set $U = X\setminus F$. It is well-known that the inclusion $\iota : {\mathcal C}_c({\mathcal G}(U) )\to {\mathcal C}_c({\mathcal G})$ extends to an injective homomorphism from $C^*({\mathcal G}(U))$ into $C^*({\mathcal G})$ and from $C^*_{r}({\mathcal G}(U))$ into $C^*_{r}({\mathcal G})$. Similarly, the restriction map $\pi: {\mathcal C}_c({\mathcal G}) \to {\mathcal C}_c({\mathcal G}(F))$ extends to a surjective homomorphism from $C^*({\mathcal G})$ onto $C^*({\mathcal G}(F))$ and from $C^*_{r}({\mathcal G})$ onto $C^*_{r}({\mathcal G}(F))$. Moreover the sequence $$0 \rightarrow C^*({\mathcal G}(U)) \rightarrow C^*({\mathcal G}) \rightarrow C^*({\mathcal G}(F)) \rightarrow 0$$ is exact. For these facts, we refer to \cite[page 102]{Ren_book}, or to \cite[Proposition 2.4.2]{Ram} for a detailed proof.
On the other hand, the corresponding sequence \begin{equation}\label{eq:ie} 0 \rightarrow C^*_{r}({\mathcal G}(U)) \rightarrow C^*_{r}({\mathcal G}) \rightarrow C^*_{r}({\mathcal G}(F)) \rightarrow 0 \end{equation} with respect to the reduced groupoid $C^*$-algebras is not always exact, as shown in \cite[Remark 4.10]{Ren91} (see also Proposition \ref{prop:HLS} below). \begin{defn}\label{def:inamen} A locally compact groupoid such that the sequence \eqref{eq:ie} is exact for every closed invariant subset $F$ of $X$ is called KW-{\it inner exact} or simply {\it inner exact}.\index{inner exact groupoid} \end{defn} We will see that the class of inner exact groupoids plays a role in the study of the (WCP). It is also interesting in itself and now appears in other contexts (see for instance \cite{BL}, \cite{BCS}, \cite{BEM}). This class is quite large. It includes all locally compact groups and more generally the groupoids that act with dense orbits on their space of units. This class is stable under equivalence of groupoids \cite[Theorem 6.1]{Lal17}. Of course, KW-exact groupoids are inner exact. \subsection{The case of group bundle groupoids} We first need to recall some definitions. \begin{defn}\label{def:bundle_C*}\cite[Definition 1.1]{KW95} A {\it field} (or bundle) {\it of $C^*$-algebras over a locally compact space} $X$ is a triple ${\mathcal A} = (A, \set{\pi_x: A \to A_x}_{x\in X},X)$ where $A$, $A_x$ are $C^*$-algebras, and where $\pi_x$ is a surjective $*$-homomorphism such that \begin{itemize} \item[(i)] $\set{\pi_x: x\in X}$ is faithful, that is, $\norm{a} = \sup_{x\in X}\norm{\pi_x(a)}$ for every $a\in A$; \item[(ii)] for $f\in {\mathcal C}_0(X)$ and $a\in A$, there is an element $fa\in A$ such that $\pi_x(fa) = f(x)\pi_x(a)$ for $x\in X$; \item[(iii)] the inclusion of ${\mathcal C}_0(X)$ into the center of the multiplier algebra of $A$ is non-degenerate. \end{itemize} We say that the field is (usc) {\it upper semi-continuous} (resp. (lsc) {\it lower semi-continuous}) if the function $x\mapsto \norm{\pi_x(a)}$ is upper semi-continuous (resp. lower semi-continuous) for every $a\in A$. If for each $a\in A$, the function $x\mapsto \norm{\pi_x(a)}$ is in ${\mathcal C}_0(X)$, we will say that ${\mathcal A}$ is a {\it continuous field of $C^*$-algebras}\footnote{In \cite{KW95}, this is called a continuous bundle of $C^*$-algebras.}. \end{defn} Recall that a ${\mathcal C}_0(X)$-algebra $A$ is a $C^*$-algebra equipped with a non-degenerate homomorphism from ${\mathcal C}_0(X)$ into the center of the multiplier algebra of $A$ (see \cite[Appendix C.1]{Will}). For $x\in X$ we denote by ${\mathcal C}_x(X)$ the subalgebra of ${\mathcal C}_0(X)$ of functions that vanish at $x$. Note that a ${\mathcal C}_0(X)$-algebra $A$ gives rise to a usc field of $C^*$-algebras with fibres $A_x = A/{\mathcal C}_x(X)A$ (see \cite[Proposition 1.2]{Rie} or \cite[Appendix C.2]{Will}). We will use the following characterization of usc fields of $C^*$-algebras. \begin{lem}\cite[Lemma 2.3]{KW95}, \cite[Lemma 9.4]{AD16}\label{lem:usc} Let ${\mathcal A}$ be a field of $C^*$-algebras on a locally compact space $X$. The function $x\mapsto \norm{\pi_x(a)}$ is upper semi-continuous at $x_0$ for every $a\in A$ if and only if $\ker \pi_{x_0}= {\mathcal C}_{x_0}(X)A$. \end{lem} We apply this fact to the reduced $C^*$-algebra of a group bundle groupoid ${\mathcal G}$ as defined in Example \ref{exs:groupoids} (b).
The structure of ${\mathcal C}_0(X)$-algebra of the $C^*$-algebra $C^*_{r}({\mathcal G})$ is defined by $(f h)(\gamma) = f\circ r(\gamma) h(\gamma)$ for $f\in {\mathcal C}_0(X)$ and $h\in {\mathcal C}_c({\mathcal G})$ (see \cite[Lemma 2.2.4]{Ram}, \cite[\S 5]{LR}). We set $U_x = X \setminus \set{x}$. Then we have $C^*_{r}({\mathcal G}(U_x)) = {\mathcal C}_x(X) C^*_{r}({\mathcal G})$. We get that $C^*_{r}({\mathcal G})$ is a usc field of $C^*$-algebras over $X$ with fibre $C^*_{r}({\mathcal G})/{\mathcal C}_x(X) C^*_{r}({\mathcal G})= C^*_{r}({\mathcal G})/C^*_{r}({\mathcal G}(U_x))$ at $x$. On the other hand, $(C^*_{r}({\mathcal G}), \set{\pi_x : C^*_{r}({\mathcal G})\to C^*_{r}({\mathcal G}(x))})$ is lower semi-continuous (see \cite[Th\'eor\`eme 2.4.6]{Ram} or \cite[Theorem 5.5]{LR}). Then it follows from Lemma \ref{lem:usc} that the function $x\mapsto \norm{\pi_x(a)}$ is continuous at $x_0$ for every $a\in C^*_{r}({\mathcal G})$ if and only if the following sequence is exact: $$0\rightarrow C^*_{r}({\mathcal G}(U_{x_0})) \rightarrow C^*_{r}({\mathcal G}) \stackrel{\pi_{x_0}}{\rightarrow} C^*_{r}({\mathcal G}(x_0)) \rightarrow 0.$$ \begin{prop}\label{prop:innexact} Let ${\mathcal G}$ be a group bundle groupoid on $X$. The following conditions are equi\-valent: \begin{itemize} \item[(i)] ${\mathcal G}$ is inner exact; \item[(ii)] for every $x\in X$ the following sequence is exact: $$ 0 \rightarrow C^*_{r}({\mathcal G}(X\setminus \set{x})) \rightarrow C^*_{r}({\mathcal G}) \rightarrow C^*_{r}({\mathcal G}(x)) \rightarrow 0. $$ \item[(iii)] $C^*_{r}({\mathcal G})$ is a continuous field of $C^*$-algebras over $X$ with fibres $C^*_{r}({\mathcal G}(x))$. \end{itemize} \end{prop} \begin{proof} (i) $\Rightarrow$ (ii) is obvious and (ii) $\Rightarrow$ (iii) is a particular case of the previous observation. Assume that (iii) holds true and, given an invariant closed subset $F$ of $X$, let us show that the following sequence is exact: $$ 0 \rightarrow C^*_{r}({\mathcal G}(X\setminus F)) \rightarrow C^*_{r}({\mathcal G}) \rightarrow C^*_{r}({\mathcal G}(F)) \rightarrow 0. $$ Let $a\in C^*_{r}({\mathcal G})$ be such that $\pi_x(a) = 0$ for every $x\in F$. Let $\varepsilon >0$ be given. Then $K = \set{x\in X : \norm{\pi_x(a)} \geq \varepsilon}$ is a compact subset of $X$ with $K\cap F = \emptyset$. Take $f\in {\mathcal C}_0(X)$ with $f: X\to [0,1]$, $f(x) = 1$ for $x\in K$ and $f(x) = 0$ for $x\in F$. We have $\norm{a-fa}\leq \varepsilon$, since $\norm{a-fa} = \sup_{x\in X}\abs{1-f(x)}\norm{\pi_x(a)}$, and $fa\in C^*_{r}({\mathcal G}(X\setminus F))$. Since $\varepsilon$ is arbitrary and $C^*_{r}({\mathcal G}(X\setminus F))$ is closed, we get $a\in C^*_{r}({\mathcal G}(X\setminus F))$. \end{proof} \subsection{The case of HLS groupoids} \label{sec:HLS} The following class of \'etale group bundle groupoids (that we call HLS-groupoids)\index{HLS-groupoid} was introduced by Higson, Lafforgue and Skandalis \cite{HLS}, in order to provide examples of groupoids for which the Baum-Connes conjecture fails. We consider an infinite discrete group $\Gamma$ and a decreasing sequence $(N_k)_{k\in {\mathbb{N}}}$ of normal subgroups of $\Gamma$ of finite index. We set $\Gamma_\infty = \Gamma$ and $\Gamma_k = \Gamma/N_k$, and we denote by $q_k : \Gamma \to \Gamma_k$ the quotient homomorphism for $k$ in the Alexandroff compactification ${\mathbb{N}}^+$ of ${\mathbb{N}}$. Let ${\mathcal G}$ be the quotient of ${\mathbb{N}}^+ \times\Gamma$ with respect to the equivalence relation $$(k,t)\sim (l,u) \,\,\,\hbox{if} \,\,\, k=l \,\,\,\hbox{and}\,\,\, q_k(t) = q_k(u).$$ Then ${\mathcal G}$ is the bundle of groups $k\mapsto \Gamma_k$ over ${\mathbb{N}}^+$.
The range and source maps are given by $r([k,t]) = s([k,t]) = k$, where $[k,t] = (k,q_k(t))$ is the equivalence class of $(k,t)$. We endow ${\mathcal G}$ with the quotient topology. Then ${\mathcal G}$ is Hausdorff (and obviously an \'etale groupoid) if and only if for every $s\not= 1$ there exists $k_0$ such that $s\notin N_k$ for $k\geq k_0$ (hence, $\Gamma$ is residually finite). We keep this assumption. Such examples are provided by taking $\Gamma = \hbox{SL}_n({\mathbb{Z}})$ and $\Gamma_k = \hbox{SL}_n({\mathbb{Z}}/k{\mathbb{Z}})$, for $k\geq 2$. For these HLS groupoids, the exactness of $C^*_{r}({\mathcal G})$ is a very strong condition which suffices to imply the amenability of $\Gamma$ as shown by Willett in \cite{Wil15}. \begin{prop}\label{prop:HLS} Let us keep the above notation. We assume that $\Gamma$ is finitely generated. Then the following conditions are equivalent: \begin{itemize} \item[(1)] $\Gamma$ is amenable; \item[(2)] ${\mathcal G}$ is amenable; \item[(3)] ${\mathcal G}$ is KW-exact; \item[(4)] ${\mathcal G}$ is inner exact; \item[(5)] the sequence $0 \longrightarrow C^*_{r}({\mathcal G}({\mathbb{N}}))\longrightarrow C^*_{r}({\mathcal G}) \longrightarrow C^*_{r}({\mathcal G}(\infty)) \longrightarrow 0$ is exact; \item[(5')] $C^*_{r}({\mathcal G})$ is a continuous field of $C^*$-algebras with fibres $C^*_{r}({\mathcal G}(x))$; \item[(6)] $C^*_{r}({\mathcal G})$ is nuclear; \item[(7)] $C^*_{r}({\mathcal G})$ is exact. \end{itemize} \end{prop} \begin{proof} The equivalence between (1) and (2) follows for instance from \cite[Lemma 2.4]{Wil15}. That (2) ${\Rightarrow}$ (3) ${\Rightarrow}$ (4) ${\Rightarrow}$ (5) is obvious and by Proposition \ref{prop:innexact} we have (5) ${\Rightarrow}$ (5'). Let us prove that (5') ${\Rightarrow}$ (1). Assume by contradiction that $\Gamma$ is not amenable. We fix a symmetric probability measure $\mu$ on $\Gamma$ with a finite support that generates $\Gamma$ and we choose $n_0$ such that the restriction of $q_n$ to the support of $\mu$ is injective for $n\geq n_0$. We take $a\in{\mathcal C}_c({\mathcal G})\subset C^*_{r}({\mathcal G})$ such that $a(\gamma) = 0$ except for $\gamma = (n,q_n(s))$ with $n\geq n_0$ and $s\in \supp(\mu)$ where $a(\gamma) = \mu(s)$. Then $\pi_n(a) = 0$ if $n<n_0$ and $\pi_n(a) = \lambda_{\Gamma_n}(\mu) \in C_{r}^*(\Gamma_n)= C^*_{r}({\mathcal G}(n))$ if $n\geq n_0$, where $\lambda_{\Gamma_n}$ is the quasi-regular representation of $\Gamma$ in $\ell^2(\Gamma_n)$. By Kesten's result \cite{Kes1, Kes} on spectral radii relative to symmetric random walks, we have $\norm{\lambda_{\Gamma_n}(\mu)}_{C_{r}^*(\Gamma_n)} =1$ for ${\mathbb{N}} \ni n\geq n_0$ and $\norm{\lambda_{\Gamma_\infty}(\mu)}_{C_{r}^*(\Gamma_\infty)} < 1$ since $\Gamma$ is not amenable. It follows that $C^*_{r}({\mathcal G})$ is not a continuous field of $C^*$-algebras with fibres $C^*_{r}({\mathcal G}(n))$ on ${\mathbb{N}}^+$, a contradiction. We know that (2) ${\Rightarrow}$ (6) ${\Rightarrow}$ (7). For the fact that (7) ${\Rightarrow}$ (1) see \cite[Lemma 3.2]{Wil15}. \end{proof} Given a group bundle groupoid it may happen that ${\mathcal G}(x)$ is a $C^*$-exact group for every $x\in {\mathcal G}^{(0)}$ whereas ${\mathcal G}$ is not $C^*$-exact.
Indeed if ${\mathcal G}$ is an HLS groupoid associated with a group $\Gamma$ that has Kazhdan's property (T), then the sequence $$0 \longrightarrow C^*_{r}({\mathcal G}({\mathbb{N}}))\longrightarrow C^*_{r}({\mathcal G}) \longrightarrow C^*_{r}({\mathcal G}(\infty)) \longrightarrow 0$$ is not exact (it is not even exact in $K$-theory!) \cite{HLS}. As an example we can take the exact group $\Gamma = \hbox{SL}_3({\mathbb{Z}})$. The previous proposition shows that $C^*_{r}({\mathcal G})$ is not exact. Willett has given an even more surprising example with $\Gamma = {\mathbb{F}}_2$, the free group with two generators (see below). \section{Weak containment, exactness and amenability} \begin{defn}\label{def:weakamen} Let ${\mathcal G}$ be a locally compact groupoid. We say that ${\mathcal G}$ has the {\it weak containment property}, \index{weak containment property}(WCP) \index{(WCP)} in short, if the canonical surjection from its full $C^*$-algebra $C^*({\mathcal G})$ onto its reduced $C^*$-algebra $C^*_{r}({\mathcal G})$ is injective, {\it i.e.}, the two completions $C^*({\mathcal G})$ and $C^*_{r}({\mathcal G})$ of ${\mathcal C}_c({\mathcal G})$ are the same. \end{defn} A very useful theorem of Hulanicki \cite{Hul, Hul66} asserts that a locally compact group $G$ has the (WCP) if and only if it is amenable. While it has long been known that every amenable locally compact groupoid has the (WCP) \cite{Ren91, AD-R}, whether the converse holds was a long-standing open problem (see \cite[Remark 6.1.9]{AD-R}). Remarkably, in 2015 Willett \cite{Wil15} published a very nice example showing that a group bundle groupoid may have the (WCP) without being amenable. His example is an HLS groupoid where $(N_n)$ is a well chosen decreasing sequence of finite index normal subgroups of the free group ${\mathbb{F}}_2$ with two generators. Therefore the groupoid version of Hulanicki's theorem is not true in general. However there are many positive results, all of which involve an additional exactness assumption. A first result in this direction is due to Buneci \cite{Bun}. She proved that a second countable locally compact transitive groupoid ${\mathcal G}$ having the (WCP) is measurewise amenable. The (topological) amenability of ${\mathcal G}$ can also be proved by observing that it is preserved under equivalence of groupoids \cite[Theorem 2.2.17]{AD-R}, as well as the (WCP) \cite[Theorem 17]{SW12}, and using the fact that ${\mathcal G}$ is equivalent to any of its isotropy groups by transitivity \cite{MRW}. It is only in 2014 that a second result appeared, linking amenability and the (WCP): \begin{thm}\label{Mat}\cite{Mat} Let $\Gamma$ be a discrete group acting by homeomorphisms on a compact space $X$. Then the semidirect product groupoid is amenable if and only if it has the (WCP) and $\Gamma$ is exact. \end{thm} Note that the exactness of $\Gamma$ is equivalent to the strong amenability at infinity of the groupoid $X\rtimes \Gamma$ since $X$ is compact and $\Gamma$ is discrete (see \cite[Proposition 4.3 (i)]{AD16}). Recently, the above theorem has been extended by Kranz as follows. \begin{thm}\label{Kranz} \cite{Kra} Let ${\mathcal G}$ be an \'etale groupoid. Then ${\mathcal G}$ is amenable if and only if it has the (WCP) and is strongly amenable at infinity. \end{thm} Assuming that ${\mathcal G}$ has the (WCP) and is strongly amenable at infinity, Kranz's strategy to prove that ${\mathcal G}$ is amenable is the same as that of \cite{Mat}, but with additional technical difficulties.
It consists in showing that the canonical inclusion of $C^*_{r}({\mathcal G})$ into its bidual $C^*_{r}({\mathcal G})^{**}$ is nuclear. Then by \cite[Proposition 2.3.8]{BO} one sees that $C^*_{r}({\mathcal G})$ is nuclear and by \cite[Theorem 5.6.18]{BO} it follows that the groupoid ${\mathcal G}$ is amenable. The delicate step, which requires in a crucial way that ${\mathcal G}$ is \'etale, is to show the existence of a completely positive map $\phi: C^*_{r}(\beta_r{\mathcal G} \rtimes {\mathcal G}) \to C^*_{r}({\mathcal G})^{**}$ whose restriction to $C^*_{r}({\mathcal G})$ is the inclusion of $C^*_{r}({\mathcal G})$ into its bidual. Since $C^*_{r}(\beta_r{\mathcal G} \rtimes {\mathcal G})$ is nuclear (see \cite[Proposition 7.2]{AD16}), this inclusion is nuclear. By a different method the following extension of Theorem \ref{Mat} was obtained in \cite{BEW20}. Note that, unlike the case where $X$ is compact, it is not true in general that $G$ is KW-exact when $X\rtimes G$ is amenable. \begin{thm} \cite[Theorem 5.15]{BEW20} Let $G\curvearrowright X$ be a continuous action of a locally compact group $G$ on a locally compact space $X$. We assume that $G$ is KW-exact and that $X\rtimes G$ has the (WCP). Then the groupoid $X\rtimes G$ is amenable. \end{thm} It is interesting to note that the behaviour is different for group actions on non-commutative $C^*$-algebras. For instance, a surprising example of a non-amenable action of the exact group $G= PSL(2,{\mathbb{C}})$ on the $C^*$-algebra of compact operators which has the (WCP) was constructed in \cite[Proposition 5.25]{BEW20}. It would be interesting to construct an example with an exact discrete group. For group bundle groupoids we have the following easy result. \begin{prop}\label{prop:WCPbundle} Let ${\mathcal G}$ be a second countable locally compact group bundle groupoid over a locally compact space $X$. Then ${\mathcal G}$ is amenable if and only if it has the (WCP) and is inner exact. \end{prop} \begin{proof} If ${\mathcal G}$ is amenable, it has the (WCP) and it is strongly amenable at infinity (Examples \ref{rem:invGE} (a)), hence KW-exact and in particular inner exact. Conversely, assume that ${\mathcal G}$ has the (WCP) and is inner exact and let $x\in X$. We set $U_x = X\setminus \set{x}$. In the commutative diagram $$\xymatrix{ 0\ar[r] &C^*({\mathcal G}(U_x)) \ar[d] \ar[r] & C^*({\mathcal G})\ar[d]^{\lambda} \ar[r] &C^* ({\mathcal G}(x)) \ar[r]\ar[d]^{\lambda_{{\mathcal G}(x)}} &0\\ 0\ar[r] & C^*_{r}({\mathcal G}(U_x)) \ar[r] & C^*_{r}({\mathcal G})\ar[r]^{\pi_x}& C^*_r ({\mathcal G}(x)) \ar[r] &0}$$ both sequences are exact and $\lambda$ is injective. Chasing through the diagram we see that $\lambda_{{\mathcal G}(x)}$ is injective ({\it i.e.}, the group ${\mathcal G}(x)$ is amenable). This ends the proof since, by \cite[Theorem 3.5]{Ren13}, the group bundle groupoid ${\mathcal G}$ is amenable if and only if ${\mathcal G}(x)$ is amenable for every $x\in X$. \end{proof} The cases of transitive groupoids and of group bundle groupoids are included in the following result of B\"onicke. His nice elementary proof is reproduced in \cite[Theorem 10.5]{AD16}. \begin{thm}\cite{Bon}\label{thm:bon} Let ${\mathcal G}$ be a second countable locally compact groupoid such that the orbit space ${\mathcal G}\setminus {\mathcal G}^{(0)}$ equipped with the quotient topology is $T_0$. Then the following conditions are equivalent: \begin{itemize} \item[(i)] ${\mathcal G}$ is amenable; \item[(ii)] ${\mathcal G}$ has the (WCP) and is inner exact.
\end{itemize} \end{thm} \section{Open questions} \subsection{About amenability at infinity and inner amenability} \begin{itemize} \item[(1)] The notion of strong amenability at infinity has proven to be more useful than amenability at infinity. But are the two notions equivalent? Note that by Theorem \ref{thm:equiv} this is true for every second countable inner amenable \'etale groupoid. \item[(2)] It would be interesting to understand better the notion of inner amenability for locally compact groupoids. Is it invariant under equivalence of groupoids? Are there \'etale groupoids that are not inner amenable? In particular, if $G$ is a discrete group acting partially on a locally compact space $X$, is it true that the corresponding partial transformation groupoid is inner amenable? This is true when the domains of the partial homeomorphisms are both open and closed, but what happens in general? It would also be interesting to study the case of HLS groupoids. \end{itemize} \subsection{About exactness for groups} \begin{itemize} \item[(3)] Let us denote by $[InnAmen]$ the class of locally compact inner amenable groups and by $[Tr]$ the class of groups whose reduced $C^*$-algebra has a tracial state. Groups $G$ in each of these two classes and such that $C^*_r(G)$ is nuclear are amenable \cite{AD02}, \cite{Ng}. Almost connected groups are amenable at infinity \cite[Theorem 6.8]{KW99bis}, \cite[Proposition 3.3]{AD02}. Their full $C^*$-algebras are nuclear. So they are in $[InnAmen]$ or in $[Tr]$ if and only if they are amenable. This latter observation applies also to groups of type I. We denote by $[{\mathcal C}]$ the class of locally compact groups for which $C^*$-exactness is equivalent to KW-exactness. It contains $[InnAmen]$ and $[Tr]$. Almost connected groups are in $[{\mathcal C}]$ since they are KW-exact. The case of groups of type I is not clear. Of course they are $C^*$-exact, but are they KW-exact? In support of this question we point out that it is conjectured that every second countable locally compact group of type I has a cocompact amenable subgroup \cite{CKM}, a property which implies amenability at infinity \cite[Proposition 5.2.5]{AD-R}. It would be interesting to find more examples in the class $[{\mathcal C}]$. It seems difficult to find examples not in $[{\mathcal C}]$. Note that this class is preserved by extensions $$0\to N\to G\to G/N\to 0$$ where $N$ is amenable. Indeed, since $N$ is amenable, $C^*_{r}(G/N)$ is a quotient of $C^*_{r}(G)$ (see the proof of Lemma 3.5 in \cite{CZ}). Assume that $G$ is $C^*$-exact and that $G/N\in [{\mathcal C}]$. Then the group $G/N$ is $C^*$-exact and therefore KW-exact. It follows that $G$ is KW-exact since the class of KW-exact groups is preserved under extension \cite[Proposition 5.1]{KW99bis}. \item[(4)] There are examples of non-inner amenable groups in $[Tr]$ (see \cite[Remark 2.6 (ii)]{FSW}, \cite[Example 4.15]{Man}). But are there inner amenable groups which are not in $[Tr]$? Note that the subclass $[IN]$ of $[InnAmen]$ is contained in $[Tr]$ \cite[Theorem 2.1]{FSW}. Let us recall that a locally compact group $G$ is in $[IN]$ if its identity $e$ has a compact neighborhood invariant under conjugation. By \cite[Proposition 4.2]{Tay}, this is equivalent to the existence of a normal tracial state on the von Neumann algebra $L(G)$ of $G$. Since $C^*_{r}(G)$ is weakly dense in $L(G)$, the conclusion follows.
Let us observe that the existence of a locally compact group in $[InnAmen]\setminus [Tr]$ is equivalent to the existence of a totally disconnected locally compact group in $[InnAmen]\setminus [Tr]$. Indeed let $G$ be a locally compact group in this set. Let $G_0$ be the connected component of the identity. Then $G_0$ is inner amenable as well as $G/G_0$ (see \cite[Corollary 3.3]{CT} and \cite[Proposition 6.2]{LP}). Since a connected inner amenable group is amenable (by \cite[Theorem 5.8]{AD02}), we see that $G_0$ is amenable. It follows that $C^*_{r}(G/G_0)$ is a quotient of $C^*_{r}(G)$ and therefore $G/G_0$ is not in $[Tr]$. As a consequence of this observation we are left with the following problem: does there exist a totally disconnected locally compact inner amenable group without open normal amenable subgroups? \end{itemize} \subsection{About exactness for groupoids} \begin{itemize} \item[(5)] In \cite{KW95}, Kirchberg and Wassermann have constructed examples of continuous fields of exact $C^*$-algebras on a locally compact space, whose $C^*$-algebra of continuous sections vanishing at infinity is not exact. Are there examples of \'etale group bundle groupoids ${\mathcal G}$, whose reduced $C^*$-algebra is not exact whereas $$(C^*_{r}({\mathcal G}), \set{\pi_x : C^*_{r}({\mathcal G}) \to C^*_{r}({\mathcal G}(x))}_{x\in {\mathcal G}^{(0)}},{\mathcal G}^{(0)})$$ is a continuous field of exact $C^*$-algebras (compare with Proposition \ref{prop:HLS})? A similar question is asked in \cite[Question 3]{Lal17}: if ${\mathcal G}$ is an inner exact locally compact group bundle groupoid, whose fibres are KW-exact groups, is it true that ${\mathcal G}$ is KW-exact? \item[(6)]Let ${\mathcal G}$ be a locally compact groupoid. We have \begin{center} Amenability at infinity $\xLongrightarrow{\text{(1)}}$ KW-exactness $\xLongrightarrow{\text{(2)}}$ $C^*$-exactness. \end{center} Let us recap what is known about the reversed arrows and what is still open. When ${\mathcal G}$ is an \'etale inner amenable groupoids, the above three notions of exactness are equivalent (Theorem \ref{thm:equiv}). Does this fact extend to any inner amenable locally compact groupoid? Without the assumption of inner amenability, nothing is known. As already said, it seems difficult to find an example of a locally compact group which is $C^*$-exact but not KW-exact. Could it be easier to find an example in the context of locally compact groupoids? If $G$ is a KW-exact locally compact group, every semidirect product groupoid (relative to a global or to a partial action) is strongly amenable at infinity \cite[Proposition 4.3, Proposition 4.23]{AD16}, and therefore KW-exact. Does KW-exactness of $X\rtimes G$ imply that $X\rtimes G$ is amenable at infinity in general? Note that the notion of exactness for a semidirect product groupoid $X\rtimes \Gamma$, where $\Gamma$ is a discrete group is not ambiguous by Theorem \ref{thm:equiv}. \end{itemize} \subsection{(WCP) vs amenability} \begin{itemize} \item[(7)] Are there examples of inner exact groupoids ${\mathcal G}$ which have the (WCP) without being amenable? By \cite{Bon} one should look for examples for which the orbit space ${\mathcal G}\setminus {\mathcal G}^{(0)}$ is not $T_0$. \item[(8)] We have seen that if an \'etale locally compact groupoid ${\mathcal G}$ is assumed to be strongly amenable at infinity, the (WCP) implies its amenability (Theorem \ref{Kranz}). Is it true in general, or at least for a semidirect product groupoid $X\rtimes G$ with $X$ and $G$ locally compact? 
Recall that this holds when $G$ is KW-exact \cite[Theorem 5.15]{BEW20}, a property stronger than strong amenability at infinity. \item[(9)] Let $G$ be a discrete group and let $X = \partial G = \beta G\setminus G$ be its boundary equipped with the natural action of $G$. The weak containment property for $\partial G \rtimes G$ implies that this groupoid is amenable. Indeed, the (WCP) implies that the sequence $$0\longrightarrow C^*_{r}(G\rtimes G) \longrightarrow C^*_{r}( \beta G\rtimes G)\longrightarrow C^*_{r}(\partial G\rtimes G)\longrightarrow 0$$ is exact. Roe and Willett have proven in \cite{RW} that this exactness property implies that $G$ is exact. It follows that $G\curvearrowright \beta G$ is amenable and therefore $G\curvearrowright \partial G$ is amenable too. Can we replace $\partial G$ by $\beta G$, that is, if $G\curvearrowright \beta G$ has the (WCP), can we deduce that $G\curvearrowright \beta G$ is amenable? This is asked in \cite[Remark 4.10]{BEW20bis}. \end{itemize} \noindent{\em Acknowledgements}. I thank Julian Kranz, Jean Renault and the referee for useful remarks and suggestions. \noindent {\bf Addendum.} A construction due to Suzuki \cite{Suz} gives an example of a totally disconnected locally compact inner amenable group without open normal amenable subgroups (or equivalently without tracial states), thus answering the question posed in point 7.2.(4) above. Suzuki considers a sequence $(\Gamma_n, F_n)$ of pairs of discrete groups, where $F_n$ is a finite group acting on $\Gamma_n$ in such a way that the reduced $C^*$-algebra of the semidirect product $\Gamma_n \rtimes F_n$ is simple. Let us set $\Gamma = \bigoplus_{i=1}^{\infty}\Gamma_i$ and $K= \prod_{i=1}^\infty F_i$. Let $K$ act on $\Gamma$ component-wise and set $G = \Gamma \rtimes K$. Then Suzuki shows that $C^*_{r}(G)$ is simple and that the Plancherel weight is the unique lsc semifinite tracial weight on $C^*_{r}(G)$. Since $G$ is not discrete this weight is not finite and therefore $G$ is not in $[Tr]$ (see also \cite[Remark 2.5]{FSW}). Let us show now that $G\in [InnAmen]$. We set $G_n = (\bigoplus_{i=1}^n \Gamma_i)\rtimes K$. Then $(G_n)$ is an increasing sequence of open subgroups of $G$ with $\bigcup_{n=1}^\infty G_n = G$. Since $G_n$ contains an open compact normal subgroup, namely $\prod_{i= n+1}^\infty F_i$, we see that there exists an inner invariant mean on $L^\infty(G_n)$ and therefore a mean $m_n$ on $L^\infty(G)$ which is invariant by conjugation under the elements of $G_n$. Any cluster point of the sequence $(m_n)$ in $L^\infty(G)^*$ gives an inner invariant mean on $L^\infty(G)$. \end{document}
\begin{document} \title{Categorification of some level two representations of $\mathfrak{sl} \baselineskip 14pt \def\mathbb C{\mathbb C} \def\mathbb R{\mathbb R} \def\mathbb N{\mathbb N} \def\mathbb Z{\mathbb Z} \def\mathbb Q{\mathbb Q} \def\mathbb F{\mathbb F} \def\lbrace{\lbrace} \def\rbrace{\rbrace} \def\otimes{\otimes} \def\longrightarrow{\longrightarrow} \def\mathfrak{sl}_n{\mathfrak{sl}_n} \def\mathrm{-mod}{\mathrm{-mod}} \def\mathrm{Hom}{\mathrm{Hom}} \def\mathrm{Id}{\mathrm{Id}} \def\binom#1#2{\left( \begin{array}{c} #1 \\ #2 \end{array}\right)} \def\mathcal{\mathcal} \def\mathfrak{\mathfrak} \def\mathfrak{sl}{\mathfrak{sl}} \def\yesnocases#1#2#3#4{\left\{ \begin{array}{ll} #1 & #2 \\ #3 & #4 \end{array} \right. } \newcommand{\define}{\stackrel{\mbox{\scriptsize{def}}}{=}} \def\sbinom#1#2{\left( \hspace{-0.06in}\begin{array}{c} #1 \\ #2 \end{array} \hspace{-0.06in} \right)} \def\drawing#1{\begin{center} \epsfig{file=#1} \end{center}} \def\hspace{0.05in}{\hspace{0.05in}} \def\hspace{0.1in}{\hspace{0.1in}} \def { } \def\leftrightmaps#1#2#3{\raise3pt\hbox{$\mathop{\,\,\hbox to #1pt{\rightarrowfill}\kern-#1pt\lower3.95pt\hbox to #1pt{\leftarrowfill}\,\,}\limits_{#2}^{#3}$}} \def\mc{H}{\mathcal{H}} \def\mc{A}{\mathcal{A}} \def\mc{C}{\mathcal{C}} \def\mc{E}{\mathcal{E}} \def\mc{F}{\mathcal{F}} \def\mc{K}{\mathcal{K}} \def\mc{R}{\mathcal{R}} \tableofcontents \section{Introduction} Let $W$ be the fundamental $n$-dimensional representation of the Lie algebra $\mathfrak{sl}_n.$ Let $\omega_1, \dots ,\omega_{n-1}$ be the fundamental dominant weights of $\mathfrak{sl}_n,$ the highest weights of the exterior powers $\Lambda^k W$ of $W.$ For the rest of this paper fix $k$ between $1$ and $n-1$ and denote by $V$ the irreducible representation with the highest weight $2 \omega_k.$ This representation is a direct summand of $S^2(\Lambda^k W).$ Decompose $V$ into weight spaces, $V= \oplusop{\lambda} V_{\lambda}.$ We call $\lambda$ \emph{admissible} if $V_{\lambda}\not= 0.$ Admissible weights are enumerated by sequences \begin{equation*} \lambda= (\lambda_1, \dots, \lambda_{n}), 0\le \lambda_i \le 2, \sum_{i=1}^n \lambda_i = 2k. \end{equation*} For an admissible $\lambda$ let $m= m(\lambda)$ be one-half of the number of 1s in the sequence $(\lambda_1, \dots, \lambda_{2n}).$ $m(\lambda)$ is an integer between $0$ and $\mathrm{min}(k,n-k).$ The dimension of $V_{\lambda}$ depends only on $m$ and is the $m$-th Catalan number $c_m=\frac{1}{m+1}\sbinom{2m}{m}.$ $E_i \in \mathfrak{sl}_{n}$ maps the weight space $V_{\lambda}$ to $V_{\lambda+\epsilon_i}$ where $\epsilon_i = (0, \dots, 0 , 1, -1, 0 , \dots , 0),$ and $1,-1$ are the $i$-th and $(i+1)$-th entries. $E_i V_{\lambda}=0$ if $\lambda+ \epsilon_i$ is not admissible. $F_i \in \mathfrak{sl}_{n}$ maps $V_{\lambda}$ to $V_{\lambda-\epsilon_i}.$ A level two representation is an irreducible representation with the highest weight $\omega_i + \omega_j.$ In particular, $V$ is a level two representation. Green [G] found a graphical interpretation of Lusztig-Kashiwara canonical basis [L1], [Ka], [L2] in level two representations of $U_q(\mathfrak{sl}_n)$ via a calculus of planar diagrams similar to the one of Temperley-Lieb. In this paper we categorify $V$ and Green's construction of the canonical basis in $V$ with the help of rings $H^m$ introduced in [K]. 
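To fix ideas, here is a small example; it is only an illustration and is not used later. Let $n=4$ and $k=2,$ so that admissible weights are the sequences $(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ with entries in $\{0,1,2\}$ adding up to $4.$ There are three types: \begin{equation*} \begin{array}{lll} (2,2,0,0) \mbox{ and its permutations:} & m(\lambda)=0, & \dim V_{\lambda}=c_0=1, \\ (2,1,1,0) \mbox{ and its permutations:} & m(\lambda)=1, & \dim V_{\lambda}=c_1=1, \\ (1,1,1,1): & m(\lambda)=2, & \dim V_{\lambda}=c_2=2, \end{array} \end{equation*} with $6,$ $12$ and $1$ weights of each type, respectively, so that the weight space dimensions add up to $6+12+2=20.$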
Let $\mc{C}$ be the direct sum of categories of $H^{m(\lambda)}$-modules, over admissible $\lambda.$ The Grothendieck group of $H^{m(\lambda)}$-mod is naturally isomorphic to the weight space $V_{\lambda}$ (more precisely, to a $\mathbb Z$-lattice in the latter), and the rank of the Grothendieck group is the $m$-th Catalan number (the dimension of $V_{\lambda}).$ We construct exact functors $\mathcal{E}_i$ and $\mathcal{F}_i$ in the category $\mc{C}$ that in the Grothendieck group descend to $E_i$ and $F_i$ acting on $V.$ Various structures in $V$ lift to fancier structures in $\mc{C}.$ The symmetric group action on $V$ lifts to a braid group action in the derived category of $\mc{C}.$ The contravariant symmetric bilinear form on $V$ is given by dimensions of Hom spaces between projective modules in $\mc{C}.$ Contravariance, meaning $(E_i v, w)= (v, F_i w)$ for $v,w\in V,$ turns into the property that the functor $\mc{E}_i$ is both left and right adjoint to $\mc{F}_i$. Rings $H^m$ are naturally graded and throughout the paper we work with the categories of graded $H^m$-modules. The Grothendieck groups are then $\mathbb Z[q,q^{-1}]$-modules (the grading shift functor descends to the multiplication by a formal variable $q$ in the Grothendieck group), and assemble into a representation (also denoted $V$) of the quantum enveloping algebra $U= U_q(\mathfrak{sl}_n).$ Indecomposable projective modules in $\mc{C}$ descend to the canonical basis in $V.$ All of our results specialize easily to the $q=1$ case, by working with the category of ungraded modules. This specialization was sketched two paragraphs above, the details are left out. Some results become simpler when the grading is ignored. The Hom form on the Grothendieck group is semilinear with the grading, and bilinear without it. Functors $\mc{E}_i$ and $\mc{F}_i$ are left and right adjoint in the category of ungraded modules, while in the category of graded modules $\mc{E}_i$ and $\mc{F}_i$ are left and right adjoint only up to shifts in the grading that depend on $\lambda$ (see proposition~\ref{all-adjoint}). The category $H^m$-mod is a bicategorification of the $m$-th Catalan number $c_m=\frac{1}{m+1} \sbinom{2m}{m}.$ Informally, to categorify is to upgrade a number to a vector space, or to upgrade a vector space to a category. The number becomes the dimension of the vector space; the vector space becomes the Grothendieck group of the category (we should tensor the Grothendieck group with a field to get a vector space). To bicategorify is to upgrade a number to a category, so that the number becomes the rank of the Grothendieck group of the category. \begin{center} Number $\leftrightmaps{90}{\mbox{\scriptsize{dimension}}} {\mbox{\scriptsize{Categorification}}}$ Vector space $\leftrightmaps{90}{\mbox{\scriptsize{Grothendieck group}}} {\mbox{\scriptsize{Categorification}}}$ Category \end{center} Any nonnegative integer is, of course, a dimension of some vector space, but just picking a vector space is not a categorification. What we want is a vector space that appears naturally and comes with a bonus: an algebra structure, a group action, etc. Some examples: I) Categorifications of $n!$ \begin{itemize} \item Cohomology ring of the flag variety of $\mathbb C^n.$ Benefits include grading, commutative multiplication, the basis of Schubert cells, action of the symmetric group. 
\item Group algebra of the symmetric group $S_n.$ \item Other categorifications: the Hecke algebra, nilCoxeter and nilHecke algebras, quantum cohomology ring of the flag variety. \end{itemize} An example of a bicategorification of $n!$ is a regular block $\mathcal{O}_{reg}$ of the highest weight category of $\mathfrak{sl}_n$-modules: \begin{center} $n! \leftrightmaps{90}{\mbox{\scriptsize{dimension}}} {\mbox{\scriptsize{Categorification}}} \begin{array}{c} \mbox{Regular representation} \\ \mbox{of the symmetric group} \end{array} \leftrightmaps{90}{\mbox{\scriptsize{Grothendieck group}}} {\mbox{\scriptsize{Categorification}}} \mathcal{O}_{reg}$ \end{center} The action of the symmetric group on the regular representation lifts to a braid group action in the derived category of $\mathcal{O}_{reg}.$ The braid group action extends to a representation of the braid cobordism category (objects are braids with $n$ strands and morphisms are cobordisms in $\mathbb R^4$ between braids) in $D^b(\mathcal{O}_{reg}),$ by assigning certain natural transformations to braid cobordisms (see Rouquier [R]). II) Categorifications of the $m$-th Catalan number \begin{itemize} \item $\mbox{Inv}_{\mathfrak{sl}_2}(L^{\otimes{2m}}),$ the space of $\mathfrak{sl}_2$-invariants in the $2m$-th tensor power of the two-dimensional ``defining'' representation $L$ of $\mathfrak{sl}_2.$ \item Irreducible representation of $S_{2m}$ associated to the partition $(m,m).$ This categorification is equivalent to the previous one. \item $\mbox{Inv}_{U_q(\mathfrak{sl}_2)}(L^{\otimes{2m}}),$ the space of invariants in the $2m$-th tensor power of the two-dimensional ``defining'' representation of the quantum group $U_q(\mathfrak{sl}_2),$ for generic $q.$ \item The Temperley-Lieb algebra (this categorification is equivalent to the previous one). \item The weight space $V_{\lambda}$ with $m=m(\lambda).$ \item The quotient of $\mathbb C[x_1,\dots, x_m]$ by the ideal generated by all quasisymmetric functions in the variables $x_1, \dots, x_m$ with $0$ constant term [ABB]. \item The subspace of $S_m$-alternating elements in the space of diagonal harmonics [H]. \end{itemize} The Grothendieck group of the category of $H^m$-modules (without grading) is naturally isomorphic (after tensoring with $\mathbb C$) to the space of $\mathfrak{sl}_2$-invariants in the $2m$-th tensor power of the fundamental representation $L$ of $\mathfrak{sl}_2:$ \begin{equation*} K(H^m\mathrm{-mod}) \otimes_{\mathbb Z}\mathbb C \cong \mathrm{Inv}_{\mathfrak{sl}_2}(L^{\otimes 2m}) \end{equation*} The category of $H^m$-modules can be viewed as a bicategorification of the $m$-th Catalan number: \begin{center} $\frac{1}{m+1}\binom{2m}{m} \leftrightmaps{90}{\mbox{\scriptsize{dimension}}} {\mbox{\scriptsize{Categorification}}} \mbox{Inv}_{\mathfrak{sl}_2}(L^{\otimes 2m}) \leftrightmaps{90}{\mbox{\scriptsize{Grothendieck group}}} {\mbox{\scriptsize{Categorification}}} H^m\mathrm{-mod}$ \end{center} \section{Flat tangles, rings $H^m,$ and bimodules} We recall some definitions from [K]. Denote by $\mathcal{A}$ the cohomology ring $ \mathrm{H}^{\ast}(S^2,\mathbb Z)$ of the 2-sphere.
$\mathcal{A}\cong \mathbb Z[X]/(X^2),$ where $X$ is a generator of $\mathrm{H}^2(S^2,\mathbb Z).$ $\mathcal{A}$ is a commutative Frobenius ring, with the nondegenerate trace form \begin{equation*} \mathrm{tr}:\mathcal{A}\to \mathbb Z, \hspace{0.2in} \mathrm{tr}(1)=0, \hspace{0.1in} \mathrm{tr}(X)=1. \end{equation*} We make $\mathcal{A}$ into a graded ring, by placing $1\in \mathcal{A}$ in degree $-1$ and $X$ in degree $1.$ The multiplication map $\mathcal{A}^{\otimes 2}\longrightarrow \mathcal{A}$ has degree one. We assign to $\mathcal{A}$ a 2-dimensional topological quantum field theory $\mathcal{F},$ a functor from the category of oriented (1+1)-cobordisms to the category of abelian groups. $\mathcal{F}$ associates \begin{itemize} \item $\mathcal{A}^{\otimes i}$ to a disjoint union of $i$ circles, \item the multiplication map $\mathcal{A}^{\otimes 2}\to \mathcal{A}$ to the three-holed sphere viewed as a cobordism from two circles to one circle. \item the comultiplication \begin{equation*} \Delta: \mathcal{A}\to \mathcal{A}^{\otimes 2}, \hspace{0.2in} \Delta(1)= 1\otimes X + X\otimes 1, \hspace{0.1in} \Delta(X)= X \otimes X \end{equation*} to the three-holed sphere viewed as a cobordism from one circle to two circles. \item either the trace or the unit map to the disk (depending on whether we consider the disk as a cobordism from one circle to the empty manifold or vice versa). \end{itemize} Let $B^m$ be the set of matchings of integers from $1$ to $2m$ without any quadruple $i<j<l<p$ such that $i$ is matched with $l$ and $j$ with $p.$ $B^m$ has a geometric interpretation as the set of crossingless matchings of $2m$ points. Figure~\ref{pic-match2} shows elements of $B^2.$ \begin{figure} \caption{crossingless matchings $\{(12),(34)\}$ and $\{ (14), (23) \}$} \label{pic-match2} \end{figure} For $a,b\in B^m$ denote by $W(b)$ the reflection of $b$ about the horizontal axis and by $W(b)a$ the closed 1-manifold obtained by gluing $W(b)$ and $a$ along their boundaries, see figure~\ref{pic-glue}. \begin{figure} \caption{Reflection and gluing} \label{pic-glue} \end{figure} $\mathcal{F}(W(b)a)$ is a graded abelian group, isomorphic to $\mathcal{A}^{\otimes I},$ where $I$ is the set of connected components (circles) of $W(b)a.$ For $a,b,c\in B^m$ there is a canonical cobordism from $W(c)bW(b)a$ to $W(c)a$ given by ``contracting'' $b$ with $W(b),$ see figure~\ref{pic-contr} for an example. \begin{figure} \caption{The contraction cobordism} \label{pic-contr} \end{figure} This cobordism induces a homomorphism of abelian groups \begin{equation}\label{ind-hom} \mathcal{F}(W(c)b)\otimes \mathcal{F}(W(b)a) \longrightarrow \mathcal{F}(W(c)a). \end{equation} If $M$ is a graded abelian group, denote by $M\{ i\}$ the graded abelian group obtained by shifting the grading of $M$ up by $i.$ Let \begin{equation*} H^m\define \oplusop{a,b\in B^m} \hspace{0.05in} {_b(H^m)_a}, \hspace{0.2in} {_b(H^m)_a}\define \mathcal{F}(W(b)a)\{ m\}. \end{equation*} Homomorphisms (\ref{ind-hom}), over all $a,b,c,$ define an associative multiplication in $H^m$ (the product ${_d(H^m)_c}\otimes\hspace{0.05in} {_b(H^m)_a}\to\hspace{0.05in}{_d(H^m)_a}$ is set to zero if $b\not= c$). The grading shift $\{ m\}$ makes the multiplication grading-preserving.
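As an illustration of the definition (this computation is not needed later), take $m=2$ and let $a_1=\{ (12),(34)\}$ and $a_2=\{ (14),(23)\}$ be the two crossingless matchings of figure~\ref{pic-match2}. The closed 1-manifolds $W(a_1)a_1$ and $W(a_2)a_2$ consist of two circles each, while $W(a_1)a_2$ and $W(a_2)a_1$ are single circles, so that \begin{equation*} H^2 \cong \left( \mathcal{A}^{\otimes 2}\oplus \mathcal{A}\oplus \mathcal{A}\oplus \mathcal{A}^{\otimes 2}\right) \{ 2\} \end{equation*} as a graded abelian group, free of total rank $12,$ with nonzero graded components in degrees $0$ through $4.$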
$_a(H^m)_a$ is a subring of $H^m,$ isomorphic to $\mathcal{A}^{\otimes m}.$ Its element $1_a\define 1^{\otimes m}\in \mathcal{A}^{\otimes m}$ is an idempotent in $H^m.$ The sum $\sum_a 1_a$ is the unit element of $H^m.$ Notice that $_b(H^m)_a= 1_b H^m 1_a.$ Suppose we are given a diagram of a system of disjoint arcs and circles in a horizontal plane strip, see figure~\ref{flat}. \begin{figure} \caption{A flat $(2,1)$-tangle} \label{flat} \end{figure} We only consider diagrams with an even number of bottom endpoints, and will refer to such a diagram with $2m$ bottom and $2l$ top endpoints as a flat $(l,m)$-tangle. To a flat $(l,m)$-tangle $T$ we associate an $(H^l, H^m)$-bimodule $\mathcal{F}(T)$: \begin{equation*} \mathcal{F}(T) \define \oplusop{b\in B^l, a\in B^m}\mathcal{F}(W(b)Ta)\{ m\}. \end{equation*} Since the diagram $W(b)Ta$ is a closed 1-manifold, we can apply $\mathcal{F}$ to it. $\mathcal{F}(W(b)Ta)\cong \mc{A}^{\otimes r},$ where $r$ is the number of circles in $W(b)Ta.$ Notice that $r$ depends on the choice of $a$ and $b.$ The ring $H^l$ acts on $\mathcal{F}(T)$ on the left via maps \begin{equation*} \mathcal{F}(W(c)b) \otimes \hspace{0.05in} \mathcal{F}(W(b)Ta) \longrightarrow \mathcal{F}(W(c)Ta) \end{equation*} where $c,b\in B^l$ and $a\in B^m,$ and the map is induced by the cobordism from $W(c)bW(b)Ta $ to $W(c) Ta$ which contracts $b$ with $W(b)$ (see [K, Section 2.7] for more details). A flat $(r,l)$-tangle $T_1$ can be composed with a flat $(l,m)$-tangle $T_2$ by identifying the bottom endpoints of $T_1$ with the top endpoints of $T_2$ to produce a flat $(r,m)$-tangle $T_1 T_2.$ We recall the following result [K, Theorem 1]. \begin{prop} There is a canonical isomorphism of $(H^r,H^m)$-bimodules \begin{equation*} \mathcal{F}(T_1 T_2) \cong \mathcal{F}(T_1) \otimes_{H^l} \mathcal{F}(T_2). \end{equation*} \end{prop} When defining the set $B^m$ we did not specify the positions of the arcs' endpoints on the horizontal line. We do so now. For each sequence $s=(s_1, \dots, s_{2m}), s_1< s_2 < \dots <s_{2m}$ of $2m$ points on the real line we can consider crossingless matchings of $s_1, \dots , s_{2m}.$ The set of such matchings is canonically isomorphic to $B^m,$ and we can repeat our definition of $H^m$ and get a ring, $H(s),$ canonically isomorphic to $H^m.$ $\mathcal{F}(T),$ associated to a flat tangle $T$ with a bottom endpoints sequence $s=(s_1, \dots , s_{2m})$ and a top endpoints sequence $t= (t_1, \dots, t_{2l}),$ is naturally an $(H(t), H(s))$-bimodule. $\mathcal{F}(T)$ is also, of course, an $(H^l, H^m)$-bimodule. Working with sequences $s$ and $t$ is simply a way to keep track of the real coordinates of the endpoints. Consider an admissible weight $\lambda.$ Let $\lambda_{s_1}= \lambda_{s_2}= \dots = \lambda_{s_{2m}}=1,$ that is, $s_1, \dots , s_{2m}$ are the indices of those coefficients of $\lambda$ that are equal to $1.$ Let $s(\lambda)= (s_1, \dots, s_{2m}).$ We denote the ring $H(s(\lambda))$ simply by $H_{\lambda}.$ Notice that $H_{\lambda}\cong H^{m(\lambda)}.$ \emph{Example:} If $\lambda= (0,2,1,1,1,0,1)$ then $s(\lambda)=(3,4,5,7)$ and $H_{\lambda}\cong H^2.$ Suppose that $\lambda_i=1$ and $\lambda_{i+1}\not= 1$ (so that $\lambda_{i+1}$ is either $0$ or $2$). Let $\mu = (\mu_1, \dots, \mu_{n})$ be the transposition of the $i$-th and $(i+1)$-th coefficients of $\lambda$ (so that $\mu_j = \lambda_j$ if $j\not= i,i+1,$ $\mu_{i+1}= \lambda_i,$ and $\mu_i = \lambda_{i+1}$).
Note that $s(\mu)$ is obtained from $s(\lambda)$ by changing $i\in s(\lambda), i = s_j$ for some $j,$ to $i+1.$ To such $\lambda$ and $i$ we assign an $(H_{\mu}, H_{\lambda})$ bimodule $\mc{F}(\mathrm{Id}_i^{i+1})$ where $\mathrm{Id}_i^{i+1}$ is the flat tangle depicted on the left of figure~\ref{pic-id}. The bimodule $\mc{F}(\mathrm{Id}_i^{i+1})$ defines the obvious isomorphism of rings $H_{\lambda}$ and $H_{\mu}.$ \begin{figure} \caption{Flat tangles $\mathrm{Id}_i^{i+1}$ and $\mathrm{Id}_{i+1}^i$ } \label{pic-id} \end{figure} For $\lambda$ and $i$ such that $\lambda_{i+1}=1$ and $\lambda_i\not= 1$ we similarly construct an $(H_{\mu}, H_{\lambda})$-bimodule $\mc{F}(\mathrm{Id}^i_{i+1}),$ where $\mu= (\lambda_1, \dots, \lambda_{i+1}, \lambda_i , \dots, \lambda_n).$ For $\lambda$ and $i$ such that $\lambda_i=\lambda_{i+1}=1$ we have an $(H_{\mu}, H_{\lambda})$-bimodule $\mc{F}(\cap_{i,i+1})$ where $\cap_{i,i+1}$ is the diagram on the right of figure~\ref{pic-cupcap} and $\mu= (\lambda_1, \dots, \lambda_{i-1},0,2,\lambda_{i+2}, \dots, \lambda_n)$ or $\mu= (\lambda_1, \dots, \lambda_{i-1},2,0, \lambda_{i+2}, \dots, \lambda_n).$ Bimodules $\cup^{i,i+1}$ are defined likewise. \begin{figure} \caption{Flat tangles $\cup^{i,i+1}$ and $\cap_{i,i+1}$} \label{pic-cupcap} \end{figure} \section{Category $\mc{C}$ and functors $\mathcal{E}_i,\mathcal{F}_i$} For an admissible weight $\lambda$ let $\mc{C}(\lambda)$ be the category $H_{\lambda}$-mod of graded finitely-generated $H_{\lambda}$-modules. $\mc{C}(\lambda)$ is equivalent to the category of (graded finitely-generated) $H^{m(\lambda)}$-modules where, recall, $2m(\lambda)$ is the number of 1's in $\lambda.$ For instance, if $\lambda$ consists entirely of 0's and 2's, then $H_{\lambda}\cong \mathbb Z$ and $\mc{C}(\lambda)$ is equivalent to the category of finitely-generated graded abelian groups. Let the category $\mc{C}$ be the direct sum of $\mc{C}(\lambda),$ over all admissible $\lambda$: \begin{equation*} \mc{C} \define \oplusop{\lambda}\mc{C}(\lambda). 
\end{equation*} Define the functor $\mc{E}_i: \mc{C} \longrightarrow \mc{C}$ as the sum, over all admissible $\lambda,$ of the following functors $\mc{C}(\lambda)\longrightarrow \mc{C}(\lambda+ \epsilon_i)$: \begin{itemize} \item the zero functor if $\lambda+ \epsilon_i$ is not admissible, \item tensoring with the bimodule $\mc{F}(\mathrm{Id}_i^{i+1})$ if $(\lambda_i, \lambda_{i+1}) = ( 1,2),$ \item tensoring with the bimodule $\mc{F}(\mathrm{Id}_{i+1}^i)$ if $(\lambda_i, \lambda_{i+1}) = ( 0,1),$ \item tensoring with the bimodule $\mc{F}(\cup^{i,i+1})$ if $(\lambda_i, \lambda_{i+1}) = ( 0,2),$ \item tensoring with the bimodule $\mc{F}(\cap_{i,i+1})$ if $(\lambda_i, \lambda_{i+1}) = ( 1,1).$ \end{itemize} For instance, the flat tangle $\mathrm{Id}_i^{i+1}$ shifts a bottom endpoint with coordinate $i$ to the top endpoint with coordinate $i+1,$ so that $\mc{F}(\mathrm{Id}_i^{i+1})$ is an $(H_{\lambda+\epsilon_i}, H_{\lambda})$-bimodule for any admissible $\lambda$ with $(\lambda_i, \lambda_{i+1}) = ( 1,2).$ Define the functor $\mc{F}_i: \mc{C} \longrightarrow \mc{C}$ as the sum, over all admissible $\lambda,$ of the following functors $\mc{C}(\lambda)\longrightarrow \mc{C}(\lambda- \epsilon_i)$: \begin{itemize} \item the zero functor if $\lambda- \epsilon_i$ is not admissible, \item tensoring with the bimodule $\mc{F}(\mathrm{Id}_i^{i+1})$ if $(\lambda_i, \lambda_{i+1}) = ( 1,0),$ \item tensoring with the bimodule $\mc{F}(\mathrm{Id}_{i+1}^i)$ if $(\lambda_i, \lambda_{i+1}) = ( 2,1),$ \item tensoring with the bimodule $\mc{F}(\cup^{i,i+1})$ if $(\lambda_i, \lambda_{i+1}) = ( 2,0),$ \item tensoring with the bimodule $\mc{F}(\cap_{i,i+1})$ if $(\lambda_i, \lambda_{i+1}) = ( 1,1).$ \end{itemize} \emph{Warning:} Although the notations $\mc{F}$ and $\mc{F}_i$ are similar, the two are no more related than $\mc{F}$ and $\mc{E}_i.$ The similarity is the negative side effect of making our notations compatible with those of both [K] and [KH]. Let $\mc{K}_i: \mc{C} \longrightarrow \mc{C}$ be the functor that shifts the grading of $M\in \mc{C}(\lambda)$ up by $\lambda_i-\lambda_{i+1}$: \begin{equation*} \mc{K}_i(M) \define M\{ \lambda_i - \lambda_{i+1}\}. \end{equation*} \begin{prop} \label{other-rel} There are functor isomorphisms \begin{equation} \label{functor-isom} \begin{array}{l} \mc{K}_{i} \mc{K}^{-1}_{i} \cong \mathrm{Id} \cong \mc{K}^{-1}_{i} \mc{K}_{i}, \\ \mc{K}_{i} \mc{K}_{j} \cong \mc{K}_{j} \mc{K}_{i}, \\ \mc{K}_{i} \mc{E}_{j} \cong \mc{E}_{j}\mc{K}_{i}\{c_{i,j}\}, \\ \mc{K}_{i} \mc{F}_{j} \cong \mc{F}_{j} \mc{K}_{i}\{-c_{i,j}\}, \\ \mc{E}_{i} \mc{F}_{j} \cong \mc{F}_{j}\mc{E}_{i} \hspace{0.1in} \mbox{ if } \hspace{0.1in} i \not= j, \\ \mc{E}_{i} \mc{E}_{j} \cong \mc{E}_{j}\mc{E}_{i} \hspace{0.1in}\mbox{ if }\hspace{0.1in} |i-j|> 1, \\ \mc{F}_{i} \mc{F}_{j} \cong \mc{F}_{j}\mc{F}_{i} \hspace{0.1in}\mbox{ if }\hspace{0.1in} |i-j|> 1, \\ \mc{E}_{i}^2 \mc{E}_{j} \oplus \mc{E}_{j} \mc{E}_{i}^2 \cong \mc{E}_{i} \mc{E}_{j} \mc{E}_{i}\{ 1\} \oplus\mc{E}_{i} \mc{E}_{j} \mc{E}_{i}\{ -1\} \hspace{0.1in} \mbox{ if } \hspace{0.1in} j = i \pm 1 \\ \mc{F}_{i}^2 \mc{F}_{j} \oplus \mc{F}_{j} \mc{F}_{i}^2 \cong \mc{F}_{i} \mc{F}_{j} \mc{F}_{i} \{ 1\} \oplus\mc{F}_{i} \mc{F}_{j} \mc{F}_{i} \{ -1\} \hspace{0.1in} \mbox{ if } \hspace{0.1in} j = i \pm 1 \end{array} \end{equation} where $c_{i,j} = {\left\{ \begin{array}{ll} 2 & \mathrm{ if }\hspace{0.1in} j = i, \\ -1 & \mathrm{ if } \hspace{0.1in} j = i\pm 1, \\ 0 & \mathrm{ if} \hspace{0.1in} |j-i|>1. \end{array} \right. 
}$ \end{prop} \begin{prop} \label{ef-fe-iso} For any admissible $\lambda$ there is an isomorphism of functors in the category $\mc{C}(\lambda)$ \begin{equation} \label{more-fn} \begin{array}{l} \mc{E}_i \mc{F}_i \cong \mc{F}_i \mc{E}_i \oplus \mathrm{Id}\{1\}\oplus \mathrm{Id} \{ -1\} \hspace{0.1in} \mbox{ if } \hspace{0.1in} (\lambda_i,\lambda_{i+1})=(2,0), \\ \mc{E}_i \mc{F}_i \cong \mc{F}_i \mc{E}_i \oplus \mathrm{Id} \hspace{0.1in} \mbox{ if } \hspace{0.1in} \lambda_i - \lambda_{i+1} =1, \\ \mc{E}_i \mc{F}_i \cong \mc{F}_i \mc{E}_i \hspace{0.1in} \mbox{ if } \hspace{0.1in} \lambda_i = \lambda_{i+1}, \\ \mc{E}_i \mc{F}_i \oplus \mathrm{Id} \cong \mc{F}_i \mc{E}_i \hspace{0.1in} \mbox{ if } \hspace{0.1in} \lambda_i - \lambda_{i+1}=-1, \\ \mc{E}_i \mc{F}_i \oplus \mathrm{Id}\{1\}\oplus \mathrm{Id} \{ -1\} \cong \mc{F}_i \mc{E}_i \hspace{0.1in} \mbox{ if } \hspace{0.1in} (\lambda_i,\lambda_{i+1})=(0,2). \end{array} \end{equation} \end{prop} \emph{Proof of Proposition~\ref{other-rel}:} The top four isomorphisms in (\ref{functor-isom}) are obvious. The next three isomorphisms are clear if $|i-j|>1,$ since functors $\mc{E}_i$ and $\mc{F}_i$ (respectively $\mc{E}_j$ and $\mc{F}_j$) come from bimodules assigned to flat tangles that are nontrivial only in the area with the $x$-coordinate between $i$ and $i+1$ (respectively $j$ and $j+1$). Composition of such flat tangles is commutative (see example in figures~\ref{commute} and \ref{comt}). \begin{figure} \caption{Flat tangles for functors $\mc{E}_1, \mc{F}_3$ and $\lambda=(021111)$} \label{commute} \end{figure} $\quad$ \begin{figure} \caption{The two compositions of flat tangles are isotopic, hence define isomorphic bimodules, hence $\mc{F}_3 \mc{E}_1 {\protect\cong} \mc{E}_1 \mc{F}_3 $ as functors from $\mathcal{C} (021111)$ to $\mathcal{C} (111021)$} \label{comt} \end{figure} To check the commutativity $\mc{E}_i \mc{F}_{i+ 1} \cong \mc{F}_{i+ 1} \mc{E}_i$ (and its variations) one considers all possible triples $(\lambda_i, \lambda_{i+1}, \lambda_{i+2})$ and draws flat tangles that define the functors $\mc{E}_i \mc{F}_{i+ 1}$ and $\mc{F}_{i+ 1} \mc{E}_i$ in each case. Unless $\lambda_i<2, \lambda_{i+1}=2,$ and $\lambda_{i+2}<2$ the sequence $\lambda+\epsilon_i - \epsilon_{i+1}$ is not admissible, so that there are only four nontrivial cases. The case $(0,2,1)$ is depicted in figure~\ref{nearcomm}, the other cases are similar and left to the reader. \begin{figure} \caption{Flat tangles for functors $\mc{F}_{i+1}\mc{E}_i$ and $\mc{E}_i \mc{F}_{i+1}$ and the triple $(0,2,1)$ } \label{nearcomm} \end{figure} The last two isomorphisms in (\ref{functor-isom}) are also checked case by case. To illustrate, we verify the isomorphism \begin{equation*} \mc{E}_i^2 \mc{E}_{i+1} \oplus \mc{E}_{i+1}\mc{E}_i^2 \cong \mc{E}_i \mc{E}_{i+1}\mc{E}_i\{ 1\} \oplus \mc{E}_i \mc{E}_{i+1} \mc{E}_i \{ -1 \} \end{equation*} when $(\lambda_i, \lambda_{i+1}, \lambda_{i+2})= (0,1,2).$ The functor $\mc{E}_{i+1}\mc{E}_i^2$ is zero since $\lambda+2 \epsilon_i$ is not admissible. Functors $\mc{E}_i^2 \mc{E}_{i+1}$ and $\mc{E}_i \mc{E}_{i+1}\mc{E}_i$ are assigned to diagrams in figure~\ref{triples}.
\begin{figure} \caption{Flat tangles for functors $\mc{E}_i^2\mc{E}_{i+1}$ and $\mc{E}_i \mc{E}_{i+1}\mc{E}_i$ and the triple $(0,1,2)$ } \label{triples} \end{figure} If a flat tangle $T_2$ is obtained from a flat tangle $T_1$ by adding an extra circle, then there is an isomorphism of bimodules \begin{equation*} \mc{F}(T_2) \cong \mc{F}(T_1)\otimes \mc{A} \cong \mc{F}(T_1)\{1\} \oplus \mc{F}(T_1)\{ -1\}. \end{equation*} The right diagram in figure~\ref{triples} after the circle is removed is isotopic to the left diagram. Hence, \begin{equation*} \mc{E}_i^2 \mc{E}_{i+1} \cong \mc{E}_i \mc{E}_{i+1}\mc{E}_i\{ 1\} \oplus \mc{E}_i \mc{E}_{i+1} \mc{E}_i \{ -1\} \end{equation*} when $(\lambda_i, \lambda_{i+1}, \lambda_{i+2})= (0,1,2).$ Other cases, as well as the proof of proposition~\ref{ef-fe-iso}, are left to the reader. $\square$ Functor isomorphisms of proposition~\ref{ef-fe-iso} categorify the quantum group relation $E_i F_i - F_i E_i = \frac{K_i - K_i^{-1}}{q - q^{-1}},$ while those of proposition~\ref{other-rel} categorify all other defining relations in $U_q(\mathfrak{sl}_{n}).$ We recall the defining relations of $U= U_q(\mathfrak{sl}_{n}):$ \begin{equation} \label{q-rel} \begin{array}{l} K_i K_i^{-1} = 1 = K_i^{-1} K_i, \\ K_i K_j = K_j K_i, \\ K_i E_j = q^{c_{i,j}} E_j K_i, \\ K_i F_j = q^{-c_{i,j}} F_j K_i, \\ E_i F_j - F_j E_i = \delta_{i,j} \frac{K_i - K_i^{-1}}{q-q^{-1}}, \\ E_i E_j = E_j E_i \hspace{0.1in} \mathrm{if} \hspace{0.1in} |i-j|>1, \\ F_i F_j = F_j F_i \hspace{0.1in} \mathrm{if} \hspace{0.1in} |i-j|>1, \\ E_i^2 E_{i\pm 1} - (q+q^{-1}) E_i E_{i\pm 1}E_i + E_{i\pm 1}E_i^2 = 0, \\ F_i^2 F_{i\pm 1} - (q+q^{-1}) F_i F_{i\pm 1}F_i + F_{i\pm 1}F_i^2 = 0. \end{array} \end{equation} The quantum divided powers are defined by \begin{equation*} E_i^{(j)}= \frac{E_i^j}{[j]!} \hspace{0.1in} \mathrm{ and } \hspace{0.1in} F_i^{(j)}= \frac{F_i^j}{[j]!}, \end{equation*} where $[j]! = [1][2]\dots [j]$ and $[j]= \frac{q^j - q^{-j}}{q-q^{-1}}.$ In the representation $V$ operators $E_i^j, F_i^j$ are zero for $j>2.$ Quantum divided powers $E_i^{(2)}$ and $F_i^{(2)}$ admit the following categorification. Note that $E_i^2: V_{\lambda} \to V_{\lambda+ 2\epsilon_i}$ is nonzero only if $(\lambda_i,\lambda_{i+1})=(0,2).$ In the latter case rings $H_{\lambda}$ and $H_{\lambda+2\epsilon_i}$ are canonically isomorphic and we define \begin{equation*} \mc{E}_i^{(2)}: \mc{C}(\lambda)\to \mc{C}(\lambda+2\epsilon_i), \hspace{0.1in} \hspace{0.1in} \mc{F}_i^{(2)}: \mc{C}(\lambda+2\epsilon_i)\to \mc{C}(\lambda) \end{equation*} as the mutually inverse equivalences of categories induced by this isomorphism. For other $\lambda$'s we set the functors to zero. \begin{prop} There are functor isomorphisms \begin{eqnarray} \mc{E}_i^2 & \cong & \mc{E}_i^{(2)} \{ 1\} \oplus \mc{E}_i^{(2)} \{ -1\}, \\ \mc{F}_i^2 & \cong & \mc{F}_i^{(2)} \{ 1\} \oplus \mc{F}_i^{(2)} \{ -1\}, \\ \mc{E}_i \mc{E}_j \mc{E}_i & \cong & \mc{E}_i^{(2)}\mc{E}_j \oplus \mc{E}_j \mc{E}_i^{(2)} \hspace{0.1in} \mathrm{if} \hspace{0.05in} j= i \pm 1, \label{red-one} \\ \mc{F}_i \mc{F}_j \mc{F}_i & \cong & \mc{F}_i^{(2)}\mc{F}_j \oplus \mc{F}_j \mc{F}_i^{(2)} \hspace{0.1in} \mathrm{if} \hspace{0.05in} j= i \pm 1. \label{red-two} \end{eqnarray} \end{prop} The proof is straightforward. Isomorphisms (\ref{red-one}) and (\ref{red-two}) simplify the last two isomorphisms in (\ref{functor-isom}).
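To make the decategorification explicit (this is only a consistency check): the grading shift $\{\pm 1\}$ descends to multiplication by $q^{\pm 1}$ in the Grothendieck group (see the next section), so the first two isomorphisms above descend to $E_i^2 = (q+q^{-1})E_i^{(2)}=[2]E_i^{(2)}$ and $F_i^2=[2]F_i^{(2)}$ acting on $V,$ while (\ref{red-one}) and (\ref{red-two}) descend to \begin{equation*} E_i E_j E_i = E_i^{(2)}E_j + E_j E_i^{(2)}, \hspace{0.1in} F_i F_j F_i = F_i^{(2)}F_j + F_j F_i^{(2)}, \hspace{0.1in} j=i\pm 1, \end{equation*} which, combined with $E_i^{(2)}=E_i^2/[2]$ and $F_i^{(2)}=F_i^2/[2],$ are equivalent to the quantum Serre relations in (\ref{q-rel}).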
\section{The structure of $\mc{C}$} {\bf Grothendieck group } The Grothendieck group of $\mc{C}$ is a $\mathbb Z[q,q^{-1}]$-module, with multiplication by $q$ corresponding to the grading shift, $[M\{ 1\}]=q [M],$ where we denote by $[M]$ the image of the module $M$ in the Grothendieck group $K(\mc{C}).$ Functors $\mc{E}_i,\mc{F}_i$ and $\mc{K}_i$ are exact and commute with $\{1\}.$ Therefore, they descend to $\mathbb Z[q,q^{-1}]$-linear endomorphisms $[\mc{E}_i], [\mc{F}_i]$ and $[\mc{K}_i]$ of $K(\mc{C}).$ Functor isomorphisms (\ref{functor-isom}) and (\ref{more-fn}) descend to quantum group relations (\ref{q-rel}) between $[\mc{E}_i], [\mc{F}_i]$ and $[\mc{K}_i]$ in the Grothendieck group $K(\mc{C}).$ Therefore, the Grothendieck group is naturally a $U$-module. To be precise, let us view $U$ as an algebra over $\mathbb Q(q),$ the field of rational functions in an indeterminate $q$ with rational coefficients. To make $K(\mc{C})$ into a $U$-module we tensor it with $\mathbb Q(q)$ over $\mathbb Z[q,q^{-1}].$ Recall that $V$ denotes the irreducible representation of $U$ with the highest weight $2\omega_k.$ Choose a highest weight vector $\eta \in V$ (its weight is $2\omega_k=(2^k 0^{n-k})$). \begin{prop} \label{groth} The Grothendieck group of $\mc{C}$ is isomorphic to the irreducible representation $V$ of $U$ with highest weight $2\omega_k$: \begin{equation} \label{main-iso} K(\mc{C})\otimes_{\mathbb Z[q,q^{-1}]}\mathbb Q(q)\cong V. \end{equation} \end{prop} Clearly, $K(\mc{C})\otimes_{\mathbb Z[q,q^{-1}]}\mathbb Q(q)$ is a representation of $U.$ Why is it irreducible and isomorphic to $V$? For instance, because dimensions of its weight spaces equal dimensions of weight spaces of $V$ (equal Catalan numbers). Dimensions of weight spaces of $V$ can be computed via the Weyl character formula, or extracted from [G] which explicitly describes all level 2 representations of $\mathfrak{sl}_n,$ including $V.$ $\square$ $\mc{C}(2\omega_k)$ is isomorphic to the category of graded finitely-generated abelian groups. Let $Q_{2\omega_k}$ be the object of $\mc{C}(2\omega_k)$ which is $\mathbb Z$ in degree $0.$ We fix isomorphism (\ref{main-iso}) such that $[Q_{2\omega_k}]$ is taken to $\eta\in V.$ For $a\in B^m$ denote by $\mathbb Z(a)$ the graded $H^m$-module isomorphic as a graded abelian group to $\mathbb Z$ (placed in degree 0), with the idempotent $1_a\in H^m$ acting as identity and $1_b \mathbb Z(a)=0$ for $b\not= a.$ These modules are analogous to simple modules for finite-dimensional algebras over a field, in the sense that after tensoring $H^m$ and $\mathbb Z(a)$ with a field these modules become simple. Images of $\mathbb Z(a)$'s make a basis in $K(H^m\mathrm{-mod}),$ see [K, Proposition 20]. For an admissible $\lambda$ and $a\in B^{m(\lambda)}$ denote by $\mathbb Z(\lambda,a)$ the $H_{\lambda}$-module isomorphic to $\mathbb Z(a)$ under the canonical isomorphism $H_{\lambda}\cong H^{m(\lambda)}.$ Proposition 20 in [K] implies \begin{prop} The Grothendieck group of $\mc{C}$ is a free $\mathbb Z[q,q^{-1}]$-module with a basis $\{ [\mathbb Z(\lambda, a)]\}_{\lambda, a}$ over all admissible $\lambda$ and $a\in B^{m(\lambda)}.$ \end{prop} {\bf Projective Grothendieck group} For $a\in B^m$ we denote by $P_a$ the left $H^m$-module $H^m 1_a,$ see [K, Section 2.5]. Let $Q_a = P_a\{ -m \}$ be the indecomposable projective graded $H^m$-module given by shifting the grading of $P_a$ down by $m,$ \begin{equation*} Q_a = \oplusop{b\in B^m} \mc{F}(W(b)a).
\end{equation*} The grading of $Q_a$ is balanced, in the sense that its nontrivial graded components are in degrees $-m, -m+2, \dots, m.$ We will refer to $Q_a$'s as \emph{balanced indecomposable projectives.} Similarly, for any admissible $\lambda$ and $a\in B^{m(\lambda)}$ we define projective $H_{\lambda}$-module $Q_{\lambda, a}$ as the image of $Q_a$ under the canonical ring isomorphism $H_{\lambda}\cong H^{m(\lambda)}.$ \begin{prop} Any indecomposable projective in $\mc{C}$ is isomorphic to $Q_{\lambda, a}\{ i\}$ for some (and unique) admissible $\lambda, a\in B^{m(\lambda)}$ and $i\in \mathbb Z.$ \end{prop} Let $K_P(\mc{C})$ be the projective Grothendieck group of $\mc{C},$ the subgroup of $K(\mc{C})$ generated by images of projective modules. $K_P(\mc{C})$ is a free $\mathbb Z[q,q^{-1}]$-module with the basis $[Q_{\lambda, a}]$ over all $\lambda, a$ as above. The inclusion $K_P(\mc{C})\subset K(\mc{C})$ is proper but turns into an isomorphism when tensored with the field $\mathbb Q(q)$: \begin{equation} \label{proj-or-not} K(\mc{C})\otimes_{\mathbb Z[q,q^{-1}]}\mathbb Q(q) \cong K_P(\mc{C})\otimes_{\mathbb Z[q,q^{-1}]}\mathbb Q(q). \end{equation} $K_P(\mc{C})$ is stable under the action of $[\mc{E}_i], [\mc{F}_i],$ and $[\mc{K}_i],$ since functors $\mc{E}_i, \mc{F}_i, $ and $\mc{K}_i$ take projectives to projectives. $Q_{2\omega_k}$ is the unique (up to isomorphism) balanced indecomposable projective in $\mc{C}(2\omega_k).$ \begin{prop} \label{ffs} Any balanced indecomposable projective in $\mc{C}$ is isomorphic to $\mc{F}^{(j_r)}_{i_r} \dots \mc{F}^{(j_2)}_{i_2} \mc{F}^{(j_1)}_{i_1} Q_{2\omega_k}$ for some sequences $(i_1, \dots, i_r)$ and $(j_1, \dots, j_r)$ where $j_1, \dots, j_r \in \{ 1, 2\}.$ \end{prop} The following example makes it clear. Let $n=6, k=3, \lambda=(1,1,1,0,2,1)$ and $Q$ be the balanced indecomposable projective given by the flat tangle in figure~\ref{balan}. You can check using figure~\ref{balance} that \begin{equation*} Q \cong \mc{F}_2\mc{F}_4^{(2)}\mc{F}_3^{(2)}\mc{F}_5\mc{F}_1\mc{F}_4\mc{F}_2\mc{F}_3 Q_{2\omega_3} \end{equation*} \begin{figure} \caption{Flat tangle for one of the two balanced indecomposable projectives in $\mc{C}(1,1,1,0,2,1)$} \label{balan} \end{figure} Proposition~\ref{ffs} implies that $K_P(\mc{C})\otimes_{\mathbb Z[q,q^{-1}]}\mathbb Q(q)$ is a cyclic $U_-$-module generated by $[Q_{2\omega_k}].$ Therefore, $K_P (\mc{C})\otimes_{\mathbb Z[q,q^{-1}]}\mathbb Q(q)$ is an irreducible $U$-module with highest weight $2\omega_k.$ This and (\ref{proj-or-not}) gives another proof of Proposition~\ref{groth}. \begin{figure} \caption{A presentation of $Q$} \label{balance} \end{figure} {\bf Biadjoint functors} Let $\overline{\phantom{a}}$ be the $\mathbb Q$-linear involution of $\mathbb Q(q)$ which changes $q$ into $q^{-1}.$ $U$ has an antiautomorphism $\tau: U \to U^{\mathrm{op}}$ described by \begin{equation} \begin{split} & \tau(E_{\alpha})=q F_{\alpha}K_{\alpha}^{-1}, \hspace{0.05in} \tau(F_{\alpha})= q E_{\alpha}K_{\alpha}, \hspace{0.05in} \tau(K_{\alpha}) =K_{\alpha}^{-1}, \\ & \tau(fx) = \overline{f}\tau(x), \hspace{0.05in} \mbox{ for }f\in \mathbb Q(q) \mbox{ and } x\in U, \\ & \tau(xy) = \tau(y)\tau(x), \hspace{0.05in} \mbox{ for }x,y\in U. 
\label{welcome-tau} \end{split} \end{equation} \begin{prop} \label{all-adjoint} The functor $\mc{E}_i$ is left adjoint to $\mc{F}_i\mc{K}_i^{-1} \{ 1\},$ the functor $\mc{F}_i$ is left adjoint to $\mc{E}_i\mc{K}_i\{ 1\}$ and $\mc{K}_i$ is left adjoint to $\mc{K}_i^{-1}.$ \end{prop} \emph{Proof:} Case-by-case verification for each pair $(\lambda_i,\lambda_{i+1}).$ Suppose $(\lambda_i,\lambda_{i+1})=(1,1).$ Then $\mc{F}_i \mc{E}_i$ is given by the left diagram in figure~\ref{scobs}, while the identity functor in $\mc{C}(\lambda)$ is given by the right diagram. \begin{figure} \caption{Flat tangles for functors $\mc{F}_i\mc{E}_i$ and $\mathrm{id}$ when $(\lambda_i,\lambda_{i+1})=(1,1)$ } \label{scobs} \end{figure} There are standard cobordisms (embedded surfaces in $\mathbb R^3$) between these two flat tangles that give rise to natural transformations of functors $\mc{F}_i \mc{E}_i \Longrightarrow \mathrm{id}$ and $ \mathrm{id}\Longrightarrow \mc{F}_i \mc{E}_i.$ Similar cobordisms provide natural transformations $\mc{E}_i \mc{F}_i \Longrightarrow \mathrm{id}$ and $ \mathrm{id}\Longrightarrow \mc{E}_i \mc{F}_i$ for functors in $\mc{C}(\lambda+\epsilon_i).$ Isotopies of surfaces translate into relations between these four natural transformations which imply that, up to grading shifts, $\mc{F}_i$ is left and right adjoint to $\mc{E}_i$ (the natural transformations come from bimodule maps that change grading, thus grading shifts appear). The shifts are taken care of by composing with $\mc{K}_i^{\pm 1}\{1\}.$ Details and other cases are left to the reader. $\square$ The moral: our categorification lifts the antiautomorphism $\tau$ of $U$ to the operation of taking the right adjoint functor. {\bf Semilinear form} We say that a form $V\times V \to \mathbb Q(q)$ is semilinear if it is $q$-antilinear in the first variable and $q$-linear in the second: \begin{equation*} \langle f v, w \rangle =\overline{f} \langle v , w\rangle , \hspace{0.1in} \hspace{0.1in} \langle v, f w\rangle = f \langle v, w\rangle . \end{equation*} $V$ has a unique semilinear form subject to the conditions \begin{eqnarray} & & \langle \eta, \eta\rangle = 1, \\ & & \label{rel-bi} \langle x v, w\rangle = \langle v, \tau(x)w\rangle \hspace{0.1in} \hspace{0.1in} x\in U, \hspace{0.05in} v,w\in V. \end{eqnarray} On the other hand we have a semilinear form $\langle ,\rangle$ \begin{equation*} K_P(\mc{C}) \times K(\mc{C}) \longrightarrow \mathbb Z [q,q^{-1}] \end{equation*} which measures the graded rank of Hom: \begin{equation*} \langle [P] , [M] \rangle \define \mathrm{rk} \mathrm{Hom}_{\mc{C}}(P,M), \end{equation*} where $P$ is a projective module in $\mc{C},$ while $M$ is any module, and the rank of a finitely-generated graded abelian group is a Laurent polynomial in $q$ with coefficients given by ranks of graded components of the group.
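As a sanity check on the normalizations (included only as an illustration): the object $Q_{2\omega_k}\in \mc{C}(2\omega_k)$ satisfies $\mathrm{Hom}_{\mc{C}}(Q_{2\omega_k},Q_{2\omega_k})\cong \mathbb Z,$ concentrated in degree $0,$ so that \begin{equation*} \langle [Q_{2\omega_k}], [Q_{2\omega_k}]\rangle = 1 = \langle \eta , \eta\rangle . \end{equation*} More generally, $\mathrm{Hom}_{H^m}(P_a,M)\cong 1_a M$ for any $M,$ so $\langle [Q_{\lambda,a}], [\mathbb Z(\lambda,b)]\rangle$ vanishes for $b\not= a$ and is a single power of $q$ for $b=a.$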
After tensoring with $\mathbb Q(q)$ we obtain a semilinear form $\langle ,\rangle$ on $K(\mc{C})\otimes_{\mathbb Z[q,q^{-1}]} \mathbb Q(q)$ with values in $\mathbb Q(q).$ Under the isomorphism (\ref{main-iso}) this form coincides with the form $\langle, \rangle$ on $V.$ Relation (\ref{rel-bi}) comes from interpreting $\tau$ via the right adjoint functor, for instance \begin{equation*} \mathrm{Hom}_{\mc{C}}(\mc{E}_i P, M) \cong \mathrm{Hom}_{\mc{C}}(P, \mc{F}_i \mc{K}_i^{-1} M \{ 1\}) \end{equation*} descends to $\langle E_i v, w \rangle = \langle v, \tau(E_i) w \rangle .$ {\bf Symmetric bilinear form} A left $H^m$-module $M$ can be made into a right $H^m$-module $_\psi M$ by twisting the action of $H^m$ by the antiinvolution $\chi.$ For $H^m$-modules $M,N$ we can form $_\psi M \otimes_{H^m} N,$ which is a graded abelian group. We similarly define $_\psi M$ for $M\in \mathrm{Ob}(\mc{C})$ and graded abelian group $_\psi M \otimes_H N$ (or simply $_\psi M \otimes N$) for $M,N\in \mathrm{Ob}(\mc{C}).$ Notice that $_\psi M \otimes N=0$ if $M\in \mathrm{Ob}(\mc{C}(\lambda)), N \in \mathrm{Ob}(\mc{C}(\mu))$ and $\lambda\not= \mu.$ \begin{prop} \label{iso-for-sym} There are natural isomorphisms \begin{eqnarray*} _\psi (\mc{E}_i M) \otimes N & \cong & \hspace{0.05in} _\psi M \otimes (\mc{K}_i \mc{F}_i N)\{ 1\}, \\ _\psi (\mc{F}_i M) \otimes N & \cong & \hspace{0.05in} _\psi M \otimes (\mc{K}_i^{-1} \mc{E}_i N)\{ 1\}, \\ _\psi (\mc{K}_i M) \otimes N & \cong & \hspace{0.05in} _\psi M \otimes (\mc{K}_i N). \end{eqnarray*} \end{prop} Introduce a bilinear form $(,)$ on $K_P(\mc{C})$ with values in $\mathbb Z[q,q^{-1}]$ by \begin{equation*} ([P], [Q]) \define \mathrm{rk}(_\psi P \otimes Q) \in \mathbb Z[q,q^{-1}] \end{equation*} where $P$ and $Q$ are projectives in $\mc{C}$ and $\mathrm{rk}$ is the graded rank. Form $(,)$ is $\mathbb Z[q,q^{-1}]$-linear in each variable, since \begin{equation*} _\psi P\{1\} \otimes Q \cong (_\psi P\otimes Q)\{ 1\} \cong \hspace{0.05in} _\psi P \otimes Q\{ 1\}. \end{equation*} Since $\psi^2=1,$ graded abelian groups $_\psi P\otimes Q $ and $_\psi Q\otimes P$ are isomorphic and the form is symmetric. We have \begin{eqnarray*} _\psi Q_{\lambda,a}\otimes Q_{\mu, b} & = & 0 \hspace{0.1in} \hspace{0.1in} \mathrm{if } \hspace{0.1in} \hspace{0.1in} \lambda \not= \mu, \\ _\psi Q_{\lambda,a} \otimes Q_{\lambda, b} & \cong & \mc{F}(W(a)b) \{ -m\}. \end{eqnarray*} Therefore, \begin{equation*} ([Q_{\lambda,a}],[Q_{\lambda, b}]) = (q+q^{-1})^r q^{-m}= (1+q^{-2})^r q^{r-m}, \end{equation*} where $r$ is the number of connected components of $W(a)b.$ Notice that $r<m$ unless $a=b.$ \begin{corollary} \begin{eqnarray*} ([Q_{\lambda,a}], [Q_{\mu, b}]) & = & 0 \hspace{0.1in} \mathrm{if} \lambda\not= \mu, \\ ([Q_{\lambda,a}],[Q_{\lambda, b}]) & \in & q^{-1}\mathbb Z[q^{-1}] \hspace{0.1in} \mathrm{if} \hspace{0.1in} a\not= b, \\ ([Q_{\lambda,a}],[Q_{\lambda, a}]) & \in & 1 + q^{-1}\mathbb Z[q^{-1}] \hspace{0.1in} \mathrm{for}\hspace{0.1in} \mathrm{all}\hspace{0.1in} a\in B^m. \end{eqnarray*} \end{corollary} Bilinear form $(,)$ extends to $\mathbb Q(q)$-bilinear form on $K_P(\mc{C}) \otimes_{\mathbb Z[q,q^{-1}]} \mathbb Q(q).$ We turn it into a bilinear form on $V$ via isomorphisms (\ref{main-iso}) and (\ref{proj-or-not}). 
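For example (a direct application of the formula above), let $\lambda$ be an admissible weight with $m(\lambda)=2$ and let $a_1, a_2\in B^2$ be the two crossingless matchings of figure~\ref{pic-match2}. Since $W(a_1)a_1$ and $W(a_2)a_2$ consist of two circles while $W(a_1)a_2$ consists of one, the Gram matrix of the balanced indecomposable projectives in this weight is \begin{equation*} \Big( ([Q_{\lambda,a_i}], [Q_{\lambda,a_j}]) \Big)_{i,j=1,2} = \left( \begin{array}{cc} (1+q^{-2})^2 & q^{-1}+q^{-3} \\ q^{-1}+q^{-3} & (1+q^{-2})^2 \end{array} \right), \end{equation*} in agreement with the corollary above.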
This form is the unique bilinear form on $V$ such that \begin{eqnarray*} (\eta, \eta) & = & 1, \\ (x v, w) & = & (v, \rho(x) w) \hspace{0.1in} \mathrm{for} \hspace{0.1in} \mathrm{all} \hspace{0.1in} v,w \in V \hspace{0.1in} \mathrm{and} \hspace{0.1in} x \in U, \end{eqnarray*} where $\rho$ is a $\mathbb Q(q)$-linear antiinvolution of $U$ defined on generators by \begin{equation*} \rho(E_i)= q K_i F_i, \hspace{0.1in} \hspace{0.1in} \rho(F_i) = q K_i^{-1} E_i, \hspace{0.1in} \hspace{0.1in} \rho(K_i) = K_i. \end{equation*} {\bf Canonical basis} Let $\psi$ be the following $\mathbb Q$-algebra involution of $U$: \begin{equation} \label{vote-for-psi} \begin{split} & \psi(E_{\alpha})= E_{\alpha}, \hspace{0.05in} \psi(F_{\alpha}) = F_{\alpha}, \hspace{0.05in} \psi(K_{\alpha}) = K_{\alpha}^{-1}, \\ & \psi (fx) = \overline{f} x \hspace{0.05in} \mbox{ for } \hspace{0.05in} f\in \mathbb Q(q) \hspace{0.05in} \mbox{ and } \hspace{0.05in} x\in U. \end{split} \end{equation} There is a unique $\mathbb Q$-linear involution $\psi_V$ of $V$ such that \begin{equation} \label{psiR} \psi_V(\eta) = \eta, \hspace{0.05in}\hspace{0.05in} \psi_V(x v) = \psi (x)\psi_V(v)\hspace{0.05in} \mbox{ for } x\in U, v\in V. \end{equation} Involutions $\psi$ and $\psi_V$ are denoted by $\overline{\phantom{a}}$ in [L2]. For $a,b\in B^m$ the diagram $W(b)a$ is the mirror image of $W(a)b.$ Consequently, there is a natural isomorphism of graded abelian groups $\mc{F}(W(b)a)\cong \mc{F}(W(a)b).$ Summing over all $a,b\in B^m$ we obtains an antiinvolution $\chi$ of the ring $H^m.$ Notice that $\chi$ preserves all minimal idempotents of $H^m,$ $\chi(1_a) = 1_a.$ For each admissible $\lambda$ we similarly have an antiinvolution of $H_{\lambda},$ also denoted $\chi.$ For what follows in this subsection we need to switch either from the base ring $\mathbb Z$ to a field, or from $\mc{C}$ to its full subcategory which consists of modules that are free as abelian groups. Denote the latter category by $\mc{C}_f.$ To $M\in \mathrm{Ob}(\mc{C}_f)$ we assign $M^{\ast}= \mbox{Hom}_{\mathbb Z} (M,\mathbb Z),$ which is a right graded module over $\oplusop{\lambda}H_{\lambda}.$ Using antiinvolution $\chi$ we turn $M^{\ast}$ into a left graded $\oplusop{\lambda}H_{\lambda}$-module, denoted $\Psi M.$ Note that $\Psi M\in \mathrm{Ob}(\mc{C}_f)$ and that $\Psi$ is a contravariant duality functor in $\mc{C}_f.$ \begin{prop} $\Psi$ preserves indecomposable balanced projectives: \begin{equation*} \Psi Q_{\lambda,a} \cong Q_{\lambda,a}. \end{equation*} There are equivalences of functors \begin{eqnarray*} \Psi \mc{E}_i \cong \mc{E}_i \Psi, & \hspace{0.1in} \hspace{0.1in} & \Psi \mc{F}_i \cong \mc{F}_i \Psi, \\ \Psi \mc{K}_i \cong \mc{K}_i^{-1} \Psi, & \hspace{0.1in} \hspace{0.1in} & \Psi \{ 1\} \cong \{ -1\} \Psi. \end{eqnarray*} $\Psi$ is exact and descends to a $q$-antilinear automorphism of the Grothendieck group $K(\mc{C})$ and projective Grothendieck group $K_P(\mc{C}).$ Under the isomorphism of proposition~\ref{groth} involution $[\Psi]$ corresponds to the involution $\psi_V$ of $V,$ that is, the diagram below commutes (horizontal arrows are inclusions) \[ \begin{CD} K(\mc{C}) @>>> V \\ @V{[\Psi]}VV @VV{\psi_V}V \\ K(\mc{C}) @>>> V \end{CD} \] \end{prop} We omit the proof. 
$\square$ \begin{prop} The basis $\{ [Q_{\lambda,a}]\}_{\lambda,a} $ of balanced indecomposable projective modules in $K_P(\mc{C})$ is the canonical basis in $V.$ \end{prop} \emph{Proof:} From the previous proposition we see that $[Q_{\lambda,a}]$ is invariant under $\psi_V = [\Psi].$ This and proposition~\ref{ffs} imply that $[Q_{\lambda,a}]$ is a canonical basis vector, using [L2, Theorem 19.3.5]. $\square$ In particular, any canonical basis vector in $V$ can be presented as a monomial in divided powers of $F_i$'s applied to the highest weight vector. This is a rather special property, characteristic of representations with small or degenerate highest weight. {\bf Braid group action} The $n$-stranded braid group $\mathrm{Br}_n$ acts in any finite-dimensional representation of $U$ via \begin{equation} \sigma_i (v) = \sum_{\begin{array}{c} a,b,c\ge 0 \\ -a + b -c =r\end{array}} (-1)^b q^{b-ac} E_i^{(a)} F_i^{(b)} E_i^{(c)} v \end{equation} where $v$ has weight $\lambda$ and $r= \lambda_i-\lambda_{i+1}.$ To categorify this action we look for a way to change the sum into a complex of functors $\mc{E}_i^{(a)} \mc{F}_i^{(b)} \mc{E}_i^{(c)}\{ b-ac\}.$ In the representation $V$ the sums simplify and we expect similar simplifications in the categorification. Let $\mathcal{D}$ be either the bounded derived category of $\mc{C}$ or the category of bounded complexes of objects of $\mc{C}$ up to chain homotopies. Categories $\mathcal{D}(\lambda)$ are defined similarly. We define functors $\Sigma_i: \mathcal{D}\to \mathcal{D}$ that take $\mathcal{D}(\lambda)$ to $\mathcal{D}(\pi_i \lambda)$ (where $\pi_i$ transposes $\lambda_i$ and $\lambda_{i+1}$) for all admissible $\lambda.$ Rings $H_{\lambda}$ and $H_{\pi_i \lambda}$ are naturally isomorphic, and we denote by $\mathcal{Y}_{\lambda}$ the equivalence $\mathcal{D}(\lambda) \stackrel{\cong}{\longrightarrow} \mathcal{D}(\pi_i \lambda)$ induced by this isomorphism. The restriction of $\Sigma_i$ to $\mathcal{D}(\lambda)$ is the following functor: \begin{itemize} \item If $(\lambda_i, \lambda_{i+1})\not= (1,1)$ then $ \Sigma_i = \mathcal{Y}_{\lambda} [x]\{x \}$ where $x=\mathrm{max}(0,\lambda_i-\lambda_{i+1}).$ \item If $(\lambda_i,\lambda_{i+1})=(1,1)$ then $\Sigma_i$ is the complex of functors \begin{equation*} \longrightarrow 0 \longrightarrow \mc{F}_i \mc{E}_i \{ 1\} \longrightarrow \mathrm{id} \longrightarrow 0 \longrightarrow \end{equation*} where $\mathrm{id}$ is in cohomological degree $0$ and the natural transformation comes from the simplest cobordism between flat tangles that describe $\mc{F}_i \mc{E}_i$ and $\mathrm{id},$ see figure~\ref{scobs}. \end{itemize} The Grothendieck groups of $\mathcal{D}$ and $\mc{C}$ are isomorphic, and the functor $\Sigma_i$ descends to the operator $\sigma_i$ in the Grothendieck group $K(\mc{C}).$ \begin{prop} The functors $\Sigma_i$ are invertible and satisfy functor isomorphisms \begin{eqnarray*} \Sigma_i \Sigma_{i+1} \Sigma_i & \cong & \Sigma_{i+1} \Sigma_i \Sigma_{i+1}, \\ \Sigma_i \Sigma_j & \cong & \Sigma_j \Sigma_i, \hspace{0.1in} \hspace{0.1in} |i-j|>1. \end{eqnarray*} \end{prop} Follows from results of [K]. $\square$ [ABB] J.-C.Aval, F.Bergeron, and N.Bergeron, Ideals of quasi-symmetric functions and super-covariant polynomials for $S_n,$ arXiv:math.CO/0202071. [G] R.M.Green, A diagram calculus for certain canonical bases, \emph{Communications in Mathematical Physics} 183 (1997), no. 3, 521--532. [H] M.Haiman, $t,q$-Catalan numbers and the Hilbert scheme, Selected papers in honor of Adriano Garsia (Taormina, 1994).
\emph{Discrete Mathematics} 193 (1998), no. 1-3, 201--224. [K] M.Khovanov, A functor-valued invariant of tangles, arXiv:math.QA/0103190, to appear in \emph{Algebraic and geometric topology.} [KH] R.S.Huerfano and M.Khovanov, A category for the adjoint representation, \emph{Journal of Algebra} {\bf 246}, 514--542, (2001), arXiv:math.QA/0002060. [Ka] M.Kashiwara, On crystal bases of the $Q$-analogue of universal enveloping algebras. \emph{Duke Mathematical Journal} 63 (1991), no. 2, 465--516. [L1] G.Lusztig, Canonical bases arising from quantized enveloping algebras. \emph{J. Amer. Math. Soc.} 3 (1990), no. 2, 447--498. [L2] G.Lusztig, Introduction to quantum groups. Progress in Mathematics, 110. Birkh\"auser Boston, Inc., Boston, MA, 1993. [R] R.Rouquier, Categorification of the braid groups, in preparation. Ruth Stella Huerfano Departamento de Matem\'aticas Universidad Nacional de Colombia Santaf\'e de Bogot\'a, Colombia [email protected] Mikhail Khovanov Department of Mathematics University of California Davis, CA 95616 [email protected] \end{document}
\begin{document} \title{Data Analytics for Smart Cities: Challenges and Promises} \begin{abstract} The rapid advancement of artificial intelligence, sensor technologies, and wireless communication enables ubiquitous sensing through distributed sensors. These sensors, deployed across many network domains, give rise to smart systems in healthcare, transportation, the environment, and other related areas. The collaborative interaction among these smart systems connects end-user devices to one another and yields a new integrated entity called the Smart City. The goal of this study is to provide a comprehensive survey of data analytics in smart cities. We focus on one important branch of smart cities, namely Smart Mobility, and its considerable positive impact on the smart city decision-making process. Intelligent decision-making systems in smart mobility offer many advantages, such as saving energy, relieving city traffic, and, more importantly, reducing air pollution by providing real-time, useful information and actionable knowledge. Timely decision making in smart cities is challenging because it depends on numerous high-dimensional factors and parameters that are not collected frequently. In this paper, we first address current challenges in smart cities and provide an overview of potential solutions. We then organize these solutions into a framework, called universal smart city decision making, with three main components, data capturing, data analysis, and decision making, aimed at optimizing smart mobility within smart cities. Within this framework, we review fundamental concepts of big data, machine learning, and deep learning algorithms that have been applied to smart cities and discuss the role these algorithms play in decision making for smart mobility. \end{abstract} \textbf{keywords:} Smart Cities, Smart Mobility, Decision Making, Artificial Intelligence, Data Science, Machine Learning, Deep Learning. \section{Introduction} $\noindent\diamond$\textbf{Motivation} The rapid advancement of artificial intelligence, sensor technologies such as the Internet of Things (IoT), and wireless communication enables ubiquitous sensing through distributed sensors across many network domains, leading to smart systems in healthcare, transportation, the environment, and other related areas. The collaborative interaction among these smart systems connects end-user devices to one another, enabling a new integrated entity called the Smart City. In the last decade, IoT devices have been interconnected across different independent agents and heterogeneous networks by means of communication technologies \cite{hadi3} \cite{montori2017collaborative}. Connected high-performance sensors and end-user devices, i.e., the Internet of Things, are the trigger that allows networks to transition from conventional urban cities towards smart, sustainable cities. The goal of smart cities is to address the upcoming challenges of conventional cities by offering integrated management systems built on a combination of intelligent infrastructures \cite{bibri2017smart}. Making decisions in smart cities is challenging because of the many high-dimensional factors and parameters that influence them directly or indirectly. In this paper, we focus on one important branch of smart cities, namely Smart Mobility, and its considerable positive impact on smart city decision-making processes.
Intelligent decision-making systems in smart mobility offer many advantages, such as saving energy \cite{hadi2}, relieving city traffic \cite{an2020traffic}, and, more importantly, reducing air pollution \cite{hadi1} by providing real-time, useful information and actionable knowledge. Making an optimal, timely decision in smart mobility, with its wide variety of smart devices and systems, is challenging: a reliable decision cannot be made when the underlying data are not collected frequently enough. Consequently, training decision-support systems remains difficult due to the lack of data \cite{mohammadi2020introduction}. In this paper, we first address current challenges in smart cities and provide an overview of potential solutions. We then organize these solutions into a framework, called universal smart city decision making, with three main components, data capturing, data analysis, and decision making, to optimize smart mobility within smart cities. Interestingly, large cities are shedding their conventional urban character and turning into smart ones. Smart cities are growing thanks to advanced technology, especially Artificial Intelligence (AI). The more people live in large cities, the greater the need for an integrated system that handles rapid urbanization cost-efficiently. Population growth creates smart-development challenges in such cities and puts enormous pressure on society to create innovative, smart, and sustainable environments. Therefore, today's developed cities (or smart cities) need integrated smart policies and novel solutions that enhance monitoring functionality in order to improve urban living conditions \cite{neves2020impacts}. Smart cities are designed to enable advanced capabilities, such as sustainable energy systems and electrified transportation networks, and to interact with information and communication technologies (ICTs) to enhance the efficiency of the resulting cities \cite{hadi4, park2021emerging}. One example of how smart cities respond to new demands is the progress being made in smart healthcare systems for emergency care. To allow hospitals to monitor their patients and let specialists offer better treatment, a smart healthcare system is needed \cite{hossain2019smart} \cite{ pacheco2019smart}. Developing such a system requires healthcare networks, i.e., the inter- and intra-connections among healthcare components such as IoT devices and sensors, to enhance patient monitoring and the services that follow from it \cite{ellaji2020efficient, hadi2}. The performance of such a system depends heavily on the quality of this network communication and on online synchronization with other associated networks (other smart systems in the smart city) that contribute to service operation. For example, a healthcare network may draw on another network's resources, such as energy, when it cannot run an operation due to an insufficient power supply. In this situation, the best solution is to connect the healthcare network with other networks, for the benefit of both patients' health and service-management reliability \cite{hadi2}. In the process of turning cities into smart cities, we can identify several representative features.
In a research study \cite{tran2019application}, the authors investigated ten years of work, from 2008 to 2018, and found that smart cities share a number of common features. The most frequently cited features \cite{tran2019application, de2020development, hadi4} are smart economy, smart people, smart governance, smart mobility, smart environment, smart energy, and smart living; they are shown in figure \ref{fig:smart_Cities}. As the figure illustrates, although each feature matters to smart cities in its own right, they are all inter-related, and each has a direct or indirect impact on the others. The decision-making approaches proposed for smart cities include Multi-Criteria Decision Making (MCDM), Mathematical Programming (MP), Artificial Intelligence (AI), and Integrated Methods (IM). In this study, we concentrate on AI solutions, particularly machine learning and deep learning approaches, for smart mobility. \begin{figure} \caption{Smart cities main features} \label{fig:smart_Cities} \end{figure} \section{Role of Machine Learning in Smart Cities} Advances in sensors and Internet of Things (IoT) devices make it essential to leverage Artificial Intelligence, particularly machine learning, to model the resulting data for further applications \cite{boulos2019overview, al2020intelligence}. IoT devices are among the most important and unavoidable components of smart cities. They produce huge amounts of data, depending on the applications they serve, such as healthcare and transportation \cite{ullah2020applications}. IoT technologies have proliferated in many fields, such as smart healthcare \cite{ellaji2020efficient, manikandan2020hash} and smart home systems like Alexa \cite{khatri2018advancing, rak2020systematic}, particularly in urban areas, turning them into smart cities. The massive use of IoT technologies therefore plays a pivotal role in generating big data, which in turn requires solutions for analyzing and keeping track of smart city activities. This big data analysis \cite{elhoseny2018framework} provides invaluable knowledge for integrating all smart city sources, such as IoT devices and networks. As smart cities and their data grow, the analysis process may become a bottleneck for future decision making. In the next section, we address these problems and discuss the solutions that have been proposed so far. Researchers in \cite{ju2018citizen} proposed leveraging citizen-centered big data analysis in smart cities. Their contribution is a path for implementing citizen-centered big data analysis for decision making, with two key perspectives: data-analysis algorithms and urban-governance issues \cite{ju2018citizen}. Furthermore, researchers in \cite{bhattacharya2020review} surveyed several deep learning applications in smart cities, such as smart governance, smart urban modeling, smart education, smart transportation, intelligent infrastructures, and smart health, and also addressed the challenges of applying deep learning to smart cities. Nevertheless, decision making in smart cities remains challenging. In the next section, we address these problems and highlight possible solutions.
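To make the notion of heterogeneous IoT data streams more concrete, the short Python sketch below shows one possible way to collect readings from different device types into a single table. It is only an illustration: the sensor identifiers, domains, and values are invented and are not taken from any of the surveyed systems.
\begin{verbatim}
from dataclasses import dataclass, asdict
import pandas as pd

@dataclass
class SensorReading:
    sensor_id: str   # hypothetical identifier, e.g. "cam-03" or "thermo-12"
    domain: str      # smart-city domain the reading belongs to
    timestamp: str   # ISO-8601 string, kept simple for the example
    value: float     # numeric payload after domain-specific decoding

readings = [
    SensorReading("thermo-12", "environment", "2021-05-01T08:00:00", 21.4),
    SensorReading("cam-03", "mobility", "2021-05-01T08:00:00", 37.0),  # vehicles/min
    SensorReading("thermo-12", "environment", "2021-05-01T09:00:00", 22.1),
]

# Pivot the heterogeneous stream into one table keyed by time, one column per
# sensor: the kind of unified view that later analytics steps can consume.
df = pd.DataFrame([asdict(r) for r in readings])
table = df.pivot_table(index="timestamp", columns="sensor_id", values="value")
print(table)
\end{verbatim}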
\section{Smart Cities Data Analytics Framework} Many research studies have been carried out on smart cities, such as \cite{kumar2018intelligent}, which presents an intelligent decision-computing solution for crowd monitoring. In this section, we examine smart city applications and the foundations their techniques build upon by establishing a universal smart cities data analytics framework, depicted in figure \ref{fig:data_analytics}. The framework has three main sections, Data Capturing, Data Analysis, and Decision Making, and we elaborate on each in a separate subsection. \subsection{Data Capturing} Smart cities are flooded with data, so a management department is needed to control and monitor the city; relying on such a department alone, however, is neither cost- nor time-efficient. Fortunately, because the data are grouped automatically by domain (i.e., by smart city feature), each domain generates its own particular big data. IoT devices used to gather healthcare data are completely different from those developed specifically for traffic control, which is why the data must be categorized by domain to build domain-specific big data. Following figure \ref{fig:data_analytics}, we show six representative kinds of sensors and IoT devices: smart phones, smart cameras, smart thermometers, smart users carrying sensors, smart cars, and smart houses. This variety of sensors allows a data center to receive data of different types, ranges, and values from the monitored objects. Below, we highlight the current challenges and the corresponding solutions. $\noindent \diamond$ \textbf{Challenges:} Data gathering in smart cities has improved but remains challenging, because there is not enough equipment in every location we wish to monitor and make decisions about, for example for relaying traffic information. Regardless of the focus, whether smart healthcare, mobility, or another area of interest, IoT devices and other sensors may still fail to capture all data correctly, or may miss data entirely, because of their limited storage and coarse time scales. In addition, data volumes have grown significantly and the data come from heterogeneous sources. The data types therefore vary, from video and images to numbers or strings, and dedicated procedures are needed to convert all of them into a single, consistent representation. Such a representation allows machine learning and deep learning algorithms to be run on the data readily in order to make optimal decisions. $\noindent \diamond$ \textbf{Solutions:} Pre-processing plays a pivotal role in managing missing information and values in the generated dataset. There are basic and advanced approaches \cite{usman2019survey} for handling missing values, as well as tools and techniques for selecting important and relevant features, such as IFAB \cite{mohammadi2014image}, which uses an Artificial Bee Colony algorithm to discard irrelevant features and to ensure the soundness and reproducibility of the results \cite{farzanstat2019}. To handle heterogeneity, Data Engineering \cite{elhoseny2018framework} is responsible for managing and analyzing the input data and for labeling data that arrive unlabeled. Doing this manually requires experts and time, and is therefore neither cost- nor time-efficient; leveraging big data algorithms helps to prepare and analyze the data properly.
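As a minimal illustration of the pre-processing step described above, the following Python sketch imputes missing sensor values and drops an uninformative feature with scikit-learn. The column names and readings are hypothetical, and the variance filter is only a crude stand-in for the feature-selection methods (such as IFAB) cited in the text.
\begin{verbatim}
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import VarianceThreshold

# Toy sensor table with gaps (NaN marks readings the devices failed to capture).
raw = pd.DataFrame({
    "traffic_flow": [120, np.nan, 95, 110],   # vehicles per hour (hypothetical)
    "temperature":  [21.0, 21.5, np.nan, 20.8],
    "stuck_sensor": [0.0, 0.0, 0.0, 0.0],     # constant, uninformative column
})

# 1) Fill missing values with a per-column mean (a basic approach; the survey
#    also points to more advanced imputation methods).
imputed = pd.DataFrame(
    SimpleImputer(strategy="mean").fit_transform(raw), columns=raw.columns
)

# 2) Drop features that carry no information (zero variance), a crude stand-in
#    for the feature-selection step mentioned in the text.
selector = VarianceThreshold(threshold=0.0)
selector.fit(imputed)
print(imputed.columns[selector.get_support()].tolist())  # surviving features
\end{verbatim}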
\begin{figure} \caption{Smart cities data analytics framework} \label{fig:data_analytics} \end{figure} \subsection{Data Analysis} The promise of smart cities brings with it an ample proliferation of data from every domain and branch, and such huge amounts of data are at the core of the services generated by IoT technologies \cite{hashem2016role}. The Data Analysis section of the framework is essential because its results drive proper decision making; if this step is not carried out well, the resulting decisions will not be effective. A large body of research has therefore focused on improving this process. In the early era of smart cities, only limited data were generated each day because of the scarcity of sensors, so typical machine learning algorithms were sufficient to build a model that could handle the situation and provide enough information for a decision. Today, however, the number of sensors and IoT objects has proliferated, and the resulting volumes of data require big data frameworks such as Spark and Hadoop \cite{mohammadi2020Covid}. In addition, because of the sheer quantity of data, researchers have turned to deep learning, especially transfer learning and meta-learning \cite{mohammadi2020introduction}, and to other well-known learning techniques within reinforcement learning, such as Q-learning \cite{chakole2021q, liao2020fast,fan2020theoretical}, to build smart systems \cite{boussakssou2020towards, joo2020traffic}. \subsubsection{Big Data Algorithms and challenges} With the big data revolution, enormous volumes of high-performance computation are unavoidable in smart cities, and big data algorithms have become one of their key functioning pieces. $\noindent \diamond$ \textbf{Transportation System:} Transportation systems are built with advanced technologies, and the applications of big data in them are varied and important \cite{wang2021review}. Research studies describe scenarios that offer citizens the following options: suggesting the best travel time for a given trip, providing real-time traffic information, predicting movement patterns from personal (daily routing) or spatio-temporal routines, enhancing crash analysis, planning bus routes, improving taxi dispatch, and optimizing travel time during large events \cite{wang2021review}. For example, Zhu \emph{et al.} \cite{zhu2018big} developed an approach that runs two algorithms, Bayesian inference and random forest, in real time to predict the probability of a crash and thereby decrease crash risks in smart cities. Moreover, researchers in \cite{xiong2014novel} combined supervised and regression algorithms, such as multivariate adaptive regression splines, regression trees, and logistic regression, to study motor vehicle accident injury behaviour. $\noindent \diamond$ \textbf{Urban Governance:} Learning from human mobility patterns reveals the movement and trends of a large population; the most popular applications are in domains such as crime prediction \cite{wang2021review}, disaster evacuation, big-event management, and safety estimation. To that end, knowledge about big events derived from bus trajectories, including event start and end times, can be valuable for event planning and management departments \cite{mazimpaka2017they}.
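The sketch below illustrates, in Python, the kind of supervised crash-risk model discussed above. It is not the pipeline of \cite{zhu2018big}; the road-segment features, the synthetic labels, and the thresholds are all invented for the example, which simply trains a random forest and scores held-out segments.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical road-segment features: traffic volume, mean speed, rain flag.
X = np.column_stack([
    rng.normal(100, 30, n),    # vehicles per 5 minutes
    rng.normal(60, 15, n),     # average speed (km/h)
    rng.integers(0, 2, n),     # raining (0/1)
])
# Synthetic crash label: more likely with heavy traffic, high speed, and rain.
risk = 0.02 * (X[:, 0] - 100) + 0.03 * (X[:, 1] - 60) + 1.0 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]     # crash probability per segment
print("AUC:", round(roc_auc_score(y_te, proba), 3))
\end{verbatim}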
\subsubsection{Machine learning process and challenges} A learning process based on insufficient input data does not produce reliable and robust results \cite{mohammadi2020introduction}. Machine learning algorithms can easily get stuck in local minima or maxima when trained on large amounts of data, which leads to the problem known as over-fitting \cite{mohammadi2020introduction}. Furthermore, machine learning algorithms fail to learn states (classes) whose number of occurrences in the data is far smaller than the others. For instance, in the smart healthcare domain, the probability of capturing samples of all desired classes is low, and the number of examples of rare diseases may be insufficient for learning. $\noindent \diamond$ \textbf{Solutions:} In \cite{mohammadi2019promises}, researchers established an approach called Meta-Sense, i.e., learn to sense rather than sense to learn. This approach takes advantage of one of the important promises of data analytics, zero-shot learning (ZSL) \cite{wei2020lifelong, mohammadi2020introduction}. There are many applications of ZSL in computer vision \cite{mohammadi2019parameter} and in motion detection for healthcare \cite{wijekoon2018zero}. Furthermore, researchers in \cite{yu2020zero} proposed a ZSL-based solution called Zero-VIRUS that gives an intelligent transportation system a deep understanding of traffic in order to generate the best route for drivers. Zero-VIRUS does not need any vehicle-tracklet annotation, which makes it highly practical for real-world traffic scenarios. $\noindent \diamond$ \textbf{Machine learning statistics:} Iskandaryan \emph{et al.} \cite{hadi1} surveyed studies that use supervised machine learning on sensor data to analyze air quality in smart cities. They identified four popular categories of algorithms, ranked by the number of research studies: neural networks (NN) first, logistic regression second, ensemble algorithms third, followed by the rest. The results in \cite{hadi1} also show that the number of publications applying machine learning to smart cities, particularly to the smart environment through predicting and preventing air-pollution risks, has increased steadily. Additionally, the authors of \cite{hadi1} examined the metrics used in these publications to evaluate each algorithm; the two most popular metrics are Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). \subsubsection{Deep learning process and challenges} Feasible smart cities are built on technology-driven foundations, and their initiatives span different branches and domains, each of which may require systems with high-performance computing resources and technologies. Such systems have both advantages and drawbacks; among the advantages are saving energy, reducing air pollution, and reducing diagnosis time. One of the most popular technologies for tackling such huge amounts of data is Deep Learning (DL). DL is a family of machine learning algorithms used effectively to extract the required knowledge from input data, uncovering the patterns that govern the data and classifying them. Several research studies have successfully applied DL to smart cities \cite{bhattacharya2020review}, for example in urban modeling, intelligent infrastructure, and smart urban governance.
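To make the evaluation metrics mentioned above concrete, the following Python sketch fits a simple supervised model to synthetic, hourly air-quality-style data and reports RMSE and MAE. The features and the PM2.5-like target are fabricated for illustration and do not come from the surveyed studies.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(1)
n = 1000
# Hypothetical hourly features: traffic volume, wind speed, temperature.
X = np.column_stack([
    rng.normal(100, 25, n),
    rng.normal(10, 3, n),
    rng.normal(20, 5, n),
])
# Synthetic PM2.5-like target: rises with traffic, falls with wind.
y = 30 + 0.2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5   # RMSE, as cited in the survey
mae = mean_absolute_error(y_te, pred)          # MAE, as cited in the survey
print(f"RMSE={rmse:.2f}  MAE={mae:.2f}")
\end{verbatim}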
Here we focus on one particular application of deep learning: smart mobility and transportation. AI has pushed science a step forward by making systems and processes as smart as possible, for example in autonomous transportation systems \cite{bhattacharya2020review} for smart mobility. Deciding whether a detected object is a human being or not is challenging \cite{pandey2018object}. Object detection is one of the difficult problems in smart mobility, and solving it boosts and facilitates automation in transportation systems, which in turn improves smart mobility in smart cities. Researchers in \cite{pandey2018object} analyzed modern object-detection solutions based on deep learning \cite{Asali2020DeepMSRFAN}. They leveraged a well-known object-detection system, YOLO (You Only Look Once), developed earlier by Redmon \emph{et al.} \cite{redmon2016you}, and assessed its performance on real-time data. \subsubsection{Learning process and emerging new types of data problems} In this section, we address further challenges and solutions in data analysis. The first and foremost problem researchers tackle is the lack of data for rare classes in the dataset used to build a model: the fewer samples a class has, the higher the chance it is effectively ignored during training. To handle this problem, meta-learning has come to play an essential role in building models from only a few samples. It offers three important promises: zero-shot learning, one-shot learning, and few-shot learning \cite{mohammadi2020introduction}. Zero-shot learning is the variant of meta-learning in which the training dataset contains no samples at all for some classes, and yet we can still predict those classes at test time. For instance, the work in \cite{yu2020zero} avoids any annotation when processing vehicle tracklets; it establishes a zero-shot route-understanding system for intelligent transportation, Zero-VIRUS, which achieves high effectiveness with zero annotated samples of vehicle tracklets. Further, another study \cite{mohammadi2019promises} introduced a technique called Meta-Sense, the process of learning to sense rather than sensing to learn; it copes with the lack of class samples by learning from learning rather than learning from samples. Working without annotation helps with data whose classes are missing, because typical machine learning algorithms fail to detect unannotated classes, and sensing to learn helps new algorithms predict information that is entirely absent from the dataset itself. These advances bring us closer to (or are part of) zero-shot learning, which is critical for the advancement of smart cities. The second promise is one-shot learning, in which each training epoch uses only one sample per class, consumed by a deep learning algorithm or a combination of neural networks \cite{mohammadi2020introduction}. The third promise, but not the least, is few-shot (k-shot) learning, in which each training epoch uses only a few (k) samples per class, again consumed by a deep learning algorithm or a combination of neural networks \cite{mohammadi2020introduction}.
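A small, self-contained building block behind object-detection evaluation is the intersection-over-union (IoU) score between a predicted box and a ground-truth box. The Python sketch below computes it for two hand-made boxes; the coordinates and the 0.5 threshold convention are illustrative only and are not taken from \cite{pandey2018object} or \cite{redmon2016you}.
\begin{verbatim}
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detector output vs. a hand-labelled pedestrian box (coordinates invented).
predicted = (48, 30, 110, 200)
ground_truth = (50, 35, 115, 210)
print(f"IoU = {iou(predicted, ground_truth):.2f}")  # >= 0.5 usually counts as a hit
\end{verbatim}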
\subsection{Decision Making Problems in Smart Cities} Decision-making problems are becoming challenging in smart cities, where not only the problem at hand but also related problems in other aspects of the city must be analyzed. In addition, decision makers must anticipate the consequences of the decisions they are about to make. Smart cities therefore need decision-making systems that look after all issues within the connected networks while drawing on only the limited information that suffices to make an optimal decision. In this section, we highlight the main decision-making problems and their solutions. \subsubsection{Traffic decision making system} A traffic decision-making system, also known as an intelligent transportation system (ITS), aims to detect traffic flow within a smart city and offer optimal solutions using proper big data analytics \cite{safari2020fast}. Assessing and analyzing this big data plays a pivotal role in decision-making systems and makes the process time- and cost-efficient \cite{zhu2018big}. Researchers have addressed several case studies within traffic decision-making systems, such as road traffic accident analysis \cite{golob2003relationships, xu2015mining}, public transportation management and control, and road traffic flow prediction. $\noindent \diamond$ \textbf{Challenges:} Such systems are fed with data gathered from heterogeneous resources, such as high-performance IoT devices, video detectors, and GPS. They must use big data analysis approaches to evaluate the data online \cite{mohamed2014real} in order to provide efficient and detailed knowledge for decision making. To appreciate the amount of data these systems handle, consider petabyte-scale datasets, which are beyond the capabilities of traditional machine learning analytics \cite{mohamed2014real}. This is due to two issues: traditional data-processing algorithms are not designed for online, real-time monitoring systems, and they fail to learn from such data because of its disorganized, nonstandard structure. $\noindent \diamond$ \textbf{Solutions:} Smart cities, particularly in this domain (i.e., smart mobility, one of the important features shown in figure \ref{fig:smart_Cities}), need traffic decision-making systems that address the aforementioned challenges and provide efficient solutions to them. One of the solutions discussed in \cite{mohamed2014real} is the use of big data analytics approaches, which tackle the challenges by providing sufficient data storage, analysis, and management tools. The most important and promising frameworks and libraries used to analyze this big data include, but are not limited to, Hadoop and Spark. For instance, researchers in \cite{park2020pacc} established a time-efficient and scalable distributed technique, Partition-Aware Connected Components (PACC), for connected-component computation; it relies on three main ideas: edge filtering, two-step processing of partitioning and computation, and sketching. PACC outperforms MapReduce and Spark implementations and is time-efficient on real-world graphs \cite{park2020pacc}. More broadly, big data analysis algorithms facilitate and accelerate the handling of large amounts of data so that enough information is available for making decisions.
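As a toy, single-machine analogue of the connected-component computation that PACC performs at scale, the Python sketch below labels the components of a small road graph with union-find. It is not the PACC algorithm (there is no partitioning, edge filtering, or sketching), and the graph is invented purely for illustration.
\begin{verbatim}
def connected_components(num_nodes, edges):
    """Label each node with a component id using union-find (single-machine toy)."""
    parent = list(range(num_nodes))

    def find(x):                      # path-compressed root lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                  # merge the components of a and b
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for a, b in edges:
        union(a, b)
    return [find(x) for x in range(num_nodes)]

# A tiny road graph: intersections 0-5, edges are road segments (made up).
edges = [(0, 1), (1, 2), (3, 4)]
print(connected_components(6, edges))   # -> [0, 0, 0, 3, 3, 5]
\end{verbatim}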
Traffic decision-making systems built on big data algorithms and technologies help the relevant offices and the people in charge learn drivers' journey patterns within the transportation network, report network-wide trends, and better understand groups of similar drivers \cite{zhu2018big}. Using these insights, the systems can offer drivers the best path to their destination in the minimum possible time. They also help the city relieve traffic by controlling traffic lights, deciding which light should be on or off and for how long. Furthermore, traffic decision-making systems predict the probability of traffic accidents using big data algorithms \cite{zhu2018big}. This, in turn, relies on smart healthcare systems, one of the smart city features shown in figure \ref{fig:smart_Cities}, to help emergency centers speed up rescue operations. \subsubsection{Safe and Smart Environment} Researchers in \cite{an2020traffic} leveraged deep learning algorithms and advanced communication technologies to link vehicles, roads, and drivers in order to facilitate various traffic-related tasks and reduce air pollution \cite{hadi1}. They focused on several initiatives aimed at creating a safe, smart environment and transportation. Further, Zhu \emph{et al.} \cite{zhu2018big} developed an approach that runs Bayesian inference and random forest in real time to predict the probability of a crash and thereby decrease crash risks in smart cities. Moreover, researchers in \cite{xiong2014novel} combined supervised and regression algorithms, such as multivariate adaptive regression splines, classification and regression trees, and logistic regression, in an analytical study of a motor vehicle accident injury dataset. For a safe environment, air-pollution prediction in smart cities plays an imperative role. A significant amount of work has tried to improve this prediction with various machine learning algorithms, and their use for making the environment safer has increased consistently, underlining how important such prediction is for smart cities \cite{hadi1}. We examined the most relevant research studies \cite{hadi1, hadi2, ju2018citizen, xiong2014novel}, among others, which apply different evaluations based on several metrics. This review leads us to the following common observations: first, advanced (deep learning) algorithms are being applied at a growing rate compared with typical machine learning algorithms; second, among the quantities predicted for air pollution, PM2.5 is the most popular; third, the data used for air-pollution prediction are generated hourly rather than daily; and finally, prediction is most effective when the captured air-quality data are merged with relevant data from other networks in the smart city, such as the healthcare network. \section{Conclusion} We have highlighted smart city research branches and technological advances across different complex domains. We proposed a framework, universal smart city decision making, with three main sections: data capturing, data analysis, and decision making. We provided an abstract review of the fundamental concepts of big data, ML, and DL algorithms that have been applied to smart cities.
We also explored the essential role of these algorithms in decision making within smart cities. The goal of this study has been to provide a comprehensive survey of data analytics in smart cities, and more specifically of the role of big data algorithms and other advanced technologies, such as ML and DL, in decision making for smart mobility within smart cities. \end{document}
\begin{document} \newcommand{\longrightarrow}{\longrightarrow} \newcommand{\mathcal{H}}{\mathcal{H}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{K}}{\mathcal{K}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathcal{W}}{\mathcal{W}} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\mathcal{T}}{\mathcal{T}} \newcommand{\mathcal{F}}{\mathcal{F}} \newtheorem{definition}{Definition}[section] \newtheorem{defn}[definition]{Definition} \newtheorem{lem}[definition]{Lemma} \newtheorem{prop}[definition]{Proposition} \newtheorem{thm}[definition]{Theorem} \newtheorem{cor}[definition]{Corollary} \newtheorem{cors}[definition]{Corollaries} \newtheorem{example}[definition]{Example} \newtheorem{examples}[definition]{Examples} \newtheorem{rems}[definition]{Remarks} \newtheorem{rem}[definition]{Remark} \newtheorem{notations}[definition]{Notations} \theoremstyle{remark} \theoremstyle{remark} \theoremstyle{remark} \theoremstyle{notations} \theoremstyle{remark} \theoremstyle{remark} \theoremstyle{remark} \newtheorem{dgram}[definition]{Diagram} \theoremstyle{remark} \newtheorem{fact}[definition]{Fact} \theoremstyle{remark} \newtheorem{illust}[definition]{Illustration} \theoremstyle{remark} \theoremstyle{definition} \newtheorem{question}[definition]{Question} \theoremstyle{definition} \newtheorem{conj}[definition]{Conjecture} \title{\textbf{SS-Injective Modules and Rings}} \author{\textbf{Adel Salim Tayyah} \\ Department of Mathematics, College of Computer Science and \\Information Technology, Al-Qadisiyah University, Al-Qadisiyah, Iraq \\Email: [email protected] \\ \\\textbf{Akeel Ramadan Mehdi}\\ Department of Mathematics, College of Education, \\Al-Qadisiyah University, P. O. Box 88, Al-Qadisiyah, Iraq \\Email: akeel\[email protected]} \date{\today} \maketitle \begin{abstract} \,We introduce and investigate ss-injectivity as a generalization of both soc-injectivity and small injectivity. A module $M$ is said to be ss-$N$-injective \,(where $N$ is a module) \,if \, every \, $R$-homomorphism from a semisimple small submodule of $N$ into $M$ extends to $N$. A module $M$ is said to be ss-injective (resp. strongly ss-injective), if $M$ is ss-$R$-injective (resp. ss-$N$-injective for every right $R$-module $N$). Some characterizations and properties of (strongly) ss-injective modules and rings are given. Some results of Amin, Yuosif and Zeyada on soc-injectivity are extended to ss-injectivity. Also, we provide some new characterizations of universally mininjective rings, quasi-Frobenius rings, Artinian rings and semisimple rings. \end{abstract} $\vphantom{}$ \noindent \textbf{Key words and phrases:} Small injective rings (modules); soc-injective rings (modules); SS-Injective rings (modules); Perfect rings; quasi-Frobenius rings. $\vphantom{}$ \noindent \textbf{2010 Mathematics Subject Classification:} Primary: 16D50, 16D60, 16D80 ; Secondary: 16P20, 16P40, 16L60 . $\vphantom{}$ \noindent \textbf{$\ast$} The results of this paper will be part of a MSc thesis of the first author, under the supervision of the second author at the University of Al-Qadisiyah. $\vphantom{}$ \section{Introduction} Throughout this paper, $R$ is an associative ring with identity, and all modules are unitary $R$-modules. 
For a right $R$-module $M$, we write soc$(M)$, $J(M)$, $Z(M)$, $Z_{2}(M)$, $E(M)$ and End$(M)$ for the socle, the Jacobson radical, the singular submodule, the second singular submodule, the injective hull and the endomorphism ring of $M$, respectively. Also, we use $S_{r}$, $S_{\ell}$, $Z_{r}$, $Z_{\ell}$, $Z_{2}^{r}$ and $J$ to indicate the right socle, the left socle, the right singular ideal, the left singular ideal, the right second singular ideal, and the Jacobson radical of $R$, respectively. For a submodule $N$ of $M$, we write $N\subseteq^{ess}M$, $N\ll M$, $N\subseteq^{\oplus}M$, and $N\subseteq^{max}M$ to indicate that $N$ is an essential submodule, a small submodule, a direct summand, and a maximal submodule of $M$, respectively. If $X$ is a subset of a right $R$-module $M$, the right (resp. left) annihilator of $X$ in $R$ is denoted by $r_{R}(X)$ (resp. $l_{R}(X)$). If $M=R$, we write $r_{R}(X)=r(X)$ and $l_{R}(X)=l(X)$. Let $M$ and $N$ be right $R$-modules. Then $M$ is called soc-$N$-injective if every $R$-homomorphism from soc$(N)$ into $M$ extends to $N$. A right $R$-module $M$ is called soc-injective if $M$ is soc-$R$-injective, and strongly soc-injective if $M$ is soc-$N$-injective for every right $R$-module $N$ \cite{2AmYoZe05}. Recall that a right $R$-module $M$ is called mininjective \cite{14NiYo97} (resp. small injective \cite{19ThQu09}, principally small injective \cite{20Xia11}) if every $R$-homomorphism from any simple (resp. small, principally small) right ideal to $M$ extends to $R$. A ring is called right mininjective (resp. small injective, principally small injective) if it is right mininjective (resp. small injective, principally small injective) as a right $R$-module. A ring $R$ is called right Kasch if every simple right $R$-module embeds in $R$ (see, for example, \cite{15NiYu03}). Recall that a ring $R$ is called semilocal if $R/J$ is semisimple \cite{11Lom99}. Also, a ring $R$ is said to be right perfect if every right $R$-module has a projective cover. Recall that a ring $R$ is said to be quasi-Frobenius (or $QF$) if it is right (or left) artinian and right (or left) self-injective; equivalently, every injective right $R$-module is projective. In this paper, we introduce and investigate the notions of ss-injective and strongly ss-injective modules and rings. Examples are given to show that (strong) ss-injectivity is distinct from mininjectivity, principally small injectivity, small injectivity, simple J-injectivity, and (strong) soc-injectivity. Some characterizations and properties of (strongly) ss-injective modules and rings are given. W. K. Nicholson and M. F. Yousif \cite{14NiYo97} introduced the notion of a universally mininjective ring: a ring $R$ is called right universally mininjective if $S_{r}\cap J=0$. In Section 2, we show that $R$ is a right universally mininjective ring if and only if every simple right $R$-module is ss-injective. We also prove that if $M$ is a projective right $R$-module, then every quotient of an ss-$M$-injective right $R$-module is ss-$M$-injective if and only if every sum of two ss-$M$-injective submodules of a right $R$-module is ss-$M$-injective if and only if Soc$(M)\cap J(M)$ is projective. Also, some results are given in terms of ss-injective modules.
For example, if every simple singular right $R$-module is ss-injective, then $S_{r}$ is projective and $r(a)\subseteq^{\oplus}R_{R}$ for all $a\in S_{r}\cap J$. Moreover, if $M$ is a finitely generated right $R$-module, then Soc$(M)\cap J(M)$ is finitely generated if and only if every direct sum of ss-$M$-injective right $R$-modules is ss-$M$-injective if and only if every direct sum of $\mathbb{N}$ copies of an ss-$M$-injective right $R$-module is ss-$M$-injective. In Section 3, we show that a right $R$-module $M$ is strongly ss-injective if and only if, for every small submodule $A$ of a right $R$-module $N$, every $R$-homomorphism $\alpha:A\longrightarrow M$ with $\alpha(A)$ semisimple extends to $N$. In particular, $R$ is semiprimitive if every simple right $R$-module is strongly ss-injective, but not conversely. We also prove that if $R$ is a right perfect ring, then a right $R$-module $M$ is strongly soc-injective if and only if $M$ is strongly ss-injective. This extends the results of \cite[Theorem 3.6 and Proposition 3.7]{2AmYoZe05}. We prove that a ring $R$ is right artinian if and only if every direct sum of strongly ss-injective right $R$-modules is injective, and that $R$ is a $QF$ ring if and only if every strongly ss-injective right $R$-module is projective. In Section 4, we extend the results \cite[Proposition 4.6 and Theorem 4.12]{2AmYoZe05} from soc-injective rings to ss-injective rings (see Proposition~\ref{Proposition:(4.14)} and Corollary~\ref{Corollary:(4.18)}). In Section 5, we show that a ring $R$ is $QF$ if and only if $R$ is strongly ss-injective and right noetherian with essential right socle, if and only if $R$ is strongly ss-injective, $l(J^{2})$ is a countably generated left ideal, $S_{r}\subseteq^{ess}R_{R}$, and the chain $r(x_{1})\subseteq r(x_{2}x_{1})\subseteq...\subseteq r(x_{n}x_{n-1}...x_{1})\subseteq...$ terminates for every infinite sequence $x_{1},x_{2},...$ in $R$ (see Theorem~\ref{Theorem:(5.10)} and Theorem~\ref{Theorem:(5.12)}). Finally, we prove that a ring $R$ is $QF$ if and only if $R$ is strongly left and right ss-injective, left Kasch, and $J$ is left $t$-nilpotent (see Theorem~\ref{Theorem:(5.16)}), extending a result of I. Amin, M. Yousif and N. Zeyada \cite[Proposition 5.8]{2AmYoZe05} on strongly soc-injective rings. General background material can be found in \cite{3AnFu74}, \cite{9Kas82} and \cite{10Lam99}. \section{SS-Injective Modules} \begin{defn}\label{Definition:(2.1)(a)} Let $N$ be a right $R$-module. A right $R$-module $M$ is said to be ss-$N$-injective if, for any semisimple small submodule $K$ of $N$, any right $R$-homomorphism $f:K\longrightarrow M$ extends to $N$. A module $M$ is said to be ss-quasi-injective if $M$ is ss-$M$-injective, and ss-injective if $M$ is ss-$R$-injective. A ring $R$ is said to be right ss-injective if the right $R$-module $R_{R}$ is ss-injective. \end{defn} \begin{defn}\label{Definition:(2.1(b))} A right $R$-module $M$ is said to be strongly ss-injective if $M$ is ss-$N$-injective for every right $R$-module $N$. A ring $R$ is said to be strongly right ss-injective if the right $R$-module $R_{R}$ is strongly ss-injective.
\end{defn} \begin{example}\label{example:(2.2)} \noindent \emph{(1) Every soc-injective module is ss-injective, but not conversely (see Example~\ref{Example:(5.8)}).} \noindent\emph{(2) Every small injective module is ss-injective, but not conversely (see Example~\ref{Example:(5.6)}).} \noindent\emph{(3) Every $\mathbb{Z}$-module is ss-injective. In fact, if $M$ is a $\mathbb{Z}$-module, then $M$ is small injective (by \cite[Theorem 2.8]{19ThQu09}) and hence it is ss-injective.} \noindent\emph{(4) The two classes of principally small injective rings and ss-injective rings are different (see \cite[Example 5.2]{15NiYu03}, Example~\ref{Example:(4.4)} and Example~\ref{Example:(5.6)}).} \noindent\emph{(5) Every strongly soc-injective module is strongly ss-injective, but not conversely (see Example~\ref{Example:(5.8)}).} \noindent\emph{(6) Every strongly ss-injective module is ss-injective, but not conversely (see Example~\ref{Example:(5.7)}).} \end{example} \begin{thm}\label{Theorem:(2.3)} The following statements hold: \noindent (1) Let $N$ be a right $R$-module and let $\left\{ M_{i}:i\in I\right\}$ be a family of right $R$-modules. Then the direct product $\prod_{i\in I}M_{i}$ is ss-$N$-injective if and only if each $M_{i}$ is ss-$N$-injective, for all $i\in I$. \noindent (2) Let $M$, $N$ and $K$ be right $R$-modules with $K\subseteq N$. If $M$ is ss-$N$-injective, then $M$ is ss-$K$-injective. \noindent (3) Let $M$, $N$ and $K$ be right $R$-modules with $M\cong N$. If $M$ is ss-$K$-injective, then $N$ is ss-$K$-injective. \noindent (4) Let $M$, $N$ and $K$ be right $R$-modules with $K\cong N$. If $M$ is ss-$K$-injective, then $M$ is ss-$N$-injective. \noindent (5) Let $M$, $N$ and $K$ be right $R$-modules such that $N$ is a direct summand of $M$. If $M$ is ss-$K$-injective, then $N$ is ss-$K$-injective. \end{thm} \begin{proof} Clear. \end{proof} \begin{cor}\label{Corollary:(2.4)} \noindent(1) If $N$ is a right $R$-module, then a finite direct sum of ss-$N$-injective modules is again ss-$N$-injective. Moreover, a finite direct sum of ss-injective (resp. strongly ss-injective) modules is again ss-injective (resp. strongly ss-injective). \noindent(2) A direct summand of an ss-quasi-injective (resp. ss-injective, strongly ss-injective) module is again ss-quasi-injective (resp. ss-injective, strongly ss-injective). \end{cor} \begin{proof} (1) By taking the index set $I$ to be finite and applying Theorem~\ref{Theorem:(2.3)}(1). (2) This follows from Theorem~\ref{Theorem:(2.3)}(5). \end{proof} \begin{lem}\label{Lemma:(2.5)} Every ss-injective right $R$-module is right mininjective. \end{lem} \begin{proof} Let $I$ be a simple right ideal of $R$. By \cite[Lemma 3.8]{16Pas04}, either $I$ is nilpotent or $I$ is a direct summand of $R$. If $I$ is a direct summand of $R$, then every $R$-homomorphism from $I$ to any module extends to $R$. If $I$ is nilpotent, then $I\subseteq J$ by \cite[Corollary 6.2.8]{6Bla11}, so $I$ is a semisimple small right ideal of $R$ and every $R$-homomorphism from $I$ to an ss-injective module extends to $R$. Thus every ss-injective right $R$-module is right mininjective. \end{proof} It is easy to prove the following proposition. \begin{prop}\label{Proposition:(2.6)} Let $N$ be a right $R$-module. If $J(N)$ is a small submodule of $N$, then a right $R$-module $M$ is ss-$N$-injective if and only if any $R$-homomorphism $f:soc(N)\cap J(N)\longrightarrow M$ extends to $N$.
\end{prop} \begin{prop}\label{Proposition:(2.8)} Let $N$ be a right $R$-module and $\left\{ {\color{black}A_{i}:i=1,2,...,n}\right\} $ be a family of finitely generated right $R$-modules. Then $N$ is ss-$\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}A_{i}$-injective if and only if $N$ is ss-$\mathit{A_{i}}$-injective, for all $\mathit{i}=1,2,...,n$. \end{prop} \begin{proof} ($\Rightarrow$) This follows from Theorem~\ref{Theorem:(2.3)}((2),(4)). $\!\!\!\!\!\!\!\!$($\Leftarrow$) By \cite[Proposition (I.4.1) and Proposition (I.1.2)]{5BiKeNe82} we have soc($\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}A_{i})\cap J(\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}A_{i})=($soc$\,\cap \,J)(\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}A_{i})$ $=\mathit{\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}}($soc$\,\cap\, J)(\mathit{A_{i}})=\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}($soc$(\mathit{A_{i}})\,\cap\,J(\mathit{A_{i}}))$. For $j=1,2,...,n$, consider the following diagram: \[ \xymatrix{ K_{j}=soc(\mathit{A_{j}})\,\cap\,J(\mathit{A_{j}}) \,\ar[d]_{i_{K_{j}}}\ar@{^{(}->}[r]^{\qquad\qquad i_{2}} \ar[r] & A_{j} \ar[d]^{i_{A_{j}}} \\ \overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}(soc(\mathit{A_{i}})\,\cap\,J(\mathit{A_{i}})) \,\ar[d]_{f}\ar@{^{(}->}[r]^{\qquad\qquad i_{1}} \ar[r] &{\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}\mathit{A_{i}}}\\N } \] \noindent where \,$i_{1}$, \,$i_{2}$ \, are inclusion maps and \, $i_{K_{j}}$, \, $i_{A_{j}}$ \, are injection maps. \, By hypothesis, \, there exists an $R$-homomorphism $h_{j}:\mathit{A_{j}\longrightarrow N}$ such that $\mathit{h_{j}\circ i_{2}=f\circ i_{K_{j}}}$, also there exists exactly one homomorphism $h:\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}A_{i}\longrightarrow N$ satisfying $\mathit{h_{j}}=h\circ i_{A_{j}}$ by \cite[Theorem 4.1.6(2)]{9Kas82}. Thus $\mathit{f\circ i_{K_{j}}}=h_{j}\circ i_{2}=h\circ i_{A_{j}}\circ i_{2}=h\circ i_{1}\circ i_{K_{j}}$ for all $\mathit{j}=1,2,...,n$. Let $(\mathit{a_{1},a_{2},}...,a_{n})$$\in\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}($soc$(A_{i})\cap J(A_{i}))$, thus $\mathit{a_{j}\in}$soc$(A_{j})\cap J(A_{j})$, for all $\mathit{i}=1,2,...,n$ and, $f(\mathit{a_{1},a_{2},}...,a_{n})=f(\mathit{i_{K_{1}}}(a_{1}))+$$f(\mathit{i_{K_{2}}}(a_{2}))+...+$$f(\mathit{i_{K_{n}}}(a_{n}))=(\mathit{h\circ i_{1}})(\mathit{a_{1},a_{2},}...,a_{n})$. Thus $\mathit{f=h\circ i_{1}}$ and the proof is complete. \end{proof} \begin{cor}\label{Corollary:(2.10)} Let $M$ be a right $R$-module and $1=\mathit{e_{1}}+e_{2}+...+e_{n}$ in $R$ such that $\mathit{e_{i}}$ are orthogonal idempotent. Then $M$ is ss-injective if and only if $M$ is ss-$\mathit{e_{i}R}$-injective for every $\mathit{i=1,2,}...,n$. \noindent (2) For idempotents $e$ and $f$ of $R$. If $\mathit{eR\cong f}R$ and $M$ is ss-$\mathit{eR}$-injective, then $M$ is ss-$\mathit{fR}$-injective. \end{cor} \begin{proof} \noindent (1) From \cite[Corollary 7.3]{3AnFu74}, we have $\mathit{R=\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}}e_{i}R$, thus it follows from Proposition~\ref{Proposition:(2.8)} that $M$ is ss-injective if and only if $M$ is ss-$\mathit{e_{i}}R$-injective for all 1$\leq i\leq n$. 
(2) This follows from Theorem~\ref{Theorem:(2.3)}(4). \end{proof} \begin{prop}\label{Proposition:(2.9)} A right $R$-module $M$ is ss-injective if and only if $M$ is ss-$P$-injective, for every finitely generated projective right $R$-module $P$. \end{prop} \begin{proof} ($\Rightarrow$) Let $M$ be an ss-injective $R$-module, thus it follows from Proposition~\ref{Proposition:(2.8)} that $M$ is ss-$\mathit{R^{n}}$-injective for any $\mathit{n\in\mathbb{\mathbb{Z^{\dotplus}}}}$. Let $P$ be a finitely generated projective $R$-module, thus by \cite[Corollary 5.5]{1AdWe92}, we have that $P$ is a direct summand of a module isomorphic to $\mathit{R^{m}}$ for some $\mathit{m\in\mathbb{Z^{\dotplus}}}$. Since $M$ is ss-$\mathit{R^{m}}$-injective, thus $M$ is ss-$P$-injective by Theorem~\ref{Theorem:(2.3)}((2),(4)). ($\Leftarrow$) By the fact that $R$ is projective. \end{proof} \begin{prop}\label{Proposition:(2.11)} The following statements are equivalent for a right $R$-module $M$. \noindent(1) Every right $R$-module is ss-$M$-injective. \noindent(2) Every simple submodule of $M$ is ss-$M$-injective. \noindent(3) \emph{soc}$(M)\cap J(M)=0$. \end{prop} \begin{proof} (1) $\Rightarrow$ (2) and (3) $\Rightarrow$ (1) are obvious. (2) $\Rightarrow$ (3) Assume that soc$(M)\cap J(M)\neq 0$, thus soc$(M)\cap J(M)=\underset{i\in I}{\bigoplus}x_{i}R$ where $x_{i}R$ is a simple small submodule of $M$, for each $i\in I$. Therefore, $x_{i}R$ is ss-$M$-injective for each $i\in I$ by hypothesis. For any $i\in I$, the inclusion map from $x_{i}R$ to $M$ is split, so we have that $x_{i}R$ is a direct summand of $M$. Since $x_{i}R$ is small submodule of $M$, thus $x_{i}R=0$ and hence $x_{i}=0$ for all $i\in I$ and this a contradiction. \end{proof} \begin{lem}\label{Lemma:(2.13)} Let $M$ be an ss-quasi-injective right $R$-module and $S=\emph{End}(M_{R})$, then the following statements hold: \noindent(1) $l_{M}r_{R}(m)=Sm$ for all $m\in$ \emph{soc}$(M)\cap J(M)$. \noindent(2) $r_{R}(m)\subseteq r_{R}(n)$, where $m\in$ \emph{soc}$(M)\cap J(M)$, $n\in M$ implies $Sn\subseteq Sm$. \noindent(3) $l_{S}(mR\cap r_{M}(\alpha))=l_{S}(m)+S\alpha$, where $m\in $ \emph{soc}$(M)\cap J(M)$, $\alpha\in S$. \noindent(4) If $kR$ is a simple submodule of $M$, then $Sk$ is a simple left $S$-module, for all $k\in J(M)$. Moreover, \emph{soc}$(M)\cap J(M)\subseteq $\,\,\emph{soc}$(_{S}M)$. \noindent(5) \emph{soc}$(M)\cap J(M)\subseteq r_{M}(J($$_{S}S))$. \noindent(6) $l_{S}(A\cap B)=l_{S}(A)+l_{S}(B)$, for every semisimple small right submodules $A$ and $B$ of $M$. \end{lem} \begin{proof} (1) Let $n\in l_{M}r_{R}(m)$, thus $r_{R}(m)\subseteq r_{R}(n)$. Now, let $\gamma:mR\longrightarrow M$ is given by $\gamma(mr)=nr$, thus $\gamma$ is a well define $R$-homomorphism. By hypothesis, there exists an endomorphism $\beta$ of $M$ such that $\beta_{|mR}=\gamma$. Therefore, $n=\gamma(m)=\beta(m)\in Sm$, that is $l_{M}r_{R}(m)\subseteq Sm$. The inverse inclusion is clear. (2) Let $n\in M$ and $m\in$ soc$(M)\cap J(M)$. Since $r_{R}(m)\subseteq r_{R}(n)$, then $n\in l_{M}r_{R}(m)$. By (1), we have $n\in Sm$ as desired. (3) If $f\in l_{S}(m)+S\alpha$, then $f=f_{1}+f_{2}$ such that $f_{1}(m)=0$ and $f_{2}=g\alpha$, for some $g\in S$. For all $n\in mR\cap r_{M}(\alpha)$, we have $n=mr$ and $\alpha(n)=0$ for some $r\in R$. Since $f_{1}(n)=f_{1}(mr)=f_{1}(m)r=0$ and $f_{2}(n)=g(\alpha(n))=g(0)=0$, thus $f\in l_{S}(mR\cap r_{M}(\alpha))$ and this implies that $l_{S}(m)+S\alpha\subseteq l_{S}(mR\cap r_{M}(\alpha))$. Now, we will prove that the other inclusion. 
Let $g\in l_{S}(mR\cap r_{M}(\alpha)).$ If $r\in r_{R}(\alpha(m))$, then $\alpha(mr)=0$, so $mr\in mR\cap r_{M}(\alpha)$ which yields $r_{R}(\alpha(m))\subseteq r_{R}(g(m))$. Since $m\in$ soc$(M)\cap J(M)$, thus $\alpha(m)\in$ soc$(M)\cap J(M)$. By (2), we have that $g(m)=\gamma\alpha(m)$ for some $\gamma\in S$. Therefore, $g-\gamma\alpha\in l_{S}(m)$ which leads to $g\in l_{S}(m)+S\alpha$. Thus $l_{S}(mR\cap r_{M}(\alpha))=l_{S}(m)+S\alpha$. (4) To prove $Sk$ is simple left $S$-module, we need only show that $Sk$ is cyclic for any nonzero element in it. If $0\neq\alpha(k)\in Sk$, then $\alpha:kR\longrightarrow\alpha(kR)$ is an $R$-isomorphism. Since $\alpha\in S$, then $\alpha(kR)\ll M$. Since $M$ is ss-quasi-injective, thus $\alpha^{-1}:\alpha(kR)\longrightarrow kR$ has an extension $\beta\in S$ and hence $\beta(\alpha(k))=\alpha^{-1}(\alpha(k))=k$, so $k\in S\alpha k$ which leads to $Sk=S\alpha k$. Therefore $Sk$ is a simple left $S$-module and this leads to soc$(M)\cap J(M)\subseteq$ soc$(_{S}M)$. (5) If $mR$ is simple and small submodule of $M$, then $m\neq0$. We claim that $\alpha(m)=0$ for all $\alpha\in J(S)$, thus $mR\subseteq r_{M}(J(S))$. Otherwise, $\alpha(m)\neq0$ for some $\alpha\in J(S)$. Thus $\alpha:mR\longrightarrow\alpha(mR)$ is an $R$-isomorphism. Now, we need prove that $r_{R}(\alpha(m))=r_{R}(m)$. Let $r\in r_{R}(m)$, so $\alpha(m)r=\alpha(mr)=\alpha(0)=0$ which leads to $r_{R}(m)\subseteq r_{R}(\alpha(m))$. The other inclusion, if $r\in r_{R}(\alpha(m))$, then $\alpha(mr)=0$, that is $mr\in$ ker$(\alpha)=0$, so $r\in r_{R}(m)$. Hence $r_{R}(\alpha(m))=r_{R}(m)$. Since $m,\alpha(m)\in$ soc$(M)\cap J(M)$, thus $S\alpha m=Sm$ (by(2)) and this implies that $m=\beta\alpha(m)$ for some $\beta\in S$, so $(1-\beta\alpha)(m)=0$. Since $\alpha\in J(S)$, then the element $\beta\alpha$ is quasi-regular by \cite[Theorem 15.3]{3AnFu74}. Thus $1-\beta\alpha$ is invertible and hence $m=0$ which is a contradiction. This shows that soc$(M)\cap J(M)\subseteq r_{M}(J(S))$. (6) Let $\alpha\in l_{S}(A\cap B)$ and consider $f:A+B\longrightarrow M$ is given by $f(a+b)=\alpha(a),$ for all $a\in A$ and $b\in B$. Since $M$ is ss-quasi-injective, thus there exists $\beta\in S$ such that $f(a+b)=\beta(a+b).$ Thus $\beta(a+b)=\alpha(a)$, so $(\alpha-\beta)(a)=\beta(b)$ which yields $\alpha-\beta\in l_{S}(A)$. Therefore, $\alpha=\alpha-\beta+\beta\in l_{S}(A)+l_{S}(B)$ and this implies that $l_{S}(A\cap B)\subseteq l_{S}(A)+l_{S}(B)$. The other inclusion is trivial and the proof is complete. \end{proof} \begin{rem}\label{Remark:(2.14)} \emph{Let $M$ be a right $R$-module, then $D(S)=\{ \alpha\in S=$ End$(M)\mid r_{M}(\alpha)\cap mR\neq 0$ for each $ 0\neq m\in $ soc$(M)\cap J(M)\}$ is a left ideal in $S$.} \end{rem} \begin{proof} This is obvious. \end{proof} \begin{prop}\label{Proposition:(2.15)} Let $M$ be an ss-quasi-injective right $R$-module. Then $r_{M}(\alpha)\varsubsetneqq r_{M}(\alpha-\alpha\gamma\alpha)$, for all $\alpha\notin D(S)$ and for some $\gamma\in S$. \end{prop} \begin{proof} For all $\alpha\notin D(S)$. By hypothesis, we can find $0\neq m\in$ soc$(M)\cap J(M)$ such that $r_{M}(\alpha)\cap mR=0$. Clearly, $r_{R}(\alpha(m))=r_{R}(m)$, so $Sm=S\alpha m$ by Lemma~\ref{Lemma:(2.13)}(2). Thus $m=\gamma\alpha m$ for some $\gamma\in S$ and this implies that $(\alpha-\alpha\gamma\alpha)m=0$. Therefore, $m\in r_{M}(\alpha-\alpha\gamma\alpha)$, but $m\notin r_{M}(\alpha)$ and hence the inclusion is strictly. 
\end{proof} \begin{prop}\label{Proposition:(2.16)} Let $M$ be an ss-quasi-injective right $R$-module, then the set $\{\alpha\in S=\emph{End}(M)\mid 1-\beta\alpha$ is monomorphism for all $\beta\in S\}$ is contained in $D(S)$. Moreover, $J($$_{S}S)\subseteq D(S)$. \end{prop} \begin{proof} Let $\alpha\notin D(S)$, then there exists $0\neq m\in$ soc$(M)\cap J(M)$ such that $r_{M}(\alpha)\cap mR=0$. If $r\in r_{R}(\alpha(m))$, then $\alpha(mr)=0$ and so $mr\in r_{M}(\alpha)$. Since $r_{M}(\alpha)\cap mR=0$. Thus $r\in r_{R}(m)$ and hence $r_{R}(\alpha(m))\subseteq r_{R}(m)$, so $Sm\subseteq S\alpha m$ by Lemma~\ref{Lemma:(2.13)}(2). Therefore, $m\in$ ker$(1-\gamma\alpha)$ for some $\gamma\in S$. Since $m\neq0$, thus $1-\gamma\alpha$ is not monomorphism and hence the inclusion holds. Now, let $\alpha\in J($$_{S}S)$ we have $\beta\alpha$ is a quasi-regular element by \cite[Theorem 15.3]{3AnFu74} and hence $1-\beta\alpha$ is isomorphism for all $\beta\in S$, which completes the proof. \end{proof} \begin{thm}\label{Theorem:(2.17)}\textup{(ss-Baer's condition)} The following statements are equivalent for a ring $R$. \noindent (1) $M$ is an ss-injective right $R$-module. \noindent (2) If $S_{r}\cap J=A\oplus B$ and $\alpha:A\longrightarrow M$ is an $R$-homomorphism, then there exists $m\in M$ such that $\alpha(a)=ma$ for all $a\in A$ and $mB=0$. \noindent (3) If $S_{r}\cap J=A\oplus B$, and $\alpha:A\longrightarrow M$ is an $R$-homomorphism, then there exists $m\in M$ such that $\alpha(a)=ma$, for all $a\in A$ and $mB=0$. \end{thm} \begin{proof} (1)$\Rightarrow$(2) Define $\gamma:S_{r}\cap J\longrightarrow M$ by $\gamma(a+b)=\alpha(a)$ for all $a\in A,b\in B$. By hypothesis, there is a right $R$-homomorphism $\beta:\mathit{R\longrightarrow M}$ is an extension of $\gamma$, so if $\mathit{m=\beta}(1)$, then $\mathit{\alpha}(a)=\gamma(a)=\beta(a)=\beta(1)a=ma$, for all $\mathit{a\in A}$. Moreover, $mb=\beta(b)=\gamma(b)=\alpha(0)=0$ for all $b\in B$, so $mB=0$. (2)$\Rightarrow$(1) Let $\alpha:I\rightarrow M$ be any right $R$-homomorphism, where $I$ is any semisimple small right ideal in $R$. By (2), there exists $m\in M$ such that $\alpha(a)=ma$ for all $a\in I$. Define $\beta:R_{R}\longrightarrow M$ by $\beta(r)=mr$ for all $r\in R$, thus $\beta$ extends $\alpha$. (2)$\Leftrightarrow$(3) Clear. \end{proof} A ring $R$ is called right universally mininjective ring if it satisfies the condition $\mathit{S}_{r}\cap J=0$ (see for example \cite{14NiYo97}). In the next results, we give new characterizations of universally mininjective ring in terms of ss-injectivity and soc-injectivity. \begin{cor}\label{Corollary:(2.18)} The following are equivalent for a ring $R$. \noindent (1) $R$ is right universally mininjective. \noindent (2) $R$ is right mininjective and every quotient of a soc-injective right $R$-module is soc-injective. \noindent (3) $R$ is right mininjective and every quotient of an injective right $R$-module is soc-injective. \noindent (4) $R$ is right mininjective and every semisimple submodule of a projective right $R$-module is projective. \noindent (5) Every right $R$-module is ss-injective. \noindent (6) Every simple right ideal is ss-injective. \end{cor} \begin{proof} (1)$\Leftrightarrow$(2)$\Leftrightarrow$(3)$\Leftrightarrow$(4) By \cite[Lemma 5.1]{14NiYo97} and \cite[Corollary 2.9]{2AmYoZe05}. (1)$\Leftrightarrow$(5)$\Leftrightarrow$(6) By Proposition~\ref{Proposition:(2.11)}. \end{proof} \begin{thm}\label{Theorem:(2.20)} If $M$ is a projective right $R$-module. 
\begin{thm}\label{Theorem:(2.20)} Let $M$ be a projective right $R$-module. Then the following statements are equivalent.

\noindent (1) Every quotient of an ss-$M$-injective right $R$-module is ss-$M$-injective.

\noindent (2) Every quotient of a \emph{soc}-$M$-injective right $R$-module is ss-$M$-injective.

\noindent (3) Every quotient of an injective right $R$-module is ss-$M$-injective.

\noindent (4) Every sum of two ss-$M$-injective submodules of a right $R$-module is ss-$M$-injective.

\noindent (5) Every sum of two \emph{soc}-$M$-injective submodules of a right $R$-module is ss-$M$-injective.

\noindent (6) Every sum of two injective submodules of a right $R$-module is ss-$M$-injective.

\noindent (7) Every semisimple small submodule of $M$ is projective.

\noindent (8) Every simple small submodule of $M$ is projective.

\noindent (9) $\emph{soc}(M)\cap J(M)$ is projective.
\end{thm}

\begin{proof} (1)$\Rightarrow$(2)$\Rightarrow$(3), (4)$\Rightarrow$(5)$\Rightarrow$(6) and (9)$\Rightarrow$(7)$\Rightarrow$(8) are obvious.

(8)$\Rightarrow$(9) soc$(M)\cap J(M)$ is a direct sum of simple submodules of $J(M)$; every simple submodule of $J(M)$ is small in $M$ and hence projective by (8), so soc$(M)\cap J(M)$ is projective.

(3)$\Rightarrow$(7) Consider the following diagram:
\[ \xymatrix{ 0\,\ar[r] &K \,\ar[d]_{f}\ar@{^{(}->}[r]^{i} & M \\ E \,\ar[r]^{h}&N \, \ar[r] &0 } \]
\noindent where $E$ and $N$ are right $R$-modules, $K$ is a semisimple small submodule of $M$, $h$ is a right $R$-epimorphism and $f$ is a right $R$-homomorphism. We may assume that $E$ is injective (see, e.g., \cite[Proposition 5.2.10]{6Bla11}). Since $N$ is then a quotient of the injective module $E$, it is ss-$M$-injective by (3), so $f$ can be extended to an $R$-homomorphism $g:M\longrightarrow N$. By the projectivity of $M$, $g$ can be lifted to an $R$-homomorphism $\tilde{g}:M\longrightarrow E$ such that $h\circ\tilde{g}=g$. Let $\tilde{f}:K\longrightarrow E$ be the restriction of $\tilde{g}$ to $K$. Clearly, $h\circ\tilde{f}=f$, and this implies that $K$ is projective.

(7)$\Rightarrow$(1) Let $N$ and $L$ be right $R$-modules, let $h:N\longrightarrow L$ be an $R$-epimorphism, and suppose that $N$ is ss-$M$-injective. Let $K$ be any semisimple small submodule of $M$ and let $f:K\longrightarrow L$ be any right $R$-homomorphism. By hypothesis $K$ is projective, so $f$ can be lifted to an $R$-homomorphism $g:K\longrightarrow N$ such that $h\circ g=f$. Since $N$ is ss-$M$-injective, there exists an $R$-homomorphism $\tilde{g}:M\longrightarrow N$ such that $\tilde{g}\circ i=g$, where $i:K\longrightarrow M$ is the inclusion map. Put $\beta=h\circ\tilde{g}:M\longrightarrow L$. Then $\beta\circ i=h\circ\tilde{g}\circ i=h\circ g=f$. Hence $L$ is an ss-$M$-injective right $R$-module.

(1)$\Rightarrow$(4) Let $N_{1}$ and $N_{2}$ be two ss-$M$-injective submodules of a right $R$-module $N$. Then $N_{1}+N_{2}$ is a homomorphic image of the direct sum $N_{1}\oplus N_{2}$. Since $N_{1}\oplus N_{2}$ is ss-$M$-injective, $N_{1}+N_{2}$ is ss-$M$-injective by hypothesis.

(6)$\Rightarrow$(3) Let $E$ be an injective right $R$-module with submodule $N$. Let $Q=E\oplus E$, $K=\{(n,n)\mid n\in N\}$, $\bar{Q}=Q/K$, $H_{1}=\{y+K\in\bar{Q}\mid y\in E\oplus 0\}$ and $H_{2}=\{y+K\in\bar{Q}\mid y\in 0\oplus E\}$. Then $\bar{Q}=H_{1}+H_{2}$. Since $(E\oplus 0)\cap K=0$ and $(0\oplus E)\cap K=0$, we have $E\cong H_{i}$, $i=1,2$. Since $H_{1}\cap H_{2}=\{y+K\in\bar{Q}\mid y\in N\oplus0\}=\{y+K\in\bar{Q}\mid y\in 0\oplus N\}$, we get $H_{1}\cap H_{2}\cong N$ under $y\mapsto y+K$ for all $y\in N\oplus0$. By hypothesis, $\bar{Q}$ is ss-$M$-injective.
Since $H_{1}$ is injective, $\bar{Q}=H_{1}\oplus A$ for some submodule $A$ of $\bar{Q}$, so $A\cong(H_{1}+H_{2})/H_{1}\cong H_{2}/(H_{1}\cap H_{2})\cong E/N$. By Theorem~\ref{Theorem:(2.3)}(5), $E/N$ is ss-$M$-injective. \end{proof}

\begin{cor}\label{Corollary:(2.21)} The following statements are equivalent.

\noindent (1) Every quotient of an ss-injective right $R$-module is ss-injective.

\noindent (2) Every quotient of a soc-injective right $R$-module is ss-injective.

\noindent (3) Every quotient of a small injective right $R$-module is ss-injective.

\noindent (4) Every quotient of an injective right $R$-module is ss-injective.

\noindent (5) Every sum of two ss-injective submodules of any right $R$-module is ss-injective.

\noindent (6) Every sum of two soc-injective submodules of any right $R$-module is ss-injective.

\noindent (7) Every sum of two small injective submodules of any right $R$-module is ss-injective.

\noindent (8) Every sum of two injective submodules of any right $R$-module is ss-injective.

\noindent (9) Every semisimple small submodule of any projective right $R$-module is projective.

\noindent (10) Every semisimple small submodule of any finitely generated projective right $R$-module is projective.

\noindent (11) Every semisimple small submodule of $R_{R}$ is projective.

\noindent (12) Every simple small submodule of $R_{R}$ is projective.

\noindent (13) $S_{r}\cap J$ is projective.

\noindent (14) $S_{r}$ is projective.
\end{cor}

\begin{proof} The equivalence of (1), (2), (4), (5), (6), (8), (11), (12) and (13) follows from Theorem~\ref{Theorem:(2.20)}. (1)$\Rightarrow$(3)$\Rightarrow$(4), (5)$\Rightarrow$(7)$\Rightarrow$(8) and (9)$\Rightarrow$(10)$\Rightarrow$(13) are clear. (14)$\Rightarrow$(9) By \cite[Corollary 2.9]{2AmYoZe05}. (13)$\Rightarrow$(14) Write $S_{r}=(S_{r}\cap J)\oplus A$, where $A=\bigoplus_{i\in I}S_{i}$ and each $S_{i}$ is a simple right ideal which is a direct summand of $R_{R}$. Then $A$ is projective; since $S_{r}\cap J$ is projective as well, it follows that $S_{r}$ is projective. \end{proof}

\begin{thm}\label{Theorem:(2.22)} If every simple singular right $R$-module is ss-injective, then $r(a)\subseteq^{\oplus}R_{R}$ for every $a\in S_{r}\cap J$, and $S_{r}$ is projective. \end{thm}

\begin{proof} Let $a\in S_{r}\cap J$ and let $A=RaR+r(a)$. There exists a right ideal $B$ of $R$ such that $A\oplus B\subseteq^{ess}R_{R}$. Suppose that $A\oplus B\neq R_{R}$; then we can choose $I\subseteq^{max}R_{R}$ such that $A\oplus B\subseteq I$, and so $I\subseteq^{ess}R_{R}$. By hypothesis, $R/I$ is right ss-injective. The map $\alpha:aR\longrightarrow R/I$ given by $\alpha(ar)=r+I$ is a well-defined $R$-homomorphism (note that $r(a)\subseteq A\subseteq I$). Since $aR$ is a semisimple small right ideal, $\alpha$ extends to $R$, so there exists $c\in R$ such that $1+I=ca+I$ and hence $1-ca\in I$. But $ca\in RaR\subseteq I$, which leads to $1\in I$, a contradiction. Thus $A\oplus B=R$ and hence $RaR+(r(a)\oplus B)=R$. Since $RaR\ll R_{R}$, it follows that $r(a)\oplus B=R$, that is, $r(a)\subseteq^{\oplus}R_{R}$. Write $r(a)=(1-e)R$ for some $e^{2}=e\in R$; it follows that $ax=aex$ for all $x\in R$ and hence $aR=aeR$. Let $\gamma:eR\longrightarrow aeR$ be defined by $\gamma(er)=aer$ for all $r\in R$. Then $\gamma$ is a well-defined $R$-epimorphism with ker$(\gamma)=eR\cap r(a)=eR\cap(1-e)R=0$. Hence $\gamma$ is an isomorphism and so $aR$ is projective. Since $S_{r}\cap J$ is a direct sum of simple small right ideals, each of which is projective by the above argument, $S_{r}\cap J$ is projective, and it follows from Corollary~\ref{Corollary:(2.21)} that $S_{r}$ is projective. \end{proof}

\begin{cor}\label{Corollary:(2.23)} The following statements are equivalent for a ring $R$.

\noindent (1) $R$ is right mininjective and every simple singular right $R$-module is ss-injective.

\noindent (2) $R$ is right universally mininjective.
\end{cor}

\begin{proof} By Theorem~\ref{Theorem:(2.22)} and \cite[Lemma 5.1]{14NiYo97}. \end{proof}

Recall that a ring $R$ is called zero insertive if $aRb=0$ for each $a,b\in R$ with $ab=0$ (see \cite{19ThQu09}). Note that if $R$ is a zero insertive ring, then $RaR+r(a)\subseteq^{ess}R_{R}$ for every $a\in R$ (see \cite[Lemma 2.11]{19ThQu09}).
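For example, every commutative ring is zero insertive, since $ab=0$ gives $arb=rab=0$ for every $r\in R$; likewise every reduced ring is zero insertive, because $ab=0$ implies $(ba)^{2}=b(ab)a=0$, hence $ba=0$, and then $(arb)^{2}=ar(ba)rb=0$, so $arb=0$.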
\begin{prop}\label{Proposition:(2.24)} Let $R$ be a zero insertive ring. If every simple singular right $R$-module is ss-injective, then $R$ is right universally mininjective. \end{prop}

\begin{proof} Let $a\in S_{r}\cap J$. We claim that $RaR+r(a)=R$; granting this, $r(a)=R$ (since $RaR\ll R_{R}$), so $a=0$, and hence $S_{r}\cap J=0$. To prove the claim, suppose that $RaR+r(a)\subsetneqq R$. Then there exists a maximal right ideal $I$ of $R$ such that $RaR+r(a)\subseteq I$. Since $R$ is zero insertive, $RaR+r(a)\subseteq^{ess}R_{R}$, so $I\subseteq^{ess}R_{R}$ and hence $R/I$ is ss-injective by hypothesis. The map $\alpha:aR\longrightarrow R/I$ given by $\alpha(ar)=r+I$ for all $r\in R$ is a well-defined $R$-homomorphism, and it extends to $R$, so $1+I=ca+I$ for some $c\in R$. Since $ca\in RaR\subseteq I$, we get $1\in I$, which contradicts the maximality of $I$. Hence $RaR+r(a)=R$, and this completes the proof. \end{proof}
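Before turning to direct sums, note that if $R$ is right noetherian, then the right ideal $S_{r}\cap J$ is finitely generated, so condition (2) of Theorem~\ref{Theorem:(2.25)} below holds for $M=R_{R}$ and condition (1) of Corollary~\ref{Corollary:(2.27)} below is automatic.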
\begin{thm}\label{Theorem:(2.25)} If $M$ is a finitely generated right $R$-module, then the following statements are equivalent.

\noindent (1) \emph{soc}$(M)\cap J(M)$ is a Noetherian $R$-module.

\noindent (2) \emph{soc}$(M)\cap J(M)$ is finitely generated.

\noindent (3) Any direct sum of ss-$M$-injective right $R$-modules is ss-$M$-injective.

\noindent (4) Any direct sum of \emph{soc}-$M$-injective right $R$-modules is ss-$M$-injective.

\noindent (5) Any direct sum of injective right $R$-modules is ss-$M$-injective.

\noindent (6) $K^{(S)}$ is ss-$M$-injective for every injective right $R$-module $K$ and for any index set $S$.

\noindent (7) $K^{(\mathbb{N})}$ is ss-$M$-injective for every injective right $R$-module $K$.
\end{thm}

\begin{proof} (1)$\Rightarrow$(2) and (3)$\Rightarrow$(4)$\Rightarrow$(5)$\Rightarrow$(6)$\Rightarrow$(7) are clear.

(2)$\Rightarrow$(3) Let $E=\bigoplus_{i\in I}M_{i}$ be a direct sum of ss-$M$-injective right $R$-modules and let $f:N\longrightarrow E$ be a right $R$-homomorphism, where $N$ is a semisimple small submodule of $M$. Since soc$(M)\cap J(M)$ is finitely generated, $N$ is finitely generated and hence $f(N)\subseteq\bigoplus_{j\in I_{1}}M_{j}$ for some finite subset $I_{1}$ of $I$. Since a finite direct sum of ss-$M$-injective right $R$-modules is ss-$M$-injective, $\bigoplus_{j\in I_{1}}M_{j}$ is ss-$M$-injective and hence $f$ can be extended to an $R$-homomorphism $g:M\longrightarrow E$. Thus $E$ is ss-$M$-injective.

(7)$\Rightarrow$(1) Let $N_{1}\subseteq N_{2}\subseteq\cdots$ be a chain of submodules of soc$(M)\cap J(M)$. For each $i\geq1$, let $E_{i}=E(M/N_{i})$, $E=\bigoplus_{i=1}^{\infty}E_{i}$ and $M_{i}=\prod_{j=1}^{\infty}E_{j}=E_{i}\oplus\bigl(\prod_{j\neq i}E_{j}\bigr)$; then each $M_{i}$ is injective. By hypothesis, $\bigoplus_{i=1}^{\infty}M_{i}=\bigl(\bigoplus_{i=1}^{\infty}E_{i}\bigr)\oplus\bigl(\bigoplus_{i=1}^{\infty}\prod_{j\neq i}E_{j}\bigr)$ is ss-$M$-injective, so it follows from Theorem~\ref{Theorem:(2.3)}(5) that $E$ itself is ss-$M$-injective. Define $f:U=\bigcup_{i=1}^{\infty}N_{i}\longrightarrow E$ by $f(m)=(m+N_{i})_{i}$. It is clear that $f$ is a well-defined $R$-homomorphism. Since $M$ is finitely generated, soc$(M)\cap J(M)$ is a semisimple small submodule of $M$, and hence $U=\bigcup_{i=1}^{\infty}N_{i}$ is a semisimple small submodule of $M$, so $f$ can be extended to a right $R$-homomorphism $g:M\longrightarrow E$. Since $M$ is finitely generated, we have $g(M)\subseteq\bigoplus_{i=1}^{n}E(M/N_{i})$ for some $n$, and hence $f\bigl(\bigcup_{i=1}^{\infty}N_{i}\bigr)\subseteq\bigoplus_{i=1}^{n}E(M/N_{i})$. Since $\pi_{i}f(x)=\pi_{i}(x+N_{j})_{j\geq1}=x+N_{i}$ for all $x\in U$ and $i\geq1$, where $\pi_{i}:\bigoplus_{j\geq1}E(M/N_{j})\longrightarrow E(M/N_{i})$ is the projection map, we get $\pi_{i}f(U)=U/N_{i}$ for all $i\geq1$.
Since $f(U)\subseteq\bigoplus_{i=1}^{n}E(M/N_{i})$, we get $U/N_{i}=\pi_{i}f(U)=0$ for all $i\geq n+1$, so $U=N_{i}$ for all $i\geq n+1$, and hence the chain $N_{1}\subseteq N_{2}\subseteq\cdots$ terminates at $N_{n+1}$. Thus soc$(M)\cap J(M)$ is a Noetherian $R$-module. \end{proof}

\begin{cor}\label{Corollary:(2.26)} If $N$ is a finitely generated right $R$-module, then the following statements are equivalent.

\noindent (1) \emph{soc}$(N)\cap J(N)$ is finitely generated.

\noindent (2) $M^{(S)}$ is ss-$N$-injective for every \emph{soc}-$N$-injective right $R$-module $M$ and for any index set $S$.

\noindent (3) $M^{(S)}$ is ss-$N$-injective for every ss-$N$-injective right $R$-module $M$ and for any index set $S$.

\noindent (4) $M^{(\mathbb{N})}$ is ss-$N$-injective for every \emph{soc}-$N$-injective right $R$-module $M$.

\noindent (5) $M^{(\mathbb{N})}$ is ss-$N$-injective for every ss-$N$-injective right $R$-module $M$.
\end{cor}

\begin{proof} By Theorem~\ref{Theorem:(2.25)}. \end{proof}

\begin{cor}\label{Corollary:(2.27)} The following statements are equivalent.

\noindent (1) $S_{r}\cap J$ is finitely generated.

\noindent (2) Any direct sum of ss-injective right $R$-modules is ss-injective.

\noindent (3) Any direct sum of soc-injective right $R$-modules is ss-injective.

\noindent (4) Any direct sum of small injective right $R$-modules is ss-injective.

\noindent (5) Any direct sum of injective right $R$-modules is ss-injective.

\noindent (6) $M^{(S)}$ is ss-injective for every injective right $R$-module $M$ and for any index set $S$.

\noindent (7) $M^{(S)}$ is ss-injective for every soc-injective right $R$-module $M$ and for any index set $S$.

\noindent (8) $M^{(S)}$ is ss-injective for every small injective right $R$-module $M$ and for any index set $S$.

\noindent (9) $M^{(S)}$ is ss-injective for every ss-injective right $R$-module $M$ and for any index set $S$.

\noindent (10) $M^{(\mathbb{N})}$ is ss-injective for every injective right $R$-module $M$.

\noindent (11) $M^{(\mathbb{N})}$ is ss-injective for every \emph{soc}-injective right $R$-module $M$.

\noindent (12) $M^{(\mathbb{N})}$ is ss-injective for every small injective right $R$-module $M$.

\noindent (13) $M^{(\mathbb{N})}$ is ss-injective for every ss-injective right $R$-module $M$.
\end{cor}

\begin{proof} By applying Theorem~\ref{Theorem:(2.25)} and Corollary~\ref{Corollary:(2.26)}. \end{proof}

\begin{rem}\label{Remark:(2.28)} \emph{Let $M$ be a right $R$-module. We write $r_{u}(N)=\{a\in S_{r}\cap J \mid Na=0\}$ and $l_{M}(K)=\{ m\in M \mid mK=0\}$, where $N\subseteq M$ and $K\subseteq S_{r}\cap J$. Clearly, $r_{u}(N)\subseteq(S_{r}\cap J)_{R}$ and $l_{M}(K)\subseteq{}_{S}M$, where $S=$ \emph{End}$(M_{R})$, and we have the following:}

\noindent (1) $N\subseteq l_{M}r_{u}(N)$ \emph{for all} $N\subseteq M$.

\noindent (2) $K\subseteq r_{u}l_{M}(K)$ \emph{for all} $K\subseteq S_{r}\cap J$.

\noindent (3) $r_{u}l_{M}r_{u}(N)=r_{u}(N)$ \emph{for all} $N\subseteq M$.

\noindent (4) $l_{M}r_{u}l_{M}(K)=l_{M}(K)$ \emph{for all} $K\subseteq S_{r}\cap J$.
\end{rem}

\begin{proof} This is clear. \end{proof}
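Indeed, (3) follows formally from (1) and (2): applying the inclusion-reversing operator $r_{u}$ to (1) gives $r_{u}l_{M}r_{u}(N)\subseteq r_{u}(N)$, while (2) applied to $K=r_{u}(N)$ gives the reverse inclusion; (4) follows similarly by applying $l_{M}$ to (2) and using (1).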
\begin{lem}\label{Lemma:(2.29)} The following statements are equivalent for a right $R$-module $M$:

\noindent (1) $R$ satisfies the ACC for right ideals of the form $r_{u}(N)$, where $N\subseteq M$.

\noindent (2) The DCC holds for the submodules $l_{M}(K)$ of $M$, where $K\subseteq S_{r}\cap J$.

\noindent (3) For each semisimple small right ideal $I$ there exists a finitely generated right ideal $K\subseteq I$ such that $l_{M}(I)=l_{M}(K)$.
\end{lem}

\begin{proof} (1)$\Leftrightarrow$(2) Clear.

(2)$\Rightarrow$(3) Consider $\Omega=\{ l_{M}(A) \mid A$ is a finitely generated right ideal with $A\subseteq I\}$, which is a nonempty set because $M=l_{M}(0)\in\Omega$. By (2), we may choose a finitely generated right ideal $K$ of $R$ contained in $I$ such that $l_{M}(K)$ is minimal in $\Omega$. Put $B=K+xR$, where $x\in I$. Then $B$ is a finitely generated right ideal contained in $I$ and $l_{M}(B)\subseteq l_{M}(K)$. Since $l_{M}(K)$ is minimal in $\Omega$, we get $l_{M}(B)=l_{M}(K)$, which yields $l_{M}(K)x=0$ for all $x\in I$. Therefore, $l_{M}(K)I=0$ and hence $l_{M}(K)\subseteq l_{M}(I)$. But $l_{M}(I)\subseteq l_{M}(K)$, so $l_{M}(I)=l_{M}(K)$.

(3)$\Rightarrow$(1) Suppose that $r_{u}(M_{1})\subseteq r_{u}(M_{2})\subseteq\cdots\subseteq r_{u}(M_{n})\subseteq\cdots$, where $M_{i}\subseteq M$ for each $i$. Put $D_{i}=l_{M}r_{u}(M_{i})$ for each $i$, and $I=\bigcup_{i=1}^{\infty}r_{u}(M_{i})$; then $I\subseteq S_{r}\cap J$. By hypothesis, there exists a finitely generated right ideal $K$ of $R$ contained in $I$ such that $l_{M}(I)=l_{M}(K)$. Since $K$ is finitely generated, there exists $t\in\mathbb{N}$ such that $K\subseteq r_{u}(M_{n})$ for all $n\geq t$, that is, $l_{M}(K)\supseteq l_{M}r_{u}(M_{n})=D_{n}$ for all $n\geq t$. Since $l_{M}(K)=l_{M}(I)=l_{M}\bigl(\bigcup_{i=1}^{\infty}r_{u}(M_{i})\bigr)=\bigcap_{i=1}^{\infty}l_{M}r_{u}(M_{i})=\bigcap_{i=1}^{\infty}D_{i}\subseteq D_{n}$, we get $l_{M}(K)=D_{n}$ for all $n\geq t$. Since $D_{n}=l_{M}r_{u}(M_{n})$, we obtain $r_{u}(M_{n})=r_{u}l_{M}r_{u}(M_{n})=r_{u}(D_{n})=r_{u}l_{M}(K)$ for all $n\geq t$. Thus $r_{u}(M_{n})=r_{u}(M_{t})$ for all $n\geq t$, and hence (3) implies (1), which completes the proof. \end{proof}

The first part of the following proposition can be obtained directly from Corollary~\ref{Corollary:(2.27)}, but we prove it in a different way.

\begin{prop}\label{Proposition:(2.30)} Let $E$ be an ss-injective right $R$-module. Then $E^{(\mathbb{N})}$ is ss-injective if and only if $R$ satisfies the ACC for right ideals of the form $r_{u}(N)$, where $N\subseteq E$. \end{prop}

\begin{proof} ($\Rightarrow$) Suppose that $r_{u}(N_{1})\subsetneqq r_{u}(N_{2})\subsetneqq\cdots\subsetneqq r_{u}(N_{m})\subsetneqq\cdots$ is a strictly ascending chain, where $N_{i}\subseteq E$.
Then $l_{E}r_{u}(N_{1})\supsetneqq l_{E}r_{u}(N_{2})\supsetneqq\cdots\supsetneqq l_{E}r_{u}(N_{m})\supsetneqq\cdots$, so for each $i\geq1$ we can find $t_{i}\in l_{E}r_{u}(N_{i})\setminus l_{E}r_{u}(N_{i+1})$ and $a_{i+1}\in r_{u}(N_{i+1})$ such that $t_{i}a_{i+1}\neq0$. Let $L=\bigcup_{i=1}^{\infty}r_{u}(N_{i})$. For every $\ell\in L$ there exists $m_{\ell}\geq1$ such that $\ell\in r_{u}(N_{i})$ for all $i\geq m_{\ell}$, and this implies that $t_{i}\ell=0$ for all $i\geq m_{\ell}$. Putting $\bar{t}=(t_{i})_{i}$, we therefore have $\bar{t}\ell\in E^{(\mathbb{N})}$ for every $\ell\in L$. The map $\alpha_{\bar{t}}:L\longrightarrow E^{(\mathbb{N})}$ given by $\alpha_{\bar{t}}(\ell)=\bar{t}\ell$ is then a well-defined $R$-homomorphism. Since $L$ is a semisimple small right ideal, $\alpha_{\bar{t}}$ extends to $\gamma:R\longrightarrow E^{(\mathbb{N})}$ (by hypothesis), and hence $\alpha_{\bar{t}}(\ell)=\bar{t}\ell=\gamma(\ell)=\gamma(1)\ell$. Since $\gamma(1)\in E^{(\mathbb{N})}$, there exists $k\geq1$ such that $t_{i}\ell=0$ for all $i\geq k$ and all $\ell\in L$, which contradicts $t_{k}a_{k+1}\neq 0$.

($\Leftarrow$) Let $\alpha:I\longrightarrow E^{(\mathbb{N})}$ be an $R$-homomorphism, where $I$ is a semisimple small right ideal. It follows from Lemma~\ref{Lemma:(2.29)} that there is a finitely generated right ideal $K\subseteq I$ such that $l_{E}(I)=l_{E}(K)$. Since $E^{\mathbb{N}}$ is ss-injective, $\alpha$ is given by left multiplication by some $a\in E^{\mathbb{N}}$. Write $K=\bigoplus_{i=1}^{m}r_{i}R$; then $\alpha(r_{i})=ar_{i}\in E^{(\mathbb{N})}$ for $i=1,2,...,m$. Thus there exists $\tilde{a}\in E^{(\mathbb{N})}$ such that $a_{n}r_{i}=\tilde{a}_{n}r_{i}$ for all $n\in\mathbb{N}$ and $i=1,2,...,m$, where $a_{n}$ is the $n$th coordinate of $a$. Since $K$ is generated by $\{r_{1},r_{2},...,r_{m}\}$, we get $ar=\tilde{a}r$ for all $r\in K$. Therefore, $a_{n}-\tilde{a}_{n}\in l_{E}(K)=l_{E}(I)$ for all $n\in\mathbb{N}$, which leads to $a_{n}r=\tilde{a}_{n}r$ for all $r\in I$ and $n\in\mathbb{N}$, so $\alpha(r)=ar=\tilde{a}r$ for all $r\in I$ with $\tilde{a}\in E^{(\mathbb{N})}$. This means that $E^{(\mathbb{N})}$ is ss-injective. \end{proof}

\begin{thm}\label{Theorem:(2.31)} The following statements are equivalent for a ring $R$:

\noindent (1) $S_{r}\cap J$ is finitely generated.

\noindent (2) $\bigoplus_{i=1}^{\infty}E(M_{i})$ is an ss-injective right $R$-module for all simple right $R$-modules $M_{i}$, $i\geq1$.
\end{thm}

\begin{proof} (1)$\Rightarrow$(2) By Corollary~\ref{Corollary:(2.27)}.

(2)$\Rightarrow$(1) Let $I_{1}\subsetneqq I_{2}\subsetneqq\cdots$ be a strictly ascending chain of semisimple small right ideals of $R$. Clearly, $I=\bigcup_{i=1}^{\infty}I_{i}\subseteq S_{r}\cap J$. For every $i\geq1$ there exists $a_{i}\in I$ with $a_{i}\notin I_{i}$; choose $N_{i}/I_{i}\subseteq^{max}(a_{i}R+I_{i})/I_{i}$, so that $K_{i}=(a_{i}R+I_{i})/N_{i}$ is a simple right $R$-module. Define $\alpha_{i}:(a_{i}R+I_{i})/I_{i}\longrightarrow(a_{i}R+I_{i})/N_{i}$ by $\alpha_{i}(x+I_{i})=x+N_{i}$, which is a right $R$-epimorphism. Let $E(K_{i})$ be the injective hull of $K_{i}$ and $i_{i}:K_{i}\rightarrow E(K_{i})$ the inclusion map.
By the injectivity of $E(K_{i})$, there exists $\beta_{i}:I/I_{i}\longrightarrow E(K_{i})$ extending $i_{i}\alpha_{i}$. Since $a_{i}\notin N_{i}$, we have $\beta_{i}(a_{i}+I_{i})=i_{i}(\alpha_{i}(a_{i}+I_{i}))=a_{i}+N_{i}\neq0$ for each $i\geq1$. If $b\in I$, then there exists $n_{b}\geq1$ such that $b\in I_{i}$ for all $i\geq n_{b}$, and hence $\beta_{i}(b+I_{i})=0$ for all $i\geq n_{b}$. Thus we can define $\gamma:I\longrightarrow\bigoplus_{i=1}^{\infty}E(K_{i})$ by $\gamma(b)=(\beta_{i}(b+I_{i}))_{i}$. By hypothesis, there exists $\tilde{\gamma}:R\longrightarrow\bigoplus_{i=1}^{\infty}E(K_{i})$ such that $\tilde{\gamma}_{|I}=\gamma$. Put $\tilde{\gamma}(1)=(c_{i})_{i}$; then there exists $n\geq1$ with $c_{i}=0$ for all $i\geq n$. Since $(\beta_{i}(b+I_{i}))_{i}=\gamma(b)=\tilde{\gamma}(b)=\tilde{\gamma}(1)b=(c_{i}b)_{i}$ for all $b\in I$, we get $\beta_{i}(b+I_{i})=c_{i}b$ for all $i\geq1$, so $\beta_{i}(b+I_{i})=0$ for all $i\geq n$ and all $b\in I$, which contradicts $\beta_{n}(a_{n}+I_{n})\neq0$. Therefore no such chain exists, so $S_{r}\cap J$ is Noetherian and hence finitely generated; that is, (2) implies (1). \end{proof}

\section{Strongly SS-Injective Modules}

\begin{prop}\label{Proposition:(3.1)} The following statements are equivalent.

\noindent (1) $M$ is a strongly ss-injective right $R$-module.

\noindent (2) Every $R$-homomorphism $\alpha:A\longrightarrow M$ extends to $N$ for every right $R$-module $N$, where $A\ll N$ and $\alpha(A)$ is a semisimple submodule of $M$.
\end{prop}

\begin{proof} (2)$\Rightarrow$(1) Clear.

(1)$\Rightarrow$(2) Let $A$ be a small submodule of $N$ and let $\alpha:A\longrightarrow M$ be an $R$-homomorphism such that $\alpha(A)$ is a semisimple submodule of $M$. If $B=$ ker$(\alpha)$, then $\alpha$ induces an embedding $\tilde{\alpha}:A/B\longrightarrow M$ defined by $\tilde{\alpha}(a+B)=\alpha(a)$ for all $a\in A$; this is well defined because $a_{1}+B=a_{2}+B$ gives $a_{1}-a_{2}\in B$, so $\alpha(a_{1})=\alpha(a_{2})$, that is, $\tilde{\alpha}(a_{1}+B)=\tilde{\alpha}(a_{2}+B)$. Since $M$ is strongly ss-injective and $A/B$ is semisimple and small in $N/B$, $\tilde{\alpha}$ extends to an $R$-homomorphism $\gamma:N/B\longrightarrow M$. If $\pi:N\longrightarrow N/B$ is the canonical map, then the $R$-homomorphism $\beta=\gamma\circ\pi:N\longrightarrow M$ is an extension of $\alpha$: for $a\in A$ we have $\beta(a)=\gamma(\pi(a))=\gamma(a+B)=\tilde{\alpha}(a+B)=\alpha(a)$, as desired. \end{proof}
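For example, any right $R$-module $M$ with soc$(M)=0$ (such as $\mathbb{Z}$ viewed as a $\mathbb{Z}$-module) is strongly ss-injective: in the situation of condition (2), $\alpha(A)$ is a semisimple submodule of $M$, hence $\alpha(A)\subseteq$ soc$(M)=0$, so $\alpha$ is the zero map and extends trivially.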
\begin{cor}\label{Corollary:(3.2)}
\noindent (1) Let $M$ be a semisimple right $R$-module. If $M$ is strongly ss-injective, then $M$ is small injective.

\noindent (2) If every simple right $R$-module is strongly ss-injective, then $R$ is semiprimitive.
\end{cor}

\begin{proof} (1) By Proposition~\ref{Proposition:(3.1)}. (2) By (1) and \cite[Theorem 2.8]{19ThQu09}. \end{proof}

\begin{rem}\label{Remark:(3.3)} \emph{The converse of Corollary~\ref{Corollary:(3.2)} is not true (see Example~\ref{Example:(3.8)}).}\end{rem}

\begin{thm}\label{Theorem:(3.4)} If $M$ is a strongly ss-injective (or just ss-$E(M)$-injective) right $R$-module, then for every semisimple small submodule $A$ of $M$ there is an injective submodule $E_{A}$ of $M$ such that $M=E_{A}\oplus T_{A}$, where $T_{A}$ is a submodule of $M$ with $T_{A}\cap A=0$. Moreover, if $A\neq0$, then $E_{A}$ can be chosen such that $A\leq^{ess}E_{A}$. \end{thm}

\begin{proof} Let $A$ be a semisimple small submodule of $M$. If $A=0$, we are done by taking $E_{A}=0$ and $T_{A}=M$. Suppose that $A\neq0$, let $D_{A}=E(A)$ be the injective hull of $A$ in $E(M)$, and let $i_{1}:A\longrightarrow D_{A}$, $i_{2}:D_{A}\longrightarrow E(M)$ and $i_{3}:A\longrightarrow M$ be the inclusion maps. Since $M$ is strongly ss-injective, $M$ is ss-$E(M)$-injective. Since $A$ is a semisimple small submodule of $M$, it follows from \cite[Lemma 5.1.3(a)]{9Kas82} that $A$ is a semisimple small submodule of $E(M)$, and hence there exists an $R$-homomorphism $\alpha:E(M)\longrightarrow M$ such that $\alpha i_{2}i_{1}=i_{3}$. Put $\beta=\alpha i_{2}$; then $\beta:D_{A}\longrightarrow M$ is an extension of $i_{3}$. Since $A\leq^{ess}D_{A}$, $\beta$ is a monomorphism. Put $E_{A}=\beta(D_{A})$. Since $E_{A}$ is an injective submodule of $M$, we have $M=E_{A}\oplus T_{A}$ for some submodule $T_{A}$ of $M$. Since $\beta(A)=A$, we get $A\subseteq\beta(D_{A})=E_{A}$, and this means that $T_{A}\cap A=0$. Moreover, the corestriction $\tilde{\beta}:D_{A}\longrightarrow E_{A}$ of $\beta$ is an isomorphism. Since $A\leq^{ess}D_{A}$, we get $\tilde{\beta}(A)\leq^{ess}E_{A}$. But $\tilde{\beta}(A)=\beta(A)=A$, so $A\leq^{ess}E_{A}$. \end{proof}

\begin{cor}\label{Corollary:(3.5)} If a right $R$-module $M$ has a semisimple small submodule $A$ such that $A\leq^{ess}M$, then the following conditions are equivalent.

\noindent (1) $M$ is injective.

\noindent (2) $M$ is strongly ss-injective.

\noindent (3) $M$ is ss-$E(M)$-injective.
\end{cor}

\begin{proof} (1)$\Rightarrow$(2) and (2)$\Rightarrow$(3) are obvious. (3)$\Rightarrow$(1) By Theorem~\ref{Theorem:(3.4)}, we can write $M=E_{A}\oplus T_{A}$, where $E_{A}$ is injective and $T_{A}\cap A=0$. Since $A\leq^{ess}M$, we get $T_{A}=0$ and hence $M=E_{A}$. Therefore, $M$ is an injective $R$-module. \end{proof}

\begin{example}\label{Example:(3.6)} \emph{$\mathbb{Z}_{4}$ as a $\mathbb{Z}$-module is not strongly ss-injective.
In particular, $\mathbb{Z}_{4}$ is not ss-$\mathbb{Z}_{2^{\infty}}$-injective.}\end{example}

\begin{proof} Assume that $\mathbb{Z}_{4}$ is a strongly ss-injective $\mathbb{Z}$-module, and let $A=\langle 2\rangle=\{0,2\}$. Clearly, $A$ is a semisimple small and essential submodule of $\mathbb{Z}_{4}$ as a $\mathbb{Z}$-module. Thus, by Corollary~\ref{Corollary:(3.5)}, $\mathbb{Z}_{4}$ is an injective $\mathbb{Z}$-module, a contradiction. Hence $\mathbb{Z}_{4}$ as a $\mathbb{Z}$-module is not strongly ss-injective. Since $E(\mathbb{Z}_{2^{2}})=\mathbb{Z}_{2^{\infty}}$ as $\mathbb{Z}$-modules, $\mathbb{Z}_{4}$ is not ss-$\mathbb{Z}_{2^{\infty}}$-injective, again by Corollary~\ref{Corollary:(3.5)}. \end{proof}

\begin{cor}\label{Corollary:(3.7)} Let $M$ be a right $R$-module such that soc$(M)\cap J(M)$ is a small submodule of $M$ (in particular, if $M$ is finitely generated). If $M$ is strongly ss-injective, then $M=E\oplus T$, where $E$ is injective and $T\cap$ soc$(M)\cap J(M)=0$. Moreover, if soc$(M)\cap J(M)\neq0$, then we can take soc$(M)\cap J(M)\leq^{ess}E$. \end{cor}

\begin{proof} By taking $A=$ soc$(M)\cap J(M)$ and applying Theorem~\ref{Theorem:(3.4)}. \end{proof}

The following example shows that the converse of Theorem~\ref{Theorem:(3.4)} and Corollary~\ref{Corollary:(3.7)} is not true.

\begin{example}\label{Example:(3.8)} \emph{Let $M=\mathbb{Z}_{6}$ as a $\mathbb{Z}$-module. Since $J(M)=0$ and soc$(M)=M$, we have soc$(M)\cap J(M)=0$, so we can write $M=0\oplus M$ with $M\cap($soc$(M)\cap J(M))=0$. Let $N=\mathbb{Z}_{8}$ as a $\mathbb{Z}$-module; then $J(N)=\langle\bar{2}\rangle$ and soc$(N)=\langle\bar{4}\rangle$. Define $\gamma:$ soc$(N)\cap J(N)\longrightarrow M$ by $\gamma(\bar{4})=\bar{3}$; then $\gamma$ is a $\mathbb{Z}$-homomorphism. Assume that $M$ is strongly ss-injective. Then $M$ is ss-$N$-injective, so there exists a $\mathbb{Z}$-homomorphism $\beta:N\longrightarrow M$ such that $\beta\circ i=\gamma$, where $i$ is the inclusion map from soc$(N)\cap J(N)$ to $N$. Since $\beta(J(N))\subseteq J(M)$, we get $\bar{3}=\gamma(\bar{4})=\beta(\bar{4})\in\beta(J(N))\subseteq J(M)=0$, a contradiction. Hence $M$ is not a strongly ss-injective $\mathbb{Z}$-module.}\end{example}

\begin{cor}\label{Corollary:(3.9)} The following statements are equivalent:

\noindent (1) \emph{soc}$(M)\cap J(M)=0$ for every right $R$-module $M$.

\noindent (2) Every right $R$-module is strongly ss-injective.

\noindent (3) Every simple right $R$-module is strongly ss-injective.
\end{cor}

\begin{proof} By Proposition~\ref{Proposition:(2.11)}. \end{proof}

Recall that a ring $R$ is called a right $V$-ring ($GV$-ring, $SI$-ring, respectively) if every simple (simple singular, singular, respectively) right $R$-module is injective. A right $R$-module $M$ is called strongly s-injective if every $R$-homomorphism from $K$ to $M$ extends to $N$ for every right $R$-module $N$, where $K\subseteq Z(N)$ (see \cite{22Zey14}). A submodule $K$ of a right $R$-module $M$ is called $t$-essential in $M$ (written $K\subseteq^{tes}M$) if for every submodule $L$ of $M$, $K\cap L\subseteq Z_{2}(M)$ implies that $L\subseteq Z_{2}(M)$; and $M$ is said to be $t$-semisimple if for every submodule $A$ of $M$ there exists a direct summand $B$ of $M$ such that $B\subseteq^{tes}A$ (see \cite{4AsHaTo13}). In the next results, we give some relations between ss-injectivity and other injectivity conditions, and we provide several new characterizations of $V$-rings, $GV$-rings, $SI$-rings and $QF$-rings.
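For example, over a right $V$-ring every right $R$-module $M$ satisfies $J(M)=0$ (a standard characterization of right $V$-rings), so soc$(M)\cap J(M)=0$ and, by Corollary~\ref{Corollary:(3.9)}, every right module over a right $V$-ring is strongly ss-injective.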
\begin{lem}\label{Lemma:(3.10)} Let $N$ be a submodule of a right $R$-module $M$ such that $M/N$ is semisimple, and let $C$ be any right $R$-module. Then every homomorphism from a submodule (resp.\ a semisimple submodule) $A$ of $M$ to $C$ can be extended to a homomorphism from $M$ to $C$ if and only if every homomorphism from a submodule (resp.\ a semisimple submodule) $B$ of $N$ to $C$ can be extended to a homomorphism from $M$ to $C$. \end{lem}

\begin{proof} ($\Rightarrow$) This is obtained directly.

($\Leftarrow$) Let $f$ be a right $R$-homomorphism from a submodule $A$ of $M$ to $C$. Since $M/N$ is semisimple, there exists a submodule $L$ of $M$ such that $A+L=M$ and $A\cap L\leq N$ (see \cite[Proposition 2.1]{11Lom99}). By hypothesis (applied to the restriction of $f$ to $A\cap L\subseteq N$), there exists a right $R$-homomorphism $g:M\longrightarrow C$ such that $g(x)=f(x)$ for all $x\in A\cap L$. Define $h:M\longrightarrow C$ by $h(x)=f(a)+g(\ell)$ for any $x=a+\ell$ with $a\in A$, $\ell\in L$.
Then $h$ is a well-defined $R$-homomorphism: if $a_{1}+\ell_{1}=a_{2}+\ell_{2}$ with $a_{i}\in A$, $\ell_{i}\in L$, $i=1,2$, then $a_{1}-a_{2}=\ell_{2}-\ell_{1}\in A\cap L$, so $f(a_{1}-a_{2})=g(\ell_{2}-\ell_{1})$, which leads to $h(a_{1}+\ell_{1})=h(a_{2}+\ell_{2})$. Thus $h$ is a well-defined $R$-homomorphism extending $f$. \end{proof}

\begin{cor}\label{Corollary:(3.11)} For right $R$-modules $M$ and $N$, the following hold:

\noindent (1) If $M$ is finitely generated and $M/J(M)$ is a semisimple right $R$-module, then $N$ is \emph{soc}-$M$-injective if and only if $N$ is ss-$M$-injective.

\noindent (2) If $M/$\emph{soc}$(M)$ is a semisimple right $R$-module, then $N$ is \emph{soc}-$M$-injective if and only if $N$ is $M$-injective.

\noindent (3) If $R/S_{r}$ is a semisimple right $R$-module, then $N$ is \emph{soc}-injective if and only if $N$ is injective.

\noindent (4) If $R/S_{r}$ is a semisimple right $R$-module, then $N$ is ss-injective if and only if $N$ is small injective.
\end{cor}

\begin{proof} (1) ($\Rightarrow$) Clear. ($\Leftarrow$) Since $N$ is ss-$M$-injective, every homomorphism from a semisimple small submodule of $M$ to $N$ extends to $M$. Since $M$ is finitely generated, $J(M)\ll M$, and hence every homomorphism from a semisimple submodule of $J(M)$ to $N$ extends to $M$. Since $M/J(M)$ is semisimple, every homomorphism from a semisimple submodule of $M$ to $N$ extends to $M$ by Lemma~\ref{Lemma:(3.10)}. Therefore $N$ is a soc-$M$-injective right $R$-module.

(2) ($\Rightarrow$) Since $N$ is soc-$M$-injective, every homomorphism from a submodule of soc$(M)$ to $N$ extends to $M$. Since $M/$soc$(M)$ is semisimple, Lemma~\ref{Lemma:(3.10)} implies that every homomorphism from a submodule of $M$ to $N$ extends to $M$. Hence $N$ is $M$-injective. ($\Leftarrow$) Clear.

(3) By (2).

(4) Since $R/S_{r}$ is a semisimple right $R$-module, $J(R/S_{r})=0$. By \cite[Theorem 9.1.4(b)]{9Kas82}, we have $J\subseteq S_{r}$ and hence $J=J\cap S_{r}$. Thus $N$ is ss-injective if and only if $N$ is small injective. \end{proof}
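In particular, taking $M=R_{R}$ in part (1), if $R$ is semilocal (that is, $R/J$ is semisimple), then a right $R$-module is soc-$R_{R}$-injective if and only if it is ss-$R_{R}$-injective; this special case is used in Corollary~\ref{Corollary:(3.12)} and in Example~\ref{Example:(4.4)} below.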
\begin{cor}\label{Corollary:(3.12)} Let $R$ be a semilocal ring. Then $S_{r}\cap J$ is finitely generated if and only if $S_{r}$ is finitely generated. \end{cor}

\begin{proof} Suppose that $S_{r}\cap J$ is finitely generated. By Corollary~\ref{Corollary:(2.27)}, every direct sum of soc-injective right $R$-modules is ss-injective. It then follows from Corollary~\ref{Corollary:(3.11)}(1) and \cite[Corollary 2.11]{2AmYoZe05} that $S_{r}$ is finitely generated. The converse is clear. \end{proof}

\begin{thm}\label{Theorem:(3.13)} If $R$ is a right perfect ring, then a right $R$-module $M$ is strongly soc-injective if and only if $M$ is strongly ss-injective. \end{thm}

\begin{proof} ($\Rightarrow$) Clear.

($\Leftarrow$) Let $R$ be a right perfect ring and $M$ a strongly ss-injective right $R$-module. By \cite[Theorem 3.8]{11Lom99}, $R$ is a semilocal ring, and hence by \cite[Theorem 3.5]{11Lom99} every right $R$-module $N$ is semilocal, so $N/J(N)$ is a semisimple right $R$-module. Since $R$ is a right perfect ring, the Jacobson radical of every right $R$-module is small by \cite[Theorems 4.3 and 4.4, p. 69]{7ChDiMa05}. Thus $N/J(N)$ is semisimple and $J(N)\ll N$ for every $N\in$ Mod-$R$. Since $M$ is strongly ss-injective, every homomorphism from a semisimple small submodule of $N$ to $M$ extends to $N$, for every $N\in$ Mod-$R$; in particular, every homomorphism from a semisimple submodule of $J(N)$ to $M$ extends to $N$, for every $N\in$ Mod-$R$. Since $N/J(N)$ is a semisimple right $R$-module for every $N\in$ Mod-$R$, Lemma~\ref{Lemma:(3.10)} implies that every homomorphism from a semisimple submodule of $N$ to $M$ extends to $N$, for every $N\in$ Mod-$R$, and hence $M$ is strongly soc-injective. \end{proof}

\begin{cor}\label{Corollary:(3.14)} A ring $R$ is a $QF$ ring if and only if every strongly ss-injective right $R$-module is projective.
\end{cor}

\begin{proof} ($\Rightarrow$) If $R$ is a $QF$ ring, then $R$ is a right perfect ring, so by Theorem~\ref{Theorem:(3.13)} and \cite[Proposition 3.7]{2AmYoZe05} every strongly ss-injective right $R$-module is projective.

($\Leftarrow$) By hypothesis, every injective right $R$-module is projective, and hence $R$ is a $QF$ ring (see, for instance, \cite[Proposition 12.5.13]{6Bla11}). \end{proof}

\begin{thm}\label{Theorem:(3.15)} The following statements are equivalent for a ring $R$.

\noindent (1) Every direct sum of strongly ss-injective right $R$-modules is injective.

\noindent (2) Every direct sum of strongly soc-injective right $R$-modules is injective.

\noindent (3) $R$ is right artinian.
\end{thm}

\begin{proof} (1)$\Rightarrow$(2) Clear.

(2)$\Rightarrow$(3) Since every direct sum of strongly soc-injective right $R$-modules is injective, $R$ is right noetherian and right semiartinian by \cite[Theorem 3.3 and Theorem 3.6]{2AmYoZe05}, so it follows from \cite[Proposition 5.2, p.189]{18Ste75} that $R$ is right artinian.

(3)$\Rightarrow$(1) By hypothesis, $R$ is right perfect and right noetherian. It follows from Theorem~\ref{Theorem:(3.13)} and \cite[Theorem 3.3]{2AmYoZe05} that every direct sum of strongly ss-injective right $R$-modules is strongly soc-injective. Since $R$ is right semiartinian, \cite[Theorem 3.6]{2AmYoZe05} implies that every direct sum of strongly ss-injective right $R$-modules is injective. \end{proof}

\begin{thm}\label{Theorem:(3.16)} If $R$ is right $t$-semisimple, then a right $R$-module $M$ is injective if and only if $M$ is strongly s-injective. \end{thm}

\begin{proof} ($\Rightarrow$) Obvious.

($\Leftarrow$) Since $M$ is strongly s-injective, $Z_{2}(M)$ is injective by \cite[Proposition 3, p.27]{22Zey14}. Thus every homomorphism $f:K\longrightarrow M$, where $K\subseteq Z_{2}^{r}$, extends to $R$ by \cite[Lemma 1, p.26]{22Zey14}. Since $R$ is right $t$-semisimple, $R/Z_{2}^{r}$ is semisimple as a right $R$-module (see \cite[Theorem 2.3]{4AsHaTo13}). So, by applying Lemma~\ref{Lemma:(3.10)}, we conclude that $M$ is injective. \end{proof}

\begin{cor}\label{Corollary:(3.17)} The following statements are equivalent for a ring $R$.

\noindent (1) $R$ is right $SI$ and right $t$-semisimple.

\noindent (2) $R$ is semisimple.
\end{cor}

\begin{proof} (1)$\Rightarrow$(2)
Since $R$ is a right $SI$ ring, every right $R$-module is strongly s-injective by \cite[Theorem 1, p.29]{22Zey14}. By Theorem~\ref{Theorem:(3.16)}, every right $R$-module is injective, and hence $R$ is a semisimple ring.

(2)$\Rightarrow$(1) Clear. \end{proof}

\begin{cor}\label{Corollary:(3.18)} If $R$ is a right $t$-semisimple ring, then $R$ is a right $V$-ring if and only if $R$ is a right $GV$-ring. \end{cor}

\begin{proof} ($\Rightarrow$) Clear. ($\Leftarrow$) By \cite[Proposition 5, p.28]{22Zey14} and Theorem~\ref{Theorem:(3.16)}. \end{proof}

\begin{cor}\label{Corollary:(3.19)} If $R$ is a right $t$-semisimple ring, then $R/S_{r}$ is a noetherian right $R$-module if and only if $R$ is right noetherian. \end{cor}

\begin{proof} If $R/S_{r}$ is a noetherian right $R$-module, then every direct sum of injective right $R$-modules is strongly s-injective by \cite[Proposition 6]{22Zey14}. Since $R$ is right $t$-semisimple, it follows from Theorem~\ref{Theorem:(3.16)} that every direct sum of injective right $R$-modules is injective, and hence $R$ is right noetherian. The converse is clear. \end{proof}

\section{SS-Injective Rings}

We recall that the dual of a right $R$-module $M$ is $M^{d}=$ Hom$_{R}(M,R_{R})$, which is clearly a left $R$-module.

\begin{prop}\label{Proposition:(4.1)} The following statements are equivalent for a ring $R$.

\noindent (1) $R$ is a right ss-injective ring.

\noindent (2) If $K$ is a semisimple right $R$-module, $P$ and $Q$ are finitely generated projective right $R$-modules, $\beta:K\longrightarrow P$ is an $R$-monomorphism with $\beta(K)\ll P$ and $f:K\longrightarrow Q$ is an $R$-homomorphism, then $f$ can be extended to an $R$-homomorphism $h:P\longrightarrow Q$.

\noindent (3) If $M$ is a semisimple right $R$-module and $f$ is a nonzero monomorphism from $M$ to $R_{R}$ with $f(M)\ll R_{R}$, then $M^{d}=Rf$.
\end{prop}

\begin{proof} (2)$\Rightarrow$(1) Clear.

(1)$\Rightarrow$(2) Since $Q$ is finitely generated, there is an $R$-epimorphism $\alpha_{1}:R^{n}\longrightarrow Q$ for some $n\in\mathbb{Z}^{+}$. Since $Q$ is projective, there is an $R$-homomorphism $\alpha_{2}:Q\longrightarrow R^{n}$ such that $\alpha_{1}\alpha_{2}=I_{Q}$. Define $\tilde{\beta}:K\longrightarrow\beta(K)$ by $\tilde{\beta}(a)=\beta(a)$ for all $a\in K$.
Since $R$ is a right ss-injective ring, it follows from Proposition~\ref{Proposition:(2.8)} and Corollary~\ref{Corollary:(2.4)}(1) that $R^{n}$ is a right ss-$P$-injective $R$-module. So there exists an $R$-homomorphism $h:P\longrightarrow R^{n}$ such that $hi=\alpha_{2}f\tilde{\beta}^{-1}$, where $i:\beta(K)\longrightarrow P$ is the inclusion map. Put $g=\alpha_{1}h:P\longrightarrow Q$. Then $gi=(\alpha_{1}h)i=\alpha_{1}(\alpha_{2}f\tilde{\beta}^{-1})=f\tilde{\beta}^{-1}$, and hence $(g\beta)(a)=g(i(\beta(a)))=(f\tilde{\beta}^{-1})(\beta(a))=f(a)$ for all $a\in K$. Therefore, there is an $R$-homomorphism $g:P\longrightarrow Q$ such that $g\beta=f$.

(1)$\Rightarrow$(3) Let $g\in M^{d}$; then $gf^{-1}:f(M)\longrightarrow R_{R}$. Since $f(M)$ is a semisimple small right ideal of $R$ and $R$ is a right ss-injective ring, $gf^{-1}$ is given by left multiplication by some $a\in R$. Therefore, $g=af$ and hence $M^{d}=Rf$.

(3)$\Rightarrow$(1) Let $f:K\longrightarrow R$ be a right $R$-homomorphism, where $K$ is a semisimple small right ideal of $R$, and let $i:K\longrightarrow R$ be the inclusion map. By (3) we have $K^{d}=Ri$, and hence $f=ci$ in $K^{d}$ for some $c\in R$. Thus there is $c\in R$ such that $f(a)=ca$ for all $a\in K$, and this implies that $R$ is a right ss-injective ring. \end{proof}

\begin{example}\label{Example:(4.2)}
\noindent\emph{(1) Every universally mininjective ring is ss-injective, but not conversely (see Example~\ref{Example:(5.7)}).}

\noindent\emph{(2) The two classes of universally mininjective rings and soc-injective rings are different (see Example~\ref{Example:(5.7)} and Example~\ref{Example:(5.8)}).}
\end{example}

\begin{cor}\label{Corollary:(4.3)} Let $R$ be a right ss-injective ring. Then:

\noindent (1) $R$ is a right mininjective ring.

\noindent (2) $lr(a)=Ra$ for all $a\in S_{r}\cap J$.

\noindent (3) $r(a)\subseteq r(b)$ with $a\in S_{r}\cap J$, $b\in R$ implies $Rb\subseteq Ra$.

\noindent (4) $l(bR\cap r(a))=l(b)+Ra$ for all $a\in S_{r}\cap J$, $b\in R$.

\noindent (5) $l(K_{1}\cap K_{2})=l(K_{1})+l(K_{2})$ for all semisimple small right ideals $K_{1}$ and $K_{2}$ of $R$.
\end{cor}

\begin{proof} (1) By Lemma~\ref{Lemma:(2.5)}. (2), (3), (4) and (5) are obtained from Lemma~\ref{Lemma:(2.13)}. \end{proof}

The following is an example of a right mininjective ring which is not right ss-injective.

\begin{example}\label{Example:(4.4)} \emph{(The Bj\"{o}rk Example \cite[Example 2.5]{15NiYu03}). Let $F$ be a field and let $a\mapsto\bar{a}$ be an isomorphism $F\longrightarrow\bar{F}\subseteq F$, where the subfield $\bar{F}\neq F$. Let $R$ denote the left vector space on basis $\{1, t\}$, and make $R$ into an $F$-algebra by defining $t^{2}=0$ and $ta=\bar{a}t$ for all $a\in F$.
By \cite[Example 2.5]{15NiYu03}, $R$ is a right mininjective local ring. It is mentioned in \cite[Example 4.15]{2AmYoZe05} that $R$ is not right soc-injective. Since $R$ is a local ring, it follows from Corollary~\ref{Corollary:(3.11)}(1) that $R$ is not a right ss-injective ring.}\end{example}

\begin{thm}\label{Theorem:(4.5)} Let $R$ be a right ss-injective ring. Then:

\noindent (1) $S_{r}\cap J\subseteq Z_{r}$.

\noindent (2) If the ascending chain $r(a_{1})\subseteq r(a_{2}a_{1})\subseteq\cdots$ terminates for every sequence $a_{1},a_{2},...$ in $Z_{r}\cap S_{r}$, then $S_{r}\cap J$ is right t-nilpotent and $S_{r}\cap J=Z_{r}\cap S_{r}$.
\end{thm}

\begin{proof} (1) Let $a\in S_{r}\cap J$ and suppose that $bR\cap r(a)=0$ for some $b\in R$. By Corollary~\ref{Corollary:(4.3)}(4), $l(b)+Ra=l(bR\cap r(a))=l(0)=R$, so $l(b)=R$ (because $a\in J$), which implies that $b=0$. Thus $r(a)\subseteq^{ess}R_{R}$ and hence $S_{r}\cap J\subseteq Z_{r}$.

(2) For any sequence $x_{1},x_{2},...$ in $Z_{r}\cap S_{r}$ we have $r(x_{1})\subseteq r(x_{2}x_{1})\subseteq\cdots$. By hypothesis, there exists $m\in\mathbb{N}$ such that $r(x_{m}\cdots x_{2}x_{1})=r(x_{m+1}x_{m}\cdots x_{2}x_{1})$. If $x_{m}\cdots x_{2}x_{1}\neq0$, then $(x_{m}\cdots x_{2}x_{1})R\cap r(x_{m+1})\neq0$, and hence $0\neq x_{m}\cdots x_{2}x_{1}r\in r(x_{m+1})$ for some $r\in R$. Thus $x_{m+1}x_{m}\cdots x_{2}x_{1}r=0$, which implies that $x_{m}\cdots x_{2}x_{1}r=0$, a contradiction. Thus $Z_{r}\cap S_{r}$ is right t-nilpotent, so $Z_{r}\cap S_{r}\subseteq J$. Therefore, $S_{r}\cap J=Z_{r}\cap S_{r}$ by (1). \end{proof}

\begin{prop}\label{Proposition:(4.6)} Let $R$ be a right ss-injective ring. Then:

\noindent (1) If $Ra$ is a simple left ideal of $R$, then \emph{soc}$(aR)\cap J(aR)$ is zero or simple.

\noindent (2) $rl(S_{r}\cap J)=S_{r}\cap J$ if and only if $rl(K)=K$ for all semisimple small right ideals $K$ of $R$.
\end{prop}

\begin{proof} (1) Suppose that soc$(aR)\cap J(aR)$ is nonzero. Let $x_{1}R$ and $x_{2}R$ be any simple small right ideals of $R$ with $x_{i}\in aR$, $i=1,2$. If $x_{1}R\cap x_{2}R=0$, then $l(x_{1})+l(x_{2})=R$ by Corollary~\ref{Corollary:(4.3)}(5). Since $x_{i}\in aR$, we have $x_{i}=ar_{i}$ for some $r_{i}\in R$, $i=1,2$, so that $l(a)\subseteq l(ar_{i})=l(x_{i})$, $i=1,2$. Since $Ra$ is simple, $l(a)\subseteq^{max}R$, and therefore $l(x_{1})=l(x_{2})=l(a)$. Hence $l(a)=R$, so $a=0$, which contradicts the simplicity of $Ra$. Thus soc$(aR)\cap J(aR)$ is simple.

(2) Suppose that $rl(S_{r}\cap J)=S_{r}\cap J$ and let $K$ be a semisimple small right ideal of $R$; trivially, $K\subseteq rl(K)$. Suppose $K\cap xR=0$ for some $x\in rl(K)$. Since $x\in rl(K)\subseteq rl(S_{r}\cap J)=S_{r}\cap J$, the right ideal $xR$ is semisimple and small, so Corollary~\ref{Corollary:(4.3)}(5) gives $l(K)+l(xR)=l(K\cap xR)=l(0)=R$. If $y\in l(K)$, then $yx=0$ (as $x\in rl(K)$), so $y(xr)=0$ for all $r\in R$; hence $l(K)\subseteq l(xR)$. Thus $l(xR)=R$, so $x=0$, and this means that $K\subseteq^{ess}rl(K)$. Since $K\subseteq^{ess}rl(K)\subseteq rl(S_{r}\cap J)=S_{r}\cap J$ and $S_{r}\cap J$ is semisimple, $K$ is a direct summand of $rl(K)$; being essential, $K=rl(K)$. The converse is trivial. \end{proof}

\begin{lem}\label{Lemma:(4.7)} The following statements are equivalent.

\noindent (1) $rl(K)=K$ for all semisimple small right ideals $K$ of $R$.
\noindent (2) $r(l(K)\cap Ra)=K+r(a)$, for all semisimple small right ideals $K$ of $R$ and all $a\in R$.
\end{lem}
\begin{proof}
(1)$\Rightarrow$(2). Clearly, $K+r(a)\subseteq r(l(K)\cap Ra)$ by \cite[Proposition 2.16]{3AnFu74}. Now, let $x\in r(l(K)\cap Ra)$ and $y\in l(aK)$. Then $yaK=0$, so $ya\in l(K)\cap Ra$ and hence $yax=0$, that is, $y\in l(ax)$. Thus $l(aK)\subseteq l(ax)$, and so $ax\in rl(ax)\subseteq rl(aK)=aK$, since $aK$ is a semisimple small right ideal of $R$. Hence $ax=ak$ for some $k\in K$, and so $(x-k)\in r(a)$. This leads to $x\in K+r(a)$, that is, $r(l(K)\cap Ra)=K+r(a)$.

(2)$\Rightarrow$(1). By taking $a=1$.
\end{proof}
\subparagraph*{\textmd{Recall that a right ideal $I$ of $R$ is said to \emph{lie over a summand} of $R_{R}$ if there exists a direct decomposition $R_{R}=A_{R}\oplus B_{R}$ with $A\subseteq I$ and $B\cap I\ll R_{R}$ (see \cite{13Nich76}), which leads to $I=A\oplus(B\cap I)$.}}
\begin{lem}\label{Lemma:(4.8)} Let $K$ be an $m$-generated semisimple right ideal which lies over a summand of $R_{R}$. If $R$ is right ss-injective, then every homomorphism from $K$ to $R_{R}$ can be extended to an endomorphism of $R_{R}$.
\end{lem}
\begin{proof}
Let $\alpha:K\longrightarrow R$ be a right $R$-homomorphism. By hypothesis, $K=eR\oplus B$ for some $e^{2}=e\in R$, where $B$ is an $m$-generated semisimple small right ideal of $R$. Now, we need to prove that $K=eR\oplus(1-e)B$. Clearly, $eR+(1-e)B$ is a direct sum. If $x\in K$, then $x=a+b$ for some $a\in eR$, $b\in B$, so we can write $x=a+eb+(1-e)b$ and this implies that $x\in eR\oplus(1-e)B$. Conversely, let $x\in eR\oplus(1-e)B$. Thus $x=a+(1-e)b$, for some $a\in eR$, $b\in B$. We obtain $x=a+(1-e)b=(a-eb)+b\in eR\oplus B$. It is obvious that $(1-e)B$ is an $m$-generated semisimple small right ideal. Since $R$ is right ss-injective, there exists $\gamma\in End(R_{R})$ such that $\gamma_{|(1-e)B}=\alpha_{|(1-e)B}$. Define $\beta:R_{R}\longrightarrow R_{R}$ by $\beta(x)=\alpha(ex)+\gamma((1-e)x)$ for all $x\in R$; this is a well defined $R$-homomorphism. If $x\in K$, then $x=a+b$ where $a\in eR$ and $b\in(1-e)B$, so $\beta(x)=\alpha(ex)+\gamma((1-e)x)=\alpha(a)+\gamma(b)=\alpha(a)+\alpha(b)=\alpha(x)$, which shows that $\beta$ is an extension of $\alpha$.
\end{proof}
\begin{cor}\label{Corollary:(4.9)} Let $R$ be a semiregular ring (or just a ring in which every finitely generated semisimple right ideal lies over a summand of $R_{R}$). If $R$ is a right ss-injective ring, then every $R$-homomorphism from a finitely generated semisimple right ideal to $R$ extends to $R$.\end{cor}
\begin{proof}
By \cite[Theorem 2.9]{13Nich76} and Lemma~\ref{Lemma:(4.8)}.\end{proof}
\begin{cor}\label{Corollary:(4.10)} Let $S_{r}$ be finitely generated and lie over a summand of $R_{R}$. Then $R$ is a right ss-injective ring if and only if $R$ is right soc-injective.
\end{cor} \subparagraph*{\textmd{\textcolor{black}{Recall that a ring }}\textmd{\textit{\textcolor{black}{R}}}\textmd{\textcolor{black}{{} is called right minannihilator if every simple right ideal }}\textmd{\textit{\textcolor{black}{K}}}\textmd{\textcolor{black}{{} of }}\textmd{\textit{\textcolor{black}{R}}}\textmd{\textcolor{black}{{} is an annihilator; equivalently, if $rl(K)=K$ (see \cite{14NiYo97}).}}} \begin{lem}\label{Lemma:(4.11)} A ring $R$ is a right minannihilator if and only if $rl(K)=K$ for any simple small right ideal $K$ of $R$. \end{lem} \begin{lem}\label{Lemma:(4.12)} A ring $R$ is a left minannihilator if and only if $lr(K)=K$ for any simple small left ideal $K$ of $R$. \end{lem} \begin{cor}\label{Corollary:(4.13)} Let $R$ be a right ss-injective ring, then the following hold: \noindent (1) If $rl(S_{r}\cap J)=S_{r}\cap J$, then $R$ is right minannihilator. \noindent (2) If $S_{\ell}\subseteq S_{r}$, then: \noindent i) $S_{\ell}=S_{r}$. \noindent ii) $R$ is a left minannihilator ring. \end{cor} \begin{proof} (1)\textcolor{black}{{} Let $aR$ be a simple small right ideal of }\textit{\textcolor{black}{R}}\textcolor{black}{, thus $rl(a)=aR$ by Proposition~\ref{Proposition:(4.6)}(2). Therefore, }\textit{\textcolor{black}{R}}\textcolor{black}{{} is a right minannihilator ring.} \textcolor{black}{(2) i) Since }\textit{\textcolor{black}{R}}\textcolor{black}{{} is a right ss-injective ring, thus it is right mininjective and it follows from \cite[Proposition 1.14 (4)]{14NiYo97} that $S_{\ell}=S_{r}$ .} \textcolor{black}{ii) If $Ra$ is a simple small left ideal of }\textit{\textcolor{black}{R}}\textcolor{black}{, then $lr(a)=Ra$ by Corollary~\ref{Corollary:(4.3)}(2) and hence }\textit{\textcolor{black}{R}}\textcolor{black}{{} is a left minannihilator ring.}\end{proof} \begin{prop}\label{Proposition:(4.14)} The following statements are equivalent for a right ss-injective ring $R$. \noindent (1) $S_{\ell}\subseteq S_{r}$. \noindent (2) $S_{\ell}=S_{r}$. \noindent (3) $R$ is a left mininjective ring. \end{prop} \begin{proof} \textcolor{black}{(1)$\Rightarrow$(2) By Corollary~\ref{Corollary:(4.13)}(2) (i).} \textcolor{black}{(2)$\Rightarrow$(3) By Corollary~\ref{Corollary:(4.13)}(2) and \cite[Corollary 2.34]{15NiYu03}, we need only show that }\textit{\textcolor{black}{R}}\textcolor{black}{{} is right minannihilator ring. Let $aR$ be a simple small right ideal, then $Ra$ is a simple small left ideal by \cite[Theorem 1.14]{14NiYo97}. Let $0\neq x\in rl(aR)$, then $l(a)\subseteq l(x)$. Since $l(a)\leq^{max}R$, thus $l(a)=l(x)$ and hence $Rx$ is simple left ideal, that is $x\in S_{r}$. Now , if $Rx=Re$ for some $e=e^{2}\in R$, then $e=rx$ for some $0\neq r\in R$. Since $(e-1)e=0$, then $(e-1)rx=0$, that is $(e-1)ra=0$ and this implies that $ra\in eR$. Thus $raR\subseteq eR$, but $eR$ is semisimple right ideal, so $raR\subseteq^{\oplus}R$ and hence $ra=0$. Therefore, $rx=0$, that is $e=0$, a contradiction. Thus $x\in J$ and hence $x\in S_{r}\cap J$. Therefore, $aR\subseteq rl(aR)\subseteq S_{r}\cap J$. Now, let $aR\cap yR=0$ for some $y\in rl(aR)$, thus $l(aR)+l(yR)=l(aR\cap yR)=R$. Since $y\in rl(aR)$, thus $l(aR)\subseteq l(yR)$ and hence $l(yR)=R$, that is $y=0$. 
Therefore, $aR\subseteq^{ess}rl(aR)$, so $aR=rl(aR)$ as desired.} \textcolor{black}{(3)$\Rightarrow$(1) Follows from \cite[Corollary 2.34]{15NiYu03}.} \end{proof} \subparagraph*{\textmd{\textcolor{black}{Recall that a ring }}\textmd{\textit{\textcolor{black}{R}}}\textmd{\textcolor{black}{{} is said to be right minfull if it is semiperfect, right mininjective and soc$(eR)\neq0$ for each local idempotent $e\in R$ (see \cite{15NiYu03}). A ring }}\textmd{\textit{\textcolor{black}{R}}}\textmd{\textcolor{black}{{} is called right min-}}\textmd{\textit{\textcolor{black}{PF}}}\textmd{\textcolor{black}{, if it is a semiperfect, right mininjective, $S_{r}\subseteq^{ess}R_{R}$, $lr(K)=K$ for every simple left ideal $K\subseteq Re$ for some local idempotent $e\in R$ (see \cite{15NiYu03}).}}} \begin{cor}\label{Corollary:(4.18)} Let $R$ be a right ss-injective ring, semiperfect with $S_{r}\subseteq^{ess}R_{R}$. Then $R$ is right minfull ring and the following statements hold: \noindent (1) Every simple right ideal of $R$ is essential in a summand. \noindent (2) soc$(eR)$ is simple and essential in $eR$ for every local idempotent $e\in R$. Moreover, $R$ is right finitely cogenerated. \noindent (3) For every semisimple right ideal $I$ of $R$, there exists $e=e^{2}\in R$ such that $I\subseteq^{ess}rl(I)\subseteq^{ess}eR$. \noindent (4) $S_{r}\subseteq S_{\ell}\subseteq rl(S_{r})$. \noindent (5) If $I$ is a semisimple right ideal of $R$ and $aR$ is a simple right ideal of $R$ with $I\cap aR=0$, then $rl(I\oplus aR)=rl(I)\oplus rl(aR)$. \noindent (6) $rl(\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}a_{i}R)=\underset{{\scriptscriptstyle i=1}}{\overset{{\scriptscriptstyle n}}{\bigoplus}}rl(a_{i}R)$ , where $\overset{{\scriptscriptstyle n}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}a_{i}R$ is a direct sum of simple right ideals. \noindent (7) The following statements are equivalent. \noindent (a) $S_{r}=rl(S_{r})$. \noindent (b) $K=rl(K)$ for every semisimple right ideals $K$ of $R$. \noindent (c) $kR=rl(kR)$ for every simple right ideals $kR$ of $R$. \noindent (d) $S_{r}=S_{\ell}$. \noindent (e) \emph{soc}$(Re)$ is simple for all local idempotent $e\in R$. \noindent (f) \emph{soc}$(Re)=S_{r}e$ for every local idempotent $e\in R$. \noindent (g) $R$ is left mininjective. \noindent (h) $L=lr(L)$ for every semisimple left ideals $L$ of $R$. \noindent (i) $R$ is left minfull ring. \noindent (j) $S_{r}\cap J=rl(S_{r}\cap J)$. \noindent (k) $K=rl(K)$ for every semisimple small right ideals $K$ of $R$. \noindent (l) $L=lr(L)$ for every semisimple small left ideals $L$ of $R$. \noindent (8) If $R$ satisfies any condition of (7), then $r(S_{\ell}\cap J)\subseteq^{ess}R_{R}$. \end{cor} \begin{proof} \textcolor{black}{(1), (2), (3), (4), (5) and (6) are obtained by Corollary~\ref{Corollary:(3.11)} and \cite[Theorem 4.12]{2AmYoZe05}.} \textcolor{black}{(7) The equivalence of (a), (b), (c), (d), (e), (f), (g), (h) and (i) follows from Corollary~\ref{Corollary:(3.11)} and \cite[Theorem 4.12]{2AmYoZe05}.} \textcolor{black}{(b)$\Rightarrow$(j) Clear.} \textcolor{black}{(j)$\Leftrightarrow$(k) By Proposition~\ref{Proposition:(4.6)}(2).} \textcolor{black}{(k)$\Rightarrow$(c) By Corollary~\ref{Corollary:(4.13)}(1).} \textcolor{black}{(h)$\Rightarrow$(l) Clear.} \textcolor{black}{(l)$\Rightarrow$(d) Let $Ra$ be a simple left ideal of }\textit{\textcolor{black}{R}}\textcolor{black}{. 
}By hypothesis, $lr(A)=A$ for any simple small left ideal $A$ of $R$. By Lemma~\ref{Lemma:(4.12)}, $lr(A)=A$ for any simple left ideal $A$ of $R$ and hence $lr(Ra)=Ra$. Thus $R$ is a right min-PF ring and it follows from \cite[Theorem 3.14]{14NiYo97} that $S_{r}=S_{\ell}$.

(8) Let $K$ be a right ideal of $R$ such that $r(S_{\ell}\cap J)\cap K=0$. Then $Kr(S_{\ell}\cap J)=0$ and we have $K\subseteq lr(S_{\ell}\cap J)=S_{\ell}\cap J=S_{r}\cap J$. Now, $r((S_{\ell}\cap J)+l(K))=r(S_{\ell}\cap J)\cap K=0$. Since $R$ is left Kasch, $(S_{\ell}\cap J)+l(K)=R$ by \cite[Corollary 8.28(5)]{10Lam99}. Thus $l(K)=R$ and hence $K=0$, so $r(S_{\ell}\cap J)\subseteq^{ess}R_{R}$.
\end{proof}
\subparagraph*{\textmd{Recall that a right $R$-module $M$ is called almost-injective if $M=E\oplus K$, where $E$ is injective and $K$ has zero radical (see \cite{23ZeHuAm11}). After reflecting on \cite[Theorem 2.12]{23ZeHuAm11}, we found that it does not always hold; the reason is that the homomorphism $h:(L+J)/J\longrightarrow K$ in part (3)$\Rightarrow$(1) of the proof of Theorem 2.12 in \cite{23ZeHuAm11} is not well defined, as the following example shows.}}
\begin{example}\label{Example:(4.19)} \emph{In the setting of the proof of part (3)$\Rightarrow$(1) in \cite[Theorem 2.12]{23ZeHuAm11}, we consider $R=\mathbb{Z}_{8}$ and $M=K=\langle\bar{4}\rangle=\left\{ \bar{0},\bar{4}\right\} $. Thus $M=E\oplus K$, where $E=0$ is a trivial injective $R$-module and $J(K)=0$. Let $f:L\longrightarrow K$ be the identity map, where $L=K$. Then the map $h:(L+J)/J\longrightarrow K$ given by $h(\ell+J)=f(\ell)$ is not well defined, because $J=\bar{4}+J$ but $h(J)=f(\bar{0})=\bar{0}\neq\bar{4}=f(\bar{4})=h(\bar{4}+J)$.}
\end{example}
\subparagraph*{\textmd{The following example shows that there is a contradiction in \cite[Theorem 2.12]{23ZeHuAm11}.}}
\begin{example}\label{Example:(4.20)} \emph{Assume that $R$ is a right artinian ring which is not semisimple (such rings exist; for example, $\mathbb{Z}_{8}$ has this property). Now, let $M$ be a simple right $R$-module; then $M$ is almost-injective. Clearly, $R$ is semilocal (see \cite[Theorem 9.2.2]{9Kas82}), so $M$ would be injective by \cite[Theorem 2.12]{23ZeHuAm11}. Therefore, $R$ would be a $V$-ring and hence a semisimple ring, a contradiction.}
\emph{In other words, since $\mathbb{Z}_{8}$ is a semilocal ring and $\langle\bar{4}\rangle=\left\{ \bar{0},\bar{4}\right\} $ is almost-injective as a $\mathbb{Z}_{8}$-module, $\langle\bar{4}\rangle$ would be injective by \cite[Theorem 2.12]{23ZeHuAm11}. Thus $\langle\bar{4}\rangle\subseteq^{\oplus}\mathbb{Z}_{8}$, a contradiction.}\end{example}
\begin{thm}\label{Theorem:(4.21)} The following statements are equivalent for a ring $R$.

\noindent (1) $R$ is semiprimitive and every almost-injective right $R$-module is quasi-continuous.

\noindent (2) $R$ is a right ss-injective and right minannihilator ring, $J$ is right artinian, and every almost-injective right $R$-module is quasi-continuous.

\noindent (3) $R$ is a semisimple ring.
\end{thm}
\begin{proof}
(1)$\Rightarrow$(2) and (3)$\Rightarrow$(1) are clear.

(2)$\Rightarrow$(3) Let $M$ be a right $R$-module with zero radical. If $N$ is an arbitrary nonzero submodule of $M$, then $N\oplus M$ is quasi-continuous and, by \cite[Corollary 2.14]{12MoMu90}, $N$ is $M$-injective. Thus $N\leq^{\oplus}M$ and hence $M$ is semisimple. In particular, $R/J$ is a semisimple $R$-module and hence $R/J$ is artinian by \cite[Theorem 9.2.2(b)]{9Kas82}, so $R$ is a semilocal ring. Since $J$ is right artinian, $R$ is right artinian. So it follows from Corollary~\ref{Corollary:(4.18)}(7) that $R$ is right and left mininjective. Thus \cite[Corollary 4.8]{14NiYo97} implies that $R$ is a $QF$ ring. By hypothesis, $R\oplus(R/J)$ is quasi-continuous (it is almost-injective, since $R$ is self-injective and $R/J$ has zero radical), so again by \cite[Corollary 2.14]{12MoMu90} we have that $R/J$ is injective. Since $R$ is a $QF$ ring, $R/J$ is projective (see \cite[Theorem 13.6.1]{9Kas82}). Thus the canonical map $\pi:R\longrightarrow R/J$ splits and hence $J\leq^{\oplus}R$, that is, $J=0$. Therefore $R$ is semisimple.
\end{proof}
\section{STRONGLY SS-INJECTIVE RINGS}
\begin{prop}\label{Proposition:(5.1)} A ring $R$ is strongly right ss-injective if and only if every finitely generated projective right $R$-module is strongly ss-injective.
\end{prop}
\begin{proof}
Since a finite direct sum of strongly ss-injective modules is strongly ss-injective, every finitely generated free right $R$-module is strongly ss-injective. Moreover, a direct summand of a strongly ss-injective module is strongly ss-injective. Therefore, every finitely generated projective right $R$-module is strongly ss-injective. The converse is clear.
\end{proof} \subparagraph*{\textmd{\textcolor{black}{A ring }}\textmd{\textit{\textcolor{black}{R}}}\textmd{\textcolor{black}{{} is called a right Ikeda-Nakayama ring if $l(A\cap B)=l(A)+l(B)$ for all right ideals }}\textmd{\textit{\textcolor{black}{A}}}\textmd{\textcolor{black}{{} and }}\textmd{\textit{\textcolor{black}{B}}}\textmd{\textcolor{black}{{} of }}\textmd{\textit{\textcolor{black}{R}}}\textmd{\textcolor{black}{{} (see \cite[p.148]{15NiYu03}). In the next proposition, the strongly ss-injectivity gives a new version of Ikeda-Nakayama rings.}}} \begin{prop}\label{Proposition:(5.2)} Let $R$ be a strongly right ss-injective ring, then $l(A\cap B)=l(A)+l(B)$ for all semisimple small right ideals $A$ and all right ideals $B$ of $R$.\end{prop} \begin{proof} \textcolor{black}{Let $x\in l(A\cap B)$ and define $\alpha:A+B\longrightarrow R_{R}$ by $\alpha(a+b)=xa$ for all $a\in A$ and $b\in B$. Clearly, $\alpha$ is well define, because if $a_{1}+b_{1}=a_{2}+b_{2}$, then $a_{1}-a_{2}=b_{2}-b_{1}$, that is $x(a_{1}-a_{2})=0$, so $\alpha(a_{1}+b_{1})=\alpha(a_{2}+b_{2})$. The map $\alpha$ induces an }\textit{\textcolor{black}{R}}\textcolor{black}{-homomorphism $\tilde{\alpha}:(A+B)/B\longrightarrow R_{R}$ which is given by $\tilde{\alpha}(a+B)=xa$ for all $a\in A$. Since $(A+B)/B\subseteq$ soc$(R/B)\cap J(R/B)$ and }\textit{\textcolor{black}{R}}\textcolor{black}{{} is a strongly right ss-injective, $\tilde{\alpha}$ can be extended to an }\textit{\textcolor{black}{R}}\textcolor{black}{-homomorphism $\gamma:R/B\longrightarrow R_{R}$. If $\gamma(1+B)=y$, for some $y\in R$, then $y(a+b)=xa$, for all $a\in A$ and $b\in B$. In particular, $ya=xa$ for all $a\in A$ and $yb=0$ for all $b\in B$. Hence $x=(x-y)+y\in l(A)+l(B)$. Therefore, $l(A\cap B)\subseteq l(A)+l(B)$. Since the converse is always holds, thus the proof is complete.} \end{proof} \subparagraph*{\textmd{\textcolor{black}{Recall that a ring }}\textmd{\textit{\textcolor{black}{R}}}\textmd{\textcolor{black}{{} is said to be right simple }}\textmd{\textit{\textcolor{black}{J}}}\textmd{\textcolor{black}{-injective if for any small right ideal }}\textmd{\textit{\textcolor{black}{I}}}\textmd{\textcolor{black}{{} and any }}\textmd{\textit{\textcolor{black}{R}}}\textmd{\textcolor{black}{-homomorphism $\alpha:I\longrightarrow R_{R}$ with simple image, $\alpha=c.$ for some $c\in R$ (see \cite{21YoZh04}).}}} \begin{cor}\label{Corollary:(5.3)} Every strongly right ss-injective ring is right simple $J$-injective. \end{cor} \begin{proof} By Proposition~\ref{Proposition:(3.1)}.\end{proof} \begin{rem}\label{Remark:(5.4)}\emph{ The converse of Corollary~\ref{Corollary:(5.3)} is not true (see Example~\ref{Example:(5.7)})}.\end{rem} \begin{prop}\label{Proposition:(5.5)} Let $R$ be a right Kasch and strongly right ss-injective ring. Then: \noindent (1) $rl(K)=K$, for every small right ideal $K$. Moreover, $R$ is right minannihilator. \noindent (2) If $R$ is left Kasch, then $r(J)\subseteq^{ess}R_{R}$. \end{prop} \begin{proof} \textcolor{black}{(1) By Corollary~\ref{Corollary:(5.3)} and \cite[Lemma 2.4]{21YoZh04}.} \textcolor{black}{(2) Let }\textit{\textcolor{black}{K}}\textcolor{black}{{} be a right ideal of }\textit{\textcolor{black}{R}}\textcolor{black}{{} and $r(J)\cap K=0$. Then $Kr(J)=0$ and we obtain $K\subseteq lr(J)=J$, because }\textit{\textcolor{black}{R}}\textcolor{black}{{} is left Kasch. By (1), we have $r(J+l(K))=r(J)\cap K=0$ and this means that $J+l(K)=R$ (since }\textit{\textcolor{black}{R}}\textcolor{black}{{} is left Kasch). 
Thus $K=0$ and hence $r(J)\subseteq^{ess}R_{R}$.} \end{proof} \subparagraph*{\textmd{The following examples show that the classes of rings: strongly ss-injective rings, soc-injective rings and of small injective rings are different.}} \begin{example}\label{Example:(5.6)} \emph{\textcolor{black}{Let $R=\mathbb{Z}_{(p)}=\{\frac{m}{n}\mid p$ does not divide $n\}$, the localization ring of $\mathbb{Z}$ at the prime $p$. Then }\textit{\textcolor{black}{R}}\textcolor{black}{{} is a commutative local ring and it has zero socle but not principally small injective (see \cite[Example 4]{20Xia11}). Since $S_{r}=0$, thus }\textit{\textcolor{black}{R}}\textcolor{black}{{} is strongly soc-injective ring and hence }\textit{\textcolor{black}{R}}\textcolor{black}{{} is strongly ss-injective ring.}} \end{example} \begin{example}\label{Example:(5.7)} \emph{Let $R=\left\{ \begin{array}{cc} \left(\begin{array}{cc} n & x\\ 0 & n\end{array}\right)\mid & n\in\mathbb{Z}, \, x\in\mathbb{Z}_{2}\end{array}\right\} $. Thus $R$ is a commutative ring, $J=S_{r}=\left\{ \begin{array}{cc} \left(\begin{array}{cc} 0 & x\\ 0 & 0\end{array}\right)\mid & x\in\mathbb{Z}_{2}\end{array}\right\} $ and $R$ is small injective (see \cite[Example(i)]{19ThQu09}). Let $A=J$ and } \emph{\noindent $B=\left\{ \begin{array}{cc} \left(\begin{array}{cc} 2n & 0\\ 0 & 2n\end{array}\right)\mid & n\in\mathbb{Z}\end{array}\right\} $, then $l(A)=\left\{ \begin{array}{cc} \left(\begin{array}{cc} 2n & y\\ 0 & 2n\end{array}\right)\mid & n\in\mathbb{Z},\,y\in\mathbb{Z}_{2}\end{array}\right\} $ and } \noindent\emph{ $l(B)=\left\{ \begin{array}{cc} \left(\begin{array}{cc} 0 & y\\ 0 & 0\end{array}\right)\mid & y\in\mathbb{Z}_{2}\end{array}\right\} $. Thus $l(A)+l(B)=\left\{ \begin{array}{cc} \left(\begin{array}{cc} 2n & y\\ 0 & 2n\end{array}\right)\mid & n\in\mathbb{Z},\, y\in\mathbb{Z}_{2}\end{array}\right\} $.} \noindent\emph{ Since $A\cap B=0$, thus $l(A\cap B)=R$ and this implies that $l(A)+l(B)\neq l(A\cap B)$. Therefore $R$ is not strongly ss-injective and not strongly soc-injective by Proposition~\ref{Proposition:(5.2)}.} \end{example} \begin{example}\label{Example:(5.8)} \emph{\textcolor{black}{Let $F=\mathbb{Z}_{2}$ be the field of two elements, $F_{i}=F$ for $i=1,2,3,...$, $Q=\overset{{\scriptscriptstyle \infty}}{\underset{{\scriptscriptstyle i=1}}{\prod}}F_{i}$, $S=\overset{{\scriptscriptstyle \infty}}{\underset{{\scriptscriptstyle i=1}}{\bigoplus}}F_{i}$ . If }\textit{\textcolor{black}{R}}\textcolor{black}{{} is the subring of }\textit{\textcolor{black}{Q}}\textcolor{black}{{} generated by 1 and }\textit{\textcolor{black}{S}}\textcolor{black}{, then }\textit{\textcolor{black}{R}}\textcolor{black}{{} is a Von Neumann regular ring (see \cite[Example (1), p.28]{22Zey14}). Since }\textit{\textcolor{black}{R}}\textcolor{black}{{} is commutative, thus every simple }\textit{\textcolor{black}{R}}\textcolor{black}{-module is injective by \cite[Corollary 3.73]{10Lam99}. Thus }\textit{\textcolor{black}{R}}\textcolor{black}{{} is }\textit{\textcolor{black}{V}}\textcolor{black}{-ring and hence $J(N)=0$ for every right }\textit{\textcolor{black}{R}}\textcolor{black}{-module }\textit{\textcolor{black}{N}}\textcolor{black}{. It follows from Corollary~\ref{Corollary:(3.9)} that every }\textit{\textcolor{black}{R}}\textcolor{black}{-module is strongly ss-injective. In particular, }\textit{\textcolor{black}{R}}\textcolor{black}{{} is strongly ss-injective ring. 
But }\textit{\textcolor{black}{R}}\textcolor{black}{{} is not soc-injective (see \cite[Example (1)]{22Zey14}).}} \end{example} \begin{example}\label{Example:(5.9)} \emph{\textcolor{black}{Let $R=\mathbb{Z}_{2}[x_{1},x_{2},...]$ where $\mathbb{Z}_{2}$ is the field of two elements, $x_{i}^{3}=0$ for all i, $x_{i}x_{j}=0$ for all $i\neq j$ and $x_{i}^{2}=x_{j}^{2}\neq0$ for all i and j. If $m=x_{i}^{2}$, then }\textit{\textcolor{black}{R}}\textcolor{black}{{} is a commutative, semiprimary, local, soc-injective ring with $J=$span\{m, $x_{1}$, $x_{2}$, ... \}, and }\textit{\textcolor{black}{R}}\textcolor{black}{{} has simple essential socle $J^{2}=\mathbb{Z}_{2}m$ (see \cite[Example 5.7]{2AmYoZe05}). It follows from \cite[Example 5.7]{2AmYoZe05} that the }\textit{\textcolor{black}{R}}\textcolor{black}{-homomorphism $\gamma:J\longrightarrow R$ which is given by $\gamma(a)=a^{2}$ for all $a\in J$ with simple image can be not extended to }\textit{\textcolor{black}{R}}\textcolor{black}{, then }\textit{\textcolor{black}{R}}\textcolor{black}{{} is not simple }\textit{\textcolor{black}{J}}\textcolor{black}{-injective and not small injective, so it follows from Corollary~\ref{Corollary:(5.3)} that }\textit{\textcolor{black}{R}}\textcolor{black}{{} is not strongly ss-injective.}} \end{example} \subparagraph*{\textmd{\textcolor{black}{Recall that }}\textmd{\textit{\textcolor{black}{R}}}\textmd{\textcolor{black}{{} is said to be right minsymmetric ring if $aR$ is simple right ideal then $Ra$ is simple left ideal (see \cite{14NiYo97}). Every right mininjective ring is right minsymmetric by \cite[Theorem 1.14]{14NiYo97}.}}} \begin{thm}\label{Theorem:(5.10)} A ring $R$ is QF if and only if $R$ is a strongly right ss-injective and right noetherian ring with $S_{r}\subseteq^{ess}R_{R}$.\end{thm} \begin{proof} \textcolor{black}{($\Rightarrow$) This is clear.} \textcolor{black}{($\Leftarrow$) By Corollary~\ref{Corollary:(4.3)}(1), }\textit{\textcolor{black}{R}}\textcolor{black}{{} is right minsymmetric. It follows from \cite[Lemma 2.2]{19ThQu09} that }\textit{\textcolor{black}{R}}\textcolor{black}{{} is right perfect. Thus }\textit{\textcolor{black}{R}}\textcolor{black}{{} is strongly right soc-injective, by Theorem~\ref{Theorem:(3.13)}. Since $S_{r}\subseteq^{ess}R_{R}$, so it follows from \cite[Corollary 3.2]{2AmYoZe05} that }\textit{\textcolor{black}{R}}\textcolor{black}{{} is self-injective and hence }\textit{\textcolor{black}{R}}\textcolor{black}{{} is }\textit{\textcolor{black}{QF}}\textcolor{black}{.}\end{proof} \begin{cor}\label{Corollary:(5.11)} For a ring $R$ the following statements are true. \noindent (1) $R$ is semisimple if and only if $S_{r}\subseteq^{ess}R_{R}$ and every semisimple right $R$-module is strongly soc-injective. \noindent (2) $R$ is QF if and only if $R$ is strongly right ss-injective, semiperfect with essential right socle and $R/S_{r}$ is noetherian as right $R$-module. \end{cor} \begin{proof} (1) Suppose that $S_{r}\subseteq^{ess}R_{R}$ and every semisimple right $R$-module is strongly soc-injective, then $R$ is a right noetherian right V-ring by \cite[Proposition 3.12]{2AmYoZe05}, so it follows from Corollary~\ref{Corollary:(3.9)} that $R$ is strongly right ss-injective. Thus $R$ is QF by Theorem~\ref{Theorem:(5.10)}. But $J=0$, so $R$ is semisimple. The converse is clear. (2) By\textcolor{black}{{} \cite[Theorem 2.9]{14NiYo97}, $J=Z_{r}$. 
Since $R/Z_{2}^{r}$ is a homomorphic image of $R/Z_{r}$ and }\textit{\textcolor{black}{R}}\textcolor{black}{{} is a semilocal ring, thus }\textit{\textcolor{black}{R}}\textcolor{black}{{} is a right }\textit{\textcolor{black}{t}}\textcolor{black}{-semisimple. By Corollary~\ref{Corollary:(3.19)}, }\textit{\textcolor{black}{R}}\textcolor{black}{{} is right noetherian, so it follows from Theorem~\ref{Theorem:(5.10)} that }\textit{\textcolor{black}{R}}\textcolor{black}{{} is }\textit{\textcolor{black}{QF}}\textcolor{black}{. The converse is clear.}\end{proof} \begin{thm}\label{Theorem:(5.12)} A ring $R$ is $QF$ if and only if $R$ is a strongly right ss-injective, $l(J^{2})$ is a countable generated left ideal, $S_{r}\subseteq^{ess}R_{R}$ and the chain $r(x_{1})\subseteq r(x_{2}x_{1})\subseteq...\subseteq r(x_{n}x_{n-1}...x_{1})\subseteq...$ terminates for every infinite sequence $x_{1},x_{2},...$ in $R$.\end{thm} \begin{proof} \textcolor{black}{($\Rightarrow$) Clear.} \textcolor{black}{($\Leftarrow$) By \cite[Lemma 2.2]{19ThQu09}, }\textit{\textcolor{black}{R}}\textcolor{black}{{} is right perfect. Since $S_{r}\subseteq^{ess}R_{R}$, thus }\textit{\textcolor{black}{R}}\textcolor{black}{{} is right Kasch (by \cite[Theorem 3.7]{14NiYo97}). Since }\textit{\textcolor{black}{R}}\textcolor{black}{{} is strongly right ss-injective, thus }\textit{\textcolor{black}{R}}\textcolor{black}{{} is right simple }\textit{\textcolor{black}{J}}\textcolor{black}{-injective, by Corollary~\ref{Corollary:(5.3)}. Now, by Proposition~\ref{Proposition:(5.5)}(1) we have $rl(S_{r}\cap J)=S_{r}\cap J$, so it follows from Corollary~\ref{Corollary:(4.18)}(7) that $S_{r}=S_{\ell}$. By \cite[Lemma 3.36]{15NiYu03}, $S_{2}^{r}=l(J^{2})$. The result now follows from \cite[Theorem 2.18]{21YoZh04}.}\end{proof} \begin{rem}\label{Remark:(5.13)} \emph{The condition \textcolor{black}{$S_{r}\subseteq^{ess}R_{R}$ in Theorem~\ref{Theorem:(5.10)} and Theorem~\ref{Theorem:(5.12)} can be not deleted, for example, $\mathbb{Z}$ is strongly ss-injective noetherian ring but not }\textit{\textcolor{black}{QF}}\textcolor{black}{.}}\end{rem} The following two results are extension of Proposition 5.8 in \cite{2AmYoZe05}. \begin{cor}\label{Corollary:(5.15)} The following statements are equivalent. \noindent (1) $R$ is a $QF$ ring. \noindent (2) $R$ is a left perfect, strongly left and right ss-injective ring. \end{cor} \begin{proof} By Corollary~\ref{Corollary:(5.3)} and \cite[Corollary 2.12]{21YoZh04}.\end{proof} \begin{thm}\label{Theorem:(5.16)} The following statements are equivalent: \noindent (1) $R$ is a $QF$ ring. \noindent (2) $R$ is a strongly left and right ss-injective, right Kasch and $J$ is left $t$-nilpotent. \noindent (3) $R$ is a strongly left and right ss-injective, left Kasch and $J$ is left $t$-nilpotent. \end{thm} \begin{proof} (1)\textcolor{black}{$\Rightarrow$(2) and (1)$\Rightarrow$(3) are clear.} \textcolor{black}{(3)$\Rightarrow$(1) Suppose that $xR$ is simple right ideal. Thus either $rl(x)=xR\subseteq^{\oplus}R_{R}$ or $x\in J$. If $x\in J$, then $rl(x)=xR$ (since }\textit{\textcolor{black}{R}}\textcolor{black}{{} is right minannihilator), so it follows from Theorem~\ref{Theorem:(3.4)} that $rl(x)\subseteq^{ess}E\subseteq^{\oplus}R_{R}$. Therefore, $rl(x)$ is essential in a direct summand of $R_{R}$ for every simple right ideal $xR$. Let }\textit{\textcolor{black}{K}}\textcolor{black}{{} be a maximal left ideal of }\textit{\textcolor{black}{R}}\textcolor{black}{. 
Since }\textit{\textcolor{black}{R}}\textcolor{black}{{} is left Kasch, thus $r(K)\neq0$ by \cite[Corollary 8.28]{10Lam99}. Choose $0\neq y\in r(K)$, so $K\subseteq l(y)$ and we conclude that $K=l(y)$. Since $Ry\cong R/l(y)$, thus $Ry$ is simple left ideal. But }\textit{\textcolor{black}{R}}\textcolor{black}{{} is left mininjective ring, so $yR$ is right simple ideal by \cite[Theorem 1.14]{14NiYo97} and this implies that $r(K)\subseteq^{ess}eR$ for some $e^{2}=e\in R$ (since $r(K)=rl(y)$). Thus }\textit{\textcolor{black}{R}}\textcolor{black}{{} is semiperfect by \cite[Lemma 4.1]{15NiYu03} and hence }\textit{\textcolor{black}{R}}\textcolor{black}{{} is left perfect (since }\textit{\textcolor{black}{J}}\textcolor{black}{{} is left }\textit{\textcolor{black}{t}}\textcolor{black}{-nilpotent), so it follows from Corollary~\ref{Corollary:(5.15)} that }\textit{\textcolor{black}{R}}\textcolor{black}{{} is }\textit{\textcolor{black}{QF}}\textcolor{black}{.} \textcolor{black}{(2)$\Rightarrow$(1) It is similar to the proof of (3)$\Rightarrow$(1).}\end{proof} \begin{thm}\label{Theorem:(5.17)} The ring $R$ is $QF$ if and only if $R$ is strongly left and right ss-injective, left and right Kasch, and the chain $l(a_{1})\subseteq l(a_{1}a_{2})\subseteq l(a_{1}a_{2}a_{3})\subseteq...$ terminates for every $a_{1},a_{2},...\in Z_{\ell}$. \end{thm} \begin{proof} \textcolor{black}{($\Rightarrow$) Clear.} \textcolor{black}{($\Leftarrow$) By Proposition~\ref{Proposition:(5.5)}, $l(J)$ is essential in $_{R}R$. Thus $J\subseteq Z_{\ell}$. Let $a_{1},a_{2},...\in J$ , we have $l(a_{1})\subseteq l(a_{1}a_{2})\subseteq l(a_{1}a_{2}a_{3})\subseteq...$. Thus there exists $k\in\mathbb{N}$ such that $l(a_{1}...a_{k})=l(a_{1}...a_{k}a_{k+1})$ (by hypothesis). Suppose that $a_{1}...a_{k}\neq0$, so $R(a_{1}...a_{k})\cap l(a_{k+1})\neq0$ (since $l(a_{k+1})$ is essential in $_{R}R$). Thus $ra_{1}...a_{k}\neq0$ and $ra_{1}...a_{k}a_{k+1}=0$ for some $r\in R$, a contradiction. Therefore, $a_{1}...a_{k}=0$ and hence }\textit{\textcolor{black}{J}}\textcolor{black}{{} is left }\textit{\textcolor{black}{t}}\textcolor{black}{-nilpotent, so it follows from Theorem~\ref{Theorem:(5.16)} that }\textit{\textcolor{black}{R}}\textcolor{black}{{} is }\textit{\textcolor{black}{QF}}\textcolor{black}{.}\end{proof} \begin{cor}\label{Corollary:(5.18)} The ring $R$ is $QF$ if and only if $R$ is strongly left and right ss-injective with essential right socle, and the chain $r(a_{1})\subseteq r(a_{2}a_{1})\subseteq r(a_{3}a_{2}a_{1})\subseteq...$ terminates for every infinite sequence $a_{1},a_{2},...$ in $R$.\end{cor} \begin{proof} By \cite[Lemma 2.2]{19ThQu09} and Corollary~\ref{Corollary:(5.15)}. \end{proof} \end{document}
\begin{document}
\title{What does monogamy in higher powers of a correlation measure mean?}
\author{P. J. Geetha}\affiliation{Department of Physics, Kuvempu University, Shankaraghatta, Shimoga-577 451, India}
\author{Sudha} \affiliation{Department of Physics, Kuvempu University, Shankaraghatta, Shimoga-577 451, India} \affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.}
\author{A. R. Usha Devi} \affiliation{Department of Physics, Bangalore University, Bangalore 560 056, India} \affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.}
\date{\today}
\begin{abstract}
We examine here the proposition that all multiparty quantum states can be made monogamous by considering positive integral powers of any quantum correlation measure. With the Rajagopal-Rendell quantum deficit as the measure of quantum correlations for symmetric $3$-qubit pure states, we illustrate that the monogamy inequality is satisfied for higher powers of the quantum deficit. We discuss the drawbacks of this inequality in the quantification of correlations in the state. We also prove a monogamy inequality in higher powers of classical mutual information and bring out the fact that such an inequality need not necessarily imply restricted shareability of correlations. We thus disprove the utility of higher powers of any correlation measure in establishing the monogamous nature of multiparty quantum states.
\end{abstract}
\maketitle
\section{Introduction}
\label{intro}
It is well known that classical correlations are infinitely shareable, whereas there is a restriction on the shareability of quantum entanglement amongst the several parts of a multipartite quantum state~\cite{ckw,osb,ter,proof}. The concept of monogamy of entanglement and monogamy of quantum correlations has been studied quite extensively~[1--18], and it has been shown that measures of quantum correlations such as quantum discord~\cite{oz} and quantum deficit~\cite{akrqd} are not monogamous for some categories of pure states~\cite{prabhu,sudha}. The polygamous nature of quantum correlations other than entanglement has initiated discussions on the properties to be satisfied by a correlation measure in order to be monogamous~\cite{bruss}, and it has been shown that a measure of correlations is in general non-monogamous if it does not vanish on the set of all separable states~\cite{bruss}. While it has been shown~\cite{sqd} that the square of quantum discord~\cite{oz} obeys the monogamy inequality for $3$-qubit pure states, an attempt to show that \emph{all multiparty states can be made monogamous} by considering higher integral powers of a non-monogamous quantum correlation measure has been made in Ref.~\cite{salini}. It is shown there (see Theorem 1 of Ref.~\cite{salini}) that if $Q$ is a non-monogamous correlation measure which is monotonically decreasing under discarding of systems, then $Q^n$, $n=2,\,3,\ldots$, can be a monogamous correlation measure for tripartite states~\cite{salini}. With the quantum work-deficit $Q_{wd}$ as a correlation measure, it is numerically shown that almost all $3$-qubit pure states become monogamous when the fifth power of $Q_{wd}$ is considered~\cite{salini}.

In this work, we analyze the implications of the proposition~\cite{salini} that higher integral powers of a quantum correlation measure reveal monogamy in all multiparty quantum states. Towards this end, we first verify the above proposition by adopting quantum deficit~\cite{akrqd}, an operational measure of quantum correlations, for our analysis.
Quantum deficit has been shown to be, in general, a non-monogamous measure of correlations for $3$-qubit pure states~\cite{sudha}. In Ref.~\cite{sudha}, the monogamy properties (with respect to quantum deficit) of symmetric $3$-qubit pure states belonging to the $2$- and $3$-distinct Majorana spinor classes~\cite{maj} have been examined, and it has been shown that all states belonging to the $2$-distinct spinors class (including W-states) are polygamous. It has also been shown~\cite{sudha} that the superposition of the W and obverse W states, belonging to the SLOCC class of $3$-distinct Majorana spinors~\cite{maj}, is polygamous with respect to quantum deficit. Here we consider both these classes of states and illustrate that they can be monogamous with respect to higher powers of quantum deficit. We examine the possibility of quantifying tripartite correlations using the monogamy relation in higher powers of a quantum correlation measure and illustrate that such an exercise is unlikely to yield fruitful results. In order to analyze the relevance of monogamy with respect to higher integral powers of a quantum correlation measure, we bring forth a monogamy-kind-of inequality in higher powers of classical mutual information~\cite{nc}. The possibility of a monogamy relation in higher powers of a classical correlation measure, even in the arena of classical probability theory, raises questions regarding the meaning attributed to such an inequality. We discuss this aspect and bring out the fact that monogamy in higher powers of a quantum correlation measure need not necessarily imply limited shareability of correlations.

Quantum deficit, a useful measure of quantum correlations, was proposed by Rajagopal and Rendell~\cite{akrqd} while enquiring into the circumstances in which entropic methods can distinguish the quantum separability and classical correlations of a composite state. It is defined as the relative entropy~\cite{nc} of the state $\rho_{AB}$ with respect to its classically decohered counterpart $\rho^d_{AB}$. That is, the quantum deficit of the state $\rho_{AB}$ is given by
\begin{eqnarray}
D_{AB}&=&S(\rho_{AB}\vert\vert \rho^d_{AB}) \nonumber \\
&=&\mbox{Tr} (\rho_{AB} \ln \rho_{AB})-\mbox{Tr} (\rho_{AB} \ln \rho^d_{AB}),
\end{eqnarray}
and it determines the quantum excess of correlations in $\rho_{AB}$ with reference to its classically decohered counterpart $\rho_{AB}^d$. As $\rho^d_{AB}$ is diagonal in the eigenbasis $\{\vert a\rangle\}$, $\{\vert b\rangle\}$ of the subsystems $\rho_A$, $\rho_B$ (common to both $\rho_{AB}$ and $\rho_{AB}^d$), one can readily evaluate $D_{AB}$ as~\cite{sudha}
\begin{eqnarray}
\label{newd}
D_{AB}&=&\mbox{Tr} (\rho_{AB} \ln \rho_{AB})-\mbox{Tr} (\rho_{AB} \ln \rho^d_{AB})\nonumber \\
&=& \sum_i \lambda_i \ln \lambda_i-\sum_{a,b} P_{ab}\ln P_{ab},
\end{eqnarray}
where $\lambda_i$ are the eigenvalues of the state $\rho_{AB}$ and $P_{ab}=\langle a, b\vert\rho_{AB}\vert a, b\rangle$ denote the diagonal elements of $\rho_{AB}^d$.

Through an explicit evaluation of the quantum deficit $D_{AB}$, the polygamous nature (with respect to quantum deficit) of two SLOCC inequivalent classes of symmetric $3$-qubit pure states has been illustrated in Ref.~\cite{sudha}. In particular, it is shown~\cite{sudha} that the monogamy relation
\begin{equation}
\label{qdmon}
D_{AB}+D_{AC}\leq D_{A:BC}
\end{equation}
is \emph{not satisfied} for symmetric $3$-qubit states with $2$-distinct Majorana spinors~\cite{maj}.
Amongst the $3$-qubit GHZ state and the superposition of the W and obverse W states, the monogamy inequality (\ref{qdmon}) is satisfied by the GHZ state, while the superposition of the W and obverse W states does not obey it~\cite{sudha}, in spite of both states belonging to the SLOCC family of $3$-distinct spinors~\cite{maj}. In the following we illustrate that these symmetric $3$-qubit pure states obey a monogamy relation in higher powers of the quantum deficit $D_{AB}$. The states of interest are given by
\begin{eqnarray}
\label{w}
\vert \rm{W} \rangle &=& \frac{\vert 100 \rangle+\vert 010 \rangle+\vert 001 \rangle}{\sqrt{3}} \nonumber \\
\label{ww}
\vert \rm{ W\bar{W}} \rangle &=& \frac{\vert {\rm W} \rangle +\vert {\rm{\bar W}} \rangle}{\sqrt{2}}
\end{eqnarray}
where $\vert {\rm{\bar W}}\rangle=\frac{\vert 011 \rangle+\vert 101 \rangle+\vert 110 \rangle}{\sqrt{3}}$ is the obverse W state.

The reduced density matrices of the $3$-qubit W state are given by
\begin{eqnarray}
\rho_{AB} &=& \rho_{AC}=\frac{1}{3}\ba{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \mbox{and} \nonumber \\
\rho_A &=& \rho_B=\rho_C=\frac{1}{3}\ba{cc} 2 & 0 \\ 0 & 1 \end{array}\right)
\end{eqnarray}
With $\chi_1=(1,\,0)$, $\chi_2=(0,\,1)$ being the eigenvectors of $\rho_A$, the decohered counterpart $\rho^d_{AB}$ of $\rho_{AB}$ is obtained as~\cite{sudha}
\begin{eqnarray}
\rho^d_{AB}&=&\mbox{diag}\,\left( P_{11},\, P_{12},\,P_{21},\,P_{22} \right) \nonumber \\
&=&\mbox{diag}\,\left(\frac{1}{3},\,\frac{1}{3},\,\frac{1}{3},\,0\right)=\rho^d_{AC}; \\
P_{ij}&=&\langle \chi_i,\chi_j\vert \rho_{AB}\vert \chi_i, \chi_j\rangle. \nonumber
\end{eqnarray}
As $\lambda_1=\frac{2}{3}$, $\lambda_2=\frac{1}{3}$ are the non-zero eigenvalues of $\rho_{AB}$, we obtain the quantum deficit $D_{AB}$ to be,
\begin{equation}
D_{AB}=\frac{2}{3}\ln \frac{2}{3}+\frac{1}{3}\ln \frac{1}{3}- \ln \frac{1}{3} \approx 0.462
\end{equation}
An evaluation of the eigenvectors $\eta_j$, $j=1,\,2,\,3,\,4$ of the bipartite subsystems $\rho_{AB}=\rho_{AC}$ of the state $\vert {\rm W}\rangle$ allows us to find the decohered counterpart $\rho^d_{A:BC}$ of the state $\rho_{ABC}$, and we have
\begin{eqnarray}
\rho^d_{A:BC}&=&\mbox{diag}\,\left(P_{11},\, P_{12},\,P_{13},\,P_{14},\,P_{21},\, P_{22},\,P_{23},\,P_{24} \right) \nonumber \\
&=& \mbox{diag} \, \left(0,\,0,\,\frac{2}{3},\,0,\,0,\,0,\,0,\,\frac{1}{3}\right); \\
P_{ij}&=&\langle \chi_i,\eta_j\vert \rho_{\rm W}\vert \chi_i, \eta_j\rangle, \ \ \rho_{\rm W}=\vert {\rm W} \rangle \langle {\rm W} \vert. \nonumber
\end{eqnarray}
The quantum deficit $D_{A:BC}$ of the W state is thus obtained as,
\begin{equation}
D_{A:BC}=0-\left(\frac{2}{3}\ln \frac{2}{3}+\frac{1}{3}\ln \frac{1}{3} \right) \approx 0.636. \nonumber
\end{equation}
It is easy to see that
\begin{equation}
D_{AB}+D_{AC}=2\, D_{AB}\approx 0.924 > 0.636 \approx D_{A:BC}
\end{equation}
and the monogamy inequality (\ref{qdmon}) is not obeyed~\cite{sudha}.

For the state $\vert \rm{ W\bar{W}}\rangle$, the reduced density matrices are
\begin{eqnarray}
\rho_{AB}&=&\rho_{AC}=\frac{1}{6}\ba{cccc} 1 & 1 & 1 & 0 \\ 1 & 2 & 2 & 1 \\ 1 & 2 & 2 & 1 \\ 0 & 1 & 1 & 1 \end{array}\right); \nonumber \\
\rho_A &=& \rho_B=\rho_C=\frac{1}{6}\ba{cc} 3 & 2 \\ 2 & 3 \end{array}\right)
\end{eqnarray}
and their common non-zero eigenvalues are $\lambda_1=\frac{5}{6}$, $\lambda_2=\frac{1}{6}$.
The decohered density matrices $\rho^d_{AB}$, $\rho^d_{A:BC}$ are respectively given by~\cite{sudha}
\begin{eqnarray}
\rho^d_{AB}&=&\mbox{diag}\,\left(\frac{3}{4},\,\frac{1}{12},\,\frac{1}{12},\,\frac{1}{12}\right) \nonumber \\
& & \nonumber \\
\rho^d_{A:BC}&=&\mbox{diag} \, \left(\frac{5}{6},\,0,\,0,\,0,\,0,\,\frac{1}{6},\,0,\,0\right)
\end{eqnarray}
The quantum deficit $D_{AB}(=D_{AC})$ and $D_{A:BC}$ are therefore obtained as
\[
D_{AB}=\frac{5}{6}\ln \frac{5}{6}+\frac{1}{6}\ln \frac{1}{6}-\left(\frac{3}{4} \ln \frac{3}{4}+\frac{3}{12}\ln \frac{1}{12}\right) \approx 0.386
\]
\begin{equation}
D_{A:BC}=0-\left(\frac{5}{6}\ln \frac{5}{6}+\frac{1}{6}\ln \frac{1}{6} \right) \approx 0.45.
\end{equation}
Here too we have
\begin{equation}
D_{AB}+D_{AC}=2\, D_{AB}\approx 0.772 > 0.45 \approx D_{A:BC}
\end{equation}
and the monogamy inequality (\ref{qdmon}) is not obeyed~\cite{sudha}.

Having illustrated the polygamous nature of the states $\vert {\rm W}\rangle$, $\vert {\rm W{\bar W}}\rangle$ with respect to quantum deficit, we wish to see whether higher powers of the quantum deficit indicate monogamy in these states and, if so, for what powers. In Table~I, we have tabulated $D^n_{AB}$, $D^n_{A:BC}$ and the value of $D_{A:BC}^{n}-2 D_{AB}^{n}$, $n\geq 1$, for both $\vert {\rm W}\rangle$ and $\vert {\rm W{\bar W}}\rangle$.
\begin{table}[ht]
\caption{Monogamy w.r.t. integral powers of quantum deficit for $3$-qubit pure symmetric states}
\begin{center}
\scriptsize{\textbf{
\begin{tabular}{|c|c|c|c|c|}
\hline
 &  & \multicolumn{2}{c|}{Quantum Deficit} & \\
\cline{3-4}
State & Powers & $D_{AB}^{n}$ & $D_{A:BC}^{n}$ & $D_{A:BC}^{n}-2 D_{AB}^{n}$ \\
 & ($n$) & ($=D_{AC}^{n}$) &  & \\
\hline\hline
 & $1$ & $0.462$ & $0.636$ & $-0.287$ \\
\cline{2-5}
$3$ qubit & $2$ & $0.213$ & $0.405$ & $-0.022$ \\
\cline{2-5}
W & $3$ & $0.098$ & $0.257$ & $0.060$ \\
\cline{2-5}
 & $4$ & $0.045$ & $0.164$ & $0.072$ \\
\cline{2-5}
 & $5$ & $0.021$ & $0.104$ & $0.062$ \\
\hline \hline
 & $1$ & $0.386$ & $0.450$ & $-0.322$ \\
\cline{2-5}
$3$ qubit & $2$ & $0.149$ & $0.203$ & $-0.095$ \\
\cline{2-5}
$\rm{W\bar{W}}$ & $3$ & $0.057$ & $0.091$ & $-0.023$ \\
\cline{2-5}
 & $4$ & $0.022$ & $0.041$ & $-0.003$ \\
\cline{2-5}
 & $5$ & $0.008$ & $0.018$ & $0.0013$ \\
\hline
\end{tabular}}}
\end{center}
\end{table}
It can be readily seen from the table that, though the $3$-qubit states $\vert {\rm W} \rangle$ and $\vert {\rm W{\bar W}}\rangle$ are polygamous with respect to quantum deficit, its third power satisfies the monogamy inequality for $\vert {\rm W} \rangle$, whereas the fifth power of the quantum deficit is required to make the state $\vert {\rm W{\bar W}}\rangle$ monogamous. We have also examined the monogamy with respect to $D^n_{AB}\left(=D^n_{AC}\right)$, $D^n_{A:BC}$ for an arbitrary symmetric $3$-qubit pure state belonging to the family of $2$-distinct spinors~\cite{maj}. The state is given by~\cite{sudha}
\begin{equation}
\vert \psi \rangle \equiv \cos \frac{\theta}{2} \vert 000 \rangle+ \sin \frac{\theta}{2} \left(\frac{\vert 100 \rangle+\vert 010 \rangle+\vert 001 \rangle}{\sqrt{3}}\right)
\end{equation}
with $0<\theta<\pi$, and for $\theta=\pi$ we get the $\vert {\rm{W}}\rangle$ state. An explicit evaluation of $D_{AB}$, $D_{A:BC}$, as a function of $\theta$, has been done in
Ref.~\cite{sudha} and the state is seen to be polygamous for all values of $\theta$. But in higher powers of $D_{AB}$, $D_{A:BC}$, the monogamy inequality is satisfied and, as the power $n$ increases, more states become monogamous. Fig.~1 illustrates this feature.
\begin{figure}
\caption{The plot of $(D_{A:BC})^n-2(D_{AB})^n$ versus $\theta$ for $3$-qubit pure states with $2$-distinct spinors}
\end{figure}
At this juncture, it would be of interest to know whether quantification of non-classical correlations in a tripartite state is possible through the monogamy inequality in higher powers of the correlation measure. Notice that monogamy inequalities in the squared concurrence~\cite{ckw} and squared negativity~\cite{neg} have been useful in quantifying the tripartite correlation through the concurrence tangle~\cite{ckw} and negativity tangle~\cite{neg}. A systematic attempt to quantify the correlations in $3$-qubit pure states using the square of quantum discord as the measure of quantum correlations has been made in Ref.~\cite{sqd}. We observe here that, in order to quantify the tripartite correlations using the monogamy inequality in $Q^n$, one has to consider the non-negative quantity $\tau_q=Q^r_{A:BC}-Q^r_{AB}-Q^r_{AC}$, where $r\geq 1$ is the \emph{minimum degree} at which the state becomes monogamous. But there immediately arise questions regarding whether the minimum degree $r$ of $Q$ has any bearing on the amount of non-classical tripartite correlations in the state. In fact, every correlation measure requires a different integral power $r$ to reveal the monogamous nature of a $3$-qubit state~\cite{salini}. Whereas the fifth power of quantum discord is sufficient to make almost all $3$-qubit pure states monogamous~\cite{salini}, we have seen here that one requires still higher powers of the quantum deficit ($r\geq 10$) (see Fig.~1) for some pure symmetric $3$-qubit states. As such, there does not appear to be any relation between the non-classical correlations in the state and the minimum degree $r$ of a correlation measure. For instance, we have (see Table~I)
\begin{eqnarray*}
D^r_{A:BC}-D^r_{AB}-D^r_{AC}&=&0.06 \ \mbox{for } \ \vert {\rm{W}}\rangle \ \mbox{when}\ r=3 \\
D^r_{A:BC}-D^r_{AB}-D^r_{AC}&=&0.0013 \ \mbox{for} \ \vert {\rm{W{\bar W}}}\rangle \ \mbox{when}\ r=5.
\end{eqnarray*}
Also, as $D_{A:BC}-D_{AB}-D_{AC}=1$ for the $3$-qubit GHZ state~\cite{sudha}, we have $D^r_{A:BC}-D^r_{AB}-D^r_{AC}=1$ at $r=1$ for the GHZ state. It is not apparent whether states having higher correlations possess a larger $r$ with a smaller value of $\tau_q$, or vice versa. Whichever is the case, the way in which $r$ can be accommodated in finding the tripartite correlations is not evident even in these simplest examples. Also, the monogamy relation in higher powers of a correlation measure $Q$ will not have the physical meaning of restricted shareability of correlations if $Q^r$ is not established as a proper correlation measure satisfying essential properties such as local unitary invariance. The tripartite correlation measure $\tau_{q}$ should also be established as a valid correlation measure for each $r$~\footnote{For the square of quantum discord, $\tau_q$ is shown to satisfy all properties of a correlation measure in Ref.~\cite{sqd}.}. Without addressing these issues, a mere quantification of tripartite correlations through $\tau_q$ may not yield justifiable results.
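For completeness, the quantum deficit values quoted in this section (and, up to rounding, the entries of Table~I) can be cross-checked directly from Eq.~(\ref{newd}). A minimal Python sketch of such a numerical check, assuming only the \texttt{numpy} library, is given below; the decohered states are obtained, as in Eq.~(\ref{newd}), by retaining the diagonal of the state in the product eigenbasis of its reduced density matrices.
\begin{verbatim}
import numpy as np

def S(vals):
    vals = np.real(np.asarray(vals))
    vals = vals[vals > 1e-12]
    return float(np.sum(vals * np.log(vals)))      # Tr(rho ln rho) from eigenvalues

def deficit(rho, rho1, rho2):
    # D = Tr(rho ln rho) - Tr(rho ln rho^d); rho^d is diagonal in the
    # product eigenbasis of the marginals rho1 and rho2 (cf. Eq. (newd))
    _, U1 = np.linalg.eigh(rho1)
    _, U2 = np.linalg.eigh(rho2)
    U = np.kron(U1, U2)
    P = np.real(np.diag(U.conj().T @ rho @ U))
    return S(np.linalg.eigvalsh(rho)) - S(P)

def marginals(psi):                                # psi: 3-qubit state, shape (2,2,2)
    rho_A  = np.einsum('abc,xbc->ax', psi, psi.conj())
    rho_B  = np.einsum('abc,ayc->by', psi, psi.conj())
    rho_AB = np.einsum('abc,xyc->abxy', psi, psi.conj()).reshape(4, 4)
    rho_BC = np.einsum('abc,ayz->bcyz', psi, psi.conj()).reshape(4, 4)
    return rho_A, rho_B, rho_AB, rho_BC

W = np.zeros(8);    W[[1, 2, 4]]    = 1/np.sqrt(3)   # |001>, |010>, |100>
Wbar = np.zeros(8); Wbar[[3, 5, 6]] = 1/np.sqrt(3)   # |011>, |101>, |110>
for psi in (W, (W + Wbar)/np.sqrt(2)):               # |W>  and  |W Wbar>
    rho_A, rho_B, rho_AB, rho_BC = marginals(psi.reshape(2, 2, 2))
    D_AB  = deficit(rho_AB, rho_A, rho_B)
    D_ABC = deficit(np.outer(psi, psi.conj()), rho_A, rho_BC)
    print(D_AB, D_ABC)               # approx. 0.462, 0.636 and 0.386, 0.451
    print([D_ABC**n - 2*D_AB**n for n in range(1, 6)])   # last column of Table I
\end{verbatim}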
\section{Monogamy-kind-of relation in higher powers of mutual information}
We now go about exploring the meaning associated with monogamy in positive integral powers of a correlation measure. Towards this end, we prove a monogamy-kind-of relation in higher powers of classical mutual information and investigate its consequences. From the strong subadditivity property of Shannon entropy~\cite{nc}, we have
\begin{equation}
\label{subad}
H(X,Y,Z) + H(Y) \leq H(X,Y) + H(Y,Z).
\end{equation}
Casting Eq. (\ref{subad}) in terms of the mutual information~\cite{nc},
\begin{eqnarray*}
H(X:Y)&=&H(X) + H(Y) - H(X,Y), \\
H(Y,Z)&=&H(Y) + H(Z)- H(Y:Z),
\end{eqnarray*}
we obtain
\[
H(X,Y,Z) + H(X:Y) - H(X) \leq H(Y) + H(Z)- H(Y:Z),
\]
and this implies
\begin{equation}
\label{subad3}
H(X:Y)+ H(Y:Z) \leq H(X) + H(Y) + H(Z)-H(X,Y,Z).
\end{equation}
In view of the fact that
\begin{equation}
H(Y:XZ) = H(Y) + H(X,Z) - H(X,Y,Z), \nonumber
\end{equation}
where $H(Y:XZ)$ denotes the mutual information between the random variables $Y$ and $XZ$, we make use of the relation $H(X,Y,Z) = H(Y) + H(X,Z) - H(Y:XZ)$ in Eq. (\ref{subad3}) to obtain
\begin{eqnarray}
H(X:Y) + H(Y:Z) &\leq& H(X) + H(Y) + H(Z) - \left [H(Y)+ H(X,Z) - H(Y:XZ)\right] \nonumber \\
&=& H(X) + H(Z) - H(X,Z) + H(Y:XZ). \nonumber
\end{eqnarray}
As $H(X:Z)=H(X) + H(Z) - H(X,Z)$ and $H(X:Y)=H(Y:X)$, we obtain the relation
\begin{equation}
\label{subadfi}
H(Y:X) + H(Y:Z) \leq H(Y:XZ) + H(X:Z)
\end{equation}
obeyed by the trivariate joint probability distribution $P(X,Y,Z)$ of the random variables $X$, $Y$, $Z$. Notice that the relation
\begin{equation}
\label{mon1}
H(Y:X) + H(Y:Z) \leq H(Y:XZ)
\end{equation}
represents a monogamy relation between the random variables $X$, $Y$ and $Z$. But this inequality is not true due to the existence of the non-negative term $H(X:Z)$ on the right hand side of Eq. (\ref{subadfi}). That is, we have
\begin{equation}
\label{subadfi2}
H(Y:X) + H(Y:Z) \geq H(Y:XZ).
\end{equation}
According to Theorem~1 of Ref.~\cite{salini}, a non-monogamous measure of correlations satisfies the monogamy inequality in higher integral powers when the measure is non-increasing under removal of a subsystem. Thus, in order to prove the monogamy relation in higher powers of mutual information, we need to establish that
\[
H(Y:X) \leq H(Y:XZ),\ \ H(Y:Z) \leq H(Y:XZ).
\]
We show in the following that mutual information is indeed non-increasing under removal of a random variable. On making use of the relations
\begin{eqnarray}
H(X:Y) &=& H(X) + H(Y) - H(X,Y),\nonumber \\
H(Y:XZ) &=& H(Y) + H(X,Z) - H(X,Y,Z) \nonumber
\end{eqnarray}
we have
\begin{eqnarray}
\label{mono2}
H(Y:XZ) - H(X:Y)& =& H(Y) + H(X,Z) - H(X,Y,Z) -\left[H(X) + H(Y) - H(X,Y)\right] \nonumber \\
&=&H(X,Y)+H(X,Z)-H(X)-H(X,Y,Z).
\end{eqnarray}
As $H(Z|XY)=H(X,Y,Z) - H(X,Y)$ and $H(Z|X)=H(X,Z)-H(X)$ denote the respective conditional entropies, Eq. (\ref{mono2}) simplifies to
\begin{equation}
H(Y:XZ)-H(Y:X)=H(Z|X) - H(Z|XY). \nonumber
\end{equation}
Using the fact that conditioning reduces entropy, i.e., $H(Z\vert XY)\leq H(Z\vert X)$, we readily have
\begin{equation}
\label{monofi1}
H(Z|X)-H(Z|XY) \geq 0 \ \mbox{or} \ H(Y:X)\leq H(Y:XZ).
\end{equation}
One can similarly show that
\begin{equation}
\label{monofi2}
H(Y:Z)\leq H(Y:XZ).
\end{equation}
We are now in a position to prove the monogamy relation
\begin{equation}
\label{hi}
H^n(Y:X) + H^n(Y:Z) \leq H^n(Y:XZ) \ \ \mbox{for} \ \ n\geq r,
\end{equation}
based on the proof of Theorem~1 of Ref.~\cite{salini}.
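The entropic relations entering this argument, Eqs.~(\ref{monofi1}), (\ref{monofi2}) and the higher-power relation (\ref{hi}), can also be checked numerically on a concrete trivariate probability distribution. A minimal Python sketch, assuming only \texttt{numpy} and using a randomly generated joint distribution of three binary random variables, is given below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# a generic joint distribution P(X,Y,Z) of three binary random variables
p = rng.random((2, 2, 2))
p /= p.sum()

def H(q):                                  # Shannon entropy in nats
    q = q.ravel()
    return -np.sum(q[q > 0] * np.log(q[q > 0]))

pX, pY, pZ = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))
pXY, pXZ, pYZ = p.sum(2), p.sum(1), p.sum(0)

y = H(pY) + H(pX) - H(pXY)                 # H(Y:X)
z = H(pY) + H(pZ) - H(pYZ)                 # H(Y:Z)
x = H(pY) + H(pXZ) - H(p)                  # H(Y:XZ)

# Eqs. (monofi1), (monofi2): discarding a random variable cannot
# increase the mutual information
assert y <= x + 1e-12 and z <= x + 1e-12

# smallest power n at which relation (hi) holds for this distribution
n = 1
while y**n + z**n > x**n:
    n += 1
print(n, y, z, x)
\end{verbatim}
For a generic distribution one has $y<x$ and $z<x$ strictly, so the loop terminates at a finite power, in line with the limiting argument that follows.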
Here $r>1$ is the lowest integer for which the inequality in (\ref{hi}) is satisfied. Denoting
\begin{equation}
H(Y:XZ) = x, \ \ H(Y:X) = y,\ \ H(Y:Z) = z
\end{equation}
we have $x < y + z$, $x > y > 0$, $x > z > 0$, and hence
\[
0< y/x < 1, \ \ 0 < z/x < 1,
\]
which follow from the non-negativity of mutual information and from Eqs.~(\ref{subadfi2}), (\ref{monofi1}), (\ref{monofi2}). This implies $\lim_{m\rightarrow \infty} (y/x)^{m} = 0$, $\lim_{m\rightarrow \infty} (z/x)^{m} = 0$ and hence, $\forall\ \epsilon > 0$, there exist positive integers $m_1(\epsilon), m_2(\epsilon)$ such that
\begin{eqnarray}
& & (y/x)^{n} < \epsilon \ \ \forall \ \ \ \mbox{positive integers} \ \ n\geq m_1(\epsilon) , \nonumber \\
& & (z/x)^{n} < \epsilon \ \ \forall \ \ \ \mbox{positive integers} \ \ n\geq m_2(\epsilon).
\end{eqnarray}
With a choice of $\epsilon = \epsilon_1 < \frac{1}{2}$, we get $0<(y/x)^{n} < \epsilon_1$ and $0<(z/x)^{n} < \epsilon_1$ for all positive integers $n\geq m(\epsilon_1)$, where $m(\epsilon_1)= \max\{m_1(\epsilon_1),m_2(\epsilon_1)\}$. We thus readily obtain the inequality
\begin{equation}
\label{hi2}
(y/x)^{n}+ (z/x)^{n} < 1 \ \forall \ n\geq m(\epsilon_1),
\end{equation}
which is essentially the monogamy relation Eq.~(\ref{hi}).

Having established the monogamy relation in higher powers of classical mutual information (see Eq.~(\ref{hi})), we now examine its implications. In fact, we are interested in knowing whether the monogamy relation in higher powers of a correlation measure (classical/quantum) reflects restricted shareability of correlations. Towards this end we raise the following questions.
\begin{itemize}
\item[(a)] Does Eq.~(\ref{hi}) imply that the bipartite correlations between $X$, $Y$ and between $Y$, $Z$ are restrictively shared among the random variables $X$, $Y$, $Z$ in the trivariate probability distribution $P(X,\,Y,\,Z)$?
\item[(b)] If a classical correlation measure satisfies the monogamy inequality in its higher powers, does it imply limited shareability of classical correlations in a quantum state?
\item[(c)] What does the monogamy relation satisfied by a higher power of a non-monogamous measure of quantum correlations mean?
\end{itemize}
An affirmative answer to (a) and (b) negates the unrestricted shareability of classical correlations. But it is well known that classical correlations in a multiparty quantum state can be distributed at will amongst its parties. This implies that we need to negate both statements (a) and (b). Now it is not difficult to see that the negation of (a) and (b) immediately provides an answer to (c): \\
\emph{Existence of a monogamy relation in higher powers of any correlation measure (classical or quantum) does not necessarily mean restricted shareability of correlations in a multiparty state}.
\section{Conclusion}
In conclusion, we have illustrated that a monogamy relation satisfied in higher powers of a non-monogamous correlation measure is not useful either to quantify the correlations or to signify that all multiparty states have restricted shareability of correlations. We hope that this work is helpful in clarifying whether or not higher powers of a quantum correlation measure are to be taken up for examining the monogamous nature of quantum states.
\end{document}
\begin{document} \title[On the differentiability of weak solutions] {On the differentiability of weak solutions\\ of an abstract evolution equation\\ with a scalar type spectral operator\\ on the real axis} \author[Marat V. Markin]{Marat V. Markin} \address{ Department of Mathematics\newline California State University, Fresno\newline 5245 N. Backer Avenue, M/S PB 108\newline Fresno, CA 93740-8001, USA } \email{[email protected]} \dedicatory{} \keywords{Weak solution, scalar type spectral operator} \subjclass[2010]{Primary 34G10, 47B40; Secondary 47B15, 47D06, 47D60} \begin{abstract} Given the abstract evolution equation \begin{equation*} y'(t)=Ay(t),\ t\in {\mathbb R}, \end{equation*} with \textit{scalar type spectral operator} $A$ in a complex Banach space, found are conditions \textit{necessary and sufficient} for all \textit{weak solutions} of the equation, which a priori need not be strongly differentiable, to be strongly infinite differentiable on ${\mathbb R}$. The important case of the equation with a \textit{normal operator} $A$ in a complex Hilbert space is obtained immediately as a particular case. Also, proved is the following inherent smoothness improvement effect explaining why the case of the strong finite differentiability of the weak solutions is superfluous: if every weak solution of the equation is strongly differentiable at $0$, then all of them are strongly infinite differentiable on ${\mathbb R}$. \end{abstract} \maketitle \epigraph{\textit{Curiosity is the lust of the mind.}}{Thomas Hobbes} \section[Introduction]{Introduction} We find conditions on a \textit{scalar type spectral} operator $A$ in a complex Banach space necessary and sufficient for all \textit{weak solutions} of the evolution equation \begin{equation}\label{1} y'(t)=Ay(t),\ t\in {\mathbb R}, \end{equation} which a priori need not be strongly differentiable, to be strongly infinite differentiable on ${\mathbb R}$. The important case of the equation with a \textit{normal operator} $A$ in a complex Hilbert space is obtained immediately as a particular case. We also prove the following inherent smoothness improvement effect explaining why the case of the strong finite differentiability of the weak solutions is superfluous: if every weak solution of the equation is strongly differentiable at $0$, then all of them are strongly infinite differentiable on ${\mathbb R}$. The found results develop those of paper \cite{Markin2011}, where similar consideration is given to the strong differentiability of the weak solutions of the equation \begin{equation}\label{+} y'(t)=Ay(t),\ t\ge 0, \end{equation} on $[0,\infty)$ and $(0,\infty)$. \begin{defn}[Weak Solution]\label{ws}\ \\ Let $A$ be a densely defined closed linear operator in a Banach space $X$ and $I$ be an interval of the real axis ${\mathbb R}$. A strongly continuous vector function $y:I\rightarrow X$ is called a {\it weak solution} of the evolution equation \begin{equation}\label{2} y'(t)=Ay(t),\ t\in I, \end{equation} if, for any $g^* \in D(A^*)$, \begin{equation*} \dfrac{d}{dt}\langle y(t),g^*\rangle = \langle y(t),A^*g^* \rangle,\ t\in I, \end{equation*} where $D(\cdot)$ is the \textit{domain} of an operator, $A^*$ is the operator {\it adjoint} to $A$, and $\langle\cdot,\cdot\rangle$ is the {\it pairing} between the space $X$ and its dual $X^*$ (cf. \cite{Ball}). 
\end{defn} \begin{rems}\label{remsws}\ \begin{itemize} \item Due to the \textit{closedness} of $A$, a weak solution of equation \eqref{2} can be equivalently defined to be a strongly continuous vector function $y:I\mapsto X$ such that, for all $t\in I$, \begin{equation*} \int_{t_0}^ty(s)\,ds\in D(A)\ \text{and} \ y(t)=y(t_0)+A\int_{t_0}^ty(s)\,ds, \end{equation*} where $t_0$ is an arbitrary fixed point of the interval $I$, and is also called a \textit{mild solution} (cf. {\cite[Ch. II, Definition 6.3]{Engel-Nagel}}, see also {\cite[Preliminaries]{Markin2018(2)}}). \item Such a notion of \textit{weak solution}, which need not be differentiable in the strong sense, generalizes that of \textit{classical} one, strongly differentiable on $I$ and satisfying the equation in the traditional plug-in sense, the classical solutions being precisely the weak ones strongly differentiable on $I$. \item As is easily seen $y:{\mathbb R}\to X$ is a weak solution of equation \eqref{1} \textit{iff} \[ y_+(t):=y(t),\ t\ge 0, \] is a weak solution of equation \eqref{+} and \[ y_-(t):=y(-t),\ t\ge 0, \] is a weak solution of the equation \begin{equation}\label{-} y'(t)=-Ay(t),\ t\ge 0. \end{equation} \item When a closed densely defined linear operator $A$ in a complex Banach space $X$ generates a strongly continuous group $\left\{T(t) \right\}_{t\in {\mathbb R}}$ of bounded linear operators (see, e.g., \cite{Hille-Phillips,Engel-Nagel}), i.e., the associated \textit{abstract Cauchy problem} (\textit{ACP}) \begin{equation}\label{ACP} \begin{cases} y'(t)=Ay(t),\ t\in {\mathbb R},\\ y(0)=f \end{cases} \end{equation} is \textit{well-posed} (cf. {\cite[Ch. II, Definition 6.8]{Engel-Nagel}}), the weak solutions of equation \eqref{1} are the orbits \begin{equation}\label{group} y(t)=T(t)f,\ t\in {\mathbb R}, \end{equation} with $f\in X$ (cf. {\cite[Ch. II, Proposition 6.4]{Engel-Nagel}}, see also {\cite[Theorem]{Ball}}), whereas the classical ones are those with $f\in D(A)$ (see, e.g., {\cite[Ch. II, Proposition 6.3]{Engel-Nagel}}). \item In our discourse, the associated \textit{ACP} may be \textit{ill-posed}, i.e., the scalar type spectral operator $A$ need not generate a strongly continuous group of bounded linear operators (cf. \cite{Markin2002(2)}). \end{itemize} \end{rems} \section[Preliminaries]{Preliminaries} Here, for the reader's convenience, we outline certain essential preliminaries. Henceforth, unless specified otherwise, $A$ is supposed to be a {\it scalar type spectral operator} in a complex Banach space $(X,\|\cdot\|)$ with strongly $\sigma$-additive \textit{spectral measure} (the \textit{resolution of the identity}) $E_A(\cdot)$ assigning to each Borel set $\delta$ of the complex plane ${\mathbb C}$ a projection operator $E_A(\delta)$ on $X$ and having the operator's \textit{spectrum} $\sigma(A)$ as its {\it support} \cite{Survey58,Dun-SchIII}. Observe that, in a complex finite-dimensional space, the scalar type spectral operators are all linear operators on the space, for which there is an \textit{eigenbasis} (see, e.g., \cite{Survey58,Dun-SchIII}) and, in a complex Hilbert space, the scalar type spectral operators are precisely all those that are similar to the {\it normal} ones \cite{Wermer}. 
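For orientation, we record perhaps the simplest infinite-dimensional example (a standard one, included here only as an illustration; it is not used in the sequel). On $X=\ell_p$, $1\le p<\infty$, fix a sequence $\left\{\mu_n\right\}_{n=1}^\infty\subset {\mathbb C}$ and consider the diagonal operator
\begin{equation*}
Ax:=(\mu_1x_1,\mu_2x_2,\dots),\quad D(A):=\left\{x\in \ell_p\,\middle|\,(\mu_nx_n)_{n=1}^\infty\in \ell_p\right\}.
\end{equation*}
Then $A$ is a scalar type spectral operator whose spectral measure assigns to a Borel set $\delta\subseteq {\mathbb C}$ the coordinate projection
\begin{equation*}
E_A(\delta)x:=(\chi_\delta(\mu_1)x_1,\chi_\delta(\mu_2)x_2,\dots),
\end{equation*}
so that $\|E_A(\delta)\|\le 1$ and $\sigma(A)=\overline{\left\{\mu_n\,\middle|\,n\in{\mathbb N}\right\}}$, and the Borel operational calculus recalled below acts coordinatewise: $F(A)x=(F(\mu_1)x_1,F(\mu_2)x_2,\dots)$ on its natural domain.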
Associated with a scalar type spectral operator in a complex Banach space is the {\it Borel operational calculus} analogous to that for a \textit{normal operator} in a complex Hilbert space \cite{Survey58,Dun-SchII,Dun-SchIII,Plesner}, which assigns to any Borel measurable function $F:\sigma(A)\to {\mathbb C}$ a scalar type spectral operator \begin{equation*} F(A):=\int\limits_{\sigma(A)} F(\lambda)\,dE_A(\lambda) \end{equation*} (see \cite{Survey58,Dun-SchIII}). In particular, \begin{equation}\label{power} A^n=\int\limits_{\sigma(A)} \lambda^n\,dE_A(\lambda),\ n\in{\mathbb Z}_+, \end{equation} (${\mathbb Z}_+:=\left\{0,1,2,\dots\right\}$ is the set of \textit{nonnegative integers}, $A^0:=I$, $I$ is the \textit{identity operator} on $X$) and \begin{equation}\label{exp} e^{zA}:=\int\limits_{\sigma(A)} e^{z\lambda}\,dE_A(\lambda),\ z\in{\mathbb C}. \end{equation} The properties of the {\it spectral measure} and {\it operational calculus}, exhaustively delineated in \cite{Survey58,Dun-SchIII}, underlie the entire subsequent discourse. Here, we underline a few facts of particular importance. Due to its {\it strong countable additivity}, the spectral measure $E_A(\cdot)$ is {\it bounded} \cite{Dun-SchI,Dun-SchIII}, i.e., there is such an $M>0$ that, for any Borel set $\delta\subseteq {\mathbb C}$, \begin{equation}\label{bounded} \|E_A(\delta)\|\le M. \end{equation} Observe that the notation $\|\cdot\|$ is used here to designate the norm in the space $L(X)$ of all bounded linear operators on $X$. We adhere to this rather conventional economy of symbols in what follows also adopting the same notation for the norm in the dual space $X^*$. For any $f\in X$ and $g^*\in X^*$, the \textit{total variation measure} $v(f,g^*,\cdot)$ of the complex-valued Borel measure $\langle E_A(\cdot)f,g^* \rangle$ is a {\it finite} positive Borel measure with \begin{equation}\label{tv} v(f,g^*,{\mathbb C})=v(f,g^*,\sigma(A))\le 4M\|f\|\|g^*\| \end{equation} (see, e.g., \cite{Markin2004(1),Markin2004(2)}). Also (Ibid.), for a Borel measurable function $F:{\mathbb C}\to {\mathbb C}$, $f\in D(F(A))$, $g^*\in X^*$, and a Borel set $\delta\subseteq {\mathbb C}$, \begin{equation}\label{cond(ii)} \int\limits_\delta|F(\lambda)|\,dv(f,g^*,\lambda) \le 4M\|E_A(\delta)F(A)f\|\|g^*\|. \end{equation} In particular, for $\delta=\sigma(A)$, $E_A(\sigma(A))=I$ and \begin{equation}\label{cond(i)} \int\limits_{\sigma(A)}|F(\lambda)|\,d v(f,g^*,\lambda)\le 4M\|F(A)f\|\|g^*\|. \end{equation} Observe that the constant $M>0$ in \eqref{tv}--\eqref{cond(i)} is from \eqref{bounded}. Further, for a Borel measurable function $F:{\mathbb C}\to [0,\infty)$, a Borel set $\delta\subseteq {\mathbb C}$, a sequence $\left\{\Delta_n\right\}_{n=1}^\infty$ of pairwise disjoint Borel sets in ${\mathbb C}$, and $f\in X$, $g^*\in X^*$, \begin{equation}\label{decompose} \int\limits_{\delta}F(\lambda)\,dv(E_A(\cup_{n=1}^\infty \Delta_n)f,g^*,\lambda) =\sum_{n=1}^\infty \int\limits_{\delta\cap\Delta_n}F(\lambda)\,dv(E_A(\Delta_n)f,g^*,\lambda). \end{equation} Indeed, since, for any Borel sets $\delta,\sigma\subseteq {\mathbb C}$, \begin{equation*} E_A(\delta)E_A(\sigma)=E_A(\delta\cap\sigma) \end{equation*} \cite{Survey58,Dun-SchIII}, for the total variation measure, \begin{equation*} v(E_A(\delta)f,g^*,\sigma)=v(f,g^*,\delta\cap\sigma). 
\end{equation*} Whence, due to the {\it nonnegativity} of $F(\cdot)$ (see, e.g., \cite{Halmos}), \begin{multline*} \int\limits_\delta F(\lambda)\,dv(E_A(\cup_{n=1}^\infty \Delta_n)f,g^*,\lambda) =\int\limits_{\delta\cap\cup_{n=1}^\infty \Delta_n}F(\lambda)\,dv(f,g^*,\lambda) \\ \ \ =\sum_{n=1}^\infty \int\limits_{\delta\cap\Delta_n}F(\lambda)\,dv(f,g^*,\lambda) =\sum_{n=1}^\infty \int\limits_{\delta\cap\Delta_n}F(\lambda)\,dv(E_A(\Delta_n)f,g^*,\lambda). \end{multline*} The following statement, allowing to characterize the domains of Borel measurable functions of a scalar type spectral operator in terms of positive Borel measures, is fundamental for our discourse. \begin{prop}[{\cite[Proposition $3.1$]{Markin2002(1)}}]\label{prop}\ \\ Let $A$ be a scalar type spectral operator in a complex Banach space $(X,\|\cdot\|)$ with spectral measure $E_A(\cdot)$ and $F:\sigma(A)\to {\mathbb C}$ be a Borel measurable function. Then $f\in D(F(A))$ iff \begin{enumerate} \item[(i)] for each $g^*\in X^*$, $\displaystyle \int\limits_{\sigma(A)} |F(\lambda)|\,d v(f,g^*,\lambda)<\infty$ and \item[(ii)] $\displaystyle \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\{\lambda\in\sigma(A)\,|\,|F(\lambda)|>n\}} |F(\lambda)|\,dv(f,g^*,\lambda)\to 0,\ n\to\infty$, \end{enumerate} where $v(f,g^*,\cdot)$ is the total variation measure of $\langle E_A(\cdot)f,g^* \rangle$. \end{prop} The succeeding key theorem provides a description of the weak solutions of equation \eqref{+} with a scalar type spectral operator $A$ in a complex Banach space. \begin{thm}[{\cite[Theorem $4.2$]{Markin2002(1)}} with $T=\infty$]\label{GWS+}\ \\ Let $A$ be a scalar type spectral operator in a complex Banach space $(X,\|\cdot\|)$. A vector function $y:[0,\infty) \to X$ is a weak solution of equation \eqref{+} iff there is an $\displaystyle f \in \bigcap_{t\ge 0}D(e^{tA})$ such that \begin{equation*} y(t)=e^{tA}f,\ t\ge 0, \end{equation*} the operator exponentials understood in the sense of the Borel operational calculus (see \eqref{exp}). \end{thm} \begin{rem} Theorem \ref{GWS+} generalizes {\cite[Theorem $3.1$]{Markin1999}}, its counterpart for a normal operator $A$ in a complex Hilbert space. \end{rem} We also need the following characterizations of a particular weak solution's of equation \eqref{+} with a scalar type spectral operator $A$ in a complex Banach space being strongly differentiable on a subinterval $I$ of $[0,\infty)$. \begin{prop}[{\cite[Proposition $3.1$]{Markin2011}} with $T=\infty$]\label{Prop}\ \\ Let $n\in{\mathbb N}$ and $I$ be a subinterval of $[0,\infty)$. A weak solution $y(\cdot)$ of equation \eqref{+} is $n$ times strongly differentiable on $I$ iff \begin{equation*} y(t) \in D(A^n),\ t\in I, \end{equation*} in which case, \begin{equation*} y^{(k)}(t)=A^ky(t),\ k=1,\dots,n,t\in I. \end{equation*} \end{prop} Subsequently, the frequent terms {\it ``spectral measure"} and {\it ``operational calculus"} are abbreviated to {\it s.m.} and {\it o.c.}, respectively. \section{General Weak Solution} \begin{thm}[General Weak Solution]\label{GWS}\ \\ Let $A$ be a scalar type spectral operator in a complex Banach space $(X,\|\cdot\|)$. A vector function $y:{\mathbb R} \to X$ is a weak solution of equation \eqref{1} iff there is an $\displaystyle f \in \bigcap_{t\in {\mathbb R}}D(e^{tA})$ such that \begin{equation}\label{expf} y(t)=e^{tA}f,\ t\in {\mathbb R}, \end{equation} the operator exponentials understood in the sense of the Borel operational calculus (see \eqref{exp}). 
\end{thm} \begin{proof}\quad As is noted in the Introduction, $y:{\mathbb R}\to X$ is a weak solution of \eqref{1} \textit{iff} \[ y_+(t):=y(t),\ t\ge 0, \] is a weak solution of equation \eqref{+} and \[ y_-(t):=y(-t),\ t\ge 0, \] is a weak solution of equation \eqref{-}. Applying Theorem \ref{GWS+} to $y_+(\cdot)$ and, with $A$ replaced by $-A$, to $y_-(\cdot)$, we infer that this is equivalent to the fact that \[ y(t)=e^{tA}f,\ t\in{\mathbb R},\ \text{with some}\ f \in \bigcap_{t\in {\mathbb R}}D(e^{tA}). \] \end{proof} \begin{rems}\ \begin{itemize} \item More generally, Theorem \ref{GWS+} and its proof can be easily modified to describe in the same manner all weak solutions of equation \eqref{2} for an arbitrary interval $I$ of the real axis ${\mathbb R}$. \item Theorem \ref{GWS} implies, in particular, \begin{itemize} \item that the subspace $\bigcap_{t\in{\mathbb R}}D(e^{tA})$ of all possible initial values of the weak solutions of equation \eqref{1} is the largest permissible for the exponential form given by \eqref{expf}, which highlights the naturalness of the notion of weak solution, and \item that the associated \textit{ACP} \eqref{ACP}, whenever solvable, is solvable \textit{uniquely}. \end{itemize} \item Observe that the initial-value subspace $\bigcap_{t\in{\mathbb R}}D(e^{tA})$ of equation \eqref{1} is \textit{dense} in $X$, since it contains the dense subspace $\bigcup_{\alpha>0}E_A(\Delta_\alpha)X$, where \begin{equation*} \Delta_\alpha:=\left\{\lambda\in{\mathbb C}\,\middle|\,|\lambda|\le \alpha \right\},\ \alpha>0, \end{equation*} which coincides with the class ${\mathscr E}^{\{0\}}(A)$ of \textit{entire} vectors of $A$ of \textit{exponential type} \cite{Markin2015}. \item When a scalar type spectral operator $A$ in a complex Banach space generates a strongly continuous group $\left\{T(t) \right\}_{t\in {\mathbb R}}$ of bounded linear operators, \[ T(t)=e^{tA}\ \text{and}\ D(e^{tA})=X,\ t\in {\mathbb R}, \] \cite{Markin2002(2)}, and hence, Theorem \ref{GWS} is consistent with the well-known description of the weak solutions for this setup (see \eqref{group}). \item Clearly, the initial-value subspace $\bigcap_{t\in{\mathbb R}}D(e^{tA})$ of equation \eqref{1} is narrower than the initial-value subspace $\bigcap_{t\ge 0}D(e^{tA})$ of equation \eqref{+} and the initial-value subspace $\bigcap_{t\ge 0}D(e^{t(-A)})=\bigcap_{t\le 0}D(e^{tA})$ of equation \eqref{-}; in fact, it is the intersection of the latter two. \end{itemize} \end{rems} \section{Differentiability of a Particular Weak Solution} Here, we characterize the strong differentiability on a subinterval $I$ of ${\mathbb R}$ of a particular weak solution of equation \eqref{1} with a scalar type spectral operator $A$ in a complex Banach space. \begin{prop}[Differentiability of a Particular Weak Solution]\label{particular}\ \\ Let $n\in{\mathbb N}$ and $I$ be a subinterval of ${\mathbb R}$. A weak solution $y(\cdot)$ of equation \eqref{1} is $n$ times strongly differentiable on $I$ iff \begin{equation*} y(t) \in D(A^n),\ t\in I, \end{equation*} in which case, \begin{equation*} y^{(k)}(t)=A^ky(t),\ k=1,\dots,n,\ t\in I. \end{equation*} \end{prop} \begin{proof}\quad The statement immediately follows from the prior theorem and Proposition \ref{Prop} applied to \[ y_+(t):=y(t)\ \text{and}\ y_-(t):=y(-t),\ t\ge 0, \] for an arbitrary weak solution $y(\cdot)$ of equation \eqref{1}. \end{proof} \begin{rem} Observe that, as with Proposition \ref{Prop}, for $n=1$ the subinterval $I$ can degenerate into a singleton. 
\end{rem} Inductively, we immediately obtain the following analog of {\cite[Corollary $3.2$]{Markin2011}}: \begin{cor}[Infinite Differentiability of a Particular Weak Solution]\label{Cor}\ \\ Let $A$ be a scalar type spectral operator in a complex Banach space $(X,\|\cdot\|)$ and $I$ be a subinterval of ${\mathbb R}$. A weak solution $y(\cdot)$ of equation \eqref{1} is strongly infinite differentiable on $I$ ($y(\cdot)\in C^\infty(I,X)$) iff, for each $t\in I$, \begin{equation*} y(t) \in C^\infty(A):=\bigcap_{n=1}^{\infty}D(A^n), \end{equation*} in which case \begin{equation*} y^{(n)}(t)=A^ny(t),\ n\in {\mathbb N},t\in I. \end{equation*} \end{cor} \section{Infinite Differentiability of Weak Solutions} In this section, we characterize the strong infinite differentiability on ${\mathbb R}$ of all weak solutions of equation \eqref{1} with a scalar type spectral operator $A$ in a complex Banach space. \begin{thm}[Infinite Differentiability of Weak Solutions]\label{real}\ \\ Let $A$ be a scalar type spectral operator in a complex Banach space $(X,\|\cdot\|)$ with spectral measure $E_A(\cdot)$. Every weak solution of equation \eqref{1} is strongly infinite differentiable on ${\mathbb R}$ iff there exist $b_+>0$ and $ b_->0$ such that the set $\sigma(A)\setminus {\mathscr L}_{b_-,b_+}$, where \begin{equation*} {\mathscr L}_{b_-,b_+}:=\left\{\lambda \in {\mathbb C}\, \middle|\, \Rep\lambda \le \min\left(0,-b_-\ln|\Imp\lambda|\right) \ \text{or}\ \Rep\lambda \ge \max\left(0,b_+\ln|\Imp\lambda|\right)\right\}, \end{equation*} is bounded (see Fig. \ref{fig:graph2}). \begin{figure}\label{fig:graph2} \end{figure} \end{thm} \begin{proof} \textit{``If"} part.\quad Suppose that there exist $b_+>0$ and $ b_->0$ such that the set $\sigma(A)\setminus {\mathscr L}_{b_-,b_+}$ is \textit{bounded} and let $y(\cdot)$ be an arbitrary weak solution of equation \eqref{1}. By Theorem \ref{GWS}, \begin{equation*} y(t)=e^{tA}f,\ t\in {\mathbb R},\ \text{with some}\ f \in \bigcap_{t\in {\mathbb R}}D(e^{tA}). \end{equation*} Our purpose is to show that $y(\cdot)\in C^\infty\left({\mathbb R},X\right)$, which, by Corollary \ref{Cor}, is attained by showing that, for each $t\in{\mathbb R}$, \[ y(t)\in C^\infty(A):=\bigcap_{n=1}^\infty D(A^n). \] Let us proceed by proving that, for any $t\in{\mathbb R}$ and $m\in{\mathbb N}$ \[ y(t)\in D(A^m) \] via Proposition \ref{prop}. For any $t\in{\mathbb R}$, $m\in{\mathbb N}$ and an arbitrary $g^*\in X^*$, \begin{multline}\label{first} \int\limits_{\sigma(A)}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) =\int\limits_{\sigma(A)\setminus{\mathscr L}_{b_-,b_+}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \shoveleft{ +\int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,-1<\Rep\lambda<1 \right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \shoveleft{ +\int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1 \right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \hspace{1.2cm} +\int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1 \right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda)<\infty. 
\end{multline} Indeed, \[ \int\limits_{\sigma(A)\setminus{\mathscr L}_{b_-,b_+}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda)<\infty \] and \[ \int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,-1<\Rep\lambda<1 \right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda)<\infty \] due to the boundedness of the sets \[ \sigma(A)\setminus{\mathscr L}_{b_-,b_+}\ \text{and}\ \left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\;\middle|\;-1<\Rep\lambda<1 \right\}, \] the continuity of the integrated function on ${\mathbb C}$, and the finiteness of the measure $v(f,g^*,\cdot)$. Further, for any $t\in{\mathbb R}$, $m\in{\mathbb N}$ and an arbitrary $g^*\in X^*$, \begin{multline}\label{interm} \int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1 \right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \shoveleft{ \le\int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1 \right\}}{\left[|\Rep\lambda|+|\Imp\lambda|\right]}^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \text{since, for $\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}$ with $\Rep\lambda\ge 1$, $e^{b_+^{-1}\Rep\lambda}\ge |\Imp\lambda|$;} \\ \shoveleft{ \le \int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1 \right\}}{\left[\Rep\lambda+e^{b_+^{-1}\Rep\lambda}\right]}^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \text{since, in view of $\Rep\lambda\ge 1$, $b_+e^{b_+^{-1}\Rep\lambda}\ge\Rep\lambda$;} \\ \shoveleft{ \le \int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1 \right\}}{\left[b_+e^{b_+^{-1}\Rep\lambda}+e^{b_+^{-1}\Rep\lambda}\right]}^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \shoveleft{ = [b_++1]^m\int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1 \right\}}e^{[mb_+^{-1}+t]\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \text{since $f\in \bigcap\limits_{t\in{\mathbb R}}D(e^{tA})$, by Proposition \ref{prop};} \\ \hspace{1.2cm} <\infty. \end{multline} Finally, for any $t\in{\mathbb R}$, $m\in{\mathbb N}$ and an arbitrary $g^*\in X^*$, \begin{multline}\label{interm2} \int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1 \right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \shoveleft{ \le\int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1 \right\}}{\left[|\Rep\lambda|+|\Imp\lambda|\right]}^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \text{since, for $\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}$ with $\Rep\lambda\le -1$, $e^{b_-^{-1}(-\Rep\lambda)}\ge |\Imp\lambda|$;} \\ \shoveleft{ \le \int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1 \right\}}{\left[-\Rep\lambda+e^{b_-^{-1}(-\Rep\lambda)}\right]}^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \text{since, in view of $-\Rep\lambda\ge 1$, $b_-e^{b_-^{-1}(-\Rep\lambda)}\ge-\Rep\lambda$;} \\ \shoveleft{ \le \int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1 \right\}}{\left[b_-e^{b_-^{-1}(-\Rep\lambda)}+e^{b_-^{-1}(-\Rep\lambda)}\right]}^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \shoveleft{ = [b_-+1]^m\int\limits_{\left\{\lambda\in \sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1 \right\}}e^{[t-mb_-^{-1}]\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \text{since $f\in \bigcap\limits_{t\in{\mathbb R}}D(e^{tA})$, by Proposition \ref{prop};} \\ \hspace{1.2cm} <\infty. 
\end{multline} Also, for any $t\in{\mathbb R}$, $m\in{\mathbb N}$ and an arbitrary $n\in{\mathbb N}$, \begin{multline}\label{second} \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,|\lambda|^me^{t\Rep\lambda}>n\right\}} |\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \shoveleft{ \le \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A) \setminus{\mathscr L}_{b_-,b_+}\,\middle|\,|\lambda|^me^{t\Rep\lambda}>n\right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \shoveleft{ + \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,-1<\Rep\lambda<1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \shoveleft{ + \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \shoveleft{ + \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \hspace{1.2cm} \to 0,\ n\to\infty. \end{multline} Indeed, since, due to the boundedness of the sets \[ \sigma(A)\setminus{\mathscr L}_{b_-,b_+}\ \text{and}\ \left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,-1<\Rep\lambda<1\right\} \] and the continuity of the integrated function on ${\mathbb C}$, the sets \[ \left\{\lambda\in\sigma(A)\setminus{\mathscr L}_{b_-,b_+}\,\middle|\,|\lambda|^me^{t\Rep\lambda}>n\right\} \] and \[ \left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,-1<\Rep\lambda<1,\, |\lambda|^me^{t\Rep\lambda}>n\right\} \] are \textit{empty} for all sufficiently large $n\in {\mathbb N}$, we immediately infer that, for any $t\in{\mathbb R}$ and $m\in{\mathbb N}$, \[ \lim_{n\to\infty}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A) \setminus{\mathscr L}_{b_-,b_+}\,\middle|\,|\lambda|^me^{t\Rep\lambda}>n\right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda)=0 \] and \[ \lim_{n\to\infty}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,-1<\Rep\lambda<1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) =0. 
\] Further, for any $t\in{\mathbb R}$, $m\in{\mathbb N}$ and an arbitrary $n\in{\mathbb N}$, \begin{multline*} \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \text{as in \eqref{interm};} \\ \shoveleft{ \le \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} [b_++1]^m\int\limits_{\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}}e^{[mb_+^{-1}+t]\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \text{since $f\in \bigcap\limits_{t\in{\mathbb R}}D(e^{tA})$, by \eqref{cond(ii)};} \\ \shoveleft{ \le \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} }\\ \shoveleft{ [b_++1]^m4M\left\|E_A\left(\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}\right) e^{[mb_+^{-1}+t]A}f\right\|\|g^*\| }\\ \shoveleft{ \le [b_++1]^m4M\left\|E_A\left(\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\ge 1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}\right) e^{[mb_+^{-1}+t]A}f\right\| }\\ \text{by the strong continuity of the {\it s.m.};} \\ \ \ \to [b_++1]^m4M\left\|E_A\left(\emptyset\right)e^{[mb_+^{-1}+t]A}f\right\|=0,\ n\to\infty. \end{multline*} Finally, for any $t\in{\mathbb R}$, $m\in{\mathbb N}$ and an arbitrary $n\in{\mathbb N}$, \begin{multline*} \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}}|\lambda|^me^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \text{as in \eqref{interm2};} \\ \shoveleft{ \le \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} [b_-+1]^m\int\limits_{\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}}e^{[t-mb_-^{-1}]\Rep\lambda}\,dv(f,g^*,\lambda) }\\ \text{since $f\in \bigcap\limits_{t\in{\mathbb R}}D(e^{tA})$, by \eqref{cond(ii)};} \\ \shoveleft{ \le \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} }\\ \shoveleft{ [b_-+1]^m4M\left\|E_A\left(\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}\right) e^{[t-mb_-^{-1}]A}f\right\|\|g^*\| }\\ \shoveleft{ \le [b_-+1]^m4M\left\|E_A\left(\left\{\lambda\in\sigma(A)\cap{\mathscr L}_{b_-,b_+}\,\middle|\,\Rep\lambda\le -1,\, |\lambda|^me^{t\Rep\lambda}>n\right\}\right) e^{[t-mb_-^{-1}]A}f\right\| }\\ \text{by the strong continuity of the {\it s.m.};} \\ \ \ \to [b_++1]^m4M\left\|E_A\left(\emptyset\right)e^{[t-mb_-^{-1}]A}f\right\|=0,\ n\to\infty. \end{multline*} By Proposition \ref{prop} and the properties of the \textit{o.c.} (see {\cite[Theorem XVIII.$2.11$ (f)]{Dun-SchIII}}), \eqref{first} and \eqref{second} jointly imply that, for any $t\in{\mathbb R}$ and $m\in{\mathbb N}$, \[ f\in D(A^me^{tA}), \] which further implies that, for each $t\in{\mathbb R}$, \begin{equation*} y(t)=e^{tA}f\in \bigcap_{n=1}^\infty D(A^n) =:C^\infty(A). \end{equation*} Whence, by Corollary \ref{Cor}, we infer that \begin{equation*} y(\cdot) \in C^\infty\left({\mathbb R},X\right), \end{equation*} which completes the proof of the \textit{``if"} part. \textit{``Only if"} part.\quad Let us prove this part {\it by contrapositive} assuming that, for any $b_+>0$ and $b_->0$, the set $\sigma(A)\setminus {\mathscr L}_{b_-,b_+}$ is \textit{unbounded}. 
In particular, this means that, for any $n\in {\mathbb N}$, unbounded is the set \begin{equation*} \sigma(A)\setminus {\mathscr L}_{(2n)^{-1},(2n)^{-1}}= \left\{\lambda \in \sigma(A)\,\middle| -(2n)^{-1}\ln|\Imp\lambda|<\Rep\lambda < (2n)^{-1}\ln|\Imp\lambda|\right\}. \end{equation*} Hence, we can choose a sequence of points $\left\{\lambda_n\right\}_{n=1}^\infty$ in the complex plane as follows: \begin{equation*} \begin{split} &\lambda_n \in \sigma(A),\ n\in {\mathbb N},\\ &-(2n)^{-1}\ln|\Imp\lambda_n|<\Rep\lambda_n < (2n)^{-1}\ln|\Imp\lambda_n|,\ n\in {\mathbb N},\\ &\lambda_0:=0,\ |\lambda_n|>\max\left[n^4,|\lambda_{n-1}|\right],\ n\in {\mathbb N}.\\ \end{split} \end{equation*} The latter implies, in particular, that the points $\lambda_n$, $n\in{\mathbb N}$, are \textit{distinct} ($\lambda_i \neq \lambda_j$, $i\neq j$). Since, for each $n\in {\mathbb N}$, the set \begin{equation*} \left\{ \lambda \in {\mathbb C}\,\middle|\, -(2n)^{-1}\ln|\Imp\lambda|<\Rep\lambda < (2n)^{-1}\ln|\Imp\lambda|,\ |\lambda|>\max\bigl[n^4,|\lambda_{n-1}|\bigr]\right\} \end{equation*} is {\it open} in ${\mathbb C}$, along with the point $\lambda_n$, it contains an {\it open disk} \begin{equation*} \Delta_n:=\left\{\lambda \in {\mathbb C}\, \middle|\,|\lambda-\lambda_n|<\varepsilon_n \right\} \end{equation*} centered at $\lambda_n$ of some radius $\varepsilon_n>0$, i.e., for each $\lambda \in \Delta_n$, \begin{equation}\label{disks1} -(2n)^{-1}\ln|\Imp\lambda|<\Rep\lambda < (2n)^{-1}\ln|\Imp\lambda|\ \text{and}\ |\lambda|>\max\bigl[n^4,|\lambda_{n-1}|\bigr]. \end{equation} Furthermore, we can regard the radii of the disks to be small enough so that \begin{equation}\label{radii1} \begin{split} &0<\varepsilon_n<\dfrac{1}{n},\ n\in{\mathbb N},\ \text{and}\\ &\Delta_i \cap \Delta_j=\emptyset,\ i\neq j \quad \text{(i.e., the disks are {\it pairwise disjoint})}. \end{split} \end{equation} Whence, by the properties of the {\it s.m.}, \begin{equation*} E_A(\Delta_i)E_A(\Delta_j)=0,\ i\neq j, \end{equation*} where $0$ stands for the \textit{zero operator} on $X$. Observe also, that the subspaces $E_A(\Delta_n)X$, $n\in {\mathbb N}$, are \textit{nontrivial} since \[ \Delta_n \cap \sigma(A)\neq \emptyset,\ n\in{\mathbb N}, \] with $\Delta_n$ being an \textit{open set} in ${\mathbb C}$. By choosing a unit vector $e_n\in E_A(\Delta_n)X$ for each $n\in{\mathbb N}$, we obtain a sequence $\left\{e_n\right\}_{n=1}^\infty$ in $X$ such that \begin{equation}\label{ortho1} \|e_n\|=1,\ n\in{\mathbb N},\ \text{and}\ E_A(\Delta_i)e_j=\delta_{ij}e_j,\ i,j\in{\mathbb N}, \end{equation} where $\delta_{ij}$ is the \textit{Kronecker delta}. As is easily seen, \eqref{ortho1} implies that the vectors $e_n$, $n\in {\mathbb N}$, are \textit{linearly independent}. Furthermore, there is an $\varepsilon>0$ such that \begin{equation}\label{dist1} d_n:=\dist\left(e_n,\spa\left(\left\{e_i\,|\,i\in{\mathbb N},\ i\neq n\right\}\right)\right)\ge\varepsilon,\ n\in{\mathbb N}. \end{equation} Indeed, the opposite implies the existence of a subsequence $\left\{d_{n(k)}\right\}_{k=1}^\infty$ such that \begin{equation*} d_{n(k)}\to 0,\ k\to\infty. 
\end{equation*} Then, by selecting a vector \[ f_{n(k)}\in \spa\left(\left\{e_i\,|\,i\in{\mathbb N},\ i\neq n(k)\right\}\right),\ k\in{\mathbb N}, \] such that \[ \|e_{n(k)}-f_{n(k)}\|<d_{n(k)}+1/k,\ k\in{\mathbb N}, \] we arrive at \begin{multline*} 1=\|e_{n(k)}\| \text{since, by \eqref{ortho1}, $E_A(\Delta_{n(k)})f_{n(k)}=0$;} \\ \shoveleft{ =\|E_A(\Delta_{n(k)})(e_{n(k)}-f_{n(k)})\|\ \le \|E_A(\Delta_{n(k)})\|\|e_{n(k)}-f_{n(k)}\| \text{by \eqref{bounded};} }\\ \ \ \le M\|e_{n(k)}-f_{n(k)}\|\le M\left[d_{n(k)}+1/k\right] \to 0,\ k\to\infty, \end{multline*} which is a \textit{contradiction} proving \eqref{dist1}. As follows from the {\it Hahn-Banach Theorem}, for any $n\in{\mathbb N}$, there is an $e^*_n\in X^*$ such that \begin{equation}\label{H-B1} \|e_n^*\|=1,\ n\in{\mathbb N},\ \text{and}\ \langle e_i,e_j^*\rangle=\delta_{ij}d_i,\ i,j\in{\mathbb N}. \end{equation} Let us consider separately the two possibilities concerning the sequence of the real parts $\{\Rep\lambda_n\}_{n=1}^\infty$: its being \textit{bounded} or \textit{unbounded}. First, suppose that the sequence $\{\Rep\lambda_n\}_{n=1}^\infty$ is \textit{bounded}, i.e., there is such an $\omega>0$ that \begin{equation}\label{bounded1} |\Rep\lambda_n| \le \omega,\ n\in{\mathbb N}, \end{equation} and consider the element \begin{equation*} f:=\sum_{k=1}^\infty k^{-2}e_k\in X, \end{equation*} which is well defined since $\left\{k^{-2}\right\}_{k=1}^\infty\in l_1$ ($l_1$ is the space of absolutely summable sequences) and $\|e_k\|=1$, $k\in{\mathbb N}$ (see \eqref{ortho1}). In view of \eqref{ortho1}, by the properties of the \textit{s.m.}, \begin{equation}\label{vectors1} E_A(\cup_{k=1}^\infty\Delta_k)f=f\ \text{and}\ E_A(\Delta_k)f=k^{-2}e_k,\ k\in{\mathbb N}. \end{equation} For any $t\ge 0$ and an arbitrary $g^*\in X^*$, \begin{multline}\label{first1} \int\limits_{\sigma(A)}e^{t\Rep\lambda}\,dv(f,g^*,\lambda) \text{by \eqref{vectors1};} \\ \shoveleft{ =\int\limits_{\sigma(A)} e^{t\Rep\lambda}\,d v(E_A(\cup_{k=1}^\infty \Delta_k)f,g^*,\lambda) \text{by \eqref{decompose};} }\\ \shoveleft{ =\sum_{k=1}^\infty\int\limits_{\sigma(A)\cap\Delta_k}e^{t\Rep\lambda}\,dv(E_A(\Delta_k)f,g^*,\lambda) \text{by \eqref{vectors1};} }\\ \shoveleft{ =\sum_{k=1}^\infty k^{-2}\int\limits_{\sigma(A)\cap\Delta_k}e^{t\Rep\lambda}\,dv(e_k,g^*,\lambda) }\\ \text{since, for $\lambda\in \Delta_k$, by \eqref{bounded1} and \eqref{radii1},}\ \Rep\lambda=\Rep\lambda_k+(\Rep\lambda-\Rep\lambda_k) \\ \le \Rep\lambda_k+|\lambda-\lambda_k|\le \omega+\varepsilon_k\le \omega+1; \\ \shoveleft{ \le e^{t(\omega+1)}\sum_{k=1}^\infty k^{-2}\int\limits_{\sigma(A)\cap\Delta_k}1\,dv(e_k,g^*,\lambda) = e^{t(\omega+1)}\sum_{k=1}^\infty k^{-2}v(e_k,g^*,\Delta_k) }\\ \text{by \eqref{tv};} \\ \hspace{1.2cm} \le e^{t(\omega+1)}\sum_{k=1}^\infty k^{-2}4M\|e_k\|\|g^*\| = 4Me^{t(\omega+1)}\|g^*\|\sum_{k=1}^\infty k^{-2}<\infty. 
\end{multline} Also, for any $t<0$ and an arbitrary $g^*\in X^*$, \begin{multline}\label{ffirst1} \int\limits_{\sigma(A)}e^{t\Rep\lambda}\,dv(f,g^*,\lambda) \text{by \eqref{vectors1};} \\ \shoveleft{ =\int\limits_{\sigma(A)} e^{t\Rep\lambda}\,d v(E_A(\cup_{k=1}^\infty \Delta_k)f,g^*,\lambda) \text{by \eqref{decompose};} }\\ \shoveleft{ =\sum_{k=1}^\infty\int\limits_{\sigma(A)\cap\Delta_k}e^{t\Rep\lambda}\,dv(E_A(\Delta_k)f,g^*,\lambda) \text{by \eqref{vectors1};} }\\ \shoveleft{ =\sum_{k=1}^\infty k^{-2}\int\limits_{\sigma(A)\cap\Delta_k}e^{t\Rep\lambda}\,dv(e_k,g^*,\lambda) }\\ \text{since, for $\lambda\in \Delta_k$, by \eqref{bounded1} and \eqref{radii1},}\ \Rep\lambda=\Rep\lambda_k-(\Rep\lambda_k-\Rep\lambda) \\ \ge \Rep\lambda_k-|\Rep\lambda_k-\Rep\lambda|\ge -\omega-\varepsilon_k\ge -\omega-1; \\ \shoveleft{ \le e^{-t(\omega+1)}\sum_{k=1}^\infty k^{-2}\int\limits_{\sigma(A)\cap\Delta_k}1\,dv(e_k,g^*,\lambda) = e^{-t(\omega+1)}\sum_{k=1}^\infty k^{-2}v(e_k,g^*,\Delta_k) }\\ \text{by \eqref{tv};} \\ \hspace{1.2cm} \le e^{-t(\omega+1)}\sum_{k=1}^\infty k^{-2}4M\|e_k\|\|g^*\| = 4Me^{-t(\omega+1)}\|g^*\|\sum_{k=1}^\infty k^{-2}<\infty. \end{multline} Similarly, to \eqref{first1} for any $t\ge 0$ and an arbitrary $n\in{\mathbb N}$, \begin{multline}\label{second1} \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}} e^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \shoveleft{ \le \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}e^{t(\omega+1)}\sum_{k=1}^\infty k^{-2} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_k}1\,dv(e_k,g^*,\lambda) }\\ \text{by \eqref{vectors1};} \\ \shoveleft{ =e^{t(\omega+1)}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\sum_{k=1}^\infty \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_k}1\,dv(E_A(\Delta_k)f,g^*,\lambda) }\\ \text{by \eqref{decompose};} \\ \shoveleft{ = e^{t(\omega+1)}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\{\lambda\in\sigma(A)\,|\,e^{t\Rep\lambda}>n\}}1\,dv(E_A(\cup_{k=1}^\infty\Delta_k)f,g^*,\lambda) }\\ \text{by \eqref{vectors1};} \\ \shoveleft{ = e^{t(\omega+1)}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\{\lambda\in\sigma(A)\,|\,e^{t\Rep\lambda}>n\}}1\,dv(f,g^*,\lambda) \text{by \eqref{cond(ii)};} }\\ \shoveleft{ \le e^{t(\omega+1)}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}4M\left\|E_A\left(\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\right)f\right\|\|g^*\| }\\ \shoveleft{ \le 4Me^{t(\omega+1)}\left\|E_A\left(\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\right)f\right\| }\\ \text{by the strong continuity of the {\it s.m.};} \\ \hspace{1.2cm} \to 4Me^{t(\omega+1)}\left\|E_A\left(\emptyset\right)f\right\|=0,\ n\to\infty. 
\end{multline} Similarly, to \eqref{ffirst1} for any $t<0$ and an arbitrary $n\in{\mathbb N}$, \begin{multline}\label{ssecond1} \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}} e^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \shoveleft{ \le \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}e^{-t(\omega+1)}\sum_{k=1}^\infty k^{-2} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_k}1\,dv(e_k,g^*,\lambda) }\\ \text{by \eqref{vectors1};} \\ \shoveleft{ =e^{-t(\omega+1)}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\sum_{k=1}^\infty \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_k}1\,dv(E_A(\Delta_k)f,g^*,\lambda) }\\ \text{by \eqref{decompose};} \\ \shoveleft{ = e^{-t(\omega+1)}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\{\lambda\in\sigma(A)\,|\,e^{t\Rep\lambda}>n\}}1\,dv(E_A(\cup_{k=1}^\infty\Delta_k)f,g^*,\lambda) }\\ \text{by \eqref{vectors1};} \\ \shoveleft{ = e^{-t(\omega+1)}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\{\lambda\in\sigma(A)\,|\,e^{t\Rep\lambda}>n\}}1\,dv(f,g^*,\lambda) \text{by \eqref{cond(ii)};} }\\ \shoveleft{ \le e^{-t(\omega+1)}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}4M\left\|E_A\left(\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\right)f\right\|\|g^*\| }\\ \shoveleft{ \le 4Me^{-t(\omega+1)}\left\|E_A\left(\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\right)f\right\| }\\ \text{by the strong continuity of the {\it s.m.};} \\ \hspace{1.2cm} \to 4Me^{-t(\omega+1)}\left\|E_A\left(\emptyset\right)f\right\|=0,\ n\to\infty. \end{multline} By Proposition \ref{prop}, \eqref{first1}, \eqref{ffirst1}, \eqref{second1}, and \eqref{ssecond1} jointly imply that \[ f\in \bigcap\limits_{t\in{\mathbb R}}D(e^{tA}), \] and hence, by Theorem \ref{GWS}, \[ y(t):=e^{tA}f,\ t\in{\mathbb R}, \] is a weak solution of equation \eqref{1}. Let \begin{equation}\label{functional1} h^*:=\sum_{k=1}^\infty k^{-2}e_k^*\in X^*, \end{equation} the functional being well defined since $\{k^{-2}\}_{k=1}^\infty\in l_1$ and $\|e_k^*\|=1$, $k\in{\mathbb N}$ (see \eqref{H-B1}). In view of \eqref{H-B1} and \eqref{dist1}, we have: \begin{equation}\label{funct-dist1} \langle e_n,h^*\rangle=\langle e_k,k^{-2}e_k^*\rangle=d_k k^{-2}\ge \varepsilon k^{-2},\ k\in{\mathbb N}. \end{equation} Hence, \begin{multline}\label{notin1} \int\limits_{\sigma(A)}|\lambda|\,dv(f,h^*,\lambda) \text{by \eqref{decompose} as in \eqref{first1};} \\ \shoveleft{ =\sum_{k=1}^\infty k^{-2}\int\limits_{\sigma(A)\cap \Delta_k}|\lambda|\,dv(e_k,h^*,\lambda) }\\ \text{since, for $\lambda\in \Delta_k$, by \eqref{disks1}, $|\lambda|\ge k^4$;} \\ \shoveleft{ \ge \sum_{k=1}^\infty k^{-2}k^4 v(e_k,h^*,\Delta_k)\ge\sum_{k=1}^\infty k^2|\langle E_A(\Delta_k)e_k,h^*\rangle| }\\ \text{by \eqref{ortho1} and \eqref{funct-dist1};} \\ \hspace{1.2cm} \ge \sum_{k=1}^\infty k^2 \varepsilon k^{-2}=\infty. \end{multline} By Proposition \ref{prop}, \eqref{notin1} implies that \[ y(0)=f\notin D(A), \] which, by Proposition \ref{particular} ($n=1$, $I=\{0\}$) further implies that the weak solution $y(t)=e^{tA}f$, $t\in{\mathbb R}$, of equation \eqref{1} is not strongly differentiable at $0$. Now, suppose that the sequence $\{\Rep\lambda_n\}_{n=1}^\infty$ is \textit{unbounded}. Therefore, there is a subsequence $\{\Rep\lambda_{n(k)}\}_{k=1}^\infty$ such that \[ \Rep\lambda_{n(k)}\to \infty \ \text{or}\ \Rep\lambda_{n(k)}\to -\infty,\ k\to \infty. \] Let us consider separately each of the two cases. 
First, suppose that \[ \Rep\lambda_{n(k)}\to \infty,\ k\to \infty \] Then, without loss of generality, we can regard that \begin{equation}\label{infinity} \Rep\lambda_{n(k)} \ge k,\ k\in{\mathbb N}. \end{equation} Consider the elements \begin{equation*} f:=\sum_{k=1}^\infty e^{-n(k)\Rep\lambda_{n(k)}}e_{n(k)}\in X \ \text{and}\ h:=\sum_{k=1}^\infty e^{-\frac{n(k)}{2}\Rep\lambda_{n(k)}}e_{n(k)}\in X, \end{equation*} well defined since, by \eqref{infinity}, \[ \left\{e^{-n(k)\Rep\lambda_{n(k)}}\right\}_{k=1}^\infty, \left\{e^{-\frac{n(k)}{2}\Rep\lambda_{n(k)}}\right\}_{k=1}^\infty \in l_1 \] and $\|e_{n(k)}\|=1$, $k\in{\mathbb N}$ (see \eqref{ortho1}). By \eqref{ortho1}, \begin{equation}\label{subvectors1} E_A(\cup_{k=1}^\infty\Delta_{n(k)})f=f\ \text{and}\ E_A(\Delta_{n(k)})f=e^{-n(k)\Rep\lambda_{n(k)}}e_{n(k)},\ k\in{\mathbb N}, \end{equation} and \begin{equation}\label{subvectors12} E_A(\cup_{k=1}^\infty\Delta_{n(k)})h=h\ \text{and}\ E_A(\Delta_{n(k)})h=e^{-\frac{n(k)}{2}\Rep\lambda_{n(k)}}e_{n(k)},\ k\in{\mathbb N}. \end{equation} For any $t\ge 0$ and an arbitrary $g^*\in X^*$, \begin{multline}\label{first2} \int\limits_{\sigma(A)}e^{t\Rep\lambda}\,dv(f,g^*,\lambda) \text{by \eqref{decompose} as in \eqref{first1};} \\ \shoveleft{ =\sum_{k=1}^\infty e^{-n(k)\Rep\lambda_{n(k)}}\int\limits_{\sigma(A)\cap\Delta_{n(k)}}e^{t\Rep\lambda}\,dv(e_{n(k)},g^*,\lambda) }\\ \text{since, for $\lambda\in \Delta_{n(k)}$, by \eqref{radii1},}\ \Rep\lambda =\Rep\lambda_{n(k)}+(\Rep\lambda-\Rep\lambda_{n(k)}) \\ \le \Rep\lambda_{n(k)}+|\lambda-\lambda_{n(k)}|\le \Rep\lambda_{n(k)}+1; \\ \shoveleft{ \le \sum_{k=1}^\infty e^{-n(k)\Rep\lambda_{n(k)}} e^{t(\Rep\lambda_{n(k)}+1)} \int\limits_{\sigma(A)\cap\Delta_{n(k)}}1\,dv(e_{n(k)},g^*,\lambda) }\\ \shoveleft{ = e^t\sum_{k=1}^\infty e^{-[n(k)-t]\Rep\lambda_{n(k)}}v(e_{n(k)},g^*,\Delta_{n(k)}) \text{by \eqref{tv};} }\\ \shoveleft{ \le e^t\sum_{k=1}^\infty e^{-[n(k)-t]\Rep\lambda_{n(k)}}4M\|e_{n(k)}\|\|g^*\| = 4Me^t\|g^*\|\sum_{k=1}^\infty e^{-[n(k)-t]\Rep\lambda_{n(k)}} }\\ \hspace{1.2cm} <\infty. \end{multline} Indeed, for all $k\in {\mathbb N}$ sufficiently large so that \[ n(k)\ge t+1, \] in view of \eqref{infinity}, \[ e^{-[n(k)-t]\Rep\lambda_{n(k)}}\le e^{-k}. \] For any $t<0$ and an arbitrary $g^*\in X^*$, \begin{multline}\label{ffirst2} \int\limits_{\sigma(A)}e^{t\Rep\lambda}\,dv(f,g^*,\lambda) \text{by \eqref{decompose} as in \eqref{first1};} \\ \shoveleft{ =\sum_{k=1}^\infty e^{-n(k)\Rep\lambda_{n(k)}}\int\limits_{\sigma(A)\cap\Delta_{n(k)}}e^{t\Rep\lambda}\,dv(e_{n(k)},g^*,\lambda) }\\ \text{since, for $\lambda\in \Delta_{n(k)}$, by \eqref{radii1},}\ \Rep\lambda =\Rep\lambda_{n(k)}-(\Rep\lambda_{n(k)}-\Rep\lambda) \\ \ge \Rep\lambda_{n(k)}-|\Rep\lambda_{n(k)}-\Rep\lambda|\ge \Rep\lambda_{n(k)}-1; \\ \shoveleft{ \le \sum_{k=1}^\infty e^{-n(k)\Rep\lambda_{n(k)}} e^{t(\Rep\lambda_{n(k)}-1)} \int\limits_{\sigma(A)\cap\Delta_{n(k)}}1\,dv(e_{n(k)},g^*,\lambda) }\\ \shoveleft{ = e^{-t}\sum_{k=1}^\infty e^{-[n(k)-t]\Rep\lambda_{n(k)}}v(e_{n(k)},g^*,\Delta_{n(k)}) \text{by \eqref{tv};} }\\ \shoveleft{ \le e^{-t}\sum_{k=1}^\infty e^{-[n(k)-t]\Rep\lambda_{n(k)}}4M\|e_{n(k)}\|\|g^*\| = 4Me^{-t}\|g^*\|\sum_{k=1}^\infty e^{-[n(k)-t]\Rep\lambda_{n(k)}} }\\ \hspace{1.2cm} <\infty. \end{multline} Indeed, for all $k\in {\mathbb N}$, in view of $t<0$, \[ n(k)-t\ge n(k)\ge 1, \] and hence, in view of \eqref{infinity}, \[ e^{-[n(k)-t]\Rep\lambda_{n(k)}}\le e^{-k}. 
\] Similarly to \eqref{first2}, for any $t\ge 0$ and an arbitrary $n\in{\mathbb N}$, \begin{multline}\label{second2} \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}}e^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \shoveleft{ \le \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}e^t\sum_{k=1}^\infty e^{-[n(k)-t]\Rep\lambda_{n(k)}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_{n(k)}}1\,dv(e_{n(k)},g^*,\lambda) }\\ \shoveleft{ =e^t\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\sum_{k=1}^\infty e^{-\left[\frac{n(k)}{2}-t\right]\Rep\lambda_{n(k)}} e^{-\frac{n(k)}{2}\Rep\lambda_{(k)}} }\\ \shoveleft{ \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_{n(k)}}1\,dv(e_{n(k)},g^*,\lambda) }\\ \text{since, by \eqref{infinity}, there is an $L>0$ such that $e^{-\left[\frac{n(k)}{2}-t\right]\Rep\lambda_{n(k)}}\le L$, $k\in{\mathbb N}$;} \\ \shoveleft{ \le Le^t\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\sum_{k=1}^\infty e^{-\frac{n(k)}{2}\Rep\lambda_{n(k)}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_{n(k)}}1\,dv(e_{n(k)},g^*,\lambda) }\\ \text{by \eqref{subvectors12};} \\ \shoveleft{ = Le^t\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\sum_{k=1}^\infty \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_{n(k)}}1\,dv(E_A(\Delta_{n(k)})h,g^*,\lambda) }\\ \text{by \eqref{decompose};} \\ \shoveleft{ = Le^t\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}}1\,dv(E_A(\cup_{k=1}^\infty\Delta_{n(k)})h,g^*,\lambda) }\\ \text{by \eqref{subvectors12};} \\ \shoveleft{ =Le^t\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\int\limits_{\{\lambda\in\sigma(A)\,|\,e^{t\Rep\lambda}>n\}}1\,dv(h,g^*,\lambda) \text{by \eqref{cond(ii)};} }\\ \shoveleft{ \le Le^t\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}4M \left\|E_A\left(\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\right)h\right\|\|g^*\| }\\ \shoveleft{ \le 4LMe^t\|E_A(\{\lambda\in\sigma(A)\,|\,e^{t\Rep\lambda}>n\})h\| }\\ \text{by the strong continuity of the {\it s.m.};} \\ \hspace{1.2cm} \to 4LMe^t\left\|E_A\left(\emptyset\right)h\right\|=0,\ n\to\infty. 
\end{multline} Similarly to \eqref{ffirst2}, for any $t<0$ and an arbitrary $n\in{\mathbb N}$, \begin{multline}\label{ssecond2} \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}}e^{t\Rep\lambda}\,dv(f,g^*,\lambda) \\ \shoveleft{ \le \sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}e^{-t}\sum_{k=1}^\infty e^{-[n(k)-t]\Rep\lambda_{n(k)}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_{n(k)}}1\,dv(e_{n(k)},g^*,\lambda) }\\ \shoveleft{ =e^{-t}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\sum_{k=1}^\infty e^{-\left[\frac{n(k)}{2}-t\right]\Rep\lambda_{n(k)}} e^{-\frac{n(k)}{2}\Rep\lambda_{(k)}} }\\ \shoveleft{ \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_{n(k)}}1\,dv(e_{n(k)},g^*,\lambda) }\\ \text{since, by \eqref{infinity}, there is an $L>0$ such that $e^{-\left[\frac{n(k)}{2}-t\right]\Rep\lambda_{n(k)}}\le L$, $k\in{\mathbb N}$;} \\ \shoveleft{ \le Le^{-t}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\sum_{k=1}^\infty e^{-\frac{n(k)}{2}\Rep\lambda_{n(k)}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_{n(k)}}1\,dv(e_{n(k)},g^*,\lambda) }\\ \text{by \eqref{subvectors12};} \\ \shoveleft{ = Le^{-t}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\sum_{k=1}^\infty \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\cap \Delta_{n(k)}}1\,dv(E_A(\Delta_{n(k)})h,g^*,\lambda) }\\ \text{by \eqref{decompose};} \\ \shoveleft{ = Le^{-t}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}} \int\limits_{\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}}1\,dv(E_A(\cup_{k=1}^\infty\Delta_{n(k)})h,g^*,\lambda) }\\ \text{by \eqref{subvectors12};} \\ \shoveleft{ =Le^{-t}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}\int\limits_{\{\lambda\in\sigma(A)\,|\,e^{t\Rep\lambda}>n\}}1\,dv(h,g^*,\lambda) \text{by \eqref{cond(ii)};} }\\ \shoveleft{ \le Le^{-t}\sup_{\{g^*\in X^*\,|\,\|g^*\|=1\}}4M \left\|E_A\left(\left\{\lambda\in\sigma(A)\,\middle|\,e^{t\Rep\lambda}>n\right\}\right)h\right\|\|g^*\| }\\ \shoveleft{ \le 4LMe^{-t}\|E_A(\{\lambda\in\sigma(A)\,|\,e^{t\Rep\lambda}>n\})h\| }\\ \text{by the strong continuity of the {\it s.m.};} \\ \hspace{1.2cm} \to 4LMe^{-t}\left\|E_A\left(\emptyset\right)h\right\|=0,\ n\to\infty. \end{multline} By Proposition \ref{prop}, \eqref{first2}, \eqref{ffirst2}, \eqref{second2}, and \eqref{ssecond2} jointly imply that \[ f\in \bigcap\limits_{t\in{\mathbb R}}D(e^{tA}), \] and hence, by Theorem \ref{GWS}, \[ y(t):=e^{tA}f,\ t\in{\mathbb R}, \] is a weak solution of equation \eqref{1}. Since, for any $\lambda \in \Delta_{n(k)}$, $k\in {\mathbb N}$, by \eqref{radii1}, \eqref{infinity}, \begin{multline*} \Rep\lambda =\Rep\lambda_{n(k)}-(\Rep\lambda_{n(k)}-\Rep\lambda) \ge \Rep\lambda_{n(k)}-|\Rep\lambda_{n(k)}-\Rep\lambda| \\ \ \ \ \ge \Rep\lambda_{n(k)}-\varepsilon_{n(k)} \ge \Rep\lambda_{n(k)}-1/n(k)\ge k-1\ge 0 \end{multline*} and, by \eqref{disks1}, \[ \Rep\lambda<(2n(k))^{-1}\ln|\Imp\lambda|, \] we infer that, for any $\lambda \in \Delta_{n(k)}$, $k\in {\mathbb N}$, \begin{equation*} |\lambda|\ge|\Imp\lambda|\ge e^{2n(k)\Rep\lambda}\ge e^{2n(k)(\Rep\lambda_{n(k)}-1/n(k))}. 
\end{equation*} Using this estimate, for the functional $h^*\in X^*$ defined by \eqref{functional1}, we have: \begin{multline}\label{notin} \int\limits_{\sigma(A)}|\lambda|\,dv(f,h^*,\lambda) \text{by \eqref{decompose} as in \eqref{first1};} \\ \shoveleft{ =\sum_{k=1}^\infty e^{-n(k)\Rep\lambda_{n(k)}}\int\limits_{\Delta_{n(k)}}|\lambda|\,dv(e_{n(k)},h^*,\lambda) }\\ \shoveleft{ \ge\sum_{k=1}^\infty e^{-n(k)\Rep\lambda_{n(k)}}e^{2n(k)(\Rep\lambda_{n(k)}-1/n(k))}v(e_{n(k)},h^*,\Delta_{n(k)}) }\\ \shoveleft{ \ge \sum_{k=1}^\infty e^{-2} e^{n(k)\Rep\lambda_{n(k)}}|\langle E_A(\Delta_{n(k)})e_{n(k)},h^*\rangle| \text{by \eqref{infinity}, \eqref{ortho1}, and \eqref{funct-dist1};} }\\ \hspace{1.2cm} \ge \sum_{k=1}^\infty e^{-2}\varepsilon\dfrac{e^{n(k)}}{n(k)^2}=\infty. \end{multline} By Proposition \ref{prop}, \eqref{notin} implies that \[ y(0)=f\notin D(A), \] which, by Proposition \ref{particular} ($n=1$, $I=\{0\}$), further implies that the weak solution $y(t)=e^{tA}f$, $t\in{\mathbb R}$, of equation \eqref{1} is not strongly differentiable at $0$. The remaining case of \[ \Rep\lambda_{n(k)}\to -\infty,\ k\to \infty \] is symmetric to the case of \[ \Rep\lambda_{n(k)}\to \infty,\ k\to \infty \] and is considered in absolutely the same manner, which furnishes a weak solution $y(\cdot)$ of equation \eqref{1} such that \begin{equation*} y(0)\not\in D(A), \end{equation*} and hence, by Proposition \ref{particular} ($n=1$, $I=\{0\}$), one that is not strongly differentiable at $0$. With every possibility concerning $\{\Rep\lambda_n\}_{n=1}^\infty$ considered, we infer that assuming the opposite to the \textit{``if"} part's premise allows us to find a weak solution of equation \eqref{1} that is not strongly differentiable at $0$, and hence, much less strongly infinite differentiable on ${\mathbb R}$. Thus, the proof by contrapositive of the \textit{``only if"} part is complete, and so is the proof of the entire statement. \end{proof} From Theorem \ref{real} and {\cite[Theorem $4.2$]{Markin2011}}, the latter characterizing the strong infinite differentiability of all weak solutions of equation \eqref{+} on $(0,\infty)$, we also obtain \begin{cor}\label{case+open} Let $A$ be a scalar type spectral operator in a complex Banach space. If all weak solutions of equation \eqref{+} are strongly infinite differentiable on $(0,\infty)$, then all weak solutions of equation \eqref{1} are strongly infinite differentiable on ${\mathbb R}$. \end{cor} \begin{rem} As follows from Theorem \ref{real}, all weak solutions of equation \eqref{1} with a scalar type spectral operator $A$ in a complex Banach space can be \textit{strongly infinite differentiable} while the operator $A$ is \textit{unbounded}, e.g., when $A$ is an unbounded \textit{self-adjoint} operator in a complex Hilbert space (cf. {\cite[Theorem $7.1$]{Markin1999}}). This fact contrasts with the situation when a closed densely defined linear operator $A$ in a complex Banach space generates a strongly continuous group $\left\{T(t) \right\}_{t\in {\mathbb R}}$ of bounded linear operators, i.e., the associated abstract Cauchy problem is \textit{well-posed} (see Remarks \ref{remsws}), in which case even the (left or right) strong differentiability of all weak solutions of equation \eqref{1} at $0$ immediately implies \textit{boundedness} for $A$ (cf. \cite{Engel-Nagel}). 
\end{rem} \section{The Cases of Normal and Self-Adjoint Operators} As an important particular case of Theorem \ref{real}, we obtain \begin{cor}[The Case of a Normal Operator]\label{realnormal}\ \\ Let $A$ be a normal operator in a complex Hilbert space. Every weak solution of equation \eqref{1} is strongly infinite differentiable on ${\mathbb R}$ iff there exist $b_+>0$ and $ b_->0$ such that the set $\sigma(A)\setminus {\mathscr L}_{b_-,b_+}$, where \begin{equation*} {\mathscr L}_{b_-,b_+}:=\left\{\lambda \in {\mathbb C}\, \middle|\, \Rep\lambda \le \min\left(0,-b_-\ln|\Imp\lambda|\right) \ \text{or}\ \Rep\lambda \ge \max\left(0,b_+\ln|\Imp\lambda|\right)\right\}, \end{equation*} is bounded (see Fig. \ref{fig:graph2}). \end{cor} \begin{rem} Corollary \ref{realnormal} develops the results of paper \cite{Markin1999}, where similar consideration is given to the strong differentiability of the weak solutions of equation \eqref{+} with a normal operator $A$ in a complex Hilbert space on $[0,\infty)$ and $(0,\infty)$. \end{rem} From Corollary \ref{case+open}, we immediately obtain the following \begin{cor}\label{casenormal+open} Let $A$ be a normal operator in a complex Hilbert space. If all weak solutions of equation \eqref{+} are strongly infinite differentiable on $(0,\infty)$ (cf. {\cite[Theorem $5.2$]{Markin1999}}), then all weak solutions of equation \eqref{1} are strongly infinite differentiable on ${\mathbb R}$. \end{cor} Considering that, for a self-adjoint operator $A$ in a complex Hilbert space $X$, \[ \sigma(A)\subseteq {\mathbb R} \] (see, e.g., \cite{Dun-SchII,Plesner}), we further arrive at \begin{cor}[The Case of a Self-Adjoint Operator]\label{real self-adjoint}\ \\ Every weak solution of equation \eqref{1} with a self-adjoint operator $A$ in a complex Hilbert space is strongly infinite differentiable on ${\mathbb R}$. \end{cor} Cf. {\cite[Theorem $7.1$]{Markin1999}}. \section{Inherent Smoothness Improvement Effect} As is observed in the proof of the \textit{``only if"} part of Theorem \ref{real}, the opposite to the \textit{``if"} part's premise implies that there is a weak solution of equation \eqref{1}, which is not strongly differentiable at $0$. This renders the case of finite strong differentiability of the weak solutions superfluous and we arrive at the following inherent effect of smoothness improvement. \begin{prop} Let $A$ be a scalar type spectral operator in a complex Banach space $(X,\|\cdot\|)$. If every weak solution of equation \eqref{1} is strongly differentiable at $0$, then all of them are strongly infinite differentiable on ${\mathbb R}$. \end{prop} Cf. {\cite[Proposition $5.1$]{Markin2011}}. \section{Concluding Remark} Due to the {\it scalar type spectrality} of the operator $A$, Theorem \ref{real} is stated exclusively in terms of the location of its {\it spectrum} in the complex plane, similarly to the celebrated \textit{Lyapunov stability theorem} \cite{Lyapunov1892} (cf. {\cite[Ch. I, Theorem 2.10]{Engel-Nagel}}), and thus, is an intrinsically qualitative statement (cf. \cite{Pazy1968,Markin2011}). \section{Acknowledgments} The author extends sincere appreciation to his colleague, Dr.~Maria Nogin of the Department of Mathematics, California State University, Fresno, for her kind assistance with the graphics. \section{Conflicts of Interest} The author declares that there are no conflicts of interest regarding the publication of this paper. \end{document}
\begin{document} \title[Gilbert's conjecture]{Gilbert's conjecture and a new way to octonionic analytic functions from Clifford analysis} \date{\today} \author{Yong Li} \address{School of Mathematics and Statistics, Anhui Normal University, Wuhu 241002, Anhui, People's Republic of China} \email{[email protected]} \thanks{The author was partially supported by Anhui Province Scientific Research Compilation Plan Project 2022AH050175.} \subjclass[2020]{Primary 30A05, 30G35, 30H10; Secondary 42B30, 42B35} \keywords{Clifford analysis, Hardy space, Gilbert's conjecture, Octonionic analytic functions, Octonionic Hardy space} \begin{abstract} In this article we give an affirmative answer to Gilbert's conjecture on Hardy spaces of Clifford analytic functions in the upper half-space of $\mathbb{R}^8$. It depends on an explicit construction of the spinor space $\mathcal{R}_8$ and the Clifford algebra $Cl_8$ by means of the octonion algebra. Moreover, it gives us an associative way to octonionic analytic function theory. The analogous question is also discussed for the octonionic Hardy space in the upper half-space, and some classical results about octonionic analytic functions are reformulated as well. \end{abstract} \maketitle \tableofcontents \section{Introduction} The classical $H^p$ theory, the study of spaces of functions analytic in the upper half-plane $\mathbb{C}_+=\{z\in \mathbb{C}: \mathrm{Im}\,z>0\}$, started in 1915, when G.H. Hardy considered the analytic functions $F$ in $\mathbb{C}_+$ with $$\HNC{F}=\sup_{t>0} \left(\int_{\mathbb{R}}{\abs{F(x+it)}^p}\mathrm{d}x\right)^{\frac{1}{p}}<\infty.$$ Through the efforts of many great mathematicians, it has developed into an elegant theory in mathematics. A classical result in $H^p$ theory says that an $H^p$ function is determined by the real part of its non-tangential boundary value. The following route leads to this result. \begin{theorem} Suppose $F\in H^p(\mathbb{C}_+), p>1$. Then there is a function $f\in L^p(\mathbb{R},\mathbb{C})$ such that \begin{enumerate} \item $\lim\limits_{z\to x \ n.t} F(z)=f(x)$ exists for almost every $x\in \mathbb{R}$. \item $\lim\limits_{t\to 0}\displaystyle\int_{-\infty}^{+\infty}\abs{F(x+it)-f(x)}^p\mathrm{d}x=0.$ \end{enumerate} \end{theorem} Conversely, for an $f\in L^p(\mathbb{R},\mathbb{C})$, $p\ge 1$, we can consider its Cauchy integral $\mathcal{C}(f)$: $$\mathcal{C}(f)=\frac{1}{2\pi i}\int_{-\infty}^{+\infty}\frac{f(t)}{t-z}\mathrm{d} t, \quad \quad z\in \mathbb{C}\setminus \mathbb{R}.$$ $\mathcal{C}(f)$ is an analytic function on $\mathbb{C}_+$ with boundary value $\dfrac{1}{2}(f+iHf)$, where $H$ is the Hilbert transform operator defined by $$H(f)(x)=p.v.\frac{1}{\pi}\int_{-\infty}^{+\infty}\frac{f(y)}{x-y}\mathrm{d}y.$$ So we can view $H^p(\mathbb{C}_+)$ as $\{f\in L^p(\mathbb{R},\mathbb{C}): f=iHf\}$. Actually we can identify $H^p(\mathbb{C}_+)$ with $L^p(\mathbb{R},\mathbb{R})$ by taking real parts of those $f\in L^p(\mathbb{R},\mathbb{C})$ with $f=iHf$. Roughly speaking, \begin{equation}\label{eq:complex} H^p(\mathbb{C}_+) = \begin{dcases} L^p(\mathbb{R}, \mathbb{R}), & \quad p>1\\ \{f\in L^1(\mathbb{R}, \mathbb{R}): Hf\in L^1(\mathbb{R}, \mathbb{R})\}, &\quad p=1. \end{dcases} \end{equation} Whether these results can be carried over to the Clifford or octonionic setting is a conjecture recorded in \cite[P.140 Conjecture 7.23]{Gilbert}. \subsection{Clifford module-valued Hardy space} With the development of Clifford analysis, the Clifford Hardy space theory has been established in a completely analogous way. We recall some basics and Gilbert's conjecture. 
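Before doing so, it may help to keep in mind a simple classical instance of the identity $f=iHf$; the following illustration is supplied here for convenience and is not taken from \cite{Gilbert}. The function $F(z)=\dfrac{i}{z+i}$ belongs to $H^p(\mathbb{C}_+)$ for every $p>1$, and its boundary value is
\begin{equation*}
f(x)=\frac{i}{x+i}=\frac{1}{1+x^2}+i\,\frac{x}{1+x^2}.
\end{equation*}
Since $H\left[\frac{1}{1+x^2}\right]=\frac{x}{1+x^2}$ and $H\left[\frac{x}{1+x^2}\right]=-\frac{1}{1+x^2}$, we get $Hf=-if$, that is, $f=iHf$; moreover, $f$ is recovered from its real part $\frac{1}{1+x^2}\in L^p(\mathbb{R},\mathbb{R})$, exactly as the identification \eqref{eq:complex} predicts.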
For a Clifford module $\mathfrak{H}$ (see Definition \ref{def:CM}), we say that $F\in H^p(\mathbb{R}^n_+, \mathfrak{H})$ if $F$ is a Clifford analytic function in the upper half-space $\mathbb{R}^n_+=\{(t, \underline{x}): t>0,\underline{x}\in \mathbb{R}^{n-1}\}$ and $$\|F\|_{H^p(\mathbb{R}^n_+, \mathfrak{H})}=\sup_{t>0}\left(\int_{\mathbb{R}^{n-1}}\abs{F(t, \underline{x})}^p\mathrm{d} x\right)^{\frac{1}{p}}<\infty.$$ First, in analogy with the classical case, we shall show that for $p>\frac{n-2}{n-1}$, every $H^p\left(\mathbb{R}_{+}^n, \mathfrak{H}\right)$ function has almost-everywhere non-tangential limits. \begin{theorem}\cite[P.120 Theorem 5.4]{Gilbert}\label{thm:bdv} Suppose $F \in H^p\left(\mathbb{R}_{+}^n, \mathfrak{H}\right), p>\frac{n-2}{n-1}$. Then there is a function $f \in$ $L^p\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$ such that \begin{enumerate} \item $\lim \limits_{z \rightarrow x n.t.}$ $F(z)=f(x)$ exists for almost all $x \in \mathbb{R}^{n-1}$, \item $\lim \limits_{t \rightarrow 0}\displaystyle \int_{\mathbb{R}^{n-1}}|F(x, t)-f(x)|^p d x=0$. \end{enumerate} \end{theorem} Then, for $p \geq 1$, we shall give a boundary integral characterization of $H^p\left(\mathbb{R}_{+}^n, \mathfrak{H}\right)$, whereby the Hardy space may be identified with a subspace of $L^p\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$ on the boundary. More precisely, for $f \in L^p\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$, we define the Cauchy integral of $f$ on $\mathbb{R}^{n-1}$ by setting \begin{equation}\label{eq:Cauchy} C f(z)=\frac{1}{\omega_n} \int_{\mathbb{R}^{n-1}} \frac{u-z}{|u-z|^n} \bfe_0 f(u) d u \end{equation} for $z=(x, t) \in \mathbb{R}^n \backslash \mathbb{R}^{n-1}$. $C f$ is clearly analytic on $\mathbb{R}_{+}^n$, and \begin{align*} C f(z)&=\frac{1}{2}\left(P_t * f\right)(x)+\frac{1}{2} \sum_{j=1}^{n-1} \bfe_0\bfe_j\left(Q_t^{(j)} * f\right)(x) \\ & =\left\{P_t * \frac{1}{2}\left(I+\bfe_0\mathcal{H} \right) f\right\}(x), \quad z=(x, t) \in \mathbb{R}_{+}^n, \end{align*} where $\mathcal{H}=\sum_{j=1}^{n-1} \bfe_j R_j$ and $R_j$ is the $j$th Riesz transform, given by $$ R_j g(x)=\frac{2}{\omega_n} \int_{\mathbb{R}^{n-1}} \frac{x_j-u_j}{|x-u|^n} g(u) \mathrm{d} u, \quad \quad 1 \leq j \leq n-1, $$ and \begin{align*} P_t(x)=&\frac{2}{\omega_n} \frac{t}{\left[t^2+|x|^2\right]^{n / 2}}=\frac{2}{\omega_n} \frac{1}{t^{n-1}} \frac{1}{\left[1+|x / t|^2\right]^{n / 2}} \\ Q_t^{(j)}(x)=& \frac{2}{\omega_n} \frac{x_j}{\left[t^2+|x|^2\right]^{n / 2}}=\frac{2}{\omega_n} \frac{1}{t^{n-1}} \frac{x_j / t}{\left[1+|x / t|^2\right]^{n / 2}}, \quad j=1,\cdots, n-1, \end{align*} are the Poisson kernel and the $j$th conjugate Poisson kernel for $\mathbb{R}^{n-1}$, respectively. The following result is a straightforward consequence of the properties of the Poisson kernel. \begin{theorem}\cite[P.122 Theorem 5.16]{Gilbert}\label{thm:Hp} Suppose that either (i) $1<p<\infty$ and $f \in L^p\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$, or (ii) $p=1$ and $f, \mathcal{H} f \in L^1\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$. Then $C f \in H^p\left(\mathbb{R}_{+}^n, \mathfrak{H}\right)$, and $$ \lim _{z \rightarrow x, n.t} C f(z)=\frac{1}{2}\left(I+\bfe_0\mathcal{H} \right) f(x) $$ for almost all $x \in \mathbb{R}^{n-1}$. Conversely, suppose $1 \leq p<\infty$ and $F \in H^p\left(\mathbb{R}_{+}^n, \mathfrak{H}\right)$. Then $F=C f$, where $f$ is the almost-everywhere non-tangential limit of $F$ given by Theorem \ref{thm:bdv}. 
\end{theorem} Theorem \ref{thm:Hp} says, in effect, that, for $1<p<\infty, H^p\left(\mathbb{R}_{+}^n, \mathfrak{H}\right)$ is precisely the image of $L^p\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$ under the Cauchy integral operator, while $H^1\left(\mathbb{R}_{+}^n, \mathfrak{H}\right)$ is the image under $C$ of the set of all $f \in$ $L^1\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$ for which $\mathcal{H} f \in L^1\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$. Moreover, for $1 \leq p<\infty$, $f \in L^p\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$ arises as the non-tangential boundary value of an $H^p\left(\mathbb{R}_{+}^n, \mathfrak{H}\right)$-function if and only if $f=\bfe_0\mathcal{H} f \in L^p\left(\mathbb{R}^{n-1}, \mathfrak{H}\right)$. Thus for $p>1$, in the special case of $\mathfrak{H}=Cl_n, H^p\left(\mathbb{R}_{+}^n, Cl_n\right)$ is effectively isomorphic to $L^p\left(\mathbb{R}^{n-1}, Cl_{n-1}\right)$, while $H^1\left(\mathbb{R}_{+}^n, Cl_n\right)$ is isomorphic to a subspace of $L^1\left(\mathbb{R}^{n-1}, Cl_{n-1}\right)$. In the special case of $\mathfrak{H}=Cl_n$, $H^p\left(\mathbb{R}_{+}^n, Cl_n\right)$ therefore has a description similar to \eqref{eq:complex}: \begin{equation}\label{eq:CLn} H^p(\mathbb{R}^n_+, Cl_n) = \begin{dcases} L^p(\mathbb{R}^{n-1}, Cl_{n-1}), & \quad p>1\\ \{f\in L^1(\mathbb{R}^{n-1}, Cl_{n-1}): \bfe_0\mathcal{H}f\in L^1(\mathbb{R}^{n-1}, Cl_{n-1})\}, &\quad p=1. \end{dcases} \end{equation} In the general case, Gilbert makes the following conjecture in his book. \begin{conjecture}\cite[P.140 Conjecture 7.23]{Gilbert} For every Clifford module $\mathfrak{H}$ there exist $\eta \in \mathbb{R}^n$ with $\eta^2=-1$ and a subspace $\mathfrak{H}_0$ of $\mathfrak{H}$, such that \begin{enumerate} \item $\mathfrak{H}$ admits the splitting $\mathfrak{H}=\mathfrak{H}_0+\eta\mathfrak{H}_0$, \item the Cauchy integral operator $C_M$ is an isomorphism from $L^p(\partial M, \mathfrak{H}_0 )$ onto $H^p(M,\mathfrak{H})$ for all $p>p_0$, \item if $Tan$ denotes the projection of $\mathfrak{H}$ onto $\mathfrak{H}_0$, then the boundary operator mapping $F\to Tan(F^+)$ is continuous from $H^p(M,\mathfrak{H})$ onto $L^p(\partial M, \mathfrak{H}_0 )$ for all $p>p_0$. \end{enumerate} \end{conjecture} We will give an affirmative answer in the case of Hardy spaces of Clifford analytic functions in the upper half-space of $\mathbb{R}^8$. \begin{theorem}\label{thm:main1} For every Clifford module $\mathfrak{H}$ there exist $\eta \in \mathbb{R}^8$ with $\eta^2=-1$ and a subspace $\mathfrak{H}_0$ of $\mathfrak{H}$, such that \begin{enumerate} \item $\mathfrak{H}$ admits the splitting $\mathfrak{H}=\mathfrak{H}_0\oplus \eta\mathfrak{H}_0$, \item the Cauchy integral operator $C$ is an isomorphism from $L^p(\mathbb{R}^7, \mathfrak{H}_0 )$ onto $H^p(\mathbb{R}^8_+,\mathfrak{H})$ for all $p>1$, \item if $Tan$ denotes the projection of $\mathfrak{H}$ onto $\mathfrak{H}_0$, then the boundary operator mapping $F\to Tan(F^+)$ is continuous from $H^p(\mathbb{R}^8_+,\mathfrak{H})$ onto $L^p(\mathbb{R}^7, \mathfrak{H}_0 )$ for all $p>1$. \end{enumerate} \end{theorem} \subsection{Octonionic Hardy space} Reviewing the material used in the Clifford case, the Cauchy integral involves only $\bfe_0\bfe_j$, which means we may consider a Spin(8) module or a $Cl_{7}$ module. It is known that the octonion algebra can be realized as a $Cl_{7}$ module \cite{Harvey90,HLR21}. What happens if we take the Cauchy integral of an octonion-valued function? 
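Before pursuing this question, we record the elementary algebraic identity behind the boundary characterization $f=\bfe_0\mathcal{H}f$; this routine verification is included here only for the reader's convenience. Since $\bfe_j\bfe_0=-\bfe_0\bfe_j$ and $\bfe_0^2=-1$, while the Riesz transforms commute and satisfy $\sum_{j=1}^{n-1}R_j^2=-I$ on $L^p(\mathbb{R}^{n-1},\mathfrak{H})$, $1<p<\infty$ (as can be checked via the Fourier transform), we have
\begin{equation*}
\left(\bfe_0\mathcal{H}\right)^2=\sum_{j,k=1}^{n-1}\bfe_0\bfe_j\bfe_0\bfe_k\,R_jR_k=\sum_{j,k=1}^{n-1}\bfe_j\bfe_k\,R_jR_k=-\sum_{j=1}^{n-1}R_j^2=I,
\end{equation*}
so that $\frac{1}{2}\left(I+\bfe_0\mathcal{H}\right)$ is indeed a projection, consistent with the description of the boundary values above.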
Motivated by this natural question and with the help of the previous developments, we find a way from Clifford analysis to octonionic analytic function theory. The octonionic Hardy space theory can then be built easily, and in that setting Theorem \ref{thm:bdv} and Theorem \ref{thm:Hp} still hold. Thus it is very natural to ask the same question in the octonionic case, but here we have the following, totally different, answer. \begin{theorem}\label{thm:main2} There do not exist two proper subspaces $\mathfrak{H}_0, \mathfrak{H}_1$ of $\mathbb{O}$ satisfying all three of the following statements: \begin{enumerate} \item $\mathbb{O}$ admits the splitting $\mathbb{O}=\mathfrak{H}_0\oplus \mathfrak{H}_1$, \item the Cauchy integral operator $C_{\mathbb{O}}$ is an isomorphism from $L^p(\mathbb{R}^7, \mathfrak{H}_0 )$ onto $H^p(\mathbb{R}^8_+,\mathbb{O})$ for all $p>1$, \item if $Tan$ denotes the projection of $\mathbb{O}$ onto $\mathfrak{H}_0$, then the boundary operator mapping $F\to Tan(F^+)$ is continuous from $H^p(\mathbb{R}^8_+,\mathbb{O})$ onto $L^p(\mathbb{R}^7, \mathfrak{H}_0 )$ for all $p>1$. \end{enumerate} \end{theorem} Our paper is organized as follows. Section 2 is devoted to recalling some basics of Clifford algebras and the octonion algebra. In Section 3, the key realization of $Cl_8$ and of the spinor space $\mathcal{R}_8$ is given, which connects the octonion algebra and $\mathcal{R}_8$ closely. A new way from Clifford analysis to octonionic analytic function theory is introduced in Section 4, where the octonionic Hardy space in the upper half-space is built from Clifford Hardy spaces. In the last section, we give the proofs of Theorem \ref{thm:main1} and Theorem \ref{thm:main2}. \section{Preliminaries} \subsection{Some basic properties of the universal Clifford algebra $Cl_n$ over $\mathbb{R}^n$} We do not discuss the properties of the universal Clifford algebra $Cl_n$ at length here, but the following notions and conventions are needed. For a detailed discussion of Clifford algebras, we refer the reader to \cite{Atiyah,Gilbert,F.S,BDS82}. \begin{definition} Let $\mathbb{A}$ be an associative algebra over $\mathbb{R}$ with identity $1$ and let $v:\mathbb{R}^n\rightarrow \mathbb{A}$ be an $\mathbb{R}$-linear embedding. The pair $(\mathbb{A},v)$ is said to be a universal Clifford algebra $Cl_n$ over $\mathbb{R}^n$ if: \begin{enumerate}[label=(\arabic*)] \item $\mathbb{A}$ is generated as an algebra by $\{v(x):x\in \mathbb{R}^n\}$ and $\{\lambda 1:\lambda\in\mathbb{R}\}$, \item $(v(x))^2=-\abs{x}^2$ for all $x\in \mathbb{R}^n$, \item $\dim_{\mathbb{R}}\mathbb{A}=2^n$. \end{enumerate} \end{definition} \textbf{Some notations and conventions} \begin{itemize} \item\label{g_i} Let $\{\bfe_{i-1}=(\underbrace{0,\cdots,0}_{i-1},1,0,\cdots,0),i=1,\cdots,n\}$ be the standard orthonormal basis of $\mathbb{R}^n$; we do not distinguish between $\bfe_i\in \mathbb{R}^n$ and $v(\bfe_i)\in Cl_n$; \item $\mathbf{x}=\sum_{i=0}^{n-1}x_i\bfe_i \in \mathbb{R}^n$, $\abs{x}^2=\sum_{i=0}^{n-1}x_i^2$; \item Let $\mathcal{P}(n)$ be the set of subsets of $\{0,\cdots,n-1\}$. For $\alpha\in \mathcal{P}(n)$, $\alpha\neq \emptyset$, $\alpha=\{\alpha_1,\cdots,\alpha_k\}$ with $0\le \alpha_1<\cdots<\alpha_k\le n-1$, denote $\bfe_{\alpha}=\bfe_{\alpha_1}\cdots \bfe_{\alpha_k}$, and $\bfe_{\emptyset}=1$. \item $Cl_n$ is $\mathbb{R}$-linearly generated by $\{\bfe_\alpha:\alpha\in \mathcal{P}(n)\}$; \item $\bfe_i\bfe_j+\bfe_j\bfe_i=-2\delta_{ij}$ for all $i,j=0,\cdots,n-1$. 
\item For $x=\sum_{A}x_A \bfe_A \in Cl_n$, the Clifford conjugation is $x^\star=\sum_{A}x_A \bfe_A^\star$, where $\bfe_A^\star=(-1)^{\frac{|A|(|A|+1)}{2}}\bfe_A.$ \end{itemize} \begin{definition}\label{def:CM} A finite-dimensional real Hilbert space $\mathcal{H}$ is said to be a $Cl_n$ module when there exist skew-adjoint real linear operators $T_1,\cdots, T_n$ such that $$T_jT_k+T_kT_j=-2\delta_{jk}id,\quad \quad (1\le j,k \le n).$$ \end{definition} Among all $Cl_n$ modules, the most important one is the real spinor space $\mathcal{R}_n$. When $n\neq 4l+3$, it can be characterized by $Cl_n=End_{\mathbb{R}}(\mathcal{R}_n)$ \cite[P.59 (7.42)]{Gilbert}, where $End_{\mathbb{R}}(\mathcal{R}_n)$ is the algebra of real linear operators from $\mathcal{R}_n$ to $\mathcal{R}_n$. \subsection{Octonion algebra} The algebra of octonions $\mathbb{O}$ is a non-commutative, non-associative, normed $8$-dimensional division algebra over $\mathbb{R}$. In parallel with the Clifford algebra, the following notions and conventions are needed. We refer to \cite{Baez,CS03} for a detailed discussion of the octonion algebra. \textbf{Some notations and conventions again} \begin{itemize} \item Let $\mathbf{e_0}=1,\bfe_1, \ldots,\mathbf{e_7}$ be its natural basis; we have $$\mathbf{e_i}\mathbf{e_j}+\mathbf{e_j}\mathbf{e_i}=-2\delta_{ij},\quad i,j=1,\ldots,7.$$ Note that $\bfe_i$ may carry three different meanings: a vector in $\mathbb{R}^8$, a Clifford number in $Cl_8$, and an octonion in $\mathbb{O}$; we do not distinguish them when no confusion can arise. \item For $x=x_0+\sum_{i=1}^7x_i\mathbf{e_i}\in \mathbb{O}$, $x_i\in\mathbb{R}$, we define the octonion conjugation by $\overline{x}=x_0-\sum_{i=1}^7x_i\mathbf{e_i}$ and the real part by $Re\, x=x_0$; $\mathbb{O}$ is an $8$-dimensional Euclidean space under the inner product $(p,q)=Re(p\bar{q})$, $p,q\in \mathbb{O}$. \item We shall use the associator, defined by $$[a, b, c]=(ab)c-a(bc).$$ It is well known that the associator of octonions is alternating, i.e., $$[a, b, c]=-[a, c, b]=-[b,a,c]=-[\overline{a}, b, c].$$ \item The full multiplication table is conveniently encoded in the Fano plane (see Figure 1 and \cite{Baez}). In the Fano plane, the vertices are labeled by $\mathbf{e_1},\ldots, \mathbf{e_7}$. Each of the 7 oriented lines gives a quaternionic triple. The product of any two imaginary units is given by the third unit on the unique line connecting them, with the sign determined by the relative orientation. \begin{figure} \caption{Fano plane} \label{fig:1} \end{figure} \end{itemize} \section{An explicit realization of the Clifford algebra $Cl_8$ and the spinor space $\mathcal{R}_8$} Let $V=\mathbb{R}^8$ be the real $8$-dimensional Euclidean space. We define an embedding $A:V\to End_{\mathbb{R}}(\mathbb{O}\oplus \mathbb{O})$ by $$A(q)= \left(\begin{matrix} 0 & L_q\\ -L_{\bar{q}} & 0 \end{matrix} \right), $$ where $L_q\in End_{\mathbb{R}}(\mathbb{O})$ is the left multiplication operator defined by $$L_q:\mathbb{O}\to \mathbb{O}, \quad L_q(p)=qp,$$ for any $p\in \mathbb{O}$. \begin{lemma} $\forall p,q \in \mathbb{O}$, we have $$L_pL_{\bar{q}}+L_qL_{\bar{p}}=2(p,q)id.$$ \end{lemma} \begin{proof} This simple fact follows from the observation that, for any octonion $z$, \begin{align*} &\left(L_pL_{\bar{q}}+L_qL_{\bar{p}}\right)(z)=p(\bar{q}z)+q(\bar{p}z)\\ =&-[p, \bar{q}, z]+(p\bar{q})z-[q, \bar{p}, z]+(q\bar{p})z=(p\bar{q}+q\bar{p})z\\ =&2Re(p\overline{q})z=2(p,q)z. 
\end{align*} \end{proof} \begin{theorem}[\cite{Harvey90}]\label{realization} The map $A:V\to End_{\mathbb{R}}(\mathbb{O}\oplus \mathbb{O})$ gives a realization of the Clifford algebra $Cl_8$; thus $\mathbb{O}\oplus \mathbb{O}$ can be considered as the real spinor space $\mathcal{R}_8$. \end{theorem} \begin{proof} Indeed, we have \begin{align*} A(p)A(q)+A(q)A(p)=&\left(\begin{matrix} 0 & L_p\\ -L_{\bar{p}} & 0 \end{matrix} \right)\left(\begin{matrix} 0 & L_q\\ -L_{\bar{q}} & 0 \end{matrix} \right)+\left(\begin{matrix} 0 & L_q\\ -L_{\bar{q}} & 0 \end{matrix} \right)\left(\begin{matrix} 0 & L_p\\ -L_{\bar{p}} & 0 \end{matrix} \right)\\ =& \left(\begin{matrix} -(L_pL_{\bar{q}}+L_qL_{\bar{p}}) & 0\\ 0 & -(L_{\bar{p}}L_{q}+L_{\bar{q}}L_{p}) \end{matrix} \right)\\ =&-2(p,q)id. \end{align*} Note that every Clifford algebra over an even-dimensional Euclidean space is universal (\cite[P.25 Corollary(3.6)]{Gilbert}), and $$\dim_{\mathbb{R}}End_{\mathbb{R}}(\mathbb{O}\oplus \mathbb{O})=2^8=\dim_{\mathbb{R}}Cl_8.$$ Thus the map $A$ gives a realization of $Cl_8$, and $\mathbb{O}\oplus \mathbb{O}$ is just the real spinor space $\mathcal{R}_8$ in the sense of Definition \ref{def:CM}. \end{proof} This realization is extremely important for us, as it closely connects Clifford analysis and octonionic analytic function theory. \section{The octonionic analytic functions} Using the realization of the previous section, we give the explicit expression of the Dirac operator in this case, which is connected with the octonionic Cauchy operator. We will see their relationship shortly. \subsection{The Dirac operator and the octonionic Cauchy operator on $\mathbb{R}^8$} \begin{definition}[\cite{Gilbert}] Under the realization of the Clifford algebra $Cl_8$ in Theorem \ref{realization}, the Dirac operator $D_8$ associated with the Euclidean space $V$ is the first-order differential operator on $\mathcal{C}^1(V, \mathbb{O}\oplus \mathbb{O})$ defined by $$D_8\left(\begin{matrix} f_1\\ f_2 \end{matrix} \right)=\sum_{j=0}^7A(e_j)\frac{\partial\ \ }{\partial x_j}\left(\begin{matrix} f_1\\ f_2 \end{matrix} \right)=\sum_{j=0}^7\left(\begin{matrix} 0 & L_{e_j}\\ -L_{\overline{e_j}} & 0 \end{matrix} \right)\frac{\partial\ \ }{\partial x_j}\left(\begin{matrix} f_1\\ f_2 \end{matrix} \right)=\left(\begin{matrix} 0 & D\\ -\overline{D} & 0 \end{matrix} \right)\left(\begin{matrix} f_1\\ f_2 \end{matrix} \right),$$ where $f_1,f_2\in \mathcal{C}^1(V, \mathbb{O})$ and $D=\sum_{j=0}^7e_j\frac{\partial\ \ }{\partial x_j}$ is the octonionic Cauchy operator on $\mathcal{C}^1(V, \mathbb{O})$. \end{definition} Classical Clifford analysis \cite{Gilbert,F.S,BDS82} investigates the spinor-valued functions annihilated by the Dirac operator; it is fully developed and has become an important branch of mathematics. It is clear that functions annihilated by $D_8$ are connected with octonionic analytic functions, and we give both definitions formally here. \begin{definition}\label{Canalytic} Suppose $\Omega\subset V$ is a domain and $f_1,f_2\in \mathcal{C}^1(\Omega, \mathbb{O})$. A function $\left(\begin{matrix} f_1\\ f_2 \end{matrix} \right)\in \mathcal{C}^1(\Omega, \mathbb{O} \oplus \mathbb{O})$ is said to be Clifford analytic on $\Omega$ when $$D_8\left(\begin{matrix} f_1\\ f_2 \end{matrix} \right)=\left(\begin{matrix} Df_2\\ -\overline{D}f_1 \end{matrix} \right)=0$$ on $\Omega$. The set of $\mathcal{R}_8=\mathbb{O}\oplus \mathbb{O}$-valued Clifford analytic functions in $\Omega$ is denoted by $A(\Omega)$. 
\end{definition} \begin{definition} Let $\Omega\subset \mathbb{R}^8$ be a domain. A function $f\in \mathcal{C}^1(\Omega, \mathbb{O})$ is said to be (left) octonionic analytic on $\Omega$ when $$Df=\sum_{j=0}^7\bfe_j\frac{\partial f\ }{\partial x_j}=0$$ on $\Omega$. The set of octonionic analytic functions in $\Omega$ is denoted by $A(\Omega, \mathbb{O})$. \end{definition} We give some remarks here to illustrate the connections between Clifford analyticity and octonionic analyticity. \begin{remark}\label{rem:CandO} \begin{enumerate} \item For any octonionic analytic function $f$ on $\Omega$, we know from Definition \ref{Canalytic} that \begin{equation}\label{eq:OandC} \left(\begin{matrix} 0\\ f \end{matrix} \right)\in A(\Omega); \end{equation} this is the fundamental reason why we can study octonionic analytic functions via Clifford analytic functions. \item The reader who is familiar with Dirac operators and the index theorem in differential geometry will be aware that the relationship between $D_8$ and $D$ is connected with the $\mathbb{Z}_2$-grading of spinor spaces and the relation between Dirac $\mathcal{D}$-operators and Dirac operators. See \cite[P.207]{Gilbert} or \cite{BGV92} for example. \end{enumerate} \end{remark} \subsection{A new way to octonionic analysis from Clifford analysis} Here, with the help of Equation \eqref{eq:OandC}, we can view an octonionic analytic function as a Clifford analytic function. Thus some results of octonionic analytic function theory can be reformulated. We give some examples here to show the reader how to carry this out. First, we recall the Cauchy integral formula for spinor-valued functions, and we show in detail how it transfers to the octonionic Cauchy integral formula. \begin{theorem}[\cite{Gilbert} ]\label{thm:CauchyforC} If $M$ is a compact, $8$-dimensional, oriented $\mathcal{C}^{\infty}$-manifold in $\Omega$, then for each Clifford analytic function $f$ in $\mathcal{C}^{\infty}(\Omega, \mathcal{R}_8)$, we have \begin{equation} f(z)=\frac{1}{\omega_8}\int_{\partial M}\frac{(x-z)^{\star}}{\left|x-z\right|^8} \mathrm{d}\sigma(x)f(x) \end{equation} for each $z$ in the interior of $M$, where $\mathrm{d}\sigma(x)=A(\eta(x))\mathrm{d}S(x)$, $\eta(x)$ is the outer unit normal to $\partial M$ at $x$, and $\mathrm{d}S(x)$ is the scalar element of surface area on $\partial M$. \end{theorem} From this and Equation \eqref{eq:OandC}, we immediately obtain the Cauchy integral formula for octonionic analytic functions. \begin{theorem} If $M$ is a compact, $8$-dimensional, oriented $\mathcal{C}^{\infty}$-manifold in $\Omega$, then for each octonionic analytic function $f$ in $\mathcal{C}^{\infty}(\Omega, \mathbb{O})$, we have \begin{equation}\label{eq:cauchy} f(z)=\frac{1}{\omega_8}\int_{\partial M}\frac{\overline{x-z}}{\left|x-z\right|^8} \left(\mathrm{d}\sigma(x)f(x)\right) \end{equation} for each $z$ in the interior of $M$, where $\mathrm{d}\sigma(x)=\eta(x)\mathrm{d}S(x)$, $\eta(x)\in \mathbb{O}$ is the octonion determined by the outer unit normal to $\partial M$ at $x$, and $\mathrm{d}S(x)$ is the scalar element of surface area on $\partial M$. 
\end{theorem} \begin{proof} For any octonionic analytic function $f$ on $\Omega$, we have $\left(\begin{matrix} 0\\ f \end{matrix} \right)\in A(\Omega)$; thus, for each $z$ in the interior of $M$, Theorem \ref{thm:CauchyforC} gives \begin{align*} \left(\begin{matrix} 0\\ f \end{matrix} \right) &=\frac{1}{\omega_8}\int_{\partial M}\frac{(x-z)^{\star}}{\left|x-z\right|^8} \mathrm{d}\sigma(x)\left(\begin{matrix} 0\\ f \end{matrix} \right)\\ &=\frac{1}{\omega_8}\int_{\partial M}\frac{1}{\left|x-z\right|^8} \left(\begin{matrix} 0 & -L_{x-z}\\ L_{\overline{x-z}} & 0 \end{matrix} \right)\left(\begin{matrix} 0 & L_{\eta(x)}\\ -L_{\overline{\eta(x)}} & 0 \end{matrix} \right)\left(\begin{matrix} 0\\ f \end{matrix} \right)\mathrm{d}S(x)\\ &=\frac{1}{\omega_8}\int_{\partial M}\frac{1}{\left|x-z\right|^8} \left(\begin{matrix} 0\\ L_{\overline{x-z}}L_{{\eta(x)}} f \end{matrix} \right)\mathrm{d}S(x). \end{align*} This is nothing but equation \eqref{eq:cauchy}, so we are done. \end{proof} \begin{remark} \begin{enumerate} \item This example shows how to obtain properties of octonionic analytic functions from spinor-valued Clifford analytic functions. In fact, if a property involves only octonionic multiplication and real-linear operations, then it can be obtained from $\mathcal{R}_8$-valued Clifford analytic functions. The deep reason why this works is that the octonionic left multiplication operators can be given by the Clifford action. \item This result was proved in \cite{LP}, where the authors also obtain the ``integration by parts" formula. We point out that it is nothing but equation (3.22) in \cite[P.102]{Gilbert}. \end{enumerate} \end{remark} The critical index of subharmonicity of octonionic analytic functions was determined in \cite{LW} and \cite{AD}, but we can also obtain this result from Clifford analysis; we refer the reader to \cite[Chapter 4]{Gilbert} for an excellent introduction. \begin{theorem} Let $\Omega$ be any open set in $V$. Then \begin{enumerate} \item whenever $p\ge \dfrac{6}{7}$ and $f$ is an octonionic analytic function on $\Omega$, $x\to\left|f(x)\right|^p$ is subharmonic on $\Omega$. \item if $0<p<\dfrac{6}{7}$, there is an octonionic analytic function $f$ such that $x\to\left|f(x)\right|^p$ is not subharmonic anywhere on its domain of definition. \end{enumerate} \end{theorem} \begin{proof} The first statement follows from the corresponding result for spinor-valued Clifford analytic functions, \begin{quote} \begin{theorem}\cite[P.108 Corollary 3.39]{Gilbert} Let $\Omega$ be any open set in $V$. Whenever $p\ge \dfrac{6}{7}$ and $f$ is an $\mathcal{R}_8$-valued Clifford analytic function on $\Omega$, $x\to\left|f(x)\right|^p$ is subharmonic on $\Omega$. \end{theorem} \end{quote} together with the fact that the norm of $\left(\begin{matrix} 0\\ f \end{matrix} \right)$ equals $\left|f\right|$. As for the second statement, just take $f(x)=\dfrac{\overline{x}}{\left|x\right|^8}$, $x\neq 0$. \end{proof} There are further Clifford analytic results that can be transferred to the octonionic setting, for example the mean-value theorem, Cauchy's theorem, Morera's theorem, the maximum modulus principle, the Weierstrass theorem, etc. We do not repeat them all here; we refer to \cite[Chapter 2]{Gilbert} and \cite{BDS82} for an introduction to the Clifford analytic results and encourage the reader to derive the octonionic versions directly by the method introduced above. 
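As a quick sanity check of the method, one can verify directly (a routine computation added here for illustration) that the function $f(x)=\overline{x}/\left|x\right|^{8}$ used in the proof above is octonionic analytic on $\mathbb{R}^{8}\setminus\{0\}$; up to translation it is precisely the kernel appearing in \eqref{eq:cauchy}. Since $\frac{\partial \overline{x}}{\partial x_0}=1$ and $\frac{\partial \overline{x}}{\partial x_j}=-\bfe_j$ for $j=1,\dots,7$, we compute
\begin{align*}
Df&=\sum_{j=0}^{7}\bfe_j\frac{\partial\ }{\partial x_j}\left(\frac{\overline{x}}{\left|x\right|^{8}}\right) =\frac{1}{\left|x\right|^{8}}\sum_{j=0}^{7}\bfe_j\frac{\partial \overline{x}}{\partial x_j}-\frac{8}{\left|x\right|^{10}}\left(\sum_{j=0}^{7}x_j\bfe_j\right)\overline{x}\\ &=\frac{1+7}{\left|x\right|^{8}}-\frac{8\,x\overline{x}}{\left|x\right|^{10}}=\frac{8}{\left|x\right|^{8}}-\frac{8}{\left|x\right|^{8}}=0,
\end{align*}
using $\bfe_j\left(-\bfe_j\right)=1$ for $j\ge 1$ and $x\overline{x}=\left|x\right|^{2}$.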
\subsection{Octonionic Hardy space} Reviewing the material used for the Clifford Hardy space together with \eqref{eq:OandC}, the octonionic Hardy space theory in the upper half-space can be built with no extra effort. \begin{definition} For any $p>0$, the Hardy space $H^p(\mathbb{R}^8_+, \mathbb{O})$ of octonionic analytic functions is defined to be the space of all octonionic analytic functions $F$ in $\mathbb{R}^8_+$ satisfying $$\HN{F}=\sup_{t>0} \left(\int_{\mathbb{R}^7}{\abs{F(t, \underline{x})}^p}\mathrm{d}x\right)^{\frac{1}{p}}<\infty.$$ \end{definition} Notice that for any $f_1,f_2\in L^p(\mathbb{R}^7, \mathbb{O})$, $p\ge 1$, \eqref{eq:Cauchy} gives us \begin{align*} C\left(\begin{matrix} f_1\\ f_2 \end{matrix}\right) & = \frac{1}{\omega_8} \int_{\mathbb{R}^{7}} \frac{u-z}{|u-z|^8} \bfe_0 \left(\begin{matrix} f_1\\ f_2 \end{matrix}\right) \mathrm{d} u\\ &=\frac{1}{\omega_8}\int_{\mathbb{R}^{7}}\frac{1}{\left|u-z\right|^8} \left(\begin{matrix} 0 & L_{u-z}\\ -L_{\overline{u-z}} & 0 \end{matrix} \right)\left(\begin{matrix} 0 & L_1\\ -L_{1} & 0 \end{matrix} \right)\left(\begin{matrix} f_1\\ f_2 \end{matrix} \right)\mathrm{d}u\\ &=\frac{1}{\omega_8}\int_{\mathbb{R}^{7}}\left(\begin{matrix} -\frac{u-z}{|u-z|^8}f_1\\ -\frac{\overline{u-z}}{|u-z|^8}f_2 \end{matrix} \right)\mathrm{d}u. \end{align*} Thus for $f\in L^p(\mathbb{R}^7, \mathbb{O})$, $p\ge 1$, if we define the octonionic Cauchy integral of $f$ by $$ C_{\mathbb{O}}(f)=\frac{1}{\omega_8}\int_{\mathbb{R}^{7}}\frac{\overline{z-u}}{|u-z|^8}f(u)\mathrm{d}u$$ and set $\mathcal{H}_{\mathbb{O}}=-\sum_{j=1}^{7} \bfe_j R_j$, where $R_j$ is the $j$-th Riesz transform as before, then we have the following analogues of Theorem \ref{thm:bdv} and Theorem \ref{thm:Hp}. \begin{theorem}\label{thm:obdv} Suppose $F \in H^p\left(\mathbb{R}_{+}^8, \mathbb{O}\right), p>\frac{6}{7}$. Then there is a function $f \in$ $L^p\left(\mathbb{R}^{7}, \mathbb{O}\right)$ such that \begin{enumerate} \item $\lim \limits_{z \rightarrow x n.t.}$ $F(z)=f(x)$ exists for almost all $x \in \mathbb{R}^{7}$, \item $\lim \limits_{t \rightarrow 0}\displaystyle \int_{\mathbb{R}^{7}}|F(x, t)-f(x)|^p \mathrm{d} x=0$. \end{enumerate} \end{theorem} \begin{proof} Just take $\left(\begin{matrix} 0\\ F \end{matrix}\right)\in H^p\left(\mathbb{R}_{+}^8, \mathbb{O}\oplus \mathbb{O}\right)= H^p\left(\mathbb{R}_{+}^8, \mathcal{R}_8\right)$ in Theorem \ref{thm:bdv}. \end{proof} \begin{theorem}\label{thm:oHp} Suppose that either (i) $1<p<\infty$ and $f \in L^p\left(\mathbb{R}^{7}, \mathbb{O}\right)$, or (ii) $p=1$ and $f, \mathcal{H}_{\mathbb{O}} f \in L^1\left(\mathbb{R}^{7}, \mathbb{O}\right)$. Then $C_{\mathbb{O}} f \in H^p\left(\mathbb{R}_{+}^8, \mathbb{O}\right)$, and $$ \lim _{z \rightarrow x, n.t} C_{\mathbb{O}} f(z)=\frac{1}{2}\left(I+\mathcal{H}_{\mathbb{O}} \right) f(x) $$ for almost all $x \in \mathbb{R}^{7}$. Conversely, if $1 \leq p<\infty$ and suppose $F \in H^p\left(\mathbb{R}_{+}^8, \mathbb{O}\right)$. Then $F=C_{\mathbb{O}} f$, where $f$ is the almost-everywhere non-tangential limit of $F$ given by Theorem \ref{thm:obdv}. \end{theorem} \begin{proof} Just take $\left(\begin{matrix} 0\\ f \end{matrix}\right)\in L^p(\mathbb{R}^7, \mathbb{O}\oplus \mathbb{O})=L^p(\mathbb{R}^7,\mathcal{R}_8)$ and $\left(\begin{matrix} 0\\ F \end{matrix}\right)\in H^p\left(\mathbb{R}_{+}^8, \mathbb{O}\oplus \mathbb{O}\right)=H^p\left(\mathbb{R}_{+}^8, \mathcal{R}_8\right)$ in Theorem \ref{thm:Hp}. 
\end{proof} Theorem \ref{thm:oHp} says that, for $1<p<\infty$, $H^p\left(\mathbb{R}_{+}^8, \mathbb{O}\right)$ is precisely the image of $L^p\left(\mathbb{R}^{7}, \mathbb{O}\right)$ under the octonionic Cauchy integral operator $C_{\mathbb{O}}$, while $H^1\left(\mathbb{R}_{+}^8, \mathbb{O}\right)$ is the image under $C_{\mathbb{O}}$ of the set of all $f \in$ $L^1\left(\mathbb{R}^{7}, {\mathbb{O}}\right)$ for which $\mathcal{H}_{\mathbb{O}} f \in L^1\left(\mathbb{R}^{7}, {\mathbb{O}}\right)$. Moreover, for $1 \leq p<\infty$, $f \in L^p\left(\mathbb{R}^{7}, \mathbb{O}\right)$ arises as the non-tangential boundary value of an $H^p\left(\mathbb{R}_{+}^8, \mathbb{O}\right)$ function if and only if $f=\mathcal{H}_{\mathbb{O}} f \in L^p\left(\mathbb{R}^{7}, \mathbb{O}\right)$. \section{Proof of main theorems} Now we turn to the proof of Theorem \ref{thm:main1}: \begin{proof}[Proof of Theorem \ref{thm:main1}] First we prove that Theorem \ref{thm:main1} holds for the spinor space $\mathcal{R}_8=\mathbb{O}\oplus \mathbb{O}$. We take $\eta=\bfe_0, \mathfrak{H}_0=\mathbb{O}\left(\begin{matrix} 1\\ 1 \end{matrix}\right)=\left\{\left(\begin{matrix} p\\ p \end{matrix}\right): p\in \mathbb{O}\right\}, $ then we have: $$\eta \mathfrak{H}_0=\left\{\left(\begin{matrix} 0&1\\ -1&0 \end{matrix}\right)\left(\begin{matrix} p\\ p \end{matrix}\right): p\in \mathbb{O}\right\}=\left\{\left(\begin{matrix} p\\ -p \end{matrix}\right): p\in \mathbb{O}\right\}=\mathbb{O}\left(\begin{matrix} 1\\ -1 \end{matrix}\right).$$ So we have \begin{equation}\label{eq:subspace} \mathbb{O}\oplus\mathbb{O}=\mathfrak{H}_0\oplus \eta \mathfrak{H}_0. \end{equation} Before checking that the Cauchy integral operator $C$ is an isomorphism from $L^p(\mathbb{R}^7, \mathfrak{H}_0 )$ onto $H^p(\mathbb{R}^8_+,\mathbb{O}\oplus\mathbb{O})$ for all $p>1$, we need to explain the meaning of isomorphism here. For any $f\in L^p(\mathbb{R}^7, \mathfrak{H}_0)$, $p>1$, we know $C(f)\in H^p(\mathbb{R}^8_+,\mathbb{O}\oplus\mathbb{O})$ by Theorem \ref{thm:Hp}, and $C(f)$ has the non-tangential limit $\dfrac{1}{2}(f(x)+\bfe_0\mathcal{H}f(x))\in L^p(\mathbb{R}^7, \mathbb{O}\oplus\mathbb{O})$ for almost every $x\in\mathbb{R}^7$, so we must check that the projection of $f+\bfe_0\mathcal{H}f$ onto $\mathfrak{H}_0$ is just $f\in L^p(\mathbb{R}^7, \mathfrak{H}_0)$. Conversely, any $F\in H^p(\mathbb{R}^8_+,\mathbb{O}\oplus\mathbb{O})$, $p>1$, has a non-tangential limit $f\in L^p(\mathbb{R}^7, \mathbb{O}\oplus\mathbb{O})$, which satisfies $f=\bfe_0\mathcal{H}f$. By \eqref{eq:subspace}, it has a decomposition \begin{equation*} f=g\left(\begin{matrix} 1\\ 1 \end{matrix}\right)+h\left(\begin{matrix} 1\\ -1 \end{matrix}\right) \end{equation*} with $g,h\in L^p(\mathbb{R}^7,\mathbb{O})$. So we also need to check that \begin{equation}\label{eq:task2} f=g\left(\begin{matrix} 1\\ 1 \end{matrix}\right)+\bfe_0\mathcal{H}\left[g\left(\begin{matrix} 1\\ 1 \end{matrix}\right)\right]. 
\end{equation} But all of this follows from the action of $Cl_8$ on $\mathbb{O}\oplus \mathbb{O}$; more precisely: For $p>1$ and $f=g\left(\begin{matrix} 1\\ 1 \end{matrix}\right)\in L^p(\mathbb{R}^7, \mathfrak{H}_0 )$, $g\in L^p(\mathbb{R}^7, \mathbb{O})$, we have \begin{align*} f+\bfe_0\mathcal{H}f & =\left(\begin{matrix} g\\ g \end{matrix}\right)+\sum_{j=1}^{7}\left(\begin{matrix} 0&1\\ -1&0 \end{matrix}\right)\left(\begin{matrix} 0&L_{\bfe_j}\\ L_{\bfe_j}&0 \end{matrix}\right)R_j\left(\begin{matrix} g\\ g \end{matrix}\right) \\ &=\left(\begin{matrix} g\\ g \end{matrix}\right)+\sum_{j=1}^{7}\left(\begin{matrix} L_{\bfe_j}&0\\ 0&-L_{\bfe_j} \end{matrix}\right)\left(\begin{matrix} R_jg\\ R_jg \end{matrix}\right)\\ &=\left(\begin{matrix} g\\ g \end{matrix}\right)+\sum_{j=1}^{7}L_{\bfe_j}R_jg\left(\begin{matrix} 1\\ -1 \end{matrix}\right)=g\left(\begin{matrix} 1\\ 1 \end{matrix}\right)-\mathcal{H}_{\mathbb{O}}g\left(\begin{matrix} 1\\ -1 \end{matrix}\right) \end{align*} So the $\mathfrak{H}_0$ part of $f+\bfe_0\mathcal{H}f$ is just $f$. Following the computation above, equation \eqref{eq:task2} is nothing but $h=-\mathcal{H}_{\mathbb{O}}g$. From $f=\bfe_0\mathcal{H}f$, we get \begin{align}\label{eq:f=hf} g\left(\begin{matrix} 1\\ 1 \end{matrix}\right)+h\left(\begin{matrix} 1\\ -1 \end{matrix}\right)=&\sum_{j=1}^{7}\left(\begin{matrix} 0&1\\ -1&0 \end{matrix}\right)\left(\begin{matrix} 0&L_{\bfe_j}\\ L_{\bfe_j}&0 \end{matrix}\right)R_j\left[g\left(\begin{matrix} 1\\ 1 \end{matrix}\right)+h\left(\begin{matrix} 1\\ -1 \end{matrix}\right)\right]\nonumber\\ =&\sum_{j=1}^{7}L_{\bfe_j}R_jg\left(\begin{matrix} 1\\ -1 \end{matrix}\right)+\sum_{j=1}^{7}L_{\bfe_j}R_jh\left(\begin{matrix} 1\\ 1 \end{matrix}\right)=-\mathcal{H}_{\mathbb{O}}h\left(\begin{matrix} 1\\ 1 \end{matrix}\right)-\mathcal{H}_{\mathbb{O}}g\left(\begin{matrix} 1\\ -1 \end{matrix}\right). \end{align} So we complete the proof of (2) in Theorem \ref{thm:main1}. For (3), denote by $Tan$ the projection of $\mathbb{O}\oplus \mathbb{O}$ onto $\mathfrak{H}_0$, and let $F^+\in L^p(\mathbb{R}^7, \mathbb{O}\oplus \mathbb{O})$, $p>1$, be the almost-everywhere non-tangential limit of a Clifford Hardy function $F\in H^p(\mathbb{R}^8_+, \mathbb{O}\oplus \mathbb{O})$; the boundary operator from $H^p(\mathbb{R}^8_+, \mathbb{O}\oplus \mathbb{O})$ onto $L^p(\mathbb{R}^7, \mathfrak{H}_0 )$ is then $F\to Tan(F^+)$. By Theorem \ref{thm:bdv}, we have $$\| F^+\|_{L^p(\mathbb{R}^7,\mathbb{O}\oplus\mathbb{O})}=\lim_{t\to 0}\left(\int_{\mathbb{R}^7}\abs{F(x,t)}^p\mathrm{d}x\right)^{\frac{1}{p}}\le \sup_{t> 0}\left(\int_{\mathbb{R}^7}\abs{F(x,t)}^p\mathrm{d}x\right)^{\frac{1}{p}}=\|F\|_{H^p(\mathbb{R}^8_+, \mathbb{O}\oplus \mathbb{O})}.$$ Thus $$\|Tan ( F^+)\|_{L^p(\mathbb{R}^7,\mathfrak{H}_0)}\le \|F^+\|_{L^p(\mathbb{R}^7,\mathbb{O}\oplus\mathbb{O})}\le \|F\|_{H^p(\mathbb{R}^8_+, \mathbb{O}\oplus \mathbb{O})},$$ which implies (3) in Theorem \ref{thm:main1}, so the proof is complete in the $H^p(\mathbb{R}^8_+,\mathcal{R}_8)$ case. In the general case, since $Cl_8$ is a simple algebra, any Clifford module $\mathfrak{H}$ is isomorphic to a direct sum of copies of $\mathcal{R}_8$. So Theorem \ref{thm:main1} still holds for any Clifford module $\mathfrak{H}$, and the conjecture on $\mathbb{R}^8$ is settled completely. \end{proof} \begin{remark} In \eqref{eq:f=hf}, we obtain not only $h=-\mathcal{H}_{\mathbb{O}}g$ but also $g=-\mathcal{H}_{\mathbb{O}}h$. 
But this is not surprising, because in general we have $$\mathcal{H}_{\mathbb{O}}^2g=g,\quad \forall g\in L^p(\mathbb{R}^7,\mathbb{O}).$$ This fact is based on some basic properties of the Riesz transforms $R_j$, for example $$R_jR_kg=R_kR_jg, $$ and $$\sum_{j=1}^{7}R_j^2g=-g,\quad \forall g\in L^p(\mathbb{R}^7,\mathbb{O}).$$ All these properties can be obtained easily via the Fourier transform; we refer the reader to \cite[P.324]{GTM249} for a detailed introduction. \end{remark} At the end of this section, we give a proof of Theorem \ref{thm:main2}. \begin{proof}[Proof of Theorem \ref{thm:main2}] First we prove that there is a real-valued Schwartz function $f\in \mathcal{S}(\mathbb{R}^7)$ such that \begin{equation*} R_jf(0)=\frac{2}{\omega_8} \int_{\mathbb{R}^{7}} \frac{-u_j}{|u|^8} f(u) \mathrm{d} u=\delta_{j1},\quad j=1,2,\cdots,7. \end{equation*} Actually we can take $f(x)=c x_1e^{-\abs{x}^2}$, where the constant $c$ is selected so that $R_1f(0)=1$. Now, suppose we have two proper subspaces $\mathfrak{H}_0, \mathfrak{H}_1$ of $\mathbb{O}$ satisfying all three statements in Theorem \ref{thm:main2}, and $$\mathfrak{H}_0=span_{\mathbb{R}}\{\xi_1,\xi_2,\cdots,\xi_m\},$$ where $\{\xi_1,\xi_2,\cdots,\xi_m\}$ forms an orthonormal basis of $\mathfrak{H}_0$. Take $g=f\xi_1\in L^p(\mathbb{R}^7,\mathfrak{H}_0)$; we know that $C_{\mathbb{O}}(g)$ has the non-tangential limit $$C_{\mathbb{O}}(g)^+=\dfrac{1}{2}\left(g+\mathcal{H}_{\mathbb{O}}g\right),$$ by Theorem \ref{thm:oHp}. According to (2) in Theorem \ref{thm:main2}, we must have $$Tan(C_{\mathbb{O}}(g)^+)=g=f\xi_1,$$ so $\mathcal{H}_{\mathbb{O}}g\in L^p(\mathbb{R}^7,\mathfrak{H}_1)$. In particular, $\mathcal{H}_{\mathbb{O}}g(0)=-\bfe_1\xi_1$, and hence $\bfe_1\xi_1\in \mathfrak{H}_1$. For the same reason, we have $$\bfe_j\xi_1\in \mathfrak{H}_1, \quad \forall j=1,2,\cdots,7.$$ But we have $$(\bfe_i\xi_1,\bfe_j\xi_1)=((\bfe_i\xi_1)\overline{\xi_1}, \bfe_j)=(\bfe_i,\bfe_j)=\delta_{ij}, \quad \forall i,j=1,2,\cdots ,7.$$ That means $dim_{\mathbb{R}}\mathfrak{H}_1\ge 7$, so we have $\mathfrak{H}_0=span_{\mathbb{R}}\{\mathbf{p}\}$, while $$\mathfrak{H}_1=span_{\mathbb{R}}\{\bfe_1\mathbf{p},\bfe_2\mathbf{p},\ldots,\bfe_7\mathbf{p}\}.$$ Now any $F \in H^p(\mathbb{R}^8_+, \mathbb{O})$, $p>1$, has a non-tangential limit $F^+\in L^p(\mathbb{R}^7, \mathbb{O})$, which satisfies $F^+=\mathcal{H}_{\mathbb{O}}F^+$. Suppose $$F^+=\sum_{j=0}^7f_j\bfe_j\mathbf{p},$$ where $f_j\in L^p(\mathbb{R}^7,\mathbb{R})$, $j=0,1,\ldots,7$. So we have $$Tan(F^+)=h:=f_0\mathbf{p}\in L^p(\mathbb{R}^7,\mathfrak{H}_0).$$ Now if (2) in Theorem \ref{thm:main2} holds, we must have $$F^+=h+\mathcal{H}_{\mathbb{O}}h=h-\sum_{j=1}^{7}\bfe_jR_jh=f_0\mathbf{p}-\sum_{j=1}^{7}R_jf_0\bfe_j\mathbf{p}.$$ So we have \begin{equation}\label{eq:SW} f_j=-R_j(f_0), \quad j=1,2,\ldots,7. \end{equation} We now look for an $F \in H^p(\mathbb{R}^8_+, \mathbb{O})$, $p>1$, such that \eqref{eq:SW} does not hold; consequently, (2) in Theorem \ref{thm:main2} cannot hold. Take $g\in L^p(\mathbb{R}^7,\mathbb{R})$ and set $$F^+:=g\bfe_1\mathbf{p}+\mathcal{H}_{\mathbb{O}}\left(g\bfe_1\mathbf{p}\right);$$ then $F^+=\mathcal{H}_{\mathbb{O}}F^+$, and Theorem \ref{thm:oHp} tells us that it is the non-tangential limit of an octonionic Hardy function $F\in H^p(\mathbb{R}^8_+,\mathbb{O})$. We claim that this $F^+=\sum\limits_{j=0}^7f_j\bfe_j\mathbf{p}$ is what we want, that is, \eqref{eq:SW} does not hold for $F^+$. 
In fact, $$F^+=g\,\bfe_1\mathbf{p}-\sum_{j=1}^{7}R_jg\,\bfe_j (\bfe_1\mathbf{p}),$$ so we have \begin{equation}\label{eq:f+} F^+\overline{\mathbf{p}}=g\,(\bfe_1\mathbf{p})\overline{\mathbf{p}}-\sum_{j=1}^{7}R_jg\, (\bfe_j(\bfe_1\mathbf{p}))\overline{\mathbf{p}}=R_1g+g\bfe_1+\sum_{j=2}^{7}f_j \bfe_j. \end{equation} Indeed, \eqref{eq:f+} follows from \begin{align*} ((\bfe_j(\bfe_1\mathbf{p}))\overline{\mathbf{p}},\bfe_0) & =-(\bfe_1\mathbf{p},\bfe_j\mathbf{p})=-\delta_{1j},\quad j=1,\ldots,7, \\ ((\bfe_j(\bfe_1\mathbf{p}))\overline{\mathbf{p}},\bfe_1) & =(\bfe_j(\bfe_1\mathbf{p}),\bfe_1\mathbf{p})=0,\quad j=1,\ldots,7. \end{align*} So we have $f_0=R_1g$ and $f_1=g$. Thus, if \eqref{eq:SW} held for every such $F$, then we would have $$g=-R_1^2g,\quad \forall g\in L^p(\mathbb{R}^7,\mathbb{R}),$$ which is impossible. This completes the proof. \end{proof} \begin{remark} We give some remarks on equation \eqref{eq:SW}. Suppose that equation \eqref{eq:SW} holds and let $$u_j(t, x)=P_t*f_j, \quad j=0,1,\ldots, 7. $$ Following \cite[P.332]{GTM249}, we know that $\mathbf{F}=(u_0, u_1, \ldots, u_7)$ satisfies the following system of generalized Cauchy-Riemann equations: \begin{align*} \sum_{j=0}^{7}\frac{\partial u_j}{\partial x_j}(t,x) =0 \\ \frac{\partial u_j}{\partial x_k}(t,x)=\frac{\partial u_k}{\partial x_j}(t,x),\quad 0\le j\neq k\le 7, \end{align*} where $\dfrac{\partial \ }{\partial x_0}=\dfrac{\partial \ }{\partial t}$. This implies $\mathbf{F}\in H^p(\mathbb{R}^8_+)$, where $H^p(\mathbb{R}^8_+)$ denotes the $H^p$ space of conjugate harmonic functions in the Stein--Weiss sense \cite[P.236]{SW71}. Moreover, the Cauchy integral of $f_0$ is just $$F=C_{\mathbb{O}}(f_0)=\frac{1}{2}\sum_{j=0}^{7}u_j\overline{\bfe_j}. $$ A classical result \cite{LP2} on octonionic analytic functions says that such an $F$ is not only left octonionic analytic but also right octonionic analytic. However, not every left octonionic analytic function is right octonionic analytic. Thus, from this point of view, we can also see that equation \eqref{eq:SW} does not hold in general. \end{remark} \end{document}
\begin{document} \title{Functions preserving nonnegativity of matrices} \author{Gautam Bharali \\ Department of Mathematics \\ Indian Institute of Science \\ Bangalore 560012 India \and Olga Holtz \\ Department of Mathematics \\ University of California \\ Berkeley, CA 94720 USA} \date{November 12, 2005, revised April 5, 2007} \maketitle \begin{keywords} Nonnegative inverse eigenvalue problem, circulant matrices, (block) upper-triangular matrices, symmetric matrices, positive definite matrices, entire functions, divided differences. \end{keywords} \begin{AMS} 15A29, 15A48, 15A42 \end{AMS} \begin{abstract} The main goal of this work is to determine which entire functions preserve nonnegativity of matrices of a fixed order $n$ --- i.e., to characterize entire functions $f$ with the property that $f(A)$ is entrywise nonnegative for every entrywise nonnegative matrix $A$ of size $n\times n$. Towards this goal, we present a complete characterization of functions preserving nonnegativity of (block) upper-triangular matrices and those preserving nonnegativity of circulant matrices. We also derive necessary conditions and sufficient conditions for entire functions that preserve nonnegativity of symmetric matrices. We also show that some of these latter conditions characterize the even or odd functions that preserve nonnegativity of symmetric matrices. \end{abstract} \section{Motivation} The purpose of this paper is to investigate which entire functions preserve nonnegativity of matrices of a fixed order. More specifically, we consider several classes of structured matrices whose structure is preserved by entire functions and characterize those entire functions $f$ with the property that $f(A)$ is entrywise nonnegative for each entrywise nonnegative matrix $A$ of size $n\times n$. The characterizations that we obtain might be of independent interest in matrix theory and other areas of mathematics. One of our own motivations behind our investigation is its relevance to the inverse eigenvalue problem for nonnegative matrices. The long-standing inverse eigenvalue problem for nonnegative matrices is the problem of determining, given an $n$-tuple (multiset) $\Lambda$ of complex numbers, whether there exists an entrywise nonnegative matrix $A$ whose spectrum $\sigma(A)$ is $\Lambda$. The literature on the subject is vast and we make no attempt to review it. The interested reader is referred to books~\cite{Minc} and~\cite{BermanPlemmons}, expository papers~\cite{Chu}, \cite{Eglestonetal}, \cite{Laffey1}, \cite{Laffey2} and references therein, as well as to some recent papers~\cite{McDonaldNeumann}, \cite{ChuXu}, \cite{DuarteJohnson}, \cite{Smigoc2004}, \cite{Smigoc}, \cite{SotoMoro}, \cite{LaffeySmigoc}, \cite{Orsi}. The necessary conditions for a given $n$-tuple to be realizable as the spectrum of a nonnegative matrix known so far for arbitrary values of $n$ can be divided into three groups: conditions for nonnegativity of moments, Johnson-Loewy-London inequalities, and Newton's inequalities. Given an $n$-tuple $\Lambda$, its {\em moments\/} are defined as follows: $$ s_m(\Lambda)\mathop{:}{=} \sum_{\lambda \in \Lambda} \lambda^m ,\qquad m \in \N. $$ If $\Lambda=\sigma(A)$ for some nonnegative matrix $A$, then $s_m(\Lambda)$ is nothing but the trace ${\rm tr\,} (A^m)$, and therefore must be nonnegative. 
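As a simple illustration, the triple $\Lambda=(1,-1,-1)$ has $s_1(\Lambda)=-1<0$, so it cannot be realized as the spectrum of any nonnegative $3\times 3$ matrix.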
Another basic condition follows from the Perron-Frobenius theory~\cite{Perron}, \cite{Frobenius}: the largest absolute value $\max_{\lambda\in \Lambda} |\lambda|$ must be the Perron eigenvalue of a realizing matrix $A$ and therefore must itself be in $\Lambda$. Finally, the multiset $\Lambda$ must be closed under complex conjugation, being the spectrum of a real matrix $A$. Interestingly, the last two conditions are in fact not independent conditions, but follow from the nonnegativity of moments, as was shown by Friedland in~\cite{Friedland}. Thus, there turns out to be just one set of basic conditions $$ s_m(\Lambda)\geq 0\qquad {\rm for}\;\; m\in \N. $$ The next set of necessary conditions was discovered independently by Loewy and London in~\cite{LoewyLondon} and by Johnson in~\cite{Johnson}. These conditions relate moments among themselves as follows: $$ s_k^m(\Lambda) \leq n^{m-1} s_{km}(\Lambda), \quad k, m\in \N. $$ Newton's inequalities were conjectured in~\cite{HoltzSchneider} and proved for $M$-matrices in~\cite{Holtz1}. An $M$-{\em{matrix}\/} is a matrix of the form $rI-A$, where $A$ is a nonnegative matrix, $r\geq \varrho(A)$, and where $\varrho(A)$ is the {\em spectral radius\/} of $A$: $$ \varrho(A)\mathop{:}{=} \max_{\lambda\in \sigma(A)} |\lambda|. $$ If $M$ is an $M$-matrix of order $n$, then the normalized coefficients $c_j(M)$ of its characteristic polynomial defined by $$ \det (\lambda I - M) \mathop{{=}{:}} \sum_{j=0}^n (-1)^j {n\choose j} c_j(M) \lambda^{n-j} $$ must satisfy Newton's inequalities $$ c_j^2(M) \geq c_{j-1}(M) c_{j+1}(M), \quad j=1, \ldots, n-1. $$ Since the coefficients $c_j(M)$ are determined entirely by the spectrum of $M$, and the latter is obtained from the spectrum of a nonnegative matrix $A$ by an appropriate shift, Newton's inequalities form yet another set of conditions necessary for an $n$-tuple to be realizable as the spectrum of a nonnegative matrix. The above three sets of conditions --- i.e., nonnegativity of moments, Johnson-Loewy-London inequalities and Newton's inequalities are all independent of each other but are not sufficient for realizability of a given $n$-tuple (see~\cite{Holtz1}). Quite a few sufficient conditions are also known (see, e.g., \cite{Suleimanova}, \cite{Laffey2}, \cite{Friedland}, \cite{Chu}) as well as certain techniques for perturbing or combining realizable $n$-tuples into new realizable $n$- or $m$-tuples (where $m\geq n$) (see, e.g.,~\cite{Soules}, \cite{Soto}, \cite{Smigoc}). Also, necessary and sufficient conditions on an $n$-tuple to serve as the nonzero part of the spectrum of some nonnegative matrix are due to Boyle and Handelman~\cite{BoyleHandelman}. Finally, it follows from the Tarski-Seidenberg theorem~\cite{Tarski, Seidenberg} that all realizable $n$-tuples form a {\em semialgebraic set\/} (see also~\cite{Jacobson}), i.e., for any given $n$, there exist only finitely many polynomial inequalities that are necessary and sufficient for an $n$-tuple $\Lambda$ to be realizable as the spectrum of some nonnegative matrix $A$ (this observation was communicated to us by S.~Friedland): Indeed, each realizable $n$-tuple $\Lambda=(\lambda_1, \ldots, \lambda_n)$ is characterized by the condition $$ \exists \; A\geq 0 \;\; : \;\; \det (\lambda I -A)=\prod_{j=1}^n (\lambda-\lambda_j). 
$$ The last condition is equivalent to each elementary symmetric function $\sigma_j(\Lambda)$ being equal to the $j$th coefficient of the characteristic polynomial of $A$ multiplied by $(-1)^j$ --- i.e., to the sum of all principal minors of $A$ of order $j$, for $j=1, \ldots, n$. Since the set of all nonnegative matrices is a semialgebraic set in $n^2$ entries of the matrix and since each sum of all principal minors of $A$ of order $j$ is a polynomial in the entries of $A$, the lists of coefficients of characteristic polynomials of nonnegative matrices form a semialgebraic set, and hence the $n$-tuples whose elementary symmetric functions match one of those lists also form a semialgebraic set by the Tarski-Seidenberg theorem. However, despite so many insights into the subject, and despite the results obtained so far, the nonnegative inverse eigenvalue problem remains open. In fact, the problem remains open when specialized to several important classes of structured matrices --- for instance, the class of entrywise nonnegative symmetric matrices. Note that the three sets of conditions on an $n$-tuple $\Lambda$ that we discussed above --- i.e., nonnegativity of moments, the Johnson-Loewy-London inequalities and the Newton inequalities --- are necessary conditions for the realizability of $\Lambda$ as the spectrum of a {\em symmetric} $n\times n$ matrix with nonnegative entries (provided, of course, that all the entries of $\Lambda$ are now real). A significant fraction of this paper will be devoted to an idea that has relevance to the inverse eigenvalue problem for nonnegative symmetric matrices. It is an idea that was first expressed by Loewy and London in~\cite{LoewyLondon}. When adapted to symmetric matrices, it may be stated as follows: Suppose a primary matrix function $f$ is known to map nonnegative symmetric matrices of some fixed order $n$ into themselves. Thus $f(A)$ is nonnegative whenever $A$ is. Since $f(\sigma(A))=\sigma(f(A))$, both the spectrum $\sigma(A)$ and its image under the map $f$ must then satisfy the aforementioned conditions for realizability. This enlarges the class of necessary conditions for the symmetric nonnegative inverse eigenvalue problem. Describing this larger class would require knowing exactly what functions $f$ preserve nonnegativity of such matrices (of a fixed order). Towards this end, we provide a characterization of all the {\em even} and {\em odd} entire functions that preserve entrywise nonnegativity of nonnegative symmetric matrices. Along the way, we also obtain complete characterizations of all entire functions that preserve nonnegativity of the following classes of structured matrices: \begin{itemize} \item Triangular and block-triangular matrices \item Circulant matrices \end{itemize} We ought to add here that, for the above classes of structured matrices, our results do not have a bearing on the nonnegative inverse eigenvalue problems associated to them. In fact, the solutions of the latter problems are quite straightforward. To be precise: an $n$-tuple $\Lambda$ is the spectrum of an $n\times n$ nonnegative triangular matrix if and only if all the entries of $\Lambda$ are non-negative. As for circulants: the eigenvalues of a circulant matrix $A$ are determined by its first row $\mathbf{a}:=[a_0 \; a_1\dots a_{n-1}]$ (see~\cite{Davis}), and in fact, there is a constant matrix $\mathsf{W}$ (i.e., independent of $\mathbf{a}$ and $A$) such that $\sigma(A)=\mathbf{a}\mathsf{W}$. 
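Concretely, under the standard circulant convention in which each row of $A$ is the cyclic right shift of the previous one, one may take $\mathsf{W}=\left(e^{2\pi ijk/n}\right)_{j,k=0}^{n-1}$, so that the eigenvalues of $A$ are
$$
\lambda_k=\sum_{j=0}^{n-1}a_j\,e^{2\pi ijk/n},\qquad k=0,\dots, n-1.
$$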
Thus the realizable $n$-tuples in this case are of the form $\mathbf{a}\mathsf{W}, \ \mathbf{a}\in\R_+^n$. Nevertheless, we feel that the problem of characterizing the functions that preserve nonnegativity of the above classes of matrices can be of interest, independent of the nonnegative inverse eigenvalue problem. \section{Outline} This paper is organized as follows. We make several preliminary observations in Section~\ref{sec_prelim}. Before focusing attention on aspects of the symmetric nonnegative inverse eigenvalue problem, we study the structured matrices just discussed. In section~\ref{sec_tri}, we characterize the class of functions preserving nonnegativity of triangular and block triangular matrices. It turns out that these are characterized by nonnegativity conditions on their divided differences over the nonnegative reals. Next, in Section~\ref{sec_circ}, we obtain a characterization of functions preserving nonnegativity of circulant matrices. This characterization is quite different from that in Section~\ref{sec_tri} --- it involves linear combinations of function values taken at certain non-real points of $\C$. In Section~\ref{sec_small}, we obtain a complete characterization of the class $\F{n}$ for small values of~$n$. The remainder of the paper is essentially devoted to functions that preserve nonnegativity of symmetric matrices. In Section~\ref{sec_spd}, we review existing results in that direction. In particular, we discuss the restriction of \cite[Corollary 3.1]{MicchelliWilloughby} to entire functions, which claims to provide a characterization of entire functions that preserve entrywise nonnegativity of symmetric matrices of a fixed order. We point out that, while this result is true when restricted to {\em nonnegative definite} nonnegative symmetric matrices, the condition occurring in that result is {\em not sufficient} for an entire function to preserve nonnegativity of all symmetric matrices. The {\em techniques} leading to \cite[Corollary 3.1]{MicchelliWilloughby}, however, turn out to be very useful. We use these techniques, along with some new ideas, to obtain necessary conditions and sufficient conditions, and characterizations of the {\em even} and {\em odd} entire functions that preserve nonnegativity of symmetric matrices of a fixed order. This is the content of Sections~\ref{sec_sym2} and~\ref{sec_sym3}. Because of a gap between the necessary and the sufficient conditions, which we also point out in Section~\ref{sec_sym2}, the results of that section do not provide a characterization of {\em all} functions preserving nonnegativity of symmetric matrices. We end the paper with a list of several open problems in Section~\ref{sec_next}, and suggest various approaches to their solution that we have not explored in this paper. \section{Notation} \label{sec_not} We use standard notation $\R^{m\times n}$ for real matrices of size $m{\times}n$, $\R_+$ for nonnegative reals and $\Z_+$ for nonnegative integers, $A\geq 0$ ($A>0$) to denote that a matrix $A$ is entrywise nonnegative (positive), and $\sigma(A)$ to denote the spectrum of $A$. For $x\in\R$, we use $\gin{x}$ to denote the greatest integer that is less than or equal to $x$. \section{Preliminaries} \label{sec_prelim} The main goal of the paper is to characterize functions $f$ such that the matrix $f(A)$ is (entrywise) nonnegative for any nonnegative matrix $A$ of order $n$. 
Since the primary matrix function $f(A)$ is defined in accordance with values of $f$ and its derivatives on the spectrum of $A$ (see, e.g.,~\cite[Sections 6.1, 6.2]{HornJohnson}), we want to avoid functions that are not differentiable at some points in $\C$. Therefore we restrict ourselves to functions that are analytic everywhere in $\C$, i.e., to entire functions. Thus we consider the class $$ \F{n}\mathop{:}{=} \{ \; f \; {\rm entire} \; : \; A\in \R^{n\times n},\; A\geq 0\; \Longrightarrow \; f(A)\geq 0\}. $$ Note right away that the classes $\F{n}$ are ordered by inclusion: \begin{lem} \label{incl} For any $n\in \N$, $\F{n} \supseteq \F{n+1}$. \end{lem} \nt {\bf{Proof.} {\hskip 0.07cm} } Let $A$ be a nonnegative matrix of order $n$ and let $f\in \F{n+1}$. Consider the block-diagonal matrix $B\mathop{:}{=} \mathop{\rm diag}\nolimits(A,0)$ obtained by adding an extra zero row and column to $A$. Since $f(B)=\mathop{\rm diag}\nolimits(f(A),0)$, the matrix $f(A)$ must be nonnegative. Thus $f\in \F{n}$. \eop Recall that any entire function can be expanded into its Taylor series around any point in $\C$, and that the resulting series converges everywhere (see, e.g.,~\cite{Conway}). We will mostly focus on Taylor series of functions in $\F{n}$ centered at the origin. We start with some simple observations regarding a few initial Taylor coefficients of such a function. \begin{prop} \label{coefs} Let $f(z)=\sum_{j=0}^\infty a_j z^j$ be a function in $\F{n}$. Then, $a_j\geq 0$ for $j=0, \ldots, n-1$. \end{prop} \nt {\bf{Proof.} {\hskip 0.07cm} } For $n=1$, the statement follows from evaluating $f$ at $0$. If $n>1$ and $f\in \F{n}$, then evaluate the function $f$ at the matrix $$ A \mathop{:}{=} \left[ \begin{array}{cccccc} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{array} \right]. $$ Since $$ f(A) = \left[ \begin{array}{cccccc} a_0 & a_1 & a_2 & \cdots & a_{n-2} & a_{n-1} \\ 0 & a_0 & a_1 & \cdots & a_{n-3} & a_{n-2} \\ 0 & 0 & a_0 & \cdots & a_{n-4} & a_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & a_0 & a_1 \\ 0 & 0 & 0 & \cdots & 0 & a_0 \end{array} \right], $$ the entries $a_0$, $\ldots$, $a_{n-1}$ of $f(A)$ must be nonnegative. This finishes the proof. \eop \begin{corol} A function $f$ is in $\F{n}$ for all $n\in \N$ if and only if it has the form $f(z)=\sum_{j=0}^\infty a_j z^j$ with $a_j\geq 0$ for all $j\in \Z_+$. \end{corol} \nt {\bf{Proof.} {\hskip 0.07cm} } One direction follows from Proposition~\ref{coefs}. The other direction is trivial: if all terms in the Taylor expansion of $f$ around the origin are nonnegative, then $f(A)$ combines powers of a nonnegative matrix $A$ using nonnegative coefficients, so the resulting matrix is nonnegative. Here we make use of the standard fact~\cite[Theorem~6.2.8]{HornJohnson} that the matrix power series $\sum_{j=0}^\infty a_j A^j$ converges to $f(A)$. \eop \nt {\bf{Remark.} {\hskip 0.07cm} } It must be noted that the condition in Proposition~\ref{coefs} {\em cannot} be a sufficient condition for an entire function to belong to $\F{n}$. This is easy to see; fix an $n\in \N$ and set $$ F(x) = -x^n+\sum_{j=0}^{n-1}a_jx^j, $$ where we choose $a_j\geq 0, \ j=0,\dots,n-1$. Then, there exists an $x_0>0$ such that $F(x)<0 \ \forall x\in(x_0,\infty)$. 
If we set $A=rI$, for some $r\in(x_0,\infty)$, then $A$ is entrywise nonnegative while the diagonal entries of $F(A)$ are negative. Hence, although $a_j\geq 0$ for $j=0,\dots,n-1$, $F$ does not preserve nonnegativity. To conclude this section, we make two more general observations. \begin{lem} \label{poslem} An entire function $f$ belongs to $\F{n}$ if and only if it maps positive matrices of order $n$ into nonnegative matrices. \end{lem} \nt {\bf{Proof.} {\hskip 0.07cm} } This is simply due to the continuity of $f$, since the set of strictly positive matrices is dense in the set of all nonnegative matrices of order $n$. \eop \begin{lem} \label{simlem} For any primary matrix function $f$, any permutation matrix $P$ and any diagonal matrix $D$ with positive diagonal elements, $f(A)$ is nonnegative if and only if $f(PD A (PD)^{-1})$ is nonnegative. \end{lem} \nt {\bf{Proof.} {\hskip 0.07cm} } Note that $(PD)f(A)(PD)^{-1} = f(PD A (PD)^{-1})$ and that both matrices $PD$ and $(PD)^{-1}$ are nonnegative. So, $f(A)$ is nonnegative if and only if the matrix $f(PD A (PD)^{-1})$ is nonnegative. \eop We now analyze three superclasses of our class $\F{n}$: \begin{itemize} \item entire functions preserving nonnegativity of upper-triangular matrices; \item entire functions preserving nonnegativity of circulant matrices; \item entire functions preserving nonnegativity of symmetric matrices. \end{itemize} \section{Preserving nonnegativity of (block-)triangular matrices} \label{sec_tri} We first discuss functions preserving nonnegativity of upper- (or lower-)triangular matrices. The characterization that we obtain makes use of the notion of divided differences. The {\em divided difference\/} (see, e.g.,~\cite{deBoor}) of a smooth function $f$ at points $x_1$, $\ldots$, $x_k$ (which can be thought of as an ordered sequence $x_1\leq \cdots \leq x_k$) is usually defined via the recurrence relation $$ f[x_1, \ldots, x_k]\mathop{:}{=} \cases{\frac{f[x_2, \ldots, x_k]- f[x_1, \ldots, x_{k-1}]}{x_k-x_1} & $x_1\neq x_k$, \cr {} & ${ \ }$ \cr f^{(k-1)}(x_1)/(k-1)! & $x_1=x_k$,} $$ and where $f[x]\mathop{:}{=} f(x)$. Divided differences play a large part in this paper. We shall, however, make no attempt to review the results on divided differences that we shall draw upon, especially since they are quite readily accessible. The interested reader is referred to~\cite{deBoor}. \begin{thm} \label{thm_tri} An entire function $f$ preserves nonnegativity of upper-triangular matrices of order $n$ if and only if its divided differences of order up to $n$ are nonnegative over $\R_+$, i.e., \begin{equation} f[x_1, \ldots, x_k]\ge 0 \quad {\rm for}\;\; x_1, \ldots, x_k \ge 0, \quad k=1, \ldots, n, \label{divdif} \end{equation} or, equivalently, that all derivatives of $f$ of order up to $n{-}1$ are nonnegative on $\R_+$. \end{thm} \nt {\bf{Proof.} {\hskip 0.07cm} } {\em Sufficiency:} Let $A\mathop{{=}{:}} (a_{ij})$ be a nonnegative upper-triangular matrix. Suppose a function $f$ satisfies~(\ref{divdif}). By~\cite{Schmitt}, \cite{Stafney} (see also~\cite{Stafney_cor}), the elements of the matrix $f(A)$ can be written explicitly as \begin{equation} f(A)_{ij}= \cases{ f(a_{ii}) & $i=j$, \cr \sum_{i<i_1<\cdots<i_k<j} a_{ii_1}\cdots a_{i_k j} f[a_{ii},a_{i_1i_1},\ldots, a_{i_k i_k},a_{jj}] & $i<j$, \cr 0 & $i>j$.} \label{expand} \end{equation} The divided differences appearing in the sum on the right-hand side are of order not exceeding $n$; hence all the summands, and therefore the sums, are nonnegative. 
{\em Necessity:} We proceed by induction on $n$. If $f$ preserves nonnegativity of upper-triangular matrices of order $n$, it does so also for matrices of order $n-1$. Thus, by our inductive hypothesis, (\ref{divdif}) holds up to order $n-1$. To see that all divided differences of order $n$ are also nonnegative over nonnegative reals, consider the matrix $A$ whose first upper diagonal consists of ones, main diagonal of $n$ arbitrary nonnegative numbers $x_1, \ldots, x_n$, and all of whose other entries are zero. Then,~(\ref{expand}) shows that $f(A)_{1n}=f[x_1,\ldots, x_n]$ and must be nonnegative. Finally, since all divided differences of a fixed order $k$ at points in a domain $D$ are nonnegative if and only if $f^{(k-1)}(x)$ is nonnegative for every point $x\in D$~\cite{deBoor}, we see that condition~(\ref{divdif}) is equivalent to all derivatives of $f$ of order up to $n{-}1$ being nonnegative on $\R_+$. This finishes the proof. \eop The proofs of~(\ref{expand}) in~\cite{Stafney} and~\cite{Schmitt} are based on the following observation. \begin{res}[{\cite{Stafney}, \cite{Schmitt}}] \label{triang} A block-triangular matrix of the form $$ M=\left[ \begin{array}{cc} A & B \\ 0 & a \end{array} \right],\qquad a\in \C\setminus \sigma(A), $$ is mapped to the matrix $$ f(M)=\left[ \begin{array}{cc} f(A) & (A-aI)^{-1} (f(A)-f(a)I) B \\ 0 & f(a) \end{array} \right] $$ by a function $f$. \end{res} One can prove an analogous statement in the block-triangular case: \begin{prop} \label{blocktriang} Let $f$ be an entire function and let $$ M=\left[ \begin{array}{cc} A & B \\ 0 & C \end{array} \right],\qquad \sigma(A)\cap \sigma(C)=\emptyset. $$ Then, $$ f(M)=\left[ \begin{array}{cc} f(A) & f(A)X-Xf(C) \\ 0 & f(C) \end{array} \right], $$ where $X$ is the (unique) solution to the equation $$ AX-XC=B. $$ \end{prop} \nt {\bf{Proof.} {\hskip 0.07cm} } Let $X$ be a solution of the Sylvester equation $AX-XC=B$. Since the spectra of $A$ and $C$ are disjoint, this solution is unique~\cite[Section 4.4]{HornJohnson}. Then, $M=T^{-1}\mathop{\rm diag}\nolimits(A,C)T$ where $$ T=\left[ \begin{array}{cc} I & X \\ 0 & I \end{array} \right]. $$ Hence $f(M)=T^{-1}\mathop{\rm diag}\nolimits(f(A), f(C)) T$, which proves the proposition. \eop As an immediate corollary, we obtain an indirect characterization of functions preserving nonnegativity of block-triangular matrices with two diagonal blocks. \begin{corol} An entire function $f$ preserves nonnegativity of block upper-triangular matrices of the form $$ \left[ \begin{array}{cc} A & B \\ 0 & C \end{array} \right],\qquad A\in \R^{n_1{\times}n_1}, \;\; C \in \R^{n_2{\times}n_2}, $$ if and only if \begin{itemize} \item[a)] $f\in \F{N}$, where $N\mathop{:}{=} \max\{n_1,n_2 \}$; and \item[b)] $f(A)X-Xf(C)\geq 0$ for every $A\in \R^{n_1{\times}n_1}$, $B\in \R^{n_1{\times}n_2}$, $C \in \R^{n_2{\times}n_2}$ such that $A, B, C \geq 0$, $\sigma(A)\cap \sigma(C)=\emptyset$, and the (unique) matrix $X$ that satisfies the equation $AX-XC=B$. \end{itemize} \end{corol} \nt {\bf{Proof.} {\hskip 0.07cm} } For $f$ to preserve nonnegativity of blocks $A$ and $C$, it has to belong to $\F{N}$ (keeping in mind Lemma~\ref{incl}). The remainder of our assertion follows from Proposition~\ref{blocktriang} and the fact that the matrices with nonnegative blocks $A$, $B$, $C$, such that the spectra of $A$ and $C$ are disjoint, are dense in the set of all block upper-triangular matrices. 
\eop The above proposition, however, does not allow for an explicit characterization of the type~(\ref{divdif}) as in Theorem~\ref{thm_tri}. \vskip 0.2cm \nt {\bf{Remark.} {\hskip 0.07cm} } Note that the results of this section characterize functions preserving nonnegativity of (block) lower-triangular matrices as well. \section{Preserving nonnegativity of circulant matrices} \label{sec_circ} A {\em circulant matrix\/} (see, e.g.,~\cite{Davis}) $A$ is determined by its first row $(a_0, \ldots, a_{n-1})$ as follows: $$ \left[ \begin{array}{ccccc} a_0 & a_1 & a_2 & \cdots & a_{n-1} \\ a_{n-1} & a_0 & a_1 & \cdots & a_{n-2} \\ a_{n-2} & a_{n-1} & a_0 & \cdots & a_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_1 & a_2 & a_3 & \cdots & a_0 \end{array} \right]. $$ All circulant matrices of size $n$ are polynomials in the basic circulant matrix $$ \left[ \begin{array}{ccccc} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & 0 & \cdots & 0 \end{array} \right], $$ which implies in particular that any function $f(A)$ of a circulant matrix is a circulant matrix as well. Moreover, the eigenvalues of a circulant matrix are determined by its first row~(see~\cite{Davis}) by the formula $$ \{ \; \sum_{j=0}^{n-1} \omega^{kj} a_j \; : \; k=0, \ldots, n-1 \}, \;\;\; {\rm where} \;\; \omega\mathop{:}{=} e^{2\pi {\rm i}/n}. $$ Hence the eigenvalues of $f(A)$ are $$ \{ f( \; \sum_{j=0}^{n-1} \omega^{kj} a_j ) \; : \; k=0, \ldots, n-1 \}. $$ Thus, the elements $(f_0, \ldots, f_{n-1})$ of the first row of $f(A)$ can be read off from its spectrum: $$ f_l={1\over n} \sum_{k=0}^{n-1} \omega^{-lk} f( \sum_{j=0}^{n-1} \omega^{jk} a_j ), \;\;\; l=0, \ldots, n-1. $$ This argument proves the following theorem. \begin{thm} For an entire function $f$ to preserve nonnegativity of circulant matrices of order $n$, it is necessary and sufficient that for $l=0,\dots,n-1$, \begin{equation} \sum_{k=0}^{n-1} \omega^{-lk} f( \sum_{j=0}^{n-1} \omega^{jk} a_j )\geq 0\;\;\; {\rm whenever}\;\; a_j\geq 0, \;\; j=0,\ldots, n-1, \;\; {\rm where}\;\; \omega=e^{2\pi{\rm i}/n}. \label{circsums} \end{equation} \end{thm} \section{Characterization of $\F{n}$ for small values of $n$} \label{sec_small} We now focus on the function classes $\F{n}$ for small values of $n$. Recall the inclusion $\F{n+1}\subseteq \F{n}$ from Lemma~\ref{incl}, which means that all conditions satisfied by the functions from $\F{n}$ get inherited by the functions from $\F{n+1}$. Thus we need to find out precisely how to strengthen the conditions that determine $\F{n}$ to get to the next class $\F{n+1}$. \subsection{The case $n=1$} A function $f$ is in $\F{1}$ if and only if $f$ maps nonnegative reals into themselves. While this statement is in a way a characterization in itself, if $f$ is an entire function with {\em finitely many zeros}, we can give a description of the form that $f$ takes. For such $f$, the proposition below serves as an alternative characterization. \begin{prop} A function $f$ having finitely many zeros is in $\F{1}$ if and only if it has the form \begin{equation} f(z)=g(z) \prod_{\alpha, \beta} ((z+\alpha)^2+\beta^2) \prod_{\gamma} (z+\gamma), \label{case1} \end{equation} where the $\alpha$'s and the $\beta$'s are arbitrary reals, the $\gamma$'s are nonnegative, and $g$ is an entire function that has no zeros in $\C$ and is positive on $\R_+$.
\end{prop} \nt {\bf{Proof.} {\hskip 0.07cm} } First note that since $f$ takes real values over the nonnegative reals (and hence, being entire, over all of $\R$), all its zeros occur in conjugate pairs. Moreover, while the multiplicity of the real negative zeros is not restricted in any way, the nonnegative zeros must occur with even multiplicities. This produces exactly the factors recorded in~(\ref{case1}), with nonnegative zeros corresponding to $\beta=0$. After factoring out all the linear factors, we are left with an entire function --- which we call $g(z)$ --- that has no zeros, and takes only positive values on $\R_+$. This gives us the expression~(\ref{case1}). \eop \nt {\bf{Remark.} {\hskip 0.07cm} } Incidentally, all polynomials $f$ that take only positive values on $\R_+$ are characterized by a theorem due to Poincar\'e and P\'{o}lya (see, e.g.,~\cite[p.175]{D'Angelo}): there exists a number $N\in \Z_+$ such that the polynomial $ (1+z)^N f(z)$ must have positive coefficients. Since we include non-polynomial functions into our class $\F{1}$, and since we allow functions to have zeros in $\R_+$, the Poincar\'e-P\'{o}lya characterization is not directly relevant to our setup. \subsection{The case $n=2$} We just saw that functions in $\F{1}$ are characterized by one inequality, viz. \begin{equation} f(x)\geq 0 \;\;\;\; \forall \; x\geq 0. \label{1by1} \end{equation} In this subsection we will see that functions in $\F{2}$ are characterized by two inequalities, one involving a divided difference. We recall two preliminary observations, Lemmas~\ref{poslem} and~\ref{simlem}, which were proved in Section~\ref{sec_prelim}. Their specialization to the case $n=2$ gives the following corollary. \begin{corol} \label{sym2} An entire function $f$ belongs to $\F{2}$ if and only if it maps positive symmetric matrices of order $2$ into nonnegative matrices. \end{corol} \nt {\bf{Proof.} {\hskip 0.07cm} } A strictly positive $2{\times}2$ matrix $A$ can be symmetrized by using the transformation $DAD^{-1}$, where $D$ is a diagonal matrix with positive diagonal elements. Thus, Lemmas~\ref{poslem} and~\ref{simlem} imply that $f(A)$ is nonnegative for all strictly positive, and hence for all nonnegative matrices $A$ of order $2$, if and only if $f(A)$ is nonnegative for all positive symmetric matrices of order $2$. \eop Now we are in a position to prove a characterization theorem for the class $\F{2}$. \begin{thm} \label{thm_2by2} An entire function $f$ is in $\F{2}$ if and only if it satisfies the conditions \begin{eqnarray} f(x+y)-f(x-y) &\geq& 0 \quad\forall\; x, y\geq 0, \label{2by2-1} \\ (x+y-z)f(x-y)+(z-x+y)f(x+y) &\geq& 0 \quad\forall \; x\geq z \geq 0, \; y\geq x-z, \label{2by2-2} \end{eqnarray} or, equivalently, if $f$ satisfies~(\ref{2by2-1}) and the condition \begin{equation} (x+y)f(x-y)+(y-x)f(x+y) \geq 0 \quad \forall \; y\geq x \geq 0. \label{2by2-2'} \end{equation} \end{thm} \nt {\bf{Proof.} {\hskip 0.07cm} } If $f\in \F{2}$, then, in particular, $f$ preserves nonnegativity of nonnegative circulant matrices. Thus, the conditions~(\ref{circsums}) are necessary for $f$ to belong to $\F{2}$. Observe that the condition~(\ref{2by2-1}) is one of the two necessary conditions~(\ref{circsums}) in case $n=2$ (taking $a_0=x$ and $a_1=y$). Therefore, we need to check that the condition~(\ref{2by2-2}) is also necessary and that both together are sufficient. Then we also need to check that conditions~(\ref{2by2-1}) and~(\ref{2by2-2}) are equivalent to conditions~(\ref{2by2-1}) and~(\ref{2by2-2'}).
By Corollary~\ref{sym2}, we can restrict ourselves to the case when $A$ is a positive symmetric matrix, i.e., when $$ A=\left[ \begin{array}{cc} a_{11} & b \\ b & a_{22} \end{array} \right],\quad a_{11}, b, a_{22}>0. $$ Since the value of $f$ at $A$ coincides with the value of its interpolating polynomial of degree~1 with nodes of interpolation chosen at the eigenvalues of $A$~\cite[Sections 6.1, 6.2]{HornJohnson}, we get $$ f(A)=f[r_1]I+f[r_1,r_2](A-r_1 I), $$ where $$ r_j\mathop{:}{=} {a_{11}+a_{22}\over 2}+(-1)^j {\sqrt{(a_{11}-a_{22})^2+4b^2} \over 2}, \quad j=1,2. $$ So, the off-diagonal entries of $f(A)$ are equal to $$f[r_1,r_2]b, $$ while the diagonal entries are $$f[r_1,r_2](a_{jj}-r_1)+f(r_1), \quad j=1, 2. $$ Writing \begin{eqnarray*} x & \mathop{:}{=} & {a_{11}+a_{22}\over 2}, \\ y & \mathop{:}{=} & { \sqrt{(a_{11}-a_{22})^2+4b^2} \over 2}, \\ z & \mathop{:}{=} & \min(a_{11}, a_{22}), \end{eqnarray*} we see that the characterization for $\F{2}$ consists precisely of conditions~(\ref{2by2-1}) and~(\ref{2by2-2}). It remains to prove that~(\ref{2by2-1}) and~(\ref{2by2-2}) are equivalent to~(\ref{2by2-1}) and~(\ref{2by2-2'}). By simply taking $z=0$ in~(\ref{2by2-2}), we see that~(\ref{2by2-2}) implies~(\ref{2by2-2'}). So let us now assume~(\ref{2by2-1}) and~(\ref{2by2-2'}). We begin by stating a simple auxiliary fact. Taking $x=0$ and $y>0$ in~(\ref{2by2-1}) and~(\ref{2by2-2'}), we get $f(y)\pm f(-y)\geq 0 \;\; \forall y>0$. We conclude from this that $f(y)\geq 0$ whenever $y\geq 0$ --- i.e., that $f$ satisfies~(\ref{1by1}). First consider $y$ lying in the range $x-z\leq y \leq x$. In this case, we get \begin{eqnarray*} && (x+y-z) f(x-y)+(z-x+y)f(x+y) \\ && \qquad \qquad = (y-(x-z))(f(x+y)-f(x-y))+2yf(x-y)\geq 0 \qquad {\rm for} \;\; x\geq z \geq 0. \end{eqnarray*} The nonnegativity of the second term above is a consequence of~(\ref{1by1}), since $x-y$ is nonnegative in this case. Now if $y\geq x$, then~(\ref{2by2-1}) and~(\ref{2by2-2'}) simply imply that \begin{eqnarray*} && (x+y-z) f(x-y)+(z-x+y)f(x+y) \\ && \qquad \qquad = ((x+y)f(x-y)+(y-x)f(x+y))+z(f(x+y)-f(x-y)) \geq 0 \\ && \qquad \qquad \quad \; {\rm for} \;\; y\geq x\geq z \geq 0. \end{eqnarray*} The last two inequalities show that~(\ref{2by2-1}) and~(\ref{2by2-2'}) imply~(\ref{2by2-1}) and~(\ref{2by2-2}). This finishes the proof. \eop \section{Preserving nonnegative symmetric matrices} \label{sec_sym} We now focus on the characterization problem for the class of entire functions that preserve nonnegativity of symmetric matrices. We begin by recalling known facts about functions that preserve nonnegative symmetric matrices that are in addition nonnegative definite, i.e., have only nonnegative eigenvalues. \subsection{Preserving nonnegative definite nonnegative symmetric matrices} \label{sec_spd} Interestingly, the condition necessary and sufficient for preserving nonnegative symmetric matrices that are nonnegative definite turns out to be exactly the same as the condition for preserving upper- (or lower-)triangular nonnegative matrices. The characterization of functions that preserve the class of {\em nonnegative definite}, entrywise nonnegative symmetric matrices is due to Micchelli and Willoughby~\cite{MicchelliWilloughby}. We next state a version of their result that is useful for our purposes. 
\begin{res}[version {of~\cite[Corollary 3.1]{MicchelliWilloughby}}] \label{spdres} An entire function $f$ preserves the class of nonnegative definite, entrywise nonnegative symmetric matrices of order $n$ if and only if all the divided differences of $f$ of order up to $n$ are nonnegative over $\R_+$, i.e., $f$ satisfies~(\ref{divdif}) or, equivalently, all derivatives $f^{(j)}$ of $f$ up to order $n{-}1$ are nonnegative on $\R_+$. \end{res} The proof of Result~\ref{spdres} in~\cite{MicchelliWilloughby} relies on two facts. The first is that $f(A)$ coincides with the interpolating polynomial of $f$, with nodes at the eigenvalues of $A$, evaluated at $A$, i.e., that \begin{equation} f(A)=f[r_1] I +f[r_1,r_2](A-r_1 I)+\cdots + f[r_1,\ldots, r_n] (A-r_1 I)\cdots (A-r_{n-1} I). \label{Lagrange} \end{equation} The second fact is the entrywise nonnegativity of all matrix products $$ (A-r_1 I)\cdots (A-r_j I), \qquad j=1, \ldots, n-1, $$ which holds under the assumption that the eigenvalues $r_1, \ldots, r_n$ of $A$ are ordered $$ r_1\leq r_2 \leq \cdots \leq r_n. $$ Observe, however, that conditions~(\ref{divdif}) are not sufficient for a function to preserve nonnegativity of {\em all\/} nonnegative symmetric matrices. Indeed, let $n=2$ and let $$ f(x)=1+x+{1\over 2} x^2 -{2\over 3} x^3 +{1\over 4} x^4. $$ This function satisfies the condition~(\ref{divdif}) with $n=2$, but it maps the matrix $$ \left[ \begin{array}{cc} 0 & M \\ M & 0 \end{array} \right] \; , $$ which is {\em not nonnegative definite}, to a matrix with negative off-diagonal entries when $M>0$ is chosen to be sufficiently large. In fact, any $M>\sqrt{3/2}$ will produce a matrix with negative entries. Motivated by Result~\ref{spdres}, we would therefore like to find out what conditions are necessary and sufficient for a function to preserve nonnegativity of all nonnegative symmetric matrices. We begin, in the next subsection, by analyzing even and odd functions. \subsection{Even and odd functions preserving nonnegativity of symmetric matrices} \label{sec_sym2} Using the Micchelli-Willoughby result --- i.e., Result~\ref{spdres} from the previous section --- and an auxiliary result from~\cite{Holtz2}, we shall obtain a characterization of even and odd functions that preserve nonnegativity of symmetric matrices. Our proof below will require the notion of a Jacobi matrix and that of a symmetric anti-bidiagonal matrix. A {\em Jacobi matrix\/} is a real, nonnegative definite, tridiagonal symmetric matrix having positive subdiagonal entries. A matrix $A$ is called a {\em symmetric anti-bidiagonal matrix\/} if it has the form \begin{equation} A= \left [ \begin{array}{ccccc} 0 & 0 & \cdots & 0 & a_n \\ 0 & 0 & \cdots & a_{n-2} & a_{n-1} \\ \vdots & \vdots & \cdot & \vdots & \vdots \\0 & a_{n-2} & \cdots & 0 & 0 \\ a_{n} & a_{n-1} & \cdots & 0 & 0 \end{array} \right], \quad a_1, \ldots, a_n\in \R. \label{mainform} \end{equation} We make use of the next two results, from~\cite{MicchelliWilloughby} and from~\cite{Holtz2}. \begin{res}[{\cite{MicchelliWilloughby}}] \label{aux_jacobi} A matrix function $f$ preserves nonnegativity of symmetric nonnegative definite matrices of order $n$ if and only if it maps Jacobi matrices of order $n$ into nonnegative matrices or, equivalently, if the divided differences of $f$ up to order $n$ satisfy~(\ref{divdif}) for each ordered $n$-tuple $x_1\leq x_2 \leq \cdots \leq x_n$ of eigenvalues of a Jacobi matrix.
\end{res} The above result is not stated in precisely these words in \cite{MicchelliWilloughby}, but it is easily inferred --- it lies at the heart of the proof of \cite[Theorem 2.2]{MicchelliWilloughby}. In addition, we shall also need the following result: \begin{res}[{Corollary~3, \cite{Holtz2}}] \label{aux_anti} Let ${{\cal M}}$ be a positive real $n$-tuple. Then, there exists a Jacobi matrix that realizes ${{\cal M}}$ as its spectrum and has a symmetric anti-bidiagonal square root of the form~(\ref{mainform}) with all $a_j$'s positive. \end{res} We are now in a position to obtain a characterization of even and odd matrix functions that are of interest to us. \begin{thm}\label{even_odd} An even entire function $f(z)\mathop{{=}{:}} g(z^2)$ preserves nonnegativity of symmetric matrices of order $n$ if and only if the divided differences of $g$ up to order $n$ are nonnegative on $\R_+$ --- i.e., if $g$ satisfies~(\ref{divdif}). An odd function $f(z)\mathop{{=}{:}} zh(z^2)$ preserves nonnegativity of symmetric matrices of order $n$ if and only if $h$ satisfies~(\ref{divdif}). \end{thm} \nt {\bf{Proof.} {\hskip 0.07cm} } Let $f$ be even. Then, $f(z)=g(z^2)$ for some entire function $g$. If a matrix $A$ is entrywise nonnegative symmetric, then $A^2$ is entrywise nonnegative, symmetric, and nonnegative definite. By Result~\ref{spdres}, if $g$ satisfies~(\ref{divdif}), then $g(A^2)$ is nonnegative. To prove the converse, consider an arbitrary $n$-tuple ${{\cal M}}$ of positive numbers. We can think of ${{\cal M}}$ as being ordered \begin{equation} {{\cal M}}=(x_1, \ldots, x_n), \qquad x_1\leq \cdots \leq x_n. \label{mu} \end{equation} By Result~\ref{aux_anti}, there exists a nonnegative symmetric anti-bidiagonal matrix $A$ such that $A^2$ is a Jacobi matrix with spectrum ${{\cal M}}$. Then, by Result~\ref{aux_jacobi}, the divided differences of $g$ must be nonnegative when evaluated at the first $k$ points of ${{\cal M}}$, for each $k=1, \ldots, n$. This implies, by the standard density reasoning, that all divided differences of $g$ must be nonnegative over $\R_+$. Now let $f$ be odd. Then, $f(z)=zh(z^2)$ for some entire function $h$. If all the divided differences of $h$ up to order $n$ are nonnegative, then by the same argument as above, $h(A^2)$ is nonnegative for each symmetric nonnegative matrix $A$, and multiplication of $h(A^2)$ by a nonnegative matrix $A$ produces a nonnegative matrix again. To prove the converse, we use induction and a technique from~\cite{MicchelliWilloughby}. Since $f$ has to preserve nonnegativity of symmetric matrices of order $n{-}1$ as well, we can assume the nonnegativity of the divided differences of $h$ of orders $k=1, \ldots, n{-}1$. To prove that the $n$th divided difference is nonnegative, let ${{\cal M}}$ be an arbitrary positive $n$-tuple~(\ref{mu}). As above, by Result~\ref{aux_anti}, there exists a symmetric anti-bidiagonal matrix $A$ such that $A^2$ is a Jacobi matrix with spectrum ${{\cal M}}$. By~\cite{MicchelliWilloughby}, formula~(\ref{Lagrange}) shows that the $(1,n)$ entry of the function $h(A^2)$ is a positive multiple of $h[x_1,\ldots, x_n]$, hence the $(1,1)$ entry of the product $h(A^2) A$ is again a positive multiple of $h[x_1, \ldots, x_n]$. Thus the $n$th divided difference of $h$ has to be nonnegative as well, which finishes the proof. \eop This theorem provides a rather natural characterization of even and odd functions that preserve nonnegativity of symmetric matrices in terms of their divided differences.
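\vskip 0.2cm \nt {\bf{Remark.} {\hskip 0.07cm} } As an informal numerical illustration of Theorem~\ref{even_odd} (it is not part of the formal development), one may take $g(t)=e^{t}$, all of whose derivatives are positive on $\R_+$, so that both the even function $f(z)=g(z^2)$ and the odd function $f(z)=z\,g(z^2)$ should map entrywise nonnegative symmetric matrices into entrywise nonnegative matrices. The following short Python sketch, which evaluates primary matrix functions of symmetric matrices through their spectral decomposition, checks this on random examples; the choice of $g$, the sampling scheme, and the numerical tolerance are ours.
\begin{verbatim}
# Informal check of the even/odd criterion: g(t) = exp(t) has all derivatives
# positive on R_+, so f(z) = g(z^2) and f(z) = z*g(z^2) should preserve
# entrywise nonnegativity of symmetric matrices of every order.
import numpy as np

def sym_matrix_function(A, scalar_f):
    # Evaluate a primary matrix function of a real symmetric matrix A
    # via its spectral decomposition A = V diag(w) V^T.
    w, V = np.linalg.eigh(A)
    return (V * scalar_f(w)) @ V.T

rng = np.random.default_rng(0)
for _ in range(200):
    n = int(rng.integers(2, 6))
    A = rng.random((n, n))
    A = (A + A.T) / 2                                   # nonnegative symmetric
    even = sym_matrix_function(A, lambda x: np.exp(x ** 2))      # f(z) = g(z^2)
    odd = sym_matrix_function(A, lambda x: x * np.exp(x ** 2))   # f(z) = z g(z^2)
    tol = 1e-8 * max(even.max(), odd.max(), 1.0)        # allow for round-off
    assert even.min() > -tol and odd.min() > -tol
\end{verbatim}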
However, the ``natural'' idea, that the even and odd parts of any entire function that preserves nonnegativity of symmetric matrices must also be nonnegativity-preserving, turns out to be wrong. Here is an example that illustrates why that may not be the case. \begin{ex}\label{failurex} Let $$ f(z)\mathop{:}{=} \alpha + \beta z -z^3+z^5+\gamma z^6, $$ where $\beta>1/4$, and $\alpha,\gamma>0$ are chosen to be so large that $f(x)\geq 0$ for all $x\in \R$ and $f'(x)\geq 0$ for all $x\in \R_+$. Then, $f$ preserves nonnegativity of symmetric matrices of order $2$, but its odd part $f_{odd}$ does not. \end{ex} \nt {\bf{Proof.} {\hskip 0.07cm} } The function $f$ satisfies conditions~(\ref{2by2-1}) and~(\ref{2by2-2'}). Indeed, since $f\geq 0$ on $\R$, we have $$ (t+s) f(-t)+tf(t+s) \geq 0 \qquad \forall \; s,t\geq 0, $$ which is equivalent to condition~(\ref{2by2-2'}). Now, the odd part of $f$ is given by $$ f_{odd}(z)=\beta z-z^3+z^5\mathop{{=}{:}} zh(z^2). $$ Since $\beta>1/4$, $h(x)>0$ for all $x\in \R$. Since $f$ is monotone increasing on $\R_+$, we have $$ f(s+t)-f(-s)\geq f(s)-f(-s)=2f_{odd}(s)\geq0 \qquad \forall\; s, t \geq 0, $$ which yields condition~(\ref{2by2-1}). Thus, by Theorem~\ref{thm_2by2}, $f$ preserves nonnegativity of symmetric matrices of order $2$. However, $$ h'(x)=2x-1<0 \qquad {\rm for} \;\; x<1/2. $$ Therefore, by Theorem~\ref{even_odd}, $f_{odd}$ does not preserve nonnegativity of symmetric matrices of order $2$. \eop We conclude this section with a simple observation about even and odd parts of a nonnegativity-preserving function. \begin{prop} If an entire function $f$ preserves nonnegativity of symmetric matrices of order $n$, then its odd and even parts $f_{odd}$ and $f_{even}$ preserve nonnegativity of symmetric matrices of order $\lfloor n/2 \rfloor$. \end{prop} \nt {\bf{Proof.} {\hskip 0.07cm} } For $n$ even, consider matrices of the form $$ A=\left[ \begin{array}{cc} 0 & B \\ B & 0 \end{array} \right], $$ and for $n$ odd, matrices of the form $$ A=\mathop{\rm diag}\nolimits (\left[ \begin{array}{cc} 0 & B \\ B & 0 \end{array} \right], 0), $$ where $B$ is an $\lfloor n/2 \rfloor{\times}\lfloor n/2 \rfloor$ symmetric nonnegative matrix. Since $$ f(A)=\left[ \begin{array}{cc} f_{even}(B) & f_{odd}(B) \\ f_{odd}(B) & f_{even}(B) \end{array} \right] \qquad {\rm for} \; n \; {\rm even}, $$ $$ f(A)=\mathop{\rm diag}\nolimits(\left[ \begin{array}{cc} f_{even}(B) & f_{odd}(B) \\ f_{odd}(B) & f_{even}(B) \end{array} \right], 0 ) \qquad {\rm for} \; n \; {\rm odd}, $$ we see that $f_{even}$ and $f_{odd}$ must preserve nonnegativity of symmetric matrices of order $\lfloor n/2 \rfloor$. \eop \subsection{Other necessary conditions} \label{sec_sym3} Results from~\cite{Holtz2} allow us to derive an additional set of necessary conditions. The motivation behind these conditions is as follows. We believe that the power of Results~\ref{aux_jacobi} and~\ref{aux_anti} --- or rather, {\em the methods} behind those results --- has not been exhausted by Theorem~\ref{even_odd}. Our next theorem is presented as an illustration of this viewpoint. On comparison with Theorem~\ref{thm_2by2}, we find that the conditions derived in our next theorem constitute a complete characterization for the functions of interest in the $n=2$ case. To derive these new necessary conditions, we will need the following two results.
\begin{res}[{Theorem~1, \cite{Holtz2}}] \label{aux_antiChar} A real n-tuple $\Lambda$ can be realized as the spectrum of a symmetric anti-bidiagonal matrix~(\ref{mainform}) with all $a_j$'s positive if and only if $\Lambda=(\lambda_1,\ldots,\lambda_n)$ satisfies $$ \lambda_1 > -\lambda_2 > \lambda_3 > \cdots > (-1)^{n-1}\lambda_n > 0. $$ \end{res} \begin{lem}\label{aux_pattern} Let $A$ be a symmetric anti-bidiagonal matrix of order $n$, and let $A^p_{ij}$ denote the $(i,j)$ entry of $A^p$. Then \begin{enumerate} \item[a)] The $(i,j)$ entry of $A^{2q-1}$ is zero whenever $2\leq i+j\leq (n-q+1)$, $q\geq 1$. \item[b)] The $(i,j)$ entry of $A^{2q}$ is zero whenever $1+q\leq j-i\leq n-1$, $q\geq 1$. \item[c)] Adopting the notation in~(\ref{mainform}) for the entries of $A$, \begin{eqnarray} A^{2q-1}_{1,n-q+1} &=& a_n a_{n-1}\ldots a_{n-2q+2}, \qquad 1\leq q\leq \gin{(n+1)/2}, \label{prod1}\\ A^{2q}_{1,1+q} &=& a_n a_{n-1}\ldots a_{n-2q+1}, \qquad 1\leq q\leq \gin{n/2}. \label{prod2} \end{eqnarray} \end{enumerate} \end{lem} \nt {\bf{Proof.} {\hskip 0.07cm} } We proceed by induction on $q$. Note that (a), (b) and (c) are obvious when $q=1$. Let us now assume that (a) and (b) are true for some $q<n-3$. Note that since $A$ is anti-bidiagonal, \begin{equation} A^{2q+1}_{ij} = A^{2q}_{i,n-j+1}A_{n-j+1,j}+A^{2q}_{i,n-j+2}A_{n-j+2,j}. \label{expan(a)} \end{equation} However, if $i+j\leq(n-(q+1)+1)$, then $$ (n-j+2)-i\geq (n-j+1)-i\geq q+1. $$ Applying our inductive hypothesis on (b), we conclude from the above inequalities that the right-hand side of~(\ref{expan(a)}) reduces to zero when $i+j\leq(n-(q+1)+1)$. Thus, (a) is established for $q+1$. We establish (b) for $q+1$ in a similar fashion. We note that \begin{equation} A^{2q+2}_{ij} = A^{2q+1}_{i,n-j+1}A_{n-j+1,j}+A^{2q+1}_{i,n-j+2}A_{n-j+2,j}. \label{expan(b)} \end{equation} When $j-i\geq 1+(q+1)$, then $$ i+(n-j+1)\leq i+(n-j+2)\leq n-(q+1)+1. $$ Since we just established (a) for $q+1$, the above inequalities tell us that the right-hand side of~(\ref{expan(b)}) reduces to zero when $j-i\geq 1+(q+1)$. Thus, (b) too is established for $q+1$. By induction, (a) and (b) are true for all relevant $q$. Part (c) now follows easily by substituting $i=1$ and $j=n-q$ into equation~(\ref{expan(a)}) to carry out the inductive step for~(\ref{prod1}), and by substituting $i=1$ and $j=q+2$ into equation~(\ref{expan(b)}) to carry out the inductive step for~(\ref{prod2}). \eop We can now present the aforementioned necessary conditions. \begin{thm}\label{thm_newnc} If an entire function $f$ preserves nonnegativity of symmetric matrices of order $n$, $n\geq 2$, then, for each ordered $n$-tuple $(x_1, \ldots, x_n)$ where \begin{equation} x_1 > -x_2 > x_3 > \cdots > (-1)^{n-1} x_n > 0, \label{cond} \end{equation} $f$ must satisfy \begin{equation} f[x_1, \ldots, x_n] \geq 0, \label{newnc1} \end{equation} and for each $k=1,\dots,n$, $f$ must satisfy \begin{equation} f[x_1,\ldots, x_{k-1},x_{k+1}, \ldots, x_n]-( \; \sum_{j\neq k}x_j)f[x_1, \ldots, x_n] \geq 0. \label{newnc2} \end{equation} \end{thm} \nt {\bf{Proof.} {\hskip 0.07cm} } We choose an $n$-tuple $(x_1,\ldots,x_n)$ that satisfies~(\ref{cond}). By Result~\ref{aux_antiChar}, there is a symmetric anti-bidiagonal matrix of the form~(\ref{mainform}), with all $a_j$'s positive, whose spectrum is $(x_1,\ldots,x_n)$. Let us express $f(A)$ using the formula~(\ref{Lagrange}), with the substitutions $r_j=x_j$, $j=1,\ldots,n$. 
Then, in view of Lemma~\ref{aux_pattern}, the $(1,\gin{n/2}+1)$ entry of $f(A)$ is $a_n a_{n-1}\ldots a_2 f[x_1,\ldots,x_n]$. Since $f$ preserves nonnegativity, and all the $a_j$'s are positive, $f[x_1,\ldots,x_n]$ has to be nonnegative. This establishes~(\ref{newnc1}). To demonstrate~(\ref{newnc2}), we look at the entries of $f(A)$ that are {\em adjacent} to the $(1,\gin{n/2}+1)$ entry that was considered above. Let us fix a $k=1,\ldots,n$. This time, however, in using formula~(\ref{Lagrange}) to express $f(A)$, we make the following substitutions: $$ r_j = \cases{x_j & if $j<k$, \cr x_{j+1} & if $k\leq j < n$, \cr x_k & if $j=n$.} $$ Our analysis splits into two cases. {\em {\bf Case 1.} $n$ is odd:} In this case, let us look at the $(1,\gin{n/2}+2)$ entry of $f(A)$. By Lemma~\ref{aux_pattern}, and the fact that $n$ is odd, the only power of $A$ that contributes to this entry is $A^{n-2}$. Consequently \begin{eqnarray*} f(A)_{1,\gin{n/2}+2} &=& \{f[x_1,\ldots,x_{k-1},x_{k+1},\ldots,x_n]-( \; \sum_{j\neq k}x_j)f[x_1,\ldots,x_n] \} A^{n-2}_{1,\gin{n/2}+2} \\ &=& \ a_n a_{n-1}\ldots a_3 \{f[x_1,\dots,x_{k-1},x_{k+1},\dots,x_n]-( \; \sum_{j\neq k}x_j)f[x_1,\ldots,x_n] \}. \end{eqnarray*} Since $f$ preserves nonnegativity,~(\ref{newnc2}) follows from the above equalities. {\em {\bf Case 2.} $n$ is even:} In this case, we focus on the $(1,\gin{n/2})$ entry of $f(A)$. We recover~(\ref{newnc2}) by arguing exactly as above. In either case,~(\ref{newnc2}) is established, which concludes our proof. \eop We conclude this section by showing that a subset of the necessary conditions derived above is in fact sufficient to characterize those entire functions that preserve nonnegativity of $2\times 2$ symmetric matrices. Specifically, we show that \begin{eqnarray*} f[x_1,x_2] &\geq& 0 \quad{\rm and} \\ f(x_2)-x_2f[x_1,x_2] &\geq& 0 \quad{\rm for} \; {\rm all} \; \; x_1>-x_2>0, \end{eqnarray*} imply the conditions~(\ref{2by2-1}) and~(\ref{2by2-2'}). This is achieved simply by taking some $y>x>0$, making the substitutions $x_1=y+x$ and $x_2=x-y$, and then invoking continuity to obtain~(\ref{2by2-1}) and~(\ref{2by2-2'}) for all $y\geq x\geq 0$. \section{Open problems and further ideas} \label{sec_next} We conclude this paper by listing some ideas that we did not pursue, which however may lead to further progress. One can consider functions that preserve nonnegativity of other classes of structured matrices, such as Toeplitz or Hankel. However, since these classes are not invariant under the action of an arbitrary matrix function, their matrix functions can be quite difficult to analyze. Also, the eigenstructure of some structured matrices is rather involved, which could be an additional obstacle. Theorem~1.3 of~\cite{Stafney} gives an interesting formula for $f(A)$ when $f$ is a polynomial, which therefore must also be true for entire functions. Precisely, if $A$ is a matrix with minimal polynomial $p_0$ and $C$ is the companion matrix of $p_0$, then $$ f(A)=\sum_{j=1}^n f(C)_{j1} A^{j-1}.$$ In particular, $f(A)$ is nonnegative whenever the first column of $f(C)$ is nonnegative. It would be worthwhile to find out what functions have this property. Note that the set $\F{n}$ contains positive constants and is closed under addition, multiplication, and composition. We are not aware of any work on systems of entire functions (or even polynomials) that satisfy this property. Perhaps one could describe a minimal set of generators (with respect to these three operations) that generate such a system.
For example, in the case $n=1$, the generators are positive constants, the function $p_1(x)=x$, and all quadratics of the form $(x-a)^2$, $a>0$. Incidentally, the set of polynomials with nonnegative coefficients is generated by positive constants and $p_1(x)=x$. We do not have a characterization of generators for $n\geq 2$. In particular, $\F{n}$ is a semigroup with respect to any of these operations, so some general results on semigroups may prove to be useful in our setting. Also note that the set of nonnegative matrices of order $n$, on which $\F{n}$ acts, is also a semigroup (closed under addition and multiplication), which could also be of potential use. Finally, both $\F{n}$ and the set of nonnegative matrices of order $n$ are also cones, so the problem might also have a cone-theoretic form. If we consider polynomials instead of entire functions, we can further restrict ourselves to polynomials of degree bounded by a fixed positive integer. Then, we will obtain a proper cone, whose extreme directions may be of interest. The general problem can then also be studied in an analogous cone-theoretic setting. \section*{Acknowledgments} We are grateful to Raphael Loewy, Michael Neumann and Shmuel Friedland for helpful discussions and to anonymous referees for useful suggestions. \end{document}
\begin{document} \title{Clustering with Neighborhoods} \begin{abstract} In the standard planar $k$-center clustering problem, one is given a set $P$ of $n$ points in the plane, and the goal is to select $k$ center points, so as to minimize the maximum distance over points in $P$ to their nearest center. Here we initiate the systematic study of the clustering with neighborhoods problem, which generalizes the $k$-center problem to allow the covered objects to be a set of general disjoint convex objects $\mathscr{C}$ rather than just a point set $P$. For this problem we first show that there is a PTAS for approximating the number of centers. Specifically, if $r_{opt}$ is the optimal radius for $k$ centers, then in $n^{O(1/{\varepsilon}^2)}$ time we can produce a set of $(1+{\varepsilon})k$ centers with radius $\leq r_{opt}$. If instead one considers the standard goal of approximating the optimal clustering radius, while keeping $k$ as a hard constraint, we show that the radius cannot be approximated within any factor in polynomial time unless $\mathsf{P=NP}$, even when $\mathscr{C}$ is a set of line segments. When $\mathscr{C}$ is a set of unit disks we show the problem is hard to approximate within a factor of $\frac{\sqrt{13}-\sqrt{3}}{2-\sqrt{3}}\approx 6.99$. This hardness result complements our main result, where we show that when the objects are disks, of possibly differing radii, there is a $(5+2\sqrt{3})\approx 8.46$ approximation algorithm. Additionally, for unit disks we give an $O(n\log k)+(k/{\varepsilon})^{O(k)}$ time $(1+\epsilon)$-approximation to the optimal radius, that is, an FPTAS for constant $k$ whose running time depends only linearly on $n$. Finally, we show that the one dimensional version of the problem, even when intersections are allowed, can be solved exactly in $O(n\log n)$ time. \end{abstract} \section{Introduction} In the standard $k$-center clustering problem, one is given a set $P$ of $n$ points in a metric space and an integer parameter $k\geq 0$, and the goal is to select $k$ points from the metric space (or from $P$ in the discrete $k$-center problem), called centers, so as to minimize the maximum distance over points in $P$ to their nearest center. Equivalently, the problem can be viewed as covering $P$ with $k$ balls with the same radius $r$, where the goal is to minimize $r$. It is well known that it is NP-hard to approximate the optimal $k$-center radius $r_{opt}$ within any factor less than $2$ in general metric spaces \cite{hn-ehblp-79}, and that the problem remains hard to approximate within a factor of roughly 1.82 in the plane \cite{fg-oaac-88}. For general metric spaces, the standard greedy algorithm of Gonzalez \cite{g-cmmid-85}, which repeatedly selects the next center to be the point from $P$ which is furthest from the current set of centers, achieves an optimal $2$-approximation to $r_{opt}$. An alternative algorithm due to Hochbaum and Shmoys \cite{hs-bphkcp-85} also achieves an optimal approximation ratio of $2$ by approximately searching for the optimal radius, observing that if $r\geq r_{opt}$ then all points will be covered after $k$ rounds of repeatedly removing points in $2r$ radius balls centered at any remaining point of $P$. In this paper we consider a natural generalization of $k$-center clustering in the plane, where the objects which we must cover are general disjoint convex objects rather than points. 
Specifically, in the \emph{clustering with neighborhoods} problem the goal is to select $k$ center points so that balls centered at these points with minimum possible radius intersect all the convex objects. This generalization is natural as real-world objects may not be well modeled as individual points. This generalized setting has previously been considered for other classical point-based problems in the plane, such as the Traveling Salesperson Problem \cite{dm-aatspn-03}, where the authors referred to these objects as neighborhoods. (We instead typically refer to them as \emph{objects}.) To the best of our knowledge we are the first to consider the general problem of clustering convex objects in this context, though as we discuss below many closely related problems have been considered, some of which equate to special or extreme cases of our problem. We remark that since a point is a convex set, the hardness results for $k$-center clustering immediately apply to clustering with neighborhoods. \paragraph*{Related Work} As clustering is a fundamental data analysis task, countless variants have been considered. Here we focus on variants which share our $k$-center objective of minimizing the maximum radius of the balls at the chosen centers. Bandyapadhyay \textit{et~al.}\xspace \cite{bipv-cack-19} considered the colorful $k$-center problem, where the points are partitioned into color classes $P_1, \ldots, P_c$ and the goal is to find $k$ balls with minimum radius which cover at least $t_i$ points from each color class $P_i$. When our convex objects have bounded diameter our problem can be approximately cast as an instance of colorful $k$-center by replacing each object with the set $P_i$ of grid points it intersects and setting $t_i=1$. General colorful clustering, however, is more challenging as the color classes can be interspersed, which is why \cite{bipv-cack-19} assumes the number of color classes is a constant, allowing for a constant factor approximation, which subsequently was improved \cite{aakz-takcc-20,jss-fckc-20}. Note that colorful $k$-center itself generalizes the $k$-center with outliers problem \cite{ckmn-aflpo-01}, corresponding to the case with a single color class $P$ with $n-t$ outliers allowed. Xu and Xu \cite{xx-eaacp-10} considered the $k$-center clustering problem on point sets (KCS), where given point sets $S_1,\ldots,S_n$ the goal is to find $k$ balls of minimum radius such that each $S_i$ is entirely contained in one of the balls. Again when our objects have bounded diameter we can relate our problem to KCS by discretizing the objects. Their requirement that all of $S_i$ be covered by a single ball immediately implies that the optimal radius is at least the radius of the largest object, whereas in our case as only a single point of $S_i$ needs to be covered the radius can be arbitrarily smaller. In particular, while \cite{xx-eaacp-10} achieves a $(1+\sqrt{3})$-approximation, we show our problem cannot in general be approximated within any factor in polynomial time unless $\mathsf{P=NP}$. For the special case when $k=1$ or $k=2$, there are several prior results which closely relate to our problem. When $k=1$, i.e.\ the one-center problem, the solution can be derived from the farthest object Voronoi diagram, for which Cheong \textit{et~al.}\xspace \cite{c-fpvd-11} gave a near linear time algorithm for polygon objects. For disk objects, Ahn \textit{et~al.}\xspace \cite{a-cpdtc-13} gave a near quadratic time algorithm for the two-center problem.
Several papers have also considered generalizing to higher dimensions, but restricting the convex objects to affine subspaces of dimension $\Delta$. Gao \textit{et~al.}\xspace \cite{gls-aidiht-08} introduced the $1$-center problem for $n$ lines, achieving a linear time $(1+{\varepsilon})$-approximation, as well as a $(1+{\varepsilon})$-approximation for higher dimensional flats or convex sets whose running time depends exponentially on $\Delta$. Later in \cite{gls-clhs-10} the same authors considered the more challenging $k=2$ and $k=3$ cases for lines, providing a $(2+{\varepsilon})$-approximation in quasi-linear time. Subsequently, \cite{ls-casha-13} considered the problem for axis-parallel flats, where they provide an improved approximation for $k=1$, hardness results for $k=2$, and an approximation for larger $k$ where the time depends exponentially on both $k$ and $\Delta$. While our focus is on the $k$-center objective, we remark that $k$-means clustering for lines was considered by Marom and Feldman \cite{mf-kclbd-19}, who gave a PTAS for constant $k$. The $k$-center problem for points in a metric space can also be viewed as clustering the vertices according to the shortest path metric of a positively weighted graph. This allows one to consider specific graph classes, for example, Eisenstat \textit{et~al.}\xspace \cite{ekm-akpg-14} gave a polynomial time bi-criteria approximation scheme for $k$-center in planar graphs (i.e.\ they allow both the number of centers and radius to be violated). We remark, however, that for our problem, and the various others described above where the objects are not points, the complete graph with all pairwise distances between the objects, is not necessarily metric (i.e.\ it may not be its own metric completion). For example, the triangle inequality would be violated if you had two small convex objects (e.g.\ points) which are far from one another but both are close to some other large convex object. Note that this non-metric behavior is what allows us to prove a stronger hardness of approximation result than that for points in the plane \cite{fg-oaac-88}. Finally, we note that there is a polynomial time algorithm for $k$-center when $k$ is a constant and the objects are points in $d$-dimensional Euclidean space, for constant $d$. Specifically, Agarwal and Procopiuc \cite{ap-eaac-02} gave an $n^{O(k^{1-1/d})}$ time exact algorithm, as well as a $O(n\log k)+(k/{\varepsilon})^{O(k^{1-1/d})}$ time $(1+{\varepsilon})$-approximation. Later B\u{a}doiu \textit{et~al.}\xspace \cite{bhi-accs-02} removed the bounded dimension assumption, achieving a $2^{O((k\log k)/{\varepsilon}^2)}\cdot dn$ time $(1+{\varepsilon})$-approximation. \paragraph*{Our Contribution} In this paper we initiate the systematic study of the $\mathsf{NP}$-hard clustering with neighborhoods problem. While this problem allows centers to be placed anywhere in the plane, in \secref{ptas} we first argue that one can compute a cubic sized set of points $P$ and a cubic sized set of radii $R$, such that for any integer $k\geq 0$ there is an optimal set $S\subseteq P$ of $k$ centers with optimal radius $r_{opt}\in R$. This naturally leads to a PTAS for approximating the optimal number of centers by using Minkowski sums to reduce the problem to instances of geometric hitting set, for which there is a well known PTAS \cite{mr-irghsp-10}. 
Specifically, if $r_{opt}$ is the optimal radius for $k$ centers, then in $n^{O(1/{\varepsilon}^2)}$ time we can produce a set of $(1+{\varepsilon})k$ centers with radius $\leq r_{opt}$. In clustering problems, however, often the emphasis is on approximating the radius, while keeping $k$ as a hard constraint. In \secref{hard} we prove this problem is significantly harder, by adapting the hardness proof of \cite{fg-oaac-88} for planar $k$-center. Specifically, we show that the radius cannot be approximated within any factor in polynomial time unless $\mathsf{P=NP}$, even when the convex objects are restricted to disjoint line segments. On the other hand, for disjoint unit disks, a more in depth proof shows the problem is $\mathsf{APX}$-hard, and in particular cannot be approximated within $\frac{\sqrt{13}-\sqrt{3}}{2-\sqrt{3}}\approx 6.99$ in polynomial time unless $\mathsf{P=NP}$. Complementing this result, in \secref{balls} we present our main result, showing that when the objects are disjoint disks (of possibly varying radii) there is a $(5+2\sqrt{3})$-approximation for the optimal radius. Significantly, for the case of disks, our approximation factor of $5+2\sqrt{3} \approx 8.46$ is close to our hardness bound of $\frac{\sqrt{13}-\sqrt{3}}{2-\sqrt{3}}\approx 6.99$. Moreover, while our approximation holds for disks of varying radii, interestingly our hardness bound applies even for disks of uniform radii. Further probing the complexity of clustering with neighborhoods, in \secref{bounded} we show there is an FPTAS for unit disks when $k$ is bounded by a constant. Specifically, we give an $O(n\log k)+(k/{\varepsilon})^{O(k)}$ time $(1+{\varepsilon})$-approximation to the optimal radius, by carefully reducing to the algorithm of \cite{ap-eaac-02} for $k$-center. Finally in \secref{oned}, by utilizing the searching procedure of \cite{f-pslsct-91}, we show that in one dimension the problem can be solved exactly in $O(n\log n)$ time even when intersections are allowed, contrasting our hardness of approximation results in the plane. \section{Preliminaries} Given points $x,y\in \mathbb{R}^d$, $||x-y||$ denotes their Euclidean distance. Given two closed sets $X,Y\subset \mathbb{R}^d$, $\distX{X}{Y} = \min_{x\in X, y\in Y} ||x-y||$ denotes their distance. For a single point $x$ we write $\distX{x}{Y} = \distX{\{x\}}{Y}$. For a point $x$ and a value $r\geq 0$, let $B(x,r)$ denote the closed ball centered at $x$ and with radius $r$. Let $\mathscr{C}$ be a set of $n$ pairwise disjoint convex objects in the plane. For simplicity, we assume $\mathscr{C}$ is in general position. We work under the standard assumption that the objects in $\mathscr{C}$ are semi-algebraic sets of constant descriptive complexity. Namely, the boundary of each object is composed of a set of algebraic arcs where the sum of the degrees of these arcs is bounded by a constant, and any natural standard operation on such objects, such as computing the distance between any pair of objects, can be carried out in constant time. See Agarwal \textit{et~al.}\xspace \cite{ams-rsss-12} for a more detailed discussion of this model. Our analysis generalizes to the case where $n$ is the total complexity of $\mathscr{C}$ and individual objects in $\mathscr{C}$ are not required to have constant complexity, however, assuming constant complexity simplifies certain structural statements and the polynomial degree of $n$ in our running time statements. 
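To make the distance notation above concrete for the disk objects studied in later sections, the following short Python sketch is an informal illustration only (it is not an algorithm from this paper); representing a disk by a center--radius pair is our own convention, and we use the basic fact that $B(x,\rho)$ intersects an object $C$ exactly when $\distX{x}{C}\leq \rho$.
\begin{verbatim}
# Informal illustration of the distance notions for disk objects,
# each represented as (center, radius); not code from the paper.
import math

def dist_point_disk(x, disk):
    # Distance from a point x to a closed disk (c, r).
    c, r = disk
    return max(0.0, math.dist(x, c) - r)

def dist_disk_disk(d1, d2):
    # Distance between two closed disks; zero if they intersect.
    (c1, r1), (c2, r2) = d1, d2
    return max(0.0, math.dist(c1, c2) - r1 - r2)

def ball_intersects_disk(x, rho, disk):
    # B(x, rho) meets the disk iff the point-to-disk distance is at most rho.
    return dist_point_disk(x, disk) <= rho

D1, D2 = ((0.0, 0.0), 1.0), ((5.0, 0.0), 1.0)
print(dist_disk_disk(D1, D2))                      # 3.0
print(ball_intersects_disk((2.5, 0.0), 1.5, D2))   # True: distance is exactly 1.5
\end{verbatim}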
\begin{problem}[Clustering with Neighborhoods]\problab{main} Given a set $\mathscr{C}$ of $n$ disjoint convex objects in the plane, and an integer parameter $k\geq 0$, find a set of $k$ points $S$ (called centers) which minimize the maximum distance to a convex object in $\mathscr{C}$. That is, \[ S=\arg \min_{S'\subset \mathbb{R}^2, |S'|=k} \max_{C\in \mathscr{C}} \distX{C}{S'}. \] \end{problem} Let $S$ be any set of $k$ points, and let $r=\max_{C\in \mathscr{C}} \distX{C}{S}$. We refer to $r$ as the \emph{radius} of the solution $S$, since $r$ is the minimum radius such that the set of all balls $B(s,r)$ for $s\in S$, intersect all $C\in \mathscr{C}$. If $S$ is an optimal solution then we refer to its radius $r_{opt}$ as the optimal radius. In this paper we will consider two types of approximations. \footnote{We refrain from using the standard bi-criteria approximation terminology to emphasize that in each case only the size or only the radius is being approximated, not both. } Let $\mathscr{C}, k$ be an instance of \probref{main} with optimal radius $r_{opt}$. For a value $\alpha\geq 1$, we refer to a polynomial time algorithm as an \emph{$\alpha$-size-approximation} if it returns a solution $S$ of radius $\leq r_{opt}$ where $|S|\leq \alpha k$. Alternatively, we refer to a polynomial time algorithm as an \emph{$\alpha$-radius-approximation} if it returns a solution $S$ of radius $\leq \alpha r_{opt}$ where $|S|=k$. Often we refer to the latter radius case simply as an $\alpha$-approximation. \section{Canonical Sets and a PTAS for Approximating the Size} \seclab{ptas} In this section we show that while \probref{main} allows centers to be placed anywhere in the plane, we can compute a canonical cubic sized set of points $P$ and a set of corresponding radii $R$, such that for any integer $k\geq 0$ there is an optimal set $S\subseteq P$ of $k$ centers with optimal radius $r_{opt}\in R$. We then use this property to give a PTAS for \probref{main} when approximating the size of an optimal solution. Specifically, for any fixed ${\varepsilon}>0$, we give a $(1+{\varepsilon})$-size-approximation with running time $n^{O(1/{\varepsilon}^2)}$. In \secref{balls}, we will again use this canonical set when designing our constant factor radius-approximation for disks. The \emph{bisector} of two convex objects $C,C'$ is the set of all points $x$ in the plane such that $\distX{x}{C}=\distX{x}{C'}$. Let $\bisecX{C}{C'}$ denote the bisector of $C$ and $C'$. As discussed in \cite{ky-vdpco-03}, any set $\mathscr{C}$ of $n$ disjoint constant-complexity convex objects in general position satisfies the conditions of an abstract Voronoi diagram \cite{k-cavd-89}. In particular we can assume the following: \begin{enumerate}[1)] \item For any $C, C'\in \mathscr{C}$ we have that $\bisecX{C}{C'}$ is an unbounded simple curve. \item The intersection of any two bisectors is a discrete set with a constant number of points. \end{enumerate} We point out that in the following lemma there is a single pair of sets $P,R$ which works simultaneously for all values of $k$. \begin{lemma}\lemlab{canon} Let $\mathscr{C}$ be a set of $n$ disjoint convex objects. In $O(n^3 \log n)$ time one can compute a set of $O(n^3)$ points $P$, and a corresponding set of $O(n^3)$ radii $R$, such that for any value $k\geq 0$ for the instance $\mathscr{C},k$ of \probref{main} there is an optimal set of $k$ centers $S$ with optimal radius $r_{opt}$ such that $S\subseteq P$ and $r_{opt}\in R$. 
\end{lemma} \begin{proof} Let $I(\mathscr{C})$ be a set containing exactly one (arbitrary) point from each convex object in $\mathscr{C}$. For any number of centers $k$, let $S$ be any optimal solution, and let $r_{opt}$ be the optimal radius. Consider an arbitrary center $s\in S$. Let $\mathscr{C}'$ be the subset of objects in $\mathscr{C}$ which intersect the ball $B(s,r_{opt})$. We can assume $\mathscr{C}'$ is non-empty, as otherwise the center $s$ does not cover any convex object within radius $r_{opt}$ and so can be thrown out. Moreover, if $|\mathscr{C}'|=1$ then we can assume $s$ is the point from $I(\mathscr{C})$ which intersects this one convex object. So assume $|\mathscr{C}'|>1$, and let $C$ be the convex object in $\mathscr{C}'$ which lies furthest from $s$. Now consider moving $s$ continuously toward the convex object $C$. As we do so the distance from $s$ to $C$ monotonically decreases. Thus so long as $C$ remains the furthest convex object from $s$ in $\mathscr{C}'$, the ball $B(s,r_{opt})$ still intersects all of $\mathscr{C}'$ (i.e.\ we did not increase the solution radius). Now if $C$ always remains the furthest, when $s$ eventually reaches and intersects $C$ then this will imply its distance to all objects in $\mathscr{C}'$ is zero, which is a contradiction as we assumed the convex objects do not intersect. Otherwise, at some point $C$ is no longer the furthest, which implies we must have crossed a bisector $\bisecX{C}{C'}$ for some other convex object $C'\in \mathscr{C}'$. So far we have shown one can assume each center $s$ either is in $I(\mathscr{C})$, or lies on the bisector $\beta=\bisecX{C}{C'}$ of the two objects, $C,C'$, which lie furthest away from $s$ among the set of objects $\mathscr{C}'$ which intersect the ball $B(s,r_{opt})$. In the latter case, let $T_\beta$ denote the set of all points $p$ on $\beta$ such that there exists a third object $X\in \mathscr{C}$ such that $\distX{p}{X}=\distX{p}{C}$ (or equivalently $\distX{p}{X}=\distX{p}{C'}$). Note that such points lie at intersections of bisectors and thus from the above discussion before the lemma, we know $T_\beta$ is a discrete set. As $\beta$ is a simple curve, we can view points in $T_\beta$ as being ordered along $\beta$. Suppose that $s\notin T_\beta$, and let $p$ and $q$ be the points of $T_\beta$ which come immediately before and after $s$ along $\beta$, and let $[p,q]$ denote the portion of $\beta$ lying between these points. (This interval may be unbounded to one side if $s$ comes after or before all points in $T_\beta$.) Recall that $\mathscr{C}'$ is the subset of objects intersecting $B(s,r_{opt})$, and $C$ and $C'$ are the furthest from $s$ among those in $\mathscr{C}'$. Observe that for any other point $z$ in $[p,q]$, $C$ and $C'$ must also be the furthest objects from $z$ among those in $\mathscr{C}'$, as otherwise as we move continuously along $\beta$ from $s$ to $z$ we must cross another point from $T_\beta$ before reaching $z$ and there are no such points in $(p,q)$. Thus if we replace $s$ with the point in $[p,q]$ minimizing the distance to $C$ (or equivalently $C'$) then all objects previously intersected by $B(s,r_{opt})$ will remain intersected by $B(s,r_{opt})$. Let $M(\mathscr{C})$ be a set containing, for each bisector $\beta$, the set $T_\beta$ and one minimum distance point from each such interval $[p,q]$. We thus have argued that the points of $S$ can be assumed to lie in $P=I(\mathscr{C})\cup M(\mathscr{C})$. 
As for the running time and size of these sets, first observe that $I(\mathscr{C})$ has size $n$ and can be trivially computed in $O(n)$ time. For the set $M(\mathscr{C})$, first observe that there are $O(n^2)$ bisectors. For any bisector $\beta$, the set $T_\beta$ of intersection points of $\beta$ with other bisectors that are equidistant at the intersection point, has size $O(n)$, since by general position every point is equidistant to at most 3 objects and as mentioned above any pair of bisectors intersect in a constant number of points. (In other words, we ultimately consider all $O(n^3)$ points equidistant to three objects, as opposed to all $O(n^4)$ bisector intersections.) Thus the set $M(\mathscr{C})$, and correspondingly $P$, has size $O(n^3)$ as claimed. For the running time, as the objects in $\mathscr{C}$ all have constant complexity, so do their bisectors, and thus $T_\beta$ can be computed in $O(n)$ time. The minimum points of $M(\mathscr{C})$ on $\beta$, can thus be computed by sorting $T_\beta$ along $\beta$, in $O(n \log n)$ time, and then computing the minimum point in constant time for each constant complexity interval between consecutive pairs of points from $T_\beta$ along $\beta$. Thus over all $O(n^2)$ bisectors it takes $O(n^3 \log n)$ time to compute $M(\mathscr{C})$. \end{proof} We now argue the canonical sets $P$ and $R$ from the above lemma naturally lead to a PTAS for size-approximation by using Minkowski sums. For sets $A,B \subset \mathbb{R}^2$, let $A\oplus B = \{a+b\mid a\in A, ~b\in B\}$ denote their Minkowski sum. Let $B(r)$ denote the ball of radius $r$ centered at the origin. Then we write $\mathscr{C}\oplus B(r) = \{C\oplus B(r) \mid C\in \mathscr{C}\}$. A set of points $S$ is called a \emph{hitting set} for a set of objects if every object has non-empty intersection with $S$. \begin{observation} A set $S$ of $k$ centers is a solution to \probref{main} of radius $r$ if and only if $S$ is a hitting set of size $k$ for $\mathscr{C}\oplus B(r)$. This holds since for any $C\in \mathscr{C}$ and $s\in S$, $B(s,r)\cap C\neq \emptyset$ if and only if $s\in C\oplus B(r)$. \end{observation} In the geometric hitting set problem we are given a set $\mathcal{R}$ of $n$ regions and a set $P$ of $m$ points in the plane, and the goal is to select a minimum sized hitting set for $\mathcal{R}$ using points from $P$. The above observation implies we can reduce any given instance $\mathscr{C}, k$ of \probref{main} to multiple instances of geometric hitting set. Specifically, by \lemref{canon}, in $O(n^3\log n)$ time we can compute a set $R$ of $O(n^3)$ values, one of which must be the optimal radius $r_{opt}$. Then for each $r\in R$ we construct a hitting set instance where $\mathcal{R}=\mathscr{C}\oplus B(r)$, and $P$ is the set of points from \lemref{canon}. By the above observation, if $r<r_{opt}$, then the hitting set instance requires more than $k$ points, and if $r\geq r_{opt}$ then it requires at most $k$ points. Therefore, given an algorithm for geometric hitting set we can use it to binary search for $r_{opt}$. While hitting set is in general $\mathsf{NP}$-hard to approximate within logarithmic factors \cite{rs-scep-97}, in our case there is a PTAS as the regions are nicely behaved. A collection of regions in the plane is called a set of \emph{pseudo-disks} if the boundaries of any two distinct regions in the set cross at most twice. 
Mustafa and Ray \cite{mr-irghsp-10} showed that there is an $nm^{O(1/{\varepsilon}^2)}$ time PTAS for geometric hitting set when $\mathcal{R}$ is a collection of $n$ pseudo-disks and $P$ is a set of $m$ points. It is known that if we take the Minkowski sum of a single convex object with each member of a set of disjoint convex objects, then the resulting set is a collection of pseudo-disks (see for example \cite{aps-su-08}). Thus $\mathscr{C}\oplus B(r)$ is a collection of pseudo-disks. Therefore, by the above discussion, we have the following theorem. As the decision procedure is now approximate, the binary search must be modified to look at larger radii when the hitting set algorithm returns $>(1+{\varepsilon})k$ points, and smaller radii otherwise. (This yields an adjacent pair $r<r'$ such that $r<r_{opt}$, implying $r'\leq r_{opt}$, and an $r'$-cover of the input using $\leq (1+{\varepsilon})k$ points.) \begin{theorem} There is a PTAS for \probref{main} for approximating the optimal solution size. That is, for any fixed ${\varepsilon}>0$, there is a $(1+{\varepsilon})$-size-approximation with running time $n^{O(1/{\varepsilon}^2)}$. \end{theorem} We remark that the PTAS of \cite{mr-irghsp-10} implicitly assumes the objects are in general position, that is if two objects intersect then they properly intersect (i.e.\ their interiors intersect). While $\mathscr{C}$ satisfies this property, it may not after we take the Minkowski sum with a given radius. However, as we can compute distances between our objects, this is easily overcome by computing the smallest non-zero distance $d$ between two objects in $\mathscr{C}\oplus B(r)$, and instead running the hitting set algorithm on $\mathscr{C}\oplus B(r+\alpha)$, where $\alpha$ is some infinitesimal value less than $d/2$. This ensures any objects which intersected in $\mathscr{C}\oplus B(r)$ now properly intersect, and there are no new intersections. \section{Radius Approximation Hardness} \seclab{hard} In this section we argue that for \probref{main} it is hard to approximate the radius within any factor, even when $\mathscr{C}$ is restricted to being a set of line segments. Moreover, for the case when $\mathscr{C}$ is a set of disks, i.e.\ the case considered in \secref{balls}, we argue the problem is $\mathsf{APX}$-Hard. Our hardness results use a construction similar to the one from \cite{fg-oaac-88}, which reduces from vertex cover on planar graphs where the maximum degree of a vertex is three, a problem known to be $\mathsf{NP}$-complete \cite{vc3}. We denote this problem as $\mathsf{P3VC}$. \subsection{Line Segments} Here we argue that it is hard to radius-approximate \probref{main} within any factor, even when $\mathscr{C}$ is a set of line segments. We remark that the following reduction works for any instance of planar vertex cover (i.e.\ regardless of the degree), but the reduction for disks in the next subsection uses that the degree is at most three. \begin{theorem}\thmlab{seghard} \probref{main} cannot in polynomial time be radius-approximated within any factor that is computable in polynomial time unless $\mathsf{P}=\mathsf{NP}$, even when restricting to the set of instances in which $\mathscr{C}$ is a set of disjoint line segments. \end{theorem} \begin{figure} \caption{Reducing planar Vertex Cover to \probref{main} for segments.} \figlab{segments} \end{figure} \begin{proof} Let $G,k$ be an instance of $\mathsf{P3VC}$. Consider a straight line embedding of $G$, and let $d$ denote the distance between the closest pair of non-adjacent segment edges.
\footnote{In $O(n\log n)$ time one can compute a straight line embedding of $G$ where the vertices are on a $(2n-4) \times (n-2)$ grid \cite{dpp-hdpgg-90}. This implies a lower bound on $d$ with a polynomial number of bits.} Let ${\varepsilon}>0$ be a value strictly smaller than $d/2$ and strictly smaller than half the length of any segment edge. The set $\mathscr{C}$ of segments in our instance of \probref{main} will be the segment edges from the embedding, but where each segment has an ${\varepsilon}$ amount removed from each end, i.e.\ we remove all portions of segments in ${\varepsilon}$ balls around the vertices, see \figref{segments}. We use the same value of $k$ in our \probref{main} instance as in the $\mathsf{P3VC}$ instance. If there is a vertex cover of size at most $k$, then placing balls of radius ${\varepsilon}$ at the $k$ corresponding vertices of the embedding intersects all segments in $\mathscr{C}$, i.e.\ we have a solution to \probref{main} of radius ${\varepsilon}$. On the other hand, by the definition of $d$, any ball of radius $<d/2$ cannot simultaneously intersect two segments from $\mathscr{C}$ if they correspond to non-adjacent edges from $G$. (Note when we shrunk the edges by ${\varepsilon}$ this could only have made them further apart.) Thus if the minimum vertex cover requires $>k$ vertices, then our instance of \probref{main} requires $>k$ centers if we limit to balls with radius $<d/2$. Therefore, if we could approximate the minimum radius of our \probref{main} instance within any factor less than $d/(2{\varepsilon})$ then we could determine whether the corresponding vertex cover instance had a solution with $\leq k$ vertices. However, we are free to make ${\varepsilon}>0$ as small as we want and thus $d/(2{\varepsilon})$ as large as we want, so long as this quantity (or more precisely a lower bound on it) is computable in polynomial time. \end{proof} \subsection{Disks} Here we argue that it is hard to radius-approximate \probref{main} within a constant factor when $\mathscr{C}$ is restricted to be a set of unit disks. The following reduction from $\mathsf{P3VC}$ is similar to the one given in \cite{fg-oaac-88}, which embeds the graph such that edges are replaced by odd length sequences of points. In our case, these odd length sequences of points are instead replaced with odd length sequences of appropriately spaced disks. \begin{theorem}\thmlab{hardmain} For the set of instances in which $\mathscr{C}$ is a set of disjoint unit disks, \probref{main} cannot be radius-approximated to any factor less than $\frac{\sqrt{13} - \sqrt{3}}{2-\sqrt{3}}$ in polynomial time unless $\mathsf{P=NP}$. \end{theorem} \begin{proof} To simplify our construction description, instead of requiring the disks be disjoint, we allow them to intersect at their boundaries, but not their interiors. Later we remark how this easily implies the result for the disjoint disk case. So let $G=(V,E),k$ be an instance of $\mathsf{P3VC}$. For every edge in $E$ we create a sequence of an odd number (greater than 1) of unit disks, where consecutive disks in the sequence are spaced $2(2/\sqrt{3}-1)$ apart from one another. (Note $2(2/\sqrt{3}-1)$ is the distance between the disks, not their centers.) For a vertex $v$ of degree two, we place the disks corresponding to the $v$ end of the adjacent edges again at distance $2(2/\sqrt{3}-1)$ apart.
For a vertex $v$ of degree three, we place the disks corresponding to the $v$ end of the adjacent edges such that they all just touch one another at their boundaries, see \figref{threecircles}. Thus the centers of these disks form an equilateral triangle, and let the center point of this triangle be $t$. For any one of the adjacent edges, we further require that the centers of the first two disks (on the $v$ end of the edge) lie on a straight line containing $t$, in other words the edges leaving $v$ do not bend until several disks away from $v$. As $G$ is a planar graph with maximum degree three, such an embedding of polynomial size is possible, similar to the case in \cite{fg-oaac-88}. Doing so requires using different numbers of disks for each edge and allowing the edges to bend (i.e.\ the centers of three consecutive disks of an edge may not lie on a line). However, we will require these bends to be gradual. Specifically, observe that if the centers of three consecutive disks of an edge were on a straight line, the distance between the two non-consecutive disks would be $2(1+2(2/\sqrt{3}-1))>2.6$, see \figref{twocircles}. We then require that the bends are shallow enough such that two non-consecutive disks of an edge are more than $2.5$ apart. We also require this for disks from edges adjacent to a degree two vertex (when they are not both the disks immediately adjacent at the vertex), or a degree three vertex when neither disk is one of the corresponding three touching disks of the vertex. Finally, for disks that come from edges that are not adjacent, we easily enforce that they are again more than 2.5 apart. (This is similar to the value $d$ from \thmref{seghard}.) \begin{figure} \caption{Consecutive disks along an edge.} \figlab{twocircles} \end{figure} So given an instance $G,k$ of $\mathsf{P3VC}$, we construct an instance $\mathscr{C}, \kappa$ of \probref{main} where $\mathscr{C}$ is determined from $G$ as described above and $\kappa=k+(|\mathscr{C}|-|E|)/2$. We first argue if $G$ has a vertex cover of size $k$ then for our instance of \probref{main} there is a solution of radius $2/\sqrt{3}-1$. First, for any vertex $v$ in the vertex cover we create a center, and roughly speaking place it at the location of $v$ in the embedding. Namely, if $v$ had degree two then we place the center at the midpoint of the centers of the disks at the ends of the edges adjacent to $v$, which by construction are exactly $2(2/\sqrt{3}-1)$ apart and thus a ball at the midpoint with radius $2/\sqrt{3}-1$ intersects both. If $v$ has degree three then we place a center at the center point $t$ of the equilateral triangle determined by three touching disks of the adjacent edges. An easy calculation\footnote{For an equilateral triangle with edge length 2, the distance from a vertex to the center point of the triangle is $2/\sqrt{3}$, thus the distance from the center point to any one of the unit balls is $2/\sqrt{3}-1$.} shows that, since our disks have unit radius, $B(t, 2/\sqrt{3}-1)$ intersects the three touching disks. We now cover the remaining disks with $(|\mathscr{C}|-|E|)/2$ centers. For any edge $e\in E$ let $n_e$ be the number of disks used for $e$ in the above construction. Observe that as we already placed centers at vertices corresponding to a vertex cover of the edges, at least one disk at the end of each edge is already covered, and so there are at most $n_e-1$ consecutive disks that need to be covered. (Note $n_e-1$ is even.)
However, as consecutive disks are exactly $2(2/\sqrt{3}-1)$ apart on each edge, these $n_e-1$ disks can be covered with $(n_e-1)/2$ balls of radius $(2/\sqrt{3}-1)$ by covering the disks in pairs. Thus the total number of centers used is $k+\sum_{e\in E} (n_e-1)/2 = k+(|\mathscr{C}|-|E|)/2=\kappa$. \begin{figure} \caption{Three touching disks corresponding to a degree three vertex.} \figlab{threecircles} \end{figure} Now suppose the minimum vertex cover of $G$ requires $>k$ vertices. In this case we argue that our instance of \probref{main} requires more than $\kappa$ centers if we limit to balls with radius $<\sqrt{13/3}-1$. Call any two disks in $\mathscr{C}$ neighboring if they are consecutive on an edge or if they are disks on the $v$ end of two edges adjacent to a vertex $v$. By construction, neighboring disks have distance $\leq 2(2/\sqrt{3}-1)$ from each other. For a pair of disks which are not neighboring we now argue their distance is at least $2\sqrt{13/3} -2$. Specifically, if these disks come from the same edge but are not consecutive along that edge, or if they are from distinct edges that are either non-adjacent or are adjacent to a degree two vertex (but not the two disks of that vertex), then by construction their distance is $>2.5>2\sqrt{13/3} -2$. The remaining case is when the disks are from distinct edges adjacent to a degree three vertex, but they are not both from the three touching disks of the vertex. It is easy to see that the closest such a pair of disks can be is when one of the disks is one of the three touching disks, and the other is the second disk on another edge. We now calculate the distance between two such disks, see \figref{threecircles}. Let the three touching disks be denoted $D_1$, $D_2$, and $D_3$, with centers $d_1$, $d_2$, and $d_3$, respectively. Let $D_2'$ denote the second disk on the edge containing $D_2$, and let its center be $d_2'$. We wish to compute $\distX{D_1}{D_2'} =\distX{d_1}{d_2'}-2$, as these are unit disks. Let $m$ denote the midpoint of $d_1$ and $d_3$, and observe that the line through $d_2$ and $d_2'$ passes through $m$ and is orthogonal to the line through $d_1$ and $d_3$, as the points $d_1$, $d_2$, and $d_3$ form an equilateral triangle. Thus by the Pythagorean theorem we have $\distX{d_1}{d_2'}^2 = 1^2+(1+2(2/\sqrt{3}-1)+1+\sqrt{3})^2 = 1+(4/\sqrt{3}+\sqrt{3})^2=52/3$, where the $+\sqrt{3}$ term is the height of an equilateral triangle of side length $2$. Thus $\distX{D_1}{D_2'} =\distX{d_1}{d_2'}-2 = 2\sqrt{13/3} -2$. Now we finish the argument that when the minimum vertex cover of $G$ requires $>k$ vertices, our instance of \probref{main} requires more than $\kappa$ centers if we limit to balls with radius $<\sqrt{13/3}-1$. By the above, limiting to radius $<\sqrt{13/3}-1$ implies that any ball covers only pairwise neighboring disks; in particular, it covers at most two disks from any single edge. An edge $e$ with $n_e$ disks thus requires at least $\lceil n_e/2 \rceil = 1+(n_e-1)/2$ balls to cover it. Moreover, a ball can only cover both a disk of $e$ and a disk of $e'$ if those disks are on the $v$ end of two edges adjacent to a common vertex $v$. Let $E_z$ be the subset of edges with at least one disk covered by such a ball (i.e.\ a ball corresponding to a vertex), and let $z$ be the number of such balls. Then the total number of balls required is \begin{align*} &\geq z+\sum_{e\in E_z} (n_e-1)/2 + \sum_{e\in E\setminus E_z} (1+(n_e-1)/2)\\ &= z + (|\mathscr{C}|-|E|)/2 + |E\setminus E_z| = z+(\kappa-k) + |E\setminus E_z|, \end{align*} which is more than $\kappa$ when $z+|E\setminus E_z|> k$.
Notice, however, there is a vertex cover of $G$ of size $z+|E\setminus E_z|$, consisting of the vertices that $z$ counted, and one vertex from either end of each edge in $E\setminus E_z$. Thus as the minimum vertex cover has size $>k$, we have $z+|E\setminus E_z|>k$ as desired. Therefore, if we could approximate the minimum radius of our \probref{main} instance within any factor less than $\frac{\sqrt{13/3}-1}{2/\sqrt{3}-1} = \frac{\sqrt{13} - \sqrt{3}}{2-\sqrt{3}}$ then we can determine whether the corresponding vertex cover instance had a solution with $\leq k$ vertices. In the above analysis the boundaries of the circles were allowed to intersect, but we can enforce that all disks are disjoint without changing the approximation hardness factor since we showed the problem is hard for any factor that is less than $\frac{\sqrt{13} - \sqrt{3}}{2-\sqrt{3}}$. Specifically, rather than having the disks for a degree three vertex touch, we can instead make them arbitrarily close to touching. \end{proof} \section{Constant Factor Radius Approximation for Disks} \seclab{balls} In this section we argue that when $\mathscr{C}$ is a set of disjoint disks (of possibly differing radii), that there is a constant factor radius-approximation for \probref{main}. \begin{lemma} \lemlab{arc} Let $\mathscr{C}$ be a set of pairwise disjoint disks such that for all $C\in \mathscr{C}$, the radius of $C$ is $\geq r$. If there is a point $s\in \mathbb{R}^2$ where $\distX{s}{C}\leq (2/\sqrt{3}-1)r$ for all $C\in \mathscr{C}$, then $|\mathscr{C}|\leq 2$. \end{lemma} \newcommand{\twoproof}{ \begin{proof} We give a proof by contradiction. So suppose there exists a point $s$ such that there are three disjoint disks in $\mathscr{C}$, each with radius $\geq r$, and all of which intersect the ball $B(s,(\frac{2}{\sqrt{3}}-1)r)$. Observe that if any one of these three disjoint disks $C$ has radius $>r$, then it can be replaced by a disk $C'$ of radius $r$ such that $C'\subset C$ and $C'$ still intersects $B(s,(\frac{2}{\sqrt{3}}-1)r)$. As these new disks are all still disjoint and intersect $B(s,(\frac{2}{\sqrt{3}}-1)r)$, it suffices to argue we get a contradiction when all three disks have radius exactly $r$. Let the centers of these three disks be denoted $x$, $y$, and $z$. Now, at least one of the angles $\angle xsy$, $\angle ysz$, and $\angle zsx$ is $\leq 2\pi/3$. Without loss of generality assume it is $\angle xsy$, and let $\gamma=\angle xsy$. Consider the triangle $\triangle sxy$, and let its side lengths be denoted $a=\distX{x}{s}, b=\distX{y}{s}, c=\distX{x}{y}$. Since $\gamma\leq 2\pi/3$, by the Law of Cosines we thus have $c^2 = a^2 + b^2 - 2ab \cos(\gamma) \leq a^2 + b^2 + ab$. As the $r$ radius disks with centers $x$ and $y$ are disjoint, we know that $2r<c$. Combining these two inequalities we get $4r^2 < a^2 + b^2 + ab$. As $B(s,(\frac{2}{\sqrt{3}}-1)r)$ intersects the $r$ radius disks centered at both $x$ and at $y$, we also have that $a,b\leq (\frac{2}{\sqrt{3}}-1)r + r = \frac{2r}{\sqrt{3}}$. Combining this with the previous inequality gives $4r^2 < a^2 + b^2 + ab\leq 4r^2/3+4r^2/3+4r^2/3 = 4r^2$, which is a clear contradiction and thus the number of disks in $\mathscr{C}$ is at most 2. 
\end{proof} } \twoproof For any constant $c\geq 1$, we call an algorithm a \emph{$c$-decider} for \probref{main}, if for a given instance with optimal radius $r_{opt}$, and for any given query radius $r$, if $r\geq r_{opt}$ then the algorithm returns a solution $S$ of radius $\leq cr$, and if $r< r_{opt}/c$ it returns False (for $r_{opt}/c\leq r<r_{opt}$ either answer is allowed). \begin{lemma}\lemlab{decider} There is an $O(n^{2.5})$ time $(5+2\sqrt{3})$-decider for \probref{main}, when restricted to instances where $\mathscr{C}$ is a set of disjoint disks. \end{lemma} \begin{proof} Let $r$ be the given query radius. We build a set $S$ of centers as follows, where initially $S=\emptyset$. Let $P$ be the set of center points of all disks in $\mathscr{C}$ with radius $<(3+2\sqrt{3}) r$. Until $P$ is empty repeatedly add an arbitrary point $p\in P$ to the set $S$, remove all disks from $\mathscr{C}$ which intersect $B(p,(5+2\sqrt{3})r)$, and remove all center points from $P$ corresponding to disks removed from $\mathscr{C}$. Let $S_1$ refer to the resulting set of centers. For the remaining set of disks $\mathscr{C}'$, define the subset $\mathscr{C}'' = \{C\in \mathscr{C}' \mid \exists~ D\in \mathscr{C}'\setminus\{C\} \text{ s.t. } \distX{C}{D}\leq 2r\}$. First, for every disk $C$ in $\mathscr{C}'\setminus \mathscr{C}''$ we add an arbitrary point from $C$ to $S$. Let this set of added centers be denoted $S_2$. Now for the set $\mathscr{C}''$ we construct a graph $G=(V,E)$ where $V=\mathscr{C}''$ and there is an edge from $C$ to $D$ if and only if $\distX{C}{D}\leq 2r$. Let $\mathcal{E}$ be a minimum edge cover of $G$. (Note every vertex in $G$ has an adjacent edge by the definition of $\mathscr{C}''$ and thus $\mathcal{E}$ exists.) For every edge $(C,D)\in \mathcal{E}$, $\distX{C}{D}\leq 2r$ and thus there is a point $p\in \mathbb{R}^2$ such that $\distX{p}{C}, \distX{p}{D}\leq r$. So finally, for each $(C,D)\in \mathcal{E}$ we add this corresponding point $p$ to $S$. Let this final set of added centers be denoted $S_3$. If $|S|\leq k$ we return $S$ (which is the disjoint union of $S_1$, $S_2$, and $S_3$) and otherwise we return False. To prove the above algorithm is a $(5+2\sqrt{3})$-decider, first we argue that if $r<r_{opt}/(5+2\sqrt{3})$ then it returns False. To do so we prove the contrapositive. So assume $|S|\leq k$. Let $S_1$, $S_2$, and $S_3$, and $\mathscr{C}'' \subseteq \mathscr{C}' \subseteq \mathscr{C}$ be as defined above. As we used balls of radius $(5+2\sqrt{3})r$, all $C\in \mathscr{C}\setminus \mathscr{C}'$ are within distance $(5+2\sqrt{3})r$ of points in $S_1$. All $C \in \mathscr{C}'\setminus \mathscr{C}''$ have distance zero to a point in $S_2$. Finally, all $C \in \mathscr{C}''$ have distance $\leq r$ to a point in $S_3$. As $S$ is the disjoint union of $S_1$, $S_2$, and $S_3$, we thus have that all $C\in \mathscr{C}$ are within distance $(5+2\sqrt{3})r$ to a set $S$ with $\leq k$ points, which by the definition of \probref{main} means that $r_{opt}\leq (5+2\sqrt{3})r$. Now suppose $r\geq r_{opt}$, where $r_{opt}$ is the optimal radius for the given instance $\mathscr{C}, k$ of \probref{main}. In order to prove the algorithm is a $(5+2\sqrt{3})$-decider, in this case we must argue it returns a $\leq (5+2\sqrt{3})r$ radius solution. As already shown above, if the algorithm returns a solution then it has radius $\leq (5+2\sqrt{3})r$, thus all we must argue is that a solution is returned, namely that $|S|\leq k$. 
So fix an optimal solution $S^*$ for the original input instance $\mathscr{C}, k$. We argue that there are disjoint subsets $S^*_1$, $S^*_2$, and $S^*_3$ of $S^*$ such that $|S^*_1|\geq |S_1|$, $|S^*_2|\geq |S_2|$, and $|S^*_3|\geq |S_3|$, and therefore $|S|\leq |S^*|=k$. Let the points in $S_1=\{t_1,\ldots, t_{|S_1|}\}$ be indexed in the order they were selected. Consider the point $t_i$, which is the center of some disk $C_i\in \mathscr{C}$ with radius $\leq (3+2\sqrt{3})r$. Let $U_i = \cup_{j\leq i} B(t_j,(5+2\sqrt{3})r)$. Define $S^*_1$ as the centers $s$ of $S^*$ such that $B(s,r)\subseteq U_{|S_1|}$. To argue $|S_1^*|\geq |S_1|$, it suffices to argue that for all $i$ there exists some $s\in S^*$ such that $B(s,r)\not \subseteq U_{i-1}$ while $B(s,r)\subseteq U_{i}$ (i.e.\ $s$ gets charged uniquely to $t_i$). Now there must be some center $s\in S^*$ such that $B(s,r_{opt})\cap C_i\neq \emptyset$, as $S^*$ covers $\mathscr{C}$ with radius $r_{opt}$. Moreover, since $r\geq r_{opt}$, we have $B(s,r)\not \subseteq U_{i-1}$, since otherwise it would imply $U_{i-1}\cap C_i\neq \emptyset$ and thus $t_i$ could not have been selected in the $i$th round as the algorithm had already removed it from $P$. On the other hand, $B(s,r)\subseteq U_{i}$, since $B(s,r)$ intersects $C_i$ and $C_i$ has radius $\leq (3+2\sqrt{3})r$, and thus $B(s,r)\subseteq B(t_i,(5+2\sqrt{3})r)\subseteq U_{i}$. Therefore $|S_1^*|\geq |S_1|$. For any $s\in S^*_1$, $B(s,r)\subseteq U_{|S_1|}$, and since the disks of $\mathscr{C}'$ do not intersect $U_{|S_1|}$, in the optimal solution $\mathscr{C}'$ must be $r_{opt}$-covered only using centers from $S^*\setminus S^*_1$. Let $S^*_2$ be the subset of centers from $S^*\setminus S^*_1$ which $r_{opt}$-cover disks of $\mathscr{C}'\setminus \mathscr{C}''$. Since any disk $C\in (\mathscr{C}'\setminus \mathscr{C}'')$ has distance $>2r$ to its nearest neighbor in $\mathscr{C}'\setminus \{C\}$ and $r_{opt}\leq r$, the optimal solution must use a distinct center to cover each disk in $\mathscr{C}'\setminus \mathscr{C}''$, i.e.\ $|S^*_2|\geq |S_2|$, and moreover, $\mathscr{C}''$ must be covered in the optimal solution by $S^*\setminus (S^*_1\cup S^*_2)$. So finally, let $S^*_3$ be the subset of centers from $S^*\setminus (S^*_1\cup S^*_2)$ which $r_{opt}$-cover disks of $\mathscr{C}''$. By construction, the radius of each $C\in \mathscr{C}''$ is $\geq (3+2\sqrt{3})r$. Thus, by \lemref{arc}, any point from $S^*_3$ can cover at most 2 disks from $\mathscr{C}''$ within radius $(2/\sqrt{3}-1)\cdot (3+2\sqrt{3})r = r\geq r_{opt}$. Now the graph $G$, for which our algorithm computes a minimum edge cover $\mathcal{E}$, contains an edge for every pair of disks which can be simultaneously covered with a single $r$ radius ball. Therefore $|S^*_3|\geq |\mathcal{E}| = |S_3|$. For the running time, computing the set $P$ takes $O(n)$ time. Selecting a new point $p\in P$ and removing all disks from $\mathscr{C}$ which intersect $B(p,(5+2\sqrt{3})r)$ can be done in $O(n)$ time, and thus repeating this till $P$ is empty takes $O(n^2)$ time. Determining the subset $\mathscr{C}''$, and hence the graph $G$, can naively be done in $O(n^2)$ time by checking the distances between all pairs in $\mathscr{C}'$. Selecting a point from each $C\in (\mathscr{C}'\setminus \mathscr{C}'')$ takes $O(n)$ time. Finally, since computing a minimum edge cover can be reduced to computing a maximum matching, $\mathcal{E}$ can be found in $O(n^{2.5})$ time (see \cite{mv-afmmgg-80}).
\end{proof} We remark that it should be possible to improve the running time of the above decision procedure, by arguing that the graph $G$ it constructs is sparse. However, ultimately that will not improve the running time of the following optimization procedure, as it searches over the $O(n^3)$ sized set of \lemref{canon}. \begin{theorem}\thmlab{radiusapprox} There is an $O(n^3 \log n)$ time $(5+2\sqrt{3})$-radius-approximation algorithm for \probref{main}, when restricted to instances where $\mathscr{C}$ is a set of disjoint disks. \end{theorem} \begin{proof} By \lemref{canon}, in $O(n^3\log n)$ time we can compute an $O(n^3)$ sized set $R$ of values, such that $r_{opt}\in R$, where $r_{opt}$ is the optimal radius. So sort the values in $R$, and then binary search over them using the $(5+2\sqrt{3})$-decider of \lemref{decider}, which we denote $\mathsf{decider}(r)$. Specifically, if $\mathsf{decider}$ returns False we recurse to the right, and if it returns a solution (i.e.\ True) then we recurse on the left. Note that since our decision procedure is approximate, the values for which it returns True or for which it returns False may not be contiguous in the sorted order of $R$. Regardless, however, our binary search allows us to find a pair $r'<r$ which are consecutive in $R$ and such that $\mathsf{decider}(r')$ is False, and $\mathsf{decider}(r)$ is True. (Unless $\mathsf{decider}$ always returns True, in which case the search returns the solution for the smallest value in $R$.) By \lemref{decider}, $\mathsf{decider}$ is a $(5+2\sqrt{3})$-decider, and thus, since $\mathsf{decider}(r')$ is False, by definition we have that $r'<r_{opt}$. However, as $r'<r$ are consecutive in the sorted order of $R$ and since $r_{opt}\in R$, this implies $r_{opt}\geq r$. On the other hand, again by the definition of a $(5+2\sqrt{3})$-decider, $\mathsf{decider}(r)$ outputs a solution with radius at most $(5+2\sqrt{3})r\leq (5+2\sqrt{3})r_{opt}$, thus giving us a $(5+2\sqrt{3})$-approximation as claimed. By \lemref{canon}, computing and sorting the $O(n^3)$ values in $R$ takes $O(n^3\log n)$ time. By \lemref{decider} each call to $\mathsf{decider}$ takes $O(n^{2.5})$ time, and since we are binary searching over $O(n^3)$ values, the time for all calls to $\mathsf{decider}$ is $O(n^{2.5}\log( n^{3}))=O(n^{2.5}\log n)$. Thus the total time is $O(n^3\log n)$ as claimed. \end{proof} Our focus in this paper is on the planar case; however, in \apndref{extension} we remark how the above decision procedure works in higher dimensions. The above optimization procedure does not immediately extend as it makes use of \lemref{canon}; however, in the appendix we informally sketch how one can approximately recover the same result. \section{An Efficient FPTAS for Bounded k} \seclab{bounded} By \lemref{canon}, we can compute a set of $O(n^{3})$ points which contains a subset of size $k$ that is an optimal $k$-center solution. Thus, when $k$ is constant, enumerating all $O(n^{3k})$ possible subsets, and taking the minimum cost solution found, yields a polynomial time algorithm. In this section, we argue that for constant $k$, we can achieve a $(1+{\varepsilon})$-radius-approximation for unit disks, whose running time depends only linearly on $n$. Contrast this with \thmref{hardmain}, where we argued that when $k$ is not assumed to be constant, the problem is hard to approximate for unit disks within a given constant factor. We use the following from Agarwal and Procopiuc \cite{ap-eaac-02}.
\newcommand{\Algorithm{kCenter}\xspace}{\Algorithm{kCenter}\xspace} \begin{theorem}[\cite{ap-eaac-02}]\thmlab{ptas} Given a set $P$ of $n$ points in the plane, there is an $O(n\log k)+(k/{\varepsilon})^{O(\sqrt{k})}$ time $(1+{\varepsilon})$-radius-approximation algorithm for $k$-center, denoted $\Algorithm{kCenter}\xspace({\varepsilon},P)$. \end{theorem} \begin{theorem} \thmlab{newptas} There is an $O(n\log k)+(k/{\varepsilon})^{O(k)}$ time $(1+{\varepsilon})$-radius-approximation algorithm for \probref{main}, when restricted to instances where $\mathscr{C}$ is a set of disjoint unit disks. \end{theorem} \begin{proof} Let $P$ denote the set of center points of the disks in $\mathscr{C}$. For any given set $S$ of $k$ points in the plane, let $r_P(S) = \max_{p\in P} \distX{p}{S}$ and $r_\mathscr{C}(S) = \max_{C\in \mathscr{C}} \distX{C}{S}$. Observe that $r_\mathscr{C}(S) \leq r_P(S)\leq r_\mathscr{C}(S)+1$. Specifically, $r_\mathscr{C}(S)\leq r_P(S)$ since any ball (in particular one centered at a point from $S$) which contains a center point from $P$ also intersects the corresponding disk in $\mathscr{C}$. On the other hand, $r_P(S)\leq r_\mathscr{C}(S)+1$ since for any ball intersecting a disk in $\mathscr{C}$, if we increase its radius by $1$ then it will contain the center point of that disk, as $\mathscr{C}$ consists of unit disks. Let $r_{opt}$ denote the optimum radius for the given instance $\mathscr{C},k$ of \probref{main}. We consider two cases based on the value of $r_{opt}$. First, suppose that $r_{opt}>2/{\varepsilon}$. Let $S'$ denote the solution returned by $\Algorithm{kCenter}\xspace({\varepsilon}/3,P)$. By the above inequalities and \thmref{ptas}, \begin{align*} &r_\mathscr{C}(S') \leq r_P(S') \leq (1+{\varepsilon}/3) \min_{S\subset \mathbb{R}^2, |S|=k} r_P(S) \leq (1+{\varepsilon}/3) (1+\min_{S\subset \mathbb{R}^2, |S|=k} r_\mathscr{C}(S))\\ &\!= (1+{\varepsilon}/3)(1+r_{opt}) < (1+{\varepsilon}/3)({\varepsilon} r_{opt}/2+r_{opt}) = (1+{\varepsilon}/3)(1+{\varepsilon}/2) r_{opt} \leq (1+{\varepsilon})r_{opt}, \end{align*} where the last inequality assumed ${\varepsilon}\leq 1$. Thus $S'$ is $(1+{\varepsilon})$-approximation for \probref{main}. Now suppose that $r_{opt}\leq 2/{\varepsilon}$. In this case observe that for any point $x\in \mathbb{R}^2$, the ball $B(x,r_{opt})$ can intersect only $O(1/{\varepsilon}^2)$ disks from $\mathscr{C}$ as they are disjoint and all have radius 1. Thus any center from the optimal solution can cover at most $O(1/{\varepsilon}^2)$ disks within the optimal radius, and so it must be that $n = O(k/{\varepsilon}^2)$. The algorithm is now straightforward. If $n \leq \gamma k/{\varepsilon}^2$, for some sufficiently large constant $\gamma$, then by \lemref{canon} in $O((k/{\varepsilon}^2)^3 \log (k/{\varepsilon}))$ time we can compute a set $P$ of $O((k/{\varepsilon}^2)^3)$ points such that $P$ contains an optimal set of $k$ centers. We try all possible subsets of $P$ of size $k$ and take the best one. There are $O((k/{\varepsilon}^2)^{3k})$ such subsets, and for each subset its cost can be determined in $O(kn) = O((k/{\varepsilon})^2)$ time. Thus in this case we can compute the optimal solution in $O((k/{\varepsilon})^2 \cdot (k/{\varepsilon}^2)^{3k}) = (k/{\varepsilon})^{O(k)}$ time. On the other hand, if $n > \gamma k/{\varepsilon}^2$ then the above implies $r_{opt}> 2/{\varepsilon}$. 
In this case it was argued above that $\Algorithm{kCenter}\xspace({\varepsilon}/3,P)$ returns a $(1+{\varepsilon})$-approximation, and by \thmref{ptas} it does so in $O(n\log k)+(k/{\varepsilon})^{O(\sqrt{k})}$ time. In either case, we have a $(1+{\varepsilon})$-approximation (or better) and the total time is $\max\{(k/{\varepsilon})^{O(k)}, O(n\log k)+(k/{\varepsilon})^{O(\sqrt{k})}\}$. \end{proof} \section{One Dimensional Clustering with Neighborhoods} \seclab{oned} In this section we show that despite clustering with neighborhoods being hard to radius-approximate within any factor in the plane, we can solve the one dimensional variant exactly in $O(n\log n)$ time, even when object intersections are allowed. First, we argue the decision problem can be solved in linear time. Then we argue that we can use a scheme similar to that in \cite{f-pslsct-91} to search for the optimal radius. In one dimension, a convex object is just a closed interval. Thus we have the following one dimensional version of \probref{main}, where intersections are no longer prohibited. \begin{problem}[One Dimensional Clustering with Neighborhoods] \problab{problem1d} Given a set $\mathscr{C}$ of $n$ closed intervals on the real line, and an integer parameter $k\geq 0$, find a set of $k$ points $S$ (called centers) which minimizes the maximum distance to an interval in $\mathscr{C}$. That is, \[ S=\arg \min_{S'\subset \mathbb{R}, |S'|=k} \max_{C\in \mathscr{C}} \distX{C}{S'}. \] \end{problem} The following decision procedure is similar in spirit to various folklore results for interval problems in one dimension (for example, see the discussion in \cite{f-ghssfo-18} on interval stabbing). The challenge is turning this decision procedure into an efficient optimization procedure, for which as discussed below we make use of \cite{f-pslsct-91}. We first sort the intervals in increasing order both by their left and by their right endpoints. We maintain cross links between the two sorted lists so that if we remove an interval from one list, its copy in the other list can be removed in constant time. \begin{lemma} \lemlab{feasible} Given an instance $\mathscr{C},k$ of \probref{problem1d}, where the intervals have been presorted, for any query radius $r$, in $O(n)$ time one can decide whether $r \geq r_{opt}$. \lemlab{decision1D} \end{lemma} \begin{proof} We build a set $S$ of centers as follows, where initially $S = \emptyset$. Let $[\alpha,\beta]$ denote the interval with the leftmost right endpoint (i.e.\ $\beta$ is the smallest right endpoint among all intervals). We place a center at $\beta+r$ and add it to $S$. Next we remove all intervals which intersect the ball $B(\beta+r,r)$. Note that these intersecting intervals are precisely those whose left endpoint is $\leq \beta+2r$, as this condition is clearly necessary to intersect $B(\beta+r,r)$, but also sufficient as all intervals have right endpoint $\geq \beta$. We then repeat this process until all intervals are removed. If $|S| \leq k$ we return True and otherwise we return False. Observe that every time we place a center, we remove precisely the intervals it covers within distance $r$. Thus the final set $S$ is a set of centers of radius $r$, and so if $|S|\leq k$, then $r\geq r_{opt}$ and the algorithm correctly returns True. Moreover, we now argue that $S$ is a minimum cardinality set of centers of radius $r$, and thus if $|S|> k$ then the algorithm correctly returns False.
Adopting notation from above, let $[\alpha,\beta]$ be the interval with leftmost right endpoint, and let $c$ be the center our algorithm places at $\beta+r$. Now in the minimum cardinality solution, there must be at least one center $c'$ within distance $r$ from $[\alpha,\beta]$, implying the location of $c'$ is $\leq \beta+r$. Thus $c'$ can only $r$-cover intervals with left endpoint $\leq \beta+2r$. However, as described above, $c$ $r$-covers all intervals with left endpoint $\leq \beta+2r$, and thus $c'$ $r$-covers a subset of those $c$ does. Equivalently, the subset of intervals not $r$-covered by $c$ is a subset of those not $r$-covered by $c'$. By induction our algorithm uses the smallest possible number of centers to $r$-cover the intervals not $r$-covered by $c$, which therefore is at most the number of centers the global minimum solution uses to $r$-cover the superset of intervals not $r$-covered by $c'$. Thus overall our set of centers was an $r$-cover of minimum cardinality. For the running time, observe that determining the location of the next center takes constant time since it only depends on the leftmost right endpoint, and we assumed we have the sorted ordering of the intervals by right endpoint. Moreover, we can remove all of the intervals intersecting the $r$ radius ball at the new center in time linear in the number of intersecting intervals, since as discussed above these intersecting intervals are a prefix of the sorted ordering by left endpoint. As we spend constant time per interval removed, overall this is an $O(n)$ time algorithm. \end{proof} \lemref{decision1D} gives us a decision procedure for \probref{problem1d} which we now wish to utilize to search for the optimum radius. We use the following lemma to reduce the search space, which can be seen as a simplification of \lemref{canon} for the one dimensional case, where here we only need to consider distances from bisecting points rather than bisecting curves. \begin{lemma} \lemlab{radius} Let $\mathscr{C}$ be a set of closed intervals. Then for any value $k$, the optimal radius for the instance $\mathscr{C},k$ of \probref{problem1d} is either 0 or $\distX{C}{C'}/2$ for some pair $C,C'\in \mathscr{C}$. \end{lemma} \begin{proof} For any value $k$, let $S$ be an optimal solution with optimal radius $r_{opt}$. Consider an arbitrary center $s \in S$, and let $\mathscr{C}'$ be the subset of $\mathscr{C}$ which intersects the ball $B(s,r_{opt})$. We can assume that $|\mathscr{C}'| \geq 1$, as otherwise $B(s,r_{opt})$ does not intersect any interval and so $s$ can be thrown out. If $|\mathscr{C}'| = 1$, then $B(s,r_{opt})$ intersects only one interval, and thus without loss of generality $s$ can be placed inside this interval, i.e.\ at distance 0 from it. So assume $|\mathscr{C}'| > 1$, and let $C$ be the furthest interval from $s$ in $\mathscr{C}'$. As we move $s$ towards $C$, so long as $C$ remains the furthest interval from $s$ in $\mathscr{C}'$, $B(s,\distX{s}{C})$ will continue to intersect all intervals in $\mathscr{C}'$. If $C$ always remains the furthest, when $s$ eventually reaches $C$, its distance to $C$ and hence all of $\mathscr{C}'$ will be 0. Otherwise, if before we reach $C$, $C$ is no longer the furthest interval from $s$, then we must have crossed the bisector point between $C$ and some other interval in $\mathscr{C}'$.
In this case, we can place $s$ on this bisector point and $B(s,\distX{s}{C})$ will intersect all intervals in $\mathscr{C}'$, and moreover $\distX{s}{C}\leq r_{opt}$ since $\distX{s}{C}$ monotonically decreased as we moved $s$ towards $C$. Modifying all centers in $S$ in this way thus produces a solution whose radius is $\leq r_{opt}$ and is either 0 or the distance from a bisector point to either interval in the pair it bisects. \end{proof} Given a set $\mathscr{C}$ of $n$ intervals, let $P(\mathscr{C})$ denote the set of all $2n$ left and right endpoints of the intervals in $\mathscr{C}$. To find the optimal solution to an instance $\mathscr{C},k$ of \probref{problem1d}, by \lemref{radius}, we can binary search over the interpoint distances of points in $P(\mathscr{C})$ using our decider from \lemref{feasible}. (When we call the decider we divide the interpoint distance by two, as \lemref{radius} tells us the optimal radius is half of such a distance.) As there are $\Theta(n^2)$ interpoint distances, naively this approach takes $O(n^2 \log n)$ time. However, \cite{f-pslsct-91} previously showed that in the abstract setting where one is given a linear time decider, and the optimal solution is an interpoint distance, one can find the optimal solution in $O(n\log n)$ time. This is achieved by reducing the problem to searching in an implicitly defined sorted matrix, which for completeness we now describe. A matrix is said to be \textit{sorted} if the elements in every row and in every column are in nonincreasing order. Let $P=\{p_1,\ldots,p_m\}$ be a set of $m$ values on the real line, indexed in increasing order. \cite{f-pslsct-91} defines a sorted $(m-1)\times (m-1)$ matrix from $P$, containing all interpoint distances, as follows. Let $A_i = p_i - p_1$ (i.e.\ shift the points so $p_1$ is the origin). Observe that for any $i<j$, $p_j-p_i = A_j - A_i$. Let $M(P)$ be the $(m-1) \times (m-1)$ matrix whose $ij$-th entry is $A_{m+1-i} - A_j$. It is easy to see that $M(P)$ is a sorted matrix, and as $P$ is indexed in sorted order we have an $O(m)$ space implicit representation of $M(P)$ where each entry can be computed in constant time. We now reproduce the description of the procedure \textit{MSEARCH}, presented in \cite{f-pslsct-91} (which combines ideas from \cite{f-oatp-91,fj-fpcgsgds-83,fj-gsrsm-84}). The input is a set of sorted matrices, a stopping count $c$, and a searching range $(\lambda_1, \lambda_2)$ such that $\lambda_2$ is feasible and $\lambda_1$ is not, where initially we set $(\lambda_1, \lambda_2) = (0,\infty)$. \textit{MSEARCH} produces a sequence of values one at a time to be tested for feasibility, where the result of each test allows us to discard some elements in the set of matrices. If a value $\lambda \notin (\lambda_1, \lambda_2)$ is produced, then it does not need to be tested. If $\lambda$ is feasible, then $\lambda_2$ is reset to $\lambda$, otherwise $\lambda_1$ is reset to $\lambda$. \textit{MSEARCH} stops once the number of matrix elements remaining is no larger than the stopping count. \begin{lemma}[\cite{f-pslsct-91}, Theorem 2.1] \lemlab{msearch} Let $\mathcal{M}$ be a set of $N$ sorted matrices $\{M_1, M_2,\allowbreak \ldots, M_N \}$ in which matrix $M_j$ is of dimension $m_j \times n_j$, $m_j \leq n_j$, and $\sum^N_{j=1} m_j = m$. Let $c \geq 0$.
The number of feasibility tests needed by \textit{MSEARCH} to discard all but at most $c$ of the elements is $O(\max \{\log \max_j\{ n_j\}, \log (m/(c+1)) \})$, and the total time of \textit{MSEARCH} exclusive of feasibility tests is $O(\sum^N_{j=1} m_j \log (2n_j / m_j))$. \end{lemma} In our case we have a single sorted matrix which we exhaustively search for the optimum by setting $c=0$. (Note \cite{f-pslsct-91} allowed for multiple sorted matrices as their input was a tree which they decomposed into multiple paths.) Thus we have the following simplified corollary. \begin{corollary} Given an $m\times m$ sorted matrix $M$, the number of feasibility tests needed by \textit{MSEARCH} to find the optimum is $O(\log m)$, and the total time of \textit{MSEARCH} exclusive of feasibility tests is $O(m)$. \end{corollary} Thus if we set $M=M(P(\mathscr{C}))$ (and hence $m+1 = 2n$ in the above corollary), then \textit{MSEARCH} with our linear time decision procedure from \lemref{feasible} gives \thmref{onedmain}. \begin{theorem}\thmlab{onedmain} \probref{problem1d} can be solved in $O(n\log n)$ time, where $n=|\mathscr{C}|$. \end{theorem} \appendix \section{Extending the Disk Approximation to Higher Dimensions}\apndlab{extension} Here we informally remark how the results in \secref{balls} can be extended to higher dimensions. To do so we need to extend \lemref{arc}, \lemref{decider}, and \thmref{radiusapprox}. To extend \lemref{arc}, we consider the same setup as in the proof in two dimensions. Namely let $x$, $y$, and $z$ be the centers of the disjoint $r$ radius balls (which are now in $\mathbb{R}^d$), and let $s$ be some fourth point. Observe that the centers $x$, $y$, and $z$ define a two dimensional plane, and moreover the intersection of their respective balls with this plane is a set of three disjoint $r$ radius disks in this plane. Let $s'$ denote the orthogonal projection of $s$ into this plane. We now have the same two dimensional setup as in \lemref{arc}, and thus the argument there applies so long as we can argue the distance from $s'$ to any one of the three disks in the plane lower bounds the distance from $s$ in $\mathbb{R}^d$ to any one of the three balls. To argue this it suffices to observe that $||x-s'||^2\leq ||x-s||^2$ (and similarly for $y$ and $z$). Specifically, suppose we rotate space so that this plane corresponds to the first two coordinate axes. Then, $||x-s'||^2 = (x_1-s_1')^2+(x_2-s_2')^2 = (x_1-s_1)^2+(x_2-s_2)^2 \leq ||x-s||^2$. A careful read of \lemref{decider} reveals that in fact the same proof works in $\mathbb{R}^d$, if we just change the word ``disk'' to ``ball''. Thus at this point we have a $(5+2\sqrt{3})$-decider that works in $\mathbb{R}^d$. \thmref{radiusapprox} showed how to turn this into an optimization procedure in the plane. Specifically, in the proof we search using our approximate decider over a set of values given by \lemref{canon}. The issue with extending the optimization procedure to higher dimensions is that \lemref{canon} no longer applies. That said, one can find an approximate set of radii to search over, and this is implied by the proof of \lemref{decider}. Specifically, consider the proof of \lemref{decider} when $r=r_{opt}$. A set of $k$ centers $S$ is constructed which is the disjoint union of three types of centers. Namely, type $S_1$, which are center points of input disks (i.e.\ input balls in $\mathbb{R}^d$), type $S_2$ which are arbitrary points in a disk, and type $S_3$ which are midpoints between two disks.
Every input disk is assigned to exactly one of these types of centers. A disk is only assigned to a center of type $S_2$ if that center lies in the disk, i.e.\ is at distance zero. Disks assigned to $S_1$ or $S_3$ centers are at distance at most $(5+2\sqrt{3})r_{opt}$ from their respective center. So let $x$ denote the largest distance from a disk to its assigned center. Then $x\leq (5+2\sqrt{3})r_{opt}$. On the other hand $x\geq r_{opt}$, since $r_{opt}$ is the smallest achievable value of the largest disk-to-center distance, taken over all sets of $k$ centers and all assignments, whereas $x$ is realized by some particular set of $k$ centers $S$ and a potentially non-optimal assignment of disks to centers in $S$. Let $R$ be the set containing all distances between an input disk and a disk center point (i.e.\ $S_1$ type distances), the value zero (i.e.\ $S_2$ type distances), and all distances from the midpoint of two disks to one of the two disks (i.e.\ $S_3$ type distances). Then by the above $x\in R$, and thus $R$ contains a value which is a constant factor approximation to $r_{opt}$. There are a quadratic number of values in $R$, which we can compute, and then binary search over using our decider. This yields an $O(1)$ approximation to $r_{opt}$, though not necessarily a $(5+2\sqrt{3})$-approximation (as both the decider and the value $x$ were approximate). However, one can turn it into a $(5+2\sqrt{3}+{\varepsilon})$-approximation with $O(1/{\varepsilon})$ additional calls to our $(5+2\sqrt{3})$-decider, by using standard techniques. (Namely, given any constant spread interval $[z,cz]$ containing the optimum, one uses the decider to search over all values $z(1+{\varepsilon})^i$ in this interval.) Thus, for any constant ${\varepsilon}>0$, there is a polynomial time $(5+2\sqrt{3}+{\varepsilon})$-approximation algorithm which works for input balls in $\mathbb{R}^d$, as the proof of \lemref{decider} and the above discussion generalize from disks to balls in $\mathbb{R}^d$. \end{document}
\begin{document} \title{Scalably Scheduling Power-Heterogeneous Processors\footnote{A preliminary version of this paper appeared in ICALP 2010}} \author{ Anupam Gupta\thanks{ Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA. Supported in part by NSF award CCF-0729022 and an Alfred P.~Sloan Fellowship.} \and Ravishankar Krishnaswamy$^\dagger$ \and Kirk Pruhs\thanks{Computer Science Department, University of Pittsburgh, Pittsburgh, PA 15260, USA. Supported in part by NSF grants CNS-0325353, IIS-0534531, and CCF-0830558, and an IBM Faculty Award.} } \date{} \maketitle \begin{abstract} We show that a natural online algorithm for scheduling jobs on a heterogeneous multiprocessor, with arbitrary power functions, is scalable for the objective function of weighted flow plus energy. \end{abstract} \section{Introduction} \label{sec:introduction} Many prominent computer architects believe that architectures consisting of heterogeneous processors/cores, such as the STI Cell processor, will be the dominant architectural design in the future~\cite{Bower2008,Kumar2004,Kumar2006,Merritt2008,Tomer2006}. The main advantage of a heterogeneous architecture, relative to an architecture of identical processors, is that it allows for the inclusion of processors whose design is specialized for particular types of jobs, and for jobs to be assigned to a processor best suited for that job. Most notably, it is envisioned that these heterogeneous architectures will consist of a small number of high-power high-performance processors for critical jobs, and a larger number of lower-power lower-performance processors for less critical jobs. Naturally, the lower-power processors would be more energy efficient in terms of the computation performed per unit of energy expended, and would generate less heat per unit of computation. For a given area and power budget, heterogeneous designs can give significantly better performance for standard workloads~\cite{Bower2008,Merritt2008}; Emulations in \cite{Kumar2006} suggest a figure of 40\% better performance, and emulations in \cite{Tomer2006} suggest a figure of 67\% better performance. Moreover, even processors that were designed to be homogeneous, are increasingly likely to be heterogeneous at run time~\cite{Bower2008}: the dominant underlying cause is the increasing variability in the fabrication process as the feature size is scaled down (although run time faults will also play a role). Since manufacturing yields would be unacceptably low if every processor/core was required to be perfect, and since there would be significant performance loss from derating the entire chip to the functioning of the least functional processor (which is what would be required in order to attain processor homogeneity), some processor heterogeneity seems inevitable in chips with many processors/cores. The position paper~\cite{Bower2008} identifies three fundamental challenges in scheduling heterogeneous multiprocessors: (1)~the OS must discover the status of each processor, (2)~the OS must discover the resource demand of each job, and (3)~given this information about processors and jobs, the OS must match jobs to processors as well as possible. In this paper, we address this third fundamental challenge. In particular, we assume that different jobs are of differing importance, and we study how to assign these jobs to processors of varying power and varying energy efficiency, so as to achieve the best possible trade-off between energy and performance. 
Formally, we assume that a collection of jobs arrive in an online fashion over time. When a job $j$ arrives in the system, the system is able to discover a \emph{size} $p_j \in {\mathbb R}_{> 0}$, as well as an \emph{importance/weight} $w_j \in {\mathbb R}_{> 0}$, for that job. The importance $w_j$ specifies an upper bound on the amount of energy that the system is allowed to invest in running $j$ to reduce $j$'s flow by one unit of time (assuming that this energy investment in $j$ doesn't decrease the flow of other jobs)---hence jobs with high weight are more important, since higher investments of energy are permissible to justify a fixed reduction in flow. Furthermore, we assume that the system knows the allowable speeds for each processor, and the system also knows the power used when each processor is run at its set of allowable speeds. We make no real restrictions on the allowable speeds, or on the power used for these speeds.\footnote{So the processors may or may not be speed scalable, the speeds may be continuous or discrete or a mixture, the static power may or may not be negligible, the dynamic power may or may not satisfy the cube root rule, etc.} The online scheduler has three component policies: \begin{description} \item[Job Selection:] Determines which job to run on each processor at any time. \item[Speed Scaling:] Determines the speed of each processor at each time. \item[Assignment:] When a new job arrives, it determines the processor to which this new job is assigned. \end{description} The objective we consider is that of \emph{weighted flow plus energy}. The rationale for this objective function is that the optimal schedule under this objective gives the best possible weighted flow for the energy invested, and increasing the energy investment will not lead to a corresponding reduction in weighted flow (intuitively, it is not possible to speed up a collection of jobs with an investment of energy proportional to these jobs' importance). We consider the following natural online algorithm that essentially adopts the job selection and speed scaling algorithms from the uniprocessor algorithm in \cite{BCP}, and then greedily assigns the jobs based on these policies. \begin{shadebox} \begin{description} \item[Job Selection:] Highest Density First (HDF) \item[Speed Scaling:] The speed is set so that the power is the fractional weight of the unfinished jobs. \item[Assignment:] A new job is assigned to the processor that results in the least increase in the projected future weighted flow, assuming the adopted speed scaling and job selection policies, and ignoring the possibility of jobs arriving in the future. \end{description} \end{shadebox} Our main result is then: \begin{theorem} \label{thm:main1} This online algorithm is scalable for scheduling jobs on a heterogeneous multiprocessor with arbitrary power functions to minimize the objective function of weighted flow plus energy. \end{theorem} In this context, \emph{scalable} means that if the adversary can run processor $i$ at speed $s$ and power $P(s)$, the online algorithm is allowed to run the processor at speed $(1+\epsilon)s$ and power $P(s)$, and then for all inputs, the online cost is bounded by $O(f(\epsilon))$ times the optimal cost. Intuitively, a scalable algorithm can handle almost the same load as optimal; for further elaboration, see~\cite{PST,Pruhs07}.
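To make the three policies concrete, the following is a small Python sketch, written only as an illustration and not as the algorithm in its full generality: it assumes power functions of the form $P_i(s)=s^{\alpha_i}$, so that running at power $y$ means running at speed $y^{1/\alpha_i}$, and it estimates the projected future (fractional) weighted flow of a processor by a crude discrete-time simulation with no further arrivals. The class and function names are our own.

\begin{verbatim}
class Job:
    def __init__(self, size, weight, remaining=None):
        self.size, self.weight = size, weight
        self.remaining = size if remaining is None else remaining

def frac_weight(jobs):
    # total fractional weight of the unfinished jobs
    return sum(j.weight * j.remaining / j.size for j in jobs)

def projected_cost(alpha, jobs, dt=1e-3, max_steps=10**6):
    # Crude estimate of the future fractional weighted flow of one processor
    # running HDF, with speed chosen so that the power s**alpha equals the
    # current fractional weight, assuming no further arrivals.
    jobs = [Job(j.size, j.weight, j.remaining) for j in jobs]
    cost = 0.0
    for _ in range(max_steps):
        if not jobs:
            break
        w = frac_weight(jobs)
        s = w ** (1.0 / alpha)                            # speed at power w
        job = max(jobs, key=lambda j: j.weight / j.size)  # Highest Density First
        job.remaining -= s * dt
        cost += w * dt
        jobs = [j for j in jobs if j.remaining > 1e-12]
    return cost

def assign(processors, new_job):
    # Greedy assignment: place the new job on the processor whose projected
    # future cost increases the least.  processors = list of (alpha, job_list).
    def increase(proc):
        alpha, queue = proc
        return projected_cost(alpha, queue + [new_job]) - projected_cost(alpha, queue)
    alpha, queue = min(processors, key=increase)
    queue.append(new_job)
\end{verbatim}

For instance, with \texttt{processors = [(2.0, []), (3.0, [])]}, calling \texttt{assign(processors, Job(4.0, 1.0))} routes the job to whichever processor currently promises the smaller increase in projected cost; the algorithm analyzed below is defined with the exact projection rather than this discretized estimate.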
Theorem \ref{thm:main1} extends theorems showing similar results for weighted flow plus energy on a uniprocessor~\cite{BCP,Lachlan2009}, and for weighted flow on a multiprocessor without power considerations~\cite{Chadha2009}. As scheduling on identical processors with the objective of total flow, and scheduling on a uniprocessor with the objective of weighted flow, are special cases of our problem, constant competitiveness is not possible without some sort of resource augmentation~\cite{LR,BC09}. Our analysis is an amortized local-competitiveness argument. As is usually the case with such arguments, the main technical hurdle is to discover the ``right'' potential function. The most natural straw-man potential function to try is the sum over all processors of the single processor potential function used in~\cite{BCP}. While one can prove constant competitiveness with this potential in some special cases (e.g. where for each processor the allowable speeds are the non-negative reals, and the power satisfies the cube-root rule), one can not prove constant competitiveness for general power functions with this potential function. The reason for this is that the uniprocessor potential function from~\cite{BCP} is not sufficiently accurate. Specifically, one can construct configurations where the adversary has finished all jobs, and where the potential is much higher than the remaining online cost. This did not mess up the analysis in \cite{BCP} because to finish all these jobs by this time the adversary would have had to run very fast in the past, wasting a lot of energy, which could then be used to pay for this unnecessarily high potential. But since we consider multiple processors, the adversary may have no jobs left on a particular processor simply because it assigned these jobs to a different processor, and there may not be a corresponding unnecessarily high adversarial cost that can be used to pay for this unnecessarily high potential. Thus, the main technical contribution in this paper is a seemingly more accurate potential function expressing the additional cost required to finish one collection of jobs compared to another collection of jobs. Our potential function is arguably more transparent than the one used in~\cite{BCP}, and we expect that this potential function will find future application in the analysis of other power management algorithms. In section~\ref{dsec:unweighted-flow}, we show that a similar online algorithm is $O(1/\epsilon)$-competitive with $(1+\epsilon)$-speedup for \emph{unweighted} flow plus energy. We also remark that when the power functions $P_i(s)$ are restricted to be of the form $s^{\alpha_i}$, our algorithms give a $O(\alpha^2)$-competitive algorithm (with no resource augmentation needed) for the problem of minimizing weighted flow plus energy, and an $O(\alpha)$-competitive algorithm for minimizing the unweighted flow plus energy, where $\alpha = \max_i \alpha_i$. \subsection{Related Results} \label{sec:related-results} Let us first consider previous work for the case of a single processor, with unbounded speed, and a polynomially bounded power function $P(s) = s^\alpha$. \cite{PUW} gave an efficient offline algorithm to find the schedule that minimizes average flow subject to a constraint on the amount of energy used, in the case that jobs have unit work. \cite{AF} introduced the objective of flow plus energy and gave a constant competitive algorithm for this objective in the case of unit work jobs. 
\cite{BPS} gave a constant competitive algorithm for the objective of weighted flow plus energy. The competitive ratio was improved by \cite{LLTW08} for the unweighted case using a potential function specifically tailored to integer flow. \cite{BCLL08} extended the results of \cite{BPS} to the bounded speed model, and~\cite{STACS2009} gave a \emph{nonclairvoyant} algorithm that is $O(1)$-competitive. Still for a single processor, dropping the assumptions of unbounded speed and polynomially-bounded power functions, \cite{BCP} gave a $3$-competitive algorithm for the objective of unweighted flow plus energy, and a $2$-competitive algorithm for fractional weighted flow plus energy, both in the uniprocessor case for a large class of power functions. The former analysis was subsequently improved by~\cite{Lachlan2009} to show $2$-competitiveness, along with a matching lower bound. Now for multiple processors: \cite{Lam08} considered the setting of multiple \emph{homogeneous} processors, where the allowable speeds range between zero and some upper bound, and the power function is polynomial in this range. They gave an algorithm that uses a variant of round-robin for the assignment policy, and job selection and speed scaling policies from \cite{BPS}, and showed that this algorithm is scalable for the objective of (unweighted) flow plus energy. Subsequently, \cite{GNS09} showed that a randomized machine selection algorithm is scalable for weighted flow plus energy (and even more general objective functions) in the setting of polynomial power functions. Both these algorithms provide non-migratory schedules and compare their costs with optimal solutions which could even be migratory. In comparison, as mentioned above, for the case of polynomial power functions, our techniques can give a deterministic constant-competitive online algorithm for non-migratory weighted flow time plus energy. (Details appear in the final version.) In non-power-aware settings, the paper most relevant to this work is that of~\cite{Chadha2009}, which gives a scalable online algorithm for minimizing weighted flow on unrelated processors. Their setting is even more demanding, since they allow the processing requirement of the job to be processor dependent (which captures a type of heterogeneity that is orthogonal to the performance energy-efficiency heterogeneity that we consider in this paper). Our algorithm is based on the same general intuition as theirs: they assign each new job to the processor that would result in the least increment in future weighted flow (assuming HDF is used for job selection), and show that this online algorithm is scalable using an amortized local competitiveness argument. However, it is unclear how to directly extend their potential function to our power-aware setting; we had success only in the case that each processor had allowable speed-power combinations lying in $\{(0,0), (s_i, P_i)\}$. \subsection{Preliminaries} \label{sec:preliminaries} \subsubsection{Scheduling Basics.} We consider only non-migratory schedules, which means that no job can ever run on one processor, and later run on some other processor. In general, migration is undesirable as the overhead can be significant. We assume that preemption is allowed, that is, that jobs may be suspended, and restarted later from the point of suspension. It is clear that if preemption is not allowed, bounded competitiveness is not obtainable. 
The speed is the rate at which work is completed; a job $j$ with size $p_j$ run at a constant speed $s$ completes in $\frac{p_j}{s}$ seconds. A job is completed when all of its work has been processed. The flow of a job is the completion time of the job minus the release time of the job. The weighted flow of a job is the weight of the job times the flow of the job. For a time $t \ge r_j$, where $r_j$ is the release time of job $j$, let $p_j(t)$ be the remaining unprocessed work on job $j$ at time $t$. The fractional weight of job $j$ at this time is $w_j \frac{p_j(t)}{p_j}$. The fractional weighted flow of a job is the integral over times between the job's release time and its completion time of its fractional weight at that time. The density of a job is its weight divided by its size. The job selection policy Highest Density First (HDF) always runs the job of highest density. The inverse density of a job is its size divided by its weight. \subsubsection{Power Functions.} The power function for processor $i$ is denoted by $P_i(s)$, and specifies the power used when the processor is run at speed $s$. We essentially allow any reasonable power function. However, we do require the following minimal conditions on each power function, which we adopt from~\cite{BCP}. We assume that the allowable speeds are a countable collection of disjoint subintervals of $[0, \infty)$. We assume that all the intervals, except possibly the rightmost interval, are closed on both ends. The rightmost interval may be open on the right if the power $P_i(s)$ approaches infinity as the speed $s$ approaches the rightmost endpoint of that interval. We assume that $P_i$ is non-negative, and that $P_i$ is continuous and differentiable on all but countably many points. We assume that either there is a maximum allowable speed $T$, or that the limit inferior of $P_i(s)/s$ as $s$ approaches infinity is not zero (if this condition does not hold, then the optimal speed scaling policy is to run at infinite speed). Using transformations specified in~\cite{BCP}, we may assume without loss of generality that the power functions satisfy the following properties: $P$ is continuous and differentiable, $P(0)=0$, $P$ is strictly increasing, $P$ is strictly convex, and $P$ is unbounded. We use $Q_i$ to denote $P_i^{-1}$; i.e., $Q_i(y)$ gives us the speed at which we can run processor $i$ if we specify a power of $y$. \subsubsection{Local Competitiveness and Potential Functions.} Finally, let us quickly review amortized local competitiveness analysis on a single processor. Consider an objective $G$. Let $G_A(t)$ be the increase in the objective in the schedule for algorithm A at time $t$. So when $G$ is fractional weighted flow plus energy, $G_A(t)$ is $P_A^t + w_A^t$, where $P_A^t$ is the power for A at time $t$ and $w_A^t$ is the fractional weight of the unfinished jobs for A at time $t$. Let OPT be the offline adversary that optimizes $G$. A is locally $c$-competitive if, for all times $t$, $G_{A}(t) \le c \cdot G_{OPT}(t)$. To prove A is $(c+d)$-competitive using an amortized local competitiveness argument, it suffices to give a potential function $\Phi(t)$ such that the following conditions hold (see for example~\cite{Pruhs07}). \begin{OneLiners} \item[{\bf Boundary condition:}] $\Phi$ is zero before any job is released and $\Phi$ is non-negative after all jobs are finished. \item[{\bf Completion condition:}] $\Phi$ does not increase due to completions by either A or OPT. \item[{\bf Arrival condition:}] $\Phi$ does not increase by more than $d \cdot OPT$ due to job arrivals.
\item[{\bf Running condition:}] At any time $t$ when no job arrives or is completed, \begin{equation}\label{eqn:eq1} G_{A}(t) + \frac{d\Phi(t)}{dt} \le c \cdot G_{OPT}(t) \end{equation} \end{OneLiners} The sufficiency of these conditions for proving $(c+d)$-competitiveness follows from integrating them over time. \section{Weighted Flow} \label{sec:weighted-flow-time} Our goal in this section is to prove Theorem \ref{thm:main1}. We first show that the online algorithm is $(1+\epsilon)$-speed $O(\frac{1}{\epsilon})$-competitive for the objective of \emph{fractional} weighted flow plus energy. Theorem \ref{thm:main1} then follows since HDF is $(1+\epsilon)$-speed $O(\frac{1}{\epsilon})$-competitive for fixed processor speeds~\cite{BLMP1} for the objective of (integer) weighted flow. Let \textrm{\sc OPT}\xspace be some optimal schedule minimizing fractional weighted flow. Let $w_{a,i}^{t}(q)$ denote the total fractional weight of jobs in processor $i$'s queue that have an inverse density of at least $q$. Let $w_{a,i}^{t} := w_{a,i}^{t}(0)$ be the total fractional weight of unfinished jobs in the queue. Let $w_{a}^{t} := \sum_i w_{a,i}^{t}$ be the total fractional weight of unfinished jobs in all queues. Let $w_{o,i}^{t}(q)$, $w_{o,i}^{t}$, and $w_{o}^{t}$ be similarly defined for \textrm{\sc OPT}\xspace. When the time instant being considered is clear, we drop the superscript of $t$ from all variables. We assume that once \textrm{\sc OPT}\xspace has assigned a job to some processor, it runs the {\sf BCP}\xspace algorithm~\cite{BCP} for job selection and speed scaling---i.e., it sets the speed of the $i^{th}$ processor to $Q_i(w_{o,i})$, and hence the $i^{th}$ processor uses power $W_{o,i}$, and uses HDF for job selection. We can make such an assumption because the results of~\cite{BCP} show that the fractional weighted flow plus energy of the schedule output by this algorithm is within a factor of two of optimal. Therefore, the only real difference between \textrm{\sc OPT}\xspace and the online algorithm is the assignment policy. \subsection{The Assignment Policy} To better understand the online algorithm's assignment policy, define the ``shadow potential'' for processor $i$ at time $t$ to be \begin{equation} \label{eq:shadow-wtd} \tsty {\widehat\Phi}_{a,i}(t) = \int_{q = 0}^{\infty} \int_{x = 0}^{w^{t}_{a,i}(q)} \frac{x}{Q_i(x)} \, dx\, dq \end{equation} The shadow potential captures (up to a constant factor) the total fractional weighted flow to serve the current set of jobs if no jobs arrive in the future. Based on this, the online algorithm's assignment policy can alternatively be described as follows: \noindent {\bf Assignment Policy.} When a new job with size $p_j$ and weight $w_j$ arrives at time $t$, the assignment policy assigns it to a processor which would cause the smallest increase in the shadow potential; i.e. a processor minimizing \begin{eqnarray*} &&\int_{q=0}^{d_j} \int_{x=0}^{w^{t}_{a,i}(q) + w_j } \frac{x}{Q_i(x)} \,dx\,dq- \int_{q=0}^{d_j} \int_{x=0}^{w^{t}_{a,i}(q) } \frac{x}{Q_i(x)} \,dx\,dq\\ &=& \int_{q=0}^{d_j} \int_{x = w^{t}_{a,i}(q)}^{ w^{t}_{a,i}(q) + w_j } \frac{x}{Q_i(x)} \,dx\,dq \end{eqnarray*} \subsection{Amortized Local Competitiveness Analysis} We apply a local competitiveness argument as described in subsection \ref{sec:preliminaries}. Because the online algorithm is using the BCP algorithm on each processor, the power for the online algorithm is $\sum_i P_i(Q_i(w_{a,i}))=w_{a}$. Thus $G_A = 2w_{a}$. 
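As a concrete illustration, the following Python sketch numerically evaluates the increase in the shadow potential of equation~(\ref{eq:shadow-wtd}) caused by a new job, and picks the processor for which this increase is smallest; it instantiates the hypothetical estimator from the earlier sketch. The attributes \texttt{Q} and \texttt{w\_a} (the inverse power function $Q_i$ and the profile $q \mapsto w^{t}_{a,i}(q)$) and the step sizes are illustrative assumptions, and the midpoint-rule quadrature merely approximates the double integral.
\begin{verbatim}
def shadow_increase(Q, w_a, w_j, d_j, dq=1e-2, dx=1e-2):
    """Approximate increase in the shadow potential when a job of weight
    w_j and inverse density d_j is added to a processor's queue:

        int_{q=0}^{d_j} int_{x=w_a(q)}^{w_a(q)+w_j} x / Q(x) dx dq

    Q   : inverse power function Q = P^{-1} (assumed attribute)
    w_a : q -> total fractional weight of queued jobs with inverse
          density at least q (assumed attribute)
    """
    total, q = 0.0, 0.0
    while q < d_j:
        outer_step = min(dq, d_j - q)
        lo = w_a(q)
        hi = lo + w_j
        inner, x = 0.0, lo
        while x < hi:
            step = min(dx, hi - x)
            xm = x + step / 2.0          # midpoint rule on [x, x + step]
            inner += (xm / Q(xm)) * step
            x += step
        total += inner * outer_step
        q += outer_step
    return total


def choose_processor(processors, w_j, p_j):
    """Assignment policy: processor with least shadow-potential increase."""
    d_j = p_j / w_j
    return min(processors,
               key=lambda proc: shadow_increase(proc.Q, proc.w_a, w_j, d_j))
\end{verbatim}
In the analysis below only the exact integral matters; the discretisation above is purely for illustration.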
Similarly, since \textrm{\sc OPT}\xspace is also using BCP on each processor, $G_{\textrm{\sc OPT}\xspace}= 2 w_{o}$. \subsubsection{Defining the potential function} For processor $i$, define the potential \begin{gather} \tsty \Phi_i(t) = \const \int_{q=0}^{\infty} \int_{x=0}^{(w_{a,i}^{t}(q) - w_{o,i}^{t}(q))_+} \frac{x}{Q_i(x)} \,dx\,dq \label{eq:pot-wf} \end{gather} Here $(\cdot)_+ = \max(\cdot, 0)$. The global potential is then defined to be $\Phi(t) = \sum_i \Phi_i(t)$. We first observe that the function $x/Q_i(x)$ is increasing and subadditive (see Appendix~\ref{sec:subadditive}). The following lemma will also be useful subsequently; its proof appears in Appendix~\ref{sec:missing-proofs}. \begin{lemma} \label{lem:concave-arrival} Let $g$ be any increasing subadditive function with $g(0) \geq 0$, and $w_a, w_o, w_j \in {\mathbb R}_{\geq 0}$. Then, \begin{equation*} \tsty \int_{x=w_a}^{w_a + w_j} g(x) \,dx - \int_{x = (w_a - w_o - w_j)_{+}}^{(w_a - w_o)_{+}} g(x) \,dx \leq 2 \int_{x=0}^{w_j} g(w_o + x) \,dx \end{equation*} \end{lemma} That the boundary and completion conditions are satisfied is obvious. In Lemma \ref{lem:arrival-wtd} we prove that the arrival condition holds, and in Lemma \ref{lem:running-wtd} we prove that the running condition holds. \begin{lemma} \label{lem:arrival-wtd} The arrival condition holds with $d = \frac{4}{\epsilon}$. \end{lemma} \begin{proof} Consider a new job $j$ with processing time $p_j$, weight $w_j$, and inverse density $d_j = p_j/w_j$, which the algorithm assigns to processor~$1$ while the optimal solution assigns it to processor $2$. Observe that $\int_{q=0}^{d_j} \int_{x=w_{o,2}(q)}^{w_{o,2}(q) + w_j} \frac{ x}{Q_2( x)} \,dx \,dq $ is the increase in \textrm{\sc OPT}\xspace's fractional weighted flow due to this new job $j$. Thus our goal is to prove that the increase in the potential due to job $j$'s arrival is at most this amount.
The change in the potential $\Delta \Phi$ is: \begin{equation} \label{eq:phi1-wtd} \nonumber \tsty \const \int_{q=0}^{d_j} \left( \int_{x= (w_{a,1}(q) - w_{o,1}(q))_{+}}^{(w_{a,1}(q) - w_{o,1}(q) + w_j)_+} \frac{x}{Q_1(x)} \,dx - \int_{x= (w_{a,2}(q) - w_{o,2}(q) -w_j)_+}^{(w_{a,2}(q) - w_{o,2}(q))_+} \frac{x}{Q_2(x)} \,dx \right) \,dq \end{equation} Now, since $x/Q_1(x)$ is an increasing function we have that \[ \tsty \int_{x= (w_{a,1}(q) - w_{o,1}(q))_{+}}^{(w_{a,1}(q) - w_{o,1}(q) + w_j)_+} \frac{x}{Q_1(x)} \,dx \leq \int_{x= w_{a,1}(q)}^{w_{a,1}(q) + w_j} \frac{x}{Q_1(x)} \,dx \] and hence the change of potential can be bounded by \[ \tsty \const \int_{q=0}^{d_j} \left( \int_{x= w_{a,1}(q)}^{w_{a,1}(q) + w_j} \frac{x}{Q_1(x)} \,dx - \int_{x= (w_{a,2}(q) - w_{o,2}(q)-w_j)_+}^{(w_{a,2}(q) - w_{o,2}(q))_+} \frac{x}{Q_2(x)} \,dx \right) \,dq \] Since we assigned the job to processor~$1$, we know that \[ \int_{q=0}^{d_j} \int_{x= w_{a,1}(q)}^{w_{a,1}(q) + w_j} \frac{x}{Q_1(x)} \,dx\,dq \leq \int_{q=0}^{d_j} \int_{x= w_{a,2}(q)}^{w_{a,2}(q) + w_j} \frac{x}{Q_2(x)} \,dx\,dq \] Therefore, the change in potential is at most \begin{align*} \Delta \Phi &\leq \tsty \const \int_{q=0}^{d_j} \left( \int_{x= w_{a,2}(q)}^{w_{a,2}(q) + w_j} \frac{x}{Q_2(x)} \,dx - \int_{x= (w_{a,2}(q) - w_{o,2}(q) - w_j)_+}^{(w_{a,2}(q) - w_{o,2}(q))_+} \frac{x}{Q_2(x)} \,dx \right) \,dq \end{align*} Applying Lemma~\ref{lem:concave-arrival}, we get: \begin{align*} \Delta \Phi & \leq \big( 2 \cdot \const \big) \int_{q=0}^{d_j} \int_{x=w_{o,2}(q)}^{w_{o,2}(q) + w_j} \frac{ x}{Q_2( x)} \,dx \,dq \end{align*} \end{proof} \begin{lemma} \label{lem:running-wtd} The running condition holds with constant $c=1+\frac{1}{\epsilon}$. \end{lemma} \begin{proof} Let us consider an infinitesimally small interval $[t, t+dt)$ during which no jobs arrive and analyze the change in the potential $\Phi(t)$. Since $\Phi(t) = \sum_i \Phi_i(t)$, we can do this on a per-processor basis. Fix a single processor $i$, and time $t$. Let $w_i(q) := (w_{a,i}(q) - w_{o,i}(q))_+$, and $w_i := (w_{a,i} - w_{o,i})_+$. Let $q_a$ and $q_o$ denote the inverse densities of the jobs being executed on processor $i$ by the algorithm and optimal solution respectively (which are the densest jobs in their respective queues, since both run HDF). Define $s_a = Q_i(w_{a,i})$ and $s_o = Q_i(w_{o,i})$. Since we assumed that \textrm{\sc OPT}\xspace uses the {\sf BCP}\xspace algorithm on each processor, \textrm{\sc OPT}\xspace runs processor $i$ at speed $s_o$. Since the online algorithm is also using {\sf BCP}\xspace, but has $(1+\epsilon)$-speed augmentation, the online algorithms runs the processor at speed $(1+\epsilon) s_a$. Hence the fractional weight of the job the online algorithm works on decreases at a rate of $s_a (1+\epsilon)/ q_a$. Therefore, the quantity $w_{a,i}(q)$ drops by $s_a \,dt (1+\epsilon)/ q_a$ for $q \in [0, q_a]$. Likewise, $w_{o,i}(q)$ drops by $s_o \,dt/q_o$ for $q \in [0, q_o]$ due to the optimal algorithm working on its densest job. 
We consider several different cases based on the values of $q_o, q_a, w_{o,i}$, and $w_{a,i}$ and establish bounds on $d \Phi_i(t)/dt$; Recall the definition of $\Phi_i(t)$ from equation~(\ref{eq:pot-wf}): \begin{equation*} \Phi_i(t) = \const \int_{q=0}^{\infty} \int_{x=0}^{(w_{a,i}^{t}(q) - w_{o,i}^{t}(q))_+} \frac{x}{Q_i(x)} \,dx\,dq \end{equation*} \noindent {\bf Case (1): $w_{a,i} < w_{o,i}$}: The only possible increase in potential function occurs due to the decrease in $w_{o,i}(q)$, which happens for values of $q \in [0, q_o]$. But for $q$'s in this range, $w_{a,i}(q) \le w_{a,i}$ and $w_{o,i}(q) = w_{o,i}$. Thus the inner integral is empty, resulting in no increase in potential. The running condition then holds since $w_{a,i} < w_{o,i}$. \noindent {\bf Case (2): $w_{a,i} > w_{o,i}$}: To quantify the change in potential due to the online algorithm working, observe that for any $q \in [0, q_a]$, the inner integral of $\Phi_i$ decreases by \[ \int_{x= 0}^{w_i(q)} \frac{x}{Q_i(x)} \,dx - \int_{x= 0}^{w_i(q) - (1+\epsilon) \frac{s_a \,dt}{q_a}} \frac{x}{Q_i(x)} \,dx = \frac{w_i(q)}{Q_i(w_i(q))} (1+\epsilon) \frac{s_a \,dt}{q_a} \] Here, we have used the fact that $dt$ is infinitisemally small to get the above equality. Hence, the total drop in $\Phi_i$ due to the online algorithm's processing is \begin{eqnarray*} \const \int_{q = 0}^{q_a} \frac{w_i(q)}{Q_i(w_i(q))} (1+\epsilon) \frac{s_a \,dt}{q_a} \,dq &\geq & \const \int_{q = 0}^{q_a} \frac{w_i}{Q_i(w_i)} (1+\epsilon) \frac{s_a \,dt}{q_a} \,dq\\ &=& \const \frac{w_i}{Q_i(w_i)} (1+\epsilon) s_a \,dt \end{eqnarray*} Here, the first inequality holds because $x/Q_i(x)$ is a non-decreasing function, and for all $q \in [0, q_a]$, we have $w_{a,i}(q) = w_{a,i}$ and $w_{o,i}(q) \leq w_{o,i}$ and hence $w_i(q) \geq w_i$. Now to quantify the increase in the potential due to the optimal algorithm working: observe that for $q \in [0, q_o]$, the inner integral of $\Phi_i$ increases by at most \[ \int_{x = w_{i}(q)}^{w_i(q) + \frac{s_o \,dt}{q_o} } \frac{x}{Q_i(x)} \,dx = \frac{w_i(q)}{Q_i(w_i(q))} \frac{s_o \,dt}{q_o} \] Again notice that we have used that fact that here $dt$ is an infinitesimal period of time that in the limit is zero. Hence the total increase in $\Phi_i$ due to the optimal algorithm's processing is at most \[ \const \int_{q = 0}^{q_o} \frac{w_i(q)}{Q_i(w_i(q))} \frac{s_o \,dt}{q_o} \,dq \leq \const \int_{q = 0}^{q_o} \frac{w_i}{Q_i(w_i)} \frac{s_o \,dt}{q_o} \,dq = \const \frac{w_i}{Q_i(w_i)} s_o \,dt. \] Again here, the first inequality holds because $x/Q_i(x)$ is a non-decreasing function, and for all $q \in [0, q_o]$, we have $w_{a,i}(q) \leq w_{a,i}$ and $w_{o,i}(q) = w_{o,i}$ and hence $w_i(q) \leq w_i$. Putting the two together, the overall increase in $\Phi_i(t)$ can be bounded by \begin{align*} \dphidt &\leq \const \frac{w_{a,i} - w_{o,i}}{Q_i(w_{a,i} - w_{o,i})} \left[ -(1+\epsilon) s_a + s_o \right] \\ &= \const ( w_{a,i} - w_{o,i} ) \frac {[ -(1+\epsilon) Q_i(w_{a,i}) + Q_i(w_{o,i}) ]} {Q_i(w_{a,i} - w_{o,i})} \\ &\leq - \const \epsilon (w_{a,i} - w_{o,i}) = - 2 (w_{a,i} - w_{o,i}) \label{eq:phi} \end{align*} It is now easy to verify that by plugging this bound on $\dphidt$ into the running condition that one gets a valid inequality. \noindent {\bf Case (3): $w_{a,i} = w_{o,i}$}: In this case, let us just consider the increase due to \textrm{\sc OPT}\xspace working. 
The inner integral in the potential function starts off from zero (since $w_{a,i} - w_{o,i} = 0$) and potentially (in the worst case) could increase to \[ \int_{0}^{ \frac{s_o \,dt}{q_o}} \frac{x}{Q_i(x)} \,dx \] (since $w_{o,i}$ drops by $s_o \,dt/q_o$ and $w_{a,i}$ cannot increase). However, since $x/Q_i(x)$ is a monotone non-decreasing function, this is at most \[ \int_{0}^{ \frac{s_o \,dt}{q_o}} \frac{w_{o,i}}{Q_i(w_{o,i})} \,dx = \frac{s_o \,dt}{q_o} \frac{w_{o,i}}{Q_i(w_{o,i})} \] Therefore, the total increase in the potential $\Phi_i(t)$ can be bounded by \[ \const \int_{q = 0}^{q_o} \frac{w_{o,i}}{Q_i(w_{o,i})} \frac{s_o \,dt}{q_o} \,dq = \const s_o \,dt \frac{w_{o,i}}{Q_i(w_{o,i})} = \const w_{o,i} \,dt \] It is now easy to verify that plugging this bound on $\dphidt$ into the running condition, and using the fact that $w_{a,i} = w_{o,i}$, yields a valid inequality. \end{proof} \section{Algorithm for Unweighted Flow} \label{dsec:unweighted-flow} In this section, we give an immediate-assignment-based scheduling policy and show that it is $O(1/\epsilon)$-competitive against a non-migratory adversary for the objective of \emph{unweighted} flow plus energy, assuming the online algorithm has resource augmentation of $(1+\epsilon)$ in speed. Note that this result has a better competitive ratio than the result for weighted flow from Section~\ref{sec:weighted-flow-time}, but holds only for the unweighted case. We begin by giving intuition behind our algorithm, which is again similar to that for the weighted case. Let \textrm{\sc OPT}\xspace be some optimal schedule. For the rest of the section, we assume that the optimal scheduling algorithm for minimizing the sum of flow times plus energy on a single machine is that of Andrew et al.~\cite{Lachlan2009}, which sets the speed at any time to be $Q(n)$ (equivalently, sets the power to be $n$) when there are $n$ unfinished jobs, and processes jobs according to SRPT. Since we know that this {\sf ALW}\xspace algorithm~\cite{Lachlan2009} is $2$-competitive against the optimal schedule on a single processor, we will imagine that, once \textrm{\sc OPT}\xspace has assigned a job to some processor, it uses the {\sf ALW}\xspace algorithm on each processor. Likewise, once our assignment policy assigns a job to some processor, our algorithm also runs the {\sf ALW}\xspace algorithm on each processor. Therefore, just as in the weighted case, the crux of our algorithm is in designing a good assignment policy, and arguing that it is $O(1)$-competitive even though our algorithm and \textrm{\sc OPT}\xspace may schedule a new job on different processors with completely different power functions. \subsection{Algorithm} \label{dsec:algorithm-uf} Our algorithm works as follows. Each processor maintains a queue of jobs that have currently been assigned to it. At some time instant $t$, for any processor $i$, let $n_{a,i}^{t}(q)$ denote the number of jobs in processor $i$'s queue that have a remaining processing time of at least $q$. Let $n_{a,i}^{t}$ denote the total number of unfinished jobs in the queue.
Also, let us define the \emph{shadow potential} for processor $i$ at this time $t$ as \begin{equation} \label{deq:shadow} \tsty {\widehat\Phi}_{a,i}(t) = \int_{q = 0}^{\infty} \sum_{j = 1}^{n^{t}_{a,i}(q)} \frac{j}{Q_i(j)} \,dq \end{equation} Note that the shadow potential ${\widehat\Phi}_{a,i}(t)$ is the total future cost of the online algorithm (up to a constant factor) assuming no jobs arrive after this time instant, and the online algorithm runs the {\sf ALW}\xspace algorithm on all processors (i.e., the job selection is SRPT, and the processor is run at a speed of $Q_i(n_{a,i}^{t})$). Now our algorithm is the following: \begin{shadebox} \noindent When a new job arrives, the assignment policy assigns it to a processor which would cause the smallest increase in the ``shadow potential''; i.e., a processor minimizing \[ \int_{q=0}^{p} \sum_{j=1}^{n^{t}_{a,i}(q) + 1 } \frac{j}{Q_i(j)} \,dq - \int_{q=0}^{p} \sum_{j=1}^{n^{t}_{a,i}(q) } \frac{j}{Q_i(j)} \,dq = \int_{q=0}^{p} \frac{(n^{t}_{a,i}(q) + 1)}{Q_i(n^{t}_{a,i}(q) + 1)} \,dq \] The job selection on each processor is SRPT (Shortest Remaining Processing Time), and we set the power of processor $i$ at time $t$ to $n_{a,i}^{t}$. Once the job is assigned to a processor, it is never migrated. \end{shadebox} \subsection{The Amortized Local-Competitive Analysis} \label{dsec:analysis-uf} We again employ a potential function based analysis, similar to the one in Section~\ref{sec:weighted-flow-time}. \subsubsection{The Potential Function.} We now describe our potential function $\Phi$. For time $t$ and processor $i$, recall the definitions $n_{a,i}^{t}$ and $n_{a,i}^t(q)$ given above; analogously define $n_{o,i}^{t}$ as the number of unfinished jobs assigned to processor $i$ by the optimal solution at time $t$, and $n_{o,i}^{t}(q)$ to be the number of these jobs with remaining processing time at least $q$. Henceforth, we will drop the superscript $t$ from these terms whenever the time instant $t$ is clear from the context. Now, we define the global potential function to be $\Phi(t) = \sum_i \Phi_i(t)$, where $\Phi_i(t)$ is the potential for processor $i$ defined as: \begin{gather} \tsty \Phi_i(t) = \constun \int_{q=0}^{\infty} \sum_{j=1}^{(n_{a,i}^{t}(q) - n_{o,i}^{t}(q))_+} j/Q_i(j) \,dq \label{deq:pot-uf} \end{gather} Recall that $(x)_+ = \max(x,0)$, and $Q_i = P_i^{-1}$. Notice that if the optimal solution has no jobs remaining on processor $i$ at time $t$, we get $\Phi_i(t)$ is (within a constant off) simply ${\widehat\Phi}_{a,i}(t)$. \subsubsection{Proving the Arrival Condition.} We now show that the increase in the potential $\Phi$ is bounded (up to a constant factor) by the increase in the future optimal cost when a new job arrives. Suppose a new job of size $p$ arrives at time $t$, and suppose the online algorithm assigns it to processor~$1$ while the optimal solution assigns it to processor $2$. Then $\Phi_1$ increases since $n_{a,1}(q)$ goes up by $1$ for all $q \in [0,p]$, $\Phi_2$ could decrease due to $n_{o,2}(q)$ dropping by $1$ for all $q \in [0,p]$, and $\Phi_i$ (for $i \notin \{1,2\}$) does not change. Let us first assume that $n_{a,i}(q) \geq n_{o,i}(q)$ for all $q \in [0,p]$ and for $i \in \{1,2\}$; we will show below how to remove this assumption. 
Under this assumption, the total change in potential $\Phi$ is \[ \tsty \constun \int_{q=0}^{p} \big( \frac{n_{a,1}(q) - n_{o,1}(q)+1}{Q_1( n_{a,1}(q) - n_{o,1}(q) + 1)} - \frac{n_{a,2}(q) - n_{o,2}(q) }{Q_2( n_{a,2}(q) - n_{o,2}(q))} \big) \,dq \] But since $x/Q(x)$ is increasing this is less than \begin{equation} \label{deq:phi1} \tsty \constun \int_{q=0}^{p} \big( \frac{n_{a,1}(q) + 1}{Q_1( n_{a,1}(q) + 1)} - \frac{n_{a,2}(q) - n_{o,2}(q) }{Q_2( n_{a,2}(q) - n_{o,2}(q))} \big) \,dq \end{equation} By the greedy choice of processor $1$ (instead of $2$), this is less than \begin{equation} \label{deq:phi2} \tsty \constun \int_{q=0}^{p} \big( \frac{n_{a,2}(q) + 1}{Q_2( n_{a,2}(q) + 1)} - \frac{n_{a,2}(q) - n_{o,2}(q) }{Q_2( n_{a,2}(q) - n_{o,2}(q) )} \big) \,dq \end{equation} Now, since $x/Q(x)$ is subadditive this is less than $\constun \int_{q=0}^{p} \frac{n_{o,2}(q) + 1}{Q_2( n_{o,2}(q) + 1)} \,dq$, which in turn is (within a factor of $\constun$) precisely the \emph{increase in the future cost} incurred by the optimal solution, since we had assumed that \textrm{\sc OPT}\xspace also runs the {\sf ALW}\xspace algorithm on its processors. Now suppose $n_{a,1}(q) < n_{o,1}(q)$ for some $q \in [0,p]$. There is no increase in the inner sum of $\Phi_1$ for such values of $q$, and hence we can trivially upper bound this zero increase by $\constun \int_{q=0}^{p} \frac{n_{a,1}(q) + 1}{Q_1( n_{a,1}(q) + 1)} \leq \constun \int_{q=0}^{p} \frac{n_{a,2}(q) + 1}{Q_2( n_{a,2}(q) + 1)}$. And to discharge the assumption for processor $2$, note that if $n_{a,2}(q) < n_{o,2}(q)$ for some $q$, there is no decrease in the inner sum of $\Phi_2$ for this value of $q$, but in this case we can simply use $\frac{n_{a,2}(q) + 1}{Q_1( n_{a,2}(q) + 1)} \leq \frac{n_{o,2}(q) + 1}{Q_1( n_{o,2}(q) + 1)}$ for such values of $q$. Therefore, we get the same bound of \[ \constun \int_{q=0}^{p} \frac{n_{o,2}(q) + 1}{Q_2( n_{o,2}(q) + 1)} \,dq \] on the increase in all cases, thus proving the following lemma. \begin{lemma} \label{dlem:arrival} The arrival condition holds for the unweighted case with $d = \constun$. \end{lemma} \subsubsection{Proving the Running Condition.} In this section, our goal is to analyze the change in $\Phi$ in an infinitesimally small time interval $[t, t+ dt)$ and compare $d\Phi/dt$ to $d\ensuremath{{\sf Alg}}\xspace/dt$ and $d\textrm{\sc OPT}\xspace/dt$. We do this on a per-processor basis; let us focus on processor $i$ at time $t$. Recall (after dropping the $t$ superscripts) the definitions of $n_{a,i}(q), n_{o,i}(q), n_{a,i}$ and $n_{o,i}$ from above, and define $n_i(q) := (n_{a,i}(q) - n_{o,i}(q))_+$. Finally, let $q_a$ and $q_o$ denote the remaining sizes of the jobs being worked on by the algorithm and optimal solution respectively at time $t$; recall that both of them use SRPT for job selection. Define $s_a = Q_i(n_{a,i})$ and $s_o = Q_i(n_{o,i})$ to be the (unaugmented) speeds of processor $i$ according to the online algorithm and the optimal algorithm respectively---though, since we assume resource augmentation, our processor runs at speed $(1+ \epsilon) Q_i(n_{a,i})$. Hence $n_{a,i}(q)$ drops by $1$ for $q \in (q_a - (1+\epsilon) s_a \,dt, q_a]$ and $n_{o,i}(q)$ drops by $1$ for $q \in (q_o - s_o \,dt, q_o]$ for the optimal algorithm. Let $I_a := (q_a - (1+\epsilon) s_a \,dt, q_a]$ and $I_o := (q_o - s_o \,dt, q_o]$ denote these two intervals. 
Let us consider some cases: \noindent {\bf Case (1): $n_{a,i} < n_{o,i}$}: The increase in potential function may occur due to $n_{o,i}(q)$ dropping by $1$ in $q \in I_o$. However, since $n_{a,i} < n_{o,i}$, it follows that $n_{a,i}(q) \leq n_{a,i} < n_{o,i} = n_{o,i}(q)$ for all $q \in I_o$; the equality follows from \textrm{\sc OPT}\xspace using SRPT. Consequently, even with $n_{o,i}(q)$ dropping by $1$, $n_{a,i}(q) - n_{o,i}(q) \leq 0$ and there is no increase in potential, or equivalently $d \Phi_i(t)/dt \leq 0$. Hence, in this case, $4 n_{a,i} + d \Phi_i(t)/dt \leq 4 n_{o,i}$. \noindent {\bf Case (2a): $n_{a,i} \geq n_{o,i}$, and $q_a < q_o$}: For $q \in I_a$, the inner summation of $\Phi_i$ drops by $ \frac{n_{a,i}(q) - n_{o,i}(q)}{Q_i(n_{a,i}(q) - n_{o,i}(q))}$, since $n_{a,i}(q)$ decreases by~$1$. Moreover, $n_{a,i}(q) = n_{a,i}$ and $n_{o,i}(q) = n_{o,i}$, because both \ensuremath{{\sf Alg}}\xspace and \textrm{\sc OPT}\xspace run SRPT, and we're considering $q \leq q_a < q_o$. For $q \in I_o$, the inner summation of $\Phi_i$ increases by $\frac{n_{a,i}(q) - (n_{o,i}(q) - 1)}{Q_i( n_{a,i}(q) - (n_{o,i}(q) - 1))}$. However, we have $n_{a,i}(q) \leq n_{a,i} - 1$ and $n_{o,i}(q) = n_{o,i}$ because $q_a < q_o$, and $n_{o,i}(q) = n_{o,i}$ because \textrm{\sc OPT}\xspace runs SRPT. Therefore the increase is at most $\frac{n_{a,i} - n_{o,i}}{Q_i(n_{a,i} - n_{o,i})}$ for all $q \in I_o$. Combining these two, we get \begin{align*} \tsty \dphidt &\leq \tsty \constun \frac{n_{a,i} - n_{o,i}}{Q_i(n_{a,i} - n_{o,i})} \big[ -(1+\epsilon) s_a + s_o \big] \\ &= \tsty \constun \big( n_{a,i} - n_{o,i} \big) \frac {\big[ -(1+\epsilon) Q_i(n_{a,i}) + Q_i(n_{o,i}) \big]} {Q_i(n_{a,i} - n_{o,i} )} \\ &\leq \tsty \constun \big( n_{a,i} - n_{o,i} \big)\frac{ -\epsilon\, Q_i(n_{a,i}) } {Q_i(n_{a,i} - n_{o,i})} \leq - 4 (n_{a,i} - n_{o,i}), \end{align*} where we repeatedly use that $Q_i(\cdot)$ is a non-decreasing function. This implies that $4 n_{a,i} + d \Phi_i(t)/dt \leq 4 n_{o,i}$. \noindent {\bf Case (2b): $n_{a,i} \geq n_{o,i}$, and $q_a > q_o$}: In this case, for $q \in I_o$, the inner summation of $\Phi_i$ increases by $ \frac{n_{a,i}(q) - (n_{o,i}(q) - 1)}{Q_i( n_{a,i}(q) - (n_{o,i}(q) - 1))}$. Also, we have $n_{a,i}(q) = n_{a,i}$ and $n_{o,i}(q) = n_{o,i}$, because $q \leq q_o < q_a$ and both algorithms run SRPT. Therefore the overall increase in potential function can be bounded by $\constun \frac{n_{a,i} - n_{o,i} + 1}{Q_i(n_{a,i} - n_{o,i} + 1)} \; s_o \,dt $. Moreover, for $q \in I_a$, the inner summation of $\Phi_i$ drops by $ \frac{n_{a,i}(q) - n_{o,i}(q)}{Q_i(n_{a,i}(q) - n_{o,i}(q))}$. Also, $n_{a,i}(q) = n_{a,i}$ and $n_{o,i}(q) \leq n_{o,i} - 1$, because $q_o < q_a$ and there was a job of remaining size $q_o$ among the optimal solution's active jobs. Thus the potential function drops by at least $ \constun \frac{n_{a,i} - n_{o,i} + 1}{Q_i(n_{a,i} - n_{o,i} + 1)}\; (1 +\epsilon) s_a \,dt $, since $x/Q_i(x)$ is a non-decreasing function. Combining these terms, \begin{align*} \tsty \dphidt &\leq \tsty \constun \frac{n_{a,i} - n_{o,i} + 1}{Q_i(n_{a,i} - n_{o,i} + 1)} \big[ -(1+\epsilon) s_a + s_o \big] \\ & = \constun \big( n_{a,i} - n_{o,i} + 1 \big) \frac {[ -(1+\epsilon) Q_i(n_{a,i}) + Q_i(n_{o,i}) ]} {Q_i(n_{a,i} - n_{o,i} + 1)} \\ &\tsty \leq - \constun \epsilon (n_{a,i} - n_{o,i} + 1) \leq - 4 (n_{a,i} - n_{o,i}) \end{align*} In the above, we have used the fact that $n_{o,i} \geq 1$, and consequently, $Q_i(n_{a,i}) \geq Q_i(n_{a,i}- n_{o,i} + 1)$. 
Therefore, in this case too we get $4 n_{a,i} + d \Phi_i(t)/dt \leq 4 n_{o,i}$. \noindent {\bf Case (2c): $n_{a,i} \geq n_{o,i}$, and $q_a = q_o$}: Since $n_{a,i} \geq n_{o,i}$, and $Q_i$ is an increasing function, $s_a \geq s_o$ and thus $I_o \subset I_a$. For $q$ in the interval $I_o$, the term $n_{o,i}(q)$ drops by $1$ \emph{and} the term $n_{a,i}(q)$ drops by $1$, and therefore there is no change in $n_{a,i}(q) - n_{o,i}(q)$. For $q \in I_a \setminus I_o$, the inner summation for $\Phi_i$ drops by $\frac{n_{a,i}(q) - n_{o,i}(q) }{Q_i( n_{a,i}(q) - n_{o,i}(q))}$. Also, $n_{a,i}(q) = n_{a,i}$ and $n_{o,i}(q) = n_{o,i}$, and the decrease in the potential function is $\constun ( (1+\epsilon) s_a \,dt - s_o \,dt) \; \frac{n_{a,i} - n_{o,i} }{Q_i(n_{a,i} - n_{o,i})}$. The analysis in Case~(2a) then implies that $4n_{a,i} + d \Phi_i(t)/dt \leq 4 n_{o,i}$ in this case as well. Summing over all $i$, we get the following lemma. \begin{lemma} \label{dlem:running} The running condition holds for the unweighted case with constant $4$: at any time $t$ when there are no job arrivals, we have \[ \sum_i \Big( 4\, n_{a,i} + \frac{d \Phi_i(t)}{dt} \Big) \;\leq\; 4 \sum_i n_{o,i}. \] \end{lemma} Combining Lemmas~\ref{dlem:arrival} and~\ref{dlem:running} with the standard potential function argument indicated in Section~\ref{sec:preliminaries}, we get the following theorem. \begin{theorem} \label{dthm:unwtd} There is a $(1+\epsilon)$-speed $O(1/\epsilon)$-competitive immediate-assignment algorithm to minimize the total flow plus energy on heterogeneous processors with arbitrary power functions. \end{theorem} \noindent {\bf Acknowledgments:} We thank Sangyeun Cho and Bruce Childers for helpful discussions about heterogeneous multicore processors, and also Srivatsan Narayanan for several useful discussions. \appendix \section{Estimating the Future Cost of BCP} \label{sec:cost-estimate} Suppose we have a set of $n$ jobs, with weight $w_j$ and processing time $p_j$ for job $j$ ($1 \leq j \leq n$) such that $p_1/w_1 \leq p_2/w_2 \leq \ldots \leq p_n/w_n$. Let $d_j = p_j/w_j$ denote the inverse density of job $j$. We now explain how to estimate the future cost of this configuration when we run the BCP algorithm, i.e., HDF at a speed of $Q(w^t)$, where $w^t$ is the total fractional weight of unfinished jobs at time $t$. Firstly, observe that by virtue of our algorithm running HDF, it schedules job $1$ followed by job $2$, and so on. Also, as long as the algorithm is running job $1$, it runs the processor at speed $Q(W_{\geq 2} + \widetilde{w_1}(t))$, where $W_{\geq 2} := w_2 + w_3 + \ldots + w_n$ and $\widetilde{w_1}(t)$ is the fractional weight of job $1$ remaining. Secondly, since our algorithm always uses power equal to the fractional weight remaining, the rate of increase of the objective function at any time $t$ is simply $2 w^t$.
Therefore, the following equations immediately follow: \begin{eqnarray} \nonumber G_A(t) &=& \frac{dA}{dt} = 2\big(W_{\geq 2} + \widetilde{w_1}(t)\big) \\ \nonumber \frac{ d \widetilde{w_1}(t)}{dt} &=& -\bigg( \frac{w_1}{p_1}\bigg) Q\bigg( W_{\geq 2} + \widetilde{w_1}(t) \bigg) \\ \nonumber \Rightarrow \frac{dA}{ d \widetilde{w_1}(t)} &=& - 2 \bigg( \frac{p_1}{w_1} \bigg) \frac{W_{\geq 2} + \widetilde{w_1}(t)}{Q\big( W_{\geq 2} + \widetilde{w_1}(t) \big)} \\ \nonumber \Rightarrow A_1 &=& - 2 \int_{x = W_{\geq 2} + w_1}^{W_{\geq 2}} d_1 \frac{W_{\geq 2} + x}{Q\big( W_{\geq 2} + x \big)} \,dx \end{eqnarray} That is, the total cost incurred while job $1$ is being scheduled is $$ 2 \int_{x = W_{\geq 2}}^{W_{\geq 2} + w_1} d_1 \frac{W_{\geq 2} + x}{Q\big( W_{\geq 2} + x \big)} \,dx = 2 \int_{q = 0}^{d_1} \int_{x = W_{\geq 2}}^{W_{\geq 2} + w_1} \frac{W_{\geq 2} + x}{Q\big( W_{\geq 2} + x \big)} \,dx \,dq $$ Similarly, while any job $i$ is being scheduled, we can use the same arguments as above to show that the total fractional flow incurred is $$ 2 \int_{x = W_{\geq (i+1)}}^{W_{\geq (i+1)} + w_i} d_i \frac{W_{\geq (i+1)} + x}{Q\big( W_{\geq (i+1)} + x \big)} \,dx = 2 \int_{q = 0}^{d_i} \int_{x = W_{\geq (i+1)}}^{W_{\geq (i+1)} + w_i} \frac{W_{\geq (i+1)} + x}{Q\big( W_{\geq (i+1)} + x \big)} \,dx \,dq $$ Summing over $i$, the total fractional flow incurred by our algorithm is $$ 2 \sum_{i =1}^{n} \int_{q = 0}^{d_i} \int_{x = W_{\geq (i+1)}}^{W_{\geq (i+1)} + w_i} \frac{W_{\geq (i+1)} + x}{Q\big( W_{\geq (i+1)} + x \big)} \,dx \,dq $$ Rearranging the terms, it is not hard to see (given $d_1 \leq d_2 \leq \ldots \leq d_n$) that this is equal to $$ 2 \int_{q = 0}^{\infty} \int_{x = 0}^{w(q)} \frac{x}{Q(x)} \,dx \,dq $$ where $w(q)$ is the total weight of jobs with inverse density at least $q$. \section{Subadditivity of x/Q(x)} \label{sec:subadditive} Let $Q(x): {\mathbb R}_{\geq 0} \rightarrow {\mathbb R}_{\geq 0}$ be any concave function such that $Q(0) \geq 0$ and $Q'(x) \geq 0$ for all $x \geq 0$, and let $g(x) = x/Q(x)$. Then the following facts are true about $g(\cdot)$. \begin{enumerate} \item[(a)] $g(\cdot)$ is non-decreasing. That is, $g(y) \geq g(x)$ for all $y \geq x$. \item[(b)] $g(\cdot)$ is subadditive. That is, $g(x) + g(y) \geq g(x+y)$ for all $x, y \in {\mathbb R}_{\geq 0}$ \end{enumerate} To see why the first is true, consider $x$ and $y = \lambda x$ for some $\lambda \geq 1$. Then, showing (a) is equivalent to showing $$ \frac{\lambda x}{Q(\lambda x)} \geq \frac{x}{Q(x)} $$ But this reduces to showing $Q(\lambda x) \leq \lambda Q(x)$ which is true because $Q(\cdot)$ is a concave function. To prove the second property, we first observe that the function $1/Q(x)$ is convex. This is because the second derivative of $1/Q(x)$ is $$ \frac{-Q(x)^2 Q''(x) + 2 Q(x) Q'(x)^2}{Q(x)^4} $$ which is always non-negative for all $x \geq 0$, since $Q(x)$ is non-negative and $Q''(x)$ is non-positive for all $x\geq 0$. 
Therefore, since $1/Q(\cdot)$ is convex, it holds for any $x$, $y$, and $\alpha \geq 0$, $\beta \geq 0$ that $$ \frac{\frac{\alpha}{Q(x)} + \frac{\beta}{Q(y)}}{\alpha + \beta} \geq \frac{1}{Q(\frac{\alpha x + \beta y}{\alpha + \beta})} $$ Plugging in $\alpha = x$ and $\beta = y$, we get $$ \frac{\frac{x}{Q(x)} + \frac{y}{Q(y)}}{x + y} \geq \frac{1}{Q(\frac{x^2 + y^2}{x + y})} $$ which implies $$ \frac{x}{Q(x)} + \frac{y}{Q(y)} \geq \frac{x + y}{Q(\frac{x^2 + y^2}{x + y})} $$ But since $Q(\cdot)$ is non-decreasing, we have $Q(x + y) \geq Q(\frac{x^2 + y^2}{x + y})$ and hence $$ \frac{x}{Q(x)} + \frac{y}{Q(y)} \geq \frac{x + y}{Q(x + y)} $$ \section{Missing Proofs} \label{sec:missing-proofs} \begin{proofof}{Lemma~\ref{lem:concave-arrival}} We first show that $$\nonumber \int_{x=w_a}^{w_a + w_j} g(x) \,dx - \int_{x = (w_a - w_o - w_j)_{+}}^{(w_a - w_o)_{+}} g(x) \,dx \leq \int_{x=0}^{w_j} g(w_o + w_j) \,dx$$ and then argue that $\int_{x=0}^{w_j} g(w_o + w_j) \,dx \leq 2 \int_{x=0}^{w_j} g(w_o + x) \,dx$ because $g(\cdot)$ is subadditive. To this end, we consider several cases and prove the lemma. Suppose $w_a$ is such that $w_a \geq w_o + w_j$: in this case we can discard the $(\cdot)_+$ operators on all the limits to get \begin{align*} & \int_{x = w_a}^{w_a + w_j} g(x) \, dx \; - \; \int_{x = w_a - w_o - w_j}^{w_a - w_o} g(x)\,dx \\ &= \int_{x = 0}^{w_j} \bigg( g(w_a + x) - g(w_a - w_o - w_j + x) \bigg) \,dx \; \leq \; \int_{x = 0}^{w_j} g(w_o + w_j) \,dx \end{align*} Here, the final inequality follows because $g(\cdot)$ is a subadditive function. On the other hand, suppose it is the case that $w_a \leq w_o$, then both limits $(w_a - w_o)_{+}$ and $(w_a - w_o - w_j)_{+}$ are zero, and therefore we only need to bound $ \int_{x = w_a}^{w_a + w_j} g(x) \, dx$, which can be done as follows: \begin{gather*} \int_{x = w_a}^{w_a + w_j} g(x) \, dx = \int_{x = 0}^{w_j} g(w_a + x) \, dx \leq \int_{x = 0}^{w_j} g(w_o + w_j) \, dx \end{gather*} Finally, if $w_a = w_o + \delta$ for some $\delta \in (0, w_j)$, we first observe that $\int_{x = (w_a - w_o - w_j)_{+}}^{(w_a - w_o)_{+}} g(x) dx$ simplifies to $\int_{x = 0}^{\delta} g(x) dx$. Therefore, we are interested in bounding \begin{align*} & \int_{x = w_a}^{w_a + w_j} g(x) \, dx \; - \; \int_{x = 0}^{\delta} g(x)\,dx = \int_{x = w_a}^{w_a + w_j - \delta} g(x) \, dx + \int_{x = w_a + w_j - \delta}^{w_a + w_j} g(x) \, dx \; - \; \int_{x = 0}^{\delta} g(x)\,dx \\ &\leq (w_j - \delta) g(w_a + w_j - \delta) + \int_{x = 0}^{\delta} g(w_a + w_j - \delta) \,dx \; \leq \; \int_{x = 0}^{w_j} g(w_o + w_j) \,dx \end{align*} Here again, in the second to last inequality, we used the fact that $g(\cdot)$ is subadditive and therefore $g(w_a + w_j - \delta + x) - g(x) \leq g(w_a + w_j - \delta)$, for all values of $x \geq 0$; hence we get $\int_{x = w_a + w_j - \delta}^{w_a + w_j} g(x) \, dx - \int_{x = 0}^{\delta} g(x) \, dx \leq \int_{x = 0}^{\delta} g(w_a + w_j - \delta) \, dx$. \noindent To complete the proof, we need to show that $\int_{x=0}^{w_j} g(w_o + w_j) \,dx \leq 2 \int_{x=0}^{w_j} g(w_o + x) \,dx$. 
To see this, consider the following sequence of steps: For any $x \in [0, w_j]$, since $g$ is subadditive, we have $$ g(w_o + w_j - x) + g(x) \geq g(w_o + w_j) $$ Integrating both sides from $x = 0$ to $w_j$ we get $$ \int_{x = 0}^{w_j} g(w_o + w_j - x) \,dx + \int_{x=0}^{w_j} g(x) \,dx \geq \int_{x = 0}^{w_j} g(w_o + w_j) \,dx $$ which is, by variable renaming, equivalent to $$ \int_{y = 0}^{w_j} g(w_o + y) \,dy + \int_{x=0}^{w_j} g(x) \,dx \geq \int_{x = 0}^{w_j} g(w_o + w_j) \,dx $$ But since $g(\cdot)$ is non-decreasing, we have $\int_{x=0}^{w_j} g(x) \,dx \leq \int_{x=0}^{w_j} g(w_o + x) \,dx$ and therefore $$ \int_{y = 0}^{w_j} g(w_o + y) \,dy + \int_{x=0}^{w_j} g(w_o + x) \,dx \geq \int_{x = 0}^{w_j} g(w_o + w_j) \,dx $$ which is what we want. \end{proofof} \end{document}
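As a quick numerical sanity check of the two facts just proved (monotonicity and subadditivity of $g(x) = x/Q(x)$ for a concave, non-decreasing $Q$ with $Q(0) \geq 0$), the following Python snippet tests them on a grid for the sample choice $Q(x) = x^{1/3}$, which corresponds to the cube-root-rule power function $P(s) = s^3$. The choice of $Q$ and the sampled grid are illustrative only; the check complements, and of course does not replace, the proof above.
\begin{verbatim}
import itertools

def Q(x):
    # Sample concave, non-decreasing Q with Q(0) = 0; corresponds to the
    # cube-root-rule power function P(s) = s^3 (an illustrative choice).
    return x ** (1.0 / 3.0)

def g(x):
    return x / Q(x) if x > 0 else 0.0    # g(x) = x / Q(x), with g(0) = 0

grid = [0.1 * k for k in range(1, 101)]  # sample points in (0, 10]

# (a) monotonicity: g(y) >= g(x) whenever y >= x
assert all(g(y) >= g(x) - 1e-12
           for x, y in itertools.product(grid, grid) if y >= x)

# (b) subadditivity: g(x) + g(y) >= g(x + y)
assert all(g(x) + g(y) >= g(x + y) - 1e-12
           for x, y in itertools.product(grid, grid))

print("monotonicity and subadditivity hold on the sampled grid")
\end{verbatim}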
\begin{document} \title{The round handle problem} \author{Min Hoon Kim} \address{Center for Research in Topology, Department of Mathematics, POSTECH, Pohang 37673, Republic of Korea} \email{[email protected]} \author{Mark Powell} \address{Department of Mathematical Sciences, Durham University, United Kingdom} \email{[email protected]} \author{Peter Teichner} \address{Max Planck Institut f\"{u}r Mathematik, Vivatsgasse 7, 53111 Bonn, Germany} \email{[email protected]} \def\textup{2010} Mathematics Subject Classification{\textup{2010} Mathematics Subject Classification} \expandafter\let\csname subjclassname@1991\endcsname=\textup{2010} Mathematics Subject Classification \expandafter\let\csname subjclassname@2000\endcsname=\textup{2010} Mathematics Subject Classification \subjclass{ 57M25, 57M27, 57N13, 57N70, } \keywords{Round handle problem, topological surgery, $s$-cobordism} \begin{abstract} We present the Round Handle Problem (RHP), proposed by Freedman and Krushkal. It asks whether a collection of links, which contains the Generalised Borromean Rings (GBRs), are slice in a $4$-manifold $R$ constructed from adding round handles to the four ball. A negative answer would contradict the union of the surgery conjecture and the $s$-cobordism conjecture for $4$-manifolds with free fundamental group. \end{abstract} \maketitle \section{Statement of the RHP} We give an alternative proof of the connection of the Round Handle Problem to the topological surgery and $s$-cobordism conjectures (these will all be recalled below). The Round Handle Problem (RHP) was formulated in \cite[Section~5.1]{Freedman-Krushkal-2016-A}. We give a shorter and easier argument that the above mentioned conjectures imply a positive answer to the RHP. Let $L = L_1 \sqcup \cdots \sqcup L_m$ be an oriented ordered link in $S^3$ with vanishing pairwise linking numbers. We will be particularly concerned with the Generalised Borromean Rings (GBRs). By definition these are the collection of links arising from iterated Bing doubling starting with a Hopf link. An example is shown in Figure~\ref{figure:GBR}. \begin{figure} \caption{An example of a GBR: the two-fold Bing double of the Hopf link.} \label{figure:GBR} \end{figure} Write $X_L := S^3 \setminus N(L)$ for the exterior of $L$. Let $\mu_{i} \subset X_L$ be an oriented meridian of the $i$th component of $L$, and let $\lambda_i \subset X_L$ be a zero-framed oriented longitude. Both are smoothly embedded curves. Make $\mu_i$ small enough that $\lk(\mu_i,\lambda_i)=0$ (of course $\lk(\mu_i,L_i) =1$ and $\lk(\lambda_i,L_i)=0$). Let $N(\mu_i),\, N(\lambda_i) \subset X_L$ be closed tubular neighbourhoods, each homeomorphic to $S^1 \times D^2$. A \emph{Round handle} $H$ is a copy of $S^1 \times D^2 \times D^1$. The \emph{attaching region} is $S^1 \times D^2 \times S^0 \subset \partial(S^1 \times D^2 \times D^1) \cong S^1 \times S^2$. The notion of round handles is due to Asimov~\cite{Asimov}. \begin{definition} Given an $m$-component link $L$, construct a manifold $R(L)$ by attaching $m$ round handles $\{H_i\}_{i=1}^m$ to $D^4$ as follows. For the $i$th round handle, glue $S^1 \times D^2 \times \{-1\}$ to $N(\mu_i) \subset X_L \subset S^3= \partial D^4$, and glue $S^1 \times D^2 \times \{1\}$ to $N(\lambda_i)$. In both cases use the zero-framing for the identification of $N(\mu_i)$ and $N(\lambda_i)$ with $S^1 \times D^2$. Note that the link $L$ lies in $\partial R(L)$. \end{definition} \noindent The key question will be whether $L$ is slice in $R(L)$. 
\begin{definition}[Round Handle Slice] A link $L$ is \emph{Round Handle Slice $($RHS$)$} if $L \subset \partial R(L)$ is slice in $R(L)$, that is if $L$ is the boundary of a disjoint union of locally flat embedded discs in $R(L)$. \end{definition} \begin{theorem}\label{theorem:RHP} Suppose that the topological surgery and $s$-cobordism conjectures hold for free fundamental groups. Then for any link $L$ with pairwise linking numbers all zero, $L$ is round handle slice. \end{theorem} \begin{problem} The Round Handle Problem is to determine whether all pairwise linking number zero links are round handle slice. \end{problem} By Theorem~\ref{theorem:RHP}, a negative answer for one such link would contradict the logical union of the topological surgery conjecture and the $s$-cobordism conjecture for free fundamental groups. It is suggested by Freedman and Krushkal, but by no means compulsory, to focus on the links arising as GBRs. It is also suggested that one might try to adapt Milnor's invariants to provide obstructions. The primary purpose of this problem, like the AB slice problem, is to provide a way to get obstructions to surgery and $s$-cobordism. Key work on the AB slice problem includes~\cite{Freedman-AB,Freedman-Lin:1989-1,Krushkal-AB,Freedman-Krushkal-2016-A}. We briefly recall the statements of these conjectures and their relation to the disc embedding problem. \begin{conjecture}[Topological surgery conjecture]\label{conjecture:surgery} Every degree one normal map $(M,\partial M) \to (X,\partial X)$ from a compact $4$-manifold $M$ to a $4$-dimensional Poincar\'{e} pair $(X,\partial X)$, that is a $\mathbb Z[\pi_1(X)]$-homology equivalence on the boundary, is topologically normally bordant rel.\ boundary to a homotopy equivalence if and only if the surgery obstruction in $L_4(\mathbb Z[\pi_1(X)])$ vanishes. \end{conjecture} \begin{conjecture}[$s$-cobordism conjecture]\label{conjecture:s-cobordism} Every compact topological $5$-dimensional $s$-cobordism $(W;M_0,M_1)$, that is a product on the boundary, is homeomorphic to a product $W \cong M_0 \times I \cong M_1 \times I$, extending the given product structure on the boundary. \end{conjecture} In Section~\ref{section:disc-embedding} we will explain why the union of these two conjectures is equivalent to the disc embedding conjecture, stated below. In the statement of this conjecture we use the equivariant intersection form \[\lambda \colon H_2(M,\partial M;\mathbb Z[\pi_1(M)]) \times H_2(M;\mathbb Z[\pi_1(M)]) \to \mathbb Z[\pi_1(M)]\] and the group-valued self-intersection number \[\mu \colon H_2(M;\mathbb Z[\pi_1(M)]) \to \frac{\mathbb Z[\pi_1(M)]}{g \sim w(g)g^{-1},\, 1 \sim 0},\] where $w \colon \pi_1(M) \to C_2 = \{\pm 1\}$ is the orientation character. Also note that the transverse spheres are required to be \emph{framed}, which means that they have trivialised normal bundles. \begin{conjecture}[Disc embedding conjecture]\label{conjecture:disc-embedding} Let $f_i\colon (D^2,S^1) \looparrowright (M,\partial M)$ be a collection of generically immersed discs in a compact $4$-manifold $M$ with disjointly embedded boundaries. Suppose that there are framed generically immersed spheres $g_i \colon S^2 \looparrowright M$ such that for every $i,j$ we have $\lambda(g_i,g_j)=0$, $\mu(g_i)=0$, and the $g_i$ are transverse spheres, so $\lambda(f_i,g_j) = \delta_{ij}$. Then the circles $f_i(S^1)$ bound disjointly embedded, locally flat discs in $M$ with geometrically transverse spheres, inducing the same framing on $f_i(S^1)$ as the $f_i$. 
\end{conjecture} Conjectures~\ref{conjecture:surgery}, \ref{conjecture:s-cobordism}, and \ref{conjecture:disc-embedding} are already theorems for good groups, a class of groups containing groups of subexponential growth~\cite{Freedman-Teichner:1995-1,Krushkal-Quinn:2000-1}, and closed under taking subgroups, extensions, quotients, and direct limits. \begin{remark} The obstruction theory presented in the proof of \cite[Lemma~5.4]{Freedman-Krushkal-2016-A}, which forms part of the proof given there of Theorem~\ref{theorem:RHP}, is incomplete. First, $H^3(R,\partial R;\pi_2(R')) \cong H_1(R;\pi_2(R')) = 0$ since $\pi_2(R')$ is a free $\mathbb Z[\pi_1(R')] \cong \mathbb Z[\pi_1(R)]$-module, so the obstruction here certainly vanishes, as asserted in \cite{Freedman-Krushkal-2016-A}. However a potentially non-trivial obstruction, not considered in \cite{Freedman-Krushkal-2016-A}, lies in $H^4(R,\partial R;\pi_3(R'))$. Analysing this depends on the relationship between the intersection forms of $R$ and $R'$. Our proof of Theorem~\ref{theorem:RHP} avoids obstruction theory altogether. \end{remark} \begin{remark} Our proof implies that every \emph{knot} is round handle slice, since in that case the proof applies Conjectures~\ref{conjecture:surgery} and~\ref{conjecture:s-cobordism} with fundamental group $\mathbb Z$. But for fundamental group $\mathbb Z$ these conjectures are theorems, since they are both implied by the disc embedding theorem~\cite[Section~2.9,~Theorem~5.1A]{Freedman-Quinn:1990-1}. \end{remark} \section{Proof of Theorem~\ref{theorem:RHP}} The proof of Theorem~\ref{theorem:RHP} involves the construction of an $s$-cobordism rel.\ boundary from the manifold $R(L)$, henceforth abbreviated to $R$, to another $4$-manifold $R'$, in which $L$ is slice. We begin with a Kirby diagram for $R$, shown in Figure~\ref{figure:RHP2}. \begin{figure} \caption{A handle diagram for $R$ in $N(L_i)$. Replicate this for each $i=1,\dots,m$.} \label{figure:RHP2} \end{figure} First we will explain the figure, then we will explain why this is a diagram for $R$. The diagram does not show the literal Kirby diagram for $R$. Rather, the curve labelled $d$ specifies a solid torus, as the complement of a regular neighbourhood of this curve. Inside the solid torus a dotted circle, corresponding to a 1-handle, and a zero-framed circle, corresponding to a $2$-handle, can be seen. Embed a copy of this solid torus into a closed tubular neighbourhood $N(L_i)$ for each $i=1,\dots,m$, using the zero framing. One therefore has $m$ 1-handles and $m$ 2-handles, one pair in each solid torus neighbourhood $N(L_i)$, arranged as shown in Figure~\ref{figure:RHP2}. The diagram also shows the link component $L_i$ parallel to the core of the solid torus. Now we explain why Figure~\ref{figure:RHP2} is a diagram for the $4$-manifold $R$. A round handle can be constructed from a 1-handle and a 2-handle whose boundary goes around one attaching circle of the round handle (a meridian of $L$), traverses the 1-handle, goes around the other attaching circle (a zero-framed longitude of the same component of $L$), and then traverses the $1$-handle in the other direction. Ignoring the link $L$, we see that $R$ is diffeomorphic to the zero-trace of $L$ with $m$ $1$-handles added. \begin{figure} \caption{The handle diagram from Figure~\ref{figure:RHP2} with a cancelling pair introduced.} \label{figure:RHP1} \end{figure} Figure~\ref{figure:RHP1} shows another diagram for $R$ with a cancelling 1-handle and 2-handle pair introduced in each $N(L_i)$. 
Next, Figure~\ref{figure:RHP3} shows a Kirby diagram, with the same convention as above, for a 4-manifold that we call $R_M$. Here $M$ stands for ``middle,'' since this manifold will lie in the middle of the $s$-cobordism we are about to construct. \begin{figure} \caption{A handle diagram for $R_M$ in $N(L_i)$. The 2-handles in the picture are labelled $\alpha_i$, $\beta_i$ and $\gamma_i$.} \label{figure:RHP3} \end{figure} The diagram for $R_M$ is very similar to the diagram for $R$ from Figure~\ref{figure:RHP1}; in order to get from the diagram for $R_M$ to that for $R$, inside each solid torus neighbourhood $N(L_i)$, change the zero-framed 2-handle whose attaching curve is labelled $\alpha_i$ in Figure~\ref{figure:RHP3} to a 1-handle. That is, for each $i$, perform surgery on the 2-sphere obtained from the core of the 2-handle union a disc bounded by the attaching circle in $D^4$. The fundamental group of $R_M$ is $\pi_1(R_M) \cong F_m$, the free group on~$m$ letters, generated by meridians of the dotted circles. Note that, by virtue of the cores of the $\beta_i$ $2$-handles, $L$ is slice in $R_M$. To see this, observe that $L_i$ can be passed through the attaching region of the $\alpha_i$ $2$-handle in Figure~\ref{figure:RHP3}. \begin{remark} This remark explains why the remainder of the proof is necessary: it is far from obvious that $L$ is slice in $R$. In Figure~\ref{figure:RHP1}, $L_i$ cannot be passed through a dotted circle corresponding to a $1$-handle, so the argument just given cannot be used to show that $L_i$ is slice in $R$ via Figure~\ref{figure:RHP1}. On the other hand if one isotopes the link through the attaching region of a $2$-handle, one cannot later use the core of that $2$-handle to construct an embedded slice disc, so one cannot use Figure~\ref{figure:RHP2} to see that~$L$ is slice in~$R$. \end{remark} Next, there are also generically immersed 2-spheres in $R_M$ obtained from the union of the cores of the $\beta_i$ 2-handles with immersed discs~$D_i$ in~$D^4$ bounded by the $\beta_i$ attaching curves. By choosing the immersed discs $D_i$ so that their normal bundles induce the 0-framing on the curves $\beta_i\subset S^3$, we have framed immersed spheres. We call these the $\beta_i$-spheres. The linking number zero hypothesis implies that the algebraic intersection numbers in $\mathbb Z[\pi_1(R_M)] \cong \mathbb Z[F_m]$ between these 2-spheres vanish. Consider similar framed spheres arising from the round handle $2$-handles, namely the $2$-handles whose attaching curves are labelled $\gamma_i$ in Figure~\ref{figure:RHP3}. For each $i$, isotope the curve $\gamma_i$ through $\beta_i$ and out from the 1-handle; that is, pull the oxbow part straight until $\gamma_i$ is a round circle, parallel to $\beta_i$. Embed the isotopies in a collar $S^3 \times [1-\varepsilon,1]$. Use parallel push offs of the discs $D_i$, minus their intersection with $S^3 \times [1-\varepsilon,1]$, to cap the resulting curves. We have just constructed discs $E_i$ with boundary $\gamma_i$, that intersect the discs $D_j$ algebraically in $\delta_{ij}$. Cap off the discs $E_i$ with the cores of the $\gamma_i$ 2-handles to obtain framed immersed 2-spheres in $R_M$, that we call the $\gamma_i$-spheres. The $\beta_i$- and $\gamma_j$-spheres are algebraically dual over $\mathbb Z[F_m]$. 
\begin{lemma}\label{lemma:emb-spheres} There exist framed, locally flat, embedded spheres $B_i \subset R_M$ in the complement of the slice discs for $L$, with $B_i$ regularly homotopic to the $\beta_i$-sphere for $i=1,\dots,m$. \end{lemma} \begin{proof} To prove Lemma~\ref{lemma:emb-spheres}, we will apply the disc embedding conjecture to immersed Whitney discs~$f_k$ pairing up double points of the $\beta_i$-spheres, in the complement of the slice discs for $L$ in $R_M$, and in the complement of the $\beta_i$-spheres themselves. We will then perform the Whitney move using the resulting embedded Whitney discs to obtain the spheres $B_i$. We argue that the immersed Whitney discs $f_k$ can be found. First, apply the geometric Casson lemma~\cite[Lemma~3.1]{Freedman:1982-1},~\cite[Section~1.5]{Freedman-Quinn:1990-1} to convert the $\beta_i$-spheres and the $\gamma_j$-spheres from algebraic duals into geometric duals, intersecting in precisely one point if $i=j$ and with empty intersection otherwise. Preliminary immersed Whitney discs $f_k'$ can be found in the complement of slice discs for $L$ because the slice discs for $L$ in $R_M$ use push offs of the core of the $\beta_i$ 2-handles, whereas the double points of the $\beta_i$-spheres lie in the interior of $D^4$. So one can find immersed Whitney discs in $D^4$ pairing up all double points among the $\beta_i$-spheres. However, these initial Whitney discs $f_k'$, which we can assume to be framed Whitney discs by boundary twisting, might intersect the $\beta_i$-spheres. Tube each intersection of a Whitney disc with a $\beta_i$-sphere into a parallel copy of the dual sphere $\gamma_i$. This produces Whitney discs $f_k$ in $R_M$ that are framed and disjoint from both the slice discs for $L$ and the $\beta_i$-spheres. Construct framed transverse spheres for the $f_k$ from Clifford tori for the double points, with caps given by normal discs to the $\beta_i$-spheres tubed into the dual $\gamma_i$-spheres. Use the caps to symmetrically contract~\cite[Section~2.3]{Freedman-Quinn:1990-1} the tori to immersed spheres. See \cite[Corollary~5.2B]{Freedman-Quinn:1990-1} for more details. Call the resulting spheres $g_k$. All intersections among the transverse spheres $g_k$ arose from contraction, so they cancel algebraically over $\mathbb Z[F_m]$, and we therefore have $\lambda(g_k,g_{\ell})=0 = \mu(g_k)$ for every $k,\ell$. Similarly, all of the intersection points between the $f_k$ and the $g_{\ell}$ cancel, except those arising from the original intersection points between Clifford tori and the Whitney discs $f_k$. It follows that the $f_k$ and the $g_{\ell}$ are algebraically dual over $\mathbb Z[F_m]$. We may therefore apply the disc embedding Conjecture~\ref{conjecture:disc-embedding} to find embedded Whitney discs, in the complement of the slice discs for $L$ and in the complement of the~$\beta_i$-spheres. The disc embedding conjecture has no hypothesis on the fundamental group, so we do not need to control the fundamental group here. Whitney moves across the embedded discs resulting from Conjecture~\ref{conjecture:disc-embedding} give a regular homotopy to the desired framed embedded spheres~$B_i$. This completes the proof of Lemma~\ref{lemma:emb-spheres}. \end{proof} Perform surgery on $R_M$ using these framed embedded spheres $B_i$, and define $R'$ to be the $4$-manifold obtained as result of these surgeries. Note that $L$ is still slice in $R'$, since the spheres $B_i$ lie in the complement of the slice discs. 
\begin{lemma}\label{lemma:s-cob} The $4$-manifolds $R$ and $R'$ are $s$-cobordant rel.\ boundary. \end{lemma} \begin{proof} To prove Lemma~\ref{lemma:s-cob}, start with $R_M$. The trace of surgeries on the $\alpha_i$-spheres gives a cobordism to $R$. The trace of surgeries on the $\beta_i$-spheres gives a cobordism to $R'$. The union of the two cobordisms along $R_M$ is an $s$-cobordism from $R$ to $R'$, since algebraically the intersection numbers $\alpha_i \cdot \beta_j = \delta_{ij}$. This completes the proof of Lemma~\ref{lemma:s-cob}. \end{proof} Note that we used duals to the $\beta_i$-spheres twice, once to apply surgery and once to prove that we have an $s$-cobordism. However, we used \emph{different} duals. For the surgery we used the $\gamma_i$-spheres arising from the round handle $2$-handles. For the $s$-cobordism, we used the $\alpha_i$-spheres. Then since $R$ and $R'$ are $s$-cobordant, the $s$-cobordism Conjecture~\ref{conjecture:s-cobordism} implies that they are homeomorphic rel.\ boundary. Since the homeomorphism is the identity on the boundary, the link~$L$ is preserved. Thus the images of the slice discs for $L$ in $R'$ under the homeomorphism $f \colon R' \to R$ are slice discs for $L$ in~$R$. It follows that $L$ is Round Handle Slice as desired. This completes the proof of Theorem~\ref{theorem:RHP}. \section{Disc embedding is equivalent to surgery and $s$-cobordism}\label{section:disc-embedding} In this section we briefly argue that the disc embedding Conjecture~\ref{conjecture:disc-embedding} is equivalent to the combination of the surgery and $s$-cobordism Conjectures, numbered \ref{conjecture:surgery} and \ref{conjecture:s-cobordism} respectively. There are no new equivalences described in this section. Indeed, references are given throughout, mostly to the relevant subsections of \cite{Freedman-Quinn:1990-1}. We include this section for readers wanting a succinct guide to establishing these equivalences. We will argue that the following are equivalent: (i) surgery and $s$-cobordism; (ii) disc embedding; (iii) height 1.5 capped gropes contain embedded discs with the same boundary; (iv) certain links $L \cup m$, to be described below, are slice with standard slice discs for $L$. We will show: \[\text{(i)} \underset{(4)}{\implies} \text{(iv)} \underset{(3)}{\iff} \text{(iii)} \underset{(2)}{\iff} \text{(ii)} \underset{(1)}{\implies} \text{(i)}.\] \begin{enumerate}[(1)] \item \emph{The disc embedding conjecture \textup{(ii)} implies \textup{(i)} surgery and $s$-cobordism.} This follows from inspection of the high dimensional proof: the proofs of topological surgery in dimension four and of the five dimensional topological $s$-cobordism theorem can be reduced precisely to the need to find embedded discs with geometrically transverse spheres in the presence of algebraically transverse spheres. See for example~\cite{Luck-surgery-book} for an exposition of the high dimensional theory. The $s$-cobordism theorem requires an extra argument to find the transverse spheres, which can be found in~\cite[Chapter~7]{Freedman-Quinn:1990-1}. \item \emph{The disc embedding conjecture \textup{(ii)} is equivalent to the statement \textup{(iii)} that every height $1.5$ capped grope contains an embedded disc with the same framed boundary.} For one direction, if disc embedding holds, then we can use it to find a disc in a height $1.5$ capped grope, as follows.
The caps on the height $1$ side are immersed discs, and parallel copies of the symmetric contraction of the height $1.5$ side, together with annuli in neighbourhoods of the boundary circles, give transverse spheres that have the right algebraic intersection data. See \cite[Section~2.6]{Freedman-Quinn:1990-1} for the construction of transverse gropes within a grope neighbourhood, which are then symmetrically contracted~\cite[Section~2.3]{Freedman-Quinn:1990-1} to yield transverse spheres. Apply disc embedding to find embedded discs with framed boundary the same as the height 1 caps' framed boundary. These correctly framed embedded discs can be used to asymmetrically contract the first stage of the height $1.5$ grope to an embedded disc. On the other hand, a collection of discs with transverse spheres as in Conjecture~\ref{conjecture:disc-embedding} gives rise to a height $1.5$ capped grope with the same boundary and with geometrically transverse spheres for the bottom stage, as shown in \cite[Section~5.1]{Freedman-Quinn:1990-1}. Thus if every height $1.5$ capped grope contains an embedded disc, then disc embedding holds. \item \emph{Height 1.5 capped gropes contain embedded discs with the same boundary \textup{(iii)} if and only if \textup{(iv)} certain links $L \cup m$ are slice with standard slice discs for $L$.} A Kirby diagram for a capped grope consists of an unlink $L$, in the form of a link obtained from the unknot by iterated ramified Bing doubling, followed by a single operation of ramified Whitehead doubling. Place a dot on every component to denote that they correspond to $1$-handles; a neighbourhood of a capped grope is diffeomorphic to a boundary connected sum of copies of $S^1 \times D^3$. The boundary circle of the grope is represented by a meridian $m$ to the original unknot. One can think of performing the ramified Bing and Whitehead doubling on one component of the Hopf link. A grope contains an embedded disc with the same framed boundary if and only if this link $L \cup m$ is slice with standard smooth slice discs for all the dotted components. The desired embedded disc is the slice disc for~$m$. See \cite[Proposition~12.3A]{Freedman-Quinn:1990-1} for further details. \item \emph{Surgery and $s$-cobordism \textup{(i)} together imply \textup{(iv)} that the links $L \cup m$ are slice with standard slice discs for $L$.} Let $L \cup m$ be any link from the family constructed in the previous item, using iterated ramified Bing and Whitehead doubling on one component of the Hopf link. The zero surgery on $L \cup m$ bounds a spin $4$-manifold over a wedge of circles since the Arf invariants of the components vanish. By the topological surgery conjecture, this can be improved, via a normal bordism rel.\ boundary, to be homotopy equivalent to the wedge of circles. Attach a $2$-handle to fill in the surgery torus $D^2 \times S^1$ of~$m$. The remaining 4-manifold is homeomorphic to a boundary connected sum of copies of $S^1 \times D^3$, by the $s$-cobordism conjecture. Therefore it is homeomorphic to the exterior of standard smooth slice discs for $L$ in $D^4$. (We have no control over the remaining slice disc, whose boundary is the link component~$m$.) Thus surgery and $s$-cobordism imply that the link $L \cup m$ is slice with standard slice discs for $L$. More details are given in \cite[Section~11.7C]{Freedman-Quinn:1990-1} and the preceding sections of Chapter $11$. \end{enumerate} \def\MR#1{} \end{document}
\begin{document} \title[Quasiconformal Mappings and Neumann Eigenvalues]{Quasiconformal Mappings and Neumann Eigenvalues of Divergent Elliptic Operators} \author{V.~Gol'dshtein, V.~Pchelintsev, A.~Ukhlov} \begin{abstract} We study spectral properties of divergence form elliptic operators $-\textrm{div} [A(z) \nabla f(z)]$ with the Neumann boundary condition in planar domains (including some fractal type domains) that satisfy the quasihyperbolic boundary condition. Our method is based on an interplay between quasiconformal mappings, elliptic operators and composition operators on Sobolev spaces. \end{abstract} \maketitle \footnotetext{\textbf{Key words and phrases:} Elliptic equations, Sobolev spaces, quasiconformal mappings.} \footnotetext{\textbf{2010 Mathematics Subject Classification:} 35P15, 46E35, 30C60.} \section{Introduction} In this paper we apply methods of (quasi)conformal geometry to spectral problems for $A$-divergent form elliptic operators with the Neumann boundary condition \begin{equation}\label{EllDivOper} L_{A}=-\textrm{div} [A(z) \nabla f(z)], \quad z=(x,y)\in \Omega, \quad \left\langle A(z) \nabla f, n \right\rangle\big|_{\partial \Omega}=0, \end{equation} in a large class of (non)convex domains $\Omega \subset \mathbb C$ that satisfy the quasihyperbolic boundary condition \cite{KOT01,KOT02}. Here the matrix function $A(z)=\left\{a_{kl}(z)\right\}$ with measurable entries $a_{kl}(z)$ belongs to the class $M^{2 \times 2}(\Omega)$ of all $2 \times 2$ symmetric matrix functions that satisfy the additional condition $\textrm{det} A=1$ a.e. and the uniform ellipticity condition: \begin{equation}\label{UEC} \frac{1}{K}|\xi|^2 \leq \left\langle A(z) \xi, \xi \right\rangle \leq K |\xi|^2 \,\,\, \text{a.e. in}\,\,\, \Omega, \end{equation} for every $\xi \in \mathbb C$ and for some $1\leq K< \infty$. Elliptic operators of this type arise in various problems of mathematical physics (see, for example, \cite{AIM}). The suggested method is based on a (quasi)conformal representation of a non-smooth Riemannian metric in the domain $\Omega$: \[ ds^2=a_{11}(x,y)dx^2+2a_{12}(x,y)dxdy+a_{22}(x,y)dy^2 \] induced by the matrix $A$. The complex dilatation $\mu$ of the corresponding quasiconformal mapping $\varphi$ from $\Omega$ to the unit disc $\mathbb D\subset\mathbb C$ can be calculated from the matrix $A$ (see, for example, \cite[p. 412]{AIM}). Conversely, if the complex dilatation $\mu$ is given, then the matrix $A$ can be recovered. This means that there is a one-to-one correspondence between such matrices and complex dilatations. By construction, the quasiconformal mapping $\varphi:\Omega\to\mathbb D$ is an isometry between the domain $\Omega$ with this new Riemannian metric $ds$ (induced by the matrix $A$) and the unit disc $\mathbb{D}$ with the hyperbolic metric. It is therefore reasonable to call such a metric $ds$ an $A$-quasiconformal metric and the quasiconformal mapping $\varphi$ an $A$-quasiconformal mapping (i.e., a quasiconformal mapping agreed with the matrix $A$). Hence, we can conclude that an $A$-quasiconformal mapping is conformal (i.e., preserves angles) in this $A$-quasiconformal Riemannian metric. Recall that conformal homeomorphisms induce isometries of the uniform Sobolev spaces $L^{1,2}(\Omega)$ and $L^{1,2}(\mathbb D)$. In the present article we prove that $A$-quasi\-con\-for\-mal mappings induce isometries between the uniform Sobolev space $L_A^{1,2}(\Omega)$ (the Sobolev space agreed with the matrix $A$) and the uniform Sobolev space $L^{1,2}(\mathbb D)$.
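To make the correspondence $A \leftrightarrow \mu$ concrete, the following short Python sketch (an illustrative aside added for the reader, not part of the original argument) computes the complex dilatation from a matrix $A\in M^{2\times 2}(\Omega)$ using the formula \eqref{ComDil} recalled below and reconstructs $A$ from $\mu$ using the representation \eqref{Matrix-F}; the diagonal matrix chosen for the test is the one appearing in Example~\ref{example1} for the ellipse, and the numerical values $a=2$, $b=1$ are purely illustrative.
\begin{verbatim}
import numpy as np

def dilatation_from_matrix(A):
    # mu = (a22 - a11 - 2i*a12) / det(I + A), the formula (ComDil) below
    a11, a12, a22 = A[0, 0], A[0, 1], A[1, 1]
    return (a22 - a11 - 2j * a12) / np.linalg.det(np.eye(2) + A)

def matrix_from_dilatation(mu):
    # A = [[|1-mu|^2, -2 Im mu], [-2 Im mu, |1+mu|^2]] / (1-|mu|^2), cf. (Matrix-F)
    d = 1 - abs(mu) ** 2
    return np.array([[abs(1 - mu) ** 2, -2 * mu.imag],
                     [-2 * mu.imag,     abs(1 + mu) ** 2]]) / d

a, b = 2.0, 1.0                                       # illustrative parameters, a > b >= 0
A = np.diag([(a + b) / (a - b), (a - b) / (a + b)])   # matrix of the ellipse example
mu = dilatation_from_matrix(A)                        # equals -b/a = -0.5
print(mu, np.allclose(matrix_from_dilatation(mu), A))  # (-0.5-0j) True
\end{verbatim}
At each point $z$ the round trip $A(z)\mapsto\mu(z)\mapsto A(z)$ returns the original matrix, which is exactly the pointwise one-to-one correspondence described above.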
\vskip 0.2cm {\bf Conjecture.} {\it Spectral properties of the $A$-divergent form elliptic operators with the Neumann boundary condition depend on the $A$-quasiconformal geometry of domains only.} \vskip 0.2cm The suggested method is based on connections between composition operators on Sobolev spaces, elliptic operators and quasiconformal mappings. We prove that $\varphi:\Omega\to\Omega'$ is a quasiconformal mapping agreed with the matrix $A$ (i.e., induced by $A$ via the Beltrami equation) if and only if \[ \iint\limits_\Omega \left\langle A(z)\nabla f(\varphi(z)),\nabla f(\varphi(z))\right\rangle\,dxdy=\iint\limits_{\Omega'} \left\langle \nabla f(w),\nabla f(w)\right\rangle\,dudv, \] for all $f\in L^{1,2}(\Omega')$. This result permits us to introduce an $A$-norm in the corresponding uniform Sobolev space $L^{1,2}_A$. For this norm the $A$-quasiconformal mappings play a role similar to that of conformal mappings for the Laplace operator and the uniform Sobolev spaces $L^{1,2}$. In particular, we prove that $A$-quasiconformal mappings generalize the well-known property of conformal mappings of generating isometries of the uniform Sobolev spaces $L^{1,2}(\Omega)$ and $L^{1,2}(\Omega')$ (see, for example, \cite{C50}). In terms of $A$-quasiconformal mappings we also refine the functional characterization of quasiconformal mappings obtained in \cite{VG75} in terms of isomorphisms of uniform Sobolev spaces $L^{1,2}$.

A few short historical remarks. Spectral estimates of eigenvalues of elliptic operators represent an important part of modern spectral theory (see, for example, \cite{A98,AB06,BLL,BCT15,EP15,ENT,FNT,LM98}). The classical upper estimate for the first non-trivial Neumann eigenvalue of the Laplace operator (the $A$-divergent form elliptic operator with the matrix $A=I$) \begin{equation*} \mu_1(I,\Omega):=\mu_1(\Omega)\leq \mu_1(\Omega^{\ast})=\frac{{j_{1,1}'^2}}{R^2_{\ast}} \end{equation*} was proved by Szeg\"o \cite{S54} for simply connected planar domains via a conformal mapping technique (``the method of conformal normalization''). In this inequality $j_{1,1}'$ denotes the first positive zero of the derivative of the Bessel function $J_1$ and $\Omega^{\ast}$ is a disc of the same area as $\Omega$ with radius $R_{\ast}$. In convex domains $\Omega\subset\mathbb R^n$, $n\geq 2$, the classical lower estimate for the first non-trivial Neumann eigenvalue of the Laplace operator \cite{PW} states that \begin{equation} \label{PW} \mu_1(\Omega)\geq \frac{\pi^2}{d(\Omega)^2}, \end{equation} where $d(\Omega)$ is the diameter of the convex domain $\Omega$. Similar estimates for the non-linear $p$-Laplace operator, $p\ne 2$, were obtained much later in \cite{ENT}. Unfortunately, for non-convex domains $\mu_1(\Omega)$ cannot be characterized in terms of the Euclidean diameter. This can be seen by considering a domain consisting of two identical squares connected by a thin corridor \cite{BCDL16}.

Let us return to our studies. In the previous works \cite{GPU17_2,GPU19,GU16} we returned to applications of (quasi)conformal mapping techniques to such estimates in rough (non-convex) domains. Recall that some applications of conformal mappings to this problem can be found in \cite{S54}. We used (quasi)conformal mappings in the framework of composition operators on Sobolev spaces \cite{U93,VG75,VU02}.
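As a quick numerical aside (added for illustration only and not part of the original argument), the two classical bounds recalled above can be checked on the unit disc, where Szeg\"o's inequality is an equality, $\mu_1(\mathbb D)=(j'_{1,1})^2$ (see the Remark after Theorem~\ref{T4.5} below), while the lower bound \eqref{PW} gives the smaller value $\pi^2/4$; the sketch assumes the SciPy library is available.
\begin{verbatim}
import numpy as np
from scipy.special import jnp_zeros

j11 = jnp_zeros(1, 1)[0]            # first positive zero of J_1', approx. 1.84118

R, d = 1.0, 2.0                     # radius and diameter of the unit disc
szego = j11 ** 2 / R ** 2           # Szego's bound, attained on the disc: approx. 3.390
lower_pw = np.pi ** 2 / d ** 2      # the classical lower bound (PW): approx. 2.467
print(lower_pw <= szego)            # True
\end{verbatim}
Of course this check carries no mathematical content beyond the numerics; it only fixes the orders of magnitude of the constants that appear repeatedly below.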
This composition operator approach permitted us to obtain lower estimates of the first non-trivial Neumann-Laplace eigenvalue $\mu_1(\Omega)$ in terms of the hyperbolic (conformal) radius of $\Omega$ for a large class of domains that includes some fractal domains. In this paper we use $A$-quasiconformal mappings via the composition operator theory. The corresponding composition operators (isometries for the norm induced by the matrices $A$) allow us to reduce the spectral problem for the divergence form elliptic operator \eqref{EllDivOper} defined in a simply connected domain $\Omega\subset\mathbb C$ to a weighted spectral problem for the Laplace operator in the unit disc $\mathbb D\subset\mathbb C$. Roughly speaking, by the chain rule applied to a function $f(z)=g \circ \varphi(z)$ \cite{GNR18}, we have \begin{equation}\label{QCE} -\textrm{div} [A(z) \nabla f(z)] = -\textrm{div} [A(z) \nabla g(\varphi(z))]= -\left|J(w,\varphi^{-1})\right|^{-1} \Delta g(w), \end{equation} where the weight $J(w,\varphi^{-1})$ is the Jacobian of the inverse mapping $\varphi^{-1}:\mathbb D\to\Omega$.

As an example we consider the divergent form operator $-\textrm{div} [A(z) \nabla f(z)]$ with the matrix $$ A(z)=\begin{pmatrix} \frac{a+b}{a-b} & 0 \\ 0 & \frac{a-b}{a+b} \end{pmatrix},\,\,a>b\geq 0, $$ defined in the interior of the ellipse $\Omega_e$ with semi-axes $a+b$ and $a-b$. By Theorem~\ref{T4.7} we have $$ \mu_1(A,\Omega_e) \geq \frac{(j'_{1,1})^2}{a^2-b^2}, $$ which is better (see Example~\ref{example1}) than the lower estimate obtained from the classical estimate \eqref{PW} and the uniform ellipticity condition: $$ \mu_1(A,\Omega_e) \geq \frac{\pi^2}{4(a+b)^2} \frac{a-b}{a+b}. $$ For thin ellipses, i.e., with $a+b$ fixed and $a-b$ tending to zero, our lower bound tends to $\infty$, while the classical one tends to $0$.

The application of the composition operator theory to spectral problems for $A$-divergent form elliptic operators is based on reducing a positive quadratic form \[ ds^2=a_{11}(x,y)dx^2+2a_{12}(x,y)dxdy+a_{22}(x,y)dy^2 \] defined in a planar domain $\Omega$, by means of a quasiconformal change of variables, to the canonical form \[ ds^2=\Lambda(du^2+dv^2),\,\, \Lambda\neq 0,\,\, \text{a.e. in}\,\, \Omega', \] provided that $a_{11}a_{22}-a^2_{12} \geq \kappa_0>0$, $a_{11}>0$, almost everywhere in $\Omega$ \cite{Ahl66, AIM, BGMR}. Note that this fact can be extended to linear operators of the form $\textrm{div} [A(z) \nabla f(z)]$, $z=x+iy$, for matrix functions $A \in M^{2 \times 2}(\Omega)$. Let $\xi(z)=\Re(\varphi(z))$ be the real part of a quasiconformal mapping $\varphi(z)=\xi(z)+i \eta(z)$ which satisfies the Beltrami equation: \begin{equation}\label{BelEq} \varphi_{\overline{z}}(z)=\mu(z) \varphi_{z}(z),\,\,\, \text{a.e. in}\,\,\, \Omega, \end{equation} where $$ \varphi_{z}=\frac{1}{2}\left(\frac{\partial \varphi}{\partial x}-i\frac{\partial \varphi}{\partial y}\right) \quad \text{and} \quad \varphi_{\overline{z}}=\frac{1}{2}\left(\frac{\partial \varphi}{\partial x}+i\frac{\partial \varphi}{\partial y}\right), $$ and the complex dilatation $\mu(z)$ is given by \begin{equation}\label{ComDil} \mu(z)=\frac{a_{22}(z)-a_{11}(z)-2ia_{12}(z)}{\det(I+A(z))},\quad I= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \end{equation} We call this quasiconformal mapping (with the complex dilatation $\mu$ defined by \eqref{ComDil}) an $A$-quasiconformal mapping. Note that the uniform ellipticity condition \eqref{UEC} can be reformulated as \begin{equation}\label{OVCE} |\mu(z)|\leq \frac{K-1}{K+1},\,\,\, \text{a.e. in}\,\,\, \Omega. \end{equation} Conversely, using the complex dilatation $\mu$ we can obtain from \eqref{ComDil} (see, for example, \cite[p.412]{AIM}) the following representation of the matrix $A$: \begin{equation}\label{Matrix-F} A(z)= \begin{pmatrix} \frac{|1-\mu|^2}{1-|\mu|^2} & \frac{-2 \Imag \mu}{1-|\mu|^2} \\ \frac{-2 \Imag \mu}{1-|\mu|^2} & \frac{|1+\mu|^2}{1-|\mu|^2} \end{pmatrix},\,\,\, \text{a.e. in}\,\,\, \Omega. \end{equation} So, given any $A \in M^{2 \times 2}(\Omega)$, one produces, by \eqref{ComDil}, the complex dilatation $\mu(z)$, which satisfies \eqref{OVCE}; in turn, the Beltrami equation \eqref{BelEq} induces a quasiconformal homeomorphism $\varphi:\Omega \to \varphi(\Omega)$ as its solution, by the measurable Riemann mapping theorem (see, for example, \cite{Ahl66}). We will say that the matrix function $A$ induces the corresponding $A$-quasiconformal homeomorphism $\varphi$, or that $A$ and $\varphi$ are agreed.

The $A$-quasiconformal mapping $\psi: \Omega \to \mathbb D$ of a simply connected domain $\Omega \subset \mathbb C$ onto the unit disc $\mathbb D \subset \mathbb C$ can be obtained as a composition of the $A$-quasiconformal homeomorphism $\varphi:\Omega \to \varphi(\Omega)$ and a conformal mapping $\omega : \varphi(\Omega) \to \mathbb D$. So, given an $A$-divergent form elliptic operator defined in a domain $\Omega\subset\mathbb C$, we construct an $A$-quasiconformal mapping $\psi: \Omega \to \mathbb D$ with the quasiconformality coefficient $$ K=\frac{1+\|\mu\mid L^{\infty}(\Omega)\|}{1-\|\mu\mid L^{\infty}(\Omega)\|}, $$ where $\mu$ is defined by \eqref{ComDil}.

We prove that any $A$-quasiconformal mapping $\varphi: \Omega \to \Omega'$ induces an isometry of the spaces $L^{1,2}_A(\Omega)$ and $L^{1,2}(\Omega')$. This is the main technical result of this paper about Sobolev spaces. Using applications of quasiconformal mappings to Sobolev type embedding theorems \cite{GG94,GU09}, we prove discreteness of the spectrum of the divergence form elliptic operators $-\textrm{div} [A(z) \nabla f(z)]$ with the Neumann boundary condition. Well-known estimates of constants in the Sobolev-Poincar\'e inequality for the unit disc and the previous isometry result in the framework of the composition operator theory permit us to obtain lower estimates of Neumann eigenvalues in terms of integrals of derivatives of $A$-quasiconformal mappings for a large class of rough domains that includes a subclass of domains with fractal boundaries (quasidiscs). From the geometric point of view this means that we study a class of domains $\Omega\subset\mathbb C$ equipped with the corresponding quasiconformal geometry. Any such domain can be considered as a Riemannian manifold, and we suppose that our estimates of Neumann eigenvalues are closely connected to spectral estimates of the Beltrami-Laplace operator.

\section{Sobolev spaces and $A$-quasiconformal mappings}

Let $E \subset \mathbb C$ be a measurable set on the complex plane and let $h:E \to \mathbb R$ be an a.e.\ positive locally integrable function, i.e., a weight. The weighted Lebesgue space $L^p(E,h)$, $1\leq p<\infty$, is the space of all locally integrable functions endowed with the following norm: $$ \|f\,|\,L^{p}(E,h)\|= \left(\iint\limits_E|f(z)|^ph(z)\,dxdy \right)^{\frac{1}{p}}< \infty. $$
The two-weighted Sobolev space $W^{1,p}(\Omega,h,1)$, $1\leq p< \infty$, is defined as the normed space of all locally integrable weakly differentiable functions $f:\Omega\to\mathbb{R}$ endowed with the following norm: \[ \|f\mid W^{1,p}(\Omega,h,1)\|=\|f\,|\,L^{p}(\Omega,h)\|+\|\nabla f\mid L^{p}(\Omega)\|. \] In the case $h=1$ this weighted Sobolev space coincides with the classical Sobolev space $W^{1,p}(\Omega)$. The seminormed Sobolev space $L^{1,p}(\Omega)$, $1\leq p< \infty$, is the space of all locally integrable weakly differentiable functions $f:\Omega\to\mathbb{R}$ endowed with the following seminorm: \[ \|f\mid L^{1,p}(\Omega)\|=\|\nabla f\mid L^p(\Omega)\|, \,\, 1\leq p<\infty. \] We also need the weighted seminormed Sobolev space $L_{A}^{1,2}(\Omega)$ (associated with the matrix $A$), defined as the space of all locally integrable weakly differentiable functions $f:\Omega\to\mathbb{R}$ endowed with the following seminorm: \[ \|f\mid L_{A}^{1,2}(\Omega)\|=\left(\iint\limits_\Omega \left\langle A(z)\nabla f(z),\nabla f(z)\right\rangle\,dxdy \right)^{\frac{1}{2}}. \] The corresponding Sobolev space $W^{1,2}_{A}(\Omega)$ is defined as the normed space of all locally integrable weakly differentiable functions $f:\Omega\to\mathbb{R}$ endowed with the following norm: \[ \|f\mid W^{1,2}_{A}(\Omega)\|=\|f\,|\,L^{2}(\Omega)\|+\|f\mid L^{1,2}_{A}(\Omega)\|. \]

These Sobolev spaces are closely connected with quasiconformal mappings. Recall that a homeomorphism $\varphi: \Omega\to \Omega'$, where $\Omega,\, \Omega'\subset\mathbb C$, is called a $K$-quasiconformal mapping if $\varphi\in W^{1,2}_{\loc}(\Omega)$ and there exists a constant $1\leq K<\infty$ such that $$ |D\varphi(z)|^2\leq K |J(z,\varphi)|\,\,\text{for almost all}\,\,z\in\Omega. $$ An important subclass of quasiconformal mappings is the class of bi-Lipschitz mappings. Recall that a homeomorphism $\varphi: \Omega\to \Omega'$ is said to be $L$-bi-Lipschitz if it satisfies the double inequality \begin{equation}\label{Bi-Lip} \frac{1}{L}|z-z'| \leq |\varphi(z)- \varphi(z')| \leq L|z-z'|, \end{equation} whenever $z, z' \in \Omega$. The smallest $L \geq 1$ for which \eqref{Bi-Lip} holds is called the isometric distortion of $\varphi$. It is known (see, for example, \cite{VGR}) that each $L$-bi-Lipschitz mapping $\varphi$ is $L^2$-quasiconformal. Conversely, we have the following connection between quasiconformal and bi-Lipschitz mappings: \begin{lemma} Let $\varphi: \Omega\to \Omega'$ be a $K$-quasiconformal mapping such that $|J(z, \varphi)|=1$ for almost all $z \in \Omega$. Then $\varphi$ is locally $\sqrt{K}$-bi-Lipschitz a.e. in $\Omega$. \end{lemma} \begin{proof} Since $\varphi: \Omega\to \Omega'$ is a $K$-quasiconformal mapping, $\varphi$ is differentiable almost everywhere in $\Omega$ and $$ |D\varphi(z)|^2\leq K |J(z,\varphi)|\,\,\,\text{for almost all}\,\,\,z\in\Omega. $$ Because $|J(z, \varphi)|=1$ a.e. in $\Omega$, we obtain $$ \limsup\limits_{z'\to z}\frac{|\varphi(z)- \varphi(z')|}{|z-z'|}=|D\varphi(z)|\leq \sqrt{K}\,\,\, \text{for almost all}\,\,\, z\in\Omega. $$ Hence, $\varphi$ is locally $L$-Lipschitz a.e. in $\Omega$ with $L \leq \sqrt{K}$. On the other hand, it is known that the inverse mapping of $\varphi$ is again $K$-quasiconformal. So, $\varphi^{-1}$ is also locally $L$-Lipschitz a.e. in $\Omega'$ with $L \leq \sqrt{K}$. Hence, $\varphi$ is locally $\sqrt{K}$-bi-Lipschitz a.e. in $\Omega$. \end{proof}

Now we study a connection between composition operators on Sobolev spaces and $A$-quasiconformal mappings, which refines (in the case $n=2$) the corresponding assertion for quasiconformal mappings \cite{VG75}. \begin{theorem}\label{IsomSS} Let $\Omega,\Omega'$ be domains in $\mathbb C$. Then a homeomorphism $\varphi :\Omega \to \Omega'$ is an $A$-quasiconformal mapping if and only if $\varphi$ induces, by the composition rule $\varphi^{*}(f)=f \circ \varphi$, an isometry of the Sobolev spaces $L^{1,2}_A(\Omega)$ and $L^{1,2}(\Omega')$: \[ \|\varphi^{*}(f)\,|\,L^{1,2}_A(\Omega)\|=\|f\,|\,L^{1,2}(\Omega')\| \] for any $f \in L^{1,2}(\Omega')$. \end{theorem} \begin{proof} Sufficiency. We prove that if $\varphi :\Omega \to \Omega'$ is an $A$-quasiconformal mapping, then the composition operator \[ \varphi^{*}:L^{1,2}(\Omega') \to L^{1,2}_A(\Omega),\,\,\, \varphi^{*}(f)=f \circ \varphi \] is an isometry. Let $f \in L^{1,2}(\Omega')$ be a smooth function. Then the composition $g(z)=f \circ \varphi(z)$ is defined on $\Omega$ and is weakly differentiable almost everywhere in $\Omega$ \cite{VG75}. Let us check that $g(z)=f \circ \varphi(z)$ belongs to the Sobolev space $L^{1,2}_A(\Omega)$. By the chain rule \cite{VGR} we have \begin{multline*} \|g\,|\,L^{1,2}_A(\Omega)\| = \left(\iint\limits_{\Omega} \left\langle A(z) \nabla (f \circ \varphi(z)), \nabla (f \circ \varphi(z)) \right\rangle dxdy\right)^{\frac{1}{2}} \\ = \left(\iint\limits_{\Omega} |\nabla f|^{2}(\varphi(z)) |J(z,\varphi)| dxdy\right)^{\frac{1}{2}} \\ =\left(\iint\limits_{\Omega'} |\nabla f|^{2}(w) dudv\right)^{\frac{1}{2}} =\|f\,|\,L^{1,2}(\Omega')\|. \end{multline*} Let $f \in L^{1,2}(\Omega')$ be an arbitrary function. Then there exists a sequence $\{f_k\}$, $k=1,2,\ldots$, of smooth functions such that $f_k \in L^{1,2}(\Omega')$, $$ \lim\limits_{k\to\infty}\|f-f_k\mid L^{1,2}(\Omega')\|=0, $$ and $\{f_k\}$ converges to $f$ a.e. in $\Omega'$. Set $g_k=f_k\circ\varphi$, $k=1,2,\ldots\,$. Then $$ \|g_k-g_l\,|\,L^{1,2}_A(\Omega)\|=\|f_k-f_l\,|\,L^{1,2}(\Omega')\|,\,\,k,l\in\mathbb N, $$ and, because the sequence $\{f_k\}$ converges in $L^{1,2}(\Omega')$, the sequence $\{g_k\}$ converges in $L^{1,2}_A(\Omega)$. Note that quasiconformal mappings possess the $N^{-1}$-Luzin property; this means that the preimage of a set of measure zero has measure zero. So, the sequence $g_k=f_k\circ\varphi$ converges to $g=f\circ\varphi$ a.e. in $\Omega$ and hence in $L^{1,2}_A(\Omega)$. Therefore \[ \|\varphi^{*}(f)\,|\,L^{1,2}_A(\Omega)\|=\|f\,|\,L^{1,2}(\Omega')\| \] for any $f \in L^{1,2}(\Omega')$.

Necessity. Suppose that the composition operator \[ \varphi^{*}:L^{1,2}(\Omega') \to L^{1,2}_A(\Omega) \] is an isometry, i.e., \begin{equation} \label{eqA} \iint\limits_{\Omega} \left\langle A(z) \nabla (f \circ \varphi(z)), \nabla (f \circ \varphi(z)) \right\rangle dxdy = \iint\limits_{\Omega'} |\nabla f|^{2}(w) dudv. \end{equation} Because the matrix $A$ satisfies the uniform ellipticity condition (\ref{UEC}), by \eqref{eqA} we have \begin{multline*} \frac{1}{K}\iint\limits_{\Omega} |\nabla (f \circ \varphi(z))|^2~dxdy\leq \iint\limits_{\Omega} \left\langle A(z) \nabla (f \circ \varphi(z)), \nabla (f \circ \varphi(z)) \right\rangle dxdy\\ = \iint\limits_{\Omega'} |\nabla f|^{2}(w) dudv. \end{multline*}
Hence the following inequality $$ \left(\iint\limits_{\Omega} |\nabla (f \circ \varphi(z))|^2~dxdy\right)^{\frac{1}{2}} \leq K^{\frac{1}{2}} \left(\iint\limits_{\Omega'} |\nabla f|^{2}(w)~dudv\right)^{\frac{1}{2}} $$ holds for any $f\in L^{1,2}(\Omega')$. So, by \cite{VG75} we can conclude that the mapping $\varphi :\Omega \to \Omega'$ is a $K$-quasiconformal mapping. Hence, by \cite{Ahl66}, $\varphi$ is a solution of the Beltrami equation \begin{equation}\label{Beltr} \varphi_{\overline{z}}(z)=\nu(z)\varphi_{z}(z),\,\,\, \text{a.e. in}\,\,\, \Omega \end{equation} with some complex dilatation $\nu(z)$, $|\nu(z)|<1$ a.e. in $\Omega$. Now we consider the matrix $B$ generated by the complex dilatation $\nu(z)$: \[ B(z)=\begin{pmatrix} \frac{|1-\nu|^2}{1-|\nu|^2} & \frac{-2 \Imag \nu}{1-|\nu|^2} \\ \frac{-2 \Imag \nu}{1-|\nu|^2} & \frac{|1+ \nu|^2}{1-|\nu|^2} \end{pmatrix},\,\,\, \text{a.e. in}\,\,\, \Omega. \] Then $\varphi$ is a $B$-quasiconformal mapping. Because $\varphi$ is defined by (\ref{Beltr}), we finally have \begin{equation} \label{eqB} \iint\limits_{\Omega} \left\langle B(z) \nabla (f \circ \varphi(z)), \nabla (f \circ \varphi(z)) \right\rangle dxdy = \iint\limits_{\Omega'} |\nabla f|^{2}(w) dudv \end{equation} for any $f \in L^{1,2}(\Omega')$. Now, using the equalities (\ref{eqA}) and (\ref{eqB}), we obtain \[ \iint\limits_{\Omega} \left\langle A(z) \nabla g(z), \nabla g(z) \right\rangle dxdy = \iint\limits_{\Omega} \left\langle B(z) \nabla g(z), \nabla g(z) \right\rangle dxdy \] for any $g \in L^{1,2}_{A}(\Omega)$. This means that the Hilbert spaces $W^{1,2}_A(\Omega)$ and $W^{1,2}_B(\Omega)$ coincide. Therefore $A=B$ and $\mu=\nu$ a.e. in $\Omega$. \end{proof}

Next, we establish the following property of $A$-quasiconformal mappings. \begin{lemma} Let $\varphi : \Omega \to \mathbb D$ be an $A$-quasiconformal mapping. Then the inverse mapping $\psi=\varphi^{-1} : \mathbb D \to \Omega$ is $A^{-1}$-quasiconformal. \end{lemma} \begin{proof} Let $\varphi : \Omega \to \mathbb D$ be an $A$-quasiconformal mapping with the matrix $A$ defined by the formula \eqref{Matrix-F}, i.e. $$ A(z)= \begin{pmatrix} \frac{|1-\mu(z)|^2}{1-|\mu(z)|^2} & \frac{-2 \Imag \mu(z)}{1-|\mu(z)|^2} \\ \frac{-2 \Imag \mu(z)}{1-|\mu(z)|^2} & \frac{|1+\mu(z)|^2}{1-|\mu(z)|^2} \end{pmatrix},\,\,\, \text{a.e. in}\,\,\, \Omega. $$ By \cite{Ahl66} it is known that the complex dilatation of the inverse mapping $\varphi^{-1} : \mathbb D \to \Omega$ satisfies \[ \mu_{\varphi^{-1}}(w)=-\nu_{\varphi} \circ \varphi^{-1}(w)\,\,\,\text{for almost all}\,\,\,w\in \mathbb D, \] where \[ \nu_{\varphi}=\frac{\varphi_{\overline{z}}}{\overline{\varphi_z}}=\left(\frac{\varphi_z}{|\varphi_z|}\right)^2\mu_{\varphi},\,\,\, \text{a.e. in}\,\,\, \Omega, \] is called the second complex dilatation of $\varphi$. Hence, the matrix $B$ induced by the complex dilatation $\mu_{\varphi^{-1}}$ of the inverse mapping $\varphi^{-1}$ has the form $$ B(w)= \begin{pmatrix} \frac{|1+\nu_{\varphi} \circ \varphi^{-1}|^2}{1-|\nu_{\varphi} \circ \varphi^{-1}|^2} & \frac{2 \Imag (\nu_{\varphi} \circ \varphi^{-1})}{1-|\nu_{\varphi} \circ \varphi^{-1}|^2} \\ \frac{2 \Imag (\nu_{\varphi} \circ \varphi^{-1})}{1-|\nu_{\varphi} \circ \varphi^{-1}|^2} & \frac{|1-\nu_{\varphi} \circ \varphi^{-1}|^2}{1-|\nu_{\varphi} \circ \varphi^{-1}|^2} \end{pmatrix}, \,\,\,\text{a.e. in}\,\,\,\mathbb D. $$ Because $\det B=1$, $|\mu_{\varphi}(z)|=|\mu_{\varphi^{-1}}(w)|$ and $\Imag \mu_{\varphi} = -\Imag (\nu_{\varphi} \circ \varphi^{-1})$ for almost all $z\in \Omega$ and almost all $w=\varphi(z)\in \mathbb D$, we have $$ A(z)B(\varphi(z))=I\,\,\,\text{for almost all}\,\,\, z\in \Omega. $$ Therefore we conclude that $B(w)=A^{-1}(\varphi^{-1}(w))$ for almost all $w\in \mathbb D$ and $A^{-1}(z)=B(\varphi(z))$ for almost all $z\in \Omega$. \end{proof}

\section{Weighted Sobolev-Poincar\'e inequalities}

Denote by $B_{r,2}(\mathbb D)$, $1<r<\infty$, the best constant in the (non-weighted) Sobolev-Poincar\'e inequality in the unit disc $\mathbb D$. The exact calculation of $B_{r,2}(\mathbb D)$, $r\ne 2$, is an open problem, and we use the upper estimate (see, for example, \cite{GT77,GU16}): $$ B_{r,2}(\mathbb D) \leq \left(2^{-1} \pi\right)^{\frac{2-r}{2r}}\left(r+2\right)^{\frac{r+2}{2r}}. $$ On the basis of Theorem~\ref{IsomSS} we prove a universal weighted Sobolev-Poincar\'e inequality which holds in any simply connected planar domain with non-empty boundary. Denote by $h(z) =|J(z,\varphi)|$ the quasihyperbolic weight defined by an $A$-quasiconformal mapping $\varphi : \Omega \to \mathbb D$. \begin{theorem}\label{Th4.1} Let $A$ belong to the class $M^{2 \times 2}(\Omega)$ and let $\Omega$ be a simply connected planar domain. Then for any function $f \in W^{1,2}_{A}(\Omega)$ the following weighted Sobolev-Poincar\'e inequality \[ \inf\limits_{c \in \mathbb R}\left(\iint\limits_\Omega |f(z)-c|^rh(z)dxdy\right)^{\frac{1}{r}} \leq B_{r,2}(h,A,\Omega) \left(\iint\limits_\Omega \left\langle A(z) \nabla f(z), \nabla f(z) \right\rangle dxdy\right)^{\frac{1}{2}} \] holds for any $r \geq 1$ with the constant $B_{r,2}(h,A,\Omega) = B_{r,2}(\mathbb D)$. \end{theorem} \begin{proof} Because $\Omega$ is a simply connected planar domain, there exists \cite{Ahl66} a quasiconformal homeomorphism $\varphi : \Omega \to \mathbb D$ with complex dilatation \begin{equation*} \mu(z)=\frac{a_{22}(z)-a_{11}(z)-2ia_{12}(z)}{\det(I+A(z))}, \end{equation*} i.e., an $A$-quasiconformal mapping. Hence by Theorem~\ref{IsomSS} the equality \begin{equation}\label{IN2.1} ||f \circ \varphi^{-1} \,|\, L^{1,2}(\mathbb D)|| = ||f \,|\, L^{1,2}_{A}(\Omega)|| \end{equation} holds for any function $f \in L^{1,2}_{A}(\Omega)$. Denote by $h(z):=|J(z,\varphi)|$ the quasihyperbolic weight in $\Omega$. Now, using the change of variable formula for quasiconformal mappings \cite{VGR}, the equality \eqref{IN2.1} and the classical Sobolev-Poincar\'e inequality for the unit disc $\mathbb D$ \cite{M} \begin{equation*}\label{IN2.3} \inf\limits_{c \in \mathbb R}\left(\iint\limits_{\mathbb D} |f \circ \varphi^{-1}(w)-c|^rdudv\right)^{\frac{1}{r}} \leq B_{r,2}(\mathbb D) \left(\iint\limits_{\mathbb D} |\nabla (f \circ \varphi^{-1}(w))|^2 dudv\right)^{\frac{1}{2}} \end{equation*} that holds for any $r \geq 1$, we obtain that for any smooth function $f\in L^{1,2}_{A}(\Omega)$ \begin{multline*} \inf\limits_{c \in \mathbb R}\left(\iint\limits_\Omega |f(z)-c|^rh(z)dxdy\right)^{\frac{1}{r}} = \inf\limits_{c \in \mathbb R}\left(\iint\limits_\Omega |f(z)-c|^r |J(z,\varphi)| dxdy\right)^{\frac{1}{r}} \\ = \inf\limits_{c \in \mathbb R}\left(\iint\limits_{\mathbb D} |f \circ \varphi^{-1}(w)-c|^rdudv\right)^{\frac{1}{r}} \leq B_{r,2}(\mathbb D) \left(\iint\limits_{\mathbb D} |\nabla (f \circ \varphi^{-1}(w))|^2 dudv\right)^{\frac{1}{2}} \\ = B_{r,2}(\mathbb D) \left(\iint\limits_{\Omega} \left\langle A(z) \nabla f(z), \nabla f(z) \right\rangle dxdy\right)^{\frac{1}{2}}. \end{multline*} Approximating an arbitrary function $f \in W^{1,2}_{A}(\Omega)$ by smooth functions, we finally obtain $$ \inf\limits_{c \in \mathbb R}\left(\iint\limits_\Omega |f(z)-c|^rh(z)dxdy\right)^{\frac{1}{r}} \leq B_{r,2}(h,A,\Omega) \left(\iint\limits_{\Omega} \left\langle A(z) \nabla f(z), \nabla f(z) \right\rangle dxdy\right)^{\frac{1}{2}}, $$ with the constant $$ B_{r,2}(h,A,\Omega)=B_{r,2}(\mathbb D) \leq \left(2^{-1} \pi\right)^{\frac{2-r}{2r}}\left(r+2\right)^{\frac{r+2}{2r}}. $$ \end{proof}

\section{Estimates of Sobolev-Poincar\'e constants}

In this section we consider (sharp) upper estimates of Sobolev-Poincar\'e constants in domains that satisfy the quasihyperbolic boundary condition. Recall that a domain $\Omega$ satisfies the $\gamma$-quasihyperbolic boundary condition \cite{KOT01,KOT02} with some $\gamma>0$ if the growth condition on the quasihyperbolic metric $$ k_{\Omega}(x_0,x)\leq \frac{1}{\gamma}\log\frac{\dist(x_0,\partial\Omega)}{\dist(x,\partial\Omega)}+C_0 $$ is satisfied for all $x\in\Omega$, where $x_0\in\Omega$ is a fixed base point and $C_0=C_0(x_0)<\infty$. This quasihyperbolic boundary condition is equivalent to the integrability of the Jacobians of the corresponding quasiconformal mappings with some exponent $\beta>1$. Let us reformulate the theorem about integrability of the Jacobians from \cite{AK} in a form convenient for our study. Firstly, recall that for quasiconformal mappings $\psi:\mathbb D\to\Omega$ the volume derivative $$ J_{\psi}(w):=\lim\limits_{r\to 0}\frac{|\psi(B(w,r))|}{|B(w,r)|}=|J(w, \psi)| $$ is defined for almost all $w\in\mathbb D$. \begin{theorem} \cite{AK} Let $\psi: \mathbb{D} \to \Omega$ be a quasiconformal mapping. Then $J_{\psi} \in L^{\beta}(\mathbb{D})$ for some $\beta>1$ if and only if $\Omega$ satisfies a $\gamma$-quasihyperbolic boundary condition for some $\gamma$. \end{theorem} Let us remark that the degree of integrability $\beta$ depends only on $\Omega$ and the quasiconformality coefficient $K(\psi)$.

Our main goal is a reduction of weighted Sobolev-Poincar\'e inequalities to non-weighted embedding theorems. This goal requires the exact value of the integrability exponent of the Jacobians. It leads us to the following new definition. Namely, a simply connected domain $\Omega \subset \mathbb C$ is called an $A$-quasiconformal $\beta$-regular domain, $\beta >1$, if $$ \iint\limits_\mathbb D |J(w, \varphi^{-1})|^{\beta}~dudv < \infty, $$ where $\varphi: \Omega\to\mathbb D$ is the corresponding $A$-quasiconformal mapping. Since (see, for example, \cite{Ahl66}) $A$-quasiconformal mappings $\varphi: \Omega\to\mathbb D$ are defined up to conformal automorphisms of $\mathbb D$, the property of quasiconformal $\beta$-regularity does not depend on the choice of $\varphi$ and depends only on the ``quasihyperbolic geometry'' of $\Omega$. Of course, any quasiconformal $\beta$-regular domain satisfies a $\gamma$-quasihyperbolic boundary condition for some $\gamma$, and the class of all quasiconformal regular domains coincides with the class of all domains satisfying a quasihyperbolic boundary condition. In \cite{GU14} it was proved that if $\Omega \subset \mathbb C$ is an $A$-quasiconformal $\beta$-regular domain, $\beta >1$, then $\Omega$ has a finite geodesic diameter. Hence, ``maze-like'' domains \cite{GPU18,KOT02} are not $A$-quasiconformal $\beta$-regular domains. Ahlfors domains \cite{Ahl66} (quasidiscs \cite{VGR}) represent an important subclass of $A$-quasi\-con\-for\-mal $\beta$-regular domains.
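As an illustration of the definition just given (an aside added for the reader; the map below is a standard toy example and not one of the domains considered in this paper), the finiteness of $\iint_{\mathbb D}|J(w,\varphi^{-1})|^{\beta}\,dudv$ can be tested numerically. For the radial stretch $\psi(w)=w|w|^{\alpha-1}$ of the unit disc onto itself, $0<\alpha<1$, which is $K$-quasiconformal with $K=1/\alpha$ and has Jacobian $|J(w,\psi)|=\alpha|w|^{2\alpha-2}$, the integral is finite exactly for $\beta<\frac{K}{K-1}$, in agreement with the sharp integrability exponent recalled in Section~6.
\begin{verbatim}
import numpy as np

alpha = 0.5                 # radial stretch psi(w) = w*|w|**(alpha-1), K = 1/alpha = 2
K = 1 / alpha
beta_critical = K / (K - 1)  # = 2 for this toy example

def jacobian_beta_integral(beta, n=200_000):
    # midpoint rule in polar coordinates for iint_D |J(w,psi)|**beta du dv,
    # using |J(w,psi)| = alpha*|w|**(2*alpha - 2)
    r = (np.arange(n) + 0.5) / n
    return 2 * np.pi * np.mean((alpha * r ** (2 * alpha - 2)) ** beta * r)

for beta in (1.2, 1.5, 1.9, 2.1):
    print(beta, jacobian_beta_integral(beta))
# The first three values stabilise as n grows (beta < beta_critical), while the
# value for beta = 2.1 keeps increasing with n, reflecting the divergence.
\end{verbatim}
A smooth bounded domain is of course $\beta$-regular for every $\beta$; the point of the toy computation is only to show how the critical exponent $K/(K-1)$ enters.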
Moreover, in these domains (i.e., in Ahlfors domains) spectral estimates can be specified in terms of the ``quasiconformal geometry'' of domains (Section~6). The following theorem represents a Sobolev type embedding theorem with estimates of the norm of the embedding operator in quasiconformal regular domains. \begin{theorem}\label{Th4.3} Let $A$ belong to the class $M^{2 \times 2}(\Omega)$ and let $\Omega$ be an $A$-quasiconformal $\beta$-regular domain. Then: \begin{enumerate} \item the embedding operator \[ i_{\Omega}:W^{1,2}_{A}(\Omega) \hookrightarrow L^s(\Omega) \] is compact for any $s \geq 1$; \item for any function $f \in W^{1,2}_{A}(\Omega)$ and for any $s \geq 1$, the Sobolev-Poincar\'e inequality \[ \inf\limits_{c \in \mathbb R}\|f-c\mid L^s(\Omega)\| \leq B_{s,2}(A,\Omega) \|f\mid L^{1,2}_{A}(\Omega)\| \] holds with the constant $$ B_{s,2}(A,\Omega) \leq B_{\frac{\beta s}{\beta-1},2}(\mathbb D) \|J_{\varphi^{-1}}\mid L^{\beta}(\mathbb D)\|^{\frac{1}{s}}, $$ where $J_{\varphi^{-1}}$ is the Jacobian of the quasiconformal mapping $\varphi^{-1}:\mathbb D\to\Omega$. \end{enumerate} \end{theorem} \begin{proof} Let $s \geq 1$. Since $\Omega$ is an $A$-quasiconformal $\beta$-regular domain, there exists an $A$-quasiconformal mapping $\varphi: \Omega \to \mathbb D$ satisfying the $\beta$-regularity condition: \[ \iint\limits_\mathbb D \big|J(w,\varphi^{-1})\big|^{\beta}~dudv < \infty. \] Hence \cite{VU04} the composition operator on Lebesgue spaces \[ \varphi^{*}:L^r(\mathbb D) \to L^s(\Omega) \] is bounded for $r/(r-s)=\beta$, i.e., for $r=\beta s/(\beta -1)$. By Theorem~\ref{IsomSS}, $A$-quasiconformal mappings $\varphi: \Omega \to \mathbb D$ generate a bounded composition operator on seminormed Sobolev spaces \[ (\varphi^{-1})^{*}:L^{1,2}_{A}(\Omega) \to L^{1,2}(\mathbb D). \] Because the matrix $A$ satisfies the uniform ellipticity condition~\eqref{UEC}, the norm of the Sobolev space $W^{1,2}_{A}(\Omega)$ is equivalent to the norm of the Sobolev space $W^{1,2}(\Omega)$, and by \cite{GPU19} we obtain that the composition operator on normed Sobolev spaces \[ (\varphi^{-1})^{*}:W^{1,2}_{A}(\Omega) \to W^{1,2}(\mathbb D),\,\,\, (\varphi^{-1})^{*}(f)=f \circ \varphi^{-1}, \] is bounded. Therefore, according to the ``transfer'' diagram \cite{GG94}, we obtain that the embedding operator \[ i_{\Omega}:W^{1,2}_{A}(\Omega) \hookrightarrow L^s(\Omega) \] is compact as a composition of three operators: the bounded composition operator on Sobolev spaces $(\varphi^{-1})^{*}:W^{1,2}_{A}(\Omega) \to W^{1,2}(\mathbb D)$, the compact embedding operator \[ i_{\mathbb D}:W^{1,2}(\mathbb D) \hookrightarrow L^r(\mathbb D), \] and the bounded composition operator on Lebesgue spaces $\varphi^{*}:L^r(\mathbb D) \to L^s(\Omega)$. Let $s=\frac{\beta -1}{\beta}r$. Then by \cite{GPU19} the inequality \begin{equation} \label{Weight} ||f\,|\,L^s(\Omega)|| \leq \left(\iint\limits_\mathbb D \big|J(w,\varphi^{-1})\big|^{\beta}~dudv \right)^{{\frac{1}{\beta}} \cdot \frac{1}{s}} ||f\,|\,L^r(\Omega,h)|| \end{equation} holds for any function $f\in L^{r}(\Omega,h)$. Using Theorem \ref{Th4.1} and inequality~\eqref{Weight} we have \begin{multline*} \inf_{c \in \mathbb R} \left(\iint\limits_{\Omega} |f(z)-c|^s dxdy\right)^{\frac{1}{s}} \\ {} \leq \left(\iint\limits_\mathbb D \big|J(w,\varphi^{-1})\big|^{\beta}dudv \right)^{{\frac{1}{\beta}} \cdot \frac{1}{s}} \inf_{c \in \mathbb R} \left(\iint\limits_{\Omega} |f(z)-c|^r h(z)dxdy\right)^{\frac{1}{r}} \\ {} \leq B_{r,2}(\mathbb D) \left(\iint\limits_\mathbb D \big|J(w,\varphi^{-1})\big|^{\beta}dudv \right)^{{\frac{1}{\beta}} \cdot \frac{1}{s}} \left(\iint\limits_\Omega \left\langle A(z) \nabla f(z), \nabla f(z) \right\rangle dxdy\right)^{\frac{1}{2}} \end{multline*} for $s\geq 1$. \end{proof}

The following theorem gives compactness of the embedding operator in the limit case $\beta = \infty$: \begin{theorem}\label{T4.5} Let $A$ belong to the class $M^{2 \times 2}(\Omega)$ and let $\Omega$ be an $A$-quasiconformal $\infty$-regular domain. Then: \begin{enumerate} \item The embedding operator \[ i_{\Omega}:W^{1,2}_{A}(\Omega) \hookrightarrow L^2(\Omega) \] is compact. \item For any function $f \in W^{1,2}_{A}(\Omega)$, the Poincar\'e--Sobolev inequality \[ \inf\limits_{c \in \mathbb R}\|f-c\mid L^2(\Omega)\| \leq B_{2,2}(A,\Omega) \|f\mid L^{1,2}_{A}(\Omega)\| \] holds with the constant $B_{2,2}(A,\Omega) \leq B_{2,2}(\mathbb D) \big\|J_{\varphi^{-1}}\mid L^{\infty}(\mathbb D)\big\|^{\frac{1}{2}}$, where $J_{\varphi^{-1}}$ is the Jacobian of the quasiconformal mapping $\varphi^{-1}:\mathbb D\to\Omega$. \end{enumerate} \end{theorem} \begin{remark} The constant $B_{2,2}^2(\mathbb D)=1/\mu_1(\mathbb D)$, where $\mu_1(\mathbb D)=j'^2_{1,1}$ is the first non-trivial Neumann eigenvalue of the Laplace operator in the unit disc $\mathbb D\subset\mathbb C$. \end{remark} \begin{proof} Since $\Omega$ is an $A$-quasiconformal $\infty$-regular domain, there exists an $A$-quasiconformal mapping $\varphi: \Omega \to \mathbb D$ that generates a bounded composition operator \[ (\varphi^{-1})^{*}:L^{1,2}_{A}(\Omega) \to L^{1,2}(\mathbb D). \] Using the embedding $L^{1,2}(\mathbb D)\subset L^2(\mathbb D)$ (see, for example, \cite{M}), we obtain that the composition operator on normed Sobolev spaces \[ (\varphi^{-1})^{*}:W^{1,2}_{A}(\Omega) \to W^{1,2}(\mathbb D) \] is also bounded. Because $\Omega$ is an $A$-quasiconformal $\infty$-regular domain, the $A$-quasiconformal mapping $\varphi: \Omega \to \mathbb D$ satisfies the following condition: \[ \big\|J_{\varphi^{-1}}\mid L^{\infty}(\mathbb D)\big\|=\esssup\limits_{|w|<1}|J(w,\varphi^{-1})|<\infty, \] and we have that the composition operator $$ \varphi^{*}:L^2(\mathbb D) \to L^2(\Omega) $$ is bounded \cite{VU04}. Finally, note that in the unit disc $\mathbb D$ the embedding operator $$ i_{\mathbb D}:W^{1,2}(\mathbb D) \hookrightarrow L^2(\mathbb D) $$ is compact (see, for example, \cite{M}). Therefore the embedding operator $$ i_{\Omega}:W^{1,2}_A(\Omega)\to L^2(\Omega) $$ is compact as a composition of the bounded composition operators $\varphi^{*}$, $(\varphi^{-1})^{*}$ and the compact embedding operator $i_{\mathbb D}$. Let $f \in L^{2}(\Omega)$. Because quasiconformal mappings possess the Luzin $N$-property, we have $|J(z,\varphi)|^{-1}=|J(w,\varphi^{-1})|$ for almost all $z\in\Omega$ and almost all $w=\varphi(z)\in \mathbb D$.
Hence the following inequality holds: \begin{multline*} \inf\limits_{c \in \mathbb R}\left(\iint\limits_\Omega |f(z)-c|^2dxdy\right)^{\frac{1}{2}} = \inf\limits_{c \in \mathbb R}\left(\iint\limits_\Omega |f(z)-c|^2 |J(z,\varphi)|^{-1} |J(z,\varphi)|~dxdy\right)^{\frac{1}{2}} \\ {} \leq \big\|J_{\varphi^{-1}}\mid L^{\infty}(\mathbb D)\big\|^{\frac{1}{2}} \inf\limits_{c \in \mathbb R}\left(\iint\limits_\Omega |f(z)-c|^2 |J(z,\varphi)|~dxdy\right)^{\frac{1}{2}}. \end{multline*} By Theorem~\ref{Th4.1} we obtain \begin{multline*} \inf\limits_{c \in \mathbb R}\left(\iint\limits_\Omega |f(z)-c|^2dxdy\right)^{\frac{1}{2}} \leq \big\|J_{\varphi^{-1}}\mid L^{\infty}(\mathbb D)\big\|^{\frac{1}{2}} \inf\limits_{c \in \mathbb R}\left(\iint\limits_\mathbb D |g(w)-c|^2~dudv\right)^{\frac{1}{2}} \\ {} \leq B_{2,2}(\mathbb D) \big\|J_{\varphi^{-1}}\mid L^{\infty}(\mathbb D)\big\|^{\frac{1}{2}} \left(\iint\limits_\Omega \left\langle A(z) \nabla f(z), \nabla f(z) \right\rangle dxdy\right)^{\frac{1}{2}}, \end{multline*} for any $f\in L_A^{1,2}(\Omega)$. \end{proof}

\section{Eigenvalue problem for the Neumann divergence form elliptic operator}

We consider the weak formulation of the Neumann eigenvalue problem~(\ref{EllDivOper}): \begin{equation}\label{WFWEP} \iint\limits_\Omega \left\langle A(z)\nabla f(z), \nabla \overline{g(z)} \right\rangle dxdy = \mu \iint\limits_\Omega f(z)\overline{g(z)}~dxdy, \,\,\, \forall g\in W_{A}^{1,2}(\Omega). \end{equation} By the Min--Max Principle (see, for example, \cite{Henr}) the first non-trivial Neumann eigenvalue $\mu_1(A,\Omega)$ of the divergence form elliptic operator $L_{A}=-\textrm{div} [A(z) \nabla f(z)]$ can be characterized as $$ \mu_1(A,\Omega)=\min\left\{\frac{\|f \mid L^{1,2}_{A}(\Omega)\|^2}{\|f \mid L^{2}(\Omega)\|^2}: f \in W^{1,2}_{A}(\Omega) \setminus \{0\},\,\, \iint\limits _{\Omega}f\, dxdy=0 \right\}. $$ Hence $\mu_1(A,\Omega)^{-\frac{1}{2}}$ is the best constant $B_{2,2}(A,\Omega)$ in the following Poincar\'e inequality $$ \inf\limits _{c \in \mathbb R} \|f-c \mid L^2(\Omega)\| \leq B_{2,2}(A,\Omega) \|f \mid L^{1,2}_{A}(\Omega)\|, \quad f \in W^{1,2}_{A}(\Omega). $$ \begin{theorem}\label{Th5.1} Let $A$ belong to the class $M^{2 \times 2}(\Omega)$ and let $\Omega$ be an $A$-quasiconformal $\beta$-regular domain. Then the spectrum of the Neumann divergence form elliptic operator $L_{A}$ in $\Omega$ is discrete and can be written in the form of a non-decreasing sequence \[ 0=\mu_0(A,\Omega)<\mu_1(A,\Omega)\leq \mu_2(A,\Omega)\leq \ldots \leq \mu_n(A,\Omega)\leq \ldots , \] and \[ \frac{1}{\mu_1(A,\Omega)} \leq B^2_{\frac{2\beta}{\beta -1},2}(\mathbb D) \|J_{\varphi^{-1}}\mid L^{\beta}(\mathbb D)\| \leq \frac{4}{\sqrt[\beta]{\pi}} \left(\frac{2\beta -1}{\beta -1}\right)^{\frac{2 \beta-1}{\beta}} \big\|J_{\varphi^{-1}}\mid L^{\beta}(\mathbb D)\big\|, \] where $J_{\varphi^{-1}}$ is the Jacobian of the quasiconformal mapping $\varphi^{-1}:\mathbb D\to\Omega$. \end{theorem} \begin{proof} By Theorem~\ref{Th4.3} in the case $s=2$, the embedding operator $$ i_{\Omega}:W^{1,2}_{A}(\Omega)\to L^2(\Omega) $$ is compact. Hence the spectrum of the Neumann divergence form elliptic operator $L_{A}$ is discrete and can be written in the form of a non-decreasing sequence \[ 0=\mu_0(A,\Omega)<\mu_1(A,\Omega)\leq \mu_2(A,\Omega)\leq \ldots \leq \mu_n(A,\Omega)\leq \ldots . \] By the Min-Max principle and Theorem~\ref{Th4.3} we have \[ \inf_{c \in \mathbb R} \left(\iint\limits_{\Omega} |f(z)-c|^2 dxdy\right) \leq B^2_{2,2}(A,\Omega) \iint\limits_\Omega \left\langle A(z) \nabla f(z), \nabla f(z) \right\rangle dxdy, \] where \[ B_{2,2}(A,\Omega) \leq B_{r,2}(\mathbb D) \left(\iint\limits_\mathbb D \big|J(w,\varphi^{-1})\big|^{\beta}~dudv \right)^{{\frac{1}{2\beta}}}. \] Hence \[ \frac{1}{\mu_1(A,\Omega)} \leq B^2_{r,2}(\mathbb D) \left(\iint\limits_\mathbb D \big|J(w,\varphi^{-1})\big|^{\beta}~dudv \right)^{{\frac{1}{\beta}}}. \] Using the upper estimate of the $(r,2)$-Poincar\'e constant in the unit disc (see, for example, \cite{GT77,GU16}) \[ B_{r,2}(\mathbb D) \leq \left(2^{-1} \pi\right)^{\frac{2-r}{2r}}\left(r+2\right)^{\frac{r+2}{2r}}, \] where, by Theorem~\ref{Th4.3}, $r=2\beta /(\beta -1)$, we obtain \[ \frac{1}{\mu_1(A,\Omega)} \leq \frac{4}{\sqrt[\beta]{\pi}} \left(\frac{2\beta -1}{\beta -1}\right)^{\frac{2 \beta-1}{\beta}} \big\|J_{\varphi^{-1}}\mid L^{\beta}(\mathbb D)\big\|. \] \end{proof}

In the case of $A$-quasiconformal $\infty$-regular domains we have: \begin{theorem}\label{T4.7} Let $A$ belong to the class $M^{2 \times 2}(\Omega)$ and let $\Omega$ be an $A$-quasiconformal $\infty$-regular domain. Then the spectrum of the Neumann divergence form elliptic operator $L_{A}$ in $\Omega$ is discrete and can be written in the form of a non-decreasing sequence \[ 0=\mu_0(A,\Omega)<\mu_1(A,\Omega)\leq \mu_2(A,\Omega)\leq \ldots \leq \mu_n(A,\Omega)\leq \ldots , \] and \begin{equation} \frac{1}{\mu_1(A,\Omega)} \leq B^2_{2,2}(\mathbb D) \big\|J_{\varphi^{-1}}\mid L^{\infty}(\mathbb D)\big\| = \frac{\big\|J_{\varphi^{-1}}\mid L^{\infty}(\mathbb D)\big\|}{(j'_{1,1})^2}, \end{equation} where $j'_{1,1}\approx 1.84118$ denotes the first positive zero of the derivative of the Bessel function $J_1$, and $J_{\varphi^{-1}}$ is the Jacobian of the quasiconformal mapping $\varphi^{-1}:\mathbb D\to\Omega$. \end{theorem}

As applications of Theorem~\ref{T4.7} we consider some examples. \begin{example} \label{example1} The homeomorphism \[ \varphi(z)= \frac{a}{a^2-b^2}z- \frac{b}{a^2-b^2} \overline{z}, \quad z=x+iy, \quad a>b\geq 0, \] is $A$-quasiconformal and maps the interior of the ellipse $$ \Omega_e= \left\{(x,y) \in \mathbb R^2: \frac{x^2}{(a+b)^2}+\frac{y^2}{(a-b)^2}=1\right\} $$ onto the unit disc $\mathbb D$. The mapping $\varphi$ satisfies the Beltrami equation with \[ \mu(z)=\frac{\varphi_{\overline{z}}}{\varphi_{z}}=-\frac{b}{a} \] and the Jacobian $J(z,\varphi)=|\varphi_{z}|^2-|\varphi_{\overline{z}}|^2=1/(a^2-b^2)$. It is easy to verify that $\mu$ induces, by formula \eqref{Matrix-F}, the matrix function $A(z)$ of the form $$ A(z)=\begin{pmatrix} \frac{a+b}{a-b} & 0 \\ 0 & \frac{a-b}{a+b} \end{pmatrix}. $$ Note that $|J(w,\varphi^{-1})|=|J(z,\varphi)|^{-1}=a^2-b^2$. Then by Theorem~\ref{T4.7} we have $$ \frac{1}{\mu_1(A,\Omega_e)} \leq \frac{1}{(j'_{1,1})^2} \esssup\limits_{|w|<1}|J(w,\varphi^{-1})| = \frac{a^2-b^2}{(j'_{1,1})^2}. $$ The classical estimate \eqref{PW} combined with the uniform ellipticity condition gives $$ \mu_1(A,\Omega_e) \geq \frac{\pi^2}{4(a+b)^2} \frac{a-b}{a+b}, $$ and we have that $$ \frac{\pi^2}{4(a+b)^2} \frac{a-b}{a+b}< \frac{(j'_{1,1})^2}{a^2-b^2}. $$ \end{example}

\begin{example} The homeomorphism \[ \varphi(z)= \frac{z^{\frac{3}{2}}}{\sqrt{2} \cdot \overline{z}^{\frac{1}{2}}}-1,\,\, \varphi(0)=-1, \quad z=x+iy, \] is $A$-quasiconformal and maps the interior of the ``rose petal'' $$ \Omega_p:=\left\{(\rho, \theta) \in \mathbb R^2:\rho=2\sqrt{2}\cos(2 \theta), \quad -\frac{\pi}{4} \leq \theta \leq \frac{\pi}{4}\right\} $$ onto the unit disc $\mathbb D$. The mapping $\varphi$ satisfies the Beltrami equation with \[ \mu(z)=\frac{\varphi_{\overline{z}}}{\varphi_{z}}=-\frac{1}{3}\frac{z}{\overline{z}} \] and the Jacobian $J(z,\varphi)=|\varphi_{z}|^2-|\varphi_{\overline{z}}|^2=1$. We see that $\mu$ induces, by formula \eqref{Matrix-F}, the matrix function $A(z)$ of the form $$ A(z)=\begin{pmatrix} \frac{|3\overline{z}+z|^2}{8|\overline{z}|^2} & \frac{3}{4}\Imag \frac{z}{\overline{z}} \\ \frac{3}{4}\Imag \frac{z}{\overline{z}} & \frac{|3\overline{z}-z|^2}{8|\overline{z}|^2} \end{pmatrix}. $$ Note that $|J(w,\varphi^{-1})|=|J(z,\varphi)|^{-1}=1$. Then by Theorem~\ref{T4.7} we have $$ \frac{1}{\mu_1(A,\Omega_p)} \leq \frac{1}{(j'_{1,1})^2} \esssup\limits_{|w|<1}|J(w,\varphi^{-1})| = \frac{1}{(j'_{1,1})^2}. $$ The classical estimate \eqref{PW} combined with the uniform ellipticity condition gives $$ \mu_1(A,\Omega_p) \geq \left(\frac{\pi}{4}\right)^2, $$ and we have that $$ \left(\frac{\pi}{4}\right)^2<(j'_{1,1})^2 \quad \text{or} \quad \frac{\pi}{4}<j'_{1,1}. $$ \end{example}

\begin{example} The homeomorphism \[ \varphi(z)= \frac{2 \cdot z^{\frac{3}{8}}}{\overline{z}^{\frac{1}{8}}}-1,\,\, \varphi(0)=-1, \quad z=x+iy, \] is $A$-quasiconformal and maps the interior of the non-convex domain $$ \Omega_c:=\left\{(\rho, \theta) \in \mathbb R^2:\rho=\cos^{4}\left(\frac{\theta}{2}\right), \quad - \pi \leq \theta \leq \pi\right\} $$ onto the unit disc $\mathbb D$. The mapping $\varphi$ satisfies the Beltrami equation with \[ \mu(z)=\frac{\varphi_{\overline{z}}}{\varphi_{z}}=-\frac{1}{3}\frac{z}{\overline{z}} \] and the Jacobian $$J(z,\varphi)=|\varphi_{z}|^2-|\varphi_{\overline{z}}|^2=\frac{1}{2\cdot |z|^{\frac{3}{2}}}. $$ We see that $\mu$ induces, by formula \eqref{Matrix-F}, the matrix function $A(z)$ of the form $$ A(z)=\begin{pmatrix} \frac{|3\overline{z}+z|^2}{8|\overline{z}|^2} & \frac{3}{4}\Imag \frac{z}{\overline{z}} \\ \frac{3}{4}\Imag \frac{z}{\overline{z}} & \frac{|3\overline{z}-z|^2}{8|\overline{z}|^2} \end{pmatrix}. $$ Note that $|J(w,\varphi^{-1})|=|J(z,\varphi)|^{-1}=2\cdot |z|^{\frac{3}{2}}$. Then by Theorem~\ref{T4.7} we have $$ \frac{1}{\mu_1(A,\Omega_c)} \leq \frac{1}{(j'_{1,1})^2} \esssup\limits_{|w|<1}|J(w,\varphi^{-1})| \leq \frac{2}{(j'_{1,1})^2}. $$ \end{example}

\section{Spectral estimates in quasidiscs}

In this section we refine Theorem~\ref{Th5.1} for Ahlfors-type domains (i.e., quasidiscs) using the weak inverse H\"older inequality and the sharp estimates of the constants in doubling conditions for measures generated by Jacobians of quasiconformal mappings \cite{GPU17_2}. Recall that a domain $\Omega$ is called a $K$-quasidisc if it is the image of the unit disc $\mathbb D$ under a $K$-quasicon\-for\-mal homeomorphism of the plane onto itself. A domain $\Omega$ is a quasidisc if it is a $K$-quasidisc for some $K \geq 1$. According to \cite{GH01}, the boundary of any $K$-quasidisc $\Omega$ admits a $K^{2}$-quasi\-con\-for\-mal reflection and thus, for example, any quasiconformal homeomorphism $\psi:\mathbb{D}\to\Omega$ can be extended to a $K^{2}$-quasiconformal homeomorphism of the whole plane to itself.
Recall that for any planar $K$-quasiconformal homeomorphism $\psi:\Omega\rightarrow \Omega'$ the following sharp result is known: $J(w,\psi)\in L^p_{\loc}(\Omega)$ for any $1 \leq p<\frac{K}{K-1}$ (\cite{Ast,G81}). In \cite{GPU17_2} was proved but not formulated the result concerning an estimate of the constant in the inverse H\"older inequality for Jacobians of quasiconformal mappings. \vskip 0.2cm \begin{theorem} \label{thm:IHIN} Let $\psi:\mathbb R^2 \to \mathbb R^2$ be a $K$-quasiconformal mapping. Then for every disc $\mathbb D \subset \mathbb R^2$ and for any $1<\kappa<\frac{K}{K-1}$ the inverse H\"older inequality \begin{equation*}\label{RHJ} \left(\iint\limits_{\mathbb D} |J(w,\psi)|^{\kappa}~dudv \right)^{\frac{1}{\kappa}} \leq \frac{C_\kappa^2 K \pi^{\frac{1}{\kappa}-1}}{4} \exp\left\{{\frac{K \pi^2(2+ \pi^2)^2}{2\log3}}\right\}\iint\limits_{\mathbb D} |J(w,\psi)|~dudv \end{equation*} holds. Here $$ C_\kappa=\frac{10^{6}}{[(2\kappa -1)(1- \nu)]^{1/2\kappa}}, \quad \nu = 10^{8 \kappa}\frac{2\kappa -2}{2\kappa -1}(24\pi^2K)^{2\kappa}<1. $$ \end{theorem} If $\Omega$ is a $K$-quasidisc, then given the previous theorem and that a quasiconformal mapping $\psi:\mathbb{D}\to\Omega$ allows $K^2$-quasiconformal reflection \cite{Ahl66, GH01}, we obtain the following assertion. \begin{corollary}\label{Est_Der} Let $\Omega\subset\mathbb R^2$ be a $K$-quasidisc and $\varphi:\Omega \to \mathbb D$ be an $A$-quasiconformal mapping. Assume that $1<\kappa<\frac{K}{K-1}$. Then \begin{equation*}\label{Ineq_2} \left(\iint\limits_{\mathbb D} |J(w,\varphi^{-1})|^{\kappa}~dudv \right)^{\frac{1}{\kappa}} \leq \frac{C_\kappa^2 K^2 \pi^{\frac{1}{\kappa}-1}}{4} \exp\left\{{\frac{K^2 \pi^2(2+ \pi^2)^2}{2\log3}}\right\}\cdot |\Omega|. \end{equation*} where $$ C_\kappa=\frac{10^{6}}{[(2\kappa -1)(1- \nu)]^{1/2\kappa}}, \quad \nu = 10^{8 \kappa}\frac{2\kappa -2}{2\kappa -1}(24\pi^2K^2)^{2\kappa}<1. $$ \end{corollary} Combining Theorem~\ref{Th5.1} and Corollary~\ref{Est_Der} we obtain spectral estimates of linear elliptic operators in divergence form with Neumann boundary conditions in Ahlfors-type domains. \begin{theorem}\label{Quasidisk} Let $\Omega$ be a $K$-quasidisc. Then \begin{equation*} \mu_1(A,\Omega) \geq \frac{M(K)}{|\Omega|}=\frac{M^{*}(K)}{R^2_{*}}, \end{equation*} where $R_{*}$ is a radius of a disc $\Omega^{*}$ of the same area as $\Omega$ and $M^{*}(K)=M(K)\pi^{-1}$. \end{theorem} The quantity $M(K)$ depends only on a quasiconformality coefficient K of $\Omega$: \[ M(K):= \frac{\pi}{K^2} \exp\left\{{-\frac{K^2 \pi^2(2+ \pi^2)^2}{2\log3}}\right\} \inf\limits_{1< \beta <\beta^{*}} \Biggl\{ \left(\frac{2\beta -1}{\beta -1}\right)^{-\frac{2 \beta-1}{\beta}} C^{-2}_{\beta} \Biggr\}, \] \[ C_\beta=\frac{10^{6}}{[(2\beta -1)(1- \nu(\beta))]^{1/2\beta}}, \] where $\beta^{*}=\min{\left(\frac{K}{K-1}, \widetilde{\beta}\right)}$, and $\widetilde{\beta}$ is the unique solution of the equation $$\nu(\beta):=10^{8 \beta}\frac{2\beta -2}{2\beta -1}(24\pi^2K^2)^{2\beta}=1. $$ The function $\nu(\beta)$ is a monotone increasing function. Hence for any $\beta < \beta^{*}$ the number $(1- \nu(\beta))>0$ and $C_\beta > 0$. \begin{proof} Given that, for $K\geq 1$, $K$-quasidiscs are $A$-quasiconformal $\beta$-regular domains if $1<\beta<\frac{K}{K-1}$. 
Therefore, by Theorem~\ref{Th5.1}, for $1<\beta<\frac{K}{K-1}$ we have \begin{equation}\label{Inequal_1} \frac{1}{\mu_1(A,\Omega)} \leq \frac{4}{\sqrt[\beta]{\pi}} \left(\frac{2\beta -1}{\beta -1}\right)^{\frac{2 \beta-1}{\beta}} \big\|J_{\varphi^{-1}}\mid L^{\beta}(\mathbb D)\big\|. \end{equation} Now, using Corollary~\ref{Est_Der} we estimate the quantity $\|J_{\varphi^{-1}}\,|\,L^{\beta}(\mathbb D)\|$. Direct calculations yield \begin{multline}\label{Inequal_2} \|J_{\varphi^{-1}}\,|\,L^{\beta}(\mathbb D)\| = \left(\iint\limits_{\mathbb D} |J(w,\varphi^{-1})|^{\beta}~dudv \right)^{\frac{1}{\beta}} \\ \leq \frac{C^2_{\beta} K^2 \pi^{\frac{1-\beta}{\beta}}}{4} \exp\left\{{\frac{K^2 \pi^2(2+ \pi^2)^2}{2\log3}}\right\} \cdot |\Omega|. \end{multline} Finally, combining inequality \eqref{Inequal_1} with inequality \eqref{Inequal_2}, after some computations we obtain \[ \frac{1}{\mu_1(A,\Omega)} \leq \frac{C^2_{\beta} K^2}{\pi} \left(\frac{2\beta -1}{\beta -1}\right)^{\frac{2 \beta-1}{\beta}} \exp\left\{{\frac{K^2 \pi^2(2+ \pi^2)^2}{2\log3}}\right\} \cdot |\Omega|. \] \end{proof} Let $\varphi:\Omega \to \Omega'$ be a quasiconformal mapping. We note that there exist so-called volume-preserving maps, i.e. maps with $|J(z,\varphi)|=1$, $z \in \Omega$. Examples of such maps were considered in the previous section. Now we construct further examples of such maps. Let $f$ be a Lipschitz function on $\mathbb R$, i.e. $f'\in L^{\infty}(\mathbb R)$. Then $\varphi(x,y)=(x+f(y),\,y)$ is a quasiconformal mapping with quasiconformality coefficient $K=\lambda/J_{\varphi}(x,y)$. Here $\lambda$ is the largest eigenvalue of the matrix $Q=DD^T$, where $D=D\varphi(x,y)$ is the Jacobi matrix of the mapping $\varphi=\varphi(x,y)$ and $J_{\varphi}(x,y)=\det D\varphi(x,y)$ is its Jacobian. It is easy to see that the Jacobi matrix corresponding to the mapping $\varphi=\varphi(x,y)$ has the form \[ D=\left(\begin{array}{cc} 1 & f'(y)\\ 0 & 1 \end{array}\right). \] A basic calculation implies $J_{\varphi}(x,y)=1$ and \[ \lambda=\left(1+\frac{\left(f'(y)\right)^{2}}{2}\right)\left(1+\sqrt{1-\frac{4}{\left(2+\left(f'(y)\right)^{2}\right)^{2}}}\right)\,. \] Therefore any such mapping $\varphi=\varphi(x,y)$ is a quasiconformal mapping of $\mathbb R^{2}$ onto $\mathbb R^{2}$ with $J_{\varphi}(x,y)=1$ and an arbitrarily large quasiconformality coefficient. We can use the restrictions $\varphi|_{\mathbb D}$ to the unit disc $\mathbb D$; the images can be rather exotic quasidiscs. If $a>0$, then the mappings $\varphi(x,y)=(ax+f(y),\,\frac{1}{a}y)$ have similar properties. In this way we obtain lower estimates of the first non-trivial Neumann eigenvalue of the divergence form elliptic operator $L_A$ in $A$-quasiconformal $\beta$-regular domains via the Sobolev-Poincar\'e constant for the unit disc $\mathbb D$. \vskip 0.2cm \textbf{Acknowledgements.} The first author was supported by the United States-Israel Binational Science Foundation (BSF Grant No. 2014055). \vskip 0.3cm \vskip 0.3cm Department of Mathematics, Ben-Gurion University of the Negev, P.O.Box 653, Beer Sheva, 8410501, Israel \emph{E-mail address:} \email{[email protected]} \\ Division for Mathematics and Computer Sciences, Tomsk Polytechnic University, 634050 Tomsk, Lenin Ave. 30, Russia; Regional Scientific and Educational Mathematical Center, Tomsk State University, 634050 Tomsk, Lenin Ave. 36, Russia \emph{E-mail address:} \email{[email protected]} \\ Department of Mathematics, Ben-Gurion University of the Negev, P.O.Box 653, Beer Sheva, 8410501, Israel \emph{E-mail address:} \email{[email protected]} \end{document}
\begin{document} \begin{spacing}{1.00} \thispagestyle{plain} \title{Option Pricing with Mixed L\'{e}vy Subordinated Models} \bigbreak \bigbreak \noindent \textbf{Abstract}\ \ \ \ \ It is essential to incorporate the impact of investor behavior when modeling the dynamics of asset returns. In this paper, we reconcile behavioral finance and rational finance by incorporating investor behavior within the framework of dynamic asset pricing theory. To include the views of investors, we employ the method of subordination, which has been proposed in the literature as a way of introducing business (intrinsic, market) time. We define a mixed L\'{e}vy subordinated model by adding a single subordinated L\'{e}vy process to the well-known log-normal model, resulting in a new log-price process. We apply the proposed models to study the behavioral finance notion of ``greed and fear'' disposition from the perspective of rational dynamic asset pricing theory. The greedy or fearful disposition of option traders is studied using the shape of the probability weighting function. We then derive the implied probability weighting function for the fear and greed disposition of option traders in comparison to spot traders. Our result shows the diminishing sensitivity of option traders. Diminishing sensitivity results in option traders overweighting the probability of big losses in comparison to spot traders. \\ \\ \noindent \textbf{Keywords}\ \ \ \ \ Rational dynamic asset pricing theory; behavioral finance; mixed subordinated L\'{e}vy process; probability weighting function.\\ \\ \noindent \textbf{JEL}\ \ \ \ \ C02, G10, G12, G13. \\ \doublespacing \section*{} Several studies provide empirical evidence that the behavior of investors has an impact on stock returns.\footnote{See, for example, \cite{Brown:2004}, \cite{Baker:2007}, and \cite{Long:1990}.} To obtain a more realistic log return pricing model, it is essential to incorporate investor behavior and investor sentiment. \cite{Shefrin:2005} combined two different normally distributed log returns to represent the views of the buyer and the seller when pricing options under a particular asset return model. The asset return model that he used, a mixture of normal distributions, is not infinitely divisible due to its underlying finite support. Thus, according to \cite{Black:1973} and \cite{Merton1973a}, this model would lead to arbitrage opportunities, making it inappropriate for pricing options. In rational finance, some researchers have modeled the price process by incorporating a subordinator process into the classical Black-Scholes-Merton (BSM) model. Subordination, i.e. a time change, is a technique for introducing additional parameters into the return model for the purpose of capturing the following features: (1) the asymmetry and leptokurtic behavior of asset return distributions, (2) the effect of investor behavior and investor sentiment on the market underlying price model, (3) time-varying volatility of asset returns, (4) regime switching in stock market returns, and (5) leverage effects. \cite{Mandelbrot:1967} and \cite{Clark:1973} applied the concept of time change to the Brownian motion process to obtain a more realistic speculative price process. \cite{Merton:1976} introduced a jump-diffusion model using a compound Poisson time-change L\'{e}vy process. Two decades later, \cite{Hurst:1997} applied various subordinated log return processes to model the well-documented heavy-tail phenomena exhibited by asset return distributions.
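To make the idea of subordination concrete, the following minimal sketch (ours, with arbitrary illustrative parameters rather than any calibrated values) simulates a Brownian motion run on an inverse-Gaussian business time, i.e.\ a time-changed process of the type discussed in the literature cited above.

\begin{verbatim}
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(0)
n_steps, dt = 252, 1.0 / 252             # one year of daily steps
lam = 5.0                                # illustrative IG shape parameter

# Positive IG increments play the role of the business-time (subordinator) V_t.
dV = invgauss.rvs(dt / lam, scale=lam, size=n_steps, random_state=rng)
# A Brownian motion evaluated in business time has increments N(0, dV).
dB_V = rng.normal(0.0, np.sqrt(dV))

V = np.cumsum(dV)                        # intrinsic (business) time
X = np.cumsum(dB_V)                      # subordinated Brownian motion B_{V_t}
print(V[-1], X[-1])
\end{verbatim}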
The views of investors can be incorporated into log return asset pricing models and option pricing models by introducing an intrinsic time process, which is referred to as a behavioral subordinator \citep[see][]{Shirvani:2019}. In this paper, we attempt to reconcile behavioral finance and rational finance by incorporating investor behavior into the framework of a dynamic asset pricing model. We extend the approach of \cite{Black:1973} and \cite{Merton1973a} by mixing a subordinated L\'{e}vy process with a Gaussian component to represent investor behavior. The price process -- referred to as a mixed L\'{e}vy subordinated market model (MLSM) -- is a mixture of a Brownian motion process and a subordinator process. The subordinator process is a pure jump L\'{e}vy process. We use the mean-correction martingale measure (MCMM) method to price options and show, using the MCMM, that our proposed pricing model is indeed arbitrage-free. Then, following \cite{Rachev:2017}, we define a Probability Weighting Function (PWF) consistent with dynamic asset pricing theory to quantify an option trader's greed and fear disposition. The choices of PWF in \cite{Rachev:2017} as well as in this paper guarantee that the pricing model is arbitrage-free. With the exception of \cite{Prelec:1998},\footnote{\cite{Prelec:1998}'s PWF maps the Gumbel distribution to another distribution. The Gumbel distribution, an infinitely divisible distribution, can be used as a model for asset pricing. Unfortunately, a pricing model with a Gumbel return distribution is too simplistic to capture the heavy-tailedness and asymmetry of asset returns.} all other PWFs known in the literature lead to a market model with arbitrage opportunities \cite[see][]{Rachev:2017}. To quantify an option trader's fear and greed disposition, we map the spot trader's cumulative distribution function (CDF) to another CDF corresponding to an option trader's views on the spot price for the option's underlying asset. In this way, we can study the fear and greed disposition of option traders using the shape of the implied PWF. Our result shows that the PWF of option traders has an inverse-S shape. This feature of the PWF is referred to by \cite{Tversky:1992} as \textit{diminishing sensitivity}. Diminishing sensitivity means that people become less sensitive to changes in probability as they move away from a reference point (see \cite{Gonzalez:1999} and \cite{Fox:1996}). In the probability domain, the two endpoints 0 and 1 serve as reference points. Thus, option traders are more sensitive to returns with a probability close to the reference points. Diminishing sensitivity results in the over-weighting of the reference points, i.e., ``big losses'' and ``big profits''. The PWF of option traders rises sharply near the left endpoint (events with probability close to zero), and rises steeply again near the right endpoint (events with probability close to one). This steepness indicates the fearfulness of option traders toward the market. Finally, it is worth mentioning that the slope of the PWF near the left endpoint, $0$, is steeper than near the right endpoint; this difference strongly suggests that significant losses are the main concern of option traders. \\ \indent There are two main contributions of this paper.
First, we introduce a new L\'{e}vy process for asset returns, in the form of a mixture of a geometric Brownian motion and a subordinated L\'{e}vy process, designed to describe (1) the view of the asset's spot price held by spot traders and (2) the view of the asset's spot price held by option traders. Second, we derive the implied PWF determining the fear and greed disposition of option traders in comparison to the spot price dynamics as viewed by spot traders. The paper is organized as follows. After introducing the MLSM, we present the equivalent martingale measure for pricing options. We first apply the option pricing formula for a mixed subordinated normal inverse Gaussian process, and then empirically estimate the model's parameters and investigate the distribution of the log return process. We then calibrate our model parameters to the observed prices of European call options on the SPDR S$\&$P 500 ETF (SPY), followed by an investigation of the investor's fear disposition using the implied PWF we obtain. \section*{\uppercase{option pricing for mixed subordinated L\'{e}vy process}} In this section we derive our option pricing model where the underlying asset price is driven by a mixed subordinated L\'{e}vy process.\footnote{For a general introduction to L\textrm{\'{e}}vy processes in finance, see \cite{Sato:1999}, \cite{Bertoin:1996}, \cite{Cont:2004}, \cite{Jacod:2003}, \cite{Carr:2004}, or \cite{Schoutens:2003}.} \subsection*{Dynamic Asset Pricing Model} Let $\mathcal{S}$ be a traded risky asset with price process $\mathbb{S}= (S_t,t\geq 0 )$ and log-price process $\mathbb{X}= (X_t=\ln S_t,t\geq 0 )$, which is a mixed subordinated L\'{e}vy process with an added Gaussian component \citep[see][Chapter 6]{Sato:1999}. The price and log price are defined as \begin{equation} \label{Eq1} S_t=S_0e^{X_t},\ t\geq 0,S_0>0 \end{equation} \begin{equation} \label{Eq2} X_t=\mu t+\varrho B_t+\sigma L_{V_t},t\ge 0,\ \ \ \mu \in R,\varrho \in R\setminus \{0\},\ \sigma \in R \end{equation} where $ \mathbb{B}= (B_t,t\geq 0 )$ is a standard Brownian motion, $\mathbb{L}= (L_t,t\geq 0,L_0 = 0 )$ is a pure jump L\'{e}vy process with $\mathbb{E}L_1=0$ and $\mathbb{E}L^2_1=1$, and $\mathbb{V}= (\ V_t,t\geq 0,V_0=0 )$ is a L\'{e}vy subordinator.\footnote{A L\textrm{\'{e}}vy subordinator is a L\textrm{\'{e}}vy process with an increasing sample path \citep[see][Chapter 6]{Sato:1999}.} Note that $\mathbb{B}$, $\mathbb{L}$, and $\mathbb{V}$ are independent processes defined on the stochastic basis of the natural world $ (\mathrm{\Omega },\mathcal{F},\ \mathbb{F}= ({\mathcal{F}}_t,t\ge 0 )\mathrm{,}\mathbb{P} )$. The trajectories of $\mathbb{L}$ and $\mathbb{V}$ are assumed to be right-continuous with left limits. We view $\mathbb{V}$ as the $\mathcal{S}$-\textit{intrinsic} (business) time of the pure jump (the non-Gaussian, non-diffusion) part of the log return process, representing the cumulative price value at time $t\geq0$ of a traded asset $\mathcal{V}$. We will refer to the asset $\mathcal{V}={\mathcal{V}}_{\mathcal{S}}$ as the $\mathcal{S}$-intrinsic jump volatility. The parameter $\varrho \neq 0$ is the volatility of the continuous dynamics of $\mathbb{X}$, and $\sigma$ is the volatility of the pure jump part of the subordinated process $L_{V_t}$. \subsection*{Equivalent Martingale Measure} Let $\mathcal{B}$ be a riskless asset with price $b_t=e^{rt},t\ge 0$, where $r\ge 0$ is the riskless rate.
For the pricing of financial derivatives, we search for an equivalent martingale measure (EMM) $\mathbb{Q}$ of $\mathbb{P}$ on $ (\mathrm{\Omega },\mathcal{F},\ \mathbb{F}= ({\mathcal{F}}_t,t\ge 0 ) )$, that is, a measure under which the discounted price process $Z_t=\frac{S_t}{b_t}$ is a martingale.\footnote{ See \citet[Chapter 6]{Duffie:2001}, and \citet[Section 2.5]{Schoutens:2003}.} The market $ (\mathcal{S},\mathcal{B} )$ is incomplete, and the EMM is not unique. It is generally accepted that the MCMM is sufficiently flexible for calibrating market data.\footnote{ See \citet[Chapters 6 and 7]{Schoutens:2003}. It is tempting to find an EMM using the Esscher transform (see \cite{Esscher:1932}, \cite{Gerber:1994}, \cite{Salhi:2017}), as in this case we can set $\varrho =0$. However, with $\varrho =0$, the Esscher transform method requires finding a unique solution $h^*$ of the equation $r=\mu +K_{V_1} ( K_{ L_1} ( (h+1 )\sigma ) )-K_{V_1} (K_{L_1} (h\sigma))$, where $K_{X_1 } (u )=\ln\mathbb{E}e^{uX_1},K_{L_1} (u )=\ln\mathbb{E}e^{uL_1}$ and $K_{V_1} (u )=\ln\mathbb{E}e^{uV_1}$, $u\in R$, are the cumulant-generating functions for $\mathbb{X}$, $\mathbb{L}$ and $\mathbb{V}$. In the general setting of \eqref{Eq2}, this is an impossible task.} Thus, we choose the MCMM as the risk-neutral probability measure $\mathbb{Q}$. \cite{Yao:2011} demonstrated that the $\mathbb{Q}$ obtained by the MCMM is equivalent to $\mathbb{P}$ if and only if the Gaussian part in the L\textrm{\'{e}}vy-Khintchine formula for the characteristic function of $\mathbb{X}$ is non-zero. If $\mathbb{X}$ is a pure jump L\textrm{\'{e}}vy process, the MCMM $\mathbb{Q}$ is not equivalent to $\mathbb{P}$; however, the European call option pricing formula under $\mathbb{Q}$ is still arbitrage-free. The price dynamics of $\mathcal{S}$ under $\mathbb{Q}$ is given by \begin{equation} \label{Eq3} S^{(\mathbb{Q})}_t=S_0\frac{b_t}{M_{X_t} (1)}e^{X_t}=S_0e^{(r-K_{X_1}(1))t+X_t},t \geq 0 \end{equation} where the moment-generating function (MGF) $M_{X_t}$ and the cumulant-generating function (CGF) $K_{X_t}$ of the L\'{e}vy process $\mathbb{X}$ are \begin{equation} M_{ X_t } (u )=\mathbb{E}e^{uX_t}={(M_{X_1}(u))}^t,u\ge 0 \end{equation} \begin{equation} K_{ X_t}(u)=\ln M_{X_t}(u),\ u\ge 0,\ t\ge 0 \end{equation} Similarly, let $M_{ L_t}$ and $M_{V_t}$, $t\ge 0$, be the MGFs of $\mathbb{L}$ and $\mathbb{V}$, respectively, and let $K_{L_t}$ and $K_{V_t}$ be the corresponding CGFs.
We then have \begin{equation} \label{Eq4} K_{ X_1} (1)=\mu +\frac{{\varrho }^2}{2}+K_{ V_1} (K_{L_1} (\sigma))\ <\infty \end{equation} \subsection*{Option Pricing Model} Let $\mathcal{C}$ be a European call contract with underlying risky asset $\mathcal{S}$, maturity $T>0$, and strike $K>0.$ Then the price of $\mathcal{C}$ at $t=0,$ is given by \begin{equation} \label{Eq5} C (S_0,r,K,T )=e^{-rT}{\mathbb{E}}^{\mathbb{Q}}{\mathrm{max} (S^{ (\mathbb{Q} )}_T-K,0 )} \end{equation} Carr and Madan (1998) \iffalse\footnote{ See also \citet[][Chapter 2]{Schoutens:2003}, and \cite{Lee:2004}.} \fi showed that if $a>0$, which leads to ${\mathbb{E}}^{\mathbb{Q}}{ (S^{ (\mathbb{Q} )}_T )}^{a}<\infty $, then \begin{equation} \label{Eq6} C (S_0,r,K,T )=\frac{e^{-rT-ak}}{\pi }\int^{\infty }_0{e^{-ivk}}\frac{{\varphi }_{lnS^{ (\mathbb{Q} )}_T} (v-i(a+1) )}{{a}^2+a -v^2+i (2a +2 )v}dv \end{equation} where $k=lnK$ and ${\varphi }_{lnS^{ (\mathbb{Q} )}_t} (v )={\mathbb{E}}^{ (\mathbb{Q} )}e^{ivlnS^{ (Q )}_t}$ is the characteristic function (ch.f.) of the log-price process ${\mathbb{L}\mathbb{S}}^{ (\mathbb{Q} )}= (lnS^{ (\mathbb{Q} )}_t,t\ge 0 )$. From \eqref{Eq3} and \eqref{Eq4} the ch.f. ${\varphi }_{lnS^{ (\mathbb{Q} )}_t}$ of the log-price process ${\mathbb{L}\mathbb{S}}^{ (\mathbb{Q} )}$ is given by \begin{equation} \label{Eq7} \begin{split} {\varphi }_{lnS^{ (\mathbb{Q} )}_t} (v )&=S^{iv}_0e^{iv (r-K_{X_1} (1) )t}{\varphi }_{X_t}(v) \\ &=S^{iv}_0{\mathrm{exp} \{ [iv (r-K_{X_1}(1))+{\psi }_{X_t} (v) ]t\}} \end{split} \end{equation} where ${\varphi }_{ X_t} (v)=\mathbb{E}e^{ivX_t}$ is the ch.f. of $\mathbb{X}$ and ${\psi }_{ X_t} (v )=ln{\varphi }_{ X_t}(v)$ is the characteristic exponent of $\mathbb{X}$. Similarly, the characteristic functions and corresponding characteristic exponents for $\mathbb{L}$ and $\mathbb{V}$ are ${\varphi }_{ L_t}$, ${\psi }_{L_t}$, ${\varphi }_{ V_t}$, and ${\psi}_{ V_t}$. And the domain of those functions and exponents are complex planes. From \cite{Sato:1999}, the exponential moment conditions guaranteeing that ${\psi }_{L_t} (v ),\ v\in \mathbb{C}$ and ${\psi }_{V_t} (v ),\ v\in \mathbb{C}\mathrm{,}$ are well defined. Then, we have \begin{equation} \label{Eq8} {\psi }_{X_t}(v)=iv\mu -\frac{{\varrho }^2}{2}v^2+{\psi}_{V_t} (-i{\psi }_{ L_t} (v\sigma)),\,v\in \mathbb{C}. \end{equation} Thus, we derive the call option price $C (S_0,r,K,T )\ $ in \eqref{Eq5} using \eqref{Eq6}, \eqref{Eq7}, and \eqref{Eq8}. \section*{\uppercase{option pricing for mixed subordinated normal inverse Gaussian process}} In this section, we apply the European call option pricing formula \eqref{Eq6} where $\mathbb{L}$ is the Normal Inverse Gaussian (NIG) L\'{e}vy process\footnote{ See \cite{Barndorff:1994}, \cite{Eriksson:2009}, and \citet[][Section 5.3.8]{Schoutens:2003}.} and $\mathbb{V}$ is the Inverse Gaussian (IG) L\'{e}vy subordinator.\footnote{ See \citet[][Chapter 12]{Barndorff:2015} and \citet[][Section 5.3.2]{Schoutens:2003}.} Then, the CGF $K_{ L_1}$ of the NIG process $\mathbb{L}$ has the following parametric form: \begin{equation} \label{Eq9} K_{L_1}(u)=mu+d (\sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2-{ (\beta+u )}^2} ),u\in (-\alpha-\beta,\alpha-\beta ) \end{equation} where $m\in R$ is the location parameter, $\alpha>0$ is the tail-heaviness parameter, $\beta\in R$ ($ \lvert \beta \rvert < \alpha$) is the asymmetry parameter, and $d$ is the scale parameter. 
Then the CGF $K_{V_1}$ of the IG subordinator $\mathbb{V}$ is given by \begin{equation} \label{Eq10} K_{V_1}(u)=\frac{\ell }{h} (1-\sqrt{1-\frac{2h^2u}{\ell }} ),u\in (0,\frac{\ell }{2h^2} ), \end{equation} where $h >0$ is the mean of $V_1$ and $\ell>0$ is the shape parameter of the IG distribution. \subsection*{Characterization of the distributional law of the log-price process $\mathbb{X}$} We now study the ch.f. and the cumulants of $ X_t=\mu t+\varrho B_t+\sigma L_{V_t}$, $t\ge 0$, $\mu \in R$, $\varrho \in R\setminus \{0 \}$, and $\sigma \in R$. The ch.f. of ${X_1}$ has the form \begin{equation} \label{Eq11} \begin{array}{c} \varphi_{X_1}(v)=e^{iv \mu - \frac{1}{2} \rho^2 v^2+\frac{l}{h} [1-\sqrt{1-\frac{2h^2}{l} [d [ \sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2- ( \beta+\sigma iv )^2 } ] +iv m \sigma ] } ] }, v\in \mathbb{C} \end{array} \end{equation} The MGF of $X_1$, $M_{X_1}(u)$, is obtained by setting $v=\frac{u}{i}$: \begin{equation} \label{Eq12} \begin{array}{c} M_{X_1}(u)=e^{u \mu + \frac{1}{2} \rho^2 u^2+\frac{l}{h} [1-\sqrt{1-\frac{2h^2}{l} [d [ \sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2- ( \beta+\sigma u )^2 } ] +u m \sigma ] }] }, \end{array} \end{equation} with the constraints \begin{equation} \label{MGF_condition} 0<u< (\frac{\alpha-\beta}{\sigma} ) \end{equation} \begin{equation} u ( m \sigma ) +d ( \sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2- ( \beta+\sigma u )^2 } ) <\frac{l}{2h^2} \end{equation} In this case, $X_1$ has a finite exponential moment for any $u$ satisfying \eqref{MGF_condition}. From the representation of the MGF, we can determine the first four moments of $X_1$. To find the first four central moments of $X_1$, we use the CGF $K_{X_1} (u )=\ln M_{X_1} (u )$ and the cumulants ${\kappa }_n\mathrm{=}{ [\frac{{\partial }^n}{\partial u^n}K_{X_1} (u ) ]}_{u=0},\ n=1,2,3,4$. The CGF is \begin{equation} \label{Eq13} \begin{split} K_{ X_1}(u)=u \mu + \frac{1}{2} \rho^2 u^2+\frac{l}{h} [1-\sqrt{1-\frac{2h^2}{l} [d[ \sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2- ( \beta+\sigma u )^2 }]+u m \sigma]}] \end{split} \end{equation} Then, we have \begin{align*} \mathbb{E}(X_1)&={\mathrm{\kappa }}_{1} \\ Var (X_1 )&={\mathrm{\kappa }}_{2} \\ {Skewness} (X_1)&=\frac{\mathbb{E}{ [X_1-\mathbb{E}X_1 ]}^3}{{ [var(X_1) ]}^{\frac{3}{2}}}=\frac{{\kappa }_{3}}{{ ({\kappa }_{2} )}^{\frac{3}{2}}} \\ {Excess Kurtosis} (X_1 )&=\frac{\mathbb{E}{ [X_1-\mathbb{E}X_1 ]}^4}{{ [var(X_1) ]}^2}-3=\frac{{\kappa }_{4}}{{ ({\kappa }_{2} )}^2}. \end{align*} More specifically, the mean of $X_1$ is given by \begin{equation} \label{Eq14} E(X_1)=\mu + h ( m\sigma + \frac{\beta d \sigma}{\sqrt{\alpha^2 - \beta^2}} ) \end{equation} For the variance of $X_1$ we have \begin{equation} \label{Eq15} Var(X_1)=h ( \frac{ d\sigma^2} { \sqrt{ \alpha^2-\beta^2}} +\frac{\beta^2 d \sigma^2}{ ( \alpha^2-\beta^2 )^{\frac{3}{2}}} ) +\rho^2 +\frac{h^3 ( m \sigma +\frac{ \beta d \sigma}{\sqrt{\alpha^2-\beta^2}} )^2}{l} \end{equation} The skewness and kurtosis are obtained by applying the same method, and thus are omitted. \subsection*{Option pricing with log-price process $\protect\mathbb{X}$ } \cite{Carr:1998} developed an explicit pricing method for vanilla options when the characteristic function of the log-price process under the risk-neutral measure is known. If we know the ch.f. of $\ln S^{ (\mathbb{Q} )}$, we can calculate the price of a call option by applying \eqref{Eq6}. From \eqref{Eq3} and \eqref{Eq4}, we can derive the ch.f.
of the log-price process ${\mathbb{L}\mathbb{S}}^{ (\mathbb{Q} )}= (lnS^{ (\mathbb{Q} )}_t,t\ge 0 )$ as follows: \begin{equation} \label{Eq16} \begin{split} {\varphi }_{lnS^{ (\mathbb{Q} )}_t} (v )&={\mathbb{E}}^{ (\mathbb{Q} )}e^{ivlnS^{ (Q )}_t} \\ &=S^{iv}_0e^{ivrt-\frac{1}{2}vt\rho^2(i+v)-P_1+P_2} \end{split} \end{equation} where \begin{align*} P_1&=iv\,t\,\frac{l}{h}\left[1-\sqrt{1-\frac{2h^2}{l}\left(d\left[\sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2-(\beta+\sigma)^2}\right]+m\sigma\right)}\right], \\ P_2&=t\,\frac{l}{h}\left[1-\sqrt{1-\frac{2h^2}{l}\left(d\left[\sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2-(\beta+\sigma iv)^2}\right]+ivm\sigma\right)}\right]. \end{align*} To determine the price of a call option, we substitute \eqref{Eq16} into \eqref{Eq5} and perform the required integration. We use the fast Fourier transform (FFT) to estimate the call option price in \eqref{Eq6} with strike $K$, time to maturity $T$, and risk-free rate $r$ at time $0$. \section*{\uppercase{numerical example}} In this section, we apply the method introduced in the previous section. We use the historical data of the S$\&$P 500 index\footnote {See \url{https://us.spdrs.com/en/etf/spdr-sp-500-etf-SPY}} and the CBOE volatility index (VIX) \footnote{VIX is an index created by CBOE, representing 30-day implied volatility calculated from S\&P500 options, see {http://www.cboe.com/vix.}} to estimate the model parameters for spot traders, while using the call option prices for the SPDR S\&P 500 ETF (SPY) \footnote{https://finance.yahoo.com/quote/SPY/options?p=SPY} as the dataset to estimate the model parameters for option traders.
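Before turning to the data, we note that a damped-call representation such as \eqref{Eq6} is straightforward to evaluate numerically once a characteristic function is supplied. The sketch below is ours and purely illustrative: it uses the standard Carr--Madan damping with a placeholder lognormal characteristic function instead of \eqref{Eq16}, and it is not the calibration code used for the results reported later.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def carr_madan_call(K, T, r, cf, a=0.75):
    """Damped-call price by quadrature; cf(u) = E[exp(i*u*ln S_T)] under Q."""
    k = np.log(K)
    def integrand(v):
        num = np.exp(-1j * v * k) * np.exp(-r * T) * cf(v - 1j * (a + 1.0))
        den = a * a + a - v * v + 1j * (2.0 * a + 1.0) * v
        return (num / den).real
    integral, _ = quad(integrand, 0.0, 200.0, limit=400)
    return np.exp(-a * k) / np.pi * integral

# Placeholder: lognormal (Black-Scholes-Merton) characteristic function of ln S_T.
S0, sig, r, T = 292.58, 0.2, 0.015, 0.5
def cf_lognormal(u):
    m = np.log(S0) + (r - 0.5 * sig ** 2) * T
    return np.exp(1j * u * m - 0.5 * sig ** 2 * T * u ** 2)

print(carr_madan_call(K=290.0, T=T, r=r, cf=cf_lognormal))  # ~ the BSM call price
\end{verbatim}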
\subsection* {Fitting the spot market data} In this subsection, we apply the models we proposed earlier to estimate the returns of a broad-based market index (the S$\&$P 500), whose return is measured by the return of an exchange-traded fund, SPY. We model the pair $ (X_t,V_t )$, $t\geq 0$, where (1) $X_t$, $t\geq0$, is a stochastic model for the log-return of the SPY and (2) $V (t )$, $t\ge0$, is the cumulative VIX (i.e., $V(t)$ represents the cumulative value of the VIX over $ [0,t ]$). We then fit the IG distribution to the daily VIX data and evaluate the density using a PP-plot, a goodness-of-fit test, and the probability integral transform (PIT) test. The mean $(h)$ and shape $(l)$ parameters are $0.192548$ and $1.49156$, respectively, fitted by the maximum likelihood method to daily VIX index data from January 1993 to the end of March 2019. Exhibit 1 shows the fitted empirical and theoretical CDFs and the PP-plot for the IG distribution. Our estimated model performs well in matching the empirical CDF. Moreover, the apparent linearity of the PP-plot shows that the corresponding distributions are well-fitted. The Kolmogorov--Smirnov test gives a $p$-value of approximately $0.062$, meaning that it fails to reject the null hypothesis that our model is sufficient to describe the data. \begin{figure} \caption{IG PP-plot of VIX data} \label{the_IG_fitting} \end{figure} We then investigate the distribution of \begin{equation*} X_t=\mu t+\varrho B_t+\sigma L_{V_t}, t\ge 0, \mu \in R, \varrho \in R\setminus \{0 \}, \sigma \in R \end{equation*} as the stochastic model for the SPY log-return index by fitting the distribution derived from the ch.f. of $X_t$ to the data. Among the $10$ parameters of the stochastic process $X_t$, the two parameters $( l,h )$ of the intrinsic time-change process $V_t$ are estimated by fitting the IG distribution to the VIX data. For the remaining parameters, instead of the maximum likelihood method we apply model fitting via the empirical characteristic function (ECF) \citep[see][]{Yu:2003}. Notice that the probability density function (pdf) can be recovered from the ch.f. by Fourier inversion, which we compute with the FFT. The existence of a one-to-one correspondence between the CDF and the ch.f. makes inference and estimation using the ECF method as efficient as the maximum likelihood method. To estimate the model parameters, we minimized \begin{equation} \label{emprical_ch_minimization} h (x,\theta )=\int_{-\infty}^{\infty} \Big| \frac{1}{n}\sum_{i=1}^{n} e^{irx_i} -C ( r,\theta ) \Big|^2 dr, \end{equation} where $C(r,\theta)$ is the ch.f. of $X_1$ given by \eqref{Eq11}, viewed as a function of $r$ with parameter vector $\theta$. The database covers the period from January 1993 to March 2019, a total of 6,591 observations collected from Yahoo Finance. The initial values are obtained using the method of moments and by making educated guesses. For each set of initial values, we estimated the model parameters and considered the resulting model as a candidate for fitting the data. We implemented the FFT to calculate both the pdf and the corresponding likelihood values. The best model to fit and explain the observed data is chosen as the one with the largest likelihood value. The estimated parameters are summarized in Exhibit 2. The model density estimate and the empirical density of the daily SPY log-returns are plotted in Exhibit 3. A schematic sketch of the ECF minimization \eqref{emprical_ch_minimization} is given below.
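The following sketch is ours and purely illustrative: it discretizes \eqref{emprical_ch_minimization} on a finite grid and, for brevity, uses a two-parameter Gaussian stand-in for the model characteristic function \eqref{Eq11} together with simulated returns in place of the SPY data.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def ecf(u, x):
    """Empirical characteristic function of the sample x on the grid u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def objective(theta, emp, u):
    mu, sig = theta                          # placeholder 2-parameter model
    model_cf = np.exp(1j * u * mu - 0.5 * (sig * u) ** 2)
    return np.sum(np.abs(emp - model_cf) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(3e-4, 0.011, size=6591)       # stand-in for 6,591 daily log-returns
u = np.linspace(-300.0, 300.0, 301)
emp = ecf(u, x)
fit = minimize(objective, x0=[0.0, 0.02], args=(emp, u), method="Nelder-Mead")
print(fit.x)                                 # recovered location and scale
\end{verbatim}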
The Exhibit reveals that our estimated model offers a good match between the pdf and the empirical density of the data. In our estimation $E(L_1)=m+\frac{d\beta}{\sqrt(\alpha^2-\beta^2)}\approx0$ and $Var(L_1)=\frac{d\alpha^2}{ ( \sqrt(\alpha^2-\beta^2) ) ^3}\approx 1$. \begin{table}[htb] \caption*{ Exhibit 2: The estimated parameters of the distribution fitted to daily SPDR S\&P 500 log-returns.} \label{tab:Table_Option_trader} \begin{tabularx}{\textwidth}{c *{8}{Y}} \toprule[1pt] {$\mu$}&{$m$}&{$\alpha$}&{$\beta$}&{$d$}&{$\rho$}&{$\sigma$}\\ \hline 0.00002&-0.00018&310.8&1.19&0.007&0.0011&2.199\\ \bottomrule[1pt] \end{tabularx} \end{table} \begin{figure}\label{figure:SPY_fit} \end{figure} \subsection*{Calibration of the spot market data} We now apply our mixed subordinated L\'{e}vy process model to price a European vanilla option on the SPY index. First, we calibrate the parameters of the model's risk-neutral probability measure. The calibration is performed by implementing the ``Inverse of the Modified Call Price" methods introduced by \cite{Carr:1998}. The data we use for call option prices are from Yahoo Finance for 08/29/2019 with different expiration dates and strike prices. The expiration date varies from 08/30/2019 to 12/17/2021, and the strike price varies from \$25 to \$430 among 2,440 different call option contracts. As the underlying of the call option, the SPY index price is \$292.58 on 08/29/2019. We use the 10-year Treasury yield curve rate\footnote{https://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yieldYear\&year=2019} on 08/29/2019 as the risk-free rate $r$, here $r=0.015$. Following \cite{Schoutens:2003}, we set $a=0.75$ and calibrate parameters from call option prices by \eqref{Eq6}. The estimated parameters of the best model are reported in Exhibit 4. \begin{table}[htb] \caption*{Exhibit 4: The calibrated parameters fitted call option prices on 08/29/2019.} \label{TableStock_trader} \begin{tabularx}{\textwidth}{c *{7}{Y}} \toprule[1pt] {$m$}&{$\alpha$}&{$\beta$}&{$d$}&{$\rho$}&{$\sigma$}\\ \hline -0.4&241&1.2&5&0.05&2.13\\ \bottomrule[1pt] \end{tabularx} \end{table} We use the inverse FFT and nonlinear least-squares minimization strategy to calibrate the parameters. As shown in Exhibit 4, the calibrated parameters have similar values to those reported in Exhibit 2, which is from the spot SPY and VIX. Note that the same method can be applied to put options. Since the model parameters are estimated from call option data, the model is the asset log-return model observed by option traders. \section*{\uppercase{implied Probability Weighting function}} The general framework of behavioral finance provides an alternative view of the mixed subordinated price process \citep[see][]{Barberis:2003}. \cite{Tversky:1992} introduced the Cumulative Prospect Theory (CPT). According to this theory, positive and negative returns on financial assets are treated differently due to the general fear disposition of investors. To quantify an investor's fear disposition, \cite{Tversky:1992} and \cite{Prelec:1998} introduced a PWF, $w^{ (\mathcal{R},\mathcal{S} )}: [0,1 ]\to [0,1 ]$, transforming the asset return distribution given by \begin{equation*} F_{\mathcal{R}} (x )\mathrm{=}\mathbb{P} (\mathcal{R}\le x ),x\in R \end{equation*} to a new one given by \begin{equation*} F_{\mathcal{S}} (x )\mathrm{=}\mathbb{P} (\mathcal{S}\le x )=w^{ (\mathcal{R},\mathcal{S} )} (F_R (x ) ),x\in R \end{equation*} corresponding to an option trader's views. 
\cite{Tversky:1992} introduced the following PWF \begin{equation} \label{PWT} w^{ (\mathcal{R},\mathcal{S};TK )} (u )=\frac{u^{\gamma }}{{ [u^{\gamma }+{ (1-u )}^{\gamma } ]}^{\frac{1}{\gamma }}},\,\,u\in (0,1 ),\,\, \gamma \in [0,1]. \end{equation} This PWF corresponding to $F_{\mathcal{S}} (x )$ requires an infinitely divisible distribution of the asset return. If not, it would lead to arbitrage opportunities in behavioral asset pricing models. \cite{Rachev:2017} studied the general form of PWF consistent with dynamic asset pricing theory. They treated $\mathcal{R}=M_t$, $t\ge0$ as the asset price dynamics before introducing the views of investors, where $\mathcal{R}=M_t$, a single subordinated log-price process, is given by \begin{equation} \label{single_subordinated} M_t=\mu t+\gamma U (t )+\sigma B_{U(t)}\, ,\,\,t\ge 0,\,\,\mu \in R,\,\,\gamma \in R,\,\,\sigma >0 \end{equation} The investor’s fear can be taken into account by introducing a new log-price process with a second ``behavioral" subordinator \citep[see][]{Shirvani:2019}. In our work, the investor's fear is incorporated into the BSM asset return model by introducing a pure jump L\'{e}vy process $L_t$ with $\mathbb{E}L_1=0,\ \mathbb{E}L^2_1=1$. The new mixed L\'{e}vy process is \begin{equation} X_t=\mu t+\varrho B_t+\sigma L_{t},\,\,t\ge 0,\ \mu \in R,\,\varrho \in R\diagdown \{0 \},\ \sigma \in R \label{investor_view_equation} \end{equation} The ch.f., $\varphi_{X_1}(v)$ has the form \begin{equation} \label{Chf_investor} \begin{array}{c} \varphi_{X_1} (v )=e^{iv \mu - \frac{1}{2} \rho^2 v^2+iv m \sigma +d [ \sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2- ( \beta+\sigma iv )^2 } ] }, v\in \mathbb{C}. \end{array} \end{equation} The MGF of $X_1$, $M_{X_1} (u )$, is obtained by setting $v=\frac{u}{i}$ \begin{equation} \label{MGF_trade} \begin{array}{c} M_{X_1} (u )=e^{u \mu + \frac{1}{2} \rho^2 u^2+u m \sigma +d [ \sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2- ( \beta+\sigma u )^2 } ] }, u\in (0,\frac{\alpha-\beta}{\sigma} ) \end{array} \end{equation} The corresponding PWF, $w^{ (\mathcal{R},\mathcal{S} )}: [0,1 ]\to [0,1 ]$, is defined by \begin{equation*} w^{ (\mathcal{R},\mathcal{S} )} (u )=F_{\mathcal{S}} (F^{inv}_{\mathcal{R}} (u ) ) \end{equation*} where $F^{inv}_{\mathcal{R}} (u )={\mathrm{min} \{x:F_{\mathcal{R}} (x )>u \}\ }$ is the inverse function of $F_{\mathcal{R}} (x )$ \citep[see][]{Rachev:2017}. This PWF $w^{ (\mathcal{R},\mathcal{S} )}$ represents the views of the option trader on the spot market model. These views about the market are different from those of a spot trader. In general, option traders are more ``fearful" than spot traders due to the non-linearity of the risk factors they face. To study whether option traders are greedy or fearful, we need to calculate $w^{ (\mathcal{R},\mathcal{S} )}$ and focus on the shape of PWF. To do so, we calculate the PWF of option traders by transforming the spot trader's distribution to the corresponding option trader's distribution where the asset log-return process follows \eqref{investor_view_equation}. We take $\mathcal{R}=X_t$, $t\ge0$ as the dynamics of the current log-price return observed by spot traders if the parameters of $X_t$, $t\ge0$ are estimated from the spot market or the natural world. 
Moreover, we consider $\mathcal{S}=X^{risk-neutral}_t$ as the dynamics of the log-price return observed by option traders where $X^{risk-neutral}_t$ is \begin{equation*} X^{risk-neutral}_t=X^{ (\mathbb{Q} )}_{t}=r t+\varrho B_t+\sigma^{\mathbb{Q}} L^{\mathbb{Q}}_{t}\,,\, \varrho \in R\setminus \{0 \}, \sigma^{\mathbb{Q}} \in R \end{equation*} where $\varrho$ is estimated from the spot prices of the underlying asset. The remaining parameters for the distribution of $X^{risk-neutral}_t$ are calibrated from the risk-neutral world. To estimate the parameters in $\mathcal{R}=X_t$, where $X_t$ represents the dynamics of the log-price return observed by spot traders, we applied the ch.f. method to daily log-returns (based to closing prices) of the SPY from January 1993 to March 2019. The model's estimated parameters are summarized in Exhibit 5. We implemented the FFT to calculate the CDF of the model. The result, plotted in Exhibit 6, shows that our estimated model provides a good match between the CDF and the CDF of the data. \begin{table}[htb] \begin{center} \label{taboptin_trade} \caption*{Exhibit 5: The estimated parameters of the distribution of spot traders fitted to daily SPDR S\&P 500 log-returns} \begin{tabularx}{\textwidth}{c *{7}{Y}} \toprule[1pt] {$m$}&{$\alpha$}&{$\beta$}&{$d$}&{$\rho$}&{$\mu$}&{$\sigma$}\\ \hline 0.00039&176.8&3.45&0.0025&0.0011&-0.00008&1.399\\ \bottomrule[1pt] \end{tabularx} \end{center} \end{table} \begin{figure}\label{figureTrader_fit} \end{figure} We calibrate the parameters of $\mathcal{S}=X^{risk-neutral}_t$ in the risk-neutral probability space using the ``Inverse of the Modified Call Price" methods (\cite{Carr:1998}). Let $\mathcal{S}$ be a traded risky asset with price process \begin{equation*} S_t=S_0 e^{X_t},\ t\ge 0,S_0>0 \end{equation*} where the log-price process $\mathbb{X}= (X_t=lnS_t,t\ge 0 )$ is a mixed L\'{e}vy process: \begin{equation*} X_t=\mu t+\varrho B_t+\sigma L_{t},\,\,t\ge 0,\ \mu \in R,\,\varrho \in R\setminus \{0 \},\ \sigma \in R \end{equation*} Since $X_t$ is a pure jump L\textrm{\'{e}}vy process, the MCMM $\mathbb{Q}$ is not equivalent to $\mathbb{P}$, while the European call option pricing formula under $\mathbb{Q}$ is still arbitrage free. The ch.f. for $X_t$, with $ v\in \mathbb{C}$ \iffalse${\varphi }_{lnS^{ (\mathbb{Q} )}_t} (v )={\mathbb{E}}^{ (\mathbb{Q} )}e^{ivlnS^{ (Q )}_t}$\fi , is given by \begin{equation} \label{Chf_option_trader} {\varphi }_{lnS^{ (\mathbb{Q} )}_t} (v )=S^{iv}_0e^{ivrt- \frac{1}{2} vt\rho^2 (i+v ) -ivtd [ \sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2- ( \beta+\sigma )^2 } ] +td [ \sqrt{\alpha^2-\beta^2}-\sqrt{\alpha^2- ( \beta+\sigma iv )^2 } ] } \end{equation} To calibrate our model parameters, we use the same dataset of call option prices in the previous section. The 10-year Treasury yield curve rate is regarded as the risk-free rate $r$. According to \cite{Schoutens:2003}, we set $a=0.75$ and calibrate parameters based on call option prices by \eqref{Eq6} with the same methods mentioned in the previous section and construct the CDF of option traders. Using the CDFs of $\mathcal{S}$ and $\mathcal{R}$, we numerically computed the corresponding PWF, $w^{ (\mathcal{R},\mathcal{S} )}$. \cite{Gonzalez:1999} discuss two features of PWF: \textit{Diminishing sensitivity and discriminability} and \textit{attractiveness}. They interpreted the discriminability as the degree of curvature of the PWF and attractiveness as the elevation of the PWF. 
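For illustration, the implied PWF $w^{ (\mathcal{R},\mathcal{S} )}(u)=F_{\mathcal{S}}(F^{inv}_{\mathcal{R}}(u))$ can be evaluated on a grid as soon as the two CDFs are available. The sketch below is ours and uses stand-in distributions (a Gaussian for the spot traders' CDF and a heavier-tailed Student-$t$ for the option traders' CDF) rather than the fitted and calibrated models described in the text.

\begin{verbatim}
import numpy as np
from scipy.stats import norm, t

# Implied probability weighting function w(u) = F_S(F_R^{inv}(u)) on a grid.
u = np.linspace(1e-4, 1.0 - 1e-4, 999)
F_R_inv = norm.ppf(u, loc=3e-4, scale=0.011)      # spot traders' (stand-in) model
w = t.cdf(F_R_inv, df=3, loc=3e-4, scale=0.011)   # option traders' (stand-in) model

# For these stand-ins, w is steeper than the identity near both endpoints and
# flatter in the middle, i.e. it shows the inverse-S shape discussed in the text.
print(w[0], w[-1])
\end{verbatim}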
\cite{Tversky:1992} presented a psychological definition for diminishing sensitivity as: people are less sensitive to change in probability as they move from reference points. Zero and one refer to reference points in the probability domain. The plotted PWF in Exhibit 7 shows diminishing sensitivity of option traders. As shown in Exhibit 7, the PWF has an inverse-S-shape, first concave and then convex. The plot falls sharply near the probability value $0.17$ and rises steeply near the point $0.95$ to $1$. The PWF varies slightly in interval $ (0.1,0.9)$, indicating that option traders overestimate the probability of values that are not close to reference points. In other words, option traders overweight the probability of big losses and underweight the probability of big profits. The falling near to zero and rising near the endpoint (concave, then convex) of the PWF, represent option trader's fear of a big jump in the market, especially for big losses. That is, option traders tend to be more fearful than spot traders. The second feature of the PWF discussed by \cite{Gonzalez:1999} is not related to the shape and curvature of the PWF and therefore beyond the scope of this paper. \begin{figure}\label{figurePWF} \end{figure} \section*{\uppercase{conclusion}} In this paper, we develop a more realistic asset pricing model by mixing the BSM asset return process with a single L\'{e}vy subordinated process, through which we are able to incorporate the behavior and sentiment of investors in a log-return pricing model. Then we present the arbitrage-free equivalent market measure. We apply the European call option pricing formula where the subordinated process is a Normal Inverse Gaussian L\'{e}vy process. The model parameters are calibrated using the SPY index. The investor’s fear disposition is evaluated by the PWF. We reviewed the shape of the weighting function in terms of discriminability. The PWF shape of option traders starts out as concave and then becomes convex. This inverse-S-shape indicates that option traders are more sensitive to the change in probability of realizing a “big loss" and “big profit"; in other words, their behavior is such that option traders are more fearful than spot traders. \normalem \end{spacing} \end{document}
\begin{document} \title{Experimental measurement-device-independent quantum key distribution with the double-scanning method} \author{Yi-Peng Chen$^{1,2,3,\dagger}$} \author{Jing-Yang Liu$^{1,2,3,\dagger}$} \author{Ming-Shuo Sun$^{1,2,3}$} \author{Xing-Yu Zhou$^{1,2,3,\S}$} \author{Chun-Hui Zhang$^{1,2,3}$} \author{Jian Li$^{1,2,3}$} \author{Qin Wang$^{1,2,3}$} \email{[email protected]} \email{[email protected]} \affiliation{$^{1}$Institute of quantum information and technology, Nanjing University of Posts and Telecommunications, Nanjing 210003, China.} \affiliation{$^{2}$``Broadband Wireless Communication and Sensor Network Technology'' Key Lab of Ministry of Education, NUPT, Nanjing 210003, China.} \affiliation{$^{3}$``Telecommunication and Networks'' National Engineering Research Center, NUPT, Nanjing 210003, China.} \begin{abstract} \noindent Measurement-device-independent quantum key distribution (MDI-QKD) is immune to all detector side-channel attacks. Moreover, it can be easily implemented in combination with the mature decoy-state methods under current technology. It thus seems a very promising candidate for the practical implementation of quantum communications. However, most existing MDI-QKD protocols suffer from severe finite-data-size effects, resulting in relatively low key rates. Recently, \textit{Jiang et al.} [Phys. Rev. A 103, 012402 (2021)] proposed a double-scanning method to drastically increase the key rate of MDI-QKD. Based on \textit{Jiang et al.}'s theoretical work, here we implement the double-scanning method in MDI-QKD for the first time and carry out a corresponding experimental demonstration. With a moderate number of pulses ($ 10^{10} $), we can achieve a 150 km secure transmission distance, which is impossible with all former methods. Therefore, our present work paves the way towards the practical implementation of MDI-QKD. \end{abstract} \maketitle \section{Introduction} Quantum Key Distribution (QKD), based on the laws of quantum physics, can offer unconditionally secure communication between two legitimate parties (Alice and Bob) \cite{Artur,Lo,May}, even if there exists a malicious eavesdropper (Eve). The first QKD protocol, named BB84, was proposed by Bennett and Brassard in 1984 \cite{Bennett}. Since then, it has attracted extensive attention and developed rapidly both in theory and in experiment \cite{I6,I7,I8}. The security proof of the BB84 protocol with nonideal photon sources was given by Gottesman et al. \cite{Single1,Single2}, the decoy-state method \cite{Wang,LO. HK} was invented to counter photon-number-splitting (PNS) attacks \cite{PNS}, and corresponding experimental demonstrations were carried out \cite{Tokyo,BB84 421,Onchip}. Moreover, to counter attacks directed at the detection side, the measurement-device-independent quantum key distribution (MDI-QKD) protocol was put forward \cite{MDI1,MDI2}. Combined with the decoy-state method, MDI-QKD can avoid attacks aimed at the detectors and at multi-photon components in the light source. Thereafter, a large number of theoretical and experimental works have been carried out to improve the practical performance of MDI-QKD \cite{MDIImprove3,MDIImprove4,MDIImprove6,MDIImprove7,MDIImprove8,MDIImprove9}. To date, the secure transmission distance of MDI-QKD has been extended to more than four hundred kilometres \cite{MDIImprove6}, which shows its great advantages in long-distance communication.
However, its key rate is still seriously affected by the finite-size effect, and the block size used for considering finite-size effects is usually larger than $ 10^{12} $ \cite{MDIImprove6,MDIImprove9}. Fortunately, the 4-intensity MDI-QKD protocol \cite{MDIImprove4} and the double-scanning method \cite{DoubleScan} have been invented and dramatically improve the performance of the MDI-QKD. Based on the above theoretical work, we carry out a proof-of-principle experimental demonstration, and mainly focus on the situation with a small data size, i.e., $ 10^{10} $ pulses are emitted and 150 km transmission distance is successfully achieved. In what follows, we first briefly review the theory on the MDI-QKD protocol with double-scanning method in Sec. \uppercase\expandafter{\romannumeral2}. We then describe our decoy-state MDI-QKD experimental setup in Sec. \uppercase\expandafter{\romannumeral3}. Experimental results are shown in Sec. \uppercase\expandafter{\romannumeral4}. Finally, the conclusion and outlook are given out in Sec. \uppercase\expandafter{\romannumeral5}. \section{Theory} In the following, we will briefly review the implementation flow of the 4-intensity MDI-QKD protocol with the double-scanning method. $Step$ (i). Alice (Bob) randomly modulates the light source $l$ $(r)$ into four different intensities, including the signal state $\mu$ , the decoy states $\nu$, $\omega$ and the vacuum state $o$, i.e., $l$ $(r) \in \{\mu, \nu, \omega, o\}$. The probabilities of choosing different intensities for Alice (Bob), are denoted as $P_l$ ($P_r$). The photon-number distribution of the phase-randomized light sources follows the Poisson distribution, $p_{n}^{\lambda} = \frac{\lambda^{n}}{n!}e^{-\lambda}$, where $\lambda$ is the average intensity, and $n \in \{0,1,2...\}$. In our work, encoding bases are decided by the different intensities, and the signal state pulses are only prepared in the Z basis while the decoy-state pulses are modulated in the X basis. $Step$ (ii). Charlie performs Bell-state measurements (BSMs) on the incident pulse pairs sent by Alice and Bob. For simplicity, we only consider the effective event when the pulse pairs are projected onto state ${|\psi^- \rangle}=({|01 \rangle}-{|10 \rangle})/\sqrt{2}$, where Alice and Bob can easily make their bit strings identical through fliping any one’s bit. After sufficient effective-event counts are recorded, Charlie publicly declares the BSM results. $Step$ (iii). Alice and Bob announce their basis choices to implement basis reconciliation. Corresponding gains $Q_{lr}^{XX}$, $Q_{lr}^{ZZ}$ and the quantum bit error rates (QBER) $E_{lr}^{XX}$, $E_{lr}^{ZZ}$ can be obtained directly. With these experimental observations, we carry out parameter estimations. Then, error correction and privacy amplification are processed to obtain the final secure key strings. In the process of parameter estimation, we need to calculate the yield of single-photon-pair contributions $(Y_{11}^{ZZ})$ and the phase-flip error of single-photon-pair $(e_{11}^{ph})$ in signal states. According to Ref. 
\cite{MDIImprove1} we have the lower bound of single-photon-pair yield $(Y_{11,L}^{XX})$ and upper bound of the bit-flip error rate $({e_{11,U}^{XX}})$ in decoy states: \begin{equation} \begin{split} {Y_{11,L}^{XX}} \geq &\dfrac{1}{{p_1^\omega}{p_1^\nu}({p_1^\omega}{p_2^\nu}-{p_1^\nu}{p_2^\omega})}[({p_1^\nu}{p_2^\nu}{\langle Q_{\omega\omega}^{XX}\rangle}\\ &+{p_1^\omega}{p_2^\omega}{p_0^\nu}{\langle Q_{o\nu}^{XX}\rangle} +{p_1^\omega}{p_2^\omega}{p_0^\nu}{\langle Q_{\nu{o}}^{XX}\rangle})\\ &-({p_1^\omega}{p_2^\omega}{\langle Q_{\nu\nu}^{XX}\rangle}+{p_1^\omega}{p_2^\omega}{p_0^\nu}{p_0^\nu}{\langle Q_{o{o}}^{XX}\rangle})\\ &-{p_1^\nu}{p_2^\nu}({p_0^\omega}{\langle Q_{o{\omega}}^{XX}\rangle}+{p_0^\omega}{\langle Q_{{\omega}o}^{XX}\rangle}\\ &-{p_0^\omega}{p_0^\omega}{\langle Q_{oo}\rangle})], \end{split} \end{equation} \begin{equation} \begin{split} {e_{11,U}^{XX}} \leq &\dfrac{1}{{p_1^\omega}{p_1^\omega}{Y_{11}^{XX}}}[{\langle Q_{\omega\omega}^{XX}E_{\omega\omega}^{XX} \rangle}-({p_0^\omega}{\langle Q_{o\omega}^{XX}E_{o\omega}^{XX} \rangle}\\ &+{p_0^\omega}{\langle Q_{\omega{o}}^{XX}E_{\omega{o}}^{XX} \rangle} -{p_0^\omega}{p_0^\omega}{\langle Q_{oo}E_{oo}\rangle})], \end{split} \end{equation} where ${\langle * \rangle} $ represents the expected value of the experimental observation. Refering to the double-scanning method \cite{DoubleScan}, we extract those common parts that exists in ${Y_{11,L}^{XX}}$ and ${e_{11,U}^{XX}}$, i.e., the error counts rate $\mathcal{M}={\langle Q_{\omega\omega}^{XX}E_{\omega\omega}^{XX} \rangle}$ and the vacuum related counts rate $\mathcal{H}={p_0^\omega}{\langle Q_{o\omega}^{XX} \rangle}+{p_0^\omega}{\langle Q_{\omega{o}}^{XX} \rangle}-{p_0^\omega}{p_0^\omega}{\langle Q_{oo}\rangle}$. Thus, we can reformulate Eqs. (1) and (2) as: \begin{equation} \begin{split} {Y_{11,L}^{XX}} \geq &\dfrac{1}{{p_1^\omega}{p_1^\nu}({p_1^\omega}{p_2^\nu}-{p_1^\nu}{p_2^\omega})}[({p_1^\nu}{p_2^\nu} \bar {\rm {\mathcal{M}}}+{p_1^\omega}{p_2^\omega}{p_0^\nu}{\langle Q_{o\nu}^{XX}\rangle}\\ &+{p_1^\omega}{p_2^\omega}{p_0^\nu}{\langle Q_{\nu{o}}^{XX}\rangle})-({p_1^\omega}{p_2^\omega}{\langle Q_{\nu\nu}^{XX}\rangle}\\ &+{p_1^\omega}{p_2^\omega}{p_0^\nu}{p_0^\nu}{\langle Q_{o{o}}^{XX}\rangle}) +{p_1^\nu}{p_2^\nu}(\mathcal{M}-\mathcal{H})], \end{split} \end{equation} \begin{equation} \begin{split} {e_{11,U}^{XX}} \leq \dfrac{1}{{p_1^\omega}{p_1^\omega}{Y_{11}^{XX}}}(\mathcal{M}-\mathcal{H}/2), \end{split} \end{equation} where $\bar {\rm {\mathcal{M}}}\ = {\langle Q_{\omega\omega}^{xx}(1-E_{\omega\omega}^{xx})\rangle}$, and the expected values in the above formulas can be calculated with the observed values through Chernoff bound method \cite{chernoff}: \begin{equation} \begin{split} &{\langle {\Gamma}_{lr} \rangle} \geq {F^L({\Gamma}_{lr})} =: {\Gamma}_{lr} - f((\frac{\epsilon}{2})^{3/2})\sqrt{\dfrac{{\Gamma}_{lr}}{N_{lr}}},\\ &{\langle {\Gamma}_{lr} \rangle} \leq {F^U({\Gamma}_{lr})} =: {\Gamma}_{lr} + f(\dfrac{(\epsilon/2)^4}{16})\sqrt{\dfrac{{\Gamma}_{lr}}{N_{lr}}}, \end{split} \end{equation} where ${\Gamma}_{lr}$ denotes the experimental observation and $f(x) = \sqrt{2ln(1/x)}$. $F^L(*)$ and $F^U(*)$ represent the lower and upper of Chernoff bound, respectively. Here $\epsilon$ is the failure probability \cite{MDISecure}. Moreover, we combine the technique of jonit constrains \cite{DoubleScan} and Eq. 
(5) to construct linear programming of the $Y_{11,L}^{XX}$ related to $\bar {\rm {\mathcal{M}}}$, ${\langle Q_{o\nu}^{XX}\rangle}$, ${\langle Q_{\nu{o}}^{XX}\rangle}$, ${\langle Q_{\nu\nu}^{XX}\rangle}$ and ${\langle Q_{o{o}}^{XX}\rangle}$. By solving the problem of linear programming \cite{DoubleScan}, we can get the estimation values of $Y_{11,L}^{XX}$ and $e_{11,U}^{XX}$. Furthermore, according to the proof in Ref. \cite{MDIImprove4}, the yield ${Y_{11,L}^{ZZ}}$ and the phase-flip error $e_{11,U}^{ph}$ in signal states satisfy: \begin{equation} \begin{split} {Y_{11,L}^{ZZ}} = {{Y_{11,L}^{XX}}},\qquad \qquad \end{split} \begin{split} e_{11,U}^{ph} = {e_{11,U}^{XX}}. \end{split} \end{equation} With the above estimated parameters, the secure key generation rate can be calculated: \begin{equation} \begin{split} R(\mathcal{H},\mathcal{M}) = {{p_{\mu}}^2{\left\{{(p_1^{\mu})^2}{Y_{11,L}^{ZZ}}{[1-h(e_{11,U}^{ph})]-{fQ_{{\mu}{\mu}}^{ZZ}h(E_{{\mu}{\mu}}^{ZZ})}}\right\}}} \label{ep} \end{split} \end{equation} where $Q_{{\mu}{\mu}}^{ZZ}$ is the overall gain and $E_{{\mu}{\mu}}^{ZZ}$ is the QBER in signal states, and their values can be observed in experiment; $h(x) = -x{\log_2(x)}-(1-x){\log_2(1-x)}$ is the binary Shannon information function. Moreover, the lower and upper bounds of $\mathcal{H}$ and $\mathcal{M}$, i.e., $[{\mathcal{H}_L},{\mathcal{H}_U}]$ and $[{\mathcal{M}_L},{\mathcal{M}_U}]$, can also be acquired with the process of joining constraints described above. Then, we simultaneously scan $\mathcal{H}$ in $[{\mathcal{H}_L},{\mathcal{H}_U}]$ and $\mathcal{M}$ in $[{\mathcal{M}_L},{\mathcal{M}_U}]$ to implement the joint study, and get the final key rate: \begin{align} R_L = \min_{{\mathcal{H} \in [{\mathcal{H}_L},{\mathcal{H}_U}]}\atop{\mathcal{M} \in [{\mathcal{M}_L},{\mathcal{M}_U}]}}R(\mathcal{H},\mathcal{M}). \label{ep} \end{align} \section{Experiment} The schematic of our experimental setup is shown in Fig. 1. In this work, we employ Faraday-Michelson interferometers (FMIs) to implement a time-bin phase encoding scheme. Alice and Bob each have a narrow linewidth continuous-wave laser (Clarity NLL-1550-LP), whose wavelength is locked to the P14 line of C13 acetylene at 1550.51 nm. The frequency-locked lasers ensure that the two-photon interference at Charlie's side is consistent in frequency. Four intensity modulators (IMs) are deployed on each side, and we will singly introduce the function of the following IMs from the source to the detectors. The first two IMs are applied to chop continuous light into a pulse train with a repetition rate of 50 MHz and further modulate them into the decoy states. FMI, as the core device of the coding scheme, is composed of a 50/50 beam splitter (BS), a phase modulator (PM) and two Faraday mirrors (FMs). Moreover, the circulator can effectively filter light pulses reflected by the FMs. Each pulse entering the FMI is split into front and rear time bins. The arm length difference between the long and short arms of the FMI distinguishes time bins with a temporal difference of 10.3 ns. Phase encoding in the X basis can be performed by changing the extra phase voltages of PM at the long arm. The following third and fourth IMs are adopted for basis choice and time-bin encoding. For the Z basis, the two IMs can remove the front or rear time bin, representing $|0\rangle$ or $|1\rangle$ bit respectively. The cascaded IMs can further improve the extinction ratio, giving a reduced optical error rate (~0.1\%) in the Z basis. 
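Before continuing with the experimental details, we note that the double scanning recalled in Sec. \uppercase\expandafter{\romannumeral2} reduces, at the key-rate evaluation stage, to minimizing $R(\mathcal{H},\mathcal{M})$ over the rectangle $[{\mathcal{H}_L},{\mathcal{H}_U}]\times[{\mathcal{M}_L},{\mathcal{M}_U}]$. The following sketch is ours and purely schematic: the constants and the stand-in for the linear-programming output are placeholder numbers, not our experimental data.

\begin{verbatim}
import numpy as np

def h2(x):
    """Binary Shannon entropy, for 0 < x < 1."""
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def double_scan(R, H_bounds, M_bounds, n=200):
    """Worst-case key rate: minimize R(H, M) over the joint Chernoff rectangle."""
    H, M = np.meshgrid(np.linspace(*H_bounds, n), np.linspace(*M_bounds, n))
    return np.min(R(H, M))

# Placeholder key-rate function in the spirit of R(H, M); toy numbers only.
p_mu, p1_mu, f, Qzz, Ezz = 0.18, 0.26, 1.16, 2e-5, 0.01
def R_toy(H, M):
    Y11 = 1e-3 + 0.05 * (M - H)              # stand-in for the LP lower bound
    e_ph = np.clip((M - H / 2) / (p1_mu ** 2 * Y11), 1e-6, 0.5)
    return p_mu ** 2 * (p1_mu ** 2 * Y11 * (1.0 - h2(e_ph)) - f * Qzz * h2(Ezz))

print(double_scan(R_toy, (5e-7, 1e-6), (2e-6, 3e-6)))   # toy worst-case key rate
\end{verbatim}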
Moreover, considering that the PM inevitably causes insertion loss, the latter two IMs should also be designed to balance the intensity difference between the front and rear bin pulses. To obtain the optimal operating voltages of the PMs and IMs, periodically scanning the voltages and analyzing the corresponding count rates is necessary to keep the experimental system free-running. \begin{figure*} \caption{Schematic of the experimental setup. Laser: continuous-wave laser; IM: intensity modulator; Circ: circulator; BS: beam splitter; PM: phase modulator; FM: Faraday mirror; ATT: attenuator; EPC: electronic polarization controller; SNSPD: superconducting nanowire single-photon detector. } \label{experiment} \end{figure*} The light pulses are attenuated to the single-photon level before being sent to Charlie through commercial standard single-mode fiber with a transmission coefficient of 0.18 dB/km. Then the pulse pairs from Alice and Bob interfere at Charlie's BS and are sent into two commercial superconducting nanowire single-photon detectors (SNSPDs), which are connected to a time-to-digital converter (TDC) for data processing. To be specific, the quality of two-photon interference is determined by the photon frequency, the arrival time and the photon polarization. Here the temporal indistinguishability is maintained by inserting an electric optical delay line on Alice's side (not shown in Fig. 1). To maintain polarization stability, two electronic polarization controllers (EPCs) plus two polarization BSs (PBSs) are inserted before the BS to implement automatic polarization compensation. Finally, the interference visibility in our experiment is better than 48\%. The total efficiency of the experimental system is 60\%, including the losses of the EPCs, PBSs, BS, and SNSPDs. Moreover, the SNSPDs used in this work run at 2.2 K and provide 85\% detection efficiencies at dark count rates of 12 Hz. \section{Results and discussion} \begin{figure} \caption{The theoretical and experimental secret key rates versus the transmission distance. The red solid line represents our theoretical key rates, while the dot and the star represent experimental results with 120 km and 150 km fibers, respectively; the black dotted line indicates the theoretical predictions of Ref. \cite{MDIImprove4} and the blue dashed line those of Ref. \cite{MDIImprove1}.} \label{fig-2} \end{figure} The system parameters implemented in our experiment are listed in Table 1. The total number of pulses used in the experiment is $10^{10}$, and the failure probability $\epsilon$ is reasonably set as $10^{-10}$. The calculated optimal experimental parameters and the relevant data are displayed in Table 2. We conduct the experiment over 120 km and 150 km fibers, and achieve corresponding key rates of 43.54 bps and 0.06 bps, respectively, which agree well with theoretical predictions. Moreover, in order to illustrate the advantages of the new scheme, we also plot the variation of the key rate with transmission distance using either the 3-intensity \cite{MDIImprove1} or the 4-intensity \cite{MDIImprove4} decoy-state method; see Fig. 2. Obviously, the present double-scanning method shows an overwhelming advantage compared with the other two schemes. \begin{table}[ht] \renewcommand\arraystretch{1.3} \caption{List of experimental parameters.
Here $\alpha$ is the fiber loss coefficient (dB/km); $\eta_{d}$ is the total efficiency of Charlie's detection side; $Y_{0}$ is the dark count rate of Charlie's detectors; $e_{d}^{Z}$ and $e_{d}^{X}$ are the misalignment-error probabilities in the Z and X bases, respectively; $f$ is the inefficiency of error correction; $\epsilon$ is the failure probability.}
\setlength{\tabcolsep}{2.0mm}
\begin{tabular}{ccccccc}
\hline\hline
$\alpha$ &$\eta_{d}$&$Y_{0}$&$e_{d}^{Z}$ &$ e_{d}^{X}$&$f$&$\epsilon$ \\
\hline
0.18 dB/km&$60\%$&$4\times10^{-8}$&0.1\%& 1\%&1.16& $10^{-10}$ \\
\hline \hline
\label{Parameter}
\end{tabular}
\label{table-1}
\end{table}
\begin{table}[h]
\renewcommand\arraystretch{1.5}
\caption{The optimal parameters for the corresponding transmission distances. $\mu$ is the intensity of the signal state; $\nu$ and $\omega$ are the intensities of the decoy states; $P_{\mu}$, $P_{\nu}$ and $P_{\omega}$ are the probabilities of choosing the different intensities.}
\setlength{\tabcolsep}{2.25mm}
\begin{tabular}{ccccccc}
\hline \hline
Distance (km) & $\mu$ & $\nu$ & $\omega$ &$P_{\mu}$ & $P_{\nu}$ & $P_{\omega}$ \\
\hline
120&0.5866 &0.3323&0.0767& 0.4151&0.1337&0.4305\\
150 & 0.3851 &0.3707& 0.0763& 0.1763&0.1898&0.6124\\
\hline \hline
\end{tabular}
\label{table-2}
\end{table}
\section{Summary and outlook}
In conclusion, we have performed an MDI-QKD experiment using the state-of-the-art double-scanning method, drastically reducing the finite-size effect. With a QKD system operating at a repetition rate of 50 MHz and running for 5 minutes, we obtain secret key rates of 43.54 bps and 0.059 bps at 120 km and 150 km, respectively, which is beyond the reach of all former methods. If the repetition rate of the QKD system is increased to the GHz level \cite{GHZ}, the time consumption will be further reduced to a few seconds, which is very promising for one-time-pad implementations of quantum communication. Therefore, our present work represents a further step towards the practical implementation of MDI-QKD.
\\
\\
\noindent
\textbf{Funding.} This project was supported by the National Key Research and Development Program of China (Grant Nos. 2018YFA0306400, 2017YFA0304100), the National Natural Science Foundation of China (Grant Nos. 12074194, 11774180, U19A2075), the Leading-edge Technology Program of Jiangsu Natural Science Foundation (Grant No. BK20192001), and NUPTSF (Grant No. NY220122).
\\
\\
\textbf{Disclosures.} The authors declare no conflicts of interest.
\\
\\
$\dagger$ These two authors contributed equally to this work.
\end{document}
\begin{document} \maketitle \begin{abstract} We consider the problem of minimising the $n^{th}-$eigenvalue of the Robin Laplacian in $\mathbb{R}^{N}$. Although for $n=1,2$ and a positive boundary parameter $\alpha$ it is known that the minimisers do not depend on $\alpha$, we demonstrate numerically that this will not always be the case and illustrate how the optimiser will depend on $\alpha$. We derive a Wolf-Keller type result for this problem and show that optimal eigenvalues grow at most with $n^{1/N}$, which is in sharp contrast with the Weyl asymptotics for a fixed domain. We further show that the gap between consecutive eigenvalues does go to zero as $n$ goes to infinity. Numerical results then support the conjecture that for each $n$ there exists a positive value of $\alpha_{n}$ such that the $n^{\rm th}$ eigenvalue is minimised by $n$ disks for all $0<\alpha<\alpha_{n}$ and, combined with analytic estimates, that this value is expected to grow with $n^{1/N}$. \end{abstract} \vspace*{2cm} ($^1$) Group of Mathematical Physics of the University of Lisbon, Complexo Interdisciplinar, Av.~Prof.~Gama Pinto~2, P-1649-003 Lisboa, Portugal, Tel.: +351--217904857, Fax: +351--217954288 {\tt e-mail address}: [email protected], [email protected]\vspace*{5mm} ($^2$) Department of Mathematics, Faculty of Human Kinetics of the Technical University of Lisbon {\rm and} Group of Mathematical Physics of the University of Lisbon, Complexo Interdisciplinar, Av.~Prof.~Gama Pinto~2, P-1649-003 Lisboa, Portugal, Tel.: +351--217904852, Fax: +351--217954288 {\tt e-mail address}: [email protected]\vspace*{5mm} ($^3$) Department of Mathematics, Universidade Lus\'{o}fona de Humanidades e Tecnologias, Av. do Campo Grande, 376, 1749-024 Lisboa, Portugal \section{Introduction} Optimisation of eigenvalues of the Laplace operator is a classic topic in spectral theory, going back to the work of Rayleigh at the end of the nineteenth century. The first result of this type is the well-known Rayleigh--Faber--Krahn inequality which states that among all Euclidean domains of fixed volume the ball minimises the first Dirichlet eigenvalue~\cite{rayl,fab,krahn1,krahn2}. As a more or less direct consequence of this result, it is possible to obtain that the second Dirichlet eigenvalue is minimised by two balls of equal volume. The case of other boundary conditions has also received some attention and it has been known since the $1950$'s that the ball is a maximiser for the first nontrivial Neumann eigenvalue~\cite{sz,wein} and, more recently, that it also minimises the first Robin eigenvalue with a positive boundary parameter~\cite{boss,buda,dane}, while two equal balls are again the minimiser for the second eigenvalue~\cite{kenn1}. In spite of this, for higher eigenvalues very little is known and even proving existence of minimisers in the Dirichlet case poses great difficulties. Bucur and Henrot showed the existence of a minimiser for the third eigenvalue among quasi-open sets in $2000$~\cite{buhe}, and it is only very recently that Bucur~\cite{bucu} and Mazzoleni and Pratelli~\cite{mapr} proved, independently, the existence of bounded minimisers for all eigenvalues in the context of quasi-open sets. Moreover, in the planar Dirichlet case and apart from the third and fourth eigenvalues, minimisers, are expected to be neither balls nor unions of balls, as is already known up to the fifteenth eigenvalue~\cite{anfr}. 
In fact, it is not to be expected that the boundaries of optimisers can explicitly be described in terms of known functions either, which means that the type of result that one may look for should be of a different nature from the Rayleigh--Faber--Krahn type. For instance, and always assuming existence of minimisers, there are several qualitative questions which may be raised with respect to this and related problems and which include multiplicity issues, symmetry and connectedness properties, to name just a few. However, even such results might not hold in full generality, as some recent numerical results seem to indicate~\cite{anfr}. All of the above issues make this field of research suitable ground for the combination of rigorous analytic methods with accurate numerical calculations in order to explore the properties of such problems. Indeed, and although numerical analysis of eigenvalue problems goes back many years, within the last decade there have been several extensive numerical studies based on new methods which allow us to obtain insight into the behaviour of such problems. To mention just two of the most recent related to eigenvalue optimisation, see~\cite{oude,anfr} for the optimisation of Dirichlet and Dirichlet and Neumann eigenvalues, respectively. The purpose of this paper is to consider the optimisation of higher eigenvalues $\lambda_{n}$ of the $N$-dimensional Robin eigenvalue problem and analyse some of its properties, combining both the approaches mentioned above. From a theoretical perspective, we begin by establishing a Wolf--Keller type result, which is needed in the numerical optimisation procedure in order to check for non--connected optimal domains. We then consider the asymptotic behaviour of both optimal values of $\lambda_{n}$, which we shall call $\lambda_{n}^{*}$, and the difference between $\lambda_{n+1}^{*}$ and $\lambda_{n}^{*}$. The main result here is the fact that $\lambda_{n}^{*}$ grows at most with $n^{1/N}$ as $n$ goes to infinity, and that the difference between optima does go to zero in this limit. Note that this asymptotic behaviour for optimal eigenvalues is in sharp contrast with Weyl's law for the behaviour of the high frequencies for a fixed domain $\Omega$, namely, \[ \lambda_{n}(\Omega) = \frac{\displaystyle 4\pi^{2}}{\displaystyle \left(\omega_{N} |\Omega|\right)^{2/N}}n^{2/N} + {\rm o}(n^{2/N}) \;\;\mbox{ as } n \to \infty, \] where $\omega_N$ denotes the volume of the ball of unit radius in $\mathbb{R}^N$ and $|\Omega|$ is the $N$-dimensional volume of $\Omega$. Finally, we prove some results regarding the behaviour of $\lambda_{n}(t\Omega,\alpha)$ as a function of the parameters $t$ and $\alpha$. Although intuitively obvious and part of the folklore, their proofs do not seem entirely trivial and it is difficult to source them precisely in the literature. Hence we have included proofs. At the numerical level, our results are obtained using a meshless method known as the Method of Fundamental Solutions. Since it is, as far as we know, the first time that such a method has been applied to the Robin problem, we begin by describing it and stating some basic properties. We then present the results of the optimisation procedure. This allows us to conclude numerically, as was observed in \cite[Sec.~3]{kenn2}, that the optimiser will depend on the value of $\alpha$ for $n$ larger than two, and provides support for the conjecture that for small positive values of $\alpha$ the $n^{\rm th}$ eigenvalue is minimised by $n$ identical balls. 
In fact, and assuming that the domain comprising $n$ equal balls stops being a minimiser when its $n^{\rm th}$ eigenvalue becomes larger than that of the set formed by $n-3$ small balls and a larger ball, we show that the value of $\alpha$ at which this happens is increasing with $n$ and grows to infinity. The paper is divided into three parts. In the first we present the analytic results described above, together with the corresponding proofs. This is followed by a description of the numerical method used, and finally we present the numerical results obtained. \section{Theoretical results} We write the eigenvalue problem as \begin{equation} \label{eq:robin} \begin{aligned} -\Delta u&= \lambda u &\quad &\text{in $\Omega$},\\ \frac{\partial u}{\partial\nu}+\alpha u&=0&& \text{on $\partial\Omega$} \end{aligned} \end{equation} where $\nu$ is the outer unit normal to $\Omega$ and the boundary parameter $\alpha>0$ is a constant. We will assume throughout this section that $\Omega \subset \mathbb{R}^N$ is a bounded, open set with Lipschitz boundary, not necessarily connected, with $N$-dimensional volume $|\Omega|$ equal to some fixed constant $V>0$. We will also use $\sigma$ to denote surface measure. As is standard, we will always interpret the problem \eqref{eq:robin} in the weak sense, so that an eigenvalue $\lambda\in\mathbb{R}$ and associated eigenfunction $u \in H^1(\Omega)$ solve the equation \begin{equation} \label{eq:weak} \int_\Omega \nabla u\cdot\nabla v\,dx + \int_{\partial\Omega} \alpha uv\,d\sigma = \lambda\int_\Omega uv\,dx \end{equation} for all $v\in H^1(\Omega)$. It is well known that for each $\Omega \subset \mathbb{R}^N$ and $\alpha>0$, there is a discrete set of eigenvalues $\{\lambda_n(\Omega,\alpha)\}_{n\geq 1}$, all positive, ordered by increasing size, and repeated according to their respective multiplicities. For each $n\geq 1$, we are interested in the quantity \begin{equation} \label{eq:ninf} \lambda_n^*=\lambda_n^*(V,\alpha):=\inf\{\lambda_n(\Omega,\alpha):|\Omega|=V\} \end{equation} for each fixed $V>0$ and $\alpha\geq 0$, where we assume $\Omega$ belongs to the class of all bounded, open, Lipschitz subsets of $\mathbb{R}^N$, as well as the properties of any associated minimising domain(s) $\Omega^* = \Omega^* (n,V,\alpha,N)$. As for the Dirichlet problem, when $n=1$, the unique minimising domain is a ball \cite{buda,dane}, while for $n=2$ it is the union of two equal balls \cite{kenn1,kenn2}. Unlike in the Dirichlet case, no existence result is known for any $n\geq 3$; in $\mathbb{R}^2$, it was shown in \cite{kenn2} that, for each $n\geq 3$, there cannot be a minimiser independent of $\alpha>0$. As the dependence of $\lambda_n^*(V,\alpha)$ on $\alpha \geq 0$ is one of the principal themes of this paper, we note here the following basic properties of this function. The proof will be deferred until Sec.~\ref{sec:append}. \begin{proposition} \label{prop:starvsalpha} Let $V>0$ and $n \geq 1$ be fixed and for each $\alpha\geq 0$ let $\lambda_n^*(V,\alpha)$ be given by \eqref{eq:ninf}. Then as a function of $\alpha \in [0,\infty)$, $\lambda_n^*(V,\alpha)$ is continuous and strictly monotonically increasing, with $\lambda_n^*(V,0) = 0$ and $\lambda_n^*(V,\alpha) < \lambda_n^*(V,\infty)$, the infimal value for the corresponding Dirichlet problem. 
\end{proposition} \begin{remark} \label{rem:existence} Throughout this section, we will tend to assume for simplicity, especially in the proofs, that \eqref{eq:ninf} does in fact possess a minimiser $\Omega^*$, for each $n\geq 1$ and $\alpha>0$. This assumption can easily be removed by considering an arbitrary sequence of domains $\Omega_k^*$ with $\lambda_n(\Omega_k^*) \to \lambda_n^*$. As this type of argument is quite standard, we omit the details. Note that without loss of generality each domain $\Omega_k^*$ may be assumed to have at most $n$ connected components, as more could only increase $\lambda_n$ (see \cite[Remark~3.2(ii)]{kenn2}). In fact, such a sequence may be assumed to be connected. This is a consequence of results on the stability of solutions to \eqref{eq:robin} with respect to domain perturbation: for any domain $\Omega$ with $n$ connected components, any $\alpha>0$ and any $\varepsilon>0$, by \cite[Corollary~3.7]{danc}, there exists another ``dumbbell"-type connected domain $\Omega'$, which has narrow passages joining the disconnected components of $\Omega$, such that $|\Omega'|=|\Omega|$ and $\lambda_n(\Omega',\alpha) \leq \lambda_n(\Omega,\alpha)+\varepsilon$ (cf.~also \cite[Example~2.2]{kenn1}). \end{remark} Our point of departure is the way the Robin problem behaves under homothetic scaling of the domain. That is, if we denote by $t\Omega$ the rescaled domain $\{tx \in \mathbb{R}^N: x \in \Omega\}$, then, by a simple change of variables, \eqref{eq:robin} is equivalent to \begin{equation} \label{eq:scaled} \begin{aligned} -\Delta u&= \frac{\lambda}{t^2} u &\quad &\text{in $t\Omega$},\\ \frac{\partial u}{\partial\nu}+\frac{\alpha}{t} u&=0&& \text{on $\partial (t\Omega)$}, \end{aligned} \end{equation} that is, $\lambda_n(\Omega,\alpha)=t^2\lambda_n(t\Omega,\alpha/t)$, for all (bounded, Lipschitz) $\Omega\subset\mathbb{R}^N$, $n \geq 1$, $\alpha > 0$ and $t>0$. (This of course remains valid considering the weak form \eqref{eq:weak}.) We highlight the change in the boundary parameter. This means that, unlike in the Dirichlet and Neumann cases, $|t\Omega|^\frac{2}{N} \lambda_n(t\Omega,\alpha)$ is \emph{not} invariant with respect to changes in $t>0$; rather, the invariant quantity is \begin{equation} \label{eq:invariant} |t\Omega|^{2/N} \lambda_n(t\Omega,\alpha/t). \end{equation} This will have a profound effect on the nature of the minimising value $\lambda_n^*$ and any corresponding minimising domains. We observe that $\lambda_n^*(V,\alpha)$ as a function of $\alpha\in (0,\infty)$ may be reformulated as a function taking the form $\lambda_n^*(tV,\alpha)$ for $t \in (0,\infty)$ and $\alpha>0$ fixed, arbitrary. The scaling relation gives us immediately that \begin{equation} \label{eq:astscaling} t^2 \lambda_j^*(V,\frac{\alpha}{t}) = \lambda_j^*(t^{-N}V,\alpha) \end{equation} for all $j\geq 1$, all $V>0$, all $\alpha>0$ and all $t>0$, since \eqref{eq:scaled} holds for every admissible domain $\Omega \subset \mathbb{R}^N$, so that the same must still be true of their infima. Proposition~\ref{prop:starvsalpha} may be reformulated as the following result, which will be useful to us in the sequel. The proof will again be left until Sec.~\ref{sec:append}. \begin{proposition} \label{prop:starvst} Fix $\alpha>0$ and $n \geq 1$. As a function of $V \in (0,\infty)$, $\lambda_n^*(V,\alpha)$ is continuous and strictly monotonically decreasing, with $\lambda_n^*(V,\alpha) \to \infty$ as $V \to 0$ and $\lambda_n^*(V,\alpha) \to 0$ as $V \to \infty$. 
\end{proposition} \subsection{A Wolf--Keller type result} An immediate consequence of \eqref{eq:invariant} is that both the statement and proof of a number of results that are elementary in the Dirichlet case now become more involved. Of particular relevance for us is the result of Wolf--Keller \cite[Theorem~8.1]{wolf}, that any disconnected domain minimising $\lambda_n$ as in \eqref{eq:ninf} must have as its connected components minimisers of lower numbered eigenvalues. Here, \eqref{eq:invariant} obviously means that we cannot hope to be quite as explicit in our description of any potential minimiser. \begin{theorem} \label{th:wkrobin} Given $V>0$ and $\alpha>0$, suppose that there exists a disconnected domain $\Omega^*$ such that $|\Omega^*| = V$ and $\lambda_n^*(V,\alpha) =\lambda_n (\Omega^*,\alpha)$. For every $1 \leq k \leq n-1$, there will exist a unique pair of numbers $\xi_1, \xi_2 > 1$ (depending on $k,V,\alpha$ and $N$) with $\xi_1^{-N}+\xi_2^{-N}=1$ which solve the problem \begin{equation} \label{eq:pair} \begin{split} &\min\bigl\{ \max\{t_1^2\lambda_k^*(V,\frac{\alpha}{t_1}),\,t_2^2\lambda_{n-k}^*(V,\frac{\alpha}{t_2})\}: t_1,t_2>1,\,t_1^{-N}+t_2^{-N}=1\bigr\}\\ =&\min\bigl\{ \max\{\lambda_k^*(t_1^{-N} V,\alpha),\,\lambda_{n-k}^*(t_2^{-N}V,\alpha)\}: t_1,t_2>1,\,t_1^{-N}+t_2^{-N}=1\bigr\}. \end{split} \end{equation} Then we may write \begin{equation} \label{eq:minvalue} (\lambda_n^*(V,\alpha))^\frac{N}{2} = \min_{1\leq k\leq n-1} \left\{(\lambda_k^*(V,\frac{\alpha}{\xi_1}))^\frac{N}{2} +(\lambda_{n-k}^*(V,\frac{\alpha}{\xi_2}))^\frac{N}{2}\right\}. \end{equation} Supposing this minimum to be achieved at some $j$ between $1$ and $n-1$, denoting by $\Omega_1$ and $\Omega_2$ the respective minimisers of $\lambda_j^*(V,\frac{\alpha}{\xi_1})$ and $\lambda_{n-j}^*(V,\frac{\alpha}{\xi_2})$, for this pair $\xi_1(j), \xi_2(j)$ we have \begin{equation} \label{eq:minform} \Omega^* = \frac{1}{\xi_1}\Omega_1 \cup \frac{1}{\xi_2}\Omega_2. \end{equation} \end{theorem} Because of \eqref{eq:invariant}, we have to define the scaling constants $\xi_1$ and $\xi_2$ in a somewhat artificial fashion, in terms of a minimax problem (we emphasise that $\xi_1,\xi_2$ will vary with $k$), and cannot link them directly to the optimal values $\lambda_j^*(V,\alpha)$ as would be the direct equivalent of \cite[Theorem~8.1]{wolf}. Otherwise, the proof proceeds essentially as in \cite{wolf}. \begin{proof} We start by proving for each fixed $k$ between $1$ and $n-1$ the existence of the pair $\xi_1,\xi_2>1$ as claimed in the theorem. First observe that the equivalence of the two minimax problems in \eqref{eq:pair} follows immediately from \eqref{eq:astscaling}. Now by Proposition~\ref{prop:starvst}, for $V>0$, $\alpha>0$ and $k\geq 1$ all fixed, as a function of $t_1 \in [1,\infty)$, $\lambda_k^*(t_1^{-N}V,\alpha)$ is continuous and strictly monotonically increasing from $\lambda_k^*(V,\alpha)$ at $t_1=1$ to $\infty$ as $t_1 \to \infty$. Moreover, since $t_2$ is determined by $t_1$ via the relation $t_2 = (1- t_1^{-N})^{-1/N}$, we may also consider $\lambda_{n-k}^*(t_2^{-N}V,\alpha)$ as a continuous and strictly monotonically decreasing function of $t_1 \in (1,\infty]$, approaching $\infty$ as $t_1 \to 1$ and $\lambda_{n-k}^*(V,\alpha)$ as $t_1 \to \infty$. 
That is,
\begin{displaymath}
\lambda_k^*(t_1^{-N} V,\alpha) \left\{
\begin{aligned}
& < \lambda_{n-k}^*\left((1- t_1^{-N})V,\alpha\right) & \qquad &\text{if $t_1 \approx 1$}\\
& > \lambda_{n-k}^*\left((1- t_1^{-N})V,\alpha\right) & &\text{if $t_1$ is large enough,}
\end{aligned} \right.
\end{displaymath}
with the left hand side strictly increasing and the right hand side strictly decreasing in $t_1$. It follows that there exists a unique $t_1 \in (1,\infty)$ such that the two are equal. At this value, which we label as $t_1=:\xi_1$, $t_2 =(1- \xi_1^{-N})^{-1/N}=:\xi_2$, the maximum of the two is minimised. Let us now suppose that the minimiser $\Omega^*$ of $\lambda_n^*(V,\alpha)$ is a disjoint union $\Omega^* = U_1 \cup U_2$. Since the eigenvalues of $\Omega^*$ are found by collecting and ordering the respective eigenvalues of $U_1$ and $U_2$, there exists $1\leq k \leq n-1$ such that $\lambda_n(\Omega^*,\alpha) = \lambda_k(U_1,\alpha)$. Indeed, if we had $k=n$, then $U_2$ would make no contribution, so rescaling $U_1$ would strictly decrease $\lambda_n$ by Lemma~\ref{lemma:tcontinuity}, contradicting minimality. A similar argument shows that $\lambda_n(\Omega^*,\alpha)=\lambda_{n-k}(U_2,\alpha)$, since otherwise, by expanding $U_1$ and contracting $U_2$, by Lemma~\ref{lemma:tcontinuity} we could likewise reduce $\lambda_n(\Omega^*,\alpha)$. It is also clear that $\lambda_k(U_1,\alpha)=\lambda_k^*(|U_1|,\alpha)$ and $\lambda_{n-k}(U_2,\alpha) = \lambda_{n-k}^*(|U_2|,\alpha)$, since otherwise we could replace $U_1$ and/or $U_2$ with their respective minimisers and repeat the rescaling argument to reduce $\lambda_n(\Omega^*,\alpha)$. Thus we have shown
\begin{displaymath}
\lambda_n^*(V,\alpha)=\lambda_n(\Omega^*,\alpha)=\lambda_k(U_1,\alpha)=\lambda_k^*(|U_1|,\alpha)
=\lambda_{n-k}(U_2,\alpha)=\lambda_{n-k}^*(|U_2|,\alpha).
\end{displaymath}
We now rescale $U_1$ and $U_2$. Let $s_1,s_2>0$ be such that $|s_1 U_1|=|s_2 U_2|=V$. Since $V=|U_1|+|U_2|$, we have $s_1,s_2>1$ and $s_1^{-N}+s_2^{-N}=1$. Now by \eqref{eq:astscaling},
\begin{displaymath}
(\lambda_k^*(V,\frac{\alpha}{s_1}))^\frac{N}{2}=(\lambda_k(s_1 U_1,\frac{\alpha}{s_1}))^\frac{N}{2}
=(s_1^{-2}\lambda_k(U_1,\alpha))^\frac{N}{2}=s_1^{-N}(\lambda_n^*(V,\alpha))^\frac{N}{2},
\end{displaymath}
with an analogous statement for $\lambda_{n-k}^*$ and $s_2$. Adding the two, and using that $s_1^{-N}+s_2^{-N}=1$ from the volume constraint,
\begin{displaymath}
(\lambda_k^*(V,\frac{\alpha}{s_1}))^\frac{N}{2} + (\lambda_{n-k}^*(V,\frac{\alpha}{s_2}))^\frac{N}{2}
=(\lambda_n^*(V,\alpha))^\frac{N}{2}.
\end{displaymath}
To show that $s_1=\xi_1$ and $s_2=\xi_2$, we simply note that, given this $k$, the minimising pair $\xi_1,\xi_2$ is the \emph{only} pair of real numbers for which $\xi_1,\xi_2>1$, $\xi_1^{-N}+\xi_2^{-N}=1$ and for which there is equality ${\xi_1}^2\lambda_k^*(V,\frac{\alpha}{\xi_1}) = {\xi_2}^2\lambda_{n-k}^*(V,\frac{\alpha}{\xi_2})$. As $s_1$ and $s_2$ satisfy exactly the same properties, $s_1=\xi_1$ and $s_2=\xi_2$. Thus we have shown that $\Omega^*$ has the form \eqref{eq:minform}, and that \eqref{eq:minvalue} holds for \emph{some} $1 \leq k \leq n-1$. It remains to prove that $\lambda_n^*$ is attained by the minimum over all such $k$.
To do so, we choose $1\leq j \leq n-1$ arbitrary, label the solution to \eqref{eq:pair} as $j_1,j_2>1$, and set $\Omega_1^j$ to be the domain of volume $V$ such that
\begin{displaymath}
\lambda_j(\Omega_1^j,\frac{\alpha}{j_1}) = \lambda_j^*(V,\frac{\alpha}{j_1}),
\end{displaymath}
and analogously for $\Omega_2^j$ and $\lambda_{n-j}^*(V,\alpha/j_2)$. Now set
\begin{displaymath}
\Omega_j = \frac{1}{j_1}\Omega_1^j \cup \frac{1}{j_2}\Omega_2^j.
\end{displaymath}
It is easy to check that $|\Omega_j|=V$ and that, by choice of $j_1$ and $j_2$,
\begin{displaymath}
\lambda_j\left(\frac{1}{j_1}\Omega_1^j,\alpha\right)=j_1^2\lambda_j\left(\Omega_1^j,\frac{\alpha}{j_1}\right)
=j_2^2\lambda_{n-j}\left(\Omega_2^j,\frac{\alpha}{j_2}\right)=\lambda_{n-j}\left(\frac{1}{j_2}\Omega_2^j,\alpha\right),
\end{displaymath}
meaning that $\lambda_n(\Omega_j,\alpha)$ must be equal to all the above quantities. Moreover, using the minimising properties of $\Omega_1^j$ and $\Omega_2^j$, and that $j_1^{-N}+j_2^{-N}=1$, we have
\begin{displaymath}
\left[\lambda_n^*(V,\alpha)\right]^\frac{N}{2} \leq \left[\lambda_n(\Omega_j,\alpha)\right]^\frac{N}{2}=
\left[\lambda_j^*\left(V,\frac{\alpha}{j_1}\right)\right]^\frac{N}{2}
+\left[\lambda_{n-j}^*\left(V,\frac{\alpha}{j_2}\right)\right]^\frac{N}{2},
\end{displaymath}
proving \eqref{eq:minvalue}.
\end{proof}
\subsection{Asymptotic behaviour of the optimal values}
Another consequence of \eqref{eq:invariant} is that any eigenvalue $\lambda_n (t\Omega,\alpha)$ grows more slowly than $\lambda_n(\Omega)^{2/N}$ as $t \to 0$. It is thus intuitively reasonable that we might expect any optimising domain to have a greater number of connected components than its Dirichlet counterpart. Indeed, recalling the variational characterisation of $\lambda_n$, it is not surprising that increasing the size of the boundary in such a fashion carries a fundamentally smaller penalty for the eigenvalues. As was noted in \cite[Sec.~3]{kenn2}, the domain given by the disjoint union of $n$ equal balls of volume $V/n$, call it $B_n$, is likely to play a prominent r\^ole in the study of $\lambda_n^*$ for sufficiently small positive values of $\alpha$. Here we go further and observe that, simply by estimating $\lambda_1(B_n,\alpha)$, we can already obtain quite a strong estimate on the behaviour of $\lambda_n^*$ with respect to $n$, for any $\alpha>0$. In fact, the following theorem, which again may be seen as an immediate consequence of \eqref{eq:invariant}, shows that we have $\lambda_n^*= {\rm o}(n^{1/N+\varepsilon})$ as $n \to \infty$ (for any $V,\alpha,\varepsilon>0$), a fundamental divergence from the Weyl asymptotics $\lambda_n(\Omega,\alpha) = {\rm O}(n^{2/N})$ for any fixed domain $\Omega \subset \mathbb{R}^N$. It is unclear whether this is optimal.
\begin{theorem}
\label{th:nballs}
Given $V>0$ and $n \geq 1$, let $B_n$ denote the domain of volume $V$ consisting of $n$ equal balls of radius $r= (V/n\omega_N)^{1/N}$. Then, for every $\alpha>0$,
\begin{equation}
\label{eq:nballs}
\lambda_n^*(V,\alpha) \leq \lambda_n(B_n,\alpha) \leq N\alpha \left(\frac{n\omega_N}{V}\right)^\frac{1}{N}.
\end{equation}
\end{theorem}
\begin{proof}
Since $\lambda_n(B_n,\alpha)=\lambda_1(B_n,\alpha)$, it certainly suffices to estimate the latter, that is, to estimate the first eigenvalue of a ball of volume $V/n$ and radius $r = (V/n\omega_N)^{1/N}$. Using concavity of $\lambda_1$ with respect to $\alpha>0$ (Lemma~\ref{lemma:continuity}), we estimate this from above by its tangent line at $\alpha=0$ (see Remark~\ref{rem:neumann}).
Since a ball of radius $r$ has volume $r^N \omega_N$ and surface measure $Nr^{N-1}\omega_N$, \eqref{eq:alphaderivative} at $\alpha = 0$ gives \begin{displaymath} \lambda_1(B_n,\alpha)\leq \lambda_1'(B_n,0)\alpha = \frac{\sigma(\partial B_n)}{|B_n|}\alpha = N r^{-1} \alpha. \end{displaymath} Substituting the value $r = (V/n\omega_N)^{1/N}$ yields \eqref{eq:nballs}. \end{proof} \subsection{The optimal gap} Adapting an argument of Colbois and El~Soufi \cite{colbois} for the Dirichlet case, we may also estimate the dimensionally appropriate difference $(\lambda_{n+1}^*)^{N/2}-(\lambda_n^*)^{N/2}$ for each positive $V$ and $\alpha$, which we do in Theorem~\ref{th:gapbound}. Such an estimate serves two purposes, giving both a practical means to test the plausibility of numerical estimates, and a theoretical bound on eigenvalue gaps. In particular, this complements Theorem~\ref{th:nballs} by showing that the optimal gap tends to $0$ as $n \to \infty$, albeit not necessarily uniformly in $\alpha>0$ (see Corollary~\ref{cor:growth}). The idea is to take the optimising domain $\Omega^*$ for $\lambda_n^*$, add to it a ball $B$ whose first eigenvalue also equals $\lambda_n^*$ and then rescale to obtain a ``test domain" for $\lambda_{n+1}^*$. Although the scaling issue \eqref{eq:invariant} makes the new behaviour possible, it also causes obvious complications, and so we cannot obtain as tight a result as in \cite{colbois}. Instead, we will give two slightly different estimates. The first, \eqref{eq:comp}, is tighter but more abstruse, and will be used for computational verification, the second one \eqref{eq:explicit} being more explicit, although only in the first case does the bound converge to $0$ with $n$ (see Remark~\ref{rem:gapbound} and Corollary~\ref{cor:growth}). \begin{remark} \label{rem:dirichlet} The result that the optimal gap tends to $0$ as $n\to\infty$ is perhaps {\it{a priori}} surprising, and raises the question of whether such a result might also be true in the Dirichlet case. Unfortunately, our method tells us nothing about the latter, as it rests entirely on the scaling property \eqref{eq:scaled}. For each fixed $n\geq 1$, our bound \eqref{eq:comp} is of the form $(\lambda_{n+1}^*)^{N/2}-(\lambda_n^*)^{N/2} \leq C(\lambda_1(B_1,c_n\alpha))^{N/2}$ for appropriate constants $C=C(N,V)$ and $c_n=c(N,V,n)$. The idea is then to show (using \eqref{eq:scaled}) that $c_n \to 0$ as $n \to \infty$. But fixing $n\geq 1$ and letting $\alpha \to \infty$, we recover the bound from \cite{colbois} of the form $C(N,V) \lambda_1^D(B_1)$, uniform in $n \geq 1$. There is no evidence to suggest the latter result could be improved. \end{remark} The bound \eqref{eq:comp} relies on the following auxiliary lemma, needed for a good estimate of the constant $c_n$ mentioned in Remark~\ref{rem:dirichlet}. \begin{lemma} \label{lemma:ball} Fix $V>0$ and $\alpha>0$, let $\lambda_n^*=\lambda_n^*(V,\alpha)$ be as in \eqref{eq:ninf}, and denote by $B=B(0,r)$ the ball centred at $0$ of radius $r>0$. There exists a unique value of $r>0$ such that \begin{equation} \label{eq:ball} \lambda_1(B,\Bigl(\frac{V}{V+|B|}\Bigr)^{\frac{1}{N}} \alpha)=\lambda_n^*(V,\alpha). 
\end{equation} The corresponding ball $B$ satisfies \begin{equation} \label{eq:ballsize} |B|\leq \min\left\{V,\, \omega_N \left(\frac{j_{\frac{N}{2}-1,1}}{\sqrt{\lambda_n^*}}\right)^N \right\}, \end{equation} where $\omega_N$ is the $N$-dimensional volume of the unit ball in $\mathbb{R}^N$ and $j_{\frac{N}{2}-1,1}$ the first zero of the Bessel function $J_{\frac{N}{2}-1}$ of the first kind. \end{lemma} \begin{proof} Consider the left hand side of \eqref{eq:ball} as a function of $|B| \in (0,\infty)$. An increase in $|B|$ both increases the volume of the domain and decreases the Robin parameter. By Lemma~\ref{lemma:tcontinuity}, the combined effect must be to decrease $\lambda_1$ continuously and strictly monotonically, the latter implying there can be at most one value of $|B|$ giving equality in \eqref{eq:ball}. Now note that as $|B|\to 0$, since $V/(V+|B|)$ is bounded from below away from zero, $\lambda_1(B,(V/(V+|B|))^{(1/N)} \alpha) \to \infty$, while if $|B|=V$, \begin{displaymath} \lambda_1(B, (1/2)^\frac{1}{N}\alpha)< \lambda_1 (B,\alpha) \leq \lambda_1(\Omega^*,\alpha) \leq \lambda_n(\Omega^*,\alpha)=\lambda_n^*(V,\alpha), \end{displaymath} where $\Omega^*$ is the minimising domain for \eqref{eq:ninf}, and the second inequality follows from the Rayleigh--Faber--Krahn inequality for Robin problems \cite[Theorem~1.1]{buda}. Hence there must be a value of $|B|$ in $(0,V)$ for which there is equality in \eqref{eq:ball}. To show that the other bound in \eqref{eq:ballsize} also holds, we consider $B_r$, the ball of radius $r=j_{\frac{N}{2}-1,1}/\sqrt{\lambda_n^*}$, where $j_{p,q}$ is the $q$th zero of the Bessel function $J_p$ of the first kind of order $p$. Then we have \begin{displaymath} \lambda_1(B_r,\Bigl(\frac{V}{V+|B_r|}\Bigr)^{\frac{1}{N}} \alpha) < \lambda_1 (B_r,\alpha) < \lambda_1^D(B_r) =\lambda_n^*, \end{displaymath} where $\lambda_1^D(B_r)$ is the first Dirichlet eigenvalue of $B_r$ (cf.~Lemma~\ref{lemma:continuity}), and the last equality follows from our choice of $r$. This implies our desired $B$ must have radius less than $r$, giving us \eqref{eq:ballsize}. \end{proof} \begin{theorem} \label{th:gapbound} Fix $V>0$ and $\alpha>0$ and let $\lambda_n^*=\lambda_n^*(V,\alpha)$ be as in \eqref{eq:ninf}. Let $B^* = B^*(V,n,\alpha)$ be the ball satisfying the conclusions of Lemma~\ref{lemma:ball}, and let $B_1$ denote the ball of unit radius and $\omega_N$ its $N$-dimensional volume. Then \begin{equation} \label{eq:comp} \left(\lambda_{n+1}^* \right)^\frac{N}{2} - \left(\lambda_n^* \right)^\frac{N}{2} \leq \frac{\omega_N}{V} \left[\lambda_1\left(B_1, \left(\frac{V|B^*|}{V+|B^*|}\right)^\frac{1}{N} \omega_N^{-\frac{1}{N}}\alpha\right) \right]^\frac{N}{2} \end{equation} and, weaker but more explicitly, \begin{equation} \label{eq:explicit} \left(\lambda_{n+1}^* \right)^\frac{N}{2} - \left(\lambda_n^* \right)^\frac{N}{2} < \frac{\omega_N}{V} \left[\lambda_1\left(B_1, \left(\frac{V}{\omega_N}\right)^\frac{1}{N}\alpha\right)\right]^\frac{N}{2}. 
\end{equation} \end{theorem} Our proof will show, in a manner analogous to the Dirichlet case \cite{colbois}, that given \emph{any} domain $\widetilde\Omega$ we can find another domain $\widehat\Omega$ of the same volume $V$ such that \begin{displaymath} \left[\lambda_{n+1}(\widehat\Omega,\alpha)\right]^{\frac{N}{2}}- \left[\lambda_n(\widetilde\Omega,\alpha)\right]^{\frac{N}{2}} \leq \frac{\omega_N}{V}\left[\lambda_1(B_1,t\alpha)\right]^\frac{N}{2} \end{displaymath} for some appropriate $t\in (0,1)$ inversely proportional to $\lambda_n(\widetilde\Omega,\alpha)$. We omit the details. \begin{proof} Given $V,\alpha>0$, $B^*$ and $\lambda_n^*$ as in the statement of the theorem, we assume for simplicity that $\lambda_n^*$ is minimised by $\Omega^*$. Let $\widetilde \Omega:= \Omega^* \cup B^*$ (disjoint union), set \begin{displaymath} t^N:=\frac{V}{V+|B^*|} \end{displaymath} and consider the problem \eqref{eq:robin} on $\widetilde \Omega$, with boundary parameter \begin{displaymath} \hat\alpha:=\left\{ \begin{aligned} &\alpha \qquad\qquad &\text{on $\partial \Omega^*$}\\ &t\alpha \qquad\qquad &\text{on $\partial B^*$.} \end{aligned} \right. \end{displaymath} Then by choice of $B^*$ and definition of $t$, \begin{displaymath} \lambda_{n+1}(\widetilde\Omega,\hat\alpha)=\lambda_n(\Omega^*,\alpha)=\lambda_1(B^*,t\alpha). \end{displaymath} Let us now rescale $\widetilde\Omega$ to $t\widetilde\Omega$. If we set \begin{displaymath} \tilde\alpha:=\left\{ \begin{aligned} &\alpha/t \qquad\qquad &\text{on $\partial (t\Omega^*)$}\\ &\alpha \qquad\qquad &\text{on $\partial (t B^*)$,} \end{aligned} \right. \end{displaymath} we have \begin{displaymath} \lambda_{n+1}(t\widetilde\Omega,\widetilde\alpha)=\frac{1}{t^2}\lambda_{n+1}(\widetilde\Omega,\hat\alpha). \end{displaymath} Since $|t\widetilde\Omega|=V$, \begin{equation} \label{eq:loss} \lambda_{n+1}^* \leq \lambda_{n+1}(t\widetilde\Omega,\alpha)\leq \lambda_{n+1}(t\widetilde\Omega,\tilde\alpha), \end{equation} the latter inequality holding since $t<1$ so that $\alpha\leq \tilde\alpha=\tilde\alpha(x)$ at every point of $\partial(t\widetilde\Omega)$. Hence \begin{displaymath} \lambda_{n+1}^* \leq \frac{1}{t^2}\lambda_{n+1}(\widetilde\Omega,\hat\alpha) =\frac{1}{t^2}\lambda_n(\Omega^*)=\frac{1}{t^2}\lambda_1(B^*,t\alpha). \end{displaymath} Raising everything to the power of $N/2$, and subtracting $\left(\lambda_n^*\right)^{N/2} = \left(\lambda_1(B^*,t\alpha)\right)^{N/2}$, \begin{displaymath} \left(\lambda_{n+1}^* \right)^\frac{N}{2} - \left(\lambda_n^* \right)^\frac{N}{2} \leq \left(\frac{1}{t^N}-1\right) \left(\lambda_1(B^*,t\alpha)\right)^\frac{N}{2} \end{displaymath} Recalling the definition of $t$, we have $1/t^N-1 = |B^*|/V$. We will now rescale $B^*$, replacing it with a ball of unit radius. That is, letting $B^*$ have radius $r>0$, so that $|B^*|=r^N \omega_N$, and letting $B_1$ denote the ball of centre $0$ and radius $1$, \begin{displaymath} \left(\lambda_1(B^*,t\alpha)\right)^\frac{N}{2}=\left(r^{-2}\lambda_1(B_1,rt\alpha)\right)^\frac{N}{2} =r^{-N}\left(\lambda_1(B_1,rt\alpha)\right)^\frac{N}{2}. 
\end{displaymath}
Writing
\begin{equation}
\label{eq:rt}
rt = \left(\frac{V|B^*|}{V+|B^*|}\right)^\frac{1}{N} \omega_N^{-\frac{1}{N}}
\end{equation}
and $|B^*|/V = r^N \omega_N/V$, we obtain
\begin{displaymath}
\left(\lambda_{n+1}^* \right)^\frac{N}{2} - \left(\lambda_n^* \right)^\frac{N}{2} \leq \frac{r^N \omega_N}{V} r^{-N} \left[\lambda_1\left(B_1,\left(\frac{V|B^*|}{V+|B^*|}\right)^\frac{1}{N} \omega_N^{-\frac{1}{N}}\alpha\right) \right]^\frac{N}{2},
\end{displaymath}
which is \eqref{eq:comp}. To remove the explicit dependence on $B^*$ and thus obtain \eqref{eq:explicit}, we simplify the expression \eqref{eq:rt} using the crude bounds $|B^*|<V$ in the numerator and $|B^*|>0$ in the denominator, giving $rt<V^{1/N} \omega_N^{-1/N}$. Monotonicity of $\lambda_1$ with respect to the Robin parameter, Lemma~\ref{lemma:continuity}, now gives \eqref{eq:explicit}.
\end{proof}
\begin{remark}
\label{rem:gapbound}
(i) The bound \eqref{eq:explicit} is as explicit as possible for the Robin problem, $\lambda_1(B_1,\beta)$ being given as the square of the first positive solution of the transcendental equation
\begin{displaymath}
\frac{\beta}{\sqrt{\lambda_1}} = \frac{J_{\frac{N}{2}}(\sqrt \lambda_1)} {J_{\frac{N}{2}-1}(\sqrt \lambda_1)},
\end{displaymath}
where $J_p$ denotes the $p$th Bessel function of the first kind.

(ii) In the Dirichlet equivalent of Theorem~\ref{th:gapbound}, the bound is optimal exactly for those values of $n$ for which the optimising domain for $\lambda_{n+1}^*$ is obtained by adding an appropriate ball to the minimiser of $\lambda_n$, which is believed to be true only when $n=1$. In our case, since everything converges to its Dirichlet counterpart as $\alpha \to \infty$, \eqref{eq:comp} and \eqref{eq:explicit} are at least asymptotically sharp for $n=1$. Moreover, for every $n\geq 1$ the bounds converge to zero as $\alpha \to 0$ (as we would hope, given that $\lambda_n^* \to 0$ as $\alpha\to 0$ for every $n\geq 1$). However, even for $n=1$, the scaling issue makes it essentially impossible to obtain a precise bound for any particular $\alpha>0$. Taking $N=2$ for simplicity, we have $\lambda_2^*=2\lambda_1^*$ in the Dirichlet case, but in the Robin case $\lambda_2^*(\alpha) < 2\lambda_1^*(\alpha)$ for all $\alpha>0$, since, denoting by $B$ the ball that minimises $\lambda_1^*(\alpha)$, we have $\lambda_2^*(\alpha) = \lambda_1(2^{-1/2}B,\alpha) = 2\lambda_1(B,\alpha/\sqrt{2}) < 2\lambda_1(B,\alpha)=2\lambda_1^*(\alpha)$, where we have used the fact that the minimiser of $\lambda_2^*(\alpha)$ is the union of two equal balls \cite{kenn1}, the scaling relation \eqref{eq:scaled}, strict monotonicity of $\lambda_1(\Omega, \alpha)$ in $\alpha$ (Lemma~\ref{lemma:continuity}) and the Rayleigh--Faber--Krahn inequality for Robin problems \cite{buda}. Our bound in \eqref{eq:comp} is, in this case, also smaller than $2\lambda_1^*(\alpha)$ for all $\alpha>0$. However, constructing an estimate that involves rescaling domains in this fashion will always tend to introduce some error (as happens at \eqref{eq:loss} in our case), as we can never write down explicitly the change in the eigenvalues caused by introducing the scaling parameter $t$ into the boundary parameter.
\end{remark}
We now prove the aforementioned result, a complement to Theorem~\ref{th:nballs}, that the dimension-normalised gap $(\lambda_{n+1}^*)^{N/2}-(\lambda_n^*)^{N/2}$ approaches zero as $n$ goes to $\infty$ for every fixed positive value of $V$ and $\alpha$. The proof will combine \eqref{eq:comp} with \eqref{eq:ballsize}.
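For the planar case, the quantity $\lambda_1(B_1,\beta)$ appearing in \eqref{eq:comp} and \eqref{eq:explicit} is easy to evaluate numerically from the transcendental equation of Remark~\ref{rem:gapbound}(i). The following minimal Python sketch is an illustration only (it takes $N=2$ and an arbitrary sample value of $\beta$, and is not part of the optimisation code described later): it locates the first positive root on $(0,j_{0,1})$ and checks the result against the tangent-line bound $\lambda_1(B_1,\beta)\leq N\beta$ used in the proofs of Theorem~\ref{th:nballs} and Corollary~\ref{cor:growth}.
\begin{verbatim}
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.optimize import brentq

def robin_disk_lambda1(beta):
    """First Robin eigenvalue of the unit disk (N = 2): the square of the
    first positive root x of x*J_1(x) = beta*J_0(x), with 0 < x < j_{0,1}."""
    f = lambda x: x * jv(1, x) - beta * jv(0, x)
    j01 = jn_zeros(0, 1)[0]            # Dirichlet limit: sqrt(lambda_1^D(B_1))
    return brentq(f, 1e-12, j01 - 1e-12) ** 2

beta = 1.5                             # arbitrary sample value of the parameter
lam = robin_disk_lambda1(beta)
print(lam, "<=", 2 * beta)             # tangent-line bound: lambda_1 <= N*beta
\end{verbatim}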
In the process, we also obtain a growth estimate on $\lambda_n^*$, but this turns out to be weaker than the one found directly in Theorem~\ref{th:nballs}. We include the proof of the latter anyway, as both an alternative method and to illustrate the principle. \begin{corollary} \label{cor:growth} For $V,\alpha>0$ fixed, as $n\to\infty$ we have \begin{displaymath} \left[\lambda_{n+1}^*(V,\alpha)\right]^\frac{N}{2} - \left[\lambda_n^*(V,\alpha)\right]^\frac{N}{2} \to 0 \end{displaymath} and, for every $\varepsilon>0$, \begin{displaymath} \lambda_n^*(V,\alpha) = {\rm o}(n^{\frac{4}{3N}+\varepsilon}). \end{displaymath} \end{corollary} \begin{proof} Estimating $|B^*|$ from above by \eqref{eq:ballsize} and from below by $0$ in the bound \eqref{eq:comp} gives us \begin{displaymath} \left(\lambda_{n+1}^* \right)^\frac{N}{2} - \left(\lambda_n^* \right)^\frac{N}{2} < \frac{\omega_N}{V} \left[\lambda_1\left(B_1, \frac{j_{\frac{N}{2}-1,1}}{\sqrt{\lambda_n^*}}\alpha\right)\right]^\frac{N}{2}. \end{displaymath} Now, noting that $\lambda_1(B_1,\,\cdot\,)$ is concave in its second argument, we estimate it from above by the corresponding value of its tangent line at $\alpha=0$, namely $T(\beta)=\lambda_1'(B_1,0)\beta$. Since $\lambda_1' (B_1,0) = \sigma(\partial B_1)/\omega_N$ (see Remark~\ref{rem:neumann}), \begin{equation} \label{eq:growthcor} \left(\lambda_{n+1}^* \right)^\frac{N}{2} - \left(\lambda_n^* \right)^\frac{N}{2} < V^{-1} {\omega_N}^{1-\frac{N}{2}} \left(\sigma(\partial B_1)\alpha j_{\frac{N}{2}-1,1}\right)^\frac{N}{2}\left(\lambda_n^* \right)^{-\frac{N}{4}}. \end{equation} Observe that, for fixed $V,\alpha>0$, the right hand side of \eqref{eq:growthcor} converges to $0$ as $n\to\infty$, proving the first assertion of the corollary. To see the growth bound, label the coefficient of $\left(\lambda_n^* \right)^{-N/4}$ in \eqref{eq:growthcor} as $C(N,V,\alpha)$. Summing over $n$, the left hand side telescopes to give \begin{equation} \label{eq:sumbound} \left(\lambda_{n+1}^* \right)^\frac{N}{2} < \left(\lambda_{1}^* \right)^\frac{N}{2}+ C(N,V,\alpha) \sum_{k=1}^n \left(\lambda_k^* \right)^{-\frac{N}{4}}. \end{equation} Suppose now that for some $\gamma\in\mathbb{R}$, $(\lambda_n^*)^{N/2} \neq {\rm O}(n^\gamma)$ as $n \to \infty$, that is, $\limsup_{n\to\infty} (\lambda_n^*)^{N/2}/n^\gamma = \infty$. Since $\lambda_n^*$ is increasing in $n\geq 1$, a standard argument from elementary analysis shows that in fact $\lim_{n\to\infty} (\lambda_n^*)^{N/2}/n^\gamma = \infty$, that is, for all $C_0>0$, there exists $n_0 \geq 1$ such that $(\lambda_n^*)^{N/2} \geq C_0 n^\gamma$ for all $n \geq n_0$. Hence for $C_0>0$ fixed, for all $n \geq n_0$ we have \begin{equation} \label{eq:growthcontra} \sum_{k=1}^n (\lambda_k^*)^{-\frac{N}{4}}\leq \sum_{k=1}^{n_0-1}(\lambda_k^*)^{-\frac{N}{4}}+C_0 \sum_{k=n_0}^n k^{-\frac{\gamma}{2}}\leq C_1+C_0 \sum_{k=1}^n k^{-\frac{\gamma}{2}}, \end{equation} where $C_1 = \sum_{k=1}^{n_0-1}(\lambda_k^*)^{-N/4}$ depends only on $N,V,\alpha$ and $n_0$, that is, $C_0$. Now we observe \begin{equation} \label{eq:harmonics} {\rm O}(\sum_{k=1}^n k^{-\gamma/2}) = \left\{ \begin{aligned} & {\rm O}(n^{1-\frac{\gamma}{2}}) & \qquad \quad &\text{if $\gamma \in (0,2)$}\\ & {\rm O}(\ln n) & &\text{if $\gamma = 2$,} \end{aligned} \right. \end{equation} (use $\sum_{k=1}^n k^{-s} \sim \int_1^n x^{-s}dx$ if $s \leq 1$), while $\sum_{k=1}^n k^{-\gamma/2} \leq 1+\int_1^{n-1} x^{-\gamma/2}\,dx = 1+(\gamma/2-1)^{-1}$ for all $n \geq 1$, if $\gamma>2$. 
In particular, combining \eqref{eq:sumbound} and \eqref{eq:growthcontra}, fixing $C_0>0$ and a corresponding $n_0 \geq 1$ arbitrary, for all $n\geq n_0$, we have \begin{displaymath} (\lambda_{n+1}^*)^\frac{N}{2} < (\lambda_1^*)^\frac{N}{2}+C(N,V,\alpha)(C_1+C_0 \sum_{k=1}^n k^{-\frac{\gamma}{2}}), \end{displaymath} which we rewrite as \begin{equation} \label{eq:obound} (\lambda_n^*)^\frac{N}{2} < C_2 + C_3 \sum_{k=1}^n k^{-\frac{\gamma}{2}} \end{equation} for all $n \geq n_0$, where $C_2,C_3>0$ and $n_0\geq 1$ depend only on $N,V,\alpha$ and the free choice $C_0>0$. Recalling that \eqref{eq:obound} holds under the assumption $(\lambda_n^*)^{N/2} \neq {\rm O}(n^\gamma)$ as $n \to \infty$ and using \eqref{eq:harmonics}, this gives us an immediate contradiction if $\gamma>2/3$, forcing $(\lambda_n^*)^{N/2} = {\rm o}(n^{2/3+\varepsilon})$ for all $\varepsilon>0$. \end{proof} \subsection{Dependence of $\lambda_{n}(t\Omega,\alpha)$ on $t$ and $\alpha$} \label{sec:append} We will now give some appendiceal, but important, results on the behaviour of the Robin eigenvalues $\lambda_n (\Omega,\alpha)$ with respect to homothetic changes in $\Omega$ or $\alpha$. Although the material is folklore, we have included a proof as it seems difficult to find one explicitly. We will also give the proofs of the corresponding statements for $\lambda_n^*(V,\alpha)$, namely Propositions~\ref{prop:starvsalpha} and \ref{prop:starvst}. \begin{lemma} \label{lemma:continuity} For a given bounded, Lipschitz domain $\Omega \subset \mathbb{R}^N$, and $n \geq 1$, $\lambda_n(\Omega,\alpha)$ is an absolutely continuous and strictly increasing function of $\alpha \in [0,\infty)$, which is differentiable almost everywhere in $(0,\infty)$. Where it exists, its derivative is given by \begin{equation} \label{eq:alphaderivative} \frac{d}{d\alpha} \lambda_n(\Omega,\alpha) = \frac{\|u\|_{2,\partial\Omega}^2}{\|u\|_{2,\Omega}^2}, \end{equation} where $u\in H^1(\Omega)$ is any eigenfunction associated with $\lambda_n(\Omega,\alpha)$. In addition, when $n=1$, $\lambda_1(\Omega,\alpha)$ is concave, with $\lambda_1(\Omega,\alpha)\leq \lambda_1^D(\Omega)$, the first Dirichlet eigenvalue of $\Omega$, and if $\Omega$ is connected, then $\lambda_1(\Omega,\alpha)$ is analytic in $\alpha \geq 0$. \end{lemma} \begin{remark} \label{rem:neumann} The formula \eqref{eq:alphaderivative} is always valid when $n=1$ and $\alpha=0$, for any bounded, Lipschitz $\Omega \subset \mathbb{R}^N$. In this case, since this corresponds to the Neumann problem and the first eigenfunction is always constant, \eqref{eq:alphaderivative} simplifies to \begin{equation} \label{eq:neumannderivative} \frac{d}{d\alpha} \lambda_1(\Omega,\alpha)\Big|_{\alpha=0} = \frac{\sigma(\partial\Omega)}{|\Omega|}, \end{equation} a purely geometric property of $\Omega$. The equation \eqref{eq:neumannderivative} (which can be obtained from a trivial modification of our proof of Lemma~\ref{lemma:continuity}) is reasonably well known, and proofs may also be found in \cite{giorgi,lacey}, for example. \end{remark} \begin{lemma} \label{lemma:tcontinuity} Given $\Omega \subset \mathbb{R}^N$ bounded and Lipschitz, $n \geq 1$ and $\alpha>0$, $\lambda_n(t\Omega,\alpha)$ is a continuous and strictly decreasing function of $t \in (0,\infty)$. 
If $\frac{d}{d\beta} \lambda_n(\Omega,\beta)$ exists at $\beta=t\alpha>0$, then so does
\begin{equation}
\label{eq:tderivative}
\frac{d}{dt}\lambda_n(t\Omega,\alpha) = -\frac{1}{t^3}\left(\frac{\|\nabla v\|_{2,\Omega}^2}{\|v\|_{2,\Omega}^2}+\lambda_n(\Omega,t\alpha)\right),
\end{equation}
where $v\in H^1(\Omega)$ is any eigenfunction associated with $\lambda_n(\Omega,t\alpha)$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:continuity}]
For the first statement, we note that (weak) monotonicity and, when $n=1$, concavity, are immediate from the minimax formula for $\lambda_n$ \cite[Chapter~VI]{courant}. We can also derive continuity directly from that formula, or use the general theory from \cite[Sec.~VII.3, 4]{kato}. That is, the form associated with \eqref{eq:robin} is
\begin{displaymath}
Q_\alpha(u)=\int_\Omega |\nabla u|^2\,dx+\int_{\partial\Omega}\alpha u^2\,d\sigma,
\end{displaymath}
which is analytic in $\alpha \in \mathbb{R}$ for each $u \in H^1(\Omega)$. Hence the associated family of self-adjoint operators is holomorphic of type (B) in the sense of Kato. It follows that each eigenvalue depends locally analytically on $\alpha$, with only a countable number of ``splitting points", that is, crossings of curves of eigenvalues, including the possibility of splits in multiplicities. In our case, for each $\lambda_n(\alpha)$, the number of such points will certainly be locally finite in $\alpha$. In particular, this means $\lambda_n(\alpha)$ consists locally of a finite number of smooth curves intersecting each other transversally, so it is absolutely continuous in the sense of \cite[Ch.~7]{rudin}. (Throughout this lemma we drop the $\Omega$ argument, as it is fixed.) If $n=1$ and $\Omega$ is connected, then $\lambda_1(\alpha)$ has multiplicity one for all $\alpha \geq 0$ and hence no splitting points, so that it is analytic. Since \cite{kato} also implies that the associated eigenprojections converge, given any non-splitting point $\alpha$ (at which $\lambda_n$ is analytic), $\alpha_k\to\alpha$ and any eigenfunction $u$ associated with $\lambda_n(\alpha)$, we can find eigenfunctions $u_k$ of $\lambda_n(\alpha_k)$ such that $u_k \to u$ in $L^2(\Omega)$. We now use a standard argument to show that in fact $u_k \to u$ in $H^1(\Omega)$. Denote by $\|v\|_*$ the norm on $H^1(\Omega)$ given by $(\|\nabla v\|_{2,\Omega}^2+\alpha\|v\|_{2,\partial\Omega}^2)^{1/2}$, equivalent to the standard one, and assume the eigenfunctions are normalised so that $\|u\|_{2,\Omega} = \|u_k\|_{2,\Omega} = 1$ for all $k \geq 1$. Then
\begin{displaymath}
\begin{split}
\|u-u_k\|_*^2 &= \int_\Omega |\nabla u|^2+|\nabla u_k|^2 - 2\nabla u\cdot \nabla u_k\,dx +\int_{\partial\Omega}\alpha (u^2+u_k^2 - 2 u u_k)\,d\sigma\\
&=\lambda_n(\alpha)+\lambda_n(\alpha_k)-2\lambda_n(\alpha)\int_\Omega u u_k\,dx + (\alpha-\alpha_k) \int_{\partial\Omega}u_k^2\,d\sigma,
\end{split}
\end{displaymath}
making repeated use of \eqref{eq:weak}. Now we have $\alpha_k \to \alpha$ by assumption, while we may use the crude but uniform bound $\int_{\partial\Omega} u_k^2\,d\sigma \leq \lambda_n(\alpha_k)/\alpha_k$ for $\alpha_k \to \alpha>0$ bounded away from zero. Meanwhile, by H\"older's inequality,
\begin{displaymath}
\left|\int_\Omega uu_k\,dx - \int_\Omega u^2\,dx\right|\leq \|u\|_{2,\Omega}\|u-u_k\|_{2,\Omega} \longrightarrow 0
\end{displaymath}
as $k \to \infty$, since we know $u_k \to u$ in $L^2(\Omega)$, meaning $\int_\Omega uu_k\,dx \to 1$ due to our normalisation.
Hence, since $\lambda_n(\alpha_k) \to \lambda_n(\alpha)$ also, \begin{displaymath} \|u-u_k\|_*^2 =\lambda_n(\alpha)+\lambda_n(\alpha_k) - 2\lambda_n(\alpha)\int_\Omega u u_k\,dx + (\alpha-\alpha_k)\int_{\partial\Omega} u_k^2\,d\sigma \longrightarrow 0, \end{displaymath} proving $u_k \to u$ in both the $\|\,.\,\|_*$-norm and hence also in the usual $H^1$-norm. Let us now compute the derivative of $\lambda_n(\alpha)$ at any non-splitting point. Suppose $\alpha<\beta$ are two different boundary parameters, with associated eigenfunctions $u,v \in H^1(\Omega)$, respectively. Then, using the weak form of $\lambda_n$, provided $u$ and $v$ are not orthogonal in $L^2(\Omega)$, we get immediately that \begin{displaymath} \lambda_n(\beta)-\lambda_n(\alpha)=(\beta-\alpha)\frac{\int_{\partial\Omega} uv\,d\sigma} {\int_\Omega uv\,dx}. \end{displaymath} Now divide through by $\beta-\alpha$ and let $\beta\to\alpha$. Since we have already seen that this forces $v\to u$ in $H^1(\Omega)$ (also implying that they are not orthogonal in $L^2(\Omega)$ for $\beta$ close to $\alpha$), this gives \eqref{eq:alphaderivative}. In particular, since this is strictly positive, and valid except on a countable set of $\alpha$, we conclude $\lambda_n$ is strictly increasing. Note that even at splitting points, we may still compute the left and right derivatives via this method; we see that it is the change in multiplicity (leading to extra eigenfunctions giving different values of \eqref{eq:alphaderivative}) that causes these derivatives to disagree. That $\lambda_1(\Omega,\alpha) \leq \lambda_1^D(\Omega)$ is an immediate consequence of the minimax formulae and the inclusion of the form domains $H^1_0(\Omega) \subset H^1(\Omega)$. Strict inequality is immediate, since $\lambda_1(\Omega,\alpha)$ is strictly increasing in $\alpha>0$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:tcontinuity}] Since $\lambda_n(t\Omega,\alpha) = t^{-2}\lambda_n(\Omega,\alpha t)$, differentiability of the former at $t$ is equivalent to differentiability of the latter at $\beta=\alpha t$, and \begin{displaymath} \frac{d}{dt}\lambda_n(t\Omega,\alpha) = \frac{d}{dt}\left(t^{-2}\lambda_n(\Omega,\alpha t)\right) = -2t^{-3}\lambda_n(\Omega,\alpha t)+\alpha t^{-2}\frac{d}{d(\alpha t)}\lambda_n(\Omega,\alpha t). \end{displaymath} Using \eqref{eq:alphaderivative} and simplifying yields \eqref{eq:tderivative}. Continuity of $\lambda_n(t\Omega,\alpha)$ at every $t>0$ follows immediately from the identity \begin{displaymath} \lambda_n(s\Omega,\alpha)-\lambda_n(t\Omega,\alpha)=s^{-2}\lambda_n(\Omega,s\alpha)-t^{-2}\lambda(\Omega,t\alpha) \end{displaymath} together with continuity of $\lambda_n(\Omega,t\alpha)$ in $t$, so that $s^{-2}\lambda_n(\Omega,s\alpha) \to t^{-2}\lambda(\Omega,t\alpha)$ as $s \to t$. Finally, observe that \eqref{eq:tderivative}, holding almost everywhere, also confirms the strict monotonicity of $\lambda_n(t\Omega,\alpha)$ with respect to $t>0$. Also note that even at points of discontinuity, as in Lemma~\ref{lemma:continuity}, we can again compute left and right derivatives, which may disagree due to the change in multiplicity. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:starvsalpha}] It follows immediately from the definition of $\lambda_n^*(V,\alpha)$ as an infimum and the properties of $\lambda_n(\Omega,\alpha)$ given in Lemma~\ref{lemma:continuity} that $\lambda_n^*(V,\alpha)$ is strictly increasing and right continuous in $\alpha$. 
Indeed, given $\alpha_0 \in [0,\infty)$ and $\alpha>\alpha_0$, if $\Omega_0^*$ is a minimising domain, so that $\lambda_n^*(V,\alpha_0) = \lambda_n(\Omega_0^*,\alpha_0)$, then $0\leq \lambda_n^*(V,\alpha) - \lambda_n^*(V,\alpha_0) \leq \lambda_n(\Omega_0^*,\alpha) - \lambda_n(\Omega_0^*,\alpha_0) \to 0$ as $\alpha \to \alpha_0$. For strict monotonicity, if $0 \leq \alpha_0 < \alpha$, we let $\Omega^*$ be such that $\lambda_n^*(V,\alpha) = \lambda_n(\Omega^*,\alpha) > \lambda_n (\Omega^*, \alpha_0) \geq \lambda_n^* (V,\alpha_0)$. Left continuity is harder. We use the property that for each fixed domain $\Omega$, by \eqref{eq:alphaderivative},
\begin{equation}
\label{eq:alphacontrol}
\frac{d}{d\alpha} \lambda_n(\Omega,\alpha) = \frac{\|u\|_{2,\partial\Omega}^2}{\|u\|_{2,\Omega}^2} \leq \frac{\lambda_n (\Omega, \alpha)}{\alpha}
\end{equation}
for almost every $\alpha>0$. Fixing now $\alpha_0 > 0$ and an arbitrary sequence $0 < \alpha_k \leq \alpha_0$, $k \geq 1$, with $\alpha_k \to \alpha_0$ monotonically, we may rewrite \eqref{eq:alphacontrol} as
\begin{displaymath}
\frac{d}{d\alpha} \lambda_n(\Omega,\alpha) \leq C \lambda_n (\Omega, \alpha),
\end{displaymath}
for almost all $\alpha \in [\alpha_1,\alpha_0]$, where $C=1/\alpha_1$ is independent of $\Omega$. Integrating this inequality and using the Fundamental Theorem of Calculus applied to the absolutely continuous function $\lambda_n(\Omega,\alpha)$ \cite[Theorem~7.18]{rudin}, we obtain
\begin{displaymath}
\lambda_n(\Omega,\alpha_0) \leq \lambda_n(\Omega,\alpha_k) e^{C(\alpha_0-\alpha_k)}
\end{displaymath}
for all $\alpha_k$, $k \geq 1$, and all $\Omega$. Let $\Omega_k^*$ be an optimising domain at $\alpha_k$ for each $k \geq 1$. Then
\begin{displaymath}
\begin{split}
\lambda_n^*(V,\alpha_0) &\leq \liminf_{k \to \infty} \lambda_n(\Omega_k^*,\alpha_0)\\
&\leq \limsup_{k \to \infty} \lambda_n(\Omega_k^*,\alpha_k)e^{C(\alpha_0-\alpha_k)}\\
&= \lim_{k\to\infty} \lambda_n(\Omega_k^*,\alpha_k) = \lim_{k\to\infty}\lambda_n^*(V,\alpha_k).
\end{split}
\end{displaymath}
Since this holds for an arbitrary increasing sequence $\alpha_k \to \alpha_0$, and since the reverse inequality is obvious from monotonicity, this proves left continuity. Finally, that $\lambda_n^*(V,0) = 0$ follows from considering any domain with at least $n$ connected components, while to show that $\lambda_n^*(V,\alpha) < \lambda_n^*(V,\infty)$, we let $\widehat\Omega$ be a domain such that $|\widehat\Omega|=V$ and $\lambda_n^*(V,\infty) = \lambda_n^D(\widehat\Omega) > \lambda_n(\widehat\Omega,\alpha) \geq \lambda_n^*(V,\alpha)$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:starvst}]
As continuity and monotonicity mirror the proof of Proposition~\ref{prop:starvsalpha} closely, we do not go into great detail, but note that now left continuity is obvious and the proof of right continuity uses the property \eqref{eq:tderivative} to give us the bound
\begin{displaymath}
\frac{d}{dt}\lambda_n(t\Omega,\alpha) \geq -\frac{2}{t^3}\lambda_n(\Omega,\alpha t)
\end{displaymath}
in place of \eqref{eq:alphacontrol}. If $t_k \to t_0$ is a decreasing sequence, then for $C=2/t_1^3$ this implies
\begin{displaymath}
\lambda_n(t_0\Omega,\alpha) \leq \lambda_n(t_k\Omega,\alpha) e^{C(t_k-t_0)}
\end{displaymath}
for all $k\geq 1$ and all $\Omega$, from which right continuity follows in the obvious way. We now prove that $\lambda_n^*(V,\alpha) \to \infty$ as $V \to 0$. By \eqref{eq:astscaling}, this is equivalent to $t^2 \lambda_n^*(1,\alpha/t) \to \infty$ as $t \to \infty$.
Now
\begin{displaymath}
\lambda_n^*(1,\alpha/t) \geq \lambda_1^*(1,\alpha/t) = \lambda_1(B',\alpha/t),
\end{displaymath}
where $B'$ is any ball of volume $1$. Since $\lambda_1(B',\beta)$ is concave in $\beta$ with $\lambda_1(B',0)=0$, we have $\lambda_1(B',\alpha/t)\geq \lambda_1(B',\alpha)/t$, so that
\begin{displaymath}
t^2\lambda_n^*(1,\alpha/t) \geq t^2\lambda_1(B',\alpha/t) \geq t\lambda_1(B',\alpha) \longrightarrow \infty
\end{displaymath}
as $t \to \infty$. Finally, to show that $\lambda_n^*(V,\alpha) \to 0$ as $V \to \infty$, or equivalently, $t^2 \lambda_n^*(1, \alpha/t) \to 0$ as $t \to 0$, we simply note that
\begin{displaymath}
t^2\lambda_n^*(1,\alpha/t) \leq t^2\lambda_n^*(1,\infty) = C(N,n)t^2 \longrightarrow 0
\end{displaymath}
as $t \to 0$, where $\lambda_n^*(1,\infty)$ is, as usual, the corresponding infimum for the Dirichlet problem.
\end{proof}
\section{General description of the numerical optimisation procedure\label{genopt}}
The numerical solution of the shape optimisation problem is divided into two steps. We first describe the application of the Method of Fundamental Solutions (MFS) to the calculation of the Robin eigenvalues of a fixed domain. Then, we use a steepest descent method (eg.~\cite{nocedal}) to determine optimal domains for each of the Robin eigenvalues.
\subsection{Numerical calculation of Robin eigenvalues using the MFS}
We will consider the numerical optimisation of Robin eigenvalues in the class of sets $\Xi$ which are built from a finite number of bounded, star-shaped planar domains. For simplicity, for now, assume that $\Omega\in\Xi$ has only one connected component. Thus, $\Omega$ is isometric to a domain $\Omega_\infty$ defined in polar coordinates by
\[\Omega_\infty=\left\{(r,\theta):0<r<r_\infty(\theta)\right\}\]
where
\[r_\infty(\theta)=a_0+\sum_{i=1}^{\infty}\left(a_i\cos(i\theta)+b_i\sin(i\theta)\right).\]
Now we consider the approximation
\begin{equation}
\label{dominio}
r_\infty(\theta)\approx r_M(\theta):= a_0+\sum_{i=1}^{M}\left(a_i\cos(i\theta)+b_i\sin(i\theta)\right),
\end{equation}
for a given $M\in\mathbb{N}$ and the approximated domain $\Omega_M$ defined by
\begin{equation}
\label{dominioaprox}
\Omega_M=\left\{(r,\theta):0<r<r_M(\theta)\right\}.
\end{equation}
Now we describe how to apply the MFS to the calculation of the Robin eigenvalues of $\Omega_M$. We take the fundamental solution of the Helmholtz equation
\begin{equation}
\Phi_{\lambda}(x)=\frac{i}{4}H_{0}^{(1)}(\sqrt{\lambda}\left\Vert x\right\Vert ),
\end{equation}
where $\left\Vert.\right\Vert $ denotes the Euclidean norm in $\mathbb{R}^{2}$ and $H_{0}^{(1)}$ is the Hankel function of the first kind of order zero. An eigenfunction solving the eigenvalue problem \eqref{eq:robin} is approximated by a linear combination
\begin{equation}
\label{mfsapprox}
u(x)\approx\tilde{u}(x)=\sum_{j=1}^{N_p}\beta_j\phi_j(x),
\end{equation}
where
\begin{equation}
\label{ps}
\phi_{j}(x)=\Phi_{\lambda}(x-y_{j})
\end{equation}
are $N_p$ point sources centred at points $y_{j}$ placed on an admissible source set which does not intersect $\bar{\Omega}$ (eg.~\cite{alan}). Each of the point sources $\phi_{j}$ satisfies the partial differential equation of the eigenvalue problem and thus, by construction, so does the MFS approximation \eqref{mfsapprox}. We take $N_p$ collocation points $x_i,\ i=1,...,N_p$, almost equally spaced on $\partial\Omega_M$, and for each of these points we determine the outward unit normal vector $n_i$ to the boundary at $x_i$.
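To make the parametrisation \eqref{dominio} and the choice of collocation points and normals concrete, the following short Python sketch evaluates $r_M(\theta)$, the collocation points $x_i$ and the outward unit normals $n_i$. It is an illustration only; the Fourier coefficients below are arbitrary placeholders rather than coefficients arising in our computations.
\begin{verbatim}
import numpy as np

M = 3
a = np.array([1.0, 0.05, 0.0, 0.02])   # placeholder a_0, a_1, ..., a_M
b = np.array([0.0, 0.0, 0.03, 0.0])    # (first entry unused), b_1, ..., b_M

Np = 200                               # number of collocation points
theta = 2 * np.pi * np.arange(Np) / Np

k = np.arange(M + 1)[:, None]          # harmonic indices 0, 1, ..., M
r  = (a[:, None] * np.cos(k * theta) + b[:, None] * np.sin(k * theta)).sum(axis=0)
dr = (k * (b[:, None] * np.cos(k * theta) - a[:, None] * np.sin(k * theta))).sum(axis=0)

# Collocation points x_i = (r cos(theta), r sin(theta)) and tangent vectors.
x  = np.column_stack((r * np.cos(theta), r * np.sin(theta)))
tx = dr * np.cos(theta) - r * np.sin(theta)
ty = dr * np.sin(theta) + r * np.cos(theta)

# Outward unit normals n_i: rotate the (counterclockwise) tangent by -90 degrees.
length = np.hypot(tx, ty)
n = np.column_stack((ty / length, -tx / length))
\end{verbatim}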
The source points $y_i$ are calculated by \begin{equation} \label{pontosfonte} y_i=x_i+\gamma\ n_i,\ i=1,...,N_p \end{equation} where $\gamma$ is a constant (see~\cite{alan} for details). This choice of collocation and source points is illustrated in Figure~\ref{fig:pontosfonte} where the collocation points on the boundary and the source points are shown in black and grey, respectively. \begin{figure} \caption{The collocation and source points.} \label{fig:pontosfonte} \end{figure} The coefficients in the MFS approximation \eqref{mfsapprox} are determined by imposing the boundary conditions of the problem at the collocation points, \[\frac{\partial \tilde{u}}{\partial\nu}(x_i)+\alpha \tilde{u}(x_i)=0,\ i=1,...,N_p.\] This leads to a linear system \begin{equation} \label{sistema}A(\lambda)\,\overrightarrow{\beta}=\overrightarrow{0}, \end{equation} where \[A_{i,j}(\lambda)=\frac{\partial \phi_{j}}{\partial\nu}(x_i)+\alpha \phi_{j}(x_i),\ i,j=1,...,N_p,\] writing $\overrightarrow{\beta}=\left[\beta_1\ ...\ \beta_{N_p}\right]^T$ and $\overrightarrow{0}=\left[0\ ...\ 0\right]^T.$ As in~\cite{alan}, we calculate the approximations for the eigenvalues by determining the values of $\lambda$ for which the system \eqref{sistema} has non-trivial solutions. \subsection{Numerical shape optimisation} We now describe the algorithm for the numerical solution of the shape optimisation problem associated to the Robin eigenvalues. Each vector $\mathcal{C}:=(a_{0},a_{1},...,a_{M},b_{1},b_{2},...,b_{M})$ defines a domain using \eqref{dominio} and \eqref{dominioaprox}. The optimisation problems are solved by considering each Robin eigenvalue as a function of $\mathcal{C}$, $\lambda_n(\mathcal{C})$, and determining optimal vectors $\mathcal{C}$. We must also take into account the area constraint in problem \eqref{eq:ninf}. For each vector $\mathcal{C}$ we define a corresponding normalised vector $\widehat{\mathcal{C}}$ given by \[\widehat{\mathcal{C}}=\frac{\mathcal{C}}{\left|\Omega_M\right|^{\frac{1}{2}}}.\] The domain associated to $\widehat{\mathcal{C}}$ has unit area. Now, for each component of the vector $\mathcal{C}$, we define an approximation for the derivative of a Robin eigenvalue with respect to this component, given simply by a finite difference \[\lambda_{n,i}'=\frac{\lambda_n\left(\widehat{\mathcal{P}_i}\right)-\lambda_n\left(\widehat{\mathcal{C}}\right)}{\varepsilon},\] for a small value $\varepsilon$, where $\mathcal{P}_i=\widehat{\mathcal{C}}+\varepsilon\ e_i,\ i=1,...,2M+1$ and $e_1=[1\ 0\ ...\ 0]^T$, $e_2=[0\ 1\ 0\ 0\ ...\ 0]^T$, $e_3=[0\ 0\ 1\ 0\ ...\ 0]^T$, \ldots. We then build the approximation of the gradient \[d_n=\left[\lambda_{n,1}'\ \lambda_{n,2}'\ ...\ \lambda_{n,2M+1}'\right]^T\] which defines the search direction for the steepest descent method. We start by defining $\mathcal{C}_0=\widehat{\mathcal{C}}$ and calculate the successive points $\mathcal{C}_{k+1}$ by solving the minimisation problem \[\textrm{Min}_{x}\ \lambda_n\left(\widehat{\mathcal{C}_k-x d_n}\right)\] using the golden ratio search (e.g.~\cite{alan}). Once we have calculated the optimal step length $\delta$ we define \[\mathcal{C}_{k+1}=\widehat{\mathcal{C}_k-\delta d_n},\ k=0,1,\ldots.\] An alternative procedure would be to apply a numerical optimisation method similar to that studied in~\cite{curtovert}, taking the area constraint into account. \subsubsection{Multiple eigenvalues in the optimisation process} As in~\cite{anfr}, in the optimisation process we must deal with multiple eigenvalues.
For each $n$, we start minimising $\lambda_n\left(\widehat{\mathcal{C}}\right)$ and once we obtain \[\lambda_n\left(\widehat{\mathcal{C}}\right)-\lambda_{n-1}\left(\widehat{\mathcal{C}}\right)\leq\theta,\] for a small value $\theta<1$, we modify the function to be minimised and try to minimise \[\lambda_n\left(\widehat{\mathcal{C}}\right)-\omega_{n-1}\log\left(\lambda_n\left(\widehat{\mathcal{C}}\right)-\lambda_{n-1}\left(\widehat{\mathcal{C}}\right)\right)\] for a sequence of constants $\omega_{n-1}\searrow0$. Then, once we obtain \[\lambda_n\left(\widehat{\mathcal{C}}\right)-\lambda_{n-1}\left(\widehat{\mathcal{C}}\right)\leq\theta\ \text{ and }\ \lambda_{n-1}\left(\widehat{\mathcal{C}}\right)-\lambda_{n-2}\left(\widehat{\mathcal{C}}\right)\leq\theta,\] we change the function to be minimised to \[\lambda_n\left(\widehat{\mathcal{C}}\right)-\omega_{n-1}\log\left(\lambda_n\left(\widehat{\mathcal{C}}\right)-\lambda_{n-1}\left(\widehat{\mathcal{C}}\right)\right)-\omega_{n-2}\log\left(\lambda_{n-1}\left(\widehat{\mathcal{C}}\right)-\lambda_{n-2}\left(\widehat{\mathcal{C}}\right)\right)\] for suitable choice of constants $\omega_{n-1},\omega_{n-2}\searrow0$ and continue applying this process, adding more eigenvalues to the linear combination defining the function to be minimised until we find the optimiser and the multiplicity of the corresponding eigenvalue. \subsubsection{The case of disconnected domains} In the case for which we have a set consisting of several connected components, we simply consider a vector $\mathcal{C}$ defining each component, such that the sum of the areas is equal to one and then perform optimisation on these vectors as described above. The application of the MFS in this case is straightforward, considering collocation points uniformly distributed on the boundary of each of the components and for each of these collocation points, calculate a source point by \eqref{pontosfonte}. Note that the parametrisation of domains that we considered limits the possible shapes to finite unions of star-shaped components. We know that each domain $\Omega_k^\ast$ has at most $k$ connected components (see Remark~\ref{rem:existence}) and thus, in the shape optimisation of $\lambda_k$, there is only a finite number of possibilities for building an optimal disconnected domain. These were studied exhaustively, working with a fixed number of connected components at each step and using the Wolf--Keller type result above (Theorem~\ref{th:wkrobin}) to test the numerical results obtained. This process becomes more difficult for higher eigenvalues and in that case it might be preferable to use a level set method, as in~\cite{oude}, for instance. \section{Numerical results} In this section we will present the main results obtained from our numerical study on the minimisation of the first seven Robin eigenvalues for domains with unit area. We will write $B_n$ as shorthand for the domain of unit area composed of $n$ equal balls. It is well known that the first and second Robin eigenvalues are minimised by $B_1$ and $B_2$, respectively. Figure~\ref{fig:lambda12} shows the evolution of the optimal values of $\lambda_1$ and $\lambda_2$, as a function of the Robin parameter $\alpha$. \begin{figure} \caption{Robin optimisers for $\lambda_1$ and $\lambda_2$.} \label{fig:lambda12} \end{figure} In Figure~\ref{fig:lambda3} we plot the optimal value of $\lambda_3$. We can observe that there are two types of optimal domains depending on the value of $\alpha$. 
More precisely, for $\alpha\leq\alpha_3\approx14.51236$, the third Robin eigenvalue is minimised by $B_3$, while for $\alpha\geq\alpha_3$, the ball $B_1$ seems to be the minimiser. In particular, for $\alpha=\alpha_3$ uniqueness of the minimiser appears to fail, the optimal value of $\lambda_3$ being attained by both domains. Note also that in the asymptotic case when $\alpha\rightarrow\infty$ this result agrees with the conjecture that the ball is the Dirichlet minimiser of the third eigenvalue (cf.~\cite{wolf,henr}). \begin{figure} \caption{Robin optimisers for $\lambda_3$.} \label{fig:lambda3} \end{figure} While for $\lambda_3$, the Dirichlet minimiser is also the Robin minimiser for $\alpha\geq\alpha_3$, the situation is different for higher eigenvalues. In Figure~\ref{fig:lambda45}-left we plot results for the minimisation of $\lambda_4$. We have marked with a dashed line the curve associated with the fourth Robin eigenvalue of the Dirichlet minimiser which is conjectured to be the union of two balls whose radii are in the ratio $\sqrt{j_{0,1}/j_{1,1}}$, where $j_{0,1}$ and $j_{1,1}$ are respectively the first zeros of the Bessel functions $J_0$ and $J_1$ (eg.~\cite{henr}). We can observe, as was to be expected, that it is not optimal. The solid curve below it also corresponds to domains built with two balls with different areas, but whose optimal ratio of the areas depends on the Robin parameter $\alpha$. It is this family of domains which appears to be minimal for larger $\alpha$. Again we have a value $\alpha_4\approx16.75743$ for which $\alpha\leq\alpha_4$ implies that the minimiser is $B_4$. In Figure~\ref{fig:lambda4areas} we plot the area of the largest ball in the optimal set $\Omega_4^\ast$, as a function of $\alpha\in[\alpha_4,100]$. We marked with a dashed line the asymptotic case of the Dirichlet optimiser for which this quantity is equal to $\frac{j_{1,1}^2}{j_{0,1}^2+j_{1,1}^2}\approx0.7174.$ \begin{figure} \caption{Area of the largest ball of $\Omega_4^\ast$.} \label{fig:lambda4areas} \end{figure} In Figure~\ref{fig:lambda45}-right we show results for the minimisation of $\lambda_5$. The curve corresponding to the Dirichlet minimiser found in~\cite{anfr} is again marked with a dashed line and the dotted line below it represents a family of domains of a very similar shape, deforming slowly, which appear to be optimisers at their respective values of $\alpha\geq\alpha^\ast\approx40$. \begin{figure} \caption{Robin optimisers of $\lambda_4$ and $\lambda_5$.} \label{fig:lambda45} \end{figure} Then, for $\alpha_5\leq\alpha\leq\alpha^\ast$, where $\alpha_5\approx18.73537$, the minimiser is a set composed by a big ball and two small balls of the same area, which corresponds to a union of scaled copies of $B_1$ and $B_2$, and where again the optimal ratio of these two areas depends on $\alpha$. For $\alpha\leq\alpha_5$, $\lambda_5$ is minimised by $B_5$. In Figure~\ref{fig:lambda6}-left we plot the results for the minimisation of $\lambda_6$. The curve corresponding to the Dirichlet minimiser is again marked with a dashed line, while the Robin optimisers for large $\alpha$, again close to their Dirichlet counterpart, are marked as a sequence of points below it. For some smaller values of $\alpha$ it appears that two balls of the same area are the minimiser and for $\alpha\leq\alpha_6\approx20.52358$, $\lambda_6$ is minimised by $B_6$. Figure~\ref{fig:lambda6}-right shows a zoom of the previous figure in the region obtained for $\alpha\in[10,40]$. 
\begin{figure} \caption{Robin optimisers for $\lambda_6$ and a zoom of the region associated with $\alpha\in[10,40]$.} \label{fig:lambda6} \end{figure} We can observe that there is a particular value of $\alpha$ for which the three curves associated to unions of balls have an intersection. For this particular $\alpha$ we have three distinct optimisers. Finally, in Figure~\ref{fig:lambda7} we show the results for the minimisation of $\lambda_7$. Again the dashed curve is associated to the Dirichlet minimiser and the points below it the optimal values obtained for some domains which are very similar to the Dirichlet optimiser. \begin{figure} \caption{Robin optimisers for $\lambda_7$.} \label{fig:lambda7} \end{figure} It is interesting to note that for the region associated with $\alpha\leq100$ which is plotted in the figure we have a family of domains whose curve is below the curve associated to the type of domains which are very similar to the Dirichlet optimiser. In the spirit of our Wolf-Keller type theorem (Theorem~\ref{th:wkrobin}), this curve corresponds to sets built with an optimiser of $\lambda_6$ and a small ball, which is an optimiser for $\lambda_1$. Since the Dirichlet optimiser is connected (cf.~\cite{anfr}), we expect that for some $\alpha>100$ the value obtained for this type of domain will become larger than the value obtained for domains similar to the Dirichlet minimiser, and indeed, testing the algorithm for $\alpha=300$, we find that the optimal domain is connected. For $\alpha\leq\alpha_7\approx22.167800$, the minimiser is $B_7$, while for $\alpha=\alpha_7$ we have three different types of optimal unions of balls. \subsection{Optimal unions of balls} The above results suggest that the minimiser of $\lambda_{n}$ in two dimensions and for small $\alpha$ consists of the set $B_n$ formed by $n$ equal balls, and that this situation changes when the line corresponding to $B_n$ intersects the line corresponding to $n-3$ equal balls with the same radius and a larger ball, that is, a union of scaled copies of $B_{n-3}$ and $B_1$. The following lemma gives the value at which this intersection takes place in terms of the root of an equation involving Bessel functions, showing that, for dimension $N$, this value grows with $n^{1/N}$. \begin{lemma}\label{intersection} Consider the Robin problem for domains in $\mathbb R^{N}$ of volume $V$. The value $\alpha_{n}$ for which the $n^{\rm th}$ eigenvalue of $n$ equal balls equals the $n^{\rm th}$ eigenvalue of the set formed by $n-(N+1)$ equal balls with the same radius as before and a larger ball is given by \[ \alpha_{n} = \gamma_{0}\frac{\displaystyle J_{\frac{N}{2}}(\gamma_{0})}{\displaystyle J_{\frac{N}{2}-1}(\gamma_{0})} \left(\frac{\displaystyle n\omega_{N}}{\displaystyle V}\right)^{1/N}, \] where $\gamma_{0}$ is the smallest positive solution of the equation \begin{equation}\label{eta0} J_{\frac{N}{2}-1}(\gamma)J_{\frac{N}{2}}(C_{N}\gamma)-C_{N}\gamma J_{\frac{N}{2}-1}(\gamma)J_{\frac{N}{2}+1}(C_{N}\gamma) + C_{N}\gamma J_{\frac{N}{2}}(\gamma)J_{\frac{N}{2}}(C_{N}\gamma)=0, \end{equation} with $C_{N} = (N+1)^{1/N}$. \end{lemma} \begin{proof} The first eigenvalue of $B_{n}$, say $\lambda$, has multiplicity $n$ and is given by the smallest positive solution of the equation \[ \sqrt{\lambda}J_{\frac{N}{2}}\Big(\sqrt{\lambda}r_{1}\Big)-\alpha J_{\frac{N}{2}-1}(\sqrt{\lambda}r_{1}) =0, \] where $r_{1}$ is the radius of each ball, given by $(V/(n \omega_{N}))^{1/N}$. 
If the smaller balls of the domain formed by $n-(N+1)$ equal balls and a larger ball all have radius $r_{1}$, then the larger ball will have volume $(N+1)V/n$ and radius $r_{2}=C_{N}r_{1}$. We thus have that $\lambda$ equals the $n^{\rm th}$ eigenvalue of this second domain, provided that the second eigenvalue of the larger ball, which has multiplicity $N$, also equals $\lambda$. This is now given by the smallest positive solution of the equation \[ (1+\alpha r_{2})J_{\frac{N}{2}}(\sqrt{\lambda}r_{2})- r_{2}\sqrt{\lambda}J_{\frac{N}{2}+1}(\sqrt{\lambda}r_{2})=0. \] Writing $\gamma=\sqrt{\lambda}r_{1}$, we see that we want to find $\gamma$ which is a solution of \[ \left\{ \begin{array}{l} (1+\frac{\displaystyle \alpha}{\displaystyle \sqrt{\lambda}}C_{N}\gamma)J_{\frac{N}{2}}(C_{N}\gamma)-C_{N}\gamma J_{\frac{N}{2}+1}(C_{N}\gamma)=0 \vspace*{2mm}\\ \sqrt{\lambda}J_{\frac{N}{2}}(\gamma)-\alpha J_{\frac{N}{2}-1}(\gamma)=0. \end{array} \right. \] Solving with respect to $\alpha/\sqrt{\lambda}$ in the second of these equations, and replacing the expression obtained in the first equation yields the desired result. \end{proof} \begin{table} \caption{Optimisers in dimension $2$ for $\alpha=\alpha_n, \;\; n=3,...,10$.} \begin{center} \begin{tabular}{|c|c|llll|} \hline n & $\alpha_n$ & & & & \\ \hline 3 & 14.51236 & \includegraphics[width=0.5cm]{B1} & \includegraphics[width=0.5cm]{b3} & & \\ \hline 4 & 16.75743 & \includegraphics[width=0.433cm]{B1}\includegraphics[width=0.25cm]{B1} & \includegraphics[width=0.5cm]{b4} & & \\ \hline 5 & 18.73537 & \includegraphics[width=0.387cm]{B1}\includegraphics[width=0.224cm]{B1}\includegraphics[width=0.224cm]{B1} & \includegraphics[width=0.5cm]{b5} & & \\ \hline 6 & 20.52358 & \includegraphics[width=0.354cm]{B1}\includegraphics[width=0.354cm]{B1} & \includegraphics[width=0.354cm]{B1}\includegraphics[width=0.354cm]{b3} &\includegraphics[width=0.5cm]{b6} & \\ \hline 7 & 22.16800 & \includegraphics[width=0.327cm]{B1}\includegraphics[width=0.327cm]{B1}\includegraphics[width=0.189cm]{B1} & \includegraphics[width=0.327cm]{B1}\includegraphics[width=0.378cm]{b4} & \includegraphics[width=0.5cm]{b7} & \\ \hline 8 & 23.69859 & \includegraphics[width=0.306cm]{B1}\includegraphics[width=0.306cm]{B1}\includegraphics[width=0.177cm]{B1}\includegraphics[width=0.177cm]{B1} & \includegraphics[width=0.306cm]{B1}\includegraphics[width=0.423cm]{b5} &\includegraphics[width=0.5cm]{b8} & \\ \hline 9 & 25.13615 & \includegraphics[width=0.289cm]{B1}\includegraphics[width=0.289cm]{B1}\includegraphics[width=0.289cm]{B1} & \includegraphics[width=0.289cm]{B1}\includegraphics[width=0.289cm]{B1}\includegraphics[width=0.289cm]{b3} & \includegraphics[width=0.289cm]{B1}\includegraphics[width=0.408cm]{b6} &\includegraphics[width=0.5cm]{b9} \\ \hline 10 & 26.49583 & \includegraphics[width=0.274cm]{B1}\includegraphics[width=0.274cm]{B1}\includegraphics[width=0.274cm]{B1}\includegraphics[width=0.158cm]{B1} & \includegraphics[width=0.274cm]{B1}\includegraphics[width=0.274cm]{B1}\includegraphics[width=0.316cm]{b4} & \includegraphics[width=0.274cm]{B1}\includegraphics[width=0.418cm]{b7} & \includegraphics[width=0.5cm]{b10} \\ \hline \end{tabular} \end{center} \label{tab1} \end{table} In the case of dimension two, equation~(\ref{eta0}) reduces to \[ J_{0}(\gamma)J_{1}(\sqrt{3}\gamma)-\sqrt{3}\gamma J_{0}(\gamma)J_{2}(\sqrt{3}\gamma)+ \sqrt{3}\gamma J_{1}(\gamma)J_{1}(\sqrt{3}\gamma)=0 \] whose first positive zero is $\gamma_{0}\approx1.97021$, yielding $\alpha_{n}\approx 8.37872\sqrt{n}$. 
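These numerical values are easy to reproduce. The following short Python check (our own, using SciPy's Bessel functions; the bracketing interval $[1.5,2.5]$ for the first positive zero was read off a plot of the left-hand side) recovers $\gamma_0$ and the constant appearing in $\alpha_n\approx 8.37872\sqrt{n}$.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

# Two-dimensional case: C_2 = sqrt(3), omega_2 = pi, V = 1.
C = np.sqrt(3.0)

def F(g):
    # Left-hand side of the equation defining gamma_0 when N = 2.
    return (jv(0, g) * jv(1, C * g)
            - C * g * jv(0, g) * jv(2, C * g)
            + C * g * jv(1, g) * jv(1, C * g))

# The first positive zero lies in [1.5, 2.5] (bracket found by inspection).
gamma0 = brentq(F, 1.5, 2.5)
coeff = gamma0 * jv(1, gamma0) / jv(0, gamma0) * np.sqrt(np.pi)

print(f"gamma_0 ~ {gamma0:.5f}")           # ~ 1.97021
print(f"alpha_n ~ {coeff:.5f} * sqrt(n)")  # ~ 8.37872 * sqrt(n)
\end{verbatim}

The same computation, with $\sqrt{3}$ replaced by $C_N=(N+1)^{1/N}$ and the orders of the Bessel functions adjusted accordingly, gives the corresponding constants in higher dimensions.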
Table~\ref{tab1} shows the corresponding values of $\alpha_{n}$ for $n$ between $3$ and $10$, together with the different possible unions of balls which also give the same $n^{\rm th}$ eigenvalue for $\alpha=\alpha_{n}$. The above numerical results together with Lemma~\ref{intersection} suggest that, at least in dimension two, given a fixed value of $\alpha$ there will always exist $n^*$ sufficiently large such that the minimiser for $\lambda_{n}$ is $B_n$ for all $n$ greater than $n^{*}$. \begin{figure} \caption{Numerical values for bound \eqref{eq:comp} with $n=1,2,\ldots,6$.} \label{fig:est} \end{figure} \subsection{Verification of numerical results using \eqref{eq:comp}} As a test of the plausibility of our numerical results, we finish by computing the error in bound \eqref{eq:comp} for the case $N=2$ and $V=1$. This is shown in Figure~\ref{fig:est} where we have plotted the quantity \[\lambda_{n+1}^\ast-\lambda_{n}^\ast- \pi\ \lambda_1\left(B_1,\left(\frac{\left|B^\ast\right|} {1+\left|B^\ast\right|}\right)^{\frac{1}{2}}\pi^{-\frac{1}{2}}\alpha\right)\] as a function of $\alpha$, for $n=1,2,\ldots,6$, which according to \eqref{eq:comp} must always be negative. \section{Discussion} By combining computational and analytical techniques we were able to address the problem of optimising Robin eigenvalues of the Laplacian, obtaining results for the full frequency range and in general dimensions. The application of the MFS to Robin problems provided a fast and reliable method with which to apply a gradient-type optimisation algorithm, yielding minimisers for positive boundary parameters $\alpha$ and up to $\lambda_{7}$. From this we conclude that, except for the first two eigenvalues, optimisers will depend on the boundary parameter and approach the Dirichlet optimiser as $\alpha$ goes to infinity. In order to address issues related to the connectedness of minimisers we derived a Wolf--Keller type result for Robin problems, which is also useful for identifying points where there are multiple optimisers and where transitions between different branches occur. In particular, as $\alpha$ decreases (while keeping the volume fixed), optimisers tend to become disconnected, in contrast with the limiting Dirichlet case. From the numerical simulations and the analysis of the transition point between the $n^{\rm th}$ eigenvalue of $n$ equal balls and that of $n-(N+1)$ equal balls and a larger ball, we conjecture that for each $n$ there exists a transition point, say $\alpha_{n}$, below which the optimiser for $\lambda_{n}$ consists of $n$ equal balls, and that $\alpha_{n}$ grows with $n^{1/N}$. Finally, we were able to show that the optimal values do not follow Weyl's law at high frequencies, growing at most with $n^{1/N}$, as opposed to the asymptotics for a fixed domain, whose leading term is of order $n^{2/N}$. As far as we are aware, this is the first time that such behaviour has been identified. \end{document}
\begin{document} \title{Minimal Codes From Characteristic Functions Not Satisfying The Ashikhmin-Barg Condition} \abstract{A \textit{minimal code} is a linear code in which the only way the support of a codeword can be contained in the support of another codeword is for the two codewords to be scalar multiples of each other. Ashikhmin and Barg gave a sufficient condition for a code to be minimal, which led to much interest in constructing minimal codes that do not satisfy their condition. We consider a particular family of codes $\mathcal C_f$ when $f$ is the indicator function of a set of points, and prove a sufficient condition for $\mathcal C_f$ to be minimal and not satisfy Ashikhmin and Barg's condition based on certain geometric properties of the support of $f$. We give a lower bound on the size of a set of points satisfying these geometric properties and show that the bound is tight.} \section{Introduction} Linear codes have found applications in areas far beyond error-correction. For example, an $(S,T)$ \textit{secret-sharing scheme} is a collection of $S$ ``shares'' of a $q$-ary secret such that knowledge of any $T$ shares determines the secret, but knowledge of $T-1$ or fewer shares gives no information. Massey showed that certain codewords in the dual of a linear code, called \textit{minimal codewords}, can be used to construct a secret sharing scheme \cite{M}. However, determining the set of minimal codewords in a linear code is a hard problem in general, which galvanized the search for codes where every codeword is minimal, referred to as \textit{minimal codes}. In 1998, Ashikhmin and Barg gave a sufficient condition for a code to be minimal based on the ratio of the maximum and minimum weight of the code \cite{AB}. \begin{lemma} A $q$-ary linear $[n,k]$ code $\mathcal C$ is minimal if \[ \frac{w_{\max}}{w_{\min}} < \frac{q}{q-1}, \] where $w_{\min}$ and $w_{\max}$ are the minimum and maximum weights of $\mathcal C$, respectively. \end{lemma} For many years following this result all known examples of minimal codes satisfied Ashikhmin and Barg's condition, until 2017 when Chang and Hyun \cite{CH} constructed a minimal binary code from a simplicial complex that did not. As for finding a family of $q$-ary minimal codes not satisfying Ashikhmin and Barg's condition for $q$ an odd prime power, a particular code that has received much attention is the code $\mathcal C_f$, which we will now define. Let $f: \mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be an arbitrary but fixed function. We define $\mathcal C_f$ to be the $\mathbb{F}_q$-subspace of $\mathbb{F}_q^{q^n-1}$ spanned by vectors of the form \begin{equation*} c(u,v):= (uf(x)+v \cdot x)_{x \in \mathbb{F}_q^n \setminus \{0\}} \end{equation*} where $u \in \mathbb{F}_q$, $v \in \mathbb{F}_q^n$, and $v\cdot x$ is the usual dot product. We record below the basic parameters of the code, which are well known. \begin{lemma} If $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ is not linear and $f(a) \neq 0$ for some $a \in \mathbb{F}_q^n \setminus \{0 \}$, then $\mathcal C_f$ is a $[q^n-1, n+1]$ linear code. \end{lemma} In \cite{DHZ}, Ding et al.\ gave necessary and sufficient conditions for the code $\mathcal C_f$ to be minimal when $q=2$ based on the Walsh-Hadamard transform of $f$, which is defined to be the function $\hat{f}(x):= \sum_{v \in \mathbb{F}_2^n}(-1)^{f(v)+v\cdot x}$.
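As a concrete illustration of this transform (our own, not taken from \cite{DHZ}), the following Python sketch computes $\hat{f}$ by brute force for the quadratic bent function $f(x)=x_1x_2+x_3x_4$ on $\mathbb{F}_2^4$ and checks the sum and difference conditions of Lemma~\ref{dingminimal} below; the choice of example function is arbitrary.

\begin{verbatim}
from itertools import product

n = 4
points = list(product((0, 1), repeat=n))

def f(x):
    # A quadratic (bent) Boolean function; it is not linear and f(a) != 0
    # for some nonzero a, as required by the lemma.
    return (x[0] * x[1] + x[2] * x[3]) % 2

def wht(f):
    """Walsh-Hadamard transform: hat f(x) = sum_v (-1)^(f(v) + v.x)."""
    return {x: sum((-1) ** ((f(v) + sum(vi * xi for vi, xi in zip(v, x))) % 2)
                   for v in points)
            for x in points}

W = wht(f)
print(sorted(set(W.values())))   # a bent function only takes the values +-2^(n/2)

# Criterion of the lemma: no distinct x, y with hat f(x) + hat f(y) = 2^n
# or hat f(x) - hat f(y) = 2^n.
ok = all(W[x] + W[y] != 2 ** n and W[x] - W[y] != 2 ** n
         for x in points for y in points if x != y)
print("C_f is minimal by the criterion:", ok)
\end{verbatim}

For this $f$ the transform only takes the values $\pm 2^{n/2}$, so the conditions of the lemma are satisfied with a comfortable margin.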
\begin{lemma}\label{dingminimal} If $f:\mathbb{F}_2^n \rightarrow \mathbb{F}_2$ is not linear and $f(a) \neq 0$ for some $a \in \mathbb{F}_2^n \setminus \{0 \}$, then the binary code $\mathcal C_f$ is minimal if and only if $\hat{f}(x) + \hat{f}(y) \neq 2^n$ and $\hat{f}(x)-\hat{f}(y) \neq 2^n$ for every pair of distinct vectors $x, y \in \mathbb{F}_2^n$. \end{lemma} A boolean function $f: \mathbb{F}_2^n \rightarrow \mathbb{F}_2$ with $n$ a positive even integer is said to be \textit{bent} if $|\hat{f}(x)|=2^{n/2}$ for all $x \in \mathbb{F}_2^n$. Using Lemma~\ref{dingminimal}, it is easy to see that the binary code $\mathcal C_f$ is minimal when $f$ is a bent function. Bonini et al.\ studied the code $\mathcal C_f$ for arbitrary prime powers $q$ and functions $f$, and showed that if the zero set of $f$ satisfies certain geometric properties then $\mathcal C_f$ is a minimal code \cite{BB}. We continue this line of work by considering the code $\mathcal C_f$ when $f$ is the indicator function of a set, and give sufficient conditions for $\mathcal C_f$ to be minimal and not satisfy the condition of Ashikhmin and Barg in terms of the geometric properties of the support of $f$. We give a tight lower bound on the size of sets satisfying our geometric conditions, and give an explicit example of a set meeting the lower bound. In Section 2 we lay out the notation used, and in Section 3 we present the main results. \section{Notation} \begin{definition} A \textit{linear} $[n,k]$ \textit{code} is a $k$-dimensional subspace $\mathcal C$ of $\mathbb{F}_q^n$. \end{definition} \begin{definition} A codeword $c \in \mathcal C$ is said to be \textit{minimal} if whenever $\textnormal{supp}(c) \subseteq \textnormal{supp}(c^\prime)$ for some codeword $c^\prime \in \mathcal C$, we have $c = \lambda c^\prime$ for some $\lambda \in \mathbb{F}_q^\times$. A linear code $\mathcal C$ is \textit{minimal} if all codewords of $\mathcal C$ are minimal. \end{definition} We summarize some of the notation used in the paper: \begin{itemize} \item $e_i$ will denote the $i^{th}$ standard basis vector. \item For $v \in \mathbb{F}_q^n$, we will let $H(v)$ denote the set $\{u \in \mathbb{F}_q^n : v \cdot u =0 \}$. \item For a function $f : \mathbb{F}_q^n \rightarrow \mathbb{F}_q$, we will let $V(f)$ denote the set of zeros of $f$, $\{ u \in \mathbb{F}_q^n : f(u)=0 \}$. \item If $U$ is any set, we will let $U^*:=U \setminus \{0 \}$, and $\overline{U}$ will denote the complement of $U$. \item By a \textit{hyperplane}, we will mean an $(n-1)$-dimensional subspace of $\mathbb{F}_q^n$. \item By an \textit{affine hyperplane}, we will mean a coset of an $(n-1)$-dimensional subspace of $\mathbb{F}_q^n$. Note that by our convention, a hyperplane is also an affine hyperplane. \end{itemize} \section{The Main Results} The next theorem is our main result. \begin{theorem}\label{main} Let $q$ be an arbitrary prime power, and let $S \subseteq \mathbb{F}_q^n \setminus \{0\}$ be a set of points such that \begin{enumerate} \item $S$ is not contained in any affine hyperplane, \item $S$ meets every affine hyperplane, \item $|S| < q^{n-2}(q-1)$ \end{enumerate} Then $\mathcal C_f$ with $f$ the indicator function of $S$ is a minimal code that does not satisfy the Ashikhmin-Barg condition. \end{theorem} \begin{proof} Suppose that $\textnormal{supp}( c(u^\prime,v^\prime)) \subseteq \textnormal{supp}(c(u,v))$ for some codewords $c(u,v), c(u^\prime, v^\prime)$ of $\mathcal C_f$.
Equivalently, \begin{equation}\label{assump} V(uf(x)+v \cdot x)^* \subseteq V(u^\prime f(x) + v^\prime \cdot x)^* \end{equation} We proceed by cases to show that $c(u^\prime,v^\prime)=\lambda c(u,v)$ for some $\lambda \in \mathbb{F}_q^\times$. \textbf{Case 1:} If $v=0$, then equation~\ref{assump} implies $V(f)^* \subseteq H(v^\prime)^*$, so that $\overline{H(v^\prime)^*} \subseteq S$, contradicting that $|S| < q^{n-2}(q-1)$. \textbf{Case 2:} If $v^\prime =0$ then $V(uf(x)+v\cdot x)^* \subseteq V(f)^*$. From the partition \begin{equation}\label{partition} V(uf(x) + v\cdot x)^* = (V(f)^* \cap H(v)^*) \cup ( S \cap \{v \cdot x = -u \}) \end{equation} it follows that $S$ does not meet the affine hyperplane $\{v \cdot x =-u \}$, a contradiction. \textbf{Case 3:} If $v, v^\prime \neq 0$, then from equation~\ref{assump} and the partition of equation~\ref{partition} we have \begin{equation} V(f)^* \cap H(v)^* \subseteq V(f)^* \cap H(v^\prime)^* \subseteq H(v^\prime)^* \end{equation} Here $|H(v)^* \cap V(f)^*| = |H(v)^* \setminus S| \geq q^{n-1} - 1 - |S| \geq q^{n-2}$, while any hyperplane other than $H(v)$ contains at most $q^{n-2}-1$ points of $H(v)^*$, so that $H(v)^* \cap V(f)^*$ is not contained in a hyperplane other than $H(v)$, i.e. $H(v)=H(v^\prime)$. Thus we have $v^\prime = \lambda v$ for some $\lambda \in \mathbb{F}_q^\times$. Since $S$ meets every affine hyperplane, we can choose some $y$ in $S \cap \{v\cdot x = -u \}$. Equation~\ref{assump} then implies $u=-v\cdot y$ and $u^\prime=-\lambda v \cdot y$, so that $u^\prime = \lambda u$. We therefore conclude $c(u^\prime, v^\prime) = \lambda c(u,v)$ in this case, as was required. \textbf{Case 4:} If $u=0$, suppose first that $H(v) \neq H(v^\prime)$. Then equation~\ref{assump} reads as $H(v)^* \subseteq V(u^\prime f(x)+v^\prime \cdot x)^*$. Using the partition of equation~\ref{partition} and the assumption that $S$ is not contained in any affine hyperplane, we have \begin{equation*} \begin{split} q^{n-1}-1 &= |H(v)^*| \\ &= |V(f)^* \cap H(v^\prime)^* \cap H(v)^*| + |S \cap \{ v^\prime \cdot x =-u^\prime \} \cap H(v)^* | \\ & \leq |H(v)^* \cap H(v^\prime)^*| + |S \cap \{ v^\prime \cdot x =-u^\prime \} | \\ & \leq q^{n-2}-1 + q^{n-2}(q-1) - 1 \\ &= q^{n-1} - 2\\ \end{split} \end{equation*} This contradiction shows that we must have $H(v)=H(v^\prime)$, so that $v^\prime = \lambda v$ for some $\lambda \in \mathbb{F}_q^\times$. But if $u^\prime \neq 0$, the containment $H(v)^* \subseteq V(u^\prime f(x) + \lambda v \cdot x)^*$ implies that $H(v)^* \subseteq V(f)^*$, or equivalently $S \subseteq \overline{H(v)^*}$, which contradicts that $S$ meets the hyperplane $H(v)$. Hence $u^\prime=0$ and $c(u^\prime,v^\prime)=\lambda c(u,v)$, as required. \textbf{Case 5:} If $u^\prime =0$, then we have $V(uf(x)+v\cdot x)^* \subseteq H(v^\prime)^*$, and the argument of Case 3 applies. Lastly we check that the code $\mathcal C_f$ does not satisfy the Ashikhmin-Barg condition. The maximum weight is at least the weight of $c(0,v)$ for any $v \neq 0$, namely the number of nonzero points outside $H(v)$, which is $q^{n-1}(q-1) \geq q^{n-1}$, and the minimum weight is at most the weight of $c(1,0)$, which is $|S|$. Thus: \begin{equation*} \frac{w_{\max}}{w_{\min}} \geq \frac{q^{n-1}}{|S|} > \frac{q}{q-1} \end{equation*} \end{proof} When $q=2$ the conditions of Theorem~\ref{main} simplify considerably, which we record in the following corollary. \begin{corollary}\label{mainbinary} Let $S \subseteq \mathbb{F}_2^n \setminus \{0 \}$ be a set of points such that \begin{enumerate} \item $S$ is not contained in any hyperplane, \item $S$ meets every hyperplane, \item $|S| < 2^{n-2}$ \end{enumerate} Then the binary code $C_f$ with $f$ the indicator function of $S$ is a minimal code not satisfying the Ashikhmin-Barg condition.
\end{corollary} \begin{proof} It suffices to check that when $q=2$, a set of points $S$ that is not contained in any hyperplane and meets every hyperplane is also not contained in any affine hyperplane and meets every affine hyperplane. To see that $S$ is not contained in an affine hyperplane, suppose that $H$ is an affine hyperplane not containing the origin, and that $S \subseteq H$. Since $q=2$, the complement $\overline{H}$ is a hyperplane, so that $S$ does not meet the hyperplane $\overline{H}$, a contradiction. Similarly, if $S$ does not meet an affine hyperplane $H$ not containing the origin, then $S$ is contained in $\overline{H}$, which is a hyperplane. Therefore $S$ meets every affine hyperplane, so that $S$ satisfies the conditions of Theorem~\ref{main}. \end{proof} \begin{example} Assume that $n \geq 6$ is an even positive integer, and let $q=2$. A \textit{partial spread} of order $s$ is a set of $n/2$-dimensional subspaces $\{U_1,...,U_s\}$ of $\mathbb{F}_2^n$ such that $U_i \cap U_j = \{ 0 \}$ for all $1 \leq i < j \leq s$. It is easy to see that a partial spread of order $s$ has at most $2^{n/2}+1$ elements. In \cite{DHZ}, Ding et al.\ showed that when $1 \leq s \leq 2^{n/2}+1$ and $s \notin \{1, 2^{n/2}, 2^{n/2}+1 \}$, then $\mathcal C_f$ with $f$ the indicator function of the set $S = \cup_{i=1}^s U_i^*$ is a minimal code. Moreover, they showed that if, in addition, we have $s \leq 2^{\frac{n}{2}-2}$ then $\mathcal C_f$ does not satisfy Ashikhmin and Barg's condition. They proved this by computing the Walsh-Hadamard transform of $f$ and then applying Lemma~\ref{dingminimal}, but we can alternatively check that the set $S$ satisfies the conditions of Corollary~\ref{mainbinary}. Since $s \geq 2$, the set $S$ clearly spans $\mathbb{F}_2^n$, and the assumption that $n \geq 6$ means that $\dim(U_i) \geq 3$, so $S$ meets every hyperplane. Finally, we have in general that $|S|=s(2^{n/2}-1)$, so if we assume that $s \leq 2^{\frac{n}{2}-2}$ then an easy computation shows that $|S| \leq 2^{n-2}-2^{\frac{n}{2}-2}< 2^{n-2}$. Therefore $S$ indeed satisfies the conditions of Corollary~\ref{mainbinary}. \end{example} \begin{example} Let $n \geq 7$ and $2 \leq k \leq \lfloor \frac{n-3}{2} \rfloor$. Let $S$ be the set of nonzero vectors of $\mathbb{F}_2^n$ with weight at most $k$. In \cite{DHZ}, Ding et al.\ showed that $\mathcal C_f$ with $f$ the indicator function of $S$ is a minimal $[2^n-1, n+1, \sum_{i=1}^k {n \choose i}]$ binary code, and moreover that $\mathcal C_f$ does not satisfy the Ashikhmin-Barg condition if and only if \begin{equation}\label{dingbd} 1 + 2 \sum_{i=1}^k {n \choose i} \leq 2^{n-1} + {n-1 \choose k} \end{equation} We alternatively check that the set $S$ satisfies the conditions of Corollary~\ref{mainbinary}. Since $S$ contains the standard basis vectors, $S$ is clearly not contained in any hyperplane. Given any hyperplane $H(v)$, at least one of the vectors $e_1$, $e_2$, or $e_1+e_2$ is an element of $H(v)$, and each of these vectors is also an element of $S$. Therefore $S$ also meets every hyperplane. In general the size of $S$ is $\sum_{i=1}^k {n \choose i}$, so to apply Corollary~\ref{mainbinary} we lastly need to impose the restriction that $\sum_{i=1}^k {n \choose i} < 2^{n-2}$. We note that this is equivalent to the inequality \begin{equation} 1+ 2 \sum_{i=1}^k {n \choose i} \leq 2^{n-1} \end{equation} which is a more restrictive condition than the inequality given in Equation~\ref{dingbd}.
\end{example} We lastly give a tight lower bound on the size of a set of points satisfying the conditions of Theorem~\ref{main}. The following lemma was first proved by Jameson \cite{J}. There are many known proofs of the result; for a survey on them we refer the reader to \cite{B}. \begin{lemma}\label{bound} If $S$ is a set of points in $\mathbb{F}_q^n$ meeting every affine hyperplane then $|S| \geq n(q-1)+1$. \end{lemma} The lower bound of Lemma~\ref{bound} clearly gives a lower bound on the size of a set of points satisfying the conditions of Theorem~\ref{main}. However, it is not obvious that this bound should be tight since the set of points in Theorem~\ref{main} does not contain the origin. \begin{theorem}\label{boundmain} Let $q$ be an arbitrary prime power. If $S \subseteq \mathbb{F}_q^n \setminus \{0\}$ is a set of points such that \begin{enumerate} \item $S$ is not contained in any affine hyperplane, \item $S$ meets every affine hyperplane, \item $|S| < q^{n-2}(q-1)$, \end{enumerate} then $|S| \geq n(q-1)+1$. Moreover, this lower bound is tight. \end{theorem} \begin{proof} To show that this lower bound is tight, consider the set of points \begin{equation*} S: =\{ a+ \lambda e_i : \lambda \in \mathbb{F}_q, 1 \leq i \leq n \} \end{equation*} where $a \in \mathbb{F}_q^n \setminus \{0 \}$ is any point not equal to $\lambda e_i$ for any $\lambda \in \mathbb{F}_q$, $1 \leq i \leq n$. By our choice of $a$, the origin is not an element of $S$. Clearly $S$ is not contained in an affine hyperplane, and $|S|=n(q-1)+1 <q^{n-2}(q-1)$. Lastly, if $u_1X_1+u_2X_2+...+u_nX_n=\alpha$ is the equation of an affine hyperplane, then it is easily checked that $a+\lambda e_i$ is a point on the affine hyperplane, where $i$ is chosen such that $u_i \neq 0$, and $\lambda$ is chosen to be the element $\lambda = \frac{1}{u_i}(\alpha- u \cdot a)$. \end{proof} The sufficient conditions given in Theorem~\ref{main} and Corollary~\ref{mainbinary} give geometric conditions for the code $\mathcal C_f$ to be minimal and not satisfy the Ashikhmin-Barg condition when $q$ is any prime power and $f$ is an indicator function. Moreover, since the minimum weight of $\mathcal C_f$ is at most $|S|$, then the tight lower bound on the size of a set of points satisfying these conditions given in Theorem~\ref{boundmain} shows that it is possible for $\mathcal C_f$ to additionally have small minimum weight. \newcommand{\Addresses}{{ \footnotesize Julien Sorci, \textsc{Department of Mathematics, University of Florida, P. O. Box 118105, Gainesville FL 32611, USA}\par\nopagebreak \textit{E-mail address}: \texttt{[email protected]} }} \title{Minimal Codes From Characteristic Functions Not Satisfying The Ashikhmin-Barg Condition} \Addresses \end{document}
\begin{document} \markboth{F. Aroca, G. Ilardi and L. López de Medrano} {Puiseux power series solutions for systems of equations} \title{PUISEUX POWER SERIES SOLUTIONS FOR SYSTEMS OF EQUATIONS} \author{Fuensanta Aroca, Giovanna Ilardi and Luc\'ia L\'opez de Medrano} \address{Instituto de Matem\'aticas, Unidad Cuernavaca Universidad Nacional Aut\'onoma de M\'exico, A.P. 273-3 Admon. 3, Cuernavaca, Morelos, 62251 M\'exico} \address{Dipartimento Matematica Ed Applicazioni ``R. Caccioppoli'' Universit\`{a} Degli Studi Di Napoli ``Federico II'' Via Cintia - Complesso Universitario Di Monte S. Angelo 80126 - Napoli - Italia} \subjclass[2000]{Primary 14J17, 52B20; Secondary 14B05, 14Q15, 13P99} \keywords{Puiseux series, Newton polygon, singularity, tropical variety.} \maketitle \begin{abstract} We give an algorithm to compute term by term multivariate Puiseux series expansions of series arising as local parametrizations of zeroes of systems of algebraic equations at singular points. The algorithm is an extension of Newton's method for plane algebraic curves replacing the Newton polygon by the tropical variety of the ideal generated by the system. As a corollary we deduce a property of tropical varieties of quasi-ordinary singularities. \end{abstract} \section*{Introduction} Isaac Newton described an algorithm to compute term by term the series arising as $y$-roots of algebraic equations $f(x,y)=0$ \cite["Methodus fluxionum et serierum infinitorum" ]{Newton:1670}. The main tool used in the algorithm is a geometrical object called the Newton polygon. The roots found belong to a field of power series called Puiseux series \cite{Puiseux:1850}. The extension of Newton-Puiseux's algorithm for equations of the form $f(x_1,\ldots ,x_N,y)=0$ is due to J. McDonald \cite{JMcDonald:1995}. As can be expected, the Newton polygon is extended by the Newton polyhedron. An extension for systems of equations of the form $\{ f_1(x,y_1,\ldots ,y_M)= {\cdots} = {f_r(x,y_1,\ldots ,y_M)=0} \}$ is described in \cite{Maurer:1980} using tropism and in \cite{JensenMarkwig:2008} using tropical geometry. J. McDonald gives an extension to systems of equations $$\{ f_1(x_1,\ldots ,x_N,y_1,\ldots ,y_M)= {\cdots} = {f_r(x_1,\ldots ,x_N,y_1,\ldots ,y_M)=0} \}$$ using the Minkowski sum of the Newton Polyhedra. However, this algorithm works only for ``general" polynomials \cite{JMcDonald:2002}. In this note we extend Newton's method to any dimension and codimension. The Newton polyhedron of a polynomial is replaced by its normal fan. The tropical variety comes in naturally as the intersection of normal fans. We prove that, in an algebraically closed field of characteristic zero, the algorithm given always works. The natural field into which to embed the algebraic closure of polynomials in one variable is the field of Puiseux series. When it comes to several variables there is a family of fields to choose from. Each field is determined by the choice of a vector $\omega\in{\mathbb R}^N$ of rationally independent coordinates. The need to choose $\omega$ had already appeared when working with a hypersurface \cite{JMcDonald:1995,GonzalezPerez:2000}. The introduction of the family of fields is done in \cite{ArocaIlardi:2009}.\\ We start the article recalling the main statements on which the Newton-Puiseux method for algebraic plane curves relies (Section \ref{Newton-Puiseux's Method}) and extending these statements to the general case (Section \ref{The general statement}). 
In the complex case, a series of positive order, obtained by the Newton-Puiseux method for algebraic plane curves, represents a local parametrization of the curve around the origin. In Section \ref{The local parametrizations defined by the series} we explain how an $M$-tuple of series arising as a solution to the general Newton-Puiseux statement also represents a local parametrization recalling results form \cite{FAroca:2004}. In Section \ref{The fields} we recall the definition given in \cite{ArocaIlardi:2009} of the family of fields of $\omega$-positive Puiseux series and their natural valuation. Then, in Section \ref{The extended ideal} we reformulate the question using the fields introduced and show how it becomes a lot simpler. Then we work in the ring of polynomials with coefficients $\omega$-positive Puiseux series (Sections \ref{Weighted orders and initial parts}, \ref{Initial Ideals} and \ref{A special case of Kapranow's Theorem}). The Newton-Puiseux algorithm is based on the fact that the first term of a $y$-root is the $y$-root of the equation restricted to an edge of the Newton polygon. The analogous of this fact is expressed in terms of initial ideals. In Sections \ref{Weighted orders and initial parts} and \ref{Initial Ideals}, weighted orders and initial ideals are defined. In Section \ref{A special case of Kapranow's Theorem} we prove that initial parts of zeroes are zeroes of weighted initial ideals. Then we consider ideals in the ring of polynomials $${\mathbb K} [x^*,y]:= {\mathbb K} [x_1,{x_1}^{-1},\ldots,x_N,{x_N}^{-1},y_1,\ldots y_M]$$ (Section \ref{Polynomial initial ideals}) and characterize initial ideals with zeroes in a given torus. This is done in terms of the tropical variety of the ideal (Section \ref{Tropicalization}). Sections \ref{omega-data} to \ref{The solutions} are devoted to explaining the algorithm. In the last section we show the theoretical implications of the extension of Newton-Puiseux algorithm by giving a property of the tropical variety associated to a quasi-ordinary singularity.\\ \section{Newton-Puiseux's Method.}\label{Newton-Puiseux's Method} Given an algebraic plane curve ${\mathcal C}:=\{f(x,y)=0\}$, the Newton-Puiseux method constructs all the fractional power series $y(x)$ such that $f(x,y(x))=0$. These series turn out to be Puiseux series. Newton-Puiseux's method is based on two points: Given a polynomial $f(x,y)\in{\mathbb K} [x,y]$ \begin{enumerate} \item\label{dos} $cx^\mu$ is the first term of a Puiseux series $y(x)=cx^\mu+...$ with the property $f(x,y(x))=0$ if and only if \begin{itemize} \item $\frac{-1}{\mu}$ is the slope of some edge $L$ of the Newton polygon of $f$. \item $cx^\mu$ is a solution of the characteristic equation associated to $L$. \end{itemize} \item\label{tres} If we iterate the method: Take $c_i x^{\mu_i}$ to be a solution of the characteristic equation associated to the edge of slope $\frac{-1}{\mu_i}$ of $f_i := f_{i-1} (x, y+ c_{i-1} x^{\mu_{i-1}})$ with $\mu_i>\mu_{i-1}$. We do get a Puiseux series $\sum_{i=0}^{\infty} c_i x^{\mu_i}$ with the property $f(x,y(x))=0$. \end{enumerate} In this paper we prove the extension of these points: Point \ref{dos} is extended in Section \ref{Tropicalization} Theorem \ref{Extension del punto uno} and, then, Point \ref{tres} in Section \ref{The solutions} Theorem \ref{ultimo teorema}. Point \ref{dos} is necessary to assure that the sequences in Point \ref{tres} always exist. But Point \ref{dos} does not imply that any sequence constructed in such a way leads to a solution. 
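By way of illustration (this is our own addition, not part of the original exposition), the following Python sketch carries out the first step described in Point \ref{dos} for a concrete plane curve: it determines the admissible exponents $\mu$ from the compact edges of the Newton polygon and solves the corresponding characteristic equations. The example polynomial $f=y^2-x^2(x+1)$ and the brute-force search over pairs of exponents are arbitrary choices made for the sketch.

\begin{verbatim}
from fractions import Fraction
from itertools import combinations
import numpy as np

# First step of the Newton-Puiseux method for a plane curve f(x, y) = 0: find
# the exponents mu and coefficients c such that y = c x^mu + ... can start a
# Puiseux root.  f is given by its support {(i, j): coefficient of x^i y^j};
# here f = y^2 - x^2 (x + 1).

f = {(0, 2): 1, (3, 0): -1, (2, 0): -1}

def first_terms(f):
    support = list(f)
    candidates = set()
    for (i1, j1), (i2, j2) in combinations(support, 2):
        if j1 != j2:
            mu = Fraction(i2 - i1, j1 - j2)
            if mu > 0:
                candidates.add(mu)
    terms = []
    for mu in sorted(candidates):
        m = min(Fraction(i) + mu * j for i, j in support)      # minimal weight i + mu j
        edge = [(i, j) for i, j in support if Fraction(i) + mu * j == m]
        if len({j for _, j in edge}) < 2:
            continue                                           # a vertex, not an edge
        jmax = max(j for _, j in edge)
        char = [0.0] * (jmax + 1)                              # characteristic equation
        for i, j in edge:
            char[jmax - j] = f[(i, j)]                         # numpy wants high degree first
        roots = [c for c in np.roots(char) if abs(c) > 1e-12]  # nonzero solutions c
        terms.append((mu, roots))
    return terms

for mu, roots in first_terms(f):
    print(f"mu = {mu}: c in {np.round(roots, 6)}")
# Prints mu = 1 with c = +-1, i.e. the branches y = x + ... and y = -x + ...
\end{verbatim}

Iterating as in Point \ref{tres} amounts to applying the same routine to $f_i(x,y)=f_{i-1}(x,y+c_{i-1}x^{\mu_{i-1}})$, keeping only exponents $\mu_i>\mu_{i-1}$.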
Both results have led to a deep understanding of algebraic plane curves. \section{The general statement.}\label{The general statement} Take an $N$-dimensional algebraic variety $V\subset {\mathbb K}^{N+M}$. There is no hope of finding $k\in{\mathbb N}$ and an $M$-tuple of series $y_1,\ldots ,y_M$ in ${\mathbb K} [[x_1^{1/k},\ldots ,x_N^{1/k}]]$ such that the substitution $x_{N+j}\mapsto y_j(x_1,\ldots ,x_N)$, $j=1,\ldots ,M$, makes $f$ identically zero for all $f$ vanishing on $V$. (Parametrizations covering a whole neighborhood of a singularity do not exist in general.) McDonald's great idea was to look for series with exponents in cones. Introducing rings of series with exponents in cones served to prove Newton-Puiseux's statement for the hypersurface case \cite{JMcDonald:1995} and has been the inspiration of many other results (both in algebraic geometry \cite{SotoVicente:2006,GonzalezPerez:2000} and differential equations \cite{TAranda:2002,FArocaJCano:2001}). In order to give a general statement for any dimension and codimension, we need to recall some definitions of convex geometry: A {\bf convex rational polyhedral cone} is a subset of $\mathbb{R}^N$ of the form \begin{displaymath} \sigma =\{ \lambda_1v_1+\cdots+\lambda_r v_r \mid \lambda_i\in \mathbb{R}, \lambda_i\geq 0\}, \end{displaymath} where $v_1,\dots,v_r\in \mathbb{Q}^N$ are vectors. A cone is said to be {\bf strongly convex} if it contains no nontrivial linear subspaces. A {\bf fractional power series} $\varphi$ in $N$ variables is expressed as \[ \varphi=\sum_{\alpha \in {{\mathbb Q}}^N} c_\alpha x^\alpha, \qquad c_\alpha \in {{\mathbb K}}, \quad x^{\alpha}:=x_1^{\alpha_1} \dots x_N^{\alpha_N}. \] The {\bf set of exponents} of $\varphi$ is the set \[ {\mathcal E}(\varphi ):=\{\alpha\in {{\mathbb Q} }^N\mid c_\alpha\neq 0\}. \] A fractional power series $\varphi$ is a {\bf Puiseux series} when its set of exponents is contained in a lattice. That is, there exists $K\in{\mathbb N}$ such that ${\mathcal E}(\varphi )\subset {\frac{1}{K}{\mathbb Z}}^N$. Let $\sigma\subset{\mathbb R}^N$ be a strongly convex cone. We say that a Puiseux series $\varphi$ has {\bf exponents in a translate of $\sigma$} when there exists $\gamma\in{\mathbb Q}^N$ such that ${\mathcal E} (x^\gamma\varphi )\subset \sigma$. It is easy to see that the set of Puiseux series with exponents in translates of a strongly convex cone $\sigma$ is a ring. (But, when $N>1$, it is not a field.) Given a non-zero vector $\omega\in {\mathbb R}^N$, we say that a cone $\sigma$ is {\bf $\omega$-positive} when for all $v\in\sigma$ we have $v\cdot\omega\geq 0$. If $\omega$ has rationally independent coordinates, an $\omega$-positive rational cone is always strongly convex. Denote by ${\large \textsf{V}} ({\mathcal I})$ the set of common zeroes of the ideal ${\mathcal I}$. Extending Newton-Puiseux's statement to an algebraic variety of any dimension and codimension is equivalent to answering the following question: \begin{problem}\label{problema}Given an ideal ${{\mathcal I}}\subset {\mathbb K} [x_1,\ldots ,x_{N+M}]$ such that the projection \begin{equation}\label{la proyeccion} \begin{array}{cccc} \pi: & {\large \textsf{V}}({{\mathcal I} }) &\longrightarrow & {\mathbb K}^N\\ & (x_1,\ldots ,x_{N+M}) & \mapsto & (x_1,\ldots ,x_N) \end{array} \end{equation} is dominant and of finite generic fiber, and given $\omega\in{\mathbb R}^N$ with rationally independent coordinates:
can one always find an $\omega$-positive rational cone $\sigma$ and an $M$-tuple $\phi_1,\ldots, \phi_M$ of Puiseux series with exponents in some translate of $\sigma$ such that \[ f(x_1,\ldots ,x_N,\phi_1(x_1,\ldots ,x_N),\ldots, \phi_M(x_1,\ldots ,x_N))=0 \] for any $f\in{\mathcal I}$? \end{problem} If the projection is not \emph{dominant} the problem has no solution. If the generic fiber is not \emph{finite} an output will not be a parametrization. To emphasize the role of the projection, the indeterminates will be denoted by $x_1,\ldots ,x_N,y_1,\ldots ,y_M$. We will work with an ideal ${\mathcal I}\subset {\mathbb K} [x,y]:= {\mathbb K} [x_1,\ldots ,x_N,y_1,\ldots ,y_M]$. With this notation, the set of common zeroes of ${\mathcal I}$ is given by \[ {\large \textsf{V}}({\mathcal I} )= \{ (x,y)\in {\mathbb K}^{N+M}\mid f(x,y)=0,\forall f\in{\mathcal I}\}. \] \begin{defin} We will say that an ideal ${\mathcal I}\subset{\mathbb K} [x,y]$ is {\bf N-admissible} when the projection (\ref{la proyeccion}) is dominant and of finite generic fiber. We will say that an algebraic variety $V\subset {\mathbb K}^{N+M}$ is N-admissible when its defining ideal is N-admissible. Given an N-admissible ideal ${\mathcal I}\subset{\mathbb K} [x,y]$ and a vector $\omega\in{\mathbb R}^N$ of rationally independent coordinates, an $M$-tuple $\phi_1,\ldots ,\phi_M$ solving Question \ref{problema} will be called an {\bf $\omega$-solution for ${\mathcal I}$}. \end{defin} \section{The local parametrizations defined by the series.}\label{The local parametrizations defined by the series} Let $({\mathcal C},(0,0))$ be a complex plane algebraic curve singularity \[ (0,0)\in{\mathcal C}:=\{(x,y)\in {\mathbb C}^2\mid f(x,y)=0\} \] where $f$ is a polynomial with complex coefficients. Each output of the Newton-Puiseux method $y(x)=c_0x^{\mu_0}+\cdots$ with $\mu_0>0$ is a convergent series in a neighborhood of $0$. This series corresponds to a multi-valued mapping defined in a neighborhood of the origin $0\in U\subset{\mathbb C}$ \[ \begin{array}{cccc} \varphi: & U & \longrightarrow & {\mathcal C}\\ & x &\mapsto & (x,y(x)) \end{array} \] that is compatible with the projection \[ \begin{array}{cccc} \pi: &{\mathcal C} &\longrightarrow &{\mathbb C}\\ &(x,y) &\mapsto &x, \end{array} \] that is, $\pi\circ\varphi$ is the identity on $U$. When ${\mathcal C}$ is analytically irreducible at $(0,0)$, the image $\varphi (U)$ is a neighborhood of the curve at $(0,0)$. The parametrization $\varphi$ contains all the topological and analytical information of $({\mathcal C}, (0,0))$ and there are different ways to recover it (see for example \cite{Walker:1978,BrieskornKnorrer:1986}). If $\omega\in{{\mathbb R}_{>0}}^N$ has rationally independent positive coordinates, then the first orthant is $\omega$-positive and we may suppose that the series of an output of the extended Newton-Puiseux method has exponents in a cone $\sigma$ that contains the first orthant. Let $\sigma$ be a strongly convex cone that contains the first orthant. In \cite{FAroca:2004} it is shown that (when it is not empty) the domain of convergence of a series with exponents in a strongly convex cone $\sigma$ contains an open set $W$ that has the origin as accumulation point. Moreover, by the results of \cite[Prop 3.4]{FAroca:2004}, the intersection of a finite number of such domains is non-empty. Let $V$ be an N-admissible complex algebraic variety embedded in ${\mathbb C}^{N+M}$ and let $\omega\in{{\mathbb R}_{>0}}^N$ have rationally independent coordinates.
Each $M$-tuple of series $(y_1(\underline{x}),\ldots ,y_M(\underline{x}))$ found solving Question \ref{problema} corresponds to a multi-valued function defined on an open set $W\subset{\mathbb C}^N$ that has the origin as accumulation point \[ \begin{array}{cccc} \varphi: &W &\longrightarrow &V\\ &\underline{x} &\mapsto &(\underline{x},y_1(\underline{x}),\ldots ,y_M(\underline{x})). \end{array} \] The image $\varphi (W)$ contains an open set (a wedge) of $V$. When \begin{equation}\label{el valor es positivo} \omega\cdot\alpha > 0\quad\text{for all}\quad \alpha\in\bigcup_{j=1,\ldots ,M}{\EuScript E} (y_j) \end{equation} (when for each $j$, $y_j$ does not have a constant term and its set of exponents is contained in an $\omega$-positive cone with apex at the origin) the open set has the origin as accumulation point. Since analytic continuation is unique, when the origin is an analytically irreducible singularity, this parametrization contains all the topological and analytic information of the singularity. \section{The field of $\omega$-positive Puiseux series.}\label{The fields} In all that follows $\omega$ will be a vector in ${\mathbb R}^N$ of rationally independent coordinates. We will work with an algebraically closed field ${\mathbb K}$ of characteristic zero. Given an N-admissible ideal we are looking for solutions in the ring of Puiseux series with exponents in some translate of an $\omega$-positive cone $\sigma$. The cone $\sigma$ may be different for different ideals. It is only natural to work with the infinite union of all these rings. We say that a Puiseux series $\varphi$ is {\bf $\omega$-positive} when there exists $\gamma\in{\mathbb Q}^N$ and an $\omega$-positive cone $\sigma$ such that ${\mathcal E} (x^\gamma\varphi )\subset \sigma$. The set of $\omega$-positive Puiseux series was introduced in \cite{ArocaIlardi:2009} where it was proved that it is an algebraically closed field. This field is called the \textbf{field of $\omega$-positive Puiseux series} and will be denoted by ${\sl S}_\omega$. The vector $\omega$ induces a total order on ${\mathbb Q}^N$ \[ \alpha\leq \alpha '\Longleftrightarrow\omega\cdot\alpha\leq\omega\cdot\alpha '. \] This gives a natural way to choose the first term of a series in ${\sl S}_\omega$. This is the order we will use to compute the $\omega$-solutions ``term by term''. More precisely, the {\bf order} of an element $\phi=\sum_{\alpha}c_\alpha x^\alpha$ in ${\sl S}_\omega$ is \[ \ordser{\omega} (\phi) :=\min_{\alpha \in {\mathcal E}(\phi)} \omega \cdot \alpha \] and its {\bf first term} is \[ \inser{\omega} (\phi ):= c_\alpha x^\alpha\qquad\text{where}\qquad \omega \cdot \alpha=\ordser{\omega} (\phi). \] Set $\ordser{\omega}(0) := \infty$ and $\inser{\omega} (0)=0$. \begin{rem}\label{propiedades de valser e inser} For $\phi ,\phi'\in {\sl S}_\omega$ \begin{enumerate} \item $\ordser{\omega} (\phi+\phi')\geq\min \{\ordser{\omega} (\phi), \ordser{\omega} (\phi')\}$.\label{Propiedad valuacion 1} \item $\ordser{\omega} (\phi+\phi')\neq\min \{\ordser{\omega} (\phi), \ordser{\omega}(\phi')\}$ if and only if $\ordser{\omega} (\phi)=\ordser{\omega} (\phi')$ and $\inser{\omega}(\phi) +\inser{\omega}(\phi')=0$.\label{segunda propiedad valser} \item $\ordser{\omega} (\phi\cdot\phi')= \ordser{\omega}(\phi) +\ordser{\omega}(\phi')$. Moreover $\inser{\omega} (\phi\cdot\phi')= \inser{\omega}(\phi)\cdot\inser{\omega}(\phi')$.
\label{multiplication} \item $\inser{\omega} (\inser{\omega} (\phi))=\inser{\omega} (\phi)$. \item\label{orden y ramificacion} $\ordser{\omega}(\phi ({x_1}^r,\ldots ,{x_N}^r))=r\ordser{\omega}(\phi (x_1,\ldots ,x_N))$ for any $r\in{\mathbb Q}$. \end{enumerate} \end{rem} A map from a ring into the reals with Properties \ref{Propiedad valuacion 1} and \ref{multiplication} is called a {\bf valuation}. The {\bf first $M$-tuple} of an element $\varphi=(\varphi_1,\ldots ,\varphi_M)\in{\sl S}_\omega^M$ is the $M$-tuple of monomials \[ \inser{\omega}(\varphi)=(\inser{\omega}(\varphi_1),\ldots ,\inser{\omega}(\varphi_M)) \] and the {\bf order} of $\varphi$ is the $M$-tuple of orders \[ \ordser{\omega}(\varphi)=(\ordser{\omega}(\varphi_1),\ldots ,\ordser{\omega}(\varphi_M)). \] \begin{rem} With the language introduced, Equation (\ref{el valor es positivo}) is equivalent to $\ordser{\omega} (y)\in {{\mathbb R}_{>0}}^M$. \end{rem} \section{The extended ideal.}\label{The extended ideal} Given an ideal ${\mathcal I}\subset {\mathbb K} [x,y]$, let ${\mathcal I}^*\subset {\mathbb K} [x^*,y]$ be the extension of ${\mathcal I}$ to ${\mathbb K} [x^*,y]$ via the natural inclusion. We have \[ {\bf V} ( {\mathcal I}^*\cap {\mathbb K} [x,y])= \overline{ {\bf V}({\mathcal I}) \setminus \{ x_1\cdots x_N=0\}}. \] In regard to our question, it is then equivalent to work with ideals in ${\mathbb K} [x,y]$ or in ${\mathbb K}[x^*,y]$. For technical reasons we will start with ideals in ${\mathbb K}[x^*,y]$. \begin{defin} An ideal ${\mathcal I}\subset{\mathbb K} [x^*,y]$ is said to be {\bf N-admissible} if the ideal ${\mathcal I}\cap{\mathbb K} [x,y]\subset{\mathbb K} [x,y]$ is N-admissible. \end{defin} Given an ideal ${\mathcal I}\subset {\mathbb K}[x^*,y]$, let ${{\mathcal I}}^{\rm e}\subset{\sl S}_\omega [y]$ be the extension of ${\mathcal I}$ via the natural inclusion \[ {\mathbb K} [x^*, y]={\mathbb K}[x^*][y]\hookrightarrow {\sl S}_{\omega} [y]. \] When ${\mathcal I}$ is an N-admissible ideal, ${\large \textsc{V}} ({{\mathcal I}}^{\rm e})$ is a discrete subset of ${{\sl S}_\omega}^M$. By definition, $\phi\in{\large \textsc{V}} ({{\mathcal I}}^{\rm e})$ if and only if $\phi$ is an $\omega$-solution for ${\mathcal I}$. Question \ref{problema} may be reformulated as follows:\\ \begin{problem}{\bf Reformulation of Question \ref{problema}} Given an N-admissible ideal ${\mathcal I}\subset{\mathbb K} [x^*,y]$ and a vector $\omega\in{\mathbb R}^N$ of rationally independent coordinates, find the (discrete) set of zeroes in ${{\sl S}_\omega}^M$ of the extended ideal ${{\mathcal I}}^{\rm e}\subset {\sl S}_\omega [y]$. \end{problem} A polynomial $f\in{\mathbb K} [x^*,y]$ may be considered a polynomial in $N+M$ variables with coefficients in ${\mathbb K}$, or a polynomial in $M$ variables with coefficients in ${\mathbb K} [x^*]\subset{\sl S}_\omega$. To cope with this fact we will use a slightly different notation: \begin{itemize} \item[*] $\ordser\omega$ and $\inser\omega$ refer to the field ${\sl S}_\omega$. (Section \ref{The fields}.) \item[*] $\ordpol\omega\eta$, $\inpol\omega\eta$ and $\idinpol\omega\eta$ refer to the ring ${\sl S}_\omega [y]$. (Sections \ref{Weighted orders and initial parts} and \ref{Initial Ideals}.) \item[*] $\ordPol\omega\eta$, $\inPol\omega\eta$ and $\idinPol\omega\eta$ refer to the ring ${\mathbb K} [x^*, y]$. (Section \ref{Polynomial initial ideals}.)
\end{itemize} Given an ideal ${\mathcal I}\subset{\mathbb K} [x,y]$ the notation ${\large \textsf{V}} ({\mathcal I})$ will stand for the set of common zeroes of ${\mathcal I}$ in ${\mathbb K}^{N+M}$. Given an ideal ${\mathfrak I}\subset{\sl S}_\omega [y]$ the set of common zeroes of ${\mathfrak I}$ in ${{\sl S}_\omega}^{M}$ will be denoted by ${\large \textsc{V}} ({\mathfrak I})$. \section{Weighted orders and initial parts in ${\sl S}_\omega [y]$.}\label{Weighted orders and initial parts} The classical definition of weighted order and initial part considers as weights only vectors in ${\mathbb R}^M$. For technical reasons we need to extend the classical definition to weights in $\left({\mathbb R}\cup\{\infty\}\right)^M$. A polynomial in $M$ variables with coefficients in ${\sl S}_\omega$ is written in the form \[ f=\sum_{\beta\in E\subset ({{\mathbb Z}_{\geq 0}})^M}\phi_\beta y^\beta,\qquad \phi_\beta\in{\sl S}_\omega,\qquad y^\beta := {y_1}^{\beta_1}\cdots {y_M}^{\beta_M} \] where $E$ is a finite set. Set $\infty\cdot a=\infty$ for $a\in{\mathbb R}^*$ and $\infty\cdot 0=0$. A vector $\eta\in {({\mathbb R}\cup\{\infty\})}^M$ induces a (not necessarily total) order on the terms of $f$. The {\bf $\eta$-order of $f$ as an element of ${\sl S}_\omega [y]$} is \[ \ordpol{\omega}{\eta}(f):=\min_{\phi_\beta\neq 0}\left(\ordser{\omega}\phi_\beta +\eta\cdot\beta\right) \] and, if $\ordpol{\omega}{\eta}f<\infty$, the {\bf $\eta$-initial part of $f$ as an element of ${\sl S}_\omega [y]$} is \[ \inpol{\omega}{\eta}(f):=\sum_{\ordser{\omega}\phi_\beta+\eta\cdot\beta=\ordpol{\omega}{\eta}(f)}(\inser{\omega}\phi_\beta) y^\beta . \] \begin{examp}\label{inpol de un binomio} Consider a binomial of the form $y^\beta -\phi$ we have \[ \ordpol{\omega}{\eta} (y^\beta -\phi )= \left\{ \begin{array}{ll} \eta\cdot\beta &\text{if}\quad \eta\cdot\beta\leq\ordser{\omega}(\phi)\\ \ordser{\omega}(\phi) &\text{if}\quad \ordser{\omega}\phi\leq\eta\cdot\beta \end{array} \right. \] and \[ \inpol{\omega}{\eta}(y^\beta -\phi )= \left\{ \begin{array}{ll} y^\beta &\text{if}\quad \eta\cdot\beta <\ordser{\omega}(\phi)\\ y^\beta -\inser{\omega}(\phi) &\text{if}\quad \eta\cdot\beta =\ordser{\omega}(\phi)\\ \inser{\omega}(\phi) &\text{if}\quad \ordser{\omega}(\phi)<\eta\cdot\beta . \end{array} \right. \] \end{examp} \begin{lem} \label{key} If $\varphi\in{\sl S}_\omega^M$ is a zero of $f\in{\sl S}_\omega [y]$, then $\inser{\omega} (\varphi)$ is a zero of $\inpol{\omega}{\ordser{\omega}\varphi}(f)$. \end{lem} \begin{proof} Set $\eta :=\ordser{\omega}(\varphi)$. For $\phi\in{\sl S}_\omega$ and $\beta\in {{\mathbb Z}_{\geq 0}}^M$ the following equality holds: \begin{equation}\label{relacion entre ordser y ordpol} \ordser{\omega} \left(\phi\varphi^\beta\right) \stackrel{\ref{propiedades de valser e inser},\, \ref{multiplication}}{=} \ordser{\omega}(\phi) +\eta\cdot\beta = \ordpol{\omega}{\eta}\phi y^\beta. 
\end{equation}
Suppose that $\varphi\in{\sl S}_\omega^M$ is a zero of $f=\sum_{\beta}\phi_\beta y^\beta$. Then
\[
\begin{array}{lcl}
\sum_{\beta}\phi_\beta \varphi^\beta=0 & \stackrel{(\ref{relacion entre ordser y ordpol})+ \ref{propiedades de valser e inser},\,\ref{segunda propiedad valser} }{\Longrightarrow} &\sum_{\ordser{\omega}\left(\phi_\beta\varphi^\beta\right) =\ordpol{\omega}{\eta} (f)} \inser{\omega}\left(\phi_\beta\varphi^\beta\right)=0\\
&\stackrel{\ref{propiedades de valser e inser},\, \ref{multiplication}}{\Longrightarrow} & \sum_{\ordser{\omega}\left(\phi_\beta\right)+\eta\cdot\beta =\ordpol{\omega}{\eta} (f)} \inser{\omega}\phi_\beta {\left(\inser{\omega}\varphi\right)}^\beta=0\\
&\stackrel{\text{By definition}}{\Longrightarrow} & \inpol{\omega}{\eta}(f)\left(\inser{\omega}\varphi\right)=0.
\end{array}
\]
\end{proof}
For any $f\in{\sl S}_\omega [y]$ and $\eta\in {({\mathbb R}\cup\{\infty\})}^M$ we have $\inpol{\omega}{\eta}(f)\in {\mathbb K} ({x^{\frac{1}{K}}})[y]$. An element of the form $cx^\alpha$ with $c\in{\mathbb K}$ will be called a monomial.
\begin{lem}\label{Sistema de coeficiente}
Given $f\in{\sl S}_\omega [y]$, let ${\mathfrak m}(x)\in {{\mathbb K} ({x^{\frac{1}{K}}})}^M$ be an $M$-tuple of monomials and set $\eta:= \ordser{\omega}({\mathfrak m})$. Then
\[
\inpol{\omega}{\eta}(f)(x,{\mathfrak m}(x))=0\Longleftrightarrow \inpol{\omega}{\eta}(f)(\underline{1},{\mathfrak m}(\underline{1}))=0.
\]
That is, an $M$-tuple of monomials ${\mathfrak m}\in {{\mathbb K} ({x^{\frac{1}{K}}})}^M$ with $\ordser{\omega}({\mathfrak m})=\eta$ is a zero of $\inpol{\omega}{\eta}(f)$, as an element of ${\mathbb K} ({x^{\frac{1}{K}}})[y]$, if and only if ${\mathfrak m}(\underline{1})$ is a zero of $\inpol{\omega}{\eta}(f)(\underline{1},y)$.
\end{lem}
\begin{proof}
If $\ordser{\omega} ({\mathfrak m})=\eta$ then $\ordser{\omega}\left(x^\alpha {\mathfrak m}^\beta\right)=\omega\cdot\alpha+\eta\cdot\beta$. Since $\omega$ has rationally independent coordinates, $x^\alpha {\mathfrak m}^\beta = a x^\gamma$ where $a={{\mathfrak m}(\underline{1})}^\beta\in{\mathbb K}$ and $\gamma$ is the unique vector in ${\mathbb Q}^N$ such that $\omega\cdot\gamma=\omega\cdot\alpha+\eta\cdot\beta$. Now, writing
\[
\inpol{\omega}{\eta}(f)=\sum_{\omega\cdot\alpha+\eta\cdot\beta= \ordpol{\omega}{\eta}(f)} a_{\alpha ,\beta} x^\alpha y^\beta ,
\]
we have that $\sum_{\omega\cdot\alpha+\eta\cdot\beta=\ordpol{\omega}{\eta}(f)} a_{\alpha ,\beta} x^\alpha {\mathfrak m}^\beta=0$ if and only if
\[
\sum_{\omega\cdot\alpha+\eta\cdot\beta=\ordpol{\omega}{\eta}(f)} a_{\alpha ,\beta} \frac{x^\alpha {\mathfrak m}^\beta}{x^\gamma}=0\Leftrightarrow \sum_{\omega\cdot\alpha+\eta\cdot\beta=\ordpol{\omega}{\eta}(f)} a_{\alpha ,\beta} {{\mathfrak m}(\underline{1})}^\beta=0.
\]
\end{proof}
\section{Initial Ideals in ${\sl S}_\omega [y]$.}\label{Initial Ideals}
For an $M$-tuple $\eta\in {({\mathbb R}\cup\{\infty\})}^M$ we will denote by $\Lambda (\eta )$ the set of subindexes
\[
\Lambda (\eta ):=\{ i\in\{1,\ldots ,M\}\mid \eta_i\neq\infty\}.
\]
\begin{rem}
$\ordpol{\omega}{\eta}(f)=\infty$ if and only if $f$ is in the ideal generated by $\{ y_i\mid i\in {\Lambda (\eta )}^{\rm C}\}$.
\end{rem}
Let ${\mathfrak I}$ be an ideal of ${\sl S}_\omega [y]$ and $\eta\in {({\mathbb R}\cup\{\infty\} )}^M$.
The {\bf $\eta$-initial part of ${\mathfrak I}$} is the ideal of ${\sl S}_\omega [y]$ generated by the $\eta$-initial parts of its elements together with the variables $y_i$ with $i\in{\Lambda (\eta )}^{\rm C}$:
\[
\idinpol{\omega}{\eta} {\mathfrak I}=\left< \{ \inpol{\omega}{\eta}f\mid f\in {\mathfrak I}\}\cup \{y_i\}_{i\in \Lambda(\eta)^{\rm C}}\right> .
\]
Let ${\mathcal A}$ and ${\mathcal B}$ be ideals. We have
\begin{equation}\label{inicial de la interseccion menor que interseccion de iniciales}
\idinpol{\omega}{\eta}\left( {\mathcal A}\cap {\mathcal B}\right)\subset \idinpol{\omega}{\eta} {\mathcal A}\cap \idinpol{\omega}{\eta}{\mathcal B}
\end{equation}
and
\begin{equation}\label{Parte inicial respeta la inclusion}
{\mathcal A}\subset {\mathcal B}\Longrightarrow \idinpol{\omega}{\eta} {\mathcal A}\subset\idinpol{\omega}{\eta} {\mathcal B}.
\end{equation}
Since ${\mathcal A}\cdot{\mathcal B}\subset {\mathcal A}\cap {\mathcal B}$, we have
\begin{equation}\label{inicial del producto menor que inicial de interseccion}
\idinpol{\omega}{\eta}\left( {\mathcal A}\cdot{\mathcal B}\right) \subset \idinpol{\omega}{\eta}\left( {\mathcal A}\cap {\mathcal B}\right)
\end{equation}
and, since $\inpol{\omega}{\eta} (a\cdot b)=\inpol{\omega}{\eta} a\cdot \inpol{\omega}{\eta} b$,
\begin{equation}\label{producto de iniciales menor que inicial del producto}
\idinpol{\omega}{\eta}{\mathcal A}\cdot \idinpol{\omega}{\eta}{\mathcal B}\subset \idinpol{\omega}{\eta}\left( {\mathcal A}\cdot{\mathcal B}\right).
\end{equation}
Let $A$ be an arbitrary set. For an M-tuple $y\in A^M$ and a subset $\Lambda \subset \{ 1,\ldots ,M\}$ we will use the following notation:
\begin{equation}\label{Quedarse solo con unas coordenadas}
y_\Lambda := (y_i)_{i\in\Lambda}.
\end{equation}
Given two subsets $B\subset A$ and $C\subset A$, the set $B^{\Lambda}\times C^{\Lambda^{\rm C}}$ is defined to be
\[
B^{\Lambda}\times C^{\Lambda^{\rm C}}:=\{ y\in A^M\mid y_\Lambda\in B^{\#\Lambda}\,\text{and}\, y_{\Lambda^{\rm C}}\in C^{\#\Lambda^{\rm C}}\}.
\]
We will use the notation $\toro{\eta}$ for the $\# \Lambda (\eta )$-dimensional torus
\[
\toro{\eta} := {\left({\sl S}_\omega^*\right)}^{\Lambda (\eta )}\times {\{ 0\}}^{{\Lambda (\eta )}^{\rm C}}.
\]
\begin{rem}\label{Contenidos en cierre de toro}
${\large \textsc{V}} \left( \idinpol{\omega}{\eta}{\mathfrak I}\right)\subset \overline{\toro{\eta}}$.
\end{rem}
\begin{examp}
For a point $\varphi=(\varphi_1,\ldots ,\varphi_M)\in {{\sl S}_\omega}^M$ denote by $\idmax{\varphi}$ the maximal ideal
\[
\idmax{\varphi}=\left< y_1-\varphi_1,\ldots ,y_M-\varphi_M\right>\subset {\sl S}_\omega [y].
\]
Given $\eta\in {({\mathbb R}\cup\{\infty\})}^M$ we have
\[
\left\{
\begin{array}{ll}
\idinpol{\omega}{\eta}\idmax{\varphi}={\sl S}_\omega [y] &\text{if}\quad \ordser{\omega}(\varphi_i)<\eta_i\quad\text{for some}\quad i\in\{1,\ldots ,M\}\\
y_i\in\idinpol{\omega}{\eta}\idmax{\varphi} &\text{if}\quad \ordser{\omega}(\varphi_i)>\eta_i\\
\idinpol{\omega}{\eta}\idmax{\varphi}=\idmax{\inser{\omega}(\varphi)} &\text{if}\quad \ordser{\omega}(\varphi)=\eta .
\end{array}
\right.
\]
The first two points and the inclusion $\idinpol{\omega}{\eta}\idmax{\varphi}\supset\idmax{\inser{\omega}(\varphi)}$ in the third are a direct consequence of Example \ref{inpol de un binomio}. The inclusion $\idinpol{\omega}{\eta}\idmax{\varphi}\subset\idmax{\inser{\omega}(\varphi)}$ in the third point is equivalent to $\inser{\omega}(\varphi)\in{\large \textsc{V}}\left( \idinpol{\omega}{\eta}\idmax{\varphi}\right)$, which follows from Lemma \ref{key}.
And then \begin{equation}\label{ceros de la parte inicial de J} \toro{\eta}\cap {\large \textsc{V}}\left( \idinpol{\omega}{\eta}\idmax{\varphi}\right)=\left\{ \begin{array}{ccc} \emptyset & \text{if} & \ordser{\omega}(\varphi)\neq\eta\\ \inser{\omega}(\varphi) & \text{if} & \ordser{\omega}(\varphi)= \eta.\\ \end{array} \right. \end{equation} \end{examp} \section{Zeroes of the initial ideal in ${\sl S}_\omega [y]$.} \label{A special case of Kapranow's Theorem} Now we are ready to characterize the first terms of the zeroes of the ideal ${\mathfrak I}\subset {\sl S}_\omega [y]$. The following is the key proposition to extend Point \ref{dos} of Newton-Puiseux's method. \begin{prop}\label{Kapranow finito} Let ${\mathfrak I}\subset {\sl S}_\omega [y]$ be an ideal with a finite number of zeroes and let $\eta$ be an $M$-tuple in ${({\mathbb R}\cup\{\infty\})}^M$. An element $\phi\in \toro{\eta}$ is a zero of the ideal $\idinpol{\omega}{\eta}{\mathfrak I}$ if and only if $\ordser{\omega}(\phi) = \eta$ and there exists $\varphi\in{\large \textsc{V}} ({\mathfrak I} )$ such that $\inser{\omega}(\varphi) =\phi$. \end{prop} \begin{proof} Given $\varphi=(\varphi_1,\ldots ,\varphi_M)\in {{\sl S}_\omega}^M $ consider the ideal \[ \idmax{\varphi}=\left< y_1-\varphi_1,\ldots ,y_M-\varphi_M\right>\subset {\sl S}_\omega [y]. \] Set $H:={\large \textsc{V}} ({\mathfrak I} )$. By hypothesis $H$ is a finite subset of ${{\sl S}_\omega}^M$. By the Nullstellensatz there exists $k\in{\mathbb N}$ such that \[ {\left( \bigcap_{\varphi\in H} {\mathcal J}_{\varphi}\right)}^k\subset {\mathfrak I}\subset \bigcap_{\varphi\in H} {\mathcal J}_{\varphi}. \] By (\ref{Parte inicial respeta la inclusion}) and (\ref{producto de iniciales menor que inicial del producto}) we have \begin{equation}\label{Meto la parte inicial del ideal entre dos} {\left( \idinpol{\omega}{\eta}\bigcap_{\varphi\in H} {\mathcal J}_{\varphi}\right)}^k\subset \idinpol{\omega}{\eta}{\mathfrak I}\subset \idinpol{\omega}{\eta}\bigcap_{\varphi\in H} {\mathcal J}_{\varphi}. \end{equation} On the other hand \begin{equation}\label{Entre la interseccion y el producto} \prod_{\varphi\in H} \idinpol{\omega}{\eta}\idmax{\varphi} \stackrel{(\ref{producto de iniciales menor que inicial del producto})+(\ref{inicial del producto menor que inicial de interseccion})}{\subset} \idinpol{\omega}{\eta}\bigcap_{\varphi\in H}\idmax{\varphi} \stackrel{(\ref{inicial de la interseccion menor que interseccion de iniciales})}{\subset} \bigcap_{\varphi\in H}\idinpol{\omega}{\eta}\idmax{\varphi}. \end{equation} The zeroes of the right-hand and left-hand side of Equation (\ref{Entre la interseccion y el producto}) coincide. Therefore \begin{equation} {\large \textsc{V}}\left( \idinpol{\omega}{\eta}\bigcap_{\varphi\in H}\idmax{\varphi}\right) \stackrel{(\ref{Entre la interseccion y el producto})}{=} {\large \textsc{V}}\left( \bigcap_{\varphi\in H}\idinpol{\omega}{\eta}\idmax{\varphi}\right) = \bigcup_{\varphi\in H}{\large \textsc{V}}\left( \idinpol{\omega}{\eta}\idmax{\varphi}\right) \end{equation} and then, by (\ref{ceros de la parte inicial de J}), \begin{equation}\label{ceros de No se que poner} \toro{\eta}\cap {\large \textsc{V}}\left(\idinpol{\omega}{\eta}\bigcap_{\varphi\in H} \idmax{\varphi}\right)=\{\inser{\omega}\varphi\mid\varphi\in H,\ordser{\omega}(\varphi) = \eta\}. \end{equation} The conclusion follows directly from (\ref{Meto la parte inicial del ideal entre dos}) and (\ref{ceros de No se que poner}). 
\end{proof}
\begin{cor}\label{Los ceros del inicial son monomios de orden eta}
Let ${\mathfrak I}\subset {\sl S}_\omega [y]$ be an ideal with a finite number of zeroes and let $\eta$ be an $M$-tuple in ${({\mathbb R}\cup\{\infty\})}^M$. The zeroes of the ideal $\idinpol{\omega}{\eta}{\mathfrak I}$ in $\toro{\eta}$ are $M$-tuples of monomials of order $\eta$.
\end{cor}
\section{Initial ideals in ${\mathbb K} [x^*,y]$.}\label{Polynomial initial ideals}
A polynomial $f\in{\mathbb K} [x^*, y]={\mathbb K}[x^*][y]$ is an expression of the form
\[
\sum_{(\alpha ,\beta )\in {\mathbb Z}^N\times {({\mathbb Z}_{\geq 0})}^M} a_{(\alpha ,\beta )} x^\alpha y^\beta ,\qquad a_{(\alpha ,\beta )}\in{\mathbb K},
\]
with only finitely many $a_{(\alpha ,\beta )}$ different from zero. The $\eta$-order of $f\in{\mathbb K} [x^*][y]$ as an element of ${\sl S}_\omega [y]$ is called the {\bf $(\omega ,\eta)$-order} of $f$. That is,
\[
\ordPol{\omega}{\eta} (f) :=\min_{a_{(\alpha ,\beta ) }\neq 0}\left( \omega\cdot\alpha +\eta\cdot\beta\right).
\]
The $\eta$-initial part of $f$ as an element of ${\sl S}_\omega [y]$ is called the {\bf $(\omega ,\eta)$-initial part} of $f$. That is: if $\ordPol{\omega}{\eta}(f)<\infty$, then
\[
\inPol{\omega}{\eta} (f) := \sum_{\omega\cdot\alpha +\eta\cdot\beta=\ordPol{\omega}{\eta} (f)} a_{(\alpha,\beta )} x^\alpha y^\beta
\]
and, if $\ordPol{\omega}{\eta}(f)=\infty$, $\inPol{\omega}{\eta} (f)=0$. Given an ideal ${{\mathcal I}}\subset{\mathbb K} [x^*][y]$ the {\bf $(\omega ,\eta )$-initial ideal of ${{\mathcal I}}$} is the ideal
\[
\idinPol{\omega}{\eta}{{\mathcal I}}:= \left< \{\inPol{\omega}{\eta}(f)\mid f\in {{\mathcal I}}\}\cup \{ y_i\}_{i\in {\Lambda (\eta )}^{\rm C}}\right>\subset {\mathbb K} [x^*][y].
\]
Given an ideal ${{\mathcal I}}\subset{\mathbb K} [x^*][y]$ let ${\mathcal I}^{\rm e}$ denote the extension of ${\mathcal I}$ to ${\sl S}_\omega [y]$.
\begin{prop}\label{las extensiones y sin extender}
Given $\eta\in {({\mathbb R}\cup\{\infty\})}^M$ and an ideal ${{\mathcal I}}\subset{\mathbb K} [x^*,y]$ we have that
\[
{\left(\idinPol{\omega}{\eta} {{\mathcal I}}\right)}^{\rm e}= \idinpol{\omega}{\eta} {{\mathcal I}}^{\rm e}.
\]
\end{prop}
\begin{proof}
The inclusion ${\left(\idinPol{\omega}{\eta} {{\mathcal I}}\right)}^{\rm e}\subset \idinpol{\omega}{\eta} {{\mathcal I}}^{\rm e}$ is straightforward. Now, $h\in \{\inpol{\omega}{\eta}f\mid f\in{{\mathcal I}}^{\rm e}\}$ if and only if $h= \inpol{\omega}{\eta}(\sum_{i=1}^r g_i P_i)$ where $g_i\in{\sl S}_\omega [y]$ and $P_i\in {{\mathcal I}}$. Let $\Lambda =\{ i\mid \ordpol{\omega}{\eta}\left( g_iP_i\right)=\min_{j=1,\ldots ,r}\ordpol{\omega}{\eta}\left( g_jP_j\right)\}$. If $\sum_{i\in\Lambda} \inpol{\omega}{\eta} \left( g_i P_i\right) = 0$ then $h= \inpol{\omega}{\eta}(\sum_{i=1}^r g_i' P_i )$ where $g_i'= g_i-\inpol{\omega}{\eta}(g_i)$ for $i\in\Lambda$ and $g_i'=g_i$ otherwise. Hence we may suppose that $\sum_{i\in\Lambda} \inpol{\omega}{\eta} \left( g_i P_i\right)\neq 0$. Then $h= \sum_{i\in\Lambda} \inpol{\omega}{\eta} \left(g_i P_i\right) =\sum_{i\in\Lambda} \inpol{\omega}{\eta} (g_i )\inpol{\omega}{\eta} (P_i)$ is an element of ${\left(\idinPol{\omega}{\eta} {{\mathcal I}}\right)}^{\rm e}$.
\end{proof}
We will be using the following technical result:
\begin{lem}\label{inicial del inicial}
Given $\eta\in {({\mathbb R}\cup\{\infty\})}^M$ and an ideal ${\mathcal I}\subset{\mathbb K} [x^*][y]$ we have that
\[
\idinPol{\omega}{\eta}( {\idinPol{\omega}{\eta} {{\mathcal I}}}) =\idinPol{\omega}{\eta} {{\mathcal I}}.
\]
\end{lem}
\begin{proof}
It is enough to see that for any $g\in{\idinPol{\omega}{\eta} {{\mathcal I}}}$ there exists $f\in {\mathcal I}$ such that $\inPol{\omega}{\eta}(g)=\inPol{\omega}{\eta} (f)$: Given $p=\sum_{i=1}^d a_i x^{\alpha_i}y^{\beta_i}\in{\mathbb K} [x^*][y]$ and $h\in {{\mathcal I}}$, we have
\[
p\inPol{\omega}{\eta}(h)= \sum_{i=1}^d a_i x^{\alpha_i}y^{\beta_i} \inPol{\omega}{\eta}(h)=\sum_{i=1}^d\inPol{\omega}{\eta} (a_i x^{\alpha_i}y^{\beta_i}h).
\]
Then the product $p\inPol{\omega}{\eta}(h)$ is a sum of $({\omega},{\eta})$-initial parts of elements of ${{\mathcal I}}$. Therefore, $g\in\idinPol{\omega}{\eta}{{\mathcal I}}$ if and only if there exist $f_1,\ldots ,f_r\in {{\mathcal I}}$ such that $g=\sum_{i\in\{ 1,\ldots , r\}} \inPol{\omega}{\eta}(f_i)$. The $f_i$'s may be chosen such that $\sum_{i\in\Lambda}\inPol{\omega}{\eta}(f_i)\neq 0$ for all non-empty $\Lambda\subset\{ 1,\ldots , r\}$. Let $m=\min_{i\in\{ 1,\ldots , r\}} \ordPol{\omega}{\eta}(f_i)$. Since $\sum_{\ordPol{\omega}{\eta} (f_i)=m}\inPol{\omega}{\eta} (f_i)\neq 0$, we have $\inPol{\omega}{\eta} (g)=\sum_{\ordPol{\omega}{\eta} (f_i)=m} \inPol{\omega}{\eta} (f_i)$, and then $f:=\sum_{\ordPol{\omega}{\eta} (f_i)=m}f_i$ has the property we were looking for.
\end{proof}
\begin{prop}\label{primera traduccion del teorema}
Let $\omega\in{\mathbb R}^N$ be of rationally independent coordinates, let $\eta$ be an $M$-tuple in ${({\mathbb R}\cup\{\infty\})}^M$ and let ${\mathcal I}\subset {\mathbb K} [x^*,y]$ be an N-admissible ideal. An element $\phi\in \toro{\eta}$ is an $\omega$-solution for the ideal $\idinPol{\omega}{\eta}{\mathcal I}$ if and only if $\ordser{\omega}(\phi) = \eta$ and there exists $\varphi\in {{\sl S}_\omega}^M$, an $\omega$-solution for ${\mathcal I}$, such that $\inser{\omega}(\varphi) =\phi$.
\end{prop}
\begin{proof}
This is a direct consequence of Proposition \ref{las extensiones y sin extender} and Proposition \ref{Kapranow finito}.
\end{proof}
\section{The tropical variety.}\label{Tropicalization}
The tropical variety of a polynomial $f\in {\mathbb K} [x^*, y]$ is the $(N+M-1)$-skeleton of the normal fan of its Newton polyhedron. The tropical variety of an ideal ${{\mathcal I}}\subset {\mathbb K} [x^*, y]$ is the intersection of the tropical varieties of the elements of ${{\mathcal I}}$. More precisely, the {\bf tropical variety} of ${{\mathcal I}}$ is the set
\[
\tau ({{\mathcal I}}):= \{(\omega ,\eta )\in{\mathbb R}^N\times {({\mathbb R}\cup\{\infty\})}^M\mid \idinPol{\omega}{\eta}{{\mathcal I}}\cap {\mathbb K} [x^*,y_{\Lambda (\eta )}]\text{ does not have a monomial}\}.
\]
Tropical varieties have become an important tool for solving problems in algebraic geometry; see for example \cite{ItenbergMikhalkin:2007,Gathmann:2006,RichterSturmfels:2005}. In \cite{Bogart:2007,HeptTheobald:2007} algorithms to compute tropical varieties are described.
\begin{prop}\label{proposicion Tropicalizacion}
Let ${{\mathcal I}}$ be an ideal of ${\mathbb K} [x^*,y]$. Given $\eta \in {({\mathbb R}\cup\{\infty\})}^{M}$, the ideal $\idinPol{\omega}{\eta}{{\mathcal I}}$ has an $\omega$-solution in $\toro{\eta}$ if and only if $(\omega ,\eta)$ is in the tropical variety of ${{\mathcal I}}$.
\end{prop}
\begin{proof}
Suppose that $\varphi\in \toro{\eta}$ is an $\omega$-solution of $\idinPol{\omega}{\eta}{{\mathcal I}}$ and that $c x^\alpha y^\beta\in \idinPol{\omega}{\eta}{{\mathcal I}}\cap {\mathbb K} [x^*,y_{\Lambda (\eta )}]$.
We have $x^\alpha\varphi^\beta=0$ and then $\varphi_i=0$ for some $i\in\Lambda (\eta )$, which gives a contradiction.

Let ${\mathbb K} (x)$ denote the field of fractions of ${\mathbb K} [x]$ and let $\widetilde{{{\mathcal I}}}$ be the extension of $\idinPol{\omega}{\eta}{{\mathcal I}}$ to ${\mathbb K} (x)[y]$ via the natural inclusion $ {\mathbb K} [x,y]={\mathbb K} [x][y]\subset {\mathbb K} (x)[y]$. Since ${\sl S}_\omega$ contains the algebraic closure of ${\mathbb K} (x)$, the zeroes of $\idinPol{\omega}{\eta}{{\mathcal I}}$ are the algebraic zeroes of $\widetilde{{{\mathcal I}}}$. Suppose now that $\idinPol{\omega}{\eta}{{\mathcal I}}$ has no zeroes in $\toro{\eta}$. Then, by Remark \ref{Contenidos en cierre de toro},
\[
{\large \textsc{V}}\left(\idinPol{\omega}{\eta}{{\mathcal I}}\right)\subset \overline{\toro{\eta}}\setminus\toro{\eta}.
\]
Let $v$ be the only element of ${\{ 1\} }^{\Lambda (\eta )}\times {\{ 0\} }^{{\Lambda (\eta )}^{\rm C}}$. The monomial $y^v$ vanishes in all the algebraic zeroes of $\widetilde{{{\mathcal I}}}$. By the Nullstellensatz, there exists $k\in{\mathbb N}$ such that $y^{kv}$ belongs to $\widetilde{{{\mathcal I}}}$. Now $y^{kv}$ belongs to $\widetilde{{{\mathcal I}}}$ if and only if there exist $h_1,\dots ,h_r\in {\mathbb K} [x]\setminus\{ 0\}$ and $f_1,\dots ,f_r\in \idinPol{\omega}{\eta}{{\mathcal I}}$ such that
\[
y^{kv}=\sum_{i=1}^r \frac{1}{h_i}f_i \Rightarrow \left(\prod_{i=1}^r h_i(x)\right) y^{kv} = \sum_{i=1}^r \left(\prod_{j\neq i} h_j\right) f_i\in \idinPol{\omega}{\eta}{{\mathcal I}}.
\]
Then, by Lemma \ref{inicial del inicial}, $\inPol{\omega}{\eta} \left(\left(\prod_{i=1}^r h_i(x)\right) y^{kv}\right)\in\idinPol{\omega}{\eta}{{\mathcal I}}$ and $\inPol{\omega}{\eta} \left(\left(\prod_{i=1}^r h_i(x)\right) y^{kv}\right)= \inser{\omega}\left(\prod_{i=1}^r h_i(x)\right)y^{kv}$ is a monomial. This proves the result.
\end{proof}
As a direct consequence of Propositions \ref{primera traduccion del teorema} and \ref{proposicion Tropicalizacion} we obtain the extension of Point 1 of Newton-Puiseux's method.
\begin{thm}\label{Extension del punto uno}
Let ${\mathcal I}\subset {\mathbb K} [x^*,y]$ be an N-admissible ideal and let $\omega\in{\mathbb R}^N$ be of rationally independent coordinates. An $M$-tuple of monomials $\phi= (c_1x^{\alpha^{(1)}},\ldots ,c_M x^{\alpha^{(M)}})$ is the first term of an $\omega$-solution of ${\mathcal I}$ if and only if
\begin{itemize}
\item $(\omega ,\ordser\omega\phi)$ is in the tropical variety of ${\mathcal I}$.
\item $\phi$ is an $\omega$-solution of the ideal $\idinPol{\omega}{\ordser\omega\phi} {\mathcal I}$.
\end{itemize}
\end{thm}
These statements recall Kapranov's theorem. Kapranov's theorem was proved for hypersurfaces in \cite{EinsiedlerKapranov:2006} and the first published proof for an arbitrary ideal may be found in \cite{Draisma:2008}. There are several constructive proofs \cite{Payne:2009,Katz:2009} in the literature. Another proof of Proposition \ref{Kapranow finito} could probably be given by using Proposition \ref{las extensiones y sin extender}, showing that $(\omega ,\eta)\in \tau ({{\mathcal I}})$ if and only if $\eta$ is in the tropical variety of ${{\mathcal I}}^{\rm e}$, and checking each step of one of the constructive proofs.
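To illustrate Theorem \ref{Extension del punto uno} in the simplest possible situation we sketch the case of a plane curve; the computation below is only meant as an orientation and uses nothing beyond the definitions already introduced.
\begin{examp}
Take $N=M=1$, ${\mathcal I}=\left< y^2-x_1\right>\subset{\mathbb K} [x^*,y]$ and $\omega\in{\mathbb R}_{>0}$. For $\eta\in{\mathbb R}$ we have $\ordPol{\omega}{\eta}(y^2-x_1)=\min (2\eta ,\omega )$ and
\[
\inPol{\omega}{\eta}(y^2-x_1)=\left\{
\begin{array}{ll}
y^2 &\text{if}\quad 2\eta <\omega\\
y^2-x_1 &\text{if}\quad 2\eta =\omega\\
-x_1 &\text{if}\quad 2\eta >\omega .
\end{array}
\right.
\]
Since ${\mathcal I}$ is principal, $\idinPol{\omega}{\eta}{\mathcal I}=\left< \inPol{\omega}{\eta}(y^2-x_1)\right>$ for $\eta\in{\mathbb R}$, and this ideal contains a monomial unless $2\eta =\omega$ (for $\eta =\infty$ it contains the monomial $x_1$); hence, for fixed $\omega$, the only $\eta$ with $(\omega ,\eta )\in\tau ({\mathcal I})$ is $\eta =\frac{\omega}{2}$. For $\eta =\frac{\omega}{2}$, by Lemma \ref{Sistema de coeficiente}, the $\omega$-solutions of $\idinPol{\omega}{\eta}{\mathcal I}$ in $\toro{\eta}$ correspond to the solutions $c=\pm 1$ of $y^2-1=0$, that is, they are $\pm {x_1}^{\frac{1}{2}}$. These are exactly the first terms of the two $\omega$-solutions $y=\pm {x_1}^{\frac{1}{2}}$ of ${\mathcal I}$, of order $\frac{\omega}{2}$, in agreement with the theorem.
\end{examp}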
\section{$\omega$-set.}\label{omega-data} At this stage we need to introduce some more notation: Given a $M\times N$ matrix \[ \Gamma=\left(\!\!\begin{array}{ccc} \tiny\Gamma_{1,1} & \ldots & \tiny\Gamma_{1,N}\\ \vdots & & \vdots\\ \tiny\Gamma_{M,1} & \ldots & \tiny\Gamma_{M,N} \end{array}\!\!\right). \] The $i$-th row will be denoted by $\Gamma_{i,*}:= (\Gamma_{i,1},\ldots ,\Gamma_{i,N})$ and \[ x^\Gamma :=\left(\!\!\begin{array}{c} x^{\tiny\Gamma_{1,*}}\\ \vdots \\ x^{\tiny\Gamma_{M,*}} \end{array}\!\!\right). \] In particular, if $I\in {\mathcal M}_{N\times N}$ is the identity, then $x^{\frac{1}{k}I}=({x_1}^{\frac{1}{k}},\ldots ,{x_N}^{\frac{1}{k}})$. An $M$-tuple of monomials ${\mathfrak m}\in {{\mathbb K} ({x^{\frac{1}{K}I}})}^M$ can be written as an entrywise product \[ {\mathfrak m}=x^{\Gamma}c=\left(\begin{array}{c} c_1 x^{\Gamma_{1,*}}\\ \vdots\\ c_M x^{\Gamma_{M,*}} \end{array}\right). \] Given an $M$-tuple of monomials ${\mathfrak m}\in {{\mathbb K} ({x^{\frac{1}{K}I}})}^M$ the {\bf defining data of ${\mathfrak m}$} is the $3$-tuple \[ D({\mathfrak m})=\{\ordser{\omega}{\mathfrak m} ,\Gamma ,{\mathfrak m}(\underline{1})\} \] where $\Gamma\in {\mathcal M}_{M\times N}({\mathbb Q}\cup\{\infty\})$ is the unique matrix such that $\omega\cdot\Gamma^T=\ordser{\omega}{\bf m}$ and $\Gamma_{i,*}=\underline{\infty}$ for all $i\in {\Lambda (\ordser{\omega}{\bf m})}^{\rm C}$. \begin{examp} If $\omega = (1,\sqrt{2})$ and \[ {\mathfrak m} = \left(\begin{array}{c} 3{x_1}^{3}\\ 7{x_1}^{2}{x_2}\\ 0 \end{array}\right) \] then \[ D({\mathfrak m}) =\{ (3,2+\sqrt{2},\infty ),\left(\begin{array}{cc} 3 & 0\\ 2 & 1\\ \infty & \infty \end{array}\right), (3,7,0)\}. \] \end{examp} \begin{defin} An {\bf $\omega$-set} is a $3$-tuple $\{ \eta,\Gamma,c\}$ where \begin{equation}\label{Donde estan los elementos de un starting data} \eta\in (\mathbb{R}\cup\{\infty\})^M,\,\,\Gamma\in {\mathcal M}_{M\times N}(\mathbb{Q}\cup \{\infty\}),\, c\in {\mathbb K}^M \end{equation} and \begin{itemize} \item $\omega\cdot\Gamma^T=\eta$ \item $\Gamma_{i,*}=\underline{\infty}$ for all $i\in {\Lambda (\eta )}^{\rm C}$ \item $c\in {{\mathbb K}^*}^{\Lambda (\eta)}\times {\{ 0\}}^{{\Lambda (\eta)}^{\rm C}}$. \end{itemize} \end{defin} Given an $\omega$-set $D=\{\eta,\Gamma, c\}$ the {\bf M-tuple defined by $D$} is the M-tuple of monomials \[ {\mathfrak M}_D: = x^{\Gamma}c. \] We have \[ {\mathfrak M}_{\{\eta ,\Gamma ,c\}}(x^{rI})={\mathfrak M}_{\{r\eta ,r\Gamma ,c\}}(x). \] \begin{rem} ${\mathfrak m}={\mathfrak M}_{D({\mathfrak m})}$ and $D({\mathfrak M}_D)=D$. \end{rem} \section{Starting $\omega$-set for $\mathcal{I}$.}\label{omega-starting data} Given an N-admissible ideal ${{\mathcal I}}\subset {\mathbb K}[x^*,y]$. {\bf A starting $\omega$-set for ${{\mathcal I}}$} is an $\omega$-set $D=\{ \eta,\Gamma,c\}$ such that \begin{itemize} \item The vector $(\omega,\eta)$ is in the tropical variety of ${{\mathcal I}}$. \item $c$ is a zero of the system $\{f(\underline{1},y)=0\mid f\in \text{In}_{\omega,\eta}{\mathcal I}\}$. \end{itemize} \begin{examp} Let ${{\mathcal I}}=\left< x_1+y_1-y_2+y_1y_2+y_3,x_2-y_1+y_2+2y_1 y_2, y_3\right>$. For $\omega =(1,\sqrt{2})$ there are two possible starting $\omega$-sets \[ D1=\{ (1,1,\infty ),\left(\begin{array}{cc} 1 & 0\\ 1 & 0\\ \infty & \infty \end{array}\right), (1,1,0)\} \] and \[ D2=\{ (0,0,\infty ),\left(\begin{array}{cc} 0 & 0\\ 0 & 0\\ \infty & \infty \end{array}\right), (\frac{1}{3},\frac{1}{5},0)\}. 
\]
and
\[
{\mathfrak M}_{D1}(x)=\left(\begin{array}{c}
x_1\\
x_1\\
0
\end{array}\right),\,
{\mathfrak M}_{D1} (x^{\frac{1}{3}I})=\left(\begin{array}{c}
{x_1}^{\frac{1}{3}}\\
{x_1}^{\frac{1}{3}}\\
0
\end{array}\right)\,\text{and}
\,{\mathfrak M}_{D2}(x)=\left(\begin{array}{c}
\frac{1}{3}\\
\frac{1}{5}\\
0
\end{array}\right).
\]
\end{examp}
\begin{prop}\label{Mupla asociada a data}
The $\omega$-set $D=\{\eta ,\Gamma ,c\}$ is a starting $\omega$-set for ${{\mathcal I}}$ if and only if ${\mathfrak M}_D$ is an $\omega$-solution of $\idinPol{\omega}{\eta} {{\mathcal I}}$. Moreover all the $\omega$-solutions of $\idinPol{\omega}{\eta} {{\mathcal I}}$ in $\toro{\eta}$ are of the form ${\mathfrak M}_D$ where $D=\{\eta ,\Gamma ,c\}$ is a starting $\omega$-set for ${{\mathcal I}}$.
\end{prop}
\begin{proof}
That ${\mathfrak M}_D$ is an $\omega$-solution of $\idinPol{\omega}{\eta} {{\mathcal I}}$ when $D$ is a starting $\omega$-set is a direct consequence of Lemma \ref{Sistema de coeficiente}. The other implication is a consequence of Proposition \ref{proposicion Tropicalizacion} and Lemma \ref{Sistema de coeficiente}. The last sentence follows from Corollary \ref{Los ceros del inicial son monomios de orden eta}.
\end{proof}
\section{The ideal ${{\mathcal I}}_D$.}\label{The ideal ID}
Given a matrix $\Gamma\in {\mathcal M}_{M\times N}({\mathbb Q}\cup\{\infty\})$, the least common multiple of the denominators of its entries will be denoted by ${\bf d}\Gamma$. That is,
\[
{\bf d}\Gamma := \min\{ k\in{\mathbb N}\mid k\Gamma\in {\mathcal M}_{M\times N}(\mathbb{Z}\cup\{\infty\})\}.
\]
Given an $\omega$-set $D=\{\eta,\Gamma, c\}$, we will denote by ${{\mathcal I}}_D$ the ideal in $\mathbb{K}[x^*,y]$ given by
\[
{{\mathcal I}}_D:= \left<\{f(x^{{\bf d}\Gamma I},y+{\mathfrak M}_D(x^{{\bf d}\Gamma I}))\mid f\in \mathcal{I}\}\right>\subset{\mathbb K} [x^*,y].
\]
\begin{rem}\label{ceros de I y de ID}
A series $\phi\in{\sl S}_\omega^M$ is an $\omega$-solution of ${{\mathcal I}}$ if and only if the series $\tilde{\phi}:= \phi(x^{{\bf d}\Gamma I})- {\mathfrak M}_D(x^{{\bf d}\Gamma I})$ is an $\omega$-solution of ${{\mathcal I}}_D$.
\end{rem}
\begin{prop}\label{En la tropicalizacion hay uno de pendiente mayor}
Let $D=\{\eta,\Gamma, c\}$ be a starting $\omega$-set for an ideal ${\mathcal I}$. There exists $\tilde{\eta}\in(\mathbb{R}\cup\{\infty\})^M$ such that $(\omega,\tilde{\eta})\in\tau ({\mathcal I}_D)$ and $\tilde{\eta}_{\Lambda (\tilde{\eta})}>{\bf d}\Gamma\eta_{\Lambda (\tilde{\eta})}$ coordinate-wise.
\end{prop}
\begin{proof}
By Proposition \ref{Kapranow finito} and Proposition \ref{Mupla asociada a data}, ${\mathfrak M}_D$ is the first term of at least one $\omega$-solution of $\mathcal{I}$. Say
\[
\phi={\mathfrak M}_D+\tilde{\phi}\in{\large \textsc{V}}({\mathcal I}),\quad \tilde{\phi}=
\left(\begin{array}{c}
\tilde{\phi}_1\\
\vdots\\
\tilde{\phi}_M
\end{array}\right)\in {\sl S}_\omega^M,
\]
with $\ordser{\omega}(\tilde{\phi}_i)> \omega\cdot\Gamma_{i,*}=\eta_i $ when $\tilde{\phi}_i\neq 0$. Set
\[
\tilde{\eta}:={\bf d}\Gamma\,\ordser{\omega}(\tilde{\phi})\stackrel{\text{Remark \ref{propiedades de valser e inser}, \ref{orden y ramificacion}}}{=}\ordser{\omega}\left(\tilde{\phi}(x^{{\bf d}\Gamma I})\right) .
\]
Then $\tilde{\eta}_i>{\bf d}\Gamma\eta_i$ for all $i\in\Lambda (\tilde{\eta})$. By Remark \ref{ceros de I y de ID}, $\tilde{\phi}(x^{{\bf d}\Gamma I})$ is an $\omega$-solution of ${\mathcal I}_D$.
Then, by Proposition \ref{Kapranow finito}, $\inser{\omega}\left(\tilde{\phi}(x^{{\bf d}\Gamma I})\right)$ is an $\omega$-solution of $\idinPol{\omega}{\tilde{\eta}}{\mathcal I}_D$. Finally, by Proposition \ref{proposicion Tropicalizacion}, $(\omega ,\tilde{\eta})$ is in the tropical variety of ${\mathcal I}_D$.
\end{proof}
\begin{prop}\label{Hay uno de pendiente mayor}
Let $D=\{\eta,\Gamma, c\}$ be a starting $\omega$-set for an ideal ${\mathcal I}$. There exists a starting $\omega$-set $D'=\{\eta',\Gamma', c'\}$ for ${\mathcal I}_D$ such that ${\eta'}_{\Lambda (\eta')}>{\bf d}\Gamma\eta_{\Lambda (\eta')}$ coordinate-wise.
\end{prop}
\begin{proof}
By Proposition \ref{En la tropicalizacion hay uno de pendiente mayor}, there exists $\eta'\in {({\mathbb R}\cup\{\infty\})}^M$ such that ${\eta'}_{\Lambda (\eta')}>{\bf d}\Gamma\eta_{\Lambda (\eta')}$ coordinate-wise and $(\omega ,\eta')\in\tau ({\mathcal I}_D)$. By Proposition \ref{proposicion Tropicalizacion}, the ideal $\idinPol{\omega}{\eta'}{\mathcal I}_D$ has an $\omega$-solution $\phi$ in $\toro{\eta'}$. By Proposition \ref{Mupla asociada a data}, $\phi={\mathfrak M}_{D'}$ where $D'=\{\eta',\Gamma',c'\}$ is a starting $\omega$-set for ${\mathcal I}_D$.
\end{proof}
\section{$\omega$-sequences.}\label{omega-sequences}
Given an $M$-tuple $\phi\in{\sl S}_\omega^M$ define inductively $\{\phi^{(i)}\}_{i=0}^\infty$ and $\{ D^{(i)}\}_{i=0}^\infty$ by:
\begin{itemize}
\item{For $i=0$:}
\begin{itemize}
\item $\phi^{(0)}:=\phi$
\item $D^{(0)}$ is the defining data of $\inser{\omega}\phi$.
\end{itemize}
\item{For $i>0$:}
\begin{itemize}
\item $\phi^{(i)}:=\phi^{(i-1)}(x^{{\bf d}\Gamma^{(i-1)} I})-{\mathfrak M}_{D^{(i-1)}}(x^{{\bf d}\Gamma^{(i-1)}I})$
\item $D^{(i)}$ is the defining data of the $M$-tuple of monomials $\inser{\omega}\phi^{(i)}$. ($D^{(i)}:=D(\inser{\omega}\phi^{(i)})$).
\end{itemize}
\end{itemize}
The sequence above
\[
{\bf seq} (\phi):= \{D^{(i)}\}_{i=0}^\infty
\]
will be called {\bf the defining data sequence for $\phi$}.
\begin{rem}\label{Los eta crecen en el data asociado a serie}
For any $\phi\in{\sl S}_\omega^M$, if ${\bf seq} (\phi)= \{\eta^{(i)},\Gamma^{(i)},c^{(i)}\}_{i=0}^\infty$ then ${\eta^{(i)}}_{\Lambda (\eta^{(i)})}>{\bf d}\Gamma^{(i-1)}{\eta^{(i-1)}}_{\Lambda (\eta^{(i)})}$ coordinate-wise for all $i\geq 1$.
\end{rem}
Given a sequence $S=\{D^{(i)}\}_{i=0\ldots K}=\{\eta^{(i)},\Gamma^{(i)}, c^{(i)}\}_{i=0\ldots K}$, with $K\in{\mathbb Z}_{\geq 0}\cup\{\infty\}$, set
\begin{itemize}
\item ${\mathcal I}^{(0)}={\mathcal I}$
\item ${\mathcal I}^{(i)}={\mathcal I}^{(i-1)}_{D^{(i-1)}}$ for $i\in\{1,\ldots ,K\}$.
\end{itemize}
$S$ is called an {\bf $\omega$-sequence for ${\mathcal I}$} if and only if
\begin{itemize}
\item $D^{(i)}=\{\eta^{(i)},\Gamma^{(i)}, c^{(i)}\}$ is a starting $\omega$-set for ${\mathcal I}^{(i)}$ for $i\in\{0,\ldots ,K\}$,
\item ${\eta^{(i)}}_{\Lambda (\eta^{(i)})}>{\bf d}\Gamma^{(i-1)}{\eta^{(i-1)}}_{\Lambda (\eta^{(i)})}$ coordinate-wise for $i\in\{1,\ldots ,K\}$.
\end{itemize}
As a corollary to Proposition \ref{Hay uno de pendiente mayor} we have:
\begin{cor}
Let $\{D^{(i)}\}_{i=0\ldots K}$ be an $\omega$-sequence for ${\mathcal I}$. For any $K'\in\{ K+1,\ldots ,\infty\}$ there exists a sequence $\{D^{(i)}\}_{i=K+1\ldots K'}$ such that $\{D^{(i)}\}_{i=0\ldots K'}$ is an $\omega$-sequence for ${\mathcal I}$.
\end{cor}
\begin{prop}\label{ceros dan w-secuencia}
If $\phi$ is an $\omega$-solution of ${\mathcal I}$ then ${\bf seq} (\phi )$ is an $\omega$-sequence for ${\mathcal I}$.
\end{prop}
\begin{proof}
This is a direct consequence of Remarks \ref{Los eta crecen en el data asociado a serie} and \ref{ceros de I y de ID}.
\end{proof}
\section{The solutions.}\label{The solutions}
Given an $\omega$-sequence $S=\{ D^{(i)}\}_{i=0\ldots K}$ for ${\mathcal I}$, with $D^{(i)}=\{\eta^{(i)},\Gamma^{(i)},c^{(i)}\}$, set
\begin{equation}\label{sucesion de ramificaciones}
r^{(0)}:=1\quad\text{and}\quad r^{(i)}:=\frac{1}{\prod_{j=0}^{i-1}{\bf d}\Gamma^{(j)}}\quad\text{for}\quad i>0.
\end{equation}
The {\bf series defined by $S$} is the series
\[
{\bf ser} (S):= \sum_{i=0}^K {\mathfrak M}_{D^{(i)}} (x^{r^{(i)}I}).
\]
The following theorem is the extension of Point \ref{tres} of Newton-Puiseux's method:
\begin{thm}\label{ultimo teorema} \label{data da ceros}
If $S=\{D^{(i)}\}_{i=0}^\infty$ is an $\omega$-sequence for ${\mathcal I}$ then ${\bf ser} (S)$ is an $\omega$-solution of ${\mathcal I}$.
\end{thm}
\begin{proof}
Let $S=\{ D^{(i)}\}_{i=0}^\infty$ be an $\omega$-sequence for ${\mathcal I}$, where $D^{(i)}=\{\eta^{(i)},\Gamma^{(i)},c^{(i)}\}$. Let $\{ {\mathcal I}^{(i)}\}_{i=0}^\infty$ be defined by ${\mathcal I}^{(0)}:={\mathcal I}$ and ${\mathcal I}^{(i)}:={\mathcal I}^{(i-1)}_{D^{(i-1)}}$. We have that $D^{(i)}$ is a starting $\omega$-set for ${\mathcal I}^{(i)}$. By Proposition \ref{proposicion Tropicalizacion}, for each $i\in{\mathbb N}$ there exists $\phi^{(i)}\in{\sl S}_\omega^M$ such that $\ordser{\omega}\phi^{(i)}=\eta^{(i)}$ and $\phi^{(i)}\in{\large \textsc{V}} \left( {\mathcal I}^{(i)}\right)$. Set $\{r^{(i)}\}_{i=0}^\infty$ as in (\ref{sucesion de ramificaciones}) and $\tilde{\phi^{(i)}}:= {\bf ser} (\{D^{(j)}\}_{j=0}^{i-1})+\phi^{(i)}(x^{r^{(i)} I})$. For each $i\in{\mathbb N}$, by Remark \ref{ceros de I y de ID}, $\tilde{\phi^{(i)}}\in{\large \textsc{V}} \left( {\mathcal I}\right)\subset{\sl S}_\omega^M$. Since ${\mathcal I}$ has only a finite number of zeroes in ${\sl S}_\omega^M$, there exists $K\in{\mathbb N}$ such that $\tilde{\phi^{(i)}}=\tilde{\phi^{(K)}}$ for all $i>K$. Then
\[
{\bf ser} (S)=\tilde{\phi^{(K)}}\in{\large \textsc{V}} \left({\mathcal I}\right)\subset{\sl S}_\omega^M.
\]
\end{proof}
Theorem \ref{ultimo teorema} together with Proposition \ref{ceros dan w-secuencia} gives:
\begin{cor} {\bf Answer to Question \ref{problema}.}\\
Let ${\mathcal I}\subset{\mathbb K}[x^*,y]$ be an N-admissible ideal and let $\omega\in{\mathbb R}^N$ be of rationally independent coordinates. The M-tuple of series defined by an $\omega$-sequence for ${\mathcal I}$ is an element of ${\sl S}_\omega^M$. An M-tuple of series $\phi\in{\sl S}_\omega^M$ is an $\omega$-solution of ${\mathcal I}$ if and only if $\phi$ is an M-tuple of series defined by an $\omega$-sequence for ${\mathcal I}$.
\end{cor}
\section{The tropical variety of a quasi-ordinary singularity.}
Let $(V,\underline{0})$ be a singular $N$-dimensional germ of algebraic variety. $(V,\underline{0})$ is said to be {\bf quasi-ordinary} when it admits a projection $\pi: (V,\underline{0})\longrightarrow ({\mathbb K}^N,\underline{0})$ whose discriminant is contained in the coordinate hyperplanes. Such a projection is called a {\bf quasi-ordinary projection}. Quasi-ordinary singularities admit analytic local parametrizations. This was shown for hypersurfaces by S. Abhyankar \cite{Abhyankar:1955} and extended to arbitrary codimension in \cite{FAroca:2004}. Quasi-ordinary singularities have been the object of study of many research papers; see, for example, \cite{ArocaSnoussi:2005,Gau:1988,Tornero:2001,Popescu-Pampu:2003}.
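Before stating the result, the simplest example, the Whitney umbrella, may help to fix ideas; we only sketch the computation, which uses nothing beyond the notation introduced above, and it illustrates the uniqueness asserted in the corollary below.
\begin{examp}
Take $N=2$, $M=1$ and $V={\large \textsf{V}}\left( y^2-x_1{x_2}^2\right)\subset{\mathbb C}^3$. The discriminant of $y^2-x_1{x_2}^2$ with respect to $y$ is $4x_1{x_2}^2$, so the projection $\pi (x_1,x_2,y)=(x_1,x_2)$ is quasi-ordinary, and $(t_1^2,t_2^2,t_1{t_2}^2)$ is a parametrization of $V$ about the origin (here $k=2$ and $\varphi_1(\underline{t})=t_1{t_2}^2$). For any $\omega =(\omega_1,\omega_2)\in{{\mathbb R}_{>0}}^2$ of rationally independent coordinates, the $\omega$-solutions of ${\mathcal I}=\left< y^2-x_1{x_2}^2\right>$ are $y=\pm {x_1}^{\frac{1}{2}}x_2$, both of order $\frac{\omega_1}{2}+\omega_2$; accordingly, $e=\frac{\omega_1}{2}+\omega_2$ is the only value for which $(\omega ,e)$ lies in the tropical variety of ${\mathcal I}$.
\end{examp}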
\begin{cor}
Let $V$ be an $N$-dimensional algebraic variety embedded in ${\mathbb C}^{N+M}$ with a quasi-ordinary analytically irreducible singularity at the origin. Let
\[
\begin{array}{cccc}
\pi: & V & \longrightarrow & {\mathbb C}^N\\
& (x_1,\ldots ,x_{N+M}) & \mapsto & (x_1,\ldots ,x_N)
\end{array}
\]
be a quasi-ordinary projection. Let ${\mathcal I}\subset{\mathbb K} [x_1,\ldots ,x_{N+M}]$ be the defining ideal of $V$. Then for any $\omega\in {{\mathbb R}_{>0}}^N$ of rationally independent coordinates there exists a {\bf unique} $e\in {{\mathbb R}_{>0}}^M$ such that $(\omega ,e)$ is in the tropical variety $\tau ({\mathcal I})$.
\end{cor}
\begin{proof}
Since $\pi$ is quasi-ordinary there exist $k\in{\mathbb N}$ and an $M$-tuple of analytic series in $N$ variables $\varphi_1,\ldots ,\varphi_M$ such that $(t_1^k,\ldots ,t_N^k,\varphi_1(\underline{t}),\ldots ,\varphi_M(\underline{t}))$ are parametric equations of ${\large \textsf{V}} ({\mathcal I})$ about the origin \cite{FAroca:2004}. For any $\omega\in {{\mathbb R}_{>0}}^N$ of rationally independent coordinates, the first orthant is $\omega$-positive and then $(\varphi_1,\ldots ,\varphi_M)$ is an element of ${{\sl S}_\omega}^M$. Set $\eta_\omega:=\ordser\omega (\varphi_1,\ldots ,\varphi_M)$. The $M$-tuple of Puiseux series (with positive exponents) $y:=(\varphi_1({x_1}^{\frac{1}{k}},\ldots ,{x_N}^{\frac{1}{k}}),\ldots ,\varphi_M({x_1}^{\frac{1}{k}},\ldots ,{x_N}^{\frac{1}{k}}))$ is an $\omega$-solution of ${\mathcal I}$ with $\ordser\omega (y) =\frac{1}{k}\eta_\omega$. By Theorem \ref{Extension del punto uno}, $(\omega ,\frac{1}{k}\eta_\omega)$ is in the tropical variety of ${\mathcal I}$.

Now we will show that $(\omega ,e)\in \tau ({\mathcal I})$ implies $e=\frac{1}{k}\eta_\omega$. Let $U\subset{\mathbb C}^N$ be the common domain of convergence of $\varphi_1,\ldots ,\varphi_M$; it contains a neighborhood of the origin. Set $\Phi$ to be the map
\[
\begin{array}{cccc}
\Phi: & U & \longrightarrow & {\mathbb C}^{N+M}\\
& (t_1,\ldots ,t_N) & \mapsto & (t_1^k,\ldots ,t_N^k,\varphi_1(\underline{t}),\ldots ,\varphi_M(\underline{t})).
\end{array}
\]
Since the singularity is analytically irreducible, this parametrization covers a neighborhood of the singularity at the origin, and we have $\Phi (U)= \pi^{-1} (U)\cap V$. Let $y_1,\ldots ,y_M$ be Puiseux series with exponents in some $\omega$-positive rational cone $\sigma$ (with $\ordser{\omega} \underline{y}=e\in{{\mathbb R}_{> 0}}^M$) such that $f(x_1,\ldots ,x_N, y_1(\underline{x}),\ldots ,y_M(\underline{x}))=0$ for all $f\in {\mathcal I}$. Take $k'$ such that $\psi_j(\underline{t}):= y_j({t_1}^{k'},\ldots ,{t_N}^{k'})$ is a series with integer exponents for all $j=1,\ldots ,M$ (this is possible since the $y_j$ are Puiseux series). There exists an open set $W\subset{\mathbb C}^N$, having the origin as accumulation point, where all the series $\psi_j$ converge. Set $\Psi$ to be the map
\[
\begin{array}{cccc}
\Psi : & W & \longrightarrow & {\mathbb C}^{N+M}\\
& (t_1,\ldots ,t_N) & \mapsto & ({t_1}^{k'},\ldots ,{t_N}^{k'}, \psi_1(\underline{t}),\ldots ,\psi_M(\underline{t})).
\end{array}
\]
Since $U$ is a neighborhood of the origin, $W':= U\cap W\neq \emptyset$. Ramifying again, if necessary, we may suppose that $k=k'$. Take $\underline{t}\in W'$. There exists $\underline{t}'\in U$ such that $\Psi (\underline{t}) = \Phi (\underline{t}')$.
Then $\pi\circ\Psi (\underline{t}) = \pi\circ\Phi (\underline{t}')$, and then $({t_1}^k,\ldots ,{t_N}^k)=({t'_1}^k,\ldots ,{t'_N}^k).$ There exists an $N$-tuple of $k$-th roots of unity $\xi_1,\ldots ,\xi_N$ such that $(t_1,\ldots ,t_N)=(\xi_1 t'_1,\ldots ,\xi_N t'_N)$. By continuity this $N$-tuple is the same for all $\underline{t}\in W'$. We have $\Psi (t_1,\ldots ,t_N)=\Phi (\xi_1 t_1,\ldots ,\xi_N t_N)$ on $W'$. Since $W'$ contains an open set, we have the equality of series $\psi_j (t_1,\ldots ,t_N)=\varphi_j (\xi_1 t_1,\ldots ,\xi_N t_N)$. Then $(y_1,\ldots ,y_M)= (\varphi_1 (\xi_1 {x_1}^{\frac{1}{k}},\ldots ,\xi_N {x_N}^{\frac{1}{k}}),\ldots ,\varphi_M (\xi_1 {x_1}^{\frac{1}{k}},\ldots ,\xi_N {x_N}^{\frac{1}{k}}))$ for some $N$-tuple of roots of unity $(\xi_1,\ldots ,\xi_N)$ and then
\[
\ordser{\omega} (y_1,\ldots ,y_M) =\frac{1}{k} \ordser{\omega} (\varphi_1,\ldots ,\varphi_M).
\]
The conclusion follows from Theorem \ref{Extension del punto uno}.
\end{proof}
\section*{Closing remarks.}
\label{Closing remarks section}
In the literature there are many results relating the Newton polyhedron of a hypersurface to invariants of its singularities; see for example \cite{Kouchirenko:1976}. To extend this type of theorem to arbitrary codimension, the usual approach has been to work with the Newton polyhedra of a system of generators (see for example \cite{Oka:1997}). The results presented here suggest that better results may be obtained by using the notion of tropical variety.

Both the Newton-Puiseux and McDonald algorithms have been extended, to ordinary differential equations \cite{Fine:1889} and to partial differential equations \cite{FArocaJCano:2001,FArocaJCanoFJung:2003}, respectively. The algorithm presented here can be extended to systems of partial differential equations; a first step in this direction can be found in \cite{FAroca:2009}.
\section*{Acknowledgments}
The first and the third author were partially supported by CONACyT 55084 and UNAM: PAPIIT IN 105806 and IN 102307.\\
During the preparation of this work the third author was supported by the post-doctoral fellowship CONACyT 37035.
\def$'${$'$}
\end{document}
\begin{document} \title{Fault-tolerant linear optical quantum computing with small-amplitude coherent states} \author{A. P. Lund} \email{[email protected]} \author{T. C. Ralph} \affiliation{Centre for Quantum Computer Technology, Department of Physics, University of Queensland, St. Lucia, QLD 4072, Australia } \author{H. L. Haselgrove} \affiliation{C3I Division, Defence Science and Technology Organisation, Canberra, ACT 2600, Australia} \affiliation{School of Information Technology and Electrical Engineering, University of New South Wales at ADFA, Canberra 2600 Australia} \begin{abstract} Quantum computing using two optical coherent states as qubit basis states has been suggested as an interesting alternative to single photon optical quantum computing with lower physical resource overheads. These proposals have been questioned as a practical way of performing quantum computing in the short term due to the requirement of generating fragile diagonal states with large coherent amplitudes. Here we show that by using a fault-tolerant error correction scheme, one need only use relatively small coherent state amplitudes ($\alpha > 1.2$) to achieve universal quantum computing. We study the effects of small coherent state amplitude and photon loss on fault tolerance within the error correction scheme using a Monte Carlo simulation and show the quantity of resources used for the first level of encoding is orders of magnitude lower than the best known single photon scheme. \end{abstract} \maketitle Linear optical quantum computing uses off-line resource states, linear optical processing and photon resolving detection to implement universal quantum processing on optical quantum bits (qubits) \cite{KOK}. This technique avoids a number of serious problems associated with the use of in-line non-linearities for quantum processing including their limited strength, loss, and inevitable distortions of mode shape by the non-linear interaction. The trade-off for adopting the linear approach has been large overheads in resource states and operations. In the standard approach, which we will refer to as LOQC \cite{KLM}, single photons are used as the physical qubits. Although progress has been made in reducing the overheads \cite{var}, for fault-tolerant operation they remain very high~\cite{DHN}. An alternative version of linear optical quantum computing, coherent state quantum computing (CSQC)~\cite{ralph:catcomputing}, uses coherent states for the qubit basis. This is an unusual approach as the computational basis states are not energy eigenstates and are only approximately orthogonal. Previous work on CSQC has concentrated on the regime where coherent states are relatively large ($\alpha > 2$) and the orthogonality is practically zero. It has been shown that CSQC has resource-efficient gates~\cite{JEO}. In this letter we show how to build non-deterministic CSQC gates for arbitrary amplitude coherent states that are overhead-efficient and (for $\alpha > 1.2$) can be used for fault-tolerant quantum computation. We estimate the fault-tolerant threshold for a situation in which photon loss and gate non-determinism are the dominant sources of error. As our gates operate for any amplitude coherent states, proof of principle experiments are possible using even smaller amplitudes. Given recent experimental progress in generating the required diagonal resource states \cite{var2} we suggest that CSQC should be considered a serious contender for optical quantum processing. 
For this paper we will use the CSQC qubit basis $ \ket{0}=\ket{\alpha},\ket{1}=\ket{-\alpha} $ where $\ket{\alpha}$ describes a coherent state with (real) amplitude $\alpha$ (i.e. $\hat{a}\ket{\alpha} = \alpha \ket{\alpha}$). These states do not define a standard qubit basis for all $\alpha$ as $\braket{-\alpha | \alpha} = e^{-2\alpha^2} \neq 0$, but for $\alpha > 2$ this overlap is practically zero~\cite{ralph:catcomputing}. A general CSQC single-qubit state is \begin{equation} \label{general-qubit} N_{\mu,\nu} (\alpha) \left( \mu \ket{\alpha} + \nu \ket{-\alpha} \right), \end{equation} where $N_{\mu,\nu}(\alpha)$ normalises the state and depends on the coefficients of the state. A special case is the diagonal states with $\mu = \pm \nu$ which can be written as $\ket{\pm} = N_{1,\pm 1} (\alpha) \left( \ket{\alpha} \pm \ket{-\alpha} \right)$. These states form the resource used when constructing CSQC gates using linear optics and photon detection. The diagonal state with a plus (resp. minus) sign has even (odd) symmetry and only contains even (odd) Fock states. This means that a diagonal (i.e. $X$-basis) measurement can be performed by a photon counter and observing the parity. The computational or $Z$-basis measurement is shown in FIG.~\ref{meas}(a) and the Bell state measurement is shown in FIG.~\ref{meas}(b). The $Z$-basis and Bell state measurements must distinguish between non-orthogonal states. For the measurement to be unambiguous and error free it must have a failure outcome~\cite{USD}. This occurs in both measurements when no photons are detected. The probability of failure tends to zero as $\alpha$ increases. \begin{figure} \caption{Schematics for unambiguous CSQC (a) $Z$-basis and (b) Bell state measurements and (c) CSQC teleportation. Thin lines represent modes whose state is a CSQC qubit with the encoding amplitude shown near each line. The $Z$-basis measurement in (a) as described in~\cite{jeong:catcomputing} is performed by determining which mode photons are present in. The Bell state measurement in (b) as described in~\cite{ralph:catcomputing} is performed by determining which mode photons are in and how many photons are present. Both these measurements fail when no photons are present. (c) shows how CSQC teleportation~\cite{bennett:teleportation,ralph:catcomputing} is achieved. A Bell state is generated by splitting a $\beta = \sqrt{2} \alpha$ diagonal state on a beam-splitter and performing a Bell state measurement on an unknown qubit and one half of this entanglement. All detectors are photon counters, all beam-splitters are 50:50, and all unlabelled inputs are arbitrary CSQC qubit states.} \label{meas} \end{figure} A critical part of constructing CSQC gates for all $\alpha$ is teleportation~\cite{bennett:teleportation,ralph:catcomputing}. This is shown in FIG.~\ref{meas}(c). As the teleporter uses unambiguous Bell state measurements there are 5 outcomes to the measurement. Four outcomes correspond to successfully identifying the respective Bell states. When the appropriate Pauli corrections are made the input qubit is successfully transferred to the output. The fifth outcome corresponds to the measurement failure whose probability again decreases to zero as $\alpha$ increases. Upon failure the output of the teleporter is unrelated to the input and hence the qubit is erased. It is this ability to unambiguously teleport the qubit value, in spite of the fact that the basis states are non-orthogonal, that is key to the success of our scheme. 
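The overlap quoted above also sets the scale of these failure probabilities: any measurement that unambiguously distinguishes two equally likely pure states must fail with probability at least equal to their overlap~\cite{USD}, so for the qubit basis states considered here
\[
P_{\rm fail}\geq |\braket{-\alpha|\alpha}| = e^{-2\alpha^2}\approx\left\{
\begin{array}{ll}
5.6\times 10^{-2}, & \alpha =1.2\\
1.1\times 10^{-2}, & \alpha =1.5\\
3.4\times 10^{-4}, & \alpha =2.
\end{array}
\right.
\]
These rough numbers (which are independent of the particular circuits in FIG.~\ref{meas}) show that at the small amplitudes of interest in this letter the failure outcomes are rare but not negligible, which is why they are treated explicitly as erasure (located) errors in the fault-tolerance analysis below.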
Unitary transformations on a CSQC qubit as defined in Equation~(\ref{general-qubit}) will not reach all transformations required to do quantum computing. This is because unitary transformations preserve inner products while various transformations that we might wish to implement (e.g $\ket{\pm \alpha} \rightarrow \ket{\alpha}\pm\ket{-\alpha}$) do not. We implement our gates using non-unitary, measurement-induced gates which act like unitary gates on the {\em coefficients} of our CSQC qubits for all $\alpha$. This requires gates which have in general a non-zero probability of failure. We will construct a universal set of gates based on~\cite{ralph:catcomputing} but applicable for all $\alpha$, that allows us to implement error correction in a standard way. Our objective is to use the error correction to deal with gate failure errors. We will choose our universal set of quantum gates as a Pauli $X$ gate, an arbitrary $Z$ rotation (i.e. $Z(\theta) = e^{i \frac{\theta}{2} Z}$), a Hadamard gate and a controlled-Z gate. Each gate acts on the coefficients of the coherent state qubits as they would on orthogonal qubits. In CSQC the $X$ gate is the only gate deterministic for all $\alpha$. The gate is performed by introducing a $\pi$ phase shift on the qubit~\cite{ralph:catcomputing}. The remainder of the gates are implemented via quantum gate teleportation~\cite{gateteleportation}. Just as we are able to implement unambiguous state teleportation, we are able to implement unambiguous gate teleportation. The gates are implemented by altering the form of the entanglement used in the teleporter. The $Z$ rotation is achieved by using the entanglement $ e^{i\theta}\ket{\alpha,\alpha} + e^{-i \theta}\ket{-\alpha,-\alpha}, $ the Hadamard gate uses the entanglement $ \ket{\alpha,\alpha} + \ket{\alpha, -\alpha} + \ket{-\alpha,\alpha} - \ket{-\alpha,-\alpha}, $ and the controlled-$Z$ uses the four qubit entanglement \begin{eqnarray} \lefteqn{\ket{\alpha,\alpha,\alpha,\alpha} +\ket{\alpha,\alpha,-\alpha,-\alpha}} \nonumber \\ & & +\ket{-\alpha,-\alpha,\alpha,\alpha} -\ket{-\alpha,-\alpha,-\alpha,-\alpha}, \label{cz-ent} \end{eqnarray} which is used as the shared entanglement of two teleporters. The controlled-$Z$ entanglement can be generated from the Hadamard entanglement with coherent state amplitude $\sqrt{2}\alpha$ by splitting the outputs at 50:50 beam-splitters. The procedures to generate the Hadamard and $Z$-rotation entanglement are shown in FIG.~\ref{gates}. \begin{figure} \caption{Schematics for gate entanglement generation. These diagrams have the same layout as those in FIG.~\ref{meas}. (a) shows $Z$ rotation entanglement preparation. A $\ket{+}$ state with amplitude $\gamma = \sqrt{\alpha^2 + \beta^2 + 1/2}$ is split at a three way beam-splitter generating the state $\ket{\alpha^\prime,\alpha,\beta} + \ket{-\alpha^\prime,-\alpha,-\beta}$ where $\alpha^\prime = 1/\sqrt{2}$ and $\beta=\alpha$ for the rotation. The $\alpha^\prime$ mode is mixed at a beam-splitter with reflectivity $\cos \theta$ with a coherent state of equal amplitude. The two output modes are then detected and the output is accepted if one photon is measured in total (occurring approx. 1:3 times). (b) shows the Hadamard entanglement preparation. Two copies of the entanglement from (a) are used but with different angles $\theta$ and $\theta^\prime$ and one output mode with coherent state amplitude $\beta = \sqrt{1/2}$. 
Next, one $\beta$ mode from each state are combined at a beam-splitter with reflectivity $\cos \delta$ and the output modes are detected. The generation succeeds when only one photon is detected in total. If we choose the rotation angles as $\theta = 3\pi/4, \theta^\prime = \pi/4, \delta = \pi/4$ and perform an $X$ correction on one of the modes the desired entanglement is produced. On average this procedure succeeds approx. 1:27 times. } \label{gates} \end{figure} Depending on the outcome of the Bell state measurement in a teleported gate, it may be necessary to apply an $X$ and/or $Z$ Pauli operator to the output. In this paper, we assume that these Pauli operators are not applied directly, but rather absorbed into the error-correction process via the {\em Pauli frame} technique \cite{noisyknill}. If the outcome of the Bell state measurement is failure, then we say the gate failed and the qubit on which it acted upon is erased. In calculating a noise threshold for CSQC it is necessary to establish a model for the noise experienced by each operation (i.e. gates, measurements, and preparations). This model is expressed in terms of two parameters: the qubit amplitude $\alpha$, and a loss parameter $\eta$ (see below). We use this model to simulate concatenated fault-tolerant error-correction protocols. A particular setting of the parameters $(\alpha,\eta)$ is said to be {\em below the threshold} if the rate of uncorrectable errors is observed to decrease to zero as more levels of error correction are applied. Here we calculate the {\em threshold curve}, defined to be the curve through the $\alpha$-$\eta$ plane which lies at the boundary between the sets of parameters that are above and below the threshold. An important feature of our noise model is the inclusion of two types of error: {\em unlocated} and {\em located} errors. A located error occurs when a gate fails. The experimenter has knowledge about when and where these errors occur. Unlocated errors are caused by photon loss as these errors are not directly observable. Given that our noise model includes both unlocated and located errors, we use an error-correction protocol which has been designed to deal effectively with combinations of these two error types. We have chosen to utilise the ``circuit-based telecorrector protocol'' described in \cite{DHN}. This protocol uses error-location information during ancilla-preparation and syndrome-decoding routines, thus achieving a high tolerance to located noise, whilst achieving a tolerance to unlocated noise similar to that of standard protocols \cite{steane,reichardt}. In practice, other noise sources would be present. Two examples are mode mismatch and phase mismatch. These noise sources will generate additional unlocated and located errors in the teleported gates. The effect of these errors will be similar to those in our simplified noise model. Depending on their strength, they may have a significant effect on the noise threshold curve. We note that these errors are {\em systematic}, and in principle can be greatly reduced by using appropriate locking techniques. The probability of gate failure varies as a function of the input qubit state. For simplicity in the simulations, we apply the worst-case probability value, which corresponds to the input state $\ket{+}$. The maximum probability of failure (per qubit) for $Z$-basis measurements, and Clifford group operations~\cite{cliffordgroup} implemented by gate teleportation is equal to \begin{equation} q=\frac{2}{1+e^{2{\alpha^\prime}^2}}. 
\end{equation} In this equation $\alpha^\prime = (1-\eta) \alpha$ is an effective encoding amplitude which incorporates the effects of loss. In the case of the controlled-$Z$ gate, this failure probability applies independently to each of the two qubits. Upon a gate failure the input qubit is erased. For simplicity we model this effect by completely depolarising the qubit upon a located error occurring. We model photon loss by assuming that each optical component, each detector and each input coupling causes some fraction of the input intensity to be lost, and that this loss is equal for all modes. Due to the properties of a linear network with loss it is possible to assign one effective input coupling loss which incorporates all of this loss together. We also assume that the output of each gate includes the loss due to the detectors from the {\em next} gate or measurement. From this we can assign an effective input loss rate $\eta$ which combines the {\em detector, component and input} efficiencies together incorporating all these effects. The effect of loss on a CSQC qubit is to induce a random $Z$ operation and decrease the coherent state amplitude~\cite{glancy:catloss}. We assume that the decrease in amplitude is compensated by changing the amplitudes of the coherent states in the entanglement used for the teleported gates. The probability of $Z$ error on a diagonal CSQC state is \begin{equation} p=\frac{1}{2}(1+\frac{\sinh{(2\eta - 1)\alpha^2}}{\sinh{\alpha^2}}) \end{equation} where $\eta$ is the overall fractional loss as defined above. In the $Z$ rotation and the controlled-$Z$ gates, photon loss causes a $Z$ error on the output state. These are due to the loss in the diagonal states from the generation of the entanglement. In the Hadamard gate, there are two diagonal states required and a loss in one induces a $X$ error on the output and a loss on the other induces a $Z$ error on the output (these errors are uncorrelated). In our analysis we consider two noise models which are summarised in TABLE~\ref{model}. \begin{table} \caption{\label{model}Error rates for the models used to calculate the threshold curve for CSQC. The coefficients in the $H$-gate and C-$Z$ gate arise from the larger $\alpha$ required for generating the entanglement and are worse case. Two models for qubit storage are considered as shown in the row labelled ``Memory''. In one model we consider no noise in the operations that store CSQC qubits and the second we introduce photon loss into these operations at the same rate as introduced by the gates.} \begin{ruledtabular} \begin{tabular}{rccc} & Loc. errors & Unloc. $X$ error & Unloc. $Z$ error \\ \hline Memory & $0$ & $0$ & $p$ or $0$ \\ H-gate & $q$ & $1.6 p$ & $1.6 p$ \\ C-Z gate & $q$ & $0$ & $2.5 p$ \\ $\ket{+}$ & $0$ & $0$ & $p$ \\ X-meas & $0$ & $0$ & $0$ \end{tabular} \end{ruledtabular} \end{table} We are considering here an error-correction protocol which consists of several levels of concatenation. The noise model in TABLE~\ref{model} applies only to the lowest level of concatenation (that is, to error-correction circuits that are built using unencoded ``physical'' gates). For all higher levels of concatenation, we assume a noise model identical to that considered in~\cite{DHN} for the ``circuit-based telecorrection protocol'', since the arguments used to derive that noise model are applicable to our situation. 
Thus, our noise model and error-correction protocol are identical to those of~\cite{DHN} for concatenation levels 2 and higher, and so we do not perform new simulations for these concatenation levels. Instead, we directly utilise the best-fit polynomials that were obtained in~\cite{DHN}, in order to model the mapping between noise rates and effective noise rates for all concatenation levels other than the first. For the first level of concatenation, we perform new numerical simulations for the noise models in TABLE~\ref{model}. The simulator was a modified version of the one used in~\cite{DHN}. All controlled-\textsc{NOT} gates were replaced by controlled-$Z$ gates and two Hadamard gates, and simplifications of the resulting circuit were performed. Separate simulations were performed for protocols based on the $7$-qubit Steane code and the $23$-qubit Golay code. The resulting threshold curves are shown in FIG.~\ref{threshold-plot}. An interesting feature is that {\em increasing $\alpha$ beyond a certain point causes a reduced tolerance to photon loss}. \begin{figure} \caption{Thresholds for CSQC using the 7-qubit Steane and 23-qubit Golay codes for both memory noise models. } \label{threshold-plot} \end{figure} TABLE~\ref{table1} estimates the resource usage for one round of error correction, for 5 levels of concatenation. \begin{table} \caption{\label{table1}Effective error rates and resource usage for the 7-qubit Steane code with memory noise enabled. The coherent state amplitude used for this table is $\alpha = 1.56$ and the loss rate is $\eta = 4 \times 10^{-4}$. This corresponds to gate error rates in our model of $(p,q)=(2\times10^{-4},0.015)$. Resource usage is defined to be the total number of gates, preparations, measurements, and quantum memories used. Resources are used in the following fractions for all levels of concatenation: Memory 0.284, Hadamard 0.098, controlled-$Z$ 0.343, Diagonal states 0.164, $X$-basis measurements 0.111. Also shown is an estimate of the maximum length of computation possible assuming the entire computation succeeds with probability $1/2$.} \begin{ruledtabular} \begin{tabular}{ccccc} LEVEL & Unloc. & Loc. & Max. comp. & Resource \\ & rate & rate & steps & usage \\ \hline 1 & $4 \times 10^{-4}$ & $8 \times 10^{-3}$ & $82$ & $1.0 \times 10^3$ \\ 2 & $1.7 \times 10^{-4}$ & $2 \times 10^{-3}$ & $3.3 \times 10^2$ & $8.7 \times 10^5$ \\ 3 & $2.8 \times 10^{-5}$ & $2.1 \times 10^{-4}$ & $3.0 \times 10^3$ & $4.5 \times 10^8$ \\ 4 & $7.4 \times 10^{-7}$ & $3.6 \times 10^{-6}$ & $1.6 \times 10^5$ & $2.1 \times 10^{11}$ \\ 5 & $5.3 \times 10^{-10}$ & $1.7 \times 10^{-9}$ & $3.1 \times 10^8$ & $9.6 \times 10^{13}$ \end{tabular} \end{ruledtabular} \end{table} An advantage of CSQC over LOQC is lower resource usage. Using TABLE~\ref{table1} and the success probabilities in Fig.~\ref{gates} we find that CSQC consumes approximately $10^4$ diagonal resource states per error correction round at the first level of concatenation. This is 4 orders of magnitude less than the number of Bell pair resource states consumed under equivalent conditions by the most efficient known LOQC scheme~\cite{DHN}. However, there is a trade-off. The photon loss threshold we find for CSQC is an order of magnitude smaller than that for LOQC. This means that if the loss budget is too large then CSQC may not be scalable or may require so many levels of concatenation that the resource advantage is lost.
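The error-rate formulas above are straightforward to evaluate when exploring the $(\alpha,\eta)$ parameter space. The following short Python sketch is illustrative only: the function and variable names are ours, and it is a simple calculator, not the simulation code used to obtain the thresholds. It computes the worst-case gate failure probability $q$ and the diagonal-state $Z$-error probability $p$ from the qubit amplitude $\alpha$ and the loss parameter $\eta$.
\begin{verbatim}
import math

def gate_failure_prob(alpha, eta):
    # q = 2 / (1 + exp(2 * alpha_eff^2)) with alpha_eff = (1 - eta) * alpha
    alpha_eff = (1.0 - eta) * alpha
    return 2.0 / (1.0 + math.exp(2.0 * alpha_eff ** 2))

def z_error_prob(alpha, eta):
    # p = (1/2) * (1 + sinh((2*eta - 1) * alpha^2) / sinh(alpha^2))
    return 0.5 * (1.0 + math.sinh((2.0 * eta - 1.0) * alpha ** 2)
                  / math.sinh(alpha ** 2))

if __name__ == "__main__":
    # scan a few illustrative parameter settings
    for alpha in (1.2, 1.56, 2.0):
        for eta in (1e-4, 4e-4, 1e-3):
            print(alpha, eta,
                  gate_failure_prob(alpha, eta),
                  z_error_prob(alpha, eta))
\end{verbatim}
Such a calculator only makes the dependence of the error rates on $\alpha$ and $\eta$ explicit; the located and unlocated rates entering the simulations also include the gate-dependent coefficients of TABLE~\ref{model}.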
We note that the physical resources in terms of specific optical states required to implement CSQC and LOQC are different. Nevertheless, we believe that comparing resource state counts still gives a good estimate of the relative complexity of the two schemes. In future work, it would be valuable to include other sources of noise and improve upon some of the pessimistic assumptions made in deriving the noise model. One could consider ways of optimising the fault-tolerant protocol in order to take advantage of the relative abundance of $Z$ errors compared with $X$ errors. We have shown how to construct a universal set of gates for coherent state quantum computing for any coherent state amplitude. Provided the coherent state amplitudes are not too small ($\alpha > 1.2$) and photon loss is not too large ($\eta < 5 \times 10^{-4}$), it is possible to produce a scalable system. To our knowledge this is the first estimate of a fault-tolerance threshold for non-orthogonal qubits. As our gates work for all values of $\alpha$, proof-of-principle experiments are possible using already demonstrated technology. We acknowledge the support of the Australian Research Council, Queensland State Government and the Disruptive Technologies Office. \end{document}
\begin{document} \title{A Fully Polynomial Time Approximation Scheme for Packing While Traveling} \date{} \author{ Frank Neumann\textsuperscript{1}, Sergey Polyakovskiy\textsuperscript{1}, Martin Skutella\textsuperscript{2}, Leen Stougie\textsuperscript{3}, \\ Junhua Wu\textsuperscript{1} \\ \\ \textsuperscript{1} Optimisation and Logistics, School of Computer Science,\\ The University of Adelaide, Australia \\ \textsuperscript{2} Combinatorial Optimisation \& Graph Algorithms,\\ Department of Mathematics,\\ Technical University of Berlin, Germany \\ \textsuperscript{3} CWI and Operations Research,\\ Dept. of Economics and Business Administration,\\ Vrije Universiteit, Amsterdam, The Netherlands } \maketitle \begin{abstract} Understanding the interactions between different combinatorial optimisation problems in real-world applications is a challenging task. Recently, the traveling thief problem (TTP), as a combination of the classical traveling salesperson problem and the knapsack problem, has been introduced to study these interactions in a systematic way. We investigate the underlying non-linear packing while traveling (PWT) problem of the TTP where items have to be selected along a fixed route. We give an exact dynamic programming approach for this problem and a fully polynomial time approximation scheme (FPTAS) when maximising the benefit that can be gained over the baseline travel cost. Our experimental investigations show that our new approaches outperform current state-of-the-art approaches on a wide range of benchmark instances. \end{abstract} \section{Introduction} Combinatorial optimisation problems play a crucial role in important areas such as planning, scheduling and routing. Many combinatorial optimisation problems have been studied extensively in the literature. Two of the most prominent ones are the traveling salesperson problem (TSP) and the knapsack problem (KP) and numerous high performing algorithms have been designed for these two problems. Looking at combinatorial optimisation problems arising in real-world applications, one can observe that real-world problems often are composed of different types of combinatorial problems. For example, delivery problems usually consists of a routing part for the vehicle(s) and a packing part of the goods onto the vehicle(s). Recently, the traveling thief problem (TTP)~\cite{Bonyadi2013} has been introduced to study the interactions of different combinatorial optimisation problems in a systematic way and to gain better insights into the design of multi-component problems. The TTP combines the TSP and KP by making the speed that a vehicle travels along a TSP tour dependent on the weight of the selected items. Furthermore, the overall objective is given by the sum of the profits of the collected items minus the weight dependent travel cost along the chosen route. A wide range of heuristic search algorithms~\cite{Faulkner2015,DBLP:journals/soco/MeiLY16,ElYafrani:2016:PVS:2908812.2908847} and a large benchmark set~\cite{Polyakovskiy2014TTP} have been introduced for the TTP in recent years. However, up to now there are no high performing exact approaches to deal with the TTP. The study of non-linear planning problems is an important topic and the design of approximation algorithms has gained increasing interest in recent years~\cite{DBLP:conf/aaai/HoyN15,DBLP:conf/aaai/YangN16}. 
The non-linear packing while traveling problem (PWT) has been introduced in \cite{sergey15} to push forward systematic studies on multi-component problems and deals with the packing part combined with the non-linear travel cost function of the TTP. The PWT can be seen as the TTP when the route is fixed but the cost still depends on the weight of the items on the vehicle. The problem is motivated by the need for greater precision when minimising transportation costs that may be non-linear in nature, for example, in applications where weight impacts the fuel costs~\cite{GOODYEAR,Lin14}. From this point of view, the PWT is a baseline problem in various vehicle routing problems with non-linear costs. Some specific applications of the PWT may deal with a single truck collecting goods in large remote areas without alternative routes; that is, a single main route that the vehicle has to follow may exist, while any deviations from it to visit particular cities are negligible~\cite{DBLP:journals/corr/PolyakovskiyN15}. The problem is $\mathcal{NP}$-hard even without the capacity constraint usually imposed on the knapsack. Furthermore, exact and approximative mixed integer programming approaches as well as a branch-infer-and-bound approach~\cite{DBLP:journals/corr/PolyakovskiyN15} have been developed for this problem. We introduce a dynamic programming approach for PWT. The key idea is to consider the items in the order they appear on the route that needs to be travelled and to apply dynamic programming similarly to the classical knapsack problem~\cite{Toth1980}. When considering an item, a decision has to be made on whether or not to pack it. The dynamic programming approach computes, for the first $i$, $1 \leq i \leq m$, items and each possible weight $w$, the maximal objective value that can be obtained. As the size of the programming table depends on the number of different possible weights, the algorithm runs in pseudo-polynomial time. After having obtained the exact approach based on dynamic programming, we consider the design of a fully polynomial time approximation scheme (FPTAS)~\cite{HochbaumApproximation}. First, we show that it is $\mathcal{NP}$-hard to decide whether a given instance of PWT has a non-negative objective value. This rules out any polynomial time algorithm with a finite approximation ratio under the assumption $P\not=NP$. Due to this, we design an FPTAS for the amount that can be gained over the travel cost when the vehicle travels empty (which is the minimal possible travel cost). Our FPTAS makes use of the observation that the item with the largest benefit leads to an objective value of at least $OPT/m$ and uses appropriate rounding in the previously designed dynamic programming approach. We evaluate our two approaches on a wide range of instances from the TTP benchmark set~\cite{Polyakovskiy2014TTP} and compare them to the exact and approximative approaches given in \cite{DBLP:journals/corr/PolyakovskiyN15}. Our results show that the large majority of the instances that can be handled by exact methods are solved much more quickly by dynamic programming than by the previously developed mixed integer programming and branch-infer-and-bound approaches. Considering instances with a larger profit and weight range, we show that the choice of the approximation guarantee significantly impacts the runtime behaviour. The paper is structured as follows. In Section~2, we introduce the problem. We present the exact dynamic programming approach in Section~3 and design an FPTAS in Section~4.
Our experimental results are shown in Section~5. Finally, we finish with some conclusions. \section{Problem Statement}\label{sec:prelim} The PWT can be formally defined as follows. Given are $n+1$ cities, distances $d_i$, $1\leq i \leq n$, from city $i$ to city $i+1$, and a set of items $M$, $|M| = m$, distributed all over the first $n$ cities. W.l.o.g., we assume $m = \Omega(n)$ to simplify our notations. Each city $i$, $1 \leq i \leq n$, contains a set of items $M_i \subseteq M$, $|M_i| = m_i$. Each item $e_{ij} \in M_i$, $1 \leq j \leq m_i$, is characterised by its positive integer profit $p_{ij}$ and weight $w_{ij}$. In addition, a fixed route $N = (1, 2, ..., n+1)$ is given that is traveled by a vehicle with velocity $v \in [v_{min},v_{max}]$. Let $x_{ij} \in \{0, 1\}$ be a variable indicating whether or not item $e_{ij}$ is chosen in a solution. Then a set $S \subseteq M$ of selected items can be represented by a decision vector $x = (x_{11},x_{12},...,x_{1m_1},x_{21},...,x_{nm_n}).$ The total benefit of selecting a subset of items $S$ is calculated as $$ B(x) = P(x) - R \cdot T(x),$$ where $$P(x) = \sum\limits_{i=1}^n \sum\limits_{j=1}^{m_i} p_{ij}x_{ij}$$ represents the total profit of selected items and $$ T(x) = \sum\limits_{i=1}^n \frac{d_i}{v_{max} - \nu\sum\limits_{k=1}^i\sum\limits_{j=1}^{m_k} w_{kj}x_{kj}}$$ is the total travel time for the vehicle carrying these items. Here, $\nu = \frac{v_{max}-v_{min}}{W}$ is the constant defined by the input parameters, where $W$ is the capacity of the vehicle. $T(x)$ has the following interpretation: when the vehicle is traveling from city $i$ to city $i+1$, the selected items have to be carried and the maximal speed $v_{max}$ of the vehicle is reduced by a normalised amount that depends linearly on the weight of these items. Because the velocity is influenced by the weight of collected items, the total travel time increases along with their weight. Given a renting rate $R \in (0, \infty)$, $R \cdot T(x)$ is the total cost of carrying the items chosen by $x$. The objective of this problem is to find a solution $x^* = \arg max_{x \in \{0,1\}^m} B(x).$ We investigate dynamic programming and approximation algorithms~\cite{HochbaumApproximation} for the non-linear packing while traveling problem. A FPTAS for a given maximisation problem is an algorithm $A$ that obtains for any valid input $I$ and $\epsilon$, $0 < \epsilon \leq 1$, a solution of objective value $A(I) \geq (1- \epsilon) OPT(I)$ in time polynomial in the input size $|I|$ and $1/\epsilon$. \section{Dynamic Programming}\label{sec:dp} We introduce a dynamic programming approach for solving the PWT. Dynamic programming is one of the traditional approaches for the classical knapsack problem~\cite{Toth1980}. The dynamic programming table $\beta$ consists of $W$ rows and $m$ columns. Items are processed in the order they appear along the path $N$ and we consider them in the lexicographic order with respect to their indices, i.e. $$e_{ab} \preceq e_{ij},\text{ iff } ((a < i) \vee ( a=i \wedge b \leq j)).$$ Note that $\preceq$ is a total strict order and we process the items in this order starting with the smallest element. The entry $\beta_{i,j,k}$ represents the maximal benefit that can be obtained by considering all combinations of items $e_{ab}$ with $e_{ab} \preceq e_{ij}$ leading to weight exactly $k$. We denote by $\beta(i,j, \cdot)$ the column containing the entries $\beta_{i,j,k}$. In the case that a combination of weight $k$ doesn't exist, we set $\beta_{i,j,k}=-\infty$. 
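Before describing the recurrence in detail, it may help to fix the objective of Section~\ref{sec:prelim} in code. The following minimal Python sketch is purely illustrative (the data layout and names are ours): it evaluates $B(x)=P(x)-R\cdot T(x)$ for a given decision vector, assuming the items are given as a flat list sorted along the route, in accordance with the order $\preceq$ introduced above.
\begin{verbatim}
# An item is a triple (city, profit, weight) with cities numbered 1..n;
# the list of items is assumed to be sorted along the route.
def evaluate_pwt(d, items, x, R, v_max, v_min, W):
    """Evaluate B(x) = P(x) - R * T(x) for a 0/1 decision vector x."""
    nu = (v_max - v_min) / W     # velocity reduction per unit of weight
    n = len(d)                   # d[i-1] = distance from city i to city i+1
    profit, carried, time, k = 0, 0, 0.0, 0
    for city in range(1, n + 1):
        # collect the selected items located at this city
        while k < len(items) and items[k][0] == city:
            if x[k]:
                profit += items[k][1]
                carried += items[k][2]
            k += 1
        # travel to the next city with the current load
        time += d[city - 1] / (v_max - nu * carried)
    return profit - R * time
\end{verbatim}
The dynamic program described next computes the best value of this objective over all selections of a given total weight.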
We denote by $$d_{in} = \sum_{l=i}^n d_{l}$$ the distance from city $i$ to the last city $n+1$. We denote by $B(\emptyset)$ the benefit of the empty set, which corresponds to the travel cost when the vehicle travels empty. Furthermore, $B(e_{ij})$ denotes the benefit when only item $e_{ij}$ is chosen. For the first item $e_{ij}$ according to $\preceq$, we set $$\beta(i,j,0)=B(\emptyset),$$ $$\beta(i,j,w_{ij})=B(e_{ij}),$$ and $$\beta(i,j,k) = - \infty \text{ for } k \not \in \{0,w_{ij}\}.$$ Let $e_{i'j'}$ be the predecessor of item $e_{ij}$ in $\preceq$. Based on $\beta(i',j', \cdot)$ we compute for $\beta(i,j, \cdot)$ each entry $\beta_{i,j,k}$ as \begin{displaymath} \max \!\left\{ \begin{array}{l} \!\!\beta_{i',j',k} \\ \!\!\beta_{i',j',k-w_{ij}} \!+\! p_{ij} \!-\! Rd_{in} (\frac{1}{v_{max}-\nu k} \!-\! \frac{1}{v_{max}-\nu( k - w_{ij})})\\ \end{array} \right. \end{displaymath} Let $e_{st}$ be the last element according to $\preceq$; then $\max_k \beta(s,t,k)$ is reported as the value of an optimal solution. We now investigate the runtime for this dynamic program. If $d_{in}$ has been computed for each $i$, $1 \leq i \leq n$, which takes $O(n)$ time in total, then each entry can be computed in constant time. \begin{theorem} The entry $\beta(i,j,k)$ stores the maximal possible benefit over all subsets of $I_{ij} = \{e_{ab} \mid e_{ab} \preceq e_{ij}\}$ having weight exactly $k$. \end{theorem} \begin{proof} The proof is by induction. The statement is true for the first item $e_{ij}$ according to $\preceq$ as there are only the two options of choosing or not choosing $e_{ij}$. Assume that $\beta(i',j',k)$ stores the maximal benefit for each weight $k$ when considering all items of $I_{i'j'}$. Two options exist when we consider item $e_{ij}$ in addition: to include or not to include $e_{ij}$. If $e_{ij}$ is not included, then the best possible value for $\beta(i,j,k)$ is $\beta(i',j',k)$. If $e_{ij}$ is included, then the remaining weight has to come from the previous items, whose maximal benefit is $\beta(i',j',k-w_{ij})$. Transporting a set of items of weight $k-w_{ij}$ from city $i$ to city $n+1$ has cost $$\frac{Rd_{in}}{v_{max}-\nu ( k - w_{ij})}$$ and transporting a set of items of weight $k$ from city $i$ to $n+1$ has cost $$\frac{Rd_{in}}{v_{max}-\nu k}.$$ The cost of transporting items of a fixed weight from city $i$ to city $n+1$ is independent of the choice of items. Therefore, $\beta(i,j,k)$ stores the maximal possible benefit when considering all possible subsets of $I_{ij} = \{e_{ab} \mid e_{ab} \preceq e_{ij}\}$ having weight exactly $k$. \end{proof} To speed up the computation of our DP approach, we only store an entry for $\beta(i,j,k)$ if it is not dominated by any other entry in $\beta(i,j, \cdot)$, i.e. there is no other entry $\beta(i,j,k')$ with $\beta(i,j,k') \geq \beta(i,j,k) \text{ and } k' < k.$ This does not affect the correctness of the approach as an item $e_{ij}$ can be added to any entry of $\beta(i',j', \cdot)$ and therefore we obtain for each dominated entry at least one entry in the last column having at least the same benefit but potentially smaller weight. \section{Approximation Algorithms}\label{sec:fptas} We now turn our attention to approximation algorithms. The NP-hardness proof for PWT given in~\cite{DBLP:journals/corr/PolyakovskiyN15} does not rule out polynomial time approximation algorithms. In this section, we first show that polynomial time approximation algorithms with a finite approximation ratio do not exist under the assumption $P \not = NP$.
This motivates the design of an FPTAS for the amount that can be gained over the baseline cost when the vehicle is traveling empty. \subsection{Inapproximability of PWT} The objective function for PWT can take on positive and negative values. We show that deciding whether a given PWT instance has a solution with non-negative objective value is already NP-complete. \begin{theorem} \label{thm:nonapprox} Given a PWT instance, the problem of deciding whether there is a solution $x$ with $B(x) \geq 0$ is NP-complete. \end{theorem} \begin{proof} The problem is in NP as one can verify in polynomial time for a given solution $x$ whether $B(x) \geq 0$ holds by evaluating the objective function. It remains to show that the problem is NP-hard. We address two cases: when $B(x)$ is subject to the capacity constraint and when it is unconstrained. In both cases, we reduce the $\mathcal{NP}$-complete \textit{subset sum problem} (SSP) to the decision variant of PWT which asks whether there is a solution with objective value at least 0. The input for SSP is given by $m$ positive integers $S=\left\{s_1, \ldots, s_m\right\}$ and a positive integer $Q$. The question is whether there exists a vector $x \in \left\{0,1\right\}^m$ such that $\sum_{k=1}^m {s_kx_k} = Q$. We encode the instance of SSP given by $S$ and $Q$ as an instance of PWT consisting of two cities. The first city contains all the $m$ items and the distance between the cities is $d_1=1$. We assume that $p_{1k}=w_{1k}=s_k$, $1 \leq k \leq m$. To prove the first case, we construct the instance $I'$ of PWT. We extend the initial settings by giving the vehicle capacity $W=Q$ and defining its velocity range as $\upsilon_{max}=2$ and $\upsilon_{min}=1$. Furthermore, we set $R^*=Q$. Consider the nonlinear function $f'_{R^*} \colon \left[0,W\right] \rightarrow \mathbb{R}$ defined as {\footnotesize $$ f'_{R^*}\left(w\right)=w-\frac{R^*}{2- w/W}=w-\frac{Q}{2- w/Q}.$$ } $f'_{R^*}$, which is defined on the interval $\left[0,W\right]$, is a continuous concave function that reaches its unique maximum of 0 at the point $w^* = W = Q$, i.e. $f'_{R^*}\left(w \right)<0$ for $w \in [0,W]$ and $w \not = w^*$. Hence, 0 is also the maximum value of $f'_{R^*}$ when restricted to integer inputs. Therefore, the objective function for this PWT instance is given by {\footnotesize $$ g'_{R^*}\left(x\right)=\displaystyle\sum_{k=1}^m {p_{1k}x_k}-\frac{R^*}{2- \frac{1}{W} \displaystyle\sum_{k=1}^m {w_{1k}x_k}}.$$ } There exists an $x \in \{0,1\}^m$ such that $g'_{R^*}(x) \geq 0$ iff $$\sum_{k=1}^m s_kx_k=\sum_{k=1}^m w_{1k}x_k=\sum_{k=1}^m p_{1k}x_k=Q.$$ Therefore, the instance of SSP has answer YES iff the optimal solution of the PWT instance $I'$ has objective value at least 0. Obviously, the reduction can be carried out in polynomial time, which completes the proof of the first case.
To prove the second case, we construct the instance $I''$ of PWT where our settings assume $$W = \sum_{k=1}^m s_k$$ and $$\upsilon_{min}=\sqrt{Q/(2W-Q)} = \upsilon_{max}/2.$$ We then set $$R^* = \upsilon_{min} \cdot W \left(\upsilon_{max} - \upsilon_{min} \cdot Q/W\right)^2.$$ Finally, this gives us the functions $f''_{R^*}\left(w\right)$ and $g''_{R^*}\left(x\right)$ of the following forms: {\footnotesize $$ f''_{R^*}\left(w\right)=w-\frac{R^*}{\upsilon_{max}- \upsilon_{min} \cdot w/W}.$$ } {\footnotesize $$ g''_{R^*}\left(x\right)=\displaystyle\sum_{k=1}^m {p_{1k}x_k}-\frac{R^*}{\upsilon_{max}- \frac{\upsilon_{min}}{W} \displaystyle\sum_{k=1}^m {w_{1k}x_k}}.$$ } Similarly, there exists an $x \in \{0,1\}^m$ such that $g''_{R^*}(x) \geq 0$ iff $$\sum_{k=1}^m s_kx_k=\sum_{k=1}^m w_{1k}x_k=\sum_{k=1}^m p_{1k}x_k=Q.$$ Therefore, the instance of SSP has answer YES iff the optimal solution of the PWT instance $I''$ has objective value at least 0, while the reduction can be carried out in polynomial time. \end{proof} The objective function can take on negative and non-negative values. Theorem~\ref{thm:nonapprox} rules out meaningful approximations for the original objective functions $B$ and we state this in the following corollary. \begin{corollary} There is no polynomial time approximation algorithm for PWT with a meaningful approximation ratio, unless P=NP. \end{corollary} \subsection{FPTAS for amount over baseline travel cost} As there are no polynomial time approximation algorithms for fixed approximation ratio for PWT, we consider the amount that can be gained over the cost when the vehicle travels empty as the objective. This is motivated by the scenario where the vehicle has to travel along the given route and the goal is to maximise the gain over this baseline cost. Note that an optimal solution for this objective is also an optimal solution for PWT. However, approximation results do not carry over to PWT as the objective values are ``shifted'' by the cost when traveling empty. \begin{algorithm*}[th] \begin{itemize} \item Set $L = \max_{e_{ij} \in M} B'(e_{ij})$, $r = \epsilon L /m$, and $d_{in} = \sum_{l=i}^n d_{l}$, $1\leq i \leq n$. \item Compute order $\preceq$ on the items $e_{ij}$ by sorting them in lexicographic order with respect to their indices $(i,j)$. \item For the first item $e_{ij}$ according to $\preceq$, set $\beta(i,j,0) = B'(\emptyset)$ and $\beta(i,j,w_{ij}) = B'(e_{ij})$. \item Consider the remaining items of $M$ in the order of $\preceq$ and do for each item $e_{ij}$ and its predecessor $e_{i'j'}$: \begin{itemize} \item In increasing order of $k$ do for each $\beta(i',j',k)$ with $\beta(i',j',k)\not = -\infty$ \begin{itemize} \item If there is no $\beta(i,j,k')$ with ($\lfloor \beta(i,j,k')/r\rfloor \geq \lfloor \beta(i',j',k)/r\rfloor$ and $k'<k$),\\ set $\beta(i,j,k) = max\{ \beta(i,j,k), \beta(i',j',k) \}$. \item If there is no $\beta(i,j,k')$ with ($\lfloor \beta(i,j,k')/r\rfloor \geq \lfloor \beta(i',j',k+w_{ij})/r\rfloor$ and $k'<k+w_{ij}$),\\ set $\beta(i,j,k+w_{ij}) = max\{ \beta(i,j,k+w_{ij}), \beta(i',j',k) + p_{ij} + Rd_{in} ( \frac{1}{v_{\max}-\nu k} - \frac{1}{v_{max}-\nu (k+w_{ij})}) \}$. \end{itemize} \end{itemize} \end{itemize} \caption{FPTAS for $B'(x)$} \label{alg:fptas} \end{algorithm*} Let $$B(\emptyset) = - R \cdot \sum_{i=1}^n d_i / v_{\max}$$ be the travel cost (or benefit) for the empty truck. $B(\emptyset)$ can be seen as the set up cost that we have to pay at least. We consider the objective $$B'(x) = B(x) - B(\emptyset),$$ i. e. 
for the amount that we can gain over this setup cost, and give an FPTAS. Note that we have $-R \cdot T(x) \leq B(\emptyset)$ for any $x \in \{0,1\}^m$ and $P(x)- R \cdot T(x) - B(\emptyset)=0$ if $x = 0^m$. We now give an FPTAS for the amount that can be gained over the cost when the vehicle travels empty and denote by OPT the optimal value for this objective, i.e. $$ OPT = \max_{x \in \{0,1\}^m} B'(x). $$ Running the dynamic program for $B'(x)$ instead of $B(x)$ increases each entry by $|B(\emptyset)|$ and therefore yields an optimal solution for $B'(x)$ in pseudo-polynomial time. In order to obtain an FPTAS, we round the values of $B'(x)$ and store for each rounded value only the minimal achievable weight. Let $$ t(w)= \frac{1}{v_{\max} - \nu w} $$ denote the travel time per unit distance when traveling with weight $w$. We have $t(x+w) - t(x) \geq t(w) - t(0)$ for any $x\geq 0$ as $t$ is a convex function. Consider the value $B(e_{ij}) - B(\emptyset)$ which gives the additional amount over $B(\emptyset)$ when only packing item $e_{ij}$. We assume that there exists at least one item $e_{ij}$ with $B(e_{ij}) - B(\emptyset) >0$ as otherwise $OPT=0$, with the optimal solution being $0^m$. Let $P(e_{ij})$ and $T(e_{ij})$ be the profit and travel time when only choosing item $e_{ij}$. Furthermore, let $x^* = \arg \max_{x \in \{0,1\}^m} B'(x)$ be an optimal solution of value $OPT>0$. We have $$ \sum_{i=1}^n \sum_{j=1}^{m_i} \left(P(e_{ij})- R \cdot T(e_{ij}) - B(\emptyset)\right) x_{ij}^* \geq B(x^*) - B(\emptyset) = OPT $$ as $t(w)$ is monotonically increasing and convex, so that the travel time increases caused by the selected items individually sum up to at most the total increase caused by $x^*$. Therefore, the item $e_{ij}$ of $x^*$ for which $B(e_{ij}) - B(\emptyset)$ is maximal fulfils $B(e_{ij}) - B(\emptyset) \geq OPT/m. $ Let $$L =\max_{e_{ij} \in M} B'(e_{ij}) >0$$ be the maximal possible objective value when choosing exactly one item. We have $$L \geq OPT/m \text{ and }L \leq OPT.$$ We set $r = \epsilon L /m$, where $\epsilon$ is the approximation parameter for the FPTAS. For the FPTAS we round $B'(x)$ to $\lfloor B'(x)/r \rfloor$ and store for each such value the minimal weight obtained. As we only store entries with $0 \leq B'(x) \leq OPT$, and, based on dominance and rounding, at most one entry for each rounded value, the total number of entries per column is upper bounded by $$(OPT/ r) +1 \leq OPT/ (\epsilon L/m) + 1 \leq m^2/\epsilon +1$$ and the number of entries in the dynamic programming table is $O(m^3 /\epsilon)$. In each step, we make an error of at most $$r = \epsilon L /m \leq \epsilon OPT /m$$ and the error after $m$ steps is at most $\epsilon L \leq \epsilon OPT.$ Hence, the solution $x$ with maximal $B'$-value after having considered all items fulfils $$B'(x) \geq (1-\epsilon) OPT.$$ To implement the idea (see Algorithm~\ref{alg:fptas}), we only store an entry $\beta(i,j,k)$ if there is no entry $\beta(i,j,k')$ with $$\lfloor \beta(i,j,k')/r\rfloor \geq \lfloor \beta(i,j,k)/r\rfloor \text{ and }k'<k.$$ Hence, for each possible value $\lfloor \beta(i,j,k)/r\rfloor$ at most one entry is stored and the number of entries for each column $\beta(i,j,\cdot)$ is upper bounded by $m^2/\epsilon +1$ (as stated above). For our implementation, each column $\beta(i,j,\cdot)$ is stored as a list containing the entries $\beta(i,j,k)$ in increasing order of $k$. Based on our investigations and the design of Algorithm~\ref{alg:fptas}, we can state the following result. \begin{theorem} Algorithm~\ref{alg:fptas} is a fully polynomial time approximation scheme (FPTAS) for the objective $B'$.
It obtains for any $\epsilon$, $0< \epsilon \leq 1$, a solution $x$ with $B'(x) \geq (1-\epsilon) \cdot OPT$ in time $O(m^3/\epsilon)$. \end{theorem} The construction of the FPTAS only used the fact that the travel time per unit distance is monotonically increasing and convex. Hence, the FPTAS holds for any PWT problem where the travel time per unit distance has this property. \section{Experiments and Results}\label{sec:experiments} In this section, we investigate the effectiveness of the proposed DP and FPTAS approaches based on our implementations in Java\footnote{The code will be made available online at time of publication.}. We mainly focus on two issues: 1) studying how the DP and FPTAS perform compared to the state-of-the-art approaches; 2) investigating how the performance and accuracy of the FPTAS change when the parameter $\epsilon$ is altered. \begin{landscape} \setlength{\tabcolsep}{3pt} \begin{table*} \centering \caption{Results on Small Range Instances} \label{tab:smallresfptas} {\footnotesize \begin{adjustwidth}{0cm}{} \begin{tabular}{rc|@{\,}r@{\,}|r|r|r|rr|rr|rr|rr|rr|rr} \hline \multirow{4}{*}{Instance}& \multirow{4}{*}{m}& \multicolumn{1}{c|}{\multirow{4}{*}{OPT}} & \multicolumn{3}{c|}{Exact Approaches} & \multicolumn{12}{c}{Approximation Approaches} \\ \hhline{~~~---------------} &&& \multicolumn{1}{c|}{exactMIP} &\multicolumn{1}{c|}{BIB} &\multicolumn{1}{c|}{DP} &\multicolumn{2}{c|}{approxMIP} &\multicolumn{10}{c}{FPTAS} \\ \hhline{~~~---------------} &&&&&&&& \multicolumn{2}{c|}{$\epsilon=0.0001$}& \multicolumn{2}{c|}{$\epsilon=0.01$}& \multicolumn{2}{c|}{$\epsilon=0.1$}& \multicolumn{2}{c|}{$\epsilon=0.25$}& \multicolumn{2}{c}{$\epsilon=0.75$}\\ & & & RT(s)& RT(s)& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)\\ \hline \multicolumn{18}{c}{instance family \texttt{eil101}} \\ \hline uncorr\_01 & 100 & 1651.6970 & 1.217 & 5.694 & 0.027 & 100.0000 & 3.838 & 100.0000 & 0.001 & 100.0000 & 0.001 & 100.0000 & 0.001 & 100.0000 & 0.001 & 100.0000 & 0.025 \\ uncorr\_06 & 100 & 10155.4942 & 12.605 & 3.698 & 0.065 & 100.0000 & 4.961 & 100.0000 & 0.012 & 100.0000 & 0.011 & 100.0000 & 0.011 & 100.0000 & 0.011 & 99.9928 & 0.063 \\ uncorr\_10 & 100 & 10297.7134 & 3.525 & 0.795 & 0.036 & 100.0000 & 0.624 & 100.0000 & 0.017 & 100.0000 & 0.017 & 99.9939 & 0.016 & 99.9939 & 0.016 & 99.9653 & 0.037 \\ uncorr-s-w\_01 & 100 & 2152.6188 & 0.328 & 7.566 & 0.001 & 100.0000 & 3.978 & 100.0000 & 0.000 & 100.0000 & 0.000 & 100.0000 & 0.000 & 100.0000 & 0.000 & 100.0000 & 0.003 \\ uncorr-s-w\_06 & 100 & 4333.8512 & 12.590 & 2.215 & 0.012 & 100.0000 & 2.699 & 100.0000 & 0.008 & 100.0000 & 0.007 & 100.0000 & 0.007 & 99.9569 & 0.008 & 99.9569 & 0.017 \\ uncorr-s-w\_10 & 100 & 9048.4908 & 37.144 & 1.107 & 0.022 & 100.0000 & 1.763 & 100.0000 & 0.012 & 100.0000 & 0.012 & 100.0000 & 0.012 & 100.0000 & 0.013 & 99.9355 & 0.020 \\ b-s-corr\_01 & 100 & 4441.9852 & 1.420 & 125.954 & 0.014 & 100.0000 & 5.366 & 100.0000 & 0.010 & 100.0000 & 0.009 & 100.0000 & 0.009 & 100.0000 & 0.008 & 100.0000 & 0.013 \\ b-s-corr\_06 & 100 & 10260.9767 & 4.509 & 22.541 & 0.101 & 100.0000 & 2.761 & 100.0000 & 0.058 & 100.0000 & 0.057 & 100.0000 & 0.048 & 100.0000 & 0.043 & 100.0000 & 0.087 \\ b-s-corr\_10 & 100 & 13630.6153 & 11.013 & 27.081 & 0.187 & 99.9971 & 3.713 & 100.0000 & 0.103 & 100.0000 & 0.101 & 99.9971 & 0.081 & 99.9606 & 0.065 & 99.8143 & 0.113 \\ uncorr\_01 & 500 & 17608.5781 & 19.594 & 27.581 & 0.247 & 100.0000 & 5.757 & 100.0000 & 0.171 & 100.0000 & 0.161 & 
100.0000 & 0.153 & 100.0000 & 0.163 & 100.0000 & 0.377 \\ uncorr\_06 & 500 & 56294.5239 & 384.213 & 13.354 & 2.829 & 100.0000 & 7.800 & 100.0000 & 2.370 & 100.0000 & 2.344 & 100.0000 & 2.300 & 100.0000 & 2.212 & 100.0000 & 2.340 \\ uncorr\_10 & 500 & 66141.4840 & 211.302 & 2.325 & 4.010 & 100.0000 & 0.718 & 100.0000 & 3.720 & 100.0000 & 3.645 & 100.0000 & 3.446 & 100.0000 & 3.531 & 100.0000 & 3.632 \\ uncorr-s-w\_01 & 500 & 13418.8406 & 4.337 & 34.866 & 0.090 & 100.0000 & 50.310 & 100.0000 & 0.085 & 100.0000 & 0.090 & 100.0000 & 0.084 & 100.0000 & 0.087 & 99.9910 & 0.085 \\ uncorr-s-w\_06 & 500 & 34280.4730 & 346.430 & 7.285 & 1.040 & 100.0000 & 9.609 & 100.0000 & 0.964 & 100.0000 & 0.933 & 100.0000 & 0.905 & 100.0000 & 0.936 & 100.0000 & 0.920 \\ uncorr-s-w\_10 & 500 & 50836.6588 & 519.902 & 3.338 & 2.022 & 100.0000 & 3.354 & 100.0000 & 2.005 & 100.0000 & 1.783 & 100.0000 & 1.753 & 100.0000 & 1.784 & 100.0000 & 2.147 \\ b-s-corr\_01 & 500 & 21306.9158 & 40.482 & 624.204 & 1.534 & 100.0000 & 13.338 & 100.0000 & 1.373 & 100.0000 & 1.279 & 100.0000 & 1.116 & 100.0000 & 0.949 & 100.0000 & 0.716 \\ b-s-corr\_06 & 500 & 69370.2367 & 236.387 & 97.313 & 14.616 & 99.9996 & 7.847 & 100.0000 & 13.393 & 100.0000 & 12.975 & 100.0000 & 11.642 & 99.9996 & 9.741 & 99.9996 & 6.018 \\ b-s-corr\_10 & 500 & 82033.9452 & 376.569 & 218.728 & 22.011 & 100.0000 & 2.309 & 100.0000 & 21.372 & 100.0000 & 20.829 & 100.0000 & 18.573 & 100.0000 & 15.313 & 99.9943 & 8.840 \\ uncorr\_01 & 1000 & 36170.9109 & 218.306 & 114.567 & 1.872 & 99.9993 & 11.918 & 100.0000 & 1.891 & 100.0000 & 1.875 & 100.0000 & 1.832 & 100.0000 & 1.845 & 100.0000 & 1.764 \\ uncorr\_06 & 1000 & 93949.1981 & 1261.949 & 36.847 & 20.944 & 100.0000 & 17.971 & 100.0000 & 17.024 & 100.0000 & 16.615 & 100.0000 & 16.545 & 100.0000 & 16.378 & 100.0000 & 15.713 \\ uncorr\_10 & 1000 & 122963.6617 & 620.896 & 4.821 & 30.116 & 100.0000 & 2.184 & 100.0000 & 27.305 & 100.0000 & 26.783 & 100.0000 & 26.541 & 100.0000 & 26.051 & 100.0000 & 23.905 \\ uncorr-s-w\_01 & 1000 & 27800.9614 & 241.957 & 399.158 & 0.802 & 100.0000 & 4985.566 & 100.0000 & 0.730 & 100.0000 & 0.690 & 100.0000 & 0.688 & 100.0000 & 0.724 & 100.0000 & 0.687 \\ uncorr-s-w\_06 & 1000 & 61764.4599 & 1152.624 & 12.792 & 9.872 & 100.0000 & 19.063 & 100.0000 & 8.686 & 100.0000 & 8.812 & 100.0000 & 8.560 & 100.0000 & 8.740 & 100.0000 & 8.396 \\ uncorr-s-w\_10 & 1000 & 103572.4074 & 2146.408 & 7.644 & 15.047 & 100.0000 & 9.688 & 100.0000 & 14.030 & 100.0000 & 13.912 & 100.0000 & 13.797 & 100.0000 & 13.982 & 100.0000 & 13.492 \\ b-s-corr\_01 & 1000 & 46886.1094 & 378.551 & 6129.531 & 11.783 & 99.9988 & 46.394 & 100.0000 & 11.714 & 100.0000 & 11.358 & 100.0000 & 10.793 & 100.0000 & 9.592 & 100.0000 & 6.536 \\ b-s-corr\_06 & 1000 & 125830.6887 & 643.533 & 919.201 & 94.523 & 99.9999 & 10.311 & 100.0000 & 92.411 & 100.0000 & 91.039 & 100.0000 & 83.002 & 99.9999 & 71.078 & 100.0000 & 45.433 \\ b-s-corr\_10 & 1000 & 161990.5015 & 862.572 & 1646.520 & 151.601 & 100.0000 & 7.160 & 100.0000 & 150.279 & 100.0000 & 149.722 & 100.0000 & 134.764 & 100.0000 & 113.049 & 99.9981 & 70.135 \\ \hline \end{tabular} \end{adjustwidth} } \end{table*} \end{landscape} \begin{landscape} \setlength{\tabcolsep}{2pt} \begin{table*} \centering \caption{Results of DP and FPTAS on Large Range Instances} \label{tab:mediumresfptas} {\footnotesize \begin{adjustwidth}{0cm}{} \begin{tabular}{rc|rr|rr|rr|rr|rr|rr|rr|rr} \hline \multirow{3}{*}{Instance}& \multirow{3}{*}{m}& \multicolumn{2}{c|}{DP} &\multicolumn{14}{c}{FPTAS} \\ 
\hhline{~~----------------} &&&& \multicolumn{2}{c|}{$\epsilon=0.0001$}& \multicolumn{2}{c|}{$\epsilon=0.001$}& \multicolumn{2}{c|}{$\epsilon=0.01$}& \multicolumn{2}{c|}{$\epsilon=0.1$}& \multicolumn{2}{c|}{$\epsilon=0.25$}& \multicolumn{2}{c|}{$\epsilon=0.5$}& \multicolumn{2}{c}{$\epsilon=0.75$}\\ & & OPT& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)& AR(\%)& RT(s)& AR(\%)&RT(s)\\ \hline \multicolumn{18}{c}{instance family \texttt{eil101\_large-range}} \\ \hline uncorr\_01& 100& 69802802.2801 & 0.030 & 100.0000 & 0.002 & 100.0000 & 0.002 & 100.0000 & 0.002 & 100.0000 & 0.002 & 100.0000 & 0.002 & 100.0000 & 0.002 & 100.0000 & 0.029 \\ uncorr\_06& 100& 204813765.6933 & 0.053 & 100.0000 & 0.019 & 100.0000 & 0.020 & 100.0000 & 0.019 & 100.0000 & 0.019 & 100.0000 & 0.019 & 100.0000 & 0.019 & 100.0000 & 0.049 \\ uncorr\_10& 100& 172176182.1249 & 0.041 & 100.0000 & 0.028 & 100.0000 & 0.028 & 100.0000 & 0.028 & 100.0000 & 0.028 & 100.0000 & 0.027 & 100.0000 & 0.026 & 99.9628 & 0.037 \\ uncorr-s-w\_01& 100& 36420530.5753 & 0.006 & 100.0000 & 0.003 & 100.0000 & 0.003 & 100.0000 & 0.003 & 100.0000 & 0.003 & 100.0000 & 0.003 & 100.0000 & 0.002 & 100.0000 & 0.004 \\ uncorr-s-w\_06& 100& 148058928.2952 & 0.098 & 100.0000 & 0.072 & 100.0000 & 0.502 & 100.0000 & 0.072 & 100.0000 & 0.069 & 100.0000 & 0.065 & 100.0000 & 0.059 & 100.0000 & 0.070 \\ uncorr-s-w\_10& 100& 142538516.4602 & 0.136 & 100.0000 & 0.101 & 100.0000 & 0.104 & 100.0000 & 0.103 & 99.9978 & 0.096 & 99.9978 & 0.086 & 99.9978 & 0.073 & 99.9978 & 0.089 \\ m-s-corr\_01& 100& 19549602.2671 & 0.003 & 100.0000 & 0.002 & 100.0000 & 0.002 & 100.0000 & 0.002 & 100.0000 & 0.002 & 100.0000 & 0.002 & 100.0000 & 0.001 & 100.0000 & 0.002 \\ m-s-corr\_06& 100& 137203175.1921 & 0.147 & 100.0000 & 0.115 & 100.0000 & 0.118 & 100.0000 & 0.113 & 100.0000 & 0.089 & 100.0000 & 0.063 & 100.0000 & 0.040 & 100.0000 & 0.043 \\ m-s-corr\_10& 100& 225584278.6004 & 0.424 & 100.0000 & 0.326 & 100.0000 & 0.329 & 100.0000 & 0.312 & 100.0000 & 0.200 & 100.0000 & 0.179 & 100.0000 & 0.086 & 100.0000 & 0.073 \\ uncorr\_01& 500& 385692662.0930 & 0.470 & 100.0000 & 0.451 & 100.0000 & 0.454 & 100.0000 & 0.619 & 100.0000 & 0.508 & 100.0000 & 0.445 & 100.0000 & 0.430 & 100.0000 & 0.517 \\ uncorr\_06& 500& 958013934.6172 & 3.539 & 100.0000 & 3.749 & 100.0000 & 7.431 & 100.0000 & 3.947 & 100.0000 & 3.690 & 99.9996 & 3.677 & 99.9996 & 3.486 & 99.9993 & 3.021 \\ uncorr\_10& 500& 844949838.4389 & 4.870 & 100.0000 & 5.393 & 100.0000 & 5.716 & 100.0000 & 5.483 & 100.0000 & 5.135 & 100.0000 & 4.851 & 99.9992 & 4.609 & 99.9992 & 4.295 \\ uncorr-s-w\_01& 500& 182418888.9364 & 1.157 & 100.0000 & 1.157 & 100.0000 & 1.199 & 100.0000 & 1.145 & 99.9995 & 1.112 & 99.9995 & 1.063 & 99.9995 & 0.977 & 99.9904 & 0.929 \\ uncorr-s-w\_06& 500& 780432253.0187 & 22.390 & 100.0000 & 25.040 & 100.0000 & 26.276 & 100.0000 & 24.024 & 100.0000 & 23.282 & 99.9997 & 21.756 & 99.9997 & 18.293 & 99.9997 & 18.411 \\ uncorr-s-w\_10& 500& 714433353.7957 & 30.959 & 100.0000 & 34.458 & 100.0000 & 39.004 & 100.0000 & 34.308 & 100.0000 & 32.308 & 99.9996 & 28.792 & 99.9990 & 26.392 & 99.9990 & 25.971 \\ m-s-corr\_01& 500& 96463941.1275 & 2.335 & 100.0000 & 2.478 & 100.0000 & 2.782 & 100.0000 & 2.695 & 100.0000 & 1.509 & 100.0000 & 0.963 & 100.0000 & 0.546 & 100.0000 & 0.408 \\ m-s-corr\_06& 500& 666701000.1488 & 108.705 & 100.0000 & 126.833 & 100.0000 & 139.630 & 100.0000 & 122.750 & 100.0000 & 62.479 & 100.0000 & 33.547 & 100.0000 & 17.959 & 100.0000 & 10.642 \\ m-s-corr\_10& 500& 
1082009880.5886 & 262.999 & 100.0000 & 299.862 & 100.0000 & 317.352 & 100.0000 & 274.284 & 100.0000 & 145.087 & 100.0000 & 78.470 & 99.9994 & 41.816 & 99.9994 & 25.924 \\ uncorr\_01& 1000& 777386336.9660 & 4.222 & 100.0000 & 4.397 & 100.0000 & 4.347 & 100.0000 & 4.309 & 100.0000 & 4.341 & 100.0000 & 4.377 & 100.0000 & 4.280 & 100.0000 & 4.240 \\ uncorr\_06& 1000& 1933319297.4248 & 46.043 & 100.0000 & 51.383 & 100.0000 & 53.087 & 100.0000 & 48.861 & 100.0000 & 52.957 & 99.9999 & 52.062 & 99.9997 & 50.286 & 99.9996 & 51.488 \\ uncorr\_10& 1000& 1693797490.1704 & 64.485 & 100.0000 & 76.744 & 100.0000 & 78.847 & 100.0000 & 74.128 & 100.0000 & 82.754 & 100.0000 & 77.057 & 100.0000 & 72.283 & 100.0000 & 72.567 \\ uncorr-s-w\_01& 1000& 361991311.8336 & 14.254 & 100.0000 & 15.072 & 100.0000 & 15.670 & 100.0000 & 14.523 & 100.0000 & 14.110 & 100.0000 & 14.039 & 100.0000 & 12.088 & 100.0000 & 11.129 \\ uncorr-s-w\_06& 1000& 1574469459.3163 & 286.843 & 100.0000 & 318.096 & 100.0000 & 330.508 & 100.0000 & 337.289 & 100.0000 & 334.318 & 100.0000 & 307.588 & 99.9998 & 270.013 & 99.9996 & 245.927 \\ uncorr-s-w\_10& 1000& 1439410696.3695 & 393.793 & 100.0000 & 438.775 & 100.0000 & 455.830 & 100.0000 & 464.527 & 100.0000 & 441.955 & 100.0000 & 433.672 & 99.9994 & 378.917 & 99.9994 & 340.813 \\ m-s-corr\_01& 1000& 191170309.5684 & 46.858 & 100.0000 & 58.031 & 100.0000 & 59.987 & 100.0000 & 58.101 & 100.0000 & 31.703 & 100.0000 & 18.771 & 100.0000 & 10.728 & 100.0000 & 6.831 \\ m-s-corr\_06& 1000& 1315708161.7720 & 2393.205 & 100.0000 & 2512.281 & 100.0000 & 2606.412 & 100.0000 & 1921.573 & 100.0000 & 666.749 & 100.0000 & 364.452 & 100.0000 & 208.969 & 100.0000 & 150.060 \\ m-s-corr\_10& 1000& 2163713055.3759 & 6761.490 & 100.0000 & 6668.535 & 100.0000 & 6441.906 & 100.0000 & 4526.653 & 100.0000 & 1334.882 & 100.0000 & 703.258 & 100.0000 & 397.527 & 100.0000 & 282.211 \\ \hline \end{tabular} \end{adjustwidth} } \end{table*} \end{landscape} In order to be comparable to the mixed integer programming (MIP) and the branch-infer-and-bound (BIB) approaches presented in ~\cite{DBLP:journals/corr/PolyakovskiyN15}, we conduct our experiments on the same families of test instances. Our experiments are carried out on a computer with 4GB RAM and a 3.06GHz Intel Dual Core processor, which is also the same as the machine used in the paper mentioned above. We compare the DP to the exact MIP (\textit{{exactMIP}}) and the branch-infer-and-bound approaches as well as the FPTAS to the approximate MIP (\textit{approxMIP}), as the former three are all exact approaches and the latter two are all approximations. Table~\ref{tab:smallresfptas} demonstrates the results for a route of 101 cities and various types of packing instances. For this particular family, we consider three types of instances: \textit{uncorrelated} (uncorr), \textit{uncorrelated with similar weights} (uncorr-s-w) and \textit{bounded strongly correlated} (b-s-corr), which are further distinguished by the different correlations between profits and weights. In combination with three different numbers of items and three settings of the capacity, we have 27 instances in total, as shown in the column called ``\textit{Instance}''. Similarly to the settings in~\cite{DBLP:journals/corr/PolyakovskiyN15}, every instance with ``\_01'' postfix has a relatively small capacity. We expect such instances to be potentially easy to solve by DP and FPTAS due to the nature of the algorithms. 
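As a point of reference for the running times discussed next, the dynamic program of Section~\ref{sec:dp} admits a compact implementation. The following simplified Python sketch is illustrative only (the experiments themselves use the Java implementations mentioned above, and the names below are ours): it keeps, for every reachable total weight, the best benefit found so far and applies the recurrence with the travel-cost adjustment $R\,d_{in}(\cdot)$, omitting the dominance pruning and the FPTAS rounding.
\begin{verbatim}
# Pseudo-polynomial DP for PWT (illustrative sketch; no dominance pruning).
# items: list of (city, profit, weight) sorted along the route, cities 1..n.
def pwt_dp(d, items, R, v_max, v_min, W):
    nu = (v_max - v_min) / W
    n = len(d)
    # dist_to_end[i] = distance from city i to the last city n+1
    dist_to_end = [0.0] * (n + 2)
    for i in range(n, 0, -1):
        dist_to_end[i] = dist_to_end[i + 1] + d[i - 1]
    # benefit of the empty selection: travel the whole route at v_max
    best = {0: -R * dist_to_end[1] / v_max}
    for city, profit, weight in items:
        new_best = dict(best)
        for k, val in best.items():
            k2 = k + weight
            if k2 > W:
                continue
            # extra cost of carrying `weight` more units from `city` onwards
            extra = R * dist_to_end[city] * (1.0 / (v_max - nu * k2)
                                             - 1.0 / (v_max - nu * k))
            cand = val + profit - extra
            if cand > new_best.get(k2, float("-inf")):
                new_best[k2] = cand
        best = new_best
    return max(best.values())
\end{verbatim}
Rounding the stored benefits to multiples of $r=\epsilon L/m$ and keeping only the lightest entry per rounded value turns this sketch into the FPTAS of Algorithm~\ref{alg:fptas}.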
The \textit{OPT} column shows the optimum of each instance and the \textit{RT(s)} columns report the running time of each approach in seconds. To demonstrate the quality of an approximate approach applied to the instances, we use the ratio between the objective value obtained by the algorithm and the optimum obtained for an instance as the approximation rate $AR(\%) = 100 \times \frac{OBJ}{OPT}$. In the comparison of the exact approaches, our results show that the DP is much quicker than the exact MIP and BIB in solving the majority of the instances. The exact MIP is slower than the DP in every case, and in most cases the difference is significant. For example, it spends around $35$ minutes to solve the instance \textit{uncorr-s-w\_10} with $1,000$ items, whereas the DP needs only around $15$ seconds. On the other hand, the BIB slightly beats the DP on three instances, but the DP is superior on the remaining $24$ instances. An extreme case is \textit{b-s-corr\_01} with $1,000$ items, where the BIB spends over $1.5$ hours while the DP solves it in only $11$ seconds. Concerning the running time of the DP, it increases significantly only for instances having a large number of items with strongly correlated weights and profits, such as \textit{b-s-corr\_06} and \textit{b-s-corr\_10} with $1,000$ items. However, \textit{b-s-corr\_01} is an exception due to the limited capacity assigned to the instance. Our comparison between the approximation approaches shows that the FPTAS has significant advantages as well. The approximation ratios remain $100\%$ when $\epsilon$ equals $0.0001$ and $0.01$. Only when $\epsilon$ is set to $0.25$ does the FPTAS start to output results with accuracies similar to those of \textit{approxMIP}. With regard to the performance, the FPTAS takes less running time than \textit{approxMIP} on the majority of the instances regardless of the setting of $\epsilon$. As an extreme case, \textit{approxMIP} requires hours to solve the \textit{uncorr-s-w\_01} instance with $1,000$ items, but the FPTAS takes less than a second. However, \textit{approxMIP} performs much better on \textit{b-s-corr\_06} and \textit{b-s-corr\_10} with $1,000$ items. This indicates that the underlying factors that make instances hard to solve for the approximate MIP and for the FPTAS are of a different nature. Understanding these factors better and exploiting them wisely should help to build a more powerful algorithm combining features of MIP and FPTAS. In our second experiment, we use test instances which are slightly different from those in the benchmark used in~\cite{DBLP:journals/corr/PolyakovskiyN15}. This is motivated by our finding that relaxing $\epsilon$ from $0.0001$ to $0.75$ improves the performance of the FPTAS by around $50\%$ for the b-s-corr instances, while not degrading the accuracy noticeably. At the same time, there is no significant improvement for other instances. This is surprising, as it shows that a performance improvement can be easily achieved on the complex instances. Therefore, we study how the FPTAS performs if the instances are more complicated. The idea is to use instances with large weights, which are known to be difficult for dynamic programming based approaches for the classical knapsack problem. We follow the same approach to creating TTP instances as proposed in~\cite{Polyakovskiy2014TTP} and generate the knapsack component of the problem as discussed in~\cite{Pisinger20052271}.
Specifically, we extend the range from which potential profits and weights are generated from $[1, 10^3]$ to $[1, 10^7]$ and focus on \textit{uncorrelated} (uncorr), \textit{uncorrelated with similar weights} (uncorr-s-w), and \textit{multiple strongly correlated} (m-s-corr) types of instances. Additionally, in the stage of assigning the items of a knapsack instance to particular cities of a given TSP tour, we sort the items in descending order of their profits; the second city obtains the $k$, $k\in\left\{1,5,10\right\}$, items of largest profit, the third city then obtains the next $k$ items, and so on. We expect that such an assignment forces the algorithms to select items in the first cities of a route, making the instances more challenging for the DP and the FPTAS. In fact, these instances turn out to be harder and force us to switch to a cluster machine with 128GB RAM and 8~$\times$~(2.8GHz AMD 6-core) processors to carry out the second experiment. Table~\ref{tab:mediumresfptas} shows the results of running the DP and FPTAS on the instances with the large range of profits and weights. Generally speaking, we can observe that the instances are significantly harder to solve than those from the first experiment, that is, they take considerably more time. Similarly, the instances with a large number of items, larger capacity, and strong correlation between profits and weights are now hard for the DP as well. In contrast to the results of the previous experiment, the FPTAS performs much better on such instances when $\epsilon$ is relaxed. For example, its performance is improved by $95\%$ for the instance \textit{m-s-corr\_10} with $1,000$ items when $\epsilon$ is raised from $0.0001$ to $0.75$, while the approximation rate remains at $100\%$. \section{Conclusion}\label{sec:conclusion} Multi-component combinatorial optimisation problems play an important role in many real-world applications. We have examined the non-linear packing while traveling problem which results from the interactions in the TTP. We designed a dynamic programming algorithm that solves the problem in pseudo-polynomial time. Furthermore, we have shown that the original objective of the problem is hard to approximate and have given an FPTAS for optimising the amount that can be gained over the smallest possible travel cost. It should be noted that the FPTAS applies to a wider range of problems as our proof only assumed that the travel cost per unit distance, as a function of the weight $w$, is monotonically increasing and convex. Our experimental results on different types of knapsack instances show the advantage of the dynamic program over the previous approaches based on mixed integer programming and branch-infer-and-bound concepts. Furthermore, we have demonstrated the effectiveness of the FPTAS on instances with a large weight and profit range. \end{document}
\begin{document} \title{The conjugacy problem in automaton groups \\ is not solvable} \author{Zoran {\v{S}}uni\'c} \address{Dept. of Mathematics, Texas A\&M Univ. MS-3368, College Station, TX 77843-3368, USA} \author{Enric Ventura} \address{Dept. Mat. Apl. III, Universitat Polit\`ecnica de Catalunya, Manresa, Barcelona, Catalunya} \begin{abstract} (Free-abelian)-by-free, self-similar groups generated by finite self-similar sets of tree automorphisms and having unsolvable conjugacy problem are constructed. Along the way, finitely generated, orbit undecidable, free subgroups of $\mathsf{GL}_d({\mathbb Z})$, for $d \geqslant 6$, and $\mathsf{Aut}(F_d)$, for $d \geqslant 5$, are constructed as well. \end{abstract} \keywords{automaton groups; (free abelian)-by-free groups; conjugacy problem; orbit decidability} \def\textup{2010} Mathematics Subject Classification{\textup{2010} Mathematics Subject Classification} \subjclass[2010]{20E8; 20F10} \maketitle \section{Introduction} The goal of this paper is to prove the following result. \begin{theorem}\label{t:main} There exist automaton groups with unsolvable conjugacy problem. \end{theorem} The question on solvability of the conjugacy problem was raised in 2000 for the class of self-similar groups generated by finite self-similar sets (automaton groups) by Grigorchuk, Nekrashevych and Sushchanski{\u\i}~\cite{grigorchuk-n-s:automata}. Note that the word problem is solvable for all groups in this class by a rather straightforward algorithm running in exponential time. Moreover, for an important subclass consisting of finitely generated, contracting groups the word problem is solvable in polynomial time. Given that our examples contain free nonabelian subgroups and that contracting groups, as well as the groups $\mathsf{Pol}(n)$, $n \geq0$, do not contain such subgroups (see~\cite{nekrashevych:free-subgroups} and~\cite{sidki:pol}), the following question remains open. \begin{question}\label{q:open} Is the conjugacy problem solvable in (i) all finitely generated, contracting, self-similar groups? (ii) the class of automaton groups in $\mathsf{Pol}(n)$, for $n \geqslant 0$? \end{question} There are many positive results on the solvability of the conjugacy problem in automaton groups close to the first Grigorchuk group~\cite{grigorchuk:burnside} and the Gupta-Sidki examples~\cite{gupta-s:burnside}. The conjugacy problem was solved for the first Grigorchuk group independently by Leonov~\cite{leonov:conjugacy} and Rozhkov~\cite{rozhkov:conjugacy}, and for the Gupta-Sidki examples by Wilson and Zaleskii~\cite{wilson-z:conjugacy}. Grigorchuk and Wilson~\cite{grigorchuk-w:conjugacy} showed that the problem is solvable in all subgroups of finite index in the first Grigorchuk group. In fact, the results in~\cite{leonov:conjugacy,wilson-z:conjugacy,grigorchuk-w:conjugacy} apply to certain classes of groups that include the well known examples we explicitly mentioned. In a recent work Bondarenko, Bondarenko, Sidki and Zapata~\cite{bondarenko-b-s-z:conjugacy} showed that the conjugacy problem is solvable in $\mathsf{Pol}(0)$. Lysenok, Myasnikov, and Ushakov provided the first, and so far the only, significant result on the complexity of the conjugacy problem in automaton groups by providing a polynomial time solution for the first Grigorchuk group~\cite{lysenok-m-u:grigorchuk-cp}. The strategy for our proof of Theorem~\ref{t:main} is as follows. First, we observe the following consequence of a result by Bogopolski, Martino and Ventura~\cite{bogopolski-m-v:cp}. 
\begin{proposition}\label{gamma} Let $H$ be a finitely generated group, and $\Gamma$ a finitely generated subgroup of $\mathsf{Aut} (H)$. If $\Gamma \leqslant \mathsf{Aut} (H)$ is orbit undecidable then $H\rtimes \Gamma$ has unsolvable conjugacy problem. \end{proposition} Since, for $d\geqslant 4$, examples of finitely generated orbit undecidable subgroups $\Gamma$ in $GL_d(\mathbb{Z})$ are provided in~\cite{bogopolski-m-v:cp}, we obtain the existence of groups of the form $\mathbb{Z}^d \rtimes \Gamma$ with unsolvable conjugacy problem. Finally, using techniques of Brunner and Sidki~\cite{brunner-s:glnz}, we prove the following result, which implies Theorem~\ref{t:main}. \begin{theorem}\label{cor} Let $\Gamma$ be an arbitrary finitely generated subgroup of $GL_d(\mathbb{Z})$. Then, $\mathbb{Z}^d \rtimes \Gamma$ is an automaton group. \end{theorem} The examples of finitely generated orbit undecidable subgroups of $\mathsf{GL}_d({\mathbb Z})$, for $d\geqslant 4$ given in~\cite{bogopolski-m-v:cp} are based on Mikhailova's construction and are not finitely presented. By modifying the construction in~\cite{bogopolski-m-v:cp}, at the cost of increasing the dimension by 2, we determine finitely generated, orbit undecidable, free subgroups of $GL_d(\mathbb{Z})$, for $d\geqslant 6$. Note that, by~\cite[Proposition 6.9.]{bogopolski-m-v:cp} and the Tits Alternative~\cite{tits:alternative}, every orbit undecidable subgroup $\Gamma$ of $\mathsf{GL}_d({\mathbb Z})$ contains free nonabelian subgroups. By using the same technique (see Proposition~\ref{p:general-free}) we also construct finitely generated, orbit undecidable, free subgroups of $\mathsf{Aut}(F_d)$, for $d \geq 5$, answering Question~6 raised in~\cite{bogopolski-m-v:cp}. \begin{proposition}\label{p:free-orbit-undecidable} (a) For $d\geqslant 6$, the group $\mathsf{GL}_d({\mathbb Z})$ contains finitely generated, orbit undecidable, free subgroups. (b) For $d\geqslant 5$, the group $\mathsf{Aut}(F_d)$ contains finitely generated, orbit undecidable, free subgroups. \end{proposition} This allows us to deduce the following strengthened version of Theorem~\ref{t:main}. \begin{theorem}\label{gros} For every $d\geqslant 6$, there exists a finitely presented group $G$ simultaneously satisfying the following three conditions: i) $G$ is an automaton group, ii) $G$ is ${\mathbb Z}^d$-by-(f.g.-free) (in fact, $G={\mathbb Z}^d \rtimes_\phi F_m$, with injective action $\phi$), iii) $G$ has unsolvable conjugacy problem. \end{theorem} \section{Orbit undecidability}\label{oud} The main result in~\cite{bogopolski-m-v:cp} can be stated in the following way. \begin{theorem}[Bogopolski, Martino, Ventura~\cite{bogopolski-m-v:cp}]\label{bmv} Let $G=H\rtimes F$ be a semidirect product (with $F$, $H$, and so $G$, finitely generated) such that (i) the conjugacy problem is solvable in $F$, (ii) for every $f \in F$, $\langle f \rangle$ has finite index in the centralizer $C_F(f)$ and there is an algorithm that, given $f$, calculates coset representatives for $\langle f \rangle$ in $C_F(f)$, (iii) the twisted conjugacy problem is solvable in $H$. \noindent Then the following are equivalent: (a) the conjugacy problem in $G$ is solvable, (b) the conjugacy problem in $G$ restricted to $H$ is solvable, (c) the action group $\{\lambda_g \mid g \in G\} \leqslant \mathsf{Aut}(H)$ is orbit decidable, where $\lambda_g$ denotes the right conjugation by $g$, restricted to $H$. 
\qed \end{theorem} The \emph{conjugacy problem in $G$ restricted to $H$} asks if, given two elements $u$ and $v$ in $H$, there exists an element $g$ in $G$ such that $u^g=v$. The \emph{orbit problem} for a subgroup $\Gamma$ of $\mathsf{Aut}(H)$ asks if, given $u$ and $v$ in $H$, there is an automorphism $\gamma$ in $\Gamma$ such that $\gamma(u)$ is conjugate to $v$ in $H$; we say that $\Gamma$ is \emph{orbit decidable (resp. undecidable)} if the orbit problem for $\Gamma$ is solvable (resp. unsolvable). Finally, the \emph{twisted conjugacy problem} for a group $H$ asks if, given an automorphism $\varphi \in \mathsf{Aut} (H)$ and two elements $u,v\in H$, there is $x\in H$ such that $v=\varphi(x)^{-1}ux$. The implications $(a) \Rightarrow (b) \Leftrightarrow (c)$ in Theorem~\ref{bmv} are clear from the definitions, and do not require most of the hypotheses (as indicated in~\cite{bogopolski-m-v:cp}, the only relevant implication is $(c) \Rightarrow (a)$). Proposition~\ref{gamma}, which is needed for our purposes, is an obvious corollary. \begin{proposition}\label{p:general-free} Let $G$ be a group and $H$ and $K$ be subgroups of $G$ such that (i) $G = \langle H, K \rangle$, (ii) the free group $F_2$ of rank 2 is a subgroup of $\mathsf{Aut}(K)$, (iii) there exists a finitely generated orbit undecidable subgroup $\Gamma \leqslant \mathsf{Aut}(H)$, (iv) every pair of automorphisms $\alpha \in \mathsf{Aut}(H)$ and $\beta \in \mathsf{Aut}(K)$ has a (necessarily unique) common extension to an automorphism of $G$, and (v) two elements of $H$ are conjugate in $G$ if and only if they are conjugate in $H$. Then, $\mathsf{Aut}(G)$ contains finitely generated, orbit undecidable, free subgroups. \end{proposition} \begin{proof} Let $\Gamma = \langle g_1,\ldots,g_m \rangle$ be an orbit undecidable subgroup of $\mathsf{Aut}(H)$ and $F=\langle f_1,\dots,f_m \rangle$ be a free subgroup of rank $m$ of $\mathsf{Aut}(K)$. For, $i=1,\dots,m$, let $s_i$ be the common extension of $g_i$ and $f_i$ to an automorphism of $G$ and let $\Gamma' = \langle s_1,\dots,s_m \rangle \leqslant \mathsf{Aut}(G)$. Since $F$ is free of rank $m$, so is $\Gamma'$. Moreover, $\Gamma'$ is orbit undecidable subgroup of $\mathsf{Aut}(G)$. Indeed, for $u,v \in H$, \begin{align*} (\exists \gamma' \in \Gamma')(\exists t' \in G)~\gamma'(u)=v^{t'} &\iff (\exists \gamma \in \Gamma)(\exists t' \in G)~\gamma(u)=v^{t'} \iff \\ &\iff (\exists \gamma \in \Gamma)(\exists t \in H)~\gamma(u)=v^{t} \end{align*} The second equivalence follows from (v), since $\gamma(u), v \in H$. The first comes from the construction, since, for every group word $w(x_1,\dots,x_m)$, the automorphisms $\gamma'=w(s_1,\dots,s_m) \in \Gamma'$ and $\gamma=w(g_1,\dots,g_m) \in \Gamma$ agree on $H$. Therefore, the orbit problem for the instance $u,\, v\in H$ with respect to $\Gamma \leqslant \mathsf{Aut}(H)$ is equivalent to the orbit problem for the instance $u,v\in H\leqslant G$ with respect to $\Gamma' \leqslant \mathsf{Aut}(G)$, showing that an algorithm that would solve the orbit problem for $\Gamma'$ could be used to solve the orbit problem for $\Gamma$ as well. Thus $\Gamma'$ is orbit undecidable. \end{proof} \begin{proof}[Proof of Proposition~\ref{p:free-orbit-undecidable}] (a) For $d \geq 6$, let $G={\mathbb Z}^d$, $H={\mathbb Z}^{d-2}$, $K={\mathbb Z}^2$, and $G=H \oplus K$. All requirements of Proposition~\ref{p:general-free} are satisfied. 
In particular, (iii) holds by~\cite[Proposition~7.5]{bogopolski-m-v:cp}, and (v) holds since conjugacy is the same as equality in both $H$ and $G$. (b) For $d \geq 5$, let $G=F_d$, $H=F_{d-2}$, $K=F_2$, and $G=H \ast K$. All requirements of Proposition~\ref{p:general-free} are satisfied. In particular, (iii) holds by~\cite[Subsection~7.2]{bogopolski-m-v:cp}, and (v) holds since the free factor $H$ is malnormal in $G$. \end{proof} \section{Self-similar groups and automaton groups}\label{ssg} Let $X$ be a finite alphabet on $k$ letters. The set $X^*$ of words over $X$ has the structure of a rooted $k$-ary tree in which the empty word is the root and each vertex $u$ has $k$ children, namely the vertices $ux$, for $x$ in $X$. Every tree automorphism fixes the root and permutes the words of the same length (constituting the levels of the rooted tree) while preserving the tree structure. Let $g$ be a tree automorphism. The action of $g$ on $X^*$ can be decomposed as follows. There is a permutation $\pi_g$ of $X$, called the \emph{root permutation} of $g$, determined by the permutation that $g$ induces on the subtrees below the root (the action of $g$ on the first letter in every word), and tree automorphisms $g|_x$, for $x$ in $X$, called the \emph{sections} of $g$, determined by the action of $g$ within these subtrees (the action of $g$ on the rest of the word after the first letter). Both the root permutation and the sections are uniquely determined by the equality \begin{equation}\label{e:gxw} g(xw) = \pi_g(x)g|_x(w), \end{equation} for $x$ in $X$ and $w$ in $X^*$. A group or a set of tree automorphisms is \emph{self-similar} if it contains all sections of all of its elements. A \emph{finite automaton} is a finite self-similar set. A group $G({\mathcal A})$ of tree automorphisms generated by a finite self-similar set ${\mathcal A}$ is itself self-similar and is called an \emph{automaton group} (realized or generated by the automaton ${\mathcal A}$). The elements of the automaton are often referred to as \emph{states} of the automaton, and the automaton is said to operate on the alphabet $X$. The \emph{boundary} of the tree $X^*$ is the set $X^\omega$ of right infinite words $x_1x_2x_3\cdots$. The tree structure induces a metric on $X^\omega$ which induces the Cantor set topology. The metric is given by $d(u,v) = \frac{1}{2^{|u \wedge v|}}$, for $u \neq v$, where $|u \wedge v|$ denotes the length of the longest common prefix of $u$ and $v$. The group of isometries of the boundary $X^\omega$ and the group of tree automorphisms of $X^*$ are canonically isomorphic. Every isometry induces a tree automorphism by restricting the action to finite prefixes, and every tree automorphism induces an isometry on the boundary through an obvious limiting process. The decomposition formula~(\ref{e:gxw}) for the action of tree automorphisms is valid for boundary isometries as well ($w$ is any right infinite word in this case). \section{Automaton groups with unsolvable conjugacy problem}\label{feina} Let ${\mathcal M}=\{ M_1,\ldots,M_m \}$ be a set of integer $d\times d$ matrices with non-zero determinants. Let $n\geqslant 2$ be relatively prime to all of these determinants (thus, each $M_i$ is invertible over the ring ${\mathbb Z}_n$ of $n$-adic integers).
For an integer matrix $M$ and an arbitrary vector $\mathbf{v}$ with integer coordinates, consider the invertible affine transformation $M_\mathbf{v} \colon {\mathbb Z}_n^d \to {\mathbb Z}_n^d$ given by $M_\mathbf{v}(\mathbf{u})=\mathbf{v}+M\mathbf{u}$, and let \[ G_{{\mathcal M}, n} =\langle \{M_\mathbf{v} \mid M \in {\mathcal M}, \ \mathbf{v} \in {\mathbb Z}^d\} \rangle \] be the subgroup of $\mbox{Aff}_d({\mathbb Z}_n)$ generated by all the transformations of the form $M_\mathbf{v}$, for $M\in {\mathcal M}$ and $\mathbf{v}\in {\mathbb Z}^d$. Denote by $\tau_\mathbf{v}$ the translation ${\mathbb Z}_n^d \to {\mathbb Z}_n^d$, $\mathbf{u} \mapsto \mathbf{v}+\mathbf{u}$, and by $\mathbf{e}_i$ the $i$-th standard basis vector. Since $M_{\mathbf{v}} =\tau_{\mathbf{v}} M_{\mathbf{0}}$, we have \begin{equation}\label{e:generators} G_{{\mathcal M},n} =\langle \{M_\mathbf{0} \mid M \in {\mathcal M}\} \cup \{\tau_{\mathbf{e}_i} \mid i=1,\ldots,d\} \rangle \leqslant \mbox{Aff}_d({\mathbb Z}_n). \end{equation} \begin{lemma}\label{inv} If all matrices in ${\mathcal M}$ are invertible over ${\mathbb Z}$, then $G_{{\mathcal M},n} \cong {\mathbb Z}^d \rtimes \Gamma$, where $\Gamma=\langle {\mathcal M} \rangle \leqslant \mathsf{GL}_d({\mathbb Z})$; in particular, $G_{{\mathcal M},n}$ does not depend on $n$. \end{lemma} \begin{proof} If $M$ is an invertible matrix over ${\mathbb Z}$, and $\mathbf{v}\in {\mathbb Z}^d$, then $M_{\mathbf{v}}\in \mbox{Aff}_d({\mathbb Z}_n)$ restricts to a bijective affine transformation $M_{\mathbf{v}}\in \mbox{Aff}_d({\mathbb Z})$. Hence, we can view $G_{{\mathcal M},n}$ as a subgroup of $\mbox{Aff}_d({\mathbb Z})$ and, in particular, it is independent of $n$; let us denote it by $G_{{\mathcal M}}$. Clearly, the subgroup of translations $T=\langle \tau_{\mathbf{e}_1}, \ldots, \tau_{\mathbf{e}_d}\rangle$ of $G_{{\mathcal M}}$ is free abelian of rank $d$, $T\cong {\mathbb Z}^d$. Since each of the transformations $M_\mathbf{0}$, for $M \in {\mathcal M}$, acts on ${\mathbb Z}^d$ by multiplication by $M$, the subgroup $\langle M_\mathbf{0} \mid M \in {\mathcal M} \rangle$ of $G_{\mathcal M}$ is isomorphic to $\Gamma$ and may be safely identified with it. The subgroups $T$ and $\Gamma$ intersect trivially, since every nontrivial element of $T$ moves the zero vector in ${\mathbb Z}^d$, while no element of $\Gamma$ does. For $M \in {\mathcal M} \cup {\mathcal M}^{-1}$ (where ${\mathcal M}^{-1}$ is the set of integer matrices inverse to the matrices in ${\mathcal M}$), $j=1,\ldots,d$, and $\mathbf{u} \in {\mathbb Z}^d$, \begin{align*} M_\mathbf{0} \tau_{\mathbf{e}_j} (M_\mathbf{0})^{-1}(\mathbf{u}) &= M_\mathbf{0} \tau_{\mathbf{e}_j} (M^{-1}\mathbf{u}) = M_\mathbf{0}(\mathbf{e}_j + M^{-1}\mathbf{u}) = M\mathbf{e}_j + \mathbf{u} \\ &= \tau_{\mathbf{e}_1}^{m_{1,j}}\tau_{\mathbf{e}_2}^{m_{2,j}}\cdots \tau_{\mathbf{e}_d}^{m_{d,j}}(\mathbf{u}), \end{align*} where $m_{i,j}$ is the $(i,j)$-entry of $M$. Therefore, for $M \in {\mathcal M} \cup {\mathcal M}^{-1}$ and $j=1,\dots,d$, \begin{equation}\label{e:relation} M_\mathbf{0} \tau_{\mathbf{e}_j} (M_\mathbf{0})^{-1} = \tau_{\mathbf{e}_1}^{m_{1,j}}\tau_{\mathbf{e}_2}^{m_{2,j}}\cdots \tau_{\mathbf{e}_d}^{m_{d,j}}. \end{equation} It follows that the subgroup $T\cong {\mathbb Z}^d$ is normal in $G_{\mathcal M}$ and $G_{\mathcal M} \cong {\mathbb Z}^d \rtimes \Gamma$. \end{proof} \begin{remark} The equality~(\ref{e:relation}) remains valid (over ${\mathbb Z}_n$) for any integer matrix with non-zero determinant relatively prime to $n$.
When ${\mathcal M}=\{M\}$ consists of a single $d\times d$ integer matrix $M=(m_{i,j})$ of infinite order and determinant $k\neq 0$ relatively prime to $n$, multiplication by $M$ embeds ${\mathbb Z}^d$ as a subgroup of index $|k|$ in ${\mathbb Z}^d$, and $G_{{\mathcal M},n}$ is the ascending HNN extension of ${\mathbb Z}^d$ by a single stable letter (see~\cite{bartholdi-s:bs}), i.e., \[ G_{{\mathcal M},n} \cong \langle \ a_1,\ldots,a_d, t \mid [a_i,a_j]=1,~ta_jt^{-1} = a_1^{m_{1,j}}\cdots a_d^{m_{d,j}},~\mbox{for}~1\leqslant i,j\leqslant d \ \rangle. \] \end{remark} The goal now is to show that the groups $G_{{\mathcal M},n}$ constructed in this way can all be realized by finite automata, and are therefore automaton groups. The elements of the ring ${\mathbb Z}_n$ may be (uniquely) represented as right infinite words over the alphabet $Y_n= \{0,\ldots,n-1\}$, through the correspondence $$ y_1y_2y_3 \cdots \quad \longleftrightarrow \quad y_1 + y_2\cdot n + y_3\cdot n^2 + \cdots, $$ while the elements of the free $d$-dimensional module ${\mathbb Z}_n^d$, viewed as column vectors, may be (uniquely) represented as right infinite words over the alphabet $X_n =Y_n^d=\{ (y_1,\ldots,y_d)^T \mid y_i \in Y_n, \ i=1,\ldots,d\}$ consisting of column vectors with entries in $Y_n$. Note that $|Y_n|=n$ and $|X_n|=n^d$. For a vector $\mathbf{v}$ with integer coordinates define $\mbox{Mod}(\mathbf{v})$ and $\mbox{Div}(\mathbf{v})$ to be the vectors whose coordinates are the remainders and the quotients, respectively, obtained by dividing the coordinates of $\mathbf{v}$ by $n$, i.e., the unique integer vectors satisfying $\mathbf{v}=\mbox{Mod}(\mathbf{v})+n\mbox{Div}(\mathbf{v})$, with $\mbox{Mod}(\mathbf{v})\in X_n$. \begin{lemma} For every vector $\mathbf{v}$ with integer coordinates, and every element $\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\ldots$ in the free module ${\mathbb Z}_n^d$ (where $\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3,\ldots$ are symbols in $X_n$), \begin{equation}\label{e:Mv} M_\mathbf{v}(\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\cdots) = \mbox{Mod}(\mathbf{v}+M\mathbf{x}_1) + nM_{\mbox{Div}(\mathbf{v}+M\mathbf{x}_1)}(\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots). \end{equation} \end{lemma} \begin{proof} Indeed, \begin{align*} M_\mathbf{v}(\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\cdots) &= \mathbf{v}+M\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\cdots = \mathbf{v}+M(\mathbf{x}_1 + n(\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots)) \\ &= \mathbf{v}+M\mathbf{x}_1 + nM\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots \\ &= \mbox{Mod}(\mathbf{v}+M\mathbf{x}_1) + n\mbox{Div}(\mathbf{v}+M\mathbf{x}_1) + nM\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots \\ &= \mbox{Mod}(\mathbf{v}+M\mathbf{x}_1) + n(\mbox{Div}(\mathbf{v}+M\mathbf{x}_1) + M\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots) \\ &= \mbox{Mod}(\mathbf{v}+M\mathbf{x}_1) + nM_{\mbox{Div}(\mathbf{v} + M\mathbf{x}_1)}(\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots). \end{align*} \end{proof} Let $||M||$ be the maximal absolute row sum norm of $M$, i.e. $||M||=\max_i \sum_{j=1}^d |m_{i,j}|$, where $m_{i,j}$ is the $(i,j)$-entry of $M$. Define $V_M$ to be the set of integer vectors $\mathbf{v}$ for which each coordinate is between $-||M||$ and $||M||-1$, inclusive. Note that $V_M$ is finite and contains $(2||M||)^d$ vectors.
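For illustration only, here is a short computational sketch of the recursion~(\ref{e:Mv}); it is not part of the original text, and the function names, the sample matrix and the data conventions are our own choices. The sketch applies $M_\mathbf{v}$ to the first few base-$n$ digit vectors of an element of ${\mathbb Z}_n^d$ and compares the result with the direct affine computation modulo $n^3$.

\begin{verbatim}
# Minimal illustrative sketch (not from the paper): the action of the affine
# map M_v on a base-n digit expansion of a vector, following recursion (e:Mv).
# All names and the sample matrix below are our own choices.
import numpy as np

def mod_div(v, n):
    """Split an integer vector v as v = Mod(v) + n*Div(v), Mod(v) in {0,...,n-1}^d."""
    v = np.asarray(v, dtype=int)
    mod = v % n                 # componentwise remainder in {0,...,n-1}
    div = (v - mod) // n        # componentwise quotient (may be negative)
    return mod, div

def act(M, v, word, n):
    """Apply M_v(u) = v + M*u to u given by its first len(word) base-n digits.

    word: list of length-d integer vectors with entries in {0,...,n-1}.
    Returns the corresponding digits of M_v(u) and the final carried vector.
    """
    out, state = [], np.asarray(v, dtype=int)
    for x in word:
        y, state = mod_div(state + M @ np.asarray(x, dtype=int), n)
        out.append(y)           # output digit = Mod(state + M*x), as in (e:Mv)
    return out, state

# Tiny check in dimension d = 2 with n = 3:
M = np.array([[1, 1], [0, 1]])  # an invertible integer matrix
v = np.array([1, -2])
word = [np.array([2, 0]), np.array([1, 1]), np.array([0, 2])]
u = word[0] + 3 * word[1] + 9 * word[2]               # u = x1 + 3*x2 + 9*x3
digits, _ = act(M, v, word, 3)
image = digits[0] + 3 * digits[1] + 9 * digits[2]
assert np.array_equal(image % 27, (v + M @ u) % 27)   # agrees with v + M*u mod 3^3
\end{verbatim}

The integer vector carried along by the loop plays the role of the state label $\mathbf{v}$ in the definition of the automaton ${\mathcal A}_{M,n}$ given next.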
\begin{definition} For an integer matrix $M$, define an automaton ${\mathcal A}_{M,n}$ operating on the alphabet $X_n$ as follows: the set of states is $S_{M,n}=\{m_{\mathbf{v}} \mid \mathbf{v} \in V_M \}$, and the root permutations and the sections are, for $\mathbf{x}$ in $X_n$, defined by \begin{equation}\label{e:mv-root-section} m_\mathbf{v} (\mathbf{x}) = \mbox{Mod}(\mathbf{v}+M\mathbf{x}) \qquad \text{and} \qquad m_\mathbf{v}|_{\mathbf{x}} = m_{\mbox{Div}(\mathbf{v}+M\mathbf{x})}. \end{equation} \end{definition} The automaton ${\mathcal A}_{M,n}$ is well defined (it is easy to show that, for $\mathbf{v}\in V_M$ and $\mathbf{x}\in X_n$, the entries of the vector $\mathbf{v}+M\mathbf{x}$ are bounded between $-||M||n$ and $||M||n-1$, and hence $\mbox{Div}(\mathbf{v}+M\mathbf{x}) \in V_M$). \begin{lemma}\label{l:mvMv} For every state $m_{\mathbf{v}}$ of the automaton ${\mathcal A}_{M,n}$, and every element $\mathbf{u}=\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\cdots$ of the free module ${\mathbb Z}_n^d$ (i.e. every right infinite word over $X_n$), \[ m_{\mathbf{v}}(\mathbf{u}) = M_\mathbf{v} (\mathbf{u}). \] \end{lemma} \begin{proof} Follows directly from the definition of the root permutations and the sections of $m_{\mathbf{v}}$ in~(\ref{e:mv-root-section}) and equality~(\ref{e:Mv}) describing the action of $M_{\mathbf{v}}$. \end{proof} \begin{definition} Let ${\mathcal A}_{{\mathcal M},n}$ be the automaton operating on the alphabet $X_n$ and having $2^d\sum_{i=1}^m ||M_i||^d$ states obtained by taking the (disjoint) union of the automata ${\mathcal A}_{M_1,n},\ldots,{\mathcal A}_{M_m,n}$. \end{definition} \begin{proposition}\label{p:GM} The group $G_{{\mathcal M},n}$ can be realized by a finite automaton acting on an alphabet of size $n^d$ and having no more than $2^d\sum_{i=1}^m ||M_i||^d$ states, where $||M_i||$ is the maximal absolute row sum norm of $M_i$, for $i=1,\ldots,m$. \end{proposition} \begin{proof} The automaton ${\mathcal A}_{{\mathcal M},n}$ satisfies the required conditions, and generates precisely the group $G_{{\mathcal M},n}$. This follows directly from~(\ref{e:generators}) and Lemma~\ref{l:mvMv}, once it is observed that ${\mathcal A}_{{\mathcal M},n}$ has enough states to generate $G_{{\mathcal M},n}$. However, this is clear, since each of the automata ${\mathcal A}_{M,n}$, for $M\in {\mathcal M}$, has at least $d+1$ states, $m_\mathbf{0}, m_{-\mathbf{e}_1},\ldots,m_{-\mathbf{e}_d}$, and $m_\mathbf{0}(m_{-\mathbf{e}_j})^{-1}=\tau_{\mathbf{e}_j}$, for $j=1,\ldots,d$. \end{proof} Theorem~\ref{cor} is an immediate corollary of Lemma~\ref{inv} and Proposition~\ref{p:GM}. \begin{proof}[Proof of Theorem~\ref{gros}] Let $d\geqslant 6$ and let $F$ be an orbit undecidable, free subgroup of rank $m$ of $\mathsf{GL}_d({\mathbb Z})$ (such a group exists by Proposition~\ref{p:free-orbit-undecidable}). Let ${\mathcal M}=\{M_1,\ldots,M_m\}$ be a set of invertible integer $d\times d$ matrices generating $F=\langle {\mathcal M} \rangle$. Fix $n\geqslant 2$ and consider the group $G=G_{{\mathcal M},n}$. By Proposition~\ref{p:GM}, $G$ is generated by the finite automaton ${\mathcal A}_{{\mathcal M},n}$, so it is an automaton group. By Lemma~\ref{inv}, $G$ does not depend on $n$ and is in fact isomorphic to ${\mathbb Z}^d \rtimes F$ (since all matrices in ${\mathcal M}$ are invertible over ${\mathbb Z}$); so, it is a ${\mathbb Z}^d$-by-free group. Finally, by Proposition~\ref{gamma}, $G=G_{{\mathcal M},n}$ has unsolvable conjugacy problem.
\end{proof} Theorem~\ref{t:main} is an immediate corollary of Theorem~\ref{gros}. \subsection*{Acknowledgments} The authors express their gratitude to CRM at Universit\'e de Montr\'eal and the Organizers of the Thematic Semester in Geometric, Combinatorial, and Computational Group Theory for their hospitality and support in the fall of 2010, when this research was conducted. The first named author was partially supported by the NSF under DMS-0805932 and DMS-1105520. The second one was partially supported by the MEC (Spain) and the EFRD (EC) through projects MTM2008-01550 and PR2010-0321. \end{document}
\begin{document} \begin{abstract} We prove that for a polynomial diffeomorphism of ${\cc^2}$, the support of any invariant measure, apart from a few obvious cases, is contained in the closure of the set of saddle periodic points. \end{abstract} \title{A closing lemma for polynomial automorphisms of $\mathbb{C}^2$} \section{Introduction and results} Let $f$ be a polynomial diffeomorphism of ${\cc^2}$ with non-trivial dynamics. This hypothesis can be expressed in a variety of ways; for instance, it is equivalent to the positivity of topological entropy. The dynamics of such transformations has attracted a lot of attention in the past few decades (the reader can consult e.g. \cite{bedford} for basic facts and references). In this paper we make the standing assumption that $f$ is dissipative, i.e. that the (constant) Jacobian of $f$ satisfies $\abs{\jac(f)}<1$. We classically denote by $J^+$ the forward Julia set, which can be characterized as usual in terms of normal families, or by saying that $J^+ = \partial K^+$, where $K^+$ is the set of points with bounded forward orbits. Reasoning analogously for backward iteration gives the backward Julia set $J^- = \partial K^-$. Thus the 2-sided Julia set is naturally defined by $J = J^+\cap J^-$. Another interesting dynamically defined subset is the closure $J^*$ of the set of saddle periodic points (which is also the support of the unique entropy maximizing measure \cite{bls}). The inclusion $J^*\subset J$ is obvious. It is a major open question in this area of research whether the converse inclusion holds. Partial answers have been given in \cite{bs1, bs3, connex, lyubich peters 2, peters guerini}. Let $\nu$ be an ergodic $f$-invariant probability measure. If $\nu$ is hyperbolic, that is, its two Lyapunov exponents\footnote{Recall that in holomorphic dynamics, Lyapunov exponents always have even multiplicity.} are non-zero and of opposite sign, then the so-called Katok closing lemma \cite{katok} implies that $\supp(\nu) \subset J^*$. It may also be the case that $\nu$ is supported in the Fatou set: by the classification of recurrent Fatou components in \cite{bs2}, this happens if and only if $\nu$ is supported on an attracting or semi-Siegel periodic orbit, or is the Haar measure on a cycle of $k$ circles along which $f^k$ is conjugate to an irrational rotation (recall that $f$ is assumed dissipative). Here by semi-Siegel periodic orbit, we mean a linearizable periodic orbit with one attracting and one irrationally indifferent multiplier. The following ``ergodic closing lemma'' is the main result of this note: \begin{thm}\label{thm:main} Let $f$ be a dissipative polynomial diffeomorphism of ${\cc^2}$ with non-trivial dynamics, and $\nu$ be any invariant measure supported on $J$. Then $\supp(\nu)$ is contained in $J^*$. \end{thm} A consequence is that if $J\setminus J^*$ happens to be non-empty, then the dynamics on $J\setminus J^*$ is ``transient'' in a measure-theoretic sense. Indeed, if $x\in J$, we can form an invariant probability measure by taking a cluster limit of $\unsur{n}\sum_{k=0}^n \delta_{f^k(x)}$, and the theorem says that any such invariant measure will be concentrated on $J^*$. More generally the same argument implies: \begin{cor} Under the assumptions of the theorem, if $x\in J^+$, then $\omega(x)\cap J^*\neq \emptyset$. \end{cor} Here as usual $\omega(x)$ denotes the $\omega$-limit set of $x$. Note that for $x\in J^+$ it is obvious that $\omega(x)\subset J$.
It would be interesting to know whether the conclusion of the corollary can be replaced by the sharper one: $\omega(x)\subset J^*$. Theorem \ref{thm:main} can be formulated slightly more precisely as follows. \begin{thm}\label{thm:precised} Let $f$ be a dissipative polynomial diffeomorphism of ${\cc^2}$ with non-trivial dynamics, and $\nu$ be any ergodic invariant probability measure. Then one of the following situations holds: \begin{enumerate}[{(i)}] \item either $\nu$ is atomic and supported on an attracting or semi-Siegel cycle; \item or $\nu$ is the Haar measure on an invariant cycle of circles contained in a periodic rotation domain; \item or $\supp(\nu)\subset J^*$. \end{enumerate} \end{thm} Note that the additional ergodicity assumption on $\nu$ is harmless since any invariant measure is an integral of ergodic ones. The only new ingredient with respect to Theorem \ref{thm:main} is the fact that periodic orbits that do not fall in case {\em (i)}, that is, semi-parabolic or semi-Cremer periodic orbits, are contained in $J^*$. For semi-parabolic points this is certainly known to the experts although apparently not available in print. For semi-Cremer points this follows from the hedgehog construction of Firsova, Lyubich, Radu and Tanase (see \cite{lyubich radu tanase}). For completeness we give full proofs below. \noindent{\bf Acknowledgments.} Thanks to Sylvain Crovisier and Misha Lyubich for inspiring conversations. This work was motivated by the work of Crovisier and Pujals on strongly dissipative diffeomorphisms (see \cite[Thm 4]{crovisier pujals}) and by the work of Firsova, Lyubich, Radu and Tanase \cite{firsova lyubich radu tanase, lyubich radu tanase} on hedgehogs in higher dimensions (and the question whether hedgehogs for Hénon maps are contained in $J^*$). \section{Proofs} In this section we prove Theorem \ref{thm:precised} by dealing separately with the atomic and the non-atomic case. Theorem \ref{thm:main} follows immediately. Recall that $f$ denotes a dissipative polynomial diffeomorphism with non-trivial dynamics and $\nu$ an $f$-invariant ergodic probability measure. \subsection{Preliminaries} Using the theory of laminar currents, it was shown in \cite{bls} that any saddle periodic point belongs to $J^*$. More generally, if $p$ and $q$ are saddle points, then $J^* = \overline{ {W^u(p)\cap W^s(q)}}$ (see Theorems 9.6 and 9.9 in \cite{bls}). This result was generalized in \cite{tangencies} as follows. If $p$ is any saddle point and $X\subset W^u(p)$, we respectively denote by $\mathrm{Int}_i X$, $\mathrm{cl}_i X$, $\partial_i X$ the interior, closure and boundary of $X$ relative to the intrinsic topology of $W^u(p)$, that is the topology induced by the biholomorphism $W^u(p)\simeq \mathbb{C}$. \begin{lem}[{\cite[Lemma 5.1]{tangencies}}]\label{lem:homoclinic} Let $p$ be a saddle periodic point. Relative to the intrinsic topology in $W^u(p)$, $\partial_i(W^u(p)\cap K^+)$ is contained in the closure of the set of transverse homoclinic intersections. In particular $\partial_i(W^u(p)\cap K^+)\subset J^*$. \end{lem} Here is another statement along the same lines, which can easily be extracted from \cite{bls}. \begin{lem}\label{lem:entire} Let $\psi:\mathbb{C}\to {\cc^2}$ be an entire curve such that $\psi(\mathbb{C})\subset K^+$. Then for any saddle point $p$, $\psi(\mathbb{C})$ admits transverse intersections with $W^u(p)$. \end{lem} \begin{proof} This is identical to the first half of the proof of \cite[Lemma 5.4]{tangencies}.
\end{proof} We will repeatedly use the following alternative which follows from the combination of the two previous lemmas. Recall that a Fatou disk is a holomorphic disk along which the iterates $(f^n)_{n\geq 0}$ form a normal family. \begin{lem}\label{lem:inters} Let $\mathcal{E}$ be an entire curve contained in $K^+$, $p$ be any saddle point, and $t$ be a transverse intersection point between $\mathcal{E}$ and $W^u(p)$. Then either $t\in J^*$ or there is a Fatou disk $\Delta\subset W^u(p)$ containing $t$. \end{lem} \begin{proof} Indeed, either $t\in \partial_i(W^u(p)\cap K^+)$, so by Lemma \ref{lem:homoclinic}, $t\in J^*$, or $t\in \mathrm{Int}_i(W^u(p)\cap K^+)$. In the latter case, pick any open disk $\Delta\subset\mathrm{Int}_i(W^u(p)\cap K^+)$ containing $t$. Since $\Delta$ is contained in $K^+$, its forward iterates remain bounded so it is a Fatou disk. \end{proof} \subsection{The atomic case} Here we prove Theorem \ref{thm:precised} when $\nu$ is atomic. By ergodicity, this implies that $\nu$ is concentrated on a single periodic orbit. Replacing $f$ by an iterate we may assume that it is concentrated on a fixed point $p$. Since $f$ is dissipative there must be an attracting eigenvalue. A first possibility is that this fixed point is attracting or semi-Siegel. Then we are in case {\em (i)} and there is nothing to say. Otherwise $p$ is semi-parabolic or semi-Cremer and we must show that $p\in J^*$. In both cases, $p$ admits a strong stable manifold $W^{ss}(p)$ associated to the contracting eigenvalue, which is biholomorphic to $\mathbb{C}$ by a theorem of Poincaré. Let $q$ be a saddle periodic point and $t$ be a point of transverse intersection between $W^{ss}(p)$ and $W^u(q)$. If $t\in J^*$, then since $f^n(t)$ converges to $p$ as $n\to \infty$ we are done. Otherwise there is a non-trivial Fatou disk $\Delta$ transverse to $W^{ss}(p)$ at $t$. Let us show that this leads to a contradiction. In the semi-parabolic case, this is classical. A short argument goes as follows (compare \cite[Prop. 7.2]{ueda}). Replace $f$ by an iterate so that the neutral eigenvalue is equal to 1. Since $f$ has no curve of fixed points there are local coordinates $(x,y)$ near $p$ in which $p=(0,0)$, $W^{ss}_{\rm loc}(p)$ is the $y$-axis $\set{x=0}$ and $f$ takes the form $$(x,y) \longmapsto (x+ x^{k+1}+ {h.o.t.} , by+ {h.o.t.})\ , $$ with $\abs{b}<1$ (see \cite[\S 6]{ueda}). Then $f^n$ is of the form $$(x,y) \longmapsto (x+ n x^{k+1}+ {h.o.t.} , b^n y + {h.o.t.} ) \ , $$ so we see that $f^n$ cannot be normal along any disk transverse to the $y$-axis and we are done. In the semi-Cremer case we rely on the hedgehog theory of \cite{firsova lyubich radu tanase, lyubich radu tanase}. Let $\phi:\mathbb{D}\to \Delta$ be any parameterization, and fix local coordinates $(x,y)$ as before in which $p=(0,0)$, $W^{ss}_{\rm loc}(p)$ is the $y$-axis and $f$ takes the form $$(x,y) \longmapsto (e^{i2\pi\theta} x , by) + {h.o.t.}$$ Let $B$ be a small neighborhood of the origin in which the hedgehog $\mathcal{H}$ is well-defined. Reducing $\Delta$ and iterating a few times if necessary, we can assume that for all $k\geq 0$, $f^k(\Delta)\subset B$ and $\phi$ is of the form $s\mapsto (s, \phi_2(s))$. Then the first coordinate of $f^n\circ \phi$ is of the form $s\mapsto e^{i2n\pi\theta} s+ {h.o.t.}$. If $(n_j)_{j\geq 0}$ is a subsequence such that $f^{n_j}\circ \phi$ converges to some $\psi = (\psi_1, \psi_2)$, we get that $\psi_1(s) = \alpha s+ h.o.t.$, where $\abs{\alpha} = 1$.
Thus $\psi(\mathbb{D}) = \lim f^{n_j}(\Delta)$ is a non-trivial holomorphic disk $\Gamma$ through 0 that is smooth at the origin. For every $k\in \mathbb{Z}$ we have that $f^{k}(\Gamma) = \lim f^{n_j+k}(\Delta)\subset B$. Therefore by the local uniqueness of hedgehogs (see \cite[Thm 2.2]{lyubich radu tanase}) $\Gamma$ is contained in $\mathcal{H}$. It follows that $\mathcal{H}$ has non-empty relative interior in any local center manifold of $p$ and from \cite[Cor. D.1]{lyubich radu tanase} we infer that $p$ is semi-Siegel, which is the desired contradiction. \subsection{The non-atomic case} Assume now that $\nu$ is non-atomic. If $\nu$ gives positive mass to the Fatou set, then by ergodicity it must give full mass to a cycle of recurrent Fatou components. These were classified in \cite[\S 5]{bs2}: they are either attracting basins or rotation domains. Since $\nu$ is non-atomic we must be in the second situation. Replacing $f$ by $f^k$ we may assume that we are in a fixed Fatou component $\Omega$. Then $\Omega$ retracts onto some Riemann surface $S$ which is biholomorphic to a disk or an annulus and on which the dynamics is that of an irrational rotation. Furthermore all orbits in $\Omega$ converge to $S$. Thus $\nu$ must give full mass to $S$, and since $S$ is foliated by invariant circles, by ergodicity $\nu$ gives full mass to a single circle. Finally the unique ergodicity of irrational rotations implies that $\nu$ is the Haar measure. Therefore we are left with the case where $\supp(\nu)\subset J$, that is, we must prove Theorem \ref{thm:main}. Let us start by recalling some facts on the Oseledets-Pesin theory of our mappings. Since $\nu$ is ergodic, by the Oseledets theorem there exist an integer $1\leq k\leq 2$, a set $\mathcal R$ of full measure and, for $x \in \mathcal R$, a measurable splitting $T_x{\cc^2} = \bigoplus_{i=1}^k E_i(x)$ such that for every non-zero $v\in E_i(x)$, $\lim_{n\rightarrow\infty}\unsur{n}\log \norm{df^n_x(v)} = \chi_i$. Moreover, $\sum \chi_i = \log \abs{\jac(f)}<0$, and since $\nu$ is non-atomic the $\chi_i$ cannot all be negative (this is already part of Pesin's theory, see \cite[Prop. 2.3]{bls}). Thus $k=2$ and the exponents satisfy $\chi_1<0$ and $\chi_2\geq 0$ (up to relabelling). Without loss of generality, we may further assume that points in $\mathcal R$ satisfy the conclusion of the Birkhoff ergodic theorem for $\nu$. As observed in the introduction, the ergodic closing lemma is well-known when $\chi_2 >0$ so we need only consider the case $\chi_2=0$ (our proof actually treats both cases simultaneously). To ease notation, let us denote by $E^s(x)$ the stable Oseledets subspace and by $\chi^s$ the corresponding Lyapunov exponent ($\chi^s<0$). The Pesin stable manifold theorem (see e.g. \cite{fathi herman yoccoz} for details) asserts that there exists a measurable set $\mathcal{R}'\subset \mathcal R$ of full measure, and a family of holomorphic disks $W^s_\mathrm{loc} (x)$, tangent to $E^s(x)$ at $x$ for $x\in \mathcal R'$, and such that $f(W^s_\mathrm{loc} (x))\subset W^s_\mathrm{loc} (f(x))$.
In addition, for every $\varepsilon>0$ there exists a set $\mathcal{R}'_\varepsilon$ of measure $\nu(\mathcal{R}'_\varepsilon) \geq 1-\varepsilon$ and constants $r_\varepsilon$ and $C_\varepsilon$ such that for $x\in \mathcal{R}'_\varepsilon$, $W^s_\mathrm{loc} (x)$ contains a graph of slope at most 1 over a ball of radius $r_\varepsilon$ in $E^s(x)$ and for $y\in W^s_\mathrm{loc} (x)$, $d(f^n(y), f^n(x))\leq C_\varepsilon\exp ( (\chi^s+\varepsilon)n)$ for every $n\geq 0$. Furthermore, local stable manifolds vary continuously on $\mathcal{R}'_\varepsilon$. From this we can form global stable manifolds by declaring\footnote{If $\nu$ has a zero exponent, this may not be the stable manifold of $x$ in the usual sense, that is, there might exist points outside $W^s(x)$ whose orbits approach that of $x$.} that $W^s(x)$ is the increasing union of $f^{-n} (W^s_\mathrm{loc}(f^n(x)))$. Then it is a well-known fact that $W^s(x)$ is a.s. biholomorphically equivalent to $\mathbb{C}$ (see e.g. \cite[Prop 2.6]{bls}). Indeed, almost every point visits $\mathcal{R}'_\varepsilon$ infinitely many times, and from this we can view $W^s(x)$ as an increasing union of disks $D_j$ such that the modulus of the annuli $D_{j+1}\setminus D_j$ is uniformly bounded from below. Discarding a set of zero measure if necessary, it is no loss of generality to assume that $\bigcup_{\varepsilon>0} \mathcal R'_\varepsilon = \mathcal R'$ and that for every $x\in \mathcal{R}'$, $W^s(x)\simeq \mathbb{C}$. To prove the theorem we show that for every $\varepsilon>0$, $\mathcal R'_\varepsilon \subset J^*$. Fix $x\in \mathcal R'_\varepsilon$ and a saddle point $p$. By Lemma \ref{lem:entire} there is a transverse intersection $t$ between $W^s(x)$ and $W^u(p)$. Since $x$ is recurrent and $d(f^n(x), f^n(t))\to 0$, to prove that $x\in J^*$ it is enough to show that $t\in J^*$. We argue by contradiction, so assume that this is not the case. Then by Lemma \ref{lem:inters} there is a Fatou disk $\Delta$ through $t$ inside $W^u(p)$. Reducing $\Delta$ a little if necessary we may assume that $(f^n)_{n\geq 0}$ is a normal family in some neighborhood of $\overline \Delta$ in $W^u (p)$. Since $\nu$ is non-atomic and stable manifolds vary continuously for the $C^1$ topology on $\mathcal R'_\varepsilon$, there is a set $A$ of positive measure such that if $y\in A$, $W^s(y)$ admits a transverse intersection with $\Delta$. The iterates $f^n(\Delta)$ form a normal family and $f^n(\Delta)$ is exponentially close to $f^n(A)$. Let $(n_j)$ be some subsequence such that $f^{n_j}\rest\Delta$ converges. Then the limit map has either generic rank 0 or 1, that is, if $\phi : \mathbb{D}\to \Delta$ is a parameterization, $f^{n_j}\circ \phi$ converges uniformly on $\mathbb{D}$ to some limit map $\psi$, which is either constant or has generic rank 1. Set $\Gamma = \psi(\mathbb{D})$. Let $\nu'$ be a cluster value of the sequence of measures $(f^{n_j})_*(\nu\rest{A})$. Then $\nu'$ is a measure of mass $\nu(A)$, supported on $\overline \Gamma$, and $\nu'\leq \nu$. Since $\nu$ gives no mass to points, the rank 0 case is excluded, so $\Gamma$ is a (possibly singular) curve. Notice also that if $z$ is an interior point of $\Delta$ (i.e. $z= \phi(\zeta)$ for some $\zeta \in \mathbb{D}$), then $\lim f^{n_j}(z) = \psi(\zeta)$ is an interior point of $\Gamma$. This shows that $\nu'$ gives full mass to $\Gamma$ (i.e. it is not concentrated on its boundary). Then the proof of Theorem \ref{thm:main} is concluded by the following result of independent interest.
\begin{prop}\label{prop:subvariety} Let $f$ be a dissipative polynomial diffeomorphism of ${\cc^2}$ with non-trivial dynamics, and $\nu$ be an ergodic non-atomic invariant measure, giving positive measure to a subvariety. Then $\nu$ is the Haar measure on an invariant cycle of circles contained in a periodic rotation domain. In particular a non-atomic invariant measure supported on $J$ gives no mass to subvarieties. \end{prop} \begin{proof} Let $f$ and $\nu$ be as in the statement of the proposition, and $\Gamma_0$ be a subvariety such that $\nu(\Gamma_0)>0$. Since $\nu$ gives no mass to the singular points of $\Gamma_0$, by reducing $\Gamma_0$ a bit we may assume that $\Gamma_0$ is smooth. If $M$ is an integer such that $1/M < \nu(\Gamma_0)$, by the pigeonhole principle there exist $0\leq k < l \leq M$ such that $\nu(f^k(\Gamma_0)\cap f^l(\Gamma_0))>0$, so $f^k(\Gamma_0)$ and $f^l(\Gamma_0)$ intersect along a relatively open set. Thus replacing $f$ by some iterate $f^N$ (which does not change the Julia set) we can assume that $\Gamma_0\cap f(\Gamma_0)$ is relatively open in $\Gamma_0$ and $f(\Gamma_0)$. Let now $\Gamma = \bigcup_{k\in \mathbb{Z}} f^k(\Gamma_0)$. This is an invariant, injectively immersed Riemann surface with $\nu(\Gamma)>0$. Notice that replacing $f$ by $f^N$ may destroy the ergodicity of $\nu$, so if needed we replace $\nu$ by a component of its ergodic decomposition (under $f^N$) giving positive (hence full) mass to $\Gamma$. We claim that $\Gamma$ is biholomorphic to a domain of the form $\set{z\in \mathbb{C}, \ r <\abs{z} <R}$ for some $0\leq r< R \leq \infty$, that $f\rest{\Gamma}$ is conjugate to an irrational rotation, and that $\nu$ is the Haar measure on an invariant circle. This is {\em a priori} not enough to conclude the proof since at this stage nothing prevents such an invariant ``annulus'' from being contained in $J$. To prove the claim, note first that since $\Gamma$ is non-compact, it is either biholomorphic to $\mathbb{C}$ or $\mathbb{C}^*$, or it is a hyperbolic Riemann surface\footnote{In the situation of Theorem \ref{thm:main} we further know that $\Gamma\subset K$ so the first two cases are excluded.}. In addition $\Gamma$ possesses an automorphism $f$ with a non-atomic ergodic invariant measure. In the case of $\mathbb{C}$ and $\mathbb{C}^*$ the automorphisms are explicit ($z\mapsto az+b$, resp. $z\mapsto az^{\pm 1}$) and the only possibility is that $f$ is an irrational rotation. In the case of a hyperbolic Riemann surface, the list of possible dynamical systems is also well-known (see e.g. \cite[Thm 5.2]{milnor}) and again the only possibility is that $f$ is conjugate to an irrational rotation on a disk or an annulus. The fact that $\nu$ is a Haar measure follows as before. Let $\gamma$ be the circle supporting $\nu$, and $\widetilde \Gamma \subset \Gamma$ be a relatively compact invariant annulus containing $\gamma$ in its interior. To conclude the proof we must show that $\gamma$ is contained in the Fatou set. This will result from the following lemma, which will be proven afterwards. \begin{lem}\label{lem:dominated} $f$ admits a dominated splitting along $\widetilde \Gamma$. \end{lem} See \cite{sambarino} for generalities on the notion of dominated splitting. In our setting, since $\Gamma$ is an invariant complex submanifold and $f$ is dissipative, the dominated splitting actually implies a normal hyperbolicity property.
Indeed, observe first that $f\rest{\tilde \Gamma}$ is an isometry for the Poincaré metric $\mathrm{Poin}_\Gamma$ of $\Gamma$, which is equivalent to the induced Riemannian metric on $\widetilde \Gamma$. In particular $C^{-1} \leq \norm{df^n\rest{T\tilde\Gamma}}\leq C$ for some $C>0$ independent of $n$. Therefore a dominated splitting for $f\rest{\widetilde \Gamma}$ means that there is a continuous splitting of $T{\cc^2}$ along $\widetilde \Gamma$, $T_x{\cc^2} = T_x\Gamma\oplus V_x$, and for every $x\in \widetilde \Gamma$ and $n\geq 0$ we have $\norm{df_x^n\rest{V_x}} \leq C'\lambda^n$ for some $C'>0$ and $\lambda <1$. In other words, $f$ is normally contracting along $\widetilde\Gamma$. Thus in a neighborhood of $\gamma$, all orbits converge to $\Gamma$. This completes the proof of Proposition \ref{prop:subvariety}. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:dominated}] By the cone criterion for dominated splitting (see \cite[Thm 1.2]{newhouse cone} or \cite[Prop. 3.2] {sambarino}) it is enough to prove that for every $x\in \Gamma$ there exists a cone $\mathcal{C}_x$ about $T_x \Gamma$ in $T_x{\cc^2}$ such that the field of cones $(\mathcal{C}_x)_{x\in \widetilde \Gamma}$ is strictly contracted by the dynamics. For $x\in \Gamma$, choose a vector $e_x\in T_x\Gamma$ of unit norm relative to the Poincaré metric $\mathrm{Poin}_\Gamma$ and pick $f_x$ orthogonal to $e_x$ in $T_x{\cc^2}$ and such that $\det(e_x, f_x)=1$. Since $\mathrm{Poin}_\Gamma\rest{\widetilde\Gamma}$ is equivalent to the metric induced by the ambient Riemannian metric, there exists a constant $C$ such that for all $x\in \widetilde\Gamma$, $C^{-1}\leq \norm{e_x}\leq C$. Thus, the basis $(e_x, f_x)$ differs from an orthonormal basis by bounded multiplicative constants, i.e. there exists $C^{-1}\leq \alpha(x)\leq C$ such that $(\alpha(x)e_x, \alpha^{-1}(x)f_x)$ is orthonormal. Let us work in the frame $\set{(e_x, f_x), x\in \Gamma}$. Since $df\rest{\Gamma}$ is an isometry for the Poincaré metric and $f(\Gamma) = \Gamma$, the matrix expression of $df_x$ in this frame is of the form $$\begin{pmatrix} e^{i\theta(x)} & a(x) \\ 0 & e^{-i\theta(x)} J \end{pmatrix},$$ where $J$ is the (constant) Jacobian. Fix $\lambda$ such that $\abs{J}<\lambda < 1$, and for $\varepsilon>0$, let $\mathcal{C}_x^\varepsilon\subset T_x{\cc^2}$ be the cone defined by $$\mathcal{C}_x^\varepsilon = \set{ue_x+vf_x, \ \abs{v} \leq \varepsilon \abs{u}}.$$ Let also $A = \sup_{x\in \widetilde \Gamma}\abs{a(x)}$. Working in coordinates, if $(u,v)\in \mathcal{C}_x^\varepsilon$ then $$df_x(u,v) = :(u_1, v_1) = (e^{i\theta(x)} u + a(x) v , e^{-i\theta(x)}Jv),$$ hence $$ \abs{u_1} \geq \abs{u} - A \abs{v} \geq \abs{u} (1-A\varepsilon) \text{ and } \abs{v_1} = \abs{Jv} \leq \varepsilon \abs{J} \abs{u} $$ We see that if $\varepsilon$ is so small that $\abs{J} < \lambda(1-A\varepsilon)$, then for every $x\in \widetilde \Gamma$ we have that $\abs{v_1}\leq \lambda\varepsilon\abs{u_1}$, that is, $df_x(\mathcal C _x^\varepsilon)\subset \mathcal{C}_{f(x)}^{\lambda\varepsilon}$. The proof is complete. \end{proof} \end{document}
\begin{document} \title{On locally convex PL-manifolds \\and fast verification of convexity} \author{Konstantin Rybnikov \\ [email protected]\\ http://faculty.uml.edu/krybnikov} \date{\today} \maketitle \centerline{Short Version} \begin{abstract} We show that a PL-realization of a closed connected manifold of dimension $n-1$ in $\mathbb{R}^n\:(n \ge 3)$ is the boundary of a convex polyhedron if and only if the interior of each $(n-3)$-face has a point which has a neighborhood lying on the boundary of a convex $n$-dimensional body. This result is derived from a generalization of Van Heijenoort's theorem on locally convex manifolds to the spherical case. Our convexity criterion for PL-manifolds implies an easy polynomial-time algorithm for checking convexity of a given PL-surface in $\mathbb{R}^n$. \end{abstract} There are a number of theorems that infer global convexity from local convexity. The oldest one belongs to Jacques Hadamard (1897) and asserts that any compact smooth surface embedded in $\R^3$, with strictly positive Gaussian curvature, is the boundary of a convex body. Local convexity can be defined in many different ways (see van Heijenoort (1952) for a survey). We will use Bouligand's (1932) notion of local convexity. In this definition a surface $M$ in the affine space $\mathbb{R}^n$ is called locally convex at point ${\bf p}$ if ${\bf p}$ has a neighborhood which lies on the boundary of a convex $n$-dimensional body $K_{\bf p}$; if $K_{\bf p}\setminus\{{\bf p}\}$ lies in an open half-space defined by a hyperplane containing ${\bf p}$, $M$ is called strictly convex at ${\bf p}$. This paper is mainly devoted to local convexity of piecewise-linear (PL) surfaces, in particular, polytopes. A PL-surface in $\mathbb{R}^n$ is a pair $M=({\cal M},r)$, where ${\cal M}$ is a topological manifold with a fixed cell-partition and $r$ is a continuous {\it realization} map from ${\cal M}$ to $\mathbb{R}^n$ that satisfies the following conditions: \par \noindent 1) $r$ is a bijection on the closures of all cells of ${\cal M}$; \par \noindent 2) for each $k$-cell $C$ of $\cM$ the image $r(C)$ lies on a $k$-dimensional affine subspace of $\mathbb{R}^n$; $r(C)$ is then called a $k$-face of $M$. \par Thus, $r$ need not be an immersion, but its restriction to the closure of any cell of ${\cal M}$ must be. By a fixed cell-partition of ${\cal M}$ we mean that ${\cal M}$ has a structure of a CW-complex where all gluing mappings are homeomorphisms (such complexes are called \emph{regular} by J.H.C. Whitehead). \emph{All cells and faces are assumed to be open.} We will also call $M=({\cal M},r)$ a PL-realization of ${\cal M}$ in $\mathbb{R}^n$. \begin{definition} \emph{We say that $M=({\cal M},r)$ is the boundary of a convex body $P$ if $r$ is a homeomorphism between ${\cal M}$ and ${\partial P}$.} \end{definition} Hence, we exclude the cases when $r(\cM)$ coincides with the boundary of a convex set, but $r$ is not injective. Of course, the algorithmic and topological sides of this case are rather important for computational geometry and we will consider them in further works. Notice that for $n>2$ a closed $(n-1)$-manifold $\cM$ cannot be immersed into $\R^n$ by a non-injective map $r$ so that $r(\cM)$ is the boundary of a convex set, since any covering space of a simply connected manifold must be simply connected. However, such immersions are possible in the hyperbolic space $\HH^n$.
Our main theorem asserts that any closed PL-surface $M$ immersed in $\mathbb{R}^n \:(n \ge 3)$ with at least one point of strict convexity, and such that each $(n-3)$-cell has a point at which $M$ is locally convex, is convex. Notice that if the last condition holds for some point on an $(n-3)$-face, it holds for all points of this face. This theorem implies a test for global convexity of PL-surfaces: check local convexity on each of the $(n-3)$-faces. Notice that if for all $k$ and every $k$-face $F$ there is an $(n-k-1)$-sphere $\SSS$, lying in a complementary subspace and centered at some point of $F$, such that $\SSS \cap \Star(F)$ is a convex surface, then $r$ is an immersion. The algorithm implicitly checks if a given realization is an immersion, and reports ``not convex'' if it is not. The pseudo-code for the algorithm is given in this article. The complexity of this test depends on the way the surface is given as input data. Assuming we are given the coordinates of the vertices and the poset of faces of dimensions $n-1$, $n-2$, $n-3$ and $0$, OR the equations of the facets and the poset of faces of dimensions $n-1$, $n-2$, and $n-3$, the complexity of the algorithm for a general closed PL-manifold is $O(f_{n-3,n-2})=O(f_{n-3,n-1})$, where $f_{k,l}$ is the number of incidences between cells of dimension $k$ and $l$. If the vertices of the manifold are assumed to be in a sufficiently general position, then the dimension of the space does not affect the complexity at all. Another advantage of this algorithm is that it consists of $f_{n-3}$ independent subroutines corresponding to the $(n-3)$-faces, each with complexity linear in the number of $(n-1)$-cells incident to the corresponding $(n-3)$-face. The complexity of our algorithm is asymptotically equal to the complexity of algorithms suggested by Devillers et al (1998) and Mehlhorn et al (1999) for simplicial 2-dimensional surfaces; for $n>3$ our algorithm is asymptotically faster than theirs. These authors verify convexity not by checking it locally at $(n-3)$-faces, but by different, rather global methods (their notion of local convexity is, in fact, a global notion). Devillers et al (1998) and Mehlhorn et al (1999) make much stronger initial assumptions about the input, such as the orientability of the input surface; they also presume that for each $(n-1)$-face of the surface an external normal is given, and that the directions of these normals define an orientation of the surface. Then they call the surface locally convex at an $(n-2)$-face $F$ if the angle between the normals of two $(n-1)$-faces adjacent to $F$ is obtuse. Of course this notion of ``local'' convexity is not local. The main theorem is deduced from a direct generalization of van Heijenoort's theorem to the spherical case. Van Heijenoort's theorem asserts that an immersion in $\mathbb{R}^n$ of any closed connected manifold ${\cal M}$, which is locally convex at all points, strictly locally convex in at least one point, and is complete with respect to the metric induced by $r$, is the boundary of a convex $n$-dimensional set. Van Heijenoort (1952) noticed that for $n=3$ his theorem immediately follows from four theorems contained in Alexandrov (1948); however, according to van Heijenoort, Alexandrov's methods do not extend to $n>3$ and his approach is technically more complicated. We show that this theorem also holds for spheres, but not for the hyperbolic space.
While all notions of affine convexity can be obviously generalized to the hyperbolic space, there are two possible generalizations in the spherical case; neither of these generalizations is perfect. The main question is whether we want all geodesics joining points of a convex set $S$ to be contained in $S$, or at least one. In the first case subspheres are not convex, in the second case two convex sets can intersect in a non-convex set. The latter problem can be solved by requiring a convex set to be open, but this is not very convenient, since again, it excludes subspheres. We call a set $S$ in $\X^n$ convex if for any two points $\p,\q \in S$, there is some geodesic $[\p,\q] \subset S$. \begin{proposition} If the intersection $I$ of two convex sets in $\SSS^n$, $n>0$, is not convex, then $I$ contains two opposite points. \end{proposition} To have unified terminology we will call subspheres subspaces. Besides the algorithmic implications, our generalization implies that any $(n-3)$-simple PL-surface in $\mathbb{R}^n$ with convex facets is the boundary of a convex polyhedron. \section{van Heijenoort-Alexandrov's Theorem \\for Spaces of Constant Curvature} Throughout the paper $\X^n$ denotes $\R^n$, $\SSS^n$, or $\HH^n$. Following van Heijenoort's original proof, we will now show that his theorem holds in a somewhat stronger form for $\SSS^n$ for $n>2$. Van Heijenoort's theorem does not hold for unbounded surfaces in $\HH^n$. We will give three different kinds of counterexamples and pose a conjecture about simply connected locally compact embeddings of manifolds in $\HH^n$. Imagine a ``convex strip'' in 3D which is bent in the form of a handwritten $\varphi$ so that it intersects itself, but not locally. Consider the intersection of this strip with a ball of appropriate radius so that the self-intersection of the strip happens to be inside the ball, and the boundary of the strip outside. Regarding the interior of the ball as Klein's model of $\HH^3$ we conclude that the constructed surface is strictly locally convex at all points and has a complete metric induced by the immersion into the hyperbolic space. This gives an example of an \emph{immersion of a simply connected manifold} into $\HH^n$ which does not bound a convex body. Notice that in this counterexample the surface intersects itself. Consider the (affine) product of a non-convex quadrilateral, lying inside a unit sphere centered at the origin, and a line in $\R^n$. The result is a non-convex polyhedral cylindrical surface $P$. Pick a point $\p$ inside the sphere, but outside the cylinder, whose vertical projection on the cylinder is the affine center of one of its facets. Replace this facet $F$ of $P$ with the cone with apex $\p$ over $F \cap \{\x | \|\x\| \le 1\}$. The part of the resulting polyhedral surface that lies inside the sphere of unit radius is indeed a PL-surface \emph{embedded} in $\HH^n$ (in Klein's model). The surface is locally convex at every point and strictly convex at $\p$. However, it is not the boundary of a convex body. Notice that this surface is \emph{not simply connected}. Consider a locally convex spiral, embedded in the $yz$-plane, with two limiting sets: the circle $\{\p | x=0, y^2+z^2=1 \}$ and the origin. That is, this spiral coils around the origin and also around (from inside) the circle. Let $M$ be the double cone over this spiral with apexes at $(1,0,0)$ and $(-1,0,0)$, intersected with the unit ball $\{\x | \|\x\| < 1\}$.
This \emph{non-convex} surface is obviously \emph{simply connected, embedded,} locally convex at every point, and strictly convex at all points of the spiral. A locally compact realization $r$ of $\cM$ is a realization such that for any compact subset $C$ of $\X^n$, the intersection $C \cap r(\cM)$ is compact. The question remains: \begin{problem} Is it true that any locally convex, locally compact embedding of a simply connected surface in $\HH^n$ is convex? \end{problem} It remains an open question whether van Heijenoort's (1952) criterion works for \emph{embedded} \emph{unbounded} surfaces in $\HH^n$. We conjecture that this is, indeed, the case. The proof of the main theorem makes use of quite a number of technical propositions and lemmas. The proofs of these statements for $\R^n$ can for the most part be directly repeated for $\X^n$, but in some situations extra care is needed. If the reader is referred to van Heijenoort's paper for the proof, it means that the original proof works without any changes. \underline{Notation:} The calligraphic font is used for sets in the abstract topological manifold. The regular mathematics font is used for the images of these sets in $\X^n$, a space of constant curvature. The interior of a set $S$ is denoted by $(S)$, while the closure by $[S]$. The boundary of $S$ is denoted by $\partial S$. Since this paper is best read together with van Heijenoort's (1952) paper, we would like to explain the differences between his and our notations. van Heijenoort denotes a subset in the abstract manifold $\overline{M}$ by $\overline{S}$, while denoting its image in $\mathbb{R}^n$ by $S$; the interior of a set $S$ in $\mathbb{R}^n$ is denoted in his paper by $\dot{S}$. The immersion $r$ induces a metric on ${\cal M}$ by \[ d(p,q)=\GLB\limits_{arc(p,q) \subset {\cal M}}\{|r(arc(p,q))|\}, \] where \emph{$\GLB$} stands for the greatest lower bound, and $|r(arc(p,q))|$ denotes the length of an arc joining $\p$ and $\q$ on $M$, which is the $r$-image of an arc joining these points on ${\cal M}$. We will call this metric the $r$-metric. \begin{lemma}\label{arcwise} (van Heijenoort) Any two points of ${\cal M}$ can be connected by an arc of a finite length. Thus ${\cal M}$ is not only connected, but also arcwise connected. \end{lemma} \begin{lemma} (van Heijenoort) The metric topology defined by the $r$-metric is equivalent to the original topology on ${\cal M}$. \end{lemma} \begin{lemma} (van Heijenoort) $r(S)$ is closed in $\X^n$ for any closed subset $S$ of ${\cal M}$. \end{lemma} \begin{lemma} (van Heijenoort) If on a bounded (in the $r$-metric) closed subset $S \subset {\cal M}$ the mapping $r$ is one-to-one, then $r$ is a homeomorphism between $S$ and $r(S)$. \end{lemma} The proofs of the last two lemmas have been omitted in van Heijenoort (1952), but they are well known in topology. \begin{theorem}\label{main} Let $\X^n$ ($n>2$) be a Euclidean, spherical, or hyperbolic space. Let $M=({\cal M},r)$ be an immersion of an $(n-1)$-manifold ${\cal M}$ in $\X^n$, such that $r(\cM)$ is bounded in $\X^n$. Suppose that $M=({\cal M},r)$ satisfies the following conditions: \noindent 1) ${\cal M}$ is complete with respect to the metric induced on ${\cal M}$ by the immersion $r$, \noindent 2) ${\cal M}$ is connected, \noindent 3) $M$ is locally convex at each point, \noindent 4) $M$ is strictly convex in at least one point. Then $r$ is a homeomorphism from ${\cal M}$ onto the boundary of a compact convex body.
\end{theorem} \begin{proof} Notice that our theorem for $\X^n=\HH^n$ directly follows from van Heijenoort's proof of the Euclidean case. Any immersion of $\cM$ into $\HH^n$ can be regarded as an immersion into the interior of a unit ball with a hyperbolic metric, according to Klein's model. If conditions 1)-4) are satisfied for the hyperbolic metric, they are satisfied for the Euclidean metric on this ball. Geodesics in Klein's model are straight line segments and, therefore, for a bounded closed surface in $\HH^n$ that satisfies the conditions of the theorem, the convexity follows from the Euclidean version of this theorem. Van Heijenoort's original proof is based on the notion of a convex part. A {\it convex part} of $M$, centered at a point of strict convexity $\oo=r(o)$, $o \in \cM$, is an open connected subset $C$ of $r({\cal M})$ that contains $\oo$ and such that: (1) $\partial C= H \cap r({\cal M})$, where $H$ is a hyperplane in $\X^n$, not passing through $\mathbf{o}$, (2) $C$ lies on the boundary of a closed convex body $K_C$ bounded by $C$ and $H$. We call $H \cap K_C$ the lid of the convex part. Let $H_0$ be a supporting hyperplane at $\oo$. We call the \emph{open} half-space defined by $H_0$, where the convex part lies, the \emph{positive half-space} and denote it by $H^+_0$. We call the $r$-preimage of a convex part $C$ in ${\cal M}$ an abstract convex part, and denote it by ${\cal C}$. In van Heijenoort's paper $H$ is required to be parallel to the supporting hyperplane $H_0$ of $r({\cal M})$ at $\oo$, but this is not essential. In fact, we just need a family of hyperplanes such that: (1) they do not intersect in the positive half-space, (2) the intersections of these hyperplanes with the positive half-space form a partition of the positive half-space, (3) all these hyperplanes are orthogonal to a line $l$, passing through $\oo$. Let us call such a family a fiber bundle $\{H_z\}_{(l,H_0)}$ of the positive half-space defined by $l$ and $H_0$. (In the case of $\X^n=\R^n$ it is a vector bundle.) In fact, it is not necessary to assume that $l$ passes through $\oo$, but this assumption simplifies our proofs. Here $z>0$ denotes the distance, \emph{along the line $l$}, between the hyperplane $H_z$ in this family and $H_0$. We will call $z$ the height of $H_z$. \begin{proposition} A convex part exists. \end{proposition} \begin{proof}van Heijenoort's proof works for $\X^n$, $n>2$, without changes. \end{proof} Denote by $\zeta$ the least upper bound of the set of heights of the lids of convex parts centered at $\oo$ and defined by some fixed fiber bundle $\{H_z\}_{(l,H_0)}$. Since $r(\cM)$ is bounded, $\zeta < \infty$. Consider the union $G$ of all convex parts centered at $\oo$. We want to prove that this union is also a convex part. Let us depart for a short while (this paragraph) from the assumption that $r(\cM)$ is bounded. $G$ may only be unbounded in the hyperbolic and Euclidean cases. As shown by van Heijenoort (1952), if $\X^n=\R^n$ and $\zeta < \infty$, $G$ must be bounded even when $r(\cM)$ is allowed to be unbounded. If $\X^n=\HH^n$ and $\zeta < \infty$, $G$ can be unbounded, and this is precisely the reason why van Heijenoort's theorem does not hold for unbounded surfaces in hyperbolic spaces. Since in this theorem $r(\cM)$ is assumed to be bounded, $G$ is bounded. Let us presume from now on that $\X^n=\SSS^n$ (the case of $\X^n=\HH^n$ is considered in the beginning of the proof).
$\partial G$ lies in the hyperplane $H_{\zeta}$ and is equal to $H_{\zeta} \cap M$. $\partial G$ bounds a closed bounded convex set $D$ in $H_{\zeta}$. Two mutually exclusive cases are possible. Case 1: $\dim D<n-1$. Then, following the argument of van Heijenoort (Part 2: pages 239-230, Part 5: page 241, Part 3: II on page 231), we conclude that $G \cup D$ is the homeomorphic image of an $(n-1)$-sphere ${\cal G} \cup {\cal D} \subset {\cal M}$. Since ${\cal M}$ is connected, ${\cal G} \cup {\cal D} = {\cal M}$, and $M$ is a convex surface. Case 2: $\dim D=n-1$. The following lemma is a key part of the proof of the main theorem. Roughly speaking, it asserts that if the lid of a convex part is of co-dimension 1, then either this convex part is a subset of a bigger convex part, or this convex part, together with the lid, is homeomorphic to $\cM$ via the mapping $r$. \begin{lemma}\label{alternative} Suppose $\X^n=\SSS^n$. Let $C$ be a convex part centered at a point $\oo$ and defined by a hyperplane $H_z$ from a fiber bundle $\{H_z\}_{(l,H_0)}$. Suppose $B=\partial C$ is the boundary of an $(n-1)$-dimensional closed convex set $S$ in $H_z$. Either $S$ is the $r$-image of an $(n-1)$-disk ${\cal S}$ in ${\cal M}$ and ${\cal M}={\cal C} \cup {\cal S}$, where ${\cal C}$ is the abstract convex part corresponding to $C$, or $C$ is a proper subset of a larger convex part, defined by the same fiber bundle $\{H_z\}_{(l,H_0)}$. \end{lemma} \begin{proof} Using a perturbation argument, we will prove this lemma by reducing the spherical case to the Euclidean one. Since $S$ is $(n-1)$-dimensional and belongs to one of the hyperplanes in the fiber bundle $\{H_z\}_{(l,H_0)}$, $[\conv C] \cap H_0$ is either empty or $(n-1)$-dimensional. If it is non-empty, $[\conv C] \cap H_0$ must have a point other than $\oo$ and its opposite. The closure of a convex set in $\X^n$ is convex. Since $[\conv C ]$ is convex, if it contains a point $\p$ of $H_0$ other than $\oo$ and its opposite, it contains some geodesic segment $[\oo \p]$ \emph{lying in} $H_0$. Since $\oo$ is a point of strict convexity, there is a neighborhood of $\oo$ on $\oo \p$ all of whose points, except for $\oo$, are not points of $[\conv C]$, which contradicts the choice of $[\oo \p]$. So, $[\conv C] \cap H_0$ contains no point other than $\oo$ and its opposite. Since, by Lemma \ref{arcwise}, $r(M)$ is arcwise connected, all of $[\conv C]$, except for the point $\oo$, lies in the positive subspace. Therefore, there is a hyperplane $H$ in $\SSS^n$ such that $[C]$ lies in an open halfspace $H_+$ defined by $H$. We can regard $\SSS^n$ as a standard sphere in $\R^{n+1}$. $H$ defines a hyperplane in $\R^{n+1}$. Consider an $n$-dimensional plane $E_n$ in $\R^{n+1}$ parallel to this hyperplane and not passing through the origin. The central projection $r_1$ of $M \cap H_+$ on $E_n$ obviously induces an immersion $r_1r$ of a submanifold $\cM^{\prime}$ of $\cM$ into $E_n$. This submanifold $\cM^{\prime}$ is defined as the maximal arcwise connected open subset of $\cM$ such that (1) all points of this subset are mapped by $r$ to $H_+$, and (2) it contains $o$. It is obviously a manifold. Let us prove that it exists. Consider the union of all open arcwise connected subsets of $\cM$ that contain $o$ and are mapped by $r$ into $H_+$. It is open and arcwise connected, since all these subsets contain $o$. Let $M^{\prime}=(\cM^{\prime}, r_1r)$. The immersion $r_1r$ obviously satisfies Conditions 2-4 of the main theorem \ref{main}. $r_1r$ defines a metric on $\cM^{\prime}$.
Any Cauchy sequence on $\cM^{\prime}$ under this metric is also a Cauchy sequence on $\cM$ under the metric induced by $r$. Therefore $\cM^{\prime}$ is complete and satisfies the conditions of the main Theorem \ref{main}. The central projection on $E_n$ maps a spherical convex part of $M$ on a Euclidean convex part of $M^{\prime}$; it also maps the fiber bundle $\{H_z\}_{(l,H_0)}$ to a fiber bundle in the Euclidean $n$-plane $E_n$. van Heijenoort (1952) proved Lemma \ref{alternative} for the Euclidean case. Therefore, either ${\cal M}={\cal C} \cup {\cal S}$, or $C$ is a proper subset of a larger convex part centered at $\oo$, and defined by the same fiber bundle $\{H_z\}_{(l,H_0)}$. \end{proof} The second alternative ($C$ is a subset of a larger convex part) is obviously excluded, since $C$ is the convex part corresponding to the height which is the least upper bound of all possible heights of convex parts. Therefore in Case 2 ${\cal M}$ is the boundary of a convex body which consists of a maximal convex part and a convex $(n-1)$-disk, lying in the hyperplane $H_{\zeta}$. \end{proof} \section{Locally convex PL-surfaces} \begin{theorem}\label{strict} Let $r$ be a realization map from a \emph{compact} connected manifold ${\cal M}$ of dimension $n-1$ into $\mathbb{X}^n$ ($n>2$) such that $\cM$ is complete with respect to the $r$-metric. Suppose that $M=(\cM,r)$ is locally convex at all points. Then $M=(\cM,r)$ is either strictly locally convex in at least one point, or is a spherical hyper-surface of the form $\mathbb{S}^n \cap \partial C$, where $C$ is a convex cone in $\mathbb{R}^{n+1}$, whose face of the smallest dimension contains the origin (in particular, $C$ may be a hyperplane in $\mathbb{R}^{n+1}$). \end{theorem} \begin{proof} The proof of this rather long and technical theorem will be included in the full length paper of Rybnikov (200X). \end{proof} \begin{theorem}\label{PL-case} Let $r$ be a realization map from a closed connected $n$-dimensional manifold ${\cal M}$, with a regular CW-decomposition, in $\mathbb{R}^n$ or $\SSS^n$ ($n>2$) such that on the closure of each cell $C$ of ${\cal M}$ map $r$ is one-to-one and $r(C)$ lies on a subspace of dimension equal to $\dim C$. Suppose that $r({\cal M})$ is strictly locally convex in at least one point. The surface $r({\cal M})$ is the boundary of a convex polyhedron if and only if each $(n-3)$-face has a point with an $M$-neighborhood which lies on the boundary of a convex $n$-dimensional set. \end{theorem} \begin{proof} $M$ is locally convex at all points of its $(n-3)$-cells. Suppose we have shown that $M$ is locally convex at each $k$-face, $0<k \le n-3$. Consider a $(k-1)$-face $F$. Consider the intersection of $\Star(F)$ with a sufficiently small $(n-k)$-sphere ${\mathbb S}$ centered at some point $\p$ of $F$ and lying in a subspace complimentary to $F$. $M$ is locally convex at $F$ if and only if the hypersurface ${\mathbb S} \cap \Star(F)$ on the sphere ${\mathbb S}$ is convex. Since $M$ is locally convex at each $k$-face, ${\mathbb S} \cap \Star(F)$ this hypersurface is locally convex at each vertex. By Theorem \ref{strict} ${\mathbb S} \cap \Star(F)$ is either the intersection of the boundary of a convex cone with ${\mathbb S}$ or has a point of strict convexity on the sphere ${\mathbb S}$. In the latter case the spherical generalization of van Heijenoort's theorem implies that ${\mathbb S} \cap \Star(F)$ is convex. Thus $M$ is convex at $\p$ and therefore at all points of $F$. 
This induction argument shows that $M$ must be locally convex at all vertices. If is locally convex at all vertices, it is locally convex at all points. We assumed that $M$ had a point of strict convexity. The metric induced by $r$ is indeed complete. By van Heijenoort's theorem and Theorem \ref{main}, $M$ is the boundary of a convex polyhedron. \end{proof} \section{New Algorithm for Checking Global \\Convexity of PL-surfaces} \underline{Idea:} check convexity for the star of each $(n-3)$-cell of $M$. We present an algorithm checking convexity of PL-realizations (in the sense outlined above) of a closed compact manifold $M=({\cal M},r)$. The main algorithm uses an auxiliary algorithm C-check. The input of this algorithm is a pair $(T,{\cal T})$, where ${\cal T}$ is a one vertex tree with a cyclic orientation of edges and $T$ is its rectilinear realization in 3-space. This pair can be thought of as a PL-realization of a plane fan (partition of the plane into cones with common origin) in 3-space. The output is 1, if this realization is convex, and 0 otherwise. Obviously, this question is equivalent to verifying convexity of a plane polygon. For the plane of reference we choose a plane perpendicular to the sum of all unit vectors directed along the edges of the fan. The latter question can be resolved in time, linear in the number of edges of the tree (e.g. see Devillers et al (1998), Mehlhorn et al (1999)). \underline{Input and Preprocessing:} The poset of faces of dimensions $n-3,n-2$, and $n-1$ of ${\cal M}$ and the equations of the facets, OR the poset of faces of dimensions $n-3,n-2,n-1$, and $0$ of ${\cal M}$ and the positions of the vertices. We assume that we know the correspondence between the rank of a face in the poset and its dimension. There are mutual links between the facets (or vertices) of ${\cal M}$ in the poset and the records containing their realization information. All $(n-3)$-faces of ${\cal M}$ are put into a stack $S_{n-3}$. There are mutual links between elements of this stack and corresponding elements of the face lattice of ${\cal M}$. \underline{Output:} YES, if $r({\cal M})$ is the boundary of a convex polyhedron, NO otherwise. \begin{tabbing} 1. {\bf while} $S_{n-3}$ is not empty, pick an $(n-3)$-face $F$ from $S_{n-3}$;\\ 2. compute the projection of $F$, and of all $(n-2)$-faces incident to $F$,\\ ~~~onto an affine 3-plane complimentary to $F$; denote this projection by $\PStar(F)$;\\ 3. compute the cyclicly ordered one-vertex tree ${\cal T}(F)$, whose edges\\ ~~~are the $(n-2)$-faces of $\PStar(F)$ and whose vertex is $F$; \\ 4. Apply to $(\PStar(F),{\cal T}(F))$ the algorithm C-check\\ ~~~~~{\bf if} C-check$(\PStar(F),{\cal T}(F))=1$ \= {\bf then} remove $F$ from the stack $S_{n-3}$ \\ \> {\bf else} Output:=NO; terminate \\ ~~~{\bf endwhile}\\ 5. Output:=YES \\ \end{tabbing} \begin{remark} The algorithm processes the stars of all $(n-3)$-faces independently. On a parallelized computer the stars of all $(n-3)$-faces can be processed in parallel. \end{remark} \textbf{Proof of Correctness.} The algorithm checks the local convexity of $M$ at the stars of all $(n-3)$-cells. $\cM$ is compact and closed — by Krein-Milman theorem (or Lemma \ref{strict}) $M$ has at least one strictly convex vertex. By Theorem \ref{PL-case} local convexity at all vertices, together with the existence of at least one strictly convex vertex, is necessary and sufficient for $M$ to be the boundary of a convex body. 
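For concreteness, here is a rough sketch of the auxiliary routine C-check, reducing the convexity test for the projected star to the convexity of a plane polygon as described above. This is only an illustration under simplifying assumptions (the numerical tolerance, and the fact that degenerate configurations and non-simple polygons are not handled); it is not the authors' implementation.
\begin{verbatim}
import numpy as np

def c_check(edge_dirs):
    # edge_dirs: 3D direction vectors of the edges of the fan PStar(F),
    # listed in cyclic order around the apex (the edges of the tree T(F)).
    # The reference plane is chosen perpendicular to the sum of the unit
    # vectors along the edges, as in the description of C-check above.
    units = [np.asarray(d, float) / np.linalg.norm(d) for d in edge_dirs]
    normal = sum(units)
    normal = normal / np.linalg.norm(normal)
    pts = []
    for u in units:
        h = float(np.dot(u, normal))
        if h <= 0:            # the ray misses the reference plane: reject
            return 0
        pts.append(u / h)     # intersection with the plane <x, normal> = 1
    # Convexity test of the resulting plane polygon: all signed turns must
    # agree (non-simple polygons are not detected by this simplified check).
    signs = set()
    m = len(pts)
    for i in range(m):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % m]
        turn = float(np.dot(np.cross(b - a, c - b), normal))
        if abs(turn) > 1e-12:
            signs.add(turn > 0)
    return 1 if len(signs) <= 1 else 0
\end{verbatim}
The main loop then simply calls this routine on $(\PStar(F),{\cal T}(F))$ for each $(n-3)$-face $F$ popped from the stack $S_{n-3}$.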
\underline{\bf Complexity estimates} Denote by $f_k$ the number of $k$-faces of ${\cal M}$, and by $f_{k,l}$ the number of incidences between $k$-faces and $l$-faces in ${\cal M}$. Step 1 is repeated at most $f_{n-3}$ times. Steps 2-4 take at most $\const \, f_{n-2,n-3}(\Star(F))$ arithmetic operations for each $F$, where $\const$ does not depend on $F$. Thus, steps 2-4, repeated for all $(n-3)$-faces of ${\cal M}$, require $O(f_{n-2,n-3})$ operations. Therefore, the total number of operations for this algorithm is $O(f_{n-2,n-3})$. \begin{remark} The algorithm does not use all of the face lattice of ${\cal M}$. \end{remark} \begin{remark} The algorithm requires computing polynomial predicates only. The highest degree of algebraic predicates that the algorithm uses is $d$, which is optimal (see Devillers et al, 1998). \end{remark} From a practical point of view, it makes sense to say that a surface $M$ is almost convex if it lies within a small Hausdorff distance from a convex surface $S$ that bounds an $n$-dimensional convex set $B$. In this case, the measure of lines that pass through interior points of $B$ and intersect $S$ in more than 2 points will be small compared to the measure of all lines passing through interior points of $B$. These statements can be given a rigorous meaning in the language of integral geometry, also called ``geometric probability'' (see Klain and Rota 1997). \begin{remark} If there is a 3-dimensional coordinate subspace $L$ of $\R^n$ such that all the subspaces spanned by $(n-3)$-faces are complementary to $L$, the polyhedron can be projected on $L$ and all computations can be done in 3-space. This reduces the degree of predicates from $d$ to 3. In such a case the boolean complexity of the algorithm does not depend on the dimension at all; therefore, for sufficiently generic realizations the algorithm has degree 3 and complexity not depending on $n$. \end{remark} \begin{remark} This algorithm can also be applied without changes to compact PL-surfaces in $\SSS^n$ or $\HH^n$. \end{remark} \end{document}
\begin{document} \title{Ample tangent bundle on smooth projective stacks} \author{Karim El Haloui} \email{[email protected]} \address{Dept. of Maths, University of Warwick, Coventry, CV4 7AL, UK} \date{November 06, 2016} \subjclass[2010]{Primary 14A20; Secondary 18E40} \begin{abstract} We study ample vector bundles on smooth projective stacks. In particular, we prove that the tangent bundle of the weighted projective stack ${\mathbb P}(a_0,...,a_n)$ is ample. A result of Mori shows that the only smooth projective varieties with ample tangent bundle are isomorphic to ${\mathbb P}^N$ for some $N$. Extending our geometric spaces from varieties to projective stacks, we are able to provide a new example. This leaves open the question of whether the only smooth projective stacks with ample tangent bundle are the weighted projective stacks. \end{abstract} \maketitle A conjecture of Hartshorne says that the only $n$-dimensional irreducible smooth projective variety whose tangent bundle is ample is ${\mathbb P}^n$. This was proved for all dimensions and for any characteristic of the base field by Mori \cite{Mor}. The goal of this paper is to extend the definition of ample vector bundles to smooth projective stacks and provide a new example in this context. We prove that for any positive weights $a_0,...,a_n$, the tangent bundle of the weighted projective stack ${\mathbb P}(a_0,...,a_n)$ is ample. The next step would be to show whether the only spaces with such a characterisation are the weighted projective stacks. We leave this as a conjecture in this article. In section 1 we recall some general observations about quotient categories. In section 2 we establish a technical framework for working with ample bundles on smooth projective stacks. In section 3 we use this framework to prove that the tangent bundle of any weighted projective stack is ample. \section{Properties of the quotient category} Let ${\mathbb K}$ be a field. All our modules are graded modules and homomorphisms are graded module homomorphisms. \subsection{Quotient stack} Let $Y$ be a smooth algebraic variety with an action of an algebraic group $G$. ${\mathcal O}_{[X]}$-modules on the quotient stack $[X]=[Y/G]$ can be understood in terms of $G$-equivariant ${\mathcal O}_Y$-modules \cite{EHR}. There are different notions of a projective stack, for instance, a stack whose coarse moduli space is a projective variety. Here we use a more restrictive notion: a projective stack is a smooth closed substack of a weighted projective stack \cite{Zho}. Let us spell it out. Let $V=\oplus_{k=1}^m V_k$ be a positively graded $(n+1)$-dimensional ${\mathbb K}$-vector space. Naturally we treat it as a ${\mathbb G}_m$-module with positive weights by $\lambda \bullet \ensuremath{\mathbf{v}}\xspace_k = \lambda^k \ensuremath{\mathbf{v}}\xspace_k$ where $\ensuremath{\mathbf{v}}\xspace_k \in V_k$. Let $Y$ be a smooth closed ${\mathbb G}_m$-invariant subvariety of $V\setminus \{0\}$. We define {\em a projective stack} as the stack $[X]=[Y/{\mathbb G}_m]$. The G.I.T.-quotient $X=Y//{\mathbb G}_m$ is the coarse moduli space of $[X]$. \subsection{Coherent sheaves on projective stacks} Let us describe the category ${\mathcal O}_{[X]}{{\mbox{\rm --\,Qcoh}}}$ of quasicoherent sheaves on $[X]$. Choose a homogeneous basis $\ensuremath{\mathbf{e}}\xspace_i$ on $V$ with $\ensuremath{\mathbf{e}}\xspace_i\in V_{d_i}$, $i=0,1, \ldots, n$. Let $\ensuremath{\mathbf{x}}\xspace_i \in V^\ast$ be the dual basis.
Then ${\mathbb K}[V] = {\mathbb K} [\ensuremath{\mathbf{x}}\xspace_0,...,\ensuremath{\mathbf{x}}\xspace_n ]$ possesses a natural grading with $\deg(\ensuremath{\mathbf{x}}\xspace_{i})=d_{i}$. Let $I$ be the defining ideal of $\overline{Y}$. Since $Y$ is ${\mathbb G}_m$-invariant, the ideal $I$ and the ring $$ A:={\mathbb K}[Y]= {\mathbb K}[\overline{Y}] = {\mathbb K} [\ensuremath{\mathbf{x}}\xspace_0,...,\ensuremath{\mathbf{x}}\xspace_n ]/I $$ are graded. Both $X$ and $[X]$ can be thought of as the projective spectrum of $A$. The scheme $X$ is naturally isomorphic to the scheme theoretic $ {{\mbox{\rm Proj}}}\, A$. The stack $[X]$ is the Artin-Zhang projective spectrum $ {{\mbox{\rm Proj}}}_{AZ} A$ \cite{AKO}, i.e. its category of quasicoherent sheaves ${\mathcal O}_{[X]}{{\mbox{\rm --\,Qcoh}}}$ is equivalent to the quotient category $A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$ where $A{{\mbox{\rm --\,GrMod}}}$ is the category of ${\mathbb Z}$-graded $A$-modules, $A{{\mbox{\rm --\,Tors}}}$ is its full subcategory of torsion modules (we identify the objects of ${\mathcal O}_{[X]}{{\mbox{\rm --\,Qcoh}}}$ and $A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$). Recall that $$ \tau (M) = \{\ensuremath{\mathbf{m}}\xspace \in M \,\mid\, \exists N \; \forall k>N \; A_k\ensuremath{\mathbf{m}}\xspace=0\} $$ is {\em the torsion submodule of} $M$. $M$ is said to be {\em torsion} if $\tau (M)=M$. It can be seen as well that the torsion submodule of $M$ is the sum of all the finite dimensional submodules of $M$ since $A$ is connected. Denote by $$ \pi_A:A{{\mbox{\rm --\,GrMod}}} \rightarrow A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}} $$ the quotient functor. Since $A{{\mbox{\rm --\,GrMod}}}$ has enough injectives and $A{{\mbox{\rm --\,Tors}}}$ is dense then there exists a section functor $$ \omega_A:A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}} \rightarrow A{{\mbox{\rm --\,GrMod}}} $$ which is right adjoint to $\pi_A$ in the sense that $$ {{\mbox{\rm Hom}}}_{A\mbox{\tiny {{\mbox{\rm --\,GrMod}}}}}(N,\omega_A(\mathcal{M}))\cong{{\mbox{\rm Hom}}}_{A\mbox{\tiny{{\mbox{\rm --\,GrMod}}}}/A\mbox{\tiny{{\mbox{\rm --\,Tors}}}}}(\pi_A(N),\mathcal{M}). $$ Recall as well that $\pi_A$ is exact and that $\omega_A$ is left exact. We call $\omega_A\pi_A(M)$ the {\em $A$-saturation} of $M$. We say that a module is {\em $A$-saturated} is it is isomorphic to the saturation of a module. It can be easily seen that a saturated module is the saturation of itself and is torsion-free (from the adjunction). If $M$ and $N$ are $A$-saturated, then being isomorphic in $A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$ is equivalent to being isomorphic in $A{{\mbox{\rm --\,GrMod}}}$. The ${\mathcal O}_{[X]}$-module ${\mathcal O}_{[X]}(k)$ is defined as ${{\mbox{\rm Sat}}}(A[k])$ where $A[k]$ is the shifted regular module and the grading is given by $A[k]_m = A_{k+m}$. In particular, $A[k]$ is $A$-saturated if $A$ is a polynomial rings of more than two variables \cite{AZ}. A well-known example of a ring which isn't $A$-saturated is the polynomial ring of one variable $A={\mathbb K}[x]$ which $A$-saturation is given by ${\mathbb K}[x,x^{-1}]$. \subsection{Tensor product} Let $M$ and $N$ be two $A$-modules, then $M\otimes_A N$ possesses a natural $A$-module structure. We want to induce on the quotient category $A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$ a structure of a symmetric monoidal category. Consider the full subcategory of $A$-saturated modules $A{{\mbox{\rm --\,Sat}}}$. 
The essential image of the section functor $\omega_A$ consists precisely of the $A$-saturated modules. Thus $$\omega_A: A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}} \rightarrow A{{\mbox{\rm --\,GrMod}}}$$ becomes full and faithful onto its image. Indeed $$ {{\mbox{\rm Hom}}}_{A\mbox{\tiny{{\mbox{\rm --\,GrMod}}}}/A\mbox{\tiny{{\mbox{\rm --\,Tors}}}}}(\pi_A(M),\pi_A(N)) \cong {{\mbox{\rm Hom}}}_{A\mbox{\tiny{{\mbox{\rm --\,GrMod}}}}}(M,N) $$ if $N$ is $A$-saturated. So now, we identify the quotient category with its image and call its objects sheaves which we denote by curly letters. If $N$ is a finitely generated $A$-module, then ${\mathcal N}=\pi_A(N)$ is said to be a \emph{coherent} ${\mathcal O}_{[X]}$-module. The definition makes sense since the $A$-saturation of a finitely generated $A$-module $N$ is finitely generated. Indeed, It was proved \cite{AZ} that for any graded $A$-module $N$ we have: $$ 0 \rightarrow \tau_A(N) \rightarrow N \rightarrow \omega_A\pi_A(N) \rightarrow R^{1}\tau_A(N) \rightarrow 0. $$ where $\tau_A(N)$ and $R^{1}\tau_A(N)$ are torsion $A$-modules. Since the localisation functor is exact and that the localisation of a torsion module is zero, then the localisation of the saturation of $N$ is isomorphic to the localisation of $N$ which in turn is finitely generated. But being finitely generated is a local property, thus the saturation of $N$ is finitely generated. From general localisation theory \cite{Ga}, $A{{\mbox{\rm --\,Sat}}}$ is an abelian category (it is actually a Grothendieck category) but it is not an abelian subcategory of $A{{\mbox{\rm --\,GrMod}}}$ \footnote{A full subcategory of an abelian category need not be an abelian subcategory}. The kernels in both categories are the same but the cokernel of two saturated $A$-modules isn't necessarily saturated. The saturation functor ${{\mbox{\rm Sat}}}:A{{\mbox{\rm --\,GrMod}}} \rightarrow A{{\mbox{\rm --\,Sat}}}$ is exact and its right adjoint, namely the inclusion functor, is left exact. Moreover it preserves finite direct sums as does any additive functor in any additive category. Let $A$ be the coordinate ring of some projective stack $[X]$. The graded global section functor $\Gamma_*:{{\mbox{\rm Qcoh}}}([X]) \rightarrow A{{\mbox{\rm --\,GrMod}}}$ and the sheafification functor ${{\mbox{\rm Sh}}}:A{{\mbox{\rm --\,GrMod}}} \rightarrow {{\mbox{\rm Qcoh}}}([X])$ induces the following equivalence of categories: $$ A{{\mbox{\rm --\,Sat}}} \cong {{\mbox{\rm Qcoh}}}([X]). $$ There exists a symmetric monoidal structure in the category of quasicoherent sheaves on $[X]$ denoted by ${{\mbox{\rm Qcoh}}}([X])$. The latter category is equivalent to $A{{\mbox{\rm --\,GrMod}}}/A{{\mbox{\rm --\,Tors}}}$ and hence to $A{{\mbox{\rm --\,Sat}}}$. Note that ${{\mbox{\rm Sh}}}({M}) \otimes {{\mbox{\rm Sh}}}({N}) \cong {{\mbox{\rm Sh}}}({M \otimes_A N})$ in ${{\mbox{\rm Qcoh}}}([X])$. So, \begin{align*} \Gamma_*({{\mbox{\rm Sh}}}({M}) \otimes {{\mbox{\rm Sh}}}({{N}})) &\cong \Gamma_*({{\mbox{\rm Sh}}}{(M \otimes_A N})) \\ &\cong {{\mbox{\rm Sat}}}(M \otimes_A N) \end{align*} where all the isomorphisms are natural (to preserve the symmetric monoidal structure). We can now define a tensor product: take ${\mathcal M}$ and ${\mathcal N}$ in $A{{\mbox{\rm --\,Sat}}}$ and let $$ {\mathcal M} \otimes {\mathcal N} := {{\mbox{\rm Sat}}}(M\otimes_A N). $$ where as objects ${\mathcal M}={{\mbox{\rm Sat}}}(M)$ and ${\mathcal N}={{\mbox{\rm Sat}}}(N)$. 
Since ${{\mbox{\rm Sat}}}$ and the tensor product of graded modules are both right-exact, so is the tensor product defined on $A{{\mbox{\rm --\,Sat}}}$. \section{Ample vector bundles} We want to define a notion of vector bundle of finite rank, equivalently called a locally free sheaf of finite rank, purely in cohomological terms. We have an \emph{internal} Hom on $A{{\mbox{\rm --\,Sat}}}$ defined as follows: $$ \underline{{{\mbox{\rm Hom}}}}_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N})= \bigoplus_{k \in {\mathbb Z}} {{\mbox{\rm Hom}}}_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N}[k]) $$ where as objects ${\mathcal N}[k]={{\mbox{\rm Sat}}}(N[k])$ (saturation is preserved under shifts). The injective objects in $A{{\mbox{\rm --\,Sat}}}$ are the injective torsion-free $A$-modules in $A{{\mbox{\rm --\,GrMod}}}$ (they are all saturated) and from standard localisation theory $A{{\mbox{\rm --\,Sat}}}$ has enough injectives \cite{Ga}. Moreover an injective object in $A{{\mbox{\rm --\,GrMod}}}$ can be decomposed as a direct sum of an injective torsion-free $A$-module and an injective torsion $A$-module, determined up to isomorphism \cite{AZ}. So the injective resolution of an $A$-module $N$, say $E^\bullet(N)$, is equal to $Q^\bullet(N) \oplus I^\bullet(N)$ where $Q^\bullet(N)$ is the saturated torsion-free part and $I^\bullet(N)$ the torsion part. Supposing from now on that $M$ is a finitely generated graded $A$-module, we have: \begin{align*} {{\mbox{\rm Ext}}}^i_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N}) &= R^i{{\mbox{\rm Hom}}}_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},\_)({\mathcal N}) \\ &\cong {{\mbox{\rm h}}}^i({{\mbox{\rm Hom}}}_{\mbox{\tiny A{{\mbox{\rm --\,GrMod}}}}} (M,Q^\bullet(N))) \end{align*} Graded Ext is defined as follows: \begin{align*} \underline{{{\mbox{\rm Ext}}}}^i_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N}) &= \bigoplus_{k \in {\mathbb Z}} {{\mbox{\rm Ext}}}^i_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N}[k]) \\ &\cong {{\mbox{\rm h}}}^i(\underline{{{\mbox{\rm Hom}}}}_{\mbox{\tiny A{{\mbox{\rm --\,GrMod}}}}}(M,Q^\bullet(N))) \end{align*} which is the $i$th right derived functor of $\underline{{{\mbox{\rm Hom}}}}$. We can endow the graded Ext in $A{{\mbox{\rm --\,Sat}}}$ with the structure of a graded $A$-module and define the sheafified version of graded Ext as follows: $$ {\underline{\mathcal{E}xt}}^i({\mathcal M},{\mathcal N}) := {{\mbox{\rm Sat}}}(\underline{{{\mbox{\rm Ext}}}}^i_{\mbox{\tiny A{{\mbox{\rm --\,Sat}}}}}({\mathcal M},{\mathcal N})) $$ where ${\mathcal M}$ and ${\mathcal N}$ are objects in $A{{\mbox{\rm --\,Sat}}}$ (and as an object in $A{{\mbox{\rm --\,GrMod}}}$, $\pi_A(M)={\mathcal M}$ with $M$ finitely generated). Let $X$ be a smooth projective variety and ${\mathcal E}$ a vector bundle (of finite rank). Equivalently, ${\mathcal E}$ is a locally free sheaf, which amounts to asking that for all $x\in X$ the stalk ${\mathcal E}_x$ is a free module of finite rank over the regular local ring ${\mathcal O}_x$. But ${\mathcal E}_x$ is a free module if and only if ${{\mbox{\rm Ext}}}^i_{{\mathcal O}_x}({\mathcal E}_x,{\mathcal O}_x)=0$ for all $i>0$.
Since ${{\mbox{\rm Ext}}}^i_{{\mathcal O}_x}({\mathcal E}_x,{\mathcal O}_x)\cong \mathcal{E}xt^i_{{\mathcal O}}({\mathcal E},{\mathcal O})_x$ for all $x\in X$, then ${\mathcal E}$ is a vector bundle if and only if $\mathcal{E}xt^i_{{\mathcal O}}({\mathcal E},{\mathcal O})=0$ for all $i>0$. This justify the next definition, \begin{defn} Let ${\mathcal M}$ be a coherent sheaf. ${\mathcal M}$ is a {\bf vector bundle} or {\bf a locally free sheaf} if $$ {\underline{\mathcal{E}xt}}^i({\mathcal M},{\mathcal O})=0 $$ for all $i>0$ where ${\mathcal O}:={{\mbox{\rm Sat}}}(A)$. \end{defn} For example, if $[X]$ is a weighted projective stack of dimension greater than 2 then $A$ is a graded polynomial ring with more than 2 variables. In this case, it is known that ${\mathcal O}=A$ \cite{AZ}. But since ${\mathcal O}(k)$ is projective then ${\mathcal O}(k)$ is locally free for all $k$. \begin{defn} A sheaf ${\mathcal M}$ is said to be {\bf weighted globally generated} if there exists an epimorphism $$ \bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j} \rightarrow {\mathcal M} \rightarrow 0 $$ for some non-negative $s_j$'s with $l={{\mbox{\rm lcm}}}(d_0,...,d_n)$. \end{defn} In the case where all the weights are one, the least common multiple is equal to one and we recover the definition of globally generated sheaves adopted for projective varieties. \begin{prop} \begin{enumerate} \item Any quotient of a weighted globally generated sheaf is weighted globally generated. \item The direct sum of two weighted globally generated sheaves is weighted globally generated. \item For all $k\geqslant 0$, ${\mathcal O}(k)$ is weighted globally generated. \item The tensor product of two weighted globally generated sheaves is weighted globally generated. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item It follows from the definition and the fact that the composition of two epimorphisms is an epimorphism. \item This follows immediately by definition. \item By the division algorithm, we know that $k=al+r$ for some non-negative integer $a$ and $0 \leqslant r < l$. We claim that the following map $$ {\mathcal O}(r)^{\oplus (n+1)} \rightarrow {\mathcal O}(k) $$ induced by $(0,...,1_j,...,0) \mapsto x_j^{ \frac{al}{d_j}}$ is an epimorphism in $A{{\mbox{\rm --\,Sat}}}$. To prove our claim we need to show that the cokernel of the map in $A{{\mbox{\rm --\,GrMod}}}$ is torsion. Take a homogeneous element $f\in A(k)$ and let $N=\max\left\lbrace \dfrac{al}{d_j}, j\in\{0,...,n\} \right\rbrace $. Suppose that $h\in A$ is an homogeneous element of degree greater than $N$. So it can be written as $h'x_j^{ \frac{al}{d_j}}$ for some $j\in\{0,...,n\}$. It follows that $hf$ is in the image of the map. Hence its cokernel is zero. \item Suppose that ${\mathcal M}_1$ and ${\mathcal M}_2$ are weighted globally generated. Then we know that $$ \bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j^1} \rightarrow {\mathcal M}_1 \rightarrow 0 \:\:\:(1) $$ and $$ \bigoplus_{k = 0}^{l-1} {\mathcal O}(k)^{\oplus s_k^2} \rightarrow {\mathcal M}_2 \rightarrow 0 \:\:\:(2) $$ Tensoring (2) by $\bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j^1}$ on the left and (1) by ${\mathcal M}_2$ on the right, it follows that $$ \bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j^1} \otimes \bigoplus_{k = 0}^{l-1} {\mathcal O}(k)^{\oplus s_k^2} \rightarrow \bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j^1} \otimes {\mathcal M}_2 \rightarrow 0. 
$$ and $$ \bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j^1} \otimes {\mathcal M}_2 \rightarrow {\mathcal M}_1 \otimes {\mathcal M}_2 \rightarrow 0 $$ Since the composition of epimorphisms is an epimorphism, we get $$ \bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j^1} \otimes \bigoplus_{k = 0}^{l-1} {\mathcal O}(k)^{\oplus s_k^2} \rightarrow {\mathcal M}_1 \otimes {\mathcal M}_2 \rightarrow 0. $$ Therefore, $$ \bigoplus_{0 \leqslant j+k \leqslant 2(l-1)} {\mathcal O}(j+k)^{\oplus (s_j^1+s_k^2)} \rightarrow {\mathcal M}_1 \otimes {\mathcal M}_2 \rightarrow 0. $$ Since each summand is weighted globally generated and that a direct sum of such is weighted globally generated, the result follows. \end{enumerate} \end{proof} In any abelian symmetric braided tensor category we can define the $n^{th}$ symmetric power functor ${{\mbox{\rm Sym}}}^n:A{{\mbox{\rm --\,Sat}}} \rightarrow A{{\mbox{\rm --\,Sat}}}$ as the coequalizer of all the endomorphisms $\sigma \in S_n$ of the $n^{th}$ tensor power functor $T^n$. Hence, we get the following well-known properties: \begin{prop} \begin{enumerate} \item There exists an epimorphism $$ {{\mbox{\rm Sym}}}^p({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^q({\mathcal M}) \twoheadrightarrow {{\mbox{\rm Sym}}}^{p+q}({\mathcal M}). $$ \item There is a natural isomorphism $$ \bigoplus_{p+q=n} {{\mbox{\rm Sym}}}^p({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^q({\mathcal N}) \rightarrow {{\mbox{\rm Sym}}}^n({\mathcal M}\oplus {\mathcal N}). $$ \item The functor ${{\mbox{\rm Sym}}}^n$ preserves epimorphisms and sends coherent sheaves to coherent sheaves. \item There is a natural epimorphism $$ {{\mbox{\rm Sym}}}^n({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^n({\mathcal N}) \rightarrow {{\mbox{\rm Sym}}}^n({\mathcal M} \otimes {\mathcal N}). $$ \end{enumerate} \end{prop} \begin{prop} Let ${\mathcal M}\in A{{\mbox{\rm --\,Sat}}}$, then ${{\mbox{\rm Sym}}}^n({\mathcal M}) \cong {{\mbox{\rm Sat}}}( \mbox{\rm S}^n(M))$ where $\mbox{\rm S}^n$ is the $n^{th}$ symmetric power taken in $A{{\mbox{\rm --\,GrMod}}}$ and ${\mathcal M}={{\mbox{\rm Sat}}}(M)$. \end{prop} This results holds because of the definition of our tensor product in $A{{\mbox{\rm --\,Sat}}}$, we preserved the monoidal symmetric structure and now each transposition acts by switching tensorands before saturation. More generally, it should be noted that saturating a module corresponds to the sheafification of a presheaf. We give a more detailed proof of the following proposition first given in \cite{GP}. \begin{prop} For all $M$, $N$ be two modules. We have $$ {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes {{\mbox{\rm Sat}}}(N)) \cong {{\mbox{\rm Sat}}}(M\otimes_A N). $$ \end{prop} \begin{proof} Consider the following exact sequence in $A{{\mbox{\rm --\,GrMod}}}$ \cite{AZ}: $$ 0 \rightarrow \tau(M) \rightarrow M \rightarrow {{\mbox{\rm Sat}}}(M) \rightarrow R^{1}\tau(M) \rightarrow 0 $$ where $\tau(M)$ is the largest torsion submodule of $M$. The saturation of $M$, denoted by $\widetilde{M}$, is the maximal essential extension of $M/\tau(M)$ such that ${\widetilde{M}}/(M/\tau(M))$ is in $A{{\mbox{\rm --\,Tors}}}$. So we have $$ 0 \rightarrow M/\tau(M) \rightarrow {{\mbox{\rm Sat}}}(M) \rightarrow T \rightarrow 0 $$ where $T$ is in $A{{\mbox{\rm --\,Tors}}}$. Applying by $\_\otimes_AN$ we obtain $$ ... \rightarrow \textrm{Tor}_1^A(T,N) \rightarrow M/\tau(M)\otimes_AN \rightarrow {{\mbox{\rm Sat}}}(M)\otimes_AN \rightarrow T\otimes_AN \rightarrow 0. 
$$ From the properties of the $\textrm{Tor}$ functor, it is known that $\textrm{Tor}_1^A(T,N) \cong \textrm{Tor}_1^A(N,T)$. Now taking a projective resolution of $N$ and tensoring by $T$ we get a complex of objects in $A{{\mbox{\rm --\,Tors}}}$ since tensor product preserves torsion. Therefore $\textrm{Tor}_1^A(T,N)$ is in $A{{\mbox{\rm --\,Tors}}}$. The saturation functor is exact and the saturation of torsion objects is the zero object, so we get a short exact sequence $$ 0 \rightarrow {{\mbox{\rm Sat}}}(M/\tau(M)\otimes_AN) \rightarrow {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes_AN) \rightarrow 0. $$ And hence an isomorphism $$ {{\mbox{\rm Sat}}}(M/\tau(M)\otimes_AN) \cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes_AN). $$ Moreover we have the following short exact sequence $$ 0 \rightarrow \tau(M) \rightarrow M \rightarrow M/\tau(M) \rightarrow 0. $$ Tensoring on the left by $N$ we get $$ \tau(M)\otimes_AN \rightarrow M\otimes_AN \rightarrow M/\tau(M)\otimes_AN \rightarrow 0. $$ Since $\tau(M)\otimes_AN$ is torsion, applying the saturation functor we obtain $$ 0 \rightarrow {{\mbox{\rm Sat}}}(M\otimes_AN) \rightarrow {{\mbox{\rm Sat}}}(M/\tau(M)\otimes_AN) \rightarrow 0. $$ And hence an isomorphism $$ {{\mbox{\rm Sat}}}(M\otimes_AN) \cong {{\mbox{\rm Sat}}}(M/\tau(M)\otimes_AN) $$ So, $$ {{\mbox{\rm Sat}}}(M\otimes_AN) \cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes_AN) $$ To conclude, \begin{align*} {{\mbox{\rm Sat}}}(M\otimes_AN) &\cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M)\otimes_AN) \\ &\cong {{\mbox{\rm Sat}}}(N \otimes_A {{\mbox{\rm Sat}}}(M)) \\ &\cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(N) \otimes_A {{\mbox{\rm Sat}}}(M)) \\ &\cong {{\mbox{\rm Sat}}}({{\mbox{\rm Sat}}}(M) \otimes_A {{\mbox{\rm Sat}}}(N)) \end{align*} \end{proof} \begin{defn} A vector bundle ${\mathcal M}$ is {\bf ample} if for any coherent sheaf ${\mathcal F}$ there exists $n_0>0$ such that $${\mathcal F}\otimes {{\mbox{\rm Sym}}}^n({\mathcal M})$$ is weighted globally generated for all $n \geqslant n_0$. \end{defn} \begin{prop} \begin{enumerate} \item For any ample sheaf, there exists a non-negative integer such that any of its higher symmetric power is weighted globally generated. \item Any quotient sheaf of an ample sheaf is ample. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item Suppose ${\mathcal M}$ is ample, since ${\mathcal O}$ is a coherent sheaf then there exist a non-negative $n_0$ such that for all $n \geqslant n_0$ $$ {\mathcal O} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M}) \cong {{\mbox{\rm Sym}}}^n({\mathcal M}) $$ is weighted globally generated. \item For a given sheaf ${\mathcal F}$, the functor ${\mathcal F} \otimes \_$ is right exact as a composition of a right exact functor and an exact functor. Let ${\mathcal M}'$ be a quotient of ${\mathcal M}$, i.e., we have an epimorphism ${\mathcal M} \twoheadrightarrow {\mathcal M}'$. Since ${{\mbox{\rm Sym}}}^n$ preserves epimorphisms we have $${{\mbox{\rm Sym}}}^n({\mathcal M}) \twoheadrightarrow {{\mbox{\rm Sym}}}^n({\mathcal M}'),$$ so $${\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}) \twoheadrightarrow {\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}')$$ for any coherent sheaf ${\mathcal F}$. 
But ${\mathcal M}$ is ample, so for $n$ sufficiently large ${\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M})$ is weighted globally generated; since ${\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}) \twoheadrightarrow {\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}')$ is an epimorphism, ${\mathcal F} \otimes{{\mbox{\rm Sym}}}^n({\mathcal M}')$ is weighted globally generated as well. This shows that ${\mathcal M}'$ is ample. \end{enumerate} \end{proof} \begin{prop} The finite direct sum of ample sheaves is ample. \end{prop} \begin{proof} The proof is similar to the one given by Hartshorne \cite{Har}. We know that $$ {{\mbox{\rm Sym}}}^n({\mathcal M}\oplus{\mathcal N}) = \bigoplus_{p=0}^{n} {{\mbox{\rm Sym}}}^p({\mathcal M})\otimes {{\mbox{\rm Sym}}}^{n-p}({\mathcal N}). $$ Write $q=n-p$. It suffices to show that there exists some non-negative integer $n_0$ such that when $p+q \geqslant n_0$ then $$ {\mathcal F}\otimes {{\mbox{\rm Sym}}}^p({\mathcal M})\otimes {{\mbox{\rm Sym}}}^q({\mathcal N}) $$ is weighted globally generated. Fix some coherent sheaf ${\mathcal F}$. \begin{enumerate} \item ${\mathcal M}$ is ample, so there exists a positive integer $n_1$ such that for all $n \geqslant n_1$, $$ {{\mbox{\rm Sym}}}^n({\mathcal M}) $$ is weighted globally generated. \item ${\mathcal N}$ is ample, so there exists a positive integer $n_2$ such that for all $n \geqslant n_2$, $$ {\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal N}) $$ is weighted globally generated. \item For each $r\in\{0,...,n_1-1\}$, the sheaf ${\mathcal F} \otimes {{\mbox{\rm Sym}}}^r({\mathcal M})$ is coherent. Since ${\mathcal N}$ is ample, there exists $m_r$ such that for all $n \geqslant m_r$, $$ {\mathcal F} \otimes {{\mbox{\rm Sym}}}^r({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^n({\mathcal N}) $$ is weighted globally generated. \item For each $s\in\{0,...,n_2-1\}$, the sheaf ${\mathcal F} \otimes {{\mbox{\rm Sym}}}^s({\mathcal N})$ is coherent. Since ${\mathcal M}$ is ample, there exists $l_s$ such that for all $n \geqslant l_s$, $$ {\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^s({\mathcal N}) $$ is weighted globally generated. \end{enumerate} Now take $n_0=\max_{r,s}\{r+m_r,s+l_s\}$; then for any $n\geqslant n_0$ $$ {\mathcal F} \otimes {{\mbox{\rm Sym}}}^p({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^q({\mathcal N}) $$ is weighted globally generated. Indeed, we distinguish three cases. \begin{enumerate}[(i)] \item Suppose $p <n_1$. Then $p+q\geqslant n_0 \geqslant p+m_p$, so $q\geqslant m_p$ and we are done by item 3. \item Suppose $q <n_2$. Then $p+q\geqslant n_0 \geqslant l_q+q$, so $p\geqslant l_q$ and we are done by item 4. \item Suppose $p \geqslant n_1$ and $q \geqslant n_2$; then ${{\mbox{\rm Sym}}}^p({\mathcal M})$ and ${\mathcal F}\otimes {{\mbox{\rm Sym}}}^q({\mathcal N})$ are weighted globally generated and so is their tensor product. \end{enumerate} We conclude that ${\mathcal M} \oplus {\mathcal N}$ is ample. \end{proof} \begin{cor} Let ${\mathcal M}$ and ${\mathcal N}$ be two sheaves. Then ${\mathcal M} \oplus {\mathcal N}$ is ample if and only if ${\mathcal M}$ and ${\mathcal N}$ are ample. \end{cor} \begin{proof} We already know that if ${\mathcal M}$ and ${\mathcal N}$ are ample then so is their direct sum. Conversely, ${\mathcal M}$ and ${\mathcal N}$ are quotients of ${\mathcal M}\oplus{\mathcal N}$, which is ample, hence they are ample as well. \end{proof} \begin{cor} The tensor product of an ample sheaf and a weighted globally generated sheaf is ample.
\end{cor} \begin{proof} Let ${\mathcal M}$ be an ample sheaf and ${\mathcal N}$ a weighted globally generated sheaf. So, $$ \bigoplus_{j = 0}^{l-1} {\mathcal O}(j)^{\oplus s_j} \rightarrow {\mathcal N} \rightarrow 0. $$ Tensoring by ${\mathcal M}$, $$ \bigoplus_{j = 0}^{l-1} {\mathcal M} \otimes {\mathcal O}(j)^{\oplus s_j} \rightarrow {\mathcal M} \otimes {\mathcal N} \rightarrow 0. $$ It suffices to show that ${\mathcal M} \otimes {\mathcal O}(j)$ for $j\in\{0,...,l-1\}$ is ample. Let ${\mathcal F}$ be a coherent sheaf and consider $$ {\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M} \otimes {\mathcal O}(j)) $$ for $n$ a non-negative integer. It is a quotient of $$ {\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M}) \otimes {{\mbox{\rm Sym}}}^n({\mathcal O}(j)) \cong {\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal O}(j)) \otimes {{\mbox{\rm Sym}}}^n({\mathcal M}). $$ But ${\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal O}(j))$ is a coherent sheaf and ${\mathcal M}$ is ample, so there exists a non-negative integer $n_0$ such that for all $n \geqslant n_0$ $$ {\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal O}(j)) \otimes {{\mbox{\rm Sym}}}^n({\mathcal M}) $$ is weighted globally generated. It follows that all of its quotients are weighted globally generated, in particular ${\mathcal F} \otimes {{\mbox{\rm Sym}}}^n({\mathcal M} \otimes {\mathcal O}(j))$. Hence ${\mathcal M} \otimes {\mathcal O}(j)$ is ample and the result follows. \end{proof} \begin{lemma} The sheaf ${\mathcal O}(1)$ is ample. \end{lemma} \begin{proof} Let ${\mathcal F}={{\mbox{\rm Sat}}}(F)$ be a coherent sheaf. So $F$ is a finitely generated module over $A$ generated by finitely many homogeneous elements $f_0,...,f_c$ of degrees $\rho_0,...,\rho_c$ respectively. Take $n_0=\max\{\rho_0,...,\rho_c\}$; then for each $n \geqslant n_0$ we have $$ n-\rho_i=a_il+r_i $$ where $0\leqslant r_i <l$ by the division algorithm. We claim that the following map is an epimorphism in $A{{\mbox{\rm --\,Sat}}}$, $$ \bigoplus_{j=0}^n \bigoplus_{i=0}^c {\mathcal O}(r_i) \rightarrow {\mathcal F}(n) $$ induced by $((0,...,0),...,(0,...,1_i,...,0)_j,...(0,...,0)) \mapsto x_j^{\frac{a_il}{d_j}}f_i$. Indeed, to prove the claim we need to show that the cokernel of the map in $A{{\mbox{\rm --\,GrMod}}}$ is torsion. So take $f\in F(n)$ homogeneous and assume that $f$ can be written as $kf_i$ for some $k\in A$ and some $i\in\{0,...,c\}$. Let $N=\max\left\lbrace \dfrac{a_il}{d_j}, i\in\{0,...,c\}, j\in\{0,...,n\} \right\rbrace $. Suppose that $h\in A$ is a homogeneous element of degree greater than $N$. Then it can be written as $h'x_j^{ \frac{a_il}{d_j}}$ for some $i\in\{0,...,c\}$ and $j\in\{0,...,n\}$. It follows that $hf$ is in the image of the map, so the cokernel is torsion, which proves the claim. \end{proof} Since ${\mathcal O}(1)$ is ample and weighted globally generated, ${\mathcal O}(2)$ is ample for any weighted projective stack. However, for ${\mathbb P}(3,5)$, seen as a variety, we have ${\mathcal O}(2) \cong {\mathcal O}(-1)$ which isn't ample. \begin{theorem} The tangent sheaf of any weighted projective stack is ample. \end{theorem} \begin{proof} We have the following short exact sequence \cite{Zho} $$ 0 \rightarrow {\mathcal O} \rightarrow \bigoplus_{j=0}^n {\mathcal O}(a_j) \rightarrow {\mathcal T} \rightarrow 0. $$ From the long exact sequence in cohomology applied to the above short exact sequence and the fact that ${\mathcal O}$ and ${\mathcal O}(a_j)$ are vector bundles, it follows that ${\mathcal T}$ is a vector bundle as well.
Each summand of the central term is ample since ${\mathcal O}(a_i)={\mathcal O}(1)^{\otimes a_i}$ and ${\mathcal O}(1)$ is ample. Moreover, ${\mathcal T}$ is the quotient of a finite direct sum of ample sheaves. Hence ${\mathcal T}$ is ample. \end{proof} We obtain the following corollary, proved first by Hartshorne. \begin{cor}[\cite{Har}] The tangent sheaf of a standard projective space is ample. \end{cor} A converse of this corollary, known as the Hartshorne conjecture and proved by Mori, provides a characterisation of smooth projective spaces. \begin{theorem}[\cite{Mor}] The only smooth irreducible projective variety with ample tangent bundle is isomorphic to ${\mathbb P}^n$ for some $n$. \end{theorem} It is natural to ask whether the conjecture of Hartshorne holds as well when extending the class of spaces. We formulate it as follows: \begin{conj} The only smooth irreducible projective stack with an ample tangent bundle is isomorphic to a weighted projective stack. \end{conj} \end{document}
\begin{document} \title{On scaling limits of planar maps with stable face-degrees} \begin{abstract} We discuss the asymptotic behaviour of random critical Boltzmann planar maps in which the degree of a typical face belongs to the domain of attraction of a stable law with index $\alpha \in (1,2]$. We prove that when conditioning such maps to have $n$ vertices, or $n$ edges, or $n$ faces, the vertex-set endowed with the graph distance suitably rescaled converges in distribution towards the celebrated Brownian map when $\alpha=2$, and, after extraction of a subsequence, towards another `$\alpha$-stable map' when $\alpha <2$, which improves on a first result due to Le Gall \& Miermont who assumed slightly more regularity. \end{abstract} \begin{figure} \caption{Simulations of large $\alpha$-stable Boltzmann maps with $\alpha=\numprint{1.7}$ on the left and $\alpha=\numprint{1.9}$ on the right. Courtesy of Nicolas Curien.} \label{fig:simu_cartes_stables} \end{figure} \section{Introduction and main result} This work deals with scaling limits of large random planar maps viewed as metric measured spaces. We assume that the reader is already acquainted with this theory; let us describe the precise model that we consider before stating our main results. We study rooted planar maps, which are finite (multi-)graphs embedded in the two-dimensional sphere, viewed up to homeomorphisms, and equipped with a distinguished oriented edge called the \emph{root-edge}. For technical reasons, we restrict ourselves to \emph{bipartite} maps which are those maps in which all faces have even degree. Given a sequence $\mathbf{q}= (q_{k})_{ k \geq 1}$ of non-negative numbers such that $q_k \ne 0$ for at least one $k \ge 3$ (in order to discard trivial cases), we define a Boltzmann measure $w^\mathbf{q}$ on the set $\mathbf{M}$ of all finite bipartite maps by assigning a weight: \[w^\mathbf{q}(M) \enskip=\enskip \prod_{f\in \mathsf{Faces}(M)} q_{ \deg(f)/2},\] to each such map $M$. We shall also consider rooted and \emph{pointed} maps in which we distinguish a vertex $\star$ in a map $M$; we then define a pointed Boltzmann measure on the set $\mathbf{M}^\bullet$ of pointed maps by setting $w^{\mathbf{q}, \bullet}(M, \star) = w^\mathbf{q}(M)$. Let $W^\mathbf{q} = w^\mathbf{q}(\mathbf{M})$ and $W^{\mathbf{q}, \bullet} = w^{\mathbf{q}, \bullet}(\mathbf{M}^\bullet)$ be their total mass; obviously the latter is greater than the former, but Bernardi, Curien \& Miermont~\cite{Bernardi-Curien-Miermont:A_Boltzmann_approach_to_percolation_on_random_triangulations} proved that \[W^\mathbf{q} < \infty \quad\text{if and only if}\quad W^{\mathbf{q}, \bullet} < \infty.\] When this holds, we say that the sequence $\mathbf{q}$ is \emph{admissible} and we normalise our measures into probability measures $\ensembles{P}^\mathbf{q}$ and $\ensembles{P}^{\mathbf{q}, \bullet}$ respectively. We assume further that $\mathbf{q}$ is \emph{critical}, which means that the number of vertices of a map has infinite variance under $\ensembles{P}^\mathbf{q}$, or equivalently infinite mean under $\ensembles{P}^{\mathbf{q}, \bullet}$. Such models of random maps have been first considered by Marckert \& Miermont~\cite{Marckert-Miermont:Invariance_principles_for_random_bipartite_planar_maps} who gave analytic admissibility and criticality criteria, recast in~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}, and which we shall recall later. 
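As a toy illustration of the weight $w^\mathbf{q}$ defined above, one may compute it directly from the list of face degrees of a bipartite map; representing a map by this list is our own simplification for the purpose of the example and is not how maps are handled in this paper.
\begin{verbatim}
from math import prod

def boltzmann_weight(face_degrees, q):
    # w^q(M) = product over the faces f of M of q_{deg(f)/2}.
    # face_degrees: the (even) degrees of the faces of a bipartite map M.
    # q: q[k] is the weight q_k for k >= 1 (q[0] is unused).
    assert all(d % 2 == 0 for d in face_degrees)
    return prod(q[d // 2] for d in face_degrees)

# A map whose three faces are quadrangles, with q_2 = 1/12,
# receives weight (1/12)**3.
print(boltzmann_weight([4, 4, 4], [None, 0.0, 1/12]))
\end{verbatim}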
Following the terminology introduced in the very recent work of Curien \& Richier~\cite{Curien-Richier:Duality_of_random_planar_maps_via_percolation}, we further assume that there exists $\alpha \in (1,2]$ such that our distributions are \emph{discrete stable with index $\alpha$}, which we define as follows: Under the pointed law $\ensembles{P}^{\mathbf{q}, \bullet}$, the degree of the face adjacent to the right of the root-edge (called the \emph{root-face}) belongs to the domain of attraction of a stable law with index $\alpha$. It can be checked that this degree under the non-pointed law $\ensembles{P}^\mathbf{q}$ is more regular, and under this assumption has finite variance for every $\alpha \in (1,2]$. We shall interpret the law of this degree as that of a typical face in a large pointed or non-pointed Boltzmann random map. Such an assumption was first formalised by Richier~\cite{Richier:Limits_of_the_boundary_of_random_planar_maps} (except that the case $\alpha=2$ was restricted to finite variance) and is more general than the one used e.g. in~\cite{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces,Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}. For every integer $n \ge 2$, let $\mathbf{M}_{E=n}$, $\mathbf{M}_{V=n}$ and $\mathbf{M}_{F=n}$ be the subsets of $\mathbf{M}$ of those maps with respectively $n-1$ edges, $n+1$ vertices (these shifts by one will simplify the statements) and $n$ faces. For every $S = \{E, V, F\}$ and every $n \ge 2$, we define \[\ensembles{P}^\mathbf{q}_{S=n}(M) = \ensembles{P}^\mathbf{q}(M \mid M \in \mathbf{M}_{S=n}), \qquad M \in \mathbf{M}_{S=n},\] the law of a rooted Boltzmann map conditioned to have `size' $n$. We define similarly pointed laws $\ensembles{P}^{\mathbf{q}, \bullet}_{S=n}$. Let us denote by $\zeta(M_n)$ the number of edges of the map $M_n$ sampled from such a law; note that it equals $n-1$ if $S=E$ but it is random otherwise. We shall implicitly assume that the support of $\mathbf{q}$ generates the whole group $\ensembles{Z}$, not just a strict subgroup, so these laws are well-defined for every $n$ large enough; the general case only requires mild modifications. We consider limits of large random maps in the following sense: given a finite map $M$, we endow its vertex-set (which we still denote by $M$) with the graph distance $d_{\mathrm{gr}}$ and the uniform probability measure $p_{\mathrm{gr}}$; the topology we use is then that given by the so-called \emph{Gromov--Hausdorff--Prokhorov} distance which makes the space of compact metric measured spaces (viewed up to isometries) a Polish space, see e.g. Miermont~\cite{Miermont:Tessellations_of_random_maps_of_arbitrary_genus}. \begin{thm} \label{thm:cv_cartes} There exists an increasing sequence $(B_n)_{n \ge 1}$ such that the following holds. Fix $S \in \{E, V, F\}$ and for every $n \ge 2$, sample $M_n$ from $\ensembles{P}^\mathbf{q}_{S=n}$ or from $\ensembles{P}^{\mathbf{q}, \bullet}_{S=n}$, then: \begin{enumerate} \item If $\alpha=2$, then we have the convergence in distribution in the sense of Gromov--Hausdorff--Prokhorov \[\left(M_n, B_{\zeta(M_n)}^{-1/2} d_{\mathrm{gr}}, p_{\mathrm{gr}}\right) \cvloi (\mathscr{M}, \mathscr{D}, \mathscr{m}),\] where $(\mathscr{M}, (\frac{9}{8})^{1/4} \mathscr{D}, \mathscr{m})$ is the (standard) \emph{Brownian map}. 
\item If $\alpha<2$, then from every increasing sequence of integers, one can extract a subsequence along which we have the convergence in distribution in the sense of Gromov--Hausdorff--Prokhorov, \[\left(M_n, B_{\zeta(M_n)}^{-1/2} d_{\mathrm{gr}}, p_{\mathrm{gr}}\right) \cvloi (\mathscr{M}, \mathscr{D}, \mathscr{m}),\] where $(\mathscr{M}, \mathscr{D}, \mathscr{m})$ is a random compact measured metric space with Hausdorff dimension $2\alpha$. \end{enumerate} \end{thm} \begin{rem}\label{rem:nombre_aretes_carte} \begin{enumerate} \item This result is reminiscent of the work of Duquesne~\cite{Duquesne:A_limit_theorem_for_the_contour_process_of_conditioned_Galton_Watson_trees} and Kortchemski~\cite{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves} on size-conditioned Bienaymé--Galton--Watson trees (see~\eqref{eq:Duquesne_Kortchemski} below) and indeed, the sequence $(B_n)_{n \ge 1}$ is the same as there; it is of order $n^{1/\alpha}$, and in the finite-variance regime, it takes the form $B_n = (n \sigma^2/2)^{1/2}$ for some $\sigma^2 \in (0,\infty)$. \item We shall see in Remark~\ref{rem:nombre_aretes_arbre} that under $\ensembles{P}^\mathbf{q}_{S=n}$ or $\ensembles{P}^{\mathbf{q}, \bullet}_{S=n}$ we have for some constant $Z_\mathbf{q} > 1$ \[n^{-1} \zeta(M_n) \cvproba Z_\mathbf{q} \enskip\text{if}\enskip S=V \qquad\text{and}\qquad n^{-1} \zeta(M_n) \cvproba (1-Z_\mathbf{q}^{-1})^{-1} \enskip\text{if}\enskip S=F,\] so the factor $B_{\zeta(M_n)}^{-1/2}$ may be replaced by $Z_\mathbf{q}^{-1/(2\alpha)} B_n^{-1/2}$ and $(1-Z_\mathbf{q}^{-1})^{1/(2\alpha)} B_n^{-1/2}$ respectively. \end{enumerate} \end{rem} In the Gaussian case $\alpha=2$, tightness in the sense of Gromov--Hausdorff of rescaled uniform random \emph{$2\kappa$-angulations} (all faces have degree $2\kappa$ fixed) with $n$ faces was first obtained by Le Gall~\cite{Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps}. The Brownian map was then characterised independently by Le Gall~\cite{Le_Gall:Uniqueness_and_universality_of_the_Brownian_map} and Miermont~\cite{Miermont:The_Brownian_map_is_the_scaling_limit_of_uniform_random_plane_quadrangulations} which yields the convergence of these maps; building upon the pioneer work of Marckert \& Miermont~\cite{Marckert-Miermont:Invariance_principles_for_random_bipartite_planar_maps}, Le Gall~\cite{Le_Gall:Uniqueness_and_universality_of_the_Brownian_map} also includes Boltzmann planar maps conditioned by the number of vertices, assuming exponential moments. This assumption was then reduced to a second moment in~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}, as a corollary of a more general model of random maps `with a prescribed degree sequence'. Let $\mathscr{D}^\ast = (\frac{9}{8})^{1/4} \mathscr{D}$, then in this finite variance regime, Theorem~\ref{thm:cv_cartes} reads thanks to the preceding remark: \[\left(M_n, \left(\frac{9}{4 \sigma^2 \zeta(M_n)}\right)^{1/4} d_{\mathrm{gr}}, p_{\mathrm{gr}}\right) \cvloi (\mathscr{M}, \mathscr{D}^\ast, \mathscr{m}),\] which recovers~\cite[Theorem~3]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}. 
In the case $\alpha < 2$, Theorem~\ref{thm:cv_cartes} extends a result due to Le Gall and Miermont~\cite{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces} who studied the Gromov--Hausdorff convergence of such maps conditioned by the number of vertices in the particular case where the probability that the root-face has degree $2k$ under $\ensembles{P}^{\mathbf{q}, \bullet}$ equals $C k^{-\alpha-1} (1+o(1))$ for some constant $C > 0$. Because the conjectured `stable maps' have not yet been characterised, the extraction of a subsequence is needed in Theorem~\ref{thm:cv_cartes}. Nonetheless, as in~\cite{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}, we derive some scaling limits which do not necessitate such an extraction: in Theorem~\ref{thm:profil} below, we give the limit of the maximal distance to the distinguished vertex in a pointed map, or to a uniformly chosen vertex in the non-pointed map, as well as the \emph{profile} of the map, given by the number of vertices at distance $k$ to such a vertex, for every $k \ge 0$. Let us finally mention the work of Richier~\cite{Richier:Limits_of_the_boundary_of_random_planar_maps} and more recently with Kortchemski~\cite{Kortchemski-Richier:The_boundary_of_random_planar_maps_via_looptrees} who analyse the geometric behaviour of the \emph{boundary} of the root-face when conditioned to be large, and so, roughly speaking, the geometric behaviour of macroscopic faces of the map. \begin{rem} \begin{enumerate} \item As in~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}, the proof of Theorem~\ref{thm:cv_cartes} actually shows that we can also take as notion of size of a map the number of faces whose degree belongs to a fixed subset $A \subset 2\ensembles{N}$, at least when either $A$ or its complement is finite. \item As observed in~\cite[Theorem~4]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}, the conditioning by the number of edges is special since $\mathbf{M}_{E=n}$ is a finite set for every $n$ fixed so we may define the law $\ensembles{P}^\mathbf{q}_{E=n}$ even when $\mathbf{q}$ is not admissible and our results still holds under appropriate assumptions. \item As in~\cite{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}, Theorem~\ref{thm:cv_cartes} and the other main results below hold when conditioning the maps to have `size' \emph{at least} $n$, the references cover this case and the proofs only require mild modifications. \end{enumerate} \end{rem} The proof of convergences as in Theorem~\ref{thm:cv_cartes} in~\cite{Le_Gall:Uniqueness_and_universality_of_the_Brownian_map, Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces} relied on a bijection due to Bouttier, Di Francesco \& Guitter~\cite{Bouttier-Di_Francesco-Guitter:Planar_maps_as_labeled_mobiles} which shows that a pointed map is encoded by `two-type' \emph{labelled tree} and one of the key steps was to prove that this labelled tree, suitably rescaled, converges in distribution towards a `continuous' limit which similarly describes the limit $(\mathscr{M}, \mathscr{D}, \mathscr{m})$. 
In~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence} we studied this two-type tree by further relying on a more recent work of Janson--Stef{\'a}nsson~\cite{Janson-Stefansson:Scaling_limits_of_random_planar_maps_with_a_unique_large_face} who established a bijection between these a `two-type' trees and `one-type' trees which are much easier to control. The scheme of the proof of the analogous statement in~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence} was first to prove that this `one-type' labelled tree converges towards a continuous object, then transporting this convergence to the two-type tree and finally conclude from the arguments developed in~\cite{Le_Gall:Uniqueness_and_universality_of_the_Brownian_map, Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}. In this paper, we bypass the bijection~\cite{Bouttier-Di_Francesco-Guitter:Planar_maps_as_labeled_mobiles} and only work with the one-type tree from~\cite{Janson-Stefansson:Scaling_limits_of_random_planar_maps_with_a_unique_large_face}; we prove the convergence of this object in Theorem~\ref{thm:cv_serpents_cartes} and deduce Theorem~\ref{thm:cv_cartes} by recasting the arguments from~\cite{Le_Gall:Uniqueness_and_universality_of_the_Brownian_map, Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}. On the one-hand, the advantage of the bijection from~\cite{Bouttier-Di_Francesco-Guitter:Planar_maps_as_labeled_mobiles} is that it also applies to non-bipartite maps (but it yields a `three-type' tree even more complicated to study) so in principle, one may use it to prove the convergence of such maps, whereas the bijection from~\cite{Janson-Stefansson:Scaling_limits_of_random_planar_maps_with_a_unique_large_face} only applies to bipartite maps. On the other hand, the latter bijection reduces the technical analysis of the tree, which opens the possibility to study more general models of random bipartite maps, such as those from~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence} in more complicated `large faces' regimes. In particular, our proof does not necessitate a tight control on the geometry of the tree, since it mostly relies on its {\L}ukasiewicz path which is rather simple to study. The rest of this paper is organised as follows: In Section~\ref{sec:cartes_arbres}, we recall the key bijection with labelled trees which in our case are randomly labelled size-conditioned Bienaymé--Galton--Watson trees. We recall their continuous analogues in Section~\ref{sec:arbres_continus} and state and prove their convergence in Theorem~\ref{thm:cv_serpents_cartes} in Section~\ref{sec:limite_arbres_etiquetes} which contains most of the technical parts and novelties of this work. Finally, in Section~\ref{sec:limite_cartes}, we state and prove Theorem~\ref{thm:profil} on the profile of distances and then prove Theorem~\ref{thm:cv_cartes}. \section{Maps and labelled trees} \label{sec:cartes_arbres} In this first section, let us briefly recall the notion of labelled (plane) trees and introduce some useful notation. We also describe the bijection between a pointed planar map and such a tree. \subsection{Plane trees} \label{sec:arbres} Following the notation of Neveu~\cite{Neuveu:Arbres_et_processus_de_Galton_Watson}, we view discrete trees as words. Let $\ensembles{N} = \{1, 2, \dots\}$ be the set of all positive integers and set $\ensembles{N}^0 = \{\varnothing\}$. 
Then a (plane) \emph{tree} is a non-empty subset $T \subset \bigcup_{n \ge 0} \ensembles{N}^n$ such that: \begin{enumerate} \item $\varnothing \in T$; \item if $u = (u_1, \dots, u_n) \in T$, then $pr(u) = (u_1, \dots, u_{n-1}) \in T$; \item if $u = (u_1, \dots, u_n) \in T$, then there exists an integer $k_u \ge 0$ such that $ui = (u_1, \dots, u_n, i) \in T$ if and only if $1 \le i \le k_u$. \end{enumerate} We shall view each vertex $u$ of a tree $T$ as an individual of a population for which $T$ is the genealogical tree. The vertex $\varnothing$ is called the \emph{root} of the tree and for every $u = (u_1, \dots, u_n) \in T$, $pr(u) = (u_1, \dots, u_{n-1})$ is its \emph{parent}, $k_u$ is the number of \emph{children} of $u$ (if $k_u = 0$, then $u$ is called a \emph{leaf}, otherwise, $u$ is called an \emph{internal vertex}), and $u1, \dots, uk_u$ are its children from left to right, $\chi_u = u_n$ is the relative position of $u$ among its siblings, and $|u| = n$ is its \emph{generation}. We shall denote by $\llbracket u , v \rrbracket$ the unique non-crossing path between $u$ and $v$. Fix a tree $T$ with $N+1$ vertices, listed $\varnothing = u_0 < u_1 < \dots < u_N$ in \emph{lexicographical order}. We describe two discrete paths which each encode $T$. First, its \emph{{\L}ukasiewicz path} $W = (W(j) ; 0 \le j \le N+1)$ is defined by $W(0) = 0$ and for every $0 \le j \le N$, \[W(j+1) = W(j) + k_{u_j}-1.\] One easily checks that $W(j) \ge 0$ for every $0 \le j \le N$ but $W(N+1)=-1$. Next, we define the \emph{height process} $H = (H(j); 0 \le j \le N)$ by setting for every $0 \le j \le N$, \[H(j) = |u_j|.\] The next lemma, whose proof is left as an exercise, gathers some deterministic results that we shall need (we refer to e.g. Le Gall~\cite{Le_Gall:Random_trees_and_applications} for a thorough discussion of such results). In order to simplify the notation, we identify the vertices of a tree with their index in the lexicographic order. \begin{lem}\label{lem:codage_marche_Luka} Let $T$ be a plane tree and $W$ be its {\L}ukasiewicz path. Fix a vertex $u \in T$, then \[W(u k_u) = W(u), \qquad W(uj') = \inf_{[uj,uj']} W \qquad\text{and}\qquad j' - j = W(uj) - W(uj')\] for every $1 \le j \le j' \le k_u$. \end{lem} Note that $W(u) - W(pr(u))$ equals the number of siblings of $u$ which lie to its right, so $W(u)$ equals the total number of individuals branching off to the right of the ancestral line $\llbracket \varnothing, u\llbracket$. \begin{figure} \caption{A tree on the left, with its vertices listed in lexicographical order, and on the right, its {\L}ukasiewicz path $W$ on top and its height process $H$ below.} \label{fig:codage_arbre} \end{figure} \subsection{Labelled trees} For every $k \ge 1$, let us consider the set of \emph{bridges with no negative jumps} \begin{equation}\label{eq:pont_sans_saut_negatif} \mathcal{B}_k^+ = \left\{(x_1, \dots, x_k): x_1, x_2-x_1, \dots, x_k-x_{k-1} \in \{-1, 0, 1, 2, \dots\} \text{ and } x_k=0\right\}. \end{equation} A \emph{labelling} $\ell$ of a plane tree $T$ is a function from its set of vertices to $\ensembles{Z}$ such that \begin{enumerate} \item the root of $T$ has label $\ell(\varnothing) = 0$, \item for every vertex $u$ with $k_u \ge 1$ children, the sequence of increments $(\ell(u1)-\ell(u), \dots, \ell(uk_u)-\ell(u))$ belongs to $\mathcal{B}_{k_u}^+$. \end{enumerate} We stress that the last child of every internal vertex carries the same label as its parent; for example, the right-most branch in the tree only contains zeros.
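For the reader who wishes to experiment with these encodings, the following minimal Python sketch (an illustration only, not used anywhere in this paper) computes the {\L}ukasiewicz path and the height process of a plane tree described by the sequence of the child counts of its vertices in lexicographical order; the small example is the tree $\{\varnothing, 1, 11, 12, 2\}$.
\begin{verbatim}
# Illustrative sketch only: a plane tree is encoded by the child counts
# (k_{u_0}, ..., k_{u_N}) of its vertices in lexicographical order.

def lukasiewicz_path(child_counts):
    """W(0) = 0 and W(j+1) = W(j) + k_{u_j} - 1."""
    W = [0]
    for k in child_counts:
        W.append(W[-1] + k - 1)
    return W

def height_process(child_counts):
    """H(j) = |u_j|, computed by a depth-first traversal that keeps a
    stack of the generations of the vertices still to be visited."""
    H, stack = [], [0]
    for k in child_counts:
        h = stack.pop()              # generation of the current vertex
        H.append(h)
        stack.extend([h + 1] * k)    # its k children, visited later
    return H

# The tree {root, 1, 11, 12, 2} has child counts (2, 2, 0, 0, 0) in
# lexicographical order root < 1 < 11 < 12 < 2.
counts = [2, 2, 0, 0, 0]
print(lukasiewicz_path(counts))      # [0, 1, 2, 1, 0, -1]
print(height_process(counts))        # [0, 1, 2, 2, 1]
\end{verbatim}
On this small example one can check by hand the identities of Lemma~\ref{lem:codage_marche_Luka} for the root and its two children.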
Define the \emph{label process} $L(k) = \ell(u_k)$, where $(u_0, \dots, u_N)$ is the sequence of vertices of $T$ in lexicographical order; the labelled tree is encoded by the pair $(H, L)$, see Figure~\ref{fig:arbre_etiquete}. \begin{figure} \caption{A labelled tree on the left, and on the right, its height process on top and its label process below.} \label{fig:arbre_etiquete} \end{figure} Without further notice, throughout this work, every {\L}ukasiewicz path shall be viewed as a step function, jumping at integer times, whereas height and label processes shall be viewed as continuous functions after interpolating linearly between integer times. \subsection{Labelled trees and pointed maps} \label{sec:bijection} Bouttier, Di Francesco \& Guitter~\cite{Bouttier-Di_Francesco-Guitter:Planar_maps_as_labeled_mobiles} proved that pointed maps are in bijection with some labelled trees, different from the preceding section; in the bipartite case, Janson \& Stef{\'a}nsson~\cite{Janson-Stefansson:Scaling_limits_of_random_planar_maps_with_a_unique_large_face} then related these trees to labelled trees as in the preceding section. Let us describe a direct construction of this bijection between labelled trees and pointed maps and leave it to the reader as an exercise to verify that it indeed corresponds to the two previous bijections (one may compare the figures here and those in~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}). Let us start with the construction of a pointed map from a labelled tree $(T, \ell)$, depicted in Figure~\ref{fig:arbre_carte}; the construction consists of two steps. Let $(u_0, \dots, u_N)$ be the vertices of $T$ listed in lexicographical order. For every $0 \le i \le N$, set $u_{N+1+i} = u_i$. We add an extra vertex $\star$ labelled $\min_{u \in T} \ell(u)-1$ outside of the tree $T$ and construct a first planar graph $G$ on the vertex-set of $T$ and $\star$ by drawing edges as follows: for every $0 \le i \le N-1$, \begin{itemize} \item if $\ell(u_i) > \min_{0 \le k \le N} \ell(u_k)$, then we draw an edge between $u_i$ and $u_j$ where $j = \min\{k > i: \ell(u_k) = \ell(u_i)-1\}$, \item if $\ell(u_i) = \min_{0 \le k \le N} \ell(u_k)$, then we draw an edge between $u_i$ and $\star$. \end{itemize} We stress that we exclude the last vertex $u_N$ in this construction; it indeed yields a planar graph $G$. In a second step, we merge every internal vertex of the tree $T$ with its last child; then $G$ becomes a map $M$ with labelled vertices. We shift all labels by subtracting $\min_{u \in T} \ell(u)-1$; it can be checked that these new labels are just the graph distance to $\star$ in the map $M$. We also distinguish the image after the merging operation of the first edge that we drew, for $i=0$. The latter is non-oriented; let $e_+$ and $e_-$ be its extremities so that $d_{\mathrm{gr}}(e_-, \star) = d_{\mathrm{gr}}(e_+, \star)-1$ and let us orient the edge from $e_+$ to $e_-$; these maps are called \emph{negative} in~\cite{Marckert-Miermont:Invariance_principles_for_random_bipartite_planar_maps}. \begin{figure} \caption{The negative map associated with a labelled tree.} \label{fig:arbre_carte} \end{figure} \begin{figure} \caption{The labelled tree associated with a negative map.} \label{fig:carte_arbre} \end{figure} Let us next construct a labelled tree from a negative pointed map $(M, \star)$, as depicted in Figure~\ref{fig:carte_arbre}. First, label all vertices by their graph distance to $\star$.
In every face of $M$, place a new unlabelled vertex and mark each corner when the next vertex of $M$ in clockwise order has a smaller label. Then start with the root-face, adjacent to the right of the root-edge. Link the new vertex in this face to every marked corner if it is the only marked corner of this vertex, otherwise, erase the mark and link the new vertex to the one in the face which contains the next marked corner of this vertex in clockwise order. Proceed similarly with the new vertices attached to the one in the root-face: link each of them to the marked corners in their face if they are the only remaining ones around their vertex, otherwise, remove the mark and link the new vertex to the next one in clockwise order around the vertex. Continue recursively until all faces have been considered. This yields a planar tree that we root at the new vertex in the root-face, whose first child is either $e_-$, the target of the root-edge, or the new vertex in the next face in clockwise order around it, if any. Then assign to each new vertex the label of its last child and finally shift all labels so that the root of the tree has label $0$, to get a labelled tree as in the preceding section. We claim that these constructions are the inverse of one another and yield a bijection between labelled trees and negative maps (the construction is very close to~\cite{Bouttier-Di_Francesco-Guitter:Planar_maps_as_labeled_mobiles}; one can thus follow their detailed proof). Recall that the root-face of a map is the face adjacent to the right of the root-edge. This bijection enjoys the following properties: \begin{enumerate} \item The leaves of the tree are in one-to-one correspondence with the vertices different from the distinguished one in the map, and the label of a leaf minus the infimum over all labels, plus one, equals the graph distance between the corresponding vertex of the map and the distinguished vertex. \item The internal vertices of the tree are in one-to-one correspondence with the faces of the map, and the number of children of the vertex is half the degree of the face. \item The root-face of the map corresponds to the root-vertex of the tree. \item The map and the tree have the same number of edges. \end{enumerate} In order to have a bijection between labelled trees and \emph{positive} maps (in which the root-edge is oriented from $e_-$ to $e_+$), one just reverses the root-edge in order to get a negative map. Note that Property~(iii) above does not hold anymore and it does not seem clear which internal vertex of the tree corresponds to the original root-face. Nonetheless, by `mirror symmetry' of the map (which preserves positivity or negativity of the map), the degrees of the faces on both sides of the root-edge have the same distribution, so in both cases of positive or negative maps, the half-degree distribution of the root-face is the offspring distribution of the root of the tree. Property~(i) above explains how to partially translate the metric properties of the map to the labelled tree, whereas Property~(ii) is important because it gives us the distribution of the tree when the map is a random Boltzmann map, as described below. \subsection{Random labelled trees} \label{sec:BGW} Let us introduce the law of the labelled tree associated with a pointed map sampled from $\ensembles{P}^{\mathbf{q}, \bullet}$.
Let $q_0 = 1$ and define the power series \[g_\mathbf{q}(x) = \sum_{k \ge 0} x^k \binom{2k-1}{k-1} q_k, \qquad x \ge 0.\] Then $g_\mathbf{q}$ is convex, strictly increasing and continuous up to its radius of convergence, and $g_\mathbf{q}(0)=1$. In particular, it has at most two fixed points, and if it has exactly one, then at that point, the graph of $g_\mathbf{q}$ either crosses the line $y=x$, or is tangent to it. It was argued in~\cite[Section~7.1]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}, recasting the discussion from~\cite[Section~1.2]{Marckert-Miermont:Invariance_principles_for_random_bipartite_planar_maps} in the context of the Janson--Stef{\'a}nsson bijection, that the sequence $\mathbf{q}$ is admissible and critical exactly when $g_\mathbf{q}$ falls into the last case, and we denote by $Z_\mathbf{q}$ the only fixed point, which satisfies $g_\mathbf{q}'(Z_\mathbf{q}) = 1$. Let us mention that $Z_\mathbf{q}$ equals $(W^{\mathbf{q},\bullet}+1)/2 > 1$. Such a sequence $\mathbf{q}$ thus induces a probability measure on $\ensembles{Z}_+ = \{0, 1, 2, \dots\}$ with mean one, given by: \begin{equation}\label{eq:loi_GW_carte_Boltzmann} \mu_\mathbf{q}(k) = Z_\mathbf{q}^{k-1} \binom{2k-1}{k-1} q_k, \qquad k \ge 0. \end{equation} We shall consider random labelled trees, sampled as follows. First, let $T$ be a Bienaymé--Galton--Watson tree with offspring distribution $\mu_\mathbf{q}$, which means that the probability that $T$ equals a given finite tree $\tau$ is $\prod_{u \in \tau} \mu_\mathbf{q}(k_u)$. For every subset $A \subset \ensembles{Z}_+$ such that $\mu_\mathbf{q}(A) \ne 0$ and for every $n \ge 1$, we let $T_{A,n}$ be such a tree conditioned to have exactly $n$ vertices with offspring in $A$; the asymptotic behaviour of such trees has been investigated by Kortchemski~\cite{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves}, with the restriction that either $A$ or its complement is finite. We shall be particularly interested in the sets $A=\ensembles{Z}_+$, so that the tree is conditioned on its total progeny, $A=\{0\}$, so that the tree is conditioned on its number of leaves, and $A = \ensembles{N}$, so that the tree is conditioned on its number of internal vertices. We let $\zeta(T_{A,n})$ be the number of edges of $T_{A,n}$. Next, conditional on the tree $T$ (or $T_{A,n}$), we sample uniformly random labels $(\ell(u))_{u \in T}$ satisfying the conditions described in Section~\ref{sec:arbres}: the root has label $\ell(\varnothing) = 0$ and the sequences $(\ell(ui)-\ell(u))_{1 \le i \le k_u}$ are independent when $u$ ranges over all internal vertices of $T$ and each is uniformly distributed in $\mathcal{B}_{k_u}^+$. Let us observe that the cardinality of $\mathcal{B}_k^+$ is precisely the binomial factor $\binom{2k-1}{k-1}$ in the definition of $\mu_\mathbf{q}$. Also, it is well-known and easy to check that a uniform random bridge in $\mathcal{B}^+_k$ has the law of the first $k$ steps of a random walk conditioned to end at $0$, with step distribution $\sum_{i \ge -1} 2^{-i-2} \delta_i$, which is centred and has variance $2$. One easily checks (see e.g.~\cite[Proposition~11]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}) that this labelled tree $(T, (\ell(u))_{u \in T})$ is the one associated, in the bijection described previously, with a pointed Boltzmann map sampled from $\ensembles{P}^{\mathbf{q}, \bullet}$.
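As a small illustration of the last observation (again a sketch for the interested reader, not used anywhere in the arguments below), one may sample a uniform bridge of $\mathcal{B}_k^+$ by rejection: run the walk with step distribution $\sum_{i \ge -1} 2^{-i-2} \delta_i$ for $k$ steps and keep the first trajectory which ends at $0$.
\begin{verbatim}
import random

def geometric_step():
    """One step of law sum_{i >= -1} 2^{-i-2} delta_i:
    P(-1) = 1/2, P(0) = 1/4, P(1) = 1/8, ...; centred, variance 2."""
    i = -1
    while random.random() < 0.5:
        i += 1
    return i

def uniform_bridge(k):
    """A uniform element (x_1, ..., x_k) of B_k^+, by rejection: every
    path with increments >= -1 ending at 0 has the same probability
    2^{-2k} under the unconditioned walk, hence uniformity."""
    while True:
        x, path = 0, []
        for _ in range(k):
            x += geometric_step()
            path.append(x)
        if path[-1] == 0:
            return path

random.seed(0)
print(uniform_bridge(4))   # a random element of B_4^+, ending at 0
\end{verbatim}
Since the walk is centred with variance $2$, the acceptance probability decays like $k^{-1/2}$ by the local limit theorem, so this naive procedure has expected cost of order $k^{3/2}$ and is only meant for moderate values of $k$.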
Finally, the tree is conditioned to have $n$ vertices, or $n$ internal vertices, or $n$ leaves, when the map is conditioned to have $n-1$ edges, $n$ faces, and $n+1$ vertices respectively. Thanks to Property~(iii) of the bijection in the preceding section, $\mu_\mathbf{q}$ is the law of the half-degree of the root-face under $\ensembles{P}^{\mathbf{q}, \bullet}$. For the rest of this paper, we further assume that it belongs to the domain of attraction of a stable law with index $\alpha \in (1,2]$, which means that either it has finite variance $\sum_{k=0}^\infty k^2 \mu_\mathbf{q}(k) < \infty$ and then $\alpha = 2$, or the tail can be written as $\sum_{k=j}^\infty \mu_\mathbf{q}(k) = j^{-\alpha} l(j)$, where $l$ is a \emph{slowly varying} function at infinity, which means that for every $c > 0$, it holds that $\lim_{x \to \infty} l(cx) / l(x) = 1$. We refer the reader to~\cite[Proposition~4]{Curien-Richier:Duality_of_random_planar_maps_via_percolation} for three equivalent assumptions. Using the notation from this reference, the root-face under $\ensembles{P}^\mathbf{q}$ has degree $2k$ with probability proportional to $q_k W_\mathbf{q}^{(k)}$ which, under our assumption, behaves as $q_k r_\mathbf{q}^{-k} k^{-\alpha-1/2}$. Using the fact that $q_k r_\mathbf{q}^{-k}$ is almost $\nu_\mathbf{q}(k)$ and that $\nu_\mathbf{q}$ has regularly varying tails with index $\alpha-1/2$, we see that, informally, $\nu_\mathbf{q}(k)$ behaves as $k^{-\alpha-1/2}$, so finally $q_k W_\mathbf{q}^{(k)} \approx k^{-2\alpha-1}$ and so the root-face under $\ensembles{P}^\mathbf{q}$ has regularly varying tails with index $2\alpha > 2$. This can be made rigorous using arguments similar to those of~\cite[Proposition~4]{Curien-Richier:Duality_of_random_planar_maps_via_percolation}. \section{Continuous labelled trees} \label{sec:arbres_continus} In this short section, we briefly describe the continuous limits of labelled size-conditioned Bienaymé--Galton--Watson trees; the statement and proof of such a convergence are given in Section~\ref{sec:limite_arbres_etiquetes}. For the rest of this paper, we fix an admissible and critical sequence $\mathbf{q}$ such that its support generates the whole group $\ensembles{Z}$ and such that $\mu_\mathbf{q}$ defined by~\eqref{eq:loi_GW_carte_Boltzmann} belongs to the domain of attraction of a stable law with index $\alpha \in (1,2]$. Then there exists an increasing sequence $(B_n)_{n \ge 1}$ such that if $(\xi_n)_{n \ge 1}$ is a sequence of i.i.d. random variables sampled from $\mu_\mathbf{q}$, then $B_n^{-1} (\xi_1 + \dots + \xi_n - n)$ converges in distribution to a random variable $X^{(\alpha)}$ whose law is characterised by its Laplace transform $\ensembles{E}[\exp(-\lambda X^{(\alpha)})] = \exp(\lambda^\alpha)$ for every $\lambda \ge 0$. Recall that $n^{-1/\alpha} B_n$ is slowly varying at infinity and that if $\mu_\mathbf{q}$ has variance $\sigma^2_\mathbf{q} \in (0,\infty)$, then this falls in the case $\alpha=2$ and we may take $B_n = (n \sigma^2_\mathbf{q}/2)^{1/2}$. We stress that with this normalisation, $X^{(2)}$ has the centred Gaussian law with variance $2$. \subsection{The stable trees} \label{sec:arbres_stables} The continuous analogues of size-conditioned Bienaymé--Galton--Watson trees are the so-called \emph{stable Lévy trees} with index $\alpha \in (1,2]$.
Let $\mathscr{X} = (\mathscr{X}_t ; t \in [0,1])$ denote the normalised excursion of the $\alpha$-stable \emph{Lévy process} with no negative jump whose value at time $1$ has the law of $X^{(\alpha)}$, and let further $\mathscr{H} = (\mathscr{H}_t ; t \in [0,1])$ be the associated \emph{height function}; we refer to e.g.~\cite{Duquesne:A_limit_theorem_for_the_contour_process_of_conditioned_Galton_Watson_trees} for the definition of this object. In the case $\alpha=2$, the two processes $\mathscr{X}$ and $\mathscr{H}$ are equal, both to $\sqrt{2}$ times the standard Brownian excursion. In any case, $\mathscr{H}$ is a non-negative, continuous function, which vanishes only at $0$ and $1$. Like any such function, it encodes a `continuous tree' called the $\alpha$-stable Lévy tree $\mathscr{T}$ of Duquesne, Le Gall \& Le Jan~\cite{Duquesne:A_limit_theorem_for_the_contour_process_of_conditioned_Galton_Watson_trees,Le_Gall-Le_Jan:Branching_processes_in_Levy_processes_the_exploration_process}, which generalises the celebrated Brownian tree of Aldous~\cite{Aldous:The_continuum_random_tree_3} in the case $\alpha=2$. Precisely, for every $s, t \in [0,1]$, set \[d(s,t) = \mathscr{H}_s + \mathscr{H}_t - 2 \min_{r \in [\min(s, t), \max(s, t)]} \mathscr{H}_r.\] One easily checks that $d$ is a random pseudo-metric on $[0,1]$; we then define an equivalence relation on $[0,1]$ by setting $s \sim t$ whenever $d(s,t)=0$. Consider the quotient space $\mathscr{T} = [0,1] / \sim$ and let $\pi$ be the canonical projection $[0,1] \to \mathscr{T}$; then $d$ induces a metric on $\mathscr{T}$ that we still denote by $d$. The space $(\mathscr{T}, d)$ is a so-called compact real tree, naturally rooted at $\pi(0) = \pi(1)$. \subsection{The continuous distance process} We construct next another process $\mathscr{L} = (\mathscr{L}_t ; t \in [0,1])$ called the \emph{continuous distance process} on the same probability space as $\mathscr{H}$, which is intrinsically different according to whether $\alpha=2$ or $\alpha<2$. Let us start with the latter case, which is analogous to the discrete setting. Indeed, in the discrete setting, the label increment between a vertex and its parent was given by the value of a random discrete bridge of length equal to the offspring of the parent, at a time given by the position of the child. Loosely speaking, we do the same when $\alpha < 2$, by taking random Brownian bridges. Precisely, suppose that $\alpha < 2$ and that $(b_i)_{i \ge 1}$ are i.i.d. standard Brownian bridges of duration $1$ from $0$ to $0$, defined on the same probability space as $\mathscr{X}$ and independent of the latter; by the scaling property, for every $x > 0$, the process $(x^{1/2} b_1(t/x) ; t \in [0,x])$ is a standard Brownian bridge of duration $x$. For every $0 \le s \le t \le 1$, put \[\mathscr{I}_{s, t} = \inf_{r \in [s,t]} \mathscr{X}_r.\] For every $t \in (0,1)$, let $\Delta \mathscr{X}_t = \mathscr{X}_t - \mathscr{X}_{t-} \ge 0$ be the `jump' of $\mathscr{X}$ at time $t$ and let $(t_i)_{i \ge 1}$ be a measurable enumeration of those times $t$ such that $\Delta \mathscr{X}_t > 0$. We then put for every $t \in [0,1]$: \begin{equation}\label{eq:etiquettes_stables} \mathscr{L}_t = \sqrt{2} \sum_{i \ge 1} \Delta \mathscr{X}_{t_i}^{1/2} b_i\left(\frac{\mathscr{I}_{t_i, t} - \mathscr{X}_{t_i-}}{\Delta \mathscr{X}_{t_i}}\right) \ind{\mathscr{I}_{t_i, t} \ge \mathscr{X}_{t_i-}} \ind{t_i \le t}.
\end{equation} According to Le Gall \& Miermont~\cite[Propositions~5 and~6]{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}, this series converges in $L^2$ and the process $\mathscr{L}$ admits a continuous modification, even H\"{o}lder continuous for any index smaller than $1/(2\alpha)$. The factor $\sqrt{2}$ is added here in the definition of $\mathscr{L}$ in order to have statements without constants. When $\alpha=2$, the process $\mathscr{X}$ is $\sqrt{2}$ times the Brownian excursion so it has continuous paths. To understand the definition, imagine that, in the discrete setting, the tree $T_n$ is binary, i.e. internal vertices always have two children; then the label increment between such an internal vertex and its first child equals $-1$ or $1$ with probability $1/2$ each, and given a `typical' vertex, each of its ancestors is either the first or the second child of its parent, with probability roughly $1/2$ each, so the sequence of increments along an ancestral line resembles a centred random walk with step $-1$ or $1$ with probability $1/4$ each and $0$ with probability $1/2$. In the continuous setting of the Brownian tree, we define the process $\mathscr{L}$ conditional on $\mathscr{H}$ as a centred Gaussian process satisfying for every $s,t \in [0,1]$, \[\Esc{|\mathscr{L}_s - \mathscr{L}_t|^2}{\mathscr{H}} = \frac{2}{3} \cdot d(s,t) \quad\text{or, equivalently,}\quad \Esc{\mathscr{L}_s \mathscr{L}_t}{\mathscr{H}} = \frac{2}{3} \min_{r \in [\min(s, t), \max(s, t)]} \mathscr{H}_r.\] Again, the factor $2/3$ removes the constants in our statements and will be explained below. This process is called the \emph{head of the Brownian snake} driven by $\mathscr{H}$~\cite{Le_Gall:Nachdiplomsvorlesung,Duquesne-Le_Gall:Random_trees_Levy_processes_and_spatial_branching_processes}; it is known, see e.g.~\cite[Chapter~IV.4]{Le_Gall:Nachdiplomsvorlesung}, that it admits a continuous version. In all cases $\alpha \in (1,2]$, without further notice, we shall work throughout this paper with the continuous version of $\mathscr{L}$. Observe that, almost surely, $\mathscr{L}_0=0$ and $\mathscr{L}_s = \mathscr{L}_t$ whenever $s \sim t$, so $\mathscr{L}$ can be seen as a random motion indexed by $\mathscr{T}$ by setting $\mathscr{L}_{\pi(t)} = \mathscr{L}_t$ for every $t \in [0,1]$. We interpret $\mathscr{L}_x$ as the label of an element $x \in \mathscr{T}$; the pair $(\mathscr{T}, (\mathscr{L}_x; x \in \mathscr{T}))$ is a continuous analog of labelled plane trees. \begin{rem} We point out that, when $\alpha=2$, the process $\mathscr{L}$ is loosely speaking (up to constants) a Brownian motion indexed by the Brownian tree, which is denoted by $\mathscr{S}$ in~\cite{Marzouk:Scaling_limits_of_discrete_snakes_with_stable_branching}, but it is \emph{not} a Brownian motion indexed by the stable tree in the case $\alpha < 2$; this object is studied in~\cite{Marzouk:Scaling_limits_of_discrete_snakes_with_stable_branching}. \end{rem} \section{Scaling limits of labelled trees} \label{sec:limite_arbres_etiquetes} Throughout this section, we fix $A \subset \ensembles{Z}_+$ such that either $A$ or $\ensembles{Z}_+\setminus A$ is finite and $\mu_\mathbf{q}(A) \ne 0$, and for every $n \ge 1$ we let $T_{A,n}$ be a Bienaymé--Galton--Watson tree with offspring distribution $\mu_\mathbf{q}$ conditioned to have exactly $n$ vertices with offspring in $A$; recall that $\zeta(T_{A,n})$ denotes the number of edges of $T_{A,n}$.
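Before turning to the convergence results, let us give one last illustrative sketch (with no claim of efficiency, and not used in the arguments below): the conditioned tree $T_{A,n}$ just introduced can be simulated naively by rejection, generating unconditioned Bienaymé--Galton--Watson trees until one of them has exactly $n$ vertices with offspring in $A$. The offspring law used in the example is the critical geometric law of mean $1$; it is only a placeholder and is not claimed to be of the form $\mu_\mathbf{q}$ for an admissible and critical weight sequence $\mathbf{q}$.
\begin{verbatim}
import random

def sample_mu():
    """Placeholder critical offspring law: P(k) = 2^{-k-1}, mean 1."""
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def bgw_child_counts():
    """Child counts of an unconditioned BGW tree in lexicographical
    order: i.i.d. samples of mu, stopped when the Lukasiewicz path
    first hits -1 (here pending = W + 1)."""
    counts, pending = [], 1
    while pending > 0:
        k = sample_mu()
        counts.append(k)
        pending += k - 1
    return counts

def conditioned_tree(n, A):
    """Child counts of T_{A,n}: resample until exactly n vertices have
    their number of children in A (naive rejection sampling)."""
    while True:
        counts = bgw_child_counts()
        if sum(1 for k in counts if k in A) == n:
            return counts

random.seed(1)
tree = conditioned_tree(5, {0})   # conditioned to have 5 leaves
print(len(tree), "vertices,", len(tree) - 1, "edges")
\end{verbatim}
In the setting of Section~\ref{sec:BGW}, the case $A=\{0\}$ used in this toy example would correspond to conditioning the associated Boltzmann map by its number of vertices.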
We then sample uniformly random labels $(\ell(u))_{u \in T_{A,n}}$ as in Section~\ref{sec:BGW}. Duquesne~\cite{Duquesne:A_limit_theorem_for_the_contour_process_of_conditioned_Galton_Watson_trees} in the case $A = \ensembles{Z}_+$ (so $\zeta(T_{A,n}) = n-1$), see also Kortchemski~\cite{Kortchemski:A_simple_proof_of_Duquesne_s_theorem_on_contour_processes_of_conditioned_Galton_Watson_trees}, and then Kortchemski~\cite{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves} in the general case, proved the convergence of the {\L}ukasiewicz path and height process: \begin{equation}\label{eq:Duquesne_Kortchemski} \left(\frac{1}{B_{\zeta(T_{A,n})}} W_n(\zeta(T_{A,n}) t), \frac{B_{\zeta(T_{A,n})}}{\zeta(T_{A,n})} H_n(\zeta(T_{A,n}) t)\right)_{t \in [0,1]} \cvloi (\mathscr{X}_t, \mathscr{H}_t)_{t \in [0,1]}, \end{equation} in $\mathscr{D}([0,1], \ensembles{R}) \otimes \mathscr{C}([0,1], \ensembles{R})$. Let us point out that the work~\cite{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves} focuses on the case $A=\{0\}$ and most of the results we shall need are developed in this case, but as explained in Section~8 there, the arguments extend to the general case, at least as long as either $A$ or its complement is finite. \begin{rem}\label{rem:nombre_aretes_arbre} As observed by Kortchemski~\cite{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves}, see e.g. Corollary~3.3 there for a stronger result, it holds that \[n^{-1} \zeta(T_{A,n}) \cvproba \mu_\mathbf{q}(A)^{-1}.\] Therefore, we may replace $\zeta(T_{A,n})$ by $\mu_\mathbf{q}(A)^{-1} n$ in~\eqref{eq:Duquesne_Kortchemski} above and in Theorem~\ref{thm:cv_serpents_cartes} below. In the cases $A = \{0\}$ and $A = \ensembles{N}$, recall that $T_{A,n}$ is related to a Boltzmann map conditioned to have $n+1$ vertices and $n$ faces respectively. Since $\mu_\mathbf{q}(0) = Z_\mathbf{q}^{-1}$, this explains Remark~\ref{rem:nombre_aretes_carte}. \end{rem} As alluded to in the introduction, the key to proving Theorem~\ref{thm:cv_cartes} is the following result. \begin{thm} \label{thm:cv_serpents_cartes} The convergence in distribution \[\left(B_{\zeta(T_{A,n})}^{-1/2} L_n(\zeta(T_{A,n}) t)\right)_{t \in [0,1]} \cvloi (\mathscr{L}_t)_{t \in [0,1]},\] holds in $\mathscr{C}([0,1], \ensembles{R})$ jointly with~\eqref{eq:Duquesne_Kortchemski}. \end{thm} The proof of the convergence of $L_n$ occupies the rest of this section. We first prove that it is tight and then we characterise the finite dimensional marginals, using two different arguments for the non-Gaussian case $\alpha < 2$ and the Gaussian case $\alpha = 2$, since the limit process $\mathscr{L}$ is defined in two different ways. Let us first comment on this statement and on the constants in the definition of $\mathscr{L}$. We assume $A = \ensembles{Z}_+$ to ease the notation in this informal discussion. For a vertex $u \in T_{A,n}$, the label increments between consecutive ancestors are independent and distributed as $X_{k,j}$ when an ancestor has $k \ge 1$ children and the one on the path to $u$ is the $j$-th one, where $(X_{k,1}, \dots, X_{k,k})$ is uniformly distributed in $\mathcal{B}_k^+$, as defined in \eqref{eq:pont_sans_saut_negatif}.
Since the latter has the law of a random walk conditioned to be at $0$ at time $k$, with step distribution $\sum_{i \ge -1} 2^{-i-2} \delta_i$, which is centred and has variance $2$, a conditional version of Donsker's invariance principle for random bridges (see e.g.~\cite[Lemma~10]{Bettinelli-Scaling_limits_for_random_quadrangulations_of_positive_genus} for a detailed proof of the latter) yields \begin{equation}\label{eq:Donsker_ponts} \left((2k)^{-1/2} X_{k, kt}\right)_{t \in [0,1]} \cvloi (b_t)_{t \in [0,1]}, \end{equation} where, as usual, on the left we have linearly interpolated, and $b$ is the standard Brownian bridge. The factor $\sqrt{2}$ is the same as in the definition of $\mathscr{L}$ for $\alpha < 2$ in~\eqref{eq:etiquettes_stables} and one must check that the $k$'s and $j$'s converge towards the $\Delta \mathscr{X}_{t_i}$'s and the $\mathscr{I}_{t_i, t} - \mathscr{X}_{t_i-}$'s. In the case $\alpha=2$, suppose furthermore that the variance $\sigma_\mathbf{q}^2$ of $\mu_\mathbf{q}$ is finite, so $B_n = (n \sigma_\mathbf{q}^2 / 2)^{1/2}$. Since $X_{k,j}$ has variance $2j(k-j)/(k+1)$ and, as we will see, there is typically a proportion about $\mu_\mathbf{q}(k)$ of such ancestors, the label $\ell(u)$ has variance about \[\sum_{k \ge 1} \sum_{j=1}^k |u| \mu_\mathbf{q}(k) \frac{2j(k-j)}{k+1} = |u| \sum_{k \ge 1} \mu_\mathbf{q}(k) \frac{k(k-1)}{3} \approx |u| \frac{\sigma_\mathbf{q}^2}{3}.\] If $u$ is the vertex visited at time $\lfloor n t\rfloor$ in lexicographical order, then, by~\eqref{eq:Duquesne_Kortchemski}, we have $|u| \approx (n/B_{n}) \mathscr{H}_t = (2n /\sigma_\mathbf{q}^2)^{1/2}\mathscr{H}_t$, so we expect $L_n(n t)$, once divided by $B_{n}^{1/2} = (n \sigma_\mathbf{q}^2 / 2)^{1/4}$, to be asymptotically Gaussian with variance \[\left(\frac{2}{n \sigma_\mathbf{q}^2}\right)^{1/2} \left(\frac{2n}{\sigma_\mathbf{q}^2}\right)^{1/2} \mathscr{H}_t \frac{\sigma_\mathbf{q}^2}{3} = \frac{2}{3} \mathscr{H}_t,\] which exactly corresponds to $\mathscr{L}_t$. The case where $\alpha=2$ but $\mu_\mathbf{q}$ has infinite variance is more involved, but this sketch can be adapted by taking the truncated variance. \subsection{Tightness of the label process} \label{sec:tension_labels} The first step towards the proof of Theorem~\ref{thm:cv_serpents_cartes} is to show that the sequence of processes \[\left(B_{\zeta(T_{A,n})}^{-1/2} L_n(\zeta(T_{A,n}) t)\right)_{t \in [0,1]}\] is tight. This was proved in~\cite[Proposition 7]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence} in the slightly different context of trees `with a prescribed degree sequence' in a finite variance regime, but the arguments are easily adapted to our case. The main point is to apply Kolmogorov's tightness criterion; thanks to the properties of uniform random bridges in $\mathcal{B}_k^+$, we can see that the increment of labels between a vertex $u$ and one of its ancestors $v$ is about the square root of the number of vertices branching off of the path $\llbracket u, v \llbracket$, which can be described in terms of the {\L}ukasiewicz path. We shall thus need the following tail bounds later. \begin{lem}\label{lem:moments_marche_Luka} Fix any $\theta \in (0, 1/\alpha)$.
There exist $c_1, c_2 > 0$ such that for every $n$ large enough, for every $0 \le s \le t \le 1$, every $x \ge 0$, and every $\delta \in (0, \alpha/(\alpha-1))$, we have \[\Pr{W_n(\zeta(T_{A,n}) s) - \min_{s \le r \le t} W_n(\zeta(T_{A,n}) r) > B_{\zeta(T_{A,n})} |t-s|^\theta x} \le c_1 \exp(- c_2 x^\delta).\] Consequently, the moments of $B_{\zeta(T_{A,n})}^{-1} |t-s|^{-\theta} (W_n(\zeta(T_{A,n}) s) - \min_{s \le r \le t} W_n(\zeta(T_{A,n}) r))$ are uniformly bounded. \end{lem} \begin{proof} First note that we may restrict ourselves to times $|t-s| \le 1/2$. Let us start with the more familiar case $A=\ensembles{Z}_+$. It is well-known that $W_n$ is an excursion of a random walk $S$ with i.i.d. steps distributed as $\sum_{k \ge -1} \mu_\mathbf{q}(k+1) \delta_k$ in the sense that we condition the path to hit $-1$ for the first time at time $n+1$. Moreover, such an excursion can be obtained by cyclically shifting a bridge $S_n$ of this walk (i.e. conditioning the walk to be at $-1$ at time $n+1$, but without the positivity constraint) at the first time it realises its overall minimum, see e.g. Figure~6 in~\cite{Marzouk:Scaling_limits_of_discrete_snakes_with_stable_branching}; this operation is called a discrete Vervaat transform, see e.g. Pitman~\cite[Chapter~6.1]{Pitman:Combinatorial_stochastic_processes} for details. Our claim holds when $W_n$ is replaced by $S$, in which case we may take $s=0$; indeed, according to Kortchemski~\cite[Proposition 8]{Kortchemski:Sub_exponential_tail_bounds_for_conditioned_stable_Bienayme_Galton_Watson_trees}, it holds that \[\Pr{S(n s) - \min_{s \le r \le t} S(n r) > u B_{n |t-s|}} \le c_1 \exp(- c_2 u^\delta).\] Since $n^{-1/\alpha} B_n$ is slowly varying at infinity, the so-called Potter bounds (see e.g.~\cite[Lemma~4.2]{Bjornberg-Stefansson:Random_walk_on_random_infinite_looptrees} or~\cite[Equation~9]{Kortchemski:Sub_exponential_tail_bounds_for_conditioned_stable_Bienayme_Galton_Watson_trees}) assert that for every $\varepsilon > 0$, there exists a constant $c$ depending only on $\varepsilon$ such that for every $n$ large enough, \[\frac{(n |t-s|)^{-1/\alpha} B_{n |t-s|}}{n^{-1/\alpha} B_{n}} \le c \cdot |t-s|^{-\varepsilon},\] and so \[\Pr{S(n s) - \min_{s \le r \le t} S(n r) > c \cdot u \cdot |t-s|^{-\varepsilon+1/\alpha} \cdot B_n} \le c_1 \exp(- c_2 u^\delta).\] One can then transfer this bound to $S_n$; an argument based on the Markov property of $S$ indeed results in an absolute continuity between the first $n/2$ steps of $S$ and of $S_n$, see e.g.~\cite{Kortchemski:Sub_exponential_tail_bounds_for_conditioned_stable_Bienayme_Galton_Watson_trees}, near the end of the proof of Theorem~9 there. Finally, we can transfer this bound from $S_n$ to $W_n$ using the preceding construction from a cyclic shift, see e.g. the end of the proof of Equation~7 in~\cite{Marzouk:Scaling_limits_of_discrete_snakes_with_stable_branching}. In the case $A=\{0\}$, the construction of $W_n$ from a bridge $S_n$ is discussed by Kortchemski~\cite[Section~6.1]{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves} and, as explained in Section~8 there, it extends \emph{mutatis mutandis} to the general case where $A$ is either finite or co-finite. Here, the bridge $S_n$ is obtained by conditioning the walk $S$ to be at $-1$ after its $n$-th jump in the set $A-1$. Therefore, it suffices again to prove our claim when $W_n$ is replaced by $S_n$ and $|t-s| < 1/2$.
Again, we may cut the path of $S_n$ at the time it realises its $(n/2)$-th jump in the set $A-1$ and this path is absolutely continuous with respect to that of the unconditioned walk $S$ cut at the analogous stopping time. This follows from the same argument as the one alluded to above, appealing to the strong Markov property. So finally, we have reduced our claim to showing that it holds when $W_n$ is replaced by $S$, when $s=0$, and $\zeta(T_{A,n})$ is replaced by the time of the $n$-th jump of $S$ in the set $A-1$. This random time, divided by $n$, converges almost surely towards $\mu_\mathbf{q}(A)^{-1}$, see e.g.~\cite[Lemma~6.2]{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves}, so we may replace it by $(1\pm \gamma) \mu_\mathbf{q}(A)^{-1} n$ with a given $\gamma \in (0,1)$ and conclude from the previous bound on $S$ (it only affects the constants). \end{proof} We next turn to the proof of tightness of the label process. Recall that we may replace $B_{\zeta(T_{A,n})}$ by $B_n$. Our argument closely follows the proof of Proposition~7 in~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}. \begin{proof} [Proof of the tightness in Theorem~\ref{thm:cv_serpents_cartes}] Fix $q > \frac{2\alpha}{\alpha-1}$ and $\beta \in (1, \frac{q(\alpha-1)}{2\alpha})$. We aim at showing that for every $n$ large enough, for every pair $0 \le s \le t \le 1$, it holds that \begin{equation}\label{eq:moments_labels} \Es{|L_n(\zeta(T_{A,n}) s) - L_n(\zeta(T_{A,n}) t)|^q} \le C(q) \cdot B_n^{q/2} \cdot |t-s|^{\beta}, \end{equation} where here and in all this proof, $C(q)$ stands for some constant, which may vary from one equation to the next and depends on $q$, $\beta$, and the offspring distribution, but not on $n$, $s$, nor $t$. Tightness then follows from~\eqref{eq:moments_labels} by appealing to the standard Kolmogorov criterion. Without loss of generality, we may, and do, restrict to those times $s$ and $t$ such that $|t-s| \le 1/2$ and both $\zeta(T_{A,n}) s$ and $\zeta(T_{A,n}) t$ are integers. Let us then denote by $u$ and $v$ the vertices corresponding to the times $\zeta(T_{A,n}) s$ and $\zeta(T_{A,n}) t$ respectively in lexicographical order, so $L_n(\zeta(T_{A,n}) s) - L_n(\zeta(T_{A,n}) t) = \ell(u)-\ell(v)$. Let $u \wedge v$ be the most recent common ancestor of $u$ and $v$ and further let $\hat{u}$ and $\hat{v}$ be the children of $u \wedge v$ which are respectively ancestors of $u$ and $v$. We stress that $u$ and $v$ correspond to deterministic times, whereas $u \wedge v$, $\hat{u}$ and $\hat{v}$ correspond to random times which are measurable with respect to $T_{A,n}$. We write: \[\ell(u) - \ell(v) = \left(\sum_{w \in \mathopen{\rrbracket} \hat{u}, u \mathclose{\rrbracket}} \ell(w) - \ell(pr(w))\right) + (\ell(\hat{u}) - \ell(\hat{v})) + \left(\sum_{w \in \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}} \ell(pr(w)) - \ell(w)\right).\] Recall the notation $1 \le \chi_{\hat{u}} \le \chi_{\hat{v}} \le k_{u \wedge v}$ for the relative position of $\hat{u}$ and $\hat{v}$ among the children of $u \wedge v$. By construction of the labels on $T_{A,n}$, conditional on the tree, the difference $\ell(\hat{u}) - \ell(\hat{v})$ is distributed as $X_{p,i} - X_{p,j}$ with $p = k_{u \wedge v}$, $i = \chi_{\hat{u}}$ and $j = \chi_{\hat{v}}$ and where $X_p$ has the uniform distribution on the set of bridges with no negative jumps $\mathcal{B}_p^+$.
According to Le Gall \& Miermont~\cite[Lemma~1]{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}, we thus have \[\Esc{\left|\ell(\hat{u}) - \ell(\hat{v})\right|^q}{T_{A,n}} \le C(q) \cdot (\chi_{\hat{v}} - \chi_{\hat{u}})^{q/2}.\] Next, fix $w \in \mathopen{\rrbracket} \hat{u}, u \mathclose{\rrbracket}$; since $\ell(pr(w)) = \ell(pr(w) k_{pr(w)})$, we similarly have \[\Esc{|\ell(w) - \ell(pr(w))|^q}{T_{A,n}} \le C(q) \cdot (k_{pr(w)} - \chi_w)^{q/2},\] and, for every $w \in \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}$, \[\Esc{|\ell(pr(w)) - \ell(w)|^q}{T_{A,n}} \le C(q) \cdot \chi_w^{q/2}.\] It was argued in~\cite[Equation~20]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}, appealing to the so-called Marcinkiewicz--Zygmund inequality, that if $Y_1, \dots, Y_m$ are independent and centred random variables which admit a finite $q$-th moment, then \[\Es{\left|\sum_{i=1}^m Y_i\right|^q} \le C(q) \cdot \left(\sum_{i=1}^m \Es{\left|Y_i\right|^q}^{2/q}\right)^{q/2}.\] In our context, this reads \begin{align}\label{eq:tension_branches_gauche_droite} \Esc{|\ell(u) - \ell(v)|^q}{T_{A,n}} &\le C(q) \cdot \left( \sum_{w \in \mathopen{\rrbracket} \hat{u}, u \mathclose{\rrbracket}} (k_{pr(w)} - \chi_w) + (\chi_{\hat{v}} - \chi_{\hat{u}}) + \sum_{w \in \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}} \chi_w \right)^{q/2}\nonumber \\ &\le C(q) \cdot \left( \left(\sum_{w \in \mathopen{\rrbracket} \hat{u}, u \mathclose{\rrbracket}} (k_{pr(w)} - \chi_w) + (\chi_{\hat{v}} - \chi_{\hat{u}})\right)^{q/2} + \left(\sum_{w \in \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}} \chi_w\right)^{q/2} \right). \end{align} Let us first consider the first term in~\eqref{eq:tension_branches_gauche_droite}. Appealing to Lemma~\ref{lem:codage_marche_Luka}, we have \[\chi_{\hat{v}} - \chi_{\hat{u}} = W_n(\hat{u}) - W_n(\hat{v}),\] and similarly, for every $w \in \mathopen{\rrbracket} \hat{u}, u \mathclose{\rrbracket}$, \[k_{pr(w)} - \chi_w = W_n(w) - W_n(pr(w) k_{pr(w)}) = W_n(w k_w) - W_n(pr(w) k_{pr(w)}),\] so \[\sum_{w \in \mathopen{\rrbracket} \hat{u}, u \mathclose{\rrbracket}} (k_{pr(w)} - \chi_w) + (\chi_{\hat{v}} - \chi_{\hat{u}}) = W_n(u) - W_n(\hat{v}) = W_n(u) - \inf_{[u, v]} W_n.\] Then Lemma~\ref{lem:moments_marche_Luka} applied with $\theta = 2\beta/q < 1/\alpha$ yields \[\Es{\left(\sum_{w \in \mathopen{\rrbracket} \hat{u}, u \mathclose{\rrbracket}} (k_{pr(w)} - \chi_w) + (\chi_{\hat{v}} - \chi_{\hat{u}})\right)^{q/2}} \le C(q) \cdot B_n^{q/2} \cdot |t-s|^\beta.\] We next focus on the second term in~\eqref{eq:tension_branches_gauche_droite}. We would like to proceed symmetrically but there is a technical issue: on the branch $\mathopen{\rrbracket} \hat{u}, u \mathclose{\rrbracket}$, we relied on the fact that $\ell(wk_w) = \ell(w)$ in order to only count the number of vertices branching off of this path \emph{strictly} to the right, but this is not the case on $\mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}$: we do not have $\ell(w1)=\ell(w)$ in general, so we must also count the vertices \emph{on} this path. Let $T_{A,n}^-$ be the `mirror image' of $T_{A,n}$, i.e.
the tree obtained from $T_{A,n}$ by flipping the order of the children of every vertex; let us write $w^- \in T_{A,n}^-$ for the mirror image of a vertex $w \in T_{A,n}$, and let us make the following observations: \begin{enumerate} \item $T_{A,n}^-$ has the same law as $T_{A,n}$, so in particular, their {\L}ukasiewicz paths have the same law; \item for every $w \in \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}$, the quantity $\chi_w-1$ in $T_{A,n}$ corresponds to the quantity $k_{pr(w^-)} - \chi_{w^-}$ in $T_{A,n}^-$; \item the lexicographical distance between the last descendants in $T_{A,n}^-$ of $\hat{v}^-$ and $v^-$ respectively is smaller than the lexicographical distance between $\hat{v}$ and $v$ in $T_{A,n}$ (the elements of $ \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket} = \mathopen{\rrbracket} \hat{v}^-, v^- \mathclose{\rrbracket}$ are missing). \end{enumerate} With these observations, the previous argument used to control the branch $\mathopen{\rrbracket} \hat{u}, u \mathclose{\rrbracket}$ shows that \[\Es{\left(\sum_{w \in \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}} (\chi_w-1)\right)^{q/2}} \le C(q) \cdot B_n^{q/2} \cdot |t-s|^\beta.\] Finally, as proved recently (for the conditioning $A = \ensembles{Z}_+$ but the general case follows similarly) in~\cite{Marzouk:Scaling_limits_of_discrete_snakes_with_stable_branching}: for every $\gamma < (\alpha-1)/\alpha$, \begin{equation}\label{eq:hauteur_Holder} \Es{\# \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}^{q/2}} \le C(q) \cdot \left(\frac{B_{\zeta(T_{A,n})}}{\zeta(T_{A,n})}\right)^{q/2} \cdot |t-s|^{\gamma q/2}, \end{equation} which is smaller than the bound we are looking for; indeed, since we assume that both $\zeta(T_{A,n}) s$ and $\zeta(T_{A,n}) t$ are integers, then $\zeta(T_{A,n})^{-q/2} \le |t-s|^{q/2} \le 1$ and $\gamma$ can be chosen close enough to $(\alpha-1)/\alpha$ to ensure that $|t-s|^{(\gamma+1) q/2} \le |t-s|^\beta$. \end{proof} Let us mention that we have hidden the technical difficulties in~\eqref{eq:hauteur_Holder}. Nevertheless, there is a different argument which does not necessitate any control on the length of the branches. Indeed, in this context of size-conditioned Bienaymé--Galton--Watson trees, the bound~\eqref{eq:hauteur_Holder} answers Remark~3 in~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence} on trees with a prescribed degree sequence. We could have argued instead as in the proof of Proposition~7 there that, if $\chi_w \ge 2$, then $\chi_w \le 2 (\chi_w-1)$, so in order to control the moments of $\sum_{w \in \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}} \chi_w$, it suffices to bound those of $\#\{w \in \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket} : \chi_w = 1\}$. But according to~\cite[Lemma~2]{Marzouk:Scaling_limits_of_discrete_snakes_with_stable_branching} (which recasts~\cite[Corollary~3]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence} in the context of size-conditioned Bienaymé--Galton--Watson trees), with high probability, uniformly for all pairs of vertices $\hat{v},v$ such that $\hat{v}$ is an ancestor of $v$, \footnote{And the path $\llbracket \hat{v}, v\rrbracket$ has length at least of order $\ln n$, but shorter paths do not cause any issue.} there is a proportion at most $1-\mu_\mathbf{q}(0)/2 < 1$ of individuals $w \in \mathopen{\rrbracket} \hat{v}, v \mathclose{\rrbracket}$ such that $\chi_w = 1$.
Then the bound~\eqref{eq:moments_labels} holds under the conditional expectation with respect to this event, so tightness of the label process holds conditional on this event, and so also unconditionally. \subsection{Finite dimensional marginals in the non-Gaussian case} In this subsection, we assume that $\alpha < 2$, and prove the following result which, together with the tightness obtained in the preceding subsection, concludes the proof of Theorem~\ref{thm:cv_serpents_cartes} in this case. \begin{prop} For every $k \ge 1$ and every $0 \le t_1 < \dots < t_k \le 1$, it holds that \[\left(B_{\zeta(T_{A,n})}^{-1/2} L_n(\lfloor \zeta(T_{A,n}) t_i\rfloor)\right)_{1 \le i \le k} \cvloi (\mathscr{L}_{t_i})_{1 \le i \le k},\] jointly with~\eqref{eq:Duquesne_Kortchemski}. \end{prop} Our argument follows closely that of Le Gall \& Miermont~\cite[proof of Proposition~7]{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}, who considered the two-type tree associated with the maps via the Bouttier--Di Francesco--Guitter bijection, whereas we use the Janson--Stef{\'a}nsson bijection which eliminates several technicalities. The argument relies on the convergence of the {\L}ukasiewicz path in~\eqref{eq:Duquesne_Kortchemski}, which we assume for the rest of this subsection to hold almost surely, appealing to Skorokhod's representation theorem. To ease the exposition, we start with the one-dimensional marginals. \begin{proof}[Proof in the case $k=1$] Fix $t \in [0,1]$ and recall the notation $\mathscr{I}_{s, t} = \inf_{r \in [s,t]} \mathscr{X}_r$ for every $s \in [0,t]$. Let $(s_i)_{i \ge 1}$ be those times $s \in [0,t]$ such that \[\mathscr{X}_{s-} < \mathscr{I}_{s, t},\] which are ranked in decreasing order of the values of the jumps of $\mathscr{X}$: $\Delta \mathscr{X}_{s_1} > \Delta \mathscr{X}_{s_2} > \dots$. Similarly, let $\kappa_n$ be the number of integers $k \in \{0, \dots, \lfloor \zeta(T_{A,n}) t\rfloor-1\}$ such that \[W_n(k) = \min_{r \in [k, \lfloor \zeta(T_{A,n}) t \rfloor]} W_n(r),\] and let us denote by $a_{n,1}, \dots, a_{n, \kappa_n}$ these integers, ranked so that \[W_n(a_{n,1}+1)-W_n(a_{n,1}) \ge \dots \ge W_n(a_{n,\kappa_n}+1)-W_n(a_{n,\kappa_n}).\] It follows from~\eqref{eq:Duquesne_Kortchemski} that almost surely, for every $i \ge 1$, we have \begin{equation}\label{eq:temps_sauts_ancetres} \begin{aligned} \frac{1}{\zeta(T_{A,n})} a_{n,i} &\cv s_i, \\ \frac{1}{B_{\zeta(T_{A,n})}} \left(W_n(a_{n,i}+1)-W_n(a_{n,i})\right) &\cv \Delta \mathscr{X}_{s_i}, \\ \frac{1}{B_{\zeta(T_{A,n})}} \left(\min_{k \in [a_{n,i}+1, \lfloor \zeta(T_{A,n}) t \rfloor]} W_n(k)-W_n(a_{n,i})\right) &\cv \mathscr{I}_{s_i, t} - \mathscr{X}_{s_i-}. \end{aligned} \end{equation} Let $u_0, u_1, \dots, u_{\zeta(T_{A,n})}$ be the vertices of $T_{A,n}$ listed in lexicographical order. Observe that the $a_{n,i}$'s are exactly the indices of the strict ancestors of $u_{\lfloor \zeta(T_{A,n}) t \rfloor}$. We may then write \[L_n(\lfloor \zeta(T_{A,n}) t \rfloor) = \ell(u_{\lfloor \zeta(T_{A,n}) t \rfloor}) = \sum_{i=1}^{\kappa_n} (\ell(u_{\psi(a_{n,i})}) - \ell(u_{a_{n,i}})),\] where $u_{\psi(a_{n,i})}$ is the only child of $u_{a_{n,i}}$ which is an ancestor of $u_{\lfloor \zeta(T_{A,n}) t \rfloor}$. We claim that only the first values of $i$ matter.
Indeed, by classical results on fluctuation theory, it is well known that \[\mathscr{X}_t = \sum_{i \ge 1} (\mathscr{I}_{s_i, t} - \mathscr{X}_{s_i-}),\] whence, for every $\varepsilon > 0$, there exists an integer $N \ge 1$ such that with probability at least $1-\varepsilon$, it holds that \[\mathscr{X}_t - \sum_{i \le N} (\mathscr{I}_{s_i, t} - \mathscr{X}_{s_i-}) \le \varepsilon/2.\] Then~\eqref{eq:temps_sauts_ancetres} and~\eqref{eq:Duquesne_Kortchemski} imply that for every $n$ sufficiently large, with probability at least $1-2\varepsilon$, it holds that \[\frac{1}{B_{\zeta(T_{A,n})}} \left(W_n(\lfloor\zeta(T_{A,n}) t\rfloor) - \sum_{i=1}^{N \wedge \kappa_n} \Big(\min_{k \in [a_{n,i}+1, \lfloor \zeta(T_{A,n}) t \rfloor]} W_n(k) - W_n(a_{n,i})\Big)\right) < \varepsilon.\] Observe that the left-hand side equals \[\frac{1}{B_{\zeta(T_{A,n})}} \sum_{i=N+1}^{\kappa_n} \Big(\min_{k \in [a_{n,i}+1, \lfloor \zeta(T_{A,n}) t \rfloor]} W_n(k) - W_n(a_{n,i})\Big),\] which is therefore arbitrarily small when fixing $N$ large enough. Now recall that, conditional on $T_{A,n}$, the label increments $\ell(u_{\psi(a_{n,i})}) - \ell(u_{a_{n,i}})$ for $1 \le i \le \kappa_n$ are independent and distributed as $X_{k_i, \chi_i}$ where $(X_{k_i, 1}, \dots, X_{k_i, k_i})$ is a uniform random bridge in $\mathcal{B}^+_{k_i}$ defined in~\eqref{eq:pont_sans_saut_negatif}, where $k_i = W_n(a_{n, i}+1) - W_n(a_{n, i}) + 1$ is the number of children of $a_{n,i}$, and where $\chi_i = W_n(a_{n, i}+1) - W_n(\psi(a_{n, i})) + 1$ is the position of $\psi(a_{n,i})$ amongst its siblings; note that $W_n(\psi(a_{n, i})) = \min_{j \in [a_{n, i}+1, \lfloor \zeta(T_{A,n}) t\rfloor]} W_n(j)$. As in the preceding section, according to Le Gall \& Miermont~\cite[Equation~17]{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}, there exists some universal constant $K > 0$ such that \[\Esc{|\ell(u_{\psi(a_{n,i})}) - \ell(u_{a_{n,i}})|^2}{T_{A,n}} \le K\frac{\chi_i (k_i-\chi_i)}{k_i} \le K \left(\min_{j \in [a_{n, i}+1, \lfloor \zeta(T_{A,n}) t\rfloor]} W_n(j) - W_n(a_{n, i})\right).\] Since, conditional on $T_{A,n}$, these increments are centred and independent, we conclude that \begin{multline*} \Esc{\left|B_{\zeta(T_{A,n})}^{-1/2} \sum_{i=N+1}^{\kappa_n} (\ell(u_{\psi(a_{n,i})}) - \ell(u_{a_{n,i}}))\right|^2}{T_{A,n}} \\ \le K \cdot B_{\zeta(T_{A,n})}^{-1} \sum_{i=N+1}^{\kappa_n} \Big(\min_{j \in [a_{n, i}+1, \lfloor \zeta(T_{A,n}) t\rfloor]} W_n(j) - W_n(a_{n, i})\Big), \end{multline*} which, on a set of probability at least $1-2\varepsilon$ for every $n$ large enough, is bounded by $K \varepsilon$ according to the preceding discussion. We next focus on $a_{n,1}, \dots, a_{n,N}$. Conditional on $\mathscr{X}$, let $(\gamma_i)_{1 \le i \le N}$ be independent Brownian bridges of length $\Delta \mathscr{X}_{s_i}$ respectively; it follows from~\eqref{eq:temps_sauts_ancetres} and~\eqref{eq:Duquesne_Kortchemski} together with Donsker's invariance principle for random bridges~\eqref{eq:Donsker_ponts} that \[B_{\zeta(T_{A,n})}^{-1/2} \sum_{i=1}^N (\ell(u_{\psi(a_{n,i})}) - \ell(u_{a_{n,i}})) \cvloi \sqrt{2} \sum_{i=1}^N \gamma_i(\mathscr{X}_{s_i} - \mathscr{I}_{s_i, t}),\] and the right-hand side converges further towards $\mathscr{L}_t$ as $N \to \infty$. \end{proof} We next briefly sketch the argument for the multi-dimensional marginals. \begin{proof}[Proof in the case $k \ge 2$] To ease the notation, we only treat the case $k = 2$, but the arguments are valid in the more general case.
Let us fix $0 < s < t$; let us denote by $0 = a_{n,0}' < \dots < a_{n,\kappa_n'}'$ the indices of the strict ancestors of $u_{\lfloor \zeta(T_{A,n}) s\rfloor}$, and similarly let $0 = a_{n,0}'' < \dots < a_{n,\kappa_n''}''$ be the indices corresponding to the ancestors of $u_{\lfloor \zeta(T_{A,n}) t\rfloor}$. Let $j(n) \in \{0, \dots, \lfloor \zeta(T_{A,n}) s\rfloor\}$ be the index such that $u_{j(n)}$ is the last common ancestor of $u_{\lfloor \zeta(T_{A,n}) s\rfloor}$ and $u_{\lfloor \zeta(T_{A,n}) t\rfloor}$; we implicitly assume that $j(n) < \lfloor \zeta(T_{A,n}) s\rfloor$; the other case is treated similarly. Let $i(n) \in \{0, \dots, \kappa_n' \wedge \kappa_n''\}$ be the index such that $j(n) = a_{n,i(n)}' = a_{n,i(n)}''$. Note that $j(n)$ is the unique time such that \[W_n(j(n)) \le \min_{k \in [\lfloor \zeta(T_{A,n}) s\rfloor, \lfloor \zeta(T_{A,n}) t\rfloor]} W_n(k) < \min_{k \in [j(n)+1, \lfloor \zeta(T_{A,n}) s\rfloor]} W_n(k).\] By analogy with the discrete setting, we interpret the times $r \in [0,s]$ such that $\mathscr{X}_{r-} < \mathscr{I}_{r, s}$ as the times of visit of the ancestors of the vertex visited at time $s$, and similarly for $t$. Introduce then the unique time $r_0 \in [0,s]$ such that \[\mathscr{X}_{r_0-} < \mathscr{I}_{s, t} < \mathscr{I}_{r_0, s},\] which intuitively corresponds to the time of visit of the last common ancestor of the vertices visited at time $s$ and $t$, and indeed, from~\eqref{eq:Duquesne_Kortchemski}, it is the almost sure limit of $\zeta(T_{A,n})^{-1} j(n)$. Let us consider the label increments at the branch-point: conditional on $\mathscr{X}$, let $\gamma$ be a Brownian bridge of length $\Delta \mathscr{X}_{r_0}$, then arguments similar to those in the one-dimensional case show that the pair \[B_{\zeta(T_{A,n})}^{-1/2} \left(L_n(a_{n,i(n)+1}') - L_n(j(n)), L_n(a_{n,i(n)+1}'') - L_n(j(n))\right)\] converges in distribution as $n\to\infty$ towards \[\sqrt{2} \cdot (\gamma(\mathscr{X}_{r_0} - \mathscr{I}_{r_0, s}), \gamma(\mathscr{X}_{r_0} - \mathscr{I}_{r_0, t})).\] If one removes the branch-point from the subtree of $T_{A,n}$ spanned by its root and the vertices $u_{\lfloor \zeta(T_{A,n}) s\rfloor}$ and $u_{\lfloor \zeta(T_{A,n}) t\rfloor}$, then one gets three branches and the label increments between their root and their leaf are independent; we may apply the arguments of the previous proof to prove that these three increments, divided by $B_{\zeta(T_{A,n})}^{1/2}$, converge in distribution towards the `label increments' given by $\mathscr{L}$. Details are left to the reader; we refer to the end of the proof of Proposition~7 in~\cite{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}. \end{proof} \subsection{Finite dimensional marginals in the Gaussian case} We now focus on the Gaussian regime $\alpha=2$. As opposed to the other regimes, we consider \emph{random} marginals. Precisely, we prove the following result. \begin{prop}\label{prop:marginal_etiquettes_cas_gaussien} For every $k \ge 1$, sample $U_1, \dots, U_k$ i.i.d. uniform random variables in $[0,1]$ independently of the labelled trees, then the convergence \[\left(B_{\zeta(T_{A,n})}^{-1/2} L_n(\lfloor \zeta(T_{A,n}) U_i\rfloor)\right)_{1 \le i \le k} \cvloi (\mathscr{L}_{U_i})_{1 \le i \le k},\] holds jointly with~\eqref{eq:Duquesne_Kortchemski}, where the process $\mathscr{L}$ is independent of $U_1, \dots, U_k$.
\end{prop} Since we know that the sequence of continuous processes $B_{\zeta(T_{A,n})}^{-1/2} L_n(\zeta(T_{A,n}) \cdot)$ is tight, this suffices to characterise the subsequential limits as $\mathscr{L}$. Indeed, given any finite collection of fixed times in $[0,1]$, one can approximate them by sampling sufficiently many i.i.d. uniform random variables in $[0,1]$; then the equicontinuity given by the tightness shows that the images of $L_n$ at these random times approximate well the values at the deterministic times, and the same holds for the uniformly continuous limit $\mathscr{L}$, see e.g. Addario-Berry \& Albenque~\cite[proof of Proposition 6.1]{Addario_Berry-Albenque:The_scaling_limit_of_random_simple_triangulations_and_random_simple_triangulations} for a detailed argument. As previously, we first treat the one-dimensional case. \begin{proof}[Proof in the case $k=1$] The approach was described earlier in this section. Sample $U$ uniformly at random in $[0,1]$ independently of the rest and note that the vertex $u_n$ visited at time $\lceil \zeta(T_{A,n}) U \rceil$ in lexicographical order has the uniform distribution in $T_{A,n}$; \footnote{Precisely $u_n$ has the uniform distribution in $T_{A,n} \setminus \{\varnothing\}$, but we omit this detail for the sake of clarity.} let us write \[\left(\frac{1}{B_{\zeta(T_{A,n})}}\right)^{1/2} \ell(u_n) = \left(\frac{B_{\zeta(T_{A,n})}}{\zeta(T_{A,n})} |u_n|\right)^{1/2} \cdot \left(\frac{\zeta(T_{A,n})}{B_{\zeta(T_{A,n})}^2 |u_n|}\right)^{1/2} \ell(u_n).\] It follows from~\eqref{eq:Duquesne_Kortchemski} that the first term on the right converges in distribution towards $\mathscr{H}_U^{1/2}$; it is therefore equivalent to show that, jointly with~\eqref{eq:Duquesne_Kortchemski}, we have the convergence in distribution \begin{equation}\label{eq:cv_label_unif} \left(\frac{3 \zeta(T_{A,n})}{2 B_{\zeta(T_{A,n})}^2 |u_n|}\right)^{1/2} \ell(u_n) \cvloi G \end{equation} where $G$ has the standard Gaussian distribution. Recall that according to Remark~\ref{rem:nombre_aretes_arbre}, we may, and do, replace $\zeta(T_{A,n})$ by $\mu_\mathbf{q}(A)^{-1} n$ and $B_{\zeta(T_{A,n})}$ by $\mu_\mathbf{q}(A)^{-1/2} B_n$. For every $k \ge j \ge 1$, let us denote by $A_{k,j}(u_n)$ the number of strict ancestors of $u_n$ with $k$ children whose $j$-th child is again an ancestor of $u_n$: \[A_{k,j}(u_n) =\# \left\{v \in \llbracket\varnothing, u_n\llbracket : k_v = k \text{ and } vj \in \rrbracket\varnothing, u_n\rrbracket\right\}.\] The idea is to decompose $\ell(u_n)$ as the sum of the label increments between two consecutive ancestors $w$ and $pr(w)$; conditionally on $T_{A,n}$, these random variables are independent and, whenever $k_{pr(w)}=k$ and $w = pr(w)j$, the label increment has the law of the $j$-th marginal of a uniform random bridge in $\mathcal{B}_k^+$, which is centred and has variance, say, $\sigma_{k,j}^2$. This variance is known explicitly, see e.g.~\cite[page 1664 \footnote{Note that they consider uniform random bridges in $\mathcal{B}_{k+1}^+$!}]{Marckert-Miermont:Invariance_principles_for_random_bipartite_planar_maps}: we have \begin{equation}\label{eq:variance_accroissements_labels_cas_gaussien} \sigma_{k,j}^2 = \frac{2j(k-j)}{k+1}, \qquad\text{so}\qquad \sum_{j=1}^k \sigma_{k,j}^2 = \frac{k(k-1)}{3}. \end{equation} Let $\Delta(T_{A,n})$ denote the largest offspring of a vertex of $T_{A,n}$.
As in the classical proof of the central limit theorem, we may write for every $z \in \ensembles{R}$, \begin{align*} &\Esc{\exp\left(\mathrm{i} z \left(\frac{3 n}{2 B_n^2 |u_n|}\right)^{1/2} \ell(u_n)\right)}{T_n, u_n} \\ &= \prod_{k=1}^{\Delta(T_{A,n})} \prod_{j=1}^k \left(1 - \frac{z^2}{2} \frac{3 n \sigma_{k,j}^2}{2 B_n^2 |u_n|} + o\left(\left(\frac{n \sigma_{k,j}^2}{B_n^2 |u_n|}\right)^2\right)\right)^{A_{k,j}(u_n)} \\ &= \exp\left(- \frac{z^2}{2} \sum_{k=1}^{\Delta(T_{A,n})} \sum_{j=1}^k A_{k,j}(u_n) \left(\frac{3 n \sigma_{k,j}^2}{2 B_n^2 |u_n|} + O\left(\left(\frac{n \sigma_{k,j}^2}{B_n^2 |u_n|}\right)^2\right)\right)\right). \end{align*} We claim that \begin{equation}\label{eq:moyenne_accroissements_etiquettes_cas_gaussien} \sum_{k=1}^{\Delta(T_{A,n})} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 |u_n|} A_{k,j}(u_n) \cvproba 1, \quad\text{and}\quad \sum_{k=1}^{\Delta(T_{A,n})} \sum_{j=1}^k \left(\frac{n \sigma_{k,j}^2}{B_n^2 |u_n|}\right)^2 A_{k,j}(u_n) \cvproba 0. \end{equation} Then an application of Lebesgue's Theorem yields our claim. We shall restrict ourselves to a `good event' that we now introduce. For a vertex $u \in T_{A,n}$, let $LR(u)$ denote the number of vertices branching off of the path $\llbracket \varnothing, u \llbracket$ i.e. whose parent belongs to this ancestral line, and which themselves do not; formally, \[LR(u) = \#\left\{v \in T_{A,n}\setminus \llbracket \varnothing, u \rrbracket : pr(v) \in \llbracket \varnothing, u \llbracket \right\}.\] For a tree $T$, a vertex $u \in T$ and three (small) parameters $\eta, \gamma, \kappa > 0$, let us consider the event \[E_{n, \eta, \gamma, \kappa}(T, u) = \left\{\Delta(T) \le \eta B_n\right\} \cap \left\{\gamma \le \frac{B_n}{n} |u| \le \gamma^{-1}\right\} \cap \left\{LR(u) \ge \kappa B_n\right\}.\] If $u$ is the $k$-th vertex of $T$ in lexicographical order and $W$ and $H$ and respectively the {\L}ukasiewicz path and the height process of $T$, then $\Delta(T)$ is the largest jump plus one of $W$ and $|u|$ equals $H(k)$. Finally, if $u_n$ is a uniform random vertex of $T_{A,n}$, then $LR(u_n)$ has the law of the sum of two independent copies of $W(U_n)$ where $U_n$ is a uniform random integer in $\{0, \dots, \zeta(T_{A,n})\}$ independent of $T_{A,n}$; indeed, the number of vertices branching off to the right of the path $\llbracket \varnothing, u_n \llbracket$ is exactly $W(U_n)$, and then by symmetry, the number of vertices branching off to the left is the value of the `mirror {\L}ukasiewicz path' at the corresponding time; we then conclude by the invariance of the law of the tree by this `mirror' operation. It thus follows from~\eqref{eq:Duquesne_Kortchemski} that for any $\eta > 0$, \[\lim_{\gamma, \kappa \downarrow 0} \liminf_{n \to \infty} \Pr{E_{n, \eta, \gamma, \kappa}(T_{A,n}, u_n)} = 1.\] Now fix $\varepsilon, \delta > 0$ and choose $\gamma, \kappa$ small enough so that for any $\eta > 0$, the probability of $E_{n, \eta, \gamma, \kappa}(T_{A,n}, u_n)$ is at least $1-\delta$ for every $n$ large enough. 
We shall tune $\eta$ in such a way that for every $n$ large enough, \[\Pr{\left\{\left|\sum_{k=1}^{\Delta(T_{A,n})} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 |u_n|} A_{k,j}(u_n) - 1\right| > \varepsilon\right\} \cap E_{n, \eta, \gamma, \kappa}(T_{A,n}, u_n)} \le \delta.\] We appeal to a \emph{spinal decomposition} due to Duquesne~\cite[Equation~24]{Duquesne:An_elementary_proof_of_Hawkes_s_conjecture_on_Galton_Watson_trees} which results in an absolute continuity relation between the tree $T_{A,n}$ and the tree $T_\infty$ `conditioned to survive', which is the infinite tree which arises as the \emph{local limit} of $T_{A,n}$. It was introduced by Kesten~\cite{Kesten:Subdiffusive_behavior_of_random_walk_on_a_random_cluster} and the most general results on such convergences are due to Abraham \& Delmas~\cite{Abraham-Delmas:Local_limits_of_conditioned_Galton_Watson_trees_the_infinite_spine_case}. The tree $T_\infty$ contains a unique infinite simple path called the \emph{spine}, starting from the root, and the vertices which belong to this spine reproduce according to the \emph{size-biased} law $\sum_{k \ge 1} k \mu_\mathbf{q}(k) \delta_k$, whereas the other vertices reproduce according to $\mu_\mathbf{q}$, and all the vertices reproduce independently. For a tree $\tau$ and a vertex $v \in \tau$, let $\theta_v(\tau)$ be the subtree consisting of $v$ and all its progeny, and let $\mathsf{Cut}_v(\tau) = \{v\} \cup (\tau \setminus \theta_v(\tau))$ be its complement (note that $v$ belongs to both parts). Then for every non-negative measurable functions $G_1, G_2$, for every $h \ge 0$, we have \begin{equation}\label{eq:epine} \Es{\sum_{\substack{v \in T \\ |v| = h}} G_1(\mathsf{Cut}_v(T), v) \cdot G_2(\theta_v(T))} = \Es{G_1(\mathsf{Cut}_{v_h^\ast}(T_\infty), v_h^\ast) \cdot G_2(T')}, \end{equation} where $v_h^\ast$ is the only vertex on the spine of $T_\infty$ at height $h$ and where $T'$ is independent of $T_\infty$ and distributed as a non-conditioned Bienaymé--Galton--Watson tree with offspring distribution $\mu_\mathbf{q}$. Observe that the $A_{k,j}(u_n)$ are measurable with respect to $(u_n, \mathsf{Cut}_{u_n}(T_{A,n}))$ and so are $|u_n|$ and $LR(u_n)$; finally, let us replace the event $E_{n, \eta, \gamma, \kappa}(T_{A,n}, u_n)$ by $E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{u_n}(T_{A,n}), u_n)$ whose probability is greater. Let $T$ be a a non-conditioned Bienaymé--Galton--Watson tree with offspring distribution $\mu_\mathbf{q}$ and let $\zeta_A(T)$ be its number of vertices with offspring in $A$. Let us write \begin{multline*} \Pr{\left\{\left|\sum_{k=1}^{\Delta(\mathsf{Cut}_{u_n}(T_{A,n}))} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 |u_n|} A_{k,j}(u_n) - 1\right| > \varepsilon\right\} \cap E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{u_n}(T_{A,n}), u_n)} \\ = \frac{1}{\ensembles{P}(\zeta_A(T) = n)} \Pr{\left\{\left|\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 |u_n|} A_{k,j}(u_n) - 1\right| > \varepsilon\right\} \cap \{\zeta_A(T) = n\} \cap E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{u_n}(T), u_n)}. \end{multline*} According to~\cite[Theorem~8.1]{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves}, the quantity $(n B_n)^{-1} \ensembles{P}(\zeta_A(T) = n)$ converges to some positive and finite limit. 
Then by conditioning on the value of $u_n$, we have \begin{multline*} \Pr{\left\{\left|\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 |u_n|} A_{k,j}(u_n) - 1\right| > \varepsilon\right\} \cap \{\zeta_A(T) = n\} \cap E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{u_n}(T), u_n)} \\ = \frac{1}{n} \sum_{h = \gamma n/B_n}^{\gamma^{-1} n/B_n} \Es{\sum_{\substack{v \in T \\ |v| = h}} \ind{\{|\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 h} A_{k,j}(v) - 1| > \varepsilon\} \cap \{\zeta_A(T) = n\} \cap E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_v(T), v)}}. \end{multline*} This is almost in the form of~\eqref{eq:epine}, we just need to express the quantity $\zeta_A(T)$ in terms of $\mathsf{Cut}_v(T)$, $v$, and $\theta_v(T)$. Let $\lambda(\mathsf{Cut}_v(T))$ be the number of leaves of the tree $\mathsf{Cut}_v(T)$; one of them is $v$ who gives birth to a progeny coded by $\theta_v(T)$, whereas the other leaves give birth to progenies coded by independent non-conditioned Bienaymé--Galton--Watson trees with offspring distribution $\mu_\mathbf{q}$. By splitting the contribution to $\zeta_A(T)$ of these trees and of $\mathsf{Cut}_v(T)$, the spinal decomposition~\eqref{eq:epine} reads \begin{multline*} \Es{\sum_{\substack{v \in T \\ |v| = h}} \ind{\{|\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 h} A_{k,j}(v) - 1| > \varepsilon\} \cap \{\zeta_A(T) = n\} \cap E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_v(T), v)}} \\ = \ensembles{P}\left(\left\{\left|\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 h} A_{k,j}(v^\ast_h) - 1\right| > \varepsilon\right\} \cap E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{v^\ast_h}(T_\infty), v^\ast_h)\right. \\ \left.\cap \{\zeta_A(\mathsf{Cut}_{v^\ast_h}(T_\infty)) + \zeta_A(F_{\lambda(\mathsf{Cut}_{v^\ast_h}(T_\infty))}) = n\}\right), \end{multline*} where for every $N \ge 1$, we let $F_N$ denote a forest of i.i.d. non-conditioned Bienaymé--Galton--Watson trees with offspring distribution $\mu_\mathbf{q}$, which is independent of the rest. In the case $A = \ensembles{Z}_+$ when we condition by the total size, it is well-known that $\zeta_{\ensembles{Z}_+}(F_1) = |T|$ belongs to the domain of attraction of a stable law with index $1/2$, and an application of the local limit theorem shows that there exists a constant $C > 0$ such that for every $N, m \ge 1$, \[\Pr{\zeta_{\ensembles{Z}_+}(F_n) = m} \le \frac{C}{B_N'},\] where $(B_N')_{N \ge 1}$ is an increasing sequence such that $(N^{-2} B_N')_{N \ge 1}$ is slowly varying and furthermore \[B_{B_n}' \enskip\mathop{\sim}^{}_{n \to \infty}\enskip n.\] We refer to the discussion leading to Equation~28 in~\cite{Kortchemski:Sub_exponential_tail_bounds_for_conditioned_stable_Bienayme_Galton_Watson_trees} for more details. This extends to the general case where either $A$ or $\ensembles{Z}_+ \setminus A$ is finite by replacing Equation~26 in~\cite{Kortchemski:Sub_exponential_tail_bounds_for_conditioned_stable_Bienayme_Galton_Watson_trees} by~\cite[Theorem~8.1]{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves}, so $\zeta_A(F_1)$ also belongs to the domain of attraction of a stable law with index $1/2$; this only modifies the preceding constant $C$. In our case, we shall take $N = LR(v_h^\ast)+1$ which, on the event $E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{v^\ast_h}(T_\infty), v^\ast_h)$ is at least $\kappa B_n$, so $(B_N')^{-1}$ is at most $B_{\kappa B_n}^{-1} \sim \kappa^{-2} n^{-1}$ as $n \to \infty$. 
Let us put together the previous arguments: we have shown that there exists a constant $K$ such that \begin{align*} &\Pr{\left\{\left|\sum_{k=1}^{\Delta(T_{A,n})} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 |u_n|} A_{k,j}(u_n) - 1\right| > \varepsilon\right\} \cap E_{n, \eta, \gamma, \kappa}(T_{A,n}, u_n)} \\ &\le K \cdot n B_n\cdot \frac{1}{n} \sum_{h = \gamma n/B_n}^{\gamma^{-1} n/B_n} \frac{1}{\kappa^2 n}\cdot \Pr{\left\{\left|\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 h} A_{k,j}(v^\ast_h) - 1\right| > \varepsilon\right\} \cap E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{v^\ast_h}(T_\infty), v^\ast_h)} \\ &\le \frac{K}{\gamma \kappa^2} \sup_{\gamma n/B_n \le h \le \gamma^{-1} n/B_n} \Pr{\left\{\left|\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 h} A_{k,j}(v^\ast_h) - 1\right| > \varepsilon\right\} \cap E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{v^\ast_h}(T_\infty), v^\ast_h)}. \end{align*} We finally treat the last probability; let us write \[\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 h} A_{k,j}(v^\ast_h) = \sum_{i=1}^h \frac{3 n}{2 B_n^2 h} X_i,\] where $X_i$ takes the value $\sigma_{k,j}^2$ if and only if the vertex at height $i-1$ on the spine has $k \le \eta B_n$ children, and the vertex at height $i$ on the spine is its $j$-th child, whereas $X_i$ takes the value $0$ whenever $k > \eta B_n$. Recall that on the spine, the vertices reproduce independently according to the size-biased law $\sum_{k \ge 1} k \mu_\mathbf{q}(k) \delta_k$, and furthermore, conditional on the number of children of its parent, the position of a vertex amongst its sibling is uniformly chosen. According to~\eqref{eq:variance_accroissements_labels_cas_gaussien}, we thus have as $n \to \infty$ \begin{align*} \Es{\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 h} A_{k,j}(v^\ast_h)} &= \frac{3 n}{2 B_n^2} \sum_{k=1}^{\eta B_n} \sum_{j=1}^k \sigma_{k,j}^2 \mu_\mathbf{q}(k) \\ &= \frac{3 n}{2 B_n^2} \sum_{k=1}^{\eta B_n} \frac{k(k-1)}{3} \mu_\mathbf{q}(k) \\ &\sim \frac{n}{2 B_n^2} \mathrm{Var}(\xi \ind{\xi \le \eta B_n}), \end{align*} where $\xi$ is distributed according to $\mu_\mathbf{q}$. Recall that the function $l(x) = \mathrm{Var}(\xi \ind{\xi \le x})$ is slowly varying so the factor $\eta$ in the last line above can be removed and we conclude from~\cite[Equation~7]{Kortchemski:Sub_exponential_tail_bounds_for_conditioned_stable_Bienayme_Galton_Watson_trees}, that our expectation tends to $1$ as $n \to \infty$. Replacing $\varepsilon$ by $2\varepsilon$ in all the preceding equations, we may thus replace the factor $1$ that we subtract in the probability that we are currently aiming at controlling by the preceding expectation. 
An application fo Markov inequality then shows that the probability \[\Prc{\left\{\left|\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 h} A_{k,j}(v^\ast_h) - \Es{\sum_{k=1}^{\eta B_n} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 h} A_{k,j}(v^\ast_h)}\right| > \varepsilon\right\}} {E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{v^\ast_h}(T_\infty), v^\ast_h)}\] is bounded above by \[\varepsilon^{-2} h \left(\frac{3 n}{2 B_n^2 h}\right)^2 \Es{X_1^2}.\] Observe from~\eqref{eq:variance_accroissements_labels_cas_gaussien} that $\sigma^2_{k,j} \le k/2$ for every pair $1 \le j \le k$ so, as previously, \[\Es{X_1^2} = \sum_{k=1}^{\eta B_n} \sum_{j=1}^k \sigma_{k,j}^4 \mu_\mathbf{q}(k) \le \sum_{k=1}^{\eta B_n} \frac{k^3}{6} \mu_\mathbf{q}(k) \le \frac{\eta B_n}{6} \sum_{k=1}^{\eta B_n} k^2 \mu_\mathbf{q}(k) \sim \frac{\eta}{6} B_n l(B_n).\] Whence for every $h \in [\gamma n/B_n, \gamma^{-1} n/B_n]$ \[h \left(\frac{3 n}{2 B_n^2 h}\right)^2 \Es{X_1^2} \le \frac{9 \eta n^2 B_n l(B_n)}{24 B_n^4 h} (1+o(1)) \le \frac{\eta}{\gamma} \frac{9n l(B_n)}{24 B_n^2} (1+o(1)),\] and the last fraction tends to $9/12$ as $n \to \infty$. So finally, we have for every $n$ large enough, \[\Pr{\left\{\left|\sum_{k=1}^{\Delta(T_{A,n})} \sum_{j=1}^k \frac{3 n \sigma_{k,j}^2}{2 B_n^2 |u_n|} A_{k,j}(u_n) - 1\right| > \varepsilon\right\} \cap E_{n, \eta, \gamma, \kappa}(T_{A,n}, u_n)} \le \frac{K \eta}{\gamma^2 \kappa^2}.\] Since $\eta$ can be chosen arbitrarily small, the first convergence in~\eqref{eq:moyenne_accroissements_etiquettes_cas_gaussien} follows. The second convergence in~\eqref{eq:moyenne_accroissements_etiquettes_cas_gaussien} follows by the exact same calculations: we can bound the probability \[\Pr{\left\{\sum_{k=1}^{\Delta(\mathsf{Cut}_{u_n}(T_{A,n}))} \sum_{j=1}^k \left(\frac{n \sigma_{k,j}^2}{B_n^2 |u_n|} \right)^2 A_{k,j}(u_n) > \varepsilon\right\} \cap E_{n, \eta, \gamma, \kappa}(\mathsf{Cut}_{u_n}(T_{A,n}), u_n)}\] by \[\frac{1}{\varepsilon} \frac{B_n}{\gamma n} \frac{n^2}{B_n^4} \mathrm{Var}(\xi \ind{\xi \le \eta B_n}) \sim \frac{1}{\varepsilon\gamma} \frac{n}{B_n^3} l(B_n) \sim \frac{2}{\varepsilon \gamma B_n} \to 0,\] as $n \to \infty$, which concludes the proof. \end{proof} As in the case $\alpha < 2$, we close this section by sketching the argument for the multi-dimensional marginals. One of the differences is that now the contribution of the branch-points vanishes. \begin{lem}\label{lem:deplacement_max_autour_site} We have the convergence in probability \[B_{\zeta(T_{A,n})}^{-1/2} \max_{u \in T_n} \left|\max_{1 \le i \le k_u} \ell(ui) - \min_{1 \le i \le k_u} \ell(ui)\right| \cvproba 0.\] \end{lem} \begin{proof} Note that again, we may, and shall, replace $B_{\zeta(T_{A,n})}$ by $B_n$. We follow the proof of~\cite[Proposition 2]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence} which dealt with trees `with a prescribed degree sequence' in the finite-variance regime. Recall that a uniform random bridge $X_k$ in $\mathcal{B}_k^+$ has the same law as the first $k$ steps of a random walk with step distribution $\sum_{i \ge -1} 2^{-i-2} \delta_i$ conditioned on being at $0$ at time $k$. 
According to Lemma 6 in~\cite{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}, there exists two constants $c, C > 0$ such that for every $k \ge 1$ and $x \ge 0$, we have \[\Pr{\max_{1 \le i \le k} X_{k,i} - \min_{1 \le i \le k} X_{k,i} > x} \le C \mathrm{e}^{-cx^2/k}.\] Let $\zeta_k(T_{A,n})$ denote the number of individuals in $T_{A,n}$ with $k$ offsprings and let $\Delta(T_{A,n})$ be the largest offspring in $T_{A,n}$. Fix $\varepsilon > 0$ and recall the bound $\ln(1-x) \ge -\frac{x}{1-x}$ for $x < 1$. We then have \begin{align*} &\Prc{\max_{u \in T_n} \left|\max_{1 \le i \le k_u} \ell(ui) - \min_{1 \le i \le k_u} \ell(ui)\right| \le \varepsilon B_n^{1/2}}{T_{A,n}} \\ &= \prod_{k=1}^{\Delta(T_{A,n})} \Prc{\max_{1 \le i \le k} X_{k,i} - \min_{1 \le i \le k} X_{k,i} \le \varepsilon B_n^{1/2}}{T_{A,n}}^{\zeta_k(T_{A,n})} \\ &\ge \exp\left(- C \sum_{k=1}^{\Delta(T_{A,n})} \zeta_k(T_{A,n}) \mathrm{e}^{-c\varepsilon^2B_n/k} (1+o(1))\right), \end{align*} and the claim reduces to showing the convergence in probability \[\sum_{k=1}^{\Delta(T_{A,n})} \zeta_k(T_{A,n}) \mathrm{e}^{-c\varepsilon^2B_n/k} \cvproba 0.\] Since $x \mapsto x^2\mathrm{e}^{-x}$ is decreasing on $[2, \infty)$, we have on a set of high probability as $n \to \infty$, \[\sum_{k=1}^{\Delta(T_{A,n})} \zeta_k(T_{A,n}) \mathrm{e}^{-c\varepsilon^2B_n/k} \le \sum_{k=1}^{\Delta(T_{A,n})} \frac{k^2 \zeta_k(T_{A,n})}{B_n^2} \times \frac{B_n^2}{\Delta(T_{A,n})^2} \mathrm{e}^{-c\varepsilon^2 B_n/\Delta(T_{A,n})}.\] Recall that $\Delta(T_{A,n})$ is the largest jump plus one of the {\L}ukasiewicz path $W_n$ so, according to~\eqref{eq:Duquesne_Kortchemski}, the ratio $\Delta(T_{A,n})/B_n$ tends to $0$ in probability and so \[\frac{B_n^2}{\Delta(T_{A,n})^2} \mathrm{e}^{-c\varepsilon^2 B_n/\Delta(T_{A,n})} \cvproba 0.\] It only remains to prove that the sequence $\sum_{k=1}^{\Delta(T_{A,n})} \frac{k^2 \zeta_k(T_{A,n})}{B_n^2}$ is bounded in probability in the sense that \[\lim_{K \to \infty} \limsup_{n \to \infty} \Pr{\sum_{k=1}^{\Delta(T_{A,n})} \frac{k^2 \zeta_k(T_{A,n})}{B_n^2} > K} = 0.\] Let us intersect the preceding event with $\{\Delta(T_{A,n}) \le B_n\}$ whose probability tends to one. Let us translate our probability in terms of the {\L}ukasiewicz path: we aim at showing \[\lim_{K \to \infty} \limsup_{n \to \infty} \Pr{\left\{\sum_{i=1}^{\zeta(T_{A,n})} \frac{(W_n(i+1)-W_n(i))^2}{B_n^2} > K\right\} \cap \left\{\max_{1 \le i \le \zeta(T_{A,n})} W_n(i+1)-W_n(i) \le B_n\right\}} = 0.\] We then use the same reasoning as in the proof of Lemma~\ref{lem:moments_marche_Luka}. For a path $S$ and $n \ge 1$, let us denote by $\varsigma_{A,n}$ the time such that the $n$-th jump of $S$ with values in the set $A-1$ is its $\varsigma_{A,n}$-th jump in total. Note that our event is shift-invariant so we may replace the excursion $W_n$ by the bridge $S_n$ obtained by conditioning the random walk $S$ with step distribution $\sum_{k \ge -1} \mu_\mathbf{q}(k+1) \delta_k$ to be at $-1$ after its $\varsigma_{A,n}$-th jump. Then, by cutting the time interval in two and using a time-reversibility property for the second half due to Kortchemski~\cite[Proposition~6.8]{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves}, we may in fact only consider the first half of the bridge, i.e. up to time $\varsigma_{A,n/2}$. The latter is absolutely continuous with respect to the unconditioned random walk so, if $(\xi_i)_{i \ge 1}$ are i.i.d. 
copies of a random variable $\xi$ sampled from $\sum_{k \ge -1} \mu_\mathbf{q}(k+1) \delta_k$ and if now $\varsigma_{A,n}$ denote the least $i \ge 1$ such that $\#\{i \in \{1, \dots, \varsigma_{A,n}\} : \xi_i \in A-1\}=n$, then there exists $C > 0$ such that \begin{multline*} \Pr{\left\{\sum_{i=1}^{\zeta(T_{A,n})} \frac{(W_n(i+1)-W_n(i))^2}{B_n^2} > K\right\} \cap \left\{\max_{1 \le i \le \zeta(T_{A,n})} W_n(i+1)-W_n(i) \le B_n\right\}} \\ \le C \cdot \Pr{\left\{\sum_{i=1}^{\varsigma_{A,n/2}} \frac{\xi_i^2}{B_n^2} > K\right\} \cap \left\{\max_{1 \le i \le \varsigma_{A,n/2}} \xi_i \le B_n\right\}}. \end{multline*} Since $\varsigma_{A,n/2}/n$ converges almost surely to $1/(2\mu_\mathbf{q}(A))$ (see e.g.~\cite[Lemma~6.2]{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves}), we may replace $\varsigma_{A,n/2}$ by $n/(2\mu_\mathbf{q}(A))$. The Markov inequality then yields \[\Pr{\left\{\sum_{i=1}^{n/(2\mu_\mathbf{q}(A))} \frac{\xi_i^2}{B_n^2} > K\right\} \cap \left\{\max_{1 \le i \le n/(2\mu_\mathbf{q}(A))} \xi_i \le B_n\right\}} \le \frac{n}{2K \mu_\mathbf{q}(A) B_n^2} \Es{\xi^2 \ind{\xi \le B_n}},\] which converges to $1/(K \mu_\mathbf{q}(A))$ and our claim follows. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:marginal_etiquettes_cas_gaussien} in the case $k \ge 2$] Let us only restrict ourselves to the case $k=2$ to ease the notation since the general case hides no extra difficulty. We sample two independent uniform random vertices of $T_n$, say, $u_n$ and $v_n$, and we let $w_n$ be their most recent ancestor, we denote by $\hat{u}_n$ and $\hat{v}_n$ the children of $w_n$ which are respectively an ancestor of $u_n$ and $v_n$ in order to decompose \[\ell(u_n) = \ell(w_n) + (\ell(\hat{u}_n)-\ell(w_n)) + (\ell(u_n) - \ell(\hat{u}_n)),\] and similarly for $v_n$. The important observation is that, conditional on $T_{A,n}$, $u_n$ and $v_n$, the random variables $\ell(w_n)$, $\ell(u_n) - \ell(\hat{u}_n)$ and $\ell(v_n) - \ell(\hat{v}_n)$ are independent. According to~\eqref{eq:Duquesne_Kortchemski}, we have \[\frac{B_{\zeta(T_{A,n})}}{\zeta(T_{A,n})} \left(|w_n|, |u_n| - |\hat{u}_n|, |v_n| - |\hat{v}_n|\right) \cvloi \left(\min_{r \in [U, V]} \mathscr{H}_r, \mathscr{H}_U - \min_{r \in [U, V]} \mathscr{H}_r, \mathscr{H}_V - \min_{r \in [U, V]}\mathscr{H}_r\right),\] where $U$ and $V$ are i.i.d uniform random variables on $[0,1]$ independent of $\mathscr{H}$. We claim that, jointly with~\eqref{eq:Duquesne_Kortchemski}, we have \begin{equation}\label{eq:cv_labels_trois_parties} \left(\frac{3 \zeta(T_{A,n})}{2 B_{\zeta(T_{A,n})}^2}\right)^{1/2} \left(\frac{\ell(w_n)}{\sqrt{|w_n|}}, \frac{\ell(u_n) - \ell(\hat{u}_n)}{\sqrt{|u_n| - |\hat{u}_n|}}, \frac{\ell(v_n) - \ell(\hat{v}_n)}{\sqrt{|v_n| - |\hat{v}_n|}}\right) \cvloi \left(G_1, G_2, G_3\right), \end{equation} where $G_1$, $G_2$, $G_3$ are i.i.d. standard Gaussian random variables. 
This actually follows from the arguments used in the proof of Proposition~\ref{prop:marginal_etiquettes_cas_gaussien} in the case $k = 1$ which show not only the convergence of $\ell(u_n)$, but also that if $a_n$ is an ancestor of $u_n$ such that the ratio $|a_n|/|u_n|$ converges in probability to some $a \in (0,1)$ as $n \to \infty$, then we have \[\left(\frac{3 \zeta(T_{A,n})}{2 B_{\zeta(T_{A,n})}^2}\right)^{1/2} \left(\frac{\ell(a_n)}{\sqrt{|a_n|}}, \frac{\ell(u_n) - \ell(a_n)}{\sqrt{|u_n| - |\hat{u}_n|}}\right) \cvloi \left(G_1, G_2\right).\] Indeed, replacing $u_n$ by $a_n$ only replaces $h$ in the last part of the proof in the case $k = 1$ by $ah(1+o(1))$ which shows the convergence of the first marginal, and similarly for the second; the joint convergence holds since they are independent. Recall from Lemma~\ref{lem:deplacement_max_autour_site} that the maximal displacement at a branch-point is small, then the preceding convergence implies that of the first two components in~\eqref{eq:cv_labels_trois_parties}. The convergence of the last one also holds since the role of $u_n$ and $v_n$ is symmetric and so~\eqref{eq:cv_labels_trois_parties} holds by independence. Since Lemma~\ref{lem:deplacement_max_autour_site} implies that \[\left(\frac{\zeta(T_{A,n})}{B_{\zeta(T_{A,n})}^2}\right)^{1/2} \left(\ell(\hat{u}_n) - \ell(w_n), \ell(\hat{v}_n) - \ell(w_n)\right) \cvproba \left(0,0\right),\] We conclude from~\eqref{eq:cv_labels_trois_parties} that the pair \[B_{\zeta(T_{A,n})}^{-1/2} (\ell(u_n), \ell(v_n))\] converges in distribution as $n \to \infty$ towards \[\frac{2}{3} \left(\sqrt{\min_{r \in [U, V]} \mathscr{H}_r} \cdot G_1 + \sqrt{\mathscr{H}_U - \min_{r \in [U, V]} \mathscr{H}_r} \cdot G_2, \sqrt{\min_{r \in [U, V]} \mathscr{H}_r} \cdot G_1 + \sqrt{\mathscr{H}_V - \min_{r \in [U, V]} \mathscr{H}_r} \cdot G_3\right) \] which is indeed distributed as $(\mathscr{L}_U, \mathscr{L}_V)$. \end{proof} \section{Scaling limits of maps} \label{sec:limite_cartes} The main goal of this section is to prove Theorem~\ref{thm:cv_cartes}, we also state and prove scaling limits on the profile of distances in Theorem~\ref{thm:profil} below. We first prove that large pointed and non-pointed maps are close, in order to focus on pointed maps. Relying of the description of such maps by labelled trees, we then state and prove Theorem~\ref{thm:profil}. Finally, we prove Theorem~\ref{thm:cv_cartes} in the last three subsections. The proof of tightness in Theorem~\ref{thm:cv_cartes} follows from the functional convergence in Theorem~\ref{thm:cv_serpents_cartes} as in the pioneer work of Le Gall~\cite{Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps} who considered maps pointed at the origin of the root-edge, see also~\cite{Le_Gall:Uniqueness_and_universality_of_the_Brownian_map,Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces} for maps pointed as here; all these references rely on a different labelled tree obtained by the Bouttier--Di Francesco--Guitter bijection~\cite{Bouttier-Di_Francesco-Guitter:Planar_maps_as_labeled_mobiles}. We recast their proof using the Janson--Stef{\'a}nsson bijection~\cite{Janson-Stefansson:Scaling_limits_of_random_planar_maps_with_a_unique_large_face}. Throughout this section, we fix $S \in \{V, E, F\}$, and for every $n \ge 1$, we sample a pointed map $(M_n, \star)$ from $\ensembles{P}^{\mathbf{q}, \bullet}_{S=n}$. Recall that $\zeta(M_n)$ denotes the number of edges of $M_n$. 
Recall that we associate with $(M_n, \star)$ a labelled tree $(T_n, \ell)$ with the same amount of edges $\zeta(M_n)$, and, as discussed in Section~\ref{sec:BGW}, $T_n$ has the law of $T_{A,n}$ where $A = \ensembles{Z}_+$ if $S=E$, and $A = 0$ if $S=V$, and $A=\ensembles{N}$ if $S=F$. \subsection{On the behaviour of leaves in a large Bienaymé--Galton--Watson tree} Recall that the leaves of the tree are in one-to-one correspondence with the vertex of $M_n$ different from the distinguished one; we shall need the following two estimates. First, recall the notation $\lambda(T_n)$ for the number of leaves of $T_n$. For every $0 \le j \le \zeta(T_n)$, let further $\Lambda(T_n, j)$ denote the number of leaves amongst the first $j$ vertices of $T_n$ in lexicographical order, and make $\Lambda$ a continuous function on $[0, \zeta(T_n)]$ after linear interpolation. \begin{lem}\label{lem:repartition_feuilles} We have the convergence in probability \[\left(\frac{\Lambda(T_n, \zeta(T_n) t)}{\lambda(T_n)} ; t \in [0,1]\right) \cvproba (t ; t \in [0,1]).\] \end{lem} \begin{proof} Kortchemski~\cite[Corollary~3.3]{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves} (again for trees conditioned by the number of leaves, but it extends to the general case) proved that when we restrict to a time interval $[\eta, 1]$ with $\eta > 0$, the probability that $t\mapsto \frac{\Lambda(T_n, \zeta(T_n) t)}{\lambda(T_n)}$ deviates from the identity decays sub-exponentially fast. We can then extend to the whole segment $[0,1]$ to get our result by `mirror symmetry'. It suffices to observe that the `mirror' {\L}ukasiewicz path visits more leaves in its last $k$ steps than the original {\L}ukasiewicz path in its first $k$ steps; indeed in order to visit a vertex in the original lexicographical order, one must first visits all its ancestors, whereas in the `mirror' order, some of them have been already visited (the root of the tree for example). \end{proof} The preceding result states that the leaves of the tree are homogeneously spread. Note that we could replace the leaves by the vertices with offspring in a given set $B \subset \ensembles{Z}_+$. The next result states that the inverse of the number of leaves, normalised to have expectation $1$, converges to $1$ in $L^1$. \begin{lem}\label{lem:biais_sites_GW} We have the convergence in probability \[\lim_{n \to \infty} \Es{\left|\frac{1}{\lambda(T_n)} \frac{1}{\ensembles{E}[\frac{1}{\lambda(T_n)}]} - 1\right|} = 0.\] \end{lem} This convergence is~\cite[Lemma~8]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence} in the finite-variance regime; the proof applies \emph{mutatis mutandis} in our case since the arguments used there, which are due to Kortchemski~\cite{Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves}, hold in the more our general case. Following arguments from~\cite{Abraham:Rescaled_bipartite_planar_maps_converge_to_the_Brownian_map, Bettinelli-Jacob-Miermont:The_scaling_limit_of_uniform_random_plane_maps_via_the_Ambjorn_Budd_bijection, Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks}, it was then shown in~\cite[Proposition~12]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence} that Lemma~\ref{lem:biais_sites_GW} yields the following comparison between pointed and non-pointed maps. 
\begin{prop}\label{prop:biais_cartes_Boltzmann_pointees} Let $\phi : \Map^\bullet \to \mathbf{M} : (M, \star) \mapsto M$ and let $\phi_* \ensembles{P}^{\mathbf{q}, \bullet}_{S=n}$ be the push-forward measure induced on $\mathbf{M}$ by $\ensembles{P}^{\mathbf{q}, \bullet}_{S=n}$, then \[\left\|\ensembles{P}^\mathbf{q}_{S=n} - \phi_* \ensembles{P}^{\mathbf{q}, \star}_{S=n}\right\|_{TV} \cv 0,\] where $\|\cdot\|_{TV}$ refers to the total variation norm. \end{prop} Indeed, one can bound this total variation distance by the expectation in Lemma~\ref{lem:biais_sites_GW} with $\lambda(T_n)-1$ instead of $\lambda(T_n)$. Observe that if $(M_n, \star)$ is sampled from $\ensembles{P}^{\mathbf{q}, \bullet}_{S=n}$, then, conditional on $M_n$, the vertex $\star$ is uniformly distributed in $M_n$. \subsection{Radius and profile} \label{sec:profil} Although for $\alpha \in (1,2)$, we shall only obtain a convergence along subsequences of the metric spaces, because these subsequential limits are not characterised, still we do obtain some information about distances in large maps. Recall that we work with pointed maps $(M_n, \star)$ sampled from $\ensembles{P}^{\mathbf{q}, \bullet}_{S=n}$, but according to the preceding section, this pair is close to a non-pointed map sampled from $\ensembles{P}^{\mathbf{q}}_{S=n}$, in which we sample a vertex uniformly at random so the next result also holds in this context. Recall that $\zeta(M_n)$ denotes the number of edges of $M_n$, let us denote by $\upsilon(M_n)$ its number of vertices. Let \[R(M_n) = \max_{x \in M_n} d_{\mathrm{gr}}(x, \star)\] be the \emph{radius} of the map; define also a point measure on $\ensembles{Z}_+$, called the \emph{profile of distances}, by \[\rho_{M_n}(k) = \#\{x \in M_n : d_{\mathrm{gr}}(x, \star)=k\}. \qquad k \in \ensembles{Z}_+.\] Finally, let $\Delta(M_n)$ be the longest distance in $M_n$ between $\star$ and the two extremities of the root-edge (the other extremity is at distance $\Delta(M_n)-1$). \begin{thm} \label{thm:profil} Let $\overline{\mathscr{L}} = \sup_{t \in [0,1]} \mathscr{L}_t$ and $\underline{\mathscr{L}} = \inf_{t \in [0,1]} \mathscr{L}_t$ and observe that $\overline{\mathscr{L}}$ and $-\underline{\mathscr{L}}$ have the same law by symmetry. Then the following convergences in distribution hold as $n \to \infty$: \begin{enumerate} \item $B_{\zeta(M_n)}^{-1/2} R(M_n) \to \overline{\mathscr{L}} - \underline{\mathscr{L}}$; \item $B_{\zeta(M_n)}^{-1/2} \Delta(M_n) \to \overline{\mathscr{L}}$; \item For every continuous and bounded function $\varphi$, \[\frac{1}{\upsilon(M_n)} \sum_{k \ge 0} \varphi(B_{\zeta(M_n)}^{-1/2} k) \rho_{M_n}(k) \cvloi \int_0^1 \varphi(\mathscr{L}_t - \underline{\mathscr{L}}) \mathrm{d} t.\] \end{enumerate} \end{thm} \begin{proof} We rely on the bijection with the labelled tree $(T_n, \ell)$. Let us set $\underline{L}_n = \min_{1 \le i \le \lambda(T_n)} L_n(i)-1$; in this bijection, we have \[R(M_n) = \max_{0 \le i \le \zeta(T_n)} L_n(i) - \underline{L}_n,\] so the first convergence immediately follows from Theorem~\ref{thm:cv_serpents_cartes}. Similarly, the root-vertex of the tree is the farthest extremity of the root-edge of $M_n$ from $\star$, so \[\Delta(M_n) = - \underline{L}_n,\] and the second convergence is again an immediate consequence of Theorem~\ref{thm:cv_serpents_cartes}. We need a little more work for the third assertion. Our argument shall also serve later in Section~\ref{sec:tension_cartes} and~\ref{sec:carte_brownienne}. 
Recall the notation $\lambda(T_n)$ for the number of leaves of $T_n$, which equals $\upsilon(M_n)-1$, and $\Lambda(T_n, j)$ for the number of leaves amongst the first $j$ vertices of $T_n$ in lexicographical order. For every $1 \le i \le \lambda(T_n)$, let $g(i) \in \{1, \dots, \zeta(T_n)\}$ be the index such that $u_{g(i)}$ is the $i$-th leaf of $T_n$. Since $j \mapsto \Lambda(T_n, j)$ is non-decreasing, Lemma~\ref{lem:repartition_feuilles} is equivalent to \begin{equation}\label{eq:approximation_sites_aretes_carte} \left(\frac{g(\lambda(T_n) t)}{\zeta(T_n)} ; t \in [0,1]\right) \cvproba (t ; t \in [0,1]), \end{equation} where as usual, we have linearly interpolated $g$ between integer values. . Then observe that \begin{align*} \frac{1}{\upsilon(M_n)-1} \sum_{k \ge 0} \varphi(B_{\zeta(M_n)}^{-1/2} k) \rho_{M_n}(k) &= \frac{1}{\lambda(T_n)} \varphi(0) + \frac{1}{\lambda(T_n)} \sum_{i = 1}^{\lambda(T_n)} \varphi\left(B_{\zeta(T_n)}^{-1/2} \left(L_n(g(k)) - \underline{L}_n\right)\right) \\ &= \frac{1}{\lambda(T_n)} \varphi(0) + \int_0^1 \varphi\left(B_{\zeta(T_n)}^{-1/2} \left(L_n(g(\lceil \lambda(T_n)t\rceil)) - \underline{L}_n\right)\right) \mathrm{d} t, \end{align*} which converges in law to $\int_0^1 \varphi(\mathscr{L}_t - \underline{\mathscr{L}}) \mathrm{d} t$ according to~\eqref{eq:approximation_sites_aretes_carte} and Theorem~\ref{thm:cv_serpents_cartes}. \end{proof} \subsection{The Gromov--Hausdorff--Prokhorov topology} Let us next briefly define this topology used in Theorem~\ref{thm:cv_cartes} in a way that is tailored for our purpose. Let $(X, d_x, m_x)$ and $(Y, d_Y, m_y)$ be two compact metric spaces equipped with a Borel probability measure. A \emph{correspondence} between these spaces is a subset $R \subset X \times Y$ such that for every $x \in X$, there exists $y \in Y$ such that $(x,y) \in R$ and vice-versa. The \emph{distortion} of $R$ is defined as \[\mathrm{dis}(R) = \sup\left\{\left|d_X(x,x') - d_Y(y,y')\right| ; (x,y), (x', y') \in R\right\}.\] Then we define the Gromov--Hausdorff--Prokhorov distance between these spaces as the infimum of all those $\varepsilon > 0$ such that there exists a coupling $\nu$ between $m_X$ and $m_Y$ and a compact correspondence $R$ between $X$ and $Y$ such that \[\nu(R) \ge 1-\varepsilon \quad\text{and}\quad \mathrm{dis}(R) \le 2 \varepsilon.\] This definition is not the usual one and is due to Miermont~\cite[Proposition~6]{Miermont:Tessellations_of_random_maps_of_arbitrary_genus}. We refer to Section~6 for more details on the Gromov--Hausdorff--Prokhorov distance. Let us only recall that it makes separable and complete the set of isometry classes of compact metric spaces equipped with a Borel probability measure. If $(M_n\setminus\{\star\}, d_{\mathrm{gr}}, p_{\mathrm{gr}})$ is the metric measured space given by the vertices of $M_n$ different from $\star$, their graph distance \emph{in $M_n$} and the uniform probability measure, then the Gromov--Hausdorff--Prokhorov distance between $(M_n, d_{\mathrm{gr}}, p_{\mathrm{gr}})$, and $(M_n\setminus\{\star\}, d_{\mathrm{gr}}, p_{\mathrm{gr}})$ is bounded by one so it suffices to prove that from every increasing sequence of integers, one can extract a subsequence along which the convergence in distribution \begin{equation}\label{eq:convergence_cartes_pointees} \left(M_n\setminus\{\star\}, B_{\zeta(M_n)}^{-1/2} d_{\mathrm{gr}}, p_{\mathrm{gr}}\right) \cvloi (\mathscr{M}, \mathscr{D}, \mathscr{m}), \end{equation} holds for the Gromov--Hausdorff--Prokhorov topology. 
\subsection{Tightness of distances} Recall that the leaves of the labelled tree $(T_n, \ell)$ associated with $(M_n, \star)$ are in bijection with the vertices of $M_n$ different from $\star$. As for the internal vertices of $T_n$, they are each identified with their last child and so to each such internal vertex corresponds a leaf (the end of the right-most ancestral line starting from them) and therefore a vertex of $M_n\setminus\{\star\}$. Let $\varphi : T_n \to M_n\setminus\{\star\}$ be the map which associates with each vertex of $T_n$ its corresponding vertex of $M_n$. Let us list the vertices of $T_n$ as $u_0 < u_1 < \dots < u_{\zeta(M_n)}$ in lexicographical order and for every $i,j \in \{0, \dots, \zeta(M_n)\}$, we set \[d_n(i,j) = d_{\mathrm{gr}}(\varphi(u_i), \varphi(u_j)),\] where $d_{\mathrm{gr}}$ is the graph distance of $M_n$. We then extend $d_n$ to a continuous function on $[0,n]^2$ by `bilinear interpolation' on each square of the form $[i,i+1] \times [j,j+1]$ as in~\cite[Section~2.5]{Le_Gall:Uniqueness_and_universality_of_the_Brownian_map} or~\cite[Section~7]{Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces}. Define for every $t \in [0,1]$: \[H_{(n)}(t)= \frac{B_{\zeta(M_n)}}{\zeta(M_n)} H_n(\zeta(M_n) t), \qquad\text{and}\qquad L_{(n)}(t) = B_{\zeta(M_n)}^{-1/2} L_n(\zeta(M_n) t),\] and for every $s,t \in [0,1]$: \begin{align*} d_{(n)}(s, t) &= B_{\zeta(M_n)}^{-1/2} d_n(\zeta(M_n) s, \zeta(M_n) t), \\ D_{L_{(n)}}(s, t) &= L_{(n)}(s) + L_{(n)}(t) - 2 \max\left\{\min_{r \in [s \wedge t, s \vee t]} L_{(n)}(r); \min_{r \in [0, s \wedge t] \cup [s \vee t, 1]} L_{(n)}(r)\right\}. \end{align*} Using the triangle inequality at a vertex where a geodesic from $\varphi(u_i)$ to $\star$ and a geodesic from $\varphi(u_j)$ to $\star$ in $M_n$ merge, Le Gall~\cite[Equation 4]{Le_Gall:Uniqueness_and_universality_of_the_Brownian_map} (see also~\cite[Lemma 3.1]{Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps} for a detailed proof) obtained the bound \begin{equation}\label{eq:bornes_distances_carte} d_{(n)}(s, t) \le D_{L_{(n)}}(s, t) + 2 B_{\zeta(M_n)}^{-1/2}, \end{equation} for every $s,t \in [0,1]$ such that both $\zeta(M_n) s$ and $\zeta(M_n) t$ are integers, but then also in the other cases. Let us point out that this bound was obtained using the coding of the Bouttier--Di Francesco--Guitter bijection, where $L_n$ is the so-called \emph{white label function} of the two-type tree in the contour order. Nonetheless, as proved in~\cite[Lemma~1]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}, this process equals (deterministically) our process $L_n$ when the trees are related by the Janson--Stef{\'a}nsson bijection. Recall from Section~\ref{sec:BGW} that $T_n$ has the law of $T_{A,n}$ where $A = \ensembles{Z}_+$ if $S=E$, and $A = 0$ if $S=V$, and $A=\ensembles{N}$ if $S=F$. Then Theorem~\ref{thm:cv_serpents_cartes} yields the convergence in distribution of continuous paths \[\left(H_{(n)}(t), L_{(n)}(t), D_{L_{(n)}}(s, t)\right)_{s,t \in [0,1]} \cvloi (\mathscr{H}_t, \mathscr{L}_t, D_\mathscr{L}(s,t))_{s,t \in [0,1]},\] where, similarly to the discrete setting, \[D_\mathscr{L}(s,t) = \mathscr{L}_s + \mathscr{L}_t - 2 \max\left\{\min_{r \in [s \wedge t, s \vee t]} \mathscr{L}_r; \min_{r \in [0, s \wedge t] \cup [s \vee t, 1]} \mathscr{L}_r\right\}.\] The bound~\eqref{eq:bornes_distances_carte} then easily shows that $d_{(n)}$ is tight. 
Therefore, from every increasing sequence of integers, we can extract a subsequence along which we have \begin{equation}\label{eq:convergence_distances_sous_suite} \left(H_{(n)}(t), L_{(n)}(t), d_{(n)}(s, t)\right)_{s,t \in [0,1]} \cvloi (\mathscr{H}_t, \mathscr{L}_t, \mathscr{D}(s,t))_{s,t \in [0,1]}, \end{equation} where $(\mathscr{D}(s,t))_{s,t \in [0,1]}$ depends a priori on the subsequence and satisfies $\mathscr{D} \le D_\mathscr{L}$, see~\cite[Proposition~3.2]{Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps} for a detailed proof in a similar context. In the next subsections, we implicitly restrict ourselves to a subsequence along which~\eqref{eq:convergence_distances_sous_suite} holds. \subsection{Tightness of metric spaces} \label{sec:tension_cartes} Appealing to Skorokhod's representation Theorem, let us assume that the convergence~\eqref{eq:convergence_distances_sous_suite} holds almost surely (along the appropriate subsequence). We claim that, deterministically, the convergence~\eqref{eq:convergence_cartes_pointees} then holds. Let us first construct the limit space. As limit of the sequence $(d_{(n)})_{n \ge 1}$, the fonction $\mathscr{D}$, which is continuous on $[0,1]^2$, is a pseudo-distance. We then define an equivalence relation on $[0,1]$ by setting \[s \approx t \qquad\text{if and only if}\qquad \mathscr{D}(s,t) = 0,\] and we let $\mathscr{M}$ be the quotient $[0,1] / \approx$, equipped with the metric induced by $\mathscr{D}$, which we still denote by $\mathscr{D}$. We let $\Pi$ be the canonical projection from $[0,1]$ to $\mathscr{M}$ which is continuous (since $\mathscr{D}$ is) so $(\mathscr{M}, \mathscr{D})$ is a compact metric space, which finally we endow with the Borel probability measure $\mathscr{m}$ given by the push-forward by $\Pi$ of the Lebesgue measure on $[0,1]$. Recall our definition of the Gromov--Hausdorff--Prokhorov distance. Recall from Section~\ref{sec:profil} that for every $1 \le i \le \lambda(T_n)$, we denote by $g(i) \in \{1, \dots, \zeta(T_n)\}$ the index such that $u_{g(i)}$ is the $i$-th leaf of $T_n$, so the sequence $(\varphi(u_{g(i)}))_{1 \le i \le \lambda(T_n)}$ lists \emph{without redundancies} the vertices of $M_n$ different from $\star$. The set \[\mathscr{R}_n = \left\{\left(\varphi(u_{g(\lceil \lambda(T_n) t\rceil)}), \Pi(t)\right) ; t \in [0,1]\right\}.\] is a correspondence between $(M_n^\star\setminus\{\star\}, B_{\zeta(M_n)}^{-1/2} d_{\mathrm{gr}}, p_{\mathrm{gr}})$ and $(\mathscr{M}, \mathscr{D}, \mathscr{m})$. Let further $\nu$ be the coupling between $p_{\mathrm{gr}}$ and $\mathscr{m}$ given by \[\int_{M_n^\star\setminus\{\star\} \times \mathscr{M}} \phi(v, x) \mathrm{d}\nu(v,x) = \int_0^1 \phi\left(\varphi(u_{g(\lceil \lambda(T_n) t\rceil)}), \Pi(t)\right) \mathrm{d} t,\] for every test function $\phi$. Then $\nu$ is supported by $\mathscr{R}_n$ by construction. Finally, the distortion of $\mathscr{R}_n$ is given by \[\sup_{s,t \in [0,1]} \left|d_{(n)}\left(\frac{g(\lceil \lambda(T_n) s\rceil)}{\zeta(T_n)}, \frac{g(\lceil \lambda(T_n) t\rceil)}{\zeta(T_n)}\right) - \mathscr{D}(s,t)\right|,\] which, appealing to~\eqref{eq:approximation_sites_aretes_carte}, tends to $0$ whenever the convergence~\eqref{eq:convergence_distances_sous_suite} holds, which concludes the proof of the tightness. 
\subsection{Characterisation of the limit in the Brownian case} \label{sec:carte_brownienne} In this last subsection, we assume that $\alpha=2$ and we prove that~\eqref{eq:convergence_distances_sous_suite} holds without extracting a subsequence, and then so does~\eqref{eq:convergence_cartes_pointees}, with a limit which we next recall, following Le Gall~\cite{Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps} to which we refer for details. First, we view $D_\mathscr{L}$ as a function on the tree $\mathscr{T}$ by setting \[D_\mathscr{L}(x,y) = \inf\left\{D_\mathscr{L}(s,t) ; s,t \in [0,1], x=\pi(s) \text{ and } y=\pi(t)\right\},\] for every $x, y \in \mathscr{T}$, where we recall the notation $\pi : [0,1] \to \mathscr{T} = [0,1] / \sim$ for the canonical projection. Then we put \[\mathscr{D}^\ast(x,y) = \inf\left\{\sum_{i=1}^k D_\mathscr{L}(a_{i-1}, a_i) ; k \ge 1, (x=a_0, a_1, \dots, a_{k-1}, a_k=y) \in \mathscr{T}\right\}.\] The function $\mathscr{D}^\ast$ is a pseudo-distance on $\mathscr{T}$ which can be seen as a pseudo-distance on $[0,1]$ by setting $\mathscr{D}^\ast(s,t) = \mathscr{D}^\ast(\pi(s),\pi(t))$ for every $s,t \in [0,1]$. As functions on $\mathscr{T}^2$, we clearly have $\mathscr{D}^\ast \le D_\mathscr{L}$ and in fact, $\mathscr{D}^\ast$ is the largest pseudo-distance on $\mathscr{T}$ satisfying this property. Indeed, if $D$ is another such pseudo-distance, then for every $x,y \in \mathscr{T}$, for every $k \ge 1$ and every $a_0, a_1, \dots, a_{k-1}, a_k \in \mathscr{T}$ with $a_0=x$ and $a_k=y$, by the triangle inequality, $D(x, y) \le \sum_{i=1}^k D(a_{i-1}, a_i) \le \sum_{i=1}^k D_\mathscr{L}(a_{i-1}, a_i)$ and so $D(x, y) \le \mathscr{D}^\ast(x, y)$. Furthermore, if we view $\mathscr{D}^\ast$ as a function on $[0,1]^2$, then for all $s,t \in [0,1]$ such that $\pi(s) = \pi(t)$ we have $\mathscr{D}^\ast(\pi(s),\pi(t)) = 0$. We deduce from the previous maximality property that $\mathscr{D}^\ast$ is the largest pseudo-distance $D$ on $[0,1]$ satisfying the following two properties: \[D \le D_\mathscr{L} \qquad\text{and}\qquad \pi(s) = \pi(t) \quad\text{implies}\quad D(s,t) = 0.\] We point out that since $\mathscr{H}$ is $\sqrt{2}$ times the standard Brownian excursion, then our process $\mathscr{L}$ corresponds to $(\frac{8}{9})^{1/4} Z$ where $Z$ is used to define the Brownian map in~\cite{Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps} and in subsequent paper so the standard Brownian map is $(\mathscr{M}, (\frac{9}{8})^{1/4}\mathscr{D}^\ast, \mathscr{m})$. Let $\mathscr{D}$ be a limit in~\eqref{eq:convergence_distances_sous_suite} and note that it satisfies the two preceding properties; we claim that \[\mathscr{D} = \mathscr{D}^\ast \qquad\text{almost surely}.\] Our argument is adaped from~\cite[below Equation~27]{Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence}, which itself was adapted from the work of Bettinelli \& Miermont~\cite[Lemma 32]{Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks}. According to the maximality property of $\mathscr{D}^\ast$, the bound $\mathscr{D} \le \mathscr{D}^\ast$ holds almost surely so it suffices to prove that if $X, Y$ are i.i.d. uniform random variables on $[0,1]$ such that the pair $(X, Y)$ is independent of everything else, then \begin{equation}\label{eq:identite_distances_carte_brownienne_repointee} \mathscr{D}(X,Y) \enskip\mathop{=}^{(d)}\enskip \mathscr{D}^\ast(X,Y). 
\end{equation} It is known~\cite[Corollary 7.3]{Le_Gall:Uniqueness_and_universality_of_the_Brownian_map} that the right-hand side is distributed as $\mathscr{D}^\ast(s_\star,Y) = \mathscr{L}_Y - \mathscr{L}_{s_\star}$, where $s_\star$ is the (a.s. unique~\cite{Le_Gall-Weill:Conditioned_Brownian_trees}) point at which $\mathscr{L}$ attains its minimum. The point is that, in the discrete setting, $d_n$ describes the distances in the map between the vertices $(\varphi(u_i))_{0 \le i \le \zeta(M_n)}$, and some vertices of $M_n$ may appear more often that others in this sequence so if one samples two uniform random times, they do not correspond to two uniform random vertices of the map. Nonetheless, this effect disappears at the limit, according to~\eqref{eq:approximation_sites_aretes_carte}. Indeed, fix $X, Y$ i.i.d. uniform random variables on $[0,1]$ such that the pair $(X, Y)$ is independent of everything else, and set $x = \varphi(u_{g(\lceil \lambda(T_n) X\rceil)})$ and $y = \varphi(u_{g(\lceil \lambda(T_n) Y\rceil)})$. Note that $x$ and $y$ are uniform random vertices of $M_n \setminus \{\star\}$, they can therefore be coupled with two independent uniform random vertices $x'$ and $y'$ of $M_n$ in such a way that the conditional probability given $M_n$ that $(x,y) \ne (x', y')$ is at most $2/\upsilon(M_n) \to 0$ as $n \to \infty$; we implicitly assume in the sequel that $(x,y) = (x', y')$. Since $\star$ is also a uniform random vertex of $M_n$, we obtain that \begin{equation}\label{eq:identite_distances_carte_discrete_repointee} d_{\mathrm{gr}}(x,y) \enskip\mathop{=}^{(d)}\enskip d_{\mathrm{gr}}(\star, y). \end{equation} By definition, \[d_{\mathrm{gr}}(x,y) = d_n(g(\lceil \lambda(T_n) X\rceil), g(\lceil \lambda(T_n) Y\rceil)).\] Recall that the labels on $T_n$ describe the distances from $\star$ in $M_n$, we therefore have \[d_{\mathrm{gr}}(\star, y) = L_n(g(\lceil \lambda(T_n) Y\rceil)) - \min_{0 \le j \le \zeta(T_n)} L_n(j) + 1.\] We obtain~\eqref{eq:identite_distances_carte_brownienne_repointee} by letting $n \to \infty$ in~\eqref{eq:identite_distances_carte_discrete_repointee} along the same subsequence as in~\eqref{eq:convergence_distances_sous_suite}, appealing also to~\eqref{eq:approximation_sites_aretes_carte}. {\small } \end{document}
\begin{document} \title[Uniqueness]{On a general variational framework for existence and uniqueness in Differential Equations} \author{Pablo Pedregal} \date{} \thanks{Supported by the Spanish Ministerio de Ciencia, Innovación y Universidades through project MTM2017-83740-P} \begin{abstract} Starting from the classic contraction mapping principle, we establish a general, flexible, variational setting that turns out to be applicable to many situations of existence in Differential Equations. We show its potentiality with some selected examples including initial-value, Cauchy problems for ODEs; non-linear, monotone PDEs; linear and non-linear hyperbolic problems; and steady Navier-Stokes systems. \end{abstract} \maketitle \section{Introduction} Possibly the most fundamental result yielding existence and uniqueness of solutions of an equation is the classic Banach contraction mapping principle. \begin{theorem}\label{contraction} Let $\mathbf T:\mathbb H\to\mathbb H$ be a mapping from a Banach space $\mathbb H$ into itself that is contractive in the sense $$ \|\mathbf T\mathbf{x}-\mathbf T\mathbf{y}\|\le k\|\mathbf{x}-\mathbf{y}\|,\quad k\in[0, 1), \mathbf{x}, \mathbf{y}\in\mathbb H. $$ Then $\mathbf T$ admits a unique fixed point $\overline\mathbf{x}\in\mathbb H$, $$ \mathbf T\overline\mathbf{x}=\overline\mathbf{x}. $$ \end{theorem} The proof is well-known, elementary, and independent of dimension. The most fascinating issue is that this basic principle is at the heart of many uniqueness results in Applied Analysis and Differential Equations. Our aim is to stress this fact from a variational stand-point. This means that we would like to rephrase the previous principle into a variational form that could be directly and flexibly used in many of the situations where uniqueness of solutions is known or expected. Our basic principle is the following. \begin{proposition}\label{basica} Let $E:\mathbb H\to\mathbb{R}^+$ be a non-negative, lower semi-continuous functional in a Banach space $\mathbb H$, such that \begin{equation}\label{enhcoerf} \|\mathbf{x}-\mathbf{y}\|\le C(E(\mathbf{x})+E(\mathbf{y})),\quad C>0, \mathbf{x}, \mathbf{y}\in\mathbb H. \end{equation} Suppose, in addition, that \begin{equation}\label{infcero} \inf_{\mathbf{z}\in\mathbb H}E(\mathbf{z})=0. \end{equation} Then there is a unique minimizer, i.e. a unique $\overline\mathbf{x}\in\mathbb H$ such that $E(\overline\mathbf{x})=0$, and \begin{equation}\label{errorint} \|\mathbf{x}-\overline\mathbf{x}\|\le CE(\mathbf{x}),\quad \mathbf{x}\in\mathbb H. \end{equation} \end{proposition} The proof again is elementary, because every minimizing sequence $\{\mathbf{x}_j\}$ with $E(\mathbf{x}_j)\searrow0$ must be a Cauchy sequence in $\mathbb H$, according to \eqref{enhcoerf}, and so it converges to some $\overline\mathbf{x}\in\mathbb H$. The lower semicontinuity implies that $$ 0\le E(\overline\mathbf{x})\le\liminf_{j\to\infty}E(\mathbf{x}_j)=0, $$ and $\overline\mathbf{x}$ is a minimizer. Condition \eqref{enhcoerf} implies automatically that such minimizer is unique, and leads to \eqref{errorint}. Condition \eqref{errorint} is a very clear statement that functional $E$ in Proposition \ref{basica} is a measure of how far we are from $\overline\mathbf{x}$, the unique point where $E$ vanishes. Indeed, this consequence already points in the direction in which to look for functionals $E$ in specific situations: they should be setup as a way to measure departure from solutions sought. This will be taken as a guiding principle in concrete examples. 
The usual least-square method (see \cite{bogunz}, \cite{glowinski}, for example), suitably adapted to each situation, stands as a main, natural possibility for $E$. It is not surprising that Proposition \ref{basica} is more general than Theorem \ref{contraction}, in the sense that the latter is a consequence of the former by considering the natural functional \begin{equation}\label{error} E(\mathbf{x})=\|\mathbf T\mathbf{x}-\mathbf{x}\|. \end{equation} Indeed, for an arbitrary pair $\mathbf{x}, \mathbf{y}\in\mathbb H$, $$ \|\mathbf{x}-\mathbf{y}\|\le\|\mathbf{x}-\mathbf T\mathbf{x}\|+\|\mathbf T\mathbf{x}-\mathbf T\mathbf{y}\|+\|\mathbf T\mathbf{y}-\mathbf{y}\|, $$ and $$ \|\mathbf{x}-\mathbf{y}\|\le E(\mathbf{x})+E(\mathbf{y})+k\|\mathbf{x}-\mathbf{y}\|. $$ From here, we immediately find \eqref{enhcoerf} $$ \|\mathbf{x}-\mathbf{y}\|\le \frac1{1-k}(E(\mathbf{x})+E(\mathbf{y})). $$ Along every sequence of iterates, we have \eqref{infcero} if $\mathbf T$ is contactive. Of course, minimizers for $E$ in \eqref{error} are exactly fixed points for $\mathbf T$. Our objective is to argue that the basic variational principle in Proposition \ref{basica} is quite flexible, and can be implemented in many of the situations in Differential Equations where uniqueness of solutions is known. There are two main requisites in Proposition \ref{basica}. The first one \eqref{enhcoerf} has to be shown directly in each particular scenario where uniqueness is sought. Note that it is some kind of enhanced coercivity, and, as such, stronger than plain coercivity. Concerning \eqref{infcero}, there is, however, a general strategy based on smoothness that can be applied to most of the interesting situations in practice. For the sake of simplicity, we restrict attention to a Hilbert space situation, and regard $\mathbb H$ as a Hilbert space henceforth. If a non-negative functional $E:\mathbb H\to\mathbb{R}^+$ is $\mathcal C^1$-, then $$ \inf_{\mathbf{x}\in\mathbb H}\|E'(\mathbf{x})\|=0. $$ Therefore, it suffices to demand that $$ \lim_{E'(\mathbf{x})\to\mathbf 0}E(\mathbf{x})=0 $$ to enforce \eqref{infcero}. Proposition \ref{basica} becomes then: \begin{proposition}\label{basicad} Let $E:\mathbb H\to\mathbb{R}^+$ be a non-negative, $\mathcal C^1$- functional in a Hilbert space $\mathbb H$, such that \begin{equation}\label{enhcoers} \|\mathbf{x}-\mathbf{y}\|\le C(E(\mathbf{x})+E(\mathbf{y})),\quad C>0, \mathbf{x}, \mathbf{y}\in\mathbb H. \end{equation} Suppose, in addition, that \begin{equation}\label{infceros} \lim_{E'(\mathbf{x})\to\mathbf 0}E(\mathbf{x})=0. \end{equation} Then there is a unique $\overline\mathbf{x}\in\mathbb H$ such that $E(\overline\mathbf{x})=0$, and $$ \|\mathbf{x}-\overline\mathbf{x}\|\le CE(\mathbf{x}) $$ for every $\mathbf{x}\in\mathbb H$. \end{proposition} Though the following is a simple observation, it is worth to note it explicitly. \begin{proposition} Under the same conditions as in Proposition \ref{basicad}, the functional $E$ enjoys the Palais-Smale condition. \end{proposition} We remind readers that the fundamental Palais-Smale condition reads: \begin{quote} If the sequence $\{\mathbf{x}_j\}$ is bounded in $\mathbb H$, and $E'(\mathbf{x}_j)\to\mathbf 0$ in $\mathbb H$, then, at least for some subsequence, $\{\mathbf{x}_j\}$ converges in $\mathbb H$. \end{quote} Again, it is not difficult to suspect the proof. 
Condition \eqref{infceros} informs us that Palais-Smale sequences ($\{\mathbf{x}_j\}$, bounded and $E'(\mathbf{x}_j)\to\mathbf 0$) are always minimizing sequences for $E$ ($E(\mathbf{x}_j)\to0$), while the estimate \eqref{enhcoers} ensures that (the full) such sequence is a Cauchy sequence in $\mathbb H$. Notice, however, that, due to \eqref{infceros}, $0$ is the only possible critical value of $E$, and so critical points become automatically global minimizers regardless of convexity considerations. In view of the relevance of conditions \eqref{enhcoers} and \eqref{infceros}, we adopt the following definition in which we introduce some simple, helpful changes to broaden its applicability. We also change the notation to stress that vectors in $\mathbb H$ will be functions for us. \begin{definition}\label{errorgeneral} A non-negative, $\mathcal C^1$-functional $$ E(\mathbf{u}):\mathbb H\to\mathbb{R}^+ $$ defined over a Hilbert space $\mathbb H$ is called an error functional if \begin{enumerate} \item behavior as $E'\to\mathbf 0$: \begin{equation}\label{comporinff} \lim_{E'(\mathbf{u})\to\mathbf 0}E(\mathbf{u})=0 \end{equation} over bounded subsets of $\mathbb H$; and \item enhanced coercivity: there is a positive constant $C$, such that for every pair $\mathbf{u}, \mathbf{v}\in\mathbb H$ we have \begin{equation}\label{enhcoer} \|\mathbf{u}-\mathbf{v}\|^2\le C(E(\mathbf{u})+E(\mathbf{v})). \end{equation} \end{enumerate} \end{definition} Our basic result Proposition \ref{basicad} remains the same. \begin{proposition}\label{basicas} Let $E:\mathbb H\to\mathbb{R}^+$ be an error functional according to Definition \ref{errorgeneral}. Then there is a unique minimizer $\mathbf{u}_\infty\in\mathbb H$ such that $E(\mathbf{u}_\infty)=0$, and \begin{equation}\label{medidaerror} \|\mathbf{u}-\mathbf{u}_\infty\|^2\le CE(\mathbf{u}), \end{equation} for every $\mathbf{u}\in\mathbb H$. \end{proposition} It is usually said that the contraction mapping principle Theorem \ref{contraction}, though quite helpful in ODEs, is almost inoperative for PDEs. We will try to make an attempt at convincing readers that, on the contrary, Proposition \ref{basicad} is equally helpful for ODEs and PDEs. To this end, we will examine several selected examples as a sample of the potentiality of these ideas. Specifically, we will look at the following situations, though none of our existence results is new at this stage: \begin{enumerate} \item Cauchy, initial-value problems for ODEs; \item linear hyperbolic examples; \item non-linear, monotone PDEs; \item non-linear wave models; \item steady Navier-Stokes system. \end{enumerate} We systematically will have to check the two basic properties \eqref{enhcoer} and \eqref{comporinff} in each situation treated. We can be dispensed with condition \eqref{comporinff}, and replace it by \eqref{infcero} if more general results not requiring smoothness are sought. On the other hand, in many regular situations linearization may lead in a systematic way to the following. \begin{proposition}\label{principalbis} Let $$ E(\mathbf{u}):\mathbb H\to\mathbb{R}^+ $$ be a $\mathcal C^1$-functional verifying the enhanced coercivity condition \eqref{enhcoer}. Suppose there is $\mathbf T:\mathbb H\to\mathbb H$, a locally Lipschitz operator, such that \begin{equation}\label{propoper} \langle E'(\mathbf{u}), \mathbf T\mathbf{u}\rangle=-dE(\mathbf{u}) \end{equation} for every $\mathbf{u}\in\mathbb H$, and some constant $d>0$. 
Then $E$ is an error functional (according to Definition \ref{errorgeneral}), and, consequently, there is a unique global minimizer $\mathbf{u}_\infty$ with $E(\mathbf{u}_\infty)=0$, and \eqref{medidaerror} holds
\begin{equation}\label{medidaerrorbis} \|\mathbf{u}-\mathbf{u}_\infty\|^2\le CE(\mathbf{u}), \end{equation}
for some constant $C$, and every $\mathbf{u}\in\mathbb H$. \end{proposition}
Note how condition \eqref{propoper} leads immediately to \eqref{comporinff}. In this contribution, we will assume smoothness in all of our examples. Typically our Hilbert spaces $\mathbb H$ will be usual Sobolev spaces in different situations, so standard facts about these spaces will be taken for granted. In particular, the following Hilbert spaces will play a basic role for us in those various situations mentioned above
$$ H^1(0, T; \mathbb{R}^N),\quad H^1_0(\mathbb{R}^N_+),\quad H^1(\mathbb{R}^N_+),\quad H^1_0(\Omega), \quad H^1_0(\Omega; \mathbb{R}^N), $$
for a domain $\Omega\subset\mathbb{R}^N$ as regular as we may need it to be. If one is interested in numerical or practical approximation of solutions $\mathbf{u}_\infty$, note how \eqref{medidaerrorbis} is a clear invitation to seek approximations to $\mathbf{u}_\infty$ by minimizing $E(\mathbf{u})$. The standard way to take a functional to its minimum value is to use a steepest descent algorithm or some suitable variant of it. It is true that such a procedure is designed, in fact, to drive the derivative $E'(\mathbf{u})$ to zero; but condition \eqref{comporinff} guarantees precisely that, in doing so, we also converge to $\mathbf{u}_\infty$. We are not pursuing this direction here, though it has been implemented in some scenarios (\cite{munped2}, \cite{pedregal3}). Definition \ref{errorgeneral} is global. A parallel, local concept may turn out to be necessary in some situations. We will show this in our final example dealing with the steady Navier-Stokes system. The application to parabolic problems, though still feasible, is, in general, more delicate.
\section{Cauchy problems for ODEs}
As a preliminary step, we start testing our ideas with a typical initial-value, Cauchy problem for the non-linear system
\begin{equation}\label{ode} \mathbf{x}'(t)=\mathbf{f}(\mathbf{x}(t))\hbox{ in }(0, +\infty),\quad \mathbf{x}(0)=\mathbf{x}_0 \end{equation}
where the map
$$ \mathbf{f}(\mathbf{y}):\mathbb{R}^N\to\mathbb{R}^N $$
is smooth and globally Lipschitz, and $\mathbf{x}_0\in\mathbb{R}^N$. Under these circumstances, it is well-known that \eqref{ode} possesses a unique solution. Let us pretend not to know anything about problem \eqref{ode}, and see if our formalism could be applied in this initial situation to prove the following classical theorem.
\begin{theorem} If the mapping $\mathbf{f}(\mathbf{y})$ is globally Lipschitz, there is a unique absolutely continuous solution
$$ \mathbf{x}(t):[0, \infty)\to\mathbb{R}^N $$
for \eqref{ode}. \end{theorem}
According to our previous discussion, we need a functional $E:\mathbb H\to\mathbb{R}^+$ defined on an appropriate Hilbert space $\mathbb H$ complying with the necessary properties. For a fixed, but otherwise arbitrary, positive time $T$, we will take
\begin{gather} \mathbb H=\{\mathbf{z}(t):[0, T]\to\mathbb{R}^N: \mathbf{z}\in H^1(0, T; \mathbb{R}^N), \mathbf{z}(0)=\mathbf 0\},\nonumber\\ E(\mathbf{z})=\frac12\int_0^T|\mathbf{z}'(s)-\mathbf{f}(\mathbf{x}_0+\mathbf{z}(s))|^2\,ds\label{erroredo}.
\end{gather} $\mathbb H$ is a subspace of the standard Sobolev space $H^1(0, T; \mathbb{R}^N)$, under the norm (recall that $\mathbf{z}(0)=\mathbf 0$ for paths in $\mathbb H$) $$ \|\mathbf{z}\|^2=\int_0^T|\mathbf{z}'(s)|^2\,ds. $$ Note that paths $\mathbf{x}\in\mathbb H$ are absolutely continuous, and hence $E(\mathbf{z})$ is well-defined over $\mathbb H$. We first focus on \eqref{enhcoer}. \begin{lemma} For paths $\mathbf{y}$, $\mathbf{z}$ in $\mathbb H$, we have $$ \|\mathbf{y}-\mathbf{z}\|^2\le C(E(\mathbf{y})+E(\mathbf{z})),\quad C>0. $$ \end{lemma} \begin{proof} The proof is, in fact, pretty elementary. Suppose that $$ \mathbf{z}(0)=\mathbf{y}(0)=\mathbf{x}_0, $$ so that $\mathbf{y}-\mathbf{z}\in\mathbb H$. Then \begin{align} \mathbf{y}(t)-\mathbf{z}(t)=&\int_0^t (\mathbf{y}'(s)-\mathbf{z}'(s))\,ds\nonumber\\ =&\int_0^t (\mathbf{y}'(s)-\mathbf{f}(\mathbf{y}(s)))\,ds+\int_0^t (\mathbf{f}(\mathbf{y}(s))-\mathbf{f}(\mathbf{z}(s)))\,ds\nonumber\\ &+\int_0^t (\mathbf{f}(\mathbf{z}(s))-\mathbf{z}'(s))\,ds.\nonumber \end{align} From here, we immediately find $$ |\mathbf{y}(t)-\mathbf{z}(t)|^2\le C(E(\mathbf{y})+E(\mathbf{z}))+CM^2\int_0^t|\mathbf{y}(s)-\mathbf{z}(s)|^2\,ds, $$ if $M$ is the Lipschitz constant for the map $\mathbf{f}$, and $C$ is a generic, universal constant we will not care to change. From Gronwall's lemma, we can have \begin{equation}\label{dercero} |\mathbf{y}(t)-\mathbf{z}(t)|^2\le C(E(\mathbf{y})+E(\mathbf{z}))e^{CM^2T} \end{equation} for all $t\in[0, T]$. This means \begin{gather} \|\mathbf{y}-\mathbf{z}\|_{L^\infty(0, T; \mathbb{R}^N)}\le e^{CM^2T/2}\sqrt{C(E(\mathbf{y})+E(\mathbf{z}))},\nonumber\\ \|\mathbf{y}-\mathbf{z}\|_{L^2(0, T; \mathbb{R}^N)}^2\le CTe^{CM^2T}(E(\mathbf{y})+E(\mathbf{z})).\label{cota} \end{gather} But once we can rely on this information, the above decomposition allows us to write in a similar manner $$ \int_0^t |\mathbf{y}'(s)-\mathbf{z}'(s)|^2\,ds\le C(E(\mathbf{y})+E(\mathbf{z}))+CM^2\int_0^t|\mathbf{y}(s)-\mathbf{z}(s)|^2\,ds $$ and $$ \|\mathbf{y}'-\mathbf{z}'\|^2_{L^2(0, T; \mathbb{R}^N)}\le C(E(\mathbf{y})+E(\mathbf{z}))+CM^2\|\mathbf{y}-\mathbf{z}\|^2_{L^2(0, T; \mathbb{R}^N)}, $$ and thus, taking into account \eqref{cota}, $$ \|\mathbf{y}'-\mathbf{z}'\|^2_{L^2(0, T; \mathbb{R}^N)}\le (C+C^2M^2Te^{CM^2T})(E(\mathbf{y})+E(\mathbf{z})). $$ Our estimate \eqref{enhcoer} is then a consequence that the norm in $\mathbb H$ can be taken to be the $L^2$-norm of the derivative. \end{proof} The second basic ingredient is \eqref{comporinff}. We will be using Proposition \ref{principalbis}. We assume further that the mapping $\mathbf{f}$ is smooth with a derivative uniformly bounded to guarantee the uniform Lipschitz condition. For the operator $\mathbf T$, we will put $\mathbf{Z}=\mathbf T\mathbf{z}$ for $\mathbf{z}\in\mathbb H$, and linearize \eqref{ode} at the path $\mathbf{x}_0+\mathbf{z}(t)$ to write \begin{gather} \mathbf{Z}'(t)=\mathbf{f}(\mathbf{x}_0+\mathbf{z}(t))+\nabla\mathbf{f}(\mathbf{x}_0+\mathbf{z}(t))\mathbf{Z}(t)-\mathbf{z}'(t)\hbox{ in }[0, T],\label{linealizacion}\\ \mathbf{Z}(0)=\mathbf 0.\nonumber \end{gather} This is a linear, differential, non-constant coefficient system for $\mathbf{Z}$ with coefficients depending on $\mathbf{z}$. Under smoothness assumptions, which we take for granted, such operator $\mathbf T$ is locally Lipschitz because the image $\mathbf{Z}=\mathbf T\mathbf{z}$ is defined through a linear initial-value, Cauchy problem with coefficients depending continuously on $\mathbf{z}$. 
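As a purely illustrative aside, not needed for the arguments below, the iteration $\mathbf{z}\mapsto\mathbf{z}+\mathbf T\mathbf{z}$ suggested by \eqref{linealizacion} can be tested numerically. The following minimal sketch, written in Python with an explicit Euler discretization and the hypothetical choice $N=1$, $\mathbf{f}=\sin$ (a globally Lipschitz map), merely shows the discrete value of $E$ in \eqref{erroredo} being driven to zero along such iterates.
\begin{verbatim}
# Illustration only: Newton-type iteration z -> z + Tz for x' = f(x), x(0) = x0,
# with the hypothetical choice f = sin, N = 1, and an explicit Euler discretization.
import numpy as np

def sketch(T=2.0, n=400, x0=1.0, iters=8):
    f, df = np.sin, np.cos                 # f and its (bounded) derivative
    dt = T / n
    z = np.zeros(n + 1)                    # initial guess in H: z(0) = 0
    for it in range(iters):
        zp = np.diff(z) / dt               # z'
        res = zp - f(x0 + z[:-1])          # pointwise residual z' - f(x0 + z)
        E = 0.5 * np.sum(res**2) * dt      # discrete version of E(z) in (erroredo)
        print(f"iteration {it}: E(z) = {E:.3e}")
        # Z = Tz: solve Z' = f(x0+z) + f'(x0+z) Z - z', Z(0) = 0, by explicit Euler
        Z = np.zeros(n + 1)
        for k in range(n):
            Z[k + 1] = Z[k] + dt * (f(x0 + z[k]) + df(x0 + z[k]) * Z[k] - zp[k])
        z = z + Z                          # Newton-type update; E(z) is driven to zero
    return x0 + z                          # approximation of the solution on [0, T]

sketch()
\end{verbatim}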
The important property to be checked, concerning $\mathbf T$, is \eqref{propoper}. It is elementary to see, under smoothness assumptions which, as indicated, we have taken for granted, that
$$ \langle E'(\mathbf{z}), \mathbf{Z}\rangle= \int_0^T(\mathbf{z}'(s)-\mathbf{f}(\mathbf{x}_0+\mathbf{z}(s)))(\mathbf{Z}'(s)-\nabla\mathbf{f}(\mathbf{x}_0+\mathbf{z}(s))\mathbf{Z}(s))\,ds. $$
Hence, for $\mathbf{Z}=\mathbf T\mathbf{z}$ coming from \eqref{linealizacion}, we immediately deduce that
$$ \langle E'(\mathbf{z}), \mathbf{Z}\rangle=-2E(\mathbf{z}). $$
We are, then, entitled to apply Proposition \ref{principalbis} to conclude that the functional $E$ in \eqref{erroredo} is an error functional in the sense of Definition \ref{errorgeneral}, and to utilize Proposition \ref{basicas} to conclude the following.
\begin{theorem} If the mapping $\mathbf{f}(\mathbf{y}):\mathbb{R}^N\to\mathbb{R}^N$ is $\mathcal C^1$ with a globally bounded gradient, then, for arbitrary $\mathbf{x}_0\in\mathbb{R}^N$ and $T>0$, problem \eqref{ode} admits a unique $\mathcal C^1$-solution
$$ \overline\mathbf{x}(t):[0, T)\to\mathbb{R}^N, $$
and there is a positive constant $C$ such that
$$ \|\mathbf{x}-\overline\mathbf{x}\|_\mathbb H^2\le C\int_0^T|\mathbf{x}'(s)-\mathbf{f}(\mathbf{x}_0+\mathbf{x}(s))|^2\,ds $$
for every $\mathbf{x}\in\mathbb H$. \end{theorem}
There is no difficulty in showing a local version of this result by using the same ideas.
\section{Linear hyperbolic example}
Since most readers will likely not be used to thinking about hyperbolic problems in these terms, we will treat the most transparent example of a linear, hyperbolic problem from this perspective, and later apply the method to a non-linear wave equation. We seek a (weak) solution $u(t, \mathbf{x})$, in a suitable sense, of the problem
\begin{gather} u_{tt}(t, \mathbf{x})-\Delta u(t, \mathbf{x})-u(t, \mathbf{x})=f(t, \mathbf{x})\hbox{ in }\mathbb{R}^N_+,\label{ondas}\\ u(0, \mathbf{x})=0, u_t(0, \mathbf{x})=0\hbox{ on }t=0,\nonumber \end{gather}
for $f\in L^2(\mathbb{R}^N_+)$. Here $\mathbb{R}^N_+$ is the upper half-space $[0, +\infty)\times\mathbb{R}^N$. We look for
$$ u(t, \mathbf{x})\in H^1_0(\mathbb{R}^N_+) $$
(jointly in time and space) such that
\begin{gather} \int_{\mathbb{R}^N_+}[u_t(t, \mathbf{x})w_t(t, \mathbf{x})-\nabla u(t, \mathbf{x})\cdot\nabla w(t, \mathbf{x})\label{formadeb}\\ +(f(t, \mathbf{x})+u(t, \mathbf{x}))w(t, \mathbf{x})]\,d\mathbf{x}\,dt=0\nonumber \end{gather}
for every test function
$$ w(t, \mathbf{x})\in H^1(\mathbb{R}^N_+). $$
Note how the arbitrary values of the test function $w$ for $t=0$ impose the vanishing initial velocity $u_t(0, \mathbf{x})=0$. To set up a suitable error functional
$$ E(u): H^1_0(\mathbb{R}^N_+)\to\mathbb{R}^+ $$
for every $u(t, \mathbf{x})$, and not just for the solution we seek, we utilize a natural least-squares concept, as indicated in the Introduction. Define an appropriate defect or residual function
$$ U(t, \mathbf{x})\in H^1(\mathbb{R}^N_+), $$
for each such $u\in H^1_0(\mathbb{R}^N_+)$, as the unique variational solution of
\begin{equation}\label{debilondas} \int_{\mathbb{R}^N_+}[(u_t+U_t)w_t-(\nabla u-\nabla U)\cdot\nabla w+(f+u+U) w]\,d\mathbf{x}\,dt=0 \end{equation}
valid for every $w\in H^1(\mathbb{R}^N_+)$. This function $U$ is indeed the unique minimizer over $H^1(\mathbb{R}^N_+)$ of the strictly convex, quadratic functional
$$ I(w)=\int_{\mathbb{R}^N_+}\left(\frac12[(w_t+u_t)^2+|\nabla w-\nabla u|^2+(u+w)^2]+fw\right)\,d\mathbf{x}\,dt $$
for each fixed $u$.
The size of $U$ is regarded as a measure of the departure of $u$ from being a solution of our problem \begin{gather} E:H^1_0(\mathbb{R}^N_+)\to\mathbb{R}^+,\nonumber\\ E(u)=\int_{\mathbb{R}^N_+}\frac12[U^2_t(t, \mathbf{x})+|\nabla U(t, \mathbf{x})|^2+U^2(t, \mathbf{x})]\,d\mathbf{x}\,dt.\label{errorhyp} \end{gather} We can also put, in a short form, \begin{equation}\label{sencillo} E(u)=\frac12\|U\|_{H^1(\mathbb{R}^N_+)}^2; \end{equation} or even $$ E(u)=\frac12\|u_{tt}-\Delta u-u-f\|^2_{H^{-1}(\mathbb{R}^N_+)}, $$ though we will stick to \eqref{sencillo} to better manipulate $E$. We would like to apply Proposition \ref{basicas} in this situation, and hence, we set to ourselves the task of checking the two main assumptions in Definition \ref{errorgeneral}. Our functional $E$ is definitely smooth and non-negative to begin with. It is not surprising that in order to work with the wave equation the following two linear operators \begin{gather} \mathbb S:H^1(\mathbb{R}^N_+)\mapsto H^1(\mathbb{R}^N_+),\quad \mathbb S w(t, \mathbf{x})=w(t, -\mathbf{x}),\nonumber\\ \mathcal S: H^1(\mathbb{R}^N_+)\mapsto H^1(\mathbb{R}^N_+)^*,\nonumber\\ \mathcal S u(t, \mathbf{x})=(u(t, -\mathbf{x}), u_t(t, -\mathbf{x}), \nabla u(t, -\mathbf{x})),\nonumber \end{gather} will play a role. $H^1(\mathbb{R}^N_+)^*$ is here the dual space of $H^1(\mathbb{R}^N_+)$, not to be mistaken with $H^{-1}(\mathbb{R}^N_+)$. Put $\mathbb H=\mathcal S(H^1_0(\mathbb{R}^N_+))$. The following fact is elementary. Check for instance \cite{brezis}. \begin{lemma}\label{hiperbolico} \begin{enumerate} \item The map $\mathbb S$ is an isometry. \item $\mathbb H$ is a closed subspace of $H^1(\mathbb{R}^N_+)^*$, and $$ \mathcal S:H^1_0(\mathbb{R}^N_+)\to\mathbb H $$ is a bijective, continuous mapping. In fact, we clearly have \begin{equation}\label{coer} \|u\|_{H^1_0(\mathbb{R}^N_+)}\le \|\mathcal S u\|_{H^1(\mathbb{R}^N_+)^*}. \end{equation} \end{enumerate} \end{lemma} We can now proceed to prove inequality \eqref{enhcoer} in this new context. \begin{proposition}\label{enhcoerhyp} There is a constant $K>0$ such that $$ \|u-v\|_{H^1_0(\mathbb{R}^N_+)}^2\le K(E(u)+E(v)), $$ for every pair $u, v\in H^1_0(\mathbb{R}^N_+)$. \end{proposition} \begin{proof} Let $U, V\in H^1(\mathbb{R}^N)$ be the respective residual functions associated with $u$ and $v$. Because we are in a linear situation, if we replace $$ u-v\mapsto u,\quad U-V\mapsto U, $$ we would have \begin{equation}\label{debilcero} \int_{\mathbb{R}^N_+}[(u_t+U_t)w_t-(\nabla u-\nabla U)\cdot\nabla w+(u+U) w]\,d\mathbf{x}\,dt=0, \end{equation} for every $w\in H^1(\mathbb{R}^N_+)$. If we use $\mathbb S w$ in \eqref{debilcero} instead of $w$, we immediately find \begin{gather} \int_{\mathbb{R}^N_+}[(u_t+U_t)w_t(t, -\mathbf{x})+(\nabla u-\nabla U)\cdot\nabla w(t, -\mathbf{x})\nonumber\\ +(u+U)w(t, -\mathbf{x})]\,d\mathbf{x}\,dt=0.\nonumber \end{gather} The terms involving $U$ can be written in compact form as $$ \langle U, \mathbb S w\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)\rangle} $$ while a natural change of variables in the terms involving $u$ leads to writing these in the form $$ \langle w, \mathcal S u\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}. $$ Hence, for every $w\in H^1(\mathbb{R}^N_+)$, we find $$ \langle U, \mathbb S w\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)\rangle}+ \langle w, \mathcal S u\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}=0. 
$$ Bearing in mind this identity, we have, through the Lemma \ref{hiperbolico}, \begin{align} \|u\|_{H^1_0(\mathbb{R}^N_+)}\le&\|\mathcal S u\|_{H^1(\mathbb{R}^N_+)^*}\nonumber\\ =&\sup_{\|w\|_{H^1(\mathbb{R}^N_+)}\le 1}\langle w, \mathcal S u\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}\nonumber\\ \le&\|U\|_{H^1(\mathbb{R}^N_+)}\,\|w\|_{H^1(\mathbb{R}^N_+)} \nonumber\\ \le&\|U\|_{H^1(\mathbb{R}^N_+)} .\nonumber \end{align} If we go back to $$ u\mapsto u-v,\quad U\mapsto U-V, $$ we are led to \begin{align} \|u-v\|^2_{H^1_0(\mathbb{R}^N_+)}\le &\|U-V\|^2_{H^1(\mathbb{R}^N_+)}\nonumber\\ \le &C\left(\|U\|^2_{H^1(\mathbb{R}^N_+)}+\|V\|^2_{H^1(\mathbb{R}^N_+)}\right)\nonumber\\ \le &C\left(E(u)+E(v)\right),\nonumber \end{align} for some constant $C>0$. \end{proof} The second main ingredient to apply Proposition \ref{basicas} is to show that $E$ defined in \eqref{errorhyp} complies with \eqref{comporinff} too. To this end, we need to compute the derivative $E'(u)$, and so we perform a perturbation $$ u+\epsilon v\mapsto U+\epsilon V, $$ in \eqref{debilondas} to write \begin{gather} \int_{\mathbb{R}^N_+}[(u_t+\epsilon v_t+U_t+\epsilon V_t)w_t-(\nabla u+\epsilon\nabla v-\nabla U-\epsilon\nabla V)\cdot\nabla w\nonumber\\ +(f+u+\epsilon v+U+\epsilon V) w]\,d\mathbf{x}\,dt=0.\nonumber \end{gather} The term to order 1 in $\epsilon$ should vanish \begin{equation}\label{firstorder} \int_{\mathbb{R}^N_+}[(v_t+V_t)w_t-(\nabla v-\nabla V)\cdot\nabla w+(v+V) w]\,d\mathbf{x}\,dt=0 \end{equation} for every $w\in H^1(\mathbb{R}^N_+)$. On the other hand, by differentiating $$ E(u+\epsilon v)=\int_{\mathbb{R}^N_+}\frac12((U+\epsilon V)^2_t+|\nabla U+\epsilon\nabla V|^2+(U+\epsilon V)^2)\,d\mathbf{x}\,dt, $$ with respect to $\epsilon$, and setting $\epsilon=0$, we arrive at $$ \langle E'(u), v\rangle=\int_{\mathbb{R}^N_+}(U_tV_t+\nabla U\cdot\nabla V+UV)\,d\mathbf{x}\,dt. $$ By taking $w=U$ in \eqref{firstorder}, we can also write \begin{align} \langle E'(u), v\rangle=&\int_{\mathbb{R}^N_+}(\nabla v\cdot\nabla U-v_tU_t-vU)\,d\mathbf{x}\,dt\nonumber\\ =&-\langle \mathbb S v, \mathcal S U\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}.\nonumber \end{align} From this identity, which ought to be valid for every $v\in H^1_0(\mathbb{R}^N_+)$, we clearly conclude that if $E'(u)\to\mathbf 0$ then $\mathcal S U\to\mathbf 0$ as well, because $\mathbb S$ preserves the norm. Realizing that $$ E(u)=\frac12\|U\|_{H^1(\mathbb{R}^N_+)}^2\le \frac12\|\mathcal S U\|_{H^1(\mathbb{R}^N_+)^*}^2, $$ by estimate \eqref{coer}, we conclude the following. \begin{proposition} The functional $E$ in \eqref{errorhyp} is an error functional in the sense of Definition \ref{errorgeneral}. \end{proposition} Our main abstract result Proposition \ref{basicas} applies in this situation too, and we can conclude \begin{theorem} Problem \eqref{ondas} admits a unique weak solution $u\in H^1_0(\mathbb{R}^N_+)$ in the sense \eqref{formadeb}, and for every other $v\in H^1_0(\mathbb{R}^N_+)$, we have $$ \|u-v\|_{H^1_0(\mathbb{R}^N_+)}^2\le KE(v), $$ for some positive constant $K$. \end{theorem} \section{Non-linear monotone problems}\label{seven} Suppose we would like to solve, or approximate the solution of, a certain non-linear elliptic system of PDEs of the form $$ \operatorname{div}[\Phi(\nabla u)]=0\hbox{ in }\Omega,\quad u=u_0\hbox{ on }\partial\Omega, $$ for a non-linear, smooth map $$ \Phi(\mathbf{a}):\mathbb{R}^N\to\mathbb{R}^N. $$ $\Omega\subset\mathbb{R}^N$ is assumed to be a regular, bounded domain. 
One can set up a natural, suitable, non-negative, smooth functional based on the least-squares idea, as already introduced,
\begin{equation}\label{funcellnl} E(v):H^1_0(\Omega)\to\mathbb{R} \end{equation}
by putting
$$ E(v)=\frac12\int_\Omega|\nabla U(\mathbf{x})|^2\,d\mathbf{x} $$
where
$$ \operatorname{div}[\Phi(\nabla v+\nabla u_0)+\nabla U]=0\hbox{ in }\Omega $$
and $U\in H^1_0(\Omega)$. We can also put
$$ E(v)=\frac12\|\operatorname{div}[\Phi(\nabla v+\nabla u_0)]\|^2_{H^{-1}(\Omega)}. $$
Our goal is to apply again Proposition \ref{basicas} or, since we are now in a non-linear situation, Proposition \ref{principalbis}. In either case, \eqref{enhcoer} is necessary.
\begin{lemma} Let $\Phi(\mathbf{a}):\mathbb{R}^N\to\mathbb{R}^N$ be a smooth map with linear growth at infinity, i.e.
\begin{equation}\label{lineargr} |\Phi(\mathbf{a})|\le C_1|\mathbf{a}|+C_0, \end{equation}
with $C_1>0$, and strictly monotone in the sense
\begin{equation}\label{monotonia} (\Phi(\mathbf{a}_1)-\Phi(\mathbf{a}_0))\cdot(\mathbf{a}_1-\mathbf{a}_0)\ge c|\mathbf{a}_1-\mathbf{a}_0|^2,\quad c>0, \end{equation}
for every pair of vectors $\mathbf{a}_i$, $i=0, 1$. Then there is a positive constant $C$ such that
$$ \|u-v\|_{H^1_0(\Omega)}^2\le C(E(u)+E(v)), $$
for every pair $u, v\in H^1_0(\Omega)$. \end{lemma}
\begin{proof} Let $u, v\in H^1_0(\Omega)$, and let $U, V\in H^1_0(\Omega)$ be their respective residuals in the sense
\begin{equation}\label{sistemas} \operatorname{div}[\Phi(\nabla u+\nabla u_0)+\nabla U]=0,\quad \operatorname{div}[\Phi(\nabla v+\nabla u_0)+\nabla V]=0 \end{equation}
in $\Omega$, and
$$ E(u)=\frac12\|\nabla U\|^2_{L^2(\Omega; \mathbb{R}^N)},\quad E(v)=\frac12\|\nabla V\|^2_{L^2(\Omega; \mathbb{R}^N)}. $$
If we use $u-v$ as a test field in \eqref{sistemas}, we find
\begin{gather} \int_\Omega\Phi(\nabla u+\nabla u_0)\cdot(\nabla u-\nabla v)\,d\mathbf{x}=-\int_\Omega\nabla U\cdot(\nabla u-\nabla v)\,d\mathbf{x},\nonumber\\ \int_\Omega\Phi(\nabla v+\nabla u_0)\cdot(\nabla u-\nabla v)\,d\mathbf{x}=-\int_\Omega\nabla V\cdot(\nabla u-\nabla v)\,d\mathbf{x}.\nonumber \end{gather}
The monotonicity condition, together with these identities, takes us, by subtracting one from the other, to
$$ c\int_\Omega|\nabla u-\nabla v|^2\,d\mathbf{x}\le\int_\Omega(\nabla V-\nabla U)\cdot(\nabla u-\nabla v)\,d\mathbf{x}. $$
The standard Cauchy-Schwarz inequality implies that
$$ c\|\nabla u-\nabla v\|_{L^2(\Omega; \mathbb{R}^N)}\le \|\nabla U-\nabla V\|_{L^2(\Omega; \mathbb{R}^N)}, $$
and thus, taking into account the triangle inequality, we have
$$ c^2\|\nabla u-\nabla v\|_{L^2(\Omega; \mathbb{R}^N)}^2\le 4E(u)+4E(v). $$
The use of Poincaré's inequality yields our statement.
\end{proof}
The second ingredient, needed to apply Proposition \ref{principalbis}, is the operator $\mathbf T$ which comes directly from linearization or from Newton's method. Given an approximation of the solution $v+u_0$, $v\in H^1_0(\Omega)$, we seek a correction $V\in H^1_0(\Omega)$ determined by
\begin{equation}\label{sistemalineal} \operatorname{div}[\Phi(\nabla v+\nabla u_0)+\nabla\Phi(\nabla v+\nabla u_0)\nabla V]=0\hbox{ in }\Omega, \end{equation}
as a linear approximation of
$$ \operatorname{div}[\Phi(\nabla v+\nabla u_0+\nabla V)]=0\hbox{ in }\Omega. $$
We therefore define
\begin{equation}\label{newoper} \mathbf T:H^1_0(\Omega)\to H^1_0(\Omega),\quad \mathbf T v=V, \end{equation}
where $V$ is the solution of \eqref{sistemalineal}.
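For illustration only, the operator $\mathbf T$ in \eqref{newoper} is a Newton step and can be tested on a discretized one-dimensional instance. The following sketch, written in Python with the hypothetical monotone choice $\Phi(a)=a+\arctan a$ (which satisfies \eqref{lineargr} and \eqref{monotonia}) and a standard finite-difference discretization, is not part of the analysis; it only shows the discrete counterpart of the correction $V=\mathbf T v$ computed from \eqref{sistemalineal}.
\begin{verbatim}
# Illustration only: discrete Newton step V = Tv from (sistemalineal) in dimension
# one, for div[Phi(u')] = 0 on (0,1), u(0) = 0, u(1) = 1, Phi(a) = a + arctan(a).
import numpy as np

n, h = 50, 1.0 / 51                      # interior nodes and mesh size
phi  = lambda a: a + np.arctan(a)        # strictly monotone, bounded gradient
dphi = lambda a: 1.0 + 1.0 / (1.0 + a**2)

u = np.zeros(n + 2); u[-1] = 1.0         # current guess; boundary data built in
for it in range(6):
    du = np.diff(u) / h                  # u' on the cells
    F = np.diff(phi(du)) / h             # residual div[Phi(u')] at interior nodes
    print(f"iteration {it}: residual norm = {np.linalg.norm(F) * np.sqrt(h):.2e}")
    J = np.zeros((n, n))                 # tridiagonal Jacobian (linearized operator)
    for i in range(n):
        J[i, i] = -(dphi(du[i]) + dphi(du[i + 1])) / h**2
        if i > 0:
            J[i, i - 1] = dphi(du[i]) / h**2
        if i < n - 1:
            J[i, i + 1] = dphi(du[i + 1]) / h**2
    V = np.linalg.solve(J, -F)           # discrete counterpart of V = Tv
    u[1:-1] += V                         # Newton update of the interior values
\end{verbatim}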
The fact that $\mathbf T$ is well-defined is a direct consequence of the standard Lax-Milgram lemma and the identification $$ \mathbf A=\nabla\Phi(\nabla v+\nabla u_0),\quad \mathbf{a}=\Phi(\nabla v+\nabla u_0), $$ provided $$ |\nabla\Phi(\mathbf{v})|\le M,\quad \mathbf{u}\cdot\nabla\Phi(\mathbf{v})\mathbf{u}\ge c|\mathbf{u}|^2,\quad M, c>0. $$ The first bound is compatible with linear growth at infinity, \eqref{lineargr}, while the second one is a consequence of monotonicity \eqref{monotonia}. On the other hand, the smoothness of $\mathbf T$ depends directly on the smoothness of $\Phi$, specifically, we assume $\Phi$ to be $\mathcal C^2$. Since $\mathbf T$ comes from Newton's method, condition \eqref{propoper} is guaranteed. We are hence entitled to apply Proposition \ref{principalbis} and conclude that \begin{theorem} Let $\Phi(\mathbf{a}):\mathbb{R}^N\to\mathbb{R}^N$ be a $\mathcal C^2$-mapping such that \begin{gather} |\nabla\Phi(\mathbf{a})|\le M,\nonumber\\ (\Phi(\mathbf{a}_1)-\Phi(\mathbf{a}_0))\cdot(\mathbf{a}_1-\mathbf{a}_0)\ge c|\mathbf{a}_1-\mathbf{a}_0|^2,\nonumber \end{gather} for constants $M, c>0$, and every $\mathbf{a}$, $\mathbf{a}_1$, $\mathbf{a}_0$ in $\mathbb{R}^N$. There is a unique weak solution $u\in u_0+H^1_0(\Omega)$, for arbitrary $u_0\in H^1(\Omega)$, of the equation $$ \operatorname{div}[\Phi(\nabla u)]=0\hbox{ in }\Omega,\quad u-u_0\in H^1_0(\Omega). $$ Moreover $$ \|u-v\|^2_{H^1_0(\Omega)}\le CE(v) $$ for every other $v\in u_0+H^1_0(\Omega)$. \end{theorem} It is not hard to design appropriate sets of assumptions to deal with more general equations of the form $$ \operatorname{div}[\Phi(\nabla v(\mathbf{x}), v(\mathbf{x}), \mathbf{x})]=0. $$ \section{Non-linear waves} We would like to explore non-linear equations of the form $$ u_{tt}(t, \mathbf{x})-\Delta u(t, \mathbf{x})-f(\nabla u(t, \mathbf{x}), u_t(t, \mathbf{x}), u(t, \mathbf{x}))=0\hbox{ in }(t, \mathbf{x})\in\mathbb{R}^N_+, $$ subjected to initial conditions $$ u(0, \mathbf{x})=u_0(\mathbf{x}),\quad u_t(0, \mathbf{x})=u_1(\mathbf{x}) $$ for appropriate data $u_0$ and $u_1$ belonging to suitable spaces to be determined. Dimension $N$ is taken to be at least two. Though more complicated situations could be considered allowing for a monotone main part in the equation, as in the previous section, to better understand the effect of the term incorporating lower-order terms, we will restrict ourselves to the equation above. Conditions on the non-linear term $$ f(\mathbf{z}, z, u):\mathbb{R}^N\times\mathbb{R}\times\mathbb{R}\to\mathbb{R} $$ will be specified along the way as needed. Our ambient space will be $H^1(\mathbb{R}^N_+)$ so that weak solutions $u$ are sought in $H^1(\mathbb{R}^N_+)$. If we assume $$ u_0\in H^{1/2}(\mathbb{R}^N),\quad u_1\in L^2(\mathbb{R}^N), $$ we can take for granted, without loss of generality, that both $u_0$ and $u_1$ identically vanish and $u\in H^1_0(\mathbb{R}^N_+)$, at the expense of permitting $$ f(\mathbf{z}, z, u, t, \mathbf{x}):\mathbb{R}^N\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}^N\to\mathbb{R}. $$ We will therefore stick to the problem \begin{equation}\label{nonlinw} u_{tt}-\Delta u-f(\nabla u, u_t, u, t, \mathbf{x})=0\hbox{ in }(t, \mathbf{x})\in\mathbb{R}^N_+, \end{equation} subjected to initial conditions \begin{equation}\label{iniccond} u(0, \mathbf{x})=0,\quad u_t(0, \mathbf{x})=0. 
\end{equation}
A weak solution $u\in H^1_0(\mathbb{R}^N_+)$ of \eqref{nonlinw} is such that
\begin{equation}\label{formadebilnlw} \int_{\mathbb{R}^N_+}[-u_tw_t+\nabla u\cdot\nabla w-f(\nabla u, u_t, u, t, \mathbf{x})w]\,d\mathbf{x}\,dt=0 \end{equation}
for every $w\in H^1(\mathbb{R}^N_+)$. This weak formulation asks for the non-linear term recorded in the function $f$ to comply with
\begin{equation}\label{nonlinterm} |f(\mathbf{z}, z, u, t, \mathbf{x})|\le C(|\mathbf{z}|+|z|+|u|^{(N+1)/(N-1)})+f_0(t, \mathbf{x}) \end{equation}
for a function $f_0\in L^2(\mathbb{R}^N_+)$, in such a way that the composition
$$ f(\nabla u, u_t, u, t, \mathbf{x})\in L^2(\mathbb{R}^N_+) $$
for every $u\in H^1(\mathbb{R}^N_+)$. As expected, for every $u\in H^1_0(\mathbb{R}^N_+)$ we define its residual $U\in H^1(\mathbb{R}^N_+)$ through
\begin{equation}\label{resnlw} \int_{\mathbb{R}^N_+}[(U_t+u_t)w_t-(\nabla u-\nabla U)\cdot\nabla w+(U+f(\nabla u, u_t, u, t, \mathbf{x}))w]\,d\mathbf{x}\,dt=0 \end{equation}
which must hold for every test function $w\in H^1(\mathbb{R}^N_+)$; and the functional
\begin{equation}\label{funcerrnlw} E(u):H^1_0(\mathbb{R}^N_+)\to\mathbb{R}^+,\quad E(u)=\frac12\|U\|^2_{H^1(\mathbb{R}^N_+)}, \end{equation}
as a measure of departure of $u$ from being a weak solution of \eqref{nonlinw}. Note how \eqref{resnlw} determines $U$ in a unique way. Indeed, such $U$ is the unique minimizer of the strictly convex, quadratic functional
$$ \frac12\int_{\mathbb{R}^N_+}\left[(U_t+u_t)^2+|\nabla U-\nabla u|^2+(U+f(\nabla u, u_t, u, t, \mathbf{x}))^2\right]\,d\mathbf{x}\,dt $$
defined for $U\in H^1(\mathbb{R}^N_+)$. We claim that under appropriate additional hypotheses, we can apply Proposition \ref{principalbis} to this situation. To explain things in the most accessible way, however, we will show that Proposition \ref{basicas} can also be applied directly. This requires checking that $E$ in \eqref{funcerrnlw} is indeed an error functional in the sense of Definition \ref{errorgeneral}. We will be using the operators and the formalism right before Lemma \ref{hiperbolico}, as well as bound \eqref{coer} in this lemma.
\begin{lemma} Suppose the function $f(\mathbf{z}, z, u, t, \mathbf{x})$ is such that
\begin{enumerate}
\item $f(\mathbf 0, 0, 0, t, \mathbf{x})\in L^2(\mathbb{R}^N_+)$;
\item the difference $f(\mathbf{z}, z, u, t, \mathbf{x})-u$ is globally Lipschitz with respect to triplets $(\mathbf{z}, z, u)$ in the sense
\begin{gather} |f(\mathbf{z}, z, u, t, \mathbf{x})-u-f(\mathbf{y}, y, v, t, \mathbf{x})+v|\le \nonumber\\ M\left(|\mathbf{z}-\mathbf{y}|+|z-y|+\frac1D|u-v|^{(N+1)/(N-1)}\right),\nonumber \end{gather}
where $D$ is the constant of the corresponding embedding
$$ H^1(\mathbb{R}^N_+)\subset L^{2(N+1)/(N-1)}(\mathbb{R}^N_+), $$
and $M<1$.
\end{enumerate}
Then there is a positive constant $K$ with
$$ \|u-v\|_{H^1_0(\mathbb{R}^N_+)}^2\le K(E(u)+E(v)), $$
for every pair $u, v\in H^1_0(\mathbb{R}^N_+)$, where $E$ is given in \eqref{funcerrnlw}. \end{lemma}
\begin{proof} Note how our hypotheses on the nonlinearity $f$ imply the bound \eqref{nonlinterm} by taking
$$ \mathbf{y}=\mathbf 0,\quad y=v=0,\quad f_0(t, \mathbf{x})=f(\mathbf 0, 0, 0, t, \mathbf{x}).
$$
If $u$, $v$ belong to $H^1_0(\mathbb{R}^N_+)$, and $U$, $V$ in $H^1(\mathbb{R}^N_+)$ are their respective residuals, then
\begin{gather} \int_{\mathbb{R}^N_+}[(U_t+u_t)w_t-(\nabla u-\nabla U)\cdot\nabla w+(U+f(\nabla u, u_t, u, t, \mathbf{x}))w]\,d\mathbf{x}\,dt=0,\nonumber\\ \int_{\mathbb{R}^N_+}[(V_t+v_t)w_t-(\nabla v-\nabla V)\cdot\nabla w+(V+f(\nabla v, v_t, v, t, \mathbf{x}))w]\,d\mathbf{x}\,dt=0,\nonumber \end{gather}
for every $w\in H^1(\mathbb{R}^N_+)$. By subtracting one from the other, and letting
$$ s=u-v,\quad S=U-V, $$
we find
\begin{gather} \int_{\mathbb{R}^N_+}[(S_t+s_t)w_t-(\nabla s-\nabla S)\cdot\nabla w+\nonumber\\ (S+f(\nabla u, u_t, u, t, \mathbf{x})-f(\nabla v, v_t, v, t, \mathbf{x}))w]\,d\mathbf{x}\,dt=0,\nonumber \end{gather}
for every $w\in H^1(\mathbb{R}^N_+)$. We can recast this identity, by using the formalism in the corresponding linear situation around Lemma \ref{hiperbolico}, as
\begin{gather} \langle S, \mathbb S w\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)\rangle}+ \langle w, \mathcal S s\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}\nonumber\\ =-\int_{\mathbb{R}^N_+}[(f(\nabla u, u_t, u, t, \mathbf{x})-f(\nabla v, v_t, v, t, \mathbf{x})-s)w]\,d\mathbf{x}\,dt. \nonumber \end{gather}
The same manipulations as in the proof of Proposition \ref{enhcoerhyp}, together with the assumed Lipschitz property on $f$, lead immediately to
\begin{align} \|s\|_{H^1_0(\mathbb{R}^N_+)}\le &\|\mathcal S s\|_{H^1(\mathbb{R}^N_+)^*}\nonumber\\ =&\sup_{\|w\|_{H^1(\mathbb{R}^N_+)}\le 1}|\langle w, \mathcal S s\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}|\nonumber\\ =&\sup_{\|w\|_{H^1(\mathbb{R}^N_+)}\le 1}\left|\langle S, \mathbb S w\rangle+\int_{\mathbb{R}^N_+}(f_u-f_v-s)w\,d\mathbf{x}\,dt\right|\nonumber\\ \le&\|S\|_{H^1(\mathbb{R}^N_+)}\,\|w\|_{H^1(\mathbb{R}^N_+)}+M\|s\|_{H^1_0(\mathbb{R}^N_+)}\|w\|_{H^1(\mathbb{R}^N_+)}\nonumber\\ \le&\|S\|_{H^1(\mathbb{R}^N_+)}+M\|s\|_{H^1_0(\mathbb{R}^N_+)}.\nonumber \end{align}
We are putting
$$ f_u=f(\nabla u, u_t, u, t, \mathbf{x}),\quad f_v=f(\nabla v, v_t, v, t, \mathbf{x}), $$
for the sake of notation. Note also the use of the embedding constant $D$. The resulting final inequality, and the fact that $M<1$, show our claim.
\end{proof}
We turn to the second important property for $E$ to become an error functional, namely,
$$ \lim_{E'(\mathbf{u})\to\mathbf 0}E(\mathbf{u})=0 $$
over bounded subsets of $H^1_0(\mathbb{R}^N_+)$. We assume that the non-linearity $f$ is $\mathcal C^1$ with respect to $(\mathbf{z}, z, u)$, and that its partial derivatives are uniformly bounded. To compute the derivative $E'(u)$ at an arbitrary $u\in H^1_0(\mathbb{R}^N_+)$, we perform, as usual, the perturbation to first-order
$$ u\mapsto u+\epsilon v,\quad U\mapsto U+\epsilon V, $$
and introduce them in \eqref{resnlw}. After differentiation with respect to $\epsilon$, and setting $\epsilon=0$, we find
\begin{equation}\label{perturbacion} \int_{\mathbb{R}^N_+}[(V_t+v_t)w_t-(\nabla v-\nabla V)\cdot\nabla w+(V+\overline f_\mathbf{z}\cdot\nabla v+\overline f_z v_t+\overline f_u v)w]\,d\mathbf{x}\,dt=0, \end{equation}
for all $w\in H^1(\mathbb{R}^N_+)$, where
$$ \overline f_\mathbf{z}(t, \mathbf{x})=f_\mathbf{z}(\nabla u(t, \mathbf{x}), u_t(t, \mathbf{x}), u(t, \mathbf{x}), t, \mathbf{x}), $$
and the same for $\overline f_z(t, \mathbf{x})$ and $\overline f_u(t, \mathbf{x})$.
On the other hand,
$$ \langle E'(u), v\rangle=\lim_{\epsilon\to0}\frac1\epsilon(E(u+\epsilon v)-E(u)) $$
is clearly given by
$$ \langle E'(u), v\rangle=\int_{\mathbb{R}^N_+}\left(\nabla U\cdot\nabla V+U_t\,V_t+U\,V\right)\,d\mathbf{x}\,dt. $$
If we use $w=U$ in \eqref{perturbacion}, we can write
$$ \langle E'(u), v\rangle=\int_{\mathbb{R}^N_+}\left[-v_t\,U_t+\nabla v\cdot\nabla U-(\overline f_\mathbf{z}\cdot\nabla v+\overline f_z v_t+\overline f_u v)U\right]\,d\mathbf{x}\,dt. $$
The validity of this representation for every $v\in H^1_0(\mathbb{R}^N_+)$ enables us to identify $E'(u)$ with the triplet
$$ (-U_t-\overline f_zU, \nabla U-U\overline f_\mathbf{z}, -\overline f_uU), $$
in the sense $E'(u)=\mathcal S U$ where the linear operator
$$ \mathcal S:H^1(\mathbb{R}^N_+)\to H^{-1}(\mathbb{R}^N_+) $$
is precisely determined by
$$ \langle\mathcal S U, v\rangle=\int_{\mathbb{R}^N_+}\left[-v_t\,U_t+\nabla v\cdot\nabla U-(\overline f_\mathbf{z}\cdot\nabla v+\overline f_z v_t+\overline f_u v)U\right]\,d\mathbf{x}\,dt $$
for every $v\in H^1_0(\mathbb{R}^N_+)$. Notice how this operator $\mathcal S$ is well-defined because the non-linearity $f$ has been assumed to be globally Lipschitz with partial derivatives uniformly bounded. To conclude that $E'(u)\to\mathbf 0$ implies $U\to0$, and hence $E(u)\to0$, we need to ensure that this operator $\mathcal S$ is injective. We conjecture that this is so, without further requirements; but to simplify the argument here, we add the assumption that
$$ |f_u(\mathbf{z}, z, u, t, \mathbf{x})|\ge \epsilon>0 $$
for every $(\mathbf{z}, z, u, t, \mathbf{x})$. Under this additional hypothesis, the condition
$$ (-U_t-\overline f_zU, \nabla U-U\overline f_\mathbf{z}, -\overline f_uU)=\mathbf 0 $$
automatically implies
$$ U=\nabla U=U_t=0 $$
and hence $E(u)=0$.
\begin{theorem} Suppose the non-linearity $f(\mathbf{z}, z, u, t, \mathbf{x})$ is $\mathcal C^1$ with respect to the variables $(\mathbf{z}, z, u)$, and:
\begin{enumerate}
\item $f(\mathbf 0, 0, 0, t, \mathbf{x})\in L^2(\mathbb{R}^N_+)$;
\item the difference $f(\mathbf{z}, z, u, t, \mathbf{x})-u$ is Lipschitz with respect to triplets $(\mathbf{z}, z, u)$ in the sense
\begin{gather} |f(\mathbf{z}, z, u, t, \mathbf{x})-u-f(\mathbf{y}, y, v, t, \mathbf{x})+v|\le \nonumber\\ M\left(|\mathbf{z}-\mathbf{y}|+|z-y|+\frac1D|u-v|^{(N+1)/(N-1)}\right),\nonumber \end{gather}
where $D$ is the constant of the corresponding embedding
$$ H^1(\mathbb{R}^N_+)\subset L^{2(N+1)/(N-1)}(\mathbb{R}^N_+), $$
and $M<1$;
\item non-vanishing of $f_u$: there is some $\epsilon>0$ with
$$ |f_u(\mathbf{z}, z, u, t, \mathbf{x})|\ge \epsilon>0. $$
\end{enumerate}
Then the problem
$$ u_{tt}-\Delta u-f(\nabla u, u_t, u, t, \mathbf{x})=0\hbox{ in }(t, \mathbf{x})\in\mathbb{R}^N_+ $$
under vanishing initial conditions
$$ u(0, \mathbf{x})=u_t(0, \mathbf{x})=0,\quad \mathbf{x}\in\mathbb{R}^N, $$
admits a unique weak solution $u\in H^1_0(\mathbb{R}^N_+)$ in the sense \eqref{formadebilnlw}, and
$$ \|v-u\|^2_{H^1_0(\mathbb{R}^N_+)}\le K E(v), $$
for any other $v\in H^1_0(\mathbb{R}^N_+)$. \end{theorem}
Without the global Lipschitz condition on $f$ in the previous statement, and assuming only smoothness with respect to the triplets $(\mathbf{z}, z, u)$, only a local existence result is possible. This is standard.
\section{The steady Navier-Stokes system}
For a bounded, Lipschitz, connected domain $\Omega\subset\mathbb{R}^N$, $N=2, 3$, we are concerned with the steady Navier-Stokes system
\begin{equation}\label{navsto} -\nu\Delta\mathbf{u}+\nabla\mathbf{u}\,\mathbf{u}+\nabla u=\mathbf{f},\quad \operatorname{div}\mathbf{u}=0\hbox{ in }\Omega, \end{equation}
for a vector field $\mathbf{u}\in H^1_0(\Omega; \mathbb{R}^N)$, and a scalar, pressure field $u\in L^2(\Omega)$. The external force field $\mathbf{f}$ is assumed to belong to the dual space $H^{-1}(\Omega; \mathbb{R}^N)$. The parameter $\nu>0$ is the viscosity. Because of the incompressibility condition, the system can also be written in the form
$$ -\nu\Delta\mathbf{u}+\operatorname{div}(\mathbf{u}\otimes\mathbf{u})+\nabla u=\mathbf{f},\quad \operatorname{div}\mathbf{u}=0\hbox{ in }\Omega. $$
A weak solution is a divergence-free vector field $\mathbf{u}\in H^1_0(\Omega; \mathbb{R}^N)$, and a scalar field $u\in L^2(\Omega)$, normalized by demanding vanishing average in $\Omega$, such that
$$ \int_\Omega [\nu\nabla\mathbf{u}(\mathbf{x}):\nabla\mathbf{v}(\mathbf{x})-\mathbf{u}(\mathbf{x})\nabla\mathbf{v}(\mathbf{x})\mathbf{u}(\mathbf{x})-u(\mathbf{x})\operatorname{div}\mathbf{v}(\mathbf{x})]\,d\mathbf{x}=\langle\mathbf{f}, \mathbf{v}\rangle $$
for every $\mathbf{v}\in H^1_0(\Omega; \mathbb{R}^N)$, where the right-hand side stands for the duality pairing
$$ H^{-1}(\Omega; \mathbb{R}^N)-H^1_0(\Omega; \mathbb{R}^N). $$
We propose to look at this problem by incorporating the incompressibility constraint into the space, as part of feasibility, as is usually done; there is also the alternative of treating the same situation by incorporating a penalization on the divergence into the functional, instead of including the constraint in the class of admissible fields (see \cite{lemmunped}). The pressure field arises as the multiplier corresponding to the divergence-free constraint. Let
$$ \mathbb{D}\equiv H^1_{0, div}(\Omega; \mathbb{R}^N)=\{\mathbf{u}\in H^1_0(\Omega; \mathbb{R}^N): \operatorname{div}\mathbf{u}=0\hbox{ in }\Omega\}. $$
For every such $\mathbf{u}$, we determine its residual $\mathbf{U}$, in a unique way, as the solution of the restricted variational problem
$$ \hbox{Minimize in }\mathbf{V}\in \mathbb{D}:\quad \int_\Omega\left[\frac12|\nabla\mathbf{V}|^2+(\nu\nabla\mathbf{u}-\mathbf{u}\otimes\mathbf{u}):\nabla\mathbf{V}\right]\,d\mathbf{x}-\langle\mathbf{f}, \mathbf{V}\rangle. $$
The pressure $v$ comes as the corresponding multiplier for the divergence-free constraint, in such a way that the unique minimizer $\mathbf{U}$ is determined through the variational equality
$$ \int_\Omega (\nabla\mathbf{U}:\nabla\mathbf{V}+(\nu\nabla\mathbf{u}-\mathbf{u}\otimes\mathbf{u}):\nabla\mathbf{V}+v\operatorname{div}\mathbf{V})\,d\mathbf{x}-\langle\mathbf{f}, \mathbf{V}\rangle=0 $$
valid for every test field $\mathbf{V}\in H^1_0(\Omega; \mathbb{R}^N)$. This is the weak form of the optimality condition associated with the previous variational problem
\begin{equation}\label{residuo} -\Delta\mathbf{U}-\nu\Delta\mathbf{u}+\operatorname{div}(\mathbf{u}\otimes\mathbf{u})-\mathbf{f}+\nabla v=\mathbf 0\hbox{ in }\Omega, \end{equation}
for $\mathbf{U}\in\mathbb{D}$. The multiplier $v\in L^2_0(\Omega)$ (square-integrable fields with a vanishing average) is the pressure. We define
\begin{equation}\label{funcionalns} E(\mathbf{u}):\mathbb{D}\to\mathbb{R}^+,\quad E(\mathbf{u})=\frac12\int_\Omega|\nabla\mathbf{U}(\mathbf{x})|^2\,d\mathbf{x}.
\end{equation}
For $\mathbf{u}, \mathbf{v}\in\mathbb{D}$, let $\mathbf{U}, \mathbf{V}\in\mathbb{D}$ be their respective residuals, and $u, v$ their respective pressure fields. Put
$$ \mathbf{w}=\mathbf{u}-\mathbf{v}\in\mathbb{D},\quad \mathbf{W}=\mathbf{U}-\mathbf{V}\in\mathbb{D}, \quad w=u-v\in L^2(\Omega). $$
It is elementary to find, by subtraction of the corresponding system \eqref{residuo} for $\mathbf{u}$ and $\mathbf{v}$, that
\begin{equation}\label{esta} -\Delta\mathbf{W}-\nu\Delta\mathbf{w}+\operatorname{div}(\mathbf{u}\otimes\mathbf{u}-\mathbf{v}\otimes\mathbf{v})+\nabla w=\mathbf 0\hbox{ in }\Omega. \end{equation}
It is the presence of the non-linear term $\operatorname{div}(\mathbf{u}\otimes\mathbf{u})$, so fundamental to the Navier-Stokes system, that makes the situation different from a linear setting. We write
$$ \operatorname{div}(\mathbf{u}\otimes\mathbf{u}-\mathbf{v}\otimes\mathbf{v})=\operatorname{div}(\mathbf{w}\otimes\mathbf{u})+\operatorname{div}(\mathbf{v}\otimes\mathbf{w}), $$
and bear in mind the well-known fact
\begin{equation}\label{identidades} \int_\Omega(\mathbf{v}\otimes\mathbf{v}:\nabla\mathbf{u}+\mathbf{u}\otimes\mathbf{v}:\nabla\mathbf{v})\,d\mathbf{x}=\int_\Omega\mathbf{u}\otimes\mathbf{v}:\nabla\mathbf{u}\,d\mathbf{x}=0 \end{equation}
for every $\mathbf{u}, \mathbf{v}\in\mathbb{D}$. If we use $\mathbf{w}$ as a test function in \eqref{esta}, we find that
\begin{equation}\label{uno1} \int_\Omega[\nabla\mathbf{W}:\nabla \mathbf{w}+\nu|\nabla\mathbf{w}|^2-(\mathbf{w}\otimes\mathbf{u}):\nabla\mathbf{w}-(\mathbf{v}\otimes\mathbf{w}):\nabla\mathbf{w}]\,d\mathbf{x}=0. \end{equation}
Note that the integral involving $w$ vanishes because $\mathbf{w}$ is divergence-free. By \eqref{identidades},
\begin{gather} \int_\Omega(\mathbf{w}\otimes\mathbf{u}):\nabla\mathbf{w}\,d\mathbf{x}=0,\nonumber\\ \int_\Omega(\mathbf{v}\otimes\mathbf{w}):\nabla\mathbf{w}\,d\mathbf{x}=-\int_\Omega (\mathbf{w}\otimes\mathbf{w}):\nabla\mathbf{v}\,d\mathbf{x}.\nonumber \end{gather}
Identity \eqref{uno1} becomes
\begin{equation}\label{dos2} \int_\Omega[\nabla\mathbf{W}:\nabla \mathbf{w}+\nu|\nabla\mathbf{w}|^2+(\mathbf{w}\otimes\mathbf{w}):\nabla\mathbf{v}]\,d\mathbf{x}=0. \end{equation}
We can use $\mathbf{v}$ as a test function in the corresponding system \eqref{residuo} for $\mathbf{v}$ to obtain
$$ \int_\Omega(\nabla \mathbf{V}:\nabla\mathbf{v}+\nu|\nabla\mathbf{v}|^2-\mathbf{f}\cdot\mathbf{v})\,d\mathbf{x}=0. $$
Again we have utilized that fields in $\mathbb{D}$ are divergence-free, and the second identity in \eqref{identidades}. This last identity implies, in an elementary way, that
\begin{equation}\label{tres3} \nu\|\mathbf{v}\|_{H^1_0(\Omega; \mathbb{R}^N)}\le \|\mathbf{f}\|_{H^{-1}(\Omega; \mathbb{R}^N)}+\sqrt{2E(\mathbf{v})}. \end{equation}
Recall that
$$ 2E(\mathbf{v})=\|\mathbf{V}\|^2_{H^1_0(\Omega; \mathbb{R}^N)}. $$
We now have all the suitable elements to exploit \eqref{dos2}. If $C=C(N)$ is the constant of the Sobolev embedding of $H^1(\Omega)$ into $L^4(\Omega)$ for $N\le4$, then \eqref{dos2} leads to
$$ \nu\|\mathbf{w}\|^2\le \|\mathbf{W}\|\,\|\mathbf{w}\|+C^2\|\mathbf{w}\|^2\|\mathbf{v}\| $$
where all norms here are in $H^1_0(\Omega; \mathbb{R}^N)$.
On the other hand, if we replace the size of $\mathbf{v}$ by the estimate \eqref{tres3}, we are carried to $$ \nu\|\mathbf{w}\|^2\le \|\mathbf{W}\|\,\|\mathbf{w}\|+\frac {C^2}\nu\|\mathbf{w}\|^2\left(\|\mathbf{f}\|_{H^{-1}(\Omega; \mathbb{R}^N)}+\sqrt{2E(\mathbf{v})}\right), $$ or $$ \left(\nu-\frac {C^2}\nu\left(\|\mathbf{f}\|+\sqrt{2E(\mathbf{v})}\right)\right)\|\mathbf{w}\|^2\le \|\mathbf{W}\|\,\|\mathbf{w}\|. $$ Since $$ \|\mathbf{W}\|=\|\mathbf{U}-\mathbf{V}\|\le \|\mathbf{U}\|+\|\mathbf{V}\|, $$ we would have $$ \left(\nu-\frac {C^2}\nu\left(\|\mathbf{f}\|+\sqrt{2E(\mathbf{v})}\right)\right)^2\|\mathbf{u}-\mathbf{v}\|^2\le 4 (E(\mathbf{u})+E(\mathbf{v})). $$ The form of this inequality leads us to the following interesting generalization of Definition \ref{errorgeneral}. \begin{definition}\label{errorconcota} A non-negative, $\mathcal C^1$-functional $$ E(\mathbf{u}):\mathbb H\to\mathbb{R}^+ $$ defined over a Hilbert space $\mathbb H$ is called an error functional if there is some positive constant $c$ (including $c=+\infty$) such that: \begin{enumerate} \item behavior as $E'\to\mathbf 0$: $$ \lim_{E'(\mathbf{u})\to\mathbf 0}E(\mathbf{u})=0 $$ over bounded subsets of $\mathbb H$; and \item enhanced coercivity: there is a positive constant $C$ (that might depend on $c$), such that for every pair $\mathbf{u}, \mathbf{v}$ belonging to the sub-level set $\{E\le c\}$, we have $$ \|\mathbf{u}-\mathbf{v}\|^2\le C(E(\mathbf{u})+E(\mathbf{v})). $$ \end{enumerate} \end{definition} It is interesting to note that the sub-level sets $\{E\le d\}$ for $d<c$, for a functional $E$ verifying Definition \ref{errorconcota}, cannot maintain several connected components. Because our basic result Proposition \ref{basicas} is concerned with zeros of $E$, it is still valid under Definition \ref{errorconcota}. \begin{proposition}\label{basicass} Let $E:\mathbb H\to\mathbb{R}^+$ be an error functional according to Definition \ref{errorconcota}. Then there is a unique minimizer $\mathbf{u}_\infty\in\mathbb H$ such that $E(\mathbf{u}_\infty)=0$, and $$ \|\mathbf{u}-\mathbf{u}_\infty\|^2\le CE(\mathbf{u}), $$ for every $\mathbf{u}\in\mathbb H$ provided $E(\mathbf{u})$ is sufficiently small ($E(\mathbf{u})\le c$, the constant in Definition \ref{errorconcota}). \end{proposition} The calculations that motivated this generalization yield the following. \begin{proposition} Let $N\le4$, and $\Omega\subset\mathbb{R}^N$, a bounded, Lipschitz, connected domain. If $\nu>0$ and $\mathbf{f}\in H^{-1}(\Omega; \mathbb{R}^N)$ are such that the quotient $\|\mathbf{f}\|/\nu^2$ is sufficiently small, then the functional $E$ in \eqref{funcionalns} complies with Definition \ref{errorconcota}. \end{proposition} We now turn to examining the interconnection between $E$ and $E'$. To this end, we gather here \eqref{residuo} and \eqref{funcionalns} \begin{gather} E(\mathbf{u})=\frac12\int_\Omega|\nabla\mathbf{U}(\mathbf{x})|^2\,d\mathbf{x},\nonumber\\ -\Delta\mathbf{U}-\nu\Delta\mathbf{u}+\operatorname{div}(\mathbf{u}\otimes\mathbf{u})-\mathbf{f}+\nabla u=\mathbf 0\hbox{ in }\Omega,\nonumber \end{gather} for $\mathbf{u}, \mathbf{U}\in\mathbb{D}$ and $u\in L^2_0(\Omega)$. 
If we replace
$$ \mathbf{u}\mapsto\mathbf{u}+\epsilon\mathbf{v},\quad \mathbf{U}\mapsto\mathbf{U}+\epsilon\mathbf{V},\quad u\mapsto u+\epsilon v, $$
to first-order in $\epsilon$, we would have
\begin{gather} E(\mathbf{u}+\epsilon\mathbf{v})=\frac12\int_\Omega|\nabla\mathbf{U}+\epsilon\nabla\mathbf{V}|^2\,d\mathbf{x},\nonumber\\ -\Delta(\mathbf{U}+\epsilon\mathbf{V})-\nu\Delta(\mathbf{u}+\epsilon\mathbf{v})+\operatorname{div}((\mathbf{u}+\epsilon\mathbf{v})\otimes(\mathbf{u}+\epsilon\mathbf{v}))\nonumber\\ -\mathbf{f}+\nabla (u+\epsilon v)=\mathbf 0.\nonumber \end{gather}
By differentiating with respect to $\epsilon$, and setting $\epsilon=0$, we arrive at
\begin{gather} \langle E'(\mathbf{u}), \mathbf{v}\rangle=\int_\Omega\nabla\mathbf{U}\cdot\nabla\mathbf{V}\,d\mathbf{x},\nonumber\\ -\Delta\mathbf{V}-\nu\Delta\mathbf{v}+\operatorname{div}(\mathbf{u}\otimes\mathbf{v}+\mathbf{v}\otimes\mathbf{u})+\nabla v=\mathbf 0.\nonumber \end{gather}
If we use $\mathbf{U}$ as a test function in this last system, we realize that
$$ \langle E'(\mathbf{u}), \mathbf{v}\rangle=\int_\Omega [-\nu\nabla\mathbf{v}\cdot\nabla\mathbf{U}+\mathbf{u}\otimes\mathbf{v}:\nabla\mathbf{U}+\mathbf{v}\otimes\mathbf{u}:\nabla\mathbf{U}]\,d\mathbf{x} $$
for every $\mathbf{v}\in\mathbb{D}$. If we set $\mathbf{w}=E'(\mathbf{u})\in\mathbb{D}$, then
$$ \int_\Omega[\nabla\mathbf{w}\cdot\nabla \mathbf{v}+\nu\nabla\mathbf{U}\cdot\nabla\mathbf{v}+(\nabla\mathbf{U}\mathbf{u}+\mathbf{u}\nabla\mathbf{U})\cdot\mathbf{v}]\,d\mathbf{x}=0, $$
for every $\mathbf{v}\in\mathbb{D}$. In particular, if we plug $\mathbf{v}=\mathbf{U}$ in, bearing in mind that due to \eqref{identidades} the last two terms drop out, we are left with
$$ \nu\|\nabla\mathbf{U}\|^2=-\langle\nabla\mathbf{w}, \nabla\mathbf{U}\rangle, $$
or
$$ \nu\|\nabla\mathbf{U}\|\le\|\nabla\mathbf{w}\|. $$
If the term on the right-hand side, which is $\|E'(\mathbf{u})\|$, tends to zero, so does the one on the left-hand side, which is $\nu\sqrt{2E(\mathbf{u})}$. This shows the second basic property of an error functional. As a result, Proposition \ref{basicass} can be applied.
\begin{theorem} If $\Omega\subset\mathbb{R}^N$, $N\le4$, is a bounded, Lipschitz, connected domain, and $\nu>0$ and $\mathbf{f}\in H^{-1}(\Omega; \mathbb{R}^N)$ in the steady Navier-Stokes system \eqref{navsto} are such that the quotient $\|\mathbf{f}\|/\nu^2$ is sufficiently small, then there is a unique weak solution $\mathbf{u}$ in $\mathbb{D}$, and
$$ \|\mathbf{u}-\mathbf{v}\|^2_{H^1_0(\Omega; \mathbb{R}^N)}\le CE(\mathbf{v}) $$
provided $E(\mathbf{v})$ is sufficiently small. \end{theorem}
\end{document}
\begin{document} \title{Depth of characters of curve complements and orbifold pencils} \author{E. Artal Bartolo, J.I.~Cogolludo-Agust{\'\i}n and A.~Libgober} \address{Departamento de Matem\'aticas, IUMA\\ Universidad de Zaragoza\\ C.~Pedro Cerbuna 12\\ 50009 Zaragoza, Spain} \email{[email protected],[email protected]} \address{ Department of Mathematics\\ University of Illinois\\ 851 S.~Morgan Str.\\ Chicago, IL 60607} \email{[email protected]} \thanks{Partially supported by the Spanish Ministry of Education MTM2010-21740-C02-02. The third named author was also partially supported by an NSF grant.} \subjclass[2000]{14H30, 14J30, 14H50, 11G05, 57M12, 14H52} \begin{abstract} The present work is a user's guide to the results of~\cite{acl-depth}, where a description of the space of characters of a quasi-projective variety was given in terms of global quotient orbifold pencils. Below we consider the case of plane curve complements and hyperplane arrangements. In particular, an infinite family of curves exhibiting characters of any torsion and depth~3 will be discussed. Also, in the context of line arrangements, it will be shown how geometric tools, such as the existence of orbifold pencils, can replace the group theoretical computations via fundamental groups when studying characters of finite order, specially order two. Finally, we revisit an Alexander-equivalent Zariski pair considered in the literature and show how the existence of such pencils distinguishes both curves. \end{abstract} \maketitle \section{Introduction} Let $\mathcal X$ be the complement of a reduced (possibly reducible) projective curve $\mathcal D$ in the complex projective plane $\mathbb P^2$. The space of characters of the fundamental group $\Char(\mathcal X)=\text{\rm Hom}(\pi_1(\mathcal X),\mathbb C^*)$ has an interesting stratification by subspaces, given by the cohomology of the rank one local system associated with the character: \begin{equation}\label{jump} V_k(\mathcal X):=\{\chi \mid {\rm dim} H^1(\mathcal X,\chi) = k \}. \end{equation} The closures of these jumping loci in $\Char(\mathcal X)$ were called in~\cite{charvar} the characteristic varieties of $\mathcal X$. More precisely, the characteristic varieties associated to $\mathcal X$ were defined in~\cite{charvar} as the zero sets of Fitting ideals of the $\mathbb C[\pi_1/\pi_1']$-module which is the complexification $\pi_1'/\pi_1'' \otimes \mathbb C$ of the abelianized commutator of the fundamental group $\pi_1(\mathcal X)$ (cf. section~\ref{prelim} for more details). In particular the characteristic varieties (unlike the jumping sets of the cohomology dimension greater than one) depend only on the fundamental group. Fox calculus provides an effective method for calculating the characteristic varieties if a presentation of the fundamental group by generators and relators is known. For each character $\chi \in \text{\rm Hom}(\pi_1(\mathcal X),\mathbb C^*)$ the \emph{depth} was defined in~\cite{charvar} as \begin{equation} d(\chi):={\rm dim} H^1(\mathcal X,\chi) \end{equation} so that the strata~\eqref{jump} are the sets on which $d(\chi)$ is constant. In~\cite{acl-depth}, we describe a geometric interpretation of the depth by relating it to the pencils on $\mathcal X$ i.e. holomorphic maps $\mathcal X \rightarrow C, \ {\rm dim} C=1$ having multiple fibers. In fact the discussion in~\cite{acl-depth} is in a more general context in which $\mathcal X$ is a smooth quasi-projective variety. 
\footnote{Much of the discussion in the first two sections below applies to general quasi-projective varieties (cf.~\cite{acl-depth}), but in the present paper we will stay in the hypersurface complement context. Note that the characteristic varieties only depend on the fundamental group, hence, by the Lefschetz-type theorems it is enough to consider the curve complement class.} The viewpoint of~\cite{acl-depth} (and~\cite{acm-charvar-orbifolds}) is that such a pencil can be considered as a map in the category of orbifolds. The orbifold structure of the curve $C$ is matched by the structure of multiple fibers of the pencil. The main result of \cite{acl-depth} can be stated as follows:
\begin{theorem} \label{thm-main} Let $(\mathcal X,\chi)$ be a pair consisting of a smooth quasi-projective manifold $\mathcal X$, whose smooth compactification $\bar \mathcal X$ satisfies $H^1(\bar \mathcal X,\mathbb C)=0$, and let $\chi$ be a character of the fundamental group of $\mathcal X$. Assume that the depth of $\chi$ is positive. Then there is a (possibly non-compact) orbifold curve $\mathcal C$, a character $\rho$ of its orbifold fundamental group $\pi_1^{orb}(\mathcal C)$ and $n$ strongly independent orbifold pencils $f_i: \mathcal X \rightarrow \mathcal C$ such that $f_i^*(\rho)=\chi$. Moreover,
\begin{enumerate}
\enet{\rm(\arabic{enumi})}
\item\label{thm-main-part1} $d(\chi) \ge n d(\rho)$.
\item\label{thm-main-part2} If $\mathcal C$ is a global quotient orbifold of a one-dimensional algebraic group, then the inequality above is in fact an equality, $d(\chi) = n d(\rho)$.
\end{enumerate}
Vice versa, given an orbifold pencil $f: \mathcal X \rightarrow \mathcal C$ and a character $\rho \in {\rm Char}\pi_1^{orb}(\mathcal C)$ with a positive depth, the character $\chi=f^*(\rho)$ has a positive depth, at least as large as $d(\rho)$.
\end{theorem}
We refer to section~\ref{prelim} for all the required definitions, and in particular, the definition of strongly independent pencils. According to this result, given a character $\chi$ with a positive depth, one automatically has an orbifold pencil, with the depth $d(\chi)$ bounded below by the geometric data, as specified in Theorem~\ref{thm-main}\ref{thm-main-part1}. One can compare this statement with the numerous previous results on the existence of pencils on quasi-projective manifolds, which mostly guarantee the existence of ordinary pencils corresponding to the components of~\eqref{jump} having a positive dimension (cf.~\cite{simp,arapura} and references therein). For example, if $\chi$ has a positive depth and an infinite order, then it must belong to a component of the characteristic variety of positive dimension, since the isolated points of the characteristic variety have finite order by~\cite{nonvanishing}. Hence the results of~\cite{arapura} can be applied to such a component to obtain a pencil $f: \mathcal X \rightarrow \mathcal C$ and a character $\rho \in {\rm Char}\pi_1(\mathcal C)$ such that $\chi=f^*(\rho)$. Here $\mathcal C$ is the complement in $\mathbb P^1$ of a finite set of, say, $d>2$ points. Moreover, the number of independent pencils in the sense of section~\ref{prelim} is equal to one (cf. Remark~\ref{rem-non-orbifold}) and the depth of $\rho \in {\rm Char}\pi_1(\mathcal C)$ is equal to $d-2$. Since it was shown in \cite{arapura} that ${\rm dim} H^1(\mathcal X,\chi) \ge {\rm dim} H^1(\mathcal C,\rho)$, the inequality in Theorem~\ref{thm-main}\ref{thm-main-part1} follows from the latter.
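For the reader's convenience, let us recall the standard computation behind the equality $d(\rho)=d-2$ quoted above: since $\mathcal C$ is an open curve and $\rho$ is non-trivial (as $\chi=f^*(\rho)$ has infinite order), one has $H^0(\mathcal C,\rho)=H^2(\mathcal C,\rho)=0$, and hence
$$
{\rm dim} H^1(\mathcal C,\rho)=-e(\mathcal C)=d-2,
$$
where $e(\mathcal C)=2-d$ is the topological Euler characteristic of the complement of $d$ points in $\mathbb P^1$.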
Note that the orbifold pencils claimed by Theorem~\ref{thm-main}, considered \emph{without the orbifold structure}, are just rational pencils, and the connection with the jumping loci disappears. The goal of this paper is to illustrate in detail both parts~\ref{thm-main-part1} and~\ref{thm-main-part2} of Theorem~\ref{thm-main} with examples in which orbifolds are unavoidable. We start with a section reviewing mainly known results on the cohomology of local systems, characteristic varieties, orbifolds, and Zariski pairs, making it possible to read the rest of the paper, unless one is interested in the proofs of the mentioned results. Then, firstly, in section~\ref{sec-fermat}, a family of curves is considered for which the characteristic variety contains isolated characters of any prescribed torsion order and depth~3. The calculations illustrate the use of Fox calculus for finding an explicit description of the characteristic varieties. Secondly, in the context of line arrangements, examples of Ceva and {augmented Ceva}\ arrangements are considered in section~\ref{sec-ceva}. Their characteristic varieties have been studied in the literature via computer-aided calculations based on fundamental group presentations and Fox calculus. Here we present an alternative way to study such varieties, independent of the fundamental group, illustrating the geometric approach of Theorem~\ref{thm-main}. Finally, in section~\ref{sec-zariski-pair} we discuss a Zariski pair of sextic curves whose Alexander polynomials coincide. We distinguish the curves of this Zariski pair by the existence of orbifold pencils.
\subsection*{Acknowledgments}
The authors want to express thanks to the organizers of the \emph{Intensive research period on Configuration Spaces: Geometry, Combinatorics and Topology,} at the \emph{Centro di Ricerca Matematica Ennio De Giorgi}, Pisa, in May 2010 for their hospitality and excellent working atmosphere. The second and third named authors want to thank the University of Illinois and the University of Zaragoza, respectively, where part of the work on the material of this paper was done during their visits in Spring and Summer 2011.
\section{Preliminaries}\label{prelim} \label{sec-preliminaries}
In this section the necessary definitions used in Theorem~\ref{thm-main} will be reviewed, together with material on characteristic varieties and Zariski pairs, with the aim of keeping the discussion of the upcoming sections reasonably self-contained.
\subsection{Characteristic varieties} \label{libg-var} \mbox{}
Characteristic varieties first appeared in the literature in the context of algebraic curves in~\cite{Libgober-homology}. They can be defined as follows. Let $\mathcal D:=\mathcal D_1\cup\dots \cup \mathcal D_r$ be the decomposition of a reduced curve $\mathcal D$ into irreducible components and let $d_i:={\rm \,deg\, } \mathcal D_i$ denote the degrees of the components $\mathcal D_i$. Let $\tau:=\gcd(d_1,\dots,d_r)$ and $\mathcal X=\mathbb P^2 \setminus\mathcal D$. Then (cf.~\cite{charvar})
\begin{equation} \label{eq-h1} H_1(\mathcal X;\mathbb Z)={\left\langle \bigoplus_{i=1}^r \gamma_i \mathbb Z \right\rangle}/ {\langle d_1 \gamma_1+\dots+d_r \gamma_r\rangle}\approx \mathbb Z^{r-1}\oplus {\mathbb Z}/{\tau \mathbb Z}, \end{equation}
where $\gamma_i$ is the homology class of a meridian of $\mathcal D_i$ (i.e. the boundary of a small disk transversal to $\mathcal D_i$ at a smooth point). Let $\ab:G:=\pi_1(\mathcal X)\to H_1(\mathcal X;\mathbb Z)$ be the abelianization epimorphism. The kernel $G'$ of $\ab$, i.e.
the commutator of $G$, defines the universal Abelian covering of $\mathcal X$, say $\mathcal X_{\ab} \rightmap{\pi} \mathcal X$, whose group of deck transformations is $H_1(\mathcal X;\mathbb Z)=G/G'$. This group of deck transformations, since it acts on $\mathcal X_{\ab}$, also acts on $H_1(\mathcal X_{\ab};\mathbb Z)=G'/G''$. \footnote{this action corresponds to the action of $G/G'$ on $G'/G''$ by conjugation} This allows to endow $M_{\mathcal D,\ab}:=H_1(\mathcal X_{\ab};\mathbb Z)\otimes \mathbb C$ (as well as $\tilde M_{\mathcal D,\ab}:=H_1(\mathcal X_{\ab},\pi^{-1}(*);\mathbb Z)\otimes \mathbb C$) with a structure of $\Lambda_\mathcal D$-module where \begin{equation} \label{eq-ring} \Lambda_\mathcal D:=\mathbb C[G/G']\approx \mathbb C[t_1^{\pm 1},\dots,t_r^{\pm 1}]/(t_1^{d_1}\cdot\ldots\cdot t_r^{d_r}-1). \end{equation} Note that $Spec \Lambda_\mathcal D$ can be identified with the commutative affine algebraic group ${\rm Char}\pi_1(X)$ having $\tau$ tori ${(\mathbb C^*)}^{r-1}$ as connected components. Indeed, the elements of $\Lambda_{\mathcal D}$ can be viewed as the functions on the group of characters of $G$. Since $G$ is a finitely generated group, the module $M_{\mathcal D,\ab}$ (resp. $\tilde M_{\mathcal D,\ab}$) is a finitely generated $\Lambda_\mathcal D$-module: \footnote{in most interesting examples with non-cyclic $G/G'$ the group $G'/G''$ is infinitely generated.} in fact one can construct a presentation of $M_{\mathcal D,\ab}$ (resp. $\tilde M_{\mathcal D,\ab}$) with the number of $\Lambda_{\mathcal D}$-generators at most $n \choose 2$ (resp. $n$), where $n$ is the number of generators of $G$. If $G/G'$ is not cyclic (i.e. $r>2$ or $r\geq 2$ and $\tau>1$) then $\Lambda_{\mathcal D}$ is not a Principal Ideal Domain. One way to approach the $\Lambda_{\mathcal D}$-module structure of both $M_{\mathcal D,\ab}$ and $\tilde M_{\mathcal D,\ab}$ is to study their Fitting ideals (cf.~\cite{eisenbud}). Let us briefly recall the relevant definitions. Let $R$ be a commutative Noetherian ring with unity and $M$ a finitely generated $R$-module. Choose a finite free presentation for $M$, say $\phi: R^m \rightarrow R^n$, where $M=\text{coker } \phi.$ The homomorphism $\phi$ has an associated $(n \times m)$ matrix $A_{\phi}$ with coefficients in $R$ such that $\phi(x)=A_{\phi} x$ (the vectors below are represented as the column matrices). \begin{dfn} \label{def-fitting} The $k$-th \emph{Fitting ideal} $F_k(M)$ of $M$ is defined as the ideal generated by $$\begin{cases} 0 & \text{if } k \leq \max\{0,n-m\} \\ 1 & \text{if } k > n \\ \text{minors of } A_{\phi} \text{ of order } (n-k+1) & \text{otherwise.} \end{cases} $$ It will be denoted $F_k$ if no ambiguity seems likely to arise. \end{dfn} \begin{dfn}{\cite{Libgober-homology}} \label{def-charvar} With the above notations the \emph{$k$-th characteristic variety} ($k>0$) of $\mathcal X=\mathbb P^2\setminus \mathcal D$ can be defined as the zero-set of the ideal $F_k(M_{\mathcal D,\ab})$ $$ \Char_k(\mathcal X):=Z(F_k(M_{\mathcal D,\ab}))\subset{\rm Spec}\Lambda_{\mathcal D}={\rm Char}\pi_1(\mathbb P^2\setminus \mathcal D). $$ Then $V_k(\mathcal X)$ is the set of characters in $\Char_k(\mathcal X)$ which do not belong to $\Char_j(\mathcal X)$ for $j>k$. If a character $\chi$ belongs to $V_k(\mathcal X)$ then $k$ is called the {\it depth} of $\chi$ and denoted by $d(\chi)$ (cf. \cite{charvar}). An alternative notation for $\Char_k(\mathbb P^2 \setminus \mathcal D)$ is $\Char_{k,\mathbb P}(\mathcal D)$. 
\end{dfn} \begin{remark} Essentially without loss of generality one can consider only the cases when the quotient by an ideal in the definition of the ring $\Lambda_\mathcal D$ in~\eqref{eq-ring} is absent, i.e. consider only modules over the ring of Laurent polynomials. Indeed, consider a line $L$ not contained in $\mathcal D$ and in general position (i.e. which does not contain singularities of $\mathcal D$ and is transversal to it). Then $\Lambda_{L\cup \mathcal D}$ is isomorphic to $\mathbb C[t_1^{\pm 1},\dots,t_r^{\pm 1}]$. Moreover, since we assume the transversality $L\pitchfork\mathcal D$, the $\Lambda_{L\cup \mathcal D}$-module $M_{L\cup \mathcal D,\ab}$ does not depend on $L$ (see for instance~\cite[Proposition~1.16]{ji-zariski}). The characteristic variety $\Char_{k,\mathbb P}(L\cup \mathcal D)$ determines $\Char_{k,\mathbb P}(\mathcal D)$ (cf. \cite{charvar,ji-zariski}). By abuse of language it is called the \emph{$k$-th affine characteristic variety} and denoted simply by $\Char_{k}(\mathcal D)$. One can also use the module $\tilde M_{\mathcal D,\ab}$ to obtain the characteristic varieties of $\mathcal D$. One has the following connection: $$ \Char_k(\mathcal X)\setminus \bar 1=Z(F_{k+1}(\tilde M_{\mathcal D,\ab}))\setminus \bar 1, $$ where $\bar 1$ denotes the trivial character. \end{remark} \begin{remark} The depth of a character appears in explicit formulas for the first Betti number of cyclic and abelian unbranched and branched covering spaces (cf. \cite{Libgober-homology,eko,sakuma}). \end{remark} \begin{remark}\label{charvargroups} One can also define the $k$-th characteristic variety $\Char_k(G)$ of any finitely generated group $G$ (such that the abelianization $G/G' \ne 0$ or, more generally, for a surjection $G \rightarrow A$ where $A$ is an abelian group) as the $k$-th characteristic variety of the $\Lambda_G=\mathbb C[G/G']$-module $M_G=H_1(\mathcal X_{G,\ab})$ obtained by considering the CW-complex $\mathcal X_G$ associated with a presentation of $G$ and its universal abelian covering space $\mathcal X_{G,\ab}$ (respectively, considering the covering space of $\mathcal X_G$ associated with the kernel of the map to $A$). Such an invariant is independent of the finite presentation of $G$ (resp. depends only on $G \rightarrow A$). This construction will be applied below to the orbifold fundamental groups of one-dimensional orbifolds. \end{remark} \begin{remark} Note that one has: \begin{itemize} \item $\Char_k(\mathcal D)=\Supp_{\Lambda_\mathcal D} \wedge^k(H_1(\mathcal X_{\ab};\mathbb C))$, \item $\Spec\Lambda_{L\cup \mathcal D}=\mathbb T^r=(\mathbb C^*)^{r}$, for the affine case, and \item $\Spec\Lambda_\mathcal D=\mathbb T_\mathcal D=\{\omega^i\}_{i=0}^{\tau-1}\times (\mathbb C^*)^{r-1}= V(t_1^{d_1}\cdot\ldots\cdot t_r^{d_r}-1)\subset\mathbb T^r$, where $\omega$ is a primitive $\tau$-th root of unity, for curves in the projective plane. \end{itemize} Note also that in the case of a finitely presented group $G$ such that $G/G'=\mathbb Z^r\oplus \mathbb Z/\tau_1\mathbb Z\oplus\dots\oplus \mathbb Z/\tau_s\mathbb Z$ one has \begin{equation} {\rm Spec}{\Lambda_G}=\mathbb T_G=\{(\omega^{i_1}_1,\dots,\omega^{i_s}_s)\mid i_k=0,\dots,\tau_k-1,\ k=1,\dots,s\} \times (\mathbb C^*)^r, \end{equation} where as above $\Lambda_G=\mathbb C[G/G']$ and $\omega_i$ is a primitive $\tau_i$-th root of unity. \end{remark} Let $\mathcal X$ be a smooth quasi-projective variety such that for its smooth compactification $\bar{\mathcal X}$ one has $H^1(\bar{\mathcal X},\mathbb C)=0$.
This of course includes the case $\mathcal X=\mathbb P^2\setminus \mathcal D$. The structure of the closures of the strata $V_k(\mathcal X)$ is given by the following fundamental result. \begin{theorem}[\cite{arapura}] \label{arapurath} The closure of each $V_k(\mathcal X)$ is a finite union of cosets of subgroups of $\Char(\pi_1(\mathcal X))$. Moreover, for each irreducible component $W$ of $V_k(\mathcal X)$ of positive dimension there is a pencil $f: \mathcal X \rightarrow C$, where $C$ is $\mathbb P^1$ with a finite set of points deleted, and a torsion character $\chi \in \Char_k(\mathcal X)$ such that $W=\chi f^*H^1(C,\mathbb C^*)$. \end{theorem} \subsection{Essential Coordinate Components} \mbox{} Let $\mathcal D'\subsetneq\mathcal D$ be a curve whose components form a subset of the set of components of $\mathcal D$. There is a natural epimorphism $\pi_1(\mathbb P^2\setminus\mathcal D)\twoheadrightarrow\pi_1(\mathbb P^2\setminus\mathcal D')$ induced by the inclusion. This surjection induces a natural inclusion ${\rm Spec}\Lambda_{\mathcal D'}\subset {\rm Spec}\Lambda_{\mathcal D}$. With the identification of the generators of $\Lambda_{\mathcal D}$ with the components of $\mathcal D$ as above, this embedding is obtained by assigning~$1$ to the coordinates corresponding to those irreducible components of $\mathcal D$ which are not in $\mathcal D'$ (cf.~\cite{charvar}). The embedding ${\rm Spec}\Lambda_{\mathcal D'}\subset {\rm Spec}\Lambda_{\mathcal D}$ induces the embedding $\Char_k(\mathcal D')\subset\Char_k(\mathcal D)$ (cf. \cite{charvar}); any irreducible component of $V_k(\mathcal D')$ is the intersection of an irreducible component of $V_k(\mathcal D)$ with ${\rm Spec}\Lambda_{\mathcal D'}$. \begin{dfn} Irreducible components of $V_k(\mathcal D)$ contained in ${\rm Spec}\Lambda_{\mathcal D'}$ for some curve $\mathcal D' \subset \mathcal D$ are called \emph{coordinate components} of $V_k(\mathcal D)$. If an irreducible coordinate component~$V$ of $V_k(\mathcal D')$ is also an irreducible component of $V_k(\mathcal D)$, then $V$ is called a \emph{non-essential coordinate component}; otherwise it is called an \emph{essential coordinate component}. \end{dfn} See~\cite{ArtalCogolludo} for examples. A detailed discussion of further examples is given in sections~\ref{sec-fermat},~\ref{sec-ceva}, and~\ref{sec-zariski-pair}. As shown in~\cite[Lemma~1.4.3]{charvar} (see also~\cite[Proposition~3.12]{dimca-pencils}), essential coordinate components must be zero-dimensional. \subsection{Alexander Invariant} \label{sec-alex-inv} In section~\ref{libg-var} the characteristic varieties of a finitely presented group $G$ are defined as the zeroes of the Fitting ideals of the module $M:=G'/G''$ over $G/G'$. This module is referred to in the literature as the \emph{Alexander invariant} of $G$. Note, however, that this is not the module presented by the matrix of Fox derivatives, which is called the \emph{Alexander module} of $G$. Our purpose in this section is to briefly describe the Alexander invariant for fundamental groups of complements of plane curves and give a method to obtain a presentation of such a module from a presentation of $G$. In order to do so, consider $G:=\pi_1(\mathbb P^2\setminus \mathcal D)$, the fundamental group of the complement of the curve $\mathcal D$.
Without loss of generality one may assume that \begin{enumerate} \item[$(Z1)$] $G/G'$ is a free abelian group of rank $r$ generated by the meridians $\gamma_1, \gamma_2, ..., \gamma_r$. \end{enumerate} Then one has the following. \begin{lemma}[{\cite[Proposition~2.3]{ji-rybnikov}}] Any group $G$ as above satisfying $(Z1)$ admits a presentation \begin{equation} \label{eq-zar-pres} \left\langle x_1,...,x_r,y_1,...,y_s : R_1(\bar x,\bar y)=...=R_m(\bar x,\bar y)=1 \right\rangle, \end{equation} where $\bar x:=\{x_1,...,x_r\}$ and $\bar y:=\{y_1,...,y_s\}$ satisfy: \begin{enumerate} \item[$(Z2)$] $\ab (x_i)=\gamma_i$, $\ab (y_j)=0$, and $R_k$ can be written in terms of $\bar y$ and elements of the form $x_k [x_i,x_j] x_k^{-1}$, where $[x_i,x_j]$ is the commutator of $x_i$ and $x_j$. \end{enumerate} \end{lemma} A presentation satisfying $(Z2)$ is called a \emph{Zariski presentation} of~$G$. From now on we will assume $G$ admits a Zariski presentation as in~\eqref{eq-zar-pres}. In order to describe elements of the module $M$ it is sometimes convenient to see $\mathbb Z[G/G']$ as the ring of Laurent polynomials in $r$ variables $\mathbb Z[t_1^{\pm 1},...,t_r^{\pm 1}]$, where $t_i$ represents the multiplicative action induced by $\gamma_i$ on $M$, that is, \begin{equation} \label{eq-action-M} t_i g\ \eqmap{M}\ x_i g x_i^{-1} \end{equation} for any $g\in G'$. \begin{remark} \mbox{} \begin{enumerate} \item One of course needs to convince oneself that the action~\eqref{eq-action-M} is independent, up to an element of $G''$, of the representative $x_i$ as long as $\ab (x_i)=\gamma_i$. This is an easy exercise. \item We denote by ``$\eqmap{M}$'' equalities that are valid in $M$. \end{enumerate} \end{remark} \begin{example} \label{exam-rel} Note that \begin{equation} \label{eq-comm-M} [xy,z]\ \eqmap{M}\ [x,z]+t_x[y,z], \end{equation} where $x$, $y$, and $z$ are elements of $G$ and $t_x$ denotes $\ab (x)$ in the multiplicative group. This is a consequence of the following: $$ [xy,z]=xyzy^{-1}x^{-1}z^{-1}=x(yzy^{-1}z^{-1})x^{-1} xzx^{-1}z^{-1}\ \eqmap{M}\ t_x[y,z]+[x,z]. $$ As a useful application of~\eqref{eq-comm-M} one can check that \begin{equation} \label{eq-comm-M2} [x^y,z]\ \eqmap{M}\ [x,z]+(t_z-1)[y,x], \end{equation} where $x^y:=yxy^{-1}$. \end{example} Note that $x_{ij}:=[x_i,x_j]$, $1\leq i<j \leq r$, and $y_k$, $k=1,...,s$, are elements of $G'$, since $\ab (x_{ij})=\ab (y_k)=0$. Therefore \begin{equation} \label{eq-xijk} x_k [x_i,x_j] x_k^{-1}\ \eqmap{M}\ t_k x_{ij} \end{equation} (see~\eqref{eq-action-M} and~$(Z2)$). Moreover, \begin{prop} \label{prop-M1} For a group $G$ as above, the module $M$ is generated by $\bar x_{ij}:=\{x_{ij}\}_{1\leq i<j \leq r}$ and $\bar y:=\{y_k\}_{k=1,...,s}$. \end{prop} \begin{example} \label{exam-rel2} The module $M$ is not freely generated by the set mentioned above. For instance, note that according to~$(Z2)$ and~\eqref{eq-xijk} any relation in $G$, say $R_i(\bar x,\bar y)=1$ (as in~\eqref{eq-zar-pres}), can be written (in $M$) in terms of $\bar x_{ij}$ and $\bar y$ as $\mathcal R_i(\bar x_{ij},\bar y)$. In other words, $\mathcal R_i(\bar x_{ij},\bar y)=0$ is a relation in $M$. \end{example} \begin{example} \label{exam-rel3} Even if $G$ were to be the free group $\mathbb F_r$, $M$ would not be freely generated by $\bar x_{ij}$ and $\bar y$. In fact, \begin{equation} \label{eq-J} J(x,y,z):=(t_x-1)[y,z]+(t_y-1)[z,x]+(t_z-1)[x,y]\ \eqmap{M}\ 0 \end{equation} for any $x$, $y$, $z$ in $G$.
Using Example~\ref{exam-rel} repeatedly, one can check the following \begin{equation} \label{eq-J2} [xy,z] = \left\{ \array{l} \eqmap{M}\ [x,z]+t_x[y,z]\\ = [y^{x^{-1}}x,z]\ \eqmap{M}\ [y^{x^{-1}},z]+ t_y [x,z]\ \eqmap{M}\ [y,z]-(t_z-1)[x,y]+t_y [x,z],\\ \endarray \right. \end{equation} where $a^b=bab^{-1}$. The difference between both equalities results in $J(x,y,z)=0$. Such relations will be referred to as \emph{Jacobian relations} of $M$. \end{example} A combination of Examples~\ref{exam-rel2} and~\ref{exam-rel3} gives in fact a presentation of $M$. \begin{prop}[{\cite[Proposition~2.39]{ji-zariski}}] \label{prop-M2} The set of relations $\mathcal R_1$,...,$\mathcal R_m$ as described in Example~\ref{exam-rel2} and $J(i,j,k)=J(x_i,x_j,x_k)$ as described in Example~\ref{exam-rel3} is a complete system of relations for $M$. \end{prop} \begin{example} Let $G=\mathbb F_r$ be the free group in $r$ generators, for instance, the fundamental group of the complement to the union of $r+1$ concurrent lines. According to Propositions~\ref{prop-M1} and~\ref{prop-M2}, $M$ has a presentation matrix $A_r$ of size ${{r}\choose {3}} \times {{r}\choose {2}}$ whose columns correspond to the generators $x_{ij}=[x_i,x_j]$ and whose rows correspond to the coefficients of the Jacobian relations $J(i,j,k)$, $1\leq i<j<k \leq r$. For instance, if $r=4$ $$ A_4:= \left[ \begin{matrix} (t_3-1) & -(t_2-1) & 0 & (t_1-1) & 0 & 0\\ (t_4-1) & 0 & -(t_2-1) & 0 & (t_1-1) & 0\\ 0 & (t_4-1) & -(t_3-1) & 0 & 0 & (t_1-1)\\ 0 & 0 & 0 & (t_4-1) & -(t_3-1) & (t_2-1)\\ \end{matrix} \right]. $$ Such matrices have rank ${{r-1}\choose {2}}$ if $t_i\neq 1$ for all $i=1,...,r$, and hence the depth of a non-coordinate character is $r-1$. On the other hand, for the trivial character $\bar 1$, the matrix $A_n$ has rank~0 and hence $\bar 1$ has depth ${{r}\choose {2}}$ (see Definitions~\ref{def-fitting} and~\ref{def-charvar} for details on the connection between the rank of $A_n$ and the depth of a character). \end{example} \subsection{Orbicurves} As a general reference for orbifolds and orbifold fundamental groups one can use ~\cite{adem}, see also~\cite{friedman,scott}. A brief description of what will be used here follows. \begin{dfn} \label{def-global} An \emph{orbicurve} is a complex orbifold of dimension equal to one. An orbicurve $\mathcal C$ is called a \emph{global quotient} if there exists a finite group $G$ acting effectively on a Riemann surface $C$ such that $\mathcal C$ is the quotient of $C$ by $G$ with the orbifold structure given by the stabilizers of the $G$-action on~$C$. \end{dfn} We may think of $\mathcal C$ as a Riemann surface with a finite number of points $R:=\{P_1,...,P_s\}\subset \mathcal C$ labeled with positive integers $\{m(P_1),...,m(P_s)\}$ (for global quotients those are the orders of stabilizers of action of $G$ on $C$). A neighborhood of a point $P\in \mathcal C$ with $m(P)>0$ is the quotient of a disk (centered at~$P$) by an action of the cyclic group of order $m(P)$ (a rotation). A small loop around $P$ is considered to be trivial in $\mathcal C$ if its lifting in the above quotient map bounds a disk. Following this idea, orbifold fundamental groups can be defined as follows. 
\begin{dfn}\label{dfn-group-orb}(cf.~\cite{adem,scott,friedman}) Consider an orbifold $\mathcal C$ as above, then the \emph{orbifold fundamental group} of $\mathcal C$ is $$ \pi_1^\text{\rm orb}(\mathcal C):=\pi_1(\mathcal C\setminus\{P_1,\dots,P_s\})/\langle\mu_j^{m_j}=1\rangle $$ where $\mu_j$ is a meridian of $P_j$ and $m_j:=m(P_j)$. \end{dfn} According to Remark~\ref{charvargroups} the Definition~\ref{def-charvar} can be applied to the case of finitely generated groups. In particular one defines the $k$-th characteristic variety $\Char_k(\mathcal C)$ of an orbicurve $\mathcal C$ as $\Char_k(\pi_1^\text{\rm orb})$. Therefore also the concepts of a character $\chi$ on $\mathcal C$ and its depth are well defined. \begin{example} \label{exam-orbicurve} Let us denote by $\mathbb P^1_{m_1,...,m_s,k \infty}$ an orbicurve for which the underlying Riemann surface is $\mathbb P^1$ with $k$ points removed and $s$ labeled points with labels $m_1,...,m_s$. If $k\geq 1$ (resp. $k\geq 2$) we also use the notation $\mathbb C_{m_1,...,m_s,(k-1) \infty}$ (resp. $\mathbb C^*_{m_1,...,m_s,(k-2) \infty}$) for $\mathbb P^1_{m_1,...,m_s,k \infty}$. We suppress specification of actual points on $\mathbb P^1$. Note that $$\pi_1^\text{\rm orb}(\mathbb P^1_{m_1,...,m_s,k \infty})= \begin{cases} \mathbb Z_{m_1}(\mu_1)*...*\mathbb Z_{m_s}(\mu_s)*\mathbb Z*\dotsmap{k-1} *\mathbb Z & \text{if\ } k>0\\ {\mathbb Z_{m_1}(\mu_1)*...*\mathbb Z_{m_{s}}(\mu_{s}})/{\prod \mu_i} & \text{if\ } k=0\\ \end{cases} $$ (here $\mathbb Z_m(\mu)$ denotes a cyclic group of order $m$ with a generator $\mu$). Note that a global quotient orbifold of $\mathbb P^1\setminus \{nk \text{\ points}\}$ by the cyclic action of order $n$ on $\mathbb P^1$ that fixes two points, that is, $[x:y]\mapsto [\xi_nx:y]$ (which fixes $[0:1]$ and $[1:0]$) is $\mathbb P^1_{n,n,k \infty}$. Interesting examples of elliptic global quotients occur for $\mathbb P^1_{2,3,6,k \infty}$, $\mathbb P^1_{3,3,3,k \infty}$, and $\mathbb P^1_{2,4,4,k \infty}$, which are global orbifolds of elliptic curves $E\setminus \{6 k \text{\ points}\}$, $E\setminus \{3 k \text{\ points}\}$, and $E\setminus \{4 k \text{\ points}\}$ respectively, see~\cite{mordweil} for a study of the relationship between these orbifolds ($k=0$) and the depth of characters of fundamental groups of the complements to plane singular curves. \end{example} \begin{dfn} A \emph{marking} on an orbicurve $\mathcal C$ (resp. a quasiprojective variety $\mathcal X$) is a non-trivial character of its orbifold fundamental group (resp. its fundamental group) of positive depth $k$, that is, an element of $\text{\rm Hom}(\pi_1^\text{\rm orb}(\mathcal C),\mathbb C^*)$ (resp. $\text{\rm Hom}(\pi_1(\mathcal X),\mathbb C^*)$) which is in $V_k(\mathcal C)$ (resp. $V_k(\mathcal X)$). A \emph{marked orbicurve} is a pair $(\mathcal C,\rho)$, where $\mathcal C$ is an orbicurve and $\rho$ is a marking on $\mathcal C$. Analogously, one defines a \emph{marked quasi-projective manifold} as a pair $(\mathcal X,\chi)$ consisting of a quasi-projective manifold $\mathcal X$ and a marking on it. A marked orbicurve $(\mathcal C,\rho)$ is \emph{a global quotient} if $\mathcal C$ is a global quotient of $C$, where $C$ is a branched cover of $\mathcal C$ associated with the unbranched cover of $\mathcal C\setminus \{P_1,...,P_s\}$ corresponding to the kernel of $\pi_1(\mathcal C\setminus \{P_1,...,P_s\})\to \pi_1^\text{\rm orb} (\mathcal C) \rightmap{\rho} \mathbb C^*$. 
In other words, the covering space in Definition~\ref{def-global} corresponds to the kernel of $\rho$. \end{dfn} \subsection{Orbifold pencils on quasi-projective manifolds}\label{quasiprojpencils} \begin{dfn} Let $\mathcal X$ be a quasi-projective variety, $C$ be a quasi-projective curve, and $\mathcal C$ an orbicurve which is a global quotient of $C$. A \emph{global quotient orbifold pencil} is a map $\phi: \mathcal X \rightarrow \mathcal C$ such that there exists $\Phi: X_G \rightarrow C$ where $X_G$ is a quasi-projective manifold endowed with an action of the group $G$ making the following diagram commute: \begin{equation}\label{markedpencildef} \begin{matrix} X_G & \buildrel \Phi \over \rightarrow & C \cr \downarrow & & \downarrow \cr \mathcal X & \buildrel \phi \over \rightarrow & \mathcal C \cr \end{matrix} \end{equation} The vertical arrows in \eqref{markedpencildef} are the quotients by the action of $G$. If, in addition, $(\mathcal X,\chi)$ and $(\mathcal C,\rho)$ are marked, then the global quotient orbifold pencil $\phi: \mathcal X \rightarrow \mathcal C$ called \emph{marked} if $\chi=\phi^*(\rho)$. We will refer to the map of pairs $\phi:(\mathcal X,\chi)\to (\mathcal C,\rho)$ as a \emph{marked global quotient orbifold pencil} on $(\mathcal X,\chi)$ with target~$(\mathcal C,\rho)$. \end{dfn} \begin{dfn} \label{def-indep} Global quotient orbifold pencils $\phi_i:(\mathcal X,\chi) \rightarrow (\mathcal C,\rho)$, $i=1,...,n$ are called \emph{independent} if the induced maps $\Phi_i: X_G \rightarrow C$ define $\mathbb Z[G]$-independent morphisms of modules \begin{equation} {\Phi_i}_*: H_1(X_G,\mathbb Z) \rightarrow H_1(C,\mathbb Z), \end{equation} that is, independent elements of the $\mathbb Z[G]$-module $\text{\rm Hom}_{\mathbb Z[G]}(H_1(X_G,\mathbb Z),H_1(C,\mathbb Z))$. In addition, if $\bigoplus {\Phi_i}_*: H_1(X_G,\mathbb Z) \rightarrow H_1(C,\mathbb Z)^n$ is surjective we say that the pencils $\phi_i$ are \emph{strongly independent}. \end{dfn} \begin{remark} Note that if either $n=1$ or $H_1(C,\mathbb Z)=\mathbb Z[G]$, then independence is equivalent to strong independence (this is the case for Remark~\ref{rem-main-thm}\ref{rem-non-orbifold} and Theorem~\ref{thm-main}\ref{thm-main-part2}). \end{remark} \subsection{Structure of characteristic varieties (revisited)} The following are relevant improvements or additions to Theorem~\ref{arapurath}: \begin{theorem}[\cite{nonvanishing,acm-charvar-orbifolds}] The isolated zero-dimensional characters of $V_k(\mathcal D)$ are torsion characters of $\Char(\mathcal D)$. \end{theorem} In~\cite[Theorem~3.9]{dimca-pencils} (see also~\cite{Dimca-Papadima-Suciu-formality}) there is a description of one-dimensional components $\chi f^*H^1(C,\mathbb C^*)\subset \Char_k(\mathcal X)$ mentioned in Theorem~\ref{arapurath} and most importantly, of the order of $\chi$ in terms of multiple fibers of the rational pencil $f$. In~\cite{charvar}, an algebraic method is described to detect the irregularity of abelian covers of $\mathbb P^2$ ramified along $\mathcal D$. This method is very useful to compute \emph{non-coordinate components} of $V_k(\mathcal D)$ independently of a presentation of the fundamental group of the complement $\mathcal X$ of $\mathcal D$. Theorem~\ref{thm-main} (see~\cite{acl-depth}) has~\cite[Theorem~3.9]{dimca-pencils} as a consequence, but uses the point of view of orbifold pencils. 
Using this result the zero-dimensional components (in particular the essential coordinate components) can also be detected and in some cases characterized (see section~\ref{sec-ceva}). Another improvement of Theorem~\ref{arapurath} was given in~\cite{acm-charvar-orbifolds}, where the point of view of orbifolds was first introduced as follows: \begin{theorem}[\cite{acm-charvar-orbifolds}] \label{thmprin} Let $\mathcal X$ be a smooth quasi-projective variety. Let $V$ be an irreducible component of $V_k(\mathcal X)$. Then one of the following two statements holds: \begin{enumerate} \enet{\rm(\arabic{enumi})} \item\label{thmprin-orb} There exists an orbicurve $\mathcal C$, a surjective orbifold morphism $\rho:\mathcal X\to \mathcal C$ and an irreducible component $W$ of $V_k(\pi_1^{\text{\rm orb}}(\mathcal C))$ such that $V=\rho^*(W)$. \item\label{thmprin-tors} $V$ is an isolated torsion point not of type~\ref{thmprin-orb}. \end{enumerate} \end{theorem} One has the following consequences of Theorem~\ref{thm-main}\ref{thm-main-part2}, which allow us to characterize certain elements of $V_k(\mathcal D)$: \begin{cor} \label{rem-main-thm} Let $(\mathcal X,\chi)$ be a marked complement of $\mathcal D$. Then the possible targets for marked orbifold pencils are $(\mathcal C,\rho)$ with $\mathcal C=\mathbb P^1_{m_1,...,m_s,k \infty}$ (see Example~\ref{exam-orbicurve}). Assume that there are $n$ strongly independent marked orbifold pencils with such a fixed target $(\mathcal C,\rho)$. Then, \begin{enumerate} \enet{\rm(\arabic{enumi})} \item\label{rem-non-orbifold} In case $\mathcal C$ has no orbifold points, that is $s=0$, the character $\chi$ belongs to a positive-dimensional component $V$ of $\Char(\mathcal X)$ containing the trivial character. In this case, $d(\chi)= {\rm dim}\, V-1=n-2$. \item\label{rem-torsion-2} In case $\chi$ is a character of order two, there is a unique marking on $\mathcal C=\mathbb C_{2,2}$ and $d(\chi)$ is the maximal number of strongly independent orbifold pencils with target~$\mathcal C$. \item\label{rem-elliptic} In case $\chi$ has order 3, 4, or 6, there is a unique marking on $\mathcal C=\mathbb P^1_{3,3,3}$, $\mathcal C=\mathbb P^1_{2,4,4}$, or $\mathcal C=\mathbb P^1_{2,3,6}$, respectively, and $d(\chi)$ is the maximal number of strongly independent orbifold pencils with target~$\mathcal C$. \end{enumerate} \end{cor} Part~\ref{rem-non-orbifold} is a direct consequence of Theorem~\ref{arapurath} and part~\ref{rem-elliptic} had already appeared in the context of Alexander polynomials in~\cite{mordweil}. In section~\ref{sec-ceva} we will describe in detail examples of Corollary~\ref{rem-main-thm}\ref{rem-torsion-2} for line arrangements. \subsection{Zariski pairs} We will give a very brief introduction to Zariski pairs. For more details we refer to~\cite{ji-zariski} and the bibliography therein. \begin{dfn}[\cite{kike}] Two plane algebraic curves $\mathcal D$ and $\mathcal D'$ form a \emph{Zariski pair} if there are homeomorphic tubular neighborhoods of $\mathcal D$ and $\mathcal D'$, but the pairs $(\mathbb P^2,\mathcal D)$ and $(\mathbb P^2,\mathcal D')$ are not homeomorphic. \end{dfn} The first example of a Zariski pair was given by Zariski~\cite{zariski}, who showed that the fundamental group of the complement to an irreducible sextic (a curve of degree six) with six cusps on a conic is isomorphic to $\mathbb Z_2*\mathbb Z_3$, whereas the fundamental group of any other sextic with six cusps is $\mathbb Z_6$.
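In terms of the invariants discussed below, this difference is already detected by the Alexander polynomial: for the sextic with six cusps on a conic one has
$$
\Delta(t)=t^2-t+1,
$$
so the two characters of order six have positive depth, while for the other sextics with six cusps the Alexander polynomial is trivial (a classical computation going back to Zariski's study of cyclic multiple planes).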
This paved the way for intensive research aimed at understanding the connection between the topology of $(\mathbb P^2,\mathcal D)$ and the position of the singularities of~$\mathcal D$ (understood algebraically, geometrically, combinatorially, ...). This research has often been directed towards the search for finer invariants of $(\mathbb P^2,\mathcal D)$. Characteristic varieties (described above) and Alexander polynomials (i.e. the one-variable version of the characteristic varieties), twisted polynomials~\cite{twisted}, generalized Alexander polynomials~\cite{oka-alexander,mordweil}, and dihedral covers of $\mathcal D$ (\cite{hiro}), among many others, are examples of such invariants. \begin{dfn} If $\mathcal D$ and $\mathcal D'$ form a Zariski pair and their Alexander polynomials $\Delta_\mathcal D(t)$ and $\Delta_{\mathcal D'}(t)$ coincide, then we say that $\mathcal D$ and $\mathcal D'$ form an \emph{Alexander-equivalent Zariski pair}. \end{dfn} In section~\ref{sec-zariski-pair} we will use Theorem~\ref{thm-main} to give an alternative proof that the curves in~\cite{ArtalCogolludo} form an Alexander-equivalent Zariski pair, without computing the fundamental group. \section{Examples of characters of depth 3: Fermat Curves} \label{sec-fermat} Consider the following family of plane curves: $$ \array{rcl} \mathcal F_n&:=&\{f_n:= x_1^n+x_2^n-x_0^n=0\},\\ \mathcal L_1&:=&\{\ell_1:=x_0^n-x_1^n=0\},\\ \mathcal L_2&:=&\{\ell_2:=x_0^n-x_2^n=0\}. \endarray $$ We will study the characteristic varieties of the quasi-projective manifolds $\mathcal X_n:=\mathbb P^2\setminus \mathcal D_n$, where $\mathcal D_n:=\mathcal F_n\cup \mathcal L_1\cup \mathcal L_2$, in light of the results given in the previous sections; in particular, the essential torsion characters will be considered and their depth will be exhibited as the number of strongly independent orbifold pencils. \subsection{Fundamental Group} Note that $\mathcal D_n$ is nothing but the preimage under the Kummer cover $[x_0:x_1:x_2]\mapstomap{\kappa_n} [x_0^n:x_1^n:x_2^n]$ of the following arrangement of three lines in general position given by the equation $$(x_0-x_1)(x_0-x_2)(x_0-x_1-x_2)=0.$$ Such a map ramifies along $\mathcal B:=\{x_0x_1x_2=0\}$. We will compute the fundamental group of $\mathcal X_n$ as a quotient of the subgroup $K_n$ of $\pi_1(\mathbb P^2\setminus \mathcal L)$ associated with the Kummer cover, where \begin{equation} \label{eq-ceva} \mathcal L:=\{x_0x_1x_2(x_0-x_2)(x_0-x_1)(x_0-x_1-x_2)=0\} \end{equation} is a Ceva arrangement. More precisely, the quotient is obtained as a factor of $K_n$ by the normal subgroup generated by the meridians of the ramification locus $\kappa_n^{-1}(\mathcal B)$ in $\mathcal X_n$. The fundamental group of the complement to the Ceva arrangement $\mathcal L$ is given by the following presentation of~$G$: \begin{equation} \label{eq-G} \langle e_0,...,e_5 : [e_1,e_2]=[e_3,e_5,e_1]=[e_3,e_4]=[e_5,e_2,e_4]=e_4e_3e_5e_2e_1e_0=1 \rangle \end{equation} where $e_i$ is a meridian of the component appearing in the $(i+1)$-th place in~\eqref{eq-ceva}, $[\alpha,\beta]$ denotes the commutator $\alpha\beta\alpha^{-1}\beta^{-1}$, and $[\alpha,\beta,\delta]$ denotes the triple of commutators $[\alpha\beta\delta,\alpha]$, $[\alpha\beta\delta,\beta]$, and $[\alpha\beta\delta,\delta]$, leading to a triple of relations in~\eqref{eq-G}. In order to obtain~\eqref{eq-G} one can use the \emph{non-generic} Zariski-Van Kampen method on Figure~\ref{fig-ceva} (see~\cite[Section~1.4]{ji-zariski}).
The dotted line $\ell$ represents a generic line where the meridians $e_0,...,e_5$ are placed (note that the last relation in~\eqref{eq-G} is the relation in the fundamental group of~$\ell \setminus (\mathcal L\cap \ell)\approx \mathbb P^1_{6\infty}$). The first two relations in~\eqref{eq-G} appear when moving the generic line around $\ell_1$. The third and fourth relations come from moving the generic line around~$\ell_4$. \begin{figure} \caption{Ceva arrangement} \label{fig-ceva} \end{figure} The fundamental group of the complement to $\mathcal D_n\cup \mathcal B$ is equal to the kernel $K_n$ of the epimorphism \begin{equation} \label{eq-epi} \array{ccc} G & \rightmap{\alpha} &\mathbb Z_n \times \mathbb Z_n \\ e_0 & \mapsto & (-1,-1)\\ e_1 & \mapsto & (1,0) \\ e_2 & \mapsto & (0,1) \\ e_3 & \mapsto & (0,0) \\ e_4 & \mapsto & (0,0) \\ e_5 & \mapsto & (0,0) \\ \endarray \end{equation} since it is the fundamental group of the abelian cover of $\mathbb P^2\setminus \mathcal L$ with group of covering transformations $\mathbb Z_n\times \mathbb Z_n$. Therefore a presentation of the fundamental group of the complement to $\mathcal D_n$ can be obtained by taking a factor of $K_n$ by the normal subgroup generated by $e^n_0$, $e^n_1$, and $e^n_2$ (which are meridians of the preimages of the lines $x_0$, $x_1$, and $x_2$ respectively). Using the Reidemeister-Schreier method (cf.~\cite{karras}) combined with the triviality of $e^n_0$, $e^n_1$, and $e^n_2$ one obtains the following presentation for $G_n:=\pi_1(\mathcal X_n)$: \begin{equation} \label{eq-rels} G_n=\langle \ e_{3,i,j},e_{4,i,j},e_{5,i,j} : \begin{array}{cl} (R1) & e_{3,i+1,j}=e^{-1}_{5,i,j}e_{3,i,j}e_{5,i,j},\\ (R2) & e_{4,i,j+1}=e^{-1}_{5,i,j}e_{4,i,j}e_{5,i,j},\\ (R3) & e_{5,i+1,j}=e^{-1}_{5,i,j}e^{-1}_{3,i,j}e_{5,i,j}e_{3,i,j}e_{5,i,j},\\ (R4) & e_{5,i,j+1}=e^{-1}_{5,i,j}e^{-1}_{4,i,j}e_{5,i,j}e_{4,i,j}e_{5,i,j},\\ (R5) & {[}e_{3,i,j},e_{4,i,j}{]}=1,\\ (R6) & \prod_{k=0}^{n-1}e_{4,k,k}e_{3,k,k}e_{5,k,k}=1 \end{array} \ \rangle \end{equation} where $i,j\in \mathbb Z_n$ and $$e_{k,i,j}:=e_1^ie_2^je_ke_2^{-j}e_1^{-i},\ \ k=3,4,5.$$ As a brief description of the Reidemeister-Schreier method, we recall that the generators of $G_n$ are obtained from a set-theoretical section of $\alpha$ in~\eqref{eq-epi} (in our case $s:\mathbb Z_n\times \mathbb Z_n \to G$ is given by $(i,j)\mapsto e_1^ie_2^j$) as follows: $$ s(i,j)\ e_k\ s\bigl((i,j)+\alpha(e_k)\bigr)^{-1}. $$ Thus the set $\{e_{k,i,j}\}$ above forms a set of generators of $G_n$. Finally a complete set of relations can be obtained by rewriting the relations of $G$ in~\eqref{eq-G} (and their conjugates by $s(i,j)$) in terms of these generators. \begin{example} In order to illustrate the rewriting method we will proceed with the relation $[e_3,e_4]=1$ in~\eqref{eq-G}. $$ \array{c} s(i,j)[e_3,e_4]s(i,j)^{-1}=e_1^ie_2^j(e_3e_4e_3^{-1}e_4^{-1})e_2^{-j}e_1^{-i}=\\ (e_1^ie_2^je_3e_2^{-j}e_1^{-i})\ (e_1^ie_2^je_4e_2^{-j}e_1^{-i})\ (e_1^ie_2^je_3^{-1}e_2^{-j}e_1^{-i})\ (e_1^ie_2^je_4^{-1}e_2^{-j}e_1^{-i})= [e_{3,i,j},e_{4,i,j}] \endarray $$ \end{example} \subsection{Essential Coordinate Characteristic Varieties} \label{sec-charvar} Now we will discuss a presentation of $G_n'/G_n''$ as a module over $G_n/G'_n$, which will be referred to as $M_{\mathcal D_n,\ab}$. For details we refer to section~\ref{sec-alex-inv}.
Note that $G_n/G'_n$ is isomorphic to $\mathbb Z^{2n}$ and is generated by the cycles $\gamma_5$, $\gamma_{3,j}$, $\gamma_{4,i}$, ($i,j\in \mathbb Z_n$) where $\gamma_5=\ab(e_{5,i,j})$, $\gamma_{3,j}=\ab(e_{3,i,j})$, and $\gamma_{4,i}=\ab(e_{4,i,j})$ satisfying $n\gamma_5+\sum_j \gamma_{3,j} + \sum_i \gamma_{4,i}=0$\footnote{Recall that $\ab$ is the morphism of abelianization}. Let $t_{5}$ (resp. $t_{3,j}$, $t_{4,i}$) be the generators of $G_n/G_n'$ viewed as a multiplicative group corresponding to the additive generators $\gamma_5$ (resp. $\gamma_{3,j}$, $\gamma_{4,i}$). The characteristic varieties of $G_n$ are contained~in $$ (\mathbb C^*)^{2n} = \spec \mathbb C[t_5^{\pm 1},t_{3,i}^{\pm 1},t_{4,j}^{\pm 1}]/(t_5^n\prod_j t_{3,j} \prod_i t_{4,i}-1). $$ As generators of $M_{\mathcal D_n,\ab}$ we select commutators of the generators of $G_n$ as given in~\eqref{eq-G}. In order to do so, note that using relations $(R1)-(R4)$ in~\eqref{eq-rels}, a presentation of $G_n$ can be given in terms the $2n+1$ generators $e_5:=e_{5,0,0}$, $e_{3,j}:=e_{3,0,j}$, and $e_{4,i}:=e_{4,i,0}$. Hence, by Proposition~\ref{prop-M1}, $M_{\mathcal D_n,\ab}$ is generated by the ${2n+1}\choose{2}$ commutators \begin{equation} \label{eq-comm} \{[e_{5},e_{3,j}],[e_{5},e_{4,i}],[e_{4,i},e_{3,j}],[e_{4,i_1},e_{4,i_2}],[e_{3,j_1},e_{3,j_2}]\}_{i_*,j_*\in \mathbb Z_n}, \end{equation} as a $\mathbb C[\mathbb Z[t_1^{\pm 1},...,t_4^{\pm 1},t_5^{\pm 1}]]$-module. Also, according to Proposition~\ref{prop-M2}, a complete set of relations of $M_{\mathcal D_n,\ab}$ is given by rewriting the following relations \begin{equation} \label{eq-relsM} \array{lllll} (M1) & {[}\prod_{i=0}^{n-1} e_{5,i,j},e_{3,j}] & = & 0&\\ (M2) & {[}\prod_{i=0}^{n-1} e_{5,i,j},e_{4,i}] & = & 0&\\ (M3) & {[}e_{5,i,j+1},e_{3,i,j+1}e_{5,i,j+1}]e_{5,i,j+1}^{-1}&=& {[}e_{5,i+1,j},e_{4,i+1,j}e_{5,i+1,j}]e_{5,i+1,j}^{-1}&\\ (M4) & \prod_{i=0}^{n-1} e_{4,i,i}e_{3,i,i}e_{5,i,i} & = &0&\\ \endarray \end{equation} in terms of commutators~\eqref{eq-comm} and by the Jacobian relations: \begin{equation} \label{eq-jacobi} \array{lll} (t_{3,j}-1)[e_{5},e_{4,i}]+(t_{4,i}-1)[e_{3,j},e_{5}]+(t_{5}-1)[e_{4,i},e_{3,j}] & = & 0,\\ (t_{3,j_1}-1)[e_{5},e_{3,j_2}]+(t_{3,j_2}-1)[e_{3,j_1},e_{5}]+(t_{5}-1)[e_{3,j_2},e_{3,j_1}] & = & 0,\\ (t_{4,i_1}-1)[e_{5},e_{4,i_2}]+(t_{4,i_2}-1)[e_{4,i_1},e_{5}]+(t_{5}-1)[e_{4,i_2},e_{4,i_1}] & = & 0,\\ ... & \\ \endarray \end{equation} In order to rewrite relations $(M1)-(M4)$ one needs to use~\eqref{eq-comm} repeatedly. In what follows, we will concentrate on the characters of $\Char(\mathcal D_n)$ contained in the coordinate axes $t_{3,j}=t_{4,i}=1$. Computations for the general case can also be performed, but are more technical and tedious. Since we are assuming $t_{3,j}=t_{4,i}=1$, and $t_5\neq 1$, relations in~\eqref{eq-jacobi} become $[e_{4,i},e_{3,j}]=[e_{3,j_2},e_{3,j_1}]=[e_{4,i_2},e_{4,i_1}]=0$ and hence $(R5)$ in~\eqref{eq-rels} become redundant. A straightforward computation gives the following matrix where each line is a relation from~\eqref{eq-relsM} written in terms of the commutators $\{[e_5,e_{3,i}],[e_5,e_{4,i}]\}_{i\in \mathbb Z_n}$. $$ A_n:= {\tiny{ \left[ \begin{array}{cccccc|cccccc} \phi_n & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & 0\\ 0 & \phi_n & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 0 & 0 & 0 & ... & 0 & \phi_n & 0 & 0 & 0 & ... & 0 & 0\\ \hline \hline 0 & 0 & 0 & ... & 0 & 0 & \phi_n & 0 & 0 & ... & 0 & 0\\ 0 & 0 & 0 & ... & 0 & 0 & 0 & \phi_n & 0 & ... 
& 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 0 & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & \phi_n\\ \hline \hline 1 & -1 & 0 & ... & 0 & 0 & 1 & -1 & 0 & ... & 0 & 0\\ 1 & -1 & 0 & ... & 0 & 0 & 0 & t & -t & ... & 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 1 & -1 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2}\\ 1 & -1 & 0 & ... & 0 & 0 & -t^{n-1} & 0 & 0 & ... & 0 & t^{n-1}\\ \hline &&& \vdots &&&&&& \vdots &&\\ \hline 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2} & 1 & -1 & 0 & ... & 0 & 0\\ 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2} & 0 & t & -t & ... & 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2} & 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2}\\ 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2} & -t^{n-1} & 0 & 0 & ... & 0 & t^{n-1}\\ \hline \hline 1 & t & t^2 & ... & t^{n-2} & t^{n-1} & 1 & t & t^2 & ... & t^{n-2} & t^{n-1}\\ \end{array} \right] }} $$ More precisely, the first (resp. second) block of $A_n$ corresponds to the $n$ relations given in $(M1)$ (resp. $(M2)$) of~\eqref{eq-relsM}, $\phi_n:=\frac{t^n-1}{t-1}$, and $t=t_5$. The following $n$~blocks of $A_n$ (between double horizontal lines) correspond to the $n^2$ relations given in $(M3)$ of~\eqref{eq-relsM}. Note that the last row of each of these blocks is a consequence of the remaining $n-1$ rows. The last block corresponds to the relation given in $(M4)$ of~\eqref{eq-relsM}. \begin{example} In order to illustrate $A_n$ we will show how to rewrite the first relation for $n=3$, that is, $$[e_{5,0,j}e_{5,1,j}e_{5,2,j},e_{3,j}]\ \eqmap{M}\ \phi_n [e_5,e_{3,j}].$$ Using~\eqref{eq-comm-M} one has $$ [e_{5,0,j}e_{5,1,j}e_{5,2,j},e_{3,j}]\ \eqmap{M}\ [e_{5,0,j},e_{3,j}]+t[e_{5,1,j},e_{3,j}]+t^2[e_{5,0,j},e_{3,j}]. $$ Therefore, it is enough to show that $[e_{5,i,j},e_{3,j}]=[e_5,e_{3,j}]$. Note that $e_{5,i,j}$ is a conjugate of $e_5$ (using $(R4)$ and $(R4)$), hence, by~\eqref{eq-comm-M2} one obtains $[e_{5,i,j},e_{3,j}]=[e_5,e_{3,j}]$ (since we are assuming $t_{3,j}=1$). \end{example} Also note that, performing row operations, one can obtain the following equivalent matrix $$ B_n:= {\tiny{ \left[ \begin{array}{cccccc|cccccc} \phi_n & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & 0\\ 0 & \phi_n & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 0 & 0 & 0 & ... & 0 & \phi_n & 0 & 0 & 0 & ... & 0 & 0\\ \hline 0 & 0 & 0 & ... & 0 & 0 & \phi_n & 0 & 0 & ... & 0 & 0\\ 0 & 0 & 0 & ... & 0 & 0 & 0 & \phi_n & 0 & ... & 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 0 & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & \phi_n\\ \hline 1 & -1 & 0 & ... & 0 & 0 & 1 & -1 & 0 & ... & 0 & 0\\ 1 & -1 & 0 & ... & 0 & 0 & 0 & t & -t & ... & 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 1 & -1 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2}\\ 0 & t & -t & ... & 0 & 0 & 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2}\\ &&& \vdots &&&&&& \vdots &&\\ 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2} & 0 & 0 & 0 & ... & t^{n-2} & -t^{n-2}\\ \hline 1 & t & t^2 & ... & t^{n-2} & t^{n-1} & 1 & t & t^2 & ... & t^{n-2} & t^{n-1}\\ \end{array} \right] }} $$ Finally, one can write the presentation matrix $B_n$ in terms of the basis $$\{[e_5,e_{3,i}] - [e_5,e_{3,i+1}], [e_5,e_{3,n-1}] , [e_5,e_{4,i}] - [e_5,e_{4,i+1}], [e_5,e_{4,n-1}] \}_{i=0,...,n-2}$$ resulting in $$ {\tiny{ \left[ \begin{array}{cccccc|cccccc} \phi_n & \phi_n & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & 0\\ 0 & \phi_n & \phi_n & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 0 & 0 & 0 & ... & \phi_n & \phi_n & 0 & 0 & 0 & ... & 0 & 0\\ 0 & 0 & 0 & ... & 0 & \phi_n & 0 & 0 & 0 & ... 
& 0 & 0\\ \hline 0 & 0 & 0 & ... & 0 & 0 & \phi_n & \phi_n & 0 & ... & 0 & 0\\ 0 & 0 & 0 & ... & 0 & 0 & 0 & \phi_n & \phi_n & ... & 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 0 & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & \phi_n & \phi_n\\ 0 & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & \phi_n\\ \hline 1 & 0 & 0 & ... & 0 & 0 & 1 & 0 & 0 & ... & 0 & 0\\ 1 & 0 & 0 & ... & 0 & 0 & 0 & t & 0 & ... & 0 & 0\\ 1 & 0 & 0 & ... & 0 & 0 & 0 & 0 & t^2 & ... & 0 & 0\\ &&& \vdots &&&&&& \vdots &&\\ 1 & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & t^{n-2} & 0\\ 0 & t & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & t^{n-2} & 0\\ 0 & 0 & t^2 & ... & 0 & 0 & 0 & 0 & 0 & ... & t^{n-2} & 0\\ &&& \vdots &&&&&& \vdots &&\\ 0 & 0 & 0 & ... & t^{n-2} & 0 & 0 & 0 & 0 & ... & t^{n-2} & 0\\ \hline \phi_1 & \phi_2 & \phi_3 & ... & \phi_{n-1} & \phi_n & \phi_1 & \phi_2 & \phi_3 & ... & \phi_{n-1} & \phi_n\\ \end{array} \right] }} $$ One can use the units in the third block to eliminate columns, leaving the equivalent matrix $$ {\tiny{ \left[ \begin{array}{cccccc|cccccc} \phi_n & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & 0\\ 0 & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & -\phi_n & 0\\ 0 & 0 & 0 & ... & 0 & \phi_n & 0 & 0 & 0 & ... & 0 & 0\\ \hline 0 & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & 0 & \phi_n\\ \hline 1 & 0 & 0 & ... & 0 & 0 & 1 & 0 & 0 & ... & 0 & 0\\ 1 & 0 & 0 & ... & 0 & 0 & 0 & 0 & 0 & ... & t^{n-2} & 0\\ \end{array} \right] }} \cong {\tiny{ \left[ \begin{array}{c|ccc} 0 & -\phi_n & 0 & 0\\ 0 & 0 & -\phi_n & 0\\ \phi_n & 0 & 0 & 0\\ \hline 0 & 0 & 0 & \phi_n\\ \hline 0 & -1 & t^{n-2} & 0\\ \end{array} \right]. }} $$ Finally, a last combination of row operations using the units to eliminate columns results in $$ {\tiny{ \left[ \begin{array}{c|ccc} 0 & 0 & -\phi_n t^{n-2} & 0\\ 0 & 0 & -\phi_n & 0\\ \phi_n & 0 & 0 & 0\\ \hline 0 & 0 & 0 & \phi_n\\ \hline 0 & -1 & t^{n-2} & 0\\ \end{array} \right] }} \cong {\tiny{ \left[ \begin{array}{c|cc} 0 & -\phi_n t^{n-2} & 0\\ 0 & -\phi_n & 0\\ \phi_n & 0 & 0\\ \hline 0 & 0 & \phi_n \end{array} \right] }} \cong {\tiny{ \left[ \begin{array}{c|cc} 0 & \phi_n & 0\\ \phi_n & 0 & 0\\ \hline 0 & 0 & \phi_n \end{array} \right]. }} $$ Hence the $n-1$ non-trivial torsion characters $\chi_n^i:=(\xi_n^i,1,...,1)$, $i=1,...,n-1$, belong to $\Char(\mathcal D_n)$ and have depth 3, that is, $\chi_n^i\in V_3(\mathcal D_n)$. \subsection{Marked Orbifold Pencils} By Theorem~\ref{thm-main}\ref{thm-main-part1} there are at most three strongly independent marked orbifold pencils from the marked variety $(\mathcal X_n,\chi_n)$. Our purpose is to exhibit explicitly three such strongly independent pencils. Note that \begin{equation} \label{eq-orb-maps} \array{cccl} j_k: & \mathbb P^2\setminus (\mathcal F_n\cup \mathcal L_k) & \to & \mathbb C^*_{n} =\mathbb P^1_{(n,[1:0]),(\infty,[0:1]),(\infty,[1:1])} \\ & [x:y:z] & \mapsto & [f_n:x_k^n], \endarray \end{equation} for $k=1,2$ are two natural orbifold pencils coming from the $n$-ordinary points of $\mathcal F_n$ arising from the triple points of the Ceva arrangement $\mathcal L$ which are in $\mathcal B$. Consider the marked orbicurve $(\mathbb C_{n,n},\rho_n)$, where $\rho_n=(\xi_n,1)$: the first coordinate corresponds to the image of a meridian $\mu_1$ around $[0:1]\in \mathbb P^1_{(n,[0:1]),(n,[1:0]),(\infty,[1:1])}$ and the second coordinate corresponds to the image of a meridian $\mu_2$ around $[1:0]$ (note that $\pi_1^\text{\rm orb}(\mathbb C_{n,n})=\mathbb Z_n(\mu_1) * \mathbb Z_n(\mu_2)$).
In order to obtain marked orbifold pencils with target $(\mathbb C_{n,n},\rho_n)$ one simply considers the following compositions, where $i_k$ and $i$ are inclusions and $j_k$ is as in~\eqref{eq-orb-maps}: $$\psi_k:\mathcal X_n \injmap{i_k} \mathbb P^2\setminus (\mathcal F_n \cup \mathcal L_k) \rightmap{j_k} \mathbb P^1_{(n,[1:0]),(\infty,[0:1]),(\infty,[1:1])} \injmap{i} \mathbb P^1_{(n,[0:1]),(n,[1:0]),(\infty,[1:1])}.$$ Such pencils are clearly marked global quotient orbifold pencils from $(\mathcal X_n,\chi_n)$ to $(\mathbb C_{n,n},\rho_n)$, where $(\mathbb C_{n,n},\rho_n)$ is the marked quotient of $C_n:=\mathbb P^1\setminus \{[\xi_n^j:1]\}_{j\in \mathbb Z_n}$ by the cyclic action $[x:y]\mapsto [\xi_nx:y]$. The resulting commutative diagrams are given by \begin{equation} \label{eq-diag1} \array{rcl} X_n & \rightmap{\Psi_k} & C_n\\ {[}x_0:x_1:x_2:w] & \mapsto & [w:x_k] \\ \downmap{\pi} & & \downarrow\\ \mathcal X_n & \rightmap{\psi_k} & \mathbb C_{n,n}\\ {[}x_0:x_1:x_2] & \mapsto & [f_n:x_k^n], \endarray \end{equation} $k=1,2$, where $X_n$ is the smooth open surface given by $\{[x_0:x_1:x_2:w]\in \mathbb P^3 \mid w^n=f_n\} \setminus \{f_n\ell_1\ell_2=0\}$. Note that there is a third quasitoric relation involving all components of $\mathcal D_n$, namely, \begin{equation} \label{eq-3} f_nx_0^n + \ell_1\ell_2 = x_1^nx_2^n \end{equation} and hence a global quotient marked orbifold map \begin{equation} \label{eq-orb-maps3} \array{cccl} \psi_3: & \mathcal X_n & \to & \mathbb C_{n,n} =\mathbb P^1_{(n,[0:1]),(n,[1:0]),(\infty,[1:1])} \\ & [x:y:z] & \mapsto & [-f_nx_0^n:x_1^nx_2^n], \endarray \end{equation} which gives rise to the following diagram \begin{equation} \label{eq-diag2} \array{rcl} X_n & \rightmap{\Psi_3} & C_n\\ {[}x_0:x_1:x_2:w] & \mapsto & [-wx_0:x_1x_2] \\ \downmap{\pi_n} & & \downarrow\\ \mathcal X_n & \rightmap{\psi_3} & \mathbb C_{n,n}\\ {[}x_0:x_1:x_2] & \mapsto & [-f_nx_0^n:x_1^nx_2^n]. \endarray \end{equation} Note that, when extending $\pi_n$ to a branched covering, the preimage of each line $\{\ell_{k,i}=0\}\subset \mathcal L_k$ ($k=1,2$) in $\mathcal D_n$ ($\ell_{k,i}:=x_0-\xi_n^ix_k$) decomposes into $n$ irreducible components $\bigcup_{j\in \mathbb Z_n} \ell_{k,i,j}$ and thus allows one to consider meridians $\gamma_{k,i,j}$ ($k=1,2$, $i,j\in\mathbb Z_n$) around each component $\{\ell_{k,i,j}=0\}$. Also consider a meridian $\gamma_0$ around the preimage of $\mathcal F_n$. \begin{theorem} The marked orbifold pencils $\psi_1$, $\psi_2$, and $\psi_3$ described above are strongly independent and hence they form a maximal set of strongly independent pencils. \end{theorem} \begin{proof} Consider $\Psi_{\varepsilon,*}:H_1(X_n;\mathbb Z) \to H_1(C_n;\mathbb Z)=\mathbb Z[\xi_n]$, $\varepsilon=1,2,3$, the three equivariant morphisms described above. Using the commutative diagrams~\eqref{eq-diag1} and~\eqref{eq-diag2} one can easily see that \begin{equation} \label{eq-gammas} \Psi_{\varepsilon,*} (\gamma_{k,i,j}) = \begin{cases} \xi_n^j & \text{ if\ } \varepsilon=k\in \{1,2\} \\ \xi_n^{i+j} & \text{ if\ } \varepsilon=3 \\ 0 & \text{otherwise} \end{cases} \end{equation} and $$ \Psi_{\varepsilon,*} (\gamma_{0}) = 0 $$ and therefore $\Psi_{\varepsilon,*}$ are surjective $\mathbb Z[\xi_n]$-module morphisms. Also note that $[\gamma_{k,i,j}]=\mu_n^j[\gamma_{k,i,0}] \in H_1(X_n;\mathbb Z)$.
Consequently according to~\eqref{eq-gammas} one has $$ \left(\Psi_{1,*}\oplus \Psi_{2,*}\oplus \Psi_{3,*}\right) (\gamma_{k,i,0})= \begin{cases} (1,0,\xi_n^i) & \text{ if\ } k=1 \\ (0,1,\xi_n^i) & \text{ if\ } k=2 \\ \end{cases} $$ which implies that $\Psi_{1,*}\oplus \Psi_{2,*}\oplus \Psi_{3,*}$ is surjective. After the discussion of section~\ref{sec-charvar}, since the depth of $\xi_n^i$ is three, the set of strongly independent pencils is indeed maximal. \end{proof} \section{Order Two Characters: {augmented Ceva}} \label{sec-ceva} From Theorem~\ref{thm-main}\ref{thm-main-part2}, for any order two character $\chi$ of depth $k$ in the characteristic variety of the complement of a curve there exist $k$ independent pencils associated with $\chi$ whose target is a global quotient orbifold of type $\mathbb C_{2,2}$. Interesting examples for $k>1$ of this scenario are the {{augmented Ceva}} arrangements $\text{\rm CEVA}(2,s)$, $s=1,2,3$ (or \emph{erweiterte Ceva} cf.~\cite[Section~2.3.J, pg.~81]{geraden}). Consider the following set of lines: \begin{equation} \array{ccc} \array{c} \ell_1:=x\\ \ell_2:=y\\ \ell_3:=z \endarray & \array{c} \ell_4:=(y-z)\\ \ell_5:=(x-z)\\ \ell_6:=(x-y) \endarray & \array{l} \ell_7:=(x-y-z)\\ \ell_8:=(y-z-x)\\ \ell_9:=(z-x-y). \endarray \endarray \end{equation} The curve $\mathcal C_6:=\left\{ \prod_{i=1}^6\ell_i=0\right\}$ is a realization of the Ceva arrangement $\text{\rm CEVA}(2)$ (a.k.a. braid arrangement or $B_3$-reflection arrangement). Note that this realization is different from the one used in section~\ref{sec-fermat}. The curve $\mathcal C_7:=\left\{ \prod_{i=1}^7\ell_i=0 \right\}$ is the {augmented Ceva}\ arrangement $\text{\rm CEVA}(2,1)$ (a.k.a. a realization of the non-Fano plane). The curve $\mathcal C_8:=\left\{ \prod_{i=1}^8\ell_i=0\right\}$ is the {augmented Ceva}\ arrangement $\text{\rm CEVA}(2,2)$ (a.k.a. a deleted $B_3$-arrangement). Finally, $\mathcal C_9:=\left\{\prod_{i=1}^9\ell_i=0\right\}$ is the {augmented Ceva}\ arrangement $\text{\rm CEVA}(2,3)$. The characteristic varieties of such arrangements of lines are well known (c.f~\cite{suciu-enumerative,suciu-translated,dimca-pencils}). Such computations are done via a presentation of the fundamental group and using Fox derivatives. In most cases (except for the simplest ones) the need of computer support is basically unavoidable. In~\cite[Example~3.11]{dimca-pencils} there is an alternative calculation of the positive dimensional components of depth~1 via pencils. Here we will give an interpretation via orbifold pencils of the characters of depth~2, which will account for the appearance of these components of the characteristic varieties independently of computation of the fundamental group. \subsection{Ceva and {augmented Ceva}\ Pencils} Note that $x(y-z)-y(x-z)+z(x-y)=0$ and hence $$ \array{cccc} f_C: & \mathbb P^2 & \to & \mathbb P^1\\ & [x:y:z] & \mapsto & [\ell_1\ell_4:\ell_2\ell_5] \endarray $$ is a pencil of conics such that $(f_C^*([0:1])=\ell_1\ell_4,f_C^{*}([1:0])=\ell_2\ell_5,f_C^{-1}([1:1])=\ell_3\ell_6)$ (we will refer to it as the \emph{Ceva pencil}). Analogously $$x(y-z)(x-y-z)^2-y(x-z)(y-z-x)^2+z(x-y)(z-x-y)^2=0$$ and hence $$ \array{cccc} f_{SC}: & \mathbb P^2 & \to & \mathbb P^1\\ & [x:y:z] & \mapsto & [\ell_1\ell_4\ell_7^2:\ell_2\ell_5\ell_8^2] \endarray $$ is a pencil of quartics such that $(f_{SC}^{*}([0:1])=\ell_1\ell_4\ell_7^2,f_{SC}^{*}([1:0])=\ell_2\ell_5\ell_8^2,f_{SC}^{*}([1:1])=\ell_3\ell_6\ell_9^2)$ (we will refer to it as the \emph{{augmented Ceva}\ pencil}). 
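As a quick check of the fiber over $[1:1]$ (a restatement of the identities above): since
$$
\ell_1\ell_4-\ell_2\ell_5=x(y-z)-y(x-z)=yz-xz=-z(x-y)=-\ell_3\ell_6,
$$
the fiber $f_C^{*}([1:1])$, defined by $\ell_1\ell_4=\ell_2\ell_5$, is exactly $\ell_3\ell_6$; in the same way the quartic identity gives $f_{SC}^{*}([1:1])=\ell_3\ell_6\ell_9^2$. The squared factors $\ell_7^2$, $\ell_8^2$, and $\ell_9^2$ appearing in the fibers of $f_{SC}$ are the multiple fibers responsible for the orbifold points of order~$2$ used in the pencils below.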
\subsection{Characteristic Varieties of $\mathcal C_i$, $i=6,7,8,9$} We include the structure of the characteristic varieties of these curves for the reader's convenience. As reference for such computations see~\cite{suciu-enumerative,suciu-translated,falk-arrangements,cohen-suciu-characteristic,charvar,libgober-yuzvinsky-local}. We will denote by $\mathcal X_*$ the complement of the curve $\mathcal C_*$ in $\mathbb P^2$, for $*=6,7,8,9$. \subsubsection{Arrangement $\mathcal C_6$.} The characteristic variety $\Char(\mathcal C_6)$ consists of four non-essential coordinate components associated with the four triple points of $\mathcal C_6$ (see Remark~\ref{rem-main-thm}\ref{rem-non-orbifold})\footnote{a.k.a. local components} and one essential component of dimension~2 and depth~1 given by the Ceva pencil $$\psi_6:=f_C|_{\mathcal X_6}: \mathcal X_6 \to \mathbb P^1\setminus \{[0:1],[1:0],[1:1]\}.$$ \subsubsection{Arrangement $\mathcal C_7$.} The characteristic variety $\Char(\mathcal C_7)$ consists of six (resp. four) non-essential coordinate components associated with the six triple points of $\mathcal C_7$ (resp. four $\mathcal C_6$-subarrangements) of dimension~2 and depth~1. In addition, there is one extra character of order two, namely, $$\chi_{7}:=(1,-1,-1,1,-1,-1,1)$$ of depth~2.\footnote{the subscript 7 refers to the arrangement $\mathcal C_7$. Similar notation will be used in the examples that follow. A second subscript (when necessary) will be used to index the characters considered.} In order to check the value of the depth, one needs to find all marked orbifold pencils in $(\mathcal X_7,\chi_{7})$ of target $(\mathbb C_{2,2},\rho)$ where $\rho:=(-1,-1)$ is the only possible non-trivial character of $\mathbb C_{2,2}$. Two such independent pencils are the following, $$\psi_{7,1}:=f_C|_{\mathcal X_7}: \mathcal X_7 \to \mathbb P^1\setminus \{[0:1],[1:0],[1:1]\} \to \mathbb P^1_{(2,[1:0]),(2,[1:1]),(\infty [0:1])}$$ and $$\psi_{7,2}:=f_{SC}|_{\mathcal X_7}: \mathcal X_7 \to \mathbb P^1_{(2,[1:0]),(2,[1:1]),(\infty [0:1])}.$$ This is the maximal number of independent pencils by Theorem~\ref{thm-main}. \subsubsection{Arrangement $\mathcal C_8$.} The characteristic variety $\Char(\mathcal C_8)$ consists of six (resp. five) non-essential coordinate components associated with the six triple points of $\mathcal C_8$ (resp. four $\mathcal C_6$-subarrangements) of dimension~2 and depth~1. In addition, there is one 3-dimensional non-essential coordinate component of depth~2 associated with its quadruple point (see Remark~\ref{rem-main-thm}\ref{rem-non-orbifold}). Consider the following {augmented Ceva}\ pencil $$\psi_{8,1}:=f_{SC}|_{\mathcal X_8}: \mathcal X_8 \to \mathbb P^1_{(2,[1:1]),(\infty [0:1]),(\infty [1:0])}.$$ Computation of the induced map on the variety of characters shows that this map yields the only non-coordinate translated component of dimension~1 and depth~1 observed in the references above. Finally, there are two characters of order two, namely, $$ \array{cc} \chi_{8,1}:=(1,-1,-1,1,-1,-1,1,1) & \text{\ and}\\ \chi_{8,2}:=(-1,1,-1,-1,1,-1,1,1) & \endarray $$ of depth~2. In order to check the value of the depth, one needs to find two marked orbifold pencils on $(\mathcal X_8,\chi_{8,1})$ with target $(\mathbb C_{2,2},\rho)$, where $$\mathbb C_{2,2}:=\mathbb P^1_{(2,[1:0]),(2,[1:1]),(\infty [0:1])}$$ and $\rho:=(-1,-1,1)$ is the only non-trivial character of $\mathbb C_{2,2}$. 
Two such independent pencils can, for example, be given as follows $$\psi_{8,2}:=f_C|_{\mathcal X_8}: \mathcal X_8 \to \mathbb P^1\setminus \{[0:1],[1:0],[1:1]\} \to \mathbb P^1_{(2,[1:0]),(2,[1:1]),(\infty [0:1])}$$ and $$\psi_{8,3}:=f_{SC}|_{\mathcal X_8}: \mathcal X_8 \to \mathbb P^1_{(2,[1:1])} \setminus \{[1:0],[0:1]\} \to \mathbb P^1_{(2,[1:0]),(2,[1:1]),(\infty [0:1])}.$$ \subsubsection{Arrangement $\mathcal C_9$.} The characteristic variety $\Char(\mathcal C_9)$ consists of four (resp. eleven) non-essential coordinate components associated with the four triple points of $\mathcal C_9$ (resp. eleven $\mathcal C_6$-subarrangements), which have dimension~2 and depth~1. In addition, there are three 3-dimensional non-essential coordinate components of depth~2 associated with the quadruple points of $\mathcal C_9$. Consider the following {augmented Ceva}\ pencil $$\psi_{9,1}:=f_{SC}|_{\mathcal X_9}: \mathcal X_9 \to \mathbb P^1 \setminus \{[1:0],[0:1],[1:1]\}.$$ Computations of the induced map on the variety of characters show that this pencil yields the only non-coordinate translated component of dimension~2 and depth~1 observed in the references above. Finally, there are also three characters of order two $$ \array{cc} \chi_{9,1}:=(-1,-1,1,-1,-1,1,1,1,1), &\\ \chi_{9,2}:=(-1,1,-1,-1,1,-1,1,1,1), & \text{\ and}\\ \chi_{9,3}:=(1,-1,-1,1,-1,-1,1,1,1) \endarray $$ of depth~2. In order to check the value of the depth, one needs to find two independent marked orbifold pencils on $(\mathcal X_9,\chi_{9,1})$ with target $(\mathbb C_{2,2},\rho)$ where $\mathbb C_{2,2}:=\mathbb P^1_{(2,[0:1]),(2,[1:0]),(\infty [1:1])}$ and $\rho:=(-1,-1,1)$ is the only non-trivial character on $\mathbb C_{2,2}$. Two such independent pencils can be given, for example, as follows $$\psi_{9,2}:=f_C|_{\mathcal X_9}: \mathcal X_9 \to \mathbb P^1\setminus \{[0:1],[1:0],[1:1]\} \to \mathbb P^1_{(2,[0:1]),(2,[1:0]),(\infty [1:1])}$$ and $$\psi_{9,3}:=f_{SC}|_{\mathcal X_9}: \mathcal X_9 \to \mathbb P^1\setminus \{[0:1],[1:0],[1:1]\} \to \mathbb P^1_{(2,[0:1]),(2,[1:0]),(\infty [1:1])}.$$ \begin{remark} Note that the depth $2$ characters in $\Char(\mathcal C_8)$ and $\Char(\mathcal C_9)$ lie in the intersection of positive dimensional components and this fact forces them to have depth greater than~$1$, see~\cite[Proposition~5.9]{acm-charvar-orbifolds}. \end{remark} \subsection{Comments on Independence of Pencils} \mbox{} \begin{itemize} \item \textbf{Depth conditions on the target:} First of all note that the condition on the target $(\mathcal C,\rho)$ to have $d(\rho)>0$ is essential in the discussion above, i.e. pencils with target satisfying $d(\rho)=0$ may not contribute to the characteristic varieties. For instance, the space $\mathcal X_6$ also admits several global quotient pencils coming from the {{augmented Ceva}} pencil, namely $$\psi'_{6}:=f_{SC}|_{\mathcal X_6}: \mathcal X_6 \to \mathbb P^1_{(2,[0:1]),(2,[1:0]),(2,[1:1])} \to \mathbb P^1_{(2,[0:1]),(2,[1:0])}.$$ However, the orbifold $\mathbb P_{2,2}$ is a global quotient orbifold whose orbifold fundamental group is abelian, so no non-trivial characters belong to its characteristic variety. \item \textbf{Independence of Pencils.} Here is an explicit argument for independence of pencils for one of the cases discussed in last section. Consider the pencils $\psi_{9,2}$ and $\psi_{9,3}$ described above as marked pencils from $(\mathcal X_9,\chi_{9,1})$ having $(\mathbb C_{2,2},\rho)$ as target. 
The marking produces the following commutative diagrams: $$ \array{rcl} X_{9,2} & \rightmap{\Psi_{9,2}} & C_2\\ {[}x:y:z:w] & \mapsto & [\ell_1\ell_4:w] \\ \downmap{\pi} & & \downmap{\tilde \pi}\\ \mathcal X_9 & \rightmap{\psi_{9,2}} & \mathbb C_{2,2}\\ {[}x:y:z] & \mapsto & [\ell_1\ell_4:\ell_2\ell_5], \endarray $$ and $$ \array{rcl} X_{9,2} & \rightmap{\Psi_{9,3}} & C_2\\ {[}x:y:z:w] & \mapsto & [\ell_1\ell_4\ell_7:w\ell_8] \\ \downmap{\pi} & & \downmap{\tilde \pi}\\ \mathcal X_9 & \rightmap{\psi_{9,3}} & \mathbb C_{2,2}\\ {[}x:y:z] & \mapsto & [\ell_1\ell_4\ell_7^2:\ell_2\ell_5\ell_8^2], \endarray $$ where $X_{9,2}$ is contained in $\{[x:y:z:w] \mid w^2=\ell_1\ell_4\ell_2\ell_5\}$, $C_2:=\mathbb P^1\setminus \{[1:1],[1:-1]\}$ and $\tilde \pi$ is given by $[u:v]\mapsto [u^2:v^2]$. Consider $\gamma_{i,k}$, $i=3,6,7,8,9$, $k=1,2$ the lifting of meridians around $\ell_i$ in $X_{9,2}$. Also denote by $\mathbb Z[\mathbb Z_2]$ the ring of deck transformations of $\tilde \pi$ as before, where $\mathbb Z_2$ acts by multiplication by $\xi_2=(-1)$. Note that, as before $\Psi_{9,2}(\gamma_{3,k})=\Psi_{9,2}(\gamma_{3,k})=(-1)^k$ and $\Psi_{9,3}(\gamma_{4,k})=\Psi_{9,3}(\gamma_{4,k})=(-1)^{k+1}$. However, $\Psi_{9,2}(\gamma_{9,k})=0$ and $\Psi_{9,3}(\gamma_{9,k})=(-1)^k$. Therefore $\psi_{9,2}$ and $\psi_{9,3}$ are independent pencils of $(\mathcal X_9,\chi_{9,1})$ with target $(\mathbb C_{2,2},\rho)$. \end{itemize} \section{Curve Arrangements} \label{sec-zariski-pair} Consider the space $\mathcal M$ of sextics with the following combinatorics: \begin{enumerate} \item $\mathcal C$ is a union of a smooth conic $\mathcal C_2$ and a quartic $\mathcal C_4$; \item $\text{\rm Sing}(\mathcal C_4) = \{P, S\}$ where $S$ is a cusp of type $\mathbb A_4$ and $P$ is a node of type $\mathbb A_1$; \item $\mathcal C_2 \cap \mathcal C_4 = \{S, R\}$ where $S$ is a $\mathbb D_7$ on $\mathcal C$ and $R$ is a $\mathbb A_{11}$ on $\mathcal C$. \end{enumerate} In~\cite{ArtalCogolludo} it is shown that $\mathcal M$ has two connected components, say $\mathcal M^{(1)}$ and $\mathcal M^{(2)}$. The following are equations for curves in each connected component: \begin{equation*} \array{rl} f_6^{(1)}=f_2^{(1)}f_4^{(1)}:= & \left( \left( y+3x \right) z+\frac{3y^2}{2} \right) \\ & \left( x^2{z}^{2}- \left( x{y}^{2}+\frac{15}{2}\,x^2y+\frac{9}{2}x^3 \right) z-3x\,{y}^{3}-\frac{9x^2y^2}{4}+\frac{y^4}{4} \right) \endarray \end{equation*} for $\mathcal C_6^{(1)}\in \mathcal M^{(1)}$ and \begin{equation*} \array{c} f_6^{(2)}=f_2^{(2)}f_4^{(2)}:= \left( \left( y+\frac{x}{3} \right) z-\frac{y^2}{6} \right) \left( x{z}^{2}- \left( x{y}^{2}+\frac{9x^2y}{2}+\frac{3x^3}{2}\right) z +\frac{y^4}{4}+\frac{3x^2y^2}{4} \right) \endarray \end{equation*} for $\mathcal C_6^{(2)}\in \mathcal M^{(2)}$. The curves $\mathcal C_6^{(1)}$ and $\mathcal C_6^{(2)}$ form a Zariski pair since their fundamental groups are not isomorphic. This cannot be detected by Alexander polynomials since both are trivial. In~\cite{ArtalCogolludo} the existence of an essential coordinate character of order two in the characteristic variety of $\mathcal C_6^{(2)}$ was shown enough to distinguish both fundamental groups, since the characteristic variety of $\mathcal C_6^{(1)}$ is trivial. By Theorem~\ref{thm-main}\ref{thm-main-part2} this fact can also be obtained by looking at possible orbifold pencils. 
Note that there exists a conic $\mathcal Q:=\{q=0\}$ passing through $S$ and $R$ such that $(\mathcal Q,\mathcal C_4^{(1)})_S=4$, $(\mathcal Q,\mathcal C_4^{(2)})_S=5$, and $(\mathcal Q,\mathcal C_2^{(2)})_R=3$, $(\mathcal Q,\mathcal C_2^{(2)})_R=3$. Consider $L:=\{\ell=0\}$ the tangent line to $\mathcal Q$ at $S$. One has the following list of multiplicities of intersection: $$ \array{ll} (\mathcal Q,\mathcal C_2^{(2)}+2L)_S=(\mathcal Q,\mathcal C_4^{(2)})_S=5 & (\mathcal Q,\mathcal C_2^{(2)}+2L)_R=(\mathcal Q,\mathcal C_4^{(2)})_R=3\\ (\mathcal C_4^{(2)},2\mathcal Q)_S=(\mathcal C_4^{(2)},\mathcal C_2^{(2)}+2L)_S=10 & (\mathcal C_4^{(2)},2\mathcal Q)_R=(\mathcal C_4^{(2)},\mathcal C_2^{(2)}+2L)_R=6\\ (\mathcal C_2^{(2)},\mathcal C_4^{(2)})_S=(\mathcal C_2^{(2)},2\mathcal Q)_S=2 & (\mathcal C_2^{(2)},\mathcal C_4^{(2)})_R=(\mathcal C_2^{(2)},2\mathcal Q)_R=6\\ (L,\mathcal C_4^{(2)})_S=(L,2\mathcal Q)_S=4 & (L,\mathcal C_4^{(2)})_R=(L,2\mathcal Q)_R=0. \endarray $$ By~\cite{ji-noether}, this implies that $(\mathcal C_2^{(2)}+2L,\mathcal C_2^{(2)},2\mathcal Q)$ are members of a pencil of quartics. In other words, there is a marked orbifold pencil from $\mathcal C:=\mathbb P^2\setminus \mathcal C_6^{(2)}$ marked with $\chi:=(-1,1)$ to $\mathbb P^1_{(2,[0,1]),(2,[1:0]),(\infty [1:1])}$ given by $[x:y:z]\mapsto [f_2^{(2)}\ell^2:q^2]$ whose target mark is the character $\rho:=(-1,-1,1)$. \end{document}
\begin{document} \title[Trace identities and almost polynomial growth]{Trace identities and almost polynomial growth} \author{Antonio Ioppolo} \address{IMECC, UNICAMP, S\'ergio Buarque de Holanda 651, 13083-859 Campinas, SP, Brazil} \email{[email protected]} \thanks{A. Ioppolo was supported by the Fapesp post-doctoral grant number 2018/17464-3} \author{Plamen Koshlukov} \address{IMECC, UNICAMP, S\'ergio Buarque de Holanda 651, 13083-859 Campinas, SP, Brazil} \email{[email protected]} \thanks{P. Koshlukov was partially supported by CNPq grant No.~302238/2019-0, and by FAPESP grant No.~2018/23690-6} \author{Daniela La Mattina} \address{Dipartimento di Matematica e Informatica, Universit\`a degli Studi di Palermo, Via Archirafi 34, 90123, Palermo, Italy} \email{[email protected]} \thanks{D. La Mattina was partially supported by GNSAGA-INDAM} \subjclass[2010]{Primary 16R10, Secondary 16R30, 16R50} \keywords{Trace algebras, polynomial identities, codimension growth} \begin{abstract} In this paper we study algebras with trace and their trace polynomial identities over a field of characteristic 0. We consider two commutative matrix algebras: $D_2$, the algebra of $2\times 2$ diagonal matrices, and $C_2$, the algebra of $2 \times 2$ matrices generated by $e_{11}+e_{22}$ and $e_{12}$. We describe all possible traces on these algebras and we study the corresponding trace codimensions. Moreover we characterize the varieties with trace of polynomial growth generated by a finite dimensional algebra. As a consequence, we see that the growth of a variety with trace is either polynomial or exponential. \end{abstract} \maketitle \section{Introduction} All algebras we consider in this paper will be associative and over a fixed field $F$ of characteristic 0. Let $F\langle X\rangle$ be the free associative algebra freely generated by the infinite countable set $X=\{x_1, x_2, \ldots\}$ over $F$. One interprets $F\langle X\rangle$ as the $F$-vector space with a basis consisting of 1 and all non-commutative monomials (that is, words) on the alphabet $X$. The multiplication in $F\langle X\rangle$ is defined on the monomials by juxtaposition. Let $A$ be an algebra; it is clear that every function $\varphi\colon X\to A$ can be extended in a unique way to a homomorphism (denoted by the same letter) $\varphi\colon F\langle X\rangle\to A$. A polynomial $f\in F\langle X\rangle$ is a polynomial identity (PI for short) for the algebra $A$ whenever $f$ lies in the kernels of all homomorphisms from $F\langle X\rangle$ to $A$. Equivalently, $f(x_1,\ldots,x_n)$ is a polynomial identity for $A$ whenever $f(a_1,\ldots,a_n)=0$ for any choice of $a_i \in A$. The set of all PI's for a given algebra $A$ forms an ideal in $F\langle X\rangle$ denoted by $\mbox{Id}(A)$ and called the T-ideal of $A$. Clearly $\mbox{Id}(A)$ is closed under the endomorphisms of $F\langle X\rangle$. It is not difficult to prove that the converse is also true: if an ideal $I$ in $F\langle X\rangle$ is closed under endomorphisms then $I=\mbox{Id}(A)$ for some (suitable) algebra $A$. One such algebra is the relatively free algebra $F\langle X\rangle/I$. Knowing the polynomial identities satisfied by an algebra $A$ is an important problem in ring theory. It is also a very difficult one; it has been solved completely only in very few cases. These include the algebras $F$ (trivial); $M_2(F)$, the full matrix algebra of order 2; $E$, the infinite dimensional Grassmann algebra; $E\otimes E$.
If one adds to the above algebras the upper triangular matrices $UT_n(F)$, one will get more or less the complete list of algebras whose identities are known. The theory developed by A. Kemer in the 1980s (see \cite{Kemer1991book}) solved in the affirmative the long-standing Specht problem: is every T-ideal in the free associative algebra finitely generated as a T-ideal? But the proof given by Kemer is not constructive; it suffices to mention that even the generators of the T-ideal of $M_3(F)$ are not known, and their description seems to be out of reach with the methods in use nowadays. Thus finding the exact form of the polynomial identities satisfied by a given algebra is practically impossible for the vast majority of important algebras. Hence one is led to study either other types of polynomial identities or other characteristics of the T-ideals. In the former direction it is worth mentioning the study of polynomial identities in algebras graded by a group or a semigroup, in algebras with involution, in algebras with trace and so on. Clearly one has to incorporate the additional structure into the ``new'' polynomial identities. It turned out that such identities are somewhat ``easier'' to study than the ordinary ones. We cite as an example the graded identities for the matrix algebras $M_n(F)$ for the natural gradings by the cyclic groups $\mathbb{Z}_n$ and $\mathbb{Z}$: these were described by Vasilovsky in \cite{Vasilovsky1998, Vasilovsky1999}. Gradings on important algebras (for the PI-theory) and the corresponding graded identities have been studied by very many authors (we refer the reader to the monograph \cite{ElduqueKochetov2013} and the references therein). The trace identities for the full matrix algebras were described independently by Razmyslov \cite{Razmyslov1974} and by Procesi \cite{Procesi1976} (see also the paper by Razmyslov \cite{Razmyslov1985} for a generalization to another important class of algebras). It turns out that the ideal of all trace identities for the matrix algebra $M_n(F)$ is generated by a single polynomial: the well-known Hamilton--Cayley polynomial written in terms of the traces of the matrix and its powers (and then linearised). We must note here that, as it often happens, the simplicity of the statement of the theorem due to Razmyslov and Procesi is largely misleading, and that the proofs are quite sophisticated and extensive. The free associative algebra is graded by the degrees of its monomials, and also by their multidegrees. Clearly the T-ideals are homogeneous in such gradings; this implies that the relatively free algebras inherit the gradings on $F\langle X\rangle$. One might want to describe the Hilbert (or Poincar\'e) series of the relatively free algebras. This task, too, has been achieved in very few instances. Studying the relatively free algebras is related to studying varieties of algebras. We recall the corresponding notions and their importance. Let $A$ be an algebra with T-ideal $\mbox{Id}(A)$. The class of all algebras satisfying all polynomial identities from $\mbox{Id}(A)$ (and possibly some more PI's) is called the variety of algebras $\mbox{var}(A)$ generated by $A$. Then the relatively free algebra $F\langle X\rangle/\mbox{Id}(A)$ is the relatively free algebra in $\mbox{var}(A)$ (clearly there might be several algebras that satisfy the same polynomial identities as $A$, and passing to $\mbox{var}(A)$ one may ``forget'' about $A$ and even look for some ``better'' algebra generating the same variety).
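To fix ideas, we recall two classical examples which will reappear in what follows (see, for instance, \cite{GiambrunoZaicev2005book}): in characteristic zero the T-ideals of the infinite dimensional Grassmann algebra $E$ and of the algebra $UT_2=UT_2(F)$ of $2\times 2$ upper triangular matrices are generated by a single polynomial each, namely \[ \mbox{Id}(E)=\langle [[x_1,x_2],x_3]\rangle_{T}, \qquad \mbox{Id}(UT_2)=\langle [x_1,x_2][x_3,x_4]\rangle_{T}, \] where $[x_1,x_2]=x_1x_2-x_2x_1$ denotes the commutator.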
One of the most important numerical invariants of a variety (or a T-ideal) is its codimension sequence. Let $P_n$ denote the vector space in $F\langle X\rangle$ consisting of all multilinear polynomials in $x_1$, \dots, $x_n$. One may view $P_n$ as the vector subspace of $F\langle X\rangle$ with a basis consisting of all monomials $x_{\sigma(1)} x_{\sigma(2)}\cdots x_{\sigma(n)}$ where $\sigma\in S_n$, the symmetric group on $\{1,2,\ldots,n\}$. It is well known that whenever the base field is of characteristic 0, each T-ideal $I$ is generated by its multilinear elements, that is by all intersections $I\cap P_n$, $n\ge 1$. On the other hand $P_n$ is a left module over $S_n$ and it is clear that $P_n\cong FS_n$, the regular $S_n$-module. The T-ideals are invariant under permutations of the variables hence $I\cap P_n$ is a submodule of $P_n$. In this way one can employ the well developed theory of the representations of the symmetric (and general linear) groups and study the polynomial identities satisfied by an algebra. This approach might have seemed rather promising but in 1972 A. Regev \cite{Regev1972} established a fundamental result showing that if $I\ne 0$ then the intersections $I\cap P_n$ tend to become very large when $n\to\infty$. More precisely let $A$ be a PI-algebra (i.e., $A$ satisfies a non-trivial identity), and let $I=\mbox{Id}(A)$, denote by $P_n(A) = P_n/(P_n\cap I)$. Then $P_n(A)$ is also an $S_n$-module, its dimension $c_n(A)=\dim P_n(A)$ is called the $n$-th codimension of $A$ (or of the variety $\mbox{var}(A)$, or of the relatively free algebra in $\mbox{var}(A)$). Regev's theorem states that if $A$ satisfies an identity of degree $d$ then $c_n(A)\le (d-1)^{2n}$. Since $\dim P_n=n!$ this gives a more precise meaning of the above statement about the size of $P_n\cap I$. Recall that this exponential bound for the codimensions allowed Regev to prove that if $A$ and $B$ are both PI-algebras then their tensor product $A\otimes B$ is also a PI-algebra. But computing the exact values of the codimensions of a given algebra is also a very difficult task, and $c_n(A)$ is known for very few algebras $A$. This is exactly the same list as above, namely that of the algebras whose identities are known. Hence one is led to study the growth of the codimension sequences. In the eighties Amitsur conjectured that for each PI-algebra $A$, the sequence $(c_n(A))^{1/n}$ converges when $n\to\infty$ and moreover its limit is an integer. This conjecture was dealt with by Giambruno and Zaicev \cite{GiambrunoZaicev1998, GiambrunoZaicev1999} (see also \cite{GiambrunoZaicev2005book}): they answered in the affirmative Amitsur's conjecture. The above limit is called the PI-exponent of a PI-algebra, $\exp(A)$. Giambruno and Zaicev's important result initiated an extensive research concerning the asymptotic behaviour of the codimension sequences of algebras. It is well known (see for example \cite[Chapter 7]{GiambrunoZaicev2005book} and the references therein) that either the codimensions of $A$ are bounded by a polynomial function or grow exponentially. The results of Giambruno and Zaicev also hold for the case of graded algebras (\cite{AljadeffGiambruno2013, AljadeffGiambrunoLaMattina2011, GiambrunoLaMattina2010}), algebras with involution (\cite{GiambrunoMiliesValenti2017}), superalgebras with superinvolution, graded involution or pseudoinvolution (\cite{Ioppolo2018, Ioppolo2020, Santos2017}) and also for large classes of non-associative algebras. 
It is useful to highlight that there are examples of non-associative algebras such that their PI-exponent exists but is not an integer, and also examples where the PI-exponent does not exist at all. In this paper we study trace polynomial identities. We focus our attention on two commutative subalgebras of $UT_2$, the algebra of $2 \times 2$ upper-triangular matrices over $F$: $D_2$, the algebra of diagonal matrices, and $C_2 = \mbox{span}_F \{e_{11}+ e_{22}, e_{12} \}$. In \cite[Theorem 2.1]{Berele1996} A. Berele described the ideal of trace identities for the algebra $D_n$; he proved that it is generated by the commutativity law and by the Hamilton--Cayley polynomial. The asymptotic behaviour of the codimensions of trace identities was studied by A. Regev. In fact, he described in \cite{Regev1988} the asymptotics of the ordinary codimensions of the full matrix algebra, and in \cite{Regev1984} he proved that the ordinary and the trace codimensions of the full matrix algebra are asymptotically equal. Our main goal in this paper is the description of the varieties of trace algebras that are of \textsl{almost polynomial growth}. This means that the codimensions of the given variety are of exponential growth but each proper subvariety is of polynomial growth. The description we obtain is in terms of excluding the algebras $D_2$ and $C_2$ with non-zero traces and the algebra $UT_2$ with the zero trace. As a by-product of the proof we obtain that the codimension growth of the trace identities of a finite dimensional algebra is either polynomially or exponentially bounded. It is interesting to note that if one considers the algebra $D_n$ of the diagonal $n\times n$ matrices without a trace, it is commutative and non-nilpotent, and hence its codimensions are equal to 1. But when one adds a trace, it becomes of exponential growth. Hence in the case of diagonal matrices there cannot be a direct analogue of Regev's theorem mentioned above. We recall here that similar descriptions for the ordinary codimensions can be found in \cite[Theorem 7.2.7]{GiambrunoZaicev2005book} where it was proved that the only two varieties of (ordinary) algebras of almost polynomial growth are the ones generated by the Grassmann algebra $E$ and by $UT_2$. In the case of algebras with involution, superinvolution, pseudoinvolution or graded by a finite group, a complete list of varieties of algebras of almost polynomial growth was exhibited in \cite{GiambrunoIoppoloLaMattina2016, GiambrunoIoppoloLaMattina2019, GiambrunoMishchenko2001bis, GiambrunoMishchenkoZaicev2001, IoppoloLaMattina2017, IoppoloMartino2018, LaMattina2015, Valenti2011}. In order to obtain our results we use methods from the theory of trace polynomial identities together with a version of the Wedderburn--Malcev theorem for finite dimensional trace algebras. Here we recall that a trace function on the matrix algebra $M_n(F)$ is just a scalar multiple of the usual matrix trace. In sharp contrast with this, there are very many traces on $D_n$ and $C_2$: these algebras are commutative and hence a trace is just a linear function from them into $F$. \section{Preliminaries} Throughout this paper $F$ will denote a field of characteristic zero and $A$ a unitary associative $F$-algebra with trace $\mbox{tr}$. We say that $A$ is an algebra with trace if it is endowed with a linear map $\mbox{tr}\colon A \rightarrow F$ such that for all $a,b \in A$ one has \[ \mbox{tr}(ab) = \mbox{tr}(ba).
\] In what follows, we shall identify, when it causes no misunderstanding, the element $\alpha \in F$ with $\alpha \cdot 1$, where $1$ is the unit of the algebra. Accordingly, one can construct $F\langle X, \mbox{Tr}\rangle$, the free algebra with trace on the countable set $X = \{ x_1, x_2, \ldots \}$, where $\mbox{Tr}$ is a formal trace. Let $\mathcal{M}$ denote the set of all monomials in the elements of $X$. Then $F\langle X, \mbox{Tr}\rangle$ is the algebra generated by the free algebra $F\langle X\rangle$ together with the set of central (commuting) indeterminates $\mbox{Tr}(M)$, $M \in \mathcal{M}$, subject to the conditions that $\mbox{Tr}(MN) = \mbox{Tr}(NM)$, and $\mbox{Tr}(\mbox{Tr}(M)N)=\mbox{Tr}(M)\mbox{Tr}(N)$, for all $M$, $N \in \mathcal{M}$. In other words, \[ F\langle X, \mbox{Tr}\rangle \cong F\langle X\rangle \otimes F[\mbox{Tr}(M) : M \in \mathcal{M}]. \] The elements of the free algebra with trace are called trace polynomials. A trace polynomial $f(x_1, \ldots, x_n, \mbox{Tr}) \in F\langle X, \mbox{Tr}\rangle$ is a trace identity for $A$ if, after substituting the variables $x_i$ with arbitrary elements $a_i \in A$ and $\mbox{Tr}$ with the trace $\mbox{tr}$, we obtain $0$. We denote by $\Id^{tr}(A)$ the set of trace identities of $A$, which is a trace $T$-ideal ($T^{tr}$-ideal) of the free algebra with trace, i.e., an ideal invariant under all endomorphisms of $F\langle X, \mbox{Tr}\rangle$. As in the ordinary case, $\Id^{tr}(A)$ is completely determined by its multilinear polynomials. \begin{Definition} The vector space of multilinear elements of the free algebra with trace in the first $n$ variables is called the space of multilinear trace polynomials in $x_1$, \dots, $x_n$ and it is denoted by $MT_n$ ($MT$ comes from \textsl{mixed trace}). Its elements are linear combinations of expressions of the type \[ \mbox{Tr}(x_{i_1} \cdots x_{i_a}) \cdots \mbox{Tr}(x_{j_1} \cdots x_{j_b}) x_{l_1} \cdots x_{l_c} \] where $ \left \{ i_1, \ldots, i_a, \ldots, j_1, \ldots, j_b, l_1, \ldots, l_c \right \} = \left \{ 1, \ldots, n \right \} $. \end{Definition} The non-negative integer \[ c_n^{tr}(A) = \dim_F \dfrac{ MT_n}{MT_n \cap \Id^{tr}(A)} \] is called the $n$-th trace codimension of $A$. A prominent role among the elements of $MT_n$ is played by the so-called pure trace polynomials, i.e., polynomials such that all the variables $x_1$, \dots, $x_n$ appear inside a trace. \begin{Definition} The vector space of multilinear pure trace polynomials in $x_1$, \dots, $x_n$ is the space \[ PT_n = \mbox{span}_F \left \{ \mbox{Tr}(x_{i_1} \cdots x_{i_a}) \cdots \mbox{Tr}(x_{j_1} \ldots x_{j_b}) : \left \{ i_1, \ldots, j_b \right \} = \left \{ 1, \ldots, n \right \} \right \}. \] \end{Definition} For a permutation $\sigma \in S_n$ we write (in \cite{Berele1996}, Berele uses $\sigma$ instead of $\sigma^{-1}$) \[ \sigma^{-1} = \left ( i_1 \cdots i_{r_1} \right ) \left ( j_1 \cdots j_{r_2} \right ) \cdots \left ( l_1 \cdots l_{r_t} \right ) \] as a product of disjoint cycles, including one-cycles and let us assume that $r_1 \geq r_2 \geq \cdots \geq r_t$. In this case we say that $\sigma$ is of cyclic type $\lambda = (r_1, \ldots, r_t)$. We then define the pure trace monomial $ptr_{\sigma} \in PT_n$ as \[ ptr_{\sigma}(x_1, \ldots, x_n) = \mbox{Tr} \left ( x_{i_1} \cdots x_{i_{r_1}} \right ) \mbox{Tr} \left ( x_{j_1} \cdots x_{j_{r_2}} \right ) \cdots \mbox{Tr} \left ( x_{l_1} \cdots x_{l_{r_t}} \right ). 
\] If $\displaystyle a = \sum_{\sigma \in S_n} \alpha_{\sigma} \sigma \in FS_n$, we also define $\displaystyle ptr_{a}(x_1, \ldots, x_n) = \sum_{\sigma \in S_n} \alpha_{\sigma} ptr_{\sigma}(x_1, \ldots, x_n)$. It is useful to introduce also the so-called trace monomial $mtr_{\sigma} \in MT_{n-1}$. It is defined so that \[ ptr_{\sigma}(x_1, \ldots, x_n) = \mbox{Tr} \left ( mtr_{\sigma}(x_1, \ldots, x_{n-1}) x_n \right ). \] Let now $\varphi\colon FS_n \rightarrow PT_n$ be the map defined by $\varphi(a) = ptr_a(x_1, \ldots, x_n)$. Clearly $\varphi$ is a linear isomorphism and so $\dim_F PT_{n} = \dim_F FS_{n} = n!$. The following result is well known, and we include its proof for the sake of completeness. \begin{Remark} $\dim_F MT_n = (n+1)!$. \end{Remark} \begin{proof} In order to prove the result we shall construct an isomorphism of vector spaces between $PT_{n+1}$ and $MT_n$. This will complete the proof since $\dim_F PT_{n+1} = (n+1)!$. Let $\varphi\colon PT_{n+1} \rightarrow MT_n$ be the linear map defined by the equality \[ \varphi \left ( \mbox{Tr}(x_{i_1} \cdots x_{i_a}) \cdots \mbox{Tr}(x_{j_1} \cdots x_{j_b}) \mbox{Tr}(x_{l_1} \cdots x_{l_c}) \right ) = \mbox{Tr}(x_{i_1} \cdots x_{i_a}) \cdots \mbox{Tr}(x_{j_1} \cdots x_{j_b}) x_{l_1} \cdots x_{l_{c-1}}. \] Here we assume, as we may, that $l_c=n+1$. It is easily seen that $\varphi$ is a linear isomorphism and we are done. \end{proof} \section{Matrix algebras with trace} In this section we study matrix algebras with trace. Let $M_n(F)$ be the algebra of $ n \times n $ matrices over $F$. One can endow such an algebra with the usual trace on matrices, denoted $t_1$, and defined as \[ t_1(a) = t_1 \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} = a_{11} + \cdots + a_{nn} \in F. \] We recall that every trace on $M_n(F)$ is proportional to the usual trace $t_1$. The proof of this statement is a well known result of elementary linear algebra, we give it here for the sake of completeness. \begin{Lemma} \label{traces on matrices} Let $f\colon M_n(F) \rightarrow F$ be a trace. Then there exists $\alpha \in F$ such that $f = \alpha t_1$. \end{Lemma} \begin{proof} Let $e_{ij}$'s denote the matrix units. First we shall prove that $f(e_{ij}) = 0$ whenever $i \neq j$. In fact, since $f(ab) = f(ba)$, for all $a$, $b \in M_n(F)$, we get that \[ f(e_{ij}) = f(e_{ij} e_{jj}) = f(e_{jj} e_{ij}) = f(0) = 0. \] Moreover $f(e_{jj}) = f(e_{11})$, for all $j = 2, \ldots, n$. Indeed $ f(e_{11}) = f(e_{1j} e_{jj} e_{j1}) = f(e_{j1} e_{1j} e_{jj}) = f(e_{jj})$. For any matrix $a \in M_n(F)$, $a = (a_{ij}) = \sum_{i,j} a_{ij} e_{ij}$, we get that \[ f(a) = f \Bigl( \sum_{i,j} a_{ij} e_{ij} \Bigr) = \sum_{j= 1}^n a_{jj} f(e_{jj}) = f(e_{11}) t_1(a), \] and the proof is complete. \end{proof} In what follows we shall use the notation $t_\alpha$ to indicate the trace on $M_n(F)$ such that $t_\alpha = \alpha t_1$. Moreover, $M_n^{t_\alpha}$ will denote the algebra of $n \times n$ matrices endowed with the trace $t_\alpha$. In sharp contrast with the above result, there are very many different traces on the algebra $D_n = D_n(F)$ of $n \times n$ diagonal matrices over $F$. \begin{Remark}\label{traces_on_Dn} If $\mbox{tr}$ is a trace on $D_n$ then there exist scalars $\alpha_1$, \dots, $\alpha_n\in F$ such that for each diagonal matrix $a=\mbox{diag}(a_{11},\ldots, a_{nn})\in D_n$ one has $\mbox{tr}(a) = \alpha_1 a_{11}+\cdots+ \alpha_n a_{nn}$. 
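For instance, the choice $\alpha_1=\cdots=\alpha_n=1$ gives the restriction to $D_n$ of the usual matrix trace $t_1$, while $\alpha_1=\cdots=\alpha_n=0$ gives the zero trace.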
\end{Remark} The algebra $D_n$ is commutative, and $D_n\cong F^n$ with component-wise operations. Hence a linear function $\mbox{tr} \colon D_n\to F$ must be of the form stated in the remark. Clearly for each choice of the scalars $\alpha_i$ one obtains a trace on $D_n$, and we have the statement of the remark. We shall denote by the symbol $t_{\alpha_1, \ldots, \alpha_n}$ the trace $\mbox{tr}$ on $D_n$ such that, for all $a = \mbox{diag}(a_{11}, \ldots, a_{nn})$, $ \mbox{tr}(a) = \alpha_1 a_{11} + \cdots + \alpha_n a_{nn}$. Moreover, $D_n^{t_{\alpha_1, \ldots, \alpha_n}}$ will indicate the algebra $D_n$ endowed with the trace $t_{\alpha_1, \ldots, \alpha_n}$. Let $(A, t)$ and $(B, t')$ be two algebras with trace. A homomorphism (isomorphism) of algebras $\varphi: A \rightarrow B$ is said to be a homomorphism (isomorphism) of algebras with trace if $\varphi(t(a)) = t'(\varphi(a))$, for any $a \in A$. We have the following remark. \begin{Remark} \label{isomorphic Dn} Let $S_n$ be the symmetric group of order $n$ on the set $\{ 1,2, \ldots, n \}$. For all $\sigma \in S_n$, the algebras $D_n^{t_{\alpha_1, \ldots, \alpha_n}}$ and $D_n^{t_{\alpha_{\sigma(1)}, \ldots, \alpha_{\sigma(n)}}}$ are isomorphic, as algebras with trace. \end{Remark} \begin{proof} We need only observe that the linear map $\varphi \colon D_n^{t_{\alpha_1, \ldots, \alpha_n}} \rightarrow D_n^{t_{\alpha_{\sigma(1)}, \ldots, \alpha_{\sigma(n)}}}$, defined by $ \varphi(e_{ii}) = e_{\sigma(i) \sigma(i)}$, for all $i = 1, \ldots, n$, is an isomorphism of algebras with trace. \end{proof} Recall that a trace function $\mbox{tr}$ on an algebra $A$ is said to be degenerate if there exists a non-zero element $a \in A$ such that \[ \mbox{tr}(ab) = 0 \] for every $b\in A$. This means that the bilinear form $f(x,y) = \mbox{tr}(xy)$ is degenerate on $A$. In the following lemma we describe the non-degenerate traces on $D_n$. \begin{Lemma}\label{non_degenerate_traces_on_Dn} Let $D_n^{t_{\alpha_1, \ldots, \alpha_n}}$ be the algebra of $n \times n$ diagonal matrices endowed with the trace $t_{\alpha_1, \ldots, \alpha_n}$. Such a trace is non-degenerate if and only if all the scalars $\alpha_i$ are non-zero. \end{Lemma} \begin{proof} Let $t_{\alpha_1, \ldots, \alpha_n}$ be non-degenerate and suppose that there exists $i$ such that $\alpha_i = 0$. Consider the matrix unit $e_{ii}$. It is easy to see that we reach a contradiction since, for any element $\mbox{diag}(a_{11},\ldots, a_{nn}) \in D_n$, we get $$ t_{\alpha_1, \ldots, \alpha_n}(e_{ii} \mbox{diag}(a_{11},\ldots, a_{nn})) = t_{\alpha_1, \ldots, \alpha_n}(a_{ii} e_{ii} ) = \alpha_i a_{ii} = 0. $$ In order to prove the opposite direction, let us assume that all the scalars $\alpha_i$ are non-zero. Suppose, by contradiction, that the trace $t_{\alpha_1, \ldots, \alpha_n}$ is degenerate. Hence there exists a non-zero element $a = \mbox{diag}(a_{11},\ldots, a_{nn}) \in D_n$ such that $t_{\alpha_1, \ldots, \alpha_n}(ab) = 0$, for any $b \in D_n$. In particular, let $b = e_{ii}$, for $i=1, \ldots, n$. We have that $$ t_{\alpha_1, \ldots, \alpha_n}( a e_{ii}) = t_{\alpha_1, \ldots, \alpha_n}(\mbox{diag}(a_{11},\ldots, a_{nn}) e_{ii}) = t_{\alpha_1, \ldots, \alpha_n}(a_{ii} e_{ii} ) = \alpha_i a_{ii} = 0. $$ Since $\alpha_i \neq 0$, for all $i = 1, \ldots, n$, we get that $a_{ii} = 0$ and so $a = 0$, a contradiction. \end{proof} \section{The algebras $D_2^{t_{\alpha, \beta}}$} In this section we deal with the algebra $D_2$ of $2 \times 2$ diagonal matrices over the field $F$.
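By Remark~\ref{traces_on_Dn}, every trace on $D_2$ is of the form \[ t_{\alpha, \beta}\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = \alpha a + \beta b, \qquad \alpha, \beta \in F, \] and, by Remark~\ref{isomorphic Dn}, the trace algebras $D_2^{t_{\alpha, \beta}}$ and $D_2^{t_{\beta, \alpha}}$ are isomorphic.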
In accordance with the results of Section $3$, we can define on $D_2$, up to isomorphism, only the following trace functions: \begin{enumerate} \item[1.] $t_{\alpha, 0}$, for any $\alpha \in F $, \item[2.] $t_{\alpha, \alpha}$, for any non-zero $\alpha \in F $, \item[3.] $t_{\alpha, \beta }$, for any distinct non-zero $\alpha, \beta \in F $. \end{enumerate} In the first part of this section our goal is to find the generators of the trace $T$-ideals of the identities of the algebra $D_2$ endowed with all possible trace functions. Let us start with the case of $D_2^{t_{\alpha,0}}$. Recall that, if $\alpha = 0$, then $D_2^{t_{0,0}}$ is the algebra $D_2$ with zero trace. So $\mbox{Id}^{tr}(D_2^{t_{0,0}})$ is generated by the commutator $[x_1, x_2]$ and $\mbox{Tr}(x_1)$ and $c_n^{tr}(D_2^{t_{0,0}}) = c_n(D_2^{t_{0,0}}) =1$. For $\alpha \neq 0$, we have the following result. \begin{Theorem} \label{identities and codimensions of D2 t alpha 0} Let $\alpha \in F \setminus \{ 0 \}$. The trace $T$-ideal $\Id^{tr}(D_2^{t_{\alpha, 0}})$ is generated, as a trace $T$-ideal, by the polynomials: \begin{itemize} \item[•] $ f_1 = [x_1,x_2]$, \item[•] $f_2 = \mbox{Tr}(x_1) \mbox{Tr}(x_2) - \alpha \mbox{Tr}(x_1 x_2)$. \end{itemize} Moreover \[ c_n^{tr}(D_2^{t_{\alpha, 0}}) = 2^{n}. \] \end{Theorem} \begin{proof} It is clear that $T = \langle f_1, f_2 \rangle_{T^{tr}} \subseteq \Id^{tr}(D_2^{t_{\alpha, 0}})$. We need to prove the opposite inclusion. Let $f \in MT_n$ be a multilinear trace polynomial of degree $n$. It is clear that $f$ can be written (mod $T$) as a linear combination of the polynomials \begin{equation} \label{non identities of D2 t alpha 0} \mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}}, \end{equation} where $ \left \{ i_1, \ldots, i_k, j_1, \ldots, j_{n-k} \right \} = \left \{ 1, \ldots, n \right \}$, $i_1 < \cdots < i_k$ and $j_1 < \cdots < j_{n-k}$. Our goal is to show that the polynomials in \eqref{non identities of D2 t alpha 0} are linearly independent modulo $ \Id^{tr}(D_2^{t_{\alpha, 0}})$. To this end, let $g = g(x_1, \ldots, x_n, \mbox{Tr})$ be a linear combination of the above polynomials which is a trace identity: \[ g(x_1, \ldots, x_n, \mbox{Tr}) = \sum_{I,J} a_{I,J} \mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}}, \] where $I = \{ x_{i_1}, \ldots, x_{i_k} \}$, $J = \{ x_{j_1}, \ldots, x_{j_{n-k}} \}$, and $i_1 < \cdots < i_k$, $j_1 < \cdots < j_{n-k}$. We claim that $g$ is actually the zero polynomial. Suppose that, for some fixed $I = \{ x_{i_1}, \ldots, x_{i_k} \}$ and $J = \{ x_{j_1}, \ldots, x_{j_{n-k}} \}$, one has that $a_{I,J} \neq 0$. We consider the following evaluation: \[ x_{i_1} = \cdots = x_{i_k} = e_{11}, \ \ \ \ \ \ \ \ x_{j_1} = \cdots = x_{j_{n-k}} = e_{22}, \ \ \ \ \ \ \ \ \mbox{Tr} = t_{\alpha, 0}. \] It follows that $g(e_{11}, \ldots, e_{11}, e_{22}, \ldots, e_{22}, t_{\alpha,0}) = a_{I,J} \alpha e_{22} = 0$. Hence $a_{I,J} = 0$, a contradiction. The claim is proved and so \[ \Id^{tr}(D_2^{t_{\alpha, 0}}) = T. \] Finally, in order to compute the $n$-th trace codimension of our algebra, we have only to count how many elements in \eqref{non identities of D2 t alpha 0} there are. For fixed $k$, there are exactly $\binom{n}{k}$ elements of the type $\mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}}$, $i_1 < \cdots < i_k$, $j_1 < \cdots < j_{n-k}$. Hence the polynomials in \eqref{non identities of D2 t alpha 0} are exactly $ \sum_{k=0}^n \binom{n}{k} = 2^n$ and the proof is complete.
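As a quick illustration of the count, for $n=2$ this basis consists of the four elements \[ x_1x_2, \qquad \mbox{Tr}(x_1)\, x_2, \qquad \mbox{Tr}(x_2)\, x_1, \qquad \mbox{Tr}(x_1 x_2), \] in accordance with $c_2^{tr}(D_2^{t_{\alpha, 0}}) = 2^2 = 4$.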
\end{proof} Now, we consider $D_2^{t_{\alpha,\alpha}}$. Recall that, for any $\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \in D_2$, we have that $ t_{\alpha, \alpha} \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = \alpha (a + b).$ \begin{Theorem} \label{identities of D2 talpha alpha} Let $\alpha \in F \setminus \{ 0 \}$. The trace $T$-ideal $\Id^{tr}(D_2^{t_{\alpha, \alpha}})$ is generated, as a trace $T$-ideal, by the polynomials: \begin{itemize} \item[•] $ f_1 = [x_1,x_2]$, \item[•] $ f_3 = \alpha^2 x_1x_2 + \alpha^2 x_2 x_1 + \mbox{Tr}(x_1)\mbox{Tr}(x_2) - \alpha \mbox{Tr}(x_1)x_2 - \alpha \mbox{Tr}(x_2)x_1 - \alpha \mbox{Tr}(x_1 x_2) $. \end{itemize} Moreover \[ c_n^{tr}(D_2^{t_{\alpha, \alpha}})= 2^n. \] \end{Theorem} \begin{proof} In case $\alpha=1$, Berele (\cite[Theorem $2.1$]{Berele1996}) proved that $\Id^{tr}(D_2^{t_{\alpha, \alpha}}) = \langle f_1, f_3 \rangle_{T^{tr}}$. The proof when $\alpha\ne 1$ follows word for word the one given by Berele in \cite{Berele1996}. In order to find the trace codimensions, we remark that the trace polynomials \begin{equation} \label{non identities of D2 with trace} \mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}}, \ \ \ \ \ \ \ \ \ \ \left \{ i_1, \ldots, i_k, j_1, \ldots, j_{n-k} \right \} = \left \{ 1, \ldots, n \right \}, \ \ i_1 < \cdots < i_k, \ \ j_1 < \cdots < j_{n-k}, \end{equation} form a basis of $MT_n$ modulo $MT_n \cap \Id^{tr}(D_2^{t_{\alpha, \alpha}})$. Hence, their number, which is the $n$-th trace codimension of $D_2^{t_{\alpha, \alpha}}$, is $\sum_{k=0}^n \binom{n}{k} = 2^n$ and the proof is complete. \end{proof} \begin{Remark} \label{differentvarieties} Here we observe a curious fact. It follows from Theorems \ref{identities and codimensions of D2 t alpha 0} and \ref{identities of D2 talpha alpha} that the relatively free algebras in the varieties of algebras with trace generated by $D_2^{t_{\alpha,0}}$ and by $D_2^{t_{\alpha,\alpha}}$ are quite similar. In fact the multilinear components of degree $n$ in these two relatively free algebras are isomorphic. But this is an isomorphism of vector spaces which cannot be extended to an isomorphism of the corresponding algebras. It can be easily seen that neither of these two varieties is a subvariety of the other as soon as $\alpha\ne 0$. \begin{enumerate} \item The trace identity $f_2=\mbox{Tr}(x_1)\mbox{Tr}(x_2)-\alpha \mbox{Tr}(x_1x_2)$ does not hold for the algebra $D_2^{t_{\alpha,\alpha}}$. One evaluates it on the ``generic'' diagonal matrices $d_1=\mbox{diag}(a_1,b_1)$ and $d_2=\mbox{diag}(a_2,b_2)$ and gets $\alpha^2(a_1b_2 + a_2b_1)$, which does not vanish on $D_2^{t_{\alpha,\alpha}}$. \item Likewise $D_2^{t_{\alpha,0}}$ does not satisfy the trace identity $f_3$. Once again substituting $x_1$ and $x_2$ in $f_3$ by the generic matrices $d_1$ and $d_2$ we get a diagonal matrix with 0 at position $(1,1)$ and a non-zero entry $\alpha^2(2b_1b_2 -a_1b_2-a_2b_1)$ at position $(2,2)$. \end{enumerate} This question is addressed in a more general form in Lemmas~\ref{D2 delta 0 no T equ}, \ref{D2 gamma gamma no T equ}, and \ref{D2 alfa beta no T equ}. \end{Remark} Finally, we consider the trace algebra $D_2^{t_{\alpha,\beta}}$. Recall that, for any $\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \in D_2$, we have that $ t_{\alpha, \beta} \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = \alpha a + \beta b. $ \begin{Theorem} \label{identities and codimensions of D2 talpha beta} Let $\alpha$, $\beta \in F \setminus \{ 0 \}$, $\alpha \neq \beta$.
As a trace $T$-ideal, $\Id^{tr}(D_2^{t_{\alpha, \beta}})$ is generated by the polynomials: \begin{itemize} \item[•] $ f_1 = [x_1,x_2]$, \item[•] $f_4 = -x_1 \mbox{Tr}(x_2) \mbox{Tr}(x_3) + (\alpha + \beta) x_1 \mbox{Tr}(x_2 x_3) + x_3 \mbox{Tr}(x_1) \mbox{Tr}(x_2) - (\alpha + \beta) x_3 \mbox{Tr}(x_1 x_2) - \mbox{Tr}(x_1) \mbox{Tr}(x_2 x_3) + \mbox{Tr}(x_3)\mbox{Tr}(x_1 x_2)$, \item[•] $f_5 = \mbox{Tr}(x_1) \mbox{Tr}(x_2) \mbox{Tr}(x_3) - (\alpha \beta^2 + \alpha^2 \beta) x_1 x_2 x_3 + \alpha \beta x_1 x_2 \mbox{Tr}(x_3) + \alpha \beta x_1 x_3 \mbox{Tr}(x_2) + \alpha \beta x_2 x_3 \mbox{Tr}(x_1) - (\alpha + \beta) x_1 \mbox{Tr}(x_2) \mbox{Tr}(x_3) + (\alpha^2 + \alpha \beta + \beta^2) x_1 \mbox{Tr}(x_2 x_3) - \alpha \beta x_2 \mbox{Tr}(x_1 x_3) - \alpha \beta x_3 \mbox{Tr}(x_1 x_2) + \alpha \beta \mbox{Tr}(x_1 x_2 x_3) - (\alpha + \beta) \mbox{Tr}(x_1) \mbox{Tr}(x_2 x_3)$. \end{itemize} Moreover \[ c_n^{tr}(D_2^{t_{\alpha, \beta}}) = 2^{n+1}-n-1. \] \end{Theorem} \begin{proof} Write $I = \langle f_1, f_4, f_5 \rangle_{T^{tr}}$. An immediate (but tedious) verification shows that $ I \subseteq \Id^{tr}(D_2^{t_{\alpha, \beta}})$. In order to obtain the opposite inclusion, first we shall prove that the polynomials \begin{equation} \label{non identities of D2 alfa beta 1} x_{i_1} \cdots x_{i_{k}} \mbox{Tr}(x_{h_1} \cdots x_{h_{n-k}}), \ \ \ \ \ x_{i_1} \cdots x_{i_{k}} \mbox{Tr}(x_{j_1} \cdots x_{j_{s-1}}) \mbox{Tr}(x_{j_s}), \end{equation} where $ i_1 < \cdots < i_k$, $ h_1 < \cdots < h_{n-k}$ and $ j_1 < \cdots < j_{s-1} < j_s$, span $ MT_n $, modulo $ MT_n \cap I $, for every $n \geq 1$. In order to achieve this goal we shall use an induction. Let $f \in MT_n$ be a multilinear trace polynomial of degree $n$. Hence it is a linear combination of polynomials of the type \[ x_{i_1} \cdots x_{i_a} \mbox{Tr}(x_{j_1} \cdots x_{j_b}) \cdots \mbox{Tr}(x_{l_1} \cdots x_{l_c}) \] where $ \left \{ i_1, \ldots, i_a, j_1, \ldots, j_b, \ldots, l_1, \ldots, l_c \right \} = \left \{ 1, \ldots, n \right \} $. Because of the identity $f_5 \equiv 0$, we can kill all products of three traces (and more than three traces). So we may consider only monomials with either no trace, or with one, or with two traces. Clearly the identity $f_1$ implies that we can assume all of these monomials ordered, outside and also inside each trace. In the case of monomials with two traces, now we want to show how to reduce one of these traces to be of a monomial of length $1$ (that is a variable). To this end, in $f_4$ take $x_1$ as a letter, and $x_2$, $x_3$ as monomials. The last term is ``undesirable'', and it is written as a combination of either one trace, or two traces where one of these is a trace of a letter (that is $x_1$). The only problem is the first term of $f_4$. But it has a letter outside the traces. Since the total degree of the monomials inside traces in the first summand of $f_4$ will be less than the initial one, we can apply the induction. Suppose now we have a linear combination of monomials where we have either no traces (at most one of these), or just one trace (and the variables outside the trace are ordered, as well as those inside the trace), or two traces. In the latter case we may assume that the variables outside the trace are ordered, and that these in the first trace are ordered (increasing) as well. And moreover the second trace is of a variable, not monomial. In this case we are concerned with the monomials having two traces. 
We use once again $f_4$, in fact the last two summands in it, to exchange variables between the two traces. We get then monomials with one trace or with two traces but of lower degree inside the traces, and as above continue by induction. In conclusion, we can suppose that in the case of two traces, the variables are ordered in the following way: \[ x_{i_1} \cdots x_{i_a} \mbox{Tr}(x_{j_1}\cdots x_{j_b}) \mbox{Tr}(x_j) \] where $i_1<\cdots<i_a$ and $j_1<\cdots<j_b<j$. We next show that the polynomials in \eqref{non identities of D2 alfa beta 1} are linearly independent modulo $ \Id^{tr}(D_2^{t_{\alpha, \beta}})$. Let us take generic diagonal matrices $X_i=(a_i, b_i)$, that is we consider $a_i$ and $b_i$ as commuting independent variables. If the monomials we consider are not linearly independent there will be a non-trivial linear combination among them which vanishes. Form such a linear combination and evaluate it on the above defined generic diagonal matrices $X_i$. We order the monomials in $a_i$ and $b_i$ obtained in the linear combination, at positions $(1,1)$ and $(2,2)$ of the resulting matrix, as follows. In the first coordinate (that is position $(1,1)$ of the matrices) we consider $a_1<\cdots<a_n<b_1<\cdots<b_n$, in the second coordinate $(2,2)$ of the matrices in $D_2$ we take $b_1<\cdots<b_n<a_1<\cdots<a_n$. Then we extend this order lexicographically to all monomials in $F[a_i, b_i]$. In fact these are two orders, one for position $(1,1)$ and another for position $(2,2)$ of the diagonal matrices. In order to simplify the notation, let us assume that the largest monomial in the first coordinate is $a_1 \cdots a_k b_{k+1} \cdots b_n$. If $k=n$ then there is no trace at all. The case $k=n-1$ is also clear: we have only one trace and it is $\mbox{Tr}(x_n)$. So take $k\le n-2$. Such a monomial can come from either $M = x_1\cdots x_k \mbox{Tr}(x_{k+1}\cdots x_n)$ or from $N = x_1\cdots x_k \mbox{Tr}(x_{k+1}\cdots x_{n-1})\mbox{Tr}(x_n)$. Clearly in the second coordinate the largest monomial will be $b_1\cdots b_k a_{k+1}\cdots a_n$, and it comes from the above two elements only. Now suppose that a linear combination of monomials vanishes on the generic matrices $X_i$. Then the largest monomials will cancel and so there will exist scalars $p$, $q\in F$ such that the largest monomials in $pM+qN$ will cancel. This means, after computing the traces, that $p\beta + q\beta^2 = 0$ and $p\alpha+q\alpha^2=0$. Consider $p$ and $q$ as variables in a $2\times 2$ system. The determinant of the system is $\alpha\beta(\alpha-\beta)$. Since $\alpha$ and $\beta$ are both non-zero and since $\alpha\ne \beta$, we get that $p=q=0$ and we cancel out the largest monomials. In conclusion the polynomials in \eqref{non identities of D2 alfa beta 1} are linearly independent modulo $ \Id^{tr}(D_2^{t_{\alpha, \beta}})$. Since $ MT_n \cap \Id^{tr}(D_2^{t_{\alpha, \beta}}) \supseteq MT_n \cap I $, they form a basis of $ MT_n$ modulo $MT_n \cap \Id^{tr}(D_2^{t_{\alpha, \beta}}) $ and $ \Id^{tr}(D_2^{t_{\alpha, \beta}}) = I $. Finally, in order to compute the codimension sequence of our algebra, we have only to observe the following facts. We have only one monomial with no trace at all, and exactly $n$ monomials where $n-1$ letters are outside the traces (and the remaining one is inside a trace). 
Then we have $2{n\choose s}$ elements of the type \[ x_{i_1}\cdots x_{i_k} \mbox{Tr}(x_{j_1}\cdots x_{j_s}) \mbox{\ \ \ \ \ or \ \ \ \ \ } x_{i_1}\cdots x_{i_k} \mbox{Tr}(x_{j_1}\cdots x_{j_{s-1}}) \mbox{Tr}(x_{j_s}), \qquad k+s=n. \] In conclusion we get that $$ c_n^{tr}(D_2^{t_{\alpha, \beta}}) = \binom{n}{0} + \binom{n}{1} + 2\sum_{s = 2}^{n} \binom{n}{s} = 2^{n+1}-n-1.$$ \end{proof} Given a variety $\mathcal{V}$ of algebras with trace, the growth of $\mathcal{V}$ is the growth of the sequence of trace codimensions of any algebra $A$ generating $\mathcal{V}$, i.e., $\mathcal{V} = \mbox{var}^{tr}(A) $. We say that $\mathcal{V}$ has almost polynomial growth if it grows exponentially but any proper subvariety has polynomial growth. In the following theorem we prove that the algebras $D_2^{t_{\alpha, \alpha}}$ generate varieties of almost polynomial growth. \begin{Theorem} \label{D2 alfa APG} The algebras $D_2^{t_{\alpha, \alpha}}$, $\alpha \in F \setminus \{ 0 \}$, generate varieties of almost polynomial growth. \end{Theorem} \begin{proof} By Theorem \ref{identities of D2 talpha alpha}, the variety generated by $D_2^{t_{\alpha, \alpha}}$ has exponential growth. We are left to prove that any proper subvariety of $\mbox{var}^{tr}(D_2^{t_{\alpha, \alpha}})$ has polynomial growth. Let $\mbox{var}^{tr}(A) \subsetneq \mbox{var}^{tr}(D_2^{t_{\alpha, \alpha}})$. Then there exists a multilinear trace polynomial $f$ of degree $n$ which is a trace identity for $A$ but not for $D_2^{t_{\alpha, \alpha}}$. We can write $f$ as \begin{equation} f = \sum_{k=0}^n \sum_{I} \alpha_{k,I,J} \mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}} + h \end{equation} where $h \in \Id^{tr}(D_2^{t_{\alpha, \alpha}})$, $I = \{ i_1, \ldots, i_k \}$, $J = \{ j_1, \ldots, j_{n-k} \}$, $i_1 < \cdots < i_k$ and $j_1 < \cdots < j_{n-k}$. Let $M$ be the largest $k$ such that $\alpha_{k,I,J} \neq 0$. There may exist several monomials in $f$ with that same $k$, we choose the one with the least monomial with respect to its trace part $x_{i_1} \cdots x_{i_k}$ (in the usual lexicographical order on the monomials in $x_1$, \dots, $x_n$ induced by $x_1<\cdots<x_n$). Now consider a monomial $g$ of degree $n' > n+M$ of the type \[ g = \mbox{Tr}(x_{l_1} \cdots x_{l_a}) x_{k_1} \cdots x_{k_{n'-a}} \] with $a > 2M$ and $n'-a > n-M$. We split the monomial $x_{l_1} \cdots x_{l_a}$ inside the trace, in $M$ monomials $y_1 = x_{l_1} \cdots x_{l_{a_1}}$, \dots, $y_M = x_{l_{a_{M-1}+1}} \cdots x_{l_{a}}$, each one with $\lfloor \frac{a}{M} \rfloor$ or $\lceil \frac{a}{M} \rceil$ variables. We also let $y_{M+1} = x_{k_1}$, \dots, $y_{n'-a+M} = x_{k_{n'-a}}$. Now, because of $f(y_1, \ldots, y_n) \equiv 0$, we can write $g \pmod{\Id^{tr}(A)}$ as a linear combination of monomials having either less than $M$ variables $y_i$ inside the trace, or $M$ variables $y_i$ inside the trace, but at least one of these variables is not among $y_1$, \dots, $y_M$. Passing back to $x_1$, \dots, $x_{n'}$ we see that $g$ is a linear combination of monomials with less than $a$ variables inside the trace. If $a > 2M$ is still satisfied for some of these monomials (for the new value of $a$) we repeat the procedure and so on. Thus after several such steps we shall write $g$ as a linear combination of monomials with at most $2M$ variables inside the traces. It follows that, for $n$ large enough, \[ c_n^{tr}(A) \leq \sum_{k = 0}^{2M} \binom{n}{k} \approx bn^{2M} \] where $b$ is a constant. Hence $\mbox{var}^{tr}(A)$ has polynomial growth and the proof is complete. 
\end{proof} With a similar proof we obtain also the following result. \begin{Theorem} \label{D2 alfa 0 APG} The algebras $D_2^{t_{\alpha, 0}}$, $\alpha \in F \setminus \{ 0 \}$, generate varieties of almost polynomial growth. \end{Theorem} We conclude this section by proving some results showing that the algebras $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma, \gamma}}$ and $D_2^{t_{\delta, 0}}$ are not $T^{tr}$-equivalent. Recall that given two algebras with trace $A$ and $B$, $A$ is $T^{tr}$-equivalent to $B$ and we write $A \sim_{T^{tr}} B$, in case $\Id^{tr}(A) = \Id^{tr}(B)$. \begin{Lemma} \label{D2 delta 0 no T equ} Let $\alpha$, $\beta, \gamma, \delta, \epsilon \in F \setminus \{0 \}$, $\alpha \neq \beta$, $\delta \neq \epsilon$. Then \begin{itemize} \item[1.] $\Id^{tr}(D_2^{t_{\delta, 0}}) \not \subset \Id^{tr}(D_2^{t_{\alpha, \beta}})$. \item[2.] $ \Id^{tr}(D_2^{t_{\delta, 0}}) \not \subset \Id^{tr}(D_2^{t_{\gamma,\gamma}})$. \item[3.] $ \Id^{tr}(D_2^{t_{\delta, 0}}) \not \subset \Id^{tr}(D_2^{t_{\epsilon, 0}})$. \end{itemize} \end{Lemma} \begin{proof} Let us consider the polynomial \[ f_2 = \mbox{Tr}(x_1)\mbox{Tr}(x_2) - \delta \mbox{Tr}(x_1 x_2). \] We have seen in Theorem \ref{identities and codimensions of D2 t alpha 0} that $f_2$ is a trace identity of $D_2^{t_{\delta, 0}}$. In order to complete the proof we need only to show that $f_2$ does not vanish on the algebras $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma, \gamma}}$ and $D_2^{t_{\epsilon, 0}}$. By considering the evaluation $x_1 = e_{11}$ and $x_2 = e_{22}$, we obtain that $f_2(e_{11}, e_{22}, t_{\alpha, \beta}) = \alpha \beta (e_{11}+ e_{22}) \neq 0$ and $f_2(e_{11}, e_{22}, t_{\gamma, \gamma}) =\gamma^2 (e_{11} + e_{22}) \neq 0$. Hence $f_2$ is not a trace identity of $D_2^{t_{\alpha, \beta}}$ and $D_2^{t_{\gamma, \gamma}}$ and we are done in the first two cases. Finally, evaluating $x_1 = x_2 = e_{11}$, we get $f_2(e_{11}, e_{11}, t_{\epsilon, 0}) = \epsilon (\epsilon - \delta) (e_{11} + e_{22}) \neq 0$ and the proof is complete. \end{proof} \begin{Lemma} \label{D2 gamma gamma no T equ} Let $\alpha$, $\beta, \gamma, \delta, \kappa \in F \setminus \{0 \}$, $\alpha \neq \beta$, $\gamma \neq \kappa$. Then \begin{itemize} \item[1.] $\Id^{tr}(D_2^{t_{\gamma, \gamma}}) \not \subset \Id^{tr}(D_2^{t_{\alpha, \beta}})$. \item[2.] $ \Id^{tr}(D_2^{t_{\gamma, \gamma}}) \not \subset \Id^{tr}(D_2^{t_{\kappa,\kappa}})$. \item[3.] $ \Id^{tr}(D_2^{t_{\gamma, \gamma}}) \not \subset \Id^{tr}(D_2^{t_{\delta, 0}})$. \end{itemize} \end{Lemma} \begin{proof} Let us consider the polynomial $$ f_3 = \gamma^2 x_1x_2 + \gamma^2 x_2 x_1 + \mbox{Tr}(x_1)\mbox{Tr}(x_2) - \gamma \mbox{Tr}(x_1)x_2 - \gamma \mbox{Tr}(x_2)x_1 - \gamma \mbox{Tr}(x_1 x_2). $$ We have seen in Theorem \ref{identities of D2 talpha alpha} that $f_3$ is a trace identity of $D_2^{t_{\gamma, \gamma}}$. By considering the evaluation $x_1 = e_{11}$ and $x_2 = e_{22}$, we obtain that $f_3(e_{11}, e_{22}, t_{\alpha, \beta}) = \beta (\alpha - \gamma) e_{11} + \alpha (\beta - \gamma) e_{22} \neq 0$, $f_3(e_{11}, e_{22}, t_{\kappa, \kappa}) =\kappa(\kappa- \gamma) (e_{11} + e_{22}) \neq 0$ and $f_3(e_{11}, e_{22}, t_{\delta, 0}) = -\gamma \delta e_{22} \neq 0$. Hence $f_3$ is not a trace identity of $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\kappa, \kappa}}$ and $D_2^{t_{\delta, 0}}$ and the proof is complete. 
\end{proof} \begin{Lemma} \label{D2 alfa beta no T equ} Let $\alpha$, $\beta, \gamma, \delta, \eta, \mu \in F \setminus \{0 \}$, $\alpha \neq \beta$, $\eta \neq \mu$, $\{ \alpha, \beta \} \neq \{ \eta, \mu \}$. Then \begin{itemize} \item[1.] $\Id^{tr}(D_2^{t_{\alpha, \beta}}) \not \subset \Id^{tr}(D_2^{t_{\eta, \mu}})$. \item[2.] $\Id^{tr}(D_2^{t_{\alpha, \beta}}) \not \subset \Id^{tr}(D_2^{t_{\gamma,\gamma}})$. \item[3.] $\Id^{tr}(D_2^{t_{\alpha, \beta}}) \not \subset \Id^{tr}(D_2^{t_{\delta, 0}})$. \end{itemize} \end{Lemma} \begin{proof} By Theorem \ref{identities and codimensions of D2 talpha beta} we know that the polynomials $f_4$ and $f_5$ are trace identities of $D_2^{t_{\alpha, \beta}}$. In order to complete the proof we shall show that such polynomials do not vanish on the algebras $D_2^{t_{\eta, \mu}}$, $D_2^{t_{\gamma, \gamma}}$ and $D_2^{t_{\delta, 0}}$. \begin{itemize} \item[1.] We have to consider two different cases. If $\alpha + \beta \neq \eta + \mu$, then $f_4(e_{11}, e_{22}, e_{22}, t_{\eta, \mu}) = \mu (\alpha + \beta - \eta - \mu) e_{11} \neq 0$ and we are done in this case. Now, let us suppose that $\alpha + \beta = \eta + \mu$. In this case, for some $\lambda \in F$, we obtain that $f_5(e_{11}, e_{22}, e_{22}, t_{\eta, \mu}) = \lambda e_{11} + \eta (\beta - \mu) (\alpha - \mu) e_{22}$ is non-zero since the hypothesis $\{ \eta, \mu \} \neq \{ \alpha, \beta \}$ implies that $\beta \neq \mu$ and $\alpha \neq \mu$. \item[2.] It is the same proof of item $1.$ in which $\eta = \mu = \gamma$. \item[3.] The evaluation $x_1 = x_2 = e_{22}$, $x_3 = e_{11}$ gives $f_5(e_{22}, e_{22}, e_{11}, t_{\delta, 0}) = \alpha \beta \delta e_{22} \neq 0$. \end{itemize} \end{proof} \section{The algebras $C_2^{t_{\alpha, \beta}}$} In this section we focus our attention on the $F$-algebra $$ C_2 = \left \{ \begin{pmatrix} a & b \\ 0 & a \end{pmatrix} : a,b \in F \right \}. $$ Since $C_2$ is commutative, every trace on $C_2$ is just a linear map $C_2 \rightarrow F$. Hence, if $\mbox{tr}$ is a trace on $C_2$, then there exist $\alpha, \beta \in F$ such that $$ \mbox{tr} \left ( \begin{pmatrix} a & b \\ 0 & a \end{pmatrix} \right ) = \alpha a + \beta b. $$ We denote such a trace by $t_{\alpha, \beta}$. Moreover, $C_2^{t_{\alpha, \beta}}$ indicates the algebra $C_2$ endowed with the trace $t_{\alpha, \beta}$. \begin{Lemma} \label{identities of C2 t alfa 0} Let $\alpha \in F$. Then $C_2^{t_{\alpha, 0}}$ satisfies the following trace identities of degree $2:$ \begin{itemize} \item[1.] $[x_1, x_2] \equiv 0$. \item[2.] $\mbox{Tr}(x_1) \mbox{Tr}(x_2) - \alpha \mbox{Tr}(x_1 x_2) \equiv 0$. \item[3.] $\mbox{Tr}(x_1) \mbox{Tr}(x_2) - \alpha \mbox{Tr}(x_1) x_2 - \alpha \mbox{Tr}(x_2) x_1 + \alpha^2 x_1 x_2 \equiv 0$. \end{itemize} \end{Lemma} \begin{proof} The result follows by an immediate verification. \end{proof} If $\alpha = 0$ then $C_2^{t_{0,0}}$ is a commutative algebra with zero trace and $c_n^{tr}(C_2^{t_{0,0}}) = 1$, for all $n \geq 1$. In case $\alpha \neq 0$, by putting together Lemma \ref{identities of C2 t alfa 0} and Theorem \ref{identities and codimensions of D2 t alpha 0} we get that $\mbox{var}^{tr}(C_2^{t_{\alpha,0}}) \subsetneq \mbox{var}^{tr}(D_2^{t_{\alpha,0}}) $. Hence, by Theorem \ref{D2 alfa 0 APG}, $C_2^{t_{\alpha,0}}$ generates a variety of polynomial growth. \begin{Remark} \label{C alpha, beta equivalent to C alpha beta'} Let $\alpha, \beta, \beta' \in F$ with $\beta, \beta' \neq 0$. 
The algebras $C_2^{t_{\alpha, \beta}}$ and $C_2^{t_{\alpha, \beta'}}$ are isomorphic, as algebras with trace. \end{Remark} \begin{proof} We need only to observe that the linear map $\varphi \colon C_2^{t_{\alpha, \beta}} \rightarrow C_2^{t_{\alpha, \beta'}}$, defined by $$ \varphi \left ( \begin{pmatrix} a & b \\ 0 & a \end{pmatrix} \right ) = \begin{pmatrix} a & \beta \beta'^{-1} b \\ 0 & a \end{pmatrix}, $$ is an isomorphism of algebras with trace. \end{proof} With a straightforward computation we get the following result. \begin{Lemma} \label{identity of C alfa} Let $\alpha \in F$. Then: \begin{itemize} \item[1.] $C_2^{t_{\alpha,1}}$ does not satisfy any multilinear trace identity of degree $2$ which is not a consequence of $[x_1, x_2] \equiv 0$. \item[2.] $C_2^{t_{\alpha,1}}$ satisfies the following trace identity of degree $3:$ $$ f_\alpha = \alpha x_1 x_2 x_3 + \mbox{Tr}(x_1 x_2) x_3 + \mbox{Tr}(x_1 x_3) x_2 + \mbox{Tr}(x_2 x_3) x_1 - \mbox{Tr}(x_1) x_2 x_3 - \mbox{Tr}(x_2) x_1 x_3 - \mbox{Tr}(x_3) x_1 x_2 - \mbox{Tr}(x_1 x_2 x_3). $$ \end{itemize} \end{Lemma} Next we shall prove that, for any $\alpha \in F$, the algebra $C_2^{t_{\alpha,1}}$ generates a variety of exponential growth. \begin{Theorem} \label{C alfa has exp growth} For any $\alpha \in F$, the algebra $C_2^{t_{\alpha,1}}$ generates a variety of exponential growth. \end{Theorem} \begin{proof} Let us consider the following set of trace monomials of degree $n$: \begin{equation} \label{monomials} \mbox{Tr}(x_{i_1}) \cdots \mbox{Tr}(x_{i_k}) x_{j_1} \cdots x_{j_{n-k}}, \end{equation} where $\{ i_1, \ldots, i_k, j_1, \ldots, j_{n-k} \} = \{ 1, \ldots, n \}$, $i_1 < \cdots < i_k$, $j_1 < \cdots < j_{n-k}$, $k = 0, \ldots, n$. The number of elements in \eqref{monomials} is exactly $ \sum_{k = 0}^n \binom{n}{k} = 2^n$. So, in order to prove the theorem we shall show that the monomials in \eqref{monomials} are linearly independent, modulo $\Id^{tr}(C_2^{t_{\alpha,1}})$. To this end, let $g \in \Id^{tr}(C_2^{t_{\alpha,1}})$ be a linear combination of the above elements: $$ g(x_1, \ldots, x_n) = \sum_{I,J} a_{I,J} \mbox{Tr}(x_{i_1}) \cdots \mbox{Tr}(x_{i_k}) x_{j_1} \cdots x_{j_{n-k}}, $$ where $k= 0, \ldots, n$, $I = \{ x_{i_1}, \ldots, x_{i_k} \}$, $J = \{ x_{j_1}, \ldots, x_{j_{n-k}} \}$ and $i_1 < \cdots < i_k, \ j_1 < \cdots < j_{n-k}$. We claim that $g$ is actually the zero polynomial. Let $k$ be the largest integer such that $\alpha_{I,J} \neq 0$, with fixed $I = \{ x_{i_1}, \ldots, x_{i_k} \}$ and $J = \{ x_{j_1}, \ldots, x_{j_{n-k}} \}$. By making the evaluation $x_{i_1} = \cdots = x_{i_k} = e_{12}$ and $x_{j_1} = \cdots = x_{j_{n-k}} = e_{11} + e_{22}$, we get $g = \alpha_{I,J} (e_{11} + e_{22}) + \gamma e_{12} = 0$. This implies $ \alpha_{I,J} = 0$, a contradiction. \end{proof} We conclude this section with the following results comparing trace $T$-ideals. \begin{Lemma} \label{C alfa no T equ C beta} Let $\alpha, \beta \in F$ be two distinct elements. Then $\Id^{tr}(C_2^{t_{\alpha,1}}) \not \subset \Id^{tr}(C_2^{t_{\beta,1}}) $. \end{Lemma} \begin{proof} Let us consider the polynomial $$ f_\alpha = \alpha x_1 x_2 x_3 + \mbox{Tr}(x_1 x_2) x_3 + \mbox{Tr}(x_1 x_3) x_2 + \mbox{Tr}(x_2 x_3) x_1 - \mbox{Tr}(x_1) x_2 x_3 - \mbox{Tr}(x_2) x_1 x_3 - \mbox{Tr}(x_3) x_1 x_2 - \mbox{Tr}(x_1 x_2 x_3). $$ We have seen in Lemma \ref{identity of C alfa} that $f_\alpha$ is a trace identity of $C_2^{t_{\alpha,1}}$. In order to complete the proof we need only show that such a polynomial does not vanish on $C_2^{t_{\beta,1}}$. 
By considering the evaluation $x_1 = x_2 = x_3 = e_{11} + e_{22} \in C_2^{t_{\beta,1}} $, we get \[ f_\alpha(e_{11} + e_{22}, e_{11} + e_{22}, e_{11} + e_{22}, t_{\beta,1}) = (\alpha - \beta) (e_{11} + e_{22}). \] Since $\alpha \neq \beta $, $f_\alpha$ does not vanish on $C_2^{t_{\beta,1}}$ and we are done. \end{proof} \begin{Lemma} \label{D2 alpha 0, D2 alfa alfa no T equ C beta} Let $\alpha, \beta, \gamma, \delta \in F \setminus \{0 \}$, $\epsilon \in F$, $\alpha \neq \beta$. Then \begin{itemize} \item[1.] $\Id^{tr}(D_2^{t_{\delta,0}}) \not \subset \Id^{tr}(C_2^{t_{\epsilon,1}}) $, \item[2.] $\Id^{tr}(D_2^{t_{\gamma,\gamma}}) \not \subset \Id^{tr}(C_2^{t_{\epsilon,1}}) $, \item[3.] $\Id^{tr}(D_2^{t_{\alpha,\beta}}) \not \subset \Id^{tr}(C_2^{t_{\epsilon,1}}) $. \end{itemize} \end{Lemma} \begin{proof} By Theorems \ref{identities and codimensions of D2 t alpha 0} and \ref{identities of D2 talpha alpha}, we know that the algebras $D_2^{t_{\delta,0}}$ and $D_2^{t_{\gamma,\gamma}}$ satisfy trace identities of degree $2$ which are not a consequence of $[x_1, x_2] \equiv 0$. This does not happen for the algebra $C_2^{t_{\epsilon,1}}$ (see the first item of Lemma \ref{identity of C alfa}) and so the proof of the first two items is complete. In order to prove the last item, let us consider the polynomial $f_5$ of Theorem \ref{identities and codimensions of D2 talpha beta}, which is a trace identity of $D_2^{t_{\alpha, \beta}}$. Such a polynomial does not vanish on $C_2^{t_{\epsilon,1}}$. In fact, by considering the evaluation $x_1 = x_2 = x_3 = e_{12} \in C_2^{t_{\epsilon,1}} $, we get \[ f_5(e_{12}, e_{12}, e_{12}, t_{\epsilon,1}) = e_{11} + e_{22} - (\alpha + \beta)e_{12} \neq 0. \] \end{proof} \begin{Lemma} \label{C alfa no equ D2 beta gamma} Let $\alpha, \beta, \gamma \in F$, $\alpha \neq 0$. Then $\Id^{tr}(C_2^{t_{\gamma,1}}) \not \subset \Id^{tr}(D_2^{t_{\alpha, \beta}}) $. \end{Lemma} \begin{proof} Let us consider the polynomial $f_\gamma$ of Lemma \ref{identity of C alfa}, which is a trace identity of $C_2^{t_{\gamma,1}}$. We shall show that it does not vanish on $D_2^{t_{\alpha, \beta}}$. By considering the evaluation $x_1 = x_2 = e_{11}$ and $x_3 = e_{22}$, we get \[ f_\gamma(e_{11}, e_{11}, e_{22}, t_{\alpha, \beta}) = \alpha e_{22} - \beta e_{11}. \] Since $\alpha \neq 0$, $f_\gamma$ does not vanish on $D_2^{t_{\alpha, \beta}}$ and we are done. In particular, in case $\beta = \alpha$ we get that $\Id^{tr}(C_2^{t_{\gamma,1}}) \not \subset \Id^{tr}(D_2^{t_{\alpha, \alpha}}) $ and in case $\beta = 0$ $\Id^{tr}(C_2^{t_{\gamma,1}}) \not \subset \Id^{tr}(D_2^{t_{\alpha, 0}}) $. \end{proof} \section{Algebras with trace of polynomial growth} We start this section by describing a version of the Wedderburn-Malcev theorem for finite dimensional algebras with trace. First we recall some definitions. Let $A$ be a unitary algebra with trace $\mbox{tr}$. A subset (subalgebra, ideal) $ S \subseteq A$ is a trace-subset (subalgebra, ideal) of $A$ if it is stable under the trace; in other words for all $ s \in S $, one has $ \mbox{tr}(s) \in S $. \begin{Definition} Let $A$ be an algebra with trace. $A$ is called a trace-simple algebra if \begin{enumerate} \item[1.] $A^2 \neq 0$, \item[2.] $A$ has no non-trivial trace-ideals. \end{enumerate} \end{Definition} \begin{Remark} \label{simple implies trace simple} Let $A$ be an algebra with trace $\mbox{tr}$. \begin{enumerate} \item[1.] If $A$ is simple (as an algebra) then $A$ is trace-simple. \item[2.] 
If $I$ is a proper trace-ideal of $A$ then the trace vanishes on $I$. \end{enumerate} \end{Remark} \begin{proof} The first item is obvious. For the second one, let us suppose that there exists $a\in I$ such that $tr(a)=\alpha\ne 0$. Hence $\alpha \in F$ is invertible. Moreover, since $I$ is a trace-ideal, it contains $\alpha$ and so we would have $I=A$, a contradiction. Notice that the second item of the remark also holds for one-sided ideals. \end{proof} In the following result we give a version of the Wedderburn--Malcev theorem for finite dimensional algebras with trace. \begin{Theorem}\label{WM} Let $A$ be a finite dimensional unitary algebra with trace $\mbox{tr}$ over an algebraically closed field $F$ of characteristic $0$. Then there exists a semisimple trace-subalgebra $B$ such that \[ A=B+J(A) = B_1 \oplus \cdots \oplus B_k + J(A) \] where $J = J(A)$ is the Jacobson radical of $A$ and $B_1$, \dots, $B_k$ are simple algebras. \end{Theorem} \begin{proof} By the Wedderburn--Malcev theorem for the ordinary case (see for example \cite[Theorem $3.4.3$]{GiambrunoZaicev2005book}), we can write $A$ as a direct sum of vector spaces \[ A = B + J = B_1 \oplus \cdots \oplus B_k + J \] where $B$ is a maximal semisimple subalgebra of $A$, $J= J(A)$ is the Jacobson radical of $A$, and $B_i$ are simple algebras, $i = 1$, \dots, $k$. By the Theorems of Wedderburn and Wedderburn--Artin on simple and semisimple algebras (see for instance \cite[Theorems 1.4.4, 2.1.6]{Herstein1968book}), and since $F$ is algebraically closed, we have that \[ B = B_1 \oplus \cdots \oplus B_k = M_{n_1}(F) \oplus \cdots \oplus M_{n_k}(F). \] Here $M_{n_i}(F)$ is the simple algebra of $n_i \times n_i$ matrices, $i = 1$, \dots, $k$. Clearly $B$ is a trace-subalgebra since $1_A \in B$. Moreover, by considering the restriction of the trace $\mbox{tr}$ on $B$ it is easy to see that there exist $\alpha_i \in F$ such that \[ tr(a_1, \ldots, a_k) = \sum_{i = 1}^{k} t_{\alpha_i}(a_i) \] where $a_i \in M_{n_i}(F)$, $t_{\alpha_i} = \alpha_i t_1^i$, and $t_1^i$ is the ordinary trace on the matrix algebra $M_{n_i}(F)$. \end{proof} In order to prove the main result of this paper we need the following lemmas. \begin{Lemma} \label{C alfa in F+J} Let $A = B + J$ be a finite dimensional algebra with trace $\mbox{tr}$. If there exists $j \in J$ such that $\mbox{tr}(j) \neq 0$ then $C_2^{t_{\alpha,1}} \in \mbox{var}^{tr}(A)$, for some $\alpha \in F$. \end{Lemma} \begin{proof} Let us consider the trace subalgebra $B'$ of $A$ generated by $1$, $j$ over $F$ and let $I$ be the ideal of $B'$ generated by $j^n$, where $n$ is the least integer such that $\mbox{tr}(j^n) = \mbox{tr}(j^{n+1}) = \cdots = 0$. Then the quotient algebra $\bar{B} = B'/I$ is an algebra with trace $t$ defined as $t(a+I) = \mbox{tr}(a)$, for any $a \in B'$. Obviously $\bar{B} = \mbox{span} \{ \bar{1} = 1+I, \bar{j} = j+I, \ldots, \bar{j}^{n-1} = j^{n-1}+I\}$. Let $\alpha = \mbox{tr}(1)$ and $\beta = \mbox{tr}({j}^{n-1}) \neq 0$. We claim that $C_2^{t_{\alpha, \beta}} \in \mbox{var}^{tr}(\bar{B})$. Let $\varphi\colon C_2^{t_{\alpha, \beta}} \to \bar{B} $ be the linear map defined by $\varphi(e_{11}+e_{22}) = \bar{1}$ and $\varphi(e_{12}) = \bar{j}^{n-1}$. It is easy to check that $\varphi$ is an injective homomorphism of algebras with trace. Hence $C_2^{t_{\alpha, \beta}}$ is isomorphic to a trace subalgebra of $\bar{B}$ and the claim is proved. 
Since, by Remark \ref{C alpha, beta equivalent to C alpha beta'}, $C_2^{t_{\alpha, \beta}} \cong C_2^{t_{\alpha, 1}}$, it follows that $C_2^{t_{\alpha,1}} \in \mbox{var}^{tr}(A)$ and the proof is complete. \end{proof} \begin{Lemma} \label{D2 alfa alfa in Mn alfa} For any $\alpha \in F$, the algebra $D_2^{t_{\alpha, \alpha}}$ belongs to the variety generated by $M_n^{t_\alpha}$. \end{Lemma} \begin{proof} Let us recall that we denote by $M_n^{t_\alpha}$ the algebra of the $n\times n$ matrices endowed with the trace $t_\alpha$; this is the usual trace multiplied by the scalar $\alpha\in F$. Since $D_2^{t_{\alpha,\alpha}} \subseteq M_2^{t_\alpha}$ as algebras with (the same) trace it follows that $D_2^{t_{\alpha,\alpha}}$ satisfies all trace identities of $M_2^{t_\alpha}$ (and some additional ones). Therefore $D_2^{t_{\alpha,\alpha}} \in \mbox{var}^{tr}(M_2^{t_\alpha})$. In order to complete the proof we just need to show that $M_2^{t_{\alpha}} \in \mbox{var}^{tr}(M_n^{t_\alpha})$. To this end, let $f \in \mbox{Id}^{tr}(M_n^{t_\alpha})$ be a multilinear trace identity of degree $m$ and suppose, by contradiction, that there exist elementary matrices $e_{i_1 j_1}$, \dots, $e_{i_m j_m}$ in $M_2^{t_\alpha}$ such that $f(e_{i_1 j_1}, \ldots, e_{i_m j_m}) = \sum \alpha_{i, j} e_{i j} \neq 0$. Notice that, if we denote by $e_{ij}'$ the elementary matrices in $M_n^{t_\alpha}$, then $f(e_{i_1 j_1}', \ldots, e_{i_m j_m}') = \sum \alpha_{i, j} e_{i j} + \sum_{i=3}^n \beta_{ii} e_{ii} \neq 0$ (the extra diagonal terms can only come from the purely trace monomials, which evaluate to scalar multiples of the identity matrix), a contradiction. \end{proof} In order to prove the main result of this paper we also have to consider the algebra $UT_2$ of $ 2 \times 2$ upper-triangular matrices endowed with zero trace. In the following theorem we collect some results concerning such an algebra. \begin{Theorem} \label{UT2} Let $UT_2$ be the algebra of $ 2 \times 2$ upper-triangular matrices endowed with zero trace. \begin{itemize} \item[1.] The trace $T$-ideal $\mbox{Id}^{tr}(UT_2)$ is generated by $[x_1, x_2] [x_3, x_4]$ and $\mbox{Tr}(x)$. \item[2.] $UT_2$ generates a variety of almost polynomial growth. \item[3.] $ \mbox{Id}^{tr}(UT_2) \nsubseteq \mbox{Id}^{tr}(A)$, where $A \in \{ D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}, D_2^{t_{\delta,0}}, C_2^{t_{\epsilon,1}} \}$, $\alpha$, $\beta$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha \ne \beta$, $\epsilon \in F$. \item[4.] $ \mbox{Id}^{tr}(A) \nsubseteq \mbox{Id}^{tr}(UT_2)$, where $A \in \{ D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}, D_2^{t_{\delta,0}}, C_2^{t_{\epsilon,1}} \}$, $\alpha$, $\beta$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha \ne \beta$, $\epsilon \in F$. \end{itemize} \end{Theorem} \begin{proof} The first two items follow directly from the ordinary case (see, for instance, \cite[Chapters 4 and 7]{GiambrunoZaicev2005book}). For item (3) it is sufficient to observe that $\mbox{Tr}(x) \equiv 0$ is a trace-identity of $UT_2$ but such a polynomial does not vanish on $A$, for any $A \in \{ D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}, D_2^{t_{\delta,0}}, C_2^{t_{\epsilon,1}} \}$. Finally, since the algebras $D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}, D_2^{t_{\delta,0}}, C_2^{t_{\epsilon,1}}$ are commutative and $UT_2$ is not, we get item (4), and the proof is complete. \end{proof} Now we are in a position to prove the following theorem characterizing the varieties of unitary algebras with trace which are generated by finite dimensional algebras, and have polynomial growth of their codimensions. 
\begin{Theorem} \label{characterization} Let $A$ be a finite dimensional unitary algebra with trace $\mbox{tr}$ over a field $F$ of characteristic zero. Then the sequence $c_n^{tr}(A)$, $n=1$, 2, \dots, is polynomially bounded if and only if $ D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $D_2^{t_{\delta,0}}$, $C_2^{t_{\epsilon,1}}, UT_2 \notin \mbox{var}^{tr}(A)$, for any choice of $\alpha$, $\beta$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha \ne \beta$, $\epsilon \in F$. \end{Theorem} \begin{proof} By Theorems \ref{identities and codimensions of D2 t alpha 0}, \ref{identities of D2 talpha alpha}, \ref{identities and codimensions of D2 talpha beta}, \ref{C alfa has exp growth}, \ref{UT2}, the algebras $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $D_2^{t_{\delta,0}}$, $C_2^{t_{\epsilon,1}}$ and $UT_2$ generate varieties of exponential growth. Hence, if $c_n^{tr}(A)$ is polynomially bounded, then $ D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $ D_2^{t_{\delta,0}}$, $C_2^{t_{\epsilon,1}}, UT_2 \notin \mbox{var}^{tr}(A)$, for any $\alpha$, $\beta$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha \ne\beta$, $\epsilon \in F$. Conversely, suppose that $ D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $ D_2^{t_{\delta, 0}}$, $C_2^{t_{\epsilon,1}}, UT_2 \notin \mbox{var}^{tr}(A)$, for any $\alpha$, $\beta$, $\gamma$, $ \delta \in F \setminus \{ 0 \}$, $\alpha \ne\beta$, $\epsilon \in F$. Since we are dealing with codimensions, and these do not change under extensions of the base field, we may assume that the field $F$ is algebraically closed. By Theorem \ref{WM}, we get that \[ A = M_{n_1}(F) \oplus \cdots \oplus M_{n_k}(F) + J, \ \ k \geq 1, \] and there exist constants $\alpha_i$ such that, for $a_i \in M_{n_i}(F)$, we have \[ tr(a_1, \ldots, a_k) = \sum_{i = 1}^{k} t_{\alpha_i}(a_i). \] Since $D_2^{t_{\gamma,\gamma}} \notin \mbox{var}^{tr}(A)$, for any $\gamma \in F \setminus \{ 0 \}$, and since, by Lemma \ref{D2 alfa alfa in Mn alfa}, we have that, for $n\ge 2$, $D_2^{t_{\gamma,\gamma}} \in \mbox{var}^{tr}(M^{t_\gamma}_n) \subseteq \mbox{var}^{tr}(A) $, we get that $n_i = 1$, for every $i = 1$, \dots, $k$. Hence \[ A=A_1\oplus \cdots\oplus A_k + J \] where for every $i=1$, \dots, $k$, $A_i\cong F$ and the trace on it is $t_{\alpha_i}$. Since, for any $\alpha \in F$, $C_2^{t_{\alpha,1}} \not \in \mbox{var}^{tr}(A)$, by Lemma \ref{C alfa in F+J} we must have that the trace vanishes on $J$. Now, if the trace on $A_i$ is zero for every $i = 1$, \dots, $k$, then, since $UT_2 \not \in \mbox{var}^{tr}(A)$, we must have $A_i J A_j = 0$ for any $i \neq j$. Hence, for $n \geq 1$, $c_n^{tr}(A) = c_n(A)$ is polynomially bounded (see, for instance, \cite[Chapter 7]{GiambrunoZaicev2005book}) and we are done in this case. Hence, we may assume that there exists $i$ such that the trace on $A_i$ is $t_{\alpha_i}$, with $\alpha_i \neq 0$. Let $F_\alpha$ denote the field $F$ endowed with the trace $t_\alpha$. We claim that $F_\alpha \oplus F_\beta$ is isomorphic to $D_2^{t_{\alpha, \beta}}$ if $\alpha \neq \beta$ (notice that $\beta$ could be zero) and to $D_2^{t_{\alpha, \alpha}}$ otherwise. Here we shall denote by $t$ the trace map on $F_{\alpha} \oplus F_{\beta} $ defined as $t((a,b)) = t_{\alpha}(a) + t_{\beta}(b)$, for all $(a,b) \in F_\alpha \oplus F_\beta$. 
In order to prove the claim, let us consider the linear map $\varphi\colon D_2 \rightarrow F_\alpha \oplus F_\beta $ such that \[ \varphi \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = (1,0) \ \ \ \ \ \ \ \ \ \mbox{and} \ \ \ \ \ \ \ \ \ \varphi \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = (0,1). \] It is easily seen that $\varphi$ is an isomorphism of algebras. Now, if $\alpha \neq \beta$, we have that \[ \varphi \left ( t_{\alpha, \beta} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \right ) = \varphi(\alpha) = (\alpha, \alpha) = t(1,0) = t \left ( \varphi \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \right ), \] \[ \varphi \left ( t_{\alpha, \beta} \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right ) = \varphi(\beta) = (\beta, \beta) = t(0,1) = t \left ( \varphi \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right ), \] and so $\varphi$ is an isomorphism of algebras with trace between $D_2^{t_{\alpha, \beta}}$ and $F_\alpha \oplus F_\beta$. In the same way, if $\alpha = \beta$, we get a trace isomorphism between $D_2^{t_{\alpha,\alpha}}$ and $F_\alpha \oplus F_\alpha$. Hence, since $D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}$, $ D_2^{t_{\delta,0}} \notin \mbox{var}^{tr}(A)$, it follows that \[ A = B + J \] where $B \cong F$ and for all $a = b+j \in A$, $tr(a) = tr(b+j) = \alpha b$, with $\alpha \neq 0$. In order to complete the proof we need to show that $B + J$ has polynomially bounded trace codimensions. Notice that the following polynomials are trace identities of $B + J$: \begin{enumerate} \item[1.] $\alpha \mbox{Tr}(x_1 x_2) - \mbox{Tr}(x_1) \mbox{Tr}(x_2) \equiv 0$, \item[2.] $ \left ( \mbox{Tr}(x_1) - \alpha x_1 \right )\cdots \left ( \mbox{Tr}(x_{q+1}) - \alpha x_{q+1} \right )\equiv 0$, where $J^q \neq 0$ and $J^{q+1} = 0$. \end{enumerate} Modulo the first trace identity, every multilinear trace polynomial in the variables $x_1$, \dots, $x_n$ is a linear combination of expressions of the type \[ \mbox{Tr}(x_{i_1}) \cdots \mbox{Tr}(x_{i_a}) x_{j_1} \cdots x_{j_b} \] where $ \left \{ i_1, \ldots, i_a, j_1, \ldots, j_b \right \} = \left \{ 1, \ldots, n \right \} $ and $i_1 < \cdots < i_a $. The second identity implies that we can suppose $a\le q$. Indeed if $a\ge q+1$ then one can represent the product of $q+1$ traces as a linear combination of elements with fewer traces by expanding the second trace identity. Therefore we may suppose $a\le q$. We can further reduce the form of the latter polynomials. As we consider algebras with unit we can rewrite each monomial $x_{j_1} \cdots x_{j_b}$ as a linear combination of elements of the type \[ x_{p_1} \cdots x_{p_c} \left [ x_{l_1}, \ldots , x_{l_{k}} \right ] \cdots \left [ x_{m_1}, \ldots , x_{m_{h}} \right ]. \] Here the commutators that appear are left normed, that is $[u,v] = uv-vu$, $[u,v,w] = [[u,v],w]$ and so on. Moreover \[ \left \{ i_1, \ldots, i_a, p_1, \ldots, p_c, l_1, \ldots, l_k, \ldots, m_1, \ldots, m_h \right \} = \left \{ 1, \ldots, n \right \}. \] By applying the Poincar\'e--Birkhoff--Witt theorem, we can assume further that $p_1 < \cdots < p_c $, and that the commutators are ordered. 
Therefore we can write each trace polynomial as a linear combination of elements of the type \[ \mbox{Tr}(x_{i_1}) \cdots \mbox{Tr}(x_{i_a}) x_{p_1} \cdots x_{p_c} [ x_{l_1}, \ldots , x_{l_{k_1}} ] \cdots [ x_{m_1}, \ldots , x_{m_{k_h}} ], \] where $ \left \{ i_1, \ldots, i_a, p_1, \ldots, p_c, l_1, \ldots, l_{k_1}, \ldots, m_1, \ldots, m_{k_h} \right \} = \left \{ 1, \ldots, n \right \} $ and $i_1 < \cdots < i_a $, $p_1 < \cdots < p_c $, $a \leq q$. The algebra $B \cong F$ in the decomposition $A=B+J$ is central in $A$. Hence a left normed commutator vanishes as soon as one of its entries is evaluated in $B$; by multilinearity, in order to get a non-zero value every entry of every commutator must be substituted by an element of $J$. Since $J^{q+1} = 0$, any product of commutators involving at least $q+1$ variables in total is a trace identity. Therefore we also get the restriction $ K := k_1 + \cdots + k_h \leq q$. In this way \begin{eqnarray*} c_n^{tr}(A) &\le & \sum_{a=0}^q \binom{n}{a} \left ( \sum_{K = k_1 + \cdots + k_h = 0}^q \binom{n-a}{K} \binom{K}{k_1, \ldots, k_h } k_1! \cdots k_h! \right ) \\ &=&\sum_{a=0}^q \dfrac{n(n-1) \cdots (n-a+1)}{a!}\left ( \sum_{K = k_1 + \cdots + k_h=0}^q \dfrac{(n-a)!}{(n-a-K)!} \right ) \\ & \approx & cn^{2q} \end{eqnarray*} where $c$ is a constant. Hence $A = B + J$ has polynomial growth and the proof is complete. \end{proof} As an immediate consequence, we get the following result. \begin{Corollary} If $A$ is a finite dimensional unitary algebra with trace, then the sequence $c^{tr}_n(A)$, $n=1$, 2, \dots, is either polynomially bounded or grows exponentially. \end{Corollary} Now we prove the following corollary. \begin{Corollary} Let $\alpha$, $\beta \in F \setminus \{ 0 \}$, $\alpha \neq \beta$. Any proper subvariety of $\mbox{var}^{tr}(D_2^{t_{\alpha, \beta}})$, generated by a finite dimensional algebra with trace, has polynomial growth. \end{Corollary} \begin{proof} Let $\mathcal{V} = \mbox{var}^{tr}(A) \subsetneq \mbox{var}^{tr}(D_2^{t_{\alpha, \beta}})$ where $A$ is a finite dimensional algebra with trace. As a consequence of Lemmas \ref{D2 alfa beta no T equ}, \ref{D2 alpha 0, D2 alfa alfa no T equ C beta} (item $3.$) and Theorem \ref{UT2}, we get that $UT_2$, $D_2^{t_{\alpha', \beta'}}$, $D_2^{t_{\gamma, \gamma}}$, $D_2^{t_{\delta,0}}$, $ C_2^{t_{\epsilon,1}} \not \in \mbox{var}^{tr}(A)$, for any $\alpha'$, $\beta'$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha' \ne\beta'$, $\epsilon \in F$. Hence Theorem \ref{characterization} applies and the proof is complete. \end{proof} With a similar approach we obtain the following result. \begin{Corollary} For any $\epsilon \in F$, any proper subvariety of $\mbox{var}^{tr}(C_2^{t_{\epsilon,1}})$, generated by a finite dimensional algebra with trace, has polynomial growth. \end{Corollary} According to the previous results, with an abuse of terminology, we may say that $D_2^{t_{\alpha, \beta}}$ and $C_2^{t_{\epsilon,1}}$ generate varieties of almost polynomial growth. As a consequence we state the following corollary. \begin{Corollary} The algebras $UT_2$, $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $D_2^{t_{\delta,0}}$ and $C_2^{t_{\epsilon,1}}$, $\alpha, \beta, \gamma, \delta \in F \setminus \{ 0 \}$, $\alpha \neq \beta$, $\epsilon \in F$, are the only finite dimensional algebras with trace generating varieties of almost polynomial growth. \end{Corollary} \textbf{Acknowledgements} We thank the Referee for the careful reading and the helpful comments. Remark~\ref{differentvarieties} was added in order to answer a question raised by the Referee. \end{document}
\begin{document} \title{\large A lower bound on HMOLS with equal sized holes} \author{Michael Bailey} \address{\rm Michael Bailey: Mathematics and Statistics, University of Victoria, Victoria, BC, Canada } \email{[email protected]} \author{Coen del Valle} \address{\rm Coen del Valle: Mathematics and Statistics, University of Victoria, Victoria, BC, Canada} \email{[email protected]} \author{Peter J.~Dukes} \address{\rm Peter J.~ Dukes: Mathematics and Statistics, University of Victoria, Victoria, BC, Canada } \email{[email protected]} \thanks{Research of Peter Dukes is supported by NSERC grant 312595--2017} \date{\today} \begin{abstract} It is known that $N(n)$, the maximum number of mutually orthogonal latin squares of order $n$, satisfies the lower bound $N(n) \ge n^{1/14.8}$ for large $n$. For $h\ge 2$, relatively little is known about the quantity $N(h^n)$, which denotes the maximum number of `HMOLS' or mutually orthogonal latin squares having a common equipartition into $n$ holes of a fixed size $h$. We generalize a difference matrix method that had been used previously for explicit constructions of HMOLS. An estimate of R.M. Wilson on higher cyclotomic numbers guarantees our construction succeeds in suitably large finite fields. Feeding this into a generalized product construction, we are able to establish the lower bound $N(h^n) \ge (\log n)^{1/\delta}$ for any $\delta>2$ and all $n > n_0(h,\delta)$. \end{abstract} \maketitle \hrule \section{Introduction} \subsection{Overview} A \emph{latin square} is an $n \times n$ array with entries from an $n$-element set of symbols such that every row and column is a permutation of the symbols. Often the symbols are taken to be from $[n]:=\{1,\dots,n\}$. The integer $n$ is called the \emph{order} of the square. Two latin squares $L$ and $L'$ of order $n$ are \emph{orthogonal} if $\{(L_{ij},L'_{ij}): i,j \in [n]\}=[n]^2$; that is, two squares are orthogonal if, when superimposed, all ordered pairs of symbols are distinct. A family of latin squares in which any pair are orthogonal is called a set of \emph{mutually orthogonal latin squares}, or `MOLS' for short. The maximum size of a set of MOLS of order $n$ is denoted $N(n)$. It is easy to see that $N(n) \le n-1$ for $n>1$, with equality if and only if there exists a projective plane of order $n$. Consequently, $N(q)=q-1$ for prime powers $q$. Using a number sieve and some recursive constructions, Beth showed \cite{Beth} (building on \cite{CES,WilsonMOLS}) that $N(n) \ge n^{1/14.8}$ for large $n$. In fact, by inspecting the sieve a little more closely, $14.8$ can be replaced by $14.7994$; we use this observation later to keep certain bounds a little cleaner. In this article, we are interested in a variant on MOLS. An \emph{incomplete latin square} of order $n$ is an $n \times n$ array $L=(L_{ij}: i,j \in [n])$ with entries either blank or in $[n]$, together with a partition $(H_1,\dots,H_m)$ of some subset of $[n]$ such that \begin{itemize} \item $L_{ij}$ is empty if $(i,j) \in \cup_{k=1}^m H_k \times H_k$ and otherwise contains exactly one symbol; \item every row and every column in $L$ contains each symbol at most once; and \item symbols in $H_k$ do not appear in rows or columns indexed by $H_k$, $k=1,\dots,m$. \end{itemize} The sets $H_k$ are often taken to be intervals of consecutive rows/columns/symbols (but need not be). 
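The orthogonality condition for (complete) latin squares is easy to test mechanically. The following short Python sketch is ours and is included only as an illustration; the function name and the order-$3$ example below are not taken from the cited literature.
\begin{verbatim}
# Illustrative sketch (not part of the paper): test the orthogonality
# condition {(L_ij, L'_ij)} = [n]^2 for two latin squares of order n.

def are_orthogonal(L, Lp):
    n = len(L)
    pairs = {(L[i][j], Lp[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

# The cyclic squares L_ij = i+j and L'_ij = i+2j (mod 3) form a pair of
# MOLS of order 3, so N(3) >= 2 (in fact N(3) = 3 - 1 = 2).
L  = [[(i + j) % 3 + 1 for j in range(3)] for i in range(3)]
Lp = [[(i + 2 * j) % 3 + 1 for j in range(3)] for i in range(3)]
assert are_orthogonal(L, Lp)
\end{verbatim}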
As one special case, when $m=n$ and each $H_k=\{k\}$, the definition is equivalent to a latin square $L$ that is \emph{idempotent}, that is, satisfying $L_{ii}=i$ for each $i \in [n]$, except that the diagonal is removed to produce the corresponding incomplete latin square. The \emph{type} of an incomplete latin square is the list $(h_1,\dots,h_m)$, where $h_i = |H_i|$ for each $i=1,\dots,m$. When $h_i=n/m$ for all $i$, so that the set of holes is a uniform partition of $[n]$, the term `holey latin square' is used, and the type is abbreviated to $h^m$, where $h=n/m$. To clarify the notation, we henceforth recycle the parameter $n$ as the number of holes, so that type $h^n$ is considered. The relevant squares are then $hn \times hn$. Two holey latin squares $L,L'$ of type $h^n$ (and sharing the same hole partition) are said to be \emph{orthogonal} if each of the $(hn)^2-nh^2$ ordered pairs of symbols from different holes appears exactly once when $L$ and $L'$ are superimposed. As with MOLS, we use the term `mutually orthogonal' for a set of holey latin squares, any two of which are orthogonal. The abbreviation HMOLS is standard in the more modern literature; see for instance \cite{ABG,Handbook}. Following \cite[\S III.4.4]{Handbook}, we use a similar function $N(h^n)$ as for MOLS to denote the maximum number of HMOLS of type $h^n$. (Some context is needed to properly parse this notation and not mistake `$h^n$' for exponentiation of integers.) \begin{ex} As an example we give a pair of HMOLS of type $2^4$, also shown in~\cite{DS}. In our (slightly different) presentation, the holes are $\{1,2\}$, $\{3,4\}$, $\{5,6\}$, $\{7,8\}$. \begin{center} \begin{tabular}{|cccccccc|} \hline &&8&6&3&7&4&5\\ &&5&7&8&4&6&3\\ 7&6&&&1&8&5&2\\ 5&8&&&7&2&1&6\\ 4&7&2&8&&&3&1\\ 8&3&7&1&&&2&4\\ 3&5&6&2&4&1&&\\ 6&4&1&5&2&3&&\\ \hline \end{tabular} \hspace{1.5cm} \begin{tabular}{|cccccccc|} \hline &&5&7&8&4&6&3\\ &&8&6&3&7&4&5\\ 5&8&&&7&2&1&6\\ 7&6&&&1&8&5&2\\ 8&3&7&1&&&2&4\\ 4&7&2&8&&&3&1\\ 6&4&1&5&2&3&&\\ 3&5&6&2&4&1&&\\ \hline \end{tabular} \end{center} \end{ex} It is easy to see that $N(n-1) \le N(1^n) \le N(n)$, so Beth's result gives a lower bound on HMOLS in the special case $h=1$. Also, if there exist $k$ HMOLS of type $1^n$ and $k$ MOLS of order $h$, then $N(h^n) \ge k$ follows easily by a standard product construction. However, very little else is known about HMOLS with holes of a fixed size greater than $1$. Some explicit results are known for a small number of squares. Dinitz and Stinson showed \cite{DS} that $N(2^n) \ge 2$ for $n \ge 4$. Stinson and Zhu \cite{SZ} extended this to $N(h^n) \ge 2$ for all $h \ge 2$, $n \ge 4$. Bennett, Colbourn and Zhu \cite{BCZ} settled the case of three HMOLS with a handful of exceptions. Abel, Bennett and Ge \cite{ABG} obtained several constructions of four, five or six HMOLS and produced a table of lower bounds on $N(h^n)$ for $h \le 20$ and $n \le 50$. For $2 \le h \le 6$, the largest entry in this table is 7, due to Abel and Zhang in \cite{AZ}. As a na\"{i}ve upper bound, we have $N(h^n) \le n-2$ from an argument similar to that for the standard MOLS upper bound. In more detail, we may permute symbols in a set of HMOLS so that the first row contains symbols $h+1,\dots,nh$, where symbols $1,\dots,h$ are missing and columns $1,\dots,h$ are blank. Consider the symbols that occur in entry $(h+1,1)$ among the family of HMOLS. At most one element from each of the holes $H_3,\dots,H_n$ can appear. Moreover, since a count of the filled cells shows that the superposition of two HMOLS never covers a pair of symbols from a common hole, the entries in this cell are pairwise distinct across the family; hence there are at most $n-2$ squares. 
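Holey orthogonality can be verified in the same mechanical way. The following Python sketch is ours and is offered only as an illustration; the data simply transcribe the pair of HMOLS of type $2^4$ displayed in the example above, with $0$ marking an empty cell, and the check confirms that the superposition covers every ordered pair of symbols from distinct holes exactly once.
\begin{verbatim}
# Illustrative sketch (not from the cited sources): check holey
# orthogonality for the displayed pair of HMOLS of type 2^4, with holes
# {1,2}, {3,4}, {5,6}, {7,8}; a 0 entry marks an empty cell.

def hole(s, h=2):
    return (s - 1) // h          # hole containing symbol s, holes of size h

def holey_orthogonal(L, Lp, h=2):
    n = len(L)
    pairs = [(L[i][j], Lp[i][j]) for i in range(n) for j in range(n)
             if L[i][j] != 0]
    wanted = {(s, t) for s in range(1, n + 1) for t in range(1, n + 1)
              if hole(s, h) != hole(t, h)}
    # every cross-hole ordered pair exactly once, and nothing else
    return len(pairs) == len(wanted) and set(pairs) == wanted

L1 = [[0,0,8,6,3,7,4,5], [0,0,5,7,8,4,6,3], [7,6,0,0,1,8,5,2],
      [5,8,0,0,7,2,1,6], [4,7,2,8,0,0,3,1], [8,3,7,1,0,0,2,4],
      [3,5,6,2,4,1,0,0], [6,4,1,5,2,3,0,0]]
L2 = [[0,0,5,7,8,4,6,3], [0,0,8,6,3,7,4,5], [5,8,0,0,7,2,1,6],
      [7,6,0,0,1,8,5,2], [8,3,7,1,0,0,2,4], [4,7,2,8,0,0,3,1],
      [6,4,1,5,2,3,0,0], [3,5,6,2,4,1,0,0]]

assert holey_orthogonal(L1, L2)
\end{verbatim}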
Our main result is a general lower bound on the rate of growth of $N(h^n)$ for fixed $h$ and large $n$. \begin{thm} \label{main} Let $h$ be a positive integer and $\epsilon>0$. For $k>k_0(h,\epsilon)$, there exists a set of $k$ HMOLS of type $h^n$ for all $n \ge k^{(3+\epsilon)\omega(h)k^2}$, where $\omega(h)$ denotes the number of distinct prime factors of $h$. \end{thm} To our knowledge, it has not even been stated that $N(h^n)$ tends to infinity, though by now this is implicit from some results on graph decompositions; see Section~\ref{sec:graph-decompositions}. The bound of Theorem~\ref{main} is very weak, yet we are satisfied at present due to the apparent difficulty of obtaining direct constructions. Unlike in the case of MOLS, or in the case $h=1$, there is no obvious `finite-geometric' object to get started for hole size $h \ge 2$. Indeed, the bulk of our lower bound is needed to get just a single finite field construction of $k$ HMOLS; see Section~\ref{sec:cyclotomic}. \subsection{Related objects} Let $n$ and $k$ be positive integers, where $k \ge 2$. A \emph{transversal design} TD$(k,n)$ consists of an $nk$-element set of \emph{points} partitioned into $k$ \emph{groups}, each of size $n$, and equipped with a family of $n^2$ \emph{blocks} of size $k$ having the property that any two points in distinct groups appear together in exactly one block. There exists a TD$(k,n)$ if and only if there exists a set of $k-2$ MOLS of order $n$. This equivalence is seen by indexing groups of the partition by rows, columns, and symbols from each square. Transversal designs are closely connected to orthogonal arrays. As with MOLS, it is possible to extend the definition to include holes. A \emph{holey transversal design} HTD$(k,h^n)$ is a $khn$-element set, say $[k] \times X$, where $X$ has cardinality $hn$ and an equipartition $(H_1,\dots,H_n)$ of \emph{holes} of size $h$, together with a collection of $h^2n(n-1)$ blocks that cover, exactly once each, every pair of elements $(i,x)$, $(j,y)$ in which $i \neq j$ and $x,y$ are in different holes. Of course, one could extend the definition to allow holes of mixed sizes, but the uniform hole size case suffices for our purposes. For a graph $G$ and positive integer $t$, let $G(t)$ denote the graph obtained by replacing every vertex of $G$ by an independent set of $t$ vertices, and replacing every edge of $G$ by a complete bipartite subgraph between corresponding $t$-sets. In other words, $G(t)$ is the lexicographic graph product $G \cdot \overline{K_t}$. Let us identify in the natural way the set of points of an HTD$(k,h^n)$ with vertices of the graph $K_k(h) \times K_n$. If we interpret the blocks of the transversal design as $k$-cliques on the underlying set of points, then the condition that two elements appear in a block (exactly once) if and only if they are in distinct groups and distinct holes amounts to every edge of $K_k(h) \times K_n$ falling into precisely one $k$-clique. The following is a summary of the preceding equivalences. \begin{prop} Let $h,k,n$ be positive integers with $k \ge 2$. The following are equivalent: \begin{itemize} \item the existence of a set of $k-2$ HMOLS of type $h^n$; \item the existence of a holey transversal design HTD$(k,h^n)$; and \item the existence of a $K_k$-decomposition of $K_k (h) \times K_n$. 
\end{itemize} \end{prop} \subsection{Existence via graph decompositions} \label{sec:graph-decompositions} In \cite{BKLOT}, Barber, K\"uhn, Lo, Osthus and Taylor prove a powerful existence result on $K_k$-decompositions of `dense' $k$-partite graphs. In a little more detail, let us call a $k$-partite graph $G$ \emph{balanced} if every partite set has the same cardinality and \emph{locally balanced} if every vertex has the same number of neighbors in each of the other partite sets. The main result of \cite{BKLOT} assures that any balanced and locally balanced $k$-partite graph on $kn$ vertices has a $K_k$-decomposition if $n$ is sufficiently large and the minimum degree satisfies $\delta(G) > C(k) (k-1)n$. Here, $C(k)$ is a constant less than one associated with the `fractional $K_k$-decomposition' threshold. We remark that the preceding machinery is enough to guarantee, for fixed $k$ and $h$, the existence of an HTD$(k,h^n)$ for sufficiently large $n$, since $K_k(h) \times K_n \cong K_k \times K_n(h)$ is $k$-partite and $r$-regular, where $r=h(k-1)(n-1)=(k-1)hn - h(k-1)$. In fact, even a slowly growing parameter $h$ (as a function of $n$) can be accommodated. However, the result in \cite{BKLOT} makes no attempt to quantify how large $n$ must be for the decomposition. Even in the case of the structured $k$-partite graph we are considering, it is likely hopeless to obtain a reasonable bound on $n$ by this method. Separately, the theory \cite{DMW,LW} of `edge-colored graph decompositions' due to R.M. Wilson and others, can be applied to the setting of HMOLS. To sketch the details, we fix $h$ and $k$ and consider the graph $K_k(h) \times K_n$ for large $n$. From this, we set up a directed complete graph $K_n^*$ with $r=(kh)^2-kh^2$ edge-colors between two vertices. Each color corresponds with an edge of the bipartite graph $K_k(h) \times K_2$ occurring between two of the $n$ vertices. Let $\mathcal{H}$ denote the family of all $r$-edge-colored cliques $K_k$ which correspond to legal placements of a block in our TD. We seek an $\mathcal{H}$-decomposition of $K_n^*$, and this is guaranteed for sufficiently large $n$ from \cite[Theorem 1.2]{LW}. (We omit several routine calculations needed to check the hypotheses.) Wilson's approach makes it difficult to obtain reasonable bounds on $n$, although in this context it is worth mentioning the bounds of Y.~Chang \cite{Chang-BIBD2,Chang-TD} for block designs and transversal designs. \subsection{Outline} The outline of the rest of the paper is as follows. In Section~\ref{sec:cyclotomic}, we obtain a direct construction of $k$ HMOLS of type $h^q$ for large prime powers $q$. This finite field construction is inspired by a method in \cite{DS} that was applied for $k \le 6$. Then, in Section~\ref{sec:constructions}, we adapt a product-style MOLS construction in \cite{WilsonMOLS} to the setting of HMOLS. The proof of our main result, Theorem~\ref{main}, is completed in Section~\ref{sec:proof}. We conclude with a discussion of a few next steps for research on HMOLS. \section{A cyclotomic construction} \label{sec:cyclotomic} \subsection{Expanding transversal designs of higher index} Let $\lambda$ be a positive integer. We define a TD$_\lam(k,n)$ similarly as a TD$(k,n)$, except that any two points in distinct groups appear together in exactly $\lam$ blocks (and otherwise in zero blocks). The integer $\lam$ is called the \emph{index} of the transversal design. 
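Before describing the expansion, we note that the index condition is straightforward to verify by computer. The following Python sketch is ours and is included only as an illustration (none of the names below come from existing software); it builds the blocks $\{(a+u\cdot v,v):v\in H^d\}$ over $H=\F_2$ with $d=2$, the construction used in the proof of Proposition~\ref{td-projection} below, and confirms that every pair of points from distinct groups lies in exactly $h^{d-1}=2$ blocks, i.e.\ that these blocks form a TD$_2(4,2)$.
\begin{verbatim}
# Illustrative sketch (not the authors' code): build the blocks
# {(a + u.v, v) : v in H^d} over H = F_2, d = 2, and check that every
# pair of points from distinct groups lies in exactly h^(d-1) = 2 blocks.

from itertools import product, combinations

h, d = 2, 2
H = range(h)
Hd = list(product(H, repeat=d))            # group labels: the h^d vectors

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v)) % h

# one block for each pair (a, u); the block meets the group labelled v
# in the point (a + u.v, v)
blocks = [frozenset(((a + dot(u, v)) % h, v) for v in Hd)
          for a in H for u in Hd]

index = {}                                 # block counts per cross-group pair
for B in blocks:
    for p, r in combinations(sorted(B), 2):
        if p[1] != r[1]:                   # points in distinct groups only
            index[(p, r)] = index.get((p, r), 0) + 1

assert set(index.values()) == {h ** (d - 1)}
\end{verbatim}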
The main idea in what follows is to expand a TD$_\lam(k,h)$, where $h$ is the desired hole size, into an HTD$(k,h^q)$ for suitable large prime powers $q$. Unless $k$ is small relative to $h$, the input design for this construction may require large index $\lam$. When $h$ is itself a prime power, a TD$_\lam(k,h)$ naturally arises from a linear algebraic construction. \begin{prop} \label{td-projection} Let $h$ be a prime power and $d$ a positive integer with $k \le h^d$. Then there exists a TD$_{h^{d-1}}(k,h)$. \end{prop} \begin{proof} Let $H$ be a field of order $h$. Our construction uses points $H \times H^d$, where groups are of the form $H \times \{v\}$, $v \in H^d$. Consider the family of blocks $$\mathcal{B} =\{ \{ (a+u\cdot v,v): v \in H^d \} : a \in H, u \in H^d \},$$ where $u \cdot v$ denotes the usual dot product in the vector space $H^d$. The family $\mathcal{B}$ can be viewed as the result of developing the subfamily $\mathcal{B}_0 =\{ \{ (u\cdot v,v): v \in H^d \} : u \in H^d \}$ additively under $H$. Fix two elements $v_1 \neq v_2$ in $H^d$ and a `difference' $\delta \in H$. Then since $|\{u : u \cdot (v_1-v_2)=\delta\}|=h^{d-1}$, it follows that there are exactly $h^{d-1}$ elements in $\mathcal{B}_0$ which achieve difference $\delta$ across the groups indexed by $v_1$ and $v_2$. Therefore, two points $(a_1,v_1)$, $(a_2,v_2) \in H \times H^d$ are together in exactly one translate of each of those blocks, where $a_1-a_2=\delta$. We have shown that $\mathcal{B}$ produces a transversal design of index $h^{d-1}$ on the indicated points and group partition; the restriction to (any) $k$ groups produces the desired TD$_{h^{d-1}}(k,h)$. \end{proof} We now use a standard product construction to build higher index transversal designs for the case where $h$ has multiple distinct prime divisors. \begin{prop} If there exists both a TD$_{\lambda_1}(k,h_1)$ and a TD$_{\lambda_2}(k,h_2)$, then there exists a TD$_{\lambda_1\lambda_2}(k,h_1h_2)$. \end{prop} \begin{proof} Take the given TD$_{\lambda_i}(k,h_i)$ on point set $[k]\times H_i$, $i=1,2$, where $[k]$ indexes the groups. We construct our TD$_{\lambda_1\lambda_2}(k,h_1h_2)$ on points $[k]\times H_1 \times H_2$. For each block $\beta$ of the TD$_{\lambda_1}(k,h_1)$, we put the blocks of a TD$_{\lambda_2}(k,h_2)$ on $\{(x,y,z) : (x,y)\in\beta, z\in H_2\}$. It is easy to verify the resulting design is a TD$_{\lambda_1\lambda_2}(k,h_1h_2)$. \end{proof} The next result follows immediately from the previous two propositions and induction. \begin{cor} \label{prod-inf} Let $h \ge 2$ be an integer which factors into prime powers as $q_1q_2\cdots q_{\omega(h)}$. Put $\lam(h,k):=\prod_{i=1}^{\omega(h)} q_i^{d_i-1}$, where $d_i=\lceil \log_{q_i} k\rceil$ for each $i$. Then there exists a TD$_\lambda(k,h)$. \end{cor} Next, we show how to expand a transversal design of group size $h$ and index $\lam$ into an HTD with hole size $h$ (and index one). Roughly speaking, elements are expanded into copies of a finite field $\F_q$, where $q \equiv 1 \pmod{\lam}$. Each block is lifted so that previously overlapping pairs now cover the cyclotomic classes of index $\lam$, and then blocks are developed additively in $\F_q$. To this end, we cite a guarantee of R.M. Wilson on cyclotomic difference families in sufficiently large finite fields. \begin{lemma}[Wilson; see \cite{WilsonCyc}, Theorem 3] \label{wilson-cyc} Let $\lambda$ and $k$ be given integers, $\lambda,k \ge 2$. 
For any prime power $q \equiv 1 \pmod{\lambda}$ with $q>\lambda^{k(k-1)}$, there exists a $k$-tuple $(a_1,\dots,a_k) \in \F_q^k$ such that the $\binom{k}{2}$ differences $a_j-a_i$, $1 \le i < j \le k$, belong to any prespecified cosets of the index-$\lambda$ subgroup of $\F_q^\times$. \end{lemma} Applying this, we have the following result which mirrors \cite[Construction 6]{DLL}. \begin{prop} \label{htd-cons} Suppose there exists a TD$_\lam(k,h)$ and $q$ is a prime power with $q \equiv 1 \pmod{\lam}$, $q>\lam^{k(k-1)}$. Then there exists an HTD$(k,h^q)$. \end{prop} \begin{proof} Consider a TD$_\lam(k,h)$ on $[k] \times H$, with block collection $\mathcal{B}$. Consider the collection of point-block incidences $S:=\{((x,y),\beta) : (x,y)\in\beta\in\mathcal{B}\}$. Let $\mu : {S \choose 2}\to\{0,1,\dots,\lambda-1\}$ be defined such that for each fixed pair $(i,y), (j,y')$ with $1\leq i<j\leq k$, $$\left\{\mu\left(\{((i,y),\beta),((j,y'),\beta)\}\right) : \beta\supset \{(i,y),(j,y')\}\right\}=\{0,1,\dots,\lambda-1\}.$$ (One can choose such a $\mu$ via a `greedy labeling'.) Pick a prime power $q\equiv 1\pmod{\lambda}$, $q>\lam^{k(k-1)}$, and let $C_0,C_1,\dots, C_{\lambda-1}$ denote the cyclotomic classes of index $\lambda$ in $\F_q$. By Lemma~\ref{wilson-cyc} there is a map $\phi : S\to \F_q$ such that for every block $\beta\in\mathcal{B}$, and $i<j$, $\phi((i,y),\beta)-\phi((j,y'),\beta)\in C_t$, where $t=\mu(\{((i,y),\beta),((j,y'),\beta)\})$. We construct an HTD$(k,h^q)$ on $[k]\times H\times \F_q$ as follows. For each $a\in C_0$, $\beta\in \mathcal{B}$, and $c\in \F_q$, include the block $a\beta'+c$, where $a\beta'+c=\{(x,y,a\phi((x,y),\beta)+c) : (x,y)\in \beta\}$. It is clear that if two points are in the same group, they will appear together in no block; this is inherited from the original TD$_\lam(k,h)$. Consider two points in different groups, but the same hole, say $(x,y,z)$, and $(x',y',z)$, where $x \neq x'$. If there were some block $a\beta'+c$ containing both points then we would have $\phi((x,y),\beta)=\phi((x',y'),\beta)$, an impossibility, since this difference lies in a cyclotomic class and is therefore nonzero. It remains to show that any two points from different groups and holes appear together in exactly one block. Let $(x,y,z)$ and $(x',y',z')$ be two such points. Among the $\lam$ blocks of the TD$_\lam(k,h)$ containing both $(x,y)$ and $(x',y')$, the values of $\mu$ are distinct, so there is exactly one such block $\beta$ satisfying $z-z'\in C_t$, where $t=\mu(\{((x,y),\beta),((x',y'),\beta)\})$. Then there is exactly one $a\in C_0$ satisfying $a(\phi((x,y),\beta)-\phi((x',y'),\beta))=z-z'$, and so our two chosen points belong to the block $a\beta'+c$, where $c=z-a\phi((x,y),\beta)$; since $\beta$, $a$ and $c$ are forced as above, this block is unique. \end{proof} Combining Corollary \ref{prod-inf} and Proposition \ref{htd-cons}, we obtain a construction of HTD$(k,h^q)$ for general $h$ and $k$ and certain large prime powers $q$. \begin{thm} \label{cyclotomic-hmols} Let $h \ge 2$ be an integer which factors into prime powers as $q_1q_2\cdots q_{\omega(h)}$. Then there exists an HTD$(k,h^q)$ for all prime powers $q\equiv 1\pmod{\lam(h,k)}$, $q>\lam(h,k)^{k(k-1)}$. In other words, $N(h^q) \ge k$ for all prime powers $q\equiv 1 \pmod{\lam(h,k+2)}$ with $q>\lam(h,k+2)^{(k+2)(k+1)}$. \end{thm} \subsection{Template matrices and explicit computation} We include here some remarks on explicit computer-aided construction of HTD$(k,h^q)$ in the special case of prime hole size $h$. The `expansion' construction of Proposition~\ref{htd-cons} relies on lifting all blocks so that the differences across any two points fall into distinct cyclotomic classes. 
Using a `template matrix' method introduced by Dinitz and Stinson \cite{DS}, it is possible to impose some additional structure on this lifting to gain efficiency in computations. With notation similar to before, we define an $h^d \times h^d$ `template matrix' $T_h(d)$ as the Gram matrix of the vector space $H^d$. That is, rows and columns of $T_h(d)$ are indexed by $H^d$, and $T_h(d)_{uv} = u \cdot v$. Although the order in which columns appear is unimportant, it is convenient to index the rows in lexicographic order. When $h=2$, the template is simply a Walsh--Hadamard matrix (with entries $0,1$ instead of $\pm 1$). We offer another example below. \begin{ex} Consider the case $h=3$, $d=2$, which is suitable for the construction of up to $7=3^2-2$ HMOLS having hole size $3$. With rows (and columns) indexed by the lex order on $\F_3^2$, we have $$T_3(2)= \left[\begin{array}{ccc|ccc|ccc} 0&0&0&0&0&0&0&0&0\\ 0&1&2&0&1&2&0&1&2\\ 0&2&1&0&2&1&0&2&1\\ \hline 0&0&0&1&1&1&2&2&2\\ 0&1&2&1&2&0&2&0&1\\ 0&2&1&1&0&2&2&1&0\\ \hline 0&0&0&2&2&2&1&1&1\\ 0&1&2&2&0&1&1&2&0\\ 0&2&1&2&1&0&1&0&2\\ \end{array}\right].$$ \end{ex} Observe that the difference between any two distinct columns of $T_h(d)$ achieves every value in $H$ exactly $h^{d-1}$ times each; this is essentially the content of Proposition~\ref{td-projection}. For $k \le h^d$, the restriction of $T_h(d)$ to any $k$ columns has the same property. As we illustrate in Example~\ref{401} to follow, aiming for a value of $k$ less than $h^d$ may be a worthwhile tradeoff in computations. We use the template matrix in conjunction with the following `relative difference matrix' setup for HTDs. \begin{lemma}[see \cite{DS}]\label{difcon} Let $G$ be an abelian group of order $g$ with subgroup $H$ of order $h$, and $\mathcal{B}\subseteq G^k$. If for all $r,s$ with $1\leq r<s\leq k$ and each $a\in G\setminus H$ there is a unique $b\in \mathcal{B}$ with $b_r-b_s=a$, then there exists an HTD$(k,h^{g/h})$. \end{lemma} To further set up the construction, fix integers $d,h \ge 2$, and put $\lam=h^{d-1}$. Let $q\equiv 1\pmod{\lam}$ be a prime power. Let $\omega$ be a multiplicative generator of $\F_q^\times$, and define $C_0:= \langle \omega^{\lam} \rangle$ to be the index-$\lam$ subgroup of $\F_q^\times$. For $1\leq i < \lam$, we denote the coset $\omega^i C_0$ by $C_i$. Let $k \le h^d$. Given two $k$-tuples $t \in X^k$ and $u \in Y^k$, define $t \circ u=((t_i,u_i) : 1 \le i \le k) \in (X \times Y)^k$. We take $X=H=\F_h$ and $Y=\F_q$ in what follows, so that the group in Lemma~\ref{difcon} is $G= \F_h \times \F_q$ with subgroup $\F_h \times \{0\}$. Letting $t_1,t_2,\dots, t_{h^d}$ denote the rows of $T_h(d)$ (restricted to a fixed choice of $k$ columns), our construction amounts to a selection of vectors $u_1,u_2,\dots,u_{h^d} \in \F_q^k$ such that $$\mathcal{B} = \{ t_i \circ (x u_i) : x \in C_0, 1 \le i \le h^d \}$$ satisfies the hypotheses of Lemma~\ref{difcon}. Dinitz and Stinson \cite{DS} and, later, Abel and Zhang \cite{AZ} reduce the search for such vectors $u_i$ by assuming they have the form $$ u_1, \omega u_1, \omega^2 u_1, \dots, \omega^{\lam-1}u_1, \dots, u_h, \omega u_h, \omega^2 u_h, \dots, \omega^{\lam-1}u_h.$$ With this reduction, $\mathcal{B}$ produces an HTD if the quotients $(u_{ir}-u_{is})(u_{jr}-u_{js})^{-1}$ lie in certain cosets of $C_0$ for each pair $r,s$ with $1 \le r < s \le k$. In more detail, fix two such column indices and consider two blocks $b,b' \in \mathcal{B}$ arising from a choice of two rows of $T_h(d)$. 
When these rows are in the same block of $\lam=h^{d-1}$ consecutive rows, we automatically avoid $b_r-b_s = b'_r-b'_s$ because of the different powers of $\omega$ multiplying the same $u_i$. On the other hand, when these rows are in, say, the $i$th block and $j$th block of $\lam$ rows, $i\neq j$, we must ensure that the quotient $(u_{ir}-u_{is})(u_{jr}-u_{js})^{-1}$ avoids those cyclotomic classes indexed by $e'-e \pmod{\lam}$, whenever $\omega^e u_i$ and $\omega^{e'} u_j$ index rows of $T_h(d)$ which have equal $(r,s)$-differences. It is a routine (but somewhat tedious) exercise to characterize the `allowed cosets', either computationally for specific $h,d$ or in general. We omit the details, but point out that, for each $r$ and $s$, an arithmetic progression of cyclotomic classes (with difference a power of $h$) is available. Now, given a table of allowed cosets, the vectors $u_1,\dots,u_h \in \F_q^k$ can be chosen one at a time, where each new vector has coset restrictions on its $(r,s)$-differences. The guarantee of Lemma~\ref{wilson-cyc} can be used for this purpose (giving an alternate proof of Theorem~\ref{cyclotomic-hmols} in the case of prime $h$). However, in practice it often suffices to take significantly smaller values of $q$. \begin{ex} \label{401} To illustrate the method, we construct 9 HMOLS of type $2^{401}$ in $\F_2 \times \F_{401}$; that is, we consider $h=2$, $q=401$. Instead of using all $16$ columns of the template $T_2(4)$, we require only $9+2=11$, as indicated below. Let \begin{align*} u_1&=(284, 136, 249, 334, 1, 202, 140, 307, -, 35, 312, -, 0, -, -, -)\\ \text{and}~u_2&=(283, 297, 137, 60, 1, 210, 102, 39, -, 241, 111, -, 0, -, -, -). \end{align*} It can be verified that the quotients $(u_{2r}-u_{2s})(u_{1r}-u_{1s})^{-1}$ all lie in allowed cosets for $T_2(4)$ for any distinct indices $r$ and $s$ at which both vectors have nonblank entries. (As an explanation for the unnatural ordering of entries, it turns out that a column-permutation of the template $T_2(4)$ was more convenient for the computations, at least with our approach.) \end{ex} To our knowledge, Example~\ref{401} provides the first (explicit) construction of more than $6$ HMOLS of type $2^n$ for any $n>1$. \section{Recursive constructions} \label{sec:constructions} As we move away from prime powers, we present a product construction which scales the number of holes. The idea is to join (copies of) equal-sized HMOLS on the diagonal and ordinary MOLS off the diagonal. Our proof uses the language of transversal designs. \begin{prop} \label{prod2} $N(h^{mn}) \ge \min \{N(1^m),N(hn),N(h^n) \}$. \end{prop} \begin{proof} We show that the existence of an HTD$(k,h^{mn})$ is implied by the existence of an HTD$(k,1^m)$, TD$(k,hn)$ and HTD$(k,h^n)$. 
Second, let us take an HTD$(k,1^m)$ on $[k] \times M$ and, for each block $B=\{(i,w_i): i = 1,\dots,k\}$, include the blocks of a TD$(k,hn)$ on $\cup_i \{i\} \times H \times \{w_i\} \times X$, where in each case we use the natural group partition induced by first coordinates. It remains to verify that the block set as constructed covers every pair of points as needed for an HTD$(k,h^{mn})$. To begin, since each of our ingredient blocks is transverse to the group partition, it is clear that two distinct points in the same group are together in no block. Moreover, two distinct points in the same hole appear in the same HTD$(k,h^n)$, the hole partition of which is inherited from our resultant design. Therefore, such elements are also together in no block. Consider then, two elements $(i_1,j_1,w_1,x_1)$ and $(i_2,j_2,w_2,x_2)$ with $i_1 \neq i_2$ and $(w_1,x_1) \neq (w_2,x_2)$. If $w_1=w_2$, this pair of points occurs in the same HTD$(k,h^n)$, and thus in exactly one block. On the other hand, if $w_1 \neq w_2$, we first locate the unique block of the HTD$(k,1^m)$ containing $(i_1,w_1)$ and $(i_2,w_2)$, and then, within the corresponding TD$(k,hn)$, identify the unique block containing our two given points. We remark that the holes can be safely ignored in this latter case, since we are assuming $w_1 \neq w_2$. \end{proof} Although the idea behind the construction in Proposition~\ref{prod2} is very standard, we could not find this result mentioned explicitly in the literature. To set up our next construction, we recall that an incomplete latin square can have holes that partition a proper subset of the index set. In particular, we consider sets of mutually orthogonal $n \times n$ incomplete latin squares with a single common $h \times h$ hole for integers $n>h>0$. The maximum number of squares in such a set is commonly denoted $N(n;h)$. \begin{ex} A noteworthy value is $N(6;2)=2$ in spite of the nonexistence of a pair of orthogonal latin squares of order six. The squares, with common hole $H=\{1,2\}$, are shown below. \begin{center} \begin{tabular}{|cccccc|} \hline &&3&4&5&6\\ &&4&3&6&5\\ 6&3&5&1&4&2\\ 4&5&6&2&3&1\\ 3&6&2&5&1&4\\ 5&4&1&6&2&3\\ \hline \end{tabular} \hspace{1.5cm} \begin{tabular}{|cccccc|} \hline &&3&5&6&4\\ &&4&6&5&3\\ 3&5&2&4&1&6\\ 6&4&1&3&2&5\\ 4&6&5&1&3&2\\ 5&3&6&2&4&1\\ \hline \end{tabular} \end{center} \end{ex} Given a set of $k-2$ mutually orthogonal incomplete latin squares of type $(n;h)$, reading blocks as $k$-tuples produces an \emph{incomplete transversal design}, abbreviated either ITD$(k,(n;h))$ or $\text{TD}(k,n)-\text{TD}(k,h)$. The latter notation is not meant to suggest that a TD$(k,h)$ exists as a subdesign, but rather that two elements from the hole are uncovered by blocks. The interested reader is referred to \cite{Handbook,MOLStable} for more information and references on these objects. We now extend Proposition~\ref{prod2} to get an analog of Wilson's MOLS construction \cite[Theorem 2.3]{WilsonMOLS}. \begin{prop} \label{wilsonish-cons} For $0 \le u < t$, \begin{equation*} N(h^{mt+u}) \ge \min \{N(t)-1,N(h^m),N(hm),N(hm+h;h),N(h^u) \}. \end{equation*} \end{prop} \begin{proof} We show that the existence of an HTD$(k,h^{mt+u})$ is implied by the existence of a TD$(k+1,t)$, HTD$(k,h^m)$, TD$(k,hm)$, HTD$(k,h^u)$ and an ITD$(k,(hm+h;h))$. We remark that the first of these is equivalent to a resolvable TD$(k,t)$. The set of points for our design is $[k] \times H \times (M \times X \cup Y)$, where $|H|=h$, $|M|=m$, $|X|=t$, and $|Y|=u$. 
Similar to the proof of Proposition~\ref{prod2}, the groups are induced by first coordinates, and the holes are `copies of $H$'. Begin with a TD$(k+1,t)$ on $([k] \cup \{0\}) \times X$, say with blocks $\mathcal{A}$. Let $x_* \in X$ and assume, without loss of generality, that the blocks in $\mathcal{A}$ incident with $(0,x_*)$ are of the form $\{(i,x):i=1,\dots,k\} \cup \{(0,x_*)\}$. In other words, in the induced resolvable TD$(k,t)$, assume one parallel class is labeled as $[k] \times \{x\}$, $x \in X$. Let us identify $Y$ with any $u$-element subset of $\{0\} \times (X \setminus \{x_*\})$. For each block in $\mathcal{A}$ of the form $\{(i,x):i=1,\dots,k\} \cup \{(0,x_*)\}$, include the blocks of an HTD$(k,h^m)$, on $[k] \times H \times M \times \{x\}$ with groups and holes as usual. Consider now a block $B \in \mathcal{A}$ which does not contain $(0,x_*)$, say $B=\{(i,x_i):i=0,1,\dots,k\} \in \mathcal{A}$ where $x_0 \neq x_*$. Put $y_0 = (0,x_0)$. If $y_0 \not\in Y$ (that is if $B$ does not intersect $Y$), we include the blocks of a TD$(k,hm)$ on $\cup_{i=1}^k \{i\} \times H \times M \times \{x_i\}$ with groups and holes as usual. On the other hand, if $y_0 \in Y$ (that is if $B$ intersects $Y$), we include the blocks of an ITD$(k,(hm+h;h))$ on the points $\cup_{i=1}^k \{i\} \times H \times (M \times \{x_i\} \cup \{y_0\})$ and such that the hole of this ITD occurs as $\cup_{i=1}^k \{i\} \times H \times \{y_0\}$. To finish the construction, we include the blocks of an HTD$(k,h^u)$ on $[k] \times H \times Y$, where again the natural partition into groups and holes is used. We have used four types of blocks, to be referenced below in the order just described. As a verification, we consider two elements in different groups and holes. Suppose $i$ and $i'$ index two different groups. There are cases to consider. Consider first a pair of points of the form $(i,j,w,x)$ and $(i',j',w',x')$, where $(w,x) \neq (w',x')$. If $x=x'$, then the two points appear together in exactly one block of the first kind. If $x \neq x'$, then we consider the unique block $B \in \mathcal{A}$ containing $(i,x)$ and $(i',x')$. Our two points are either in exactly one block of the second kind if $B \cap Y = \emptyset$ or exactly one block of the third kind otherwise. Consider now the points $(i,j,w,x)$ and $(i',j',y')$, where $y' \in Y$. There is exactly one block of the TD$(k+1,t)$ containing $(i,x)$ and $y_0$. Examining the ITD prescribed by the construction, the points $(i,j,w,x)$ and $(i',j',y')$ appear together in exactly one block of the third kind, since $i \neq i'$ and only one of these points belongs to the hole. Finally, two points $(i,j,y)$ and $(i',j',y')$ appear together in one (and only one) block of the fourth kind if and only if $y \neq y'$. \end{proof} \begin{rk} The construction in Proposition~\ref{prod2} is just the specialization $Y=\emptyset$ of that of Proposition~\ref{wilsonish-cons}. However, we have kept the former stated separately since it requires no assumption on $N(hm+h;h)$. \end{rk} \section{Lower bounds} \label{sec:proof} \subsection{Preliminary bounds} To make use of Proposition~\ref{wilsonish-cons}, it is helpful to have a lower bound on $N(n;h)$ resembling Beth's bound for $N(n)$. \begin{thm} \label{imols-bound} Let $h$ be a positive integer. Then $N(n;h) > n^{1/29.6}$ for sufficiently large $n$. \end{thm} \begin{proof} We use the construction of \cite[Theorem 2.4]{WilsonMOLS}. 
A minor variant gives that, for $0 \le u,v \le t$, \begin{equation} \label{imols-cons} N(mt+u+v;v) \ge \min \{N(m),N(m+1),N(m+2),N(t)-2,N(u)\}. \end{equation} (To clarify, the cited theorem `fills the hole' of size $v$ so that the left side becomes $N(mt+u+v)$ and the minimum on the right side includes $N(v)$.) Put $v=h$ and suppose $k$ is a large integer. For $n \ge k^{29.6}$, write $n = mt+u$, where $m,t,u \ge k^{14.7995}$. Then, from Beth's inequality, there exist $k$ MOLS of each of the side lengths $m,m+1,m+2,u$, and also $k+2$ MOLS of side length $t$. It follows from (\ref{imols-cons}) that $N(n+h;h) \ge k$, as required. \end{proof} The forthcoming proof of Theorem~\ref{main} makes use of two number-theoretic lemmas which are minor variants of classical results. The first of these concerns the selection of a prime, with a congruence restriction, in a large and wide enough interval. \begin{lemma} \label{Dirichlet-estimate} For any sufficiently large integer $M$ and any real number $x>e^M$ there exists a prime $p \equiv 1 \pmod{M}$ satisfying $x < p \le 2x$. \end{lemma} \begin{proof} We use a result \cite[Theorem 1.3]{Dirichlet-bounds} of Bennett, Martin, O'Bryant and Rechnitzer concerning the prime-counting function $$\pi(x;q,a) := \#\{p \le x: p~\text{ is prime}, p \equiv a \pmod{q}\}.$$ With $q=M$ and $a=1$, their estimate implies \begin{equation*} \left| \pi(x;M,1) - \frac{{\mathrm{Li}}(x)}{\phi(M)} \right| \le \frac{1}{160} \frac{x}{(\log x)^2} \end{equation*} for $M>10^5$ and all $x > e^M$. Since $\mathrm{Li}(x)\sim x/\log(x)$ and $\phi(M) < \log x$, a routine calculation gives $\pi(2x;M,1)-\pi(x;M,1) \ge 1$ for sufficiently large $x$ and $M$. \end{proof} \begin{rk} Lemma~\ref{Dirichlet-estimate} actually holds with `2' replaced by any constant greater than one; however, the present form suffices for our purposes. \end{rk} Next, we have a Frobenius-style representation theorem for large integers. \begin{lemma} \label{Frobenius} Let $a,b$ and $C$ be positive integers with $\gcd(a,b)=1$. Any $n > a(b+1)(b+C)$ can be written in the form $n=ax+by$ where $x$ and $y$ are integers satisfying $x \ge C$ and $y > ax$. \end{lemma} \begin{proof} The integers $a(C+1), a(C+2), \dots, a(C+b)$ cover all congruence classes mod $b$. Suppose $n \equiv a(C+j) \pmod{b}$, where $j \in \{1,\dots,b\}$. Put $x=C+j$ and $y=(n-ax)/b$. Then $y$ is an integer with \begin{equation*} y > \frac{a(b+1)(C+b)-a(C+b)}{b} = a(C+b) \ge ax.\qedhere \end{equation*} \end{proof} \subsection{Proof of the main result} We are now ready to prove our asymptotic lower bound on HMOLS of type $h^n$. \begin{proof}[Proof of Theorem~\ref{main}] Put $M=\lam(h,k+2)$, as defined in Corollary~\ref{prod-inf}. Note that $M \le (k+2)^{\omega(h)}$. Let $K$ denote the ceiling of $k^{(\omega(h)+\epsilon/4)k^2}$, which we note for large $k$ exceeds both $e^M$ and $M^{(k+2)(k+1)}$. Using Lemma~\ref{Dirichlet-estimate}, choose two primes $q_1,q_2 \equiv 1 \pmod{M}$ where $q_2 \in (K,2K]$ and $q_1 \in (2K,4K]$. With $m=q_2$, we have $N(h^m) \ge k$ from Theorem~\ref{cyclotomic-hmols}, $N(hm) \ge k$ from Beth's inequality, and $N(hm+h;h) \ge k$ from Theorem \ref{imols-bound}. The latter two bounds use the assumption that $k$ is large. From the hypothesis on $n$ and choice of $q_i$, we have for large $k$, $$n > k^{(3+\epsilon)\omega(h)k^2} > 17K^3 > q_1(q_2+1)(q_2+k^{14.8}).$$ Using Lemma~\ref{Frobenius}, write $n=q_1s+q_2t$, where $s,t$ are integers satisfying $s \ge k^{14.8}$ and $t > q_1s$. 
Put $u=q_1s$ so that, with this alternate notation, we have $n=mt+u$ with $t>u$. Observe that $N(h^u) \ge k$ from Proposition~\ref{prod2} with $s$ taking the role of $m$ and $q_1$ taking the role of $n$. We additionally have $N(t) > k$ from Beth's inequality and our lower bound on $t$. From the above properties of $m,t,u$, Proposition~\ref{wilsonish-cons} implies $N(h^n) \ge k$. \end{proof} \begin{ex} We illustrate the proof method by computing an explicit bound for the existence of six HMOLS of type $2^n$. We show that $n> 8 \times 50 \times 148 = 59200$ suffices. From \cite[Table 1]{ABG}, there exist six HMOLS of types $2^8$ and $2^{49}$. (The former does not arise from the cyclotomic construction of Section 2, but it helps us optimize the bound.) From \cite[Table III.3.83]{Handbook}, we have $N(1^s) \ge 6$ for all $s \ge 99$. It follows by Proposition~\ref{prod2} that $N(2^{8s}) \ge 6$ for all $s \ge 99$. Put $m=49$ and note that $N(2m)=N(98) \ge 6$ and $N(2m+2;2) = N(100;2) \ge 6$, where the latter appears in \cite[Table III.4.14]{Handbook}. Using Lemma~\ref{Frobenius} with $(a,b,C)=(8,49,99)$, write $n=8s+49t$ where $s \ge 99$ and $t > 8s$. From \cite[Table III.3.81]{Handbook}, we have $N(t) \ge 7$. Letting $h=2$ and $u=8s$, we conclude from Proposition~\ref{wilsonish-cons} that $N(2^n) \ge 6$. \end{ex} \subsection{Inverting the bound} Here, we offer a lower bound on $N(h^n)$ in terms of $n$. \begin{thm}\label{N-bound} Let $h\geq 2$ be an integer and $\delta>2$ a real number. Then $N(h^n)\ge (\log n)^{1/\delta}$ for all $n>n_0(h,\delta)$. \end{thm} \begin{proof} If $k$ is an integer not exceeding the right side of the above bound, then $\log n \ge k^\delta > C k^2 \log k$ for any constant $C >3 \omega(h)$ and sufficiently large $k$. The existence of $k$ HMOLS of type $h^n$ then follows from Theorem~\ref{main}. \end{proof} \begin{rk} Various slightly better `inverse bounds' are possible. For instance, we have $N(h^n)\ge e^{\frac{1}{2}W(\log(n)/2\omega(h))}$ for sufficiently large $n$, where $W$ denotes the Lambert $W$ function, the inverse of $x \mapsto xe^x$. The constants here represent one choice of many. \end{rk} \section{Future directions} It would of course be desirable to produce a lower bound of the form $N(h^n) \ge n^{\delta}$ for some $\delta>0$. This appears difficult using the presently available methods over finite fields, although a sophisticated randomized construction is plausible. When $k$ is a fixed positive integer (rather than sufficiently large), our proof method can still compute, in principle, a lower bound on $n$ such that $N(h^n) \ge k$. However, this bound incurs a considerable penalty in the analytic number theory for small integers $k$. To track this penalty, we would require a bound on the selection of prime powers in the spirit of \cite[Lemma 5.3]{Chang-BIBD1}, or a deeper look at explicit estimates for primes in Dirichlet's theorem such as the data attached to \cite{Dirichlet-bounds}. Concerning explicit sets of HMOLS, we showed $N(2^{401}) \ge 9$ in Example~\ref{401}. A few other sets of 9 HMOLS of type $2^n$ for $n<1000$ were found, along with a set of 10 HMOLS of type $2^{1009}$. Our search for a pair of vectors with the needed cyclotomic constraints was very na\"ive, and we restricted our computational efforts to the case $h=2$. With improved code to search for vectors $u_1,u_2,\dots,u_h$ producing a set of HMOLS, one could envision an expanded table of lower bounds. Such an undertaking could offer a better sense of what to expect in practice from the standard construction methods and set a benchmark for future bounds.
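In the same computational spirit, the elementary arithmetic reductions used above are easy to check by machine. The following short Python script (an illustrative sanity check only, not part of the constructions of this paper; the function name is ours) verifies the Frobenius-type representation underlying the example of Section~\ref{sec:proof}: by Lemma~\ref{Frobenius} with $(a,b,C)=(8,49,99)$, every $n>8\times 50\times 148=59200$ can be written as $n=8s+49t$ with $s\geq 99$ and $t>8s$.
\begin{verbatim}
# Check the Frobenius-type lemma numerically for (a, b, C) = (8, 49, 99):
# every n > a*(b+1)*(b+C) = 59200 should admit n = a*s + b*t
# with s >= C and t > a*s.
a, b, C = 8, 49, 99
bound = a * (b + 1) * (b + C)   # = 59200

def representable(n):
    # t > a*s forces n = a*s + b*t > a*(b+1)*s, so s < n/(a*(b+1));
    # the search range below is therefore exhaustive.
    for s in range(C, n // (a * (b + 1)) + 1):
        r = n - a * s
        if r % b == 0 and r // b > a * s:
            return True
    return False

assert all(representable(n) for n in range(bound + 1, bound + 5001))
print("all n in (59200, 64200] admit a valid representation")
\end{verbatim}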
As a separate line of investigation, it would be of interest to improve the exponent of Theorem~\ref{imols-bound}, our bound on MOLS with exactly one hole of a fixed size. A direct use of the Buchstab sieve as in \cite{WilsonMOLS} is likely to do a bit better than our (indirect) method. \end{document}
\begin{document} \title{Constructing totally $p$-adic numbers of small height} \author{S. Checcoli} \address{S.~Checcoli, Institut Fourier, Universit\'e Grenoble Alpes, 100 rue des Math\'ematiques, 38610 Gi\`eres, France} \email{[email protected]} \author{A. Fehm} \address{A.~Fehm, Institut f\"ur Algebra, Fakult\"at Mathematik, Technische Universit\"at Dresden, 01062 Dresden, Germany} \email{[email protected]} \begin{abstract} Bombieri and Zannier gave an effective construction of algebraic numbers of small height inside the maximal Galois extension of the rationals which is totally split at a given finite set of prime numbers. They proved, in particular, an explicit upper bound for the lim inf of the height of elements in such fields. We generalize their result in an effective way to maximal Galois extensions of number fields with given local behaviour at finitely many places. \end{abstract} \maketitle \section{Introduction} \noindent Let $h$ denote the absolute logarithmic Weil height on the field $\overline{\mathbb{Q}}$ of algebraic numbers. We are interested in explicit height bounds for elements of $\overline{\mathbb{Q}}$ with special local behaviour at a finite set of primes. The first result in this context is due to Schinzel \cite{Sch} who proved a height lower bound for elements in the field of totally real algebraic numbers $\mathbb{Q}^{\rm tr}$, the maximal Galois extension of $\mathbb{Q}$ in which the infinite prime splits totally. More precisely, he showed that every $\alpha\in \mathbb{Q}^{\rm tr}$ has either $h(\alpha)=0$ or \[h(\alpha)\geq \frac{1}{2}\log\left(\frac{1+\sqrt{5}}{2}\right).\] Explicit upper and lower bounds for the limit infimum of the height of algebraic integers in $\mathbb{Q}^{\rm tr}$ are given in \cite{Smy80,Smy81, Fla96}. In \cite{BZ} Bombieri and Zannier investigate the analogous problem for the $p$-adic numbers. More precisely, in \cite[Theorem 2]{BZ} they prove the following: \begin{theorem}[Bombieri--Zannier]\label{BZ_lower} Let $p_1,\dots,p_n$ be distinct prime numbers, for each $i$ let $E_i$ be a finite Galois extension of $\mathbb{Q}_{p_i}$, and $L$ the maximal Galois extension of $\mathbb{Q}$ contained in all $E_i$. Denote by $e_i$ and $f_i$ the ramification index and inertia degree of $E_i/\mathbb{Q}_{p_i}$. Then $$ \liminf_{\alpha\in L} h(\alpha) \;\geq\; \frac{1}{2}\cdot \sum_{i=1}^n\frac{\log(p_i)}{e_i(p_i^{f_i}+1)}. $$ \end{theorem} In the special case $E_i=\mathbb{Q}_{p_i}$, Bombieri and Zannier in \cite[Example 2]{BZ} show that the lower bound in Theorem \ref{BZ_lower} is almost optimal. More precisely: \begin{theorem}[Bombieri--Zannier]\label{BZ_upper} Let $p_1,\dots,p_n$ be prime numbers and let $L$ be the maximal Galois extension of $\mathbb{Q}$ contained in all $\mathbb{Q}_{p_i}$. Then $$ \liminf_{\alpha\in L} h(\alpha) \;\leq\; \sum_{i=1}^n\frac{\log(p_i)}{p_i-1}. $$ \end{theorem} Other proofs, refinements and generalizations were given in \cite{Fil, Pot,FiliPetsche, FP, PS19} See also \cite{Smy07} for a general survey on the height of algebraic numbers. In Remark \ref{rem-Fili} we will discuss in detail the contribution \cite{Fil} and how it compares to our work. The goal of this note is to generalize in an effective way the upper bound Theorem \ref{BZ_upper} to general $E_i$, and to further replace the base field $\mathbb{Q}$ by an arbitrary number field. 
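To give a sense of the size of these bounds, consider the special case of a single prime $p$ with $E_1=\mathbb{Q}_{p}$, so that $e_1=f_1=1$: Theorems \ref{BZ_lower} and \ref{BZ_upper} then bracket the limit inferior as
$$
\frac{\log(p)}{2(p+1)} \;\leq\; \liminf_{\alpha\in L} h(\alpha) \;\leq\; \frac{\log(p)}{p-1},
$$
which for $p=2$ is the numerical range $0.115\ldots\leq \liminf_{\alpha\in L}h(\alpha)\leq 0.693\ldots$.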
Our main result is the following: \begin{theorem}\label{mainthm} Let $K$ be a number field and let $\mathfrak{p}_1,\dots,\mathfrak{p}_n$ be distinct prime ideals of the ring of integers $\mathcal{O}_K$ of $K$. For each $i$, let $E_i$ be a finite Galois extension of the completion $F_i$ of $K$ at $\mathfrak{p}_i$. Denote by $e_i$ and $f_i$ the ramification index and the relative inertia degree of $E_i/F_i$ and write $q_i=|\mathcal{O}_K/\mathfrak{p}_i|=p_i^{{f(\mathfrak{p}_i|p_i)}}$. Then for the maximal Galois extension $L$ of $K$ contained in all $E_i$, \[ \liminf_{\alpha\in L} h(\alpha) \;\leq\; \sum_{i=1}^n \frac{f(\mathfrak{p}_i|p_i)}{[K:\mathbb{Q}]}\cdot\frac{\log(p_i)}{e_i(q_i^{f_i}-1)}. \] More precisely, let \[ C=\max\left\{[K:\mathbb{Q}], |\Delta_K|, \max_i (e_i f_i), \max_i q_i^{f_i}\right\} \] where $\Delta_K$ is the absolute discriminant of $K$. Then for every $0<\epsilon<1$ there exist infinitely many $\alpha\in\mathcal{O}_L$ of height \begin{equation}\label{eqn:thm1} h(\alpha)\leq \sum_{i=1}^n \frac{f(\mathfrak{p}_i|p_i)}{[K:\mathbb{Q}]}\cdot\frac{\log(p_i)}{e_i(q_i^{f_i}-1)}+13 nC^{2n+2}\frac{\log \left([K(\alpha):K]\right)}{[K(\alpha):K]} +\begin{cases}0,&n=1\\n\epsilon,&n>1\end{cases}. \end{equation} Namely, for every $\rho\geq 3C^n$ there exists such $\alpha$ of degree \begin{equation}\label{eqn:thm2} \rho\leq [K(\alpha):K] \leq \begin{cases}C\rho,&n=1\\\rho^{\frac{(4 \log C)^{n+1}}{\log^n(1+\epsilon)}},&n>1\end{cases}. \end{equation} \end{theorem} Note that in the special case $K=\mathbb{Q}$ and $E_i=\mathbb{Q}_{p_i}$ we recover Theorem \ref{BZ_upper}, except that Theorem \ref{mainthm} appears stronger in that the result is effective and the $\liminf$ can be taken over algebraic integers. However, an inspection of the proof of Bombieri and Zannier shows that it is effective as well and does in fact produce algebraic integers. \begin{remark}\label{rem-Fili} Theorem \ref{mainthm} provides an effective version of a result of Fili \cite[Theorem 1.2]{Fil}. The bound in that result seems to differ from ours by the factor $e(\mathfrak{p}_i|p_i)$, and \cite[Theorem 1.1]{Fil} (and similarly \cite[Theorem 9]{FiliPetsche}) also states a variant of Theorem~\ref{BZ_lower} which contradicts our Theorem \ref{mainthm}. According to Paul Fili (personal communication), this is merely an error in normalization in \cite{Fil} and \cite{FiliPetsche} that became apparent when comparing to our result: the $e_v$ in the denominator of Theorems 1.1, 1.2, and Conjecture 1 of \cite{Fil} (and similarly in the statements of \cite{FiliPetsche}) should have been the absolute instead of the relative ramification index. When this correction is made, the lower bound of \cite[Theorem 1.1]{Fil} agrees with the one in \cite[Theorem 13]{FP}, and the upper bound of \cite[Theorem 1.2]{Fil} agrees with the one in Theorem \ref{mainthm}. In any case, Fili's proof of \cite[Theorem 1.2]{Fil} uses capacity theory on analytic Berkovich spaces and does not provide explicit bounds on the degree and the height of a sequence of integral elements in the $\liminf$. Instead, our effective proof is more elementary and is inspired by Bombieri and Zannier's effective proof of Theorem \ref{BZ_upper}. To the best of our knowledge, Theorem \ref{mainthm} is the only result currently available that gives a bound on the height in terms of the degree of such a sequence of $\alpha$, except for the case where $K=\mathbb{Q}$ and $E_i=\mathbb{Q}_{p_i}$ for all $i$, where such a bound can be deduced from \cite{BZ}.
We also remark that our use of \cite[Theorem 1.2]{Fil} in \cite{CF} is limited to the cases where \cite[Theorem 1.2]{Fil} agrees with Theorem \ref{mainthm}. \end{remark} The paper is organised as follows. In Section \ref{prel} we collect all the preliminary results needed to prove Theorem \ref{mainthm}, namely: a consequence of Dirichlet's theorem on simultaneous approximation (Proposition \ref{Dir}), a bound for the size of representatives in quotient rings of rings of integers (Proposition \ref{lem:small_rep}), a variant of Hensel's lemma (Proposition \ref{val-BZ}), a bound for the height of a root of a polynomial defined over a number field in terms of its coefficients (Proposition \ref{H-min-root}), and a construction of special Galois invariant sets of representatives of residue rings of local fields (Proposition \ref{A_i}). The proof of Theorem \ref{mainthm} is carried out in Section \ref{sec:main}. We briefly sketch it here for clarity. Following Bombieri and Zannier's strategy, given $\rho\geq 3C^n$ we construct a monic irreducible polynomial $g\in\mathcal{O}_K[X]$ such that \begin{enumerate}[(i)] \item\label{d-g} its degree is upper and lower bounded in terms of $\rho$ as the degree of $\alpha$ in Theorem~\ref{mainthm}, \item\label{c-g} the complex absolute value of all conjugates of its coefficients is sufficiently small, and \item\label{r-g} all its roots are contained in all $E_i$. \end{enumerate} In Bombieri and Zannier's proof of Theorem \ref{BZ_upper}, \eqref{d-g} and \eqref{c-g} were achieved by using the Chinese Remainder Theorem to deform the polynomial $\prod_{i=1}^{\rho} (X-i)$ into an irreducible polynomial of the same degree with coefficients small enough to give the desired bound for the height of the roots. Then a variant of Hensel's lemma was applied to show that the roots of the constructed polynomial are still in $\mathbb{Q}_{p_i}$ for each $i$. In our generalisation, the degree of the polynomial is carefully chosen to obtain \eqref{d-g} in Section \ref{sec:degree} via Proposition \ref{Dir} (necessary only if $n>1$, which leads to the better bounds in the case $n=1$). The polynomial $g$ satisfying \eqref{c-g} is then constructed in Section \ref{constr-G}: We start with polynomials $\prod_{\alpha\in \tilde A_i} (X-\alpha)$, where now $\tilde A_i\subseteq \mathcal{O}_{E_i}$ is a set constructed using Proposition \ref{A_i}. These polynomials are then merged into an irreducible polynomial $g$ by applying the Chinese Remainder Theorem and Proposition \ref{lem:small_rep} to bound the size of its coefficients. Property \eqref{r-g} is verified in Section \ref{roots-g}, using Proposition \ref{val-BZ}. Finally, Proposition \ref{H-min-root} is applied to show that $g$ has a root $\alpha$ of height bounded from above as desired. \section{Notation and preliminaries}\label{prel} \noindent We fix some notation. If $K$ is a number field or a non-archimedean local field we let $\mathcal{O}_K$ denote the ring of integers of $K$. For an ideal $\mathfrak{a}$ of $\mathcal{O}_K$ we denote by $N(\mathfrak{a})=|\mathcal{O}_K/\mathfrak{a}|$ its norm. For a nonzero prime ideal $\mathfrak{p}$ of $\mathcal{O}_K$, we denote by $v_\mathfrak{p}$ the discrete valuation on $K$ with valuation ring $(\mathcal{O}_K)_{\mathfrak{p}}$ normalized such that $v_\mathfrak{p}(K^\times)=\mathbb{Z}$. 
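(For instance, for $K=\mathbb{Q}(i)$ and $\mathfrak{p}=(1+i)\mathcal{O}_K$ one has $N(\mathfrak{p})=2$ and, since $2=-i(1+i)^2$, $v_{\mathfrak{p}}(2)=2$.)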
If $L/K$ is an extension of number fields and $\mathfrak{P}$ is a prime ideal of $\mathcal{O}_L$ lying above a prime ideal $\mathfrak{p}$ of $\mathcal{O}_K$ we denote by $e(\mathfrak{P}|\mathfrak{p})$ and $f(\mathfrak{P}|\mathfrak{p})$ the ramification index and the inertia degree. For an extension $E/F$ of non-archimedean local fields we denote the ramification index and the inertia degree also by $e(E/F)$ and $f(E/F)$. \subsection{Auxiliary results} In this section we collect the preliminary results we need to prove Theorem \ref{mainthm}. These results are not related to each other and we list them in this section following their order of appearance in the proof of Theorem \ref{mainthm}. \begin{proposition}\label{Dir} Let $x_1,\dots,x_n$ be integers greater than $1$. For every $\rho\geq 3$ and $0<\epsilon<1$ there exist positive integers $r,k_1,\dots,k_n$ such that $r\geq \rho$ and, for all $i$, $r\leq x_i^{k_i}\leq (1+\epsilon)r$ and \[k_i\leq \frac{2^{2n+1}\log^n(\max_jx_j) \log(\rho)}{ \log(x_i) \log^n(1+\epsilon)}.\] \end{proposition} \begin{proof} Say $x_1=\max_ix_i$. Let $\alpha_i=2\log(\rho)/\log(x_i)$ and $Q=\lceil 2\log(x_1)/\log(1+\epsilon)\rceil$. By the simultaneous Dirichlet approximation theorem \cite[Chapter II, Section 1, Theorem 1A]{Schm} there exist positive integers $q,k_1,\dots,k_n$ with $1\leq q< Q^n$ such that $|q\alpha_i-k_i|\leq Q^{-1}$ for all $i$, and thus $|2\log(\rho) q-\log(x_i^{k_i})|\leq \log(1+\epsilon)/2$. Letting $r=\min_i x_i^{k_i}$, one has $\log(r)\geq 2\log(\rho) q-1\geq \log(\rho)$ for $q\geq 1$ and $\rho\geq 3$. In addition, for all $i$, $0\leq\log(x_i^{k_i})-\log(r)\leq\log(1+\epsilon)$, hence $r\leq x_i^{k_i}\leq (1+\epsilon)r$. Finally $k_i\leq q \alpha_i+1 \leq 2 q \alpha_i\leq 2\log(\rho)Q^n\log(x_i)^{-1}$ and replacing $Q$ we get the desired bound. \end{proof} The next proposition deals with bounds for the absolute value of small representatives for quotient rings. \begin{proposition}\label{lem:small_rep} {Let $K$ be a number field of degree $m=[K:\mathbb{Q}]$. Given a nonzero ideal $\mathfrak{a}$ of $\mathcal{O}_K$, there exists a set of representatives $A$ of $\mathcal{O}_K/\mathfrak{a}$ such that, for every $a\in A$ and every $\sigma\in{\rm Hom}(K,\mathbb{C})$, one has \[|\sigma(a)|\leq \delta_K N(\mathfrak{a})^{1/m}\] where $\delta_K=m^{\frac{3}{2}}{2}^{\frac{m(m-1)}{2}}\sqrt{|\Delta_K|}$.} \end{proposition} \begin{proof} This is an immediate consequence of well-known results on lattice reduction. For instance, \cite[Proposition 15]{BFH} gives for every $\alpha\in\mathcal{O}_K$, an element $a\in\mathcal{O}_K$ with $\alpha-a\in\mathfrak{a}$ such that \[\sqrt{\sum_{\sigma}|\sigma(a)|^2}\leq m^{\frac{3}{2}}{\ell}^{\frac{m(m-1)}{2}}\sqrt{|\Delta_K|} N(\mathfrak{a})^{1/m}\] where $\ell$ depends on certain parameters {$\eta\in\ (1/2,1), \delta\in (\eta^2,1)$ and $\theta>0$ coming from applying a variant of the LLL-reduction algorithm as in \cite[Theorem 5.4]{Cha} and \cite[Theorem 7]{NSV} (see also \cite[\S 4, p.595]{BFH}) to a $\mathbb{Z}$-basis of $\mathfrak{a}$. In particular, choosing $\eta=2/3,\delta=7/9$ and $\theta=(\sqrt{19}-4)/3$, we have $\ell=2$, which gives the claimed upper bound for $|\sigma(a)|$.} \end{proof} The following proposition is a variant of Hensel's lemma. \begin{proposition}\label{val-BZ} Let $E$ be a finite extension of $\mathbb{Q}_p$, $\mathfrak{P}$ the maximal ideal of $\mathcal{O}_E$ and $v=v_\mathfrak{P}$. Let $f\in E[X]$ and $x_0\in E$. 
Assume there exist $a,b\in\mathbb{Z}$ such that \begin{enumerate}[(i)] \item\label{ci} $v(f(x_0))>a+b$, \item\label{cii} $v(f'(x_0))\leq a$, \item\label{ciii} $v(f^{(\nu)}(x_0)/\nu!)\geq a-(\nu-1)b$ for every $\nu\geq 2$.\end{enumerate} Then there exists $x\in E$ with $f(x)=0$ and $v(x-x_0)>b$. \end{proposition} \begin{proof} This can be proved precisely as the special case $E=\mathbb{Q}_p$ in \cite[Lemma 1]{BZ}. Alternatively, one can reduce this to one of the standard forms of Hensel's lemma as follows. Let $\beta\in E$ with $v(\beta)=b$. Then $g(X):=(\beta f'(x_0))^{-1}f(\beta X+x_0)$ is in $\mathcal{O}_E[X]$ by $(i)$-$(iii)$ and has a simple zero $X=0$ modulo $\mathfrak{P}$ by $(i)$ and $(ii)$, hence by Hensel's lemma $g$ has a zero $x'\in\mathfrak{P}$, and $x=\beta x'+x_0$ is then the desired zero of $f$. \end{proof} The final proposition in this subsection gives a bound for the height of the roots of a polynomial with small algebraic coefficients. \begin{proposition}\label{H-min-root} Let $K$ be a number field and let $f(X)=X^m+a_{m-1}X^{m-1}+\ldots +a_0\in \mathcal{O}_K[X]$. If $B\geq 1$ with $|\sigma(a_i)|<B$ for every $i$ and every $\sigma\in{\rm Hom}(K,\mathbb{C})$, then $f$ has a root $\alpha$ with \[ h(\alpha)\leq \frac{\log(B\sqrt{m+1})}{m}. \] \end{proposition} \begin{proof} Let $M_K=M_K^0\cup M_K^{\infty}$ be the set of (finite and infinite) places of $K$ and let $d=[K:\mathbb{Q}]$. For a place $v\in M_K$, denote by $d_v=[K_v:\mathbb{Q}_v]$ the local degree. Let \[ \hat{h}(f)=\log\left(\prod_{v\in M_K}M_v(f)^{d_v/d}\right) \] where if $v$ is non-archimedean $M_v(f)=\max_i(|a_i|_v)$, while if $v$ is archimedean and corresponds to the embedding $\sigma\in{\rm Hom}(K,\mathbb{C})$, $M_v(f)$ is the Mahler measure $M(\sigma(f))$ of the polynomial $\sigma(f)$. By \cite[Appendix A, section A.2, pag. 210]{Zan} we have that, if $\alpha_1,\ldots, \alpha_m\in \overline{\mathbb{Q}}$ are the roots of $f$ (with multiplicities), then $\hat{h}(f)=\sum_{i=1}^m h(\alpha_i)$ where $h$ denotes the usual logarithmic Weil height. Thus, if $\alpha$ is a root of $f$ of minimal height, then \begin{equation}\label{h-root} h(\alpha)\leq \frac{\hat{h}(f)}{m}. \end{equation} Let $\sigma_1,\ldots,\sigma_r$ and $\tau_1,\overline{\tau_1},\ldots, \tau_s,\overline{\tau_s}$ be, respectively, the real and pairwise conjugate complex embeddings of $K$ in $\mathbb{C}$, so that $d=r+2s$. As $f$ has coefficients in $\mathcal{O}_K$, $M_v(f)\leq1$ if $v$ is non-archimedean and we have that \begin{align*} \hat{h}(f)&\leq\log\left(\prod_{v\in M_K^{\infty}}M_v(f)^{d_v/d}\right)=\log\left(\prod_{i=1}^r M(\sigma_i(f))^{1/d}\cdot \prod_{j=1}^s M(\tau_j(f))^{2/d}\right)=\\ &=\log\left(\prod_{\sigma\in{\rm Hom}(K,\mathbb{C})}M(\sigma(f))^{1/d}\right). \end{align*} By \cite[Section 3.2.2, formula (3.7)]{Zan} and by our hypothesis on $B$, we have that $M(\sigma(f))\leq B\sqrt{m+1}$ for all $\sigma\in{\rm Hom}(K,\mathbb{C})$, thus $\hat{h}(f)\leq \log(B\sqrt{m+1})$ and, plugging this bound into \eqref{h-root}, we conclude. \end{proof} \subsection{Representatives of residue rings of local fields}\label{sec-aux-p} This subsection contains the technical key result needed to construct the local polynomials in the proof of Theorem \ref{mainthm}. Let $E/F$ be a Galois extension of non-archimedean local fields with Galois group $G$. 
{Let $\mathfrak{p}$ be the maximal ideal of $\mathcal{O}_F$,} $\mathfrak{P}$ be the maximal ideal of $\mathcal{O}_E$ and for $k\in\mathbb{N}$ denote by $\pi_k:\mathcal{O}_E\rightarrow\mathcal{O}_E/\mathfrak{P}^k$ the residue map. It is known that one can always find a $G$-invariant set of representatives of the residue field $\mathcal{O}_E/\mathfrak{P}$, e.g.~the Teichm\"uller representatives. As long as the ramification of $E/F$ is tame, one can also find $G$-invariant sets of representatives of each residue ring $\mathcal{O}_E/\mathfrak{P}^k$, but if the ramification is wild, this is not necessarily so. We will therefore work with the following substitute for such a $G$-invariant set of representatives: \begin{proposition}\label{A_i} Let $E/F$ be a Galois extension of non-archimedean local fields and define $G,\mathfrak{p},\mathfrak{P},\pi_k$ as above. Let $d$ be a multiple of $|G|$. There exists a constant $c$ such that for every $k$ there is $A\subseteq\mathcal{O}_E$ such that \begin{enumerate} \item $A$ is $G$-invariant, \item all orbits of $A$ have length $|G|$, \item $\pi_k|_{A}:A\rightarrow\mathcal{O}_E/\mathfrak{P}^k$ is $d$-to-1 and onto, and \item $\pi_{k+c}|_{A}$ is injective. \end{enumerate} Moreover, if $F$ is a $p$-adic field, one can choose \[ c\leq e(\mathfrak{P}|\mathfrak{p})\left(d+|G|+\frac{e(\mathfrak{p}|p)}{p-1}+1\right). \] \end{proposition} \begin{proof} Note that $G$ naturally acts on $\mathcal{O}_E$ and on $\mathcal{O}_E/\mathfrak{P}^k$, and that $\pi_k$ is $G$-equivariant. Fix some primitive element $\alpha\in\mathcal{O}_E^\times$ of $E/F$ and a uniformizer $\theta\in\mathcal{O}_F$ of $v_\mathfrak{p}$, and let \begin{eqnarray*} e&=&e(\mathfrak{P}|\mathfrak{p}),\\ c_0&=&\max_{1\neq\sigma\in G} v_\mathfrak{P}(\alpha-\sigma\alpha), (\mbox{with } c_0=0\mbox{ if }G=1), \\ c_1&=&\lceil|G|+c_0/e\rceil, \mbox{ and }\\ c &=& e(d+c_1). \end{eqnarray*} Let $k\in\mathbb{N}$ be given. The desired set $A$ is obtained by applying the following Claim in the case $X=\mathcal{O}_E/\mathfrak{P}^k$: \begin{claim} For every $G$-invariant subset $X\subseteq\mathcal{O}_E/\mathfrak{P}^k$ there exists a $G$-invariant subset $A\subseteq\mathcal{O}_E$ with all orbits of length $|G|$ such that $\pi_{k+c}|_{A}$ is injective and $\pi_k|_{A}$ is $d$-to-$1$ onto $X$. \end{claim} We prove the Claim by induction on $|X|$: If $X=\emptyset$, $A=\emptyset$ satisfies the claim. If $X\neq\emptyset$ take $x\in X$ and let $X'=X\setminus Gx$, where $Gx$ denotes the orbit of $x$ under $G$. By the induction hypothesis there exists $A'\subseteq\mathcal{O}_E$ satisfying the claim for $X'$. Choose $a\in\pi_k^{-1}(x)$ and let $k_0=\lceil\frac{k}{e}\rceil$. Then $$ n_0:=\min\{n\geq0: v_\mathfrak{P}(a-\sigma a)\neq e(k_0+n)+v_\mathfrak{P}(\alpha-\sigma\alpha)\;\forall 1\neq\sigma\in G\} < |G|, $$ as $v_\mathfrak{P}(a-\sigma a)-v_\mathfrak{P}(\alpha-\sigma\alpha)$ attains less than $|G|$ many distinct values. Thus for $1\neq\sigma\in G$, \begin{eqnarray*} v_\mathfrak{P}( (a+\theta^{n_0+k_0}\alpha)-\sigma(a+\theta^{n_0+k_0}\alpha) ) &=& \min\{v_\mathfrak{P}(a-\sigma a),e(n_0+k_0)+v_\mathfrak{P}(\alpha-\sigma\alpha)\}\\ &\leq&e(n_0+k_0)+c_0\\&<& k+ec_1, \end{eqnarray*} so if we replace $a$ by $a+\theta^{n_0+k_0}\alpha$, we can assume without loss of generality that $\pi_{k+ec_1}$ is injective on $Ga$ and that $|Ga|=|G|$. 
If we now let $$ A=A'\cup\{\sigma(a)+\theta^{k_0+c_1+j}:\sigma\in G,0\leq j<d/|G_x|\} $$ where $G_x$ is the stabilizer of $x$, then $\pi_k|_{A}$ is $d$-to-$1$ onto $X=X'\cup Gx$ and $A$ is $G$-invariant with all orbits of length $|G|$. As $$ k+ec_1\leq e(k_0+c_1+j)<k+c, $$ we have that $\pi_{k+c}|_{A}$ is injective. Now, if $F$ is a $p$-adic field and if we chose $\alpha\in \mathcal{O}_E$ to be also a generator of $\mathcal{O}_E$ as a $\mathcal{O}_F$-algebra, by \cite[Chap.IV, Ex. 3(c)]{Serre}, one has the explicit bound $c_{0}\leq e(\mathfrak{P}/{p})/(p-1)$ which implies the stated bound for $c$. \end{proof} \begin{remark} Note that if (4) holds for some $c,k,A$, then also for $c',k,A$ for any $c'\geq c$. \end{remark} \section{Proof of Theorem \ref{mainthm}} \label{sec:main} \noindent Using the notation of Theorem \ref{mainthm}, for every $1\leq i\leq n$, let $\mathfrak{P}_i$ be the maximal ideal of $\mathcal{O}_{E_i}$, $v_i$ the extension of $v_{\mathfrak{P}_i}$ to an algebraic closure of $E_i$, $G_i={\rm Gal}(E_i/F_i)$ and $d=\prod_{i=1}^n|G_i|$. Let $C$ be the constant from Theorem \ref{mainthm}, note that $C\geq 2$, and let $c=4C^{n+1}$. Fix an integer $\rho\geq 3C^n$ and note that $\rho/d\geq 3$ since $d\leq C^n$. If $n=0$ let $\epsilon=0$, otherwise fix $0<\epsilon<1$. \subsection{Choosing the right degree} \label{sec:degree} If $n>1$ we apply Proposition \ref{Dir} to $x_i={q_i}^{f_i}$ to obtain positive integers $r>\rho/d$ and $k_1,\ldots,k_n$ such that for every $i$, \begin{enumerate}[(i)] \item \label{bound-r} $r\leq q_i^{f_i k_i} \leq (1+\epsilon) r$ and \item\label{b-ki} $k_i\leq 2^{2(n+1)}(\log C)^{n}\frac{\log(\rho/d)}{\log^n(1+\epsilon)}$, \end{enumerate} where we used that, for every $i$, $\log 2 \leq \log(x_i)=\log({q_i^{f_i}})\leq \log C$. It follows that \begin{equation*}\label{b-r} \log(\rho/d)\leq \log(r)\leq (4\log C)^{n+1}\frac{\log(\rho/d)}{\log^n(1+\epsilon)}. \end{equation*} If $n=1$ we instead set $r=q_1^{f_1k_1}$, where $k_1=\lceil\log(\rho/d)/\log(q_1^{f_1})\rceil$, so that (\ref{bound-r}) holds with $\epsilon=0$, and \begin{equation*}\label{b-r-rho} \log(\rho/d)\leq\log(r)\leq \log(\rho/d)+\log(q_1^{f_1}). \end{equation*} Using that $(4\log C)^{n+1}\geq\log^n(1+\epsilon)$, we conclude \begin{equation}\label{eqn:deg} \rho \leq dr \leq \begin{cases} C\rho,&n=1\\ \rho^{\frac{(4\log C)^{n+1}}{\log^n(1+\epsilon)}},&n>1 \end{cases} \end{equation} \subsection{Construction of the polynomial $g$}\label{constr-G} We first want to prove the following: \begin{claim}\label{claim} For every $i$, there exists a polynomial $g_i\in \mathcal{O}_{K}[X]$ of degree $dr$ whose set of roots ${A_i}$ satisfies \begin{enumerate}[(a)] \item\label{aux1} $A_i\subseteq E_i$, \item\label{aux2} $v_i({\alpha}-{\beta})< k_i+c$ for all ${\alpha},{\beta}\in{A_i}$ with ${\alpha}\neq{\beta}$, and \item\label{aux3} $v({g_i}'(\alpha))\leq d\left(\frac{q_i^{f_i k_i}-1}{q_i^{f_i}-1}+ c\right)$ for every ${\alpha}\in{A_i}$. \end{enumerate} \end{claim} \begin{proof}[Proof of the claim] As $e(\mathfrak{P}_i|\mathfrak{p}_i)(d+|G_i|+\frac{e(\mathfrak{p}_i|p_i)}{p_i-1}+1)\leq C(C^n+C+C+1)\leq 4C^{n+1}=c$, by Proposition \ref{A_i} there is a $G_i$-invariant set $A_i'\subseteq\mathcal{O}_{E_i}$ with all orbits of length $|G_i|$ such that ${A_i'}\rightarrow\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i}$ is $d$-to-$1$ and ${A_i'}\rightarrow\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i+c}$ is injective. 
As $|{A_i'}|=d q_i^{f_i k_i}$, $|G_i|$ divides $d$, and $r\leq q_i^{f_i k_i}$, there exists a $G_i$-invariant subset ${\tilde A_i}\subseteq{A_i'}$ with $|\tilde{A_i}|=d r$. Let $$ \tilde{g_i}=\prod_{\alpha\in \tilde{A_i}}(X-\alpha)\in\mathcal{O}_{F_i}[X]. $$ We first prove that conditions \eqref{aux1}-\eqref{aux3} hold for $\tilde{g_i}$ and the set $\tilde{A_i}$, instead of $g_i$ and $A_i$. Note that $\tilde{g_i}\in\mathcal{O}_{F_i}[X]$ is monic of degree $dr$ and that condition \eqref{aux1} holds for $\tilde{A_i}$ by construction. Moreover, as the map $\tilde{A_i}\rightarrow\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i+c}$ is injective, we have that condition \eqref{aux2} is also satisfied for $\tilde{A_i}$. As for condition \eqref{aux3}, note that the valuation $v_{\mathfrak{P}_i}$ on $\mathcal{O}_{E_i}$ induces a map $\bar{v}:(\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i})\setminus\{0\}\rightarrow\{0,\dots,k_i-1\}$ such that ${v}_{\mathfrak{P}_i}(\gamma)=\bar{v}(\pi_{k_i}(\gamma))$ for all $\gamma\in\mathcal{O}_{E_i}\setminus\mathfrak{P}_i^{k_i}$, where $\pi_{k_i}$ denotes the residue map $\mathcal{O}_{E_i}\rightarrow\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i}$. Now \begin{eqnarray*} v_{\mathfrak{P}_i}(\tilde{g_i}'(\alpha)) &=& \sum_{\alpha\neq\beta\in \tilde{A_i}}v_{\mathfrak{P}_i}(\alpha-\beta)\\&\leq& \sum_{\alpha\neq\beta\in A_i'}v_{\mathfrak{P}_i}(\alpha-\beta)\\ &=&\sum_{\stackrel{\alpha\neq\beta\in A_i'}{\pi_{k_i}(\alpha)=\pi_{k_i}(\beta)}}v_{\mathfrak{P}_i}(\alpha-\beta)+\sum_{\stackrel{\alpha\neq\beta\in A_i'}{\pi_{k_i}(\alpha)\neq\pi_{k_i}(\beta)}}v_{\mathfrak{P}_i}(\alpha-\beta)\\ &<&(d-1)\cdot(k_i+c)+d\cdot\sum_{ {0\neq a\in\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i}}}\bar{v}(a) \end{eqnarray*} and \begin{eqnarray*} \sum_{{0\neq a\in\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i}}} \bar{v}(a) &=& \sum_{j=0}^{k_i-1}|\{a:\bar{v}(a)=j\}|\cdot j \quad=\quad\sum_{j=0}^{k_i-1}\sum_{l=1}^j|\{a:\bar{v}(a)=j\}|\\ &=&\sum_{l=1}^{k_i-1}\sum_{j=l}^{k_i-1}|\{a:\bar{v}(a)=j\}| \quad=\quad\sum_{l=1}^{k_i-1}|\{a:\bar{v}(a)\geq l\}|\\ &=&\sum_{l=1}^{k_i-1}(q_i^{f_i(k_i-l)}-1) \quad=\quad \sum_{l=0}^{k_i-1}q_i^{f_i l} - k_i \quad=\quad \frac{1-q_i^{f_i k_i}}{1-q_i^{f_i}}-k_i \end{eqnarray*} and plugging this into the previous inequality gives condition (\ref{aux3}) for $\tilde{g}_i$. As $\mathcal{O}_K$ is dense in $\mathcal{O}_{F_i}$ with respect to $v_i$, we obtain a monic polynomial ${g_i}\in\mathcal{O}_K[X]$ of degree $dr$ arbitrarily close to $\tilde{g_i}$. Let ${A_i}$ be the set of roots of ${g_i}$. By the continuity of roots \cite[Theorem 2.4.7]{EP} we can achieve that the roots of ${g_i}$ are arbitrarily close to the roots of $\tilde{g_i}$, in particular that conditions \eqref{aux2} and \eqref{aux3} are satisfied by $g_i$ and $A_i$. Moreover, by Krasner's lemma \cite[Ch.II, \S 2, Proposition 4]{Lang}, we can in addition achieve condition \eqref{aux1}, completing the proof of the claim. \end{proof} Now, let $p_0$ be the smallest prime number not in the set $\{p_1,\ldots,p_n\}$ and let $\mathfrak{p}_0$ be a prime ideal of $\mathcal{O}_K$ above $p_0$. Fix a monic polynomial $g_0\in\mathcal{O}_K[X]$ of degree $dr$ whose reduction modulo $\mathfrak{p}_0$ is irreducible. Let \begin{equation}\label{defmi} m_i=\frac{d}{e_i}\left(\frac{q_i^{f_i k_i}-1}{q_i^{f_i}-1}+k_i+2c\right) \end{equation} and $$ \mathfrak{a}=\mathfrak{p}_0\mathfrak{p}_1^{m_1}\cdots\mathfrak{p}_n^{m_n}. 
$$ By the Chinese Remainder Theorem and Proposition \ref{lem:small_rep} there exists a monic polynomial $g\in\mathcal{O}_K[X]$ such that \begin{enumerate} \item\label{degg_dr} $\deg g=dr$, \item\label{cond_p0} $g\equiv g_0\mbox{ mod }\mathfrak{p}_0[X]$, \item\label{cond_equiv} $g\equiv {g_i}\mbox{ mod }\mathfrak{p}_i^{m_i}[X]$ for $i=1,\dots,n$, and \item\label{cond_coeff} $|\sigma(a)|\leq\delta_K N(\mathfrak{a})^{1/[K:\mathbb{Q}]}$ for every coefficient $a$ of $g$ and every $\sigma\in{\rm Hom}(K,\mathbb{C})$, \end{enumerate} where $\delta_K=[K:\mathbb{Q}]^{\frac{3}{2}}{2}^{\frac{[K:\mathbb{Q}]([K:\mathbb{Q}]-1)}{2}}\sqrt{|\Delta_K|}$. Note that (\ref{cond_p0}) implies that $g$ is irreducible. In particular, we get from (\ref{degg_dr}) and (\ref{eqn:deg}) that every root $\alpha$ of $g$ satisfies the degree bound (\ref{eqn:thm2}) of Theorem \ref{mainthm}. \subsection{The roots of $g$ are in $E_i$ for every $i$.}\label{roots-g} We claim that the conditions of Proposition \ref{val-BZ} hold for the field $E_i$, the polynomial $g$ and $x_0=\alpha\in {A_i}$ (which lies in $E_i$ by condition \eqref{aux1}) by setting $a=v_i({g_i}'(\alpha))$ and $b=k_i+c-1$. Indeed, by \eqref{aux3} \begin{eqnarray}\label{aem} a=v_i({g_i}'(\alpha))\leq d\cdot\frac{q_i^{f_i k_i}-1}{q_i^{f_i}-1}+dc<e_im_i-b, \end{eqnarray} and, writing $g-{g_i}= t_i$ with $t_i\in\mathfrak{p}_i^{m_i}[X]$ by (\ref{cond_equiv}) of Section \ref{constr-G}, we have $g(\alpha)=t_i(\alpha)$ and therefore $v_i(g(\alpha))\geq e_im_i>a+b$, so condition \eqref{ci} holds. Similarly for condition \eqref{cii}, we have $g'(\alpha)=t_i'(\alpha)+{g_i}'(\alpha)$ and since \[v_i(t_i'(\alpha))\geq e_im_i> v_i({g_i}'(\alpha)),\] we conclude that $v_i(g'(\alpha))=v_i({g_i}'(\alpha))=a$. Now for $\nu\geq 2$ write $$ {g_i}^{(\nu)}(\alpha) = \nu! {g_i}'(\alpha)\sum_{\substack{B\subseteq {A_i}\\|B|=\nu-1\\ \alpha\notin B}}\prod_{\beta\in B}(\alpha-\beta)^{-1}. $$ Thus $$ v_i({{g_i}^{(\nu)}(\alpha)}/{\nu!})\geq a-(\nu-1)\max_{\beta\neq\alpha}v_i(\alpha-\beta) {\geq} a-(\nu-1)b $$ where the last inequality holds by {(\ref{aux2})}. Moreover, using (\ref{aem}), we get \[v_i({t_i^{(\nu)}(\alpha)}/{\nu!})\geq e_im_i-v_i(\nu!)\geq a+b-\frac{e(\mathfrak{P}_i|p_i)\nu}{p_i-1}\geq a-(\nu-1)b\] where the last inequality holds since $b\geq c\geq \frac{e(\mathfrak{P}_i|p_i)}{p_i-1}$. Thus \[v_i({g^{(\nu)}(\alpha)}/{\nu!})\geq \min\left\{v_i({{g_i}^{(\nu)}(\alpha)}/{\nu!}), v_i({t_i^{(\nu)}(\alpha)}/{\nu!})\right\}\geq a-(\nu-1)b\] fulfilling condition \eqref{ciii}. So Proposition \ref{val-BZ} gives ${\alpha}'\in E_i$ with $g({\alpha}')=0$ and $v_i(\alpha'-\alpha)>b$. As $v_i(\alpha-\beta)\leq b$ for all $\beta\in {A_i}\setminus\{\alpha\}$ by (\ref{aux2}), we conclude that ${\alpha}'\neq{\beta}'$ for all $\alpha\neq\beta$. Hence $g$ has precisely $|{A_i}|=dr$ roots in $E_i$. As this holds for every $i$, the splitting field of $g$ over $K$ is a Galois extension of $K$ contained in every $E_i$, and therefore $g$ splits completely in the maximal Galois extension $L$ of $K$ that is contained in all $E_i$. Moreover, as $g\in \mathcal{O}_K[X]$, all roots of $g$ are actually in $\mathcal{O}_L$. \subsection{Bounding the height of the roots of $g$}\label{b-root} From condition (\ref{cond_coeff}) of Section \ref{constr-G}, for every coefficient $a$ of $g$ and every $\sigma\in{\rm Hom}(K,\mathbb{C})$, we have $$ |\sigma(a)|\leq B:=\delta_K N(\mathfrak{p}_0)^{1/[K:\mathbb{Q}]}\cdot\prod_{i=1}^n N(\mathfrak{p}_i)^{m_i/[K:\mathbb{Q}]}.
$$ By Proposition \ref{H-min-root}, $g$ has a root $\alpha\in\mathcal{O}_L$ with $h(\alpha)$ bounded by \begin{equation}\label{bo-h} \frac{\log(B\sqrt{\deg g+1})}{\deg g} \leq \frac{\log\left(\delta_K N(\mathfrak{p}_0)^{1/[K:\mathbb{Q}]}\sqrt{\deg g+1}\right)}{\deg g}+\sum_{i=1}^n \frac{m_i}{\deg g}\cdot\frac{\log (q_i)}{[K:\mathbb{Q}]}. \end{equation} By the definition of $m_i$ in (\ref{defmi}) and recalling $\deg g=dr$ from \eqref{degg_dr}, we have \begin{eqnarray}\label{eq-mi} \frac{m_i}{\deg g} &=&\frac{1}{e_i}\cdot\frac{1}{q_i^{f_i}-1}\cdot\frac{q_i^{f_i k_i}-1}{r}+\frac{k_i}{e_i r}+\frac{2c}{e_i r}. \end{eqnarray} Condition \eqref{bound-r} of Section \ref{constr-G} implies that \[ \frac{q_i^{f_i k_i}-1}{r}\leq 1+\epsilon. \] Moreover, \[ \frac{k_i}{e_i r}=\frac{\log(q_i^{f_i k_i})}{e_i r f_i \log(q_i)}\leq \frac{2d}{e_i f_i \log(q_i)}\cdot \frac{\log(\deg g)}{\deg g}\leq 3 C^n \frac{\log(\deg g)}{\deg g} \] Finally, $$ \frac{2c}{e_i r}\leq \frac{8d C^{n+1}}{e_i \deg g}\leq \frac{8C^{2n+1}}{\deg g}. $$ Therefore, substituting in \eqref{eq-mi}, and recalling that $C\geq 2$ and $\rho\geq 3$, we have \[ \frac{m_i}{\deg g} \leq \frac{1}{e_i(q_i^{f_i}-1)}+\frac{\epsilon}{e_i(q_i^{f_i}-1)}+11 C^{2n+1}\frac{\log(\deg g)}{\deg g}. \] Thus the second summand in \eqref{bo-h} can be bounded as \begin{align*} \sum_{i=1}^n \frac{m_i}{\deg g}\cdot\frac{\log(q_i)}{[K:\mathbb{Q}]}\leq \sum_{i=1}^n& \frac{f(\mathfrak{p}_i|p_i)}{[K:\mathbb{Q}]}\cdot\frac{\log(p_i)}{e_i(q_i^{f_i}-1)}+n\epsilon+11nC^{2n+2}\frac{\log(\deg g)}{\deg g}. \end{align*} As for the first summand in \eqref{bo-h}, note that $N(\mathfrak{p}_0)^{1/[K:\mathbb{Q}]}\leq p_0$ where $p_0$ is the smallest prime number not in the set $\{p_1,\ldots,p_n\}$, which, by Bertrand's postulate, can be bounded by $p_0<2\max_i p_i\leq 2C$. Moreover, \[ \delta_K=[K:\mathbb{Q}]^{\frac{3}{2}}{2}^{\frac{[K:\mathbb{Q}]([K:\mathbb{Q}]-1)}{2}}\sqrt{|\Delta_K|}\leq C^2 \cdot 2^{\frac{C(C-1)}{2}} \] and thus, as $C\geq 2$, \begin{eqnarray*} \frac{\log\left(\delta_K N(\mathfrak{p}_0)^{1/[K:\mathbb{Q}]}\sqrt{\deg g+1}\right)}{\deg g} &\leq& \frac{\log\left(C^3 2^{\frac{C^2-C+2}{2}}\sqrt{\deg g+1}\right)}{\deg g}\leq 2nC^{2n+1}\frac{\log(\deg g)}{\deg g}. \end{eqnarray*} Therefore, from \eqref{bo-h} we get \[ h(\alpha)\leq \sum_{i=1}^n \frac{f(\mathfrak{p}_i|p_i)}{[K:\mathbb{Q}]}\cdot\frac{\log(p_i)}{e_i(q_i^{f_i}-1)}+n\epsilon+13 nC^{2n+2}\frac{\log(\deg g)}{\deg g}, \] so $\alpha$ satisfies the height bound (\ref{eqn:thm1}) of Theorem \ref{mainthm} (recalling that $\epsilon=0$ if $n=1$). \section*{Acknowledgments} \noindent The authors thank Lukas Pottmeyer for pointing out the results in \cite{Fil,FiliPetsche,FP}, Paul Fili for the exchange regarding \cite{Fil}, and Philip Dittmann for helpful discussions on $p$-adic fields as well as for suggesting the short proof of Proposition \ref{val-BZ}. The first author's work has been funded by the ANR project Gardio 14-CE25-0015. \end{document}
\begin{document} \title[Hilbert functions of general intersections] {On Hilbert functions of\\ general intersections of ideals} \author{Giulio Caviglia} \address{ Department of Mathematics, Purdue University, West Lafayette, IN 47901, USA. } \email{[email protected]} \author{Satoshi Murai} \address{ Satoshi Murai, Department of Mathematical Science, Faculty of Science, Yamaguchi University, 1677-1 Yoshida, Yamaguchi 753-8512, Japan. } \email{[email protected]} \thanks{The work of the first author was supported by a grant from the Simons Foundation (209661 to G. C.). The work of the second author was supported by KAKENHI 22740018. } \subjclass[2010]{Primary 13P10, 13C12, Secondary 13A02} \maketitle \begin{abstract} Let $I$ and $J$ be homogeneous ideals in a standard graded polynomial ring. We study upper bounds of the Hilbert function of the intersection of $I$ and $g(J)$, where $g$ is a general change of coordinates. Our main result gives a generalization of Green's hyperplane section theorem. \end{abstract} \section{Introduction} Hilbert functions of graded $K$-algebras are important invariants studied in several areas of mathematics. In the theory of Hilbert functions, one of the most useful tools is Green's hyperplane section theorem, which gives a sharp upper bound for the Hilbert function of $R/hR$, where $R$ is a standard graded $K$-algebra and $h$ is a general linear form, in terms of the Hilbert function of $R$. This result of Green has been extended to the case of general homogeneous polynomials by Herzog and Popescu \cite{HP} and Gasharov \cite{Ga}. In this paper, we study a further generalization of these theorems. Let $K$ be an infinite field and $S=K[x_1,\dots,x_n]$ a standard graded polynomial ring. Recall that the \textit{Hilbert function} $H(M,-) : \mathbb{Z} \to \mathbb{Z}$ of a finitely generated graded $S$-module $M$ is the numerical function defined by $$H(M,d)=\dim_K M_d,$$ where $M_d$ is the graded component of $M$ of degree $d$. A set $W$ of monomials of $S$ is said to be \textit{lex} if, for all monomials $u,v \in S$ of the same degree, $u \in W$ and $v>_{\mathrm{lex}} u$ imply $v \in W$, where $>_{\mathrm{lex}}$ is the lexicographic order induced by the ordering $x_1> \cdots > x_n$. We say that a monomial ideal $I \subset S$ is a \textit{lex ideal} if the set of monomials in $I$ is lex. The classical Macaulay's theorem \cite{Ma} guarantees that, for any homogeneous ideal $I \subset S$, there exists a unique lex ideal, denoted by $I^{{\mathrm{lex}}}$, with the same Hilbert function as $I$. Green's hyperplane section theorem \cite{Gr} states \begin{theorem}[Green's hyperplane section theorem] \label{green} Let $I \subset S$ be a homogeneous ideal. For a general linear form $h \in S_1$, $$H(I \cap (h),d) \leq H(I^{\mathrm{lex}} \cap (x_n),d) \ \ \mbox{for all } d \geq 0.$$ \end{theorem} Green's hyperplane section theorem is known to be useful to prove several important results on Hilbert functions such as Macaulay's theorem \cite{Ma} and Gotzmann's persistence theorem \cite{Go}, see \cite{Gr}. Herzog and Popescu \cite{HP} (in characteristic $0$) and Gasharov \cite{Ga} (in positive characteristic) generalized Green's hyperplane section theorem in the following form. \begin{theorem}[Herzog--Popescu, Gasharov] \label{hpg} Let $I \subset S$ be a homogeneous ideal. 
For a general homogeneous polynomial $h \in S$ of degree $a$, $$H(I \cap (h),d) \leq H(I^{\mathrm{lex}} \cap(x_n^a),d) \ \ \mbox{for all } d \geq 0.$$ \end{theorem} We study a generalization of Theorems \ref{green} and \ref{hpg}. Let $>_{\mathrm{{oplex}}}$ be the lexicographic order on $S$ induced by the ordering $x_n> \cdots > x_1$. A set $W$ of monomials of $S$ is said to be \textit{opposite lex} if, for all monomials $u,v \in S$ of the same degree, $u \in W$ and $v>_{\mathrm{{oplex}}} u$ imply $v \in W$. Also, we say that a monomial ideal $I \subset S$ is an \textit{opposite lex ideal} if the set of monomials in $I$ is opposite lex. For a homogeneous ideal $I \subset S$, let $I^{\mathrm{{oplex}}}$ be the opposite lex ideal with the same Hilbert function as $I$ and let $\ensuremath{\mathrm{Gin}}_\sigma(I)$ be the generic initial ideal (\cite[\S 15.9]{Ei}) of $I$ with respect to a term order $>_\sigma$. In Section 3 we will prove the following: \begin{theorem} \label{intersection} Suppose $\mathrm{char}(K)=0$. Let $I\subset S$ and $J \subset S$ be homogeneous ideals such that $\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(J)$ is lex. For a general change of coordinates $g$ of $S$, $$H(I \cap g(J),d) \leq H(I^{\mathrm{lex}} \cap J^{\mathrm{{oplex}}} ,d) \ \ \mbox{for all } d\geq 0.$$ \end{theorem} Theorems \ref{green} and \ref{hpg}, assuming that the characteristic is zero, are special cases of the above theorem when $J$ is principal. Note that Theorem \ref{intersection} is sharp since equality holds if $I$ is lex and $J$ is opposite lex (Remark \ref{rem1}). Note also that if $\ensuremath{\mathrm{Gin}}_\sigma(J)$ is lex for some term order $>_\sigma$ then $\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(J)$ must be lex as well (\cite[Corollary 1.6]{Co1}). Unfortunately, both the assumption on $J$ and the assumption on the characteristic of $K$ in Theorem \ref{intersection} are essential (see Remark \ref{example}). However, we prove the following result for the product of ideals. \begin{theorem} \label{product} Suppose $\mathrm{char}(K)=0$. Let $I\subset S$ and $J \subset S$ be homogeneous ideals. For a general change of coordinates $g$ of $S$, $$H(I g(J),d) \geq H(I^{\mathrm{lex}} J^{\mathrm{{oplex}}} ,d) \ \ \mbox{for all } d\geq 0.$$ \end{theorem} Inspired by Theorems \ref{intersection} and \ref{product}, we suggest the following conjecture. \begin{conjecture} \label{conj} Suppose $\mathrm{char}(K)=0.$ Let $I\subset S$ and $J \subset S$ be homogeneous ideals such that $\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(J)$ is lex. For a general change of coordinates $g$ of $S$, \[ \dim_K \ensuremath{\mathrm{Tor}}_i(S/I,S/g(J))_d \leq \dim_K \ensuremath{\mathrm{Tor}}_i(S/I^{\mathrm{lex}},S/J^{\mathrm{{oplex}}})_d \ \ \mbox{for all } d\geq 0. \] \end{conjecture} Theorems \ref{intersection} and \ref{product} show that the conjecture is true if $i=0$ or $i=1.$ The conjecture is also known to be true when $J$ is generated by linear forms by the result of Conca \cite[Theorem 4.2]{Co}. Theorem \ref{2.5}, which we prove later, also provides some evidence supporting the above inequality. \section{Dimension of $\ensuremath{\mathrm{Tor}}$ and general change of coordinates} Let ${GL}_n(K)$ be the general linear group of invertible $n \times n$ matrices over $K$. Throughout the paper, we identify each element $h=(a_{ij}) \in {GL}_n(K)$ with the change of coordinates defined by $h(x_i)=\sum_{j=1}^n a_{ji}x_j$ for all $i$.
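For instance, for $n=2$ and $g=\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right)$ this convention gives $g(x_1)=x_1$ and $g(x_2)=x_1+x_2$, so that, for example, $g\big((x_2^2)\big)=\big((x_1+x_2)^2\big)$.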
We say that a property (P) holds for a general $g \in {GL}_n(K)$ if there is a non-empty Zariski open subset $U \subset {GL}_n(K)$ such that (P) holds for all $g \in U$. We first prove that, for two homogeneous ideals $I \subset S$ and $J \subset S$, the Hilbert function of $I \cap g(J)$ and that of $I g(J)$ are well defined for a general $g \in {GL}_n (K)$, i.e.\ there exists a non-empty Zariski open subset of ${GL}_n(K)$ on which the Hilbert function of $I \cap g(J)$ and that of $I g(J)$ are constant. \begin{lemma} \label{2-0} Let $I \subset S$ and $J \subset S$ be homogeneous ideals. For a general change of coordinates $g \in {GL}_n(K)$, the function $H(I \cap g(J),-)$ and $H(I g(J),-)$ are well defined. \end{lemma} \begin{proof} We prove the statement for $I \cap g(J)$ (the proof for $Ig(J)$ is similar). It is enough to prove the same statement for $I+g(J)$. We prove that $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I+g(J))$ is constant for a general $g \in {GL}_n(K)$. Let $t_{kl}$, where $1 \leq k,l \leq n$, be indeterminates, $\tilde K=K(t_{kl}: 1 \leq k,l \leq n)$ the field of fractions of $K[t_{kl}: 1 \leq k,l \leq n]$ and $A=\tilde K [x_1,\dots,x_n]$. Let $\rho: S \to A$ be the ring map induced by $\rho(x_k)= \sum_{l=1}^n t_{lk} x_l$ for $k=1,2,\dots,n$, and $\tilde L= I A + \rho(J)A \subset A$. Let $L \subset S$ be the monomial ideal with the same monomial generators as $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(\tilde L)$. We prove $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I+g(J))=L$ for a general $g \in {GL}_n(K)$. Let $f_1,\dots,f_s$ be generators of $I$ and $g_1,\dots,g_t$ those of $J$. Then the polynomials $f_1,\dots,f_s,\rho(g_1),\dots,\rho(g_t)$ are generators of $\tilde L$. By the Buchberger algorithm, one can compute a Gr\"obner basis of $\tilde L$ from $f_1,\dots,f_s,\rho(g_1),\dots,\rho(g_t)$ by finite steps. Consider all elements $h_1,\dots,h_m \in K(t_{kl}:1 \leq k,l \leq n)$ which are the coefficient of polynomials (including numerators and denominators of rational functions) that appear in the process of computing a Gr\"obner basis of $\tilde L$ by the Buchberger algorithm. Consider a non-empty Zariski open subset $U \subset {GL}_n(K)$ such that $h_i(g) \in K \setminus \{0\}$ for any $g \in U$, where $h_i(g)$ is an element obtained from $h_i$ by substituting $t_{kl}$ with entries of $g$. By construction $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I+g(J))=L$ for every $g \in U$. \end{proof} \begin{remark}\label{ConstantHF} The method used to prove the above lemma can be easily generalized to a number of situations. For instance for a general $g \in {GL}_n(K)$ and a finitely generated graded $S$-module $M,$ the Hilbert function of $\ensuremath{\mathrm{Tor}}_i(M,S/g(J))$ is well defined for every $i$. Let $\mathbb F: 0 \stackrel{\varphi_{p+1}}{\longrightarrow} \mathbb F_p \stackrel{\varphi_p}{\longrightarrow} \cdots \longrightarrow \mathbb F_1 \stackrel{\varphi_1}{\longrightarrow} \mathbb F_0 \stackrel{\varphi_0}{\longrightarrow}0$ be a graded free resolution of $M.$ Given a change of coordinates $g$, one first notes that for every $i=0,\dots,p$, the Hilbert function $H(\ensuremath{\mathrm{Tor}}_i(M,S/g(J)),-)$ is equal to the difference between the Hilbert function of $\rm{Ker}(\pi_{i-1} \circ \varphi_i)$ and the one of $\varphi_{i+1}(F_{i+1}) + F_i \otimes_S g(J)$ where $\pi_{i-1}: F_{i-1} \rightarrow F_{i-1} \otimes_S S/g(J)$ is the canonical projection. 
Hence we have \begin{align}\label{H-TOR} \nonumber H(\ensuremath{\mathrm{Tor}}_i & (M,S/g(J)),-)= \\ &H(F_i, -) -H(\varphi_i(F_i)+ g(J) F_{i-1},-) + H(g(J) F_{i-1},-)\\ \nonumber &- H(\varphi_{i+1}(F_{i+1}) + g(J) F_i,-). \end{align} Clearly $H(F_i,-)$ and $H(g(J) F_{i-1},-)$ do not depend on $g.$ Thus it is enough to show that, for a general $g$, the Hilbert functions of $\varphi_i(F_i)+g(J) F_{i-1}$ are well defined for all $i=0,\dots,p+1.$ This can be seen as in Lemma \ref{2-0}. \end{remark} Next, we present two lemmas which will allow us to reduce the proofs of the theorems in the third section to combinatorial considerations regarding Borel-fixed ideals. The first Lemma is probably clearly true to some experts, but we include its proof for the sake of the exposition. The ideas used in Lemma \ref{lemma2} are similar to that of \cite[Lemma 2.1]{Ca1} and they rely on the construction of a flat family and on the use of the structure theorem for finitely generated modules over principal ideal domains. \begin{lemma} \label{lemma1} Let $M$ be a finitely generated graded $S$-module and $J \subset S$ a homogeneous ideal. For a general change of coordinates $g \in {GL}_n(K)$ we have that $\dim_K \ensuremath{\mathrm{Tor}}_i(M,S/g(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i(M,S/J)_j$ for all $i$ and for all $j.$ \end{lemma} \begin{proof} Let $\mathbb F$ be a resolution of $M,$ as in Remark \ref{ConstantHF}. Let $i$, $0\leq i \leq p+1$ and notice that, by equation \eqref{H-TOR}, it is sufficient to show: $H(\varphi_i(F_i)+g(J) F_{i-1},-)\geq H(\varphi_i(F_i)+JF_{i-1},-).$ We fix a degree $d$ and consider the monomial basis of $ (F_{i-1})_d.$ Given a change of coordinates $h=(a_{kl}) \in {GL}_n(K)$ we present the vector space $V_d=(\varphi_i(F_i)+h(J)F_{i-1})_d$ with respect to this basis. The dimension of $V_d$ equals the rank of a matrix whose entries are polynomials in the $a_{kl}$'s with coefficients in $K.$ Such a rank is maximal when the change of coordinates $h$ is general. \end{proof} For a vector $\mathbf w=(w_1,\ldots,w_n) \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$, let $\ensuremath{\mathrm{in}}_\mathbf w (I)$ be the initial ideal of a homogeneous ideal $I$ with respect to the weight order $>_\mathbf w$ (see \cite[p.\ 345]{Ei}). Let $T$ be a new indeterminate and $R=S[T]$. For $\mathbf a=(a_1,\dots,a_n) \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$, let $x^{\mathbf a}=x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$ and $(\mathbf a, \mathbf w)= a_1w_1 + \cdots + a_n w_n$. For a polynomial $f= \sum_{\mathbf a \in \ensuremath{\mathbb{Z}}_{\geq 0}^n} c_{\mathbf a} x^{\mathbf a}$, where $c_{\mathbf a} \in K$, let $b= \max \{ (\mathbf a,\mathbf w) : c_{\mathbf a} \ne 0\}$ and $$\tilde f = T^b \left(\sum_{\mathbf a \in \ensuremath{\mathbb{Z}}_{\geq 0}^n} T^{-(\mathbf a,\mathbf w)}c_{\mathbf a} x^{\mathbf a}\right) \in R.$$ Note that $\tilde f$ can be written as $\tilde f=\ensuremath{\mathrm{in}}_\mathbf w(f) + T g$ where $g \in R$. For an ideal $I \subset S$, let $\tilde I =(\tilde f :f \in I) \subset R$. For $\lambda \in K \setminus\{0\}$, let $D_{\lambda,\mathbf w}$ be the diagonal change of coordinates defined by $D_{\lambda,\mathbf w}(x_i)=\lambda^{-w_i} x_i$. From the definition, we have $$R/\big(\tilde I +(T)\big) \cong S/ \ensuremath{\mathrm{in}}_\mathbf w(I)$$ and $$R/\big(\tilde I +(T-\lambda)\big) \cong S/D_{\lambda,\mathbf w}(I)$$ where $\lambda \in K \setminus \{0\}$. Moreover $(T-\lambda)$ is a non-zero divisor of $R/\tilde I$ for any $\lambda \in K$. See \cite[\S 15.8]{Ei}. 
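As a simple illustration of this construction, take $S=K[x_1,x_2]$, $\mathbf w=(1,1)$ and $I=(f)$ with $f=x_1^2+x_2$, so that $\ensuremath{\mathrm{in}}_\mathbf w(f)=x_1^2$ and $b=2$. Then
$$
\tilde f=T^2\left(T^{-2}x_1^2+T^{-1}x_2\right)=x_1^2+Tx_2=\ensuremath{\mathrm{in}}_\mathbf w(f)+Tx_2,
$$
and one checks that $\tilde I=(\tilde f)$, so that $R/\big(\tilde I+(T)\big)\cong S/(x_1^2)=S/\ensuremath{\mathrm{in}}_\mathbf w(I)$, while, for $\lambda\neq 0$, $R/\big(\tilde I+(T-\lambda)\big)\cong S/(x_1^2+\lambda x_2)=S/D_{\lambda,\mathbf w}(I)$, since $D_{\lambda,\mathbf w}(f)=\lambda^{-2}(x_1^2+\lambda x_2)$.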
\begin{lemma} \label{lemma2} Fix an integer $j$. Let $\mathbf w \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$, $M$ a finitely generated graded $S$-module and $J \subset S$ a homogeneous ideal. For a general $\lambda \in K$, one has \[ \dim_K \ensuremath{\mathrm{Tor}}_i \big(M,S/\ensuremath{\mathrm{in}}_\mathbf w(J)\big)_j\geq \dim_K \ensuremath{\mathrm{Tor}} _i\big(M, S/D_{\lambda,\mathbf w}(J)\big)_j \ \mbox{ for all $i$.} \] \end{lemma} \begin{proof} Consider the ideal $\tilde {J} \subset R$ defined as above. Let $\tilde M = M \otimes_S R$ and $T_i=\ensuremath{\mathrm{Tor}}_i^{R}(\tilde M,R/\tilde{J})$. By the structure theorem for modules over a PID (see \cite[p.\ 149]{La}), we have $$(T_i)_j\cong K[T]^{a_{ij}} \bigoplus A_{ij}$$ as a finitely generated $K[T]$-module, where $a_{ij} \in \ensuremath{\mathbb{Z}}_{\geq 0}$ and where $A_{ij}$ is the torsion submodule. Moreover $A_{ij}$ is a module of the form $$A_{ij}\cong \bigoplus_{h=1}^{b_{ij}} K [T]/(P^{i,j}_{h}),$$ where $P^{i,j}_h$ is a non-zero polynomial in $K[T]$. Set $l_{\lambda}=T-\lambda$. Consider the exact sequence \begin {eqnarray} \label{aa} \begin{CD} 0 @>>> R/\tilde{J} @>\cdot l_{\lambda}>> R/\tilde{J} @>>> R/\big((l_{\lambda})+\tilde{J} \big) @>>> 0. \end{CD} \end {eqnarray} By considering the long exact sequence induced by $\ensuremath{\mathrm{Tor}}^R_i(\tilde M,-),$ we have the following exact sequence \begin{equation}\label{bo} 0\longrightarrow T_i/l_{\lambda} T_i \longrightarrow \ensuremath{\mathrm{Tor}}_i^{R}\big(\tilde M,R/\big((l_{\lambda})+\tilde{J}\big)\big) \longrightarrow K_{i-1} \longrightarrow 0, \end{equation} where $K_{i-1}$ is the kernel of the map $T_{i-1} \xrightarrow{\cdot l_{\lambda}} T_{i-1}$. Since $l_{\lambda}$ is a regular element for $R$ and $\tilde M$, the middle term in (\ref{bo}) is isomorphic to \begin{eqnarray*} \ensuremath{\mathrm{Tor}}_i^{R/(l_\lambda)} \big(\tilde M /l_\lambda \tilde M, R/\big((l_{\lambda})+\tilde J \big)\big) =\left\{ \begin{array}{lll} \ensuremath{\mathrm{Tor}}_i^S \big(M,S/\ensuremath{\mathrm{in}}_\mathbf w(J)\big), & \mbox{ if } \lambda=0,\\ \ensuremath{\mathrm{Tor}}_i^S \big(M,S/D_{\lambda,\mathbf w}(J)\big), & \mbox{ if } \lambda\ne0 \end{array} \right. \end{eqnarray*} (see \cite[p.\ 140]{Mat}). By taking the graded component of degree $j$ in (\ref{bo}), we obtain \begin{eqnarray} \label{banngou} \begin{array}{lll} \dim_K \ensuremath{\mathrm{Tor}}_i^{S}\big(M,S/\ensuremath{\mathrm{in}}_\mathbf w (J) \big)_j &=& a_{ij} + \# \{P^{ij}_h : P^{i,j}_h(0)=0\}\\ && + \# \{P^{i-1,j}_h : P^{i-1,j}_h(0)=0\}, \end{array} \end{eqnarray} where $\# X$ denotes the cardinality of a finite set $X$, and \begin{eqnarray} \label{yon} \dim_K \ensuremath{\mathrm{Tor}}_i^{S}\big(M,S/D_{\lambda,\mathbf w}(J) \big)_j &=& a_{ij} \end{eqnarray} for a general $\lambda \in K$. This proves the desired inequality. \end{proof} \begin{corollary} \label{add} With the same notation as in Lemma \ref{lemma2}, for a general $\lambda \in K$, \[ \dim_K \ensuremath{\mathrm{Tor}}_i \big(M,\ensuremath{\mathrm{in}}_\mathbf w(J)\big)_j \geq \dim_K \ensuremath{\mathrm{Tor}}_i \big(M, D_{\lambda,\mathbf w}(J) \big)_j \mbox{ for all }i. 
\] \end{corollary} \begin{proof} For any homogeneous ideal $I \subset S$, by considering the long exact sequence induced by $\ensuremath{\mathrm{Tor}}_i(M,-)$ from the short exact sequence $0 \longrightarrow I \longrightarrow S \longrightarrow S/I \longrightarrow 0$ we have $$\ensuremath{\mathrm{Tor}}_i(M,I) \cong \ensuremath{\mathrm{Tor}}_{i+1}(M,S/I) \mbox{ for }i \geq 1$$ and $$\dim_K \ensuremath{\mathrm{Tor}}_0(M,I)_j = \dim_K \ensuremath{\mathrm{Tor}}_1(M,S/I)_j + \dim_K M_j - \dim_K \ensuremath{\mathrm{Tor}}_0(M,S/I)_j.$$ Thus by Lemma \ref{lemma2} it is enough to prove that \begin{eqnarray*} &&\dim_K \ensuremath{\mathrm{Tor}}_1\big(M,S/\ensuremath{\mathrm{in}}_\mathbf w(J)\big)_j -\dim_K \ensuremath{\mathrm{Tor}}_1\big(M,S/D_{\lambda,\mathbf w}(J)\big)_j\\ &&\geq \dim_K \ensuremath{\mathrm{Tor}}_0\big(M,S/\ensuremath{\mathrm{in}}_\mathbf w(J)\big)_j -\dim_K \ensuremath{\mathrm{Tor}}_0\big(M,S/D_{\lambda,\mathbf w}(J)\big)_j. \end{eqnarray*} This inequality follows from (\ref{banngou}) and (\ref{yon}). \end{proof} \begin{proposition} \label{2.3} Fix an integer $j$. Let $I \subset S$ and $J \subset S$ be homogeneous ideals. Let $\mathbf w,\mathbf w' \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$. For a general change of coordinates $g \in {GL}_n(K)$, \begin{itemize} \item[(i)] $\dim_K \ensuremath{\mathrm{Tor}}_i(S/I,S/g(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i (S/\ensuremath{\mathrm{in}}_{\mathbf w}(I), S/{\ensuremath{\mathrm{in}}_{\mathbf w'}}(J))_j \ \mbox{ for all }i.$ \item[(ii)] $\dim_K \ensuremath{\mathrm{Tor}}_i(I,S/g(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i (\ensuremath{\mathrm{in}}_{\mathbf w}(I), S/{\ensuremath{\mathrm{in}}_{\mathbf w'}}(J))_j \ \mbox{ for all }i.$ \end{itemize} \end{proposition} \begin{proof} We prove (ii) (the proof for (i) is similar). By Lemmas \ref{lemma1} and \ref{lemma2} and Corollary \ref{add}, we have \begin{eqnarray*} \dim_K \ensuremath{\mathrm{Tor}}_i \big(\ensuremath{\mathrm{in}}_{\mathbf w}(I), S/\ensuremath{\mathrm{in}}_{\mathbf w'}(J)\big)_j &\geq& \dim_K \ensuremath{\mathrm{Tor}}_i \big(D_{\lambda_1,\mathbf w}(I), S/D_{\lambda_2,\mathbf w'}(J)\big)_j \\ &=& \dim_K \ensuremath{\mathrm{Tor}}_i \big(I, S/D^{-1}_{\lambda_1,\mathbf w} \big(D_{\lambda_2,\mathbf w'}(J)\big)\big)_j\\ &\geq& \dim_K \ensuremath{\mathrm{Tor}}_i\big(I,S/g(J)\big)_j, \end{eqnarray*} as desired, where $\lambda_1,\lambda_2$ are general elements in $K$. \end{proof} \begin{remark} Let $\mathbf w'=(1,1,\dots,1)$ and note that the composite of two general changes of coordinates is still general. By replacing $J$ by $h(J)$ for a general change of coordinates $h,$ from Proposition \ref{2.3}(i) it follows that \[ \dim_K \ensuremath{\mathrm{Tor}}_i(S/I,S/h(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i\big(S/\ensuremath{\mathrm{in}}_{>_{\sigma}}(I),S/h(J))_j \] for any term order $>_\sigma$. The above fact gives, as a special case, an affirmative answer to \cite[Question 6.1]{Co}. This was originally proved in the thesis of the first author \cite{Ca2}. We mention it here because there seem to be no published article which includes the proof of this fact. \end{remark} \begin{theorem} \label{2.5} Fix an integer $j$. Let $I \subset S$ and $J \subset S$ be homogeneous ideals. 
For a general change of coordinates $g \in {GL}_n(K)$, \begin{itemize} \item[(i)] $\dim_K \ensuremath{\mathrm{Tor}}_i(S/I,S/g(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i(S/\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I),S/\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}} (J))_j \ \ \mbox{for all }i.$ \item[(ii)] $\dim_K \ensuremath{\mathrm{Tor}}_i(I,S/g(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i(\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I),S/\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}} (J))_j \ \ \mbox{for all }i.$ \end{itemize} \end{theorem} \begin{proof} Without loss of generality, we may assume that $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I)=\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I)$ and that $\ensuremath{\mathrm{in}}_{\mathrm{{oplex}}}(J)=\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J)$. It follows from \cite[Proposition 15.16]{Ei} that there are vectors $\mathbf w, \mathbf w' \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$ such that $\ensuremath{\mathrm{in}}_\mathbf w(I)=\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I)$ and $\ensuremath{\mathrm{in}}_{\mathbf w'}(g(J))=\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J)$. Then the desired inequality follows from Proposition \ref{2.3}. \end{proof} Since $\ensuremath{\mathrm{Tor}}_0(S/I,S/J)\cong S/(I+J)$ and $\ensuremath{\mathrm{Tor}}_0(I,S/J)\cong I/IJ$, we have the next corollary. \begin{corollary} \label{2.6} Let $I \subset S$ and $J \subset S$ be homogeneous ideals. For a general change of coordinates $g \in {GL}_n(K)$, \begin{itemize} \item[(i)] $H(I \cap g(J) ,d) \leq H(\ensuremath{\mathrm{Gin}}_{\mathrm{lex}} (I)\cap \ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J),d)$ for all $d \geq 0$. \item[(ii)] $H(Ig(J),d) \geq H(\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I)\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J),d)$ for all $d \geq 0$. \end{itemize} \end{corollary} We conclude this section with a result regarding the Krull dimension of certain Tor modules. We show how Theorem \ref{2.5} can be used to give a quick proof of Proposition \ref{MiSp}, which is a special case (for the variety $X=\mathbb{P}^{n-1}$ and the algebraic group ${SL}_n$) of the main theorem of \cite{MS}. Recall that generic initial ideals are \textit{Borel-fixed}, that is, they are fixed under the action of the Borel subgroup of ${GL}_n(K)$ consisting of all the upper triangular invertible matrices. In particular, for an ideal $I$ of $S$ and an upper triangular matrix $b\in {GL}_n(K)$ one has $b(\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I))= \ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I).$ Similarly, if we denote by $op$ the change of coordinates of $S$ which sends $x_i$ to $x_{n+1-i}$ for all $i=1,\dots,n,$ we have that $b( op (\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(I)))= op (\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(I)).$ We call an ideal $J$ of $S$ \textit{opposite Borel-fixed} if $op(J)$ is Borel-fixed (see \cite[\S 15.9]{Ei} for more details on the combinatorial properties of Borel-fixed ideals). It is easy to see that if $J$ is Borel-fixed, then so is $(x_1,\dots,x_i)+J$ for every $i=1,\dots,n.$ Furthermore, if $j$ is the integer $\min \{i : x_i\not \in J \}$, then $J:x_j$ is also Borel-fixed; in this case $J$ has a minimal generator divisible by $x_j$ or $J=(x_1,\dots,x_{j-1}).$ Analogous statements hold for opposite Borel-fixed ideals. Let $I$ and $J$ be ideals generated by linear forms.
If we assume that $I$ is Borel fixed and that $J$ is opposite Borel fixed, then there exist $1\leq i,j \leq n $ such that $I=(x_1,\dots,x_i)$ and $J=(x_j,\dots,x_n).$ An easy computation shows that the Krull dimension of $\ensuremath{\mathrm{Tor}}_i(S/I,S/J)$ is always zero when $i>0.$ More generally one has \begin{proposition}[Miller--Speyer]\label{MiSp} Let $I$ and $J$ be two homogeneous ideals of $S.$ For a general change of coordinates $g$, the Krull dimension of $\ensuremath{\mathrm{Tor}}_i(S/I,S/g(J))$ is zero for all $i>0.$ \end{proposition} \begin{proof} When $I$ or $J$ are equal to $(0)$ or to $S$ the result is obvious. Recall that a finitely generated graded module $M$ has Krull dimension zero if and only if $M_d=0$ for all $d$ sufficiently large. In virtue of Theorem \ref{2.5} it is enough to show that $\ensuremath{\mathrm{Tor}}_i(S/I,S/J)$ has Krull dimension zero whenever $I$ is Borel-fixed, $J$ opposite Borel-fixed and $i>0.$ By contradiction, let the pair $I,J$ be a maximal counterexample (with respect to point-wise inclusion). By the above discussion, and by applying $op$ if necessary, we can assume that $I$ has a minimal generator of degree greater than 1. Let $j=\min \{h : x_h\not \in I \}$ and notice that both $(I:x_j)$ and $(I+(x_j))$ strictly contain $I.$ For every $i>0$ the short exact sequence $ 0 \rightarrow S/(I:x_j) \rightarrow S/I \rightarrow S/(I+(x_j)) \rightarrow 0$ induces the exact sequence \[ \ensuremath{\mathrm{Tor}}_i(S/(I:x_j),S/J) \rightarrow \ensuremath{\mathrm{Tor}}_i(S/I,S/J) \rightarrow \ensuremath{\mathrm{Tor}}_i(S/(I+(x_j)),S/J). \] By the maximality of $I,J$, the first and the last term have Krull dimension zero. Hence the middle term must have dimension zero as well, contradicting our assumption. \end{proof} \section{General intersections and general products} In this section, we prove Theorems \ref{intersection} and \ref{product}. We will assume throughout the rest of the paper $\mathrm{char}(K)=0.$ A monomial ideal $I \subset S$ is said to be \textit{$0$-Borel} (or \textit{strongly stable}) if, for every monomial $u x_j \in I$ and for every $1 \leq i <j$ one has $ux_i \in I$. Note that $0$-Borel ideals are precisely all the possible Borel-fixed ideals in characteristic $0$. In general, the Borel-fixed property depends on the characteristic of the field and we refer the readers to \cite[\S 15.9]{Ei} for the details. A set $W \subset S$ of monomials in $S$ is said to be \textit{$0$-Borel} if the ideal they generate is $0$-Borel, or equivalently if for every monomial $u x_j \in W$ and for every $1 \leq i <j$ one has $ux_i \in W$. Similarly we say that a monomial ideal $J \subset S$ is \textit{opposite $0$-Borel} if for every monomial $ux_j \in J$ and for every $j < i \leq n$ one has $ux_i \in J$. Let $>_{\mathrm{{rev}}}$ be the reverse lexicographic order induced by the ordering $x_1 > \cdots >x_n$. We recall the following result \cite[Lemma 3.2]{Mu}. \begin{lemma} \label{3-1} Let $V=\{v_1,\dots,v_s\} \subset S_d$ be a $0$-Borel set of monomials and $W =\{w_1,\dots,w_s\} \subset S_d$ the lex set of monomials, where $v_1 \geq_{{\mathrm{{rev}}}} \cdots \geq_{{\mathrm{{rev}}}} v_s$ and $w_1 \geq_{{\mathrm{{rev}}}} \cdots \geq _{{\mathrm{{rev}}}} w_s$. Then $v_i \geq_{{\mathrm{{rev}}}} w_i$ for all $i=1,2,\dots,s$. \end{lemma} Since generic initial ideals with respect to $>_{\mathrm{lex}}$ are $0$-Borel, the next lemma and Corollary \ref{2.6}(i) prove Theorem \ref{intersection}. 
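Before proceeding, we illustrate Lemma \ref{3-1} with a small example, included for orientation only. Take $n=3$, $d=2$ and $V=\{x_1^2,\,x_1x_2,\,x_2^2\}$, which is a $0$-Borel set of monomials; the lex set of the same size is $W=\{x_1^2,\,x_1x_2,\,x_1x_3\}$. Ordering both sets decreasingly with respect to $>_{\mathrm{{rev}}}$ gives $v_1=w_1=x_1^2$, $v_2=w_2=x_1x_2$, $v_3=x_2^2$ and $w_3=x_1x_3$, and indeed $v_3=x_2^2>_{\mathrm{{rev}}}x_1x_3=w_3$, as the lemma predicts.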
\begin{lemma} \label{3-2} Let $I \subset S$ be a $0$-Borel ideal and $P \subset S$ an opposite lex ideal. Then $\dim_K(I\cap P)_d \leq \dim_K (I^{\mathrm{lex}} \cap P)_d$ for all $d\geq 0$. \end{lemma} \begin{proof} Fix a degree $d$. Let $V,W$ and $Q$ be the sets of monomials of degree $d$ in $I$, $I^{\mathrm{lex}}$ and $P$ respectively. It is enough to prove that $\# V \cap Q \leq \# W \cap Q$. Observe that $Q$ is the set of the smallest $\#Q$ monomials in $S_d$ with respect to $>_{\mathrm{{rev}}}$. Let $m=\max_{>_{\mathrm{{rev}}}} Q$. Then by Lemma \ref{3-1} $$\# V \cap Q = \# \{ v \in V: v \leq_{{\mathrm{{rev}}}} m\} \leq \# \{ w \in W: w \leq_{{\mathrm{{rev}}}} m\} = \# W \cap Q,$$ as desired. \end{proof} Next, we consider products of ideals. For a monomial $u \in S$, let $\max u$ (respectively, $\min u$) be the maximal (respectively, minimal) integer $i$ such that $x_i$ divides $u$, where we set $\max 1 = 1$ and $\min 1 = n$. For a monomial ideal $I \subset S$, let $I_{(\leq k)}$ be the K-vector space spanned by all monomials $u \in I$ with $\max u \leq k$. \begin{lemma} \label{3-4} Let $I \subset S$ be a $0$-Borel ideal and $P \subset S$ an opposite $0$-Borel ideal. Let $G(P)=\{u_1,\dots,u_s\}$ be the set of the minimal monomial generators of $P$. As a $K$-vector space, $IP$ is the direct sum $$ IP=\bigoplus_{i=1}^s (I_{(\leq \min u_i)})u_i. $$ \end{lemma} \begin{proof} It is enough to prove that, for any monomial $w \in IP$, there is the unique expression $w=f(w)g(w)$ with $f(w) \in I$ and $g(w) \in P$ satisfying \begin{itemize} \item[(a)] $\max f(w) \leq \min g(w)$. \item[(b)] $g(w) \in G(P)$. \end{itemize} Given any expression $w=fg$ such that $f \in I$ and $g \in P$, since $I$ is $0$-Borel and $P$ is opposite $0$-Borel, if $\max f > \min g$ then we may replace $f$ by $f \frac{x_{\min g}} {x_{\max f}} \in I$ and replace $g$ by $g \frac{x_{\max f}} {x_{\min g}} \in P$. This fact shows that there is an expression satisfying (a) and (b). Suppose that the expressions $w=f(w)g(w)$ and $w=f'(w)g'(w)$ satisfy conditions (a) and (b). Then, by (a), $g(w)$ divides $g'(w)$ or $g'(w)$ divides $g(w)$. Since $g(w)$ and $g'(w)$ are generators of $P$, $g(w)=g'(w)$. Hence the expression is unique. \end{proof} \begin{lemma} \label{3-5} Let $I \subset S$ be a $0$-Borel ideal and $P \subset S$ an opposite $0$-Borel ideal. Then $\dim_K(IP)_d \geq \dim_K (I^{{\mathrm{lex}}}P)_d$ for all $d\geq 0$. \end{lemma} \begin{proof} Lemma \ref{3-1} shows that $\dim_K {I_{(\leq k)}}_d \geq \dim_K {I^{\mathrm{lex}}_{(\leq k)}}_d$ for all $k$ and $d \geq 0$. Then the statement follows from Lemma \ref{3-4}. \end{proof} Finally we prove Theorem \ref{product}. \begin{proof}[Proof of Theorem \ref{product}] Let $I'=\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I)$ and $J'=\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J)$. Since $I'$ is $0$-Borel and $J'$ is opposite $0$-Borel, by Corollary \ref{2.6}(ii) and Lemmas \ref{3-5} $$H(Ig(J),d) \geq H(I'J',d) \geq H(I^{{\mathrm{lex}}} J',d) \geq H(I^{\mathrm{lex}} J^{\mathrm{{oplex}}},d)$$ for all $d \geq 0$. \end{proof} \begin{remark} \label{rem1} Theorems \ref{intersection} and \ref{product} are sharp. Let $I \subset S$ be a Borel-fixed ideal and $J \subset S$ an ideal satisfying that $h(J)=J$ for any lower triangular matrix $h \in {GL}_n(K)$. For a general $g \in {GL}_n(K)$, we have the LU decomposition $g=bh$ where $h \in {GL}_n(K)$ is a lower triangular matrix and $b \in {GL}_n(K)$ is an upper triangular matrix. 
Then as $K$-vector spaces $$I \cap g(J) \cong b^{-1}(I) \cap h(J)= I\cap J \mbox{ and } I g(J) \cong b^{-1}(I) h(J)= I J.$$ Thus if $I$ is lex and $J$ is opposite lex, then $H(I\cap g(J),d)=H(I\cap J,d)$ and $H(Ig(J),d)=H(I J,d)$ for all $d\geq 0$. \end{remark} \begin{remark}\label{example} The assumption on $\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(J)$ in Theorem \ref{intersection} is necessary. Let $I=(x_1^3,x_1^2x_2,x_1x_2^2,x_2^3) \subset K[x_1,x_2,x_3]$ and $J=(x_3^3,x_3^2x_2,x_3x_2^2,x_2^3)\subset K[x_1,x_2,x_3]$. Then the set of monomials of degree $3$ in $I^{\mathrm{lex}}$ is $\{x_1^3,x_1^2x_2,x_1^2x_3,x_1x_2^2\}$ and that of $J^{\mathrm{{oplex}}}$ is $\{x_3^3,x_3^2x_2,x_3^2x_1,x_3x_2^2\}$. Hence $H(I^{\mathrm{lex}}\cap J^{\mathrm{{oplex}}},3)=0$. On the other hand, as we see in Remark \ref{rem1}, $H(I\cap g(J),3)=H(I\cap J,3)=1$. Similarly, the assumption on the characteristic of $K$ is needed, as one can easily see by considering $\mathrm{char}(K)=p>0$, $I=(x_1^p,x_2^p)\subset K[x_1,x_2]$ and $J=(x_2^p).$ In this case we have $H(I^{\mathrm{lex}}\cap J^{\mathrm{{oplex}}},p)=0$, while $H(I\cap g(J),p)=H(g^{-1}(I)\cap J,p)=1$ since $I$ is fixed under any change of coordinates. \end{remark} Since $\ensuremath{\mathrm{Tor}}_0(S/I,S/J) \cong S/(I+J)$ and $\ensuremath{\mathrm{Tor}}_1(S/I,S/J) \cong (I\cap J)/ IJ$ for all homogeneous ideals $I \subset S$ and $J \subset S$, Theorems \ref{intersection} and \ref{product} show the next statement. \begin{remark} \label{cor} Conjecture \ref{conj} is true if $i=0$ or $i=1.$ \end{remark} \end{document}
\begin{document} \title{Mutual insurance for uninsurable income} \author{Michiko Ogaku} \date{28 June 2022} \maketitle \begin{abstract} This paper shows a result counter to prior work, which concludes that if infinitely lived individuals' incomes are unobservable, efficient allocations cause immiseration and welfare inequality among individuals. We construct an efficient mutual contract that (i) provides full insurance in each period with varying inter-period allocation weights; (ii) yields a lifetime utility higher than that in autarky; and (iii) does not invoke inequality and is sustainable. \end{abstract} {\bf JEL}: D61; D82. \section{Introduction} An infinite-period contract between a firm and individuals whose incomes are private information is known to produce an immiseration result: individuals' utilities from consumption converge to negative infinity and welfare inequality rises without bound \citep[][]{Green-1987, thomas-warrall-1990, Atkeson-Lucas-1992, Phelan-1998}. The extant literature concludes that, in the absence of full information, efficient allocations are achieved only at the expense of invoking inequality \citep[][]{Atkeson-Lucas-1992} or of assuming preferences that almost do not discount future values \citep[][]{Carrasco-Fuchs-Fukuda-2019}. This paper shows a counter result: an efficient contract can solve the immiseration problems. (i) Such a contract provides optimal (full insurance) transfers in each period with varying inter-period allocation weights; (ii) the lifetime utility is higher than that in autarky; (iii) it does not invoke inequality and is sustainable. This paper's key departure from prior work is the use of a mechanism adapted from the $\lambda$-mechanism by \cite{Marcet-Marimon-1992}, which induces truth telling by varying the future allocation weight $\lambda$.\footnote{Although $\lambda$ could be associated with the Lagrange multipliers used in the recursive contracts of \cite{Marcet-Marimon-2019}, our approach is not based on that seminal paper in the sense that it does not use a saddle-point functional equation to derive the optimal contract. } This paper proceeds as follows. Section \ref{sec:model} introduces the model and defines the mutual insurance mechanism. Section \ref{sec:mutual-contract} shows the impact of the mutual insurance mechanism on consumption in the long run. Section \ref{sec:discussion} provides comparisons with prior work, and Section \ref{sec:conclusion} concludes. \section{Model}\label{sec:model} Consider a firm (risk neutral) and a continuum of infinitely lived individuals (risk averse) on the unit interval. In each period, $t \in \mathbb{N}\cup \{0\}$, the individuals receive an idiosyncratic income shock, $e_t: (\Omega, \mathcal{F}, P) \to E:=\{e^1, e^2, \dots, e^M\}$, where $e^i \in \mathbb{R}_+$ with $e^i<e^j $ for $i<j \in \{1,\dots, M\}$. The firm considers a transfer mechanism $\Gamma=(\mathbb{R}_{++} \times E, \tau)$ repeating from a given period to infinity, where $\tau: \mathbb{R}_{++} \times E\to \mathbb{R}$, $(\lambda, e^i) \in \mathbb{R}_{++}\times E \mapsto \tau(\lambda, e^i)$ denotes transfers. We will use $\lambda \in \mathbb{R}_{++}$ as an allocation weight. \subsection{Pareto optimal contracts under full information} First, we derive Pareto optimal contracts in the case of full information.
Given $\lambda_0>0$ and $e_0 \in E$, the planner's problem is written as \begin{align} \max_{\tau(\lambda_0, e_t), t \in \mathbb{N}\cup \{0\}} &(1-\beta ) \mathrm{E}_0 [\sum_{t=0}^{\infty} \beta^t [\lambda_0 u(c_t)-\tau(\lambda_0, e_t)]], \label{eq:PO} \end{align} where $\mathrm{E}_0$ is the conditional expectation given $e_0$, $u: \mathbb{R}_+ \to \mathbb{R}$ is the utility function for the individuals, $c_t=e_t+\tau(\lambda_0, e_t)$ is consumption in period $t$ and $\beta \in (0,1)$ is the common discount factor for the parties. The utility function $u$ is assumed to be differentiable, increasing, strictly concave and such that $\{u'(x) \vert x \in \mathbb{R}_+\}=\mathbb{R}_{++}$. From the first order condition, \begin{align*} u'(c_t)=\frac{1}{\lambda_0} \text{ for } t \in \mathbb{N}\cup \{0\} \end{align*} and we obtain $c_t=(u')^{-1}(\lambda_0^{-1})$ for $t \in \mathbb{N}\cup\{0\}$. That is, since the individual is risk averse and the firm is risk neutral, the optimal contract provides the individual with constant consumption in every period. As a result, the lowest-income individuals receive the largest transfers. For given $\lambda_0>0$ and $e_0 \in E$, the parties' valuations of the contract are written as \begin{align*} v_1(\lambda_0, e_{0})&:= (1-\beta) \mathrm{E}_0[ \sum_{t=0}^{\infty} \beta^t u(c_t)]=u((u')^{-1}(\lambda_0^{-1})) =: \overline{v}_1(\lambda_0)\\ v_2(\lambda_0, e_{0})&:=(1-\beta) \mathrm{E}_{0}[\sum_{t=0}^{\infty}\beta^t (-\tau (\lambda_0, e_{t}))] \nonumber \\ \overline{v}_2(\lambda_0)&:=\mathrm{E}[v_2(\lambda_0,e)]. \nonumber \end{align*} Note that the individuals' value of the contract can be seen as a function of the allocation weight $\lambda_0$. \subsection{Pareto optimal contracts with asymmetric information} What is the contract like if the planner relies on self-reported incomes? For higher-income individuals, truth telling requires incentives. Varying the allocation weights can provide such incentives, provided that, for a given today's allocation weight $\lambda$, tomorrow's allocation weight $\lambda'$ satisfies the incentive constraints: \begin{align} (1-\beta)u (e+\tau(\lambda, e)) &+ \beta \overline{v}_1(\lambda'(e)) \nonumber \\ &\geq (1-\beta)u (e+\tau(\lambda, \tilde{e})) + \beta \overline{v}_1(\lambda'(\tilde{e})) \text{ for }e, \tilde{e} \in E, \label{ineq:IC} \end{align} where tomorrow's allocation weight $\lambda':E \to \mathbb{R}_{++}$ is redefined as a function assigning the weight for the next period given a self-reported income $e$. For later use, let $\Lambda (\lambda)$ be the set of tomorrow's $\lambda'$ satisfying the incentive constraints \eqref{ineq:IC}, and write the union of the sets of such allocation weights simply as $\Lambda=\bigcup_{i \in \mathbb{N}}\Lambda(\lambda_i) \cup \{\lambda_0\}$, where $\lambda_i$ denotes an allocation weight in period $i$. There are various ways to define tomorrow's $\lambda'$. A possible choice is the following: for given $\lambda \in \Lambda$, \begin{align}\lambda'(e)=\overline{v}_1^{-1}\big(\overline{v}_1(\lambda) + \lambda^{-1}\beta^{-1}(v_2(\lambda, e) - \beta \mathrm{E}[v_2(\lambda, e)])\big) \text{ for } e \in E. \label{eq:lambda'} \end{align} This equation is equivalent to \begin{align} \overline{v}_1(\lambda'(e))=\overline{v}_1(\lambda) + \lambda^{-1}\beta^{-1}(v_2(\lambda, e) - \beta \mathrm{E}[v_2(\lambda, e)]) \text{ for } e \in E. \label{eq:one-step-solution} \end{align} Focusing on the recursive relation between $\lambda$ and $\lambda'$ in \eqref{eq:lambda'}, consider a sequence of $\lambda$'s satisfying \eqref{eq:lambda'}; a purely illustrative closed-form example follows.
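As a purely illustrative example of the recursion \eqref{eq:lambda'}, take logarithmic utility $u(c)=\log c$ (which is defined on $\mathbb{R}_{++}$ rather than on $\mathbb{R}_+$, so it does not literally satisfy the maintained assumptions and serves only to show how the recursion operates). Then $(u')^{-1}(\lambda^{-1})=\lambda$, so the full-insurance consumption level is $c=\lambda$, the transfer is $\tau(\lambda,e)=\lambda-e$ and $\overline{v}_1(\lambda)=\log \lambda$. Using $v_2(\lambda, e)-\beta \mathrm{E}[v_2(\lambda, e)]=(1-\beta)(-\tau(\lambda,e))=(1-\beta)(e-\lambda)$, the recursion \eqref{eq:one-step-solution} becomes \begin{align*} \log \lambda'(e)=\log \lambda+\frac{(1-\beta)(e-\lambda)}{\beta\lambda}, \qquad \text{i.e.}\qquad \lambda'(e)=\lambda\exp\Big(\frac{(1-\beta)(e-\lambda)}{\beta\lambda}\Big). \end{align*} In particular, $\lambda'(e)$ is increasing in the reported income $e$: an individual who reports a high income pays a larger transfer today and is compensated with a larger allocation weight, and hence a larger guaranteed value $\overline{v}_1(\lambda'(e))$, tomorrow.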
The sequence of $\lambda$s could be a tool to build an allocation mechanism: \begin{definition}\label{def:lambda-mechanism} A mutual insurance mechanism $\Gamma^m=(E, \tau(\lambda_t, \cdot))_{t \in \mathbb{N}\cup\{0\}}$ is produced by a sequence $(\lambda_t)_{t \in \mathbb{N}\cup \{0\}}$ with the recursive relation \eqref{eq:lambda'}. \end{definition} The mechanism $\Gamma^m$ provides a sequence of contract values $\{\overline{v}_1(\lambda_i)\}_{i \in \mathbb{N}\cup\{0\}}$. Today, individuals are guaranteed the lifetime contract value $\overline{v}_1(\lambda)$. Tomorrow, individuals are guaranteed the lifetime contract value $\overline{v}_1(\lambda')$. Actually, the tomorrow's lifetime contract value absorbs the realised risks $v_2(\lambda, e)-\beta \mathrm{E}[v_2(\lambda, e)]=(1-\beta)(-\tau(\lambda,e))$, where the firm does not incur cost. Importantly, within each period individuals are fully insured. Notice that $\overline{v}_1(\lambda)$ today does not depend on today's income. Similarly, $\overline{v}_1(\lambda'(e))$ does not depend on tomorrow's income. Since the firm exists only for mediating a series of these transactions among individuals, it is a mutual insurance contract. Importantly, $\Gamma^m$ is sequentially efficient. That is, sequentially incentive compatible and not dominated by other sequentially incentive compatible mechanisms. \begin{lemma}\label{lem:nonempty} $\Gamma^m$ is sequentially efficient. \end{lemma} \begin{proof} Show: $\Gamma^m$ is sequentially incentive compatible. For $\lambda>0$ \begin{align*} \lambda \big[&(1-\beta)u(e+\tau(\lambda,e))+\beta \overline{v}_1(\lambda'(e))\big] \\ &=\lambda \big[(1-\beta)u(e+\tau(\lambda, e))+\beta \overline{v}_1(\lambda)\big]+v_2(\lambda,e)-\beta \mathrm{E}[v_2(\lambda,e)] \\ &\geq \lambda \big[(1-\beta)u(e + \tau(\lambda,\tilde{e})) + \beta \overline{v}_1(\lambda) \big] + v_2(\lambda, \tilde{e}) -\beta \mathrm{E}[v_2(\lambda, \tilde{e})] \\ &=\lambda \big[(1-\beta)u(e+\tau(\lambda, \tilde{e}))+\beta \overline{v}_1(\lambda'(\tilde{e}))\big]. \end{align*} The last inequality follows from the optimality of $\lambda \overline{v}_1(\lambda) + v_2(\lambda, e)$ in \eqref{eq:PO} given $\lambda$. Hence, $\Gamma^m$ is sequentially incentive compatible. Show: $\Gamma^m$ is Pareto optimal and not dominated by any other sequentially incentive compatible mechanisms. Since $\Gamma^m$ provides a sequence of truthful reports on incomes and the corresponding transfers are solutions of the Pareto optimal problem \eqref{eq:PO}, it is Pareto optimal. To see that $\Gamma^m$ is not Pareto dominated by any other sequentially incentive compatible mechanisms, suppose there exists a sequentially incentive compatible mechanism $\Gamma^*$ that Pareto dominates $\Gamma^m$ for a given state $e$. Let $(v_1^*,v_2^*)$ be the present value attained through $\Gamma^*$. Set $\lambda=\overline{v}_1^{-1}(v_1^*)$ and use it as the initial condition $(\lambda, e)$ for $\Gamma^m$. Then, by construction the risk averse agent has the same present value for both contracts. Thus, Pareto dominance requires that $v_2^* > v_2(\lambda, e)$. However, this contradicts that solutions of $\Gamma^m$ are Pareto optimal. 
\end{proof} \section{The optimal contract}\label{sec:mutual-contract} The mutual insurance mechanism $\Gamma^m$ could be expressed using the following one-step operator: \begin{definition}\label{def:competitive} The one-step operator $T$ satisfies \begin{align} T \overline{v}_1(\lambda)=\max_{\lambda' \in \Lambda(\lambda)} \mathrm{E}_{(e)}[(1-\beta)u(e+\tau(\lambda, e)) + \beta \overline{v}_1(\lambda'(e))], \label{eq:one-step} \end{align} where $\mathrm{E}_{(e)}$ is the conditional expectation given income $e$. \end{definition} The one-step operator $T$ defines the value functions recursively through a sequence $\{\lambda_i\}_{i \in \mathbb{N}\cup \{0\}}$ producing $\Gamma^m$: \begin{lemma}\label{lem:recurrence-relation} The one-step recurrence relation of value functions in \eqref{eq:one-step} is given by \eqref{eq:one-step-solution}. \end{lemma} \begin{proof} Substituting \eqref{eq:one-step-solution} into the objective function of \eqref{eq:one-step}, we obtain \[ \mathrm{E}_{(e)}\big[ (1-\beta) u(e+\tau(\lambda,e))+\beta \overline{v}_1(\lambda)+\lambda^{-1}(v_2(\lambda,e)-\beta\mathrm{E}[v_2(\lambda,e)]) \big]. \] Since $\beta \mathrm{E}[v_2(\lambda,e)]$ is a constant and $\lambda>0$ is already realised, maximizing this expression is equivalent to maximizing \[\mathrm{E}_{(e)}\big[\lambda [(1-\beta)u(e+\tau(\lambda,e))+\beta \overline{v}_1(\lambda)]+v_2(\lambda,e)\big].\] From the optimality of the problem \eqref{eq:PO}, $\lambda'(e)$ in \eqref{eq:one-step} provides the optimal solution to \eqref{eq:one-step}. \end{proof} The contract induced by $\Gamma^m$ is given by the following proposition: \begin{proposition}\label{prop:main} The value function $\overline{v}_1 \circ \lambda^*$ of $\Gamma^m$ is given by \begin{align*} \overline{v}_1 \circ \lambda^* &= (1-\beta)\sum_{i=0}^{\infty} \beta^i\overline{v}_1(\lambda_i). \end{align*} If it converges, \begin{align*} \overline{v}_1 \circ \lambda^* &= \overline{v}_1(\lambda_0)+(1-\beta)\sum_{i=0}^{\infty}\beta^i (e_i-(u')^{-1}(\lambda_i^{-1}))\lambda_i^{-1}. \end{align*} \end{proposition} \begin{proof} Let $L=\beta \max_{\lambda' \in \Lambda(\lambda)}$. Then the value function is given by \begin{align*} \overline{v}_1 \circ \lambda^* &= \sum_{n=0}^{\infty} L^n (1-\beta) \overline{v}_1(\lambda_0) \\ &=(1-\beta) \big[ \overline{v}_1(\lambda_0) + \beta \overline{v}_1(\lambda_1) + \beta^2 \overline{v}_1(\lambda_2)+ \cdots \big]. \end{align*} If it converges, the last equation is rewritten as \begin{align*} \overline{v}_1 \circ \lambda^* &= \overline{v}_1(\lambda_0)+(1-\beta)\sum_{i=0}^{\infty}\beta^i (e_i-(u')^{-1}(\lambda_i^{-1}))\lambda_i^{-1}. \end{align*} \end{proof} Importantly, the value of the contract is higher than that of autarky: \begin{corollary} The lifetime utility of $\Gamma^m$ is higher than that in autarky. \end{corollary} \begin{proof} It is easy to see that autarky is equivalent to the state in which all the transfers in $\Gamma^m$ are set to zero. From the optimality of the transfers in $\Gamma^m$, the contract value of $\Gamma^m$ is higher than the lifetime utility of autarky. \end{proof} \section{Discussion}\label{sec:discussion} A promise-keeping constraint in the promised utility approach makes it optimal for the firm to provide only the highest-income individuals with full insurance because, by false reporting, the highest-income individuals can switch to tomorrow's contract value for lower-income individuals, receiving the transfers intended for lower-income individuals. This reduces tomorrow's value of the contracts for lower-income individuals relative to today's value.
Consequently, the average contract value becomes a decreasing sequence. Prior work with the promised utility approach \citep[][]{Green-1987, thomas-warrall-1990, Atkeson-Lucas-1992, Phelan-1998} shows that efficient allocations are available at the expense of inequality in the absence of full information or the discount factor of $1$.\footnote{\cite{Carrasco-Fuchs-Fukuda-2019} shows that the contract could converge to the first best (full information contract) as the discount factor $\beta$ approaches to $1$.} Furthermore, other related studies based on the promised utility approach \citep[][]{phelan_2006} show that efficient allocation of resources may require social rankings. However, the current study shows that incentives to deter moral hazard do not necessarily result in immiseration. A limitation could be that the proposed mechanism only shifts risks to the future periods. However, it is better than the state of autarky because individuals could recover from the shock, even though they have to return the borrowed transfer in the future. A real world example might be cooperation between local governments to aid an area hit by a natural disaster. If the transfer is common knowledge and must be returned in the future, it would not be good to exaggerate the damage. Importantly, the cooperation could help to recover the economy. \section{Conclusion}\label{sec:conclusion} This paper shows a result counter to the prior work's conclusion: if infinitely lived individuals' incomes are unobservable, efficient allocations are achieved only at the expense of invoking inequality. An efficient mutual contract achieves full insurance by shifting risks to future periods. It is not in exchange for equal opportunities to be wealthy. The result sheds some light on efficient resource allocations for societies with equal opportunities. \end{document}
\begin{document} \title{Eppstein's bound on intersecting triangles revisited} \begin{abstract} Let $S$ be a set of $n$ points in the plane, and let $T$ be a set of $m$ triangles with vertices in $S$. Then there exists a point in the plane contained in $\Omega(m^3/(n^6\log^2 n))$ triangles of $T$. Eppstein (1993) gave a proof of this claim, but there is a problem with his proof. Here we provide a correct proof by slightly modifying Eppstein's argument. \emph{Keywords:} Triangle; Simplex; Selection Lemma; $k$-Set \end{abstract} \section{Introduction} Let $S$ be a set of $n$ points in the plane in general position (no three points on a line), and let $T$ be a set of $m \le {n\choose 3}$ triangles with vertices in $S$. Aronov et al.~\cite{ACEGSW} showed that there always exists a point in the plane contained in the interior of \begin{equation}\label{eq_bound_log5} \Omega{\left({ m^3 \over n^6\log^5 n }\right)} \end{equation} triangles of $T$. Eppstein \cite{eppstein} subsequently claimed to have improved this bound to \begin{equation}\label{eq_bound_log2} \Omega{\left({ m^3 \over n^6\log^2 n }\right)}. \end{equation} There is a problem in Eppstein's proof, however.\footnote{The very last sentence in the proof of Theorem 4 (Section 4) in \cite{eppstein} reads: ``So $\epsilon = 1/2^{i+1}$, and $x = m\epsilon/y = O(m/8^i)$, from which it follows that $x/\epsilon^3 = O(n^2)$.'' This is patently false, since what actually follows is that $x/\epsilon^3 = O(m)$, and the entire argument falls through.} In this note we provide a correct proof of (\ref{eq_bound_log2}), by slightly modifying Eppstein's argument. \subsection{The Second Selection Lemma and $k$-sets} The above result is the special case $d=2$ of the following lemma (called the \emph{Second Selection Lemma} in \cite{matou}), whose proof was put together by B\'ar\'any et al.~\cite{BFL}, Alon et al.~\cite{ABFK}, and \v Zivaljevi\'c and Vre\'cica \cite{ZV}: \begin{lemma}\label{lemma_2nd_sel} If $S$ is an $n$-point set in $\mathbb R^d$ and $T$ is a family of $m\le {n \choose d+1}$ $d$-simplices spanned by $S$, then there exists a point $p\in \mathbb R^d$ contained in at least \begin{equation}\label{eq_2nd_SL} c_d \left({ m \over n^{d+1}}\right)^{s_d} n^{d+1} \end{equation} simplices of $T$, for some constants $c_d$ and $s_d$ that depend only on $d$. \end{lemma} (Note that $m/n^{d+1} = O(1)$, so the smaller the constant $s_d$, the stronger the bound.) Thus, for $d=2$ the constant $s_2$ in (\ref{eq_2nd_SL}) can be taken arbitrarily close to $3$. The general proof of Lemma \ref{lemma_2nd_sel} gives very large bounds for $s_d$; roughly $s_d \approx (4d+1)^{d+1}$. The main motivation for the Second Selection Lemma is deriving upper bounds for the maximum number of \emph{$k$-sets} of an $n$-point set in $\mathbb R^d$; see \cite[ch.~11]{matou} for the definition and details. \section{The proof} We assume that $m = \Omega( n^2 \log^{2/3} n)$, since otherwise the bound (\ref{eq_bound_log2}) is trivial. The proof, like the proof of the previous bound (\ref{eq_bound_log5}), relies on the following two one-dimensional \emph{selection lemmas} \cite{ACEGSW}: \begin{lemma}[Unweighted Selection Lemma]\label{lemma_unw} Let $V$ be a set of $n$ points on the real line, and let $E$ be a set of $m$ distinct intervals with endpoints in $V$. Then there exists a point $x$ lying in the interior of $\Omega(m^2/n^2)$ intervals of $E$. 
\end{lemma} \begin{lemma}[Weighted Selection Lemma]\label{lemma_weighted} Let $V$ be a set of $n$ points on the real line, and let $E$ be a \emph{multiset} of $m$ intervals with endpoints in $V$. Then there exists a multiset $E'\subseteq E$ of $m'$ intervals, having as endpoints a subset $V' \subseteq V$ of $n'$ points, such that all the intervals of $E'$ contain a common point $x$ in their interior, and such that \begin{equation*} {m'\over n'} = \Omega{\left({m\over n\log n}\right)}. \end{equation*} \end{lemma} The proof of the desired bound (\ref{eq_bound_log2}) proceeds as follows: Assume without loss of generality that no two points of $S$ have the same $x$-coordinate. For each triangle in $T$ define its \emph{base} to be the edge with the longest $x$-projection. For each pair of points $a, b\in S$, let $T_{ab}$ be the set of triangles in $T$ that have $ab$ as base, and let $m_{ab} = |T_{ab}|$. (Thus, $\sum_{ab} m_{ab} = m$.) Discard all sets $T_{ab}$ for which $m_{ab} < m/n^2$. We discarded at most ${n\choose 2} m/n^2 < m/2$ triangles, so we are left with a subset $T'$ of at least $m/2$ triangles, such that either $m_{ab} = 0$ or $m_{ab} \ge m/n^2$ for each base $ab$.\footnote{This critical discarding step is missing in \cite{eppstein}, and that is why the proof there does not work.} Partition the bases into a logarithmic number of subsets $E_1, E_2, \ldots, E_k$ for $k = \log_4 (n^3/m)$, so that each $E_j$ contains all the bases $ab$ for which \begin{equation}\label{eq_bound_m_ab} {4^{j-1} m \over n^2} \le m_{ab} < {4^j m \over n^2}. \end{equation} Let $T_j = \bigcup_{ab \in E_j} T_{ab}$ denote the set of triangles with bases in $E_j$, and $m_j = |T_j|$ denote their number. There must exist an index $j$ for which \begin{equation*} m_j \ge 2^{-(j+1)} m, \end{equation*} since otherwise the total number of triangles in $T'$ would be less than $m/2$. From now on we fix this $j$, and work only with the bases in $E_j$ and the triangles in $T_j$. For each pair of triangles $abc$, $abd$ having the same base $ab \in E_j$, project the segment $cd$ into the $x$-axis, obtaining segment $c'd'$. We thus obtain a multiset $M_0$ of horizontal segments, with \begin{equation*} |M_0| \ge {m_j\over 2} \left( {4^{j-1} m \over n^2} - 1 \right) = \Omega{\left({ 2^j m^2 \over n^2 }\right)}. \end{equation*} (Each of the $m_j$ triangles in $T_j$ is paired with all other triangles sharing the same base, and each such pair is counted twice.) We now apply the Weighted Selection Lemma (Lemma~\ref{lemma_weighted}) to $M_0$, obtaining a multiset $M_1$ of segments delimited by $n_1$ distinct endpoints, all segments containing some point $z_0$ in their interior, with \begin{equation*} { |M_1| \over n_1} = \Omega{\left({ |M_0| \over n \log n }\right)} = \Omega{\left({ 2^j m^2 \over n^3 \log n }\right)}. \end{equation*} \begin{figure} \caption{ Pairing two triangles with a common base.} \label{fig_triangle_pair} \end{figure} Let $\ell$ be the vertical line passing through $z_0$. For each horizontal segment $c'd' \in M_1$, each of its (possibly multiple) instances in $M_1$ originates from a pair of triangles $abc$, $abd$, where points $a$ and $c$ lie to the left of $\ell$, and points $b$ and $d$ lie to the right of $\ell$. Let $p$ be the intersection of $\ell$ with $ad$, and let $q$ be the intersection of $\ell$ with $bc$. Then, $pq$ is a vertical segment along $\ell$, contained in the union of the triangles $abc$, $abd$ (see Figure \ref{fig_triangle_pair}). 
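To see why $pq$ is contained in this union (we spell out the short argument for completeness): let $r$ be the intersection point of $\ell$ with the common base $ab$. Since $a$ lies to the left of $\ell$ while $b$ and $d$ lie to the right, the line $\ell$ meets the triangle $abd$ exactly in the segment $rp$, and similarly it meets $abc$ in the segment $rq$. As $p$, $q$ and $r$ are collinear, the segment $pq$ is contained in $rp\cup rq$, and hence in the union of the two triangles.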
Let $M_2$ be the set of all these segments $pq$ for all $c'd' \in M_1$. Note that the vertical segments in $M_2$ are all distinct, since each such segment $pq$ uniquely determines the originating points $a$, $b$, $c$, $d$ (assuming $z_0$ was chosen in general position). Let $n_2$ be the number of endpoints of the segments in $M_2$. We have $n_2 \le n n_1$, since each endpoint (such as $p$) is uniquely determined by one of $n_1$ ``inner'' vertices (such as $d$) and one of at most $n$ ``outer'' vertices (such as $a$). Next, apply the Unweighted Selection Lemma (Lemma~\ref{lemma_unw}) to $M_2$, obtaining a point $x_0\in \ell$ that is contained in \begin{equation*} \Omega{\left({ |M_2| ^2 \over n_2^2 }\right)} = \Omega{\left( {1\over n^2} \left({ |M_1| \over n_1 }\right)^2 \right)} = \Omega{\left({ 4^j m^4 \over n^8 \log^2 n }\right)} \end{equation*} segments in $M_2$. Thus, $x_0$ is contained in at least these many \emph{unions of pairs of triangles} of $T_j$. But by (\ref{eq_bound_m_ab}), each triangle in $T_j$ participates in at most $4^j m / n^2$ pairs. Therefore, $x_0$ is contained in \begin{equation*} \Omega{\left({ m^3 \over n^6 \log^2 n }\right)} \end{equation*} triangles of $T_j$. \section{Discussion} Eppstein \cite{eppstein} also showed that there always exists a point in $\mathbb R^2$ contained in $\Omega(m/n)$ triangles of $T$. This latter bound is stronger than (\ref{eq_bound_log2}) for small $m$, namely for $m = O(n^{5/2} \log n)$. On the other hand, as Eppstein also showed \cite{eppstein}, for every $n$-point set $S$ in general position and every $m = \Omega(n^2)$, $m\le {n\choose 3}$, there exists a set $T$ of $m$ triangles with vertices in $S$, such that no point in the plane is contained in more than $O(m^2/n^3)$ triangles of $T$. Thus, with the current lack of any better lower bound, the bound (\ref{eq_bound_log2}) appears to be far from tight. Even achieving a lower bound of $\Omega(m^3/n^6)$, without any logarithmic factors, is a major challenge still unresolved. It is known, however, that if $S$ is a set of $n$ points in $\mathbb R^3$ in general position (no four points on a plane), and $T$ is a set of $m$ triangles spanned by $S$, then there exists a \emph{line} (in fact, a line spanned by two points of $S$) that intersects the interior of $\Omega(m^3/n^6)$ triangles of $T$; see \cite{DE} and \cite{smo_phd} for two different proofs of this. \end{document}
\begin{document} \title[B\'enard convection]{Local well-posedness for the B\'enard convection without surface tension} \author{Yunrui Zheng} \address{Beijing International Center for Mathematical Research, Peking University, 100871, P. R. China} \email{[email protected]} \keywords{B\'enard convection, Boussinesq approximation, energy method} \begin{abstract} We consider the B\'enard convection in a three-dimensional domain bounded below by a fixed flat bottom and above by a free moving surface. The domain is horizontally periodic. The fluid dynamics are governed by the Boussinesq approximation and the effect of surface tension is neglected on the free surface. Here we develop a local well-posedness theory for the equations in the general case within the framework of the nonlinear energy method. \end{abstract} \maketitle \section{Introduction} \subsection{Formulation of the problem} In this paper, we consider the B\'enard convection in a shallow horizontal layer of a fluid heated from below evolving in a moving domain \begin{eqnarray*} \Om(t)=\left\{y\in\Sigma\times\mathbb{R}\mid-1<y_3<\eta(y_1,y_2,t)\right\}. \end{eqnarray*} Here we assume that $\Sigma=(L_1\mathbb{T})\times(L_2\mathbb{T})$ for $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ the usual $1$-torus and $L_1$, $L_2>0$ the periodicity lengths. Assuming the Boussinesq approximation \cite{Chand}, we obtain the basic hydrodynamic equations governing B\'enard convection as \begin{eqnarray*} \pa_t u+u\cdot\nabla u+\frac{1}{\rho_0}\nabla p&=&\nu\Delta u+g\alpha \theta\mathbf{e}_{y_3},\quad \text{in}\ \Om(t),\\ \pa_t \theta+u\cdot\nabla \theta&=&\kappa\Delta \theta,\quad \text{in}\ \Om(t),\\ u\mid_{t=0}&=&u_0(y_1,y_2,y_3),\quad \theta\mid_{t=0}=\theta_0(y_1,y_2,y_3). \end{eqnarray*} Here, $u=(u_1,u_2,u_3)$ is the velocity field of the fluid satisfying $\mathop{\rm div}\nolimits u=0$, $p$ the pressure, $g>0$ the strength of gravity, $\nu>0$ the kinematic viscosity, $\alpha$ the thermal expansion coefficient, $\mathbf{e}_{y_3}=(0,0,1)$ the unit upward vector, $\theta$ the temperature field of the fluid, $\kappa$ the thermal diffusivity coefficient, and $\rho_0$ the density at the temperature $T_0$. Notice that we have shifted the actual pressure $\bar{p}$ by setting $p=\bar{p}+gy_3-p_{atm}$, where $p_{atm}$ is the constant atmospheric pressure. The boundary conditions are \begin{eqnarray*} \pa_t\eta+u^\prime\cdot\nabla\eta-u_3&=&0,\quad \text{on}\ \{y_3=\eta(t,y_1,y_2)\},\\ (pI-\nu\mathbb{D}(u))n&=&g\eta n+\sigma Hn+(\mathbf{t}\cdot\nabla)\sigma\mathbf{t},\quad \text{on}\ \{y_3=\eta(t,y_1,y_2)\},\\ n\cdot\nabla \theta+Bi \theta&=&-1,\quad \text{on}\ \{y_3=\eta(t,y_1,y_2)\},\\ u\mid_{y_3=-1}&=&0,\quad \theta\mid_{y_3=-1}=0. \end{eqnarray*} Here, $u^\prime=(u_1,u_2)$, $I$ the $3\times3$ identity matrix, $\mathbb{D}(u)_{ij}=\pa_iu_j+\pa_ju_i$ the symmetric gradient of $u$, $\mathscr{N}=(-\pa_1\eta, -\pa_2\eta, 1)$ the upward normal vector of the free surface $\{y_3=\eta\}$ with $|\mathscr{N}|=\sqrt{(\pa_1\eta)^2+(\pa_2\eta)^2+1}$, $n=\mathscr{N}/|\mathscr{N}|$ the unit upward normal vector of the free surface, $\mathbf{t}$ the unit tangential vector of the free surface, $Bi\ge0$ the Biot number and $H$ the mean curvature of the free surface. For simplicity, we only consider the case without surface tension in this paper, i.e. $\sigma=0$.
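We remark that the first boundary condition is the usual kinematic condition expressing that the free surface is advected with the fluid; for the reader's convenience we recall the one-line derivation. If $t\mapsto(y_1(t),y_2(t),y_3(t))$ is the trajectory of a fluid particle that remains on the free surface, then $y_3(t)=\eta(y_1(t),y_2(t),t)$, and differentiating in $t$ and using $\dot{y}_i=u_i$ gives \begin{eqnarray*} u_3=\pa_t\eta+u_1\pa_1\eta+u_2\pa_2\eta \quad \text{on}\ \{y_3=\eta(t,y_1,y_2)\}, \end{eqnarray*} which is the condition stated above and may equivalently be written as $\pa_t\eta=u\cdot\mathscr{N}$.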
We will always assume the natural condition that there exists a positive number $\delta_0$ such that $1+\eta_0\ge\delta_0>0$ on $\Sigma$, which means that the initial free surface is strictly separated from the bottom. Without loss of generality, we may assume that $\rho_0=\nu=\kappa=\alpha=g=Bi=1$. That is, we will consider the equations \begin{eqnarray}\label{equ:BC} \left\{ \begin{aligned} \pa_tu+u\cdot\nabla u+\nabla p-\Delta u-\theta e_{y_3}&=0& \quad \text{in}\quad \Om(t), \\ \mathop{\rm div}\nolimits u&=0& \quad \text{in}\quad \Om(t),\\ \pa_t\theta+u\cdot\nabla \theta-\Delta \theta&=0& \quad \text{in}\quad \Om(t),\\ (pI-\mathbb{D}u)n&=\eta n& \quad \text{on}\quad \{y_3=\eta(t,y_1,y_2)\},\\ \nabla \theta\cdot n+\theta &=-1&\quad\text{on}\quad\{y_3=\eta(t,y_1,y_2)\},\\ u=0,\quad \theta&=0& \quad\text{on}\quad\{y_3=-1\},\\ u\mid_{t=0}=u_0, \quad \theta\mid_{t=0}&=\theta_0&\quad \text{in}\quad \Om(0),\\ \pa_t\eta+u_1\pa_1\eta+u_2\pa_2\eta&=u_3& \quad\text{on}\quad\{y_3=\eta(t,y_1,y_2)\},\\ \eta\mid_{t=0}&=\eta_0&\quad\text{on}\quad\{y_3=\eta(t,y_1,y_2)\}. \end{aligned} \right. \end{eqnarray} The discussion of the fourth equation in \eqref{equ:BC} may be found in \cite{WL}. The eighth equation in \eqref{equ:BC} implies that the free surface is advected with the fluid. \subsection{Previous results} Traditionally, the B\'enard convection problem has been studied either with a fixed upper boundary or with a free boundary surface with surface tension. For the problem with surface tension, the existence and decay of global-in-time solutions of the B\'enard convection problem with a free boundary surface was proved in $L^2$ spaces: T. Iohara proved this in the $2$-D setting, and T. Nishida and Y. Teramoto proved this in the $3$-D setting. They all utilized the framework of \cite{Beale2} in Lagrangian coordinates. \subsection{Geometrical formulation} In the absence of the surface tension effect, we will solve this problem in Eulerian coordinates. First, we straighten the time-dependent domain $\Om(t)$ to a time-independent domain $\Om$. The idea was introduced by J. T. Beale in Section 5 of \cite{Beale2}. In \cite{GT1}, \cite{GT2} and \cite{GT3}, Y. Guo and I. Tice proved local and global existence results for the incompressible Navier--Stokes equations with a deformable surface using this idea. In \cite{GT1}, \cite{GT2} and \cite{GT3}, Guo and Tice assume that the surface function $\eta$ is small in some norms, which means that $\eta$ is a small perturbation of the plane $\{y_3=0\}$. In order to study the free boundary problem of the incompressible Navier--Stokes equations with a general surface function $\eta$, L. Wu introduced the $\varepsilon$-Poisson integral method in \cite{LW}. In this paper, we will use the flattening transformation method introduced by L. Wu. We define $\bar{\eta}^\varepsilon$ by \[ \bar{\eta}^\varepsilon=\mathscr{P}^\varepsilon\eta=\thinspace\text{the parametrized harmonic extension of}\thinspace \eta. \] The definition of $\mathscr{P}^\varepsilon\eta$ in the periodic case can be found in Section 1.3.1 of \cite{LW}.
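Although we do not reproduce the definition of $\mathscr{P}^\varepsilon$ here, it may help to keep in mind the shape of the standard (unparametrized) harmonic extension in the horizontally periodic setting, of which the $\varepsilon$-parametrized extension of \cite{LW} is a variant; the formula below is recalled only for orientation and is not the definition used in the rest of the paper. Writing $\eta(x')=\sum_{\xi\in(L_1^{-1}\mathbb{Z})\times(L_2^{-1}\mathbb{Z})}\hat{\eta}(\xi)e^{2\pi i x'\cdot\xi}$, one standard harmonic extension is \begin{eqnarray*} \bar{\eta}(x',x_3)=\sum_{\xi\in(L_1^{-1}\mathbb{Z})\times(L_2^{-1}\mathbb{Z})}\hat{\eta}(\xi)e^{2\pi\left|\xi\right|x_3}e^{2\pi i x'\cdot\xi}, \end{eqnarray*} which is harmonic in $\{x_3<0\}$, equals $\eta$ on $\{x_3=0\}$, and gains half a derivative in Sobolev regularity over $\eta$.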
We introduce the mapping $\Phi^\varepsilon$ from $\Om$ to $\Om(t)$ as \begin{equation}\label{map:phi} \Phi^\varepsilon :(x_1,x_2,x_3)\mapsto (x_1,x_2,x_3+(1+x_3)\bar{\eta}^\varepsilon)=(y_1,y_2,y_3), \end{equation} and its Jacobian matrix \[ \nabla\Phi^\varepsilon=\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & 0\\ A^\varepsilon & B^\varepsilon & J^\varepsilon \end{array}\right) \] and the transform matrix \[ \mathscr{A}^\varepsilon=((\nabla\Phi^\varepsilon)^{-1})^\top=\left(\begin{array}{ccc} 1 & 0 & -A^\varepsilon K^\varepsilon \\ 0 & 1 & -B^\varepsilon K^\varepsilon \\ 0 & 0 & K^\varepsilon \end{array}\right) \] where \begin{equation}\label{equ:components} A^\varepsilon=(1+x_3)\pa_1\bar{\eta}^{\varepsilon},\thinspace B^\varepsilon=(1+x_3)\pa_2\bar{\eta}^{\varepsilon},\thinspace J^\varepsilon=1+\bar{\eta}^\varepsilon+(1+x_3)\pa_3\bar{\eta}^\varepsilon,\thinspace K^\varepsilon=1/J^\varepsilon. \end{equation} According to Theorem 2.7 in \cite{LW} and the assumption that $1+\eta_0>\delta_0>0$, there exists a $\delta>0$ such that $J^\varepsilon(0)>\delta>0$ for a sufficiently small $\varepsilon$ depending on $\|\eta_0\|_{H^{5/2}}$. This implies that $\Phi^\varepsilon(0)$ is a homeomorphism. Furthermore, $\Phi^\varepsilon(0)$ is a $C^1$ diffeomorphism, as deduced from Lemmas 2.5 and 2.6 in \cite{LW}. For simplicity, in the following, we just write $\bar{\eta}$ instead of $\bar{\eta}^\varepsilon$, and the same convention applies to $\mathscr{A}$, $\Phi$, $A$, $B$, $J$ and $K$. Then, we define some transformed operators. The differential operators $\nabla_{\mathscr{A}}$, $\mathop{\rm div}\nolimits_{\mathscr{A}}$ and $\Delta_{\mathscr{A}}$ are defined as follows. \begin{align*} &(\nabla_{\mathscr{A}}f)_i=\mathscr{A}_{ij}\pa_jf,\\ &\mathop{\rm div}\nolimits_{\mathscr{A}} u=\mathscr{A}_{ij}\pa_ju_i,\\ &\Delta_{\mathscr{A}}f=\nabla_{\mathscr{A}}\cdot\nabla_{\mathscr{A}}f. \end{align*} The symmetric $\mathscr{A}$-gradient $\mathbb{D}_{\mathscr{A}}$ is defined as $(\mathbb{D}_{\mathscr{A}}u)_{ij}=\mathscr{A}_{ik}\pa_ku_j+\mathscr{A}_{jk}\pa_ku_i$. We write the stress tensor as $S_{\mathscr{A}}(p,u)=pI-\mathbb{D}_{\mathscr{A}}u$, where $I$ is the $3\times3$ identity matrix. Then we note that $\mathop{\rm div}\nolimits_{\mathscr{A}}S_{\mathscr{A}}(p,u)=\nabla_{\mathscr{A}}p-\Delta_{\mathscr{A}}u$ for vector fields satisfying $\mathop{\rm div}\nolimits_{\mathscr{A}}u=0$. We have also written $\mathscr{N}=(-\pa_1\eta,-\pa_2\eta,1)$ for the nonunit normal to $\{y_3=\eta(y_1,y_2,t)\}$. Then the original equations \eqref{equ:BC} become \begin{eqnarray}\label{equ:NBC} \left\{ \begin{aligned} &\pa_tu-\pa_t\bar{\eta}(1+x_3)K\pa_3u+u\cdot\nabla_{\mathscr{A}}u-\Delta_{\mathscr{A}}u+\nabla_{\mathscr{A}}p-\theta \nabla_{\mathscr{A}}y_3=0 &\quad\text{in}\quad\Om \\ &\nabla_{\mathscr{A}}\cdot u=0&\quad\text{in}\quad\Om \\ &\pa_t\theta-\pa_t\bar{\eta}(1+x_3)K\pa_3\theta+u\cdot\nabla_{\mathscr{A}}\theta-\Delta_{\mathscr{A}}\theta=0&\quad\text{in}\quad\Om \\ &(p I-\mathbb{D}_{\mathscr{A}}u)\mathscr{N}=\eta\mathscr{N}&\quad\text{on}\quad \Sigma\\ &\nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|=-\left|\mathscr{N}\right|&\quad\text{on}\quad \Sigma\\ &u=0,\quad\theta=0& \quad\text{on}\quad \Sigma_b\\ &u(x,0)=u_0, \quad \theta(x,0)=\theta_0&\quad \text{in}\quad \Om\\ &\pa_t\eta+u_1\pa_1\eta+u_2\pa_2\eta=u_3&\quad\text{on}\quad \Sigma\\ &\eta(x^\prime,0)=\eta_0(x^\prime)&\quad\text{on}\quad \Sigma \end{aligned} \right.
\end{eqnarray} where $e_3=(0, 0, 1)$ and we can split the equation \eqref{equ:NBC} into a equation governing B\'enard convection and a transport equation, i.e. \begin{eqnarray}\label{equ:nonlinear BC} \left\{ \begin{aligned} &\pa_tu-\pa_t\bar{\eta}(1+x_3)K\pa_3u+u\cdot\nabla_{\mathscr{A}}u-\Delta_{\mathscr{A}}u+\nabla_{\mathscr{A}}p-\theta \nabla_{\mathscr{A}}y_3=0& \quad\text{in}\quad\Om\\ &\nabla_{\mathscr{A}}\cdot u=0&\quad\text{in}\quad\Om\\ &\pa_t\theta-\pa_t\bar{\eta}(1+x_3)K\pa_3\theta+u\cdot\nabla_{\mathscr{A}}\theta-\Delta_{\mathscr{A}}\theta=0&\quad\text{in}\quad\Om\\ & (p I-\mathbb{D}_{\mathscr{A}}u)\mathscr{N}=\eta\mathscr{N}&\quad\text{on}\quad \Sigma\\ &\nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|=-\left|\mathscr{N}\right|&\quad\text{on}\quad \Sigma\\ &u=0,\quad\theta=0& \quad\text{on}\quad \Sigma_b\\ &u(x,0)=u_0, \quad \theta(x,0)=\theta_0&\quad \text{in}\quad \Om\\ \end{aligned} \right. \end{eqnarray} and \begin{eqnarray} \left\{ \begin{aligned} &\pa_t\eta+u_1\pa_1\eta+u_2\pa_2\eta=u_3&\quad\text{on}\quad \Sigma\\ &\eta(x^\prime,0)=\eta_0(x^\prime)&\quad\text{on}\quad \Sigma \end{aligned} \right. \end{eqnarray} Clearly, all the quantities in these two above systems are related to $\eta$. \subsection{Main theorem} The main result of this paper is the local well-posedness of the B\'enard convection. Before stating our result, we need to mention the issue of compatibility conditions for the initial data $(u_0,\theta_0,\eta_0)$. We will study for the regularity up to $N$ temporal derivatives for $N\ge2$ an integer. This requires us to use $u_0$, $\theta_0$ and $\eta_0$ to construct the initial data $\pa_t^ju(0)$, $\pa_t^j\theta(0)$ and $\pa_t^j\eta(0)$ for $j=1,\ldots,N$ and $\pa_t^jp(0)$ for $j=0,\ldots,N-1$. These data must then satisfy various conditions, which we describe in detail in Section 5.1, so we will not state them here. Now for stating our result, we need to explain the notation for spaces and norms. When we write $\|\pa_t^ju\|_{H^k}$, $\|\pa_t^j\theta\|_{H^k}$ and $\|\pa_t^jp\|_{H^k}$, we always mean that the space is $H^k(\Om)$, and when we write $\|\pa_t^j\eta\|_{H^s}$, we always mean that the space is $H^s(\Sigma)$, where $H^k(\Om)$ and $H^s(\Sigma)$ are usual Sobolev spaces for $k, s\ge0$. \begin{theorem}\label{thm:main} Let $N\ge2$ be an integer. Assume that $\eta_0+1\ge\delta>0$, and that the initial data $(u_0,\theta_0,\eta_0)$ satisfies \[ \mathscr{E}_0:=\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\|\eta_0\|_{H^{2N+1/2}}^2<\infty, \] as well as the $N$-th compatibility conditions \eqref{cond:compatibility N}. Then there exists a $0<T_0<1$ such that for any $0<T<T_0$, there exists a solution $(u,p,\theta,\eta)$ to \eqref{equ:NBC} on the interval $[0,T]$ that achieves the initial data. 
The solution obeys the estimate \begin{eqnarray} \label{est:main est} \begin{aligned} &\sum_{j=0}^N\left(\sup_{0\le t\le T}\|\pa_t^ju\|_{H^{2N-2j}}^2+\|\pa_t^ju\|_{L^2H^{2N-2j+1}}^2\right)+\|\pa_t^{N+1}u\|_{(\mathscr{X}_T)^\ast}\\ &\quad+\sum_{j=0}^{N-1}\left(\sup_{0\le t\le T}\|\pa_t^jp\|_{H^{2N-2j-1}}^2+\|\pa_t^jp\|_{L^2H^{2N-2j}}^2\right)\\ &\quad+\sum_{j=0}^N\left(\sup_{0\le t\le T}\|\pa_t^j\theta\|_{H^{2N-2j}}^2+\|\pa_t^j\theta\|_{L^2H^{2N-2j+1}}^2\right)+\|\pa_t^{N+1}\theta\|_{(\mathscr{H}^1_T)^\ast}\\ &\quad+\Bigg(\sup_{0\le t\le T}\|\eta\|_{H^{2N+1/2}(\Sigma)}^2+\sum_{j=1}^N\sup_{0\le t\le T}\|\pa_t^j\eta\|_{H^{2N-2j+3/2}}^2\\ &\quad+\sum_{j=2}^{N+1}\|\pa_t^j\eta\|_{L^2H^{2N-2j+5/2}}^2\Bigg)\\ &\le C(\Om_0,\delta)P(\mathscr{E}_0), \end{aligned} \end{eqnarray} where $C(\Om_0,\delta)>0$ depends on the initial domain $\Om_0$ and $\delta$, $P(\cdot)$ is a polynomial satisfying $P(0)=0$, and the temporal norm $L^2$ is computed on $[0,T]$. The solution is unique among functions that achieve the initial data and for which the left-hand side of \eqref{est:main est} is finite. Moreover, $\eta$ is such that the mapping $\Phi(\cdot,t)$ defined by \eqref{map:phi} is a $C^{2N-1}$ diffeomorphism for each $t\in [0,T]$. \end{theorem} \begin{remark} The space $\mathscr{X}_T$ is defined in section $2$ of \cite{GT1}. \end{remark} \begin{remark} Since the mapping $\Phi(\cdot,t)$ is a $C^{2N-1}$ diffeomorphism, we may change coordinates to produce solutions to \eqref{equ:BC}. \end{remark} \subsection{Notation and terminology} Now, we mention some definitions, notation and conventions that we will use throughout this paper. \begin{enumerate}[1.] \item Constants. The constant $C>0$ will denote a universal constant that only depend on the parameters of the problem, $N$ and $\Om$, but does not depend on the data, etc. They are allowed to change from line to line. We will write $C=C(z)$ to indicate that the constant $C$ depends on $z$. And we will write $a\lesssim b$ to mean that $a\le C b$ for a universal constant $C>0$.\\ \item Polynomials. We will write $P(\cdot)$ to denote polynomials in one variable and they may change from one inequality or equality to another.\\ \item Norms. We will write $H^k$ for $H^k(\Om)$ for $k\ge0$, and $H^s(\Sigma)$ with $s\in\mathbb{R}$ for usual Sobolev spaces. Typically, we will write $H^0=L^2$, With the exception to this is we will use $L^2([0,T];H^k)$ (or $L^2([0,T];H^s(\Sigma))$) to denote the space of temporal square--integrable functions with values in $H^k$ (or $H^s(\Sigma)$). Sometimes we will write $\|\cdot\|_k$ instead of $\|\cdot\|_{H^k(\Om)}$ or $\|\cdot\|_{H^k(\Sigma)}$. We assume that functions have natural spaces. For example, the functions $u$, $p$, $\theta$ and $\bar{\eta}$ live on $\Om$, while $\eta$ lives on $\Sigma$. So we may write $\|\cdot\|_{H^k}$ for the norms of $u$, $p$, $\theta$ and $\bar{\eta}$ in $\Om$, and $\|\cdot\|_{H^s}$ for norms of $\eta$ on $\Sigma$. \end{enumerate} \subsection{Plan of the paper} In section 2, we develop the machinery of time--dependent function spaces based on \cite{GT1}. In section 3, we make some elliptic estimates for the linear steady equations of \eqref{equ:linear BC}. In section 4, we will study the local existence theory of the following linear problem for $(u,p,\theta)$, where we think of $\eta$ (and hence $\mathscr{A}$, $\mathscr{N}$, etc.) 
is given: \begin{eqnarray} \label{equ:linear BC} \left\{ \begin{aligned} &\pa_t u-\Delta_{\mathscr{A}}u+\nabla_{\mathscr{A}}p-\theta \nabla_{\mathscr{A}}y_3=F^1& \quad\text{in}\quad\Om,\\ &\nabla_{\mathscr{A}}\cdot u=0&\quad\text{in}\quad\Om,\\ &\pa_t\theta-\Delta_{\mathscr{A}}\theta=F^3&\quad\text{in}\quad\Om,\\ & (p I-\mathbb{D}_{\mathscr{A}}u)\mathscr{N}=F^4&\quad\text{on}\quad \Sigma,\\ &\nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|=F^5&\quad\text{on}\quad \Sigma,\\ &u=0,\quad\theta=0& \quad\text{on}\quad \Sigma_b, \end{aligned} \right. \end{eqnarray} subject to the initial condition $u(0)=u_0$ and $\theta(0)=\theta_0$, with the time-dependent Galerkin method. In section 5, we construct the initial data and do some estimates for the forcing terms. In section 6, we construct solutions to \eqref{equ:NBC} using iteration and contraction, and complete the proof of Theorem \ref{thm:main}. \section{Functional setting} \subsection{Function spaces} Throughout this paper, we utilize the functional spaces defined by Guo and Tice in section 2 of \cite{GT1}. The only modification is the definition of space $\mathscr{H}^1(t)$. For the vector-valued space $\mathscr{H}^1(t)$, its definition is the same as \cite{GT1}. The following is the definition for the scalar-valued space $\mathscr{H}^1(t)$. \[ \mathscr{H}^1(t):=\left\{\theta| \|\theta\|_{\mathscr{H}^1}<\infty, \theta|_{\Sigma_b}=0\right\} \] with the norm $\|\theta\|_{\mathscr{H}^1}:=\left(\theta,\theta\right)_{\mathscr{H}^1}^{1/2}$, where the inner product $\left(\cdot,\cdot\right)_{\mathscr{H}^1}$ is defined as \[ \left(\theta,\phi\right)_{\mathscr{H}^1}:=\int_{\Om}\left(\nabla_{\mathscr{A}(t)}\theta\cdot\nabla_{\mathscr{A}(t)}\phi\right)J(t). \] The following lemma implies that this space $\mathscr{H}^1$ is equivalent to the usual Sobolev space $H^1$. \begin{lemma}\label{lem:theta H0 H1} Suppose that $0<\varepsilon_0<1$ and $\|\eta-\eta_0\|_{H^{5/2}(\Sigma)}<\varepsilon_0$. Then it holds that \begin{equation}\label{est:theta H0} \|\theta\|_{H^0}^2\lesssim\int_\Om J|\theta|^2\lesssim\left(1+\|\eta_0\|_{H^{5/2}(\Sigma)}\right)\|\theta\|_{H^0}^2, \end{equation} \begin{equation}\label{est:theta H1} \f{1}{\left(1+\|\eta_0\|_{H^{5/2}(\Sigma)}\right)^3}\|\theta\|_{H^1(\Om)}^2\lesssim\int_\Om J|\nabla_{\mathscr{A}}\theta|^2\lesssim\left(1+\|\eta_0\|_{H^{5/2}(\Sigma)}\right)^3\|\theta\|_{H^1(\Om)}^2. \end{equation} \end{lemma} \begin{proof} From the Poinc\'are inequality, we know that $\|\theta\|_{H^1}$ is equivalent to $\|\nabla\theta\|_{H^0}$. So in the following, we will use $\|\theta\|_{H^1}$ instead of $\|\nabla\theta\|_{H^0}$. From the assumption and the Sobolev inequalities, we may derive that \[ \delta\lesssim\|J\|_{L^\infty}\lesssim1+\|\bar{\eta}\|_{L^\infty}+\|\nabla\bar{\eta}\|_{L^\infty}\lesssim 1+\|\eta\|_{H^{5/2}}\lesssim 1+\|\eta_0\|_{H^{5/2}}, \] and \begin{align*} \|\mathscr{A}\|_{L^\infty}&\lesssim\max\{1, \|AK\|_{L^\infty}^2, \|BK\|_{L^\infty}^2, \|K\|_{L^\infty}^2\}\\ &\lesssim1+(1+\|\nabla\bar{\eta}\|_{L^\infty}^2)\|K\|_{L^\infty}^2\lesssim \left(1+\|\eta_0\|_{H^{5/2}}\right)^2. 
\end{align*} Thus \eqref{est:theta H0} is clearly derived from the estimate of $\|J\|_{L^\infty}$ and we have that \begin{align*} \int_\Om J|\nabla_{\mathscr{A}}\theta|^2&\lesssim(1+\|\eta_0\|_{H^{5/2}})\int_\Om |\nabla_{\mathscr{A}}\theta|^2\\ &\lesssim(1+\|\eta_0\|_{H^{5/2}})\max\{1, \|AK\|_{L^\infty}^2, \|BK\|_{L^\infty}^2, \|K\|_{L^\infty}^2\}\|\theta\|_{H^1}^2\\ &\lesssim\left(1+\|\eta_0\|_{H^{5/2}}\right)^3\|\theta\|_{H^1}^2. \end{align*} Now we have proved the second inequality of \eqref{est:theta H1}. To prove the first inequality of \eqref{est:theta H1}, we rewrite the $\|\theta\|_{\mathscr{H}^1}$ as \[ \int_\Om J|\nabla_{\mathscr{A}}\theta|^2=\int_\Om J|\nabla_{\mathscr{A}_0}\theta|^2+\int_\Om J(\nabla_{\mathscr{A}}\theta+\nabla_{\mathscr{A}_0}\theta)\cdot(\nabla_{\mathscr{A}}\theta-\nabla_{\mathscr{A}_0}\theta), \] Here $\mathscr{A}_0$ is in terms of $\eta_0$. By the estimates of $\|J\|_{L^\infty}$, we derive that \begin{align*} \int_\Om J|\nabla_{\mathscr{A}_0}\theta|^2&\gtrsim \f{1}{1+\|\eta_0\|_{H^{5/2}}}\int_\Om J_0|\nabla_{\mathscr{A}_0}\theta|^2\\ &=\f{1}{1+\|\eta_0\|_{H^{5/2}}}\int_{\Om_0} |\nabla(\theta\circ\Phi(0))|^2\\ &\gtrsim \f{1}{(1+\|\eta_0\|_{H^{5/2}})^3}\|\theta\|_{H^1}, \end{align*} where in the last inequality, we have used the following Lemma \ref{lem:transport}, since $\Phi(0)$ is a diffeomorphism. Here $J_0$ is in terms of $\eta_0$. Then, using the estimates of $\|\mathscr{A}\|_{L^\infty}$ and $\|J\|_{L^\infty}$, we have that \begin{align*} \left|\int_\Om J(\nabla_{\mathscr{A}}\theta+\nabla_{\mathscr{A}_0}\theta)\cdot(\nabla_{\mathscr{A}}\theta-\nabla_{\mathscr{A}_0}\theta)\right|&\lesssim \|J\|_{L^\infty}\|\mathscr{A}+\mathscr{A}_0\|_{L^\infty}\|\mathscr{A}-\mathscr{A}_0\|_{L^\infty}\|\theta\|_{H^1}\\ &\lesssim\varepsilon_0\left(1+\|\eta_0\|_{H^{5/2}}\right)^3\|\theta\|_{H^1}. \end{align*} Then taking $\varepsilon_0$ sufficiently small, we may derive that \begin{align*} \int_\Om J|\nabla_{\mathscr{A}}\theta|^2&\gtrsim\int_\Om J|\nabla_{\mathscr{A}_0}\theta|^2-\left|\int_\Om J(\nabla_{\mathscr{A}}\theta+\nabla_{\mathscr{A}_0}\theta)\cdot(\nabla_{\mathscr{A}}\theta-\nabla_{\mathscr{A}_0}\theta)\right|\\ &\gtrsim \f{1}{(1+\|\eta_0\|_{H^{5/2}})^3}\|\theta\|_{H^1}. \end{align*} This is the first inequality of \eqref{est:theta H1}. \end{proof} We define an operator $\mathcal{K}_t$ by $\mathcal{K}_t\theta=K(t)\theta$, where $K(t):=K$ is defined as \eqref{equ:components}. Clearly, $\mathcal{K}_t$ is invertible and $\mathcal{K}_t^{-1}\Theta=K(t)^{-1}\Theta=J(t)\Theta$, and $J(t):=J=1/K$. \begin{proposition}\label{prop:k} For each $t\in[0, T]$, $\mathcal{K}_t$ is a bounded linear isomorphism: from $H^k(\Om)$ to $H^k(\Om)$ for $k=0, 1, 2$; from $L^2(\Om)$ to $\mathscr{H}^0(t)$; and from ${}_0H^1(\Om)$ to $\mathscr{H}^1(t)$. In each case, the norms of the operators $\mathcal{K}_t$, $\mathcal{K}_t^{-1}$ are bounded by a polynomial $P(\|\eta(t)\|_{H^{\f72}})$. The mapping $\mathcal{K}$ defined by $\mathcal{K}\theta(t):=\mathcal{K}_t\theta(t)$ is a bounded linear isomorphism: from $L^2([0, T]; H^k(\Om))$ to $L^2([0, T]; H^k(\Om))$ for $k=0, 1, 2$; from $L^2([0, T]; H^0(\Om))$ to $\mathscr{H}^0_T$ and from ${_0}{H}{^1}(\Om)$ to $\mathscr{H}^1_{T}$. In each case, the operators $\mathcal{K}$ and $\mathcal{K}^{-1}$ are bounded by the polynomial $P(\sup_{0\le t\le T}\|\eta(t)\|_{H^\f72})$. 
\end{proposition} \begin{proof} It is easy to see that for each $t\in[0,T]$, \begin{equation} \|\mathcal{K}_t\theta\|_{H^0}\lesssim \|K(t)\|_{C^0}\|\theta\|_{H^0}\lesssim P(\|\eta(t)\|_{H^{\f72}})\|\theta\|_{H^0}, \end{equation} \begin{equation} \|\mathcal{K}_t\theta\|_{H^1}\lesssim \|K(t)\|_{C^1}\|\theta\|_{H^1}\lesssim P(\|\eta(t)\|_{H^{\f72}})\|\theta\|_{H^1}, \end{equation} \begin{equation} \|\mathcal{K}_t\theta\|_{H^2}\lesssim \|K(t)\|_{C^1}\|\theta\|_{H^2}+\|K(t)\|_{H^2}\|\theta\|_{C^0}\lesssim P(\|\eta(t)\|_{H^{\f72}})\|\theta\|_{H^2}. \end{equation} These inequalities imply that $\mathcal{K}_t$ is a bounded operator from $H^k$ to $H^k$, for $k=0,1,2$. Since $\mathcal{K}_t$ is invertible, we also have the estimate $\|\mathcal{K}_t^{-1}\Theta\|_{H^k}\lesssim P(\|\eta(t)\|_{H^{\f72}})\|\Theta\|_{H^k}$. Thus, $\mathcal{K}_t$ is an isomorphism of $H^k$ to $H^k$, for $k=0,1,2$. With this fact in hand, Lemma \ref{lem:theta H0 H1} implies that $\mathcal{K}_t$ is an isomorphism of $L^2(\Om)$ to $\mathscr{H}^0(t)$ and of ${}_0H^1(\Om)$ to $\mathscr{H}^1(t)$. The mapping properties of the operator $\mathcal{K}$ on space-time functions may be established in a similar manner. \end{proof} \subsection{Pressure as a Lagrange multiplier} The introduction of the pressure as a Lagrange multiplier has been studied by Guo and Tice in section 2 of \cite{GT1}, with the necessary modifications given by L. Wu in section 2.2 of \cite{LW}, so we omit the details here. \section{Elliptic estimates} \subsection{Preliminary} Before studying the linear problem \eqref{equ:linear BC}, we need some elliptic estimates. In order to study the elliptic problem, we may transform the equations on the domain $\Om$ into constant coefficient equations on the domain $\Om^\prime=\Phi(\Om)$, where $\Phi$ is defined by \eqref{map:phi}. The following lemma shows that composition with the mapping $\Phi$ is an isomorphism between $H^k(\Om^\prime)$ and $H^k(\Om)$. Here, the Sobolev spaces are either vector-valued or scalar-valued. \begin{lemma}\label{lem:transport} Let $\Psi: \Om\to\Om^\prime$ be a $C^1$ diffeomorphism satisfying $\Psi\in H^{k+1}_{loc}$, $\nabla\Psi-I\in H^k(\Om)$ and the Jacobian $J=\det(\nabla\Psi)>\delta>0$ almost everywhere in $\Om$ for an integer $k\ge3$. If $v\in H^m(\Om^\prime)$, then $v\circ\Psi\in H^m(\Om)$ for $m=0,1, \ldots, k+1$, and \[ \|v\circ\Psi\|_{H^m(\Om)}\lesssim C\left(\|\nabla\Psi-I\|_{H^k(\Om)}\right)\|v\|_{H^m(\Om^\prime)}, \] where $C(\|\nabla\Psi-I\|_{H^k(\Om)})$ is a constant depending on $\|\nabla\Psi-I\|_{H^k(\Om)}$. Similarly, for $u\in H^m(\Om)$, we have $u\circ\Psi^{-1}\in H^m(\Om^\prime)$ for $m=0,1, \ldots, k+1$, and \[ \|u\circ\Psi^{-1}\|_{H^m(\Om^\prime)}\lesssim C\left(\|\nabla\Psi-I\|_{H^k(\Om)}\right)\|u\|_{H^m(\Om)}. \] Let $\Sigma^\prime=\Psi(\Sigma)$ be the top boundary of $\Om^\prime$. If $v\in H^{m-\f12}(\Sigma^\prime)$ for $m=1, \ldots, k-1$, then $v\circ\Psi\in H^{m-\f12}(\Sigma)$, and \[ \|v\circ\Psi\|_{H^{m-\f12}(\Sigma)}\lesssim C\left(\|\nabla\Psi-I\|_{H^k(\Om)}\right)\|v\|_{H^{m-\f12}(\Sigma^\prime)}. \] If $u\in H^{m-\f12}(\Sigma)$ for $m=1, \ldots, k-1$, then $u\circ\Psi^{-1}\in H^{m-\f12}(\Sigma^\prime)$ and \[ \|u\circ\Psi^{-1}\|_{H^{m-\f12}(\Sigma^\prime)}\lesssim C\left(\|\nabla\Psi-I\|_{H^k(\Om)}\right)\|u\|_{H^{m-\f12}(\Sigma)}. \] \end{lemma} \begin{proof} The proof of this lemma is the same as that of Lemma $3.1$ in \cite{GT1}, so we omit the details here.
\end{proof} \subsection{The $\mathscr{A}$-stationary convection problem} In this section, we consider the stationary equations \begin{eqnarray}\label{equ:SBC} \left\{ \begin{aligned} \mathop{\rm div}\nolimits_{\mathscr{A}}S_{\mathscr{A}}(p, u)-\theta \nabla_{\mathscr{A}}y_3&=F^1 \quad\text{in}\quad\Om\\ \mathop{\rm div}\nolimits_{\mathscr{A}}u&=F^2\quad\text{in}\quad\Om\\ -\Delta_{\mathscr{A}}\theta&=F^3\quad\text{in}\quad\Om\\ S_{\mathscr{A}}(p, u)\mathscr{N}&=F^4\quad\text{on}\quad \Sigma\\ \nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|&=F^5\quad\text{on}\quad \Sigma\\ u=0,\quad\theta&=0\quad\text{on}\quad \Sigma_b\\ \end{aligned} \right. \end{eqnarray} Before discussing the regularity of strong solutions to \eqref{equ:SBC}, we need to define weak solutions of equation \eqref{equ:SBC}. Suppose that $F^1\in (\mathscr{H}^1)^\ast$, $F^2\in H^0$, $F^3\in (\mathscr{H}^1)^\ast$, $F^4\in H^{-\f12}(\Sigma)$ and $F^5\in H^{-\f12}(\Sigma)$. Then $(u,p,\theta)$ is called a weak solution of equation \eqref{equ:SBC} if it satisfies $\nabla_{\mathscr{A}}\cdot u=F^2$, \begin{equation}\label{equ:weak theta} \left(\nabla_{\mathscr{A}}\theta,\nabla_{\mathscr{A}}\phi\right)_{\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|,\phi\right)_{H^0(\Sigma)}=\left<F^3, \phi\right>_{(\mathscr{H}^1)^\ast}+\left<F^5,\phi\right>_{H^{-\f12}(\Sigma)}, \end{equation} and \begin{equation}\label{equ:weak u} \f12\left(\mathbb{D}_\mathscr{A}u,\mathbb{D}_\mathscr{A}\psi\right)_{\mathscr{H}^0}-\left(p, \mathop{\rm div}\nolimits_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}-\left(\theta \nabla_{\mathscr{A}}y_3, \psi\right)_{\mathscr{H}^0}=\left<F^1,\psi\right>_{(\mathscr{H}^1)^\ast}-\left<F^4, \psi\right>_{H^{-\f12}(\Sigma)}, \end{equation} for any $\phi, \psi\in \mathscr{H}^1$. \begin{lemma} Suppose $F^1\in (\mathscr{H}^1)^\ast$, $F^2\in \mathscr{H}^0$, $F^3\in (\mathscr{H}^1)^\ast$, $F^4\in H^{-\f12}(\Sigma)$ and $F^5\in H^{-\f12}(\Sigma)$. Then there exists a unique weak solution $(u, p, \theta) \in \mathscr{H}^1\times \mathscr{H}^0 \times \mathscr{H}^1$ to \eqref{equ:SBC}. \end{lemma} \begin{proof} For the Hilbert space $\mathscr{H}^1$ with the inner product $\left(\theta,\phi\right)=\left(\nabla_{\mathscr{A}}\theta, \nabla_{\mathscr{A}}\phi\right)_{\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|,\phi\right)_{H^0(\Sigma)}$, we can define a linear functional $\ell\in (\mathscr{H}^1)^\ast$ by \[ \ell(\phi)=\left<F^3, \phi\right>_{(\mathscr{H}^1)^\ast}+\left<F^5,\phi\right>_{H^{-\f12}(\Sigma)}, \] for all $\phi\in \mathscr{H}^1$. Then, by the Riesz representation theorem, there exists a unique $\theta\in \mathscr{H}^1$ such that \[ \left(\nabla_{\mathscr{A}}\theta, \nabla_{\mathscr{A}}\phi\right)_{\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|,\phi\right)_{H^0(\Sigma)}=\left<F^3, \phi\right>_{(\mathscr{H}^1)^\ast}+\left<F^5,\phi\right>_{H^{-\f12}(\Sigma)}, \] for all $\phi\in \mathscr{H}^1$. By Lemma 2.6 in \cite{GT1}, there exists a $\bar{u}\in \mathscr{H}^1$ such that $\mathop{\rm div}\nolimits_{\mathscr{A}}\bar{u}=F^2$. Then, we may restrict our test function to $\psi\in \mathscr{X}$.
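Restricting to $\mathscr{X}$ is convenient because, for $\psi\in\mathscr{X}$, $\mathop{\rm div}\nolimits_{\mathscr{A}}\psi=0$, so the pressure term drops out of \eqref{equ:weak u}. For the reader's convenience, we also sketch where \eqref{equ:weak u} comes from: testing the first equation of \eqref{equ:SBC} against $\psi J$ and using the $\mathscr{A}$-integration by parts identity of \cite{GT1}, the boundary condition $S_{\mathscr{A}}(p, u)\mathscr{N}=F^4$ on $\Sigma$ and $\psi=0$ on $\Sigma_b$, we have, at least formally,
\begin{align*}
\int_{\Om}\mathop{\rm div}\nolimits_{\mathscr{A}}S_{\mathscr{A}}(p, u)\cdot\psi\, J
&=-\int_{\Om}S_{\mathscr{A}}(p, u):\nabla_{\mathscr{A}}\psi\, J+\int_{\Sigma}\left(S_{\mathscr{A}}(p, u)\mathscr{N}\right)\cdot\psi\\
&=-\left(p,\mathop{\rm div}\nolimits_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}+\f12\left(\mathbb{D}_{\mathscr{A}}u,\mathbb{D}_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}+\left<F^4,\psi\right>_{H^{-\f12}(\Sigma)},
\end{align*}
which, together with the buoyancy term, yields \eqref{equ:weak u}.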
A straightforward application of the Riesz representation theorem to the Hilbert space $\mathscr{X}$ with the inner product defined as $\left(u,\psi\right)=\left(\mathbb{D}_{\mathscr{A}}u, \mathbb{D}_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}$ provides a unique $w\in \mathscr{X}$ such that \begin{equation} \label{eq:velocity} \f12\left(\mathbb{D}_{\mathscr{A}}w, \mathbb{D}_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}=-\f12\left(\mathbb{D}_{\mathscr{A}}\bar{u}, \mathbb{D}_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}+\left(\theta \nabla_{\mathscr{A}}y_3, \psi\right)_{\mathscr{H}^0}+\left<F^1, \psi\right>_{(\mathscr{H}^1)^\ast}-\left<F^4, \psi\right>_{H^{-\f12}(\Sigma)} \end{equation} for all $\psi\in \mathscr{X}$. Then we can find $u$ satisfying \begin{equation}\label{equ:pressureless weak u} \f12\left(\mathbb{D}_\mathscr{A}u,\mathbb{D}_\mathscr{A}\psi\right)_{\mathscr{H}^0}-\left(\theta \nabla_{\mathscr{A}}y_3, \psi\right)_{\mathscr{H}^0}=\left<F^1,\psi\right>_{(\mathscr{H}^1)^\ast}-\left<F^4, \psi\right>_{H^{-\f12}(\Sigma)}, \end{equation} by setting $u=w+\bar{u}\in\mathscr{H}^1$, with $\mathop{\rm div}\nolimits_{\mathscr{A}}u=F^2$. It is easy to see that $u$ is unique. Indeed, suppose that another $\tilde{u}$ satisfies \eqref{equ:pressureless weak u}. Then we have $\mathop{\rm div}\nolimits_{\mathscr{A}}(u-\tilde{u})=0$ and $\left(\mathbb{D}_\mathscr{A}(u-\tilde{u}),\mathbb{D}_\mathscr{A}\psi\right)_{\mathscr{H}^0}=0$ for any $\psi\in \mathscr{X}$. Taking $\psi=u-\tilde{u}$ and using Korn's inequality, we find that $\|u-\tilde{u}\|_{H^1}=0$, which implies $u=\tilde{u}$. In order to introduce the pressure $p$, we can define $\lam\in (\mathscr{H}^1)^\ast$ as the difference of the left- and right-hand sides of \eqref{eq:velocity}. Then $\lam(\psi)=0$ for all $\psi\in \mathscr{X}$. According to Proposition $2.12$ in \cite{LW}, there exists a unique $p\in \mathscr{H}^0$ satisfying $\left(p, \mathop{\rm div}\nolimits_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}=\lam(\psi)$ for all $\psi\in \mathscr{H}^1$. \end{proof} In the next result, we establish the existence of strong solutions to \eqref{equ:SBC} and present some elliptic estimates. \begin{lemma}\label{lem:S lower regularity} Suppose that $\eta\in H^{k+\f12}(\Sigma)$ for $k\ge3$ is such that the mapping $\Phi$ defined in \eqref{map:phi} is a $C^1$ diffeomorphism of $\Om$ to $\Om^\prime=\Phi(\Om)$. If $F^1\in H^0$, $F^2\in H^1$, $F^3\in H^0$, $F^4\in H^{\f12}$ and $F^5\in H^{\f12}$, then the problem \eqref{equ:SBC} admits a unique strong solution $(u, p, \theta)\in H^2(\Om)\times H^1(\Om)\times H^2(\Om)$, i.e. $(u, p, \theta)$ satisfies \eqref{equ:SBC} a.e. in $\Om$ and on $\Sigma$, $\Sigma_b$. Moreover, for $r=2, \ldots, k-1$, we have the estimate \begin{equation}\label{ineq:lower elliptic} \begin{aligned} \|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r} &\lesssim C(\eta)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}\\ &\quad+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big), \end{aligned} \end{equation} whenever the right-hand side is finite, where $C(\eta)$ is a constant depending on $\|\eta\|_{H^{k+\f12}(\Sigma)}$. \end{lemma} \begin{proof} First, we consider the problem \begin{eqnarray*} \left\{ \begin{aligned} -\Delta_{\mathscr{A}}\theta&=F^3 \quad \text{in}\quad \Om,\\ \nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|&=F^5 \quad \text{on}\quad \Sigma,\\ \theta&=0 \quad \text{on}\quad \Sigma_b. \end{aligned} \right.
\end{eqnarray*} Since the coefficients of this equation are not constant, we transform this problem to one on $\Om^\prime=\Phi(\Om)$ by introducing the unknown $\Theta$ according to $\theta=\Theta\circ\Phi$. Then $\Theta$ solves the constant coefficient problem on $\Om^\prime=\{-1\le y_3\le \eta(y_1, y_2)\}$ with upper boundary $\Sigma^\prime=\{y_3=\eta\}$: \begin{eqnarray}\label{equ:theta} \left\{ \begin{aligned} -\Delta \Theta&=F^3\circ\Phi^{-1}=G^3 &\quad\text{in}\quad\Om^\prime,\\ \nabla \Theta\cdot\mathscr{N}+\Theta\left|\mathscr{N}\right|&= F^5\circ\Phi^{-1}=G^5&\quad\text{on}\quad\Sigma^\prime,\\ \Theta&=0 &\quad\text{on}\quad\Sigma^\prime_b. \end{aligned} \right. \end{eqnarray} Note that, according to Lemma \ref{lem:transport}, $G^3\in H^0(\Om^\prime)$ and $G^5\in H^{1/2}(\Sigma^\prime)$. Then we may argue as in Lemma 2.8 of \cite{Beale1} and use Theorem 10.5 in \cite{ADN} to obtain that there exists a unique $\Theta\in H^2(\Om^\prime)$ solving problem \eqref{equ:theta}, with \[ \|\Theta\|_{H^2(\Om^\prime)}\lesssim C(\eta)(\|G^3\|_{H^0(\Om^\prime)}+\|G^5\|_{H^{\f12}(\Sigma^\prime)}), \] for $C(\eta)$ a constant depending on $\|\eta\|_{H^{k+\f12}}$. For the $\mathscr{A}$-Stokes equations, we introduce the unknowns $v, q$ by $u=v\circ\Phi$ and $q=p\circ\Phi$. For the usual Stokes problem \begin{eqnarray}\label{equ:Stokes} \left\{ \begin{aligned} \mathop{\rm div}\nolimits S(q, v)-\Theta e_3&=F^1\circ\Phi^{-1}=G^1& \quad\text{in}\quad\Om^\prime\\ \nabla\cdot v&=F^2\circ\Phi^{-1}=G^2&\quad\text{in}\quad\Om^\prime\\ S(q, v)\mathscr{N}&=F^4\circ\Phi^{-1}=G^4&\quad\text{on}\quad \Sigma^\prime\\ v&=0&\quad\text{on}\quad \Sigma^\prime_b, \end{aligned} \right. \end{eqnarray} we use the same argument as in the proof of Lemma 3.6 in \cite{GT1} with $G^1+\Theta e_3$ instead of $G^1$. Then there exists a unique pair $(v, q)\in H^2(\Om^\prime)\times H^1(\Om^\prime)$ solving problem \eqref{equ:Stokes}, with \[ \|v\|_{H^2(\Om^\prime)}+\|q\|_{H^1(\Om^\prime)}\lesssim C(\eta)\left(\|G^1\|_{H^0(\Om^\prime)}+\|G^2\|_{H^1(\Om^\prime)}+\|G^4\|_{H^{\f12}(\Sigma^\prime)}+\|\Theta\|_{H^0(\Om^\prime)}\right), \] for $C(\eta)$ a constant depending on $\|\eta\|_{H^{k+\f12}}$. So we have that \begin{equation} \begin{aligned} \|v\|_{H^2(\Om^\prime)}+\|q\|_{H^1(\Om^\prime)}+\|\Theta\|_{H^2(\Om^\prime)}&\lesssim C(\eta)\Big(\|G^1\|_{H^0(\Om^\prime)}+\|G^2\|_{H^1(\Om^\prime)}\\ &\quad+\|G^3\|_{H^0(\Om^\prime)}+\|G^4\|_{H^{\f12}(\Sigma^\prime)}+\|G^5\|_{H^{\f12}(\Sigma^\prime)}\Big), \end{aligned} \end{equation} for $C(\eta)$ a constant depending on $\|\eta\|_{H^{k+\f12}}$. Then we may argue as in Lemma 3.6 of \cite{GT1} to derive that, for $r=2, \ldots, k-1$, \begin{equation} \begin{aligned} &\|v\|_{H^r(\Om^\prime)}+\|q\|_{H^{r-1}(\Om^\prime)}+\|\Theta\|_{H^r(\Om^\prime)}\\ &\lesssim C(\eta)\Big(\|G^1\|_{H^{r-2}(\Om^\prime)}+\|G^2\|_{H^{r-1}(\Om^\prime)}+\|G^3\|_{H^{r-2}(\Om^\prime)}\\ &\quad+\|G^4\|_{H^{r-\f32}(\Sigma^\prime)}+\|G^5\|_{H^{r-\f32}(\Sigma^\prime)}\Big), \end{aligned} \end{equation} for $C(\eta)$ a constant depending on $\|\eta\|_{H^{k+\f12}}$. Now, we transform back to $\Om$ with $u=v\circ\Phi$, $p=q\circ\Phi$ and $\theta=\Theta\circ\Phi$. It is readily verified that $(u, p, \theta)$ is a strong solution of \eqref{equ:SBC}.
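The estimates on $\Om^\prime$ are then transferred back to $\Om$ through the map $\Phi$; a brief sketch for the velocity (the pressure and the temperature are handled identically), under the standard assumption from \cite{GT1} that $\|\nabla\Phi-I\|_{H^{k}(\Om)}$ is controlled by a constant depending on $\|\eta\|_{H^{k+\f12}(\Sigma)}$:
\[
\|u\|_{H^r(\Om)}=\|v\circ\Phi\|_{H^r(\Om)}\lesssim C\left(\|\nabla\Phi-I\|_{H^k(\Om)}\right)\|v\|_{H^r(\Om^\prime)}\lesssim C(\eta)\|v\|_{H^r(\Om^\prime)}.
\]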
According to Lemma \ref{lem:transport}, \begin{align*} \|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r} &\lesssim C(\eta)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}\\ &\quad+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big), \end{align*} whenever the right-hand side is finite, where $C(\eta)$ is a constant depending on $\|\eta\|_{H^{k+\f12}(\Sigma)}$. This is the desired estimate \eqref{ineq:lower elliptic}. \end{proof} In the next lemma, we verify that the constant in \eqref{ineq:lower elliptic} can be chosen to depend only on the initial free surface. \begin{lemma}\label{lem:initial lower regularity} Let $k\ge3$ be an integer and suppose that $\eta\in H^{k+\f12}(\Sigma)$ and $\eta_0\in H^{k+\f12}(\Sigma)$. Then there exists a positive number $\varepsilon_0<1$ such that if $\|\eta-\eta_0\|_{H^{k-\f32}}\le \varepsilon_0$, the solution to \eqref{equ:SBC} satisfies \begin{equation}\label{ineq:initial lower elliptic} \begin{aligned} \|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r} &\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}\\ &\quad+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big), \end{aligned} \end{equation} for $r=2, \ldots, k-1$, whenever the right hand side is finite, where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}}$. \end{lemma} \begin{proof} Here, we use the same idea as in Lemma 2.17 of \cite{LW}. We rewrite the equation \eqref{equ:SBC} with its coefficients determined by $\eta_0$, i.e.\ we view it as a perturbation of \eqref{equ:SBC} in terms of the initial data: \begin{eqnarray} \left\{ \begin{aligned} \mathop{\rm div}\nolimits_{\mathscr{A}_0}S_{\mathscr{A}_0}(p, u)-\theta \nabla_{\mathscr{A}_0}y_{3,0}&=F^1+F^{1,0}& \quad\text{in}\quad\Om\\ \nabla_{\mathscr{A}_0}\cdot u&=F^2+F^{2,0}&\quad\text{in}\quad\Om\\ -\Delta_{\mathscr{A}_0}\theta&=F^3+F^{3,0}&\quad\text{in}\quad\Om\\ S_{\mathscr{A}_0}(p, u)\mathscr{N}_0&=F^4+F^{4,0}&\quad\text{on}\quad \Sigma\\ \nabla_{\mathscr{A}_0}\theta\cdot\mathscr{N}_0+\theta\left|\mathscr{N}_0\right|&=F^5+F^{5,0}&\quad\text{on}\quad \Sigma\\ u=0,\quad\theta&=0&\quad\text{on}\quad \Sigma_b\\ \end{aligned} \right. \end{eqnarray} where \begin{align*} F^{1,0}&=\nabla_{\mathscr{A}_0-\mathscr{A}}\cdot S_{\mathscr{A}}(p,u)+\nabla_{\mathscr{A}_0}\cdot S_{\mathscr{A}_0-\mathscr{A}}(p, u)+\theta\nabla_{\mathscr{A}_0-\mathscr{A}}y_3+\theta\nabla_{\mathscr{A}_0}(y_{3,0}-y_3),\\ F^{2,0}&=\mathop{\rm div}\nolimits_{\mathscr{A}_0-\mathscr{A}}u,\\ F^{3,0}&=\nabla_{\mathscr{A}_0-\mathscr{A}}\cdot\nabla_{\mathscr{A}}\theta+\nabla_{\mathscr{A}_0}\cdot\nabla_{\mathscr{A}_0-\mathscr{A}}\theta,\\ F^{4,0}&=S_{\mathscr{A}_0}(p, u)(\mathscr{N}_0-\mathscr{N})+S_{\mathscr{A}_0-\mathscr{A}}(p, u)\mathscr{N},\\ F^{5,0}&=\nabla_{\mathscr{A}_0}\theta\cdot(\mathscr{N}_0-\mathscr{N})+\nabla_{\mathscr{A}_0-\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left(\left|\mathscr{N}_0\right|-\left|\mathscr{N}\right|\right). \end{align*} Here, $\mathscr{A}_0$, $\mathscr{N}_0$ and $y_{3,0}$ denote the quantities $\mathscr{A}$, $\mathscr{N}$ and $y_{3}$ defined in terms of $\eta_0$. By the assumption, we know that $\eta-\eta_0\in H^{k+\f12}(\Sigma)$ and $\|\eta-\eta_0\|_{H^{k-\f32}(\Sigma)}^\ell\le \|\eta-\eta_0\|_{H^{k-\f32}(\Sigma)}<1$ for any positive integer $\ell$.
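The estimates for the $F^{i,0}$ displayed below all follow the same pattern; as a representative sketch, assuming (as in \cite{GT1, LW}) the bound $\|\mathscr{A}-\mathscr{A}_0\|_{H^{k-1}(\Om)}\lesssim\left(1+\|\eta_0\|_{H^{k+\f12}}\right)^2\|\eta-\eta_0\|_{H^{k-\f32}}$ and the product estimates of Lemmas A.1--A.2 in \cite{GT1}, we have
\[
\|F^{2,0}\|_{H^{r-1}}=\|\mathop{\rm div}\nolimits_{\mathscr{A}_0-\mathscr{A}}u\|_{H^{r-1}}\lesssim\|\mathscr{A}-\mathscr{A}_0\|_{H^{k-1}}\|u\|_{H^{r}}\lesssim\left(1+\|\eta_0\|_{H^{k+\f12}}\right)^2\|\eta-\eta_0\|_{H^{k-\f32}}\|u\|_{H^r}.
\]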
By a straightforward computation, we may derive that \begin{align*} &\|F^{1,0}\|_{H^{r-2}}\le C\left(1+\|\eta_0\|_{H^{k+\f12}}\right)^4\|\eta-\eta_0\|_{H^{k-\f32}}\left(\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^{r-2}}\right),\\ &\|F^{2,0}\|_{H^{r-1}}\le C\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^2\|\eta-\eta_0\|_{H^{k-\f32}}\|u\|_{H^r},\\ &\|F^{3,0}\|_{H^{r-2}}\le C\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^4\|\eta-\eta_0\|_{H^{k-\f32}}\|\theta\|_{H^r},\\ &\|F^{4,0}\|_{H^{r-\f32}(\Sigma)}\le C\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^2\|\eta-\eta_0\|_{H^{k-\f32}}\left(\|u\|_{H^r}+\|p\|_{H^{r-1}}\right),\\ &\|F^{5,0}\|_{H^{r-\f32}(\Sigma)}\le C\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^2\|\eta-\eta_0\|_{H^{k-\f32}}\|\theta\|_{H^r}, \end{align*} for $r=2, \ldots, k-1$. Based on Lemma \ref{lem:S lower regularity}, we have the estimate \begin{align*} &\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\ &\lesssim C(\eta_0)\Big(\|F^1+F^{1,0}\|_{H^{r-2}}+\|F^2+F^{2,0}\|_{H^{r-1}}+\|F^3+F^{3,0}\|_{H^{r-2}}\\ &\quad+\|F^4+F^{4,0}\|_{H^{r-\f32}(\Sigma)}+\|F^5+F^{5,0}\|_{H^{r-\f32}(\Sigma)}\Big), \end{align*} where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}}$. Combining the above estimates, we have \begin{equation} \begin{aligned} &\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\ &\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big)\\ &\quad+C(\eta_0)\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^4\|\eta-\eta_0\|_{H^{k-\f32}}\left(\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\right), \end{aligned} \end{equation} for $r=2, \ldots, k-1$. Then, if $\|\eta-\eta_0\|_{H^{k-\f32}}$ is chosen small enough that the second term on the right-hand side of the above inequality is at most $\f12(\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r})$, it can be absorbed into the left-hand side, and we have that \begin{align*} &\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\ &\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big), \end{align*} for $r=2, \ldots, k-1$. \end{proof} Notice that the estimate in \eqref{ineq:initial lower elliptic} only goes up to order $k-1$, which does not meet our requirement. In the next result, we gain two more orders with a bootstrap argument, following the idea of \cite{LW}. \begin{proposition}\label{prop:high regulatrity} Let $k\ge3$ be an integer. Suppose that $\eta\in H^{k+\f12}(\Sigma)$ and $\eta_0\in H^{k+\f12}(\Sigma)$ satisfy $\|\eta-\eta_0\|_{H^{k+\f12}(\Sigma)}\le\varepsilon_0$. Then the solution to \eqref{equ:SBC} satisfies \begin{equation} \begin{aligned} &\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\ &\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big), \end{aligned} \end{equation} for $r=2, \ldots, k+1$, whenever the right hand side is finite, where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}(\Sigma)}$. \end{proposition} \begin{proof} Here, we only consider the cases $r=k$ and $r=k+1$, since the conclusion has already been proved for $r\le k-1$. For $m\in\mathbb{N}$, we define $\eta^m$ by throwing away high frequencies: \begin{equation*} {\hat{\eta}}^m(n)=\left\{ \begin{aligned} &\hat{\eta}(n),\quad &\text{for}\quad |n|\le m-1,\\ &0, \quad &\text{for}\quad |n|\ge m. \end{aligned} \right.
\end{equation*} Then for each $m$, $\eta^m\in H^j(\Sigma)$ for arbitrary $j\ge0$ and $\eta^m\to \eta$ in $H^{k+\f12}(\Sigma)$ as $m\to\infty$. We consider the problem \eqref{equ:SBC} with $\mathscr{A}$ and $\mathscr{N}$ replaced by $\mathscr{A}^m$ and $\mathscr{N}^m$, and $y_3$ replaced by $y_3^m$. Since $\eta^m\in H^{k+\f52}(\Sigma)$, we may apply Lemma \ref{lem:S lower regularity} to deduce that there exists a unique $(u^m, p^m, \theta^m)$ which solves \begin{eqnarray} \left\{ \begin{aligned} \mathop{\rm div}\nolimits_{\mathscr{A}^m}S_{\mathscr{A}^m}(p^m, u^m)-\theta^m \nabla_{\mathscr{A}^m}y_3^m&=F^1 \quad\text{in}\quad\Om\\ \mathop{\rm div}\nolimits_{\mathscr{A}^m}u^m&=F^2\quad\text{in}\quad\Om\\ -\Delta_{\mathscr{A}^m}\theta^m&=F^3\quad\text{in}\quad\Om\\ S_{\mathscr{A}^m}(p^m, u^m)\mathscr{N}^m&=F^4\quad\text{on}\quad \Sigma\\ \nabla_{\mathscr{A}^m}\theta^m\cdot\mathscr{N}^m+\theta^m\left|\mathscr{N}^m\right|&=F^5\quad\text{on}\quad \Sigma\\ u^m=0,\quad \theta^m&=0\quad\text{on}\quad \Sigma_b\\ \end{aligned} \right. \end{eqnarray} and satisfies \begin{align*} \|u^m\|_{H^r}+\|p^m\|_{H^{r-1}}+\|\theta^m\|_{H^r} &\lesssim C(\|\eta^m\|_{H^{k+\f52}})\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}\\ &\quad+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big) \end{align*} for $r=2, \ldots, k+1$. In the following, we will prove that the constant $C(\|\eta^m\|_{H^{k+\f52}})$ can be improved to one depending only on $\|\eta^m\|_{H^{k+\f12}}$. For convenience, we define \[ \mathscr{Z}=C(\eta_0)P(\eta^m)\Big(\|F^1\|_{H^{r-2}}^2+\|F^2\|_{H^{r-1}}^2 +\|F^3\|_{H^{r-2}}^2+\|F^4\|_{H^{r-\f32}(\Sigma)}^2+\|F^5\|_{H^{r-\f32}(\Sigma)}^2\Big), \] where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}}$ and $P(\eta^m)$ is a polynomial in $\|\eta^m\|_{H^{k+\f12}}$. Then, after the same computation as in the proof of Proposition 2.18 in \cite{LW}, with the only modification that $F$ is replaced by $F^1+\theta^m \nabla_{\mathscr{A}^m}y_3^m$, we have \[ \|u^m\|_{H^r}+\|p^m\|_{H^{r-1}}\lesssim\mathscr{Z}, \] for $r=2, \ldots, k+1$. This is because, in the above estimate, we only need to consider the terms $\|\theta^m\|_{H^r}$ for $r=2, \ldots, k-1$, and $\|\theta^m\|_{H^r}\lesssim\mathscr{Z}$ is guaranteed by Lemma \ref{lem:initial lower regularity}. Then we consider the temperature $\theta^m$. In the bootstrap argument below, we abuse notation and write $\theta$ for $\theta^m$, and similarly write $\eta$, $\mathscr{A}$, $\mathscr{N}$ for $\eta^m$, $\mathscr{A}^m$, $\mathscr{N}^m$. We write explicitly the equation of $\theta$ as \begin{eqnarray}\label{equ:equ T} \begin{aligned} &\pa_{11}\theta+\pa_{22}\theta+(1+A^2+B^2)K^2\pa_{33}\theta-2AK\pa_{13}\theta-2BK\pa_{23}\theta\\ &\quad+(AK\pa_3(AK)+BK\pa_3(BK)-\pa_1(AK)-\pa_2(BK)+K\pa_3K)\pa_3\theta=-F^3. \end{aligned} \end{eqnarray} \begin{enumerate}[step 1] \item $r=k$ case. By Lemma \ref{lem:initial lower regularity}, \[ \|\theta\|_{H^{k-1}}^2\lesssim C(\eta_0)\Big(\|F^3\|_{H^{k-3}}^2+\|F^5\|_{H^{k-\f52}(\Sigma)}^2\Big)\lesssim\mathscr{Z}, \] where the constant $C(\eta_0)$ only depends on $\|\eta_0\|_{H^{k+\f12}}$. For $i=1,2$, since $\pa_i \theta$ satisfies the equation \begin{eqnarray*} \left\{ \begin{aligned} -\Delta_{\mathscr{A}}\pa_i \theta&=\bar{F}^3 \quad \text{in}\quad \Om,\\ \nabla_{\mathscr{A}}\pa_i \theta\cdot\mathscr{N}+\pa_i \theta\left|\mathscr{N}\right|&=\bar{F}^5 \quad \text{on}\quad \Sigma,\\ \pa_i \theta&=0 \quad \text{on}\quad \Sigma_b, \end{aligned} \right.
\end{eqnarray*} where \begin{align*} \bar{F}^3&=\pa_i F^3+\mathop{\rm div}\nolimits_{\pa_i\mathscr{A}}\nabla_{\mathscr{A}}\theta+\mathop{\rm div}\nolimits_{\mathscr{A}}\nabla_{\pa_i\mathscr{A}}\theta,\\ \bar{F}^5&=\pa_i F^5-\nabla_{\pa_i\mathscr{A}}\theta\cdot\mathscr{N}-\nabla_{\mathscr{A}}\theta\cdot\pa_i\mathscr{N}-\theta\pa_i\left|\mathscr{N}\right|. \end{align*} Applying Lemmas A.1--A.2 in \cite{GT1}, we have \begin{align*} &\|\bar{F}^3\|_{H^{k-3}}^2+\|\bar{F}^5\|_{H^{k-\f52}(\Sigma)}^2\\ &\lesssim \|F^3\|_{H^{k-2}}^2+\|F^5\|_{H^{k-\f32}(\Sigma)}^2+P(\eta)\|\theta\|_{H^{k-1}}^2\\ &\lesssim\mathscr{Z}. \end{align*} Employing the order-$(k-1)$ elliptic estimate \eqref{ineq:initial lower elliptic}, we have \[ \|\pa_i \theta\|_{H^{k-1}}^2\lesssim C(\eta_0)\Big(\|\bar{F}^3\|_{H^{k-3}}^2+\|\bar{F}^5\|_{H^{k-\f52}(\Sigma)}^2\Big)\lesssim\mathscr{Z}. \] Then, taking the derivative $\pa_3^{k-2}$ on both sides of \eqref{equ:equ T} and focusing on the term $(1+A^2+B^2)K^2\pa_3^k\theta$, the estimates of all the other terms in the $H^0$-norm imply that \[ \|\pa_3^k\theta\|_{H^0}^2\lesssim\mathscr{Z}. \] Thus, we have proved that \[ \|\theta\|_{H^k}^2\lesssim\mathscr{Z}. \] \item $r=k+1$ case. For $i,j=1,2$, since $\pa_{ij}\theta$ satisfies the equation \begin{eqnarray*} \left\{ \begin{aligned} -\Delta_{\mathscr{A}}\pa_{ij} \theta&=\tilde{F}^3 \quad \text{in}\quad \Om,\\ \nabla_{\mathscr{A}}\pa_{ij} \theta\cdot\mathscr{N}+\pa_{ij} \theta\left|\mathscr{N}\right|&=\tilde{F}^5 \quad \text{on}\quad \Sigma,\\ \pa_{ij} \theta&=0 \quad \text{on}\quad \Sigma_b, \end{aligned} \right. \end{eqnarray*} where \begin{align*} \tilde{F}^3&=\pa_{ij}F^3+\mathop{\rm div}\nolimits_{\pa_{ij}\mathscr{A}}\nabla_{\mathscr{A}}\theta +\mathop{\rm div}\nolimits_{\mathscr{A}}\nabla_{\pa_{ij}\mathscr{A}}\theta+\mathop{\rm div}\nolimits_{\pa_i\mathscr{A}}\nabla_{\pa_j\mathscr{A}}\theta +\mathop{\rm div}\nolimits_{\pa_j\mathscr{A}}\nabla_{\pa_i\mathscr{A}}\theta\\ &\quad+\mathop{\rm div}\nolimits_{\pa_i\mathscr{A}}\nabla_{\mathscr{A}}\pa_j \theta+\mathop{\rm div}\nolimits_{\pa_j\mathscr{A}}\nabla_{\mathscr{A}}\pa_i \theta+\mathop{\rm div}\nolimits_{\mathscr{A}}\nabla_{\pa_i\mathscr{A}}\pa_j \theta+\mathop{\rm div}\nolimits_{\mathscr{A}}\nabla_{\pa_j\mathscr{A}}\pa_i \theta,\\ \tilde{F}^5&=\pa_{ij}F^5-\nabla_{\mathscr{A}} \theta\cdot\pa_{ij}\mathscr{N}-(\nabla_{\pa_i\mathscr{A}} \theta+\nabla_{\mathscr{A}}\pa_i\theta)\cdot\pa_{j}\mathscr{N}-(\nabla_{\pa_j\mathscr{A}} \theta+\nabla_{\mathscr{A}}\pa_j\theta)\cdot\pa_{i}\mathscr{N}\\ &\quad-(\nabla_{\pa_{ij}\mathscr{A}}\theta+\nabla_{\pa_i\mathscr{A}}\pa_j\theta+\nabla_{\pa_j\mathscr{A}}\pa_i\theta)\cdot\mathscr{N} -\theta\pa_{ij}\left|\mathscr{N}\right|-\pa_i\theta\pa_j\left|\mathscr{N}\right|-\pa_j\theta\pa_i\left|\mathscr{N}\right|. \end{align*} Applying Lemmas A.1--A.2 in \cite{GT1} to the forcing terms, we have \begin{align*} &\|\tilde{F}^3\|_{H^{k-3}}^2+\|\tilde{F}^5\|_{H^{k-\f52}(\Sigma)}^2\\ &\lesssim \|F^3\|_{H^{k-1}}^2+\|F^5\|_{H^{k-\f12}(\Sigma)}^2+P(\eta)\|\theta\|_{H^k}^2\\ &\lesssim\mathscr{Z}. \end{align*} Then Lemma \ref{lem:initial lower regularity} implies that \[ \|\pa_{ij}\theta\|_{H^{k-1}}^2\lesssim C(\eta_0)\left(\|\tilde{F}^3\|_{H^{k-3}}^2+\|\tilde{F}^5\|_{H^{k-\f52}(\Sigma)}^2\right) \lesssim\mathscr{Z}. \] Since we have proved the case $r=k$, we take the derivative $\pa_3^{k-2}\pa_i$ on both sides of \eqref{equ:equ T} for $i=1,2$ and focus on the term $(1+A^2+B^2)K^2\pa_3^k\pa_i\theta$.
Utilizing the estimates of all the other terms in the $H^0$-norm, we have \[ \|\pa_3^k\pa_i\theta\|_{H^0}^2\lesssim\mathscr{Z}. \] Then, taking the derivative $\pa_3^{k-1}$ on both sides of \eqref{equ:equ T} and focusing on the term $(1+A^2+B^2)K^2\pa_3^{k+1}\theta$, we deduce from all the estimates above that \[ \|\pa_3^{k+1}\theta\|_{H^0}^2\lesssim\mathscr{Z}. \] Therefore, we have proved \[ \|\theta\|_{H^{k+1}}^2\lesssim\mathscr{Z}. \] \end{enumerate} Now, we go back to the original notation. According to the convergence of $\eta^m$, we have \begin{eqnarray}\label{ineq:bound} \begin{aligned} &\|u^m\|_{H^r}^2+\|p^m\|_{H^{r-1}}^2+\|\theta^m\|_{H^r}^2\\ &\lesssim C(\eta_0)P(\eta^m)\Big(\|F^1\|_{H^{r-2}}^2+\|F^2\|_{H^{r-1}}^2 +\|F^3\|_{H^{r-2}}^2+\|F^4\|_{H^{r-\f32}(\Sigma)}^2+\|F^5\|_{H^{r-\f32}(\Sigma)}^2\Big)\\ &\lesssim C(\eta_0)P(\eta)\Big(\|F^1\|_{H^{r-2}}^2+\|F^2\|_{H^{r-1}}^2 +\|F^3\|_{H^{r-2}}^2+\|F^4\|_{H^{r-\f32}(\Sigma)}^2+\|F^5\|_{H^{r-\f32}(\Sigma)}^2\Big)\\ &\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}^2+\|F^2\|_{H^{r-1}}^2 +\|F^3\|_{H^{r-2}}^2+\|F^4\|_{H^{r-\f32}(\Sigma)}^2+\|F^5\|_{H^{r-\f32}(\Sigma)}^2\Big), \end{aligned} \end{eqnarray} for $r=2, \ldots, k+1$, where in the last inequality we have used the assumption that $\|\eta-\eta_0\|_{H^{k+\f12}}\le\varepsilon_0$, so that $P(\eta)$ is bounded by a constant depending only on $\|\eta_0\|_{H^{k+\f12}}$ and may be absorbed into $C(\eta_0)$. Here $C(\eta_0)$ depends only on $\|\eta_0\|_{H^{k+\f12}}$. The uniform bound \eqref{ineq:bound} implies that the sequence $\{(u^m, p^m, \theta^m)\}$ is uniformly bounded in $H^r\times H^{r-1}\times H^r$, so we can extract a weakly convergent subsequence, which is still denoted by $\{(u^m, p^m, \theta^m)\}$. That is, $u^m\rightharpoonup u^0$ in $H^r(\Om)$, $p^m\rightharpoonup p^0$ in $H^{r-1}(\Om)$ and $\theta^m\rightharpoonup \theta^0$ in $H^r(\Om)$. Since $\eta^m\rightarrow\eta$ in $H^{k+\f12}(\Sigma)$, we also have that $\mathscr{A}^m\to\mathscr{A}$, $J^m\to J$ in $H^k(\Om)$, and $\mathscr{N}^m\to\mathscr{N}$ in $H^{k-\f12}(\Sigma)$. After multiplying the equation $\mathop{\rm div}\nolimits_{\mathscr{A}^m}u^m=F^2$ by $wJ^m$ for $w\in C_c^\infty(\Om)$ and integrating by parts, we see that \begin{align*} \int_{\Om}F^2wJ^m=\int_{\Om}\mathop{\rm div}\nolimits_{\mathscr{A}^m}(u^m)wJ^m&=-\int_{\Om}u^m\cdot\nabla_{\mathscr{A}^m}wJ^m\\ &\to-\int_{\Om}u^0\cdot\nabla_{\mathscr{A}}wJ=\int_{\Om}\mathop{\rm div}\nolimits_{\mathscr{A}}(u^0)wJ, \end{align*} from which we deduce that $\mathop{\rm div}\nolimits_{\mathscr{A}}u^0=F^2$. Then multiplying the third equation in \eqref{equ:SBC} by $wJ^m$ for $w\in{}_0H^1(\Om)$ and integrating by parts, we have that \[ \int_{\Om}\nabla_{\mathscr{A}^m}\theta^m\cdot\nabla_{\mathscr{A}^m}wJ^m+\int_{\Sigma}\theta^mw\left|\mathscr{N}^m\right|=\int_{\Om}F^3wJ^m+\int_{\Sigma}F^5w, \] which, by passing to the limit $m\to\infty$, reveals that \[ \int_{\Om}\nabla_{\mathscr{A}}\theta^0\cdot\nabla_{\mathscr{A}}wJ+\int_{\Sigma}\theta^0w\left|\mathscr{N}\right|=\int_{\Om}F^3wJ+\int_{\Sigma}F^5w. \] Finally, we multiply the first equation in \eqref{equ:SBC} by $wJ^m$ for $w\in{}_0H^1(\Om)$ and integrate by parts to see that \[ \int_{\Om}\f12\mathbb{D}_{\mathscr{A}^m}u^m:\mathbb{D}_{\mathscr{A}^m}w\,J^m-p^m\mathop{\rm div}\nolimits_{\mathscr{A}^m}(w)J^m-\theta^m\nabla_{\mathscr{A}^m}y_3^m\cdot w\,J^m=\int_{\Om}F^1\cdot wJ^m-\int_{\Sigma}F^4\cdot w.
\] Passing to the limit $m\to\infty$, we deduce that \[ \int_{\Om}\f12\mathbb{D}_{\mathscr{A}}u^0:\mathbb{D}_{\mathscr{A}} w\,J-p^0\mathop{\rm div}\nolimits_{\mathscr{A}}(w)J-\theta^0\nabla_{\mathscr{A}}y_3\cdot w\,J=\int_{\Om}F^1\cdot wJ-\int_{\Sigma}F^4\cdot w. \] After integrating by parts again, we deduce that $(u^0, p^0, \theta^0)$ satisfies \eqref{equ:SBC}. Since $(u, p, \theta)$ is the unique solution to \eqref{equ:SBC}, we have that $u=u^0$, $p=p^0$ and $\theta=\theta^0$. Then, by weak lower semicontinuity and the uniform bound \eqref{ineq:bound}, we have that \begin{align*} &\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\ &\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big), \end{align*} for $r=2, \ldots, k+1$, where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}(\Sigma)}$. \end{proof} \subsection{The $\mathscr{A}$-Poisson problem} Now we consider the elliptic problem \begin{eqnarray}\label{equ:poisson} \left\{ \begin{aligned} &\Delta_{\mathscr{A}}p=f^1\quad&\text{in}\thinspace\Om,\\ &p=f^2\quad&\text{on}\thinspace\Sigma,\\ &\nabla_{\mathscr{A}}p\cdot\nu=f^3\quad&\text{on}\thinspace\Sigma_b, \end{aligned} \right. \end{eqnarray} where $\nu$ is the outward-pointing normal on $\Sigma_b$. The elliptic estimates for \eqref{equ:poisson} have been presented in detail in \cite{GT1} and \cite{LW}, so we omit them here. \section{Linear estimates} Now we study the problem \eqref{equ:linear BC}, following the path of \cite{GT1}. We will employ two notions of solution: weak and strong. \subsection{The weak solution} Suppose that a smooth solution to \eqref{equ:linear BC} exists; then, multiplying by test functions, integrating by parts over $\Om$, and integrating in time from $0$ to $T$, we see that \begin{eqnarray} \begin{aligned} \left(\pa_tu, \psi\right)_{L^2\mathscr{H}^0}+\f12\left(u, \psi\right)_{L^2\mathscr{H}^1}-\left(p, \mathop{\rm div}\nolimits_{\mathscr{A}}\psi\right)_{L^2\mathscr{H}^0}-\left(\theta \nabla_{\mathscr{A}}y_3,\psi\right)_{L^2\mathscr{H}^0}\\ =\left(F^1, \psi\right)_{L^2\mathscr{H}^0} -\left(F^4, \psi\right)_{L^2H^0(\Sigma)},\\ \left(\pa_t\theta, \phi\right)_{L^2\mathscr{H}^0}+\left(\nabla_{\mathscr{A}}\theta, \nabla_{\mathscr{A}}\phi\right)_{L^2\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|, \phi\right)_{L^2H^0(\Sigma)}\\ =\left(F^3, \phi\right)_{L^2\mathscr{H}^0} +\left(F^5, \phi\right)_{L^2H^0(\Sigma)}, \end{aligned} \end{eqnarray} for $\phi$, $\psi\in\mathscr{H}^1_T$. If we were to restrict the test function $\psi$ to $\psi\in\mathscr{X}$, the term $\left(p, \mathop{\rm div}\nolimits_{\mathscr{A}}\psi\right)_{L^2\mathscr{H}^0}$ would vanish, and we would arrive at the pressureless weak formulation \begin{eqnarray} \begin{aligned} \left(\pa_tu, \psi\right)_{L^2\mathscr{H}^0}+\f12\left(u, \psi\right)_{L^2\mathscr{H}^1}-\left(\theta \nabla_{\mathscr{A}}y_3,\psi\right)_{L^2\mathscr{H}^0}\\ =\left(F^1, \psi\right)_{L^2\mathscr{H}^0} -\left(F^4, \psi\right)_{L^2H^0(\Sigma)},\\ \left(\pa_t\theta, \phi\right)_{L^2\mathscr{H}^0}+\left(\nabla_{\mathscr{A}}\theta, \nabla_{\mathscr{A}}\phi\right)_{L^2\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|, \phi\right)_{L^2H^0(\Sigma)}\\ =\left(F^3, \phi\right)_{L^2\mathscr{H}^0} +\left(F^5, \phi\right)_{L^2H^0(\Sigma)}. \end{aligned} \end{eqnarray} This leads us to define a weak solution without pressure.
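For clarity, we record how the boundary terms in the temperature identity arise; a brief formal computation, testing the third equation of \eqref{equ:linear BC} with $\phi J$ and using the boundary conditions on $\Sigma$ and $\Sigma_b$, gives
\[
-\int_{\Om}\Delta_{\mathscr{A}}\theta\,\phi\,J=\int_{\Om}\nabla_{\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi\,J-\int_{\Sigma}(\nabla_{\mathscr{A}}\theta\cdot\mathscr{N})\phi
=\int_{\Om}\nabla_{\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi\,J+\int_{\Sigma}\theta\phi\left|\mathscr{N}\right|-\int_{\Sigma}F^5\phi,
\]
which, after integrating in time, produces the temperature identity above.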
\begin{definition} Suppose that $u_0\in \mathscr{Y}(0)$, $\theta_0\in H^0(\Om)$, $F^1-F^4\in (\mathscr{X}_T)^\ast$ and $F^3+F^5\in(\mathscr{H}^1_T)^\ast$. If there exists a pair $(u, \theta)$ achieving the initial data $u_0$, $\theta_0$, satisfying $u\in\mathscr{H}^1_T$, $\theta\in\mathscr{H}^1_T$, $\pa_tu\in (\mathscr{X}_T)^\ast$, $\pa_t \theta\in (\mathscr{H}^1_T)^\ast$, and such that \begin{eqnarray}\label{equ:lpws} \begin{aligned} &\left<\pa_tu, \psi\right>_{(\mathscr{X}_T)^\ast}+\f12\left(u, \psi\right)_{L^2\mathscr{H}^1}-\left(\theta \nabla_{\mathscr{A}}y_3,\psi\right)_{L^2\mathscr{H}^0}=\left<F^1-F^4, \psi\right>_{(\mathscr{X}_T)^\ast},\\ &\left<\pa_t\theta, \phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\theta,\phi\right)_{L^2\mathscr{H}^1}+\left(\theta\left|\mathscr{N}\right|, \phi\right)_{L^2H^0(\Sigma)}=\left<F^3+F^5, \phi\right>_{(\mathscr{H}^1_T)^\ast}, \end{aligned} \end{eqnarray} holds for any $\psi\in\mathscr{X}_T$ and $\phi\in\mathscr{H}^1_T$, then we call the pair $(u, \theta)$ a pressureless weak solution. \end{definition} Since our aim is to construct solutions with high regularity to \eqref{equ:linear BC}, we will directly construct strong solutions to \eqref{equ:lpws}. Weak solutions will then arise as a byproduct of the construction of strong solutions to \eqref{equ:linear BC}, so we will not study the existence of weak solutions separately. We now derive some properties and the uniqueness of weak solutions. \begin{lemma} Suppose that $(u, \theta)$ is a weak solution of \eqref{equ:lpws}. Then, for almost every $t\in [0,T]$, \begin{eqnarray}\label{eq:integral} \begin{aligned} \f12\|u(t)\|_{\mathscr{H}^0(t)}^2+\f12\int_0^t\|u(s)\|_{\mathscr{H}^1(s)}^2\,\mathrm{d}s=\f12\|u(0)\|_{\mathscr{H}^0(0)}^2+\left<F^1-F^4,u\right>_{(\mathscr{X}_t)^\ast}\\ +\f12\int_0^t\int_{\Om}|u(s)|^2\pa_sJ(s)\,\mathrm{d}s+\int_0^t\int_{\Om}\theta(s)\nabla_{\mathscr{A}}y_3\cdot u(s)\,\mathrm{d}s,\\ \f12\|\theta(t)\|_{\mathscr{H}^0(t)}^2+\int_0^t\|\theta(s)\|_{\mathscr{H}^1(s)}^2\,\mathrm{d}s+\int_0^t\int_{\Sigma}|\theta(s)|^2\left|\mathscr{N}\right|\,\mathrm{d}s=\f12\|\theta(0)\|_{\mathscr{H}^0(0)}^2\\ +\left<F^3+F^5,\theta\right>_{(\mathscr{H}^1_t)^\ast}+\f12\int_0^t\int_{\Om}|\theta(s)|^2\pa_sJ(s)\,\mathrm{d}s. \end{aligned} \end{eqnarray} Also, \begin{equation}\label{est:weak theta} \sup_{0\le t\le T}\|\theta(t)\|_{\mathscr{H}^0(t)}^2+\|\theta\|_{\mathscr{H}^1_T}^2\lesssim \exp\left(C_0(\eta)T\right)\left(\|\theta(0)\|_{\mathscr{H}^0(0)}^2+\|F^3+F^5\|_{(\mathscr{H}^1_T)^\ast}^2\right), \end{equation} \begin{eqnarray}\label{est:weak u} \begin{aligned} \sup_{0\le t\le T}\|u(t)\|_{\mathscr{H}^0(t)}^2+\|u\|_{\mathscr{H}^1_T}^2\lesssim \exp\left(CC_0(\eta)T\right)\Big(\|u(0)\|_{\mathscr{H}^0(0)}^2+\|\theta(0)\|_{\mathscr{H}^0(0)}^2\\ +\|F^1-F^4\|_{(\mathscr{X}_T)^\ast}^2+\|F^3+F^5\|_{(\mathscr{H}^1_T)^\ast}^2\Big), \end{aligned} \end{eqnarray} where $C_0(\eta):=\max\{\sup_{0\le t\le T}\|\pa_tJK\|_{L^\infty}, \sup_{0\le t\le T}\|\nabla_{\mathscr{A}}y_3\|_{L^\infty}\}$. \end{lemma} \begin{proof} The identity \eqref{eq:integral} follows directly from Lemma 2.4 in \cite{GT1} and \eqref{equ:lpws} by using the test functions $\psi=u\chi_{[0,t]}\in \mathscr{X}_T$ and $\phi=\theta\chi_{[0,t]}\in \mathscr{H}^1_T$, where $\chi_{[0,t]}$ is the temporal indicator function of the interval $[0,t]$.
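For later use, we also note the elementary computation behind the $\pa_sJ$ terms in \eqref{eq:integral}; a short sketch for the temperature part:
\[
\int_0^t\int_{\Om}\pa_s\theta\,\theta\,J\,\mathrm{d}s
=\f12\int_0^t\f{\mathrm{d}}{\mathrm{d}s}\left(\int_{\Om}|\theta|^2J\right)\mathrm{d}s-\f12\int_0^t\int_{\Om}|\theta|^2\pa_sJ\,\mathrm{d}s
=\f12\|\theta(t)\|_{\mathscr{H}^0(t)}^2-\f12\|\theta(0)\|_{\mathscr{H}^0(0)}^2-\f12\int_0^t\int_{\Om}|\theta|^2\pa_sJ\,\mathrm{d}s.
\]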
From \eqref{eq:integral}, we can directly derive the inequalities \begin{equation}\label{est:weak theta1} \f12\|\theta(t)\|_{\mathscr{H}^0(t)}^2+\|\theta\|_{\mathscr{H}^1_t}^2\le \f12\|\theta(0)\|_{\mathscr{H}^0(0)}^2+\|F^3+F^5\|_{(\mathscr{H}^1_t)^\ast}\|\theta\|_{\mathscr{H}^1_t}+\f12C_0(\eta)\|\theta\|_{\mathscr{H}^0_t}^2, \end{equation} \begin{eqnarray}\label{est:weak u1} \begin{aligned} \f12\|u(t)\|_{\mathscr{H}^0(t)}^2+\f12\|u\|_{\mathscr{H}^1_t}^2\le \f12\|u(0)\|_{\mathscr{H}^0(0)}^2+\|F^1-F^4\|_{(\mathscr{X}_t)^\ast}\|u\|_{\mathscr{H}^1_t}\\ +\f12C_0(\eta)\|u\|_{\mathscr{H}^0_t}^2+CC_0(\eta)\|\theta\|_{\mathscr{H}^1_t}\|u\|_{\mathscr{H}^1_t}, \end{aligned} \end{eqnarray} where, for \eqref{est:weak u1}, we have used the Poincar\'e inequality in Lemma A.14 of \cite{GT1}, and \[ \|u\|_{\mathscr{H}^k_t}^2=\int_0^t\|u(s)\|_{\mathscr{H}^k(s)}^2\,\mathrm{d}s\quad \text{for}\thinspace k=0,1, \] and similarly for $\|\theta\|_{\mathscr{H}^k_t}^2$, $\|F^1-F^4\|_{(\mathscr{X}_t)^\ast}$, $\|F^3+F^5\|_{(\mathscr{H}^1_t)^\ast}$. Inequalities \eqref{est:weak theta1}, \eqref{est:weak u1} and Cauchy's inequality imply that \begin{eqnarray}\label{ineq:integral} \begin{aligned} \f12\|\theta(t)\|_{\mathscr{H}^0(t)}^2+\f34\|\theta\|_{\mathscr{H}^1_t}^2\le \f12\|\theta(0)\|_{\mathscr{H}^0(0)}^2+\|F^3+F^5\|_{(\mathscr{H}^1_t)^\ast}^2+\f12C_0(\eta)\|\theta\|_{\mathscr{H}^0_t}^2,\\ \f12\|u(t)\|_{\mathscr{H}^0(t)}^2+\f18\|u\|_{\mathscr{H}^1_t}^2\le \f12\|u(0)\|_{\mathscr{H}^0(0)}^2+\|F^1-F^4\|_{(\mathscr{X}_t)^\ast}^2 +\f12C_0(\eta)\|u\|_{\mathscr{H}^0_t}^2\\ +CC_0(\eta)\|\theta\|_{\mathscr{H}^1_t}^2. \end{aligned} \end{eqnarray} Then \eqref{est:weak theta} and \eqref{est:weak u} follow from the integral inequality \eqref{ineq:integral} and Gronwall's lemma. \end{proof} \begin{proposition} Weak solutions to \eqref{equ:lpws} are unique. \end{proposition} \begin{proof} Suppose that $(u^1,\theta^1)$ and $(u^2,\theta^2)$ are both weak solutions to \eqref{equ:lpws}. Then $(w,\vartheta)$, defined by $w=u^1-u^2$ and $\vartheta=\theta^1-\theta^2$, is a weak solution with $F^1-F^4=0$, $F^3+F^5=0$, $w(0)=u^1(0)-u^2(0)=0$ and $\vartheta(0)=\theta^1(0)-\theta^2(0)=0$. Then the bounds \eqref{est:weak theta} and \eqref{est:weak u} imply that $w=0$ and $\vartheta=0$. Hence, weak solutions to \eqref{equ:lpws} are unique. \end{proof} \subsection{The strong solution} Before we define the strong solution, we need to define an operator $D_t$ as \begin{equation}\label{def:Dt} D_tu:=\pa_tu-Ru\quad \text{for}\quad R:=\pa_tMM^{-1}, \end{equation} with $M=K\nabla\Phi$, where $K$, $\Phi$ are as defined in \eqref{map:phi} and \eqref{equ:components}. It is easy to see that $D_t$ preserves the $\mathop{\rm div}\nolimits_{\mathscr{A}}$-free condition, since \[ J\mathop{\rm div}\nolimits_{\mathscr{A}}(D_tv)=J\mathop{\rm div}\nolimits_{\mathscr{A}}(M\pa_t(M^{-1}v))=\mathop{\rm div}\nolimits(\pa_t(M^{-1}v))=\pa_t\mathop{\rm div}\nolimits(M^{-1}v)=\pa_t(J\mathop{\rm div}\nolimits_{\mathscr{A}}v), \] where the equality $J\mathop{\rm div}\nolimits_{\mathscr{A}}v=\mathop{\rm div}\nolimits(M^{-1}v)$ can be found on page 299 of \cite{GT1}.
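Equivalently, $D_t$ transports the time derivative through the matrix $M$: writing $v=Mw$, a one-line check gives
\[
D_t(Mw)=\pa_t(Mw)-\pa_tMM^{-1}(Mw)=\pa_tM\,w+M\pa_tw-\pa_tM\,w=M\pa_tw,
\]
which is precisely what underlies the computation above.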
\begin{definition}\label{def:strong solution} Suppose that the forcing functions satisfy \begin{eqnarray}\label{cond:force} \begin{aligned} F^1&\in L^2([0, T]; H^1(\Om))\cap C^0([0, T]; H^0(\Om)),\\ F^3&\in L^2([0, T]; H^1(\Om))\cap C^0([0, T]; H^0(\Om)),\\ F^4&\in L^2([0, T]; H^{\f32}(\Sigma))\cap C^0([0, T]; H^{\f12}(\Sigma)),\\ F^5&\in L^2([0, T]; H^{\f32}(\Sigma))\cap C^0([0, T]; H^{\f12}(\Sigma)),\\ \pa_t(F^1-F^4)&\in L^2([0, T]; ({}_0H^1(\Om))^\ast), \quad\pa_t(F^3+F^5)\in L^2([0, T]; ({}_0H^1(\Om))^\ast). \end{aligned} \end{eqnarray} We also assume that $u_0\in H^2\cap \mathscr{X}(0)$ and $\theta_0\in H^2\cap\mathscr{H}^1(0)$. If there exists a triple $(u, p, \theta)$ achieving the initial data $u_0$, $\theta_0$ and satisfying \begin{eqnarray}\label{equ:strong solution} \begin{aligned} &u\in L^2([0, T]; H^3)\cap C^0([0,T];H^2)\cap \mathscr{X}_T \thinspace &\pa_tu\in L^2([0, T]; H^1)\cap C^0([0,T];H^0)\\ &D_tu\in \mathscr{X}_T,\quad\pa_t^2u\in\mathscr{X}_T^\ast \thinspace &p\in L^2([0, T]; H^2)\cap C^0([0,T];H^1)\\ &\theta\in L^2([0, T]; H^3)\cap C^0([0,T];H^2) \thinspace &\pa_t\theta\in L^2([0, T]; H^1)\cap C^0([0,T];H^0)\\ &\pa_t^2\theta\in(\mathscr{H}_T^1)^\ast, \end{aligned} \end{eqnarray} such that $(u, p, \theta)$ satisfies \eqref{equ:linear BC} in the strong sense, then we call it a strong solution. \end{definition} We now establish the existence and lower-order regularity of strong solutions. \begin{theorem}\label{thm:lower regularity} Suppose that the forcing terms and the initial data satisfy the conditions in Definition \ref{def:strong solution}, and that $u_0$, $F^4(0)$ satisfy the compatibility condition \begin{equation}\label{cond:compatibility} \Pi_0\left(F^4(0)+\mathbb{D}_{\mathscr{A}_0}u_0\mathscr{N}_0\right)=0,\quad \text{where}\thinspace \mathscr{N}_0=(-\pa_1\eta_0, -\pa_2\eta_0, 1), \end{equation} and $\Pi_0$ is the orthogonal projection onto the tangent space of the surface $\{x_3=\eta_0\}$ defined by \begin{equation}\label{def:projection} \Pi_0v=v-(v\cdot\mathscr{N}_0)\mathscr{N}_0|\mathscr{N}_0|^{-2}. \end{equation} Then there exists a strong solution $(u, p, \theta)$ satisfying \eqref{equ:strong solution}. Moreover, \begin{eqnarray}\label{inequ:est strong solution} \begin{aligned} &\|u\|_{L^\infty H^2}^2+\|u\|_{L^2H^3}^2+\|\pa_tu\|_{L^\infty H^0}^2+\|\pa_tu\|_{L^2H^1}^2+\|\pa_t^2u\|_{(\mathscr{X}_T)^\ast}^2+\|p\|_{L^\infty H^1}^2+\|p\|_{L^2H^2}^2\\ &\quad+\|\theta\|_{L^\infty H^2}^2+\|\theta\|_{L^2H^3}^2+\|\pa_t\theta\|_{L^\infty H^0}^2+\|\pa_t\theta\|_{L^2H^1}^2+\|\pa_t^2\theta\|_{(\mathscr{H}^1_T)^\ast}^2\\ &\lesssim P(\|\eta_0\|_{H^{5/2}})\left(1+\mathscr{K}(\eta)\right)\exp\left(C(1+\mathscr{K}(\eta))T\right)\Big(\|u_0\|_{H^2}^2+\|\theta_0\|_{H^2}^2+\|F^1(0)\|_{H^0}^2\\ &\quad+\|F^3(0)\|_{H^0}^2+\|F^4(0)\|_{H^{1/2}(\Sigma)}^2+\|F^1\|_{L^2H^1}^2+\|F^3\|_{L^2H^1}^2+\|F^4\|_{L^2H^{3/2}(\Sigma)}^2\\ &\quad+\|F^5\|_{L^2H^{3/2}(\Sigma)}^2+\|\pa_t(F^1-F^4)\|_{(\mathscr{X}_T)^\ast}^2+\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1_T)^\ast}^2\Big), \end{aligned} \end{eqnarray} where $C$ is a constant independent of $\eta$ and $\mathscr{K}(\eta)$ is defined as \begin{equation} \mathscr{K}(\eta):=\sup_{0\le t\le T}\left(\|\eta\|_{H^{9/2}}^2+\|\pa_t\eta\|_{H^{7/2}}^2+\|\pa_t^2\eta\|_{H^{5/2}}^2\right).
\end{equation} The initial pressure $p(0)\in H^1(\Om)$ is determined in terms of $u_0$, $\theta_0$, $F^1(0)$, $F^4(0)$ as a weak solution to \begin{eqnarray}\label{equ:p0} \left\{ \begin{aligned} &\mathop{\rm div}\nolimits_{\mathscr{A}_0}\left(\nabla_{\mathscr{A}_0}p(0)-F^1(0)-\theta_0\nabla_{\mathscr{A}_0}y_{3,0}\right)=-\mathop{\rm div}\nolimits_{\mathscr{A}_0}(R(0)u_0)\in H^0(\Om),\\ &p(0)=(F^4(0)+\mathbb{D}_{\mathscr{A}_0}u_0\mathscr{N}_0)\cdot\mathscr{N}_0|\mathscr{N}_0|^{-2}\in H^{1/2}(\Sigma),\\ &\left(\nabla_{\mathscr{A}_0}p(0)-F^1(0)\right)\cdot\nu=\Delta_{\mathscr{A}_0}u_0\cdot\nu\in H^{-1/2}(\Sigma_b), \end{aligned} \right. \end{eqnarray} where $y_{3,0}$ is defined in terms of $\eta_0$. Also, $\pa_t\theta(0)$ satisfies \begin{equation} \pa_t\theta(0)=\Delta_{\mathscr{A}_0}\theta_0+F^3(0) \in H^0(\Om), \end{equation} and $D_tu(0)=\pa_tu(0)-R(0)u_0$ satisfies \begin{equation} D_tu(0)=\Delta_{\mathscr{A}_0}u_0-\nabla_{\mathscr{A}_0}p(0)+F^1(0)+\theta_0e_3-R(0)u_0\in\mathscr{Y}(0). \end{equation} Moreover, $\pa_t\theta$ satisfies \begin{eqnarray}\label{equ:pat theta} \left\{ \begin{aligned} &\pa_t(\pa_t\theta)-\Delta_{\mathscr{A}}(\pa_t\theta)=\pa_tF^3+G^3\quad&\text{in}\quad\Om,\\ &\nabla_{\mathscr{A}}(\pa_t\theta)\cdot\mathscr{N}+\pa_t\theta\left|\mathscr{N}\right|=\pa_tF^5+G^5\quad&\text{on}\quad\Sigma,\\ &\pa_t\theta=0\quad&\text{on}\quad\Sigma_b, \end{aligned} \right. \end{eqnarray} and $D_tu$ satisfies \begin{eqnarray}\label{equ:Dt u} \left\{ \begin{aligned} &\pa_t(D_tu)-\Delta_{\mathscr{A}}(D_tu)+\nabla_{\mathscr{A}}(\pa_tp)-D_t(\theta \nabla_{\mathscr{A}}y_3)=D_tF^1+G^1\quad&\text{in}\quad\Om,\\ &\mathop{\rm div}\nolimits_{\mathscr{A}}(D_tu)=0\quad&\text{in}\quad\Om,\\ &S_{\mathscr{A}}(\pa_tp,D_tu)\mathscr{N}=\pa_tF^4+G^4\quad&\text{on}\quad\Sigma,\\ &D_tu=0\quad&\text{on}\quad\Sigma_b, \end{aligned} \right. \end{eqnarray} in the weak sense of \eqref{equ:lpws}, where $G^1$ is defined by \[ G^1=-(R+\pa_tJK)\Delta_{\mathscr{A}}u-\pa_tRu+(\pa_tJK+R+R^\top)\nabla_{\mathscr{A}}p+\mathop{\rm div}\nolimits_{\mathscr{A}}(\mathbb{D}_{\mathscr{A}}(Ru)-R\mathbb{D}_{\mathscr{A}}u+\mathbb{D}_{\pa_t\mathscr{A}}u) \] ($R^\top$ denoting the matrix transpose of $R$), $G^3$ by \[ G^3=-\pa_tJK\Delta_{\mathscr{A}}\theta+\mathop{\rm div}\nolimits_{\mathscr{A}}(-R\nabla_{\mathscr{A}}\theta+\nabla_{\pa_t\mathscr{A}}\theta), \] $G^4$ by \[ G^4=\mathbb{D}_{\mathscr{A}}(Ru)\mathscr{N}-(pI-\mathbb{D}_{\mathscr{A}}u)\pa_t\mathscr{N}+\mathbb{D}_{\pa_t\mathscr{A}}u\mathscr{N}, \] and $G^5$ by \[ G^5=-\nabla_{\mathscr{A}}\theta\cdot\pa_t\mathscr{N}-\nabla_{\pa_t\mathscr{A}}\theta\cdot\mathscr{N}-\theta\pa_t\left|\mathscr{N}\right|.
\] More precisely, \eqref{equ:pat theta} and \eqref{equ:Dt u} hold in the weak sense of \eqref{equ:lpws} in that \begin{eqnarray}\label{equ:weak pat theta} \begin{aligned} &\left<\pa_t^2\theta,\phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\pa_t\theta,\phi\right)_{\mathscr{H}^1_T}+\left(\pa_t\theta\left|\mathscr{N}\right|,\phi\right)_{L^2H^0(\Sigma)}\\ &=\left<\pa_t(F^3+F^5),\phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\pa_tJKF^3,\phi\right)_{\mathscr{H}^0_T}-\left(\pa_tJK\pa_t\theta,\phi\right)_{\mathscr{H}^0_T}\\ &\quad-\int_0^T\int_{\Om}\left(\pa_tJK\nabla_{\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\pa_t\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\mathscr{A}}\theta\cdot\nabla_{\pa_t\mathscr{A}}\phi\right)J \end{aligned} \end{eqnarray} and \begin{eqnarray} \begin{aligned} &\left<\pa_tD_tu,\psi\right>_{(\mathscr{X}_T)^\ast}+\f12\left(\pa_tu,\psi\right)_{\mathscr{H}^1_T}\\ &=\left<\pa_t(F^1-F^4),\psi\right>_{(\mathscr{X}_T)^\ast}+\left(\pa_t(\theta \nabla_{\mathscr{A}}y_3),\psi\right)_{\mathscr{H}^0_T}-\left(\pa_tRu+R\pa_tu,\psi\right)_{\mathscr{H}^0_T}\\ &\quad+\left(\pa_tJKF^1,\psi\right)_{\mathscr{H}^0_T}-\left(\pa_tJK\theta e_3,\psi\right)_{\mathscr{H}^0_T}-\left(\pa_tJK\pa_tu,\psi\right)_{\mathscr{H}^0_T}-\left(p,\mathop{\rm div}\nolimits_{\mathscr{A}}(R\psi)\right)_{\mathscr{H}^0_T}\\ &\quad-\f12\int_0^T\int_\Om\left(\pa_tJK\mathbb{D}_{\mathscr{A}}u:\mathbb{D}_{\mathscr{A}}\psi+\mathbb{D}_{\pa_t\mathscr{A}}u:\mathbb{D}_{\mathscr{A}}\psi+\mathbb{D}_{\mathscr{A}}u:\mathbb{D}_{\pa_t\mathscr{A}}\psi\right)J \end{aligned} \end{eqnarray} for all $\phi\in\mathscr{H}^1_T$, $\psi\in\mathscr{X}_T$. \end{theorem} \begin{proof} Here we will use the Galerkin method; we refer to \cite{Evans} for its general framework. Step 1. The construction of approximate solutions for $\theta$. Since the scalar-valued space $H^2(\Om)\cap{}_0H^1(\Om)$ is separable, we can choose a countable basis $\{\tilde{w}^j\}_{j=1}^\infty$. Note that this basis is time-independent. Now, we need to construct a time-dependent basis for $H^2\cap\mathscr{H}^1$. We define $\phi^j=\phi^j(t):=K(t)\tilde{w}^j$. According to Proposition \ref{prop:k}, $\phi^j(t)\in H^2(\Om)\cap\mathscr{H}^1(t)$, and $\{\phi^j(t)\}_{j=1}^\infty$ is a basis of $H^2(\Om)\cap\mathscr{H}^1(t)$ for each $t\in [0, T]$. Moreover, \begin{equation}\label{equ:dt phi} \pa_t\phi^j(t)=\pa_tK(t)\tilde{w}^j=\pa_tKJK\tilde{w}^j=\pa_tKJ\phi^j(t), \end{equation} which allows us to express $\pa_t\phi^j$ in terms of $\phi^j$. For any integer $m\ge1$, we define the finite-dimensional space $\mathscr{H}^1_m(t):={\rm span}\{\phi^1(t), \ldots, \phi^m(t)\}\subset H^2(\Om)\cap\mathscr{H}^1(t)$, and we define $\mathscr{P}^m_t: H^2(\Om)\to \mathscr{H}^1_m(t)$ to be the $H^2(\Om)$-orthogonal projection onto $\mathscr{H}^1_m(t)$. Clearly, if $\theta\in H^2(\Om)\cap\mathscr{H}^1(t)$, then $\mathscr{P}^m_t\theta\to\theta$ as $m\to\infty$. For each $m\ge1$, we define an approximate solution \[ \theta^m=d^m_j(t)\phi^j(t), \quad \text{with}\quad d^m_j(t): [0, T] \to \mathbb{R} \quad\text{for}\quad j=1, \ldots, m, \] where as usual we use the Einstein convention of summation over the repeated index $j$.
We want to choose $d^m_j$ such that \begin{equation}\label{equ:thetam} \left(\pa_t\theta^m, \phi\right)_{\mathscr{H}^0}+\left(\theta^m, \phi\right)_{\mathscr{H}^1}+\left(\theta^m\left|\mathscr{N}\right|, \phi\right)_{H^0(\Sigma)}=\left(F^3, \phi\right)_{\mathscr{H}^0}+\left(F^5, \phi\right)_{H^0(\Sigma)}, \end{equation} with the initial data $\theta^m(0)=\mathscr{P}^m_0\theta_0\in \mathscr{H}^1_m(0)$, for each $\phi\in\mathscr{H}^1_m(t)$. Equation \eqref{equ:thetam} is equivalent to the system of ODEs for $d^m_j$: \begin{eqnarray}\label{equ:ode} \begin{aligned} \dot{d}^m_j\left(\phi^j, \phi^k\right)_{\mathscr{H}^0}+d^m_j\left(\left(\pa_tKJ\phi^j,\phi^k\right)_{\mathscr{H}^0}+\left(\phi^j, \phi^k\right)_{\mathscr{H}^1}+\left(\phi^j\left|\mathscr{N}\right|,\phi^k\right)_{H^0(\Sigma)}\right)\\ =\left(F^3, \phi^k\right)_{\mathscr{H}^0}+\left(F^5, \phi^k\right)_{H^0(\Sigma)} \end{aligned} \end{eqnarray} for $j, k=1, \ldots, m$. The $m\times m$ matrix with $j, k$ entry $\left(\phi^j, \phi^k\right)_{\mathscr{H}^0}$ is invertible, the coefficients of the linear system \eqref{equ:ode} are $C^1([0, T])$, and the forcing terms are $C^0([0, T])$, so the usual well-posedness theory of ODEs guarantees the existence of a unique solution $d^m_j\in C^1([0, T])$ to \eqref{equ:ode} that satisfies the initial data. This provides the desired solution, $\theta^m$, to \eqref{equ:thetam}. Since $F^3$, $F^5$ satisfy \eqref{cond:force}, equation \eqref{equ:ode} may be differentiated in time to see that $d_j^m\in C^{1,1}([0, T])$, which means $d_j^m$ is twice differentiable almost everywhere in $[0, T]$. Step 2. The energy estimates for $\theta^m$. Since $\theta^m(t)\in \mathscr{H}^1_m(t)$, we may take $\phi=\theta^m$ as a test function in \eqref{equ:thetam}; using the Poincar\'e-type inequalities in Lemma A.14 of \cite{GT1} and usual trace theory, we have \begin{align*} \pa_t\f12\|\theta^m\|_{\mathscr{H}^0}^2+\|\theta^m\|_{\mathscr{H}^1}^2\lesssim (\|F^3\|_{\mathscr{H}^0}+\|F^5\|_{H^{1/2}(\Sigma)})\|\theta^m\|_{\mathscr{H}^1}-\f12\int_{\Om}|\theta^m|^2\pa_tJ. \end{align*} Then, applying Cauchy's inequality, we may derive that \begin{align*} \pa_t\f12\|\theta^m\|_{\mathscr{H}^0}^2+\f14\|\theta^m\|_{\mathscr{H}^1}^2\lesssim\|F^3\|_{\mathscr{H}^0}^2+\|F^5\|_{H^{1/2}(\Sigma)}^2+C_0(\eta)\f12\|\theta^m\|_{\mathscr{H}^0}^2 \end{align*} with $C_0(\eta):=1+\sup_{0\le t\le T}\|\pa_tJK\|_{L^\infty}$. Using Lemma 2.9 in \cite{LW}, we have \begin{eqnarray}\label{equ:initial thetam} \begin{aligned} \|\theta^m(0)\|_{\mathscr{H}^0}&\le P(\|\eta_0\|_{H^{5/2}})\|\theta^m(0)\|_{H^0}\le P(\|\eta_0\|_{H^{5/2}})\|\theta^m(0)\|_{H^2}\\ &=P(\|\eta_0\|_{H^{5/2}})\|\mathscr{P}^m_0\theta_0\|_{H^2}\le P(\|\eta_0\|_{H^{5/2}})\|\theta_0\|_{H^2}. \end{aligned} \end{eqnarray} Now, we can utilize Gronwall's lemma to deduce energy estimates for $\theta^m$: \begin{eqnarray}\label{equ:est thetam} \begin{aligned} \sup_{0\le t\le T}\|\theta^m\|_{\mathscr{H}^0}^2&+\|\theta^m\|_{\mathscr{H}^1_T}^2\\ &\lesssim P(\|\eta_0\|_{H^{5/2}})\exp(C_0(\eta)T)(\|\theta_0\|_{H^2}^2+\|F^3\|_{\mathscr{H}^0_T}^2+\|F^5\|_{L^2H^{1/2}(\Sigma)}^2). \end{aligned} \end{eqnarray} Step 3. Estimates for $\pa_t\theta^m(0)$.
If $\theta\in H^2(\Om)\cap\mathscr{H}^1(t)$ and $\phi\in\mathscr{H}^1$, then integration by parts reveals that \begin{equation}\label{equ:theta1} \left(\theta, \phi\right)_{\mathscr{H}^1}=\int_{\Om}-\Delta_{\mathscr{A}}\theta\phi J+\int_{\Sigma}(\nabla_{\mathscr{A}}\theta\cdot\mathscr{N})\phi=\left(-\Delta_{\mathscr{A}}\theta,\phi\right)_{\mathscr{H}^0} +\left(\nabla_{\mathscr{A}}\theta\cdot\mathscr{N},\phi\right)_{H^0(\Sigma)}. \end{equation} Evaluating \eqref{equ:thetam} at $t=0$ and employing \eqref{equ:theta1}, we have that \begin{equation}\label{equ:thetam0} \left(\pa_t\theta^m(0), \phi\right)_{\mathscr{H}^0}=\left(\Delta_{\mathscr{A}_0}\theta^m(0)+F^3(0), \phi\right)_{\mathscr{H}^0}, \end{equation} for all $\phi\in\mathscr{H}^1_m(t)$. By virtue of \eqref{equ:dt phi}, we have that \begin{equation}\label{equ:test theta} \pa_t\theta^{m}-\pa_tK(t)J(t)\theta^m(t)=\dot{d}^m_j(t)\phi^j(t)\in\mathscr{H}^1_m(t), \end{equation} so that $\phi=\pa_t\theta^m(0)-\pa_tK(0)J(0)\theta^m(0)\in\mathscr{H}^1_m(0)$ is a valid choice of test function in \eqref{equ:thetam0}. Using this test function in \eqref{equ:thetam0}, we have \begin{eqnarray}\label{equ:thetam1} \begin{aligned} \|\pa_t\theta^m(0)\|_{\mathscr{H}^0}^2&\le \|\pa_tK(0)J(0)\theta^m(0)\|_{\mathscr{H}^0}\|\pa_t\theta^m(0)\|_{\mathscr{H}^0}\\ &+\|\pa_t\theta^m(0)-\pa_tK(0)J(0)\theta^m(0)\|_{\mathscr{H}^0}\|\Delta_{\mathscr{A}_0}\theta^m(0)+F^3(0)\|_{\mathscr{H}^0}. \end{aligned} \end{eqnarray} Then, after using \eqref{equ:initial thetam} and Cauchy's inequality on the right-hand side of \eqref{equ:thetam1}, we have the bound \begin{equation}\label{equ:est thetam0} \|\pa_t\theta^m(0)\|_{\mathscr{H}^0}^2\lesssim C_1(\eta)\left(\|\theta_0\|_{H^2}^2+\|F^3(0)\|_{\mathscr{H}^0}^2\right) \end{equation} with $C_1(\eta)=P(\|\eta_0\|_{H^{5/2}})\left(1+\|\pa_tK(0)J(0)\|_{L^\infty}^2+\|\mathscr{A}_0\|_{C^1}^2\right)$. Step 4. Energy estimates for $\pa_t\theta^m$. Now, suppose that $\phi(t)=c_j^m(t)\phi^j$ for $c_j^m\in C^{0,1}([0, T])$, $j=1, \ldots, m$; as in \eqref{equ:test theta}, it follows that $\pa_t\phi-\pa_tK(t)J(t)\phi\in \mathscr{H}^1_m(t)$ as well. Using this $\phi$ in \eqref{equ:thetam}, differentiating the resulting equation in time, and then subtracting from it the equation \eqref{equ:thetam} with the test function $\pa_t\phi-\pa_tK(t)J(t)\phi$, we find that \begin{eqnarray}\label{equ:dt thetam1} \begin{aligned} &\left<\pa_t^2\theta^m, \phi\right>_{(\mathscr{H}^1)^\ast}+\left(\pa_t\theta^m,\phi\right)_{\mathscr{H}^1}+\left(\pa_t\theta^m\left|\mathscr{N}\right|, \phi\right)_{H^0(\Sigma)}\\ &=\left<\pa_t(F^3+F^5),\phi\right>_{(\mathscr{H}^1)^\ast}+\left(F^3,(\pa_tKJ+\pa_tJK)\phi\right)_{\mathscr{H}^0}+\left(F^5,\pa_tKJ\phi\right)_{H^0(\Sigma)}\\ &\quad-\left(\pa_t\theta^m, (\pa_tKJ+\pa_tJK)\phi\right)_{\mathscr{H}^0}-\left(\theta^m, \pa_tKJ\phi\right)_{\mathscr{H}^1}-\left(\theta^m, \pa_tKJ\phi\right)_{H^0(\Sigma)}\\ &\quad-\int_{\Om}\left(\pa_tJK\nabla_{\mathscr{A}}\theta^m\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\pa_t\mathscr{A}}\theta^m\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\mathscr{A}}\theta^m\cdot\nabla_{\pa_t\mathscr{A}}\phi\right)J. \end{aligned} \end{eqnarray} According to \eqref{equ:test theta} and the fact that $d_j^m(t)$ is twice differentiable almost everywhere, as we pointed out in Step 1, we may use $\phi=\pa_t\theta^m-\pa_tKJ\theta^m$ as a test function in \eqref{equ:dt thetam1}.
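The first component of this test function produces the time derivative on the left of the inequality below; a brief sketch of the relevant identity (valid for the approximate solutions, which are smooth enough in time):
\[
\left<\pa_t^2\theta^m,\pa_t\theta^m\right>_{(\mathscr{H}^1)^\ast}=\int_{\Om}\pa_t^2\theta^m\,\pa_t\theta^m\,J=\f12\pa_t\|\pa_t\theta^m\|_{\mathscr{H}^0}^2-\f12\int_{\Om}|\pa_t\theta^m|^2\pa_tJ.
\]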
Utilizing Cauchy's inequality, trace theory and Remark $2.3$ in \cite{GT1}, we have that \begin{eqnarray}\label{equ:dt thetam2} \begin{aligned} &\pa_t\left(\f12\|\pa_t\theta^m\|_{\mathscr{H}^0}^2-\left(\pa_t\theta^m, \pa_tKJ\theta^m\right)_{\mathscr{H}^0}\right)+\f14\|\pa_t\theta^m\|_{\mathscr{H}^1}^2\\ &\le C_0(\eta)\left(\f12\|\pa_t\theta^m\|_{\mathscr{H}^0}^2-\left(\pa_t\theta^m, \pa_tKJ\theta^m\right)_{\mathscr{H}^0}\right)+C_2(\eta)\|\theta^m\|_{\mathscr{H}^1}^2\\ &\quad+C\left(\|F^3\|_{\mathscr{H}^0}^2+\|F^5\|_{H^{1/2}(\Sigma)}^2\right)+C\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1)^\ast}, \end{aligned} \end{eqnarray} where $C_2(\eta)$ is defined as \begin{eqnarray*} C_2(\eta):&=&\sup_{0\le t\le T}\big[1+\|\pa_t(\pa_tKJ)\|_{L^\infty}^2+\|\pa_tKJ\|_{C^1}^2+\|\pa_t\mathscr{A}\|_{L^\infty}^2\\ &&\quad+(1+\|\mathscr{A}\|_{L^\infty}^2)(1+\|\pa_tJ K\|_{L^\infty}^2)\big](1+\|\pa_tKJ\|_{C^1}^2). \end{eqnarray*} Then, according to Cauchy's inequality and Gronwall's lemma, \eqref{equ:dt thetam2} implies that \begin{eqnarray}\label{equ:dt thetam3} \begin{aligned} &\sup_{0\le t\le T}\|\pa_t\theta^m\|_{\mathscr{H}^0}^2+\|\pa_t\theta^m\|_{\mathscr{H}^1_T}^2\\ &\lesssim \exp(C_0(\eta)T)\Big(\|\pa_t\theta^m(0)\|_{\mathscr{H}^0}^2 +C_1(\eta)\|\theta^m(0)\|_{\mathscr{H}^0}^2+\|F^3\|_{\mathscr{H}^0_T}^2\\ &\quad+\|F^5\|_{L^2H^{1/2}}^2+\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1_T)^\ast}^2\Big)\\ &\quad+C_2(\eta)\left(\sup_{0\le t\le T}\|\theta^m\|_{\mathscr{H}^0}^2+\int_0^T\exp(C_0(\eta)(T-s))\|\theta^m(s)\|_{\mathscr{H}^1}^2\,\mathrm ds\right). \end{aligned} \end{eqnarray} Now, the energy estimates for $\pa_t\theta^m$ are deduced by combining \eqref{equ:dt thetam3} with the estimates \eqref{equ:initial thetam}, \eqref{equ:est thetam} and \eqref{equ:est thetam0}: \begin{eqnarray}\label{equ:dt thetam4} \begin{aligned} &\sup_{0\le t\le T}\|\pa_t\theta^m\|_{\mathscr{H}^0}^2+\|\pa_t\theta^m\|_{\mathscr{H}^1_T}^2\\ &\lesssim \left(C_1(\eta)+C_2(\eta)\right)\exp(C_0(\eta)T)\left(\|\theta^m(0)\|_{\mathscr{H}^0}^2+\|F^3(0)\|_{\mathscr{H}^0}^2\right)\\ &\quad+\exp(C_0(\eta)T)\left[C_2(\eta)\left(\|F^3\|_{\mathscr{H}^0_T}^2+\|F^5\|_{L^2H^{1/2}}^2\right)+\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1_T)^\ast}^2\right]. \end{aligned} \end{eqnarray} Step 5. Improved estimates for $\theta^m$. Using $\phi=\pa_t\theta^m-\pa_tKJ\theta^m\in \mathscr{H}^1_m(t)$ as a test function in \eqref{equ:thetam}, we can improve the energy estimates for $\theta^m$: \begin{eqnarray}\label{equ:improve thetam} \begin{aligned} &\pa_t\f12\left(\|\theta^m\|_{\mathscr{H}^1}^2+\|\theta^m\|_{H^0(\Sigma)}^2\right)+\|\pa_t\theta^m\|_{\mathscr{H}^0}^2\\ &=\left(\pa_t\theta^m,\pa_tKJ\theta^m\right)_{\mathscr{H}^0}+\left(\theta^m,\pa_tKJ\theta^m\right)_{\mathscr{H}^1}+\left(F^3,\pa_t\theta^m-\pa_tKJ\theta^m\right)_{\mathscr{H}^0}\\ &\quad+\left(F^5,\pa_t\theta^m-\pa_tKJ\theta^m\right)_{H^0(\Sigma)}+\int_{\Om}\left(\nabla_{\mathscr{A}}\theta^m\cdot\nabla_{\pa_t\mathscr{A}}\theta^m+\pa_tJK\f{|\nabla_{\mathscr{A}}\theta^m|^2}{2}\right)J.
\end{aligned}
\end{eqnarray}
Since we have already controlled $\|\theta^m\|_{\mathscr{H}^1_T}^2$ and $\|\pa_t\theta^m\|_{\mathscr{H}^1_T}^2$, integrating \eqref{equ:improve thetam} in time implies that
\begin{eqnarray}\label{equ:improve thetam1}
\begin{aligned}
&\sup_{0\le t\le T}\|\theta^m\|_{\mathscr{H}^1}^2+\|\pa_t\theta^m\|_{\mathscr{H}^0_T}^2\\
&\lesssim P(\|\eta_0\|_{H^{5/2}}) \left(C_1(\eta)+C_2(\eta)\right)\exp(C_0(\eta)T)\left(\|\theta_0\|_{H^0}^2+\|F^3(0)\|_{\mathscr{H}^0}^2\right)\\
&\quad+P(\|\eta_0\|_{H^{5/2}})\exp(C_0(\eta)T)\Big[C_2(\eta)\left(\|F^3\|_{\mathscr{H}^0_T}^2+\|F^5\|_{L^2H^{1/2}}^2\right)\\
&\quad+\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1_T)^\ast}^2\Big].
\end{aligned}
\end{eqnarray}
Step 6. Uniform bounds for \eqref{equ:dt thetam4} and \eqref{equ:improve thetam1}. Now we seek to estimate the constants $C_i(\eta)$, $i=0, 1, 2$, in terms of the quantity $\mathscr{K}(\eta)$. A direct computation, combined with Lemma A.10 in \cite{GT1}, reveals that
\begin{equation}
C_0(\eta)+C_1(\eta)+C_2(\eta)\le C(1+\mathscr{K}(\eta)),
\end{equation}
for a constant $C$ independent of $\eta$.
Step 7. Passing to the limit. According to the energy estimates \eqref{equ:dt thetam4} and \eqref{equ:improve thetam1} and Lemma \ref{lem:theta H0 H1}, we know that the sequence $\{\theta^m\}$ is uniformly bounded in $L^\infty H^1$ and $\{\pa_t\theta^m\}$ is uniformly bounded in $L^\infty H^0\cap L^2H^1$. Then, up to extracting a subsequence, we know that
\[
\theta^m\stackrel{\ast}\rightharpoonup \theta \thinspace \text{weakly-}\ast \thinspace\text{in}\thinspace L^\infty H^1,\thinspace\pa_t\theta^m\stackrel{\ast}\rightharpoonup\pa_t\theta\thinspace\text{weakly-}\ast\thinspace\text{in}\thinspace L^\infty H^0,\thinspace \pa_t\theta^m\rightharpoonup\pa_t\theta\thinspace\text{weakly in}\thinspace L^2H^1,
\]
as $m\to\infty$. By lower semicontinuity, the energy estimates reveal that
\[
\|\theta\|_{L^\infty H^1}^2+\|\pa_t\theta\|_{L^\infty H^0}^2+\|\pa_t\theta\|_{L^2H^1}^2
\]
is bounded from above by the right-hand side of \eqref{inequ:est strong solution}. According to these convergence results, we can integrate \eqref{equ:dt thetam1} in time from $0$ to $T$ and let $m\to\infty$ to deduce that $\pa_t^2\theta^m\rightharpoonup\pa_t^2\theta$ weakly in $(\mathscr{H}^1_T)^\ast$, with the action of $\pa_t^2\theta$ on an element $\phi\in\mathscr{H}^1_T$ defined by replacing $\theta^m$ with $\theta$ everywhere in \eqref{equ:dt thetam1}. From passing to the limit in \eqref{equ:dt thetam1}, it is straightforward to show that $\|\pa_t^2\theta\|_{(\mathscr{H}^1_T)^\ast}^2$ is bounded from above by the right-hand side of \eqref{inequ:est strong solution}. This bound shows that $\pa_t\theta\in C^0L^2$.
Step 8. The equation satisfied by $\theta$. In the limit, \eqref{equ:thetam} implies that for almost every $t$,
\begin{equation}\label{equ:limit theta}
\left(\pa_t\theta,\phi\right)_{\mathscr{H}^0}+\left(\theta,\phi\right)_{\mathscr{H}^1}+\left(\theta\left|\mathscr{N}\right|,\phi\right)_{H^0(\Sigma)}=\left(F^3,\phi\right)_{\mathscr{H}^0}+\left(F^5,\phi\right)_{H^0(\Sigma)}\quad\text{for every}\thinspace\phi\in\mathscr{H}^1.
\end{equation}
For almost every $t\in [0, T]$, $\theta(t)$ is the unique weak solution to the elliptic problem \eqref{equ:SBC} in the sense of \eqref{equ:weak theta}, with $F^3$ replaced by $F^3(t)-\pa_t\theta(t)$ and $F^5$ replaced by $F^5(t)$. Since $F^3(t)-\pa_t\theta(t)\in H^0(\Om)$ and $F^5(t)\in H^{1/2}(\Sigma)$, Lemma \ref{lem:S lower regularity} shows that this elliptic problem admits a unique strong solution, which must coincide with the weak solution.
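For orientation, and in view of \eqref{equ:theta1}, \eqref{equ:limit theta} and the boundary conditions in \eqref{equ:linear BC}, the $\theta$-part of this elliptic problem takes, at each fixed $t$, the schematic form
\[
-\Delta_{\mathscr{A}}\theta=F^3-\pa_t\theta \thinspace\text{in}\thinspace\Om,\qquad \nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|=F^5 \thinspace\text{on}\thinspace\Sigma,\qquad \theta=0 \thinspace\text{on}\thinspace\Sigma_b;
\]
we display only this part as a guide, since the full system \eqref{equ:SBC} also contains the Stokes component.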
Then applying Proposition \ref{prop:high regulatrity}, we have the bound
\begin{equation}\label{est:bound}
\|\theta(t)\|_{H^r}^2\lesssim C(\eta_0) \left(\|\pa_t\theta(t)\|_{\mathscr{H}^{r-2}}^2+\|F^3(t)\|_{\mathscr{H}^{r-2}}^2+\|F^5(t)\|_{H^{r-3/2}(\Sigma)}^2\right)
\end{equation}
for $r=2,3$. When $r=2$, we take the supremum of \eqref{est:bound} over $t\in [0, T]$, and when $r=3$, we integrate over $[0, T]$; the resulting inequalities imply that $\theta\in L^\infty H^2\cap L^2H^3$ with estimates as in \eqref{inequ:est strong solution}. For the linear Navier--Stokes equations, the process is exactly the same as in \cite{GT1}. We then conclude that $(u, p, \theta)$ is a strong solution of \eqref{equ:linear BC} with the estimates as in \eqref{inequ:est strong solution}.
Step 9. The weak solutions satisfied by $\pa_t\theta$ and $D_tu$. We may integrate \eqref{equ:dt thetam1} in time from $0$ to $T$ and pass to the limit $m\to\infty$. For any $\phi\in\mathscr{H}^1$, we have $\pa_tKJ\phi\in\mathscr{H}^1$, so that we may substitute $\pa_tKJ\phi$ for $\phi$ in \eqref{equ:limit theta}; this yields
\begin{eqnarray}
\begin{aligned}
&\left<\pa_t^2\theta,\phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\pa_t\theta,\phi\right)_{\mathscr{H}^1_T}+\left(\pa_t\theta\left|\mathscr{N}\right|,\phi\right)_{L^2H^0(\Sigma)}\\
&=\left<\pa_t(F^3+F^5),\phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\pa_tJKF^3,\phi\right)_{\mathscr{H}^0_T}-\left(\pa_tJK\pa_t\theta,\phi\right)_{\mathscr{H}^0_T}\\
&\quad-\int_0^T\int_{\Om}\left(\pa_tJK\nabla_{\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\pa_t\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\mathscr{A}}\theta\cdot\nabla_{\pa_t\mathscr{A}}\phi\right)J
\end{aligned}
\end{eqnarray}
for all $\phi\in\mathscr{H}^1_T$. This is exactly \eqref{equ:weak pat theta}. To justify that \eqref{equ:weak pat theta} implies \eqref{equ:pat theta}, we integrate by parts to obtain the equality
\begin{eqnarray}
\begin{aligned}
&-\int_0^T\int_{\Om}\left(\pa_tJK\nabla_{\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\pa_t\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\mathscr{A}}\theta\cdot\nabla_{\pa_t\mathscr{A}}\phi\right)J\\
&=-\int_0^T\int_\Om\left(-R\nabla_{\mathscr{A}}\theta+\nabla_{\pa_t\mathscr{A}}\theta\right)\cdot\nabla_{\mathscr{A}}\phi J\\
&=\left(\mathop{\rm div}\nolimits_{\mathscr{A}}(-R\nabla_{\mathscr{A}}\theta+\nabla_{\pa_t\mathscr{A}}\theta),\phi\right)_{\mathscr{H}^0_T}-\left<\nabla_{\mathscr{A}}\theta\cdot\pa_t\mathscr{N}+\nabla_{\pa_t\mathscr{A}}\theta\cdot\mathscr{N},\phi\right>_{L^2H^{-1/2}}.
\end{aligned}
\end{eqnarray}
We may then deduce from \eqref{equ:weak pat theta} that $\pa_t\theta$ is a weak solution of \eqref{equ:pat theta} in the sense of \eqref{equ:lpws} with $\pa_t\theta(0)\in\mathscr{H}^0(0)$. Then we may appeal to the computation in \cite{GT1} to deduce that $p(0)$ satisfies the equation \eqref{equ:p0} and $D_tu$ is a weak solution of \eqref{equ:Dt u} in the sense of \eqref{equ:lpws} with $D_tu(0)\in\mathscr{Y}(0)$.
\end{proof}
\subsection{Higher regularity}
In order to state our higher regularity results for \eqref{equ:linear BC}, we need to construct the initial data and compatibility conditions.
First, we define the vector or scalar fields $\mathfrak{E}^{01}$, $\mathfrak{E}^{02}$, $\mathfrak{E}^1$, $\mathfrak{E}^3$ in $\Om$ and $\mathfrak{E}^4$, $\mathfrak{E}^5$ on $\Sigma$ by \begin{eqnarray} \begin{aligned} \mathfrak{E}^{01}(G^1, v, q)&=\Delta_{\mathscr{A}}v-\nabla_{\mathscr{A}}q+G^1-Rv,\\ \mathfrak{E}^{02}(G^3,\Theta)&=\Delta_{\mathscr{A}}\Theta+G^3,\\ \mathfrak{E}^1(v,q)&=-(R+\pa_tJK)\Delta_{\mathscr{A}}v-\pa_tRv+(\pa_tJK+R+R^\top)\nabla_{\mathscr{A}}q\\ &\quad+\mathop{\rm div}\nolimits_{\mathscr{A}}(\mathbb{D}_{\mathscr{A}}(Rv)-R\mathbb{D}_{\mathscr{A}}v+\mathbb{D}_{\pa_t\mathscr{A}}v),\\ \mathfrak{E}^3(\Theta)&=-\pa_tJK\Delta_{\mathscr{A}}\Theta+\mathop{\rm div}\nolimits_{\mathscr{A}}(-R\nabla_{\mathscr{A}}\Theta+\nabla_{\pa_t\mathscr{A}}\Theta),\\ \mathfrak{E}^4(v,q)&=\mathbb{D}_{\mathscr{A}}(Rv)\mathscr{N}-(qI-\mathbb{D}_{\mathscr{A}}v)\pa_t\mathscr{N}+\mathbb{D}_{\pa_t\mathscr{A}}v\mathscr{N},\\ \mathfrak{E}^5(\Theta)&=-\nabla_{\mathscr{A}}\Theta\cdot\pa_t\mathscr{N}-\nabla_{\pa_t\mathscr{A}}\Theta\cdot\mathscr{N}-\Theta\pa_t\left|\mathscr{N}\right|, \end{aligned} \end{eqnarray} and we define functions $\mathfrak{f}^1$ in $\Om$, $\mathfrak{f}^2$ on $\Sigma$ and $\mathfrak{f}^3$ on $\Sigma_b$ by \begin{eqnarray} \begin{aligned} \mathfrak{f}^1(G^1, v)&=\mathop{\rm div}\nolimits_{\mathscr{A}}(G^1-Rv),\\ \mathfrak{f}^2(G^4, v)&=(G^4+\mathbb{D}_{\mathscr{A}}v{\mathscr{N}})\cdot{\mathscr{N}}|{\mathscr{N}}|^{-2},\\ \mathfrak{f}^3(G^1, v)&=(G^1+\Delta_{\mathscr{A}}v)\cdot\nu. \end{aligned} \end{eqnarray} We write $F^{1,0}=F^1+\theta \nabla_{\mathscr{A}}y_3$, $F^{3,0}=F^3$, $F^{4,0}=F^4$ and $F^{5,0}=F^5$. When $F^1$, $F^3$, $F^4$, $F^5$, $u$, $p$, and $\theta$ are regularly enough, we can recursively define \begin{eqnarray} \label{equ:force 1} \begin{aligned} F^{1,j}&:=D_tF^{1,j-1}-\pa_t^{j-1}(\theta \nabla_{\mathscr{A}}y_3)+D_t^{j-1}(\theta \nabla_{\mathscr{A}}y_3)+\mathfrak{E}^1(D_t^{j-1}u, \pa_t^{j-1}p)\\ &=D_t^jF^1-\left(\pa_t^{j-1}(\theta \nabla_{\mathscr{A}}y_3)-D_t^{j-1}(\theta \nabla_{\mathscr{A}}y_3)\right)+\sum_{\ell=0}^{j-1}D_t^\ell\mathfrak{E}^1(D_t^{j-\ell-1}u, \pa_t^{j-\ell-1}p),\\ F^{3,j}&:=\pa_tF^{3,j-1}+\mathfrak{E}^3(\pa_t^{j-1}\theta)=\pa_t^jF^3+\sum_{\ell=0}^{j-1}\pa_t^\ell\mathfrak{E}^3(\pa_t^{j-\ell-1}\theta), \end{aligned} \end{eqnarray} in $\Om$ and \begin{eqnarray}\label{equ:force 2} \begin{aligned} F^{4,j}&:=\pa_tF^{4,j-1}+\mathfrak{E}^4(D_t^{j-1}u, \pa_t^{j-1}p)=\pa_t^jF^4+\sum_{\ell=0}^{j-1}\pa_t^\ell\mathfrak{E}^4(D_t^{j-\ell-1}u, \pa_t^{j-\ell-1}p),\\ F^{5,j}&:=\pa_tF^{5,j-1}+\mathfrak{E}^5(\pa_t^{j-1}\theta)=\pa_t^jF^5+\sum_{\ell=0}^{j-1}\pa_t^\ell\mathfrak{E}^5(\pa_t^{j-\ell-1}\theta) \end{aligned} \end{eqnarray} on $\Sigma$, for $j=1, \ldots, N$. Now, we define the sums of norms with $F^1$, $F^3$, $F^4$ and $F^5$. 
\begin{eqnarray} \label{def:force F F0} \begin{aligned} \mathfrak{F}(F^1,F^3,F^4,F^5)&:=\sum_{j=0}^{N-1}\left(\|\pa_t^jF^1\|_{L^2H^{2N-2j-1}}+\|\pa_t^jF^3\|_{L^2H^{2N-2j-1}}\right)\\ &\quad+\|\pa_t^{N}F^1\|_{L^2({}_0H^1(\Om))^\ast}+\|\pa_t^{N}F^3\|_{L^2({}_0H^1(\Om))^\ast}\\ &\quad+\sum_{j=0}^N\left(\|\pa_t^jF^4\|_{L^2H^{2N-2j-1/2}}+\|\pa_t^jF^5\|_{L^2H^{2N-2j-1/2}}\right)\\ &\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jF^1\|_{L^\infty H^{2N-2j-2}}+\|\pa_t^jF^3\|_{L^\infty H^{2N-2j-2}}\right)\\ &\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jF^4\|_{L^\infty H^{2N-2j-3/2}}+\|\pa_t^jF^5\|_{L^\infty H^{2N-2j-3/2}}\right),\\ \mathfrak{F}_0(F^1,F^3,F^4,F^5)&:=\sum_{j=0}^{N-1}\left(\|\pa_t^jF^1(0)\|_{H^{2N-2j-2}}+\|\pa_t^jF^3(0)\|_{H^{2N-2j-2}}\right)\\ &\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jF^4(0)\|_{H^{2N-2j-3/2}}+\|\pa_t^jF^5(0)\|_{H^{2N-2j-3/2}}\right). \end{aligned} \end{eqnarray} For simplicity, we will write $\mathfrak{F}$ for $\mathfrak{F}(F^1,F^3,F^4,F^5)$ and $\mathfrak{F}_0$ for $\mathfrak{F}_0(F^1,F^3,F^4,F^5)$ throughout the rest of this paper. From the Lemma A.4 and Lemma 2.4 of \cite{GT1}, we know that if $\mathfrak{F}<\infty$, then \begin{align*} &\pa_t^jF^1\in C^0([0,T];H^{2N-2j-2}(\Om)),\quad \pa_t^jF^3\in C^0([0,T];H^{2N-2j-2}(\Om)),\\ &\pa_t^jF^4\in C^0([0,T];H^{2N-2j-3/2}(\Sigma)),\quad\text{and}\quad \pa_t^jF^5\in C^0([0,T];H^{2N-2j-3/2}(\Sigma)) \end{align*} for $j=0, \ldots, N-1$. For $\eta$, we define \begin{eqnarray} \label{def:norm eta} \begin{aligned} \mathfrak{D}(\eta)&:=\sum_{j=2}^{N+1}\|\pa_t^j\eta\|_{L^2H^{2N-2j+5/2}}^2,\\ \mathfrak{E}(\eta)&:=\|\eta\|_{L^\infty H^{2N+1/2}(\Sigma)}^2+\sum_{j=1}^{N}\|\pa_t^j\eta\|_{L^\infty H^{2N-2j+3/2}(\Sigma)}^2,\\ \mathfrak{K}(\eta)&:=\mathfrak{D}(\eta)+\mathfrak{E}(\eta),\\ \mathfrak{E}_0(\eta)&:=\|\eta_0\|_{H^{2N+1/2}(\Sigma)}^2+\sum_{j=1}^{N}\|\pa_t^j\eta(0)\|_{H^{2N-2j+3/2}(\Sigma)}^2. \end{aligned} \end{eqnarray} These following lemmas are similar to Lemma 4.5, 4.6, 4.7 in \cite{GT1} as well as the idea of proof, so we omit these details here. \begin{lemma}\label{lem:pa tv Dt v} If $k=0,\ldots, 2N-1$ and $v$, $\Theta$ are sufficiently regular, then \begin{equation}\label{est:pat v Dt v l2} \|\pa_tv-D_tv\|_{L^2H^k}^2\lesssim P(\mathfrak{K}(\eta))\|v\|_{L^2H^k}^2, \end{equation} \begin{equation} \|\pa_t(\Theta \nabla_{\mathscr{A}}y_3)-D_t(\Theta \nabla_{\mathscr{A}}y_3)\|_{L^2H^k}^2\lesssim P(\mathfrak{K}(\eta))\|\Theta\|_{L^2H^k}^2, \end{equation} and if $k=0,\ldots, 2N-2$, then \begin{equation}\label{est:pat v Dt v linfty} \|\pa_tv-D_tv\|_{L^\infty H^k}^2\lesssim P(\mathfrak{K}(\eta))\|v\|_{L^\infty H^k}^2, \end{equation} \begin{equation} \|\pa_t(\Theta \nabla_{\mathscr{A}}y_3)-D_t(\Theta \nabla_{\mathscr{A}}y_3)\|_{L^\infty H^k}^2\lesssim P(\mathfrak{K}(\eta))\|\Theta\|_{L^\infty H^k}^2. 
\end{equation} If $m=1, \ldots, N-1$, $j=1, \ldots, m$, and $v$, $\Theta$ are sufficiently regular, then \begin{equation} \|\pa_t^jv-D_t^jv\|_{L^2H^{2m-2j+3}}^2\lesssim P(\mathfrak{K}(\eta))\sum_{\ell=0}^{j-1}\left(\|\pa_t^\ell v\|_{L^2H^{2m-2j+3}}^2+\|\pa_t^\ell v\|_{L^\infty H^{2m-2j+2}}^2\right), \end{equation} \begin{equation}\label{est:pa t Dt v j} \|\pa_t^jv-D_t^jv\|_{L^\infty H^{2m-2j+2}}^2\lesssim P(\mathfrak{K}(\eta))\sum_{\ell=0}^{j-1}\|\pa_t^\ell v\|_{L^\infty H^{2m-2j+2}}^2, \end{equation} \begin{equation} \|\pa_t^j(\Theta \nabla_{\mathscr{A}}y_3)-D_t^j(\Theta \nabla_{\mathscr{A}}y_3)\|_{L^2H^{2m-2j+2}}^2\lesssim P(\mathfrak{K}(\eta))\sum_{\ell=0}^{j-1}\left(\|\pa_t^\ell \Theta\|_{L^2H^{2m-2j+3}}^2+\|\pa_t^\ell \Theta\|_{L^\infty H^{2m-2j+2}}^2\right), \end{equation} \begin{equation} \|\pa_t^j(\Theta \nabla_{\mathscr{A}}y_3)-D_t^j(\Theta \nabla_{\mathscr{A}}y_3)\|_{L^\infty H^{2m-2j+3}}^2\lesssim P(\mathfrak{K}(\eta))\sum_{\ell=0}^{j-1}\|\pa_t^\ell \Theta\|_{L^\infty H^{2m-2j+2}}^2, \end{equation} and \begin{eqnarray} \begin{aligned} &\|\pa_tD_t^mv-\pa_t^{m+1}v\|_{L^2H^1}^2+\|\pa_t^2D_t^mv-\pa_t^{m+2}v\|_{(\mathscr{X}_T)^\ast}^2\\ &\lesssim P(\mathfrak{K}(\eta))\left(\|\pa_t^{m+1}v\|_{(\mathscr{X}_T)^\ast}^2+\sum_{\ell=0}^m\left(\|\pa_t^\ell v\|_{L^2H^1}^2+\|\pa_t^\ell v\|_{L^\infty H^2}^2\right)\right). \end{aligned} \end{eqnarray} Also, if $j=0, \ldots, N$ and $v$ is sufficiently regular, then \begin{equation}\label{equ:initial v j} \|\pa_t^jv(0)-D_t^jv(0)\|_{H^{2N-2j}}^2\lesssim P(\mathfrak{E}_0(\eta))\sum_{\ell=0}^{j-1}\|\pa_t^\ell v(0)\|_{H^{2N-2j}}^2, \end{equation} and if $j=0, \ldots, N-1$ and $\Theta$ is sufficiently regular, then \begin{equation}\label{equ:initial theta j} \|\pa_t^j(\Theta(0)\nabla_{\mathscr{A}_0}y_{3,0})-D_t^j(\Theta(0)\nabla_{\mathscr{A}_0}y_{3,0})\|_{H^{2N-2j-2}}^2\lesssim P(\mathfrak{E}_0(\eta))\sum_{\ell=0}^{j-1}\|\pa_t^\ell \Theta(0)\|_{H^{2N-2j-2}}^2. \end{equation} Here all of the $P(\cdot)$ are polynomial, allowed to be changed from line to line. 
\end{lemma} \begin{lemma}\label{lem:force linear} For $m=1, \ldots, N-1$ and $j=1, \ldots, m$, the following estimates hold whenever the right--hand sides are finite: \begin{eqnarray}\label{equ:force l2} \begin{aligned} &\|F^{1,j}\|_{L^2H^{2m-2j+1}}^2+\|F^{3,j}\|_{L^2H^{2m-2j+1}}^2+\|F^{4,j}\|_{L^2H^{2m-2j+3/2}}^2+\|F^{5,j}\|_{L^2H^{2m-2j+3/2}}^2\\ &\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^{j-1}\left(\|\pa_t^\ell u\|_{L^2H^{2m-2\ell+3}}^2+\|\pa_t^\ell \theta\|_{L^2H^{2m-2\ell+3}}^2\right)\\ &\quad+\sum_{\ell=0}^{j-1}\Big(\|\pa_t^\ell u\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell \theta\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell p\|_{L^2H^{2m-2\ell+2}}^2\\ &\quad+\|\pa_t^\ell p\|_{L^\infty H^{2m-2\ell+1}}^2\Big)\bigg), \end{aligned} \end{eqnarray} \begin{eqnarray}\label{equ:force l infity} \begin{aligned} &\|F^{1,j}\|_{L^\infty H^{2m-2j}}^2+\|F^{3,j}\|_{L^\infty H^{2m-2j}}^2+\|F^{4,j}\|_{L^\infty H^{2m-2j+1/2}}^2+\|F^{5,j}\|_{L^\infty H^{2m-2j+1/2}}^2\\ &\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^{j-1}\Big(\|\pa_t^\ell u\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell \theta\|_{L^\infty H^{2m-2\ell+2}}^2\\ &\quad+\|\pa_t^\ell p\|_{L^\infty H^{2m-2\ell+1}}^2\Big)\bigg), \end{aligned} \end{eqnarray} \begin{eqnarray}\label{equ:force dual} \begin{aligned} &\|\pa_t(F^{1,m}-F^{4,m})\|_{L^2({}_0H^1(\Om))^\ast}^2+\|\pa_t(F^{3,m}+F^{5,m})\|_{L^2({}_0H^1(\Om))^\ast}^2\\ &\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\|\pa_t^m u\|_{L^2 H^{2}}^2+\|\pa_t^m \theta\|_{L^2 H^{2}}^2+\|\pa_t^m p\|_{L^2 H^{1}}^2\\ &\quad+\sum_{\ell=0}^{m-1}\Big(\|\pa_t^\ell u\|_{L^\infty H^{2}}^2+\|\pa_t^\ell u\|_{L^2 H^{2}}^3+\|\pa_t^\ell \theta\|_{L^\infty H^{2}}^2+\|\pa_t^\ell \theta\|_{L^2 H^{2}}^3\\ &\quad+\|\pa_t^\ell p\|_{L^\infty H^{1}}^2+\|\pa_t^\ell p\|_{L^2 H^{2}}^2\Big)\bigg). \end{aligned} \end{eqnarray} Similarly, for $j=1, \ldots, N-1$, \begin{eqnarray}\label{equ:initial force j} \begin{aligned} &\|F^{1,j}(0)\|_{H^{2N-2j-2}}^2+\|F^{3,j}(0)\|_{H^{2N-2j-2}}^2+\|F^{4,j}(0)\|_{H^{2N-2j-3/2}}^2+\|F^{5,j}(0)\|_{H^{2N-2j-3/2}}^2\\ &\lesssim P(\mathfrak{E}_0(\eta))\bigg(\mathfrak{F}_0+\|\pa_t^j \theta(0)\|_{H^{2N-2j}}+\sum_{\ell=0}^{j-1}\big(\|\pa_t^\ell u(0)\|_{H^{2N-2\ell}}\\ &\quad+\|\pa_t^\ell \theta(0)\|_{H^{2N-2\ell}}+\|\pa_t^\ell p(0)\|_{H^{2N-2\ell-1}}\big)\bigg). \end{aligned} \end{eqnarray} Here all of the $P(\cdot)$ are polynomial allowed to be changed from line to line. \end{lemma} \begin{lemma}\label{lem:v,q,G} Suppose that $v$, $q$, $G^1$, $G^3$ are evaluated at $t=0$ and are sufficiently regular for the right--hand sides of the following estimates to make sense. If $j=0, \ldots, N-1$, then \begin{eqnarray}\label{equ:initial G1 v q} \begin{aligned} &\|\mathfrak{E}^{01}(G^1,v,q)\|_{H^{2N-2j-2}}^2\\ &\lesssim P(\mathfrak{E}_0(\eta))\left(\|v\|_{H^{2N-2j}}^2+\|q\|_{H^{2N-2j-1}}^2+\|G^1\|_{H^{2N-2j-2}}^2\right), \end{aligned} \end{eqnarray} \begin{equation}\label{equ:e02} \|\mathfrak{E}^{02}(G^3,\Theta)\|_{H^{2N-2j-2}}^2\lesssim P(\mathfrak{E}_0(\eta))\left(\|\Theta\|_{H^{2N-2j}}^2+\|G^3\|_{H^{2N-2j-2}}^2\right). \end{equation} If $j=0,\ldots,N-2$, then \begin{eqnarray}\label{equ:initial g1 g4} \begin{aligned} &\|\mathfrak{f}^1(G^1,v)\|_{H^{2N-2i-3}}^2+\|\mathfrak{f}^2(G^4,v)\|_{H^{2N-2i-3/2}}^2+\|\mathfrak{f}^3(G^1,v)\|_{H^{2N-2i-5/2}}^2\\ &\lesssim P(\mathfrak{E}_0(\eta))\left(\|G^1\|_{H^{2N-2j-2}}^2+\|G^4\|_{H^{2N-2j-3/2}}^2+\|v\|_{H^{2N-2j}}^2\right). 
\end{aligned}
\end{eqnarray}
For $j=N-1$, if $\mathop{\rm div}\nolimits_{\mathscr{A}(0)}v(0)=0$ in $\Om$, then
\begin{equation}
\|\mathfrak{f}^2(G^4,v)\|_{H^{1/2}}^2+\|\mathfrak{f}^3(G^1,v)\|_{H^{-1/2}}^2\lesssim P(\mathfrak{E}_0(\eta))\left(\|G^1\|_{H^{2}}^2+\|G^4\|_{H^{1/2}}^2+\|v\|_{H^{2}}^2\right).
\end{equation}
Here all of the $P(\cdot)$ are polynomials, allowed to change from line to line.
\end{lemma}
Now we can construct the initial data and compatibility conditions. We assume that $u_0\in H^{2N}(\Om)$, $\theta_0\in H^{2N}$, $\eta_0\in H^{2N+1/2}(\Sigma)$. Then we will iteratively construct the initial data $D_t^ju(0)$, $\pa_t^j\theta(0)$ for $j=1,\ldots, N$ and $\pa_t^jp(0)$ for $j=1,\ldots, N-1$. First, we denote $F^{1,0}(0)=F^1(0)\in H^{2N-2}$, $F^{3,0}(0)=F^3(0)\in H^{2N-2}$, $F^{4,0}(0)=F^4(0)\in H^{2N-3/2}$, $F^{5,0}(0)=F^5(0)\in H^{2N-3/2}$ and $D_t^0u(0)=u_0\in H^{2N}$, $\pa_t^0\theta(0)=\theta_0\in H^{2N}$. Suppose now that we have constructed $F^{1,\ell}(0)\in H^{2N-2\ell-2}$, $F^{3,\ell}(0)\in H^{2N-2\ell-2}$, $F^{4,\ell}(0)\in H^{2N-2\ell-3/2}$, $F^{5,\ell}(0)\in H^{2N-2\ell-3/2}$, and $D_t^\ell u(0)\in H^{2N-2\ell}$, $\pa_t^\ell\theta(0)\in H^{2N-2\ell}$ for $0\le\ell\le j\le N-2$; we will construct $\pa_t^jp(0)\in H^{2N-2j-1}$ as well as $D_t^{j+1}u(0)\in H^{2N-2j-2}$, $\pa_t^{j+1}\theta(0)\in H^{2N-2j-2}$, $F^{1,j+1}(0)\in H^{2N-2j-4}$, $F^{3,j+1}(0)\in H^{2N-2j-4}$, $F^{4,j+1}(0)\in H^{2N-2j-7/2}$ and $F^{5,j+1}(0)\in H^{2N-2j-7/2}$ as follows. By virtue of the estimate \eqref{equ:initial g1 g4}, we know that
\begin{eqnarray}
\begin{aligned}
f^1&=\mathfrak{f}^1(F^{1,j}(0),D_t^ju(0))\in H^{2N-2j-3},\\
f^2&=\mathfrak{f}^2(F^{4,j}(0),D_t^ju(0))\in H^{2N-2j-3/2},\\
f^3&=\mathfrak{f}^3(F^{1,j}(0),D_t^ju(0))\in H^{2N-2j-5/2}.
\end{aligned}
\end{eqnarray}
This allows us to define $\pa_t^jp(0)$ as the solution to \eqref{equ:poisson}. The choice of $f^1$, $f^2$, $f^3$ implies that $\pa_t^jp(0)\in H^{2N-2j-1}$, according to Proposition 2.15 of \cite{LW}. Now the estimates \eqref{equ:initial force j}, \eqref{equ:initial v j} and \eqref{equ:initial G1 v q} allow us to define
\begin{align*}
D_t^{j+1}u(0)&:=\mathfrak{E}^{01}\left(F^{1,j}(0)+\pa_t^j(\theta(0) \nabla_{\mathscr{A}_0}y_{3,0}), D_t^ju(0), \pa_t^jp(0)\right)\in H^{2N-2j-2},\\
\pa_t^{j+1}\theta(0)&:=\mathfrak{E}^{02}\left(F^{3,j}(0),\pa_t^j\theta(0)\right)\in H^{2N-2j-2},\\
F^{1,j+1}(0)&:=D_tF^{1,j}(0)-\pa_t^j(\theta(0) \nabla_{\mathscr{A}_0}y_{3,0})+D_t^j(\theta(0) \nabla_{\mathscr{A}_0}y_{3,0})\\
&\quad+\mathfrak{E}^1\left(D_t^ju(0),\pa_t^jp(0)\right)\in H^{2N-2j-4},\\
F^{3,j+1}(0)&:=\pa_tF^{3,j}(0)+\mathfrak{E}^3\left(\pa_t^j\theta(0)\right)\in H^{2N-2j-4},\\
F^{4,j+1}(0)&:=\pa_tF^{4,j}(0)+\mathfrak{E}^4\left(D_t^ju(0),\pa_t^jp(0)\right)\in H^{2N-2j-7/2},\\
F^{5,j+1}(0)&:=\pa_tF^{5,j}(0)+\mathfrak{E}^5\left(\pa_t^j\theta(0)\right)\in H^{2N-2j-7/2}.
\end{align*}
Then, from the above analysis, we can iteratively construct all of the desired data except for $D_t^Nu(0)$, $\pa_t^{N-1}p(0)$ and $\pa_t^N\theta(0)$. By construction, the initial data $D_t^ju(0)$, $\pa_t^jp(0)$ and $\pa_t^j\theta(0)$ are determined in terms of $u_0$, $\theta_0$ as well as $\pa_t^\ell F^1(0)$, $\pa_t^\ell F^3(0)$, $\pa_t^\ell F^4(0)$ and $\pa_t^\ell F^5(0)$ for $\ell=0, \ldots, N-1$. In order to use these in Theorem \ref{thm:lower regularity} and to construct $D_t^Nu(0)$, $\pa_t^{N-1}p(0)$ and $\pa_t^N\theta(0)$, we must enforce compatibility conditions for $j=0,\ldots,N-1$.
We say that the $j$--th compatibility condition is satisfied if \begin{eqnarray}\label{cond:compatibility j} \left\{ \begin{aligned} &D_t^ju(0)\in \mathscr{X}(0)\cap H^2(\Om),\\ &\Pi_0\left(F^{4,j}(0)+\mathbb{D}_{\mathscr{A}_0}D_t^ju(0)\mathscr{N}_0\right)=0. \end{aligned} \right. \end{eqnarray} The construction of $D_t^ju(0)$ and $\pa_t^jp(0)$ ensures that $D_t^ju(0)\in H^2(\Om)$ and $\mathop{\rm div}\nolimits_{\mathscr{A}_0}(D_t^ju(0))=0$. In the following, we define $\pa_t^N\theta(0)\in H^0$, $\pa_t^{N-1}p(0)\in H^1$ and $D_t^N u(0)\in H^0$. First, we can define \[ \pa_t^N\theta(0)=\mathfrak{E}^{02}(F^{3,N-1}(0),\pa_t^{N-1}\theta(0))\in H^0(\Om), \] employing \eqref{equ:e02} for the inclusion in $H^0$. Then using the same analysis in \cite{GT1}, the data $\pa_t^{N-1}p(0)\in H^1$ can be defined as a weak solution to \eqref{equ:poisson}. Then we define \[ D_t^Nu(0)=\mathfrak{E}^{01}\left(F^{1,N-1}(0)+\pa_t^{N-1}(\theta(0) \nabla_{\mathscr{A}_0}y_{3,0}), D_t^{N-1}u(0),\pa_t^{N-1}p(0)\right)\in H^0, \] employing \eqref{equ:initial G1 v q} and \eqref{equ:initial theta j} for the inclusion in $H^0$. And $D_t^Nu(0)\in\mathscr{Y}(0)$ is guaranteed by the construction of $\pa_t^{N-1}p(0)$. Combining the inclusions above with the bounds \eqref{equ:initial force j}, \eqref{equ:initial g1 g4} , \eqref{equ:initial G1 v q} and \eqref{equ:e02} implies that \begin{eqnarray}\label{equ:est initial data j} \begin{aligned} &\sum_{j=0}^N\|D_t^ju(0)\|_{H^{2N-2j}}^2+\sum_{j=0}^{N-1}\|\pa_t^jp(0)\|_{H^{2N-2j-1}}^2+\sum_{j=0}^N\|\pa_t^j\theta(0)\|_{H^{2N-2j}}^2\\ &\lesssim P(\mathfrak{E}_0(\eta))\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0\right). \end{aligned} \end{eqnarray} Before stating the result on higher regularity for solutions to \eqref{equ:linear BC} , we define some quantities: \begin{eqnarray} \begin{aligned} \mathfrak{D}(u,p,\theta)&:=\sum_{j=0}^N\left(\|\pa_t^ju\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^j\theta\|_{L^2H^{2N-2j+1}}^2\right)+\|\pa_t^{N+1}u\|_{(\mathscr{X}_T)^\ast}\\ &\quad+\|\pa_t^{N+1}\theta\|_{(\mathscr{H}^1_T)^\ast}+\sum_{j=0}^{N-1}\|\pa_t^jp\|_{L^2H^{2N-2j}},\\ \mathfrak{E}(u,p,\theta)&:=\sum_{j=0}^N\left(\|\pa_t^ju\|_{L^\infty H^{2N-2j}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2N-2j}}^2\right)+\sum_{j=0}^{N-1}\|\pa_t^jp\|_{L^\infty H^{2N-2j-1}},\\ \mathfrak{K}(u,p,\theta)&:=\mathfrak{D}(u,p,\theta)+\mathfrak{E}(u,p,\theta). \end{aligned} \end{eqnarray} \begin{theorem}\label{thm:higher regularity} Suppose that $u_0\in H^{2N}(\Om)$, $\theta_0\in H^{2N}(\Om)$, $\eta_0\in H^{2N+1/2}(\Sigma)$, and $\mathfrak{F}<\infty$. Let $D_t^ju(0)\in H^{2N-2j}(\Om)$, $\pa_t^j\theta(0)\in H^{2N-2j}(\Om)$ and $\pa_t^jp(0)\in H^{2N-2j-1}(\Om)$, for $j=1, \ldots, N-1$ along with $D_t^Nu(0)\in\mathscr{Y}(0)$ and $\pa_t^N\theta(0)\in H^0$, all be determined in terms of $u_0$, $\theta_0$ and $\pa_t^jF^1(0)$, $\pa_t^jF^3(0)$, $\pa_t^jF^4(0)$, $\pa_t^jF^5(0)$ for $j=0, \ldots, N-1$. 
There exists a universal constant $T_0>0$ such that if $0<T\le T_0$, then there exists a unique strong solution $(u,p,\theta)$ on $[0,T]$ such that \[ \pa_t^ju\in C^0\left([0,T]; H^{2N-2j}(\Om)\right)\cap L^2\left([0,T];H^{2N-2j+1}(\Om)\right)\quad\text{for}\thinspace j=0,\ldots,N, \] \[ \pa_t^jp\in C^0\left([0,T]; H^{2N-2j-1}(\Om)\right)\cap L^2\left([0,T];H^{2N-2j}(\Om)\right)\quad\text{for}\thinspace j=0,\ldots,N-1, \] \[ \pa_t^j\theta\in C^0\left([0,T]; H^{2N-2j}(\Om)\right)\cap L^2\left([0,T];H^{2N-2j+1}(\Om)\right)\quad\text{for}\thinspace j=0,\ldots,N, \] \[ \pa_t^{N+1}u\in(\mathscr{X}_T)^\ast,\quad\text{and}\quad\pa_t^{N+1}\theta\in(\mathscr{H}^1_T)^\ast. \] The pair $(D_t^ju,\pa_t^jp, \pa_t^j\theta)$ satisfies \begin{eqnarray}\label{equ:higher linear BC} \left\{ \begin{aligned} &\pa_t(D_t^ju)-\Delta_{\mathscr{A}}(D_t^ju)+\nabla_{\mathscr{A}}(\pa_t^jp)-\pa_t^j(\theta \nabla_{\mathscr{A}}y_3)=F^{1,j}\quad &\text{in}\thinspace\Om,\\ &\mathop{\rm div}\nolimits_{\mathscr{A}}(D_t^ju)=0\quad &\text{in}\thinspace\Om,\\ &\pa_t(\pa_t^j\theta)-\Delta_{\mathscr{A}}(\pa_t^j\theta)=F^{3,j}\quad &\text{in}\thinspace\Om,\\ &S_{\mathscr{A}}(\pa_t^jp,D_t^ju)\mathscr{N}=F^{4,j}\quad &\text{on}\thinspace \Sigma,\\ &\nabla_{\mathscr{A}}(\pa_t^j\theta)\cdot\mathscr{N}+\pa_t^j\theta\left|\mathscr{N}\right|=F^{5,j}\quad &\text{on}\thinspace \Sigma,\\ &D_t^ju=0,\quad \pa_t^j\theta=0\quad &\text{on}\thinspace \Sigma_b, \end{aligned} \right. \end{eqnarray} in the strong sense with initial data $\left(D_t^ju(0),\pa_t^jp(0),\pa_t^j\theta(0)\right)$ for $j=0,\ldots,N-1$, and in the weak sense with initial data $D_t^{N}u(0)\in\mathscr{Y}(0)$ and $\pa_t^{N}\theta(0)\in H^0$. Here the forcing terms $F^{1,j}$, $F^{3,j}$, $F^{4,j}$ and $F^{5,j}$ are as defined by \eqref{equ:force 1} and \eqref{equ:force 2}. Moreover, the solution satisfies the estimate \begin{equation} \label{est:higher regularity} \mathfrak{K}(u,p,\theta)\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(T P(\mathfrak{E}(\eta))\right)\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\right), \end{equation} where the constant $C>0$, is independent of $\eta$. \end{theorem} \begin{proof} First, notice that $P(\cdot,\cdot)$ and $P(\cdot)$ throughout this proof is allowed to change from line to line. Theorem \ref{thm:lower regularity} guarantees the existence of $(u,p,\theta)$ satisfying the inclusions \eqref{equ:strong solution}. The $(D_t^ju,\pa_t^jp,\pa_t^j\theta)$ are solutions of \eqref{equ:higher linear BC} in the strong sense when $j=0$ and in the weak sense when $j=1$. Finally, the estimate \eqref{inequ:est strong solution} holds. For an integer $m\ge0$, let $\mathbb{P}_m$ denote the proposition asserting the following three statements. First, $(D_t^ju,\pa_t^jp,\pa_t^j\theta)$ are solutions of \eqref{equ:higher linear BC} in the strong sense for $j=0,\ldots,m$ and in the weak sense when $j=m+1$. Second, \[ \pa_t^ju\in L^\infty H^{2m-2j+2}\cap L^2H^{2m-2j+3},\quad \pa_t^j\theta\in L^\infty H^{2m-2j+2}\cap L^2H^{2m-2j+3} \] for $j=0,1,\ldots,m+1$, $\pa_t^{m+2}u\in(\mathscr{X}_T)^\ast$, $\pa_t^{m+2}\theta\in(\mathscr{H}^1_T)^\ast$ and \[ \pa_t^jp\in L^\infty H^{2m-2j+1}\cap L^2H^{2m-2j+2} \] for $j=0,1,\ldots,m$. 
Third, the estimate
\begin{eqnarray}\label{est:bound pm}
\begin{aligned}
&\sum_{j=0}^{m+1}\left(\|\pa_t^ju\|_{L^\infty H^{2m-2j+2}}^2+\|\pa_t^ju\|_{L^2H^{2m-2j+3}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2m-2j+2}}^2+\|\pa_t^j\theta\|_{L^2H^{2m-2j+3}}^2\right)\\
&\quad+\|\pa_t^{m+2}u\|_{(\mathscr{X}_T)^\ast}^2+\|\pa_t^{m+2}\theta\|_{(\mathscr{H}^1_T)^\ast}^2+\sum_{j=0}^m\left(\|\pa_t^jp\|_{L^\infty H^{2m-2j+1}}^2+\|\pa_t^jp\|_{L^2H^{2m-2j+2}}^2\right)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(T P(\mathfrak{E}(\eta))\right)\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\right)
\end{aligned}
\end{eqnarray}
holds. We will use finite induction to prove that $\mathbb{P}_m$ holds for $m=0,\ldots,N-1$. Theorem \ref{thm:lower regularity} implies that $\mathbb{P}_0$ holds. The rest of the proof is divided into two steps.
Step 1. Proving the first assertion. Suppose that $\mathbb{P}_m$ holds for some $m\in\{0, \ldots, N-2\}$. From \eqref{equ:force l2}--\eqref{equ:force dual} of Lemma \ref{lem:force linear}, we have that
\begin{eqnarray}\label{equ:force m+1 l2}
\begin{aligned}
&\|F^{1,m+1}(v,q)\|_{L^2H^1}^2+\|F^{3,m+1}(\Theta)\|_{L^2H^1}^2+\|F^{4,m+1}(v,q)\|_{L^2H^{3/2}}^2\\
&\quad+\|F^{5,m+1}(\Theta)\|_{L^2H^{3/2}}^2\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^{m}\left(\|\pa_t^\ell v\|_{L^2H^3}^2+\|\pa_t^\ell \Theta\|_{L^2H^3}^2\right)\\
&\quad+\sum_{\ell=0}^{m}\Big(\|\pa_t^\ell v\|_{L^\infty H^2}^2+\|\pa_t^\ell \Theta\|_{L^\infty H^2}^2+\|\pa_t^\ell q\|_{L^2H^2}^2+\|\pa_t^\ell q\|_{L^\infty H^1}^2\Big)\bigg),
\end{aligned}
\end{eqnarray}
\begin{eqnarray}\label{equ:force m+1 l infty}
\begin{aligned}
&\|F^{1,m+1}(v,q)\|_{L^\infty H^0}^2+\|F^{3,m+1}(\Theta)\|_{L^\infty H^0}^2+\|F^{4,m+1}(v,q)\|_{L^\infty H^{1/2}}^2\\
&\quad+\|F^{5,m+1}(\Theta)\|_{L^\infty H^{1/2}}^2\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^m\Big(\|\pa_t^\ell v\|_{L^\infty H^2}^2+\|\pa_t^\ell \Theta\|_{L^\infty H^2}^2+\|\pa_t^\ell q\|_{L^\infty H^1}^2\Big)\bigg),
\end{aligned}
\end{eqnarray}
\begin{eqnarray}\label{equ:force m+1 dual}
\begin{aligned}
&\|\pa_t(F^{1,m+1}(v,q)-F^{4,m+1}(v,q))\|_{L^2({}_0H^1(\Om))^\ast}^2\\
&\quad+\|\pa_t(F^{3,m+1}(\Theta)-F^{5,m+1}(\Theta))\|_{L^2({}_0H^1(\Om))^\ast}^2\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\|\pa_t^{m+1} v\|_{L^2 H^{2}}^2+\|\pa_t^{m+1} \Theta\|_{L^2 H^{2}}^2+\|\pa_t^{m+1} q\|_{L^2 H^{1}}^2\\
&\quad+\sum_{\ell=0}^m\Big(\|\pa_t^\ell v\|_{L^\infty H^{2}}^2+\|\pa_t^\ell v\|_{L^2 H^{2}}^3+\|\pa_t^\ell \Theta\|_{L^\infty H^{2}}^2+\|\pa_t^\ell \Theta\|_{L^2 H^{2}}^3\\
&\quad+\|\pa_t^\ell q\|_{L^\infty H^{1}}^2+\|\pa_t^\ell q\|_{L^2 H^{2}}^2\Big)\bigg).
\end{aligned}
\end{eqnarray}
Now we use an iteration method. We let $u^0$ be the extension of the initial data $\pa_t^ju(0)$, $j=1,\ldots,N$, given by Lemma A.5 in \cite{GT1}, which also gives $\theta^0$, the extension of the initial data $\pa_t^j\theta(0)$, $j=1,\ldots,N$, and similarly let $p^0$ be the extension of $\pa_t^jp(0)$, $j=1,\ldots,N-1$, given by Lemma A.6 in \cite{GT1}.
By \eqref{equ:est initial data j} and the estimates given in Lemma A.5 and Lemma A.6 in \cite{GT1}, we have
\begin{eqnarray}\label{est:u0 p0 theta0}
\begin{aligned}
&\sum_{j=0}^N\left(\|\pa_t^ju^0\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^ju^0\|_{L^\infty H^{2N-2j}}^2+\|\pa_t^j\theta^0\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^j\theta^0\|_{L^\infty H^{2N-2j}}^2\right)\\
&\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jp^0\|_{L^2H^{2N-2j}}^2+\|\pa_t^jp^0\|_{L^\infty H^{2N-2j-1}}^2\right)\\
&\lesssim \sum_{j=0}^N\|D_t^ju(0)\|_{H^{2N-2j}}^2+\sum_{j=0}^{N-1}\|\pa_t^jp(0)\|_{H^{2N-2j-1}}^2+\sum_{j=0}^N\|\pa_t^j\theta(0)\|_{H^{2N-2j}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta))\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0\right).
\end{aligned}
\end{eqnarray}
According to \eqref{equ:force m+1 l2}--\eqref{est:u0 p0 theta0}, we may derive that $F^{1,m+1}(u^0,p^0)$, $F^{3,m+1}(\theta^0)$, $F^{4,m+1}(u^0,p^0)$ and $F^{5,m+1}(\theta^0)$ satisfy \eqref{cond:force}. Also the compatibility condition \eqref{cond:compatibility} with $F^4$ replaced by $F^{4,m+1}(u^0,p^0)$ and $u_0$ replaced by $D_t^{m+1}u(0)$ holds by \eqref{cond:compatibility j} since $u^0$ and $p^0$ achieve the initial data. Then we can apply Theorem \ref{thm:lower regularity} to find a triple $(v^1,q^1,\Theta^1)$ satisfying the conclusions of the theorem. For simplicity, we abbreviate \eqref{equ:linear BC} as $\mathcal{L}(v,q,\Theta)=\mathbb{F}=(F^1,F^3,F^4,F^5)$. Then
\[
\mathcal{L}(v^1,q^1,\Theta^1)=\mathbb{F}^{m+1}:=(F^{1,m+1}(u^0,p^0),F^{3,m+1}(\theta^0),F^{4,m+1}(u^0,p^0),F^{5,m+1}(\theta^0)),
\]
\[
v^1(0)=D_t^{m+1}u(0),\quad q^1(0)=\pa_t^{m+1}p(0),\quad \Theta^1(0)=\pa_t^{m+1}\theta(0).
\]
If we denote the left--hand side of \eqref{inequ:est strong solution} as $\mathfrak{B}(u,p,\theta)$, then we may combine \eqref{inequ:est strong solution}, \eqref{equ:initial force j}, \eqref{equ:force m+1 l2}, \eqref{equ:force m+1 dual} and \eqref{est:u0 p0 theta0} to derive that
\[
\mathfrak{B}(v^1,q^1,\Theta^1)\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\right).
\]
Now suppose that $(v^n,q^n,\Theta^n)$ is given and satisfies $\mathfrak{B}(v^n,q^n,\Theta^n)<\infty$; we then define $(u^n,p^n,\theta^n)$ as the solution of the ODEs
\begin{eqnarray}\label{equ:ode uv}
\left\{
\begin{aligned}
&D_t^{m+1}u^n=v^n,\\
&\pa_t^ju^n(0)=\pa_t^ju(0) \quad\text{for}\thinspace j=0,\ldots,m,
\end{aligned}
\right.
\end{eqnarray}
\begin{eqnarray}\label{equ:ode pq}
\left\{
\begin{aligned}
&\pa_t^{m+1}p^n=q^n,\\
&\pa_t^jp^n(0)=\pa_t^jp(0) \quad\text{for}\thinspace j=0,\ldots,m,
\end{aligned}
\right.
\end{eqnarray}
\begin{eqnarray}\label{equ:ode theta Theta}
\left\{
\begin{aligned}
&\pa_t^{m+1}\theta^n=\Theta^n,\\
&\pa_t^j\theta^n(0)=\pa_t^j\theta(0) \quad\text{for}\thinspace j=0,\ldots,m.
\end{aligned}
\right.
\end{eqnarray}
From the well--posedness theory of linear ODEs, we know that these ODEs have unique solutions.
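As a minimal illustration of why these ODEs admit unique solutions, consider \eqref{equ:ode pq}: writing $a_j:=\pa_t^jp^n(0)$ for the prescribed initial data (a shorthand used only in this remark), the unique solution is given explicitly by Taylor expansion with integral remainder,
\[
p^n(t)=\sum_{j=0}^m\frac{t^j}{j!}a_j+\int_0^t\frac{(t-s)^m}{m!}q^n(s)\,\mathrm ds,
\]
and \eqref{equ:ode theta Theta} is solved in the same way; for \eqref{equ:ode uv} one argues analogously after expressing $D_t$ in terms of $\pa_t$, treating the difference as a lower--order term in the spirit of Lemma \ref{lem:pa tv Dt v}.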
If we define $\mathfrak{K}(v,q,\Theta)$ by \begin{align*} \mathfrak{K}(v,q,\Theta):&=\|\pa_t^{m+1}v\|_{L^2H^2}^2+\|\pa_t^{m+1}q\|_{L^2H^1}^2+\|\pa_t^{m+1}\Theta\|_{L^2H^2}^2+\sum_{\ell=0}^m\Big(\|\pa_t^\ell v\|_{L^2H^3}^2\\ &\quad+\|\pa_t^\ell v\|_{L^\infty H^2}^2+\|\pa_t^\ell \Theta\|_{L^2H^3}^2+\|\pa_t^\ell \Theta\|_{L^\infty H^2}^2+\|\pa_t^\ell q\|_{L^2H^2}^2+\|\pa_t^\ell q\|_{L^\infty H^1}^2\Big), \end{align*} then the solutions of \eqref{equ:ode uv}--\eqref{equ:ode theta Theta} satisfy the estimate \begin{equation}\label{est:un,pn,thetan} \begin{aligned} \mathfrak{K}(u^n,p^n,\theta^n)&\lesssim P(T)P(\mathfrak{K}(\eta))\bigg(\sum_{j=0}^m\|\pa_t^ju(0)\|_{H^3}^2+\|\pa_t^jp(0)\|_{H^2}^2\\ &\quad+\|\pa_t^j\theta(0)\|_{H^3}^2+T\mathfrak{B}(v^n,q^n,\Theta^n)\bigg)<\infty, \end{aligned} \end{equation} where $P(T)$ is a polynomial in $T$. Applying Theorem \ref{thm:lower regularity} iteratively, we can obtain sequences $\{(v^n,q^n,\Theta^n)\}_{n=1}^\infty$ and $\{u^n,p^n,\theta^n\}_{n=1}^\infty$ satisfying \eqref{equ:ode uv}--\eqref{equ:ode theta Theta} and \begin{eqnarray}\label{equ:iterate n n-1} \begin{aligned} \mathcal{L}(v^n,q^n,\Theta^n)&=\mathbb{F}^{m+1}(u^{n-1},p^{n-1},\theta^{n-1}),\\ v^n(0)&=D_t^{m+1}u(0),\quad q^n(0)=\pa_t^{m+1}p(0),\quad \Theta^n(0)=\pa_t^{m+1}\theta(0). \end{aligned} \end{eqnarray} Then \begin{align*} \mathcal{L}(v^{n+1}-v^n,q^{n+1}-q^n,\Theta^{n+1}-\Theta^n)&=\mathbb{F}^{m+1}(u^n-u^{n-1},p^n-p^{n-1},\theta^n-\theta^{n-1}),\\ v^{n+1}(0)-v^n(0)=0,\quad &q^{n+1}(0)-q^n(0)=0,\quad \Theta^{n+1}(0)-\Theta^n(0)=0. \end{align*} Since the terms involving $F^1$, $F^3$, $F^4$ and $F^5$ are canceled in $\mathbb{F}^{m+1}(u^n-u^{n-1},p^n-p^{n-1},\theta^n-\theta^{n-1})$, we can use \eqref{equ:force m+1 l2} and \eqref{equ:force m+1 dual} to derive that \begin{align*} &\|F^{1,m+1}(u^n-u^{n-1},p^n-p^{n-1})\|_{L^2H^1}^2+\|F^{3,m+1}(\theta^n-\theta^{n-1})\|_{L^2H^1}^2\\ &\quad+\|F^{4,m+1}(u^n-u^{n-1},p^n-p^{n-1})\|_{L^2H^{3/2}}^2+\|F^{5,m+1}(\theta^n-\theta^{n-1})\|_{L^2H^{3/2}}^2\\ &\quad+\|\pa_t(F^{1,m+1}(u^n-u^{n-1},p^n-p^{n-1})-F^{4,m+1}(u^n-u^{n-1},p^n-p^{n-1}))\|_{L^2({}_0H^1(\Om))^\ast}^2\\ &\quad+\|\pa_t(F^{3,m+1}(\theta^n-\theta^{n-1})-F^{5,m+1}(\theta^n-\theta^{n-1}))\|_{L^2({}_0H^1(\Om))^\ast}^2\\ &\lesssim P(\mathfrak{K}(\eta))\mathfrak{K}(u^n-u^{n-1},p^n-p^{n-1},\theta^n-\theta^{n-1}). \end{align*} Since, for each $n$, $(u^n,p^n,\theta^n)$ achieves the same initial data, similar to the ODEs \eqref{equ:ode uv}--\eqref{equ:ode theta Theta}, we have that \begin{equation} \mathfrak{K}(u^n-u^{n-1},p^n-p^{n-1},\theta^n-\theta^{n-1})\lesssim P(\mathfrak{K}(\eta))T P(T)\mathfrak{B}(v^n-v^{n-1},q^n-q^{n-1},\Theta^n-\Theta^{n-1}). \end{equation} The above two estimates with \eqref{inequ:est strong solution} imply that \begin{eqnarray} \begin{aligned} &\mathfrak{B}(v^{n+1}-v^n,q^{n+1}-q^n,\Theta^{n+1}-\Theta^n)\\ &\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\\ &\quad\times T P(T)\mathfrak{B}(v^n-v^{n-1},q^n-q^{n-1},\Theta^n-\Theta^{n-1}), \end{aligned} \end{eqnarray} which implies that there exists a universal $T_0>0$ such that if $T\le T_0$, then the sequence $\{(v^n,q^n,\Theta^n)\}_{n=1}^\infty$ converges to $(v,q,\Theta)$ in the norm $\sqrt{\mathfrak{B}(\cdot,\cdot)}$, which reveals that $\{(u^n,p^n,\theta^n)\}_{n=1}^\infty$ converges to $(u,p,\theta)$ in the norm $\sqrt{\mathfrak{K}(\cdot,\cdot)}$. 
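To spell out this last step, set $a_n:=\mathfrak{B}(v^{n+1}-v^n,q^{n+1}-q^n,\Theta^{n+1}-\Theta^n)$, where $a_n$ and $\kappa$ below are shorthand used only in this remark; the previous estimate reads $a_n\le\kappa a_{n-1}$, with $\kappa$ denoting (up to the implicit constant) the factor $P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)TP(T)$ appearing above, so taking $T_0$ small enough yields $\kappa\le\f12$ and hence
\[
a_n\le\kappa^{n-1}a_1,\qquad \sum_{n\ge1}\sqrt{a_n}\le\sqrt{a_1}\sum_{n\ge0}\kappa^{n/2}<\infty,
\]
so that $\{(v^n,q^n,\Theta^n)\}_{n=1}^\infty$ is Cauchy with respect to $\sqrt{\mathfrak{B}}$, and correspondingly $\{(u^n,p^n,\theta^n)\}_{n=1}^\infty$ is Cauchy with respect to $\sqrt{\mathfrak{K}}$.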
By passing to the limit in \eqref{equ:ode uv}--\eqref{equ:ode theta Theta}, we have that $v=D_t^{m+1}u$, $q=\pa_t^{m+1}p$ and $\Theta=\pa_t^{m+1}\theta$. Then, passing to the limit in \eqref{equ:iterate n n-1}, we have that
\[
\mathcal{L}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)=\mathbb{F}^{m+1}(u,p,\theta).
\]
Then Theorem \ref{thm:lower regularity}, together with the assumption $\mathbb{P}_m$ (which provides that $(D_t^ju,\pa_t^jp,\pa_t^j\theta)$ are solutions of \eqref{equ:higher linear BC} in the strong sense for $j=0,\ldots,m$), enables us to deduce the first assertion of $\mathbb{P}_{m+1}$. Theorem \ref{thm:lower regularity}, together with the estimates \eqref{equ:force l2}, \eqref{equ:force m+1 dual} and \eqref{est:bound pm}, gives us that
\begin{eqnarray}
\begin{aligned}
&\mathfrak{B}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2\\
&\quad+\mathfrak{F}_0+\mathfrak{F}+\|\pa_t^{m+1}u\|_{L^2H^2}^2+\|\pa_t^{m+1}p\|_{L^2H^1}^2+\|\pa_t^{m+1}\theta\|_{L^2H^2}^2\big).
\end{aligned}
\end{eqnarray}
On the other hand, the estimate \eqref{est:pa t Dt v j} implies that
\begin{eqnarray}
\begin{aligned}
&\|\pa_t^{m+1}u\|_{L^2H^2}^2+\|\pa_t^{m+1}p\|_{L^2H^1}^2+\|\pa_t^{m+1}\theta\|_{L^2H^2}^2\\
&\le T\left(\|\pa_t^{m+1}u\|_{L^\infty H^2}^2+\|\pa_t^{m+1}p\|_{L^\infty H^1}^2+\|\pa_t^{m+1}\theta\|_{L^\infty H^2}^2\right)\\
&\lesssim T\left(\|\pa_t^{m+1}u-D_t^{m+1}u\|_{L^\infty H^2}^2+\|D_t^{m+1}u\|_{L^\infty H^2}^2+\|\pa_t^{m+1}p\|_{L^\infty H^1}^2+\|\pa_t^{m+1}\theta\|_{L^\infty H^2}^2\right)\\
&\lesssim T\Big(P(\mathfrak{K}(\eta))\sum_{\ell=0}^m\|\pa_t^\ell u\|_{L^\infty H^2}^2+\mathfrak{B}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)\Big)\\
&\lesssim T\Big(P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big)\\
&\quad+\mathfrak{B}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)\Big),
\end{aligned}
\end{eqnarray}
where in the last inequality we have used \eqref{est:bound pm} again. Combining the above two estimates, we may further restrict the size of the universal constant $T_0>0$ such that if $T\le T_0$, then
\begin{eqnarray}\label{est:B Dtu pa tp pa ttheta}
\begin{aligned}
&\mathfrak{B}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Step 2. Proving the second and third assertions. In the following, the second and third assertions will be derived simultaneously.
The estimate \eqref{est:B Dtu pa tp pa ttheta}, together with Lemma \ref{lem:pa tv Dt v} and the estimate \eqref{est:bound pm}, implies that
\begin{eqnarray}
\begin{aligned}
&\|\pa_t^{m+1}u\|_{L^2H^3}^2+\|\pa_t^{m+2}u\|_{L^2H^1}^2+\|\pa_t^{m+3}u\|_{(\mathscr{X}_T)^\ast}^2+\|\pa_t^{m+1}u\|_{L^\infty H^2}^2+\|\pa_t^{m+2}u\|_{L^\infty H^0}^2\\
&\lesssim P(\mathfrak{K}(\eta))\sum_{\ell=0}^{m+2}\left(\|\pa_t^\ell u\|_{L^2H^{2m-2\ell+3}}^2+\|\pa_t^\ell u\|_{L^\infty H^{2m-2\ell+2}}^2\right)\\
&\quad+P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big)\\
&\lesssim P(\mathfrak{K}(\eta))P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big)\\
&\quad+P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Thus
\begin{eqnarray}
\begin{aligned}
&\sum_{j=m+1}^{m+2}\left(\|\pa_t^ju\|_{L^2H^{2(m+1)-2j+3}}^2+\|\pa_t^ju\|_{L^\infty H^{2(m+1)-2j+2}}^2\right)+\|\pa_t^{m+3}u\|_{(\mathscr{X}_T)^\ast}^2\\
&\quad+\sum_{j=m+1}^{m+2}\left(\|\pa_t^jp\|_{L^2H^{2(m+1)-2j+2}}^2+\|\pa_t^jp\|_{L^\infty H^{2(m+1)-2j+1}}^2\right)\\
&\quad+\sum_{j=m+1}^{m+2}\left(\|\pa_t^j\theta\|_{L^2H^{2(m+1)-2j+3}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2(m+1)-2j+2}}^2\right)+\|\pa_t^{m+3}\theta\|_{(\mathscr{H}^1_T)^\ast}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Thus, in order to derive the second and third assertions of $\mathbb{P}_{m+1}$, it suffices to prove that
\begin{eqnarray}\label{est:bound pm 1}
\begin{aligned}
&\sum_{j=0}^{m}\left(\|\pa_t^ju\|_{L^2H^{2(m+1)-2j+3}}^2+\|\pa_t^jp\|_{L^2H^{2(m+1)-2j+2}}^2+\|\pa_t^j\theta\|_{L^2H^{2(m+1)-2j+3}}^2\right)\\
&\quad+\sum_{j=0}^m\left(\|\pa_t^ju\|_{L^\infty H^{2(m+1)-2j+2}}^2+\|\pa_t^jp\|_{L^\infty H^{2(m+1)-2j+1}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2(m+1)-2j+2}}^2\right)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
In order to prove this estimate, we will use the elliptic regularity of Proposition \ref{prop:high regulatrity} with $k=2N$ and an iteration argument. As the first step, we need the estimates for the forcing terms.
Combining \eqref{est:bound pm} with the estimates \eqref{equ:force l2} and \eqref{equ:force l infity} of Lemma \ref{lem:force linear} yields
\begin{eqnarray}\label{est:sum force j}
\begin{aligned}
&\sum_{j=1}^{m+1}\Big(\|F^{1,j}\|_{L^2H^{2m-2j+1}}^2+\|F^{3,j}\|_{L^2H^{2m-2j+1}}^2+\|F^{4,j}\|_{L^2H^{2m-2j+3/2}}^2\\
&\quad+\|F^{5,j}\|_{L^2H^{2m-2j+3/2}}^2+\|F^{1,j}\|_{L^\infty H^{2m-2j}}^2+\|F^{3,j}\|_{L^\infty H^{2m-2j}}^2\\
&\quad+\|F^{4,j}\|_{L^\infty H^{2m-2j+1/2}}^2+\|F^{5,j}\|_{L^\infty H^{2m-2j+1/2}}^2\Big)\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^{m}\left(\|\pa_t^\ell u\|_{L^2H^{2m-2\ell+3}}^2+\|\pa_t^\ell \theta\|_{L^2H^{2m-2\ell+3}}^2\right)\\
&\quad+\sum_{\ell=0}^{m}\Big(\|\pa_t^\ell u\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell \theta\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell p\|_{L^2H^{2m-2\ell+2}}^2\\
&\quad+\|\pa_t^\ell p\|_{L^\infty H^{2m-2\ell+1}}^2\Big)\bigg)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
The estimates \eqref{est:B Dtu pa tp pa ttheta} and \eqref{est:bound pm}, as well as \eqref{est:pat v Dt v l2} and \eqref{est:pat v Dt v linfty} of Lemma \ref{lem:pa tv Dt v}, allow us to deduce that
\begin{eqnarray}\label{est:pat Dtm u}
\begin{aligned}
&\|\pa_tD_t^mu\|_{L^\infty H^2}^2+\|\pa_tD_t^mu\|_{L^2 H^3}^2\\
&\lesssim \|\pa_tD_t^mu-D_t^{m+1}u\|_{L^\infty H^2}^2+\|\pa_tD_t^mu-D_t^{m+1}u\|_{L^2 H^3}^2\\
&\quad+\|D_t^{m+1}u\|_{L^\infty H^2}^2+\|D_t^{m+1}u\|_{L^2 H^3}^2\\
&\lesssim P(\mathfrak{K}(\eta))\left(\|D_t^mu\|_{L^\infty H^2}^2+\|D_t^mu\|_{L^2 H^3}^2\right)+\|D_t^{m+1}u\|_{L^\infty H^2}^2+\|D_t^{m+1}u\|_{L^2 H^3}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Since \eqref{equ:higher linear BC} is satisfied in the strong sense for $j=m$, for almost every $t\in [0,T]$, $(D_t^mu, \pa_t^mp, \pa_t^m\theta)$ solves the elliptic system \eqref{equ:SBC} with $F^1$ replaced by $F^{1,m}-\pa_tD_t^mu$, $F^2=0$, $F^3$ replaced by $F^{3,m}-\pa_t(\pa_t^m\theta)$ and $F^4$, $F^5$ replaced by $F^{4,m}$, $F^{5,m}$, respectively. We then apply Proposition \ref{prop:high regulatrity} with $r=5$, square the resulting estimate and integrate over $[0,T]$, to deduce that
\begin{eqnarray}
\begin{aligned}
&\|D_t^mu\|_{L^2H^5}^2+\|\pa_t^mp\|_{L^2H^4}^2+\|\pa_t^m\theta\|_{L^2H^5}^2\\
&\lesssim \|F^{1,m}-\pa_tD_t^mu\|_{L^2H^3}^2+\|F^{3,m}-\pa_t(\pa_t^m\theta)\|_{L^2H^3}^2\\
&\quad+\|F^{4,m}\|_{L^2H^{7/2}}^2+\|F^{5,m}\|_{L^2H^{7/2}}^2\\
&\lesssim \|F^{1,m}\|_{L^2H^3}^2+\|\pa_tD_t^mu\|_{L^2H^3}^2+\|F^{3,m}\|_{L^2H^3}^2+\|\pa_t(\pa_t^m\theta)\|_{L^2H^3}^2\\
&\quad+\|F^{4,m}\|_{L^2H^{7/2}}^2+\|F^{5,m}\|_{L^2H^{7/2}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big),
\end{aligned}
\end{eqnarray}
where in the last inequality we have used \eqref{est:B Dtu pa tp pa ttheta}, \eqref{est:sum force j} and \eqref{est:pat Dtm u}.
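We pause to record the bookkeeping behind the choice of $r$ (this is only a reading guide, not an additional estimate): in \eqref{est:bound}-type elliptic estimates the solution gains two derivatives in $\Om$ and $3/2$ on $\Sigma$ over the data, so the $L^2H^3$ and $L^2H^{7/2}$ norms appearing on the right--hand side above correspond to $r-2$ and $r-3/2$ with $r=5$, while the $L^\infty H^2$ and $L^\infty H^{5/2}$ norms used next correspond to $r=4$:
\[
r=5:\thinspace H^{r-2}=H^3,\thinspace H^{r-3/2}=H^{7/2};\qquad r=4:\thinspace H^{r-2}=H^2,\thinspace H^{r-3/2}=H^{5/2}.
\]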
Similarly, Proposition \ref{prop:high regulatrity} with $r=4$ reveals that \begin{eqnarray} \begin{aligned} &\|D_t^mu\|_{L^\infty H^4}^2+\|\pa_t^mp\|_{L^\infty H^3}^2+\|\pa_t^m\theta\|_{L^\infty H^4}^2\\ &\lesssim \|F^{1,m}-\pa_tD_t^mu\|_{L^\infty H^2}^2+\|F^{3,m}-\pa_t(\pa_t^m\theta)\|_{L^\infty H^2}^2\\ &\quad+\|F^{4,m}\|_{L^\infty H^{5/2}}^2+\|F^{5,m}\|_{L^\infty H^{5/2}}^2\\ &\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big). \end{aligned} \end{eqnarray} By iterating to estimate $\pa_t^ju$, $\pa_t^jp$ and $\pa_t^j\theta$ for $j=1,\ldots,m$, as well as the above two estimates, we have that \begin{align*} &\|\pa_t^m u\|_{L^\infty H^4}^2+\|\pa_t^mu\|_{L^2H^5}^2\\ &\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big). \end{align*} Thus, we have that \begin{eqnarray}\label{est:bound pm j ge1} \begin{aligned} &\sum_{j=1}^{m}\left(\|\pa_t^ju\|_{L^2H^{2(m+1)-2j+3}}^2+\|\pa_t^jp\|_{L^2H^{2(m+1)-2j+2}}^2+\|\pa_t^j\theta\|_{L^2H^{2(m+1)-2j+3}}^2\right)\\ &\quad+\sum_{j=1}^m\left(\|\pa_t^ju\|_{L^\infty H^{2(m+1)-2j+2}}^2+\|\pa_t^jp\|_{L^\infty H^{2(m+1)-2j+1}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2(m+1)-2j+2}}^2\right)\\ &\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big). \end{aligned} \end{eqnarray} Then we apply Proposition \ref{prop:high regulatrity} with $r=2(m+1)+3\le2N+1$, square the result estimate and integrate over $[0,T]$ to see that \begin{eqnarray}\label{est:bound pm j=0 l2} \begin{aligned} &\|u\|_{L^2H^{2(m+1)+3}}^2+\|p\|_{L^2H^{2(m+1)+2}}^2+\|\theta\|_{L^2H^{2(m+1)+3}}^2\\ &\lesssim \|F^1-\pa_tu\|_{L^2H^{2(m+1)+1}}^2+\|F^3-\pa_t\theta\|_{L^2H^{2(m+1)+1}}^2\\ &\quad+\|F^4\|_{L^2H^{2(m+1)+3/2}}^2+\|F^5\|_{L^2H^{2(m+1)+3/2}}^2\\ &\lesssim \|F^1\|_{L^2H^{2(m+1)+1}}^2+\|\pa_tu\|_{L^2H^{2(m+1)+1}}^2+\|F^3\|_{L^2H^{2(m+1)+1}}^2+\|\pa_t\theta\|_{L^2H^{2(m+1)+1}}^2\\ &\quad+\|F^4\|_{L^2H^{2(m+1)+3/2}}^2+\|F^5\|_{L^2H^{2(m+1)+3/2}}^2\\ &\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big), \end{aligned} \end{eqnarray} and then again with $r=2(m+1)+2\le2N$ to see that \begin{eqnarray}\label{est:bound pm j=0 linfty} \begin{aligned} &\|u\|_{L^\infty H^{2(m+1)+2}}^2+\|p\|_{L^\infty H^{2(m+1)+1}}^2+\|\theta\|_{L^\infty H^{2(m+1)+2}}^2\\ &\lesssim \|F^1-\pa_tu\|_{L^\infty H^{2(m+1)}}^2+\|F^3-\pa_t\theta\|_{L^\infty H^{2(m+1)}}^2\\ &\quad+\|F^4\|_{L^\infty H^{2(m+1)+1/2}}^2+\|F^5\|_{L^\infty H^{2(m+1)+1/2}}^2\\ &\lesssim \|F^1\|_{L^\infty H^{2(m+1)}}^2+\|\pa_tu\|_{L^\infty H^{2(m+1)}}^2+\|F^3\|_{L^\infty H^{2(m+1)}}^2+\|\pa_t\theta\|_{L^\infty H^{2(m+1)}}^2\\ &\quad+\|F^4\|_{L^\infty H^{2(m+1)+1/2}}^2+\|F^5\|_{L^\infty H^{2(m+1)+1/2}}^2\\ &\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big). \end{aligned} \end{eqnarray} Thus \eqref{est:bound pm 1} is obtained by summing \eqref{est:bound pm j ge1}--\eqref{est:bound pm j=0 linfty}. This completes the proof. 
\end{proof}
\section{Preliminaries for the nonlinear problem}
In order to use the linear theory for the problem \eqref{equ:linear BC} to solve the nonlinear problem \eqref{equ:nonlinear BC}, we have to define forcing terms $F^1$, $F^3$, $F^4$, $F^5$ to be used in the linear estimates. Given $u$, $\theta$, $\eta$, we define
\begin{eqnarray}\label{equ:force u p theta}
\begin{aligned}
F^1(u,\theta,\eta)&=\pa_t\bar{\eta}(1+x_3)K\pa_3u-u\cdot\nabla_{\mathscr{A}}u\quad \text{and}\quad F^4(u,\theta,\eta)=\eta\mathscr{N},\\
F^3(u,\theta,\eta)&=\pa_t\bar{\eta}(1+x_3)K\pa_3\theta-u\cdot\nabla_{\mathscr{A}}\theta \quad \text{and}\quad F^5(u,\theta,\eta)=-\left|\mathscr{N}\right|,
\end{aligned}
\end{eqnarray}
where $\mathscr{A}$, $\mathscr{N}$, $K$ are determined as before by $\eta$. Then we define the quantities $\mathfrak{K}_{N}(u,\theta)$ and $\mathfrak{K}_{N}(u)$ as
\begin{eqnarray}\label{equ:N u theta}
\begin{aligned}
\mathfrak{K}_{N}(u,\theta)&=\sum_{j=0}^{N}\Big(\|\pa_t^ju\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^ju\|_{L^\infty H^{2N-2j}}^2\\
&\quad+\|\pa_t^j\theta\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2N-2j}}^2\Big),
\end{aligned}
\end{eqnarray}
and
\begin{equation}
\mathfrak{K}_{N}(u)=\sum_{j=0}^{N}\Big(\|\pa_t^ju\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^ju\|_{L^\infty H^{2N-2j}}^2\Big).
\end{equation}
\subsection{Initial data estimates}\label{sec:initial data}
Since $\eta$ is unknown for the full nonlinear problem, and its evolution is coupled to that of $u$, $p$ and $\theta$, we must construct the initial data so that they encode this coupling, using only $u_0$, $\theta_0$ and $\eta_0$. Here we define some quantities which differ slightly from those in \cite{GT1}:
\begin{equation}
\mathscr{E}_0:=\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\|\eta_0\|_{H^{2N+1/2}}^2,
\end{equation}
and
\begin{equation}
\mathfrak{E}_0(u,p,\theta):=\sum_{j=0}^N\|\pa_t^ju(0)\|_{H^{2N-2j}}^2+\sum_{j=0}^{N-1}\|\pa_t^jp(0)\|_{H^{2N-2j-1}}^2+\sum_{j=0}^N\|\pa_t^j\theta(0)\|_{H^{2N-2j}}^2.
\end{equation}
For $j=0,\ldots,N-1$,
\begin{eqnarray}
\begin{aligned}
&\mathfrak{F}_0^j(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\\
&:=\sum_{\ell=0}^j\Big(\|\pa_t^\ell F^1(0)\|_{H^{2N-2\ell-2}}^2+\|\pa_t^\ell F^3(0)\|_{H^{2N-2\ell-2}}^2+\|\pa_t^\ell F^4(0)\|_{H^{2N-2\ell-3/2}}^2\\
&\quad+\|\pa_t^\ell F^5(0)\|_{H^{2N-2\ell-3/2}}^2\Big).
\end{aligned}
\end{eqnarray}
We also define
\begin{equation}
\mathfrak{E}_0^0(\eta):=\|\eta_0\|_{H^{2N+1/2}}^2,
\end{equation}
and for $j=1,\ldots,N$,
\begin{equation}
\mathfrak{E}_0^j(\eta):=\|\eta_0\|_{H^{2N+1/2}}^2+\sum_{\ell=1}^j\|\pa_t^\ell\eta(0)\|_{H^{2N-2\ell+3/2}}^2.
\end{equation}
Similarly, we define
\begin{equation}
\mathfrak{E}_0^0(u,p,\theta):=\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2,
\end{equation}
and for $j=1,\ldots,N$,
\begin{equation}
\mathfrak{E}_0^j(u,p,\theta):=\sum_{\ell=0}^j\|\pa_t^\ell u(0)\|_{H^{2N-2\ell}}^2+\sum_{\ell=0}^{j-1}\|\pa_t^\ell p(0)\|_{H^{2N-2\ell-1}}^2+\sum_{\ell=0}^j\|\pa_t^\ell\theta(0)\|_{H^{2N-2\ell}}^2.
\end{equation}
The following lemma is a minor modification of Lemma 5.2 in \cite{GT1}, so we omit the details of the proof.
\begin{lemma}\label{lem:preliminary} For $j=0, \ldots,N$,
\begin{equation}\label{est:pa t Dt u}
\|\pa_t^ju(0)-D_t^ju(0)\|_{H^{2N-2j}}^2\le P_j(\mathfrak{E}_0^j(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{equation}
and
\begin{equation} \label{est:pa t Dt theta}
\|\pa_t^j(\theta(0)\nabla_{\mathscr{A}_0}y_{3,0})-D_t^j(\theta(0)\nabla_{\mathscr{A}_0}y_{3,0})\|_{H^{2N-2j}}^2\le P_j(\mathfrak{E}_0^j(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{equation}
for $P_j(\cdot,\cdot)$ a polynomial such that $P_j(0,0)=0$. For $F^1(u,\theta,\eta)$, $F^3(u,\theta,\eta)$, $F^4(u,\theta,\eta)$ and $F^5(u,\theta,\eta)$ defined by \eqref{equ:force u p theta} and $j=0,\ldots,N-1$, we have that
\begin{equation}\label{est:F 0j}
\mathfrak{F}_0^j(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\le P_j(\mathfrak{E}_0^{j+1}(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{equation}
for $P_j(\cdot,\cdot)$ a polynomial such that $P_j(0,0)=0$. For $j=1,\ldots,N-1$, let $F^{1,j}(0)$, $F^{3,j}(0)$, $F^{4,j}(0)$ and $F^{5,j}(0)$ be determined by \eqref{equ:force 1}, \eqref{equ:force 2} and \eqref{equ:force u p theta}. Then
\begin{eqnarray}\label{est:force 1345}
\begin{aligned}
&\|F^{1,j}(0)\|_{H^{2N-2j-2}}^2+\|F^{3,j}(0)\|_{H^{2N-2j-2}}^2\\
&\quad+\|F^{4,j}(0)\|_{H^{2N-2j-3/2}}^2+\|F^{5,j}(0)\|_{H^{2N-2j-3/2}}^2\\
&\le P_j(\mathfrak{E}_0^{j+1}(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{aligned}
\end{eqnarray}
for $P_j(\cdot,\cdot)$ a polynomial such that $P_j(0,0)=0$. For $j=1,\ldots,N-1$,
\begin{equation}
\left\|\sum_{\ell=0}^j{j \choose \ell}\pa_t^\ell\mathscr{N}(0)\cdot\pa_t^{j-\ell}u(0)\right\|_{H^{2N-2j+3/2}}^2\le P_j(\mathfrak{E}_0^j(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{equation}
for $P_j(\cdot,\cdot)$ a polynomial such that $P_j(0,0)=0$. Also,
\begin{equation}\label{est:pat eta0}
\|u_0\cdot\mathscr{N}_0\|_{H^{2N-1/2}(\Sigma)}^2\le \|u_0\|_{H^{2N}}^2\left(1+\|\eta_0\|_{H^{2N+1/2}}^2\right).
\end{equation}
\end{lemma}
This lemma allows us to construct all of the initial data $\pa_t^ju(0)$, $\pa_t^j\theta(0)$, $\pa_t^j\eta(0)$ for $j=0,\ldots,N$ and $\pa_t^jp(0)$ for $j=0,\ldots,N-1$. Assume that $\mathscr{E}_0<\infty$. As before, we will iteratively construct the initial data, but this time we will use Lemma \ref{lem:preliminary}. We define $\pa_t\eta(0)=u_0\cdot\mathscr{N}_0$, where the trace of $u_0$ on $\Sigma$ belongs to $H^{2N-1/2}(\Sigma)$ and $\mathscr{N}_0$ is determined by $\eta_0$. Estimate \eqref{est:pat eta0} implies that $\|\pa_t\eta(0)\|_{H^{2N-1/2}}^2\lesssim P(\mathscr{E}_0)$ for a polynomial $P(\cdot)$ such that $P(0)=0$, and hence that $\mathfrak{E}_0^0(u,p,\theta)+\mathfrak{E}_0^1(\eta)\lesssim P(\mathscr{E}_0)$. Then \eqref{est:F 0j} with $j=0$ implies that
\begin{equation}
\mathfrak{F}_0^0(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\le P_0(\mathfrak{E}_0^0(\eta),\mathfrak{E}_0^0(u,p,\theta))\lesssim P(\mathscr{E}_0)
\end{equation}
for a polynomial $P(\cdot)$ such that $P(0)=0$. Note that in these estimates and in the estimates below, the polynomials $P(\cdot)$ of $\mathscr{E}_0$ are allowed to change from line to line, but they always satisfy $P(0)=0$. We now give the iterative definition of $\pa_t^jp(0)$, $\pa_t^{j+1}u(0)$, $\pa_t^{j+1}\theta(0)$ and $\pa_t^{j+2}\eta(0)$ for $0\le j\le N-2$.
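Before giving the general step, we note for orientation what the lowest case $j=0$ produces; this is only a restatement of the construction below in the lowest order. The pressure $p(0)$ is defined as the solution of \eqref{equ:poisson} with data $\mathfrak{f}^1(F^{1,0}(0),u_0)$, $\mathfrak{f}^2(F^{4,0}(0),u_0)$, $\mathfrak{f}^3(F^{1,0}(0),u_0)$, and then
\[
\pa_t\theta(0)=\mathfrak{E}^{02}\big(F^{3,0}(0),\theta_0\big)=\Delta_{\mathscr{A}_0}\theta_0+F^3(0),\qquad
D_tu(0)=\mathfrak{E}^{01}\big(F^{1,0}(0)+\theta_0\nabla_{\mathscr{A}_0}y_{3,0},u_0,p(0)\big),
\]
\[
\pa_t^{2}\eta(0)=\mathscr{N}_0\cdot\pa_tu(0)+\pa_t\mathscr{N}(0)\cdot u_0,
\]
with $\pa_tu(0)$ then determined from $D_tu(0)$, their difference being controlled via \eqref{est:pa t Dt u}.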
Now suppose that $\pa_t^\ell u(0)$, $\pa_t^\ell\theta(0)$ are known for $\ell=0,\ldots,j$, $\pa_t^\ell \eta(0)$ is known for $\ell=0,\ldots,j+1$, $\pa_t^\ell p(0)$ is known for $\ell=0, \ldots,j-1$ (with the exception of $p(0)$ when $j=0$) and
\begin{eqnarray}\label{est:e0j f0j}
\begin{aligned}
&\mathfrak{E}_0^j(u,p,\theta)+\mathfrak{E}_0^{j+1}(\eta)\\
&\quad+\mathfrak{F}_0^j(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\\
&\lesssim P(\mathscr{E}_0).
\end{aligned}
\end{eqnarray}
According to \eqref{est:force 1345} and \eqref{est:pa t Dt u}, we know that
\begin{eqnarray}\label{est:f j1234 Dt u}
\begin{aligned}
&\|D_t^ju(0)\|_{H^{2N-2j}}^2+\|F^{1,j}(0)\|_{H^{2N-2j-2}}^2+\|F^{3,j}(0)\|_{H^{2N-2j-2}}^2\\
&\quad+\|F^{4,j}(0)\|_{H^{2N-2j-3/2}}^2+\|F^{5,j}(0)\|_{H^{2N-2j-3/2}}^2\\
&\lesssim P(\mathscr{E}_0).
\end{aligned}
\end{eqnarray}
By virtue of the estimate \eqref{equ:initial g1 g4}, we have
\begin{eqnarray}
\begin{aligned}
&\|\mathfrak{f}^1(F^{1,j}(0),D_t^ju(0))\|_{H^{2N-2j-3}}^2+\|\mathfrak{f}^2(F^{4,j}(0),D_t^ju(0))\|_{H^{2N-2j-3/2}}^2\\
&\quad+\|\mathfrak{f}^3(F^{1,j}(0),D_t^ju(0))\|_{H^{2N-2j-5/2}}^2\\
&\lesssim P(\mathscr{E}_0).
\end{aligned}
\end{eqnarray}
This allows us to define $\pa_t^jp(0)$ as the solution to \eqref{equ:poisson} with $f^1$, $f^2$, $f^3$ replaced by $\mathfrak{f}^1$, $\mathfrak{f}^2$, $\mathfrak{f}^3$. Proposition 2.15 in \cite{LW} with $k=2N$ and $r=2N-2j-1$ implies that
\begin{equation}\label{est:pa t j p}
\|\pa_t^jp(0)\|_{H^{2N-2j-1}}^2\lesssim P(\mathscr{E}_0).
\end{equation}
Now we define
\begin{equation}
\pa_t^{j+1}\theta(0)=\mathfrak{E}^{02}(F^{3,j}(0),\pa_t^j\theta(0)) \in H^{2N-2j-2}.
\end{equation}
Then according to \eqref{est:e0j f0j} and \eqref{est:f j1234 Dt u}, we have that
\begin{equation}
\|\pa_t^{j+1}\theta(0)\|_{H^{2N-2j-2}}^2\lesssim P(\mathscr{E}_0).
\end{equation}
Now the estimates \eqref{equ:initial G1 v q}, \eqref{est:e0j f0j} and \eqref{est:f j1234 Dt u} allow us to define
\begin{equation}
D_t^{j+1}u(0):=\mathfrak{E}^{01}\left(F^{1,j}(0)+\pa_t^j(\theta(0)\nabla_{\mathscr{A}_0}y_{3,0}), D_t^ju(0),\pa_t^jp(0)\right) \in H^{2N-2j-2},
\end{equation}
and then according to \eqref{est:pa t Dt u}, we have
\begin{equation}\label{est:pa t j+1 u}
\|\pa_t^{j+1}u(0)\|_{H^{2N-2j-2}}^2\le P(\mathscr{E}_0).
\end{equation}
Now the estimates \eqref{est:pat eta0}, \eqref{est:e0j f0j} and \eqref{est:pa t j+1 u} allow us to define
\[
\pa_t^{j+2}\eta(0)=\sum_{\ell=0}^{j+1}{j+1 \choose \ell}\pa_t^\ell\mathscr{N}(0)\cdot\pa_t^{j+1-\ell}u(0),
\]
and they imply the estimate
\begin{equation}\label{est:pa t j+2 eta}
\|\pa_t^{j+2}\eta(0)\|_{H^{2N-2j-5/2}}^2\le P(\mathscr{E}_0).
\end{equation}
Thus \eqref{est:e0j f0j}, together with \eqref{est:pa t j p}--\eqref{est:pa t j+2 eta}, implies that
\[
\mathfrak{E}_0^{j+1}(u,p,\theta)+\mathfrak{E}_0^{j+2}(\eta)\le P(\mathscr{E}_0),
\]
and then \eqref{est:F 0j} implies that
\[
\mathfrak{F}_0^{j+1}(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\le P(\mathscr{E}_0).
\]
Hence we deduce the estimate
\begin{align*}
&\mathfrak{E}_0^{j+1}(u,p,\theta)+\mathfrak{E}_0^{j+2}(\eta)\\
&\quad+\mathfrak{F}_0^{j+1}(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\\
&\le P(\mathscr{E}_0).
\end{align*}
In particular, for $j=N-2$ we have
\begin{eqnarray}\label{est:e0 N-1 f0 N-1}
\begin{aligned}
&\mathfrak{E}_0^{N-1}(u,p,\theta)+\mathfrak{E}_0^N(\eta)\\
&\quad+\mathfrak{F}_0^{N-1}(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\\
&\le P(\mathscr{E}_0).
\end{aligned} \end{eqnarray} Then, we only need to define $\pa_t^{N-1}p(0)$, $\pa_t^N\theta(0)$ and $\pa_t^Nu(0)$. Like the construction after Lemma \ref{lem:v,q,G}, we need the compatibility conditions on $u_0$ and $\eta_0$. Now we have constructed $\pa_t^jp(0)$ for $j=0,\ldots,N-2$, $\pa_t^ju(0)$, $\pa_t^j\theta(0)$, $F^{1,j}(0)$, $F^{3,j}(0)$, $F^{4,j}(0)$, $F^{5,j}(0)$ for $j=0,\ldots,N-1$, and $\pa_t^j\eta(0)$ for $j=0,\ldots,N$. We say that $u_0$ and $\eta_0$ satisfy the $N$-th order compatibility conditions if \begin{eqnarray}\label{cond:compatibility N} \left\{ \begin{aligned} &\nabla_{\mathscr{A}_0}\cdot(D_t^ju(0))=0\quad &\text{in}\thinspace\Om,\\ &D_t^ju(0)=0\quad &\text{on}\thinspace\Sigma_b,\\ &\Pi_0\left(F^{4,j}(0)+\mathbb{D}_{\mathscr{A}_0}D_t^ju(0)\mathscr{N}_0\right)=0\quad &\text{on}\thinspace\Sigma, \end{aligned} \right. \end{eqnarray} for $j=0,\ldots,N-1$, where $\Pi_0$ is the projection defined as in \eqref{def:projection} and $D_t$ be the operator defined by \eqref{def:Dt}. Note that if $u_0$ and $\eta_0$ satisfy \eqref{cond:compatibility N}, then the $j$-th compatibility condition \eqref{cond:compatibility j} is satisfied for $j=0,\ldots,N-1$. Then the construction of $\pa_t^{N-1}p(0)$ is the same as \cite{GT1} using the compatibility condition \eqref{cond:compatibility N} and the elliptic theory of $\mathscr{A}$- Poisson equations \eqref{equ:poisson} derived by Y. Guo and I. Tice in \cite{GT1} and L. Wu in \cite{LW}. And \begin{equation}\label{est:pa t N-1 p} \|\pa_t^{N-1}p(0)\|_{H^1}^2\le P(\mathscr{E}_0). \end{equation} Then we set $\pa_t^N\theta(0)=\mathfrak{E}^{02}(\pa_t^{N-1}\theta(0),F^{3,N-1}(0))\in H^0$ due to \eqref{equ:e02} and \eqref{est:force 1345}, and set $D_t^Nu(0)=\mathfrak{E}^{01}(F^{1,N-1}(0)+\pa_t^{N-1}(\theta \nabla_{\mathscr{A}_0}y_{3,0}), D_t^{N-1}u(0),\pa_t^{N-1}p(0))\in H^0$ due to \eqref{equ:initial G1 v q} and Lemma \ref{lem:preliminary}. And $D_t^Nu(0)\in \mathscr{Y}(0)$ is guaranteed by the construction of $\pa_t^{N-1}p(0)$. As before, we have \begin{equation}\label{est:pa t N u theta} \|\pa_t^Nu(0)\|_{H^0}^2+\|\pa_t^N\theta(0)\|_{H^0}^2\lesssim P(\mathscr{E}_0). \end{equation} This completes the construction of initial data. Then summing the estimates \eqref{est:e0 N-1 f0 N-1}, \eqref{est:pa t N-1 p} and \eqref{est:pa t N u theta}, we directly have the following proposition. \begin{proposition}\label{prop:high order initial} Suppose that $u_0$, $\theta_0$ and $\eta_0$ satisfy $\mathscr{E}_0<\infty$. Let the initial data $\pa_t^ju(0)$, $\pa_t^j\theta(0)$, $\pa_t^j\eta(0)$ for $j=0,\ldots,N$ and $\pa_t^jp(0)$ for $j=0,\ldots,N-1$ be given as above. Then \begin{equation}\label{est:high order initial} \mathscr{E}_0\le \mathfrak{E}_0(u,p,\theta)+\mathfrak{E}_0(\eta)\lesssim P(\mathscr{E}_0). \end{equation} Here $\mathfrak{E}_0(\eta)=\mathfrak{E}_0^N(\eta)$, which is defined in \eqref{def:norm eta}. \end{proposition} \subsection{Transport equation} Here we consider the equation \begin{eqnarray}\label{equ:transport} \left\{ \begin{aligned} &\pa_t\eta+u_1\pa_1\eta+u_2\pa_2\eta=u_3 \quad \text{on}\thinspace \Sigma,\\ &\eta(0)=\eta_0. \end{aligned} \right. \end{eqnarray} The local well--posedness of \eqref{equ:transport} has been proved by L. Wu, which is the Theorem 2.17 in \cite{LW}. The idea of his proof is similar to the proof of Theorem 5.4 in \cite{GT1}. In \cite{LW}, L. Wu has proved in Lemma 2.18, that the difference of $\eta$ and $\eta_0$ in a small time period is also small. 
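For the reader's convenience, here is a minimal heuristic sketch (not the precise statement of Lemma 2.18 in \cite{LW}) of why such smallness is expected: integrating \eqref{equ:transport} in time and using that $H^{s}(\Sigma)$ is an algebra for $s$ large enough, one formally has
\begin{equation*}
\|\eta(t)-\eta_0\|_{H^{s}(\Sigma)}
=\left\|\int_0^t\left(u_3-u_1\pa_1\eta-u_2\pa_2\eta\right)d\tau\right\|_{H^{s}(\Sigma)}
\lesssim t\sup_{0\le \tau\le t}\|u(\tau)\|_{H^{s}(\Sigma)}\left(1+\|\eta(\tau)\|_{H^{s+1}(\Sigma)}\right),
\end{equation*}
so the difference is controlled by a factor of $t$ times quantities that remain bounded on the existence interval.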
\subsection{Forcing estimates}
For the estimates of the full nonlinear problem in the next section, we need some forcing quantities. Besides $\mathfrak{F}$ and $\mathfrak{F}_0$, which were defined in \eqref{def:force F F0}, we define the following quantities:
\begin{align*}
\mathcal{F}:&=\sum_{j=0}^{N-1}\left(\|\pa_t^j F^1\|_{L^2H^{2N-2j-1}}^2+\|\pa_t^j F^3\|_{L^2H^{2N-2j-1}}^2\right)+\|\pa_t^NF^1\|_{L^2H^0}^2+\|\pa_t^NF^3\|_{L^2H^0}^2\\
&\quad+\sum_{j=0}^N\left(\|\pa_t^jF^4\|_{L^\infty H^{2N-2j-1/2}(\Sigma)}^2+\|\pa_t^jF^5\|_{L^\infty H^{2N-2j-1/2}(\Sigma)}^2\right),
\end{align*}
\begin{align*}
\mathcal{H}:&=\sum_{j=0}^{N-1}\left(\|\pa_t^j F^1\|_{L^2H^{2N-2j-1}}^2+\|\pa_t^j F^3\|_{L^2H^{2N-2j-1}}^2\right)\\
&\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jF^4\|_{L^2 H^{2N-2j-1/2}(\Sigma)}^2+\|\pa_t^jF^5\|_{L^2 H^{2N-2j-1/2}(\Sigma)}^2\right).
\end{align*}
The following theorem is similar to Theorem 2.21 in \cite{LW} with obvious modifications.
\begin{theorem}\label{lem:forcing estimates}
The forcing terms satisfy the estimates
\begin{eqnarray}
\mathfrak{F}&\lesssim& P(\mathfrak{K}(\eta))+P(\mathfrak{K}_N(u,\theta)),\\
\mathfrak{F}_0&\lesssim& P(\mathscr{E}_0),\\
\mathcal{F}&\lesssim& P(\mathfrak{K}(\eta))+P(\mathfrak{K}_N(u,\theta)),\\
\mathcal{H}&\lesssim& T\left(P(\mathfrak{K}(\eta))+P(\mathfrak{K}_N(u,\theta))\right).
\end{eqnarray}
\end{theorem}
\begin{proof}
The proof of this theorem is the same as the proof of Theorem $2.21$ in \cite{LW}, so we omit the details here.
\end{proof}
\section{Local well-posedness for the nonlinear problem}
\subsection{Construction of approximate solutions}
In order to solve \eqref{equ:NBC}, we will construct a sequence of approximate solutions $(u^m, p^m,\theta^m, \eta^m)$ and then take the limit $m\to\infty$. First, we construct an initial triple $(u^0,\theta^0,\eta^0)$ as a starting point, and then we iteratively define the sequence $(u^m,p^m,\theta^m,\eta^m)$ for $m\ge1$. Suppose that the initial data $(u_0,\theta_0,\eta_0)$ have been given. According to Lemma A.5 in \cite{GT1}, there exist $u^0$ and $\theta^0$ defined in $\Om\times [0,\infty)$ with $\pa_t^ju^0(0)=\pa_t^ju(0)$, $\pa_t^j\theta^0(0)=\pa_t^j\theta(0)$, for $j=0,\ldots,N$, satisfying
\begin{equation}\label{est:sequence u0 theta0}
\mathfrak{K}_N(u^0,\theta^0)\lesssim P(\mathscr{E}_0).
\end{equation}
Then we consider the transport equation \eqref{equ:transport} with $u$ replaced by $u^0$. By Theorem 2.17 in \cite{LW}, whose hypotheses are satisfied by \eqref{est:high order initial} and \eqref{est:sequence u0 theta0}, there exists an $\eta^0$ defined on $\Sigma\times [0,T_0)$, which satisfies $\pa_t^j\eta^0(0)=\pa_t^j\eta(0)$ for $j=0,\ldots,N$ as well as
\[
\mathfrak{K}(\eta^0)\lesssim P(\mathscr{E}_0).
\] Then for any integer $m\ge1$, we formally define the sequence $(u^m,p^m,\theta^m,\eta^m)$ on the time interval $[0,T_m)$ as the solutions of system \begin{eqnarray}\label{equ:iteration equation} \left\{ \begin{aligned} &\pa_tu^m-\Delta_{\mathscr{A}^{m-1}}u^m+\nabla_{\mathscr{A}^{m-1}}p^m+\theta^m\nabla_{\mathscr{A}^{m-1}}y_3^{m-1} &\\ &\qquad=\pa_t\bar{\eta}^{m-1}(1+x_3)K^{m-1}\pa_3u^{m-1} -u^{m-1}\cdot\nabla_{\mathscr{A}^{m-1}}u^{m-1}&\text{in}\thinspace\Om,\\ &\mathop{\rm div}\nolimits_{\mathscr{A}^{m-1}}u^m=0&\text{in}\thinspace\Om,\\ &\pa_t\theta^m-\Delta_{\mathscr{A}^{m-1}}\theta^m=\pa_t\bar{\eta}^{m-1}(1+x_3)K^{m-1}\pa_3\theta^{m-1}-u^{m-1}\cdot\nabla_{\mathscr{A}^{m-1}}\theta^{m-1} &\text{in}\thinspace\Om,\\ &S_{\mathscr{A}^{m-1}}(p^m,u^m)\mathscr{N}^{m-1}=\eta^{m-1}\mathscr{N}^{m-1} &\text{on}\thinspace\Sigma,\\ &\nabla_{\mathscr{A}^{m-1}}\theta^m\cdot\mathscr{N}^{m-1}+\theta^m\left|\mathscr{N}^{m-1}\right|=-\left|\mathscr{N}^{m-1}\right| &\text{on}\thinspace\Sigma,\\ &u^m=0,\quad\theta^m=0 &\text{on}\thinspace\Sigma_b, \end{aligned} \right. \end{eqnarray} and \begin{equation} \label{equ:iteration theta} \pa_t\eta^m=u^m\cdot\mathscr{N}^m\quad\text{on}\thinspace\Sigma, \end{equation} where $\mathscr{A}^{m-1}$, $\mathscr{N}^{m-1}$, $K^{m-1}$ are determined in terms of $\eta^{m-1}$ and $\mathscr{N}^m$ is in terms of $\eta^m$, with the initial data $(u^m(0), \theta^m(0), \eta^m(0))=(u_0,\theta_0,\eta_0)$. In the following, we will prove that these sequences can be defined for any integer $m\ge1$ and the existence time $T_m$ does not shrink to $0$ as $m\to \infty$. The following theorem is a modified version of Theorem $2.24$ in \cite{LW}, which improves the estimate \eqref{est:higher regularity} using the energy structure and elliptic estimates. \begin{theorem}\label{thm:boundedness} Suppose $J(0)>\delta>0$. Assume that the initial data $(u_0,\theta_0,\eta_0)$ satisfy $\mathscr{E}_0<\infty$ and $\pa_t^ju(0)$, $\pa_t^j\theta(0)$, $\pa_t^j\eta(0)$, for $j=0,\ldots,N$, are given as above from the Proposition \ref{prop:high order initial}. Then there exists a positive constant $\mathscr{Z}<\infty$ and $0<\bar{T}<1$ depending on $\mathscr{E}_0$, such that if $0<T<\bar{T}$, then there exists a sequence $\{(u^m,p^m,\theta^m,\eta^m)\}_{m=0}^\infty$ (when $m=0$, the sequence should be considered as $(u^0,\theta^0,\eta^0)$) satisfying the iteration equation \eqref{equ:iteration equation} within the time interval $[0,T)$ and the following properties: \begin{enumerate}[1.] \item The iteration sequence satisfies \begin{equation} \mathfrak{K}_N(u^m,\theta^m)+\mathfrak{K}(\eta^m)\le \mathscr{Z} \end{equation} for any integer $m\ge0$, where the temporal norm is taken with respect to $[0,T)$.\\ \item $J^m(t)\ge\delta/2$ with $0\le t\le T$, for any integer $m\ge0$. \end{enumerate} \end{theorem} \begin{proof} In this proof, we will follow the path of proof of Theorem 2.24 in \cite{LW}. We will use an infinite induction to prove this theorem. Let us denote the above two assertions as statement $\mathbb{P}_m$. Step $1$. $\mathbb{P}_0$ case. The only modification here is that the construction of $u^0$ and $\theta^0$ reveals that $\mathfrak{K}_N(u^0,\theta^0)\lesssim P(\mathscr{E}_0)$. Then the rest proof of this case is the same as the proof of Theorem $2.24$ in \cite{LW}. Hence, $\mathbb{P}_0$ holds. That is $\mathfrak{K}_N(u^0,\theta^0)+\mathfrak{K}(\eta^0)\le \mathscr{Z}$ with the temporal norm taken with respect to $[0,T)$ and $J^0(t)\ge\delta/2$ for $0\le t\le T$. 
In the following, we suppose that $\mathbb{P}_{m-1}$ holds for $m\ge1$. Then we will prove that $\mathbb{P}_m$ also holds. Step $2$. $\mathbb{P}_m$ case: energy estimates of $\theta^m$ and $u^m$. By Theorem \ref{thm:higher regularity}, the pair $(D_t^Nu^m,\pa_t^Np^m,\pa_t^N\theta^m)$ satisfies the equation \begin{eqnarray} \left\{ \begin{aligned} &\pa_t(D_t^Nu^m)-\Delta_{\mathscr{A}^{m-1}}(D_t^Nu^m)+\nabla_{\mathscr{A}^{m-1}}(\pa_t^Np^m)&\\ &\qquad-\pa_t^N(\theta^m \nabla_{\mathscr{A}^{m-1}}y_3^{m-1})=F^{1,N}\quad &\text{in}\thinspace\Om,\\ &\mathop{\rm div}\nolimits_{\mathscr{A}^{m-1}}(D_t^Nu^m)=0\quad &\text{in}\thinspace\Om,\\ &\pa_t(\pa_t^N\theta^m)-\Delta_{\mathscr{A}^{m-1}}(\pa_t^N\theta^m)=F^{3,N}\quad &\text{in}\thinspace\Om,\\ &S_{\mathscr{A}^{m-1}}(\pa_t^Np^m,D_t^Nu^m)\mathscr{N}^{m-1}=F^{4,N}\quad &\text{on}\thinspace \Sigma,\\ &\nabla_{\mathscr{A}^{m-1}}(\pa_t^N\theta^m)\cdot\mathscr{N}^{m-1}+\pa_t^j\theta^m\left|\mathscr{N}^{m-1}\right|=F^{5,N}\quad &\text{on}\thinspace \Sigma,\\ &D_t^Nu^m=0,\quad \pa_t^N\theta^m=0\quad &\text{on}\thinspace \Sigma_b, \end{aligned} \right. \end{eqnarray} in the weak sense, where $F^{1,N}$, $F^{3,N}$, $F^{4,N}$ and $F^{5,N}$ are given in terms of $u^m$, $p^m$, $\theta^m$, and $u^{m-1}$, $p^{m-1}$, $\theta^{m-1}$, $\eta^{m-1}$. Then for any test function $\phi\in(\mathscr{H}^1_T)^{m-1}$, where $(\mathscr{H}^1_T)^{m-1}$ is the space $\mathscr{H}^1_T$ with $\eta$ replaced by $\eta^{m-1}$, the following holds \begin{align*} \left<\pa_t(\pa_t^N\theta^m), \phi\right>_{\ast}+\left(\pa_t^N\theta^m,\phi\right)_{\mathscr{H}^1_T}+\left(\pa_t^N\theta^m\left|\mathscr{N}^{m-1}\right|,\phi\right)_{L^2H^0(\Sigma)}\\ =\left(F^{3,N},\phi\right)_{\mathscr{H}^0_T}+\left(F^{5,N},\phi\right)_{L^2H^0(\Sigma)}. \end{align*} Therefore, when taking the test function $\phi=\pa_t^N\theta^m$, we have the energy structure \begin{eqnarray} \begin{aligned} &\f12\int_{\Om} J^{m-1}|\pa_t^N\theta^m|^2+\int_0^t\int_{\Om} J^{m-1}|\nabla_{\mathscr{A}^{m-1}}(\pa_t^N\theta^m)|^2+\int_0^t\int_{\Sigma}|\pa_t^N\theta^m|^2\left|\mathscr{N}^{m-1}\right|\\ &=\f12\int_{\Om} J^{m-1}(0)|\pa_t^N\theta^m(0)|^2+\f12\int_0^t\int_{\Om} \pa_tJ^{m-1}|\pa_t^N\theta^m|^2\\ &\quad+\int_0^t\int_{\Om} J^{m-1}F^{3,N}\pa_t^N\theta^m+\int_0^t\int_{\Sigma}F^{5,N}\pa_t^N\theta^m. \end{aligned} \end{eqnarray} By induction hypothesis, \eqref{est:high order initial}, trace theory and Cauchy inequality, we have \begin{eqnarray} \begin{aligned} &\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\|\pa_t^N\theta^m\|_{L^2H^1}^2\\ &\lesssim \sup_{0\le t\le T}\left(\f12\int_{\Om} J^{m-1}|\pa_t^N\theta^m|^2+\int_0^t\int_{\Om} J^{m-1}|\nabla_{\mathscr{A}^{m-1}}(\pa_t^N\theta^m)|^2+\int_0^t\int_{\Sigma}|\pa_t^N\theta^m|^2\right)\\ &\lesssim \f12\int_{\Om} J^{m-1}(0)|\pa_t^N\theta^m(0)|^2+\f12\int_0^T\int_{\Om} \pa_tJ^{m-1}|\pa_t^N\theta^m|^2+\int_0^T\int_{\Om} J^{m-1}F^{3,N}\pa_t^N\theta^m\\ &\quad+\int_0^T\int_{\Sigma}F^{5,N}\pa_t^N\theta^m\\ &\lesssim P(\mathscr{E}_0)+T\mathscr{Z}\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\sqrt{T}\mathscr{Z}\|F^{3,N}\|_{L^2H^0}\|\pa_t^N\theta^m\|_{L^\infty H^0}\\ &\quad+\sqrt{T}\|F^{5,N}\|_{L^\infty H^{-1/2}(\Sigma)}\|\pa_t^N\theta^m\|_{L^2H^{1/2}(\Sigma)}\\ &\lesssim P(\mathscr{E}_0)+T\mathscr{Z}\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\sqrt{T}\|F^{3,N}\|_{L^2H^0}^2\\ &\quad+\sqrt{T}\mathscr{Z}^2\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\sqrt{T}\|F^{5,N}\|_{L^\infty H^{-1/2}(\Sigma)}^2+\sqrt{T}\|\pa_t^N\theta^m\|_{L^2H^{1/2}(\Sigma)}^2 \end{aligned} \end{eqnarray} for a polynomial $P(0)=0$. 
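To make the absorption in the next step transparent, note (heuristically, with universal constants suppressed) that, for example,
\begin{equation*}
\sqrt{T}\mathscr{Z}^2\|\pa_t^N\theta^m\|_{L^\infty H^0}^2\le \frac14\|\pa_t^N\theta^m\|_{L^\infty H^0}^2
\qquad\text{whenever}\qquad T\le \frac{1}{16\mathscr{Z}^4},
\end{equation*}
and the remaining $T$- and $\sqrt{T}$-prefactored terms involving $\pa_t^N\theta^m$ are absorbed in the same way; this is what dictates the choice of $T$ below.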
Taking $T\le \min\{1/4, 1/(16\mathscr{Z}^4)\}$ and absorbing the extra terms on the right-hand side into the left-hand side, we obtain
\begin{equation}\label{est:energy thetam um}
\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\|\pa_t^N\theta^m\|_{L^2H^1}^2\lesssim P(\mathscr{E}_0)+\sqrt{T}\|F^{3,N}\|_{L^2H^0}^2+\sqrt{T}\|F^{5,N}\|_{L^\infty H^{-1/2}(\Sigma)}^2.
\end{equation}
By the induction hypothesis, we have
\begin{align*}
&\|F^{3,N}\|_{L^2H^0}^2\\
&\lesssim P(\mathfrak{K}(\eta^{m-1}))\left(\sum_{j=0}^{N-1}\|\pa_t^ju^m\|_{L^2H^2}^2+\|\pa_t^j\theta^m\|_{L^2H^2}^2\right)+\mathcal{F}\\
&\lesssim P(\mathscr{E}_0+\mathscr{Z})+\mathcal{F},
\end{align*}
\begin{align*}
&\|F^{5,N}\|_{L^\infty H^{-1/2}(\Sigma)}^2\\
&\lesssim P(\mathfrak{K}(\eta^{m-1}))\left(\sum_{j=0}^{N-1}\|\pa_t^ju^m\|_{L^\infty H^2}^2+\|\pa_t^j\theta^m\|_{L^\infty H^2}^2\right)+\mathcal{F}\\
&\lesssim P(\mathscr{E}_0+\mathscr{Z})+\mathcal{F}.
\end{align*}
The energy estimates for $u^m$ are the same as in the proof of Theorem $2.24$ in \cite{LW}. Therefore, we have
\begin{equation}
\|\pa_t^N u^m\|_{L^2H^1}^2+\|\pa_t^N \theta^m\|_{L^2H^1}^2\lesssim P(\mathscr{E}_0)+\sqrt{T}P(\mathscr{E}_0+\mathscr{Z})+\sqrt{T}\mathcal{F}.
\end{equation}
Step 3. $\mathbb{P}_m$ case: elliptic estimates for $\theta^m$, $u^m$. For $0\le n\le N-1$, the $n$-th order heat equation is
\begin{eqnarray}
\left\{
\begin{aligned}
&\pa_t(\pa_t^n\theta^m)-\Delta_{\mathscr{A}^{m-1}}\pa_t^n\theta^m=F^{3,n}\quad &\text{in}\thinspace \Om,\\
&\nabla_{\mathscr{A}^{m-1}}\pa_t^n\theta^m\cdot\mathscr{N}^{m-1}+\pa_t^n\theta^m\left|\mathscr{N}^{m-1}\right|=F^{5,n}\quad &\text{on}\thinspace \Sigma,\\
&\pa_t^n\theta^m=0\quad &\text{on}\thinspace \Sigma_b.
\end{aligned}
\right.
\end{eqnarray}
The elliptic estimate in the proof of Lemma \ref{lem:S lower regularity} reveals that
\begin{equation} \label{est:elliptic thetam}
\|\pa_t^n\theta^m\|_{L^2H^{2N-2n+1}}^2\lesssim \|F^{3,n}\|_{L^2H^{2N-2n-1}}^2+\|\pa_t^{n+1}\theta^m\|_{L^2H^{2N-2n-1}}^2+\|F^{5,n}\|_{L^2H^{2N-2n-1/2}}^2.
\end{equation}
As before,
\begin{align*}
&\|F^{3,n}\|_{L^2H^{2N-2n-1}}^2\\
&\lesssim T P(\mathfrak{K}(\eta^{m-1}))\left(\sum_{j=0}^{N-2}\|\pa_t^j\theta^m\|_{L^\infty H^{2N-2j-1}}^2+\|\pa_t^ju^m\|_{L^\infty H^{2N-2j-1}}^2\right)+\mathcal{H}\\
&\lesssim T P(\mathscr{E}_0+\mathscr{Z})+\mathcal{H},
\end{align*}
\begin{align*}
&\|F^{5,n}\|_{L^2H^{2N-2n-1}}^2\\
&\lesssim T P(\mathfrak{K}(\eta^{m-1}))\left(\sum_{j=0}^{N-2}\|\pa_t^j\theta^m\|_{L^\infty H^{2N-2j-1}}^2+\|\pa_t^ju^m\|_{L^\infty H^{2N-2j-1}}^2\right)+\mathcal{H}\\
&\lesssim T P(\mathscr{E}_0+\mathscr{Z})+\mathcal{H}.
\end{align*}
For the term $\|\pa_t^{n+1}\theta^m\|_{L^2H^{2N-2n-1}}^2$, we estimate backwards from $n=N-1$ to $n=0$. First, when $n=N-1$, this is exactly the energy estimate for $\theta^m$. Then we iteratively use the elliptic estimates \eqref{est:elliptic thetam} from $n=N-2$ to $n=0$ to control all of the terms $\|\pa_t^{n+1}\theta^m\|_{L^2H^{2N-2n-1}}^2$. The elliptic estimate for $u^m$ is the same as in the proof of Theorem $2.24$ in \cite{LW}. Therefore, we have that
\begin{eqnarray}\label{est:elliptic thetam um}
\begin{aligned}
&\sum_{n=0}^{N-1}\left(\|\pa_t^nu^m\|_{L^2H^{2N-2n+1}}^2+\|\pa_t^n\theta^m\|_{L^2H^{2N-2n+1}}^2\right)\\
&\lesssim P(\mathscr{E}_0)+\sqrt{T}P(1+\mathscr{E}_0+\mathscr{Z})+\sqrt{T}\mathcal{F}+\mathcal{H}.
\end{aligned}
\end{eqnarray}
Step $4$. $\mathbb{P}_m$ case: synthesis of estimates for $u^m$ and $\theta^m$.
Combining \eqref{est:energy thetam um}, \eqref{est:elliptic thetam um} and Lemma $2.19$ in \cite{LW}, we deduce that
\begin{equation}
\mathfrak{K}_N(u^m, \theta^m)\lesssim P(\mathscr{E}_0)+\sqrt{T}P(\mathscr{E}_0+\mathscr{Z})+\sqrt{T}\mathcal{F}+\mathcal{H}.
\end{equation}
Then by the induction hypothesis and the forcing estimates of Lemma \ref{lem:forcing estimates}, we have that
\[
\mathcal{F}\lesssim P(\mathfrak{K}(\eta^{m-1}))+P(\mathfrak{K}_N(u^{m-1}, \theta^{m-1}))\lesssim P(\mathscr{Z}),
\]
\[
\mathcal{H}\lesssim T\left(P(\mathfrak{K}(\eta^{m-1}))+P(\mathfrak{K}_N(u^{m-1}, \theta^{m-1}))\right)\lesssim T P(\mathscr{Z}).
\]
Hence we obtain the estimate
\begin{equation}
\mathfrak{K}_N(u^m, \theta^m)\le C\left( P(\mathscr{E}_0)+\sqrt{T}P(\mathscr{E}_0+\mathscr{Z})\right)
\end{equation}
for some universal constant $C>0$. Taking $\mathscr{Z}\ge 2 C P(\mathscr{E}_0)$ and then taking $T$ sufficiently small, depending on $\mathscr{Z}$, we can achieve that $\mathfrak{K}_N(u^m, \theta^m)\le 2 C P(\mathscr{E}_0)\le \mathscr{Z}$.
Step 5. $\mathbb{P}_m$ case: estimate for $\eta^m$ and $J^m(t)$. These estimates are exactly the same as in the proof of Theorem $2.24$ in \cite{LW}, so we omit the details here. Thus, we can take $\mathscr{Z}=P(\mathscr{E}_0)$ for some polynomial $P(\cdot)$ and $T$ small enough depending on $\mathscr{Z}$ to deduce that
\begin{equation}
\mathfrak{K}_N(u^m,\theta^m)\le\mathscr{Z}
\end{equation}
and
\begin{equation}
J^m(t)\ge\delta/2 \quad \text{for}\thinspace t\in [0,T].
\end{equation}
Hence $\mathbb{P}_m$ holds. By induction, $\mathbb{P}_n$ holds for any integer $n\ge0$.
\end{proof}
\begin{theorem}\label{thm:uniform boundedness}
Assume the same conditions as in Theorem \ref{thm:boundedness}. Then
\begin{equation}
\mathfrak{K}(u^m,p^m,\theta^m)+\mathfrak{K}(\eta^m)\lesssim P(\mathscr{E}_0)
\end{equation}
for a polynomial $P(\cdot)$ satisfying $P(0)=0$.
\end{theorem}
\begin{proof}
From the estimates \eqref{est:higher regularity}, \eqref{est:high order initial}, Lemma \ref{lem:forcing estimates} as well as Theorem $2.17$ in \cite{LW}, we directly have that
\[
\mathfrak{K}(u^m,p^m,\theta^m)+\mathfrak{K}(\eta^m)\lesssim P(\mathscr{E}_0)+P(\mathfrak{K}_N(u^m,\theta^m)+\mathfrak{K}(\eta^m)).
\]
Then, applying Theorem \ref{thm:boundedness}, we have that
\[
\mathfrak{K}(u^m,p^m,\theta^m)+\mathfrak{K}(\eta^m)\lesssim P(\mathscr{E}_0).
\]
\end{proof}
\subsection{Contraction}
According to Theorem \ref{thm:uniform boundedness}, we may extract weakly converging subsequences from $\{(u^m,p^m,\theta^m,\eta^m)\}_{m=0}^\infty$. Unfortunately, the full sequence $\{(u^m,p^m,\theta^m,\eta^m)\}_{m=0}^\infty$ cannot be guaranteed to converge to the same limit. In order to obtain the desired solution to \eqref{equ:NBC} by passing to the limit in \eqref{equ:iteration equation} and \eqref{equ:iteration theta}, we need to establish contraction of the sequence in a suitable norm. For $T>0$, we define the norms
\begin{eqnarray}
\begin{aligned}
\mathfrak{N}(v,q,\Theta; T)&=\|v\|_{L^\infty H^2}^2+\|v\|_{L^2H^3}^2+\|\pa_tv\|_{L^\infty H^0}^2+\|\pa_tv\|_{L^2H^1}^2+\|q\|_{L^\infty H^1}^2+\|q\|_{L^2H^2}^2\\
&\quad+\|\Theta\|_{L^\infty H^2}^2+\|\Theta\|_{L^2H^3}^2+\|\pa_t\Theta\|_{L^\infty H^0}^2+\|\pa_t\Theta\|_{L^2H^1}^2,\\
\mathfrak{M}(\zeta;T)&=\|\zeta\|_{L^\infty H^{5/2}}^2+\|\pa_t\zeta\|_{L^\infty H^{3/2}}^2+\|\pa_t^2\zeta\|_{L^2H^{1/2}}^2,
\end{aligned}
\end{eqnarray}
where the norm $L^pH^k$ denotes $L^p([0,T];H^k(\Om))$ in $\mathfrak{N}$ and $L^p([0,T];H^k(\Sigma))$ in $\mathfrak{M}$.
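The role of these norms is the standard one in an iteration scheme: writing $a_m:=\mathfrak{N}(u^{m+1}-u^m,p^{m+1}-p^m,\theta^{m+1}-\theta^m;T)$, a bound of the form $a_{m+1}\le \frac12 a_m$ yields, for any $m'>m$,
\begin{equation*}
\sqrt{\mathfrak{N}(u^{m'}-u^{m},p^{m'}-p^{m},\theta^{m'}-\theta^{m};T)}
\lesssim \sum_{\ell=m}^{m'-1}\sqrt{a_\ell}
\le \sqrt{a_0}\sum_{\ell=m}^{\infty}2^{-\ell/2}\to 0 \quad\text{as } m\to\infty,
\end{equation*}
so the approximate solutions form a Cauchy sequence in the (weaker) norm $\sqrt{\mathfrak{N}(\cdot,\cdot,\cdot;T)}$, even though the uniform bounds of Theorem \ref{thm:uniform boundedness} are only known in the stronger norms. This is a schematic justification; the triangle inequality for $\sqrt{\mathfrak{N}}$ is used up to a constant.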
The next theorem is not only used to prove the contraction of approximate solutions, but also used to verify the uniqueness of solutions to \eqref{equ:NBC}. To avoid confusion with $\{(u^m,p^m,\theta^m,\eta^m)\}$, we refer to velocities as $v^j$, $w^j$, pressures as $q^j$, temperatures as $\Theta^j$, $\vartheta^j$, and surface functions as $\zeta^j$ for $j=1,2$. \begin{theorem}\label{thm:contraction} For $j=1,2$, suppose that $v^j$, $q^j$, $\Theta^j$, $w^j$, $\vartheta^j$ and $\zeta^j$ satisfy the initial data $\pa_t^kv^1(0)=\pa_t^kv^2(0)$, $\pa_t^k\Theta^1(0)=\pa_t^k\Theta^2(0)$, for $k=0,1$, $q^1(0)=q^2(0)$ and $\zeta^1(0)=\zeta^2(0)$, and that the following system holds: \begin{eqnarray}\label{equ:difference} \left\{ \begin{aligned} &\pa_tv^j-\Delta_{\mathscr{A}^j}v^j+\nabla_{\mathscr{A}^j}q^j-\Theta^j\nabla_{\mathscr{A}^j}y_3^j=\pa_t\bar{\zeta}^j(1+x_3)K^j\pa_3w^j\\ &\qquad-w^j\cdot\nabla_{\mathscr{A}^j}w^j\quad &\text{in}\thinspace \Om,\\ &\mathop{\rm div}\nolimits_{\mathscr{A}^j}v^j=0 \quad &\text{in}\thinspace \Om,\\ &\pa_t\Theta^j-\Delta_{\mathscr{A}^j}\Theta^j=\pa_t\bar{\zeta}^j(1+x_3)K^j\pa_3\vartheta^j-w^j\cdot\nabla_{\mathscr{A}^j}\vartheta^j\quad &\text{in}\thinspace \Om,\\ &S_{\mathscr{A}^j}(q^j,v^j)\mathscr{N}^j=\zeta^j\mathscr{N}^j\quad &\text{on}\thinspace \Sigma,\\ &\nabla_{\mathscr{A}^j}\Theta^j\cdot\mathscr{N}^j+\Theta^j\left|\mathscr{N}^j\right|=-\left|\mathscr{N}^j\right|\quad &\text{on}\thinspace \Sigma,\\ &v^j=0,\quad \Theta^j=0\quad &\text{on}\thinspace \Sigma_b,\\ &\pa_t\zeta^j=w^j\cdot\mathscr{N}^j\quad &\text{on}\thinspace \Sigma, \end{aligned} \right. \end{eqnarray} where $\mathscr{A}^j$, $\mathscr{N}^j$, $K^j$ are determined by $\zeta^j$. Assume that $\mathfrak{K}(v^j,q^j,\Theta^j)$, $\mathfrak{K}(w^j,0,\vartheta^j)$ and $\mathfrak{K}(\zeta^j)$ are bounded by $\mathscr{Z}$. Then there exists $0<T_1<1$ such that for any $0<T<T_1$, then we have \begin{equation}\label{est:n} \mathfrak{N}(v^1-v^2,q^1-q^2,\Theta^1-\Theta^2;T)\le \f12\mathfrak{N}(w^1-w^2,0,\vartheta^1-\vartheta^2;T), \end{equation} \begin{equation}\label{est:m} \mathfrak{M}(\zeta^1-\zeta^2;T)\lesssim \mathfrak{N}(w^1-w^2,0,\vartheta^1-\vartheta^2;T). \end{equation} \end{theorem} \begin{proof} This proof follows the path of Theorem $6.2$ in \cite{GT1}. First, we define $v=v^1-v^2$, $w=w^1-w^2$, $\Theta=\Theta^1-\Theta^2$, $\vartheta=\vartheta^1-\vartheta^2$, $q=q^1-q^2$. Step 1. Energy evolution for differences. Like the proof of Theorem $6.2$ in \cite{GT1}, we can derive the PDE satisfied by $v$, $q$ and $\Theta$: \begin{eqnarray} \left\{ \begin{aligned} &\pa_tv+\mathop{\rm div}\nolimits_{\mathscr{A}^1}S_{\mathscr{A}^1}(q,v)-\Theta \nabla_{\mathscr{A}^1}y_3^1=\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2)+H^1 \quad &\text{in}\thinspace \Om,\\ &\mathop{\rm div}\nolimits_{\mathscr{A}^1}v=H^2 \quad &\text{in}\thinspace \Om,\\ &\pa_t\Theta-\Delta_{\mathscr{A}^1}\Theta=\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2)+H^3 \quad &\text{in}\thinspace \Om,\\ &S_{\mathscr{A}^1}(q,v)\mathscr{N}^1=\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2\mathscr{N}^1+H^4 \quad &\text{on}\thinspace \Sigma,\\ &\nabla_{\mathscr{A}^1}\Theta\cdot\mathscr{N}^1+\Theta\left|\mathscr{N}^1\right|=-\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2\cdot\mathscr{N}^1+H^5 \quad &\text{on}\thinspace \Sigma,\\ &v=0,\quad \Theta=0 \quad &\text{on}\thinspace \Sigma_b,\\ &v(t=0)=0,\quad \Theta(t=0)=0, \end{aligned} \right. 
\end{eqnarray} and the PDE satisfied by $\pa_tv$, $\pa_tq$, $\pa_t\Theta$ from taking temporal derivative for the above system: \begin{eqnarray} \left\{ \begin{aligned} &\pa_t(\pa_t v)+\mathop{\rm div}\nolimits_{\mathscr{A}^1}S_{\mathscr{A}^1}(\pa_t q,\pa_t v)-\pa_t(\Theta \nabla_{\mathscr{A}^1}y_3^1)&\\ &\qquad=\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{\pa_t(\mathscr{A}^1-\mathscr{A}^2)}v^2)+\tilde{H}^1 \thinspace &\text{in}\thinspace \Om,\\ &\mathop{\rm div}\nolimits_{\mathscr{A}^1}\pa_t v=\tilde{H}^2 \thinspace &\text{in}\thinspace \Om,\\ &\pa_t(\pa_t\Theta)-\Delta_{\mathscr{A}^1}\pa_t\Theta=\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{(\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2)}\Theta^2)+\tilde{H}^3 \thinspace &\text{in}\thinspace \Om,\\ &S_{\mathscr{A}^1}(\pa_t q,\pa_t v)\mathscr{N}^1=\mathbb{D}_{(\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2)}v^2\mathscr{N}^1+\tilde{H}^4 \thinspace &\text{on}\thinspace \Sigma,\\ &\nabla_{\mathscr{A}^1}\pa_t\Theta\cdot\mathscr{N}^1+\pa_t\Theta\left|\mathscr{N}^1\right|=-\nabla_{\pa_t(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2\cdot\mathscr{N}^1+\tilde{H}^5 \thinspace &\text{on}\thinspace \Sigma,\\ &\pa_tv=0,\quad \pa_t\Theta=0 \thinspace &\text{on}\thinspace \Sigma_b,\\ &\pa_tv(t=0)=0,\quad \pa_t\Theta(t=0)=0, \end{aligned} \right. \end{eqnarray} where $H^2$, $H^4$, $\tilde{H}^2$ and $\tilde{H}^4$ have been given by Y. Guo and I. Tice in \cite{GT1}, \begin{align*} H^1&=\Theta^2\nabla_{\mathscr{A}^1-\mathscr{A}^2}y_3^1+\Theta^2\nabla_{\mathscr{A}^2}(y_3^1-y_3^2)+\mathop{\rm div}\nolimits_{\mathscr{A}^1-\mathscr{A}^2}(\mathbb{D}_{\mathscr{A}^2}v^2)-\nabla_{\mathscr{A}^1-\mathscr{A}^2}q^2\\ &\quad+\pa_t\bar{\zeta}^1(1+x_3)K^1(\pa_3w^1-\pa_3w^2)+(\pa_t\bar{\zeta}^1-\pa_t\bar{\zeta}^2)(1+x_3)K^1\pa_3w^2\\ &\quad+\pa_t\bar{\zeta}^1(1+x_3)(K^1-K^2)\pa_3w^2-(w^1-w^2)\cdot\nabla_{\mathscr{A}^1}w^1-w^2\cdot\nabla_{\mathscr{A}^1}(w^1-w^2)\\ &\quad-w^2\cdot\nabla_{\mathscr{A}^1-\mathscr{A}^2}w^2,\\ H^3&=\mathop{\rm div}\nolimits_{\mathscr{A}^1-\mathscr{A}^2}(\nabla_{\mathscr{A}^2}\Theta^2)+\pa_t\bar{\zeta}^1(1+x_3)K^1(\pa_3\vartheta^1-\pa_3\vartheta^2)\\ &\quad+(\pa_t\bar{\zeta^1}-\pa_t\bar{\zeta}^2)(1+x_3)K^1\pa_3\vartheta^2+\pa_t\bar{\zeta}^1(K^1-K^2)\pa_3w^2-(w^1-w^2)\cdot\nabla_{\mathscr{A}^1}\vartheta^1\\ &\quad-w^2\cdot\nabla_{\mathscr{A}^1}(\vartheta^1-\vartheta^2)-w^2\cdot\nabla_{\mathscr{A}^1-\mathscr{A}^2}\vartheta^2,\\ H^5&=-\nabla_{\mathscr{A}^2}\Theta^2\cdot(\mathscr{N}^1-\mathscr{N}^2)-\Theta^2\left(\left|\mathscr{N}^1\right|-\left|\mathscr{N}^2\right|\right),\\ \tilde{H}^1&=\pa_tH^1+\mathop{\rm div}\nolimits_{\pa_t\mathscr{A}^1}(\mathbb{D}_{\mathscr{A}^1-\mathscr{A}^2}v^2)+\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{\mathscr{A}^1-\mathscr{A}^2}\pa_tv^2)+\mathop{\rm div}\nolimits_{\pa_t\mathscr{A}^1}(\mathbb{D}_{\mathscr{A}^1}v)\\ &\quad+\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{\pa_t\mathscr{A}^1}v)-\nabla_{\pa_t\mathscr{A}^1}q,\\ \tilde{H}^3&=\pa_tH^3+\mathop{\rm div}\nolimits_{\pa_t\mathscr{A}^1}(\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2)+\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\pa_t\Theta^2)+\mathop{\rm div}\nolimits_{\pa_t\mathscr{A}^1}\nabla_{\mathscr{A}^1}\Theta\\ &\quad+\mathop{\rm div}\nolimits_{\mathscr{A}^1}\nabla_{\pa_t\mathscr{A}^1}\Theta,\\ \tilde{H}^5&=\pa_tH^5-\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\pa_t\Theta^2\cdot\mathscr{N}^1-\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2\cdot\pa_t\mathscr{N}^1-\nabla_{\mathscr{A}^1}\Theta\cdot\pa_t\mathscr{N}^1\\ 
&\quad-\nabla_{\pa_t\mathscr{A}^1}\Theta\cdot\mathscr{N}^1-\Theta\pa_t\left|\mathscr{N}^1\right|. \end{align*} Then we can deduce the equations \begin{eqnarray}\label{equ:evolution difference} \begin{aligned} &\f12\int_{\Om}|\pa_tv|^2J^1(t)+\f12\int_0^t\int_{\Om}|\mathbb{D}_{\mathscr{A}^1}\pa_tv|^2J^1\\ &=\f12\int_0^t\int_{\Om}|\pa_tv|^2(\pa_tJ^1K^1)J^1+\int_0^t\int_{\Om}\pa_t(\Theta \nabla_{\mathscr{A}^1}y_3^1)\cdot\pa_tv J^1\\ &\quad+\int_0^t\int_{\Om}J^1(\tilde{H}^1\cdot \pa_tv+\tilde{H}^2\pa_tq)\\ &\quad-\f12\int_0^t\int_{\Om}J^1\mathbb{D}_{\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2}v^2:\mathbb{D}_{\mathscr{A}^1}\pa_t v-\int_0^t\int_{\Sigma}\tilde{H}^3\cdot\pa_tv,\\ &\f12\int_{\Om}|\pa_t\Theta|^2J^1(t)+\int_0^t\int_{\Om}|\nabla_{\mathscr{A}^1}\pa_t\Theta|^2J^1+\int_0^t\int_{\Sigma}|\pa_t\Theta|^2\left|\mathscr{N}^1\right|\\ &=\f12\int_0^t\int_{\Om}|\pa_t\Theta|^2(\pa_tJ^1K^1)J^1+\int_0^t\int_{\Om}J^1\tilde{H}^3\cdot \pa_t\Theta\\ &\quad-\int_0^t\int_{\Om}J^1\nabla_{\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2}\Theta^2\cdot\nabla_{\mathscr{A}^1}\pa_t \Theta+\int_0^t\int_{\Sigma}\tilde{H}^5\cdot\pa_t\Theta. \end{aligned} \end{eqnarray} Step 2. Estimates for the forcing terms. Now we need to estimate the forcing terms that appear on the right-hand sides of \eqref{equ:evolution difference}. Throughout this section, $P(\cdot)$ is written as a polynomial such that $P(0)=0$, which allows to be changed from line to line. The estimates for $\|\tilde{H}^1\|_0$, $\|\tilde{H}^2\|_0$, $\|\pa_t\tilde{H}^2\|_0$, $\|\tilde{H}^4\|_{-1/2}$, $\|H^1\|_r$, $\|H^2\|_{r+1}$, $\|H^4\|_{r+1/2}$, $\|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2)\|_r$ and $\|\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2\mathscr{N}^1\|_{r+1/2}$ have been done by Guo and Tice in \cite{GT1}. So we can directly using them only after replacing $\varepsilon$ by $\mathscr{Z}$. By the same method, we can also deduce that \begin{eqnarray}\label{est:tilde H3} \begin{aligned} \|\tilde{H}^3\|_0&\lesssim P(\sqrt{\mathscr{Z}})\big(\|\Theta\|_2+\|\zeta^1-\zeta^2\|_{3/2}+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{1/2}+\|\pa_t^2\zeta^1-\pa_t^2\zeta^2\|_{1/2}\\ &\quad+\|w^1-w^2\|_0+\|\pa_tw^1-\pa_tw^2\|_0+\|\vartheta^1-\vartheta^2\|_1+\|\pa_t\vartheta^1-\pa_t\vartheta^2\|_1\big), \end{aligned} \end{eqnarray} \begin{equation}\label{est:tilde H5} \|\tilde{H}^5\|_{-1/2}\lesssim P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{1/2}+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{1/2}+\|\Theta\|_2\big), \end{equation} and for $r=0,1$, \begin{eqnarray}\label{est:bound H3} \begin{aligned} \|H^3\|_r &\lesssim P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{r+1/2}+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{r-1/2}\\ &\quad+\|w^1-w^2\|_r+\|\vartheta^1-\vartheta^2\|_{r+1}\big), \end{aligned} \end{eqnarray} \begin{equation} \|H^5\|_{r+1/2}\lesssim P(\sqrt{\mathscr{Z}})\|\zeta^1-\zeta^2\|_{r+3/2}, \end{equation} \begin{equation} \|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{\mathscr{A}^1-\mathscr{A}^2}\Theta^2)\|_r\lesssim P(\sqrt{\mathscr{Z}})\|\zeta^1-\zeta^2\|_{r+3/2}, \end{equation} \begin{equation}\label{est:bound difference theta2} \|\nabla_{\mathscr{A}^1-\mathscr{A}^2}\Theta^2\cdot\mathscr{N}^1\|_{r+1/2}\lesssim P(\sqrt{\mathscr{Z}})\|\zeta^1-\zeta^2\|_{r+3/2}. \end{equation} Step 3. Energy estimates of $\pa_tv$ and $\pa_t\Theta$. 
First, owing to the assumption and Sobolev embeddings, we obtain that \begin{equation}\label{est:bound J K} \|J^1\|_{L^\infty}+\|K^1\|_{L^\infty}\lesssim 1+P(\sqrt{\mathscr{Z}})\quad \text{and}\quad \|\pa_tJ^1\|_{L^\infty}\lesssim P(\sqrt{\mathscr{Z}}). \end{equation} The bounds of \eqref{est:bound J K} reveals that \begin{equation}\label{est:rhs 1 evolution difference} \f12\int_0^t\int_{\Om}|\pa_t\Theta|^2(\pa_tJ^1K^1)J^1\lesssim P(\sqrt{\mathscr{Z}})\f12\int_0^t\int_{\Om}|\pa_t\Theta|^2J^1. \end{equation} In addition, estimates \eqref{est:tilde H3}, \eqref{est:tilde H5} together with trace theory and the Poincar\'e inequality reveals that \begin{eqnarray}\label{est:rhs 2 evolution difference} \begin{aligned} &\int_0^t\int_{\Om}J^1\tilde{H}^3\cdot \pa_t\Theta-\int_0^t\int_{\Om}J^1\nabla_{\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2}\Theta^2\cdot\nabla_{\mathscr{A}^1}\pa_t \Theta-\int_0^t\int_{\Sigma}\tilde{H}^5\cdot\pa_t\Theta\\ &\le \int_0^t\int_{\Om}\|J^1\|_{L^\infty}\left(\|J^1\|_{L^\infty}\|\tilde{H}^3\|_0\|\pa_t\Theta\|_0+\|\nabla_{\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2}\Theta^2\|_0\|\nabla_{\mathscr{A}^1}\pa_t \Theta\|_0\right)\\ &\quad+\int_0^t\|\tilde{H}^5\|_{-1/2}\|\pa_t\Theta\|_{1/2}\\ &\lesssim \int_0^tP(\sqrt{\mathscr{Z}})\sqrt{\mathcal{Z}}, \end{aligned} \end{eqnarray} where we have written \begin{eqnarray} \begin{aligned} \mathcal{Z}:&=\|\zeta^1-\zeta^2\|_{3/2}^2+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{1/2}^2+\|\pa_t^2\zeta^1-\pa_t^2\zeta^2\|_{1/2}^2\\ &\quad+\|w^1-w^2\|_1^2+\|\pa_tw^1-\pa_tw^2\|_1^2+\|\vartheta^1-\vartheta^2\|_1^2+\|\pa_t\vartheta^1-\pa_t\vartheta^2\|_1^2\\ &\quad+\|v\|_2^2+\|q\|_1^2+\|\Theta\|_2^2. \end{aligned} \end{eqnarray} Combining \eqref{est:rhs 1 evolution difference}, \eqref{est:rhs 2 evolution difference}, \eqref{equ:evolution difference}, Poincar\'e inequality of Lemma A.14 in \cite{GT1} and Lemma $2.9$ in \cite{LW} and utilizing Cauchy inequality to absorb $\|\pa_t\Theta\|_1$ into left, yield that \begin{eqnarray} \begin{aligned} &\f12\int_{\Om}|\pa_t\Theta|^2J^1(t)+\f12\int_0^t\|\pa_t\Theta\|_1^2\\ &\le P(\sqrt{\mathscr{Z}})\f12\int_{\Om}|\pa_t\Theta|^2J^1(t)+\int_0^tP(\sqrt{\mathscr{Z}})\mathcal{Z} \end{aligned} \end{eqnarray} Then Gronwall's lemma and Lemma $2.9$ in \cite{LW} imply that \begin{equation} \|\pa_t\Theta\|_{L^\infty H^0}^2+\|\pa_t\Theta\|_{L^2H^1}^2\le \exp\{P(\sqrt{\mathscr{Z}})T\}\int_0^TP(\sqrt{\mathscr{Z}})\mathcal{Z}. \end{equation} Then energy estimates for $\pa_tv$ are likely the same as what Guo and Tice did in \cite{GT1}, so we omit the details. The energy estimates for $\pa_tv$ and $\pa_t\Theta$ allow us to deduce that \begin{eqnarray} \begin{aligned} &\|\pa_tv\|_{L^\infty H^0}^2+\|\pa_tv\|_{L^2H^1}^2+\|\pa_t\Theta\|_{L^\infty H^0}^2+\|\pa_t\Theta\|_{L^2H^1}^2\\ &\le \exp\{P(\sqrt{\mathscr{Z}})T\}\Bigg[P(\sqrt{\mathscr{Z}})\|q\|_{L^2H^0}^2+C\|\pa_t\zeta^1-\pa_t\zeta^2\|_{L^2H^{-1/2}}^2+\int_0^TP(\sqrt{\mathscr{Z}})\mathcal{Z}\\ &\quad+P(\sqrt{\mathscr{Z}})\|q\|_{L^\infty H^0}^2\bigg(\sum_{j=0}^1\|\pa_t^j\zeta^1-\pa_t^j\zeta^2\|_{L^\infty H^{1/2}}+\|v\|_{L^\infty H^1}\bigg)\\ &\quad+P(\sqrt{\mathscr{Z}})\|q\|_{L^2 H^0}^2\bigg(\sum_{j=0}^2\|\pa_t^j\zeta^1-\pa_t^j\zeta^2\|_{L^2 H^{1/2}}+\|v\|_{L^2 H^1}\bigg)\Bigg], \end{aligned} \end{eqnarray} where the temporal norm of $L^\infty$ and $L^2$ are computed over $[0,T]$. Step 4. Elliptic estimates for $v$, $q$ and $\Theta$. 
For $r=0,1$, we combine Proposition \eqref{prop:high regulatrity} with estimates \eqref{est:bound H3}--\eqref{est:bound difference theta2} as well as the bounds of $\|H^1\|_r$, $\|H^2\|_{r+1}$, $\|H^4\|_{r+1/2}$ $\|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2)\|_r$, $\|\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2\mathscr{N}^1\|_{r+1/2}$ done in the proof of Theorem $6.2$ in \cite{GT1} to deduce that \begin{eqnarray} \begin{aligned} &\|v\|_{r+2}^2+\|q\|_{r+1}^2+\|\Theta\|_{r+2}^2\\ &\lesssim C(\eta_0)\bigg(\|\pa_tv\|_r^2+\|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2)\|_r^2+\|H^1\|_r^2+\|H^2\|_{r+1}^2+\|\pa_t\Theta\|_r^2\\ &\quad+\|H^3\|_r^2+\|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{\mathscr{A}^1-\mathscr{A}^2}\Theta^2)\|_r^2+\|\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2\mathscr{N}^1\|_{r+1/2}^2+\|H^4\|_{r+1/2}^2\\ &\quad+\|\nabla_{\mathscr{A}^1-\mathscr{A}^2}\Theta^2\cdot\mathscr{N}^1\|_{r+1/2}^2+\|H^5\|_{r+1/2}^2\bigg)\\ &\lesssim C(\eta_0)\bigg(\|\pa_tv\|_r^2+\|\pa_t\Theta\|_r^2+\|\zeta^1-\zeta^2\|_{r+1/2}^2\\ &\quad+P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{r+3/2}^2+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{r-1/2}^2\\ &\quad+\|w^1-w^2\|_{r+1}^2+\|\vartheta^1-\vartheta^2\|_{r+1}^2\big)\bigg). \end{aligned} \end{eqnarray} Then we take supremum in time over $[0,T]$, when $r=0$, to deduce \begin{eqnarray} \begin{aligned} &\|v\|_{L^\infty H^2}^2+\|q\|_{L^\infty H^1}^2+\|\Theta\|_{L^\infty H^2}^2\\ &\lesssim C(\eta_0)\bigg(\|\pa_tv\|_{L^\infty H^0}^2+\|\pa_t\Theta\|_{L^\infty H^0}^2+\|\zeta^1-\zeta^2\|_{L^\infty H^{1/2}}^2\\ &\quad+P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{L^\infty H^{3/2}}^2+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{L^\infty H^{-1/2}}^2\\ &\quad+\|w^1-w^2\|_{L^\infty H^1}^2+\|\vartheta^1-\vartheta^2\|_{L^\infty H^1}^2\big)\bigg). \end{aligned} \end{eqnarray} Then we integrate over $[0,T]$ when $r=1$ to find \begin{eqnarray} \begin{aligned} &\|v\|_{L^2H^3}^2+\|q\|_{L^2H^2}^2+\|\Theta\|_{L^2H^3}^2\\ &\lesssim C(\eta_0)\bigg(\|\pa_tv\|_{L^2H^1}^2+\|\pa_t\Theta\|_{L^2H^1}^2+\|\zeta^1-\zeta^2\|_{L^2H^{3/2}}^2\\ &\quad+P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{L^2H^{5/2}}^2+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{L^2H^{1/2}}^2\\ &\quad+\|w^1-w^2\|_{L^2H^2}^2+\|\vartheta^1-\vartheta^2\|_{L^2H^2}^2\big)\bigg). \end{aligned} \end{eqnarray} Step 5. Estimates of $\zeta^1-\zeta^2$ and contraction. After making preparations in the above steps, we can derive the contraction results. Since this step follows exactly the same manner as the proof of Theorem $6.2$ in \cite{GT1}, we omit the details here. Hence, we get the \eqref{est:n} and \eqref{est:m}. \end{proof} \subsection{Proof of Theorem \ref{thm:main}} Now we can combine Theorem \ref{thm:uniform boundedness} and Theorem \ref{thm:contraction} to produce a unique strong solution to \eqref{equ:NBC}. It is notable that Theorem \ref{thm:main} can be directly derived from the following theorem, which will be proved in the same manner as the proof of Theorem $6.3$ in \cite{GT1}. \begin{theorem} Assume that $u_0$, $\theta_0$, $\eta_0$ satisfy $\mathscr{E}_0<\infty$ and that the initial data $\pa_t^ju(0)$, etc. are constructed in Section \ref{sec:initial data} and satisfy the $N$-th compatibility conditions \eqref{cond:compatibility N}. 
Then there exists $0<T_0<1$ such that if $0<T\le T_0$, then there exists a solution $(u,p,\theta,\eta)$ to the problem \eqref{equ:NBC} on the time interval $[0,T]$ that achieves the initial data and satisfies \begin{equation}\label{est:bound K} \mathfrak{K}(u,p,\theta)+\mathfrak{K}(\eta)\le CP(\mathscr{E}_0), \end{equation} for a universal constant $C>0$. The solution is unique through functions that achieve the initial data. Moreover, $\eta$ is such that the mapping $\Phi(\cdot,t)$, defined by \eqref{map:phi}, is a $C^{2N-1}$ diffeomorphism for each $t\in [0, T]$. \end{theorem} \begin{proof} Step 1. The sequences of approximate solutions. From the assumptions, we know that the hypothesis of Theorems \ref{thm:boundedness} and \ref{thm:uniform boundedness} is satisfied. These two theorems allow us to produce a sequence of $\{(u^m,p^m,\theta^m,\eta^m)\}_{m=1}^\infty$, which achieve the initial data, satisfy the systems \eqref{equ:iteration equation}, and obey the uniform bounds \begin{equation}\label{est:uniform bound} \sup_{m\ge1}\left(\mathfrak{K}(u^m,p^m,\theta^m)+\mathfrak{K}(\eta^m)\right)\le CP(\mathscr{E}_0). \end{equation} The uniform bounds allow us to take weak and weak-$\ast$ limits, up to the extraction of a subsequence: \begin{align*} &\pa_t^ju^m\rightharpoonup \pa_t^ju\quad \text{weakly in}\thinspace L^2([0,T];H^{2N-2j+1}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\ &\pa_t^{N+1}u^m\rightharpoonup\pa_t^{N+1}u\quad \text{weakly in}\thinspace (\mathscr{X}_T)^\ast,\\ &\pa_t^ju^m\stackrel{\ast}\rightharpoonup\pa_t^ju\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N-2j}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\ &\pa_t^jp^m\rightharpoonup\pa_t^jp\quad \text{weakly in}\thinspace L^2([0,T];H^{2N-2j}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\ &\pa_t^jp^m\stackrel{\ast}\rightharpoonup \pa_t^jp\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N-2j-1}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\ &\pa_t^j\theta^m\rightharpoonup \pa_t^j\theta\quad \text{weakly in}\thinspace L^2([0,T];H^{2N-2j+1}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\ &\pa_t^{N+1}\theta^m\rightharpoonup\pa_t^{N+1}\theta\quad \text{weakly in}\thinspace (\mathscr{H}^1_T)^\ast,\\ &\pa_t^j\theta^m\stackrel{\ast}\rightharpoonup\pa_t^j\theta\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N-2j}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N, \end{align*} and \begin{align*} &\pa_t^j\eta^m\rightharpoonup\pa_t^j\eta \quad \text{weakly in}\thinspace L^2([0,T];H^{2N-2j+5/2}(\Sigma))\thinspace\text{for}\thinspace j=2,\ldots,N+1,\\ &\eta^m\stackrel{\ast}\rightharpoonup\eta\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N+1/2}(\Sigma)),\\ &\pa_t^j\eta^m\stackrel{\ast}\rightharpoonup\pa_t^j\eta\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N-2j+3/2}(\Sigma))\thinspace \text{for}\thinspace j=1,\ldots,N. \end{align*} The collection $(v,q,\Theta,\zeta)$ achieving the initial data, that is, $\pa_t^jv(0)=\pa_t^ju(0)$, $\pa_t^j\Theta(0)=\pa_t^j\theta(0)$, $\pa_t^j\zeta(0)=\pa_t^j\eta(0)$ for $j=0,\ldots,N$ and $\pa_t^jq(0)=\pa_t^jp(0)$ for $j=0,\ldots,N-1$, is closed in the above weak topology by Lemma A.4 in \cite{GT1}. Hence the limit $(u,p,\theta,\eta)$ achieves the initial data, since each $(u^m,p^m,\theta^m,\eta^m)$ is in the above collection. Step 2. Contraction. 
For $m\ge1$, we set $v^1=u^{m+2}$, $v^2=u^{m+1}$, $w^1=u^{m+1}$, $w^2=u^m$, $q^1=p^{m+2}$, $q^2=p^{m+1}$, $\Theta^1=\theta^{m+2}$, $\Theta^2=\theta^{m+1}$, $\vartheta^1=\theta^{m+1}$, $\vartheta^2=\theta^m$, $\zeta^1=\eta^{m+1}$, $\zeta^2=\eta^m$. Then, from the construction of the initial data, the initial data of $v^j$, $w^j$, $q^j$, $\Theta^j$, $\vartheta^j$, $\zeta^j$ match the hypotheses of Theorem \ref{thm:contraction}. Because of \eqref{equ:iteration equation}, \eqref{equ:difference} holds. In addition, \eqref{est:uniform bound} holds. Thus, all hypotheses of Theorem \ref{thm:contraction} are satisfied. Then
\begin{eqnarray}\label{est:bound N um pm thetam}
\begin{aligned}
&\mathfrak{N}(u^{m+2}-u^{m+1},p^{m+2}-p^{m+1},\theta^{m+2}-\theta^{m+1};T)\\
&\le\f12\mathfrak{N}(u^{m+1}-u^m,p^{m+1}-p^m,\theta^{m+1}-\theta^m;T),
\end{aligned}
\end{eqnarray}
\begin{equation}\label{est:bound M etam}
\mathfrak{M}(\eta^{m+1}-\eta^m;T)\lesssim \mathfrak{N}(u^{m+1}-u^m,p^{m+1}-p^m,\theta^{m+1}-\theta^m;T).
\end{equation}
The bound \eqref{est:bound N um pm thetam} implies that the sequence $\{(u^m,p^m,\theta^m)\}_{m=0}^\infty$ is Cauchy in the norm $\sqrt{\mathfrak{N}(\cdot,\cdot,\cdot;T)}$. Thus
\begin{eqnarray}
\left\{
\begin{aligned}
&u^m\to u\quad &\text{in}\thinspace L^\infty\left([0,T];H^2(\Om)\right)\cap L^2\left([0,T];H^3(\Om)\right),\\
&\pa_tu^m\to\pa_tu\quad &\text{in}\thinspace L^\infty\left([0,T];H^0(\Om)\right)\cap L^2\left([0,T];H^1(\Om)\right),\\
&p^m\to p\quad &\text{in}\thinspace L^\infty\left([0,T];H^1(\Om)\right)\cap L^2\left([0,T];H^2(\Om)\right),\\
&\theta^m\to \theta\quad &\text{in}\thinspace L^\infty\left([0,T];H^2(\Om)\right)\cap L^2\left([0,T];H^3(\Om)\right),\\
&\pa_t\theta^m\to\pa_t\theta\quad &\text{in}\thinspace L^\infty\left([0,T];H^0(\Om)\right)\cap L^2\left([0,T];H^1(\Om)\right),
\end{aligned}
\right.
\end{eqnarray}
as $m\to\infty$. Because of \eqref{est:bound M etam}, we deduce that the sequence $\{\eta^m\}_{m=1}^\infty$ is Cauchy in the norm $\sqrt{\mathfrak{M}(\cdot;T)}$. Thus,
\begin{eqnarray}
\left\{
\begin{aligned}
&\eta^m\to\eta\quad &\text{in}\thinspace L^\infty\left([0,T];H^{5/2}(\Sigma)\right),\\
&\pa_t\eta^m\to\pa_t\eta\quad &\text{in}\thinspace L^\infty\left([0,T];H^{3/2}(\Sigma)\right),\\
&\pa_t^2\eta^m\to\pa_t^2\eta\quad &\text{in}\thinspace L^2\left([0,T];H^{1/2}(\Sigma)\right),
\end{aligned}
\right.
\end{eqnarray}
as $m\to\infty$.
Step 3. Interpolation and passing to the limit. This step is exactly the same as in the proof of Theorem 6.3 in \cite{GT1}, and it gives the existence of solutions and the estimate \eqref{est:bound K}.
Step 4. Uniqueness and diffeomorphism. This step is similar to the proof of Theorem 6.3 in \cite{GT1}.
\end{proof}
\end{document}
\begin{document}
\title{Predicting Team Performance with Spatial Temporal Graph Convolutional Networks \\(Supplementary Material) }
\author{\IEEEauthorblockN{Shengnan Hu} \IEEEauthorblockA{Department of Computer Science\\ University of Central Florida\\ Orlando, FL, USA\\ Email: [email protected]} \and \IEEEauthorblockN{Gita Sukthankar} \IEEEauthorblockA{Department of Computer Science\\ University of Central Florida\\ Orlando, FL, USA\\ Email: [email protected]}}
\maketitle
\subsection{Dataset}
The dataset used in our paper was collected from search and rescue missions executed in a custom Minecraft experimentation environment created by Aptima~\cite{ASU/BZUZDE_2022,asisttestbed}. This section provides more details on the data collection process than we were able to include in our main paper.
\begin{figure}
\caption{Maps annotated with victim locations. Figure courtesy of Aptima \cite{ASU/BZUZDE_2022}.}
\label{fig:map}
\end{figure}
\subsubsection{Trajectory Data}
Experiments were conducted on the same map but with different victim placements (Mission A and Mission B); the victim configurations are shown in Fig. \ref{fig:map}. Specifically, on the map, green blocks denote regular victims, and yellow blocks denote critical victims that need to be rescued rapidly before they expire. A successful team should triage yellow victims first, while marking the locations of green victims for later rescue. In total, there are 50 regular victims and 5 critical victims in each mission. The gray blocks denote impassable regions. Each team consists of three human subjects recruited to participate in a two-hour experiment that includes intake and exit surveys, task instruction, and a Minecraft competency test. During the fifteen-minute mission, the team is asked to explore the map and rescue the victims. There are three different team roles: medic (heals victims), searcher (moves faster to enable more rapid exploration), and engineer (breaks obstacles). The testbed records information about the participants' locations and actions. After a team completes the experiment, the event data is stored in files on Google Cloud. From these files, we extracted the location and velocity of the players to create the triaging feature vectors used by our proposed framework, ST-GCN.
\subsubsection{FOV data}
\begin{figure}
\caption{Example player viewport marked by CMU's PyGL-FoV agent.}
\label{fig:fov}
\end{figure}
To obtain the victim information for the FOV node features, we utilize the PyGL-FoV agent developed by CMU to extract the field-of-view information; the code is available at: \url{https://gitlab.com/cmu_asist/PyGLFoVAgent}. PyGL-FoV generates a summary of the Minecraft blocks that appear in the participants' viewports. To calculate the block list, PyGL-FoV renders the blocks in the Minecraft world using OpenGL and extracts information from the rendered scene by mapping the rendered pixels back to the Minecraft blocks. An example player viewport annotated by the PyGL-FoV agent is shown in Fig. \ref{fig:fov}.
\end{document}
\begin{document}
\title{Realistic Interpretation of Quantum Mechanics and Encounter-Delayed-Choice Experiment}
\author{ Gui-Lu Long$^{1,2,3}$, Wei Qin$^1$, Zhe Yang$^1$ and Jun-Lin Li$^1$}
\affiliation{$^1$ State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China\\ $^2$Innovative Center of Quantum Matter, Beijing 100084, China\\ $^3$ Tsinghua National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, China\\}
\date{\today}
\begin{abstract}
A realistic interpretation (REIN) of the wave function in quantum mechanics is briefly presented in this work. In REIN, the wave function of a microscopic object is just its real existence rather than a mere mathematical description. A quantum object can exist in disjoint regions of space, distributed just as its wave function is; it travels at a finite speed and collapses instantly upon a measurement. Single-photon interference in a Mach-Zehnder interferometer is analyzed using REIN. In particular, we propose and experimentally implement a generalized delayed-choice experiment, the encounter-delayed-choice (EDC) experiment, in which the second beam splitter is inserted at the moment the two sub-waves from the two arms encounter each other. In the EDC experiment, the front parts of the wave functions, which arrive before the beam splitter is inserted, do not interfere and show the particle nature, while the back parts of the wave functions interfere and show the wave nature. The predicted phenomenon is clearly demonstrated in the experiment and supports the REIN idea.
\pacs{03.65.Ta, 03.65.Ud, 42.50.Xa,42.50.Dv}
\end{abstract}
\maketitle
Wave-particle duality is a central concept of quantum mechanics and is strikingly illustrated in Wheeler's well-known delayed-choice gedanken experiment \cite{VW1, VW2, Marlow, Hellmut, Lawson,Kim, Jacques1, Jacques2, Ma}. A good demonstration of the delayed-choice experiment is given by a two-path interferometer, the Mach-Zehnder interferometer (MZI), shown in Figure \ref{WDC}(a). A single photon is directed into the MZI, which is followed by two detectors at its end. If the output beam splitter BS$_{2}$ is present (closed configuration), the photon is first split by the input beam splitter BS$_{1}$ and then travels inside the MZI with a tunable phase shifter $\phi$ until the two interfering paths are recombined by BS$_{2}$. When $\phi$ is varied, interference fringes are observed as a modulation of the detection probabilities of detectors D$_{1}$ and D$_{2}$. This indicates that the photon travels along both paths of the MZI, behaving as a wave, and that the two paths are indistinguishable. If BS$_{2}$ is absent (open configuration), a click occurs in only one of the two detectors, with probability $1/2$ independent of $\phi$; each click is associated with a given path, indicating that the photon travels along a single path and behaves as a particle. From such an experiment one concludes that quantum systems exhibit wave or particle behavior depending on the configuration of the measurement apparatus. Moreover, the two complementary experimental setups are mutually exclusive, and the two behaviors, wave and particle, cannot be observed simultaneously. Recently, a new extension of the delayed-choice experiment, the quantum delayed-choice experiment \cite{Ionicioiu, Schirber, Roy, Auccause, Peruzzo, Kaiser, GGC, Adesso}, in which the output beam splitter in a classical state is replaced by one in a quantum superposition state, has been proposed.
The experiment indicates that BS$_{2}$ can be simultaneously absent and present, and that wave and particle behavior can be observed simultaneously, showing a morphing between wave and particle. The concept of a wave function is introduced into quantum theory as a complete description of a quantum system. The wave function can usually be determined through tomographic methods, and it can be measured directly by sequential measurements of two complementary variables relying on weak measurement \cite{Lundeen, kocsis, Schleich}. It is at the heart of quantum theory, and its standard interpretation is provided by the Copenhagen interpretation \cite{Landau}, in which the wave function is treated as a complex probability amplitude in a purely mathematical manner. An essential understanding of the wave function has not yet been reached \cite{cohen,Mermin}. In this article, we propose a realistic interpretation, the REIN, of the wave function in quantum mechanics. We then propose a generalized delayed-choice experiment, the encounter-delayed-choice (EDC) experiment, to test the REIN. The EDC is experimentally demonstrated, and the results agree with the theoretical interpretation very well, which supports the idea of REIN. In the following, we first present the main points of REIN. Then we describe the EDC experiment proposal. The experimental demonstration of the EDC proposal follows. Finally, we give a discussion and summary.
{\noindent \bf Results}\ \ {\bf The REIN.} The essential idea of REIN is that the wave function is a realistic existence rather than just a mathematical description. Here we give a brief introduction, and a detailed description will be given elsewhere \cite{rein}. A quantum object, an object that obeys quantum mechanics, exists in the form of its wave function: extended in space, and even in disjoint regions of space in some cases. It changes form as the wave function changes, which happens frequently. Since a wave function is usually a complex function, it has both an amplitude and a phase. If we just look at its spatial distribution, the square of the modulus of the wave function gives this distribution. However, it also has a phase, and when two sub-wave functions merge or encounter each other, the resulting wave function changes differently at different locations: some parts are strengthened due to constructive interference, whereas others are canceled due to destructive interference. Thus a photon in an MZI is an extended object that exists in both arms. In the REIN view, there is no difference between a photon in a closed MZI setting and one in an open setting before it arrives at the second beam splitter. It is also easier to comprehend how a photon can travel along both arms. In REIN, a photon is an extended and separated object that exists simultaneously in both arms, just as a stream of water is divided into two branches, each of which then flows on its own in its riverbed. Of course, the quantum wave function is richer than the water stream, as it also has a phase factor that gives rise to interference when it encounters other sub-wave functions. A sub-wave function is part of the whole wave function, for instance the wave function in the upper arm of the MZI, and it need not be normalized \cite{duality}. To emphasize this, we use $|\psi\kket$ and $\bbra \psi|$ to denote a sub-wave function throughout this article. The extended quantum wave function, the true or realistic quantum object, moves at a speed less than or equal to the speed of light.
As we know, light, an ensemble of photons, takes time to travel from the Sun to our planet. The electrons in a cyclotron travels slower than the light. Quantum wave function, or quantum object, can change form by transformation or by measurement. It is easy to visualize the change in the wave function, but is difficult to visualize the change in a quantum object. This difficulty is pertinent to our stubborn notion of a rigid particle for a microscopic object, as the name quantum particle suggests. If we adopt the view that the quantum object does exist in the form of the wave function, it will be very easy to understand this change in form. Hence a photon wave function changes into two sub-wave functions when it is transformed by a beam splitter. A measurement changes the shape, or form of a quantum object drastically. According to the measurement postulate of quantum mechanics, a measurement will collapse the wave function instantly into one of the eigenstate of the measured observable. This change of the quantum object takes no time, and it is within all the spaces occupied by the wave function, which are disjoint in some cases. The measurement postulate cannot be derived from the Schroedinger equation, which governs the evolution of the quantum wave function. At this stage, one should not ask why measurement has such dramatic effect. The quantum object behaves just in this way. It is Nature. {\bf EDC Experiment Proposal} According to REIN, a photon is considered as the whole spatial distribution of its wave function, which really exists, more than a mere mathematical description. A new interpretation of the single photon interference experiment in the MZI is given in the point of view of REIN. The action of a $50/50$ beam splitter can be described by a so-called Hadamard transformation given by \begin{equation}\label{hadamard} H=\frac{1}{\sqrt{2}} \left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \\ \end{array} \right). \end{equation} When a single photon with its wave function $|\psi\rangle_{i}$ is directed to the MZI, BS$_{1}$ works as a divider to split the wave function to two sub wave functions, $|\psi\} _{in,1}$ and $|\psi\}_{in,2}$, traveling along path$_{1}$ and path$_{2}$ as \begin{equation} \left( \begin{array}{c} |\psi\}_{in,1} \\ |\psi\}_{in,2} \\ \end{array} \right)=H\left( \begin{array}{c} |\psi\rangle_{i} \\ 0 \\ \end{array} \right), \end{equation} which gives that $|\psi\}_{in,1}=|\psi\}_{in,2} =|\psi\rangle_{i}/\sqrt{2}$. After a phase shifter $\phi$, an additional phase $e^{i\phi}$ is introduced and $|\psi\}_{in,1}$ becomes $e^{i\phi}|\psi\}_{in,1}$. If BS$_{2}$ is absent, the two sub wave functions are directed to the two detectors D$_{1}$ and D$_{2}$ without interference between them. The detection probabilities of D$_{1}$ and D$_{2}$ are $P_{1}=\leftidx{_{in,1}}\{\psi|\psi\}_{in,1}=1/2$ and $P_{2}=\leftidx{_{in,2}}\{\psi|\psi\}_{in,2}=1/2$. The sub-waves exist at both arms. There is equal probability the photon to collapse in either detectors. When a click is registered in D$_{1}$ (D$_{2}$), both of the two sub-wave functions collapse to D$_{1}$ (D$_{2}$) instantly. In standard interpretation, this open MZI is usually interpreted as showing the particle nature. In contrast, the REIN interprets it still as realistic quantum waves. The two sub-waves from the two arms do not encounter, and both of them arrive at the two detectors. 
According to the measurement postulate of quantum mechanics, the measurement result will be one of the eigenstates, the eigenstates of discrete positions at D$_1$ and D$_2$, with some probability. If BS$_{2}$ is present, the coalescence of the two sub-waves occurs to form two new sub-waves $|\psi\}_{out,1}$ and $|\psi\}_{out,2}$, which are directed to D$_{1}$ and D$_{2}$ respectively. After the transformation of BS$_{2}$, we have \begin{equation} |\psi\}_{out,1} =\frac{1}{\sqrt{2}}(e^{i\phi}|\psi\}_{in,1}-|\psi\}_{in,2}) \end{equation} and \begin{equation} |\psi\}_{out,2} =\frac{1}{\sqrt{2}}(e^{i\phi}|\psi\}_{in,1}+|\psi\}_{in,2}). \end{equation} The detection probabilities of D$_{1}$ and D$_{2}$ are $P_{1}=\leftidx{_{out,1}}\{\psi|\psi\}_{out,1}=\sin^{2}\frac{\phi}{2}$ and $P_{2}=\leftidx{_{out,2}}\{\psi|\psi\}_{out,2}=\cos^{2}\frac{\phi}{2}$. As $\phi$ varies, an interference pattern will appear. This has been used to show wave behavior in a closed MZI setting experiment. However, in the point view of REIN, the quantum wave behaves exactly the same as that in the open MZI before reaching the end of the MZI. The insertion of BS$_2$ make the two sub-waves encounter and interfere due to their phases. Like in the open MZI, when a click is registered in D$_{1}$ (D$_{2}$), both of the two output sub-waves collapse to D$_{1}$ (D$_{2}$) simultaneously. In the special case where $\phi=0$, $|\psi\}_{in,1}$ and $|\psi\}_{in,2}$ interfere constructively to give that $|\psi\}_{out,2}=|\psi\rangle_{i}$ along path$_{2}$, and interfere destructively to give $|\psi\}_{out,1}=0$ along path$_{1}$. In this case, only D$_{2}$ can detect the photon. \begin{widetext} \begin{center} \begin{figure} \caption{(a) A Mach-Zehnder interferometer (MZI) with a tunable phase $\phi$ between its two arms. In the delay-choice MZI, the decision whether or not to insert BS$_2$ is made after the photon has reached the MZI, but has not arrived at the intended position of BS$_2$ (the exit point); (b) In the encounter-delayed-choice experiment, the insertion of BS$_{2}$ is made right at the encounter of the two sub-waves. As shown here, the front parts of the sub-waves have passed the exit point, while the back parts of the sub-waves have not passed through the exit point and are "closed" by BS$_2$; (c) Still in EDC experiment, the two sub-waves leave the MZI and continue to move forward to D$_1$ and D$_2$. The front parts of the sub-waves retain their shape before they leave the MZI, but the back parts of the sub-waves are changed by the inserted BS$_2$. The back part of the up-going sub-wave vanishes due to destructive interference, whereas the right-going part of the sub-wave increases due to the constructive interference due to BS$_2$. The interference pattern of back parts of the sub-waves may vary according to their relative phases. } \label{WDC} \end{figure} \end{center} \end{widetext} If it is decided to insert BS$_{2}$ at the end of the MZI when the two sub-waves encounter at the end of the MZI, $|\psi\}_{in,\rho}$ can be divided into two components and expressed as \begin{equation} |\psi\}_{in,\rho}=|\psi\}_{in,\rho}^{p}+|\psi\}_{in,\rho}^{w}, \end{equation} with $\rho=1,2$. Here, $|\psi\}_{in,\rho}^{p}$ is the part of the sub-wave which has passed the exit point while BS$_{2}$ has not been inserted, and they do not pass BS$_2$. $|\psi\}_{in,\rho}^{w}$ is the part of the sub-wave which has passed the exit point while BS$_{2}$ has been inserted, and they will be subject to the action of BS$_2$. 
The interference between $|\psi\rangle_{in,1}^{w}$ and $|\psi\rangle_{in,2}^{w}$ occurs because BS$_{2}$ is present when they leave MZI. After the second beamsplitter, it gives \begin{equation}\label{outp1} |\psi\}_{out,1}^{w}=\frac{1}{\sqrt{2}} (e^{i\phi}|\psi\}_{in,1}^{w}-|\psi\}_{in,2}^{w}) \end{equation} and \begin{equation}\label{outp2} |\psi\}_{out,2}^{w}=\frac{1}{\sqrt{2}} (e^{i\phi}|\psi\}_{in,1}^{w}+|\psi\}_{in,2}^{w}), \end{equation} where $|\psi\}_{out,\rho}^{w}$ is the component of $|\psi\}_{out,\rho}$ which will give the wave behavior in standard interpretation. The interference between $|\psi\}_{in,1}^{p}$ and $|\psi\}_{in,2}^{p}$ never occurs because BS$_{2}$ is absent when they exit out of the MZI. They are directed to the detectors along their paths and we have \begin{equation}\label{outw1} |\psi\}_{out,1}^{p}=e^{i\phi}|\psi\}_{in,1}^{p} \end{equation} and \begin{equation}\label{outw2} |\psi\}_{out,2}^{p}=|\psi\}_{in,2}^{p}, \end{equation} where $|\psi\}_{out,\rho}^{p}$ is the component of $|\psi\}_{out,\rho}$ that will give the particle behavior in standard interpretation. Combining equations (\ref{outp1}), (\ref{outp2}), (\ref{outw1}) and (\ref{outw2}), we have the two new sub-waves after the action of BS$_{2}$ \begin{equation} |\psi\}_{out,1}=|\psi\}_{out,1}^{p}+\frac{1}{\sqrt{2}} (e^{i\phi}|\psi\}_{in,1}^{w}-|\psi\}_{in,2}^{w}) \end{equation} and \begin{equation} |\psi\}_{out,2}=|\psi\}_{out,2}^{p}+\frac{1}{\sqrt{2}} (e^{i\phi}|\psi\}_{in,1}^{w}+|\psi\}_{in,2}^{w}) \end{equation} Ensuring the two paths inside the MZI are of equal length, we have $|\psi\}_{in,1}^{p}=|\psi\}_{in,2}^{p}$ and $|\psi\}_{in,1}^{w}=|\psi\}_{in,2}^{w}$. The detection probabilities of D$_{1}$ and D$_{2}$ are \begin{equation} P_{1}=2\sin^{2}\frac{\phi}{2}P_{1}^{w}+P_{1}^{p}, \end{equation} and \begin{equation} P_{2}=2\cos^{2}\frac{\phi}{2}P_{2}^{w}+P_{2}^{p}, \end{equation} respectively. Here the relation \begin{equation} \leftidx{_{in,\rho}^{p}}\{\psi|\psi\}_{in,\rho}^{w}=0 \end{equation} is employed, and $P_{\rho}^{w}=\leftidx{_{in,\rho}^{w}}\{\psi|\psi\}_{in,\rho}^{w}$ ($P_{\rho}^{p}=\leftidx{_{in,\rho}^{p}}\{\psi|\psi\}_{in,\rho}^{p}$) is the probability that will (will not) show interference behavior in the $\rho$-th arm. They satisfy the relation \begin{equation} P_{\rho}^{p}+P_{\rho}^{w}=\frac{1}{2}. \end{equation} \begin{center} \begin{figure} \caption{The detection probabilities, $P_{1}$ and $P_{2}$, as functions of the phase $\phi$ at fixed values of $P_{p}$. $P_{p}$ can be controlled by the BS$_2$ insertion instant of time that divides the passing sub-waves into different ratio between particle-like and wave-like parts. When $P_p=1.0$, BS$_2$ is not inserted, and no interference occur and the photon exhibits particle-like nature. When $P_p=0$, BS$_2$ is inserted before the sub-waves arrive at the exit point, and full interference will occur, and the photon will show wave-like behavior. In between these two extremes, photon will exhibit partial particle-like nature and partial wave-like nature simultaneously as in the quantum delayed-choice case. } \label{DPG} \end{figure} \end{center} Apparently, $P_{1}^{w}=P_{2}^{w}=P_{w}/2$ and $P_{1}^{p}=P_{2}^{p}=P_{p}/2$, where $P_{w}$ ($P_{p}$) is the total probability that will (will not) show interference (which is called wave (particle) nature in standard interpretation). 
Thus \begin{equation} P_{1}=\sin^{2}\frac{\phi}{2}+\frac{\cos\phi}{2}P_{p} \end{equation} and \begin{equation} P_{2}=\cos^{2}\frac{\phi}{2}-\frac{\cos\phi}{2}P_{p}, \end{equation} and $P_{1}+P_{2}=1$. In the special case where $\phi=0$, BS$_{2}$ is inserted when half of the two sub-waves have exited the MZI, and this gives that $P_{1}=1/4$ and $P_{2}=3/4$. $P_{1}$ and $P_{2}$ as functions of the phase $\phi$ at several fixed values of $P_{p}$ are shown in Figure \ref{DPG}. It is seen that as $P_p$ changes from 0.0 to 1.0, the detection probabilities at the two arms change from a complete interference pattern to a flat line that exhibit no interference. Or in standard interpretation, the photon behavior changes from a wave to a particle. When the value of $P_p$ is fixed at a value between the two extremes, the probabilities are the incoherent superposition of a flat line and an interference pattern. In standard interpretation, a single photon exhibits wave nature and particle nature simultaneously. This is equivalent to the quantum-delayed-choice experiment, where the controlled-insertion of the second beamsplitter serves as a controlled unitary gate that produces the superposed quantum state. The position of insertion gives the form of the unitary gate. At middle point insertion, the controlled gate is a Hadamard gate. This can also be explained in terms of the duality quantum computing framework in Ref.\cite{duality,duality2,duality3}, as in Ref.\cite{Roy}. {\noindent \bf The EDC Experiment.} We design and implement the EDC experiment in which the insertion of output beam splitter is decided at the end of the MZI when the photon is passing through the exit point. The experimental setup is shown in Fig. \ref{setup}. \begin{widetext} \begin{center} \begin{figure} \caption{Experimental realization of the EDC experiment. SWL: Single-wavelength laser. EOM: Electro-optic modulator. ATT: Optical attenuator. BS: Beam splitter. D: Single photon detector. Single photons are produced by attenuating the pulses generated by EOM$_1$ from a continuous light wave emitted from a $780$ nm laser with a linewidth of $600$ kHz. The input and output beam splitters are of 50:50 in transmission and reflection. The square waves TTL S$_{2}$ and S$_{3}$ signals apply to the EOM$_{2}$ and EOM$_{3}$, respectively, which serves as a controller for insertion the second beamsplitter by guiding the sub-waves to different channels. The control signals S$_{2}$ and S$_{3}$ are in-phase, and $t_{d}$ is the time difference between $S_1$ and S$_{2}$, S$_{3}$.} \label{setup} \end{figure} \end{center} \end{widetext} The experiment starts from a $780$ nm continuous-wave (CW) polarized laser (SWL) with a linewidth of $600$ kHz. The first EOM$_{1}$ modulates and transforms the continuous light into pulse sequences, which then are attenuated to the single-photon level by using an attenuator. Then the pulses are sent into the Mach-Zehnder interferometer, which are composed of two $50/50$ beam splitters and reflection mirrors. The input beamsplitter (BS$_{1}$) divides the wave function of a single photon into two spatially separated components of equal amplitude, and the output BS (BS$_{2}$) works as a combiner of the two components. The two arms of the MZI are of equal lengths. The insertion of BS$_2$ is realized by using two additional modulators (EOM$_{2}$ and EOM$_{3}$), which are inserted in the two arms of the interferometer which are of equal distance from the input BS$_1$. 
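As a quick consistency check of the detection-probability formulas derived above, one may evaluate $P_{1}$ and $P_{2}$ directly. The following minimal numerical sketch (the function name and the use of NumPy are our illustrative choices and are not part of the experimental control code) verifies that $P_{1}+P_{2}=1$ for all phases and all particle-like fractions, and reproduces $P_{1}=1/4$, $P_{2}=3/4$ at $\phi=0$ for $P_{p}=1/2$.
\begin{verbatim}
import numpy as np

def edc_probabilities(phi, P_p):
    # P1 = sin^2(phi/2) + (cos phi)/2 * P_p,  P2 = cos^2(phi/2) - (cos phi)/2 * P_p
    P1 = np.sin(phi / 2.0) ** 2 + 0.5 * np.cos(phi) * P_p
    P2 = np.cos(phi / 2.0) ** 2 - 0.5 * np.cos(phi) * P_p
    return P1, P2

phi = np.linspace(0.0, 2.0 * np.pi, 101)
for P_p in (0.0, 0.25, 0.5, 0.75, 1.0):
    P1, P2 = edc_probabilities(phi, P_p)
    assert np.allclose(P1 + P2, 1.0)      # total probability is conserved
print(edc_probabilities(0.0, 0.5))        # (0.25, 0.75): the phi = 0 case
\end{verbatim}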
The half-wave voltages of the three modulators are $V_{\pi}=91 \pm 1$ V. When the TTL signal is at the ``high'' voltage level, the half-wave voltage is applied to the EOM and the photon is transmitted, that is, the beamsplitter is lifted. Otherwise, the photon is reflected by the EOM, and the beamsplitter is inserted. There are three TTL control signals with a repetition rate of $1$ MHz that determine whether or not the half-wave voltages are applied to the three modulators. EOM$_{1}$ is used to cut the continuous wave into fragmented pulses at the single photon level, as explained earlier. The two modulators EOM$_{2}$ and EOM$_{3}$ are used to split the two sub-waves of the single photon into four sub-waves. When EOM$_{2}$ and EOM$_{3}$ are at the high voltage level, the two photon sub-waves are transmitted, and the MZI is open. These sub-waves are directed to the detectors D$_3$ and D$_4$, respectively, and they show particle-like behavior. When the TTL signals are at the low voltage level, two of the sub-waves are reflected and pass through the output BS$_2$. Their paths are indistinguishable, and hence they interfere with each other. The MZI is closed for them, and hence they show wave-like behavior in the standard delayed-choice interpretation. We maintain the control signals S$_{2}$ and S$_{3}$ in phase, so that they act as a single one, and tune the time difference $t_{d}$ between the signals S$_{1}$ and S$_{2}$. The delay $t_{d}$ sets the insertion time: by the time the beamsplitter is inserted, a fraction $t_d/(T/2)$ of each sub-wave has already been transmitted and moves towards detectors $D_3$ and $D_4$, where $T/2$ is the length of the pulse. The relative detection probability of $\text{D}_{3}$ turns out to be \begin{eqnarray} R_{P}&=&\frac{_{out,1}^p\{\psi|\psi\}_{out,1}^p} {_{out,1}^p\{\psi|\psi\}_{out,1}^p+_{out,2}^p\{\psi|\psi\}_{out,2}^p}\nonumber\\ &=&\frac{P_{1}^{p}} {P_{1}^p+P_{2}^{p}}=\frac{N_{3}}{N_{3}+N_{4}}\nonumber\\ &=&\frac{1}{2}, \end{eqnarray} where $N_3$ and $N_4$ are the numbers of clicks registered by detectors $D_3$ and $D_4$, respectively. The result is independent of $t_d$, which is interpreted as exhibiting particle-like nature in the standard interpretation. In REIN, this is naturally explained by the non-interfering sub-waves traveling through both arms simultaneously. The detection by either $D_3$ or $D_4$ is due to the measurement, which gives equal probabilities to each of the detectors. On the other hand, because of $\text{BS}_{2}$, the interference between the two sub-waves, $|\psi\}_{in,1}^w$ and $|\psi\}_{in,2}^w$, occurs. The two resulting sub-waves, $|\psi\}_{out,1}^w$ and $|\psi\}_{out,2}^w$, are then directed to the detectors $\text{D}_{1}$ and $\text{D}_{2}$. The relative detection probability of $\text{D}_{1}$ is evaluated as \begin{eqnarray} R_{W}&=& _{out,1}^w\{\psi|\psi\}_{out,1}^{w}\nonumber\\ &=&P^w_1(1-\cos\phi)\nonumber\\ &=&{N_1 \over N_{t}}, \end{eqnarray} where $N_1$ is the number of clicks registered by detector $D_1$, and $N_{t}=\sum^{4}_{i=1}N_{i}$. By choosing $\phi=0$, $R_{W}=0$, showing that the two sub-waves cancel each other completely in the output towards $D_1$ due to destructive interference. $P_{w}$ ($P_{p}$) is the probability that a single photon will (will not) show wave (particle) nature.
\begin{eqnarray} P_{w}&=&_{out,1}^w\{\psi|\psi\}_{out,1}^w+_{out,2}^w\{\psi|\psi\}_{out,2}^w\nonumber\\ &=&2P_{1}^w\sin^2\frac{\phi}{2}+2P_1^w\cos^2\frac{\phi}{2}\nonumber\\ &=&2P_{1}^w=\frac{N_{1}+N_{2}}{N_{t}},\label{DW} \end{eqnarray} and \begin{eqnarray} P_{p}&=&_{out,1}^p\{\psi|\psi\}_{out,1}^p+_{out,2}^p\{\psi|\psi\}_{out,2}^p\nonumber\\ &=&2P_{1}^p=\frac{N_{3}+N_{4}}{N_{t}},\label{DP} \end{eqnarray} with $N_{t}=\sum^{4}_{i=1}N_{i}$ and $P_{w}+P_{p}=1$. In our experiment, the photon is uniformly distributed within a pulse, yielding \begin{equation} P_p=2t_d/T, \end{equation} and \begin{equation} P_w=1-P_p=1-2t_d/T. \end{equation} Both $P_p$ and $P_w$ depend linearly on the delay time $t_d$. \begin{center} \begin{figure} \caption{Experimental results. (a) Black points represent the ratio $R_{W}=N_1/(N_1+N_2)$ and red points the ratio $R_{P}=N_3/(N_3+N_4)$, which represent wave-like behavior and particle-like behavior, respectively, in the standard interpretation; (b) The total probability $P_w$ of the interfering photon component (black dots) and that, $P_p$, of the non-interfering component (red dots).} \label{data} \end{figure} \end{center} The experimental results are shown in Fig. \ref{data}. It is seen that the wave function of a single photon is divided into four parts and detected by the four detectors, respectively. If the output BS is present, we observe interference fringes with a tunable phase difference between the two paths through which the single photon sub-waves travel. When the two arms of the interferometer are of equal length, the two paths are fully recombined by the output BS and perfectly indistinguishable. We register, with probability $1$, a click in only one of the two detectors (D$_{1}$ and D$_{2}$) placed at the output ports of the interferometer. If the output BS is absent, each of the detectors has a 50\% probability of registering a click. In the standard interpretation, this is interpreted as the photon having particle-like behavior, with the photon traveling through a single path to one of the detectors. In the REIN view, this is interpreted in a unified way, just as in the closed setting case. The only difference is whether or not BS$_2$ exists. Before the exit point, sub-waves travel in both arms. Without BS$_2$, the sub-waves travel without interference; with BS$_2$, the sub-waves interfere, which may lead the photon wave to go entirely to one detector. As seen in Figure \ref{data}(a), the black points $R_{W}=N_{1}/(N_{1}+N_{2})$ show the wave-like behavior, and the red ones, $R_{P}=N_{3}/(N_{3}+N_{4})$, show the particle-like behavior. $P_{w}$ gives the percentage of the component of the single photon wave function showing wave-like behavior and $P_{p}$ gives that of the component showing particle-like behavior. This allows the ratios $P_{w}$ and $P_{p}$ to vary between $0$ and $1$ when the time delay $t_{d}$ varies between $0$ and $T$, where $T$ is the period of the control signal, of which half is at the high voltage level and half at the low voltage level. The wave function of the single photon is distributed with uniform intensity along the propagation direction by virtue of the rectangular control signals with a $50\%$ duty cycle.
Because the frequency of the control signal, $f=1/T$, is larger than the laser linewidth of $600$ kHz, the coherence length of the light modulated by EOM$_{1}$ approaches that of the pulse, and the length of the single photon wave function along the propagation direction can be taken to be that of the pulse, $L={Tc}/(2n)$, where $c$ is the speed of light and $n$ the effective refractive index. Hence, the two quantities $P_{w}$ and $P_{p}$ change linearly with the time delay $t_d$, as shown in Figure \ref{data}(b). {\noindent \bf Discussion} \ \ In this work, we have presented the realistic interpretation of quantum mechanics, the REIN. In REIN, the wave function, or wave, is the real existence of the quantum object. It is not merely a mathematical description. Like a classical wave, a quantum wave can be divided into sub-waves, and the sub-waves can be recombined. When they are measured, they collapse and show the particle-like nature. The essential difference between a quantum wave and a classical wave is that a quantum wave collapses in totality, namely, the whole of the quantum wave, however scattered in space, collapses into a single point instantly. Apart from this, a quantum wave can be viewed in almost the same manner as a classical wave. In the REIN view, in the MZI device, the photon sub-waves travel through both arms. The simultaneous travel of a photon through the two arms is easy to comprehend and understand in REIN: the photon is no longer a ball-like particle; it is an extended, and even spatially separated, object distributed in space, the quantum wave, or quantum sub-waves. The sub-waves travel simultaneously through the two arms. Each sub-wave contains the full attributes of the quantum object: when measured, it collapses with a certain probability to exhibit the full properties of the quantum object, such as spin, mass and so on. In the REIN view, the wave-like or particle-like nature in the standard interpretation of the delayed-choice MZI is simply the interference or non-interference of the sub-waves of the single photons. In the REIN view, the photons are all sub-waves before they are detected. When they are detected, they collapse and cause a click in the detector, which is viewed as a particle. The REIN view has been exploited in the duality quantum computer \cite{duality}. The duality quantum computer uses the superpositions of quantum sub-waves, and hence allows linear combinations of unitary operators as generalized quantum gates. The corresponding mathematical formalism has been constructed and developed \cite{gudder1,duality4,duality3p,duality5,duality6}. Recently, it has been found that linear combinations of unitary operators are superior to the traditional formalism of products of unitary operators in simulating Hamiltonian systems \cite{childs}. The REIN idea is demonstrated in more detail by the encounter-delayed-choice experiment proposed in this work. By inserting a beam splitter during the encounter of the two sub-waves, one is able to let part of the sub-waves interfere and the other part not interfere, hence exhibiting the so-called wave-like nature and particle-like nature simultaneously, as in the quantum delayed-choice experiment. We have experimentally demonstrated the EDC proposal, and the experimental results support the REIN idea. \textbf{References} {\bf Acknowledgments} This work is supported by the National Natural Science Foundation of China under Grant No.\ 11474181, the National Basic Research Program of China under Grant No.
2011CB9216002 and the Open Research Fund Program of the State Key Laboratory of Low-Dimensional Quantum Physics, Tsinghua University. {\bf Author contributions} GLL conceived the realistic interpretation idea and the encounter-delayed-choice experimental proposal, and supervised the whole project. WQ, JLL and GLL set up the experimental apparatus; WQ, JLL, ZY and GLL performed the experiment; WQ and GLL analyzed the data. GLL and WQ wrote the paper. {\bf Competing financial interests} The authors declare no competing financial interests. {\bf Correspondence} and requests for materials should be addressed to G.-L.L. ([email protected]). \end{document}
\begin{document} \title{Sub-Riemannian curvature and a Gauss--Bonnet theorem in the Heisenberg group} \author{Zolt\'an Balogh} \address{Mathematisches Institut, Universit\"at Bern, Sidlerstrasse 5, 3012 Bern, Switzerland} \email{[email protected]} \author{Jeremy T. Tyson} \address{Department of Mathematics \\ University of Illinois \\ 1409 West Green St. \\ Urbana, IL, 61801} \email{[email protected]} \author{Eugenio Vecchi} \address{Dipartimento di Matematica, Universit\`{a} di Bologna, Piazza di Porta San Donato 5, 40126 Bologna, Italy} \email{[email protected]} \date{\today} \thanks{ZMB and EV were supported by the Swiss National Science Foundation Grant No.\ 200020-146477, and have also received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement No.\ 607643 (ERC Grant MaNET `Metric Analysis for Emergent Technologies'). JTT acknowledges support from U.S. National Science Foundation Grant DMS-0120870 and Simons Foundation Collaboration Grant 353627.} \keywords{Heisenberg group, sub-Riemannian geometry, Riemannian approximation, Gauss--Bonnet theorem, Steiner formula} \subjclass[2010]{Primary 53C17; Secondary 53A35, 52A39} \maketitle \begin{abstract} We use a Riemannnian approximation scheme to define a notion of \textit{sub-Riemannian Gaussian curvature} for a Euclidean $C^{2}$-smooth surface in the Heisenberg group $\mathbb{H}$ away from characteristic points, and a notion of \textit{sub-Riemannian signed geodesic curvature} for Euclidean $C^{2}$-smooth curves on surfaces. These results are then used to prove a Heisenberg version of the Gauss--Bonnet theorem. An application to Steiner's formula for the Carnot-Carath\'eodory distance in $\mathbb{H}$ is provided. \end{abstract} \tableofcontents \section{Introduction} A full understanding of the notion of curvature has been at the core of studies in differential geometry since the foundational works of Gauss and Riemann. The aim of this paper is to propose a suitable candidate for the notion of \textit{sub-Riemannian Gaussian curvature} for Euclidean $C^{2}$-smooth surfaces in the first Heisenberg group $\mathbb{H}$, adopting the so called \textit{Riemannian approximation scheme}, which has proved to be a very powerful tool to address sub-Riemannian issues. Referencing the seminal work of Gauss, we recall that to a compact and oriented Euclidean $C^{2}$-smooth regular surface $\Sigma \subset \mathbb{R}^{3}$ we can attach the notions of \textit{mean curvature} and \textit{Gaussian curvature} as symmetric polynomials of the second fundamental form. To be more precise, for every $p \in \Sigma$ we have a well-defined \textit{outward unit normal} vector field, $N(p):\Sigma \to \mathbb{S}^{2}$, usually called the \textit{Gauss normal map}. For every $p \in \Sigma$, the differential of the Gauss normal map $dN(p) : T_{p}\Sigma \to T_{N(p)}\mathbb{S}^{2},$ defines a positive definite and symmetric quadratic form on $T_{p}\Sigma$ whose two real eigenvalues are usually called \textit{principal curvatures} of $\Sigma$ at $p$. The arithmetic mean of these principal curvatures is the \textit{mean curvature} and their product is the \textit{Gaussian curvature}. The importance of the latter became particularly clear after Gauss' famous \textit{Theorema Egregium}, which asserts that Gaussian curvature is intrinsic and is also an isometric invariant of the surface $\Sigma$. 
The notions of curvature, as briefly recalled above, can be extended to far more general situations, for instance to submanifolds of higher codimension in $\mathbb{R}^{n}$, and also to the broader geometrical context provided by Riemannian geometry, as was done by Riemann. In particular, we will consider 2-dimensional Riemannian manifolds isometrically embedded into 3-dimensional Riemannian manifolds. We refer to Section \ref{C3} for details. Our interest in the study of curvatures of surfaces in $\mathbb{H}$ is motivated by the still ongoing studies in the context of sub-Riemannian manifolds or more specific structures like Carnot groups, whose easiest example is provided by the first Heisenberg group $\mathbb{H}$. Restricting our attention to $\mathbb{H}$, there is a currently accepted notion of \textit{horizontal mean curvature} ${\mathcal{H}}_0$ at non-characteristic points of Euclidean regular surfaces. This notion has been considered by Pauls (\cite{P04}) via the method of Riemannian approximants, but has also been proved to be equivalent to other notions of mean curvature appearing in the literature (e.g.\ \cite{CHMY05} or \cite{DGN07}). The method of Riemannian approximants relies on a famous result due to Gromov, which states that the metric space $(\mathbb{H},d_{cc})$ can be obtained as the pointed Gromov-Hausdorff limit of a family of metric spaces $(\mathbb{R}^{3}, g_L)$, where $g_L$ is a suitable family of Riemannian metrics. The Riemannian approximation scheme has also proved to be a very efficient tool in more analytical settings, for instance, in the study of estimates for fundamental solutions of the sub-Laplacian $\Delta_{\textnormal{H}}$ (e.g.\ \cite{CM06, CCM07}) as well as regularity theory for sub-Riemannian curvature flows (e.g.\ \cite{ccm:mcf}). The preceding represents only a small sample of the many applications of the Riemannian approximation method in sub-Riemannian geometric analysis, and we refer the reader to the previously cited papers for more information and references to other work in the literature. The monograph \cite{CDPT} provides a detailed description of the Riemannian approximation scheme in the setting of the Heisenberg group. Let us denote by $X_{1}, X_{2}$ and $X_{3}$ the left-invariant vector fields which span the Lie algebra $\mathfrak{h}$ of $\mathbb{H}$. In particular, $[X_1, X_2]=X_3$. In order to exploit the contact nature of $\mathbb{H}$ it is customary to define an inner product $\scal{\cdot}{\cdot}_{\textnormal{H}}$ which makes $\{X_1, X_2\}$ an orthonormal basis. A possible way to define a Riemannian scalar product is to set $X_{3}^{L} := X_{3}/\sqrt{L}$ for every $L>0$, and then to extend $\scal{\cdot}{\cdot}_{\textnormal{H}}$ to a scalar product $\scal{\cdot}{\cdot}_{L}$ which makes $\{X_1,X_2, X_{3}^{L}\}$ an orthonormal basis. The family of metric spaces $(\mathbb{R}^{3}, g_{L})$ converges to $(\mathbb{H}, d_{cc})$ in the pointed Gromov-Hausdorff sense. Within this family of Riemannian manifolds, we can now perform computations adopting the unique Levi-Civita connection associated to the family of Riemannian metrics $g_{L}$. Obviously, all the results are expected depend on the positive constant $L$. The plan is to extract horizontal notions out of the computed objects and to study their asymptotics in $L$ as $L \to +\infty$. This is the technique adopted in \cite{P04} to define a notion of horizontal mean curvature. 
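To get a quantitative feeling for the scheme (the following one-line computation is included only as an illustration), consider a curve moving purely in the direction of $X_{3}$, with $\dot{\gamma}=X_{3}=\sqrt{L}\,X_{3}^{L}$. Since $\|X_{3}\|_{L}=\sqrt{L}$, the $g_{L}$-length of such a curve over a unit parameter interval is
$$
\int_{0}^{1}\|\dot{\gamma}\|_{L}\,dt=\sqrt{L}\longrightarrow+\infty \quad \text{as } L\to+\infty,
$$
so purely vertical directions become infinitely expensive in the limit; heuristically, this is why only horizontal data survive as $L\to+\infty$.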
It is natural to ask whether such a method can be employed to study the curvature of curves, and especially to articulate an appropriate notion of \textit{sub-Riemannian Gaussian curvature}. One attempt in this direction has been carried out in \cite{CPT10}, where the authors proposed a notion of horizontal second fundamental form in relation with $H$-convexity. A different notion of sub-Riemannian Gaussian curvature for graphs has been suggested in \cite{DGN03}. Our approach follows closely the classical theory of Riemannian geometry and leads us to the following notion of \textit{sub-Riemannian curvature} for a Euclidean $C^{2}$-smooth and regular curve $\gamma=(\gamma_1, \gamma_2, \gamma_3):[a,b]\to \mathbb{H}$: \begin{equation}\label{eq:k0INTRO} k_{\gamma}^{0} = \begin{cases} \dfrac{|\dot{\gamma}_{1} \ddot{\gamma}_2 - \dot{\gamma}_{2} \ddot{\gamma}_1|}{(\dot{\gamma}_{1}^2 + \dot{\gamma}_{2}^2)^{3/2}}, &\mbox{if $\gamma(t)$ is a horizontal point of $\gamma$,} \\ \\ \dfrac{\sqrt{\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2}}}{|\omega(\dot{\gamma})|}, & \mbox{if $\gamma(t)$ is not a horizontal point of $\gamma$.} \end{cases} \end{equation} Here $\omega = dx_3 -\tfrac{1}{2}\left( x_1 dx_2 - x_2 dx_1 \right)$ is the standard contact form on $\mathbb{R}^3$. We stress that, when dealing with \textit{purely} horizontal curves, the above notion of curvature is already known and appears frequently in the literature. An analogous procedure allow us to define also a notion of \textit{sub-Riemannian signed geodesic curvature} for Euclidean $C^{2}$-smooth and regular curves $\gamma=(\gamma_1, \gamma_2, \gamma_3):[a,b]\to \Sigma \subset \mathbb{H}$ living on a surface $\Sigma = \{ x\in \mathbb{H}: u(x)=0\}$, with $u \in C^{2}(\mathbb{R}^{3})$. This notion takes the form \begin{equation}\label{eq:ksIntro} k_{\gamma, \Sigma}^{0,s} = \left \{ \begin{array}{rl} \dfrac{\bar{p} \dot{\gamma}_{1} + \bar{q} \dot{\gamma}_{2}}{|\omega(\dot{\gamma})|}, & \mbox{if $\gamma(t)$ is a non-horizontal point,} \\ 0 , & \mbox{if $\gamma(t)$ is a horizontal point,} \end{array}\right. \end{equation} where $\nabla_{\textnormal{H}} u = (X_1u,X_2u)$, $\bar{p}=\tfrac{X_1 u}{\|\nabla_{\textnormal{H}} u \|_{\textnormal{H}}}$ and $\bar{q}=\tfrac{X_2 u}{\|\nabla_{\textnormal{H}} u \|_{\textnormal{H}}}$. We refer to Section \ref{C1} and Section \ref{C2} for precise statements and definitions. In the same spirit we introduce a notion of \textit{sub-Riemannian Gaussian curvature} ${\mathcal{K}}_0$ away from characteristic points. We will work with Euclidean $C^{2}$-smooth surfaces $\Sigma = \{x\in \mathbb{H}: u(x)=0\}$, whose characteristic set $C(\Sigma)$ is defined as the set of points $x\in \Sigma$ where $\nabla_{\textnormal{H}} u(x)=(0,0)$. The explicit expression of ${\mathcal{K}}_0$ reads as follows: \begin{equation}\label{eq:K0intro} {\mathcal{K}}_0 = -\left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)^{2} - \left( \dfrac{X_2 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right) X_1 \left(\dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right) + \left( \dfrac{X_1 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right) X_2 \left(\dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right). \end{equation} The quantity in \eqref{eq:K0intro} cannot easily be viewed as a symmetric polynomial of any kind of horizontal Hessian. 
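For instance (an elementary illustration of \eqref{eq:K0intro}, included here only for concreteness; a systematic list of examples is collected in the appendix), take the plane $\Sigma=\{x\in\mathbb{H}: x_{3}=0\}$, i.e.\ $u(x)=x_{3}$. Then $X_{1}u=-\tfrac{x_{2}}{2}$, $X_{2}u=\tfrac{x_{1}}{2}$, $X_{3}u=1$, so that $\|\nabla_{\textnormal{H}}u\|_{\textnormal{H}}=\tfrac{1}{2}\sqrt{x_{1}^{2}+x_{2}^{2}}$ and the only characteristic point is the origin. Writing $r^{2}=x_{1}^{2}+x_{2}^{2}$, one has $\tfrac{X_{3}u}{\|\nabla_{\textnormal{H}}u\|_{\textnormal{H}}}=\tfrac{2}{r}$, $X_{1}\big(\tfrac{2}{r}\big)=-\tfrac{2x_{1}}{r^{3}}$ and $X_{2}\big(\tfrac{2}{r}\big)=-\tfrac{2x_{2}}{r^{3}}$, and therefore, away from the origin,
$$
{\mathcal{K}}_0 = -\dfrac{4}{r^{2}} + \dfrac{2x_{1}^{2}}{r^{4}} + \dfrac{2x_{2}^{2}}{r^{4}} = -\dfrac{2}{x_{1}^{2}+x_{2}^{2}},
$$
which is negative and blows up at the characteristic point.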
Moreover, the expression of ${\mathcal{K}}_0$ written above resembles one of the integrands, the one which would be expected to replace the classical Gaussian curvature, appearing in the Heisenberg Steiner's formula proved in \cite{BFFVW}. The discrepancy between these two quantities will be the object of further investigation. The definition of an appropriate notion of sub-Riemannian Gaussian curvature leads to the question of proving a suitable Heisenberg version of the celebrated Gauss--Bonnet Theorem, which is the first main result of this paper. For a surface $\Sigma = \{ x \in \mathbb{H}: u(x)=0\},$ with $u \in C^{2}(\mathbb{R}^{3})$, our main theorem is as follows. \begin{theorem}\label{HGB} Let $\Sigma \subset \mathbb{H}$ be a regular surface with finitely many boundary components $(\partial \Sigma)_{i}$, $i \in \{1, \ldots,n\}$, given by Euclidean $C^{2}$-smooth regular and closed curves $\gamma_{i}:[0,2\pi] \to (\partial \Sigma)_{i}.$ Let ${\mathcal{K}}_0$ be the sub-Riemannian Gaussian curvature of $\Sigma$, and $k_{\gamma_{i}, \Sigma}^{0,s}$ the sub-Riemannian signed geodesic curvature of $\gamma_{i}$ relative to $\Sigma$. Suppose that the characteristic set $C(\Sigma)$ satisfies $\mathcal{H}_{E}^{1}(C(\Sigma))=0$, and that $\|\nabla_{\textnormal{H}} u \|_{\textnormal{H}}^{-1}$ is locally summable with respect to the Euclidean 2-dimensional Hausdorff measure near the characteristic set $C(\Sigma)$. Then $$ \int_{\Sigma}{\mathcal{K}}_0 \, d\mathcal{H}_{cc}^{3} + \sum_{i=1}^{n}\int_{\gamma_{i}}k_{\gamma_{i}, \Sigma}^{0,s} \, d\dot{\gamma}_{i} =0. $$ \end{theorem} The sharpness of the assumption made on the 1-dimensional Euclidean Hausdorff measure $\mathcal{H}_{E}^{1}(C(\Sigma))$ of the characteristic set $C(\Sigma)$ is discussed in Section \ref{gb}, while comments on the local summability asked for $\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}^{-1}$ are postponed to Section \ref{questions}. The measure $d\dot{\gamma}_i$ on the $i$th boundary curve $(\partial\Sigma)_i$ in the statement of Theorem \ref{HGB} is the limit of scaled length measures in the Riemannian approximants. We remark that this measure vanishes along purely horizontal boundary curves. Gauss--Bonnet type theorems have previously been obtained by Diniz and Veloso \cite{DV12} for non-characteristic surfaces in $\mathbb{H}$, and by Agrachev, Boscain and Sigalotti \cite{ABS08} for almost-Riemannian structures. We would also like to mention the results obtained by Bao and Chern \cite{BC96} in Finsler spaces. The notion of \textit{horizontal mean curvature} has featured in a long and ongoing research program concerning the study of \textit{constant mean curvature} surfaces in $\mathbb{H}$, especially in relation to {\it Pansu's isoperimetric problem} (e.g. \cite{N04}, \cite{RR06}, \cite{HP08}, \cite{dgn:partial} or \cite{CDPT}). A simplified version of the aforementioned Gauss--Bonnet Theorem \ref{HGB}, i.e., when we consider a compact, oriented, Euclidean $C^{2}$-surface with no boundary, or with boundary consisting of fully horizontal curves, ensures that the only compact surfaces with constant \textit{sub-Riemannian Gaussian curvature} have ${\mathcal{K}}_0 = 0$. Our main application concerns a Steiner's formula for non-characteristic surfaces. This result (see Theorem \ref{Torus}) is a simplification of the Steiner's formula recently proved in \cite{BFFVW}. The structure of the paper is as follows. 
In Section \ref{not} we provide a short introduction to the first Heisenberg group $\mathbb{H}$ and the notation which we will use throughout the paper, with a special focus on the Riemannian approximation scheme. In Sections \ref{C1} and \ref{C2} we adopt the Riemannian approximation scheme to derive the expression (\ref{eq:k0INTRO}) for the sub-Riemannian curvature of Euclidean $C^{2}$-smooth curves in $\mathbb{H}$, and the expression (\ref{eq:ksIntro}) for the sub-Riemannian geodesic curvature of curves on surfaces. In Section \ref{C3}, we will derive the expression (\ref{eq:K0intro}) for the \textit{sub-Riemannian Gaussian curvature}. In Section \ref{gb} we prove Theorem \ref{HGB} and its corollaries. Section \ref{examples} contains the proof of Steiner's formula for non-characteristic surfaces. In Section \ref{questions} we present a Fenchel-type theorem for horizontal closed curves (see Theorem \ref{Fenchel}) and we pose some questions. One of the more interesting and challenging questions concerns the summability of the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ with respect to the Heisenberg perimeter measure near isolated characteristic points. This summability issue is closely related to the open problem posed in \cite{DGN12} concerning the summability of the horizontal mean curvature ${\mathcal{H}}_0$ with respect to the Riemannian surface measure near the characteristic set. To end the paper, we add an appendix where we collect several examples of surfaces for which we compute explicitly the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$. \ \paragraph{\bf Acknowledgements.} Research for this paper was conducted during visits of the second and third authors to the University of Bern in 2015 and 2016. The hospitality of the Institute of Mathematics of the University of Bern is gratefully acknowledged. The authors would also like to thank Luca Capogna for many valuable conversations on these topics and for helpful remarks concerning the proof of Theorem \ref{HGB}. \section{Notation and background}\label{not} Let $\mathbb{H}$ be the first Heisenberg group, where the non-commutative group law is given by $$(y_1,y_2,y_3) \ast (x_1,x_2,x_3) = \left(x_1 + y_1, x_2 + y_2, x_3 + y_3 - \dfrac{1}{2}(x_1 y_2 - x_2 y_1) \right).$$ The corresponding Lie algebra of left-invariant vector fields admits a $2$-step stratification, $\mathfrak{h} = \mathfrak{v}_1 \oplus \mathfrak{v}_2$, where $\mathfrak{v}_{1} = \mathrm{span}\{X_1,X_2\}$ and $\mathfrak{v}_2 = \mathrm{span}\{X_3\}$ for $X_1 = \partial_{x_1} - \tfrac12{x_2}\partial_{x_3}$, $X_2 = \partial_{x_2} + \tfrac12{x_1}\partial_{x_3}$ and $X_3 = [X_1,X_2] = \partial_{x_3}$. On $\mathbb{H}$ we also consider the standard contact form of $\mathbb{R}^{3}$ $$\omega = dx_3 - \dfrac{1}{2} \left( x_1 dx_2 - x_2 dx_1 \right).$$ The left-invariant vector fields $X_{1}$ and $X_{2}$ play a major role in the theory of the Heisenberg group because they span a two-dimensional plane distribution $H \mathbb{H}$, known as the \textit{horizontal distribution}, which is also the kernel of the contact form $\omega$: $$ H_{x}\mathbb{H} := \mathrm{span}\{ X_{1}(x), X_{2}(x)\} = (\mathrm{Ker} \omega)(x), \qquad x \in \mathbb{H}. $$ This smooth distribution of planes is a subbundle of the tangent bundle of $\mathbb{H}$, and it is a non-integrable distribution because $[X_{1}, X_{2}] = X_{3} \notin H\mathbb{H}$.
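For the reader's convenience, the bracket relation used above can be checked directly (a routine computation, recorded here only for completeness): for any smooth $f$,
$$
[X_{1},X_{2}]f = \Big(\partial_{x_1}-\tfrac{x_2}{2}\partial_{x_3}\Big)\Big(\partial_{x_2}f+\tfrac{x_1}{2}\partial_{x_3}f\Big) - \Big(\partial_{x_2}+\tfrac{x_1}{2}\partial_{x_3}\Big)\Big(\partial_{x_1}f-\tfrac{x_2}{2}\partial_{x_3}f\Big) = \tfrac12\partial_{x_3}f+\tfrac12\partial_{x_3}f = X_{3}f,
$$
all second-order terms canceling in pairs.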
We can define an inner product $\scal{\cdot}{\cdot}_{x,\textnormal{H}}$ on $H \mathbb{H}$, so that for every $x \in \mathbb{H}$, $\{X_{1}(x), X_{2}(x)\}$ forms an orthonormal basis of $H_{x} \mathbb{H}$. We will then denote by $\| \cdot \|_{x,\textnormal{H}}$ the horizontal norm induced by the scalar product $\scal{\cdot}{\cdot}_{x, \textnormal{H}}$. In both cases, we will omit the dependence on the base point $x \in \mathbb{H}$ when it is clear. \begin{definition} An absolutely continuous curve $\gamma:[a,b]\subset \mathbb{R} \to \mathbb{H}$ is said to be horizontal if $\dot{\gamma}(t) \in H_{\gamma(t)}\mathbb{H}$ for a.e.\ $t \in [a,b]$. \end{definition} \begin{definition}\label{length} Let $\gamma:[a,b]\to \mathbb{H}$ be a horizontal curve. The horizontal length $l_{\textnormal{H}}(\gamma)$ of $\gamma$ is defined as $$l_{\textnormal{H}}(\gamma) := \int_{a}^{b} \|\dot{\gamma}\|_{\textnormal{H}} \, dt.$$ \end{definition} It is standard to equip the Heisenberg group $\mathbb{H}$ with a path-metric known as the Carnot-Carath\'{e}odory, or $cc$, distance: \begin{definition} Let $x,y \in \mathbb{H}$, with $x \neq y$. The $cc$ distance between $x,y$ is defined as $$d_{cc}(x,y):= \inf \{ l_{\textnormal{H}}(\gamma) \,|\, \gamma:[a,b] \to \mathbb{H} \ \textrm{horizontal}, \ \gamma(a)=x, \ \gamma(b)=y\}.$$ \end{definition} Dilations of the Heisenberg group are defined as follows: \begin{equation}\label{dilations} \delta_r(x_1,x_2,x_3) = (rx_1,rx_2,r^2x_3), \qquad r>0. \end{equation} It is easy to verify that dilations are compatible with the group operation: $\delta_r(y*x)=\delta_r(y)*\delta_r(x)$, $x,y\in\mathbb{H}$, $r>0$, and that the $cc$ distance is homogeneous of order one with respect to dilations: $d_{cc}(\delta_r(x),\delta_r(y)) = r\,d_{cc}(x,y)$. The scaling behavior of the left-invariant vector fields $X_1,X_2,X_3$ with respect to dilations is as follows: $$ X_1(f\circ\delta_r) = r \, X_1f \circ \delta_r, \quad X_2(f\circ\delta_r) = r \, X_2f \circ \delta_r, \quad X_3(f\circ\delta_r) = r^2 \, X_3f \circ \delta_r. $$ We are now ready to implement the Riemannian approximation scheme. First, let us define $X_{3}^{L} := \tfrac{X_3}{\sqrt{L}}$ for $L>0$. We define a family of Riemannian metrics $(g_L)_{L>0}$ on $\mathbb{R}^{3}$ such that $\{X_1,X_2,X_{3}^{L} \}$ becomes an orthonormal basis. The choice of this specific family of Riemannian metrics on $\mathbb{R}^{3}$ is indicated by the following theorem. \begin{theorem}[Gromov] The family of metric spaces $(\mathbb{R}^{3}, g_L)$ converges to $(\mathbb{H}, d_{cc})$ in the pointed Gromov-Hausdorff sense as $L \to +\infty$. \end{theorem} This deep result continues to hold even for more general Carnot groups, but there is one additional feature which is valid for $\mathbb{H}$: \begin{proposition} Any length minimizing horizontal curve $\gamma$ joining $x \in \mathbb{H}$ to the origin $0 \in \mathbb{H}$ is the uniform limit as $L \to +\infty$ of geodesic arcs joining $x$ to $0$ in the Riemannian manifold $(\mathbb{R}^{3},g_L)$. \end{proposition} For both results, we refer to \cite[Chapter 2]{CDPT}. Continuing with notation, the scalar product that makes $\{X_1,X_2,X_{3}^{L} \}$ an orthonormal basis will be denoted by $\scal{\cdot}{\cdot}_{L}$.
Explicitly, this means that, given $V = v_1 X_1 + v_2 X_2 + v_3 X_3$ and $W = w_1 X_1 + w_2 X_2 + w_3 X_3$, $$\scal{V}{W}_{L} = v_1 w_1 + v_2 w_2 + L v_3 w_3.$$ Obviously, if we write $V$ and $W$ in the $\{X_1,X_2,X_{3}^{L} \}$ basis, i.e., $V = v_1 X_1 + v_2 X_2 + v^{L}_{3} X_{3}^{L}$ where $v^{L}_{3} = v_3 \sqrt{L}$ (and similarly for $W$), we have $$ \scal{V}{W}_{L} = v_1 w_1 + v_2 w_2 + v^{L}_{3} w^{L}_{3} = v_1 w_1 + v_2 w_2 + L v_3 w_3. $$ The following relations allow us to switch from the standard basis $\{e_1 ,e_2, e_3\}$ to $\{X_1,X_2,X_{3}^{L} \}$ and vice versa: \begin{equation*} \begin{cases} e_1 = X_1 + \tfrac12{x_2}\sqrt{L} X_{3}^{L}, & \\ e_2 = X_2 - \tfrac12{x_1}\sqrt{L} X_{3}^{L}, & \\ e_3 = \sqrt{L} X_{3}^{L}, & \\ \end{cases} \quad \textrm{and} \quad \begin{cases} X_1 = e_1 - \tfrac12{x_2} e_3, & \\ X_2 = e_2 + \tfrac12{x_1} e_3, & \\ X_{3}^{L} = \tfrac{e_3}{\sqrt{L}}. & \\ \end{cases} \end{equation*} In exponential coordinates, the metric $g_L$ is represented by the $3\times 3$ symmetric matrix $(g_{L})_{ij} := \scal{e_i}{e_j}_L$, for $i,j=1,2,3.$ In particular, $$g_{L}(x_1,x_2,x_3)= \begin{pmatrix} 1+ \tfrac14 {x_{2}^{2}} L & - \tfrac14 {x_{1} x_{2}} L & \tfrac12 {x_2} L \\ - \tfrac14 {x_1 x_2} L & 1+ \tfrac14 {x_{1}^{2}} L & -\tfrac12 {x_1} L \\ \tfrac12 {x_2} L & -\tfrac12 {x_1} L & L \\ \end{pmatrix}. $$ Then $\det(g_L(x)) = L$ and \begin{displaymath} g_{L}^{-1}(x_1,x_2,x_3)= \begin{pmatrix} 1 & 0 & -\tfrac{x_2}{2} \\ 0 & 1 & \tfrac{x_1}{2} \\ -\tfrac12 {x_2} & \tfrac12 {x_1} & \tfrac{4+L(x_{1}^{2} + x_{2}^{2})}{4L} \\ \end{pmatrix} \end{displaymath} Following the classical notation of Riemannian geometry, we will denote by $g_{ij}$ the elements of the matrix $g_L$, and by $g^{ij}$ the elements of its inverse $g_{L}^{-1}$.\\ A standard computational tool in Riemannian geometry is the notion of \textit{affine connection}. \begin{definition} Let $\mathcal{X}(M)$ be the set of $C^{\infty}$-smooth vector fields on a Riemannian manifold $M$. Let $\mathcal{D}(M)$ be the ring of real-valued $C^{\infty}$-smooth functions on $M$. An affine connection $\nabla$ on $M$ is a mapping $$\nabla : \mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M),$$ usually denoted by $(X,Y) \mapsto \nabla_{X}Y$, such that: \begin{itemize} \item [i)] $\nabla_{fX+gY}Z = f \nabla_{X}Z+ g \nabla_{Y}Z.$ \item [ii)] $\nabla_{X}(Y+Z) = \nabla_{X}(Y) + \nabla_{X}(Z).$ \item [iii)] $\nabla_{X}(fY) = f \nabla_{X}Y + X(f)Y,$ \end{itemize} for every $X,Y,Z \in \mathcal{X}(M)$ and for every $f,g \in \mathcal{D}(M)$. \end{definition} It is well known that every Riemannian manifold is equipped with a privileged affine connection: the Levi-Civita connection $\nabla$. This is the unique affine connection which is compatible with the given Riemannian metric and symmetric, i.e., $$ X \scal{Y}{Z}_{L} = \scal{\nabla_{X}Y}{Z}_{L} + \scal{Y}{\nabla_{X}Z}_{L} $$ and $$ \nabla_{X}Y - \nabla_{Y}X = [X,Y] $$ for every $X,Y,Z \in \mathcal{X}(M)$. A direct proof of this fact yields the famous \textit{Koszul identity}: \begin{equation}\label{eq:Koszul} \begin{aligned} \scal{Z}{\nabla_{X}Y}_{L} &= \dfrac{1}{2} \big( X \scal{Y}{Z}_{L} + Y \scal{Z}{X}_{L} -Z \scal{X}{Y}_{L} \\ &-\scal{[X,Z]}{Y}_{L} - \scal{[Y,Z]}{X}_{L} - \scal{[X,Y]}{Z}_{L}\big) \end{aligned}\end{equation} for $X,Y,Z \in \mathcal{X}(M)$. It is possible to write the Levi-Civita connection $\nabla$ in a local frame by making use of the Christoffel symbols $\Gamma_{ij}^{m}$. 
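Since the metric $g_{L}$ is completely explicit, the Christoffel symbols recorded in the next lemma can also be double-checked symbolically. The following short SymPy sketch (the variable names and the use of SymPy are our illustrative choices) implements the coordinate formula for $\Gamma_{ij}^{m}$ used in the proof of that lemma and recovers, for instance, $\det(g_L)=L$, $\Gamma^{1}_{22}=-\tfrac{x_1 L}{2}$ and $\Gamma^{3}_{12}=\tfrac{(x_1^2-x_2^2)L}{8}$.
\begin{verbatim}
import sympy as sp

x1, x2, x3, L = sp.symbols('x1 x2 x3 L', real=True, positive=True)
X = (x1, x2, x3)

# matrix of g_L in exponential coordinates, as displayed above
g = sp.Matrix([[1 + L*x2**2/4, -L*x1*x2/4,    L*x2/2],
               [-L*x1*x2/4,    1 + L*x1**2/4, -L*x1/2],
               [L*x2/2,        -L*x1/2,        L     ]])
ginv = g.inv()

def christoffel(i, j, m):
    # Gamma^m_{ij} = 1/2 sum_k (d_i g_{jk} + d_j g_{ki} - d_k g_{ij}) g^{km}
    return sp.simplify(sum(sp.Rational(1, 2)
                           * (sp.diff(g[j, k], X[i]) + sp.diff(g[k, i], X[j])
                              - sp.diff(g[i, j], X[k])) * ginv[k, m]
                           for k in range(3)))

print(sp.simplify(g.det()))   # L
print(christoffel(1, 1, 0))   # Gamma^1_{22} = -L*x1/2
print(christoffel(0, 1, 2))   # Gamma^3_{12} = L*(x1**2 - x2**2)/8
\end{verbatim}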
In our case, due to the specific nature of the Riemannian manifold $(\mathbb{R}^{3},g_L)$, we can use a global chart given by the identity map of $\mathbb{R}^{3}$. The Christoffel symbols are uniquely determined by $$\nabla_{e_i}e_j = \Gamma_{ij}^{m}e_m, \quad i,j,m=1,2,3.$$ \begin{lemma} The Christoffel symbols $\Gamma_{ij}^{m}$ of the Levi-Civita connection $\nabla$ of $(\mathbb{R}^{3},g_L)$ are given by \begin{equation}\label{eq:G1} \Gamma_{ij}^{1} = \begin{cases} 0, & (i,j)\in \{ (1,1), (1,3), (3,1), (3,3)\}, \\ \tfrac14 {x_2} L, & (i,j)\in \{(1,2),(2,1)\}, \\ -\tfrac12 {x_1} L, & (i,j)= (2,2), \\ \tfrac12 {L}, & (i,j)\in \{(2,3),(3,2)\}, \end{cases} \end{equation} \begin{equation}\label{eq:G2} \Gamma_{ij}^{2} = \begin{cases} -\tfrac12 {x_2} L, & (i,j)= (1,1), \\ \tfrac14 {x_1} L, & (i,j)\in \{(1,2),(2,1)\}, \\ -\tfrac12 {L}, & (i,j)\in \{(1,3),(3,1)\}, \\ 0, & (i,j)\in \{ (2,2), (2,3), (3,2), (3,3)\}, \\ \end{cases} \end{equation} and \begin{equation}\label{eq:G3} \Gamma_{ij}^{3} = \begin{cases} -\tfrac14 {x_1 x_2} L, & (i,j)=(1,1), \\ \tfrac18 ({x_{1}^{2}-x_{2}^{2}}) L, & (i,j)\in \{(1,2),(2,1)\}, \\ -\tfrac14 {x_1} L, & (i,j)\in \{(1,3),(3,1)\}, \\ \tfrac14 {x_1 x_2} L, & (i,j)=(2,2),\\ -\tfrac14 {x_2} L, & (i,j) \in \{ (2,3), (3,2)\},\\ 0, & (i,j)=(3,3). \\ \end{cases} \end{equation} \end{lemma} \begin{proof} It is a direct computation using $$\Gamma_{ij}^{m} = \dfrac{1}{2}\sum_{k=1}^{3} \left\{ \dfrac{\partial}{\partial_{x_i}}g_{jk} + \dfrac{\partial}{\partial_{x_j}}g_{ki} -\dfrac{\partial}{\partial_{x_k}}g_{ij} \right\}g^{km},$$ for $i,j,m=1,2,3$. \end{proof} We now compute the Levi-Civita connection $\nabla$ associated to the Riemannian metric $g_L$. \begin{lemma} The action of the Levi-Civita connection $\nabla$ of $(\mathbb{R}^{3}, g_L)$ on the vectors $X_1$, $X_2$ and $X_{3}^{L}$ is given by \begin{equation*} \begin{aligned} \nabla_{X_1}X_2 &= -\nabla_{X_2}X_1 = \frac12 {X_3}, \\ \nabla_{X_1}X_{3}^{L} &= \nabla_{X_{3}^{L}}X_1 = -\frac12 {\sqrt{L}} \, X_2, \\ \nabla_{X_2}X_{3}^{L} &= \nabla_{X_{3}^{L}}X_2 = \frac12 {\sqrt{L}} \, X_1. \end{aligned} \end{equation*} \end{lemma} \begin{proof} It follows from a direct application of the Koszul identity \eqref{eq:Koszul}, which here simplifies to $$ \scal{Z}{\nabla_{X}Y}_{L} = -\dfrac{1}{2} \biggl( \scal{[X,Z]}{Y}_{L} + \scal{[Y,Z]}{X}_{L} + \scal{[X,Y]}{Z}_{L}\biggr). $$ \end{proof} To make the paper self-contained, we recall here the definitions of Riemann curvature tensor $R$ and of sectional curvature. \begin{definition} The Riemann curvature tensor $R$ of a Riemannian manifold $M$ is a mapping $R(X,Y): \mathcal{X}(M) \to \mathcal{X}(M)$ defined as follows $$R(X,Y)Z := \nabla_{Y}\nabla_{X}Z - \nabla_{X}\nabla_{Y}Z + \nabla_{[X,Y]}Z, \quad Z \in \mathcal{X}(M).$$ \end{definition} \begin{remark}\label{Rfunct} Note that the Riemann curvature tensor $R$ satisfies the functional property \begin{equation}\label{eq:functional} R(fX,Y)Z = R(X,fY)Z = R(X,Y)(fZ) = f R(X,Y)Z, \end{equation} for every $X,Y,Z \in \mathcal{X}(M)$ and every $f \in \mathcal{D}(M)$. \end{remark} \begin{definition}\label{sectional} Let $M$ be a Riemannian manifold and let $\Pi \subset T_{p}M$ be a two-dimensional subspace of the tangent space $T_{p}M$. Let $\{E_{1}, E_{2}\}$ be two linearly independent vectors in $\Pi$. The sectional curvature $K(E_{1},E_{2})$ of $M$ is defined as $$K(E_{1}, E_{2}) := \dfrac{\scal{R(E_{1},E_{2})E_{1}}{E_{2}}}{|E_{1}\wedge E_{2}|^{2}},$$ where $\wedge$ denotes the usual wedge product. 
\end{definition} One of the main reasons to introduce the notion of affine connection, is to be able to differentiate smooth vector fields along curves: this operation is known as \textit{covariant differentiation} (see \cite{dC}, Chapter 2). Formally, let $Z = z_1 e_1 + z_2 e_2 + z_3 e_3$ be a smooth vector field (written in the standard basis of $\mathbb{R}^{3}$) along a curve $\gamma = \gamma(t)$. The covariant derivative of $Z$ along the curve $\gamma$ is given by \begin{equation*} D_{t}Z = \sum_{m=1}^{3} \left\{ \dfrac{d z_m}{dt} + \sum_{i,j=1}^{3}\Gamma_{ij}^{m}z_{j} \dfrac{d x_i}{dt}\right\} e_m, \end{equation*} where the $x_i$'s are the coordinates of $\gamma$ in a local chart and $\Gamma_{ij}^{m}$ are the Christoffel symbols introduced before. In particular, if $Z = \dot{\gamma}$, we have \begin{equation}\label{eq:DCov} D_{t}\dot{\gamma} = \sum_{m=1}^{3} \left\{ \ddot{\gamma}_{m} + \sum_{i,j=1}^{3} \Gamma_{ij}^{m} \dot{\gamma}_{i}\dot{\gamma}_{j}\right\}e_m . \end{equation} \section{Riemannian approximation of curvature of curves}\label{C1} Let us define the objects we are going to study in this section. \begin{definition} Let $\gamma :[a,b] \rightarrow (\mathbb{R}^{3},g_{L})$ be a Euclidean $C^{1}$-smooth curve. We say that $\gamma$ is regular if $$\dot{\gamma}(t) \neq 0, \quad \textrm{for every $t \in [a,b]$}.$$ Moreover, we say that $\gamma(t)$ is a horizontal point of $\gamma$ if $$\omega(\dot{\gamma}(t))= \dot{\gamma}_{3}(t) - \dfrac{1}{2} \left( \gamma_{1}(t)\dot{\gamma}_{2}(t)- \gamma_{2}(t)\dot{\gamma}_{1}(t)\right) = 0.$$ \end{definition} \begin{definition} Let $\gamma : [a,b]\subset \mathbb{R} \rightarrow (\mathbb{R}^{3}, g_L)$ be a Euclidean $C^{2}$-smooth regular curve in the Riemannian manifold $(\mathbb{R}^{3}, g_L)$. The curvature $k_{\gamma}^{L}$ of $\gamma$ at $\gamma(t)$ is defined as \begin{equation}\label{eq:LCurv} k_{\gamma}^{L} := \sqrt{\dfrac{\|D_t \dot{\gamma}\|_{L}^{2}}{\|\dot{\gamma}\|_{L}^{4}} - \dfrac{\scal{D_t \dot{\gamma}}{{\dot{\gamma}}}_{L}^{2}}{\|\dot{\gamma}\|_{L}^{6}}} . \end{equation} \end{definition} We stress that the above definition is well posed, indeed by Cauchy-Schwarz, \begin{equation*} \dfrac{\|D_t \dot{\gamma}\|_{L}^{2}}{\|\dot{\gamma}\|_{L}^{4}} - \dfrac{\scal{D_t \dot{\gamma}}{{\dot{\gamma}}}_{L}^{2}}{\|\dot{\gamma}\|_{L}^{6}} \geq \dfrac{\|\dot{\gamma}\|_{L}^{2}\|D_t \dot{\gamma}\|_{L}^{2} - \|\dot{\gamma}\|_{L}^{2}\|D_t \dot{\gamma}\|_{L}^{2}}{\|\dot{\gamma}\|_{L}^{6}} = 0. \end{equation*} \begin{remark} We recall that in Riemannian geometry the standard definition of curvature for a curve $\gamma$ parametrized by arc length is $k_{\gamma}^{L} := \|D_{t}\dot{\gamma}\|_L$. The one we gave before is just more practical to perform computations for curves with an arbitrary parametrization. \end{remark} Let us briefly recall the definition of $g_{L}$-geodesics, cf.\ \cite[Chapter 2]{CDPT}. For a Euclidean $C^{2}$-smooth regular curve $\gamma:[a,b] \to (\mathbb{R}^{3},g_{L})$, define its \textit{penalized energy functional} $E_{L}$ to be $$ E_{L}(\gamma):= \int_{a}^{b}\left( |\dot{\gamma}_{1}(t)|^{2} + |\dot{\gamma}_{2}(t)|^{2} + L \left| \omega(\dot{\gamma}(t))|\right|^{2}\right) \, dt. $$ Using a standard variational argument, we can derive the system of Euler-Lagrange equations for the functional $E_{L}$: we will call $g_L$-geodesics the critical points, which are actually curves, of the functional $E_{L}$. 
In other words, we will say that $\gamma$ is a $g_L$-geodesic if, for every $t \in [a,b]$, it holds that \begin{equation}\label{eq:gLEuler} \left \{ \begin{array}{l} \ddot{\gamma}_{1}(t)=- L \dot{\gamma}_{2}(t) \omega(\dot{\gamma}(t)) , \\ \ddot{\gamma}_{2}(t)= L \dot{\gamma}_{1}(t) \omega(\dot{\gamma}(t)),\\ (\omega(\dot{\gamma}(t))|_{\gamma(t)})'=0. \end{array}\right.\end{equation} We are now ready to present the first result concerning the curvature $k_{\gamma}^{L}$. \begin{lemma}\label{kL} Let $\gamma : [a,b] \rightarrow (\mathbb{R}^{3}, g_L)$ be a Euclidean $C^{2}$-smooth regular curve in the Riemannian manifold $(\mathbb{R}^{3}, g_L)$. Then \begin{equation}\label{eq:kL} k_{\gamma}^{L} = \sqrt{\dfrac{(\ddot{\gamma}_{1}+ L \dot{\gamma}_2 \omega(\dot{\gamma}))^{2} + (\ddot{\gamma}_{2}- L \dot{\gamma}_1 \omega(\dot{\gamma}))^{2} + L \omega({\ddot{\gamma}})^{2}}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2} +L\omega(\dot{\gamma})^{2})^{2}} - \dfrac{(\ddot{\gamma}_1 \dot{\gamma}_1 + \ddot{\gamma}_2 \dot{\gamma}_2 + L \omega(\dot{\gamma}) \omega(\ddot{\gamma}))^{2}}{(\dot{\gamma}_{1}^{2} +\dot{\gamma}_{2}^{2}+L\omega(\dot{\gamma})^{2})^{3}}}. \end{equation} In particular, if $\gamma(t)$ is a horizontal point of $\gamma$, \begin{equation}\label{eq:kL2} k_{\gamma}^{L} = \sqrt{\dfrac{\ddot{\gamma}_{1}^{2} + \ddot{\gamma}_{2}^{2}}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2})^{2}} - \dfrac{(\ddot{\gamma}_1 \dot{\gamma}_1 + \ddot{\gamma}_2 \dot{\gamma}_2 )^{2}}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2})^{3}}} = \frac{|\ddot{\gamma}_2 \dot{\gamma}_1 - \ddot{\gamma}_1 \dot{\gamma}_2|}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2})^{3/2}}. \end{equation} \end{lemma} \begin{proof} We first compute the covariant derivative of $\dot{\gamma}$ as in \eqref{eq:DCov}, using \eqref{eq:G1}, \eqref{eq:G2} and \eqref{eq:G3}. In components, with respect to the standard basis of $\mathbb{R}^{3}$, we have \begin{equation} \begin{aligned} (D_t \dot{\gamma})_{1} &= \ddot{\gamma}_{1} + \dfrac{\gamma_2 L}{2} \dot{\gamma}_{1} \dot{\gamma}_{2}- \dfrac{\gamma_1 L}{2} \dot{\gamma}_{2}^{2} +L \dot{\gamma}_{3} \dot{\gamma}_{2} = \ddot{\gamma}_{1}+ L \dot{\gamma}_2 \omega(\dot{\gamma}),\\ (D_t \dot{\gamma})_{2} &= \ddot{\gamma}_{2} - \dfrac{\gamma_2 L}{2} \dot{\gamma}_{1}^{2} + \dfrac{\gamma_1 L}{2} \dot{\gamma}_{1}\dot{\gamma}_{2}-L \dot{\gamma}_{3} \dot{\gamma}_{1} = \ddot{\gamma}_{2}- L \dot{\gamma}_1 \omega(\dot{\gamma}),\\ (D_t \dot{\gamma})_{3} &= \ddot{\gamma}_{3} - \dfrac{\gamma_1 \gamma_2 L}{4} \dot{\gamma}_{1}^{2} + \dfrac{(\gamma_{1}^{2}-\gamma_{2}^{2})L}{4}\dot{\gamma}_{1}\dot{\gamma}_{2} - \dfrac{\gamma_1 L}{2} \dot{\gamma}_{1}\dot{\gamma}_{3} +\dfrac{\gamma_1 \gamma_2 L}{4} \dot{\gamma}_{2}^{2} - \dfrac{\gamma_2 L}{2} \dot{\gamma}_{2}\dot{\gamma}_{3}. \end{aligned}\end{equation} Now we express $D_t \dot{\gamma}$ in the basis $\{X_1,X_2,X_{3}^{L} \}$: \begin{equation}\label{eq:DC} D_{t}\dot{\gamma} = \bigl( \ddot{\gamma}_{1}+ L \dot{\gamma}_2 \omega(\dot{\gamma}) \bigr) \, X_1 + \bigl( \ddot{\gamma}_{2}- L \dot{\gamma}_1 \omega(\dot{\gamma}) \bigr) \, X_2 + \left( \sqrt{L} \omega({\ddot{\gamma}})\right) \, X_{3}^{L}, \end{equation} where $$ \omega(\ddot{\gamma}(t)) = \ddot{\gamma}_{3}(t) - \dfrac{1}{2}\left( \gamma_{1}(t)\ddot{\gamma}_{2}(t) - \gamma_{2}(t)\ddot{\gamma}_{1}(t)\right), $$ coincides with the expression $(\omega(\dot{\gamma}))'$ in \eqref{eq:gLEuler}.
Recalling that \begin{equation}\label{dot-gamma-in-coordinates} \dot\gamma = \dot\gamma_1 \, X_1 + \dot\gamma_2 \, X_2 + (\sqrt{L}\omega(\dot\gamma)) \, X_{3}^{L}, \end{equation} we compute $$ \scal{D_t \dot{\gamma}}{\dot{\gamma}}_{L} = \ddot{\gamma}_1 \dot{\gamma}_1 + \ddot{\gamma}_2 \dot{\gamma}_2 + L \omega(\dot{\gamma}) \omega(\ddot{\gamma}). $$ Therefore \begin{equation}\label{eq:Curvature} k_{\gamma}^{L} = \sqrt{\dfrac{(\ddot{\gamma}_{1}+ L \dot{\gamma}_2 \omega(\dot{\gamma}))^{2} + (\ddot{\gamma}_{2}- L \dot{\gamma}_1 \omega(\dot{\gamma}))^{2} + L \omega({\ddot{\gamma}})^{2}}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2}+L\omega(\dot{\gamma})^{2})^{2}} - \dfrac{(\ddot{\gamma}_1 \dot{\gamma}_1 + \ddot{\gamma}_2 \dot{\gamma}_2 + L \omega(\dot{\gamma}) \omega(\ddot{\gamma}))^{2}}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2}+L\omega(\dot{\gamma})^{2})^{3}}}. \end{equation} On the other hand, if $\gamma(t)$ is a horizontal point for $\gamma$, then $\omega(\dot{\gamma}(t))=0$ and $$ D_{t}\dot{\gamma} = \ddot\gamma_1 \, X_1 + \ddot\gamma_2 \, X_2. $$ It follows that $$ k_{\gamma}^{L} = \sqrt{\dfrac{\ddot{\gamma}_{1}^{2} + \ddot{\gamma}_{2}^{2}}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2})^{2}} - \dfrac{(\ddot{\gamma}_1 \dot{\gamma}_1 + \ddot{\gamma}_2 \dot{\gamma}_2 )^{2}}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2})^{3}}}, $$ as desired. \end{proof} \begin{remark}\label{Remark} The pointwise notion of curvature provided by (\ref{eq:kL}) is continuous along the curve $\gamma$. Moreover, we want to stress that this notion is independent of the parametrization chosen. \end{remark} We now use the previous results to study curvatures of curves in the Heisenberg group $\mathbb{H}$. \begin{definition}\label{def:sRc} Let $\gamma : [a, b] \rightarrow \mathbb{H}$ be a Euclidean $C^{2}$-smooth regular curve. We define the sub-Riemannian curvature $k_{\gamma}^{0}$ of $\gamma$ at $\gamma(t)$ to be $$ k_{\gamma}^{0} := \lim_{L \rightarrow +\infty} k_{\gamma}^{L}, $$ if the limit exists. \end{definition} It is clear that, a priori, the above limit may not exist. In order to deal with the asymptotics as $L \to +\infty$ of the quantities involved, let us introduce the following notation: for continuous functions $f,g: (0, +\infty) \to \mathbb{R}$, \begin{equation}\label{eq:SIM} f(L) \sim g(L), \quad \textrm{as $L \to +\infty$} \quad \stackrel{def}{\Longleftrightarrow} \quad \lim_{L \to +\infty} \dfrac{f(L)}{g(L)}=1. \end{equation} \begin{lemma}\label{k0} Let $\gamma : [a, b] \rightarrow \mathbb{H}$ be a Euclidean $C^{2}$-smooth regular curve. Then \begin{equation}\label{eq:k0} k_{\gamma}^{0} = \left \{ \begin{array}{rl} \dfrac{|\dot{\gamma}_{1} \ddot{\gamma}_2 - \dot{\gamma}_{2} \ddot{\gamma}_1|}{\sqrt{(\dot{\gamma}_{1}^2 + \dot{\gamma}_{2}^2)^{3}}}, & \mbox{if $\gamma(t)$ is a horizontal point of $\gamma$,} \\ \dfrac{\sqrt{\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2}}}{|\omega(\dot{\gamma})|} , & \mbox{if $\gamma(t)$ is not a horizontal point of $\gamma$.} \end{array}\right. \end{equation} \end{lemma} \begin{proof} The first result follows from the fact that the expression (\ref{eq:kL2}) for the curvature at horizontal points in $(\mathbb{R}^{3}, g_L)$ is independent of $L$. For the other case, we need to study the asymptotics in $L$.
Using the notation introduced in \eqref{eq:SIM}, we have \begin{align*} \|D_t \dot{\gamma}\|_{L} \sim L \sqrt{(\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2})} |\omega(\dot{\gamma})|, \quad \mbox{as $L \to +\infty$},\\ \|\dot{\gamma} \|_{L} \sim \sqrt{L} \,|\omega(\dot{\gamma})|, \quad \mbox{as $L \to +\infty$},\\ \scal{D_t \dot{\gamma}}{\dot{\gamma}}_{L} \sim L \,\omega(\dot{\gamma}) \, \omega(\ddot{\gamma}), \quad \mbox{as $L \to +\infty$}. \end{align*} Therefore \begin{align*} \dfrac{\|D_t \dot{\gamma} \|_{L}^{2}}{\|\dot{\gamma}\|_{L}^{4}} &\to \dfrac{\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2}}{\omega(\dot{\gamma})^{2}}, \quad \mbox{as $L \to +\infty$},\\ \dfrac{\scal{D_t \dot{\gamma}}{\dot{\gamma}}_{L}^{2}}{\|\dot{\gamma}\|_{L}^{6}} &\sim \dfrac{L^{2} \omega(\dot{\gamma})^{2} \omega(\ddot{\gamma})^{2}}{L^{3} \omega(\dot{\gamma})^{6}} \to 0, \quad \mbox{as $L \to +\infty$}. \end{align*} Altogether, $$k_{\gamma}^{0} = \lim_{L \to +\infty} k_{\gamma}^{L} = \dfrac{\sqrt{\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2}}}{|\omega(\dot{\gamma})|},$$ as desired. \end{proof} \begin{remark} We stress that the $g_L$-length of a curve can, in general, blow up once we take the limit as $L \to +\infty$. The remarkable fact provided by Lemma \ref{k0} is that this never occurs for the curvature $k_{\gamma}^{L}$. We notice that, as in Remark \ref{Remark}, the sub-Riemannian curvature $k_{\gamma}^{0}$ is independent of the parametrization. In the horizontal case, the quantity $k_\gamma^0$ is the absolute value of the {\it signed horizontal curvature} of a horizontal curve which arises in the study of horizontal mean curvature. See Remark \ref{horiz-mean-curve-remark} for more details. \end{remark} There is another fact to notice. If we consider a Euclidean $C^{2}$-smooth regular curve $\gamma :[a,b] \to \mathbb{H}$ which is partially horizontal and partially not, the quantity $k_{\gamma}^{0}(t)$ in \eqref{eq:k0} need not be a continuous function (in contrast to Remark \ref{Remark}). Moreover, in view of the independence of $k_{\gamma}^{0}$ from the parametrization, when we approach a horizontal point of $\gamma$ from non-horizontal points of $\gamma$, we always find a singularity. Let us clarify the last sentences with an example. \begin{example} Consider the planar curve $$\gamma(\theta):= \left( \cos(\theta)+1, \sin(\theta),0\right), \quad \theta \in [0,2\pi).$$ The curve $\gamma$ is horizontal only for $\theta = \pi$; indeed, $\omega(\dot{\gamma})= -\tfrac{1+\cos(\theta)}{2}$, which vanishes only for $\theta = \pi$. The horizontal curvature $k_{\gamma}^{0}$ is a pointwise notion; therefore, we have \begin{equation*} k_{\gamma}^{0}(\theta)|_{\theta=\pi} = \sqrt{\dfrac{\ddot{\gamma}_{1}^{2} + \ddot{\gamma}_{2}^{2}}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2})^{2}} - \dfrac{(\ddot{\gamma}_1 \dot{\gamma}_1 + \ddot{\gamma}_2 \dot{\gamma}_2 )^{2}}{(\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2})^{3}}} \bigg|_{\theta = \pi} = 1, \end{equation*} and for $\theta \in [0,2\pi)\setminus \{\pi\}$, \begin{equation*} k_{\gamma}^{0}(\theta)= \dfrac{\sqrt{\dot{\gamma}_{1}^{2}(\theta)+ \dot{\gamma}_{2}^{2}(\theta)}}{|\omega(\dot{\gamma}(\theta))|} = \dfrac{2}{|1+\cos(\theta)|} \to +\infty, \quad \textrm{as $\theta \to \pi$}. \end{equation*} \end{example} In the classical theory of differential geometry of smooth space curves in $\mathbb{R}^{3}$, there is a famous \textit{rigidity theorem} (see for instance \cite{dC76}) stating that every curve is characterized by its curvature and its torsion, up to rigid motions.
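The values computed in the example above can be checked symbolically. The following is a minimal sketch (not part of the argument) using the Python library \texttt{sympy}: it simply evaluates the two branches of formula \eqref{eq:k0} for the curve $\gamma(\theta)=(\cos(\theta)+1,\sin(\theta),0)$; all variable names are ad hoc.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', real=True)
g1, g2, g3 = sp.cos(t) + 1, sp.sin(t), sp.Integer(0)
d = lambda f: sp.diff(f, t)

# omega(gamma') = gamma_3' - (gamma_1*gamma_2' - gamma_2*gamma_1')/2
om = d(g3) - sp.Rational(1, 2) * (g1 * d(g2) - g2 * d(g1))

# non-horizontal branch of (eq:k0)
k0_nonhor = sp.sqrt(d(g1)**2 + d(g2)**2) / sp.Abs(om)
print(sp.simplify(k0_nonhor))              # expected: 2/Abs(cos(t) + 1)

# horizontal branch of (eq:k0), evaluated at the horizontal point t = pi
k0_hor = sp.Abs(d(g1) * d(d(g2)) - d(g2) * d(d(g1))) \
         / (d(g1)**2 + d(g2)**2)**sp.Rational(3, 2)
print(sp.simplify(k0_hor.subs(t, sp.pi)))  # expected: 1
\end{verbatim}
Away from $\theta=\pi$ this reproduces the value $2/|1+\cos(\theta)|$, while at the horizontal point it returns $1$, in agreement with the discontinuity discussed above.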
Rigidity questions of this kind are also addressed in \cite{CL13} and \cite{CL15}, but with a different approach, viewing the first Heisenberg group $\mathbb{H}$ as a flat 3-dimensional manifold carrying a pseudo-hermitian structure. We now have a notion of \textit{horizontal curvature} for Euclidean $C^{2}$-smooth regular curves in $(\mathbb{H} ,d_{cc})$. A first result in the direction of the classical rigidity theorem is the following. \begin{proposition}\label{curvature-isometry-proposition} Let $\gamma :[a,b]\to (\mathbb{H},d_{cc})$ be a Euclidean $C^{2}$-smooth regular curve. The sub-Riemannian curvature $k_{\gamma}^{0}$ of $\gamma$ is invariant under left translations of $\gamma$ and under rotations of $\gamma$ around the $x_3$-axis. \end{proposition} \begin{proof} Fix a point $g = (g_1, g_2,g_3) \in \mathbb{H}$. Define the curve $\tilde{\gamma}$ as the left translation by $g$ of $\gamma$, $$ \tilde{\gamma}(t):= L_{g}(\gamma(t)) = \left(\gamma_1(t)+ g_1, \gamma_2(t) + g_2, \gamma_3(t)+g_3 - \dfrac{1}{2}(\gamma_1(t) g_2 - \gamma_2(t) g_1)\right), \quad t\in [a,b]. $$ It is clear that $$ \dot{\tilde{\gamma}}_{i}=\dot{\gamma}_{i} \quad \mbox{and} \quad \ddot{\tilde{\gamma}}_{i}=\ddot{\gamma}_{i}, \quad i=1,2, $$ and therefore $k_{\gamma}^{0} = k_{\tilde{\gamma}}^{0}$. For the second assertion, fix an angle $\theta \in [0,2\pi)$ and define the curve $\bar{\gamma}$ as the rotation by $\theta$ of $\gamma$ around the $x_3$-axis, $$ \bar{\gamma}(t):= \left(\gamma_1(t)\cos(\theta)+ \gamma_2(t)\sin(\theta), -\gamma_{1}(t)\sin(\theta)+\gamma_{2}(t)\cos(\theta), \gamma_3(t) \right), \quad t\in [a,b]. $$ An easy computation shows that in this case we have $$ \dot{\bar{\gamma}}_{1}^{2}+\dot{\bar{\gamma}}_{2}^{2}=\dot{\gamma}_{1}^{2} +\dot{\gamma}_{2}^{2}, \quad \ddot{\bar{\gamma}}_{1}^{2}+\ddot{\bar{\gamma}}_{2}^{2}=\ddot{\gamma}_{1}^{2} +\ddot{\gamma}_{2}^{2}, \quad \dot{\bar{\gamma}}_{1}\ddot{\bar{\gamma}}_{1}+\dot{\bar{\gamma}}_{2}\ddot{\bar{\gamma}}_{2} = \dot{\gamma}_{1}\ddot{\gamma}_{1}+\dot{\gamma}_{2}\ddot{\gamma}_{2}, $$ and therefore $k_{\gamma}^{0} = k_{\bar{\gamma}}^{0}$. \end{proof} \begin{remark}\label{curvature-dilation-remark} The behavior of the curvature $k_\gamma^0$ under dilations is as follows: if $\gamma$ is a $C^2$-smooth regular curve and $r>0$, then $$ k_{\delta_r\gamma}^0 = \frac1r k_\gamma^0. $$ Here $\delta_r\gamma$ denotes the curve $(r\gamma_1,r\gamma_2,r^2\gamma_3)$ when $\gamma=(\gamma_1,\gamma_2,\gamma_3)$. \end{remark} \section{Riemannian approximation of geodesic curvature of curves on surfaces}\label{C2} In order to prove a Heisenberg version of the Gauss--Bonnet Theorem, we need the concept of \textit{sub-Riemannian signed geodesic curvature}. This section will be devoted to the study of the curvature of curves living on surfaces. Let us fix once and for all the assumptions we will make on the surface $\Sigma$ in this and the coming section. We will say that a surface $\Sigma \subset (\mathbb{R}^{3},g_{L})$, or $\Sigma \subset \mathbb{H}$, is \textbf{regular} if \begin{equation}\label{eq:Sigma} \Sigma \quad \textrm{is a Euclidean $C^{2}$-smooth compact and oriented surface}. \end{equation} In particular we will assume that there exists a Euclidean $C^{2}$-smooth function $u:\mathbb{R}^{3}\to \mathbb{R}$ such that $$ \Sigma = \{(x_1,x_2,x_3)\in \mathbb{R}^{3}: u(x_1,x_2,x_3)=0\} $$ and $\nabla_{\mathbb{R}^{3}}u \neq 0$ on $\Sigma$. As in \cite[Section 4.2]{CDPT}, our study will be local and away from characteristic points of $\Sigma$.
For completeness, we recall that a point $x \in \Sigma$ is called \textit{characteristic} if \begin{equation}\label{eq:char} \nabla_{\textnormal{H}} u(x) = (0,0). \end{equation} The presence or absence of characteristic points will be stated explicitly. To fix notation (following the one adopted in \cite[Chapter 4]{CDPT}), let us define first $$ p:= X_1 u, \quad q:= X_2 u \quad \textrm{and} \quad r:= X_{3}^{L} u. $$ We then define \begin{equation}\label{eq:pq} \begin{aligned} &l:=\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}, \quad \bar{p}:= \dfrac{p}{l} \quad \textrm{and} \quad \bar{q}:= \dfrac{q}{l}, \\ &l_{L}:= \sqrt{(X_{1}u)^{2}+(X_{2}u)^{2}+(X_{3}^{L} u)^{2}}, \quad \bar{r}_L:= \dfrac{r}{l_{L}},\\ &\bar{p}_{L}:= \dfrac{p}{l_L} \quad \mbox{and} \quad \bar{q}_{L}:= \dfrac{q}{l_L}. \end{aligned} \end{equation} In particular, $\bar{p}^{2}+\bar{q}^{2}=1$. It is clear from \eqref{eq:char} that these functions are well defined at every non-characteristic point of $\Sigma$. \begin{definition} Let $\Sigma \subset (\mathbb{R}^{3}, g_{L})$ be a regular surface and let $u: \mathbb{R}^{3} \to \mathbb{R}$ be as before. The Riemannian unit normal $\nu_{L}$ to $\Sigma$ is $$\nu_{L} := \dfrac{\nabla_{L}u}{\|\nabla_{L}u\|_{L}} = \bar{p}_{L}X_1 + \bar{q}_{L}X_2 + \bar{r}_{L}X_{3}^{L},$$ where $\nabla_{L}u$ is the Riemannian gradient of $u$. \end{definition} \begin{definition}\label{E1E2} Let $\Sigma \subset (\mathbb{R}^{3}, g_{L})$ be a regular surface and let $u: \mathbb{R}^{3} \to \mathbb{R}$ be as before. For every point $g \in \Sigma$, we introduce the orthonormal basis $\{E_{1}(g), E_{2}(g)\}$ for $T_g\Sigma$, where $$ E_{1}(g) := \bar{q}(g) \, X_{1}(g) - \bar{p}(g) \, X_{2}(g) $$ and $$ E_{2}(g) := \bar{r}_{L}(g) \, \bar{p}(g) \, X_{1}(g) + \bar{r}_{L}(g) \, \bar{q}(g)\, X_{2}(g) - \dfrac{l}{l_L}(g) \, X_{3}^{L}(g) .$$ On $T_g \Sigma$ we define a linear transformation $J_L : T_g \Sigma \to T_g \Sigma$ such that \begin{equation}\label{eq:DefJ} J_L (E_{1}(g)) := E_{2}(g), \quad \mbox{and} \quad J_L(E_{2}(g)):= -E_{1}(g). \end{equation} In the following we will omit the dependence on the point $g \in \Sigma$ when it is clear. \end{definition} Let us spend a few words on the choice of the basis of the tangent plane $T_{g}\Sigma$. The vector $E_1$ is a horizontal vector given by $J(\nabla_{\textnormal{H}} u / \|\nabla_{\textnormal{H}} u\|_H)$, where $J$ is the linear operator acting on horizontal vector fields by $J(aX_1+bX_2) = bX_1-aX_2$. This vector is called by some authors \textit{characteristic direction} and plays an important role in the study of minimal surfaces in $\mathbb{H}$ and in the study of the properties of the characteristic set $\mathrm{char}(\Sigma)$, see for instance \cite{CHMY05}, \cite{CHMY12} and \cite{CH04}. In order to perform computations on the surface $\Sigma$, we need a Riemannian metric and a connection. Classically, see for instance \cite{L} or \cite{dC}, there is a standard way to define a Riemannian metric $g_{L, \Sigma}$ on $\Sigma$ so that $\Sigma$ is isometrically immersed in $(\mathbb{R}^{3}, g_L)$. Once we have a Riemannian metric on the surface $\Sigma$, we can define the unique Levi-Civita connection $\nabla^{\Sigma}$ on $\Sigma$ related to the Riemannian metric $g_{L, \Sigma}$. This procedure is equivalent to the following definition. \begin{definition} Let $\Sigma \subset (\mathbb{R}^{3}, g_L)$ be a Euclidean $C^{2}$-smooth surface. 
For every $U,V \in T_{g}\Sigma$ we define $\nabla^{\Sigma}_{U}V$ to be the tangential component of $\nabla$, namely $$\nabla^{\Sigma}_{U}V = \Pi \left(\nabla_{U}V\right),$$ where $\Pi: \mathbb{R}^{3} \rightarrow T\Sigma$ denotes the orthogonal projection onto the tangent plane with respect to the metric $g_L$. \end{definition} Note that to compute $\nabla_{U}V$ we are considering an extension of both $U$ and $V$ to $\mathbb{R}^{3}$. As a notational remark, given a Euclidean $C^{2}$-smooth and regular curve $\gamma \subset \Sigma$, we will denote the covariant derivative of $\dot{\gamma}$ with respect to the $g_{L,\Sigma}$-metric by $D_{t}^{\Sigma} \dot{\gamma}$. We are now interested in detecting the curvature of a Euclidean $C^{2}$-smooth regular curve $\gamma$ on a given regular surface $\Sigma \subset \mathbb{H}$. We start with a technical lemma that will simplify our treatment in the ensuing discussion. \begin{lemma}\label{limits} For $\bar{p}$, $\bar{q}$, $l_{L}$ and $\bar{r}_L$ as before, we have \begin{align} l_{L} \to \|\nabla_{\textnormal{H}} u\|, & \quad \mbox{as $L \to +\infty$},\\ \bar{r}_{L} \to 0,& \quad \mbox{as $L \to +\infty$}\label{eq:2lim},\\ \dfrac{\bar{r}_{L}}{l_{L}} \to 0,& \quad \mbox{as $L \to +\infty$}, \\ \dfrac{\sqrt{L} \, \bar{r}_{L}}{l_{L}} \to \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|^{2}},& \quad \mbox{as $L \to +\infty$}, \\ L \,\bar{r}_{L}^{2} \to \dfrac{(X_3 u)^{2}}{\|\nabla_{\textnormal{H}} u\|^{2}},& \quad \mbox{as $L \to +\infty$},\\ \bar{r}_{L}L \sim \dfrac{\sqrt{L}(X_{3}u)}{\|\nabla_{\textnormal{H}} u\|},& \quad \mbox{as $L \to +\infty$}.\label{eq:6lim} \end{align} \end{lemma} \begin{proof} All the limits and asymptotics follow directly from the definitions in (\ref{eq:pq}). \end{proof} \begin{lemma}\label{NonHor} Let $\gamma: [a,b] \rightarrow \Sigma$ be a Euclidean $C^{2}$-smooth, regular and non-horizontal curve. The covariant derivative $D^{\Sigma}_t \dot{\gamma}$ of $\dot{\gamma}$ with respect to the $g_{L, \Sigma}$-metric is given by \begin{equation*} D^{\Sigma}_t \dot{\gamma} = (D^{\Sigma}_t \dot{\gamma})_1 E_1 + (D^{\Sigma}_t \dot{\gamma})_2 E_2, \end{equation*} where \begin{equation}\label{eq:DEi} \begin{aligned} (D^{\Sigma}_t \dot{\gamma})_1 &= \bar{q} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)- \bar{p}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right),\\ (D^{\Sigma}_t \dot{\gamma})_2 &= \bar{r}_L \bar{p} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)+ \bar{r}_L \bar{q}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right) - \dfrac{l}{l_L}\sqrt{L} \omega(\ddot{\gamma}). \end{aligned} \end{equation} Moreover, if $\gamma: [a,b] \rightarrow \Sigma$ is a Euclidean $C^{2}$-smooth, regular and horizontal curve, we have \begin{equation}\label{eq:DEii} D^{\Sigma}_t \dot{\gamma} = \left(\bar{q} \ddot{\gamma}_1- \bar{p} \ddot{\gamma}_2 \right) E_1 + \bar{r}_L \, \left(\bar{p} \ddot{\gamma}_1 + \bar{q}\ddot{\gamma}_2\right) E_2. \end{equation} \end{lemma} \begin{proof} The covariant derivative $D_t \dot{\gamma}$ in the $\{X_1,X_2,X_{3}^{L}\}$ basis is as in (\ref{eq:DC}).
Projecting $D_t\dot{\gamma}$ via $\Pi$ onto the tangent plane $T_g \Sigma$, we find \begin{equation*} \begin{aligned} D^{\Sigma}_t \dot{\gamma} :&= \scal{D_t \dot{\gamma}}{E_1}_L E_1 + \scal{D_t \dot{\gamma}}{E_2}_L E_2 \\ &= \left[ \bar{q} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)- \bar{p}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right)\right] E_1 \\ & \quad + \left[ \bar{r}_L \bar{p} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)+ \bar{r}_L \bar{q}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right) - \dfrac{l}{l_L}\sqrt{L} \omega(\ddot{\gamma})\right] E_2. \end{aligned}\end{equation*} The situation is much simpler if $\gamma$ is horizontal: indeed, it suffices to set $\omega(\dot{\gamma}) = \omega(\ddot{\gamma})=0$ in the previous expression. \end{proof} \begin{definition} Let $\Sigma \subset (\mathbb{R}^{3}, g_L)$ be a regular surface. Let $\gamma : [a, b] \rightarrow \Sigma$ be a Euclidean $C^{2}$-smooth and regular curve, and let $D_{t}^{\Sigma}$ be the covariant derivative of $\dot{\gamma}$ with respect to the Riemannian metric $g_{L,\Sigma}$. The geodesic curvature $k_{\gamma, \Sigma}^{L}$ of $\gamma$ at the point $\gamma(t)$ is defined to be $$ k_{\gamma, \Sigma}^{L} := \sqrt{ \dfrac{\| D^{\Sigma}_t \dot{\gamma} \|^{2}_{L,\Sigma}}{\|\dot{\gamma}\|_{L,\Sigma}^{4}} - \dfrac{\scal{D^{\Sigma}_t \dot{\gamma}}{{\dot{\gamma}}}^{2}_{L,\Sigma}}{\|\dot{\gamma}\|_{L,\Sigma}^{6}}}. $$ \end{definition} \begin{definition} Let $\Sigma \subset \mathbb{H}$ be a regular surface and let $\gamma : [a, b] \rightarrow \Sigma$ be a Euclidean $C^{2}$-smooth and regular curve. The \textit{sub-Riemannian geodesic curvature} $k_{\gamma, \Sigma}^{0}$ of $\gamma$ at the point $\gamma(t)$ is defined as $$ k_{\gamma, \Sigma}^{0}:= \lim_{L \rightarrow +\infty} k_{\gamma, \Sigma}^{L}, $$ if the limit exists. \end{definition} As in Section \ref{C1}, our first task is to show that the above limit always exists. \begin{lemma}\label{GeoCur} Let $\Sigma \subset \mathbb{H}$ be a regular surface and let $\gamma : [a, b] \rightarrow \Sigma$ be a Euclidean $C^{2}$-smooth and regular curve. Then $$ k_{\gamma, \Sigma}^{0} = \dfrac{|\bar{p} \dot{\gamma}_{1} + \bar{q} \dot{\gamma}_{2} |}{|\omega(\dot{\gamma})|} $$ if $\gamma(t)$ is a non-horizontal point, while $$ k_{\gamma, \Sigma}^{0} = 0 $$ if $\gamma(t)$ is a horizontal point. \end{lemma} \begin{proof} First, let $\gamma(t)$ be a non-horizontal point of the curve $\gamma$. From Lemma \ref{NonHor}, we know that \begin{equation*} \begin{aligned} D^{\Sigma}_t \dot{\gamma} &=\left( \bar{q} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)- \bar{p}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right)\right) E_1 \\ & \quad + \left( \bar{r}_L \bar{p} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)+ \bar{r}_L \bar{q}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right) - \dfrac{l}{l_L}\sqrt{L} \omega(\ddot{\gamma})\right) E_2. \end{aligned}\end{equation*} For $\dot{\gamma}$, recalling \eqref{dot-gamma-in-coordinates} we have \begin{equation*} \dot{\gamma} =\left( \bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2} \right) E_1 + \left( \bar{r}_L \bar{p} \dot{\gamma}_{1} + \bar{r}_L \bar{q} \dot{\gamma}_{2} - \dfrac{l}{l_L}\sqrt{L} \omega(\dot{\gamma})\right) E_2.
\end{equation*} Recalling \eqref{eq:6lim}, we have \begin{equation*} \begin{aligned} \|D_{t}^{\Sigma}\dot{\gamma}\|_{L,\Sigma}^{2} &= \left(\bar{q} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)- \bar{p}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right) \right)^{2} \\ & \quad + \left( \bar{r}_L \bar{p} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)+ \bar{r}_L \bar{q}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right) - \dfrac{l}{l_L}\sqrt{L} \omega(\ddot{\gamma})\right)^{2}\\ & \sim L^{2} \omega(\dot{\gamma})^{2} (\bar{p}\dot{\gamma}_{1} + \bar{q}\dot{\gamma}_{2})^{2}, \quad \mbox{as $L \to +\infty$}. \end{aligned} \end{equation*} In a similar way, we have that \begin{equation*} \begin{aligned} \|\dot{\gamma}\|_{L,\Sigma} &=\sqrt{\left( \bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2} \right)^{2} + \left( \bar{r}_L \bar{p} \dot{\gamma}_{1} + \bar{r}_L \bar{q} \dot{\gamma}_{2} - \dfrac{l}{l_L}\sqrt{L} \omega(\dot{\gamma})\right)^{2}}\\ & \quad \sim \sqrt{L} |\omega(\dot{\gamma})|, \quad \mbox{as $L \to +\infty$}. \end{aligned} \end{equation*} A slightly more involved computation shows that $$ \scal{D_{t}^{\Sigma}\dot{\gamma}}{\dot{\gamma}}_{L,\Sigma} \sim L\, M, \quad \mbox{as $L \to +\infty$}, $$ where $M$ does not depend on $L$. Therefore, at a non-horizontal point $\gamma(t)$, we have $$ k_{\gamma,\Sigma}^{0} = \dfrac{|\bar{p} \dot{\gamma}_{1} + \bar{q} \dot{\gamma}_{2} |}{|\omega(\dot{\gamma})|}. $$ Now let $\gamma(t)$ be a horizontal point of the curve $\gamma$. In this case, \begin{align*} D^{\Sigma}_t \dot{\gamma} &=\left(\bar{q}\ddot{\gamma}_1 - \bar{p} \ddot{\gamma}_2 \right) E_1 + \bar{r}_L \left( \bar{p} \ddot{\gamma}_1 +\bar{q} \ddot{\gamma}_2\right) E_2,\\ \dot{\gamma} & = \left( \bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2} \right) E_1 + \bar{r}_L \, \left( \bar{p} \dot{\gamma}_{1} + \bar{q} \dot{\gamma}_{2} \right) E_2. \end{align*} Recalling (\ref{eq:2lim}), we have \begin{align*} \|D_{t}^{\Sigma}\dot{\gamma}\|_{L,\Sigma}^{2} &\to \left(\bar{q}\ddot{\gamma}_1 - \bar{p} \ddot{\gamma}_2 \right)^{2}, \quad \mbox{as $L \to +\infty$,}\\ \|\dot{\gamma}\| _{L,\Sigma}^{2} &\to (\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2})^{2} , \quad \mbox{as $L \to +\infty$,}\\ \scal{D_{t}^{\Sigma}\dot{\gamma}}{\dot{\gamma}}_{L,\Sigma} & \to (\bar{q}\dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2}) (\bar{q} \ddot{\gamma}_1 -\bar{p} \ddot{\gamma}_2), \quad \mbox{as $L \to +\infty$.} \end{align*} Therefore $$k_{\gamma,\Sigma}^{0}= \sqrt{\dfrac{\left(\bar{q}\ddot{\gamma}_1 - \bar{p} \ddot{\gamma}_2 \right)^{2}}{(\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2})^{4}} - \dfrac{(\bar{q}\dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2})^{2} (\bar{q} \ddot{\gamma}_1 -\bar{p} \ddot{\gamma}_2)^{2}}{(\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2})^{6}} } = 0,$$ as desired. \end{proof} \begin{remark} In the last computation, it is possible to divide by the term $\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2}$, because it cannot be $0$ when $\gamma(t)$ is a horizontal point. Indeed, let $g \in \Sigma$ be a non-characteristic point; then we know that $$\dim (H_g \cap T_{g}\Sigma) =1.$$ At a horizontal point $\gamma(t)$, we have that $\dot{\gamma} \in H_g \cap T_{g}\Sigma$. On the other hand, $E_1 \in H_g \cap T_{g}\Sigma$ as well.
Therefore, if $\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2} = 0$, then we would have $$\scal{\dot{\gamma}}{E_1}_{L} = 0,$$ and hence either $\dim (H_g \cap T_{g}\Sigma) =2$ or $\dot{\gamma}=0$, which leads to a contradiction. \end{remark} For planar curves it is possible to define a notion of \textit{signed curvature}. Actually, it is possible to define an analogous concept for curves living on arbitrary two-dimensional Riemannian manifolds (see for instance \cite[Chapter 9]{L}). \begin{definition} Let $\Sigma \subset (\mathbb{R}^{3}, g_L)$ be a regular surface and let $\gamma : [a,b] \rightarrow \Sigma$ be a Euclidean $C^{2}$-smooth and regular curve. Let $\nabla^{\Sigma}$ be the Levi-Civita connection on $\Sigma$ related to the Riemannian metric $g_{L,\Sigma}$. For every $g \in \Sigma$, let $\{E_1(g),E_2(g)\}$ be an orthonormal basis of $T_g \Sigma$. The signed geodesic curvature $k_{\gamma, \Sigma}^{L,s}$ of $\gamma$ at the point $\gamma(t)$ is defined as $$k_{\gamma, \Sigma}^{L,s} := \dfrac{\scal{D_{t}^{\Sigma}\dot{\gamma}}{J_{L}(\dot{\gamma})}_{L,\Sigma}}{\|\dot{\gamma}\|_{L,\Sigma}^{3}},$$ where $J_{L}$ is the linear transformation on $T_g \Sigma$ defined in (\ref{eq:DefJ}). \end{definition} \begin{remark} If we take the absolute value of $k_{\gamma, \Sigma}^{L,s}$, we recover precisely $k_{\gamma, \Sigma}^{L}$. Indeed, by definition of $J_L$, $\{ \dot{\gamma}/\|\dot{\gamma}\|_{L}, J_L(\dot{\gamma})/\|\dot{\gamma}\|_{L}\}$ is another orthonormal basis of $T_g \Sigma$ oriented as $\{E_{1}(g), E_{2}(g)\}$. Since $D_{t}^{\Sigma}\dot{\gamma}$ is defined as the projection of $D_t \dot{\gamma}$ onto $T_g \Sigma$, we have $$ D_{t}^{\Sigma}\dot{\gamma} = \scal{D_{t}^{\Sigma}\dot{\gamma}}{\dfrac{\dot{\gamma}}{\|\dot{\gamma}\|_{L}}}_L \dfrac{\dot{\gamma}}{\|\dot{\gamma}\|_{L}} + \scal{D_{t}^{\Sigma}\dot{\gamma}}{\dfrac{J_L(\dot{\gamma})}{\|\dot{\gamma}\|_{L}}}_L \dfrac{J_L(\dot{\gamma})}{\|\dot{\gamma}\|_{L}}. $$ In particular, $$ \dfrac{\|D_{t}^{\Sigma}\dot{\gamma}\|_{L}^{2}}{\|\dot{\gamma}\|_{L}^{4}} = \dfrac{\scal{D_{t}^{\Sigma}\dot{\gamma}}{\dot{\gamma}}_{L,\Sigma}^{2}}{\|\dot{\gamma}\|_{L}^{6}} + \dfrac{\scal{D_{t}^{\Sigma}\dot{\gamma}}{J_L(\dot{\gamma})}_{L}^{2}}{\|\dot{\gamma}\|_{L}^{6}} $$ and so \begin{equation*} |k_{\gamma, \Sigma}^{L,s}| = \dfrac{|\scal{D_{t}^{\Sigma}\dot{\gamma}}{J_{L}(\dot{\gamma})}_{L}|}{\|\dot{\gamma}\|_{L}^{3}} = \sqrt{\dfrac{\|D_{t}^{\Sigma}\dot{\gamma}\|_{L}^{2}}{\|\dot{\gamma}\|_{L}^{4}}-\dfrac{\scal{D_{t}^{\Sigma}\dot{\gamma}}{\dot{\gamma}}_{L}^{2}}{\|\dot{\gamma}\|_{L}^{6}}} = k_{\gamma, \Sigma}^{L}. \end{equation*} \end{remark} \begin{definition} Let $\Sigma \subset \mathbb{H}$ be a regular surface and let $\gamma : [a, b] \rightarrow \Sigma$ be a Euclidean $C^{2}$-smooth and regular curve. The \textit{sub-Riemannian signed geodesic curvature} $k_{\gamma, \Sigma}^{0,s}(t)$ of $\gamma$ at the point $\gamma(t)$ is defined as $$k_{\gamma, \Sigma}^{0,s} := \lim_{L \rightarrow +\infty} k_{\gamma, \Sigma}^{L,s},$$ if the limit exists. \end{definition} Again, we need to show that the above limit actually exists. \begin{proposition}\label{SGeoCur} Let $\Sigma \subset \mathbb{H}$ be a regular surface and let $\gamma : [a,b] \rightarrow \Sigma$ be a Euclidean $C^{2}$-smooth and regular curve. Then $$ k_{\gamma, \Sigma}^{0,s} = \dfrac{\bar{p} \dot{\gamma}_{1} + \bar{q} \dot{\gamma}_{2}}{|\omega(\dot{\gamma})|} $$ if $\gamma(t)$ is a non-horizontal point, and $$ k_{\gamma, \Sigma}^{0,s} = 0 $$ if $\gamma(t)$ is a horizontal point.
\end{proposition} \begin{proof} We already know that \begin{equation*} \begin{aligned} D^{\Sigma}_t \dot{\gamma} &= \left( \bar{q} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)- \bar{p}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right)\right) E_1 + \\ &\quad \left( \bar{r}_L \bar{p} \left(\ddot{\gamma}_1 + L \omega(\dot{\gamma}) \dot{\gamma}_{2} \right)+ \bar{r}_L \bar{q}\left( \ddot{\gamma}_2 - L\omega(\dot{\gamma}) \dot{\gamma}_{1} \right) - \dfrac{l}{l_L}\sqrt{L} \omega(\ddot{\gamma})\right) E_2 \end{aligned}\end{equation*} and \begin{equation*} \begin{aligned} \dot{\gamma} &= \left(\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2} \right) E_1 + \left( \bar{r}_L \bar{p} \dot{\gamma}_{1} + \bar{r}_L \bar{q}\dot{\gamma}_{2} - \dfrac{l}{l_L}\sqrt{L} \omega(\dot{\gamma})\right) E_2, \end{aligned}\end{equation*} and therefore, by definition of $J_L$, \begin{equation*} J_L(\dot{\gamma}) = -\left( \bar{r}_L \bar{p} \dot{\gamma}_{1} + \bar{r}_L \bar{q}\dot{\gamma}_{2} - \dfrac{l}{l_L}\sqrt{L} \omega(\dot{\gamma})\right)E_1 + \left(\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2} \right) E_2. \end{equation*} After some simplifications, we get \begin{equation*} \begin{aligned} \scal{D_{t}^{\Sigma}\dot{\gamma}}{J_{L}(\dot{\gamma})}_{L,\Sigma} &= \bar{r}_{L}\dot{\gamma}_{1} \left( \ddot{\gamma}_2 - L \omega(\dot{\gamma})\dot{\gamma}_{1}\right)(\bar{p}^{2}+\bar{q}^{2}) -\bar{r}_{L}\dot{\gamma}_{2} \left( \ddot{\gamma}_1 + L \omega(\dot{\gamma})\dot{\gamma}_{2} \right)(\bar{p}^{2}+\bar{q}^{2})\\ &\quad + \dfrac{\sqrt{L} \, l}{l_L} \left[ \omega(\dot{\gamma})\left( \bar{q}\ddot{\gamma}_1 + L\bar{q}\omega(\dot{\gamma})\dot{\gamma}_{2} -\bar{p}\ddot{\gamma}_2 + L \bar{p}\omega(\dot{\gamma})\dot{\gamma}_{1}\right) -\bar{q}\omega(\ddot{\gamma})\dot{\gamma}_{1} + \bar{p}\omega(\ddot{\gamma})\dot{\gamma}_{2} \right]. \end{aligned} \end{equation*} Therefore, exploiting Lemma \ref{limits}, at a non-horizontal point of $\gamma$, we have $$\scal{D_{t}^{\Sigma}\dot{\gamma}}{J_{L}(\dot{\gamma})}_{L,\Sigma} \sim L^{3/2} \omega(\dot{\gamma})^{2} \left( \bar{q}\dot{\gamma}_{2} + \bar{p}\dot{\gamma}_{1} \right), \quad \mbox{as $L \to +\infty$}.$$ Recalling that $\|\dot{\gamma}\|_{L,\Sigma} \sim \sqrt{L}|\omega(\dot{\gamma})|$, as $L \to +\infty$, we find $$\dfrac{\scal{D_{t}^{\Sigma}\dot{\gamma}}{J_{L}(\dot{\gamma})}_{L,\Sigma}}{\|\dot{\gamma}\|_{L,\Sigma}^{3}} \to \dfrac{\left( \bar{p}\dot{\gamma}_{1}+ \bar{q}\dot{\gamma}_{2} \right)}{|\omega(\dot{\gamma})|}, \quad \mbox{as $L \to +\infty$}.$$ As for the previous results, the case of $\gamma$ horizontal is slightly easier, because $\omega(\dot{\gamma}) = \omega(\ddot{\gamma})=0$.
Therefore, \begin{align*} D^{\Sigma}_t \dot{\gamma} &= \left( \bar{q}\, \ddot{\gamma}_1 - \bar{p}\, \ddot{\gamma}_2 \right) E_1 + \bar{r}_L \left( \bar{p}\, \ddot{\gamma}_1 + \bar{q}\, \ddot{\gamma}_2 \right) E_2,\\ \dot{\gamma} &= \left(\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2} \right) E_1 + \bar{r}_L \left( \bar{p} \dot{\gamma}_{1} + \bar{q}\dot{\gamma}_{2} \right) E_2,\\ J_L(\dot{\gamma}) &= -\bar{r}_L \left( \bar{p} \dot{\gamma}_{1} + \bar{q}\dot{\gamma}_{2} \right) E_1 + \left(\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2} \right) E_2. \end{align*} Hence, from Lemma \ref{limits}, $$\scal{D^{\Sigma}_t \dot{\gamma}}{J_L(\dot{\gamma})}_L = \bar{r}_L \left[- \left(\bar{q} \ddot{\gamma}_{1} - \bar{p}\ddot{\gamma}_{2} \right)\left( \bar{p} \dot{\gamma}_{1} + \bar{q}\dot{\gamma}_{2} \right) + \left( \bar{p} \ddot{\gamma}_{1} + \bar{q}\ddot{\gamma}_{2} \right)\left(\bar{q} \dot{\gamma}_{1} - \bar{p}\dot{\gamma}_{2} \right)\right] = \bar{r}_L \left( \dot{\gamma}_{1}\ddot{\gamma}_{2} - \dot{\gamma}_{2}\ddot{\gamma}_{1}\right) \to 0, \quad \mbox{as $L \to +\infty$};$$ since $\|\dot{\gamma}\|_{L,\Sigma}$ stays bounded away from zero at a horizontal point, this gives $k_{\gamma,\Sigma}^{0,s}=0$, as desired. \end{proof} \begin{remark} The sub-Riemannian geodesic curvature (both signed and unsigned) is invariant under isometries of $\mathbb{H}$ (left translations and rotations about the $x_3$-axis), and scales by the factor $\tfrac1r$ under the dilation $\delta_r$. Compare Proposition \ref{curvature-isometry-proposition} and Remark \ref{curvature-dilation-remark}. We omit the elementary proofs of these facts. \end{remark} \section{Riemannian approximation of curvatures of surfaces}\label{C3} In this section we want to study curvatures of regular surfaces $\Sigma \subset \mathbb{H}$. As in the previous sections, the idea will be to compute first the already known curvatures of a regular 2-dimensional Riemannian manifold $\Sigma \subset (\mathbb{R}^{3},g_{L})$, and then try to derive appropriate Heisenberg notions by taking the limit as $L \to +\infty$. First, we need to define the \textit{second fundamental form} $II^{L}$ of the embedding of $\Sigma$ into $(\mathbb{R}^{3}, g_{L})$: \begin{equation}\label{eq:secfun} II^{L} = \left( \begin{array}{cc} \scal{\nabla_{E_1}\nu_L}{E_1}_{L} & \scal{\nabla_{E_1}\nu_L}{E_2}_{L}\\ \scal{\nabla_{E_2}\nu_L}{E_1}_{L} & \scal{\nabla_{E_2}\nu_L}{E_2}_{L}\\ \end{array}\right). \end{equation} The explicit computation of the second fundamental form in our case can be found in \cite{CDPT}. For the sake of completeness, we recall here the complete statement. \begin{theorem}[\cite{CDPT}, Theorem 4.3] The second fundamental form $II^{L}$ of the embedding of $\Sigma$ into $(\mathbb{R}^{3}, g_{L})$ is given by \begin{equation}\label{eq:IIL} II^{L} = \left( \begin{array}{cc} \dfrac{l}{l_L}(X_{1}(\bar{p})+X_{2}(\bar{q})) & -\tfrac12 {\sqrt{L}} - \dfrac{l_L}{l}\scal{E_1}{\nabla_{\textnormal{H}}(\bar{r}_{L})}_{L}\\ -\tfrac12 {\sqrt{L}} - \dfrac{l_L}{l}\scal{E_1}{\nabla_{\textnormal{H}}(\bar{r}_{L})}_{L} & -\dfrac{l^{2}}{l_{L}^{2}}\scal{E_2}{\nabla_{\textnormal{H}} (\tfrac{r}{l})}_{L} + X_{3}^{L}(\bar{r}_{L})\\ \end{array}\right). \end{equation} \end{theorem} The Riemannian mean curvature ${\mathcal{H}}_{L}$ of $\Sigma$ is $$ {\mathcal{H}}_L := \operatorname{tr}(II^L), $$ while the Riemannian Gaussian curvature ${\mathcal{K}}_{L}$ is $$ {\mathcal{K}}_L := \overline{{\mathcal{K}}}_{L}(E_1,E_2) +\det(II^{L}). $$ Here $\overline{{\mathcal{K}}}_{L}(E_1,E_2)$ denotes the sectional curvature of $(\mathbb{R}^{3},g_L)$ in the plane generated by $E_1$ and $E_2$.
We need to spend a few words about the last definition, which rests on a classical result known as the \textit{Gauss equation}. The geometric meaning is that the second fundamental form $II^{L}$ is the right object to measure the discrepancy between the sectional curvature of the ambient manifold and that of the isometrically immersed surface. \begin{proposition} The horizontal mean curvature ${\mathcal{H}}_0$ of $\Sigma \subset \mathbb{H}$ is given by \begin{equation}\label{eq:H0} {\mathcal{H}}_0 = \lim_{L \to +\infty} {\mathcal{H}}_{L} = \mathrm{div}_{H}\left( \dfrac{\nabla_{\textnormal{H}} u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right), \end{equation} and the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ is given by \begin{equation}\label{eq:K0} \begin{aligned} {\mathcal{K}}_0 &= \lim_{L \to +\infty} {\mathcal{K}}_{L} \\ &= -\left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)^{2} - \left( \dfrac{X_2 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right) X_1 \left(\dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right) + \left( \dfrac{X_1 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right) X_2 \left(\dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right). \end{aligned} \end{equation} \end{proposition} In \eqref{eq:H0} the expression $\mathrm{div}_H$ denotes the {\it horizontal divergence} of a horizontal vector field, which is defined as follows: for a horizontal vector field $V = a\, X_1 + b \, X_2$, $$ \mathrm{div}_H(V) = X_1(a) + X_2(b). $$ \begin{proof} By definition, $$ {\mathcal{H}}_{L} = \mathrm{trace}(II^{L}) = \dfrac{l}{l_L}(X_{1}(\bar{p})+X_{2}(\bar{q})) -\dfrac{l^{2}}{l_{L}^{2}}\scal{E_2}{\nabla_{\textnormal{H}} (\tfrac{r}{l})}_{L} + X_{3}^{L}(\bar{r}_{L}). $$ We recall that $r = \tfrac{X_{3}u}{\sqrt{L}}$ and therefore in the limit as $L \to +\infty$, we obtain $$ {\mathcal{H}}_0 = \lim_{L \to +\infty} {\mathcal{H}}_L = X_1 \left( \dfrac{X_1 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right) + X_2 \left( \dfrac{X_2 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right). $$ As we have already observed in the computation of ${\mathcal{H}}_0$, the term $$ -\dfrac{l^{2}}{l_{L}^{2}}\scal{E_2}{\nabla_{\textnormal{H}} (\tfrac{r}{l})}_{L} + X_{3}^{L}(\bar{r}_{L}) $$ tends to zero as $L \to +\infty$, and therefore in analyzing $\det(II^{L})$ we can focus only on the term $$ - \left(-\dfrac{\sqrt{L}}{2} - \dfrac{l_L}{l}\scal{E_1}{\nabla_{\textnormal{H}}(\bar{r}_{L})}_{L}\right)^{2}. $$ Clearly, $$ - \left(-\dfrac{\sqrt{L}}{2} - \dfrac{l_L}{l}\scal{E_1}{\nabla_{\textnormal{H}}(\bar{r}_{L})}_{L}\right)^{2} \sim -\dfrac{L}{4} - \scal{E_1}{\nabla_{\textnormal{H}} \left(\dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)}_{L} \qquad \mbox{as $L \to +\infty$.} $$ It remains to compute the sectional curvature $\overline{{\mathcal{K}}}_{L}(E_{1},E_{2})$. By Definition \ref{sectional}, the functional property in Remark \ref{Rfunct}, and orthonormality of the basis $\{E_{1}, E_{2}\}$, we have $$ \overline{{\mathcal{K}}}_{L}(E_{1},E_{2}) = \dfrac{L}{4} - L \, \bar{r}_{L}^{2}.
$$ Hence \begin{equation*} \begin{aligned} {\mathcal{K}}_0 &= \lim_{L \to +\infty} {\mathcal{K}}_{L} \\ &= -\left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)^{2} - \left( \dfrac{X_2 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right) X_1 \left(\dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right) + \left( \dfrac{X_1 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right) X_2 \left(\dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right), \end{aligned} \end{equation*} as desired. \end{proof} \begin{remark}\label{horiz-mean-curve-remark} It is well known (see, for instance \cite{CDPT}) that the horizontal mean curvature of $\Sigma$ at a non-characteristic point $x$ coincides, up to a choice of sign, with the signed horizontal curvature of the Legendrian curve $\gamma$ in $\Sigma$ through $x$. This is the unique horizontal curve $\gamma$ defined locally near $x$, $\gamma:(-\varepsilon,\varepsilon) \to \Sigma$, such that $\gamma(0) = x$ and $\dot\gamma(0) = J(\nabla_H u/||\nabla_H u||_H) \in H_x\mathbb{H}\cap T_x\Sigma$. The signed horizontal curvature of $\gamma=(\gamma_1,\gamma_2,\gamma_3)$ is given by $$ \dfrac{\dot{\gamma}_{1} \ddot{\gamma}_2 - \dot{\gamma}_{2} \ddot{\gamma}_1}{\sqrt{(\dot{\gamma}_{1}^2 + \dot{\gamma}_{2}^2)^{3}}}. $$ Observe that the absolute value of this expression coincides with the sub-Riemannian curvature $k_\gamma^0$ introduced in Definition \ref{def:sRc}, when considered on horizontal curves. \end{remark} The horizontal mean curvature ${\mathcal{H}}_0$ and sub-Riemannian Gauss curvature ${\mathcal{K}}_0$ have been given in terms of a defining function $u$ for the surface $\Sigma$. The preceding remark shows that ${\mathcal{H}}_0$ is independent of the choice of the defining function, and the question arises whether such a result holds also for ${\mathcal{K}}_0$. This is in fact the case, as we now demonstrate. Let $\Sigma$ be a $C^2$ surface defined locally near a point $x$ as the zero set of two $C^2$ functions $u$ and $v$ with $\nabla_{\mathbb{R}^3} u\ne 0$ and $\nabla_{\mathbb{R}^3} v \ne 0$. Then there exists a function $\sigma$ so that $v=e^\sigma u$ or $v=-e^\sigma u$ in a neighborhood of $x$. Without loss of generality we assume that $v=e^\sigma u$. The expressions $$ \nu_0 = \frac{\nabla_H u}{||\nabla_H u||_H} \qquad \mbox{and} \qquad {\mathcal{P}}_0 = \frac{X_3u}{||\nabla_H u||_H} $$ which occur in the expression for ${\mathcal{K}}_0$ remain invariant under change of defining function. Indeed, $$ \frac{\nabla_H v}{||\nabla_H v||_H} = \frac{e^\sigma(\nabla_H u + u \nabla_H \sigma)}{e^{\sigma}||\nabla_H u + u \nabla_H \sigma||_H} \quad \mbox{and} \quad \frac{X_3v}{||\nabla_H v||_H} = \frac{e^\sigma(X_3u+uX_3\sigma)}{e^\sigma||\nabla_H u + u \nabla_H \sigma||}, $$ which equal $$ \frac{\nabla_H u}{||\nabla_H u||_H} \qquad \mbox{and} \qquad \frac{X_3u}{||\nabla_H u||_H} $$ respectively when evaluated at a point of $\Sigma$ (where $u=0$). Let us see how the horizontal gradient of ${\mathcal{P}}_0$ transforms under such an operation. An easy computation gives $$ \nabla_H \left( \frac{X_3v}{||\nabla_H v||_H} \right) = \nabla_H \left( \frac{X_3u}{||\nabla_H u||_H} \right) + (X_3\sigma - {\mathcal{P}}_0 \langle \nu_0,\nabla_H\sigma\rangle_H) \nu_0 $$ when evaluated on $\Sigma$. Since $$ {\mathcal{K}}_0 = -{\mathcal{P}}_0^2 - \left\langle \nabla_H {\mathcal{P}}_0,J\nu_0 \right\rangle_H $$ we conclude that the expression for ${\mathcal{K}}_0$ is independent of the choice of defining function. 
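Note that the correction term in the displayed transformation rule is a multiple of $\nu_0$, and $\langle \nu_0, J\nu_0\rangle_{\textnormal{H}}=0$; this is why it does not affect ${\mathcal{K}}_0$. The independence can also be tested symbolically. The following is a minimal Python/\texttt{sympy} sketch (not part of the original argument) which implements formula \eqref{eq:K0} with the convention $X_1=\partial_{x_1}-\tfrac{x_2}{2}\partial_{x_3}$, $X_2=\partial_{x_2}+\tfrac{x_1}{2}\partial_{x_3}$, $X_3=\partial_{x_3}$ (compare the matrix in \eqref{eq:M}); the sample defining function and the evaluation point are chosen arbitrarily for illustration.
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)

# horizontal vector fields X1, X2 and vertical derivative X3
X1 = lambda f: sp.diff(f, x1) - x2 / 2 * sp.diff(f, x3)
X2 = lambda f: sp.diff(f, x2) + x1 / 2 * sp.diff(f, x3)
X3 = lambda f: sp.diff(f, x3)

def K0(u):
    # sub-Riemannian Gaussian curvature of {u = 0}, formula (eq:K0)
    l = sp.sqrt(X1(u)**2 + X2(u)**2)        # norm of the horizontal gradient
    P = X3(u) / l
    return -P**2 - (X2(u) / l) * X1(P) + (X1(u) / l) * X2(P)

u = x3 - x1**2 - x2**2      # sample defining function (a paraboloid)
v = sp.exp(x1) * u          # a second defining function of the same surface

# a sample non-characteristic point of the surface {u = 0}
p = {x1: sp.Rational(7, 10), x2: sp.Rational(13, 10)}
p[x3] = (x1**2 + x2**2).subs(p)

print(K0(u).subs(p).evalf())   # the two printed values should agree,
print(K0(v).subs(p).evalf())   # illustrating the independence proved above
\end{verbatim}
Replacing $\mathrm{e}^{x_1}$ by any other positive smooth factor, or moving the evaluation point along the surface away from characteristic points, should not change the agreement.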
We conclude this section with a brief discussion of the local summability of the horizontal Gaussian curvature ${\mathcal{K}}_0$ with respect to the Heisenberg perimeter measure near isolated characteristic points. This observation will be used in our subsequent study of the validity of a sub-Riemannian Gauss--Bonnet theorem in $\mathbb{H}$. Without loss of generality, suppose that the origin is an isolated characteristic point of $\Sigma$. Consider a neighborhood $U$ of the origin on $\Sigma$. Due to the expression (\ref{eq:K0}), and since the Heisenberg perimeter measure $d\sigma_{\textnormal{H}}$ is given by $$d\sigma_{\textnormal{H}}=\dfrac{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}{\|\nabla_{\mathbb{R}^{3}} u\|_{\mathbb{R}^{3}}} d\mathcal{H}^{2}_{\mathbb{R}^{3}},$$ there exists a positive constant $C=C(U)>0$ such that \begin{equation}\label{eq:summ} |{\mathcal{K}}_0| d\sigma_{\textnormal{H}} \leq \dfrac{C}{\|\nabla_{\textnormal{H}} u\|_\textnormal{H}} d\mathcal{H}^{2}_{\mathbb{R}^{3}}. \end{equation} Finer results turn out to be particularly difficult to achieve. We postpone further discussion of this topic to Section \ref{questions}. \section{A Gauss--Bonnet theorem}\label{gb} The goal of this section is to prove Theorem \ref{HGB}. Before doing that, we need to recall a couple of technical results concerning, respectively the Riemannian length measure, and the Riemannian surface measure. Let us first consider the case of a curve $\gamma:[a,b] \to (\mathbb{R}^{3},g_L)$ in the Riemannian manifold $(\mathbb{R}^{3},g_L)$. We define the Riemannian length measure, $$ d\dot{\gamma}_{L} := \gamma_{\sharp}\left( \|\dot{\gamma}\|_L \, d\mathcal{L}^{1}_{\llcorner [a,b]}\right) = \|\dot{\gamma}\|_L \, dt. $$ \begin{lemma}\label{Lmeas} Let $\gamma:[a,b] \to (\mathbb{R}^{3},g_L)$ be a Euclidean $C^{2}$-smooth and regular curve. Then \begin{equation}\label{eq:curvemeas} \lim_{L \to +\infty} \dfrac{1}{\sqrt{L}}\int_{\gamma} d\dot{\gamma}_{L} = \int_{a}^{b}|\omega(\dot{\gamma})| \, dt =: \int_{\gamma} \, d\dot{\gamma}, \end{equation} \end{lemma} \begin{proof} As we already saw, $\|\dot{\gamma}\|_L = \sqrt{\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2}+L \,\omega(\dot{\gamma})^{2}}$, hence by the definition of $d\dot{\gamma}_{L}$ and the dominated convergence theorem, \begin{equation*} \lim_{L \to +\infty} \dfrac{1}{\sqrt{L}}\int_{\gamma} d\dot{\gamma}_{L} = \int_{a}^{b}\lim_{L \to +\infty} \dfrac{\|\dot{\gamma}\|_L}{\sqrt{L}}\, dt = \int_{a}^{b}\lim_{L \to +\infty} \dfrac{\sqrt{\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2}+L \,\omega(\dot{\gamma})^{2}}}{\sqrt{L}}\, dt = \int_{a}^{b} |\omega(\dot{\gamma})| \, dt \end{equation*} as desired. \end{proof} \begin{remark} It is clear that this scaled measure vanishes on fully horizontal curves. \end{remark} Let us also recall a technical result concerning the scaled limit of the Riemannian surface measure. \begin{proposition}[\cite{CDPT}, Chapter 5.1.]\label{LSmeas} Let $\Sigma \subset (\mathbb{R}^{3},g_L)$ be a Euclidean $C^{2}$-smooth surface. Let $d\sigma_{L}$ denote the surface measure on $\Sigma$ with respect to the Riemannian metric $g_L$. Let $M$ be the $2 \times 3$ matrix \begin{equation}\label{eq:M} M:= \left ( \begin{array}{ccc} 1 & 0 & -\tfrac{x_2}{2} \\ 0 & 1 & \tfrac{x_1}{2} \\ \end{array} \right ). 
\end{equation} If $\Sigma = \{u=0\}$ with $u \in C^{2}(\mathbb{R}^{3})$, then \begin{equation}\label{eq:SMeas1} \lim_{L \to \infty} \dfrac{1}{\sqrt{L}} \int_{\Sigma} d\sigma_{L} = \int_{\Sigma}\dfrac{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}{\|\nabla_{\mathbb{R}^{3}}u\|_{\mathbb{R}^{3}}} \, d\mathcal{H}^{2}_{\mathbb{R}^{3}} = \int_{\Sigma} d\mathcal{H}^{3}_{cc}, \end{equation} where $d\mathcal{H}^{2}_{\mathbb{R}^{3}}$ denotes the Euclidean 2-Hausdorff measure and $d\mathcal{H}^{3}_{cc}$ the 3-dimensional Hausdorff measure with respect to the $cc$ metric $d_{cc}$. If $\Sigma = f(D)$ with $$ f = f({\mathbf{u}},{\mathbf{v}}):D\subset \mathbb{R}^{2} \to \mathbb{R}^{3},$$ and Euclidean normal vector to $\Sigma$ given by $\vec{n}({\mathbf{u}},{\mathbf{v}})= (f_{{\mathbf{u}}} \times f_{{\mathbf{v}}})({\mathbf{u}},{\mathbf{v}})$, then \begin{equation}\label{eq:SMeas2} \lim_{L \to \infty} \dfrac{1}{\sqrt{L}} \int_{\Sigma} d\sigma_{L} = \int_{D} \|M\vec{n}\|_{\mathbb{R}^{2}} \, d{\mathbf{u}} d{\mathbf{v}}. \end{equation} \end{proposition} The classical Gauss--Bonnet theorem for a regular surface $\Sigma \subset (\mathbb{R}^{3},g_L)$ with boundary components given by Euclidean $C^{2}$-smooth and regular curves $\gamma_{i}$ (see for instance \cite[Chapter 9]{L} or \cite{dC}) states that \begin{equation}\label{eq:GB} \int_{\Sigma} {\mathcal{K}}_{L}\, d\sigma_{L} + \sum_{i=1}^{n}\int_{\gamma_i} k_{\gamma_i, \Sigma}^{L,s} \, d\dot{\gamma_i}_{L} = 2 \pi \chi(\Sigma), \end{equation} where ${\mathcal{K}}_{L}$ is the Gaussian curvature of $\Sigma$, $k_{\gamma_i, \Sigma}^{L,s}$ is the signed geodesic curvature of the $i^{th}$ boundary component $\gamma_i$, $d\dot{\gamma_i}_{L} = \|\dot{\gamma}_i\|_{L} d\theta$ and $\chi(\Sigma)$ is the Euler characteristic of $\Sigma$. It is clear that for a regular surface $\Sigma \subset (\mathbb{R}^{3}, g_{L})$ without boundary, (\ref{eq:GB}) simplifies to $$\int_{\Sigma}{\mathcal{K}}_{L} \,d\sigma_{L} = 2\pi \chi(\Sigma).$$ Recalling the considerations made on the Riemannian surface measure $d\sigma_{L}$, it is natural to divide (\ref{eq:GB}) by a factor $\sqrt{L}$, \begin{equation}\label{eq:GBScal} \int_{\Sigma} {\mathcal{K}}_{L}\, \dfrac{d\sigma_{L}}{\sqrt{L}} + \sum_{i=1}^{n}\int_{\gamma_i} k_{\gamma_i, \Sigma}^{L,s} \, \dfrac{d\dot{\gamma_i}_{L}}{\sqrt{L}} = \dfrac{2 \pi \chi(\Sigma)}{\sqrt{L}} , \end{equation} and then hope to derive a Gauss--Bonnet Theorem as a limit as $L \to +\infty$. The most difficult task is to take care of the possible presence of characteristic points on $\Sigma$. In order to deal with them, we provide the following general definition: \begin{definition}[(R)-property] Let $S \subset \mathbb{R}^{2}$ be any set in $\mathbb{R}^{2}$. We say that the set $S$ satisfies the removability (R)-property, if for every $\epsilon >0$, there exist a number $n = n(\epsilon) \in \mathbb{N}$ and smooth simple closed curves $\gamma_{1}, \ldots, \gamma_n: I\subset \mathbb{R} \to \mathbb{R}^{2}$, such that $$S \subset \bigcup_{i=1}^{n}\mathrm{int}(\gamma_i), \quad \mbox{and} \quad \sum_{i=1}^{n} \mathrm{length}(\gamma_i) \leq \epsilon,$$ where $\mathrm{length}_{E}(\gamma)$ denotes the usual Euclidean length of a curve in $\mathbb{R}^{2}$. \end{definition} It is clear that if the set $S$ consists of finitely many isolated points, then it satisfies the (R)-property. A more complicated example is provided by the self-similar Cantor set $C^{(2)}(\lambda)$ in $\mathbb{R}^{2}$ with scaling ratio $\lambda< 1/4$. 
Here $C^{(2)}(\lambda) = C^{(1)}(\lambda) \times C^{(1)}(\lambda)$, where $C^{(1)}(\lambda)$ denotes the unique nonempty compact subset of $\mathbb{R}$ which is fully invariant under the action of the two contractive similarities $f_1(x) = \lambda x$ and $f_2(x) = \lambda x + 1 - \lambda$. \begin{proposition} When $\lambda < \tfrac14$, the self-similar Cantor set $C^{(2)}(\lambda)$ satisfies the (R)-property. \end{proposition} \begin{proof} At the $n^{th}$ stage of the iterative construction of $C^{(2)}(\lambda)$ we have $4^{n}$ pieces. We can surround every such piece with a smooth closed curve $\gamma_{n}^{k}$. For sake of simplicity let us take a Euclidean circle whose radius $r_{n}^{k}$ is comparable to $\lambda^n$. Therefore \begin{equation*} \sum_{k=1}^{4^{n}} \mathrm{length}_{E}(\gamma_{n}^{k}) \lesssim 4^{n} \lambda^{n} \to 0, \quad \mbox{as $n \to +\infty$}, \end{equation*} because $\lambda < 1/4$. \end{proof} \begin{lemma}\label{Rprop} If $S \subset \mathbb{R}^{2}$ is compact and such that $\mathcal{H}_{E}^{1}(S)=0$ then $S$ satisfies the (R)-property. \end{lemma} \begin{proof} First, since $S$ is compact, we can take a finite covering of $S$ made of Euclidean balls $\{B(g_i, r_i)\}_{i=1,\ldots, m}$, with $g_{i} \in S$ for every $i= 1 ,\ldots,m$, and such that $\sum_{i=1}^{m}r_{i} \leq \epsilon$. Enlarging these balls by a factor which will not depend on $\epsilon$, we can assume that $S$ lies entirely in the interior of the union of these balls. Now we define the surrounding curves, as the boundaries of the unions of these balls. By construction, the Euclidean length of such curves is comparable to $\epsilon$. \end{proof} \begin{remark} The converse of Lemma \ref{Rprop} also holds true. In other words, the validity of the (R)-property for a compact set $S\subset \mathbb{R}^{2}$ is equivalent to $\mathcal{H}_{E}^{1}(S)=0$. \end{remark} Before proving Theorem \ref{HGB}, let us make another useful remark. Let us denote by $\Pi$ the projection $\Pi : \mathbb{H} \to \mathbb{R}^{2}$ onto the first two components. Consider a surface $\Sigma \subset \mathbb{H}$ as in the hypothesis of Theorem \ref{HGB}. Then $$ \mathcal{H}_{E}^{1}(C(\Sigma)) = \mathcal{H}_{E}^{1}(\Pi(C(\Sigma))). $$ In particular, if $\mathcal{H}_{E}^{1}(C(\Sigma))=0$, then its projection $\Pi(C(\Sigma))$ satisfies the (R)-property. \begin{proof}[Proof of Theorem \ref{HGB}] The proof of Theorem \ref{HGB} will be a combination of different steps.\\ \textbf{Step 1}: First we consider the case of a regular surface without characteristic points. Precisely, let $\Sigma \subset \mathbb{H}$ be a regular surface without characteristic points, and with finitely many boundary components $(\partial \Sigma)_i$, $i \in \{1, \ldots, n\}$, given by Euclidean $C^{2}$-smooth closed curves $\gamma_i : [0, 2\pi] \rightarrow (\partial \Sigma)_i$. We may assume that none of the boundary components are fully horizontal. Let ${\mathcal{K}}_0$ be the sub-Riemannian Gaussian curvature of $\Sigma$, and let $k_{\gamma_i,\Sigma}^{0,s}$ be the sub-Riemannian signed geodesic curvature of $\gamma_i$, for every $i \in \{1, \ldots, n\}$. 
Then $$\int_{\Sigma}{\mathcal{K}}_0 \, d\mathcal{H}^{3}_{cc} +\sum_{i=1}^{n} \int_{\gamma_i} k_{\gamma_i,\Sigma}^{0,s} \, d\dot{\gamma}_i = 0.$$ \begin{proof}[Proof of Step 1] The results will follow passing to the limit as $L \to +\infty$ in \begin{equation*} \int_{\Sigma} {\mathcal{K}}_{L}\, \dfrac{d\sigma_{L}}{\sqrt{L}} + \sum_{i=1}^{n}\int_{\gamma_i} k_{\gamma_i, \Sigma}^{L,s} \, \dfrac{d\dot{\gamma_i}_{L}}{\sqrt{L}} = \dfrac{2 \pi \chi(\Sigma)}{\sqrt{L}}. \end{equation*} To do this, we need to apply Lebesgue's dominated convergence theorem. Let us start with the integral of ${\mathcal{K}}_L$. We take a partition of unity $\{\varphi_{i}\}$, $i=1,\ldots, m$. Calling $\Sigma_{i} := \textrm{supp}(\varphi_{i}) \cap \Sigma$, for every $i=1,\ldots,m$, we have $$\int_{\Sigma} {\mathcal{K}}_{L}\, \dfrac{d\sigma_{L}}{\sqrt{L}}= \sum_{i=1}^{m}\int_{\Sigma_i}{\mathcal{K}}_{L} \varphi_{i}\, \dfrac{d\sigma_{L}}{\sqrt{L}}.$$ Let us choose a parametrization of every $\Sigma_{i}$, $\psi_{i}:D_{i} \to \Sigma_{i}$, then, for every $i=1, \ldots,m$, it holds that $$\int_{\Sigma_i}{\mathcal{K}}_{L}\, \varphi_{i}\, \dfrac{d\sigma_{L}}{\sqrt{L}} = \int_{D_i}{\mathcal{K}}_{L} \, \varphi_{i} \, |M_{L} \vec{n}|\, dv \, dw, $$ where \begin{equation*} M_{L}:= \left ( \begin{array}{ccc} 1 & 0 & -\tfrac{x_2}{2} \\ 0 & 1 & \tfrac{x_1}{2} \\ 0 & 0 & \tfrac{1}{\sqrt{L}}\\ \end{array} \right), \end{equation*} and $\vec{n}$ denotes the Euclidean normal vector to $\Sigma$.\\ It is now sufficient to check whether there exist two positive constants $M_1$ and $M_2$, independent on $L$, such that $$|{\mathcal{K}}_{L}| \leq M_1, \quad \textrm{and} \quad |M_{L} \vec{n}|\leq M_2.$$ The second estimate is proved in Proposition \ref{LSmeas}, see \cite{CDPT}. For the first one, we recall that the explicit expression of ${\mathcal{K}}_L$ is given by \begin{equation} \begin{aligned} {\mathcal{K}}_L & = -L \bar{r}_{L}^{2} - \left( \dfrac{l}{l_L}\right)^{3} (X_1 \bar{p} + X_2 \bar{q}) \scal{E_2}{\nabla_{\textnormal{H}} \left( \dfrac{r}{l}\right)}_{L} + \dfrac{l}{l_L}(X_1 \bar{p} + X_2 \bar{q}) X_{3}^{L}\bar{r}_L\\ & - \left( \dfrac{l}{l_L}\right)^{4} \left( \scal{E_1}{\nabla_{\textnormal{H}} \left( \dfrac{r}{l}\right)}_{L} \right)^{2} - \sqrt{L}\left( \dfrac{l}{l_L}\right)^{2}\scal{E_1}{\nabla_{\textnormal{H}} \left( \dfrac{r}{l}\right)}_{L}. \end{aligned} \end{equation} Since $l_{L} = \|\nabla_{L}u \|_{L} = \sqrt{(X_1 u)^{2}+(X_2 u)^{2}+ \left( \dfrac{X_3 u}{\sqrt{L}} \right)^{2}} \geq \|\nabla_{\textnormal{H}} u\|_{\textnormal{H}},$ we have that $$ \dfrac{l}{l_L} = \dfrac{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}{\| \nabla_{L}u\|_{L}} \leq 1. $$ Since $L>1$, it also holds that $$l_L \leq \sqrt{(X_1 u)^{2}+(X_2 u)^{2}+(X_3 u)^{2}}.$$ Moreover, since $u \in C^{2}(\mathbb{R}^{3})$ and $\Sigma$ is a compact surface without characteristic points, there exists a positive constant $C_1 >0$ such that \begin{equation*} |X_1 \bar{p} + X_2 \bar{q}| = \left| X_1 \left( \dfrac{X_{1}u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right) + X_2 \left( \dfrac{X_{2}u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)\right| \leq C_1. \end{equation*} Therefore we have the following list of estimates: \begin{equation*} L \bar{r}_{L}^{2} = L \dfrac{(X_{3}^{L}u)^{2}}{l_{L}^{2}} \leq \left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)^{2} \leq C_2. 
\end{equation*} \begin{equation*} \begin{aligned} \left| \scal{E_2}{\nabla_{\textnormal{H}} \left( \dfrac{r}{l}\right)}_{L} \right| &= \left| \bar{r}_L \bar{p} X_1 \left(\dfrac{r}{l}\right) +\bar{r}_L \bar{q} X_2 \left(\dfrac{r}{l}\right) \right| \\ &= \dfrac{|X_3 u|}{L} \left| \dfrac{X_1 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} X_1 \left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right) + \dfrac{X_2 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} X_2 \left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right) \right|\\ & \leq |X_3 u| \, \left| \dfrac{X_1 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} X_1 \left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right) + \dfrac{X_2 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} X_2 \left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right) \right| \leq C_3. \end{aligned}\end{equation*} \begin{equation*} \begin{aligned} \sqrt{L} \left| \scal{E_1}{\nabla_{\textnormal{H}} \left( \dfrac{r}{l}\right)}_{L} \right| & = \sqrt{L} \left| \bar{q} X_{1}\left( \dfrac{X_3 u}{\sqrt{L}\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right) -\bar{p} X_{2}\left( \dfrac{X_3 u}{\sqrt{L}\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \right)\right|\\ &\leq \left| \dfrac{X_2 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} X_{1}\left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)\right| +\left| \dfrac{X_1 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} X_{2}\left( \dfrac{X_3 u}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)\right| \leq C_4. \end{aligned}\end{equation*} Similarly, $$|X_{3}^{L}\bar{r}_{L}| \leq C_5.$$ Altogether, $$|{\mathcal{K}}_{L}|\leq C_2 + C_1 \cdot C_3 + C_1 \cdot C_5 + C_{4}^{2} + C_4 =: M_1,$$ as desired. It remains to see what happens for the boundary integrals. Without loss of generality, let us assume we are given a surface $\Sigma$ with only one boundary component, given by a smooth curve $\gamma$. 
We need to estimate $$\left| \dfrac{\scal{D_{t}^{\Sigma}\dot{\gamma}}{J_{L}(\dot{\gamma})}_{L}}{\sqrt{L} \|\dot{\gamma}\|_{L}^{2}}\right|,$$ where \begin{equation*} \begin{aligned} \scal{D_{t}^{\Sigma}\dot{\gamma}}{J_{L}(\dot{\gamma})}_{L} & = \bar{r}_L (\dot{\gamma}_1 \ddot{\gamma}_2 - \dot{\gamma}_2 \ddot{\gamma}_1) \\ &+ \sqrt{L} \dfrac{l}{l_L} \left[ \omega(\dot{\gamma}) \left( \bar{q}\ddot{\gamma}_1 - \bar{p}\ddot{\gamma}_2\right) - \omega(\ddot{\gamma}) \left( \bar{q}\dot{\gamma}_1 - \bar{p}\dot{\gamma}_2\right) \right]\\ &- L \omega(\dot{\gamma}) \bar{r}_L (\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2}) + L^{3/2} \dfrac{l}{l_L} \omega(\dot{\gamma})^{2}(\bar{p} \dot{\gamma}_1 + \bar{q}\dot{\gamma}_2), \end{aligned}\end{equation*} and $$\|\dot{\gamma}\|_{L}^{2} = \dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + L \omega(\dot{\gamma})^{2}.$$ Note that there exists a positive constant $C_0 >0$ such that $$\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + L \omega(\dot{\gamma})^{2} \geq \dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + \omega(\dot{\gamma})^{2} \geq C_0 >0.$$ Now, we have \begin{equation*} \begin{aligned} \dfrac{|\bar{r}_L (\dot{\gamma}_1 \ddot{\gamma}_2 - \dot{\gamma}_2 \ddot{\gamma}_1)|}{\sqrt{L}(\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + L \omega(\dot{\gamma})^{2})} & \leq \dfrac{|\dot{\gamma}_1 \ddot{\gamma}_2 - \dot{\gamma}_2 \ddot{\gamma}_1||X_3 u|}{L \|\nabla_{\textnormal{H}} u\|_{\textnormal{H}} (\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + L \omega(\dot{\gamma})^{2})}\\ &\leq \dfrac{|X_3 u|}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}} \, \dfrac{|\dot{\gamma}_1 \ddot{\gamma}_2 - \dot{\gamma}_2 \ddot{\gamma}_1|}{\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + \omega(\dot{\gamma})^{2}}\leq C_6. \end{aligned}\end{equation*} \begin{equation*} \sqrt{L} \dfrac{l}{l_L} \dfrac{|\left[ \omega(\dot{\gamma}) \left( \bar{q}\ddot{\gamma}_1 - \bar{p}\ddot{\gamma}_2\right) - \omega(\ddot{\gamma}) \left( \bar{q}\dot{\gamma}_1 - \bar{p}\dot{\gamma}_2\right) \right]|}{\sqrt{L}(\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + L \omega(\dot{\gamma})^{2})} \leq \dfrac{|\omega(\dot{\gamma})| |\bar{q} \ddot{\gamma}_1 - \bar{p} \ddot{\gamma}_2| + |\omega(\ddot{\gamma})| |\bar{q}\dot{\gamma}_1 - \bar{p}\dot{\gamma}_2 |}{\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + \omega(\dot{\gamma})^{2}}\leq C_7. \end{equation*} \begin{equation*} \dfrac{L |\omega(\dot{\gamma})| |\bar{r}_L| (\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2})}{\sqrt{L}(\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + L \omega(\dot{\gamma})^{2})} \leq \dfrac{|\omega(\dot{\gamma})| |X_3 u| (\dot{\gamma}_{1}^{2}+\dot{\gamma}_{2}^{2})}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}\, (\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + \omega(\dot{\gamma})^{2})}\leq C_8. \end{equation*} \begin{equation*} \dfrac{l}{l_L} \, \dfrac{L^{3/2} \omega(\dot{\gamma})^{2} |\bar{p}\dot{\gamma}_1 + \bar{q}\dot{\gamma}_2|}{\sqrt{L}(\dot{\gamma}_{1}^{2}+ \dot{\gamma}_{2}^{2} + L \omega(\dot{\gamma})^{2})} \leq \dfrac{L^{3/2} \omega(\dot{\gamma})^{2}}{L^{3/2} \omega(\dot{\gamma})^{2}} |\bar{p}\dot{\gamma}_1 + \bar{q}\dot{\gamma}_2| = |\bar{p}\dot{\gamma}_1 + \bar{q}\dot{\gamma}_2|\leq C_9. \end{equation*} The behavior of the measure has already been treated in Lemma \ref{Lmeas}. \end{proof} \textbf{Step 2}: Due to Lemma \ref{Rprop}, we can surround the projection of the characteristic set $\Pi(C(\Sigma))$ with smooth simple closed curves $\{\beta_{j}\}_{j=1,\ldots,n(\epsilon)}$ such that \begin{equation}\label{eq:Rproof} \sum_{j=1}^{n(\epsilon)}\mathrm{length}_{E}(\beta_{j}) \leq \epsilon.
\end{equation} We can now work with a new surface $\Sigma_{\epsilon}$ which has no characteristic points, and whose boundary components are given by the curves $\gamma_{i}$ together with the lifts to $\Sigma$ of the curves $\beta_{j}$. Step 1 tells us that for every $\epsilon >0$, \begin{equation*} \int_{\Sigma_{\epsilon}}{\mathcal{K}}_0 \, d\mathcal{H}_{cc}^{3} + \sum_{i=1}^{n}\int_{\gamma_{i}}k_{\gamma_{i}, \Sigma_{\epsilon}}^{0,s} \, d\dot{\gamma}_{i} = -\sum_{j=1}^{n(\epsilon)}\int_{\beta_{j}}k_{\beta_{j}, \Sigma_{\epsilon}}^{0,s} \, d\dot{\beta}_{j} , \end{equation*} which, combined with (\ref{eq:Rproof}) and with the fact that $|k_{\beta_{j}, \Sigma_{\epsilon}}^{0,s}| \, d\dot{\beta}_{j} = |\bar{p}\,(\dot{\beta}_{j})_{1}+\bar{q}\,(\dot{\beta}_{j})_{2}| \, dt$ is bounded by the Euclidean length element of $\beta_{j}$, implies that for every $\epsilon >0$, \begin{equation*} \left| \int_{\Sigma_{\epsilon}} {\mathcal{K}}_0 \, d\mathcal{H}_{cc}^{3}+ \sum_{i=1}^{n}\int_{\gamma_{i}}k_{\gamma_{i}, \Sigma_{\epsilon}}^{0,s} \, d\dot{\gamma}_{i}\right| = \left| \sum_{j=1}^{n(\epsilon)}\int_{\beta_{j}}k_{\beta_{j}, \Sigma_{\epsilon}}^{0,s} \, d\dot{\beta}_{j} \right| \leq \epsilon, \end{equation*} and this completes the proof. \end{proof} \begin{corollary} Let $\Sigma\subset \mathbb{H}$ be a regular surface without boundary, or with boundary components given by Euclidean $C^{2}$-smooth horizontal curves. Assume that the characteristic set $C(\Sigma)$ satisfies $\mathcal{H}^{1}(C(\Sigma))=0$ and that $\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}^{-1}$ is locally summable with respect to the Euclidean 2-dimensional Hausdorff measure, near the characteristic set $C(\Sigma)$. Then the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ cannot be everywhere positive or everywhere negative. In particular, $\Sigma$ cannot have constant non-zero sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$. \end{corollary} \begin{proof} If $\Sigma$ has no boundary, then by Theorem \ref{HGB} we have $$\int_{\Sigma}{\mathcal{K}}_0\, d\mathcal{H}_{cc}^{3}=0,$$ and therefore ${\mathcal{K}}_0$ cannot have a sign.\\ The same holds for boundary components given by horizontal curves because the sub-Riemannian signed curvature $k_{\gamma,\Sigma}^{0,s}$ of a horizontal curve $\gamma$ is $0$. \end{proof} \begin{remark} At the moment we do not know any example of a regular surface with sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ constantly equal to zero. On the other hand, if we remove the requirement of $\Sigma$ being compact, all vertically ruled (smooth) surfaces have ${\mathcal{K}}_0=0$ at every point. \end{remark} The following examples show that the assumption made in Theorem \ref{HGB} about the 1-dimensional Euclidean Hausdorff measure of the characteristic set $C(\Sigma)$ is sharp. \begin{example} Let $\Sigma = \{ (x_1,x_2,x_3)\in \mathbb{H}: u(x_1,x_2,x_3)=0 \}$, with \begin{equation} u(x_1,x_2,x_3)= \left \{ \begin{array}{rl} x_3 - \dfrac{x_1 x_2}{2} + x_2\, \mathrm{exp}(-(x_1+1)^{-2}), & x_1 < -1, x_2,x_3 \in \mathbb{R}, \\ x_3 - \dfrac{x_1 x_2}{2}, & x_1 \in [-1, 1], x_2,x_3 \in \mathbb{R}, \\ x_3 - \dfrac{x_1 x_2}{2} + x_2\, \mathrm{exp}(-(x_1-1)^{-2}), & x_1 > 1, x_2,x_3 \in \mathbb{R} . \end{array}\right. \end{equation} We have that $$C(\Sigma) = \{ (x_1,0,0): x_1\in [-1,1]\}.$$ The idea now is to consider the projection of $C(\Sigma)$ and to surround it with a curve in $\mathbb{R}^{2}$. Then, we will lift it to the surface $\Sigma$, exploiting that $\Sigma$ is given by a graph.
For $\epsilon >0$, define $$\gamma(\theta) := \bigcup_{i=1}^{5}\gamma_{i}(\theta),\quad \mbox{$\theta \in [0,2\pi]$},$$ where $$\gamma_{1}(\theta):= \left( 1+ \epsilon \cos\left(\dfrac{\pi}{2 \tan(\epsilon)} \theta\right), \epsilon \sin\left(\dfrac{\pi}{2 \tan(\epsilon)} \theta\right)\right), \quad \mbox{$\theta \in [0,\tan(\epsilon))$},$$ $$\gamma_{2}(\theta):= \left(-\dfrac{2}{\pi-2 \tan(\epsilon)}\theta + \dfrac{\pi}{\pi - 2\tan(\epsilon)}, \epsilon \right), \quad \mbox{$\theta \in [\tan(\epsilon), \pi - \tan(\epsilon))$},$$ $$\gamma_{3}(\theta):= \left( -1+ \epsilon \cos\left(\alpha(\theta)\right), \epsilon \sin\left(\alpha(\theta)\right)\right), \quad \mbox{$\theta \in [\pi - \tan(\epsilon),\pi+\tan(\epsilon))$},$$ $$\gamma_{4}(\theta):= \left(t(\theta), -\epsilon \right), \quad \mbox{$\theta \in [\pi+\tan(\epsilon), 2\pi - \tan(\epsilon))$},$$ $$\gamma_{5}(\theta):= \left( 1+ \epsilon \cos\left(\beta(\theta)\right), \epsilon \sin\left(\beta(\theta)\right)\right), \quad \mbox{$\theta \in [2\pi - \tan(\epsilon),2\pi)$},$$ with $$\alpha(\theta):= \dfrac{\pi}{2 \tan(\epsilon)} \theta + \pi - \dfrac{\pi^{2}}{2\tan(\epsilon)},$$ $$t(\theta):= \dfrac{2}{\pi - 2 \tan(\epsilon)}\theta - \dfrac{3 \pi}{\pi - 2 \tan(\epsilon)},$$ $$\beta(\theta):= \dfrac{\pi}{2\tan(\epsilon)}\theta + 2\pi \left( 1 - \dfrac{\pi}{2 \tan(\epsilon)}\right).$$ Now we need to control the integral of the signed curvature. It will be made of five pieces. Three of them (namely those for $i=1,3,5$) behave exactly as in the case of isolated characteristic points, because the velocity of those parts goes to $0$ as $\epsilon \to 0$. It remains to check the other two integrals. We can re-parametrize those two parts as follows: $$\gamma_{2}(s) = ( -s, \epsilon), \quad \mbox{and} \quad \gamma_{4}(s)=(s,-\epsilon), \quad s \in [-1,1].$$ Because of the results concerning the signed curvature, it does not matter what happens to the third component. In this situation, we get that $$\dot{\gamma}_{2}(s) = (-1,0), \quad \mbox{and} \quad \dot{\gamma}_{4}(s)=(1,0), \quad s \in [-1,1],$$ and $$\bar{p}(x_1,x_2,x_3) = -\dfrac{x_2}{|x_2|}, \quad \quad \bar{q}(x_1,x_2,x_3) = 0.$$ Therefore $\bar{p}|_{\gamma_{2}} = -1$ and $\bar{p}|_{\gamma_{4}} = 1$. Then $$\int_{-1}^{1} \bar{p}|_{\gamma_2} (\dot{\gamma}_2)_1 \, ds + \int_{-1}^{1} \bar{p}|_{\gamma_4} (\dot{\gamma}_4)_1 \, ds = 4.$$ \end{example} \begin{example} Let $\Sigma = \{(x_1,x_2,x_3)\in \mathbb{H}: x_3 = \tfrac{x_1 \, x_2}{2}\}$. The projection of the characteristic set $\Pi(C(\Sigma))$ is the 1-dimensional line $\{x_2 =0\}$. Consider the curve $$\gamma(t) = \left( \cos(t), \sin(t), \dfrac{\sin(2t)}{4}\right), \quad t\in [0,2\pi],$$ which lies on the surface $\Sigma$ and is the boundary of a new surface $\tilde{\Sigma}$, which is now bounded. Simple computations show that in this case $${\mathcal{K}}_0 = 0, \quad k_{\gamma, \tilde{\Sigma}}^{0,s} = \dfrac{1}{|\sin(t)|}, \quad \omega(\dot{\gamma})= \dfrac{\cos(2t)}{2} - \dfrac{1}{2}.$$ Therefore, $$\int_{\tilde{\Sigma}}{\mathcal{K}}_0 \, d\mathcal{H}_{cc}^{3} + \int_{\gamma}k_{\gamma, \tilde{\Sigma}}^{0,s} \, d\dot{\gamma} = 4,$$ in contrast with the statement of Theorem \ref{HGB}. \end{example} We end this section with an explicit example.
\begin{example}[Kor\'{a}nyi sphere] Consider the Kor\'{a}nyi sphere $\mathbb{S}_{\mathbb{H}}$, $$ \mathbb{S}_{\mathbb{H}} := \{ (x_1,x_2,x_3) \in \mathbb{H} : (x_{1}^2 + x_{2}^2)^2 + 16 x_{3}^2 -1 =0 \}.$$ A parametrization of $\mathbb{S}_{\mathbb{H}}$ is $$f(\varphi,\theta) := \left( \sqrt{\cos(\varphi)} \cos(\theta), \sqrt{\cos(\varphi)} \sin(\theta) , \dfrac{\sin(\varphi)}{4}\right), \quad \mbox{$\varphi \in \left(-\dfrac{\pi}{2},\dfrac{\pi}{2}\right)$, $\theta \in [0,2\pi)$}.$$ In particular, recalling the definition (\ref{eq:M}) of $M$ and after some computations, we get that $$\| M (f_{\varphi} \times f_{\theta})\|_{\mathbb{R}^{2}} = \dfrac{\sqrt{\cos(\varphi)}}{4}.$$ A direct computation shows that the sub-Riemannian Gaussian curvature of $\mathbb{S}_{\mathbb{H}}$ is given by \begin{equation}\label{eq:K0Kor} {\mathcal{K}}_0 = -\dfrac{2}{x_{1}^2 + x_{2}^2} + 6 (x_{1}^2 + x_{2}^2) = -\dfrac{2}{\cos(\varphi)} + 6 \cos(\varphi), \end{equation} which is locally summable around the isolated characteristic points with respect to the Heisenberg perimeter measure. Thus, by a special instance of Theorem \ref{HGB}, \begin{equation}\label{simplified-GM} \int_{\mathbb{S}_{\mathbb{H}}}{\mathcal{K}}_0 \, d\mathcal{H}^{3}_{cc} =0. \end{equation} Equation \eqref{simplified-GM} can also be verified directly using \eqref{eq:K0Kor}. \end{example} Further examples can be found in the appendix. \section{Application: a simplified Steiner's formula}\label{examples} The main application of the Gauss--Bonnet Theorem concerns a simplification of the Steiner formula recently proved in \cite{BFFVW} for {\it cc} neighborhoods of surfaces without characteristic points. Let us first recall some notation and background from \cite{BFFVW}. Let $\Omega \subset \mathbb{H}$ be an open, bounded and regular domain with smooth boundary $\partial \Omega$. Let $\delta_{cc}$ be the signed $cc$-distance function from $\partial \Omega$. The $\epsilon$-neighborhood $\Omega_{\epsilon}$ of $\Omega$ with respect to the $cc$-metric is given by \begin{equation}\label{eq:neicha} \Omega_{\epsilon}:= \Omega \cup \left\{ g \in \mathbb{H} : 0 \leq \delta_{cc}(g) < \epsilon\right\}. \end{equation} We define the \textit{iterated divergences} of $\delta_{cc}$ as follows: \begin{equation*} \operatorname{div_{\textnormal{H}}}^{(i)}(\delta_{cc}) =\left\{\begin{array}{l} 1, \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \mbox{$i=0$} \\ \operatorname{div_{\textnormal{H}}}(\operatorname{div_{\textnormal{H}}}^{(i-1)}\delta_{cc} \cdotp \nabla_{\textnormal{H}} \delta_{cc}), \quad \quad \mbox{$i \geq 1$} \end{array}\right. \end{equation*} Finally we put \begin{equation}\begin{split}\label{ABCDE} &A:= \Delta_{\textnormal{H}} \delta_{cc}, \quad \quad B:=- (X_{3}\delta_{cc})^{2}, \quad \quad C:=(X_{1}\delta_{cc})(X_{32}\delta_{cc})-(X_{2}\delta_{cc})(X_{31}\delta_{cc}), \\ &D:= X_{33}\delta_{cc},\quad \quad E:=(X_{31}\delta_{cc})^{2}+(X_{32}\delta_{cc})^{2}. \end{split}\end{equation} In order to make the paper self-contained, we recall a technical result from \cite{BFFVW}. Define the operator $g$ acting on smooth real-valued functions as $$ g(\alpha):= \langle \nabla_{\textnormal{H}} \alpha, \nabla_{\textnormal{H}} \delta_{cc} \rangle. $$ For a real-valued function $h$ we have \begin{equation}\label{div-of-h-formula} \operatorname{div_{\textnormal{H}}}(h\,\nabla_H\delta_{cc}) = h \, \Delta_{\textnormal{H}} \delta_{cc} + \scal{\nabla_{\textnormal{H}} h}{\nabla_{\textnormal{H}} \delta_{cc}} = h \, A + g(h).
\end{equation} Note that $g$ is linear and satisfies the Leibniz rule, i.e. \begin{align*} &g(\alpha + \beta) = g(\alpha)+g(\beta), \\ &g(\alpha \, \beta) = g(\alpha) \beta + \alpha g(\beta). \end{align*} The following lemma holds (see \cite{BFFVW}, Lemma 4.2). The proof involves a number of lengthy calculations using higher derivatives of the {\it cc} distance function. \begin{lemma}\label{itdiv} The following relations hold: \begin{align} g(1)&= 0 \label{eq:g1},\\ g(A)&= B+2C -A^{2}\label{eq:gA},\\ g(B)&= 0 \label{eq:gB},\\ g(C)&= D - AC \label{eq:gC},\\ g(D)&= -E \label{eq:gD},\\ g(E)&= -2AE+2CD \label{eq:gE}. \end{align} \end{lemma} We now state the main result of this section, which gives a simplification to the main theorem of \cite{BFFVW} (a Steiner's formula for the {\it cc} distance function). \begin{theorem}\label{Torus} Let $\Omega \subset \mathbb{H}$ be an open, bounded and regular domain whose boundary $\partial \Omega$ is a Euclidean $C^{2}$-smooth compact and oriented surface with no characteristic points. For sufficiently small $\epsilon >0$ define the $\epsilon$-neighborhood $\Omega_{\epsilon}$ of $\Omega$ with respect to the $cc$-metric as in (\ref{eq:neicha}). Then \begin{equation*} \begin{aligned} \mathcal{L}^{3}(\Omega_{\epsilon})&= \mathcal{L}^{3}(\Omega) + \epsilon \mathcal{H}_{cc}^{3}(\partial \Omega) + \dfrac{\epsilon^{2}}{2}\int_{\partial \Omega} A \, d\mathcal{H}_{cc}^{3} + \dfrac{\epsilon^{3}}{3!}\int_{\partial \Omega}C\, d\mathcal{H}_{cc}^{3} +\\ &\sum_{j=1}^{+\infty}\left( \int_{\partial \Omega} (B^{j-1}D) \, d\mathcal{H}_{cc}^{3}\right) \dfrac{\epsilon^{2j+2}}{(2j+2)!} + \sum_{j=1}^{+\infty}\left( \int_{\partial \Omega} (B^{j-1}(AD-E)) \, d\mathcal{H}_{cc}^{3}\right) \dfrac{\epsilon^{2j+3}}{(2j+3)!}, \end{aligned} \end{equation*} where $A$, $B$, $C$, $D$ and $E$ are given as in \eqref{ABCDE}. \end{theorem} \begin{proof} Let us write $\partial_{f} \Omega_{\epsilon} := \{ g\in \mathbb{H}: \delta_{cc}(g)=\epsilon \}$. The evolution of the non-characteristic set $\partial \Omega$ can be explicitly described if we know the defining function of $\partial \Omega$, see \cite{AF07}. In our situation, we can assume that $\partial \Omega = \{ g \in \mathbb{H}: \delta_{cc}(g)=0\}$. In particular, the results from \cite{AF07} tell us that there exists a map $\mathcal{N}:[0,\epsilon]\times \partial \Omega \to \mathbb{R}^{3}$, such that $\mathcal{N}(\cdot,g):[0,\epsilon]\to \mathbb{R}^{3}$ is continuous, $\mathcal{N}(\epsilon,\cdot)\to \partial_{f} \Omega_{\epsilon}$ is smooth and $\mathcal{N}(0,g)= \mbox{id}|_{\partial \Omega}(g)$. We claim that there exists a continuous map $a(\epsilon,g):= \textrm{angle}(T_{\mathcal{N}(\epsilon,g)}\partial_{f} \Omega_{\epsilon}; H_{\mathcal{N}(\epsilon,g)})$. Since $\partial \Omega$ has no characteristic points, $a(0,g)>0$ for every $g \in \partial \Omega$. The continuity of $a(\cdot,\cdot)$ implies that there exists $\epsilon_0$, $0<\epsilon_{0}<\epsilon$, so that $a(s,g) >0$ for every $g \in \partial \Omega$ and for every $s \in (0,\epsilon_0)$. In particular this shows that we can choose a sufficiently small $\epsilon>0$ so that $\partial_{f} \Omega_{\epsilon}$ is still a non-characteristic set. We will use the following proposition from \cite{BFFVW} (see Proposition 3.3) which can be proved with the help of the sub-Riemannian divergence formula. \begin{proposition}\label{prop-3-3} Let $h:\Omega_{\epsilon} \to \mathbb{R}$ be a $C^1$ function. 
Then the vector field $h \, \nabla_H\delta_{cc}:\Omega_{\epsilon} \to \mathbb{R}^3$ satisfies $$ \int_{\{ s < \delta_{cc} < t \}} \operatorname{div_{\textnormal{H}}}(h \, \nabla_H\delta_{cc}) \, d{\mathcal L}^3 = \int_{\delta_{cc}^{-1}(t)} h \, d{\mathcal H}^3_{cc} - \int_{\delta_{cc}^{-1}(s)} h \, d{\mathcal H}^3_{cc}. $$ \end{proposition} We are interested in a Taylor series expansion of the function $$ \epsilon \mapsto \mathcal{L}^{3}(\Omega_{\epsilon}) $$ about $\epsilon = 0$. The analyticity of this function has already been proved in \cite{BFFVW}, therefore let us denote by $f^{(i)}(\epsilon)$ the $i^{th}$ derivative of $\epsilon \mapsto \mathcal{L}^{3}(\Omega_{\epsilon})$. The first three elements of the expansion are obtained as in \cite{BFFVW}. For the other terms, we need to recall that by Theorem 3.4 of \cite{BFFVW}, for every $i \geq 1$, \begin{equation}\label{eq:lemma} f^{(i)}(s)= \int_{\delta_{cc}^{-1}(s)}(\operatorname{div_{\textnormal{H}}}^{(i-1)}\delta_{cc})\, d\mathcal{H}_{cc}^{3}, \quad \textrm{ for every } s \in [0,\epsilon_0). \end{equation} In particular, for $i=3$, $\operatorname{div_{\textnormal{H}}}^{(2)}\delta_{cc}= B+2C$ and the expression $B+C$ coincides with the horizontal Gaussian curvature ${\mathcal{K}}_0$. Since the level set $\delta_{cc}^{-1}(s)$ is a compact surface without boundary and without characteristic points, the Gauss--Bonnet Theorem \ref{HGB} yields $\int_{\delta_{cc}^{-1}(s)}(B+C)\, d\mathcal{H}_{cc}^{3}=0$, and therefore $$ f^{(3)}(s)= \int_{\delta_{cc}^{-1}(s)}C\, d\mathcal{H}_{cc}^{3} $$ and hence $$ f^{(3)}(0)= \int_{\partial \Omega}C\, d\mathcal{H}_{cc}^{3}. $$ We now claim that \begin{equation}\label{2j2} f^{(2j+2)}(s) = \int_{\delta_{cc}^{-1}(s)} (B^{j-1}D) \, d{\mathcal H}_{cc}^3 \qquad \mbox{for $j \ge 1$} \end{equation} and \begin{equation}\label{2j3} f^{(2j+3)}(s) = \int_{\delta_{cc}^{-1}(s)} (B^{j-1}(AD-E)) \, d{\mathcal H}_{cc}^3 \qquad \mbox{for $j \ge 1$,} \end{equation} from which the indicated values of the coefficients in the series expansion of $\epsilon \mapsto {\mathcal L}^3(\Omega_\epsilon)$ are obtained by setting $s=0$. The formulas in \eqref{2j2} and \eqref{2j3} can be obtained inductively by evaluating difference quotient approximations to the indicated derivatives using the inductive hypothesis and the divergence formula in Proposition \ref{prop-3-3}. First, \begin{equation*}\begin{split} f^{(4)}(s) &= \lim_{\epsilon \to 0} \tfrac1\epsilon \left( f^{(3)}(s+\epsilon) - f^{(3)}(s) \right) \\ &= \lim_{\epsilon \to 0} \tfrac1\epsilon \left( \int_{\delta_{cc}^{-1}(s+\epsilon)} C \, d\mathcal{H}_{cc}^3 - \int_{\delta_{cc}^{-1}(s)} C \, d\mathcal{H}_{cc}^3 \right) \\ &= \lim_{\epsilon \to 0} \tfrac1\epsilon \int_{\{ s < \delta_{cc} < s+\epsilon \}} \operatorname{div_{\textnormal{H}}}(C \, \nabla_H \delta_{cc}) \, d{\mathcal L}^3 = \int_{\delta_{cc}^{-1}(s)} \operatorname{div_{\textnormal{H}}}(C \, \nabla_H \delta_{cc}) \, d{\mathcal H}^3_{cc}. \end{split}\end{equation*} By \eqref{div-of-h-formula} and Lemma \ref{itdiv}, $\operatorname{div_{\textnormal{H}}}(C\,\nabla_H \delta_{cc}) = CA + g(C) = AC + (D-AC) = D$ and so \eqref{2j2} holds in the case $j=1$.
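Before turning to larger $j$, it is perhaps worth recording the first odd-order case as well (a short sketch, using only \eqref{div-of-h-formula} and Lemma \ref{itdiv}, exactly as above): repeating the same difference quotient argument for $f^{(5)}$ and using $$ \operatorname{div_{\textnormal{H}}}(D\,\nabla_H \delta_{cc}) = DA + g(D) = AD - E, $$ one obtains $f^{(5)}(s) = \int_{\delta_{cc}^{-1}(s)} (AD-E) \, d{\mathcal H}_{cc}^3$, that is, \eqref{2j3} with $j=1$.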
Similarly, for $j \ge 2$, \begin{equation*}\begin{split} f^{(2j+2)}(s) &= \lim_{\epsilon \to 0} \tfrac1\epsilon \left( f^{(2j+1)}(s+\epsilon) - f^{(2j+1)}(s) \right) \\ &= \lim_{\epsilon \to 0} \tfrac1\epsilon \left( \int_{\delta_{cc}^{-1}(s+\epsilon)} (B^{j-2}(AD-E)) \, d\mathcal{H}_{cc}^3 - \int_{\delta_{cc}^{-1}(s)} (B^{j-2}(AD-E)) \, d\mathcal{H}_{cc}^3 \right) \\ &= \lim_{\epsilon \to 0} \tfrac1\epsilon \int_{\{ s < \delta_{cc} < s+\epsilon \}} \operatorname{div_{\textnormal{H}}}(B^{j-2}(AD-E) \, \nabla_H \delta_{cc}) \, d{\mathcal L}^3 \\ &= \int_{\delta_{cc}^{-1}(s)} \operatorname{div_{\textnormal{H}}}(B^{j-2}(AD-E) \, \nabla_H \delta_{cc}) \, d{\mathcal H}^3_{cc}. \end{split}\end{equation*} By \eqref{div-of-h-formula} and Lemma \ref{itdiv}, \begin{equation*}\begin{split} \operatorname{div_{\textnormal{H}}}(B^{j-2}(AD-E)\,\nabla_H \delta_{cc}) &= B^{j-2}(AD-E)A + g(B^{j-2}(AD-E)) \\ &= B^{j-2}(A^2D-AE) + B^{j-2}(Ag(D)+Dg(A)-g(E)) \\ &= B^{j-2}(A^2D - AE - AE + BD + 2CD - A^2D +2AE - 2CD) \\ &= B^{j-1}D \end{split}\end{equation*} and so \eqref{2j2} holds in the case $j\ge 2$. A similar computation establishes \eqref{2j3} for all $j \ge 1$. This completes the proof. \end{proof} \section{Questions and remarks}\label{questions} As is clear from the proof of the Gauss--Bonnet theorem, it is of crucial interest to prove local summability of the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ around isolated characteristic points, with respect to the Heisenberg perimeter measure. For the horizontal mean curvature ${\mathcal{H}}_{0}$ of $\Sigma$ this is an established result, see \cite{DGN12}. In the same work \cite{DGN12}, it is shown that the situation could change dramatically if we address the problem of local integrability of ${\mathcal{H}}_0$ with respect to the Riemannian surface measure, near the characteristic set. In this case, it is conjectured that we should have local integrability if we are close to an \textit{isolated characteristic point} of $\Sigma$, and a counterexample is presented in the case in which the characteristic set $\mathrm{char}(\Sigma)$ is 1-dimensional. \begin{question} Is the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ locally summable with respect to the Heisenberg perimeter measure, near the characteristic set? \end{question} Recalling \eqref{eq:summ}, it is clear that the local summability of ${\mathcal{K}}_0$ is closely related to the integrability of ${\mathcal{H}}_0$ near isolated characteristic points with respect to the Riemannian surface measure. As far as we know, the best results in this direction are those of \cite{DGN12}, which provide a class of examples where we have local integrability. In the same spirit as \cite{DGN12}, we have the following result. \begin{proposition} Let $\Sigma \subset \mathbb{H}$ be a Euclidean $C^{2}$-smooth surface. If $\Sigma$ has cylindrical symmetry near an isolated characteristic point $g$, then ${\mathcal{K}}_0 \in L^{1}(\Sigma, d\mathcal{H}^{3}_{cc})$. \end{proposition} \begin{proof} Without loss of generality we can assume that the isolated characteristic point is the origin $0=(0,0,0)$, and that locally around $0$ the surface $\Sigma$ is given by the 0-level set of the function $$u(x_1,x_2,x_3):=x_3 - f\left(\dfrac{x_{1}^{2}+x_{2}^{2}}{4}\right),$$ where $f\in C^{2}$.
For simplicity, let us denote by $f(r):=f\left(\tfrac{x_{1}^{2}+x_{2}^{2}}{4}\right).$ Then, $$X_3 u = 1, \quad X_1 u = -\tfrac{1}{2}(x_2 +x_1 f'(r)) \quad \textrm{and} \quad X_2 u = \tfrac{1}{2}(x_1 - x_2 f'(r)),$$ therefore $\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}} = \tfrac{1}{2} \sqrt{x_{1}^{2}+x_{2}^{2}} \sqrt{1+(f'(r))^{2}}$. In order to compute the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ we need \begin{equation*} \begin{aligned} X_1 \left( \dfrac{1}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)&= 2 \partial_{x_1}\left( (x_{1}^{2}+x_{2}^{2})^{-\tfrac{1}{2}}(1+(f')^{2})^{-\tfrac{1}{2}}\right)\\ &=- \dfrac{2 x_1 (1+(f')^{2})+f' \, f'' \, x_1 (x_{1}^{2}+x_{2}^{2})}{(x_{1}^{2}+x_{2}^{2})^{3/2}(1+(f')^{2})^{3/2}}, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} X_2 \left( \dfrac{1}{\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}}\right)&= 2 \partial_{x_2}\left( (x_{1}^{2}+x_{2}^{2})^{-\tfrac{1}{2}}(1+(f')^{2})^{-\tfrac{1}{2}}\right)\\ &=- \dfrac{2 x_2 (1+(f')^{2})+f' \, f'' \, x_2 (x_{1}^{2}+x_{2}^{2})}{(x_{1}^{2}+x_{2}^{2})^{3/2}(1+(f')^{2})^{3/2}}. \end{aligned} \end{equation*} After some simplifications we get \begin{equation}\label{cK-formula} {\mathcal{K}}_0 = -\dfrac{2}{(x_{1}^{2}+x_{2}^{2})(1+(f')^{2})} + \dfrac{ f'\, f''}{(1+(f')^{2})^{2}}, \end{equation} which is summable. \end{proof} A generalization of \eqref{cK-formula} appears in Example \ref{example-x3graph}. \\ It is obvious that the local summability of $\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}^{-1}$ implies the local summability of the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$, but it is not necessary, as showed by the following example. \begin{example} Let $\Sigma = \{ (x_1,x_2,x_3)\in \mathbb{H} :u(x_1,x_2,x_3)=0 \},$ for $$u(x_1,x_2,x_3) = x_3 - \dfrac{x_1\, x_2}{2} -\exp\left(-(x_{1}^{2}+x_{2}^{2})^{-2} \right).$$ The origin $0=(0,0,0)$ is an isolated characteristic point of $\Sigma$, indeed $$X_{1}u = -x_2 -\dfrac{4 x_1 \exp\left(-(x_{1}^{2}+x_{2}^{2})^{-2}\right)}{(x_{1}^{2}+x_{2}^{2})^{3}}, \quad \textrm{and} \quad X_{2}u = -\dfrac{4 x_2 \exp\left(-(x_{1}^{2}+x_{2}^{2})^{-2}\right)}{(x_{1}^{2}+x_{2}^{2})^{3}}.$$ Switching to polar coordinates $x=r \cos(\theta), y=r \sin(\theta),$ we have $$\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}^{2} = r^{2} \sin^{2}(\theta) + \dfrac{16 \exp(-2r^{-4})}{r^{10}} + \dfrac{4r^{2}\sin(2\theta)\exp(-r^{-4})}{r^{6}} \leq r^{2}\sin^{2}(\theta) + \exp(-r^{-4}).$$ Therefore we are interested in the summability of the following integral \begin{equation}\label{eq:inte} \int_{0}^{\epsilon} \int_{0}^{2\pi}\dfrac{r}{\sqrt{r^{2}\sin^{2}(\theta)+ \exp(-r^{-4})}} d\theta \, dr . \end{equation} Now, setting $g(r):= r^{-1} \exp(-r^{-2}),$ we have \begin{equation*} \begin{aligned} \int_{0}^{\epsilon} &\int_{0}^{2\pi}\dfrac{r}{\sqrt{r^{2}\sin^{2}(\theta)+ \exp(-r^{-4})}} d\theta \, dr \gtrsim \int_{0}^{\epsilon}\int_{0}^{2\pi}\dfrac{1}{|\sin(\theta)|+ g(r)}d\theta \, dr\\ &\geq \int_{0}^{\epsilon}\left( \int_{0}^{\delta} \dfrac{1}{|\sin(\theta)|+g(r)}d\theta\right)dr \approx \int_{0}^{\epsilon}\left(\int_{0}^{\delta}\dfrac{1}{\theta+g(r)}d\theta\right)dr\\ &=\int_{0}^{\epsilon}\ln\left(1+\dfrac{\delta}{g(r)}\right)dr, \end{aligned} \end{equation*} which is divergent. Unfortunately this does not provide an example of a surface with isolated characteristic points whose sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ is \textit{not} locally integrable with respect to the Heisenberg perimeter measure. 
Indeed, after quite long computations, one can prove that in this case the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ is actually locally integrable. \end{example} Our second question relates to a possible connection between the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ and one of the integrands appearing in the localized Steiner's formula proved in \cite{BFFVW}. In particular, if we consider a Euclidean $C^{2}$-smooth and regular surface $\Sigma \subset \mathbb{H}$ with a defining function $u$ such that $\|\nabla_{\textnormal{H}} u\|_{\textnormal{H}}=1$, then we have that $${\mathcal{K}}_0 = B + C,$$ where $B$ and $C$ are defined as in Section \ref{examples}. \begin{question} Is there any explanation for the discrepancy between the horizontal Gaussian curvature ${\mathcal{K}}_0$ and the fourth integrand appearing in the Steiner's formula proved in \cite{BFFVW}? \end{question} We want to mention that the expression we got for the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ appears also in the paper \cite{DGN07} in the study of stability properties of minimal surfaces in $\mathbb{H}$, and also in the upcoming manuscript \cite{ChMT}. We end this paper with a Fenchel-type theorem for fully horizontal curves. \begin{theorem}\label{Fenchel} Let $\gamma:[a,b]\to \mathbb{H}$ be a Euclidean $C^{2}$-smooth, regular, closed and fully horizontal curve. Then \begin{equation} \int_{\gamma} k_{\gamma}^{0}\, d\dot{\gamma}_{\textnormal{H}} > 2\pi, \end{equation} where $d\dot{\gamma}_{\textnormal{H}}$ is the standard Heisenberg length measure of Definition \ref{length}. \end{theorem} \begin{proof} Define the projected curve $$ \tilde{\gamma}(t):= \Pi(\gamma)(t) = (\gamma_1(t), \gamma_2(t), 0)_{\{e_1,e_2,e_3\}}. $$ Then $\tilde{\gamma}$ is a planar Euclidean $C^{2}$-smooth, regular and closed curve whose curvature $k$ coincides precisely with the sub-Riemannian curvature $k_{\gamma}^{0}$ of $\gamma$. Due to the fact that the curve is horizontal, $$ \int_{\gamma} k_{\gamma}^{0}\, d\dot{\gamma}_{\textnormal{H}} = \int_{\tilde{\gamma}} k \, d\dot{\gamma}_{\mathbb{R}^{2}}, $$ where $d\dot{\gamma}_{\mathbb{R}^{2}}$ denotes the standard Euclidean length measure in $\mathbb{R}^{2}$. The classical Fenchel Theorem (see \cite{dC76}, Chapter 5.7) assures that $$ \int_{\tilde{\gamma}} k \, d\dot{\gamma}_{\mathbb{R}^{2}} \geq 2\pi, $$ and states that equality is achieved if and only if the curve $\tilde{\gamma}$ is convex. It is a well-known fact concerning horizontal curves that the projection of a closed fully horizontal curve $\gamma$ has enclosed oriented area equal to 0. Therefore $\tilde{\gamma}$ cannot be a convex curve, and the inequality has to be strict. This completes the proof. \end{proof} \appendix \section{Examples} We collect here a list of examples where we compute the sub-Riemannian Gaussian curvature explicitly. \begin{example} Any Euclidean $C^{2}$-smooth vertically ruled surface $\Sigma$ has vanishing sub-Riemannian Gaussian curvature, i.e. if $$\Sigma = \{ (x_1, x_2, x_3)\in \mathbb{H}: f(x_1,x_2)=0\},$$ for $f \in C^{2}(\mathbb{R}^2)$, then ${\mathcal{K}}_0 =0$. In particular, every vertical plane has constant sub-Riemannian Gaussian curvature ${\mathcal{K}}_0 = 0$.
\end{example} \begin{example} The horizontal plane through the origin, $\Sigma = \{ (x_1,x_2,x_3) \in \mathbb{H}: x_3=0\},$ has $${\mathcal{K}}_0 = -\dfrac{2}{(x_{1}^{2} + x_{2}^{2})}.$$ \end{example} \begin{example} The Kor\'{a}nyi sphere, $\Sigma = \{ (x_1,x_2,x_3) \in \mathbb{H} : (x_{1}^2 + x_{2}^2)^2 + 16 x_{3}^2 -1 =0 \},$ has $${\mathcal{K}}_0 = -\dfrac{2}{(x_{1}^2 + x_{2}^2)} + 6 (x_{1}^2 + x_{2}^2).$$ \end{example} \begin{example} Let $\alpha >0$. The paraboloid, $\Sigma = \{ (x_1, x_2, x_3)\in \mathbb{H}: x_3 = \alpha (x_{1}^2 + x_{2}^2)\},$ has $${\mathcal{K}}_0 = - \dfrac{1}{1+ 16 \alpha^2} \, \dfrac{2}{(x_{1}^2 + x_{2}^2)}.$$ \end{example} \begin{example}\label{example-x3graph} Every surface $\Sigma$ given as an $x_3$-graph, $\Sigma = \{ (x_1,x_2,x_3)\in \mathbb{H}: x_3 = f(x_1,x_2)\}$, with $f\in C^{2}(\mathbb{R}^2)$, has $$ {\mathcal{K}}_0 = -\dfrac{2}{\|\nabla_{\textnormal{H}} u \|_{\textnormal{H}}^{2}}+ \dfrac{1}{\|\nabla_{\textnormal{H}} u \|_{\textnormal{H}}^{4}} (\mathrm{Hess}f) \left(\nabla_{\textnormal{H}} u, J \nabla_{\textnormal{H}} u\right), $$ where $u(x_1, x_2, x_3):= x_3 - f(x_1,x_2)$. \end{example} For $x_3$-graphs, we have another useful result that provides a sufficient condition for the sub-Riemannian Gaussian curvature ${\mathcal{K}}_0$ to vanish. \begin{lemma} Let $\Sigma \subset \mathbb{H}$ be as before. Let $g \in \Sigma$ and suppose that $\Sigma$ is a Euclidean $C^{2}$-smooth $x_3$-graph and $X_1u$ and $X_2u$ are linearly dependent in a neighborhood of $g$. Then ${\mathcal{K}}_0(g)=0$. \end{lemma} \begin{proof} In the case of an $x_3$-graph, $X_3 u =1$ and we have $$ {\mathcal{K}}_0 = -\frac1{|\nabla_0 u|^2} + \frac{X_2 u X_1(\tfrac12|\nabla_0 u|^2) - X_1 u X_2(\tfrac12|\nabla_0 u|^2)}{|\nabla_0 u|^4}. $$ Assume that $aX_1 u + bX_2 u = 0$ in a neighborhood of $g$. Let us suppose that $b \ne 0$; the case $a \ne 0$ is similar. Without loss of generality, assume that $b=1$. We expand \begin{equation}\label{x3graph} {\mathcal{K}}_0 = \frac{-(X_1u)^2-(X_2u)^2 + X_1uX_2uX_1X_1u+(X_2u)^2X_1X_2u-(X_1u)^2X_2X_1u-X_1uX_2uX_2X_2u}{|\nabla_0 u|^4} \end{equation} and use the identities $X_2u=aX_1u$, $$ X_1X_2u=aX_1X_1u, $$ $$ X_2X_1u=X_1X_2u-X_3u=aX_1X_1u-1 $$ and $$ X_2X_2u=aX_2X_1u=a^2X_1X_1u-a $$ to rewrite the numerator of \eqref{x3graph} entirely in terms of $X_1u$ and $X_1X_1u$. A straightforward computation shows that the expression for ${\mathcal{K}}_0$ vanishes. \end{proof} \begin{example} Every surface $\Sigma$ given as a $x_1$-graph, $\Sigma = \{ (x_1,x_2,x_3)\in \mathbb{H}: x_1 = f(x_2,x_3) \}$ with $f\in C^{2}(\mathbb{R}^2)$, has $${\mathcal{K}}_0 = -\dfrac{f_3^2}{\|\nabla_{\textnormal{H}} u \|_{\textnormal{H}}^2} + \dfrac{(x_1^2-x_2^2) f_{33}}{8\, \|\nabla_{\textnormal{H}} u \|_{\textnormal{H}}^4} \left( x_1 \, f_3 + \dfrac{x_1 x_2 \, f_3^2}{2}\right) -\dfrac{f_{23} (1+ \tfrac{x_2}{2}f_3)}{\|\nabla_{\textnormal{H}} u \|_{\textnormal{H}}^2} + \dfrac{1}{\|\nabla_{\textnormal{H}} u \|_{\textnormal{H}}^4}\left( \dfrac{x_1^2 \, f_3^3}{8} + \dfrac{(1+\tfrac{x_2}{2}f_3)}{2}f_3\right),$$ where $u(x_1, x_2, x_3):= x_1 - f(x_2,x_3)$. A similar result holds for $x_2$-graphs. \end{example} \nocite{*} \end{document}
\begin{document} \begin{center} \thispagestyle{empty} {\large \bf Tail Asymptotics of Random Sum and Maximum of Log-Normal Risks} \vskip 0.4 cm \centerline{\large Enkelejd Hashorva\footnote{University of Lausanne, UNIL-Dorigny 1015 Lausanne, Switzerland} and Dominik Kortschak\footnote{ Universit\'e de Lyon, F-69622, Lyon, France; Universit\'e Lyon 1, Laboratoire SAF, EA 2429, Institut de Science Financi\`ere et d'Assurances, 50 Avenue Tony Garnier, F-69007 Lyon, France} } \vskip 0.4 cm \end{center} {\bf Abstract:} In this paper we derive the asymptotic behaviour of the survival function of both the random sum and the random maximum of log-normal risks. As in the case of finite sums and maxima investigated in Asmussen and Rojas-Nandaypa (2008), the principle of a single big jump also holds in the more general setup of random sums and random maxima. We investigate both log-normal sequences and some related dependence structures motivated by stationary Gaussian sequences. {\bf Key words}: Risk aggregation; log-normal risks; exact asymptotics; Gaussian distribution; product of random variables. \section{Introduction} Let $Y_i,i\ge 1$ be positive random variables \rE{(rv's)} which model claim sizes of an insurance portfolio for a given observation period. Denote by $N$ the total number of \rrE{claims} reported during the observation period; thus $N$ is a discrete rv, which we assume to be independent of the claim sizes $Y_i,i\ge 1$. The classical \rE{risk} model $S_N= \sum_{i=1}^N Y_i$ for the total loss amount assumes that the $Y_i$'s are independent and identically distributed (iid) rv's. If the assumption of independence of claim sizes is dropped, \pD{one faces the problem of how to choose a meaningful dependence structure. Further, this dependence structure should be tractable from a theoretical point of view. For example Constantinescu et al.\ (2011) consider \rE{a} model \rE{where} the survival copula of claim sizes is assumed to be Archimedean. Such a model has the interpretation that for some positive rv $V$ and iid unit exponential rv's $E_i,i\ge 1$ independent of $V$, \rE{the rv's} $Y_i=V E_i,i\ge 1$ form a dependent sequence of claim sizes derived by random scaling of the iid claim sizes $E_i,i\ge 1$.}\\ \pD{In this paper we use dependent Gaussian sequences and related dependence structures \rE{to model claim sizes}.} Specifically, if $X_i,i\ge 1$ \rE{are} dependent Gaussian rv's with $N(0,1)$ distribution, then $Y_i= e^{X_i},i\ge 1$ is the corresponding sequence of dependent log-normal rv's that can be used for modeling claim sizes. For instance, if $X_i,i\ge 1$ is a centered stationary Gaussian sequence with $N(0,1)$ components and constant correlation $\E{X_1X_i}= \rho^2\in (0,1),i>1$, then $Y_i=e^{X_i}$ is a sequence of dependent log-normal rv's. Since we have (see e.g., Berman (1992)) \begin{eqnarray} X_i= \rho Z_0+ \sqrt{1- \rho^2} Z_i, \label{seq} \end{eqnarray} with $Z_i,i\ge 0$ iid $N(0,1)$ rv's, it follows that $Y_i= e^{\rho Z_0} e^{\sqrt{1- \rho^2}Z_i},i\ge 1$. For such $Y_i$'s, by Asmussen and Rojas-Nandaypa (2008) \begin{eqnarray}\label{NN} \pk{S_n > u} \sim n \pk{X_1> \log u}, \quad u\to \infty \end{eqnarray} holds for any $n\ge 2$, where $\sim$ stands for asymptotic equivalence of two functions when the argument tends to infinity. In view of Asmussen et al.\ (2011) (see also Hashorva (2013)) $S_n$ is asymptotically tail equivalent with the maximum $Y_{n:n}= \max_{1 \le i \le n} Y_i$, i.e., $\pk{S_n> u} \sim \pk{Y_{n:n} >u}$ as $u\to \infty$.
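For later comparison, it is convenient to record the explicit form of the right-hand side of \eqref{NN}; this is a routine side calculation based on Mill's ratio for the standard normal survival function $\Psi$ and is not needed for the equivalence itself. Since $\pk{X_1>\log u}=\Psi(\log u)$ and $\Psi(x)\sim \frac{1}{\sqrt{2\pi}\,x}e^{-x^2/2}$ as $x\to\infty$, the asymptotics in \eqref{NN} can be rewritten as \begin{eqnarray*} \pk{S_n > u} \sim \frac{n}{ \sqrt{ 2 \pi} \log u} \exp\Bigl( - \frac{(\log u)^2}{2} \Bigr), \quad u\to \infty, \end{eqnarray*} which is exactly the form that reappears below (with $n$ replaced by $\mean{N}$, and with additional slowly varying factors in the non-Gaussian setting) in our main results.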
Our analysis in this paper is concerned with the probability of observing large values for the random sum $S_N$, thus we shall investigate $\pk{S_N> u}$ when $u$ is large. Additionally, we shall consider also the tail asymptotics of the maximum claim $Y_{N:N}$ among the claim sizes $Y_1 ,\ldots, Y_N$; we set $Y_{0:0}=0$ if $N=0$. \rrE{For the case that $N$ is non-random see for recent results on max-sum equivalence Jiang et al.\ (2014) and the references therein.} For our investigations of the tail behaviours of $S_N$ and $Y_{N:N}$ we shall follow two objectives:\\ A) We shall exploit the tractable dependence structure \rE{implied} by \eqref{seq} choosing general $Z_i$'s such that $e^{Z_i}$ has survival function similar to that of a log-normal rv; \\ B) We consider a log-normal dependence structure induced by a general Gaussian sequence $X_i,i\ge 1$ where $X_i,X_n$ can have a correlation $\rho_{in}$ which is allowed to converge to 1 as $n\to \infty$. For both cases of dependent $Y_i$'s we show that the principle of a single big jump (see Foss et al.\ (2013) for details in iid setup) holds if for the discrete rv $N$ we require that \begin{eqnarray} \label{conN} \E{(1+ \delta)^N} < \infty \end{eqnarray} is valid for some $\delta>0$; a large class of discrete rv's satisfies condition \eqref{conN}.\\ Brief organisation of the rest of the paper: We present our main results in Section 2 followed by the proofs in Section 3. \section{Main Results} We consider first $X_i$'s which \rrE{are in general not Gaussian}. So for a given fixed $\rho \in [0,1)$ let $Z_i,i\ge 0$ be independent rv's which define $X_i$'s via the dependence structure \eqref{seq}. We shall assume that \begin{eqnarray}\label{14b} \pk{e^{Z_0} > u}\sim \mathcal{L}(u) \Psi(\log(u)), \quad u\to \infty, \end{eqnarray} with $\Psi$ the survival function of an $N(0,1)$ rv and $\mathcal{L}(\cdot)$ a regularly varying function at $\infty$ with index $\beta\in \mathbb{R}$, see Bingham et al. (1987) or Mikosch (2009) for details on regularly \rrE{varying} functions. Clearly, \eqref{14b} is satisfied if $Z_0$ is an $N(0,1)$ rv. Considering $Z_0$ as a base risk, we shall further assume that with $c_i\in [0,\infty)$ uniformly in $i$ \begin{eqnarray}\label{14} \pk{Z_i > u}\sim c_i\pk{{Z_0} > u}, \quad u\to \infty. \end{eqnarray} For such models the claim sizes $Y_i=e^{X_i},i\ge 1$ have marginal distributions which are in general neither log-normal nor with tails which are proportional \rrE{to} those of log-normal rv's. We state next our first result \rE{for $Y_{N:N}$ the maximal claim size among $Y_1=e^{X_1} ,\ldots, Y_N=e^{X_N}$ and the random sum $S_N=\sum_{i=1}^N Y_i$; we set $Y_{0:0}=0$ and $S_0:=0$.} \begin{theo}\label{th0} Let $N$ be an integer-valued rv satisfying $\E{(1+\delta)^N} < \infty$ for some $\delta>0$. Let $X_i,i\ge 1$ be a sequence of rv's given by \eqref{seq} with $Z_i,i\ge 0$ iid rv's and $\rho\in [0,1)$ some given constant. Suppose that \eqref{14b} and \eqref{14} hold with $\max_{i \ge 1} c_i < \infty$. \rE{If further $N$ is independent of $X_i,i\ge 1$, then} \begin{eqnarray}\label{NN2} \pk{S_N > u} \sim \pk{Y_{N:N} > u} \sim \E{\sum_{i=1}^N c_i} \frac{\kal{L}(u^{\rho^2}) \kal{L}(u^{1-\rho^2}) }{ \sqrt{ 2 \pi} \log u} \exp\Bigl( - \frac{(\log u)^2}{2} \Bigr) , \quad u \to \infty. \end{eqnarray} \end{theo} {\bf Remarks:} a) Clearly, if $Y=e^{Z}$ with $Z$ an $N(0,1)$ rv (thus $Y$ is a log-normal rv with $LN(0,1)$ distribution), then \eqref{14b} holds with $\kal{L}(u)=1, u>0$. 
\\ b) If $\kal{L}(\cdot)$ in \netheo{th0} is constant, then the tail asymptotic behaviour of $S_N$ and $Y_{N:N}$ is not influenced by the value of the dependence parameter $\rho$, and hence as expected the principle of a single big jump holds. \rrE{However}, for non-constant $\kal{L}(\cdot)$ the dependence parameter $\rho$ plays a crucial role in the tail asymptotics derived in \eqref{NN2}. The reason for this is that by Lemma \ref{L1} \begin{equation} \pk{Y_i>u} \sim c_i \frac{\kal{L}(u^{\rho^2}) \kal{L}(u^{1-\rho^2}) }{ \sqrt{ 2 \pi} \log u} \exp\Bigl( - \frac{(\log u)^2}{2} \Bigr), \quad u\to \infty. \label{ff} \end{equation} Hence also in this case the principle of a single big jump applies.\\ c) In the proof of Theorem \ref{th0} we show that $S_N=e^{\rho Z_0} W_N$ with $W_N$ independent of $Z_0$ and with survival function asymptotically proportional to that of $e^{\sqrt{1-\rho^2} Z^\star}$ for some $Z^\star$ independent of $Z_0$, and then we apply Lemma \ref{L1}. Here we want to mention that after proving \eqref{ff} we can also apply Proposition 2.2 of Foss and Richards (2010) to determine the asymptotics of $\pk{S_n> u}$ as $u\to \infty$. If we condition on $Z_0$ and set $\overline F(x) =\mathbb{P}(Y_1>x)$, $B_i(x)=\{x:e^{\rho Z_0}\le x^\gamma\}$ for some $\gamma \in (\rho,1)$ and define $\pD{h(x)=x^{\xi}}$ with \[1-\frac 12 \left(\frac{1-\gamma}{\sqrt{1-\rho^2}}\right)<\xi^2<1, \] then it is straightforward to show that the conditions of Proposition 2.2 of Foss and Richards (2010) are met. Our second result is for log-normal rv's where we remove the assumption of equi-correlation. Specifically, we consider for each $n$ claim sizes $Y_{1,n}=e^{X_{1,n}},\ldots,Y_{n,n}= e^{X_{n,n}}, $ where $(X_{1,n}, \ldots, X_{n,n})$ is a normal random vector with mean zero and covariance matrix $\Sigma^{(n)}$ which is a correlation matrix with entries $\sigma^{(n)}_{i,j}$. We shall assume that $\rho^n_{i,j}:=\sigma^{(n)}_{i,j}$ is bounded by some sequence $\rho_n$ and some $\rho\in (0,1)$, i.e., \begin{eqnarray}\label{rhon} \rho^n_{i,j}\le\max(\rho_n,\rho), \quad \rE{n\ge 1} \end{eqnarray} for all $i\not =j$. \rrE{Further, we suppose that the} sequence $\rho_n,n\ge 1$ \rrE{satisfies} for some $c^*>8$ and some $\eta> 0$ \begin{eqnarray}\label{cnu} \rho_{n(u)} \le 1-\frac{c^* \log(\log(u))}{\log(u)}, \quad \text{with } n(u)= \left \lfloor(1+ \eta ) \frac{ \rrE{(\log (u))^2} }{2\log(1+\delta)}\right\rfloor . \end{eqnarray} If for instance all \rE{${\rho_{i,j}^{n}}$} are bounded away from $1$, then clearly condition \eqref{cnu} is valid; it holds also if for some $c$ large enough $\rho_n\le 1-c \log(n)/\sqrt{n}$.\\ We present next our final result. \begin{theo}\label{thm} Let $Y_{1,n} ,\ldots, Y_{n,n}, n\ge 1$ be claim sizes as above being further independent of some integer-valued rv $N$ which satisfies \eqref{conN} for some $\delta>0$. If further \eqref{rhon} holds with $\rho_n$ satisfying \eqref{cnu}, then \begin{eqnarray} \mathbb{P}\left( \max_{1\le i \le N}Y_{i,N}>u\right)\sim \mathbb{P}(S_N>u)\sim \rE{\frac{ \mean{N} }{ \sqrt{ 2 \pi} \log u} \exp\Bigl( - \frac{(\log u)^2}{2} \Bigr), \quad u\to \infty}. \end{eqnarray} \end{theo} {\bf Remarks:} a) Our second result in \netheo{thm} shows that the principle of a single big jump still holds even if we allow \rrE{for a more general dependence structure}.\\ b) Kortschak (2012) derives second order asymptotic results for subexponential risks. Similar ideas are utilised to derive second order asymptotic results for the aggregation of log-normal random vectors in Kortschak and Hashorva (\rrE{2013,2014}).
In the setup of randomly weighted sums it is also possible to derive such results. \section{Proofs} We give next two lemmas needed in the proofs below. \rrE{The first lemma is of some interest on its own; in particular, it implies Lemma 2.3 in Farkas and Hashorva (2013) (see also Lemma 8.6 in Piterbarg (1996)).} \begin{lem} \label{L1} Let $\kal{L}_i\rE{(\cdot)}, i=1,2$ be some regularly varying functions at infinity with index $\beta_i$. If $Z_1,Z_2$ are two independent rv's such that $\pk{e^{Z_i}> u} \sim \kal{L}_i(u) \Psi(\log(u)), i=1,2$, then for any two positive constants $\sigma_1,\sigma_2$ \begin{eqnarray} \pk{e^{\sigma_1 Z_1+\sigma_2 Z_2} >u} \sim e^{\frac{\sigma_1^2 \sigma_2^2}{2 \sigma^2} (\beta_1 - \beta_2)^2 } \kal{L}_1(u^{\gamma})\kal{L}_2(u^{1-\gamma}) \Psi( (\log u)/\sigma) \label{38} \end{eqnarray} holds as $u\to \infty$, where $\gamma=\sigma_1^2/(\sigma_1^2+\sigma_2^2)$ and $\sigma=\sqrt{\sigma_1^2+ \sigma_2^2}$. \end{lem} \prooflem{L1} Choose an $\alpha>0$ such that \[ \frac{\sigma_1^2}{\sigma_2^2}<\frac{1+\alpha}{1-\alpha}. \] Then for any $a>0$ we have \begin{eqnarray} \frac{ \pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_2 Z_2} \le a}}{\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2 }> u, e^{\sigma_2 Z_2}> a}} & \le & \frac{ \pk{e^ { \sigma_1 Z_1 } > u/a}} {\pk{e^{\sigma_1 Z_1} > u^\alpha} \pk{{e^{\sigma_2Z_2}}> u^{1-\alpha}}}\notag\\ &\sim & \frac{ \kal{L}_1( (u/a)^{1/\sigma_1})\Psi(\frac{1}{\sigma_1}\log (u/a))}{ \kal{L}_1( u^{\alpha/\sigma_1}) \kal{L}_2( u^{(1-\alpha)/\sigma_2}) \Psi(\frac{\alpha}{\sigma_1}\log (u)) \Psi(\frac{1-\alpha}{\sigma_2}\log (u)) }\notag\\ & \to & 0, \quad u\to \infty,\label{eq:temp1} \end{eqnarray} with $\Psi$ the survival function of an $N(0,1)$ rv. With the same argument we get that for any $a>0$ we have \begin{eqnarray*} \frac{ \pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_1 Z_1} \le a}}{\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2 }> u, e^{\sigma_1 Z_1}> a}} \to 0, \quad u\to \infty,\label{eq:temp1C} \end{eqnarray*} and hence \begin{align} \pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u}=&\pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_1 Z_1} > a, e^{\sigma_2 Z_2} > a}\notag\\&+\pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_1 Z_1} \le a}+\pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_2 Z_2} \le a}\notag \\ \sim&\pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_1 Z_1} > a, e^{\sigma_2 Z_2} > a}, \quad u\to \infty.\label{eq:temp1b} \end{align} In view of \eqref{eq:temp1b} we have \begin{eqnarray*} \pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u} &\sim& \pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,e^{\sigma_1 Z_1}>\xi,e^{\sigma_2 Z_2}>\xi}, \quad u\to \infty. \end{eqnarray*} Assume next without loss of generality that $\sigma_1 \ge \sigma_2$.
If $H$ denotes the distribution of $e^{\sigma_1 Z_1}$, then for any $\xi>0$ with $u>2\xi$ \begin{eqnarray*} \pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,e^{\sigma_1 Z_1}>\xi,e^{\sigma_2 Z_2}>\xi} &=&\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,u/\xi \ge e^{\sigma_1 Z_1}>\xi,e^{\sigma_2 Z_2}>\xi}\\&&+\ \pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,e^{\sigma_1 Z_1}>u/\xi,e^{\sigma_2 Z_2}>\xi}\\ &=&\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,u/\xi \ge e^{\sigma_1 Z_1}>\xi}+\pk{e^{\sigma_1 Z_1}>u/\xi,e^{\sigma_2 Z_2}>\xi}\\ &=& \int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s)+ \pk{ e^{\sigma_1 Z_1 }> u/ \xi ,e^{\sigma_2 Z_2}> \xi}. \end{eqnarray*} For all $u $ and $\xi$ large enough $$\int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s)\ge \frac{1}{2}\pk{e^{\sigma_2 Z_2}> u/\xi}\ge \pk{ e^{\sigma_1 Z_1 }> u/ \xi ,e^{\sigma_2 Z_2}> \xi}$$ implying as $u\to \infty$ \begin{eqnarray*} \int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s)+ \pk{ e^{\sigma_1 Z_1 }> u/ \xi ,e^{\sigma_2 Z_2}> \xi} &\sim& \int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s). \end{eqnarray*} Further, since again the constant $\xi$ can be chosen arbitrary large we get for $\gamma=\sigma_1^2/(\sigma_1^2+\sigma_2^2)$ \begin{eqnarray*} \lefteqn{\int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s)} \\ &\sim& \int_{\xi}^{u/\xi} \frac{\sigma_2^2\kal{L}_2(u/s)} {\sqrt{2 \pi \sigma_2^2}\log (u/s)} \exp\Biggl( - \frac{(\log (u/s))^2}{2 \sigma_2^2}\Biggr)\ d H(s)\\ &=& \frac{\sigma_2^2\kal{L}_2\left(u^{1-\gamma}\right)} {\sqrt{2 \pi \sigma_2^2}\log (u^{1- \gamma})} \int_{\xi u^{-\gamma}}^{\frac 1\xi u^{1-\gamma}} \frac{\kal{L}_2\left(\frac{u^{1-\gamma}} s\right) } {\kal{L}_2\left(u^{1-\gamma}\right) } \frac{\log \left(u^{1-\gamma}\right) } {\log \left(\frac{u^{1-\gamma}} s\right) } \exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr) \ d H( u^{\gamma}s)\\ &=& \frac{(\sigma_1^2+\sigma_2^2)\kal{L}_2\left(u^{1-\gamma}\right)} {\sqrt{2 \pi \sigma_2^2}\log (u)} \int_{\xi u^{-\gamma}}^{\frac 1\xi u^{1-\gamma}} q(u, \gamma,s) \exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr) \ d H( u^{\gamma}s), \end{eqnarray*} with $q(u,\gamma,s)=\frac{\kal{L}_2\left(\frac{u^{1-\gamma}} s\right) } {\kal{L}_2\left(u^{1-\gamma}\right) } \frac{\log \left(u^{1-\gamma}\right) } {\log \left(\frac{u^{1-\gamma}} s\right) }.$ For some $c>0$, \rE{by the uniform convergence theorem for regularly varying functions (see Theorem A3.2 in Embrechts et al.\ (1997))} we get uniformly in $1/c<s<c$ \[ \lim_{u\to\infty} q(u,\gamma,s) =s^{-\beta_2}. \] \rrE{Further note that in the light of Potter's bound \rE{(see Bingham et al.\ (1987))}} for every $\epsilon>0$ and $A>1$ we can find a positive constant \eeE{$\xi$} such that for all $\xi u^{-\gamma} <s< \frac 1\xi u^{1-\gamma}$ \[ \frac 1 A s^{-\beta_2} \min(s^\epsilon,s^{-\epsilon}) \le q(u,\gamma,s) \le A s^{-\beta_2} \max(s^\epsilon,s^{-\epsilon}). 
\] Consequently, for different values of $0 <a<b$ (that might depend on $u$) and $\beta$ we want to find the asymptotics of \begin{align*} & \int_{a}^{b} s^\beta \exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr) \ d H( u^{\gamma}s)\\&= -s^\beta \exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr)\mathbb{P}\left(e^{\sigma_1Z_1}>u^{\gamma}s \right) \Bigg|_{s=a}^b \\& \quad +\int_a^b s^{\beta-1} \left(\beta + \frac{\log\left(\frac{u^{1-\gamma}} s \right)}{\sigma_2^2} \right) \exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr) \mathbb{P}\left(e^{\sigma_1Z_1}>u^{\gamma}s \right) \ ds. \end{align*} Since we can choose $\xi$ arbitrary large we can replace $\mathbb{P}\left(e^{\sigma_1Z_1}>u^{\gamma}s \right)$ by its asymptotic form and hence we can use the approximation (set $\sigma_*:= \sigma_1 \sigma_2/\sqrt{\sigma_1^2+ \sigma_2^2}$) \begin{align*} &\exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr) \mathbb{P}\left(e^{\sigma_1Z_1}>u^{\gamma}s \right)\\& \approx \sigma_1^2\frac{\kal{L}_1(u^\gamma s)} {\sqrt{2 \pi \sigma_1^2}\log (u^\gamma s)} \exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}-\frac{\left(\log \left(u^{\gamma}s \right)\right)^2}{2 \sigma_1^2}\Biggr)\\ &= \sigma_1^2\frac{\kal{L}_1(u^\gamma s)} {\sqrt{2 \pi \sigma_1^2}\log (u^\gamma s)} \exp\Biggl( - \frac{\left(\sigma_1^2(1-\gamma)^2+\sigma_2^2\gamma^2\right) (\log (u))^2 + 2\left(\sigma_1^2(\gamma-1)+\sigma_2^2\gamma\right) \log (u)\log (s)+(\sigma_1^2+\sigma_2^2)(\log (s))^2 }{2 \sigma_1^2\sigma_2^2}\Biggr)\\ &=\sigma_1^2 \frac{\kal{L}_1(u^\gamma s)} {\sqrt{2 \pi \sigma_1^2}\log (u^\gamma s)} \exp\Biggl( - \frac{(\log (u))^2}{2(\sigma_1^2+\sigma_2^2)}\Biggr) \exp\Biggl( -\frac{(\log (s))^2 }{2 \sigma_*^2}\Biggr). \end{align*} Since $ \sigma_1^2(\gamma -1)+ \sigma_2^2 \gamma=0$, using again Potter's bounds \rE{(see Bingham et al.\ (1987))} and the fact that $\kal{L}_1(\cdot)$ is regularly varying at infinity, the above derivations imply \begin{align*} & \mathbb{P}(e^{\sigma_1 Z_1+\sigma_2 Z_2}>u)\\ &\sim \frac{\sigma_1^2 (\sigma_1^2 +\sigma_2^2) \kal{L}_1(u^{\gamma}) \kal{L}_2(u^{1-\gamma})} {\sigma_2^2\sqrt{2\pi \sigma_2^2\sigma_1^2} \log (u)}\frac{1-\gamma}{\gamma\sqrt{2\pi} } \exp\Biggl( - \frac{(\log (u))^2}{2(\sigma_1^2+\sigma_2^2)}\Biggr) \int_0^\infty s^{\beta_1-\beta_2-1} \exp\Biggl( -\frac{(\log (s))^2 }{2 \sigma_*^2 }\Biggr) \ ds \\ &= \frac{ \sqrt{\sigma_1^2+\sigma_2^2} \kal{L}_1(u^{\gamma}) \kal{L}_2(u^{1-\gamma})} {\sqrt{2 \pi} \log (u)} \exp\Biggl( - \frac{(\log (u))^2}{2(\sigma_1^2+\sigma_2^2)}\Biggr) \int_0^\infty \frac{1} {\sqrt{2\pi \sigma_*^2}}s^{\beta_1-\beta_2-1} \exp\Biggl( - \frac{(\log (s))^2 }{2 \sigma_*^2}\Biggr) \ ds\\ &= \sqrt{\sigma_1^2+\sigma_2^2} e^{\frac{\sigma_*^2 }{2} (\beta_1-\beta_2)^2 }\frac{ \kal{L}_1(u^{\gamma}) \kal{L}_2(u^{1-\gamma})} {\sqrt{2 \pi} \log (u)} \exp\Biggl( - \frac{(\log (u))^2}{2(\sigma_1^2+\sigma_2^2)}\Biggr), \end{align*} hence the proof is complete. $\Box$ \begin{lem}\label{lem:asymptotictwo} Assume that $n\le n(u)$ with $n(u)$ \rE{defined in \eqref{cnu}} and \rrE{set} $\epsilon(u)=4\log(\log(u))/\log(u)$. 
If $Y_1$ is an $LN(0,1)$ rv and $X_{i,n}, i\le n$ are as in \netheo{thm}, then as $u\to \infty$ \[ \mathbb{P}(Y_1>u-n u^{1-\epsilon(u)}) \sim \mathbb{P}(Y_1>u) \] and for $i\not=j$ \[ \mathbb{P}(Y_{i,n}>u^{1-\epsilon(u)} ,Y_{j,n}>u^{1-\epsilon(u)}) =o(\mathbb{P}(Y_1>u)). \] \end{lem} \prooflem{lem:asymptotictwo} By the assumptions on $n$ and $n(u)$ as $u\to \infty$ we have \begin{align*} \mathbb{P}(Y_1>u)\le \mathbb{P}(Y_1>u-n u^{1-\epsilon(u)}) &\le \mathbb{P}(Y_1>u-n(u) u^{1-\epsilon(u)})\\ &=\mathbb{P}\left(Y_1>u-\frac u{ \rrE{(\log (u))^4}} (1+ \eta ) \frac{ \rrE{(\log (u))^2} }{2\log(1+\delta)}\right)\\ &=\mathbb{P}\left(Y_1>u-\frac{(1+ \eta )}{2\log(1+\delta)} \frac u{ \rrE{(\log (u))^2} } \right)\\&\sim\mathbb{P}(Y_1>u). \end{align*} \rE{Next, denote by $f$ the probability density function of $Y_1$}. Let further $\rrE{W_1} $ and $\rrE{W_2} $ be two independent $N(0,1)$ rv's, and write \rE{$\rho_*$} for the correlation between \rE{$\log Y_{i,n}$ and $\log Y_{j,n}$}. We may write for $u>0$ \begin{eqnarray*} \mathbb{P}(Y_{i,n}>u^{1-\epsilon(u)} ,Y_{j,n}>u^{1-\epsilon(u)}) &=& \mathbb{P}( e^{\rrE{W_1} }> u^{1-\epsilon(u)}, e^{\rE{\rho_*}\rrE{W_1} +\sqrt{1-\rE{\rho_*}^2} \rrE{W_2} } > u^{1-\epsilon(u)})\\ &=& \mathbb{P}\left(e^{\rrE{W_1} }>\frac{u}{ \rrE{(\log (u))^4}},e^{ \rE{\rho_*}\rrE{W_1} } e^{\sqrt{1-\rE{\rho_*}^2} \rrE{W_2} } >\frac{u}{ \rrE{(\log (u))^4} } \right)\\ &\le &\mathbb{P}\left(\frac u{ \rrE{(\log (u))^4} }<e^{\rrE{W_1} }<2u,e^{ \rE{\rho_*} \rrE{W_1} } e^{\sqrt{1-\rE{\rho_*}^2} \rrE{W_2} } >\frac{u}{ \rrE{(\log (u))^4} } \right)+\mathbb{P}(e^{\rrE{W_1} }>2u)\\ &=& \int_{\frac{u}{ \rrE{(\log (u))^4} }}^{2u} \mathbb{P}\left(e^{\rrE{W_2} }>\left(\frac{u}{ \rrE{(\log (u))^4} x^{\rE{\rho_*}}}\right)^{1/\sqrt{1-\rE{\rho_*}^2}} \right) f(x) d x +\mathbb{P}(e^{\rrE{W_1} }>2u)\\ &\le &\int_{\frac{u}{ \rrE{(\log (u))^4} }}^{2u} \mathbb{P}\left(e^{\rrE{W_2} }>\left(\frac{u^{1-\rho_*}}{ \rrE{(\log (u))^4} 2^{\rho_*}}\right)^{1/\sqrt{1-\rE{\rho_*}^2}} \right) f(x) d x+\mathbb{P}(e^{\rrE{W_1} }>2u)\\ &\le &\mathbb{P}\left(Y_1>\frac{u^{\sqrt{\frac{1-\rho_*}{1+\rho_*}}}}{2^{\rho_*} \rrE{(\log (u))^4} }\right)\mathbb{P}\left(Y_1>\frac{u}{ \rrE{(\log (u))^4} } \right)+\mathbb{P}(e^{\rrE{W_1} }>2u)\\ &=&o(\mathbb{P}(Y_1>u)), \quad u\to \infty \end{eqnarray*} since \begin{eqnarray*} \left(1+\frac{1-\rho_*}{1+\rho_*}\right)\log(u)&=&\frac{2}{1+\rho_*}\log(u)\\ &\ge& \frac{2}{1+\rho_{n(u)}}\log(u)\\ & \ge& \frac{2}{2-\frac{c^* \log(\log(u))} {\log(u)}} \log(u)\\ &=&\log(u) +\frac{2}{2-\frac{c^* \log(\log(u))} {\log(u)}} c^* \log(\log(u))\\ & \sim &\log(u) +c^* \log(\log(u)). \end{eqnarray*} Consequently, the assumption $c^*>8$ entails \begin{align*} & 2\log\left(\mathbb{P}\left(Y_1>\frac{u^{\sqrt{\frac{1-\rho_*}{1+\rho_*}}}}{2^{\rho_*} \rrE{(\log (u))^4} }\right)\mathbb{P}\left(Y_1>\frac{u}{ \rrE{(\log (u))^4} } \right) \right)\\& \sim \log\left(\frac{u^{\sqrt{\frac{1-\rho_*}{1+\rho_*}}}}{2^{\rho_*} \rrE{(\log (u))^4} }\right)^2+ \log\left(\frac{u}{ \rrE{(\log (u))^4} }\right)^2\\ &\sim \left(1+\frac{1-\rho_*}{1+\rho_*}\right)\log(u)^2-8\log(u) \log(\log(u))-8\sqrt{\frac{1-\rho_*}{1+\rho_*}}\log(u) \log(\log(u))\\ &\lesssim \log(u)^2+(c^*-8) \log(\log(u)) \end{align*} establishing the proof. $\Box$ \prooftheo{th0} For any $u>0$ we have \begin{eqnarray*} \pk{S_N> u}&=& \pk{ e^{\rho Z_0} \sum_{i=1}^N e^{ \sqrt{1- \rho ^2} Z_i} > u}\\ &=:& \pk{ e^{\rho Z_0} W_N > u}. 
\end{eqnarray*} Since $e^{ \sqrt{1- \rho^2} Z_i},i\ge 1$ are subexponential risks, along the lines of the proof of Theorem 3.37 in Foss et al.\ (2013) (see also Theorem 1.3.9 in Embrechts et al.\ (1997) for a similar result) $$\pk{W_N> u} \sim \Theta\pk{e^{ \sqrt{1- \rho^2} Z^*}> u}, \quad \Theta:= \E{\sum_{i=1}^N c_i} $$ as $u\to \infty$, with $Z^*$ an independent copy of $Z_0$. It can be easily checked that $Z_0$ and $\log(W_N)/\sqrt{1-\rho^2}$ fulfill the conditions of \nelem{L1}, hence the asymptotics of $\mathbb{P}(S_N>u)$ follows. Similarly, \begin{eqnarray*} Y_{N:N}&=& \max_{1 \le i \le N} e^{ \rho Z_0+ \sqrt{1- \rho^2} Z_i} = e^{\rho Z_0 } \max_{1 \le i\le N} e^{ \sqrt{1 -\rho^2} Z_i}=:e^{\rho Z_0} W_N^*. \end{eqnarray*} Since we have $$ \pk{W_N^*> u} \sim \pk{W_N>u} \sim \Theta \pk{ \exp( \sqrt{1 - \rho^2} Z^*)> u}, \quad u \to \infty$$ the proof follows by applying once again \nelem{L1}. $\Box$ \prooftheo{thm} Denote next by $Y_1$ an $LN(0,1)$ rv and let $\mathcal{I}_{\{\cdot\}}$ denote the indicator function. Since for every fixed $n\ge 1$ we get, by interchanging limit and finite sum, that \begin{eqnarray*} \mathbb{P}(S_N>u)&=&\mathbb{P}(S_N>u,N\le n)+\mathbb{P}(S_N>u,N> n)\\ &\sim& \mean{N \mathcal I_{\{ N\le n\}}} \mathbb{P}(Y_1>u)+\mathbb{P}(S_N>u,N> n) \end{eqnarray*} we can assume w.l.o.g. that $\rho_{i,j}^n\le \rho_n$. From \eqref{conN} it follows that there exist $C_1,C_2>0$ such that \begin{align*} p_n:=\mathbb{P}(N=n)\le C_1 (1+\delta)^{-n} \quad\text{and}\quad \mathbb{P}(N>n)\le C_2(1+\delta)^{-n}. \end{align*} By the independence of $N$ and the claim sizes \[ \mathbb{P}(S_N>u)=\sum_{n=1}^\infty p_n\mathbb{P}(S_n>u) \] and for $n(u)$ defined in \eqref{cnu} \begin{eqnarray*} \sum_{n=n(u)}^\infty p_n\mathbb{P}(S_n>u)&\le& \mathbb{P}(N>n(u)) \\ &\le &C_2(1+\delta)^{-n(u)} \\ &\le &C_2\exp\left(-\frac{1+ \eta }2 \rrE{(\log (u))^2} \right)\\ &=&o(\mathbb{P}(Y_1>u)). \end{eqnarray*} Since \[\mathbb{P}(S_n>u)\ge n\mathbb{P}(Y_1>u)-\sum_{i\not=j} \mathbb{P}(Y_{i,n}>u,Y_{j,n}>u)\] and by Lemma \ref{lem:asymptotictwo} $$\mathbb{P}(Y_{i,n}>u,Y_{j,n}>u)=o(\mathbb{P}(Y_1>u)), \quad u\to \infty$$ it follows that \begin{eqnarray*} \sum_{n=0} ^{n(u)} p_n\mathbb{P}(S_n>u)&\ge& \mathbb{P}(Y_1>u) \left(\sum_{n=0} ^{n(u)} n p_n- o(1)\sum_{n=0} ^{n(u)} n^2 p_n \right)\\ &\sim &\mean{N} \mathbb{P}(Y_1>u), \quad u\to \infty. \end{eqnarray*} So we are left with finding an asymptotic upper bound. For $n\le n(u)$ we use the \rrE{following} decomposition (cf.\ \pD{Asmussen and Rojas-Nandaypa (2008)}) \begin{align*} \mathbb{P}(S_n>u)\le\sum_{i=1}^n \left[\mathbb{P}\left(S_n>u,\,Y_{i,n}\ge \max_{j\not=i}Y_{j,n},\,\max_{j\not=i} Y_{j,n}>u^{1-\epsilon(u)}\right) +\mathbb{P}\left(S_n>u,\,Y_{i,n}\ge \max_{j\not=i}Y_{j,n},\,\max_{j\not=i} Y_{j,n}\le u^{1-\epsilon(u)}\right)\right], \end{align*} where $\epsilon(u)=4\log(\log(u))/\log(u)$. By Lemma \ref{lem:asymptotictwo} we have \begin{align*} \sum_{i=1}^n \mathbb{P}\left(S_n>u,Y_{i,n}\ge \max_{j\not=i}Y_{j,n},\max_{j\not=i} Y_{j,n}>u^{1-\epsilon(u)}\right) &\le \sum_{i=1}^n\sum_{j\not=i} \mathbb{P}\left(S_n>u,Y_{i,n}\ge Y_{j,n}, Y_{j,n}>u^{1-\epsilon(u)}\right)\\ &\le \sum_{i=1}^n\sum_{j\not=i} \mathbb{P}\left(Y_{i,n}>u^{1-\epsilon(u)},Y_{j,n}>u^{1-\epsilon(u)}\right)\\ &=n(n-1) o(\mathbb{P}(Y_1>u)).
\end{align*} Further \begin{align*} \mathbb{P}\left(S_n>u,Y_{i,n}\ge \max_{j\not=i}Y_{j,n},\max_{j\not=i} Y_{j,n}\le u^{1-\epsilon(u)}\right)&\le\mathbb{P}\left(Y_{i,n}>u-\sum_{j\not=i} Y_{j,n},\max_{j\not=i} Y_{j,n}\le u^{1-\epsilon(u)}\right)\\ &\le \mathbb{P}\left(Y_{i,n}>u-n u^{1-\epsilon(u)},\max_{j\not=i} Y_{j,n}\le u^{1-\epsilon(u)}\right)\\ &\le\mathbb{P}\left(Y_{i,n}>u-n u^{1-\epsilon(u)}\right) \\ & \sim \mathbb{P}(Y_1>u) \end{align*} as $u \to \infty$, hence the proof for the tail asymptotics of $S_N$ follows \eeE{by} applying Lemma \ref{lem:asymptotictwo}. Since for any $u>0$ \[ n\mathbb{P}(Y_1>u)-\sum_{i\not=j} \mathbb{P}(Y_{i,n}>u,Y_{j,n}>u)\le \mathbb{P}\left( \max_{1\le i \le n}Y_{i,n}>u\right)\le \mathbb{P}(S_n>u) \] \rrE{the tail asymptotics of $\max_{1\le i \le N}Y_{i,N}$ can be easily established, \eeE{and} thus the proof is complete.} $\Box$ \textbf{Acknowledgments.} We would like to thank the referees of the paper for several suggestions which improved our manuscript. E. Hashorva kindly acknowledges partial support from the Swiss National Science Foundation Grant 200021-140633/1 and RARE-318984 (an FP7 Marie Curie IRSES Fellowship). \begin{thebibliography}{9} \bibitem{} Asmussen, S., Blanchet, J., Juneja, S., Rojas-Nandayapa, L. (2011) Efficient simulation of tail probabilities of sums of correlated lognormals. \emph{Ann. Oper. Res.} {\bf 189}, 5--23. \bibitem{} Asmussen, S., Rojas-Nandaypa, L. (2008) Sums of dependent log-normal random variables with Gaussian copula. \emph{Stat. Probab. Lett.} {\bf 78}, 2709--2714. \bibitem{BERMANc} {Berman, S.M. (1992) {\it Sojourns and Extremes of Stochastic Processes}, Wadsworth \& Brooks/Cole, Boston.} \bibitem{} Bingham, N., Goldie, C.M., Teugels, J.L. (1987) {\it Regular Variation}. Cambridge, Cambridge University Press. \bibitem{} Constantinescu, C., Hashorva, E., Ji, L. (2011) The Archimedean copula in finite and infinite dimensions - with applications to ruin problems. \emph{Insurance: Mathematics and Economics}, {\bf 49}, 487--495. \bibitem{}Embrechts, P., Kl\"{u}ppelberg, C., Mikosch, T. (1997) \textit{Modelling Extremal Events for Insurance and Finance.} Berlin, Springer. \bibitem{} Farkas, J., Hashorva, E. (2013) Tail approximation for reinsurance portfolios of Gaussian-like risks. \emph{Scandinavian Actuarial Journal}, DOI 10.1080/03461238.2013.825639, in press. \bibitem{}Foss, S., Korshunov, D., Zachary, S. (2013) \textit{An Introduction to Heavy-Tailed and Subexponential Distributions.} Springer-Verlag, 2nd Edition, New York. \bibitem{} Foss, S., Richards, A. (2010) On sums of conditionally independent subexponential random variables, \emph{Mathematics of Operations Research}, {\bf 35}, 102--119. \bibitem{} Hashorva, E. (2013) Exact tail asymptotics of aggregated parametrised risk. \emph{J. Math. Anal. Appl.}, {\bf 400}, 187--199. \bibitem{} Jiang, T., Gao, Q., Wang, Y. (2014) Max-sum equivalence of conditionally dependent random variables. \emph{Stat. Probab. Lett.} {\bf 84}, 60--66. \bibitem{} Kortschak, D. (2012) Second order tail asymptotics for the sum of dependent, tail-independent regularly varying risks. \emph{Extremes}, {\bf 15}, 353--388. \bibitem{} Kortschak, D., Hashorva, E. (2014) Second order asymptotics of aggregated log-elliptical risk. \emph{Meth. Comp. Appl. Probab.}, DOI 10.1007/s11009-013-9356-5, in press. \bibitem{} Kortschak, D., Hashorva, E. (2013) Efficient simulation of tail probabilities for sums of log-elliptical risks. \emph{J. Comp. Appl. Math.}, {\bf 247}, 53--67. \bibitem{} Mikosch, T.
(2009) {\it Non-Life Insurance Mathematics. An Introduction with the Poisson Process}. 2nd Edition, Springer. \bibitem{} Mitra, A., Resnick, S.I. (2009) Aggregation of rapidly varying risks and asymptotic independence. \emph{Adv. Appl. Probab.} {\bf 41}, 797--828. \bibitem {PIT96} Piterbarg, V.I. (1996) \textit{Asymptotic Methods in the Theory of Gaussian Processes and Fields}. AMS, Providence. \bibitem {resnick2} Resnick, S.I. (1987) \textit{Extreme Values, Regular Variation and Point Processes.} Springer, New York. \end{thebibliography} \end{document} {\bf Another question}: With $Y_i(\rho)=e^{ \sqrt{\rho} Z+ \sqrt{1- \rho} X_i}$, where $Z,X_i$ are iid $N(0,1)$ rv's, what can we say about \begin{eqnarray} \pk{Y_1(\rho_1)+ Y_2(\rho_1)> u,Y_1(\rho_2)+ Y_2(\rho_2)> a u}, \quad u \to \infty \end{eqnarray} for $\rho_1 \not=\rho_2$ and $a\in (0,1]$? \section{Ad another question} The first task is to find $\theta_1,\theta_2,\theta_3$ which maximize the function \[ f(\mathbf {\theta}):= \min\left(\max\left( \sqrt{\rho} \theta_1+\sqrt{1-\rho}\theta_2,\sqrt{\rho} \theta_1+\sqrt{1-\rho}\theta_3\right),\max\left( \sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\theta_2,\sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\theta_3\right) \right) \] under the condition $\theta_1^2+\theta_2^2+\theta_3^2=1$. W.l.o.g. we can assume that $\theta_i\ge0$. Now there are different cases. If $\theta_2>\theta_3$ (the case $\theta_2<\theta_3$ is symmetric and hence will not be considered) then \[ f(\mathbf {\theta})= \min\left(\sqrt{\rho} \theta_1+\sqrt{1-\rho}\theta_2, \sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\theta_2 \right). \] Now this is maximized if $\theta_3=0$ and $\theta_2=\sqrt{1-\theta_1^2}$. Hence we have to consider \[ f(\mathbf {\theta})= \min\left(\sqrt{\rho} \theta_1+\sqrt{1-\rho}\sqrt{1-\theta_1^2}, \sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\sqrt{1-\theta_1^2} \right). \] Since $\sqrt{\rho} \theta_1+\sqrt{1-\rho}\sqrt{1-\theta_1^2}$ is a concave function, we get that the optimal $\theta_1$ fulfills the equation \[ \sqrt{\rho} \theta_1+\sqrt{1-\rho}\sqrt{1-\theta_1^2}= \sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\sqrt{1-\theta_1^2}. \] It follows that \begin{align*} \theta^*:=\theta_1&=\frac {1}{\sqrt{1+\left(\frac{\sqrt{\rho}-\sqrt{\gamma}}{\sqrt{1-\gamma}-\sqrt{1-\rho}} \right)^2 }}=\frac {1}{\sqrt{1+\frac{\gamma +\rho-2\sqrt{ \rho\gamma}}{2-\rho-\gamma -2\sqrt{(1-\rho)(1-\gamma)}} }}=\sqrt{\frac{2-\rho-\gamma -2\sqrt{(1-\rho)(1-\gamma)}} {2-2\sqrt{\rho\gamma} -2\sqrt{(1-\rho)(1-\gamma)}}}.\\ \theta_2&=\frac {\frac{\sqrt{\rho}-\sqrt{\gamma}}{\sqrt{1-\gamma}-\sqrt{1-\rho}} }{\sqrt{1+\left(\frac{\sqrt{\rho}-\sqrt{\gamma}}{\sqrt{1-\gamma}-\sqrt{1-\rho}} \right)^2 }}\quad\text{and}\quad \theta_3=0. \end{align*} Finally we have that \begin{align*} f(\theta^*)&=\left(\sqrt{\rho}+ \sqrt{1-\rho} \frac{\sqrt{\rho}-\sqrt{\gamma}}{\sqrt{1-\gamma}-\sqrt{1-\rho}} \right)\theta^*\\ &=\frac{\sqrt{\rho(1-\gamma)}-\sqrt{\gamma(1-\rho)} }{\sqrt{1-\gamma}-\sqrt{1-\rho}} \theta^*\\ &=\frac{\left|\sqrt{\rho(1-\gamma)}-\sqrt{\gamma(1-\rho)} \right|}{\sqrt{2\left(1-\sqrt{\rho\gamma} -\sqrt{(1-\rho)(1-\gamma)}\right)}}. \end{align*} Now the marginal distribution of $\theta_1$ is given by \[ f_1(x)=\frac12, \quad x\in (\cK{-1},1) \] and the marginal distribution of $\theta_2/(\sqrt{1-\theta_1^2})|\theta_1$ is given by \[ f_1(x|\theta_1)=\frac{1}{\pi} (1-x^2)^{-\frac{1}2}, \quad x\in (\cK{-1},1).
\] We have to evaluate the integrals \begin{align*} &\frac{1}{2\pi}\int_{-1}^1 \int_{-1}^1 \mathbb{P}\Bigg( e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)}+ e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)(1-x^2)} \right)}>u\\&\quad\quad\quad\quad\quad\quad, e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right)}+e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)(1-x^2)} \right)}>au \Bigg) (1-x^2)^{-\frac{1}2}d xd \theta\\ &+\frac{1}{2\pi}\int_{-1}^1 \int_{-1}^1 \mathbb{P}\Bigg( e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)}+ e^{R\left(\sqrt{\rho} \theta- \sqrt{(1-\rho)(1-\theta^2)(1-x^2)} \right)}>u\\&\quad\quad\quad\quad\quad\quad, e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right)}+e^{R\left(\sqrt{\gamma} \theta- \sqrt{(1-\gamma)(1-\theta^2)(1-x^2)} \right)}>au \Bigg) (1-x^2)^{-\frac{1}2} d xd \theta. \end{align*} Considering the integration domain we see that an asymptotic significant contribution implies that $\theta \approx\theta^*$ and either $x\approx 1$ or $x\approx0$. Since the case $x\approx0$ can be best dealt with replacing the role of $\theta_2$ and $\theta_3$ and hence is some how symmetric to the case $x\approx1$ we will only consider the case $x\approx1$ and $\theta \approx \theta^*$. For an $\epsilon>0$ define the sets $A_\epsilon=\{(\theta,x): |\theta-\theta^*|<\epsilon,x>1-\epsilon\}$ and we choose $\epsilon$ small enough such that there exists an $\delta>0$ with \begin{align*} \sup_{(\theta,x)\in A_\epsilon} \left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)(1-x^2)} \right)+\delta&<\inf_{(\theta,x)\in A_\epsilon} \left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)\\ \sup_{(\theta,x)\in A_\epsilon} \left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)(1-x^2)} \right)+\delta&<\inf_{(\theta,x)\in A_\epsilon} \left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right) \end{align*} It follows that as $u\to\infty$ \begin{align*} &\frac{1}{2\pi}\int_{A_\epsilon} \mathbb{P}\Bigg( e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)}+ e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)(1-x^2)} \right)}>u\\&\quad\quad\quad\quad\quad\quad, e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right)}+e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)(1-x^2)} \right)}>au \Bigg) (1-x^2)^{-\frac{1}2}d xd \theta\\ &\sim \frac{1}{2\pi}\int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{1-\epsilon}^1 \mathbb{P}\Bigg( e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)}>u,e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right)}>au \Bigg) (1-x^2)^{-\frac{1}2}d xd \theta\\ &= \frac{1}{2\pi}\int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{1-\epsilon}^1 \mathbb{P}\Bigg( R>\frac{\log(u)} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)}x},R>\frac{\log(u) +\log(a)}{\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x} \Bigg) (1-x^2)^{-\frac{1}2}d xd \theta\\ &\approx \frac 1 {\sqrt{1-{\theta^*}^2} } \frac{1}{2\pi}\int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{\sqrt{1-\theta^2} (1-\epsilon)}^{\sqrt{1-\theta^2}} \mathbb{P}\Bigg( R>\frac{\log(u)} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)}x},R>\frac{\log(u) +\log(a)}{\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)} x} \Bigg) \left(1-\frac{x^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta\\ &\approx \frac 1 {\sqrt{1-{\theta^*}^2} } \frac{\log(u)}{\sqrt{2\pi^3}\sqrt{\gamma} \theta^*+ \sqrt{(1-\gamma)(1-{\theta^*}^2)} } \\&\quad\times 
\int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{\sqrt{1-\theta^2} -\epsilon}^{\sqrt{1-\theta^2 }} \exp\left\{-\frac 12 \max\left( \frac{\log(u)} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)}x},\frac{\log(u) +\log(a)}{\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)} x} \right)^2\right\} \left(1-\frac{x^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta \end{align*} Were we used that \[ \mathbb{P}(R>u)=\int_u^\infty \frac{\sqrt{2}}{\sqrt{\pi}} x^2e^{-\frac{x^2}2} d x\sim\frac{\sqrt{2}}{\sqrt{\pi}} ue^{-\frac{u^2}2}. \] To evaluate the last integral note that by Taylor expansion Further note that \begin{align*} & \frac{1} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)}x}=\frac{1} {\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)}x}-\frac{(\theta-\theta^*)\sqrt{\rho}} {(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)}x)^2} +\frac{(\theta-\theta^*)^2\rho} {\left(\sqrt{\rho} (\theta^*+\xi_1)+ \sqrt{(1-\rho)}x\right)^3}\\ & =\frac{1} {\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-\theta^2)}} -\frac{\sqrt{1-\rho}(x-\sqrt{1-\theta^2})} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)(1-\theta^2)}\right)^2} -\frac{(\theta-\theta^*)\sqrt{\rho}} {(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-\theta^2)})^2} \\&\quad+2 \frac{\sqrt{\rho(1-\rho)}(\theta-\theta^*)(x-\sqrt{1-\theta^2})} {\left(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)}\left(\sqrt{(1-\theta^2)}+\xi_3\right) \right)^3} \\&\quad +\frac{(\theta-\theta^*)^2\rho} {\left(\sqrt{\rho} (\theta^*+\xi_1)+ \sqrt{(1-\rho)}x\right)^3}+ \frac{(1-\rho)(x-\sqrt{1-\theta^2})^2} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)}\left(\sqrt{(1-\theta^2)}+\xi_2\right)\right)^3} \\ & =\frac{1} {\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-{\theta^*}^2)}} -\frac{\sqrt{1-\rho}(x-\sqrt{1-{\theta}^2})} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)(1-{\theta^*}^2)}\right)^2} -\frac{(\theta-\theta^*)\left(\sqrt{\rho}-\frac {\sqrt{1-\rho} \theta^*}{\sqrt{1-{\theta^*}^2}} \right)} {(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-{\theta^*}^2)})^2} \\&\quad+\frac{(\theta-\theta^*)^2\frac {(1-\rho) (\theta^*+\xi_3)^2}{1-(\theta^*+\xi_3)^2}} {(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-(\theta^*+\xi_3)^2)})^3} +\frac{(\theta-\theta^*)^2 \frac{\sqrt{1-\rho}}{\sqrt{(1-(\theta^*+\xi_3)^2)^3}}} {(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-(\theta^*+\xi_3)^2)})^2} \\ &\quad - \frac{2 (1-\rho)(x-\sqrt{1-\theta^2})(\theta-\theta^*) \frac{\theta^*+\xi_4}{\sqrt{1-(\theta^*+\xi_4)^2} }} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)(1-{(\theta^*+\xi_4)}^2)}\right)^3}- \frac{2 \sqrt{\rho(1-\rho)}(\theta-\theta^*)^2 \frac{\theta^*+\xi_5}{\sqrt{1-(\theta^*+\xi_5)^2} }} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)(1-{(\theta^*+\xi_5)}^2)}\right)^3} \\&\quad+2 \frac{\sqrt{\rho(1-\rho)}(\theta-\theta^*)(x-\sqrt{1-\theta^2})} {\left(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)}\left(\sqrt{(1-\theta^2)}+\xi_3\right) \right)^3} \\&\quad +\frac{(\theta-\theta^*)^2\rho} {\left(\sqrt{\rho} (\theta^*+\xi_1)+ \sqrt{(1-\rho)}x\right)^3}+ \frac{(1-\rho)(x-\sqrt{1-\theta^2})^2} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)}\left(\sqrt{(1-\theta^2)}+\xi_2\right)\right)^3} \end{align*} A first check proofed (This has to be redone to be sure) that for $\epsilon$ sufficiently small the sum of all terms with that are of higher order then $1$ is positive (the matrix of the second order derivatives is positive definite). 
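For completeness, here is a quick check, added for the reader, of the tail of $R$ used above (the chi distribution with three degrees of freedom): one integration by parts gives
\[
\int_u^\infty x^2e^{-\frac{x^2}2}\,dx= ue^{-\frac{u^2}2}+\int_u^\infty e^{-\frac{x^2}2}\,dx= ue^{-\frac{u^2}2}\left(1+O(u^{-2})\right),\quad u\to\infty,
\]
so that indeed $\mathbb{P}(R>u)\sim\frac{\sqrt{2}}{\sqrt{\pi}}\,ue^{-\frac{u^2}2}$.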
Define the constants \begin{align*} c_\rho=\frac{1} {\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-{\theta^*}^2)}} \quad &\text{and}\quad c_\gamma=\frac{1} {\sqrt{\gamma} \theta^*+ \sqrt{(1-\gamma)(1-{\theta^*}^2)}}\\ d_\rho= \left(\sqrt{\rho}-\sqrt{1-\rho}\frac{\theta^*}{\sqrt{1-{\theta^*}^2}} \right) \quad &\text{and}\quad d_\gamma= \left(\sqrt{\gamma}-\sqrt{1-\gamma}\frac{\theta^*}{\sqrt{1-{\theta^*}^2}} \right) \end{align*} and note that $c_\gamma=c_\rho$ as well as $d_\gamma=-d_\rho$. To get the asymptotic of the integral we will use our Taylor expansion to get an upper bound (Since it is then straight forward to proof that it is also a lower bound we are not going to do it) here $A_\rho$ respectively $A_\gamma$ are certain positive definite matrices. \begin{align*} &\int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{\sqrt{1-\theta^2} -\epsilon}^{\sqrt{1-\theta^2 }} \exp\left\{-\frac 12 \max\left( \frac{\log(u)} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)}x},\frac{\log(u) +\log(a)}{\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)} x} \right)^2\right\} \left(1-\frac{x^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta\\ &\le \int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{\sqrt{1-\theta^2} -\epsilon}^{\sqrt{1-\theta^2 }} \exp\Bigg\{-\frac { \rrE{(\log (u))^2} }2 \max\Bigg( c_\rho -c_\rho^2\sqrt{1-\rho}(x-\sqrt{1-{\theta}^2}) -c_\rho^2d_\rho(\theta-\theta^*) +{{x-\sqrt{1-\theta^2}}\choose{\theta- \theta^* }}^T A_\rho {{x-\sqrt{1-\theta^2}}\choose{\theta- \theta^* }} ,\\&\quad c_\gamma -c_\gamma^2\sqrt{1-\rho}(x-\sqrt{1-{\theta}^2}) -c_\gamma^2d_\gamma(\theta-\theta^*)+c_\gamma\frac{\log(a)}{\log(u)} +{{x-\sqrt{1-\theta^2}}\choose{\theta- \theta^* }}^T A_\gamma {{x-\sqrt{1-\theta^2}}\choose{\theta- \theta^* }} \Bigg)^2\Bigg\} \left(1-\frac{x^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta\\ &= \int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{0}^{\epsilon} \exp\Bigg\{-\frac { \rrE{(\log (u))^2} }2 \max\Bigg( c_\rho +c_\rho^2\sqrt{1-\rho}x -c_\rho^2d_\rho(\theta-\theta^*) +{{-x}\choose{\theta- \theta^* }}^T A_\rho {{-x}\choose{\theta- \theta^* }} ,\\&\quad c_\gamma +c_\gamma^2\sqrt{1-\rho}x -c_\gamma^2d_\gamma(\theta-\theta^*)+c_\gamma\frac{\log(a)}{\log(u)} + {{-x}\choose{\theta- \theta^* }}^T A_\gamma {{-x}\choose{\theta- \theta^* }} \Bigg)^2\Bigg\} \left(1-\frac{(-x+\sqrt{1-\theta^2})^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta\\ &= \frac 1{ \rrE{(\log (u))^2} }\int_{-\epsilon\log(u)}^{\epsilon\log(u)} \int_{0}^{\epsilon\log(u)} \exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \max\Bigg( 1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)} + \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }} ,\\&\quad 1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)} +\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\} \left(1-\frac{\left(\frac {-x} {\log(u)} +\sqrt{1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2}\right)^2}{1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2}\right)^{-\frac{1}2}d xd \theta\\ &= \frac 1{ \rrE{(\log (u))^2} }\int_{-\epsilon\log(u)}^{\epsilon\log(u)} \int_{0}^{\epsilon\log(u)} \exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \max\Bigg( 1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)} + \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }} ,\\&\quad 1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac 
\theta{\log(u)}+\frac{\log(a)}{\log(u)} +\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\} \left(\frac{ \frac {2x} {\log(u)} \sqrt{1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2}-\frac {x^2} { \rrE{(\log (u))^2} }}{1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2}\right)^{-\frac{1}2}d xd \theta\\ &\approx \frac 1{\sqrt{2}\log(u)^{3/2}} \left(1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2\right)^{1/4} \\&\times\int_{-\epsilon\log(u)}^{\epsilon\log(u)} \int_{0}^{\epsilon\log(u)}x^{-1/2} \exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \max\Bigg( 1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)} + \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }} ,\\&\quad 1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)} +\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\} d xd \theta \end{align*} Intuitively when $d_\rho>0$ this leads to the integrals \begin{align*} & \int_{-\epsilon\log(u)}^{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2} \exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg( 1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)} + \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }} \Bigg)^2\Bigg\} d xd \theta\\ &\int^{\epsilon\log(u)}_{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2} \exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)} +\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\} d xd \theta \end{align*} and for $d_\rho<0$ this leads to the integrals \begin{align*} & \int^{\epsilon\log(u)}_{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2} \exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg( 1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)} + \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }} \Bigg)^2\Bigg\} d xd \theta\\ &\int_{-\epsilon\log(u)}^{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2} \exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)} +\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\} d xd \theta \end{align*} As an example lets do one of the integrals on an heuristic asymptotic level. 
\begin{align*} &\int^{\epsilon\log(u)}_{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2} \exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)} +\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\} d xd \theta\\ &=\int^{\epsilon\log(u)+\frac{\log(a)}{2c_\rho d_\rho}}_{0} \int_{0}^{\epsilon\log(u)}x^{-1/2} \\&\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)}+\frac{\log(a)}{2\log(u)} +\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta -\frac{\log(a)}{2c_\rho d_\rho}}}^T A_\gamma {{-x}\choose{\theta-\frac{\log(a)}{2c_\rho d_\rho} }} \Bigg)^2\Bigg\} d xd \theta\\ &=\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +\frac{\log(a)}{2\log(u)}\Bigg)^2\Bigg) \int^{\epsilon\log(u)+\frac{\log(a)}{2c_\rho d_\rho}}_{0} \int_{0}^{\epsilon\log(u)}x^{-1/2} \\& \exp\Bigg\{- {c_\rho^2 \rrE{(\log (u))^2} }\Bigg(1 +\frac{\log(a)}{2\log(u)}\Bigg) \Bigg(c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)} +\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta -\frac{\log(a)}{2c_\rho d_\rho}}}^T A_\gamma {{-x}\choose{\theta-\frac{\log(a)}{2c_\rho d_\rho} }} \Bigg)\Bigg\} \\ &\exp\Bigg\{ -\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)} +\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta -\frac{\log(a)}{2c_\rho d_\rho}}}^T A_\gamma {{-x}\choose{\theta-\frac{\log(a)}{2c_\rho d_\rho} }} \Bigg)^2\Bigg\} d xd \theta\\ &\approx\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +\frac{\log(a)}{2\log(u)}\Bigg)^2\Bigg)\\&\times \int^{\epsilon\log(u)+\frac{\log(a)}{2c_\rho d_\rho}}_{0} \int_{0}^{\epsilon\log(u)}x^{-1/2} \exp\Bigg\{- {c_\rho^2 \rrE{(\log (u))^2} } \Bigg(c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)} \Bigg)\Bigg\} d xd \theta\\ &\approx \log(u)^{3/2}\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +\frac{\log(a)}{2\log(u)}\Bigg)^2\Bigg) \int^{\infty}_{0} \int_{0}^{\infty}x^{-1/2} \exp\Bigg\{- c_\gamma^3\sqrt{1-\rho}x -c_\gamma^3 d_\rho \theta \Bigg)\Bigg\} d xd \theta\\ \end{align*} Note that the last integral can be integrated explicitly. \end{document}
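For the record, and under the assumption $d_\rho>0$ (so that the $\theta$-integral converges), the explicit value of that last integral follows from $\int_0^\infty x^{-1/2}e^{-ax}\,dx=\sqrt{\pi/a}$, $a>0$, and $\int_0^\infty e^{-b\theta}\,d\theta=1/b$, $b>0$:
\[
\int_{0}^{\infty}\int_{0}^{\infty}x^{-1/2}\exp\left\{-c_\gamma^3\sqrt{1-\rho}\,x-c_\gamma^3d_\rho\,\theta\right\}dx\,d\theta=\frac{\sqrt{\pi}}{c_\gamma^{3/2}(1-\rho)^{1/4}}\cdot\frac{1}{c_\gamma^{3}d_\rho}=\frac{\sqrt{\pi}}{c_\gamma^{9/2}(1-\rho)^{1/4}d_\rho}.
\]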
\begin{document}
\newtheorem{theo}{Theorem} \newtheorem{prop}[theo]{Proposition} \newtheorem{lem}[theo]{Lemma} \newtheorem{cor}[theo]{Corollary} \newtheorem*{theo*}{Theorem} \newtheorem{rst}{Result} \renewcommand*{\therst}{\Alph{rst}} \theoremstyle{definition} \newtheorem{defi}[theo]{Definition} \theoremstyle{remark} \newtheorem{rem}{Remark} \newtheorem*{rem*}{Remark} \newtheorem*{rems*}{Remarks}
\title{Improved Regularity in Bumpy Lipschitz Domains}
\author{Carlos Kenig$^*$} \thanks{$^*$The University of Chicago, $5734$ S. University Avenue, Chicago, IL $60637$, USA. \emph{E-mail address:} \texttt{[email protected]}}
\author{Christophe Prange$^\dagger$} \thanks{$^\dagger$The University of Chicago, $5734$ S. University Avenue, Chicago, IL $60637$, USA. \emph{E-mail address:} \texttt{[email protected]}}
\begin{abstract} This paper is devoted to the proof of Lipschitz regularity, down to the microscopic scale, for solutions of an elliptic system with highly oscillating coefficients, over a highly oscillating Lipschitz boundary. The originality of this result is that it does not assume more than Lipschitz regularity on the boundary. Our theorem, which is a significant improvement of our previous work on Lipschitz estimates in bumpy domains, should be read as an improved regularity result for an elliptic system over a Lipschitz boundary. Our progress in this direction is made possible by an estimate for a boundary layer corrector. We believe that this estimate in the Sobolev-Kato class is of independent interest. \end{abstract}
\maketitle \pagestyle{plain}
\section{Introduction}
This paper is devoted to the proof of Lipschitz regularity, down to the microscopic scale, for weak solutions $u^\varepsilon=u^\varepsilon(x)\in\mathbb R^N$ of the elliptic system \begin{equation}\label{sysuepsintro} \left\{\begin{array}{rll} -\nabla\cdot A(x/\varepsilon)\nabla u^\varepsilon&=0,&x\in D^\varepsilon_\psi(0,1),\\ u^\varepsilon&=0,&x\in\Delta^\varepsilon_\psi(0,1), \end{array}\right. \end{equation} over a highly oscillating Lipschitz boundary. Throughout this work, $\psi$ is a Lipschitz graph, \begin{equation*} D^\varepsilon_{\psi}(0,1):=\{(x',x_d),\ x'\in (-1,1)^{d-1},\ \varepsilon\psi(x'/\varepsilon)<x_d<\varepsilon\psi(x'/\varepsilon)+1\}\subset\mathbb R^d \end{equation*} and \begin{equation*} \Delta^\varepsilon_{\psi}(0,1):=\{(x',x_d),\ x'\in (-1,1)^{d-1},\ x_d=\varepsilon\psi(x'/\varepsilon)\} \end{equation*} is the lower highly oscillating boundary on which homogeneous Dirichlet boundary conditions are imposed. Our main theorem is the following. \begin{theo}\label{theolipdowmicro} There exists $C>0$ such that for all $\psi\in W^{1,\infty}(\mathbb R^{d-1})$, for all matrices $A=A(y)=(A^{\alpha\beta}_{ij}(y))\in\mathbb R^{d^2\times N^2}$, elliptic with constant $\lambda$, $1$-periodic and H\"older continuous with exponent $\nu>0$, for all $0<\varepsilon<1/2$, for all weak solutions $u^\varepsilon$ to \eqref{sysuepsintro}, for all $r\in[\varepsilon,1/2]$, \begin{equation}\label{estlipdownmicroinpropintro} \int_{(-r,r)^{d-1}}\int_{\varepsilon\psi(x'/\varepsilon)}^{\varepsilon\psi(x'/\varepsilon)+r}|\nabla u^\varepsilon|^2\leq Cr^d\int_{(-1,1)^{d-1}}\int_{\varepsilon\psi(x'/\varepsilon)}^{\varepsilon\psi(x'/\varepsilon)+1}|\nabla u^\varepsilon|^2, \end{equation} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}})$. \end{theo} The uniform estimate of Theorem \ref{theolipdowmicro} should be read as an improved regularity result.
Indeed, estimate \eqref{estlipdownmicroinpropintro} can be seen as a Lipschitz estimate down to the microscopic scale $O(\varepsilon)$. Its originality lies in the fact that no smoothness of the boundary, which is just assumed to be Lipschitz, is needed for it to hold. Previous results in this direction always relied on some smoothness of the boundary, typically $\psi\in C^{1,\nu}$ with $\nu>0$, or $\psi\in C^1_\omega$ with $\omega$ a modulus of continuity satisfying a Dini type condition, i.e. $\int_0^1\omega(t)/tdt<\infty$. Pioneering work on uniform estimates in homogenization has been achieved by Avellaneda and Lin in the late 80's \cite{alin,alinscal,alin2,Alin90P,alinLp}. The regularity theory for operators with highly oscillating coefficients has recently attracted a lot of attention, and important contributions have been made to relax the structure assumptions on the oscillations \cite{ArmstrongShen14,ArSmart14,GloriaNeukammOtto14}. Our work is in a different vein. It is focused on the boundary behavior of solutions. Theorem \ref{theolipdowmicro} represents a considerable improvement of a recent result obtained by the two authors, namely Result B and Theorem 16 in \cite{BLtailosc}. This first work dealt with uniform Lipschitz regularity over highly oscillating $C^{1,\nu}$ boundaries. To the best of our knowledge, an improved regularity result up to the boundary such as the one of Theorem \ref{theolipdowmicro} is new. Our breakthrough is made possible by estimating a boundary layer corrector $v=v(y)$ solution to the system \begin{equation}\label{sysblbumpyhpintro} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla v&=0,&y_d>\psi(y'),\\ v&=v_0,&y_d=\psi(y'), \end{array} \right. \end{equation} in the Lipschitz half-space $y_d>\psi(y')$ with non localized Dirichlet boundary data $v_0$. \begin{theo}\label{theoblbumpy} Assume $\psi\in W^{1,\infty}(\mathbb R^{d-1})$ and $v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1})$ i.e. \begin{equation*} \sup_{\xi\in\mathbb Z^{d-1}}\|v_0\|_{H^{1/2}(\xi+(0,1)^{d-1})}<\infty. \end{equation*} Then, there exists a unique weak solution $v$ of \eqref{sysblbumpyhpintro} such that \begin{equation}\label{estbumpyhpuloc} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^\infty|\nabla v|^2dy_ddy'\leq C\|v_0\|_{H^{1/2}_{uloc}}^2<\infty, \end{equation} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}})$. \end{theo} \subsection*{Overview of the paper} In section \ref{secprelim} we recall several results related to Sobolev-Kato spaces, homogenization and uniform Lipschitz estimates. These results are of constant use in our work. Then the paper has two main parts. The first aim is to prove Theorem \ref{theoblbumpy} about the well-posedness of the boundary layer system in a space of non localized energy over a Lipschitz boundary. The key idea is to carry out a domain decomposition. Subsequently, there are three steps. Firstly, we prove the well-posedness of the boundary layer system over a flat boundary, namely in the domain $\mathbb R^d_+$. This is done in section \ref{secflathalfspace}. Secondly, we define and estimate a Dirichlet to Neumann operator over $H^{1/2}_{uloc}$. This key tool is introduced in section \ref{secDN}. Thirdly, we show that proving the well-posedness of the boundary layer system over a Lipschitz boundary boils down to analyzing a problem in a layer $\{\psi(y')<y_d<0\}$ close to the boundary. The energy estimates for this problem are carried out in section \ref{secwpbumpyhp}. 
Eventually in section \ref{secimpro}, and this is the last part of this work, we are able to prove Theorem \ref{theolipdowmicro} using a compactness scheme.
\subsection*{Framework and notations}
Let $\lambda>0$ and $0<\nu<1$ be fixed in what follows. We assume that the coefficient matrix $A=A(y)=(A^{\alpha\beta}_{ij}(y))$, with $1\leq\alpha,\ \beta\leq d$ and $1\leq i,\ j\leq N$, is real, that \begin{equation}\label{smoothA2} A\in C^{0,\nu}(\mathbb R^{d}), \end{equation} that $A$ is uniformly elliptic, i.e. \begin{equation}\label{elliptA} \lambda |\xi|^2\leq A^{\alpha\beta}_{ij}(y)\xi^\alpha_i\xi^\beta_j\leq \frac{1}{\lambda}|\xi|^2,\quad\mbox{for all}\ \xi=(\xi^\alpha_i)\in\mathbb R^{dN},\ y\in\mathbb R^d \end{equation} and periodic, i.e. \begin{equation}\label{perA} A(y+z)=A(y),\quad\mbox{for all}\ y\in\mathbb R^d,\ z\in\mathbb Z^d. \end{equation} We say that $A$ belongs to the class $\mathcal A^\nu$ if $A$ satisfies \eqref{smoothA2}, \eqref{elliptA} and \eqref{perA}. For easy reference, we summarize here the standard notations used throughout the text. For $x\in\mathbb R^d$, $x=(x',x_d)$, so that $x'\in\mathbb R^{d-1}$ denotes the $d-1$ first components of the vector $x$. For $\varepsilon>0$, $r>0$, let \begin{equation*} \begin{aligned} D^\varepsilon_{\psi}(0,r)&:=\left\{(x',x_d),\ |x'|<r,\ \varepsilon\psi(x'/\varepsilon)<x_d<\varepsilon\psi(x'/\varepsilon)+r\right\},\\ \Delta^\varepsilon_{\psi}(0,r)&:=\left\{(x',x_d),\ |x'|<r,\ x_d=\varepsilon\psi(x'/\varepsilon)\right\},\\ D_0(0,r)&:=\left\{(x',x_d),\ |x'|<r,\ 0<x_d<r\right\},\qquad \Delta_0(0,r):=\left\{(x',0),\ |x'|<r\right\},\\ \mathbb R^d_+&:=\mathbb R^{d-1}\times(0,\infty),\qquad\Omega_+:=\{\psi(y')<y_d\},\\ \Omega_\flat&:=\{\psi(y')<y_d<0\},\qquad\Sigma_k:=(-k,k)^{d-1}, \end{aligned} \end{equation*} where $|x'|=\max_{i=1,\ldots,d-1}|x_i|$. We sometimes write $D_\psi(0,r)$ and $\Delta_\psi(0,r)$ in short for $D^1_\psi(0,r)$ and $\Delta^1_\psi(0,r)$; in that situation the boundary is not highly oscillating because $\varepsilon=1$. Let also \begin{equation*} (\overline{u})_{D^\varepsilon_\psi(0,r)}:={- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,r)}u=\frac{1}{|D^\varepsilon_\psi(0,r)|}\int_{D^\varepsilon_\psi(0,r)}u. \end{equation*} The Lebesgue measure of a set is denoted by $|\cdot|$. For a positive integer $m$, let also $\Idd_m$ denote the identity matrix in $M_m(\mathbb R)$. The function $\mathbf{1}_E$ denotes the characteristic function of a set $E$. The notation $\eta$ usually stands for a cut-off function. Ad hoc definitions are given when needed. Unless stated otherwise, the duality product $\langle \cdot,\cdot\rangle:=\langle \cdot,\cdot\rangle_{\mathcal D',\mathcal D}$ always denotes the duality between $\mathcal D(\mathbb R^{d-1})=C^\infty_0(\mathbb R^{d-1})$ and $\mathcal D'$. In the sequel, $C>0$ is always a constant uniform in $\varepsilon$ which may change from line to line.
\section{Preliminaries}
\label{secprelim}
\subsection{On Sobolev-Kato spaces}
For $s\geq 0$, we define the Sobolev-Kato space $H^s_{uloc}(\mathbb R^{d-1})$ of functions of non localized $H^s$ energy by \begin{equation*} H^s_{uloc}(\mathbb R^{d-1}):=\left\{u\in H^s_{loc}(\mathbb R^{d-1}),\ \sup_{\xi\in \mathbb Z^{d-1}}\|u\|_{H^s(\xi+(0,1)^{d-1})}<\infty\right\}. \end{equation*} We will mainly work with $H^{1/2}_{uloc}$. The following lemma is a useful tool to compare the $H^{1/2}_{uloc}$ norm to the $H^{1/2}$ norm of an $H^{1/2}(\mathbb R^{d-1})$ function.
\begin{lem}\label{lemh1/2H1/2uloc} Let $\eta\in C^\infty_c(\mathbb R^{d-1})$ and $v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1})$. Assume that $\supp\eta\subset B(0,R)$, for $R>0$. Then, \begin{equation}\label{estulocH12} \|\eta v_0\|_{H^{1/2}}\leq CR^{\frac{d-1}{2}}\|v_0\|_{H^{1/2}_{uloc}}, \end{equation} with $C=C(d,\|\eta\|_{W^{1,\infty}})$. \end{lem} For a proof, we refer to the proof of Lemma 2.26 in \cite{DP_SC}.
\subsection{Homogenization and weak convergence}
We recall the standard weak convergence result in periodic homogenization for a fixed domain $\Omega$. As usual, the constant homogenized matrix $\overline{A}=\overline{A}^{\alpha\beta}\in M_N(\mathbb R)$ is given by \begin{equation}\label{defA0} \overline{A}^{\alpha\beta}:=\int_{\mathbb T^d}A^{\alpha\beta}(y)dy+\int_{\mathbb T^d}A^{\alpha\gamma}(y)\partial_{y_\gamma}\chi^\beta(y)dy, \end{equation} where the family $\chi=\chi^\gamma(y)\in M_N(\mathbb R)$, $y\in\mathbb T^d$, solves the cell problems \begin{equation}\label{eqdefchi} -\nabla_y \cdot A(y)\nabla_y \chi^\gamma=\partial_{y_\alpha}A^{\alpha\gamma},\ y\in \mathbb T^d\qquad \mbox{and}\qquad \int_{\mathbb T^d}\chi^\gamma(y)dy=0. \end{equation} \begin{theo}[weak convergence]\label{theoweakcvhomo} Let $\Omega$ be a bounded Lipschitz domain in $\mathbb R^d$ and let $u_k\in H^1(\Omega)$ be a sequence of weak solutions to \begin{equation*} -\nabla\cdot A_k(x/\varepsilon_k)\nabla u_k=f_k\in (H^1(\Omega))', \end{equation*} where $\varepsilon_k\rightarrow 0$ and the matrices $A_k=A_k(y)\in L^\infty$ satisfy \eqref{elliptA} and \eqref{perA}. Assume that there exist $f\in (H^1(\Omega))'$ and $u^0\in W^{1,2}(\Omega)$ such that $f_k\longrightarrow f$ strongly in $(H^1(\Omega))'$, $u_k\rightarrow u^0$ strongly in $L^2(\Omega)$ and $\nabla u_k\rightharpoonup\nabla u^0$ weakly in $L^2(\Omega)$. Also assume that the constant matrix $\overline{A_k}$ defined by \eqref{defA0} with $A$ replaced by $A_k$ converges to a constant matrix $A^0$. Then \begin{equation*} A_k(x/\varepsilon_k)\nabla u_k\rightharpoonup A^0\nabla u^0\quad\mbox{weakly in}\ L^2(\Omega) \end{equation*} and \begin{equation*} -\nabla\cdot A^0\nabla u^0=f\in (H^1(\Omega))'. \end{equation*} \end{theo} For a proof, which relies on the classical oscillating test function argument, we refer for instance to \cite[Lemma 2.1]{KLSNeumann}. This is an interior convergence result, since no boundary condition is prescribed on $u_k$.
\subsection{Uniform estimates in homogenization and applications}
We recall here the boundary Lipschitz estimate proved by Avellaneda and Lin in \cite{alin}. \begin{theo}[Lipschitz estimate, {\cite[Lemma 20]{alin}}]\label{theoestlipbdaryalin} For all $\kappa>0$, $0<\mu<1$, there exists $C>0$ such that for all $\psi\in C^{1,\nu}(\mathbb R^{d-1})\cap W^{1,\infty}(\mathbb R^{d-1})$, for all $A\in\mathcal A^{\nu}$, for all $r>0$, for all $\varepsilon>0$, for all $f\in L^{d+\kappa}(D_\psi(0,r))$, for all $F\in C^{0,\mu}(D_\psi(0,r))$, for all weak solutions $u^\varepsilon\in L^\infty(D_\psi(0,r))$ to \begin{equation*} \left\{\begin{array}{rll} -\nabla\cdot A(x/\varepsilon)\nabla u^\varepsilon&=f+\nabla\cdot F,&x\in D_{\psi}(0,r),\\ u^\varepsilon&=0,&x\in\Delta_\psi(0,r), \end{array}\right. \end{equation*} the following estimate holds \begin{equation}\label{estlipbdaryalin} \|\nabla u^\varepsilon\|_{L^\infty(D_\psi(0,r/2))}\leq C\left\{r^{-1}\|u^\varepsilon\|_{L^\infty(D_\psi(0,r))}+r^{1-d/(d+\kappa)}\|f\|_{L^{d+\kappa}(D_\psi(0,r))}+r^{\mu}\|F\|_{C^{0,\mu}(D_\psi(0,r))}\right\}.
\end{equation} Notice that $C=C(d,N,\lambda,\kappa,\mu,\|\psi\|_{W^{1,\infty}},[\nabla\psi]_{C^{0,\nu}},[A]_{C^{0,\nu}})$. \end{theo} As stated in our earlier work \cite{BLtailosc}, this estimate does not cover the case of highly oscillating boundaries, since the constant in \eqref{estlipbdaryalin} involves the $C^{0,\nu}$ semi-norm of $\nabla\psi$. In this work, we rely on Theorem \ref{theoestlipbdaryalin} to get large-scale pointwise estimates on the Poisson kernel $P=P(y,\tilde{y})$ associated to the domain $\mathbb R^d_+$ and to the operator $-\nabla\cdot A(y)\nabla$. \begin{prop}\label{propestpeps} For all $d\geq 2$, there exists $C>0$, such that for all $A\in \mathcal A^{\nu}$, we have: \begin{enumerate}[label=(\arabic*)] \item for all $y\in \mathbb R^d_+$, for all $\tilde{y}\in \mathbb R^{d-1}\times\{0\}$, we have \begin{align} |P(y,\tilde{y})|&\leq\frac{Cy_d}{|y-\tilde{y}|^d},\label{estPoissonkernel}\\ |\nabla_yP(y,\tilde{y})|&\leq\frac{C}{|y-\tilde{y}|^d},\label{estnablaP} \end{align} \item for all $y,\ \tilde{y}\in \mathbb R^{d-1}\times\{0\}$, $y\neq\tilde{y}$, \begin{equation}\label{estnablaPbdary} |\nabla_yP(y,\tilde{y})|\leq\frac{C}{|y-\tilde{y}|^d}. \end{equation} \end{enumerate} Notice that $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$. \end{prop} The proof of those estimates starting from the uniform Lipschitz estimate of Theorem \ref{theoestlipbdaryalin} is standard (see for instance \cite{alin}).
\section{Boundary layer corrector in a flat half-space}
\label{secflathalfspace}
This section is devoted to the well-posedness of the boundary layer problem \begin{equation}\label{sysblhp} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla v&=0,&y_d>0,\\ v&=v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1}),&y_d=0, \end{array} \right. \end{equation} in the flat half-space $\mathbb R^d_+$. \begin{theo}\label{theoblflat} Assume $v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1})$. Then, there exists a unique weak solution $v$ of \eqref{sysblhp} such that \begin{equation}\label{estulocflathp} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{0}^\infty|\nabla v|^2dy_ddy'\leq C\|v_0\|_{H^{1/2}_{uloc}}^2<\infty, \end{equation} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$. \end{theo} The proof is in three steps: (i) we define a function $v$ and prove it is a weak solution to \eqref{sysblhp}, (ii) we prove that the solution we have defined satisfies the estimate \eqref{estulocflathp}, (iii) we prove uniqueness of solutions verifying \eqref{estulocflathp}.
\subsection{Existence of a weak solution}
Let $\eta\in C^\infty_c(\mathbb R)$ be a cut-off function such that \begin{equation}\label{defeta} \eta\equiv 1\ \mbox{on}\ (-1,1),\qquad 0\leq\eta\leq 1,\qquad \|\eta'\|_{L^\infty}\leq 2. \end{equation} Let $y_*\in\mathbb R^d_+$ be fixed. Notice that \begin{equation*} \eta(|\cdot-y_*'|)\in C^\infty_c(\mathbb R^{d-1}),\quad \eta(|\cdot-y_*'|)\equiv 1\ \mbox{on}\ B(y_*',1),\quad 0\leq \eta(|\cdot-y_*'|)\leq 1\quad\mbox{and}\quad\|\nabla(\eta(|\cdot-y_*'|))\|_{L^\infty}\leq 2. \end{equation*} We define \begin{equation}\label{defsolflateta} v(y_*):=v^\sharp(y_*)+v^\flat(y_*), \end{equation} where for $y\in\mathbb R^d_+$, \begin{equation*} v^\sharp(y):=\int_{\mathbb R^{d-1}\times\{0\}}P(y,\tilde{y})(1-\eta(|\tilde{y}'-y_*'|))v_0(\tilde{y}')d\tilde{y}, \end{equation*} and $v^\flat=v^\flat(y)\in H^1(\mathbb R^d_+)$ is the unique weak solution to \begin{equation*} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla v^\flat&=0,&y_d>0,\\ v^\flat&=\eta(|y'-y_*'|)v_0(y')\in H^{1/2}(\mathbb R^{d-1}),&y_d=0, \end{array} \right.
\end{equation*} satisfying \begin{equation}\label{estvflatH1} \int_{\mathbb R^{d}_+}|\nabla v^\flat|^2dy'dy_d\leq C\|\eta v_0\|_{H^{1/2}}^2, \end{equation} with $C=C(d,N,\lambda)$. First of all, one has to prove that the definition of $v$ does not depend on the choice of the cut-off $\eta$. Let $\eta_1,\ \eta_2\in C^\infty_c(\mathbb R)$ be two cut-off functions satisfying \eqref{defeta}. We denote by $v_1(y_*)$ and $v_2(y_*)$ the associated vectors defined by \begin{equation*} \begin{aligned} v_1(y_*)&:=\int_{\mathbb R^{d-1}\times\{0\}}P(y_*,\tilde{y})(1-\eta_1(|\tilde{y}'-y_*'|)v_0(\tilde{y}')d\tilde{y}+v^\flat_1(y_*),\\ v_2(y_*)&:=\int_{\mathbb R^{d-1}\times\{0\}}P(y_*,\tilde{y})(1-\eta_2(|\tilde{y}'-y_*'|))v_0(\tilde{y}')d\tilde{y}+v^\flat_2(y_*). \end{aligned} \end{equation*} Substracting, we get \begin{equation}\label{substrindepeta} v_1(y_*)-v_2(y_*)=\int_{\mathbb R^{d-1}\times\{0\}}P(y_*,\tilde{y})(\eta_2(|\tilde{y}'-y_*'|)-\eta_1(|\tilde{y}'-y_*'|))v_0(\tilde{y}')d\tilde{y}+v^\flat_1(y_*)-v^\flat_2(y_*). \end{equation} Now since \begin{equation*} y\longmapsto\int_{\mathbb R^{d-1}\times\{0\}}P(y,\tilde{y})(\eta_2(|\tilde{y}'-y_*'|)-\eta_1(|\tilde{y}'-y_*'|))v_0(\tilde{y}')d\tilde{y} \end{equation*} is the unique solution to \begin{equation*} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla v^\flat&=0,&y_d>0,\\ v^\flat&=(\eta_2(|y'-y_*'|)-\eta_1(|y'-y_*'|))v_0(y')\in H^{1/2}(\mathbb R^{d-1}),&y_d=0, \end{array} \right. \end{equation*} the difference in \eqref{substrindepeta} has to be zero, which proves that our definition of $v$ is independent of the choice of $\eta$. It remains to prove that $v=v(y)$ defined by \eqref{defsolflateta} is actually a weak solution to \eqref{sysblhp}. Let $\varphi_\diamond=\varphi_\diamond(y')\in C^\infty_c(\mathbb R^{d-1})$ and $\varphi_d=\varphi_d(y_d)\in C^\infty_c((0,\infty))$. We choose $\eta\in C^\infty_c(\mathbb R)$ satisfying \eqref{defeta} and such that $\eta(|\cdot|)\equiv 1$ on $\supp\varphi_\diamond+B(0,1)$. We aim at proving \begin{equation*} \int_{\mathbb R^{d}_+}v(y)\left(-\nabla\cdot A^*(y)\nabla(\varphi_\diamond\varphi_d)\right)dy=0. \end{equation*} This relation is clear for $v^\flat$. For $v^\sharp$, by Fubini and then integration by parts \begin{equation*} \begin{aligned} &\int_{\mathbb R^d_+}v^\sharp(y)\left(-\nabla\cdot A^*(y)\nabla(\varphi_\diamond\varphi_d)\right)dy\\ &=\int_{\supp\varphi_\diamond\times\supp\varphi_d}\int_{\mathbb R^{d-1}\times\{0\}}P(y,\tilde{y})(1-\eta(\tilde{y}))v_0(\tilde{y}')\left(-\nabla\cdot A^*(y)\nabla\varphi_\diamond\varphi_d\right)d\tilde{y}dy\\ &=\int_{\mathbb R^{d-1}\times\{0\}}\int_{\supp\varphi_\diamond\times\supp\varphi_d}P(y,\tilde{y})\left(-\nabla\cdot A^*(y)\nabla(\varphi_\diamond\varphi_d)\right)dy(1-\eta(\tilde{y}))v_0(\tilde{y}')d\tilde{y}\\ &=\int_{\mathbb R^{d-1}\times\{0\}}\left\langle-\nabla\cdot A(y)\nabla P(y,\tilde{y}),\varphi_\diamond\varphi_d\right\rangle (1-\eta(\tilde{y}))v_0(\tilde{y}')d\tilde{y}=0. \end{aligned} \end{equation*} \subsection{Gradient estimate} Let $\varphi_\diamond=\varphi_\diamond(y')\in C^\infty_c(\mathbb R^{d-1})$ and $\varphi_d=\varphi_d(y_d)\in C^\infty_c((0,\infty))$. We choose $R>1$ such that $\supp\varphi_\diamond+B(0,1)\subset B(0,R)$. Our goal is to prove \begin{equation*} \left|\int_{\mathbb R^d_+}\nabla v(y)\varphi_\diamond\varphi_d(y)dy\right|\leq CR^{\frac{d-1}{2}}\|v_0\|_{H^{1/2}_{uloc}}\|\varphi_\diamond\|_{L^2}\|\varphi_d\|_{L^2}, \end{equation*} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$. 
This estimate clearly implies the bound \eqref{estulocflathp}. Let $\eta\in C^{\infty}_c(\mathbb R)$ such that \eqref{defeta} \begin{equation*} \eta(|\cdot|)\equiv 1\ \mbox{on}\ B(0,R)\quad\mbox{and}\quad\supp\eta(|\cdot|)\subset B(0,2R). \end{equation*} Combining \eqref{estvflatH1} and the result of Lemma \ref{lemh1/2H1/2uloc}, we get \begin{equation*} \int_{\mathbb R^{d}_+}|\nabla v^\flat|^2dy'dy_d\leq CR^{d-1}\|v_0\|^2_{H^{1/2}_{uloc}}, \end{equation*} with $C=C(d,N,\lambda)$. It remains to estimate \begin{equation*} \int_{\mathbb R^d_+}\nabla v^\sharp(y)\varphi_\diamond\varphi_d(y)dy=\int_0^1\int_{\mathbb R^{d-1}}\nabla v^\sharp(y)\varphi_\diamond\varphi_d(y)dy'dy_d+\int_1^\infty\int_{\mathbb R^{d-1}}\nabla v^\sharp(y)\varphi_\diamond\varphi_d(y)dy'dy_d. \end{equation*} To estimate these terms we rely on the the bound \eqref{estnablaP}: for all $y\in\mathbb R^d_+$, $\tilde{y}\in\mathbb R^{d-1}\times\{0\}$, \begin{equation*} |\nabla_yP(y,\tilde{y})|\leq \frac{C}{|y-\tilde{y}|^d}=\frac{C}{(y_d^2+|y'-\tilde{y}'|^2)^{d/2}}, \end{equation*} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$. We begin with two useful estimates. For $y\in B(0,R)$, we have on the one hand \begin{equation}\label{usefulest1} \begin{aligned} \int_{\mathbb R^{d-1}}\frac{1}{|y'-\tilde{y}'|^d}(1-\eta(|\tilde{y}'|))|v_0(\tilde{y}')|^2d\tilde{y}&\leq \int_{\mathbb R^{d-1}\setminus B(0,1)}\frac{1}{|y'-\tilde{y}'|^d}|v_0(\tilde{y}')|^2d\tilde{y}\\ &\leq\sum_{\xi\in\mathbb Z^{d-1}\setminus\{0\}}\frac{1}{|\xi|^d}\|v_0\|_{L^2_{uloc}}^2\leq C\|v_0\|_{L^2_{uloc}}^2 \end{aligned} \end{equation} and on the other hand \begin{equation}\label{usefulest2} \begin{aligned} \int_{\mathbb R^{d-1}}\frac{1}{(y_d^2+|y'-\tilde{y}'|^2)^{d/2}}(1-\eta(|\tilde{y}'|))|v_0(\tilde{y}')|^2d\tilde{y}&\leq \int_{\mathbb R^{d-1}}\frac{1}{(y_d^2+|y'-\tilde{y}'|^2)^{d/2}}|v_0(\tilde{y}')|^2d\tilde{y}\\ &\leq\int_{\mathbb R^{d-1}}\frac{1}{(y_d^2+|y'-\tilde{y}'|^2)^{d/2}}d\tilde{y}\|v_0\|_{L^2_{uloc}}^2\\ &\leq \frac{C}{y_d}\|v_0\|_{L^2_{uloc}}^2. \end{aligned} \end{equation} Using \eqref{usefulest1}, we get \begin{equation*} \begin{aligned} &\left|\int_0^1\int_{\mathbb R^{d-1}}\nabla v^\sharp(y)\varphi_\diamond\varphi_d(y)dy'dy_d\right|=\left|\int_0^1\int_{\mathbb R^{d-1}}\int_{\mathbb R^{d-1}\times\{0\}}\nabla_yP(y,\tilde{y})(1-\eta(|\tilde{y}'|))v_0(\tilde{y}')d\tilde{y}\varphi_\diamond\varphi_d(y)dy'dy_d\right|\\ &\leq \int_0^1\int_{\mathbb R^{d-1}}\left(\int_{\mathbb R^{d-1}\times\{0\}}\frac{1-\eta(|\tilde{y}'|)}{|y'-\tilde{y}'|^d}d\tilde{y}'\right)^{1/2}\left(\int_{\mathbb R^{d-1}\times\{0\}}\frac{1-\eta(|\tilde{y}'|)}{|y'-\tilde{y}'|^d}|v_0(\tilde{y}')|^2d\tilde{y}\right)^{1/2}|\varphi_\diamond\varphi_d(y)|dy'dy_d\\ &\leq C\|v_0\|_{L^2_{uloc}}\int_0^1\int_{\mathbb R^{d-1}}\left(\int_1^\infty\frac{1}{r^2}\right)^{1/2}|\varphi_\diamond\varphi_d(y)|dy'dy_d\\ &\leq C\|v_0\|_{L^2_{uloc}}\int_0^1\int_{\mathbb R^{d-1}}|\varphi_\diamond\varphi_d(y)|dy'dy_d\leq CR^{\frac{d-1}{2}}\|v_0\|_{L^2_{uloc}}\|\varphi_\diamond\|_{L^2}\|\varphi_d\|_{L^2}. 
\end{aligned} \end{equation*} Using \eqref{usefulest2}, we infer \begin{equation*} \begin{aligned} &\left|\int_1^\infty\int_{\mathbb R^{d-1}}\nabla v^\sharp(y)\varphi_\diamond\varphi_d(y)dy'dy_d\right|\\ &\leq C\int_1^\infty\int_{\mathbb R^{d-1}}\left(\int_{\mathbb R^{d-1}\times\{0\}}\frac{1}{(y_d^2+|y'-\tilde{y}'|^2)^{d/2}}d\tilde{y}'\right)^{1/2}\\ &\qquad\qquad\qquad\left(\int_{\mathbb R^{d-1}\times\{0\}}\frac{1}{(y_d^2+|y'-\tilde{y}'|^2)^{d/2}}|v_0(\tilde{y}')|^2d\tilde{y}\right)^{1/2}|\varphi_\diamond\varphi_d(y)|dy'dy_d\\ &\leq C\|v_0\|_{L^2_{uloc}}\int_1^\infty\frac{1}{y_d}|\varphi_d(y_d)|dy_d\int_{\mathbb R^{d-1}}|\varphi_\diamond(y')|dy'\leq CR^{\frac{d-1}{2}}\|v_0\|_{L^2_{uloc}}\|\varphi_\diamond\|_{L^2}\|\varphi_d\|_{L^2}. \end{aligned} \end{equation*} \subsection{Uniqueness} By linearity, it is enough to prove uniqueness for $v=v(y)$ weak solution to \begin{equation*} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla v&=0,&y_d>0,\\ v&=0,&y_d=0, \end{array} \right. \end{equation*} such that \begin{equation}\label{estulocuniqflat} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_0^\infty|\nabla v|^2\leq C<\infty. \end{equation} Clearly, by Poincar\'e's inequality, for all $a>0$, \begin{equation*} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_0^a|v|^2\leq Ca^2. \end{equation*} For $k\in\mathbb N$, we will take as a test function $\eta_k^2v\in H^1_0(\mathbb R^d_+)$ for an ad hoc cut-off $\eta_k$ such that $\eta_k\equiv 1$ on $(-k,k)^{d-1}\times (0,k)$ and $\supp\eta_k\subset(-k-1,k+1)^{d-1}\times(-1,k+1)$. We want to construct $\eta_k$ such that $\|\nabla\eta_k\|_{L^\infty}$ is bounded uniformly in $k$. Let $\eta\in C^\infty_c(B(0,1/2))$ such that $\int_{\mathbb R^d}\eta=1$. We define $\eta_k$ as folows \begin{equation*} \begin{aligned} \eta_k(y)&:=\int_{\mathbb R^d}\mathbf{1}_{(-k-1/2,k+1/2)^{d-1}\times(-1/2,k+1/2)}(y-\tilde{y})\eta(\tilde{y})d\tilde{y}\\ &=\int_{(-k-1/2,k+1/2)^{d-1}\times(-1/2,k+1/2)}\eta(y-\tilde{y})d\tilde{y}. \end{aligned} \end{equation*} For $y\in (-k,k)^{d-1}\times (0,k)$, $\supp(\eta(y-\cdot))\subset(-k-1/2,k+1/2)^{d-1}\times(-1/2,k+1/2)$, so $\eta_k(y)=1$. Moreover, $\supp\eta_k\subset(-k-1/2,k+1/2)^{d-1}\times(-1/2,k+1/2)+\supp\eta\subset(-k-1,k+1)^{d-1}\times(-1,k+1)$. Finally, convolution inequalities imply $\|\nabla\eta_k\|_{L^\infty}\leq\|\nabla\eta\|_{L^1}$. Now, testing against $\eta_k^2v$, we get \begin{equation*} 0=\int_{\mathbb R^d_+}A(y)\nabla v\cdot\nabla(\eta_k^2v)=\int_{\mathbb R^d_+}A(y)\eta_k^2\nabla v\cdot\nabla v+2\int_{\mathbb R^d_+}A(y)\eta_k\nabla v\cdot(\nabla\eta_k)v. \end{equation*} Therefore, letting \begin{equation*} E_k:=\int_{(-k,k)^{d-1}\times(0,k)}|\nabla v|^2, \end{equation*} we have \begin{equation}\label{estuniqflat} E_k\leq C^*(E_{k+1}-E_k), \end{equation} where $C^*=C^*(d,N,\lambda,\|\nabla\eta\|_{L^1})$. Using the hole-filling trick, we get for fixed $k$ and for all $n\geq k$, \begin{equation*} E_k\leq \left(\frac{C^*}{C^*+1}\right)^{n-k}E_n. \end{equation*} Estimate \eqref{estulocuniqflat} implies $E_n\leq Cn^{d-1}$, so that \begin{equation*} E_k\leq C\left(\frac{C^*}{C^*+1}\right)^{n-k}n^{d-1}\stackrel{n\rightarrow\infty}{\longrightarrow}0, \end{equation*} and $E_k=0$. This concludes the uniqueness proof. \section{Estimates for a Dirichlet to Neumann operator} \label{secDN} The Dirichlet to Neumann operator $\DtoN$ is crucial in the proof of the well-posedness of the elliptic system in the bumpy half-space (see section \ref{secwpbumpyhp}). The key idea there is to carry out a domain decomposition. 
The Dirichlet to Neumann map is the tool enabling this domain decomposition. Since we are working in spaces of infinite energy to be useful $\DtoN$ has to be defined on $H^{1/2}_{uloc}$. Similar studies have been carried out in \cite{ABZ13_Kato} (context of water-waves), \cite{DGVNMnoslip} ($2$d Stokes system), \cite{DP_SC} ($3$d Stokes-Coriolis system). We first define the Dirichlet to Neumann operator on $H^{1/2}(\mathbb R^{d-1})$: \begin{equation*} \DtoN:\ H^{1/2}(\mathbb R^{d-1})\longrightarrow \mathcal D', \end{equation*} such that for any $v_0\in H^{1/2}(\mathbb R^{d-1})$, for all $\varphi\in C^\infty_c(\mathbb R^{d-1})$, \begin{equation*} \langle\DtoN(v_0),\varphi\rangle_{\mathcal D',\mathcal D}:=\langle A(y)\nabla v\cdot e_d,\varphi\rangle_{\mathcal D',\mathcal D}, \end{equation*} where $v$ is the unique weak solution to \begin{equation}\label{sysflatH1/2secdn} \left\{\begin{array}{rll} -\nabla\cdot A(y)\nabla v&=0,&y_d>0,\\ v&=v_0\in H^{1/2}(\mathbb R^{d-1}),&y_d=0. \end{array} \right. \end{equation} \begin{prop}\label{propdnh12}\ \begin{enumerate}[label=(\arabic*)] \item For all $\varphi\in C^\infty_c(\overline{\mathbb R^d_+})$, \begin{equation}\label{prop1dnh12} \langle\DtoN(v_0),\varphi|_{y_d=0}\rangle_{\mathcal D',\mathcal D}=\langle A(y)\nabla v\cdot e_d,\varphi|_{y_d=0}\rangle_{\mathcal D',\mathcal D}=-\int_{\mathbb R^d_+}A(y)\nabla v\cdot\nabla\varphi. \end{equation} \item For all $\varphi\in C^\infty_c(\mathbb R^{d-1})$, \begin{equation}\label{prop2dnh12} \langle\DtoN(v_0),\varphi|_{y_d=0}\rangle_{\mathcal D',\mathcal D}=\int_{\mathbb R^{d-1}\times\{0\}}\int_{\mathbb R^{d-1}\times\{0\}}A(y)\nabla_yP(y,\tilde{y})\cdot e_dv_0(\tilde{y})d\tilde{y}\varphi(y)dy. \end{equation} \end{enumerate} \end{prop} For $y,\ \tilde{y}\in\mathbb R^{d-1}\times\{0\}$, let \begin{equation*} K(y,\tilde{y}):=A(y)\nabla_yP(y,\tilde{y})\cdot e_d \end{equation*} be the kernel appearing in \eqref{prop2dnh12}. Estimate \eqref{estnablaPbdary} of Proposition \ref{propestpeps} implies that \begin{equation*} |K(y,\tilde{y})|\leq \frac{C}{|y-\tilde{y}|^d}, \end{equation*} for any $y,\ \tilde{y}\in \mathbb R^{d-1}\times\{0\}$, $y\neq\tilde{y}$ with $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$. Both formulas in Proposition \ref{propdnh12} follow from integration by parts. Because of \eqref{prop1dnh12}, it is clear that for all $v_0\in H^{1/2}(\mathbb R^{d-1})$, for all $\varphi\in C^{\infty}_c(\mathbb R^{d-1})$, \begin{equation}\label{continuityestdnflathp} \left|\langle\DtoN(v_0),\varphi\rangle\right|\leq C\|v_0\|_{H^{1/2}}\|\varphi\|_{H^{1/2}}, \end{equation} with $C=C(d,N,\lambda)$, so that $\DtoN(v_0)$ extends as a continuous operator on $H^{1/2}(\mathbb R^{d-1})$. Another consequence of \eqref{prop1dnh12} is the following corollary. \begin{cor}\label{cordnneg} For all $v_0\in H^{1/2}(\mathbb R^{d-1})$, \begin{equation*} \langle\DtoN(v_0),v_0\rangle=-\int_{\mathbb R^{d}_+}A(y)\nabla v\cdot\nabla v\leq 0, \end{equation*} where $v$ is the unique solution to \eqref{sysflatH1/2secdn}. \end{cor} Our next goal is to extend the definition of $\DtoN$ to $v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1})$. We have to make sense of the duality product $\langle\DtoN(v_0),\varphi\rangle$. 
As for the definition of the solution to the flat half-space problem (see section \ref{secflathalfspace}), the basic idea is to use a cut-off function $\eta$ to split the definition between one part $\langle\DtoN(\eta v_0),\varphi\rangle$ where $\eta v_0\in H^{1/2}(\mathbb R^{d-1})$, and another part $\langle\DtoN((1-\eta)v_0),\varphi\rangle$ which does not see the singularity of the kernel $K(y,\tilde{y})$. For $R>1$, there exists $\eta\in C^\infty_c(\mathbb R)$ such that \begin{equation}\label{condeta} 0\leq\eta\leq 1,\quad\eta\equiv 1\ \mbox{on}\ (-R,R),\quad\supp\eta\subset(-R-1,R+1),\quad\|\eta'\|_{L^\infty}\leq 2. \end{equation} Let $v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1})$. Let $R>1$ and $\varphi\in C^\infty_c(\mathbb R^{d-1})$ such that $\supp\varphi+B(0,1)\subset B(0,R)$. There exists $\eta\in C^\infty_c(\mathbb R)$ satisfying the conditions \eqref{condeta}. We define the action of $\DtoN(v_0)$ on $\varphi$ by \begin{multline}\label{defdnh12ulocflat} \langle\DtoN(v_0),\varphi\rangle_{\mathcal D',\mathcal D}:=\langle\DtoN(\eta(|\cdot|)v_0),\varphi\rangle_{H^{-1/2},H^{1/2}}\\ +\int_{\mathbb R^{d-1}\times\{0\}}\int_{\mathbb R^{d-1}\times\{0\}}K(y,\tilde{y})(1-\eta(|\tilde{y}'|))v_0(\tilde{y}')\varphi(y')d\tilde{y}dy. \end{multline} The fact that this definition does not depend on the cut-off $\eta\in C^\infty_c(\mathbb R)$ follows from Proposition \ref{propdnh12}. The first term in the right-hand side of \eqref{defdnh12ulocflat} is estimated using \eqref{continuityestdnflathp} and the bound of Lemma \ref{lemh1/2H1/2uloc} between the $H^{1/2}$ norm of $\eta(|\cdot|)v_0$ and the $H^{1/2}_{uloc}$ norm of $v_0$. That yields \begin{equation*} \left|\langle\DtoN(\eta(|\cdot|)v_0),\varphi\rangle\right|\leq C\|\eta(|\cdot|)v_0\|_{H^{1/2}}\|\varphi\|_{H^{1/2}}\leq CR^{\frac{d-1}{2}}\|v_0\|_{H^{1/2}_{uloc}}\|\varphi\|_{H^{1/2}}, \end{equation*} with $C=C(d,N,\lambda)$. We deal with the integral part in the right hand side of \eqref{defdnh12ulocflat} in a way similar to the proof of estimates \eqref{usefulest1} and \eqref{usefulest2}. Using the fact that the supports of $(1-\eta(|y'|))v_0(y')$ on the one hand and $\varphi$ on the other hand are disjoint, we have \begin{equation*} \begin{aligned} &\left|\int_{\mathbb R^{d-1}\times\{0\}}\int_{\mathbb R^{d-1}\times\{0\}}K(y,\tilde{y})(1-\eta(|\tilde{y}'|))v_0(\tilde{y}')\varphi(y')d\tilde{y}dy\right|\\ &\leq C\int_{\mathbb R^{d-1}\times\{0\}}\int_{\mathbb R^{d-1}\times\{0\}}\frac{1}{|y-\tilde{y}|^d}(1-\eta(|\tilde{y}'|))|v_0(\tilde{y}')||\varphi(y')|d\tilde{y}dy\\ &\leq C\int_{\mathbb R^{d-1}\times\{0\}}\left(\int_{\mathbb R^{d-1}\times\{0\}}\frac{1}{|y-\tilde{y}|^d}(1-\eta(|\tilde{y}'|))d\tilde{y}\right)^{1/2}\\ &\qquad\qquad\left(\int_{\mathbb R^{d-1}\times\{0\}}\frac{1}{|y-\tilde{y}|^d}(1-\eta(|\tilde{y}'|))|v_0(\tilde{y}')|^2d\tilde{y}\right)^{1/2}|\varphi(y')|dy\\ &\leq C\int_{\mathbb R^{d-1}\times\{0\}}\left(\int_1^\infty\frac{1}{r^2}dr\right)^{1/2}|\varphi(y')|dy\|v_0\|_{L^2_{uloc}}\\ &\leq CR^{\frac{d-1}{2}}\|v_0\|_{L^2_{uloc}}\|\varphi\|_{L^2}, \end{aligned} \end{equation*} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$. These results are put in a nutshell in the following proposition. \begin{prop}\label{estdnflat}\ \begin{enumerate}[label=(\arabic*)] \item For $v_0\in H^{1/2}(\mathbb R^{d-1})$, for any $\varphi\in C^\infty_c(\mathbb R^{d-1})$, we have \begin{equation*} \left|\langle\DtoN(v_0),\varphi\rangle\right|\leq C\|v_0\|_{H^{1/2}}\|\varphi\|_{H^{1/2}}, \end{equation*} with $C=C(d,N,\lambda)$. 
\item For $v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1})$, for $R>1$ and any $\varphi\in C^\infty_c(\mathbb R^{d-1})$ such that \begin{equation*} \supp\varphi+B(0,1)\subset B(0,R), \end{equation*} we have \begin{equation}\label{estdnflatest} \left|\langle\DtoN(v_0),\varphi\rangle\right|\leq CR^{\frac{d-1}{2}}\|v_0\|_{H^{1/2}_{uloc}}\|\varphi\|_{H^{1/2}}, \end{equation} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$. \end{enumerate} \end{prop}
\section{Boundary layer corrector in a bumpy half-space}
\label{secwpbumpyhp}
This section is devoted to the well-posedness of the boundary layer problem \begin{equation}\label{sysblbumpyhp} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla v&=0,&y_d>\psi(y'),\\ v&=v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1}),&y_d=\psi(y'), \end{array} \right. \end{equation} in the bumpy half-space $\Omega_+:=\{y_d>\psi(y')\}$. For technical reasons, the boundary $\psi\in W^{1,\infty}(\mathbb R^{d-1})$ is assumed to be negative, i.e. $\psi(y')<0$ for all $y'\in\mathbb R^{d-1}$. We prove Theorem \ref{theoblbumpy} of the introduction which asserts the existence of a unique solution $v$ in the class \begin{equation*} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^\infty|\nabla v|^2dy_ddy'<\infty. \end{equation*} The idea is to split the bumpy half-space into two subdomains: a flat half-space $\mathbb R^d_+$ on the one hand and a bumpy channel $\Omega_\flat:=\{\psi(y')<y_d<0\}$ on the other hand. Both domains are connected by a transparent boundary condition involving the Dirichlet to Neumann operator $\DtoN$ defined in section \ref{secDN}. Therefore, solving \eqref{sysblbumpyhp} is equivalent to solving \begin{equation}\label{sysblbumpyc} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla v&=0,&0>y_d>\psi(y'),\\ v&=v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1}),&y_d=\psi(y'),\\ A(y)\nabla v\cdot e_d&=\DtoN(v|_{y_d=0}),&y_d=0. \end{array} \right. \end{equation} This fact is stated in the following technical lemma. \begin{lem}\label{lemeqsysdec} If $v$ is a weak solution of \eqref{sysblbumpyc} in $\Omega_\flat$ such that \begin{equation*} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^0|\nabla v|^2dy_ddy'<\infty, \end{equation*} then the function $\tilde{v}$, defined by $\tilde{v}(y):=v(y)$ for $\psi(y')<y_d<0$ and by letting $\tilde{v}|_{\mathbb R^d_+}$ be the unique solution to \eqref{sysblhp} with boundary condition $\tilde{v}|_{y_d=0^+}=v|_{y_d=0^-}$ given by Theorem \ref{theoblflat}, is a weak solution to \eqref{sysblbumpyhp}. Moreover, the converse is also true. Namely, if $v$ is a weak solution to \eqref{sysblbumpyhp} in $\Omega_+$ such that \begin{equation*} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^\infty|\nabla v|^2dy_ddy'<\infty, \end{equation*} then $v|_{\{\psi(y')<y_d<0\}}$ is a weak solution to \eqref{sysblbumpyc}. \end{lem} The main advantage of the domain decomposition is to make it possible to work in a channel, bounded in the vertical direction, in which one can rely on Poincar\'e type inequalities. Therefore our method is energy based, which makes it possible to deal with rough boundaries. We now lift the boundary condition $v_0$. There exists $V_0$ such that \begin{equation*} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^\infty|V_0|^2+|\nabla V_0|^2dy_ddy'\leq C\|v_0\|_{H^{1/2}_{uloc}}^2, \end{equation*} with $C=C(d,N,\|\psi\|_{W^{1,\infty}})$ and such that the trace of $V_0$ is $v_0$.
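A possible construction of such a lifting is sketched here for the reader's convenience; it is one standard choice among several, and the specific mollifier extension below is our own illustration rather than a claim about how the estimate must be obtained. Fix $\varphi\in C^\infty_c(\mathbb R^{d-1})$ with $\int_{\mathbb R^{d-1}}\varphi=1$ and $\chi\in C^\infty_c([0,2))$ with $\chi\equiv1$ on $[0,1]$, and set
\begin{equation*}
U_0(y',z):=\chi(z)\int_{\mathbb R^{d-1}}\varphi(\tilde y')\,v_0(y'-z\tilde y')\,d\tilde y',\qquad (y',z)\in\mathbb R^{d}_+.
\end{equation*}
This is the classical extension of an $H^{1/2}$ trace, so $\|U_0\|_{H^1(\mathbb R^d_+)}\leq C\|v_0\|_{H^{1/2}}$ when $v_0\in H^{1/2}(\mathbb R^{d-1})$; moreover, on a column $\xi+(0,1)^{d-1}\times(0,\infty)$ the function $U_0$ only involves the values of $v_0$ on a bounded neighborhood of $\xi+(0,1)^{d-1}$, so the bound localizes and yields the corresponding estimate for $v_0\in H^{1/2}_{uloc}(\mathbb R^{d-1})$. Setting $V_0(y',y_d):=U_0(y',y_d-\psi(y'))$, the trace of $V_0$ on $\{y_d=\psi(y')\}$ is $v_0$, and the chain rule together with $\nabla\psi\in L^\infty(\mathbb R^{d-1})$ gives the estimate above with a constant depending only on $d$, $N$ and $\|\psi\|_{W^{1,\infty}}$.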
Thus, $w:=v-V_0$ solves the system \begin{equation}\label{sysblbumpychomo} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla w&=\nabla\cdot F,&0>y_d>\psi(y'),\\ w&=0,&y_d=\psi(y'),\\ A(y)\nabla w\cdot e_d&=\DtoN(w|_{y_d=0})+f,&y_d=0, \end{array} \right. \end{equation} where \begin{equation*} \begin{aligned} F&:=A(y)\nabla V_0,\\ f&:=\DtoN(V_0|_{y_d=0})-A(y)\nabla V_0\cdot e_d. \end{aligned} \end{equation*} Notice that the source terms satisfy the following estimates: \begin{equation}\label{estFl2uloc} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^0|F|^2dy_ddy'\leq C\|v_0\|_{H^{1/2}_{uloc}}^2, \end{equation} with $C=C(d,N,\lambda,\|\psi\|_{W^{1,\infty}})$ and for all $\varphi\in C^\infty_c(\mathbb R^{d-1})$ such that $B(0,R)\subset\supp\varphi\subset B(0,2R)$ for some $R>0$, \begin{equation}\label{estfh1/2uloc} \left|\langle f,\varphi\rangle\right|\leq CR^{\frac{d-1}{2}}\|v_0\|_{H^{1/2}_{uloc}}\|\varphi\|_{H^{1/2}}, \end{equation} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}})$. There are three steps in the proof of the well-posedness of \eqref{sysblbumpyhp}. Firstly, for $n\in\mathbb N$ we build approximate solutions $w_n=w_n(y)$ solving \begin{equation}\label{sysblbumpychomon} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla w_n&=\nabla\cdot F,&0>y_d>\psi(y'),\\ w_n&=0,&\{y_d=\psi(y')\}\cup\{|y'|=n\},\\ A(y)\nabla w_n\cdot e_d&=\DtoN(w_n|_{y_d=0})+f,&y_d=0, \end{array} \right. \end{equation} on $\Omega_{\flat,n}:=\{y'\in(-n,n)^{d-1},\ 0>y_d>\psi(y')\}$ and extend $w_n$ by $0$ on $\Omega_\flat\setminus\Omega_{\flat,n}$. We have that $w_n\in H^1(\Omega_\flat)$. This construction is utterly classical. Secondly, we aim at getting estimates uniform in $n$ on $w_n$ in the norm \begin{equation}\label{sndstepbound} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^0|\nabla w_n|^2dy_ddy'. \end{equation} This is done carrying out so-called Saint-Venant estimates in the bounded channel. We close this step by using a hole-filling argument. The method has been pioneered by Lady{\v{z}}enskaja and Solonnikov \cite{LS} for the Navier-Stokes system in a bounded channel. Here the situation is more involved because of the nonlocal operator $\DtoN$ on the upper boundary. The situation here is closer to \cite{DGVNMnoslip,DaGV11} ($2$d Stokes system) and \cite{DP_SC} ($3$d Stokes-Coriolis system). Finally, one has to check that weak limits of $w_n$ are indeed solutions of \eqref{sysblbumpychomo}. This step is straightforward because of the linearity of the equations. Uniqueness follows from the Saint-Venant estimate of the second step, with zero source terms. We focus on the second step, which is by far the most intricate one. Let $r>0$, $y'_0\in\mathbb R^{d-1}$ and \begin{equation*} \Omega_{\flat,y'_0,r}:=\{y'\in B(y'_0,r),\ 0>y_d>\psi(y')\}. \end{equation*} Let $w_r\in H^1(\Omega_\flat)$ be a weak solution to \begin{equation}\label{sysblbumpychomor} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla w_r&=\nabla\cdot F_r,&0>y_d>\psi(y'),\\ w_r&=0,&y_d=\psi(y'),\\ A(y)\nabla w_r\cdot e_d&=\DtoN(w_r|_{y_d=0})+f_r,&y_d=0, \end{array} \right. \end{equation} such that $w_r=0$ on $\Omega_\flat\setminus\Omega_{\flat,y'_0,r}$, and where \begin{equation*} F_r:=F\mathbf{1}_{\Omega_{\flat,y'_0,r}},\qquad f_r:=f\mathbf{1}_{B(y'_0,r)}. \end{equation*} Both $F_r$ and $f_r$ satisfy (respectively) the estimates \eqref{estFl2uloc} and \eqref{estfh1/2uloc} with constants uniform in $r$. 
Notice furthermore that $w_n$ defined above (see \eqref{sysblbumpychomon}) is equal to $w_r$ solution of \eqref{sysblbumpychomor} for $r:=n$ and $y'_0=0$. For $k\in \mathbb N$, let \begin{equation*} \Omega_{\flat,k}:=\{y'\in(-k,k)^{d-1},\ 0>y_d>\psi(y')\}. \end{equation*} Our goal is to estimate, \begin{equation*} E_k:=\int_{\Omega_{\flat,k}}|\nabla w_r|^2. \end{equation*} In the following, for $k,\ m\in\mathbb N$, $k,\ m\geq 1$, \begin{equation*} \Sigma_k:=(-k,k)^{d-1}, \end{equation*} and the set $\mathcal C_{k,m}$ denotes the family of cubes $T$ of volume $m^{d-1}$ contained in $\mathbb R^{d-1}\setminus\Sigma_{k+m-1}$ with vertices in $\mathbb Z^{d-1}$, i.e. \begin{equation*} \mathcal C_{k,m}:=\left\{T=\xi+(-m',m')^{d-1},\ \xi\in\mathbb Z^{d-1}\ \mbox{and}\ T\subset\mathbb R^{d-1}\setminus\Sigma_{k+m-1}\right\}. \end{equation*} Let also $\mathcal C_m$ be the family of all the cubes of volume $m^{d-1}$ with vertices in $\mathbb Z^{d-1}$ \begin{equation*} \mathcal C_{m}:=\left\{T=\xi+(-m',m')^{d-1},\ \xi\in\mathbb Z^{d-1}\right\}. \end{equation*} Notice that for $k\geq \hat{k}\geq m'$, \begin{equation*} \mathcal C_{k,m}\subset\mathcal C_{\hat{k},m}\subset\mathcal C_{m',m}\subset\mathcal C_m. \end{equation*} For $T\in\mathcal C_{k,m}$, \begin{equation}\label{defomegaT} E_T:=\int_{\Omega_{T}}|\nabla w_r|^2,\qquad \Omega_T:=\{y'\in T,\ 0>y_d>\psi(y')\}. \end{equation} \begin{prop}\label{propaprioriest} There exists a constant $C^*=C^*(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}},\|v_0\|_{H^{1/2}_{uloc}})$ such that for all $r>0$, $y'_0\in\mathbb R^{d-1}$, for all $k,\ m\in\mathbb N$, $m\geq 3$ and $k\geq m/2=m'$, for any weak solution $w_r\in H^1(\Omega_\flat)$ of \eqref{sysblbumpychomor}, the following bound holds \begin{equation}\label{stvenantest} E_k\leq C^*\left(k^{d-1}+E_{k+m}-E_k+\frac{k^{3d-5}}{m^{3d-3}}\sup_{T\in\mathcal C_{k,m}}E_T\right). \end{equation} Notice that $C^*$ is independent of $r$ and $y'_0$. \end{prop} The crucial point for the control of the large-scale energies in \eqref{stvenantest} is the fact that the power $3d-5$ of $k$ is strictly smaller that the power $3d-3$ of $m$. Before tackling the proof of Proposition \ref{propaprioriest}, let us explain how to infer from \eqref{stvenantest} an a priori bound uniform in $n$ on $w_n$ solution of \eqref{sysblbumpychomon}. \subsection{Proof of the a priori bound} \label{secproofaprioribound} Let $C^*$ be given by Proposition \ref{propaprioriest}, and let \begin{equation}\label{eqAB} A:=\sum_{k=1}^\infty\left(\frac{C^*}{C^*+1}\right)^k(2k-1)^{d-1}<\infty,\qquad B:=\sum_{k=1}^\infty\left(\frac{C^*}{C^*+1}\right)^k(2k-1)^{3d-5}<\infty. \end{equation} We now choose an integer $m$ so that \begin{equation}\label{eqdefm} m\geq 3,\quad m\ \mbox{is even}\quad\mbox{and}\quad 1-2^{5-3d}\frac{B}{m^2}>\frac{1}{2}. \end{equation} Notice that $m=m(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}},\|v_0\|_{H^{1/2}_{uloc}})$, but is independent of $r$ and $y_0'$. The reason for taking $m$ even is technical; it is only used in the translation argument below. Take $n=lm=2lm'$, with $l\in\mathbb N$, $l\geq 1$, and take $w_n$ to be the solution of \eqref{sysblbumpychomon}. There exists $T^*\in \mathcal C_{m}$ such that $T^*\subset\Sigma_n$ and $E_{T^*}=\sup_{T\in\mathcal C_{m}}E_T$. By definition, there is $\xi^*\in\mathbb Z^{d-1}$ for which $T^*=\xi^*+(-m',m')^{d-1}$. We want to center $T^*$ at zero by simply translating the origin. 
Doing so, $w^*_n(y):=w_n(y'+\xi^*,y_d)$ is a solution of \eqref{sysblbumpychomor} with $y'_0:=-\xi^*,\ r=n$ and
\begin{equation*}
\begin{aligned}
A^*(y):=A(y'+\xi^*,y_d),&\quad \psi^*(y'):=\psi(y'+\xi^*),\quad v_0^*(y'):=v_0(y'+\xi^*),\\
F^*(y):=F(y'+\xi^*,y_d)&\quad\mbox{and}\quad f^*(y'):=f(y'+\xi^*).
\end{aligned}
\end{equation*}
Notice that
\begin{equation*}
[A^*]_{C^{0,\nu}}=[A]_{C^{0,\nu}},\quad \|\psi^*\|_{W^{1,\infty}}=\|\psi\|_{W^{1,\infty}}\quad\mbox{and}\quad \|v^*_0\|_{H^{1/2}_{uloc}}=\|v_0\|_{H^{1/2}_{uloc}},
\end{equation*}
so that $w^*_n$ satisfies the Saint-Venant estimate \eqref{stvenantest} with the same constant $C^*$. Furthermore, $E_{m'}=E_{T^*}$.
\begin{lem}\label{lemdownwardinduction}
We have the following a priori bound
\begin{equation*}
E_{m'}\leq 2^{2-d}Am^{d-1},
\end{equation*}
where $A$ is defined by \eqref{eqAB}.
\end{lem}
The Lemma is obtained by downward induction, using a hole-filling type argument: adding $C^*E_k$ to both sides of \eqref{stvenantest} and dividing by $C^*+1$, we use \eqref{stvenantest} in the equivalent form
\begin{equation*}
E_k\leq \frac{C^*}{C^*+1}\left(k^{d-1}+E_{k+m}+\frac{k^{3d-5}}{m^{3d-3}}\sup_{T\in\mathcal C_{k,m}}E_T\right).
\end{equation*}
Since $w^*_n$ is supported in $\Omega_{\flat,2n}$, we start from $k$ sufficiently large in \eqref{stvenantest}. For $k=2n+m'=(4l+1)m'$, estimate \eqref{stvenantest} implies
\begin{equation*}
E_{(4l+1)m'}\leq \frac{C^*}{C^*+1}((4l+1)m')^{d-1},
\end{equation*}
because $E_T=0$ for any $T\in\mathcal C_{(4l+1)m',m}$. Then,
\begin{equation*}
E_{(2(2l-1)+1)m'}=E_{(4l+1)m'-m}\leq \frac{C^*}{C^*+1}(2(2l-1)+1)^{d-1}(m')^{d-1}+\left(\frac{C^*}{C^*+1}\right)^2(4l+1)^{d-1}(m')^{d-1}.
\end{equation*}
Let $p\in\{0,\ldots\ 2l-1\}$. We then have
\begin{multline*}
E_{(2p+1)m'}\leq \frac{C^*}{C^*+1}(2p+1)^{d-1}(m')^{d-1}+\ldots\ \left(\frac{C^*}{C^*+1}\right)^{2l-p}(4l+1)^{d-1}(m')^{d-1}\\
+\frac{2^{5-3d}}{m^2}\left[\frac{C^*}{C^*+1}(2p+1)^{3d-5}+\ldots\ \left(\frac{C^*}{C^*+1}\right)^{2l-p}(4l+1)^{3d-5}\right]E_{m'}.
\end{multline*}
Eventually, for $p=0$
\begin{multline*}
E_{m'}\leq \frac{C^*}{C^*+1}(m')^{d-1}+\left(\frac{C^*}{C^*+1}\right)^2(3m')^{d-1}+\ldots\ \left(\frac{C^*}{C^*+1}\right)^{2l-p}(4l+1)^{d-1}(m')^{d-1}\\
+\frac{2^{5-3d}}{m^2}\left[\frac{C^*}{C^*+1}+\ldots\ \left(\frac{C^*}{C^*+1}\right)^{2l-p}(4l+1)^{3d-5}\right]E_{m'}\leq 2^{1-d}Am^{d-1}+\frac{2^{5-3d}}{m^2}BE_{m'}.
\end{multline*}
Therefore,
\begin{equation*}
\frac{E_{m'}}{2}<\left(1-2^{5-3d}\frac{B}{m^2}\right)E_{m'}\leq 2^{1-d}Am^{d-1},
\end{equation*}
which proves Lemma \ref{lemdownwardinduction}. Finally,
\begin{equation*}
\sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^0|\nabla w_n|^2\leq \sup_{T\in\mathcal C_m}E_T=E_{T^*}\leq 2^{2-d}Am^{d-1},
\end{equation*}
which proves the a priori bound in the norm \eqref{sndstepbound} uniformly in $n$.
\subsection{Proof of Proposition \ref{propaprioriest}}
\label{secproofpropaprioriest}
\subsubsection*{Construction of a cut-off}
Let $\eta\in C^\infty_c(B(0,1/2))$ such that $\eta\geq 0$ and $\int_{\mathbb R^{d-1}}\eta=1$. For all $k\in\mathbb N$, let $\eta_k=\eta_k(y')$ be defined by
\begin{equation*}
\eta_k(y')=\int_{\mathbb R^{d-1}}\mathbf{1}_{[-k-1/2,k+1/2]^{d-1}}(y'-\tilde{y}')\eta(\tilde{y}')d\tilde{y}'=\int_{[-k-1/2,k+1/2]^{d-1}}\eta(y'-\tilde{y}')d\tilde{y}'.
\end{equation*}
For all $k\in\mathbb N$, we have the following properties:
\begin{equation*}
\eta_k\equiv 1\ \mbox{on}\ [-k,k]^{d-1},\quad \supp\eta_k\subset[-k-1,k+1]^{d-1},\quad \eta_k\in C^\infty_c(\mathbb R^{d-1})
\end{equation*}
and most importantly, we have the control
\begin{equation*}
\|\nabla\eta_k\|_{L^\infty}\leq\|\nabla\eta\|_{L^1}
\end{equation*}
uniformly in $k$.
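For completeness, this last control is nothing but differentiation under the integral sign in the definition of $\eta_k$: for all $y'\in\mathbb R^{d-1}$ and all $k\in\mathbb N$,
\begin{equation*}
|\nabla\eta_k(y')|=\left|\int_{[-k-1/2,k+1/2]^{d-1}}\nabla\eta(y'-\tilde{y}')d\tilde{y}'\right|\leq\int_{\mathbb R^{d-1}}|\nabla\eta(z')|dz'=\|\nabla\eta\|_{L^1}.
\end{equation*}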
\subsubsection*{Energy estimate}
Testing the system \eqref{sysblbumpychomor} against $\eta_k^2w_r$ we get
\begin{multline}\label{eqtestetak2}
\int_{\Omega_\flat}\eta_k^2A(y)\nabla w_r\cdot\nabla w_r=-\int_{\Omega_\flat}2\eta_kA(y)\nabla w_r\cdot\nabla\eta_k w_r\\+\langle\nabla\cdot F,\eta_k^2w_r\rangle+\langle f,\eta_k^2w_r\rangle+\langle\DtoN(w_r|_{y_d=0}),\eta_k^2w_r\rangle.
\end{multline}
By ellipticity, we have
\begin{equation*}
\lambda\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2\leq\int_{\Omega_\flat}\eta_k^2A(y)\nabla w_r\cdot\nabla w_r.
\end{equation*}
The following estimate (or variations of it) is of constant use: by the trace theorem and Poincar\'e inequality
\begin{equation}\label{eststvenanta}
\begin{aligned}
\left(\int_{\Sigma_{k+1}}\eta_k^4|w_r(y',0)|^2dy'\right)^{1/2}&\leq \|\eta_k^2w_r\|_{H^{1/2}}\leq C\|\eta_k^2w_r\|_{H^1(\Omega_\flat)}\leq C\|\nabla(\eta_k^2w_r)\|_{L^2(\Omega_\flat)}\\
&\leq C(E_{k+1}-E_k)^{1/2}+C'\left(\int_{\Omega_\flat}\eta_k^4|\nabla w_r|^2\right)^{1/2},
\end{aligned}
\end{equation}
with $C=C(d,\|\psi\|_{W^{1,\infty}},\|\eta\|_{L^1})$ and $C'=C'(d)$. We now estimate every term on the right hand side of \eqref{eqtestetak2}. We have,
\begin{equation*}
\begin{aligned}
\left|\int_{\Omega_\flat}2\eta_kA(y)\nabla w_r\cdot\nabla\eta_k w_r\right|&\leq\frac{2}{\lambda}\left(\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2\right)^{1/2}\left(\int_{\Omega_\flat}|\nabla\eta_k|^2|w_r|^2\right)^{1/2}\\
&\leq C\left(\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2\right)^{1/2}(E_{k+1}-E_k)^{1/2},
\end{aligned}
\end{equation*}
with $C=C(\lambda,\|\eta\|_{L^1})$. We also have,
\begin{equation*}
\begin{aligned}
|\langle\nabla\cdot F,\eta^2_kw_r\rangle|&=|\langle F,\nabla(\eta_k^2w_r)\rangle|\\
&\leq Ck^{\frac{d-1}{2}}(E_{k+1}-E_k)^{1/2}+C'k^{\frac{d-1}{2}}\left(\int_{\Omega_\flat}\eta_k^4|\nabla w_r|^2\right)^{1/2},
\end{aligned}
\end{equation*}
where $C=C(\|v_0\|_{H^{1/2}_{uloc}},\|\nabla\eta\|_{L^1})$ and $C'=C'(\|v_0\|_{H^{1/2}_{uloc}})$, and by the trace theorem and Poincar\'e inequality
\begin{equation*}
\begin{aligned}
|\langle f,\eta_k^2w_r\rangle|&\leq Ck^\frac{d-1}{2}\|\eta_k^2w_r\|_{H^{1/2}}\leq Ck^\frac{d-1}{2}\|\nabla(\eta_k^2w_r)\|_{L^2}\\
&\leq Ck^\frac{d-1}{2}(E_{k+1}-E_k)^{1/2}+C'k^{\frac{d-1}{2}}\left(\int_{\Omega_\flat}\eta_k^4|\nabla w_r|^2\right)^{1/2},
\end{aligned}
\end{equation*}
with $C=C(d,\|\psi\|_{W^{1,\infty}},\|v_0\|_{H^{1/2}_{uloc}},\|\nabla\eta\|_{L^1})$ and $C'=C'(\|v_0\|_{H^{1/2}_{uloc}})$. We now have to tackle the nonlocal term involving the Dirichlet to Neumann operator. We split this term into
\begin{multline*}
\langle\DtoN(w_r|_{y_d=0}),\eta_k^2w_r\rangle=\langle\DtoN((1-\eta_{k+m-1}^2)w_r|_{y_d=0}),\eta_k^2w_r\rangle\\
+\langle\DtoN((\eta_{k+m-1}^2-\eta_k^2)w_r|_{y_d=0}),\eta_k^2w_r\rangle+\langle\DtoN(\eta_{k}^2w_r),\eta_k^2w_r\rangle.
\end{multline*}
By Corollary \ref{cordnneg},
\begin{equation*}
\langle\DtoN(\eta_{k}^2w_r),\eta_k^2w_r\rangle\leq 0.
\end{equation*}
Relying on Proposition \ref{estdnflat} and on estimate \eqref{estdnflatest}, we get
\begin{equation*}
\begin{aligned}
&|\langle\DtoN((\eta_{k+m-1}^2-\eta_k^2)w_r|_{y_d=0}),\eta_k^2w_r\rangle|\leq Ck^{\frac{d-1}{2}}\|(\eta_{k+m-1}^2-\eta_k^2)w_r|_{y_d=0}\|_{H^{1/2}_{uloc}}\|\eta_k^2w_r\|_{H^{1/2}}\\
&\leq C(E_{k+m}-E_k)^{1/2}\left(\int_{\Omega_\flat}\eta_k^4|\nabla w_r|^2\right)^{1/2}+C(E_{k+m}-E_k)^{1/2}(E_{k+1}-E_k)^{1/2},
\end{aligned}
\end{equation*}
with $C=C(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}},\|\eta\|_{L^1})$.
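All the products appearing in the bounds above are eventually absorbed in the left hand side of \eqref{eqtestetak2}. To fix ideas, here is the elementary absorption step, written for one generic term (the constant $C$ below stands for any of the constants above): by Young's inequality,
\begin{equation*}
C\left(\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2\right)^{1/2}(E_{k+m}-E_k)^{1/2}\leq\frac{\lambda}{8}\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2+\frac{2C^2}{\lambda}(E_{k+m}-E_k),
\end{equation*}
and similarly for the terms involving $k^{\frac{d-1}{2}}$ or $\sup_{T\in\mathcal C_{k,m}}E_T$; this is the swallowing argument used at the end of the proof below.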
Notice that the bound \eqref{continuityestdnflathp} for the Dirichlet to Neumann operator in $H^{1/2}$ here is actually enough, since $w_r$ is compactly supported. However, when dealing with solutions not compactly supported, as for the uniqueness proof in section \ref{secendproofbumpy}, we have to use the result of Proposition \ref{estdnflat}. \subsubsection*{Control of the non local term} \begin{lem}\label{lemcontrolnonlocalterm} For all $m\geq 3$, all $k\geq m'=m/2$, we have \begin{equation}\label{lemcontrolnonlocaltermest} \int_{\Sigma_{k+1}}\left(\int_{\mathbb R^{d-1}}\frac{1}{|y'-\tilde{y}'|^d}(1-\eta^2_{k+m-1})|w_r(\tilde{y}',0)|d\tilde{y}'\right)^2dy'\leq C\frac{k^{3d-5}}{m^{3d-3}}\sup_{T\in\mathcal C_{k,m}}E_T, \end{equation} where $C=C(d)$. \end{lem} Let $y'\in \Sigma_{k+1}$ be fixed. We have \begin{equation*} \begin{aligned} &\int_{\mathbb R^{d-1}}\frac{1}{|y'-\tilde{y}'|^d}(1-\eta^2_{k+m-1})|w_r(\tilde{y}',0)|d\tilde{y}'\\ &=\sum_{j=1}^\infty\int_{\mathbb R^{d-1}}\frac{1}{|y'-\tilde{y}'|^d}(\eta^2_{k+(j+1)(m-1)}-\eta^2_{k+m-1})|w_r(\tilde{y}',0)|d\tilde{y}'\\ &=\sum_{j=1}^\infty\int_{\Sigma_{k+(j+1)(m-1)+1}\setminus\Sigma_{k+j(m-1)}}\frac{1}{|y'-\tilde{y}'|^d}|w_r(\tilde{y}',0)|d\tilde{y}'\\ &=\sum_{j=1}^\infty\sum_{T\in\mathcal C_{k,j,m}}\int_{T}\frac{1}{|y'-\tilde{y}'|^d}|w_r(\tilde{y}',0)|d\tilde{y}', \end{aligned} \end{equation*} where $\mathcal C_{k,j,m}$ is a family of disjoint cubes $T=\xi+(-m',m')^{d-1}$ such that $T\subset\Sigma_{k+(j+1)(m-1)+1}\setminus\Sigma_{k+j(m-1)}$ and \begin{equation*} \bigsqcup_{T\in \mathcal C_{k,j,m}}T=\Sigma_{k+(j+1)(m-1)+1}\setminus\Sigma_{k+j(m-1)}. \end{equation*} For all $T\in \mathcal C_{k,j,m}$, by Cauchy-Schwarz, trace theorem and Poincar\'e inequality \begin{equation*} \begin{aligned} \int_{T}\frac{1}{|y'-\tilde{y}'|^d}|w_r(\tilde{y}',0)|d\tilde{y}'&\leq\left(\int_T\frac{1}{|y'-\tilde{y}'|^{2d}}d\tilde{y}'\right)^{1/2}\left(\int_T|w(\tilde{y}',0)|^2d\tilde{y}'\right)^{1/2}\\ &\leq C\left(\int_T\frac{1}{|y'-\tilde{y}'|^{2d}}d\tilde{y}'\right)^{1/2}\left(\int_{\Omega_T}|\nabla w|^2d\tilde{y}'\right)^{1/2}\\ &\leq C\left(\int_T\frac{1}{|y'-\tilde{y}'|^{2d}}d\tilde{y}'\right)^{1/2}\left(\sup_{T\in \mathcal C_{k,j,m}}E_T\right)^{1/2}, \end{aligned} \end{equation*} where $\Omega_T$ and $E_T$ are defined in \eqref{defomegaT}. Notice that the constant $C$ in the last inequality only depends on $d$ and on $\|\psi\|_{W^{1,\infty}}$. Moreover, for any $T\in \mathcal C_{k,j,m}$, \begin{equation*} \left(\int_T\frac{1}{|y'-\tilde{y}'|^{2d}}d\tilde{y}'\right)^{1/2}\leq \frac{m^{\frac{d-1}{2}}}{(k+j(m-1)-|y'|)^d}, \end{equation*} and the number of elements of $\mathcal C_{k,j,m}$ is bounded by \begin{equation*} \#\mathcal C_{k,j,m}=\frac{\left|\Sigma_{k+(j+1)(m-1)+1}\setminus\Sigma_{k+j(m-1)}\right|}{m^{d-1}}\lesssim\frac{(k+j(m-1))^{d-2}}{m^{d-2}}. \end{equation*} Therefore, \begin{equation*} \begin{aligned} &\int_{\mathbb R^{d-1}}\frac{1}{|y'-\tilde{y}'|^d}(1-\eta^2_{k+m-1})|w_r(\tilde{y}',0)|d\tilde{y}'\\ &\leq C\left(\sup_{T\in \mathcal C_{k,j,m}}E_T\right)^{1/2}\sum_{j=1}^\infty\sum_{T\in\mathcal C_{k,j,m}}\frac{m^{\frac{d-1}{2}}}{(k+j(m-1)-|y'|)^d}\\ &\leq C\left(\sup_{T\in \mathcal C_{k,j,m}}E_T\right)^{1/2}\sum_{j=1}^\infty\frac{1}{m^{\frac{d-3}{2}}}\frac{(k+j(m-1))^{d-2}}{(k+j(m-1)-|y'|)^d}\\ &\leq C\left(\sup_{T\in \mathcal C_{k,j,m}}E_T\right)^{1/2}\frac{(k+m-1)^{d-2}}{m^{\frac{d-1}{2}}(k+m-1-|y'|)^{d-1}}, \end{aligned} \end{equation*} with $C=C(d)$. 
Eventually, we get for $m\geq 3$
\begin{equation*}
\begin{aligned}
&\int_{\Sigma_{k+1}}\left(\int_{\mathbb R^{d-1}}\frac{1}{|y'-\tilde{y}'|^d}(1-\eta^2_{k+m-1})|w_r(\tilde{y}',0)|d\tilde{y}'\right)^2dy'\\
&\leq C\left(\sup_{T\in \mathcal C_{k,m}}E_T\right)\frac{(k+m-1)^{2d-4}}{m^{d-1}}\int_{\Sigma_{k+1}}\frac{1}{(k+m-1-|y'|)^{2d-2}}dy'\\
&\leq C\left(\sup_{T\in \mathcal C_{k,m}}E_T\right)\frac{(k+m-1)^{2d-4}}{m^{d-1}}\frac{(k+1)^{d-1}}{(m-2)^{2d-2}}\leq C\frac{k^{3d-5}}{m^{3d-3}}\sup_{T\in \mathcal C_{k,m}}E_T,
\end{aligned}
\end{equation*}
with $C=C(d)$, the last inequality being valid only under the condition $k\geq m/2=m'$. This proves Lemma \ref{lemcontrolnonlocalterm}.
In particular, by the definition of $\DtoN$ in \eqref{defdnh12ulocflat}, by the fact that $(1-\eta^2_{k+m-1})w_r(\tilde{y}',0)$ and $\eta_k^2w_r(y',0)$ have disjoint support, by estimate \eqref{lemcontrolnonlocaltermest} and by the bound \eqref{eststvenanta} we get
\begin{equation*}
\begin{aligned}
&|\langle\DtoN((1-\eta_{k+m-1}^2)w_r|_{y_d=0}),\eta_k^2w_r\rangle|\\
&\leq C\left|\int_{\mathbb R^{d-1}}\int_{\mathbb R^{d-1}}\frac{1}{|y'-\tilde{y}'|^d}(1-\eta^2_{k+m-1}(\tilde{y}'))|w_r(\tilde{y}',0)|\eta_k^2(y')|w_r(y',0)|d\tilde{y}'dy'\right|\\
&\leq C\left(\int_{\Sigma_{k+1}}\eta_k^4|w_r(y',0)|^2dy'\right)^{1/2}\left(\int_{\Sigma_{k+1}}\left(\int_{\mathbb R^{d-1}}\frac{1-\eta^2_{k+m-1}(\tilde{y}')}{|y'-\tilde{y}'|^d}|w_r(\tilde{y}',0)|d\tilde{y}'\right)^2dy'\right)^{1/2},\\
&\leq C\frac{k^\frac{3d-5}{2}}{m^\frac{3d-3}{2}}\left(\int_{\Sigma_{k+1}}\eta_k^4|w_r(y',0)|^2dy'\right)^{1/2}\left(\sup_{T\in \mathcal C_{k,m}}E_T\right)^\frac{1}{2}\\
&\leq C\frac{k^\frac{3d-5}{2}}{m^\frac{3d-3}{2}}\left[(E_{k+1}-E_k)^{1/2}+\left(\int_{\Omega_\flat}\eta_k^4|\nabla w_r|^2\right)^{1/2}\right]\left(\sup_{T\in \mathcal C_{k,m}}E_T\right)^\frac{1}{2},
\end{aligned}
\end{equation*}
with $C=C(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}})$.
\subsubsection*{End of the proof of the Saint-Venant estimate}
Combining all our bounds and using
\begin{equation*}
E_{k+1}-E_k\leq E_{k+m}-E_k,\qquad \eta_k^4\leq C(\|\eta\|_{L^\infty})\eta_k^2
\end{equation*}
whenever possible, we get from \eqref{eqtestetak2} the following estimate
\begin{multline*}
\lambda\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2\leq C\left(\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2\right)^{1/2}(E_{k+m}-E_k)^{1/2}+Ck^\frac{d-1}{2}(E_{k+m}-E_k)^{1/2}+C(E_{k+m}-E_k)\\
+Ck^\frac{d-1}{2}\left(\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2\right)^{1/2}+C(E_{k+m}-E_k)^{1/2}\left(\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2\right)^{1/2}\\
+C\frac{k^\frac{3d-5}{2}}{m^\frac{3d-3}{2}}\left[(E_{k+1}-E_k)^{1/2}+\left(\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2\right)^{1/2}\right]\left(\sup_{T\in \mathcal C_{k,m}}E_T\right)^{1/2},
\end{multline*}
with $C=C(d,N,\lambda,[A]_{C^{0,\nu}},\|v_0\|_{H^{1/2}_{uloc}},\|\psi\|_{W^{1,\infty}})$. Swallowing every term of the type
\begin{equation*}
\int_{\Omega_\flat}\eta_k^2|\nabla w_r|^2
\end{equation*}
in the left hand side, thanks to Young's inequality as indicated above, we end up with the Saint-Venant estimate \eqref{stvenantest}. This concludes the proof of Proposition \ref{propaprioriest}.
\subsection{End of the proof of Theorem \ref{theoblbumpy}}
\label{secendproofbumpy}
Extracting subsequences using a classical diagonal argument and passing to the limit in the weak formulation of \eqref{sysblbumpychomon} relying on the continuity of the Dirichlet to Neumann map asserted in estimate \eqref{continuityestdnflathp} yields the existence of a weak solution $w$ to the system \eqref{sysblbumpychomo}.
In addition, the weak solution satisfies the bound \begin{equation}\label{estwuloc} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^\infty|\nabla w|^2dy_ddy'\leq 2^{2-d}Am^{d-1}<\infty. \end{equation} Let us turn to the uniqueness of the solution to \eqref{sysblbumpychomo} satisfying the bound \eqref{estwuloc}. By linearity of the problem, it is enough to prove the uniqueness for zero source terms. Assume $w\in H^1_{loc}(\Omega_\flat)$ is a weak solution to \eqref{sysblbumpychomo} with $f=F=0$ satisfying \begin{equation}\label{boundulocC_0} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^0|\nabla w|^2\leq C_0<\infty. \end{equation} Repeating the estimates leading to Proposition \ref{propaprioriest} (see section \ref{secproofpropaprioriest}), we infer that for the same constant $C^*$ appearing in the Saint-Venant estimate \eqref{stvenantest} and for $m$ defined by \eqref{eqdefm}, for $k\in\mathbb N$, $k\geq m/2=m'$, \begin{equation}\label{stvenantestzerosource} E_k\leq C^*\left(E_{k+m}-E_k+\frac{k^{3d-5}}{m^{3d-3}}\sup_{T\in\mathcal C_{k,m}}E_T\right). \end{equation} The fact that $w$, unlike $w_n$, does not vanish outside $\Omega_{\flat,n}$ does not lead to any difference in the proof of this estimate. Since \begin{equation*} \sup_{T\in\mathcal C_{m}}E_T<\infty, \end{equation*} for any $\varepsilon$, there exists $T^*_\varepsilon\in\mathcal C_{m}$ such that \begin{equation}\label{eqsqueeze} \sup_{T\in\mathcal C_{m}}E_T-\varepsilon\leq E_{T^*_\varepsilon}\leq \sup_{T\in\mathcal C_{m}}E_T. \end{equation} Again, $T^*_\varepsilon:=\xi^*_\varepsilon+(-m',m')^{d-1}$ for $\xi^*_\varepsilon\in \mathbb Z^{d-1}$, and we can translate $T^*_\varepsilon$ so that it is centered at the origin as has been done in section \ref{secproofaprioribound}. Estimate \eqref{stvenantestzerosource} still holds. For any $n\in\mathbb N$, $E_n\leq C_0n^{d-1}$ where $C_0$ is defined by \eqref{boundulocC_0}. The idea is now to carry out a downward iteration. For any $n=(2l+1)m'$ with $l\in\mathbb N$, $l\geq 1$ fixed, for $p\in\{1,\ldots\ l-1\}$ one can show that \begin{equation*} \begin{aligned} E_{(2p+1)m'}&\leq \left[\frac{C^*}{C^*+1}+\left(\frac{C^*}{C^*+1}\right)^2+\ldots\ \left(\frac{C^*}{C^*+1}\right)^{l-p}\right]E_n\\ &\qquad\qquad+\frac{2^{5-3d}}{m^2}\left[\frac{C^*}{C^*+1}(2p+1)^{3d-5}+\ldots\ \left(\frac{C^*}{C^*+1}\right)^{l-p}(2l+1)^{3d-5}\right]\sup_{T\in\mathcal C_{m}}E_T\\ &\leq C_0\frac{C^*+1}{2C^*+1}\left(\frac{C^*}{C^*+1}\right)^{l-p+1}n^{d-1}\\ &\qquad\qquad+\frac{2^{5-3d}}{m^2}\left[\frac{C^*}{C^*+1}(2p+1)^{3d-5}+\ldots\ \left(\frac{C^*}{C^*+1}\right)^{l-p}(2l+1)^{3d-5}\right]\sup_{T\in\mathcal C_{m}}E_T. \end{aligned} \end{equation*} Thus, \begin{equation*} \begin{aligned} E_{m'}&\leq C_0\frac{C^*+1}{2C^*+1}\left(\frac{C^*}{C^*+1}\right)^{l}(2l+1)^{d-1}(m')^{d-1}+\frac{2^{5-3d}}{m^2}B\sup_{T\in\mathcal C_{m}}E_T\\ &\leq C_0\frac{C^*+1}{2C^*+1}\left(\frac{C^*}{C^*+1}\right)^{2l+1}(2l+1)^{d-1}(m')^{d-1}+\frac{2^{5-3d}}{m^2}B(E_{m'}+\varepsilon). \end{aligned} \end{equation*} From this we infer using \eqref{eqdefm} that \begin{equation*} E_{m'}\leq 2C_0\frac{C^*+1}{2C^*+1}\left(\frac{C^*}{C^*+1}\right)^{2l+1}(2l+1)^{d-1}(m')^{d-1}+\frac{2^{6-3d}}{m^2}B\varepsilon\stackrel{l\rightarrow\infty}{\longrightarrow}\frac{2^{6-3d}}{m^2}B\varepsilon. 
\end{equation*} Therefore, from equation \eqref{eqsqueeze} \begin{equation*} \sup_{T\in\mathcal C_{m}}E_T\leq \left(1+\frac{2^{6-3d}}{m^2}B\right)\varepsilon, \end{equation*} which eventually leads to $\sup_{T\in\mathcal C_{m}}E_T=0$, or in other words $w=0$. Combining this existence and uniqueness result for the system \eqref{sysblbumpychomo} in the bumpy channel $\Omega_\flat$ with Lemma \ref{lemeqsysdec} and Theorem \ref{theoblflat} about the well-posedness in the flat half-space finishes the proof of Theorem \ref{theoblbumpy}. \section{Improved regularity over Lipschitz boundaries} \label{secimpro} The goal in this section is to prove Theorem \ref{theolipdowmicro} of the introduction. Let us recall the result we prove in the following proposition. \begin{prop}\label{proplipdowmicro} For all $\nu>0$, $\gamma>0$, there exists $C>0$ and $\varepsilon_0>0$ such that for all $\psi\in W^{1,\infty}(\mathbb R^{d-1})$, $-1<\psi<0$ and $\|\nabla\psi\|_{L^\infty}\leq\gamma$, for all $A\in\mathcal A^\nu$, for all $0<\varepsilon<(1/2)\varepsilon_0$, for all weak solution $u^\varepsilon$ to \eqref{sysuepsintro}, for all $r\in[\varepsilon/\varepsilon_0,1/2]$ \begin{equation}\label{estlipdownmicroinprop} {- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,r)}|\nabla u^\varepsilon|^2\leq C{- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,1)}|\nabla u^\varepsilon|^2, \end{equation} or equivalently, \begin{equation*} {- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,r)}|u^\varepsilon|^2\leq Cr^2{- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,1)}|u^\varepsilon|^2, \end{equation*} with $C=C(d,N,\lambda,\nu,\gamma,[A]_{C^{0,\nu}})$. \end{prop} We rely on a compactness argument inspired by the pioneering work of Avellaneda and Lin \cite{alin,Alin90P}, and our recent work \cite{BLtailosc}. The proof is in two steps. Firstly, we carry out the compactness argument. Secondly, we iterate the estimate obtained in the first step, to get an estimate down to the microscopic scale $O(\varepsilon)$. A key step in the proof of boundary Lipschitz estimates is to estimate boundary layer correctors, which is done by combining the classical Lipschitz estimate with a uniform H\"older estimate, as in \cite[Lemma 17]{alin} or \cite[Lemma 10]{BLtailosc}. We are able to relax the regularity assumption on $\psi$. This progress is enabled by our new estimate \eqref{estbumpyhpuloc} for the boundary layer corrector, which holds for Lipschitz boundaries $\psi$. We begin with an estimate which is of constant use in this part of our work. Take $\psi\in W^{1,\infty}(\mathbb R^{d-1})$ and $A\in\mathcal A^{\nu}$. By Cacciopoli's inequality, there exists $C>0$ such that for all $\varepsilon>0$, for all weak solution $u^\varepsilon$ to \begin{equation}\label{sysuepslemstep1} \left\{\begin{array}{rll} -\nabla\cdot A(x/\varepsilon)\nabla u^\varepsilon&=0,&x\in D^\varepsilon_\psi(0,1),\\ u^\varepsilon&=0,&x\in\Delta^\varepsilon_\psi(0,1), \end{array}\right. \end{equation} for all $0<\theta<1$, \begin{equation}\label{estcacciomeanpartialdueps} \begin{aligned} \left|(\overline{\partial_{x_d} u^\varepsilon})_{D_\psi(0,\theta)}\right|&=\left|{- \hspace{- 1.05 em}} \int_{D_\psi(0,\theta)}\partial_{x_d}u^\varepsilon\right|\leq \left({- \hspace{- 1.05 em}} \int_{D_\psi(0,\theta)}|\partial_{x_d}u^\varepsilon|^2\right)^{1/2}\\ &\leq \frac{C_0}{\theta^{d/2}(1-\theta)}\left({- \hspace{- 1.05 em}} \int_{D_\psi(0,1)}|u^\varepsilon|^2\right)^{1/2}. \end{aligned} \end{equation} Notice that $C_0$ in \eqref{estcacciomeanpartialdueps} only depends on $\lambda$. 
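Let us also record the role this estimate plays in the sequel: in the iteration lemma below (Lemma \ref{lemstep2lipdownmicro}), the constant $a^\varepsilon_k$ is built by adding, at each step, an averaged slope of a rescaled solution of unit $L^2$ size, so that each increment is controlled by \eqref{estcacciomeanpartialdueps}. Summing the resulting geometric series, the bound stated there yields
\begin{equation*}
|a^\varepsilon_k|\leq C_0\frac{1+\theta^\mu+\ldots+\theta^{\mu(k-1)}}{\theta^{d/2}(1-\theta)}\leq\frac{C_0}{\theta^{d/2}(1-\theta)(1-\theta^\mu)},
\end{equation*}
uniformly in $k$ and $\varepsilon$; this uniform bound is what is used in the final estimate leading to Proposition \ref{proplipdowmicro}.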
Proposition \ref{proplipdowmicro} is a consequence of the two following lemmas. The first one contains the compactness argument. The second one is the iteration lemma. In order to alleviate the statement of the following lemma, the definition of the boundary layer $v$ is given straight after the lemma. \begin{lem}\label{lemstep1lipdownmicro} For all $\nu>0$, $\gamma>0$, there exists $\theta>0$, $0<\mu<1$, $\varepsilon_0>0$, such that for all $\psi\in W^{1,\infty}(\mathbb R^{d-1})$, $-1<\psi<0$ and $\|\nabla\psi\|_{L^\infty}\leq\gamma$, for all $A\in\mathcal A^{\nu}$, for all $0<\varepsilon<\varepsilon_0$, for all weak solution $u^\varepsilon$ to \eqref{sysuepslemstep1} we have \begin{equation*} {- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,1)}|u^\varepsilon|^2\leq 1 \end{equation*} implies \begin{equation*} {- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta)}\left|u^\varepsilon(x)-(\overline{\partial_{x_d} u^\varepsilon})_{D^\varepsilon_\psi(0,\theta)}\left[x_d+\varepsilon\chi^d(x/\varepsilon)+\varepsilon v(x/\varepsilon)\right]\right|^2dy\leq \theta^{2+2\mu}. \end{equation*} \end{lem} The boundary layer $v=v(y)$ is the unique solution given by Theorem \ref{theoblbumpy} to the system \begin{equation}\label{blsysiteration} \left\{ \begin{array}{rll} -\nabla\cdot A(y)\nabla v&=0,&y_d>\psi(y'),\\ v&=y_d+\chi^d(y),&y_d=\psi(y'). \end{array} \right. \end{equation} The estimate of Theorem \ref{theoblbumpy} implies \begin{equation*} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^\infty|\nabla v|^2\leq C\left\{\|\psi\|_{H^{1/2}_{uloc}(\mathbb R^{d-1})}+\|\chi(\cdot,\psi(\cdot))\|_{H^{1/2}_{uloc}(\mathbb R^{d-1})}\right\}, \end{equation*} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}})$. Now, by Sobolev injection $W^{1,\infty}(\mathbb R^{d-1})\hookrightarrow H^{1/2}(\mathbb R^{d-1})$ \begin{equation*} \|\psi\|_{H^{1/2}_{uloc}(\mathbb R^{d-1})}\leq C\|\psi\|_{W^{1,\infty}(\mathbb R^{d-1})}, \end{equation*} with $C=C(d)$ and by classical interior Lipschitz regularity \begin{equation*} \|\chi(\cdot,\psi(\cdot))\|_{H^{1/2}_{uloc}}\leq C\|\chi(\cdot,\psi(\cdot))\|_{W^{1,\infty}(\mathbb R^{d-1})}\leq C\|\chi\|_{W^{1,\infty}(\mathbb R^d)}\leq C, \end{equation*} with in the last inequality $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$. Eventually, \begin{equation}\label{mainestblcorrector} \sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi(y')}^\infty|\nabla v|^2\leq C, \end{equation} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}})$ uniform in $\varepsilon$. \begin{lem}\label{lemstep2lipdownmicro} Let $\theta$, $\varepsilon_0$ and $\gamma$ be given as in Lemma \ref{lemstep1lipdownmicro}. 
For all $\psi\in W^{1,\infty}(\mathbb R^{d-1})$, $-1<\psi<0$ and $\|\nabla\psi\|_{L^\infty}\leq\gamma$, for all $A\in\mathcal A^{\nu}$, for all $k\in\mathbb N$, $k>0$, for all $0<\varepsilon<\theta^{k-1}\varepsilon_0$, for all weak solution $u^\varepsilon$ to \eqref{sysuepslemstep1} there exists $a^\varepsilon_k\in\mathbb R^N$ satisfying
\begin{equation*}
|a^\varepsilon_k|\leq C_0\frac{1+\theta^\mu+\ldots\ \theta^{\mu(k-1)}}{\theta^{d/2}(1-\theta)},
\end{equation*}
such that
\begin{equation*}
{- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,1)}|u^\varepsilon|^2\leq 1
\end{equation*}
implies
\begin{equation}\label{estitstep2orderk}
{- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}\left|u^\varepsilon(x)-a^\varepsilon_k\left[x_d+\varepsilon\chi^d(x/\varepsilon)+\varepsilon v(x/\varepsilon)\right]\right|^2dy\leq \theta^{(2+2\mu)k},
\end{equation}
where $v=v(y)$ is the solution, given by Theorem \ref{theoblbumpy}, to the boundary layer system \eqref{blsysiteration}.
\end{lem}
The condition $\varepsilon<\theta^{k-1}\varepsilon_0$ can be seen as giving a lower bound on the scales $\theta^k$ for which one can prove the regularity estimate: $\theta^{k-1}>\varepsilon/\varepsilon_0$. From that perspective, estimate \eqref{estitstep2orderk} is an improved $C^{1,\mu}$ estimate down to the microscale $\varepsilon/\varepsilon_0$. For fixed $0<\varepsilon/\varepsilon_0<1/2$ and $r\in[\varepsilon/\varepsilon_0,1/2]$, there exists $k\in\mathbb N$ such that $\theta^{k+1}<r\leq\theta^k$. We aim at estimating
\begin{equation*}
{- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,r)}|u^\varepsilon(x)|^2
\end{equation*}
using the bound \eqref{estitstep2orderk}. We have
\begin{equation}\label{splittrianglefinalest}
\begin{aligned}
&\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,r)}|u^\varepsilon(x)|^2\right)^{1/2}\leq\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|u^\varepsilon(x)|^2\right)^{1/2}\\
&\leq\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}\left|u^\varepsilon(x)-a^\varepsilon_k\left[x_d+\varepsilon\chi^d(x/\varepsilon)+\varepsilon v(x/\varepsilon)\right]\right|^2dy\right)^{1/2}\\
&\qquad\qquad+|a^\varepsilon_k|\left\{\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|x_d|^2\right)^{1/2}+\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|\varepsilon\chi^d(x/\varepsilon)|^2\right)^{1/2}+\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|\varepsilon v(x/\varepsilon)|^2\right)^{1/2}\right\}.
\end{aligned}
\end{equation}
Let us focus on the term involving the boundary layer. Let $\eta=\eta(y_d)\in C^\infty_c(\mathbb R)$ be a cut-off such that $\eta\equiv 1$ on $(-1,1)$ and $\supp\eta\subset (-2,2)$. The triangle inequality yields
\begin{multline*}
\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|\varepsilon v(x/\varepsilon)|^2\right)^{1/2}\leq \left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|\varepsilon v(x/\varepsilon)-(x_d+\varepsilon\chi^d(x/\varepsilon))\eta(x_d/\varepsilon)|^2\right)^{1/2}\\
+\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|(x_d+\varepsilon\chi^d(x/\varepsilon))\eta(x_d/\varepsilon)|^2\right)^{1/2}.
\end{multline*} Poincar\'e's inequality implies \begin{equation*} \begin{aligned} &\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|\varepsilon v(x/\varepsilon)-(x_d+\varepsilon\chi^d(x/\varepsilon))\eta(x_d/\varepsilon)|^2\right)^{1/2}\\ &\leq \theta^k\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}\left|\nabla\left(\varepsilon v(x/\varepsilon)-(x_d+\varepsilon\chi^d(x/\varepsilon))\eta(x_d/\varepsilon)\right)\right|^2\right)^{1/2}\\ &\leq \theta^k\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|\nabla v(x/\varepsilon)|^2\right)^{1/2}+(1+\|\nabla\chi\|_{L^\infty}^2)\theta^k\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|\eta(x_d/\varepsilon)|^2\right)^{1/2}\\ &\qquad\qquad+\frac{\theta^k}{\varepsilon}\left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|(x_d+\varepsilon\chi^d(x/\varepsilon))\eta'(x_d/\varepsilon)|^2\right)^{1/2}. \end{aligned} \end{equation*} Estimate \eqref{mainestblcorrector} now yields \begin{equation*} {- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|\nabla v(x/\varepsilon)|^2\leq C\varepsilon\theta^{-k}, \end{equation*} so that eventually using $\varepsilon/\varepsilon_0\leq r\leq\theta^k$, \begin{equation*} \left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}|\varepsilon v(x/\varepsilon)-(x_d+\varepsilon\chi^d(x/\varepsilon))\eta(x_d/\varepsilon)|^2\right)^{1/2}\leq C\left(\varepsilon^{1/2}\theta^{k/2}+\theta^k+\frac{\theta^k}{\varepsilon}(\varepsilon+\varepsilon)\right)\leq C\theta^k \end{equation*} with $C=C(d,N,\lambda,[A]_{C^{0,\nu}},\|\psi\|_{W^{1,\infty}})$. It follows from \eqref{splittrianglefinalest} and \eqref{estitstep2orderk} that \begin{equation*} \left({- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,r)}|u^\varepsilon(x)|^2\right)^{1/2}\leq\theta^{(1+\mu)k}+C\theta^k\leq C\theta^k\leq Cr, \end{equation*} which is the estimate of Proposition \ref{proplipdowmicro}. \subsection{Proof of Lemma \ref{lemstep1lipdownmicro}} Let $0<\theta<1/8$ and $u^0\in H^1(D_0(0,1/4))$ be a weak solution of \begin{equation}\label{sysu0} \left\{ \begin{array}{rll} -\nabla\cdot A^0\nabla u^0&=0,&x\in D_0(0,1/4),\\ u^0&=0,&x\in \Delta_0(0,1/4), \end{array} \right. \end{equation} such that \begin{equation*} {- \hspace{- 1.05 em}} \int_{D_0(0,1/4)}|u^0|^2\leq 4^d. \end{equation*} The classical regularity theory yields $u^0\in C^2(\overline{D_0(0,1/8)})$. Using that for all $x\in D_0(0,\theta)$ \begin{equation*} \begin{aligned} u^0(x)-\left(\overline{\partial_{x_d} u^0}\right)_{0,\theta}x_d&=u^0(x)-u^0(x',0)-\left(\overline{\partial_{x_d} u^0}\right)_{0,\theta}x_d\\ &=\frac{1}{|D^0(0,\theta)|}\int_0^1\int_{D_0(0,\theta)}\left(\partial_{x_d}u^0(x',tx_d)-\partial_{x_d}u^0(y)\right)x_ddydt. \end{aligned} \end{equation*} we get \begin{equation}\label{classSchauderu0} {- \hspace{- 1.05 em}} \int_{D_0(0,\theta)}\left|u^0(x)-(\overline{\partial_{x_d} u^0})_{D_0(0,\theta)}x_d\right|^2dy\leq \widehat{C}\theta^4, \end{equation} where $\widehat{C}=\widehat{C}(d,N,\lambda)$. Fix $0<\mu<1$. Choose $0<\theta<1/8$ sufficiently small such that \begin{equation}\label{estthetatocontra} \theta^{2+2\mu}>\widehat{C}\theta^4. \end{equation} The rest of the proof is by contradiction. Fix $\gamma>0$. 
Assume that for all $k\in\mathbb N$, $k\geq 1$, there exists $\psi_k\in W^{1,\infty}(\mathbb R^{d-1})$,
\begin{equation}\label{condpsik}
-1<\psi_k<0\quad\mbox{and}\quad\|\nabla\psi_k\|_{L^\infty}\leq \gamma,
\end{equation}
there exists $A_k\in\mathcal A^\nu$, there exists $0<\varepsilon_k<1/k$, there exists $u^{\varepsilon_k}_k$ solving
\begin{equation*}
\left\{\begin{array}{rll}
-\nabla\cdot A_k(x/\varepsilon_k)\nabla u^{\varepsilon_k}_k&=0,&x\in D^{\varepsilon_k}_{\psi_k}(0,1),\\
u^{\varepsilon_k}_k&=0,&x\in\Delta^{\varepsilon_k}_{\psi_k}(0,1),
\end{array}\right.
\end{equation*}
such that
\begin{equation}\label{contrastep1hyp}
{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,1)}|u^{\varepsilon_k}_k|^2\leq 1
\end{equation}
and
\begin{equation}\label{contrastep1est}
{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}\left|u^{\varepsilon_k}_k(x)-(\overline{\partial_{x_d} u^{\varepsilon_k}_k})_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}\left[x_d+\varepsilon_k\chi^d_k(x/\varepsilon_k)+\varepsilon_k v_k(x/\varepsilon_k)\right]\right|^2dy>\theta^{2+2\mu}.
\end{equation}
Notice that $\chi^d_k$ is the cell corrector associated to the operator $-\nabla\cdot A_k(y)\nabla$ and $v_k$ is the boundary layer corrector associated to $-\nabla\cdot A_k(y)\nabla$ and to the domain $y_d>\psi_k(y')$. First of all, for technical reasons, let us extend $u^{\varepsilon_k}_k$ by zero below the boundary, on $\{x'\in(-1,1)^{d-1},\ x_d\leq\varepsilon_k\psi_k(x'/\varepsilon_k)\}$. The extended functions are still denoted the same, and $u^{\varepsilon_k}_k$ is a weak solution of
\begin{equation*}
-\nabla\cdot A_k(x/\varepsilon_k)\nabla u^{\varepsilon_k}_k=0
\end{equation*}
on $\{x'\in(-1,1)^{d-1},\ x_d\leq\varepsilon_k\psi_k(x'/\varepsilon_k)+1\}$. For $k$ sufficiently large, by Cacciopoli's inequality,
\begin{equation*}
\int_{(-1/4,1/4)^{d}}|\nabla u^{\varepsilon_k}_k|^2\leq C\int_{(-1/2,1/2)^{d}}|u^{\varepsilon_k}_k|^2\leq C,
\end{equation*}
where $C=C(d,N,\lambda)$. Therefore, up to a subsequence, which we denote again by $u^{\varepsilon_k}_k$, we have
\begin{equation}\label{cvstep1uepsk}
\begin{aligned}
u^{\varepsilon_k}_k\stackrel{k\rightarrow\infty}{\longrightarrow}u^0,\quad\mbox{strongly in}\quad L^2((-1/4,1/4)^{d-1}\times(-1,1/4)),\\
\nabla u^{\varepsilon_k}_k\stackrel{k\rightarrow\infty}{\rightharpoonup}\nabla u^0,\quad\mbox{weakly in}\quad L^2((-1/4,1/4)^{d-1}\times(-1,1/4)).
\end{aligned}
\end{equation}
Moreover, $\varepsilon_k\psi_k(\cdot/\varepsilon_k)$ converges uniformly to $0$ because $\psi_k$ is bounded uniformly in $k$ (see \eqref{condpsik}). Let $\varphi\in C^\infty_c(D_0(0,1/4))$. Theorem \ref{theoweakcvhomo} implies that
\begin{equation*}
\int_{D^{\varepsilon_k}_{\psi_k}(0,1/4)}A_k(x/\varepsilon_k)\nabla u^{\varepsilon_k}_k\cdot\nabla\varphi\stackrel{k\rightarrow\infty}{\longrightarrow}\int_{D_0(0,1/4)}A^0\nabla u^0\nabla\varphi,
\end{equation*}
so that $u^0$ is a weak solution to
\begin{equation*}
-\nabla\cdot A^0\nabla u^0=0\quad\mbox{in}\quad D_0(0,1/4).
\end{equation*}
Furthermore, for all $\varphi\in C^\infty_c((-1/4,1/4)^{d-1}\times(-1,0))$,
\begin{equation*}
0=\int_{\{x'\in(-1/4,1/4)^{d-1},\ -1\leq x_d\leq\varepsilon_k\psi_k(x'/\varepsilon_k)\}}u^{\varepsilon_k}_k\varphi\stackrel{k\rightarrow\infty}{\longrightarrow}\int_{(-1/4,1/4)^{d-1}\times(-1,0)}u^0\varphi,
\end{equation*}
so that $u^0(x)=0$ for all $x\in (-1/4,1/4)^{d-1}\times(-1,0)$. In particular, $u^0=0$ in $H^{1/2}(\Delta_0(0,1/4))$. Thus, $u^0$ is a solution to \eqref{sysu0} and satisfies the estimate \eqref{classSchauderu0}.
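Note also that the limit $u^0$ inherits the normalization required for \eqref{classSchauderu0}. For $k$ large enough, $D_0(0,1/4)$ is contained in $D^{\varepsilon_k}_{\psi_k}(0,1)$ up to the region below the oscillating boundary, where $u^{\varepsilon_k}_k$ vanishes; hence, by the strong convergence in \eqref{cvstep1uepsk} and \eqref{contrastep1hyp},
\begin{equation*}
\int_{D_0(0,1/4)}|u^0|^2=\lim_{k\rightarrow\infty}\int_{D_0(0,1/4)}|u^{\varepsilon_k}_k|^2\leq\liminf_{k\rightarrow\infty}\int_{D^{\varepsilon_k}_{\psi_k}(0,1)}|u^{\varepsilon_k}_k|^2\leq\liminf_{k\rightarrow\infty}\left|D^{\varepsilon_k}_{\psi_k}(0,1)\right|.
\end{equation*}
Taking for granted, consistently with the scaling of the domains $D^\varepsilon_\psi(0,r)$ and the choice of the constant $4^d$ at the beginning of the proof, that $\liminf_{k\rightarrow\infty}|D^{\varepsilon_k}_{\psi_k}(0,1)|\leq 4^d\,|D_0(0,1/4)|$, this gives ${- \hspace{- 1.05 em}} \int_{D_0(0,1/4)}|u^0|^2\leq 4^d$.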
It remains to pass to the limit in \eqref{contrastep1est} to reach a contradiction. Since $|D^{\varepsilon_k}_{\psi_k}(0,\theta)|=|D_0(0,\theta)|$, we have \begin{multline}\label{cvmeanstep1} \left|(\overline{\partial_{x_d}u^{\varepsilon_k}_k})_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}-(\overline{\partial_{x_d}u^0})_{D_0(0,\theta)}\right|\leq \frac{1}{|D_0(0,\theta)|}\left[\left|\int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)\cap D_0(0,\theta)}\left(\partial_{x_d}u^{\varepsilon_k}_k-\partial_{x_d}u^0\right)\right|\right.\\ \left.+\int_{\left(D^{\varepsilon_k}_{\psi_k}(0,\theta)\setminus D_0(0,\theta)\right)\cup\left(D_0(0,\theta)\setminus D^{\varepsilon_k}_{\psi_k}(0,\theta)\right)}\left|\partial_{x_d}u^{\varepsilon_k}_k-\partial_{x_d}u^0\right|\right]. \end{multline} The first term in the right hand side of \eqref{cvmeanstep1} tends to $0$ thanks to the weak convergence of $\nabla u^{\varepsilon_k}_k$ in \eqref{cvstep1uepsk}. The second term in the right hand side of \eqref{cvmeanstep1} goes to $0$ when $k\rightarrow\infty$ because of the $L^2$ bound on the gradient, and the fact that \begin{equation*} \left|\left(D^{\varepsilon_k}_{\psi_k}(0,\theta)\setminus D_0(0,\theta)\right)\cup\left(D_0(0,\theta)\setminus D^{\varepsilon_k}_{\psi_k}(0,\theta)\right)\right|\stackrel{k\rightarrow\infty}{\longrightarrow} 0. \end{equation*} Therefore, \begin{equation*} {- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)\cap D_0(0,\theta)}\left|(\overline{\partial_{x_d} u^{\varepsilon_k}_k})_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}\left[x_d+\varepsilon_k\chi^d_k(x/\varepsilon_k)\right]-(\overline{\partial_{x_d}u^0})_{D_0(0,\theta)}x_d\right|^2\stackrel{k\rightarrow\infty}{\longrightarrow}0. \end{equation*} Moreover, the strong $L^2$ convergence in \eqref{cvstep1uepsk} implies \begin{equation*} {- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)\cap D_0(0,\theta)}|u^{\varepsilon_k}_k-u^0|^2\stackrel{k\rightarrow\infty}{\longrightarrow}0. \end{equation*} The last thing we have to check is the convergence \begin{equation*} {- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}|\varepsilon_kv_k(x/\varepsilon_k)|^2\stackrel{k\rightarrow\infty}{\longrightarrow}0. \end{equation*} Let $\eta=\eta(y_d)\in C^\infty_c(\mathbb R)$ such that $\eta\equiv 1$ on $(-1,1)$ and $\supp\eta\subset(-2,2)$. We have \begin{equation}\label{controlstep1termvk} \begin{aligned} &{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}|\varepsilon_kv_k(x/\varepsilon_k)|^2\\ &\leq{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}|\varepsilon_kv_k(x/\varepsilon_k)-(x_d+\varepsilon_k\chi^d_k(x/\varepsilon_k))\eta(x_d/\varepsilon_k)|^2+{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}|(x_d+\varepsilon_k\chi^d_k(x/\varepsilon_k))\eta(x_d/\varepsilon_k)|^2. \end{aligned} \end{equation} The last term in the right hand side of \eqref{controlstep1termvk} goes to $0$ when $k\rightarrow\infty$. Now by Poincar\'e's inequality, \begin{equation*} \begin{aligned} &{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}|\varepsilon_kv_k(x/\varepsilon_k)-(x_d+\varepsilon_k\chi^d_k(x/\varepsilon_k))\eta(x_d/\varepsilon_k)|^2\\ &\leq C\theta^2\left[{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}|\nabla v_k(x/\varepsilon_k)|^2+{- \hspace{- 1.05 em}} \int_{D_{\psi_k}(0,\theta)}\left|\nabla\left((x_d+\varepsilon_k\chi^d_k(x/\varepsilon_k))\eta(x_d/\varepsilon_k)\right)\right|^2\right]. 
\end{aligned}
\end{equation*}
On the one hand by estimate \eqref{mainestblcorrector}
\begin{equation*}
\begin{aligned}
{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}|\nabla v_k(x/\varepsilon_k)|^2&\leq \frac{C\varepsilon_k^d}{\theta^d}\int_{D^1_{\psi_k}(0,\theta/\varepsilon_k)}|\nabla v_k(y)|^2\\
&\leq C\varepsilon_k\sup_{\xi\in\mathbb Z^{d-1}}\int_{\xi+(0,1)^{d-1}}\int_{\psi_k(y')}^{\infty}|\nabla v_k|^2\leq C\varepsilon_k\stackrel{k\rightarrow\infty}{\longrightarrow}0
\end{aligned}
\end{equation*}
where, in the last inequality, $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$ is uniform in $\varepsilon$, and on the other hand
\begin{multline*}
{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}\left|\nabla\left((x_d+\varepsilon_k\chi^d_k(x/\varepsilon_k))\eta(x_d/\varepsilon_k)\right)\right|^2\leq (1+\|\nabla\chi_k\|_{L^\infty}^2){- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}|\eta(x_d/\varepsilon_k)|^2\\
+\frac{1}{\varepsilon_k^2}{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}|(x_d+\varepsilon_k\chi^d_k(x/\varepsilon_k))\eta'(x_d/\varepsilon_k)|^2\leq C\varepsilon_k\stackrel{k\rightarrow\infty}{\longrightarrow}0,
\end{multline*}
where, in the last inequality, $C=C(d,N,\lambda,[A]_{C^{0,\nu}})$. These convergence results imply that, passing to the limit in \eqref{contrastep1est}, we get
\begin{multline*}
\theta^{2+2\mu}\leq{- \hspace{- 1.05 em}} \int_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}\left|u^{\varepsilon_k}_k(x)-(\overline{\partial_{x_d} u^{\varepsilon_k}_k})_{D^{\varepsilon_k}_{\psi_k}(0,\theta)}\left[x_d+\varepsilon_k\chi^d_k(x/\varepsilon_k)+\varepsilon_k v_k(x/\varepsilon_k)\right]\right|^2dy\\
\stackrel{k\rightarrow\infty}{\longrightarrow}{- \hspace{- 1.05 em}} \int_{D_0(0,\theta)}\left|u^0(x)-(\overline{\partial_{x_d} u^0})_{D_0(0,\theta)}x_d\right|^2dy\leq\widehat{C}\theta^4,
\end{multline*}
which contradicts \eqref{estthetatocontra}.
\subsection{Proof of Lemma \ref{lemstep2lipdownmicro}}
The proof is by induction on $k$. The result for $k=1$ is true because of Lemma \ref{lemstep1lipdownmicro}. Let $k\in\mathbb N$, $k\geq 1$. Assume that for all $\psi\in W^{1,\infty}(\mathbb R^{d-1})$ such that $-1<\psi<0$ and $\|\nabla\psi\|_{L^\infty}\leq\gamma$, for all $A\in\mathcal A^{\nu}$, for all $0<\varepsilon<\theta^{k-1}\varepsilon_0$, for all weak solution $u^\varepsilon$ to \eqref{sysuepslemstep1} there exists $a^\varepsilon_k\in\mathbb R^N$ satisfying
\begin{equation*}
|a^\varepsilon_k|\leq C_0\frac{1+\theta^\mu+\ldots\ \theta^{\mu(k-1)}}{\theta^{d/2}(1-\theta)},
\end{equation*}
such that
\begin{equation*}
{- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,1)}|u^\varepsilon|^2\leq 1
\end{equation*}
implies
\begin{equation}\label{estitstep2orderkproof}
{- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^k)}\left|u^\varepsilon(x)-a^\varepsilon_k\left[x_d+\varepsilon\chi^d(x/\varepsilon)+\varepsilon v(x/\varepsilon)\right]\right|^2dy\leq \theta^{(2+2\mu)k}.
\end{equation}
This is our induction hypothesis.
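Before carrying out the induction step, let us record the elementary scaling observation behind the definition of $U^\varepsilon$ below; it only uses the chain rule and the equations satisfied by the cell corrector and the boundary layer. If $w$ is a weak solution of $-\nabla\cdot A(x/\varepsilon)\nabla w=0$ in an open set $O$, then $w(\theta^k\cdot)$ satisfies, in the rescaled set $\theta^{-k}O$,
\begin{equation*}
-\nabla\cdot A(\theta^kx/\varepsilon)\nabla\big(w(\theta^kx)\big)=-\theta^{2k}\big(\nabla\cdot A(\cdot/\varepsilon)\nabla w\big)(\theta^kx)=0,
\end{equation*}
that is, an equation with oscillation parameter $\varepsilon/\theta^k$; this is why the smallness condition becomes $\varepsilon/\theta^k<\varepsilon_0$ below. Moreover, since $x\mapsto x_d+\varepsilon\chi^d(x/\varepsilon)$ is, by the very definition of the cell corrector, $A(\cdot/\varepsilon)$-harmonic, and since $x\mapsto\varepsilon v(x/\varepsilon)$ is $A(\cdot/\varepsilon)$-harmonic above the boundary by \eqref{blsysiteration}, the function $U^\varepsilon$ defined below satisfies the interior equation in \eqref{sysUepsstep2}; the Dirichlet condition is precisely what the boundary layer $v$ was designed for, as recalled after \eqref{sysUepsstep2}.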
Given $\psi\in W^{1,\infty}(\mathbb R^{d-1})$, $-1<\psi<0$ and $\|\nabla\psi\|_{L^\infty}\leq\gamma$ and $A\in\mathcal A^{\nu}$, $0<\varepsilon<\theta^{k-1}\varepsilon_0$ and a solution $u^\varepsilon$ to \eqref{sysuepslemstep1} such that
\begin{equation*}
{- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,1)}|u^\varepsilon|^2\leq 1
\end{equation*}
we define
\begin{equation*}
U^\varepsilon(x):=\frac{1}{\theta^{(1+\mu)k}}\left\{u^\varepsilon(\theta^kx)-a^\varepsilon_k\left[\theta^kx_d+\varepsilon\chi^d(\theta^kx/\varepsilon)+\varepsilon v(\theta^kx/\varepsilon)\right]\right\}
\end{equation*}
for all $x\in D^{\varepsilon/\theta^k}_{\psi}(0,1)$. The goal is to apply the estimate of Lemma \ref{lemstep1lipdownmicro} to $U^\varepsilon$. By the induction estimate \eqref{estitstep2orderkproof}, we have
\begin{equation*}
{- \hspace{- 1.05 em}} \int_{D^{\varepsilon/\theta^k}_{\psi}(0,1)}|U^\varepsilon|^2\leq 1.
\end{equation*}
Moreover, $U^\varepsilon$ solves the system
\begin{equation}\label{sysUepsstep2}
\left\{
\begin{array}{rll}
-\nabla\cdot A(\theta^kx/\varepsilon)\nabla U^\varepsilon&=0,&x\in D^{\varepsilon/\theta^k}_{\psi}(0,1),\\
U^\varepsilon&=0,&x\in \Delta^{\varepsilon/\theta^k}_{\psi}(0,1).
\end{array}
\right.
\end{equation}
The boundary layer $v$ solving \eqref{blsysiteration} has been designed for $U^\varepsilon$ to solve \eqref{sysUepsstep2}. It follows that $U^\varepsilon$ satisfies the assumptions of Lemma \ref{lemstep1lipdownmicro}. Therefore, provided $\varepsilon/\theta^k<\varepsilon_0$, we have
\begin{equation*}
{- \hspace{- 1.05 em}} \int_{D^{\varepsilon/\theta^k}_\psi(0,\theta)}\left|U^\varepsilon(x)-(\overline{\partial_{x_d} U^\varepsilon})_{D^{\varepsilon/\theta^k}_\psi(0,\theta)}\left[x_d+\frac{\varepsilon}{\theta^k}\chi^d(\theta^kx/\varepsilon)+\frac{\varepsilon}{\theta^k} v(\theta^kx/\varepsilon)\right]\right|^2dy\leq \theta^{2+2\mu}.
\end{equation*}
Eventually,
\begin{equation*}
{- \hspace{- 1.05 em}} \int_{D^\varepsilon_\psi(0,\theta^{k+1})}\left|u^\varepsilon(x)-a^\varepsilon_{k+1}\left[x_d+\varepsilon\chi^d(x/\varepsilon)+\varepsilon v(x/\varepsilon)\right]\right|^2dy\leq \theta^{(2+2\mu)(k+1)},
\end{equation*}
with
\begin{equation*}
a^\varepsilon_{k+1}:=a^\varepsilon_k+\theta^{\mu k}(\overline{\partial_{x_d} U^\varepsilon})_{D^{\varepsilon/\theta^k}_\psi(0,\theta)}
\end{equation*}
satisfying the estimate
\begin{equation*}
|a^\varepsilon_{k+1}|\leq C_0\frac{1+\theta^\mu+\ldots\ \theta^{\mu(k-1)}}{\theta^{d/2}(1-\theta)}+C_0\frac{\theta^{\mu k}}{\theta^{d/2}(1-\theta)}\leq C_0\frac{1+\theta^\mu+\ldots\ \theta^{\mu k}}{\theta^{d/2}(1-\theta)}.
\end{equation*}
This concludes the iteration step and proves Lemma \ref{lemstep2lipdownmicro}.
\end{document}
\begin{document} \title[Non-associative Ore Extensions]{Non-associative Ore Extensions} \author{Patrik Nystedt} \address{University West, Department of Engineering Science, SE-46186 Trollh\"{a}ttan, Sweden} \author{Johan \"{O}inert} \address{Blekinge Institute of Technology, Department of Mathematics and Natural Sciences, SE-37179 Karlskrona, Sweden} \author{Johan Richter} \address{M\"{a}lardalen University, Academy of Education, Culture and Communication, \\ Box 883, SE-72123 V\"{a}ster\aa s, Sweden} \email{[email protected]; [email protected]; [email protected]} \subjclass[2010]{17D99, 17A36, 17A99, 16S36, 16W70, 16U70} \keywords{non-associative Ore extension, simple, outer derivation.} \begin{abstract} We introduce non-associative Ore extensions, $S = R[X ; \sigma , \delta]$, for any non-associa\-tive unital ring $R$ and any additive maps $\sigma,\delta : R \rightarrow R$ satisfying $\sigma(1)=1$ and $\delta(1)=0$. In the special case when $\delta$ is either left or right $R_{\delta}$-linear, where $R_{\delta} = \ker(\delta)$, and $R$ is $\delta$-simple, i.e. $\{ 0 \}$ and $R$ are the only $\delta$-invariant ideals of $R$, we determine the ideal structure of the non-associative differential polynomial ring $D = R[X ; \id_R , \delta]$. Namely, in that case, we show that all ideals of $D$ are generated by monic polynomials in the center $Z(D)$ of $D$. We also show that $Z(D) = R_{\delta}[p]$ for a monic $p \in R_{\delta}[X]$, unique up to addition of elements from $Z(R)_{\delta}$. Thereby, we generalize classical results by Amitsur on differential polynomial rings defined by derivations on associative and simple rings. Furthermore, we use the ideal structure of $D$ to show that $D$ is simple if and only if $R$ is $\delta$-simple and $Z(D)$ equals the field $R_{\delta} \cap Z(R)$. This provides us with a non-associative generalization of a result by \"{O}inert, Richter, and Silvestrov. This result is in turn used to show a non-associative version of a classical result by Jordan concer\-ning simplicity of $D$ in the cases when the characteristic of the field $R_{\delta} \cap Z(R)$ is either zero or a prime. We use our findings to show simplicity results for both non-associative versions of Weyl algebras and non-associative differential polynomial rings defined by monoid/group actions on compact Hausdorff spaces. \end{abstract} \maketitle \pagestyle{headings} \section{Introduction} In 1933 Ore \cite{ore1933} introduced a version of non-commutative polynomial rings, nowadays called Ore extensions, that have become one of the most useful constructions in ring theory. The Ore extensions play an important role when investigating cyclic algebras, enveloping rings of solvable Lie algebras, and various types of graded rings such as group rings and crossed products, see e.g. \cite{cohn1977}, \cite{jacobson1999}, \cite{mcconell1988} and \cite{rowen1988}. They are also a natural source of examples and counter-examples in ring theory, see e.g. \cite{bergman1964} and \cite{cohn1961}. Furthermore, various special cases of Ore extensions are used as tools in diverse analytical settings, such as differential-, pseudo-differential and fractional differential operator rings \cite{goodearl1983} and $q$-Heisenberg algebras \cite{hellstromsilvestrov2000}. Let us recall the definition of an (associative) Ore extension. Let $S$ be a unital ring. Take $x \in S$ and let $R$ be a subring of $S$ containing $1$, the multiplicative identity element of $S$. 
\begin{defn}\label{defore} The pair $(S,x)$ is called an {\it Ore extension} of $R$ if the following axioms hold: \begin{itemize} \item[(O1)] $S$ is a free left $R$-module with basis $\{ 1,x,x^2,\ldots \}$; \item[(O2)] $xR \subseteq R + Rx$; \item[(O3)] $S$ is associative. \end{itemize} If (O2) is replaced by \begin{itemize} \item[(O2)$'$] $[x,R] \subseteq R$; \end{itemize} then $(S,x)$ is called a {\it differential polynomial ring over} $R$. \end{defn} Recall that $[x,R]$ denotes the set of finite sums of elements of the form $[x,r]=xr-rx$, for $r \in R$. To construct Ore extensions, one considers generalized polynomial rings $R[X ; \sigma , \delta]$ over an associative ring $R$, where $\sigma$ is a ring endomorphism of $R$, respecting 1, and $\delta$ is a $\sigma$-derivation of $R$, i.e. an additive map $R \rightarrow R$ satisfying $\delta(ab) = \sigma(a) \delta(b) + \delta(a) b$, for $a,b \in R$. Let $\mathbb{N}$ denote the set of non-negative integers. As an additive group $R[X ; \sigma , \delta]$ is equal to the usual polynomial ring $R[X].$ The ring structure on $R[X ; \sigma , \delta]$ is defined on monomials by \begin{equation}\label{productmonomials} a X^m \cdot b X^n = \sum_{i \in \mathbb{N}} a \pi_i^m(b) X^{i+n}, \end{equation} for $a , b \in R$ and $m,n \in \mathbb{N}$, where $\pi_i^m$ denotes the sum of all the ${m \choose i}$ possible compositions of $i$ copies of $\sigma$ and $m-i$ copies of $\delta$ in arbitrary order (see equation (11) in \cite{ore1933}). Here we make the convention that $\pi_i^m(b) = 0$, for $i,m \in \mathbb{N}$ such that $i > m$. The product \eqref{productmonomials} makes the pair $(R[X ; \sigma , \delta],X)$ an Ore extension of $R$. In fact, (O1) and (O2) are immediate and (O3) can be shown in several different ways, see e.g. \cite[Proposition 7.1]{bergman1978}, \cite{nystedt2013}, \cite{richter2014}, \cite[Proposition 1.6.15]{rowen1988}, or Proposition \ref{newproof} in the present article for yet another proof. This class of generalized polynomial rings provides us with all Ore extensions of $R$ . Indeed, given an Ore extension $(S,x)$ of $R$, then define the maps $\sigma : R \rightarrow R$ and $\delta : R \rightarrow R$ by the relations $x a = \delta(a) + \sigma(a) x$, for $a \in R$. Then it follows that $\sigma$ is a ring endomorphism of $R$, respecting 1, $\delta$ is a $\sigma$-derivation of $R$ and there is a unique well defined ring isomorphism $f : S \rightarrow R[X;\sigma,\delta]$ subject to the relations $f(x) = X$ and $f|_R = \id_R$. If $(S,x)$ is a differential polynomial ring over $R$, then $\sigma = \id_R$ and $\delta$ is a derivation on $R$. Many different properties of associative Ore extensions, such as when they are integral domains, principal domains, prime or noetherian have been studied by numerous authors (see e.g. \cite{cozzens1975} or \cite{mcconell1988} for surveys). Here we focus on the property of simplicity of differential polynomial rings $D = R[X ; \id_R , \delta]$. Recall that $\delta$ is called {\it inner} if there is $a \in R$ such that $\delta(r) = ar - ra$, for $r \in R$. In that case we write $\delta = \delta_a$. If $\delta$ is not inner, then $\delta$ is called {\it outer}. We let the {\it characteristic} of a ring $R$ be denoted by ${\rm char}(R)$. In an early article by Jacobson \cite{jacobson1937} it is shown that if $\delta$ is outer and $R$ is a division ring with ${\rm char}(R)=0$, then $D$ is simple. The case of positive characteristic is more complicated and $D$ may contain non-trivial ideals. 
In fact, Amitsur \cite{amitsur1950} has shown that if $R$ is a division ring with ${\rm char}(R)= p > 0$, then every ideal of $D$ is generated by a polynomial, all of whose monomials have degrees which are multiples of $p$. A few years later Amitsur \cite{amitsur1957} generalized this result to the case of simple $R$. To describe this generalization we need to introduce some more notation. Let $T$ be a subring of $D$. Let $Z(T)$ denote the {\it center} of $T$, i.e. the set of all elements in $T$ that commute with every element of $T$. If $T$ is a subring of $R$, then put $T_{\delta} = T \cap \ker(\delta)$. Note that if $R$ is simple, then $Z(R)$ is a field. Therefore, in that case, ${\rm char}(R) = {\rm char}(Z(R))$ and hence ${\rm char}(R)$ is either zero or a prime $p > 0$. \begin{thm}[Amitsur \cite{amitsur1957}]\label{amitsurtheorem} Suppose that $R$ is a simple associative ring and let $\delta$ be a derivation on $R$. If we put $D = R[X ; \id_R , \delta]$, then the following assertions hold: \begin{itemize} \item[(a)] Every ideal of $D$ is generated by a unique monic polynomial in $Z(D)$; \item[(b)] There is a monic $b \in R_{\delta}[X]$, unique up to addition of elements from $Z(R)_{\delta}$, such that $Z(D) = Z(R)_{\delta}[b]$; \item[(c)] If ${\rm char}(R)=0$ and $b \neq 1$, then there is $c \in R_{\delta}$ such that $b = c + X$. In that case, $\delta = \delta_c$; \item[(d)] If ${\rm char}(R) = p > 0$ and $b \neq 1$, then there is $c \in R_{\delta}$ and $b_0,\ldots,b_n \in Z(R)_{\delta}$, with $b_n=1$, such that $b = c + \sum_{i=0}^n b_i X^{p^i}$. In that case, $\sum_{i=0}^n b_i \delta^{p^i} = \delta_c$. \end{itemize} \end{thm} The condition that $R$ is simple in the above theorem is not necessary for simplicity of $D = R[X ; \id_R , \delta]$. Consider e.g. the well known example of the first Weyl algebra where $R = K[Y]$, $K$ is a field with ${\rm char}(K)=0$ and $\delta$ is the usual derivative on $R$ (for more details, see e.g. \cite[Example 1.6.32]{rowen1988}). However, $\delta$-simplicity of $R$ is always a necessary condition for simplicity of $D$ (see \cite[Lemma 4.1.3(i)]{jordan1975} or Proposition \ref{sigmadeltasimple}). Recall that an ideal $I$ of $R$ is called $\delta$-invariant if $\delta(I) \subseteq I$. The ring $R$ is called $\delta$-simple if $\{ 0 \}$ and $R$ are the only $\delta$-invariant ideals of $R$. Note that if $R$ is $\delta$-simple, then the ring $Z(R)_{\delta}$ is always a field. Therefore, in that case, ${\rm char}(R) = {\rm char}(Z(R)_{\delta})$ and hence ${\rm char}(R)$ is either zero or a prime $p > 0$. Jordan \cite{jordan1975} (and Cozzens and Faith \cite{cozzens1975} in a special case) has shown the following result. \begin{thm}[Jordan \cite{jordan1975}]\label{jordantheorem} Suppose that $R$ is a $\delta$-simple associative ring and let $\delta$ be a derivation on $R$. If we put $D = R[X ; \id_R , \delta]$, then the following assertions hold: \begin{itemize} \item[(a)] If ${\rm char}(R)=0$, then $D$ is simple if and only if $\delta$ is outer; \item[(b)] If ${\rm char}(R) = p > 0$, then $D$ is simple if and only if no derivation of the form $\sum_{i=0}^n b_i \delta^{p^i}$, $b_i \in Z(R)_{\delta}$, and $b_n=1$, is an inner derivation induced by an element in $R_{\delta}$. 
\end{itemize} \end{thm} In the case when $R$ is commutative, Cozzens and Faith \cite{cozzens1975} (for integral domains $R$ of prime characteristic) and Goodearl and Warfield \cite{goodearl1982} (in the general case) have shown that $R[x ; \id_R, \delta]$ is simple if and only if $R$ is $\delta$-simple and $R$ is infinite-dimensional as a vector space over $R_{\delta}$. If one has a family of commuting derivations, then one can form a differential polynomial ring in several variables. The articles \cite{malm1988}, \cite{posner1960} and \cite{voskoglou1985} consider the question when such rings are simple. In the preprint \cite{nystedtoinertrichter} the authors of the present article study when non-associative differential polynomial rings in several variables are simple. In the simplicity results mentioned above, a distinction is often made between the cases when the characteristic of $R$ is zero or the characteristic of $R$ is prime. Special attention is also often paid to the case when $R$ is commutative. However, in \cite{oinert2013} \"{O}inert, Richter and Silvestrov have shown the following simplicity result that holds for all associative differential polynomial rings regardless of characteristic. \begin{thm}[\"{O}inert, Richter and Silvestrov \cite{oinert2013}]\label{richtersilvestrovoinert} If $R$ is associative and $\delta : R \rightarrow R$ is a derivation, then $D = R[X ; \id_R , \delta]$ is simple if and only if $R$ is $\delta$-simple and $Z(D)$ is a field. \end{thm} In this article, we address the question of what it should mean for a pair $(S,x)$ to be a non-associative Ore extension of $R$ and when the resulting rings are simple. It seems to the authors of the present article that this question has not previously been analysed in the literature. Let us briefly describe the train of reasoning that lead the authors to their definition of such objects. The product \eqref{productmonomials} equips the set $R[X ; \sigma , \delta]$ of generalized polynomials over any non-associative ring $R$ with a well defined non-associative ring structure for any additive maps $\sigma : R \rightarrow R$ and $\delta : R \rightarrow R$ satisfying $\sigma(1)=1$ and $\delta(1)=0$. We wish to adapt the axioms (O1), (O2) and (O3) to the non-associative situation so that the resulting collection of non-associative rings coincides with this family of generalized polynomial rings. It turns out that this happens precisely when $x$ belongs to the right and middle nucleus of $S$. To be more precise, let $S$ be a non-associative ring, by this we mean that $S$ is an additive abelian group equipped with a multiplication which is distributive with respect to addition and which has multiplicative identity $1$. We suggest the following. \begin{defn}\label{defnonore} The pair $(S,x)$ is called a {\it non-associative Ore extension} of $R$ if the following axioms hold: \begin{itemize} \item[(N1)] $S$ is a free left $R$-module with basis $\{ 1,x,x^2,\ldots \}$; \item[(N2)] $xR \subseteq R + Rx$; \item[(N3)] $(S,S,x) = (S,x,S) = \{ 0 \}$. \end{itemize} If (N2) is replaced by \begin{itemize} \item[(N2)$'$] $[x,R] \subseteq R$; \end{itemize} then $(S,x)$ is called a {\it non-associative differential polynomial ring over} $R$. \end{defn} For non-empty subsets $A$, $B$ and $C$ of $S$, we let $(A,B,C)$ denote the set of finite sums of elements of the form $(a,b,c) = (ab)c - a(bc)$, for $a \in A$, $b \in B$ and $c \in C$. 
Note that from (N3) it follows that the element $x$ is power associative, so that the symbols $x^i$, for $i \in \mathbb{N}$, are well defined. Here is an outline of this article. In Section \ref{nonassociativeringtheory}, we gather some well known facts from non-associative ring and module theory that we need in the sequel. In particular, we state our conventions concerning modules over non-associative rings and what a basis should mean in that situation. In Section \ref{oreextensions}, we show that there is a bijection between the set of non-associative Ore extensions of $R$ and the set of generalized polynomial rings $R[X ; \sigma , \delta]$ over $R$, where $\sigma$ and $\delta$ are additive maps $R \rightarrow R$ such that $\sigma(1)=1$ and $\delta(1)=0$. If $T$ is a subset of $R$, then we put $T_{\delta}^{\sigma} = \{ a \in T \mid \sigma(a)=a \ \mbox{and} \ \delta(a)=0 \}$, $T_{\delta} = T_{\delta}^{\id_R}$ and $T^{\sigma} = T_{0}^{\sigma}$. In Section \ref{oreextensions}, we introduce the class of {\it strong} non-associative Ore extensions (see Definition \ref{defstrong}). These correspond to generalized polynomial rings $R[X ; \sigma , \delta]$, where $\sigma$ is a, what we call, {\it fixed point homomorphism} of $R$ and $\delta$ is a, what we call, $\sigma$-{\it kernel derivation} of $R$. By this we mean that $\sigma$ and $\delta$ are maps $R \rightarrow R$ satisfying $\sigma(1)=1$, $\delta(1)=0$ and both of them are right $R_{\delta}^{\sigma}$-linear or both of them are left $R_{\delta}^{\sigma}$-linear. Clearly, every classical derivation is a $\sigma$-kernel derivation with $\sigma=\id_R$ and every classical homomorphism is a fixed point homomorphism. In general, a $\sigma$-kernel derivation with $\sigma=\id_R$ will simply be called a \emph{kernel derivation}. In Section \ref{simplicity}, we introduce $\sigma$-$\delta$-simplicity for rings $R$, where $\sigma$ and $\delta$ are additive maps $R \rightarrow R$ such that $\sigma(1)=1$ and $\delta(1)=0$ (see Definition \ref{defsigmadeltasimple}). We show that $\sigma$-$\delta$-simplicity of $R$ is a necessary condition for simplicity of non-associative Ore extensions $R[X ; \sigma , \delta]$ (see Proposition \ref{sigmadeltasimple}). We also show that if $R$ is $\sigma$-$\delta$-simple, then $Z(R)_{\delta}^{\sigma}$ is a field (see Proposition \ref{R-sigmadelta-simple-Z-field}). Thus, in that case, we get that ${\rm char}(R) = {\rm char}( Z(R)_{\delta}^{\sigma} )$ and hence that ${\rm char}(R)$ is either zero or a prime $p > 0$. In Section \ref{simplicity}, we prove the following non-associative generalization of Theorems \ref{amitsurtheorem}, \ref{jordantheorem} and \ref{richtersilvestrovoinert}. \begin{thm}\label{maintheorem} Suppose that $R$ is a non-associative ring and that $\delta$ is a kernel derivation on $R$. If we put $D = R[X ; \id_R , \delta]$, then the following assertions hold: \begin{itemize} \item[(a)] If $R$ is $\delta$-simple, then every ideal of $D$ is generated by a unique monic polynomial in $Z(D)$; \item[(b)] If $R$ is $\delta$-simple, then there is a monic $b \in R_{\delta}[X]$, unique up to addition of elements from $Z(R)_{\delta}$, such that $Z(D) = Z(R)_{\delta}[b]$; \item[(c)] $D$ is simple if and only if $R$ is $\delta$-simple and $Z(D)$ is a field. In that case $Z(D) = Z(R)_{\delta}$ in which case $b=1$; \item[(d)] If $R$ is $\delta$-simple, $\delta$ is a derivation on $R$ and ${\rm char}(R)=0$, then either $b=1$ or there is $c \in R_{\delta}$ such that $b = c + X$. 
In the latter case, $\delta = \delta_c$; \item[(e)] If $R$ is $\delta$-simple, $\delta$ is a derivation on $R$ and ${\rm char}(R)=p>0$, then either $b=1$ or there is $c \in R_{\delta}$ and $b_0,\ldots,b_n \in Z(R)_{\delta}$, with $b_n=1$, such that $b = c + \sum_{i=0}^n b_i X^{p^i}$. In the latter case, $\sum_{i=0}^n b_i \delta^{p^i} = \delta_c$. \end{itemize} \end{thm} In Section \ref{sectionweyl}, we introduce non-associative versions of the first Weyl algebra (see Definition \ref{definitionweyl}) and we show that they are often simple regardless of the characteristic (see Theorem \ref{theoremweyl}). In Section \ref{sectiondynamics}, we introduce a special class of $\sigma$-kernel derivations induced by ring automorphisms (see Definition \ref{definitionkernel}). This yields simplicity results for a differential polynomial ring analogue of the quantum plane (see Theorem \ref{theoremquantumtorus}) and for differential polynomial rings defined by monoid/group actions on compact Hausdorff spaces (see Theorem~\ref{NYtheoremdynamics} and Theorem~\ref{theoremdynamics}). In Section \ref{sectionassociative}, we show that if the coefficients are associative, then we can often obtain simplicity of the differential polynomial ring just from the assumption that the map $\delta$ is not a derivation. \section{Preliminaries from Non-associative Ring Theory}\label{nonassociativeringtheory} In this section, we recall some notions from non-associative ring theory that we need in subsequent sections. Although the results stated in this section are presumably rather well known, we have, for the convenience of the reader, nevertheless chosen to include proofs of these statements. Throughout this section, $R$ denotes a non-associative ring. By this we mean that $R$ is an additive abelian group in which a multiplication is defined, satisfying left and right distributivity. We always assume that $R$ is unital and that the multiplicative identity of $R$ is denoted by $1$. The term ''non-associative'' should be interpreted as ''not necessarily associative''. Therefore all associative rings are non-associative. If a ring is not associative, we will use the term ''not associative ring''. By a {\it left module} over $R$ we mean an additive group $M$ equipped with a biadditive map $R \times M \ni (r,m) \mapsto rm \in M$. In that case, we say that a subset $B$ of $M$ is a basis if for every $m \in M$, there are unique $r_b \in R$, for $b \in B$, such that $r_b = 0$ for all but finitely many $b \in B$, and $m = \sum_{b \in B} r_b b$. {\it Right modules} over $R$ and bases are defined in an analogous manner. Recall that the \emph{commutator} $[\cdot,\cdot] : R \times R \rightarrow R$ and the \emph{associator} $(\cdot,\cdot,\cdot) : R \times R \times R \rightarrow R$ are defined by $[r,s]=rs-sr$ and $(r,s,t) = (rs)t - r(st)$ for all $r,s,t \in R$, respectively. The \emph{commuter} of $R$, denoted by $C(R)$, is the subset of $R$ consisting of elements $r \in R$ such that $[r,s]=0$ for all $s \in R$. The \emph{left}, \emph{middle} and \emph{right nucleus} of $R$, denoted by $N_l(R)$, $N_m(R)$ and $N_r(R)$, respectively, are defined by $N_l(R) = \{ r \in R \mid (r,s,t) = 0, \ \mbox{for} \ s,t \in R\}$, $N_m(R) = \{ s \in R \mid (r,s,t) = 0, \ \mbox{for} \ r,t \in R\}$, and $N_r(R) = \{ t \in R \mid (r,s,t) = 0, \ \mbox{for} \ r,s \in R\}$. The \emph{nucleus} of $R$, denoted by $N(R)$, is defined to be equal to $N_l(R) \cap N_m(R) \cap N_r(R)$. 
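As a small computational illustration of the commutator, the associator and the nuclei (using an example of our own choosing that does not appear in this article), consider the Jordan product $A \circ B = \tfrac{1}{2}(AB+BA)$ on $2\times 2$ matrices over $\mathbb{Q}$; this product is biadditive and unital but not associative. The following Python/SymPy sketch verifies these claims and checks that the identity matrix lies in $N_l$, $N_m$ and $N_r$.
\begin{verbatim}
# Toy example (our own choice, not from the paper): the Jordan product
# A o B = (AB + BA)/2 on 2x2 rational matrices is unital and commutative
# but not associative.
import sympy as sp

def jordan(A, B):
    return (A * B + B * A) / 2

def commutator(A, B):
    return jordan(A, B) - jordan(B, A)

def associator(A, B, C):
    return jordan(jordan(A, B), C) - jordan(A, jordan(B, C))

I   = sp.eye(2)
E11 = sp.Matrix([[1, 0], [0, 0]])
E12 = sp.Matrix([[0, 1], [0, 0]])
E21 = sp.Matrix([[0, 0], [1, 0]])

print(commutator(E11, E12))        # zero matrix: the commuter is everything
print(associator(E11, E12, E21))   # nonzero: the product is not associative
# The identity matrix lies in the left, middle and right nucleus:
for triple in [(I, E11, E12), (E11, I, E12), (E11, E12, I)]:
    assert associator(*triple) == sp.zeros(2, 2)
\end{verbatim}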
From the so-called \emph{associator identity} $u(r,s,t) + (u,r,s)t + (u,rs,t) = (ur,s,t) + (u,r,st)$, which holds for all $u,r,s,t \in R$, it follows that all of the subsets $N_l(R)$, $N_m(R)$, $N_r(R)$ and $N(R)$ are associative subrings of $R$. The \emph{center} of $R$, denoted by $Z(R)$, is defined to be equal to the intersection $N(R) \cap C(R)$. It follows immediately that $Z(R)$ is an associative, unital and commutative subring of $R$. \begin{prop}\label{intersection} The following three equalities hold: \begin{align} Z(R) &= C(R) \cap N_l(R) \cap N_m(R); \label{FIRST}\\ Z(R) &= C(R) \cap N_l(R) \cap N_r(R); \label{SECOND}\\ Z(R) &= C(R) \cap N_m(R) \cap N_r(R). \label{THIRD} \end{align} \end{prop} \begin{proof} We only show \eqref{FIRST}. The equalities \eqref{SECOND} and \eqref{THIRD} are shown in a similar way and are therefore left to the reader. It is clear that $Z(R) \subseteq C(R) \cap N_l(R) \cap N_m(R)$. Now we show the reversed inclusion. Take $r \in C(R) \cap N_l(R) \cap N_m(R)$. We need to show that $r \in N_r(R)$. Take $s,t \in R$. We wish to show that $(s,t,r)=0$, i.e. $(st)r = s(tr)$. Using that $r\in C(R) \cap N_l(R) \cap N_m(R)$ we get $(st)r = r(st) = (rs)t = (sr)t = s(rt) = s(tr)$. \end{proof} \begin{prop}\label{centerInvClosed} If $r \in Z(R)$ and $s \in R$ satisfy $rs = 1$, then $s \in Z(R)$. \end{prop} \begin{proof} Let $r\in Z(R)$ and suppose that $rs=1$. First we show that $s \in C(R)$. To this end, take $u \in R$. Then $su = (su)1 = (su)(rs) = (r(su))s= ((rs)u)s = (1u)s = us$ and hence $s \in C(R)$. By Proposition \ref{intersection}, we are done if we can show $s \in N_l(R) \cap N_m(R)$. To this end, take $v \in R$. Then $s(uv) = s((1u)v)= s(((rs) u)v) = (rs) ( (su) v ) = 1( (su)v ) = (su)v$ which shows that $s \in N_l(R)$. We also see that $(us)v = (us)(1v) = (us) ( (rs) v ) = ( u (rs) ) (sv) = (u1)(sv) = u(sv)$ which shows that $s \in N_m(R)$. \end{proof} \begin{prop}\label{centerfield} If $R$ is simple, then $Z(R)$ is a field. \end{prop} \begin{proof} We already know that $Z(R)$ is a unital commutative ring. What is left to show is that every non-zero element of $Z(R)$ has a multiplicative inverse in $Z(R)$. To this end, take a non-zero $r \in Z(R)$. Then $Rr$ is a non-zero ideal of $R$. Since $R$ is simple, this implies that $R = Rr$. In particular, we get that there is $s \in R$ such that $1 = sr$. By Proposition \ref{centerInvClosed}, we get that $s \in Z(R)$ and we are done. \end{proof} \section{Non-associative Ore extensions}\label{oreextensions} In this section, we show that there is a bijection between the set of (strong) non-associative Ore extensions of $R$ and the set of generalized polynomial rings $R[X ; \sigma , \delta]$ over $R$, where $\sigma$ (is a fixed point homomorphism) and $\delta$ (is a $\sigma$-kernel derivation) are additive maps $R \rightarrow R$ such that $\sigma(1)=1$ and $\delta(1)=0$ (see Proposition \ref{sufficientpolynomial} and Proposition \ref{necessarypolynomial}). We also show that if $S = R[X ; \sigma , \delta]$ is a generalized polynomial ring, then $S$ is associative if and only if $R$ is associative, $\sigma$ is a ring endomorphism and $\delta$ is a $\sigma$-derivation (see Proposition \ref{newproof}). Throughout this section, $R$ denotes a non-associative ring. \begin{defn}\label{definitionpolynomials} By a formal set of polynomials $R[X]$ over $R$ we mean the collection of functions $f : \mathbb{N} \rightarrow R$ with the property that $f(n)=0$ for all but finitely many $n \in \mathbb{N}$. 
If $f,g \in R[X]$ and $r,s \in R$, then we define $rf + sg \in R[X]$ by the relation $(rf + sg)(n) = rf(n) + sg(n)$, for $n \in \mathbb{N}$. If we, for each $n \in \mathbb{N}$, let $X^n \in R[X]$ be defined by $X^n(m) = 1$, if $m=n$, and $X^n(m)=0$, if $m \neq n$, then $R[X]$ is a free left $R$-module with $B = \{ X^n \}_{n \in \mathbb{N} }$ as a basis. In fact, for each $f \in R[X]$, we have that $f = \sum_{n \in \mathbb{N}} f(n) X^n$. By the degree of $f$, denoted by $\deg(f)$, we mean the supremum of $\{ -\infty \} \cup \{ n \in \mathbb{N} \mid f(n) \neq 0 \}$. If $f \neq 0$, then we call $f( \deg(f) )$ the {\it leading coefficient of} $f$. If the leading coefficient of $f$ is 1, then we say that $f$ is {\it monic}. \end{defn} \begin{defn} Let $\sigma : R \rightarrow R$ and $\delta : R \rightarrow R$ be additive maps such that $\sigma(1)=1$ and $\delta(1)=0$. By the generalized polynomial ring $R[X ; \sigma , \delta]$ over $R$ defined by $\sigma$ and $\delta$ we mean the set $R[X]$ of formal polynomials over $R$ equipped with the product defined on monomials by the relation \eqref{productmonomials}. We will often identify each $r \in R$ with $rX^0$. It is clear that $R[X ; \sigma , \delta]$ is a non-associative ring with $1 = X^0$. It is also clear that $X$ is power associative so that $X^n$, for $n > 0$, is in fact equal to the product of $X$ with itself $n$ times. \end{defn} \begin{defn}\label{defstrong} Suppose that $(S,x)$ is a non-associative Ore extension of $R$. Put $R_x = \{ a \in R \mid ax = xa \}$. We say that $(S,x)$ is {\it strong} if at least one of the following axioms holds: \begin{itemize} \item[(N4)] $(x,R,R_x) = \{ 0 \}$; \item[(N5)] $(x,R_x,R) = \{ 0 \}$. \end{itemize} In that case we call $R_x$ {\it the ring of constants of $R$}. If $(S,x)$ is a non-associative differential polynomial ring, then we say that it is strong if it is strong as a non-associative Ore extension. \end{defn} The usage of the term ''ring'' in Definition \ref{defstrong} is justified by the next result. \begin{prop} If $(S,x)$ is a strong non-associative Ore extension of $R$, then $R_x$ is a subring of $R$. \end{prop} \begin{proof} It is clear that $R_x$ is an additive subgroup of $R$ containing 1. Now we show that $R_x$ is multiplicatively closed. Take $a,b \in R_x$. Then $(ab)x \stackrel{(N3)}{=} a(bx) \stackrel{[ b \in R_x ]}{=} a(xb) \stackrel{(N3)}{=} (ax)b \stackrel{[a \in R_x ]}{=} (xa)b = x(ab)$. The last equality follows from the strongness of $(S,x)$. Therefore $ab \in R_x$. \end{proof} \begin{prop}\label{sufficientpolynomial} Every generalized polynomial ring $S = R[X ; \sigma , \delta]$ over $R$ (with $\sigma$ a fixed point homomorphism and $\delta$ a $\sigma$-kernel derivation) is a (strong) non-associa\-tive Ore extension of $R$ with $x = X$. \end{prop} \begin{proof} We first show the ''non-strong'' statement. From Definition \ref{definitionpolynomials}, we know that $S$ is free as a left $R$-module with $B$ as a basis. Therefore (N1) holds. Also $xR = 1X \cdot RX^0 = \delta(R) + \sigma(R)X = \delta(R) + \sigma(R)x \subseteq R + Rx$. Therefore (N2) holds. Now we show (N3). Suppose that $a,b \in R$ and $m,n \in \mathbb{N}$.
Then we get that $(aX^m \cdot bX^n) \cdot X = \sum_{i \in \mathbb{N}} a \pi_i^m(b) X^{i+n} \cdot X = \sum_{i \in \mathbb{N}} a \pi_i^m(b) X^{i+n+1} =a X^m \cdot (bX^{n+1}) = aX^m \cdot (bX^n \cdot X).$ Next we get that $(aX^m \cdot X) \cdot bX^n = aX^{m+1} \cdot bX^n = \sum_{i \in \mathbb{N}} a \pi_i^{m+1}(b) X^{i+n} = \sum_{i \in \mathbb{N}} a \pi_i^m(\delta(b)) X^{i+n} + \sum_{i \in \mathbb{N}} a \pi_{i-1}^m(\sigma(b)) X^{i+n} = aX^m \cdot ( \delta(b) X^n + \sigma(b)X^{n+1} ) = aX^m \cdot (X \cdot bX^n).$ Now we show the ''strong'' statement. Note that $R_X = R_{\delta}^{\sigma}$. Suppose first that both $\sigma$ and $\delta$ are right $R_{\delta}^{\sigma}$-linear. We show (N4). To this end, take $a \in R$ and $b \in R_X$. Then $ (X \cdot a) \cdot b = (\delta(a) + \sigma(a)X) \cdot b \stackrel{(N3)}{=} \delta(a) b + \sigma(a) (X b) = [b \in R_{\delta}^{\sigma} ]= \delta(a)b + \sigma(a) (bX) \stackrel{(N3)}{=} \delta(a)b + (\sigma(a) b)X.$ Since $\sigma$ and $\delta$ are right $R_{\delta}^{\sigma}$-linear, we get that $ (X \cdot a) \cdot b = \delta(ab) + \sigma(ab)X = X \cdot (ab)$. Suppose now that both $\sigma$ and $\delta$ are left $R_{\delta}^{\sigma}$-linear. We show (N5). To this end, take $a \in R_X$ and $b \in R$. Then $( X \cdot a ) \cdot b = [a \in R_{\delta}^{\sigma}] = (a \cdot X) \cdot b \stackrel{(N3)}{=} a \cdot (X b) = a \cdot (\delta(b) + \sigma(b)X) = a \delta(b) + a \sigma(b)X.$ Since $\sigma$ and $\delta$ are left $R_{\delta}^{\sigma}$-linear, we get that $( X \cdot a) \cdot b = \delta(ab) + \sigma(ab)X = X \cdot (ab).$ \end{proof} \begin{prop}\label{necessarypolynomial} Every non-associative Ore extension of $R$ is isomorphic to a generalized polynomial ring $R[X ; \sigma, \delta]$. If the non-associative Ore extension is strong, then $\sigma$ is a fixed point homomorphism and $\delta$ is a $\sigma$-kernel derivation. \end{prop} \begin{proof} We first show the ''non-strong'' statement. Suppose that $S$ is a non-associative Ore extension of $R$ defined by the element $x \in S$. Take $a,b \in R$. By (N1) and (N2), we get that $xa = \delta(a) + \sigma(a)x$, for some unique $\delta(a),\sigma(a) \in R$. Hence this defines functions $\sigma : R \rightarrow R$ and $\delta : R \rightarrow R$. By distributivity of $S$, we get the relation $x(a+b) = xa + xb$ which implies that $\sigma(a+b) = \sigma(a) + \sigma(b)$ and $\delta(a + b) = \delta(a) + \delta(b)$. From the relation $x1 = x$ we get that $\sigma(1)=1$ and $\delta(1)=0$. Define $f : S \rightarrow R[X ; \sigma, \delta]$ by the additive extension of the relations $f( a x^m ) = a X^m$, for $a \in R$ and $m \in \mathbb{N}$. Then clearly $f$ is an isomorphism of additive groups. What is left to show is that $f$ respects multiplication. Take $a,b \in R$ and $m,n \in \mathbb{N}$. We claim that $(a x^m)(b x^n) = \sum_{i \in \mathbb{N}} a \pi_i^m(b) x^{i+n}$. If we assume that the claim holds, then $f( (a x^m)(b x^n) ) = f( \sum_{i \in \mathbb{N}} a \pi_i^m(b) x^{i+n} ) = \sum_{i \in \mathbb{N}} a \pi_i^m(b) X^{i+n} = (a X^m) \cdot (b X^n) = f( ax^m ) \cdot f(b x^n)$. Now we prove the claim by induction over $m$. First we show the base case $m=0$. By (N3) we get that $x \in N_r(S)$. Therefore $x^n \in N_r(S)$ and hence we get that $(a x^0)(b x^n) = a (b x^n) = (a b)x^n = a \pi_0^0(b) x^n = \sum_{i \in \mathbb{N}} a \pi_i^0(b) x^{i+n}$. Next we show the induction step. Suppose that the claim holds for some $m \in \mathbb{N}$. By (N3), we get that $x \in N_m(S) \cap N_r(S)$.
Therefore all powers of $x$ also belong to $N_m(S) \cap N_r(S)$ and hence we get that $(a x^{m+1}) (b x^n) = (a( x^m x)) (b x^n) = ((a x^m)x) (b x^n) = (a x^m) ( x (b x^n) ) = (a x^m) ( (xb) x^n ) = (a x^m) ( ( \delta(b) + \sigma(b) x ) x^n ) = (a x^m) ( \delta(b) x^n + \sigma(b) x^{n+1} ) = (a x^m)( \delta(b) x^n ) + (a x^m)( \sigma(b) x^{n+1} ).$ By the induction hypothesis the last expression equals $\sum_{i \in \mathbb{N}} a \pi_i^m( \delta(b) ) x^{i+n} + \sum_{i \in \mathbb{N}} a \pi_i^m( \sigma(b) ) x^{i+n+1} = \sum_{i \in \mathbb{N}} a \pi_i^m( \delta(b) ) x^{i+n} + \sum_{i \in \mathbb{N}} a \pi_{i-1}^m( \sigma(b) ) x^{i+n} = \sum_{i \in \mathbb{N}} a [ \pi_i^m( \delta(b) ) + \pi_{i-1}^m( \sigma(b) ) ] x^{i+n} = \sum_{i \in \mathbb{N}} a \pi_i^{m+1}( b ) x^{i+n}. $ This proves the induction step. Now we show the ''strong'' statement. To this end, take $a \in R_x$ and $b \in R$. Suppose first that (N5) holds. Then $x(ab) = (xa)b$. Thus, since $a \in R_x$, we get that $\delta(ab) + \sigma(ab)x = (ax)b \stackrel{(N3)}{=} a(xb) = a (\delta(b) + \sigma(b)x) = a\delta(b) + a(\sigma(b)x) \stackrel{(N3)}{=} a \delta(b) + (a \sigma(b))x$. Hence by (N1), we get that $\delta(ab)=a\delta(b)$ and $\sigma(ab)=a\sigma(b)$. Suppose now that (N4) holds. Then $x(ba) = (xb)a$. Thus $\delta(ba) + \sigma(ba)x = (\delta(b) + \sigma(b)x)a = \delta(b)a + (\sigma(b)x)a \stackrel{(N3)}{=} \delta(b)a + \sigma(b)(xa) = [a \in R_x] = \delta(b)a + \sigma(b)(ax) \stackrel{(N3)}{=} \delta(b)a + (\sigma(b)a)x$. Hence, by (N1), we get that $\delta(ba)=\delta(b)a$ and $\sigma(ba) = \sigma(b)a$. Thus, in either case, $\sigma$ is a fixed point homomorphism of $R$ and $\delta$ is a $\sigma$-kernel derivation of $R$. \end{proof} For use in later sections, we now note that the axioms (N4) and (N5) of Definition \ref{defstrong} can be replaced by seemingly stronger statements. \begin{prop}\label{generalaxioms} Let $(S,x)$ be a non-associative Ore extension of $R$. \begin{itemize} \item[(a)] The axiom {\rm (N4)} holds if and only if $(\mathbb{Z}[x] , S , R_x[x]) = \{ 0 \}$ holds. \item[(b)] The axiom {\rm (N5)} holds if and only if $(\mathbb{Z}[x] , R_x[x] , S) = \{ 0 \}$ holds. \end{itemize} \end{prop} \begin{proof} Since the ''if'' statements are trivial, we only show the ''only if'' statements. To this end, take $a \in R_x$, $b \in R$ and $m,n,p \in \mathbb{N}$. (a) We need to show that $(x^n , b x^m , ax^p) = 0$. Since $x \in N_m(S) \cap N_r(S)$ and $a \in R_x$ it is enough to show this relation for $m=p=0$. Since (N4) holds, we get, from the proof of Proposition \ref{necessarypolynomial}, that $(x^n b)a = \sum_{i \in \mathbb{N}} \pi_i^n(b) x^i a = \sum_{i \in \mathbb{N}} \pi_i^n(b) a x^i = \sum_{i \in \mathbb{N}} \pi_i^n(ba) x^i = x^n(ba).$ (b) We need to show that $(x^n , a x^p , b x^m) = 0$. Since $x \in N_m(S) \cap N_r(S)$ and $a \in R_x$ it is enough to show this relation for $m=p=0$. 
Since (N5) holds, we get, from the proof of Proposition \ref{necessarypolynomial}, that $(x^n a)b = (a x^n)b = a (x^n b) = \sum_{i \in \mathbb{N}} a \pi_i^n(b) x^i = \sum_{i \in \mathbb{N}} \pi_i^n(ab) x^i = x^n(ab).$ \end{proof} \begin{prop}\label{newproof} If $S = R[X ; \sigma , \delta]$ is a generalized polynomial ring, then \begin{itemize} \item[(a)] $R \subseteq N_l(S)$ if and only if $R$ is associative; \item[(b)] $X \in N_l(S)$ if and only if $\sigma$ is a ring endomorphism and $\delta$ is a $\sigma$-derivation; \item[(c)] $S$ is associative if and only if $R$ is associative, $\sigma$ is a ring endomorphism and $\delta$ is a $\sigma$-derivation. \end{itemize} \end{prop} \begin{proof} (a) The ''only if'' statement is clear. Now we show the ''if'' statement. Suppose that $R$ is associative. Take $a,b,c \in R$ and $m,n \in \mathbb{N}$. We wish to show that \begin{equation}\label{associativeabc} (a , bX^m , cX^n)=0. \end{equation} Since $X \in N_r(S)$, we get that $(a , bX^m , cX^n) = (a , bX^m , c)X^n$. Thus it is enough to prove \eqref{associativeabc} for $n=0$. Since $X \in N_m(S) \cap N_r(S)$ we get that $(a , bX^m , c) = (a , b , X^m c) = \sum_{i \in \mathbb{N}} (a , b , \pi_i^m(c) X^i) = \sum_{i \in \mathbb{N}} (a , b , \pi_i^m(c) ) X^i = 0$, using that $R$ is associative. (b) First we show the ''only if'' statement. Suppose that $X \in N_l(S)$. Take $a,b \in R$. From the equality $X(ab) = (Xa)b$ we get that $\delta(ab) + \sigma(ab)X = (\delta(a) + \sigma(a)X)b \stackrel{(N3)}{=} \delta(a)b + \sigma(a) (Xb) = \delta(a)b + \sigma(a)( \delta(b) + \sigma(b)X ) \stackrel{(N3)}{=} \delta(a)b + \sigma(a)\delta(b) + (\sigma(a)\sigma(b))X$. Hence, by (N1), we get that $\sigma$ is a homomorphism and $\delta$ is a $\sigma$-derivation. Now we show the ''if'' statement. Suppose that $\sigma$ is a homomorphism and that $\delta$ is a $\sigma$-derivation. From the calculation in the proof of the ''only if'' statement it follows that $X \in N_l(R)$. From the same type of reasoning that we used in the proof of the ''if'' statement in (a), we therefore get that $(X,S,S) \subseteq \sum_{i \in \mathbb{N}} (X,R,R)X^i = \{0\}$. (c) The ''only if'' statement follows directly from (a) and (b). Now we show the ''if'' statement. Suppose that $R$ is associative, $\sigma$ is a ring endomorphism and that $\delta$ is a $\sigma$-derivation. Take $a \in R$ and $m \in \mathbb{N}$. From (a) and (b) we get that $a,X \in N_l(S)$. Since $N_l(S)$ is multiplicatively closed we get that $aX^m \in N_l(S)$. Since $N_l(S)$ is closed under addition, we get that $S \subseteq N_l(S)$ and thus $S$ is associative. \end{proof} \begin{prop}\label{rightbasis} If $S = R[X ; \sigma , \delta]$ is a generalized polynomial ring with $\sigma$ bijective, then $B = \{ X^n \}_{n \in \mathbb{N}}$ is a basis for $S$ as a right $R$-module. \end{prop} \begin{proof} First we show that $B$ is a right $R$-linearly independent set. We will show that for each $n \in \mathbb{N}$, the set $B_n := \{ X^i \}_{i=0}^n$ is right $R$-linearly independent. We will prove this by induction over $n$. Base case: $n=0$. It is clear that $\{ 1 \}$ is right $R$-linearly independent. Induction step: suppose that $B_n$ is right $R$-linearly independent for some $n \in \mathbb{N}$. Suppose that $a_i \in R$, for $i \in \{1 ,\ldots , n+1\}$, are chosen so that $\sum_{i=0}^{n+1} X^i a_i = 0$. Then $0 = \sigma^{n+1}(a_{n+1}) X^{n+1} + \mbox{[lower terms]}$. Since $B_{n+1}$ is left $R$-linearly independent, we get that $\sigma^{n+1}(a_{n+1}) = 0$. 
Since $\sigma$ is injective, we get that $a_{n+1}=0$. Thus $\sum_{i=0}^{n} X^i a_i = 0$. By the induction hypothesis, we get that $a_i = 0$, for $i \in \{0,\ldots,n\}$. Next we show that $B$ right $R$-spans $S$. For each $n \in \mathbb{N}$, let $S_n$ (or $T_n$) denote the left (or right) $R$-span of $B_n$. We will show that for each $n \in \mathbb{N}$, the relation $S_n = T_n$ holds. We will prove this by induction over $n$. Base case: $n=0$. It is clear that $S_0 = R = T_0$. Induction step: suppose that $S_n = T_n$ for some $n \in \mathbb{N}$. Take $a = \sum_{i=0}^{n+1} a_i X^i \in S_{n+1}$. Since $\sigma$ is surjective, we can pick $r \in R$ such that $\sigma^{n+1}(r) = a_{n+1}$. This implies that $a - X^{n+1} r \in S_n$. By the induction hypothesis this implies that $a - X^{n+1} r \in T_n$. Thus $a \in T_n + X^{n+1}r \subseteq T_{n+1}$. Thus $S_{n+1} \subseteq T_{n+1}$. Since the inclusion $S_{n+1} \supseteq T_{n+1}$ trivially holds, the induction step is complete. \end{proof} Explicit formulas for how elements of generalized polynomial rings can be expressed as right $R$-linear combinations of elements from $B$ can be worked out exactly as in the classical case (see e.g. the formulas right after Theorem 7 in Ore's classical article \cite{ore1933}). In this article, we only need the following special case of these relations. \begin{prop}\label{rightformula} Suppose that $S = R[X ; \id_R , \delta]$ is a non-associative differential polynomial ring. If $r \in R$ and $n \in \mathbb{N}$, then $r X^n = \sum_{i=0}^n (-1)^i {n \choose i} X^{n-i} \delta^i(r)$. \end{prop} \begin{proof} We will show this by induction over $n$. Base case: $n=0$. This is clear since $r X^0 = r = X^0 r$. Induction step: suppose that $r X^n = \sum_{i=0}^n (-1)^i {n \choose i} X^{n-i} \delta^i(r)$ for some $n \in \mathbb{N}$. Then, since $X \in N_m(S) \cap N_r(S)$, we get that $r X^{n+1} = r X^n X = \sum_{i=0}^n (-1)^i {n \choose i} X^{n-i} \delta^i(r) X = \sum_{i=0}^n (-1)^i {n \choose i} X^{n-i} ( X \delta^i(r) - \delta^{i+1}(r) ) = \sum_{i=0}^n (-1)^i {n \choose i} X^{n+1-i} \delta^i(r) + (-1)^{i+1} {n \choose i} X^{n-i} \delta^{i+1}(r) = [ {n+1 \choose i} = {n \choose i} + {n \choose i-1} ]= \sum_{i=0}^{n+1} (-1)^i {n+1 \choose i} X^{n+1-i} \delta^i(r)$ \end{proof} \section{Ideal Structure}\label{simplicity} The aim of this section is to prove Theorem \ref{maintheorem}. To this end, we first show a series of results concerning simplicity and the center. Throughout this section, $R$ denotes a non-associative ring and $\sigma$ and $\delta$ are additive maps $R \rightarrow R$ satisfying $\sigma(1)=1$ and $\delta(1)=0$. Furthermore, we let $S = R[X ; \sigma , \delta]$ denote a non-associative Ore extension of $R$. \begin{defn}\label{defsigmadeltasimple} An ideal $I$ of $R$ is said to be \emph{$\sigma$-$\delta$-invariant} if $\sigma(I) \subseteq I$ and $\delta(I) \subseteq I$. If $\{ 0 \}$ and $R$ are the only $\sigma$-$\delta$-invariant ideals of $R$, then $R$ is said to be \emph{$\sigma$-$\delta$-simple}. \end{defn} \begin{prop}\label{sigmadeltasimple} If $S$ is simple, then $R$ is $\sigma$-$\delta$-simple. \end{prop} \begin{proof} Take a non-zero $\sigma$-$\delta$-invariant ideal $J$ of $R$. We wish to show that $J = R$. Let $I = \oplus_{i \in \mathbb{N}} J X^i$. Since $J$ is a right ideal of $R$ it follows that $I$ is a right ideal of $S$. Using that $J$ is $\sigma$-$\delta$-invariant it follows that $I$ is a left ideal of $S$. Since $J$ is non-zero it follows that $I$ is non-zero. 
By simplicity of $S$, we get that $I=S$ and thus $J = R$. \end{proof} \begin{prop}\label{R-sigmadelta-simple-Z-field} Suppose that $R$ is $\sigma$-$\delta$-simple. If $\sigma$ is a fixed point homomorphism and $\delta$ is a $\sigma$-kernel derivation, then $Z(R)_{\delta}^{\sigma}$ is a field. \end{prop} \begin{proof} Put $T = Z(R)_{\delta}^{\sigma}$. We already know that $Z(R)$ is an associative commutative unital ring. Suppose that $\sigma$ and $\delta$ are right $R_{\delta}^{\sigma}$-linear. Take $a,b \in T$. We have $\sigma(ab) = \sigma(a)b = ab$ and $\delta(ab) = \delta(a)b = 0b = 0$. Thus $ab \in T$. Since it is clear that $1 \in T$ and that $T$ is additively closed, it follows that $T$ is an associative commutative unital ring. What remains to show is that every non-zero element of $T$ has a multiplicative inverse. To this end, take a non-zero $a \in T$. Then $Ra$ is a non-zero ideal of $R$ with $\sigma(Ra) = \sigma(R)a \subseteq Ra$ and $\delta(Ra) = \delta(R)a \subseteq Ra$. Hence $Ra$ is $\sigma$-$\delta$-invariant. By $\sigma$-$\delta$-simplicity of $R$, we get that $Ra = R$. Thus, there is $b \in R$ such that $ab = 1$. By Proposition \ref{centerInvClosed}, we get that $b \in Z(R)$. Now we show that $b \in R_{\delta}^{\sigma}$. Indeed, $\sigma(b)=\sigma(b)1=\sigma(b)ab=\sigma(ba)b=\sigma(1)b=1b=b$ and $\delta(b) = \delta(b)1 = \delta(b)ab = \delta(ba)b = \delta(1)b = 0b = 0$. This shows that $b\in T$. The left $R_{\delta}^{\sigma}$-linear case is treated analogously. \end{proof} \begin{prop}\label{commutativecondition} If $a \in R_{\delta}^{\sigma}[X]$ commutes with every element of $R$, then $a \in C(S)$. \end{prop} \begin{proof} First we show, using induction, that, for every $n \in \mathbb{N}$, the relation $[a,x^n]=0$ holds. The base case $n=0$ follows immediately since $[a,X^0] = [a , 1] = 0$. Now we show the induction step. Suppose that $[a,X^n]=0$ for some $n \in \mathbb{N}$. Then $[a,X^{n+1}] = a X^{n+1} - X^{n+1} a = a ( X X^n) - (X X^n) a = (a X) X^n - X (X^n a) = (a X) X^n - X (a X^n) = (a X) X^n - (X a) X^n = [a,X]X^n = 0$, since it follows from $a \in R_{\delta}^{\sigma}[X]$ that $[a,X]=0$. Now $[a,bX^n] = a(bX^n) - (bX^n)a = (ab)X^n - b(X^n a) = (ab)X^n - b(a X^n) = (ab)X^n - (ba)X^n = [a,b]X^n = 0$. \end{proof} \begin{prop}\label{associativecondition} Suppose that $S$ is a strong non-associative Ore extension of $R$. If $a \in R_{\delta}^{\sigma}[X]$ commutes with every element of $R$, and associates with all elements of $R$, then $a \in Z(S)$. \end{prop} \begin{proof} By Proposition~\ref{commutativecondition} we conclude that $a\in C(S)$. Since $Z(S) = C(S) \cap N(S)$, we need to show that $a \in N(S)$. First we show that $a \in N_l(S)$. Take $n,p \in \mathbb{N}$. Since $(a,R,R) = \{ 0 \}$ and $X \in N_m(S) \cap N_r(S)$, we get that $(a, RX^n , RX^p) = (a , RX^n , R)X^p = (a , R , X^n R)X^p \subseteq \sum_{i \in \mathbb{N}} (a , R , \pi_i^n(R) X^i) X^p \subseteq \sum_{i = 1}^n (a , R , R) X^{i+p} = \{ 0 \}.$ By Proposition \ref{intersection}, we are done if we can show that $a \in N_m(S)$ or $a \in N_r(S)$. Case 1: (N4) holds. We show that $a \in N_r(S)$. We wish to show that \begin{equation}\label{rightzero} (bX^n , cX^p , a) = 0. 
\end{equation} Since $X \in N_m(S) \cap N_r(S)$ and $a \in C(S)$, we get that $( (bX^n) (cX^p) ) a = ( ((bX^n) c) X^p) a = ( (bX^n)c ) (X^p a) = ( (bX^n ) c ) (a X^p) = (((bX^n)c) a)X^p = (( b (X^n c) ) a) X^p$ and, by Proposition \ref{generalaxioms}(a), we get that $bX^n ( (cX^p) a ) = bX^n ( c (X^p a) ) = bX^n ( c ( a X^p ) ) = bX^n ( (ca) X^p ) = (bX^n (ca) ) X^p = ( b ( X^n (ca))) X^p = ( b ( (X^n c) a ) ) X^p.$ This shows \eqref{rightzero}. Case 2: (N5) holds. We show that $a \in N_m(S)$. We wish to show that \begin{equation}\label{middlezero} (bX^n , a , cX^p) = 0. \end{equation} Since $X \in N_r(S)$, we only need to show \eqref{middlezero} for $p=0$. Since $X \in N_m(S) \cap N_r(S)$, $a \in C(S)$ and $a$ associates with all elements of $R$, we get that $((bX^n)a)c = (b (X^n a) )c = (b (a X^n))c = ( (ba)X^n ) c = (ba)(X^n c) = \sum_{i \in \mathbb{N}} (ba) \pi_i^n(c) X^i = \sum_{i \in \mathbb{N}} b (a \pi_i^n(c) ) X^i.$ On the other hand, since $a \in C(S)$, $X \in N_m(S) \cap N_r(S)$ and Proposition \ref{generalaxioms}(b) holds, we get that $bX^n (a c) = b (X^n (ac)) = b ( (X^n a) c) = b ( (a X^n) c) = b ( a (X^n c) ) = \sum_{i \in \mathbb{N}} b ( a \pi_i^n(c) ) X^i.$ This shows \eqref{middlezero}. \end{proof} \begin{cor}\label{corcenter} If $\delta$ is a kernel derivation on $R$ and we put $D = R[X ; \id_R , \delta]$, then $Z(D)$ is the set of all $a \in D$ such that (i) $a$ commutes with $X$, and (ii) $a$ commutes with all elements of $R$, and (iii) $a$ associates with all elements of $R$. \end{cor} \begin{prop}\label{regular} Let $\sigma$ be injective and suppose that $a,b \in S=R[X;\sigma, \delta]$ are elements such that $ab=ba=1$. If the leading coefficient of $a$ is a regular element of $R$, then $a,b \in R$. \end{prop} \begin{proof} Suppose that $b = \sum_{i=0}^m b_i X^i$, where $b_m \neq 0$. Comparing coefficients of $X^{n+m}$ in the relation $ab = 1$ we get that $a_n \sigma^n(b_m) = 0$ if $m + n > 0$. Since $a_n$ is regular, we therefore get that $\sigma^n(b_m)=0$ whenever $m+n > 0$. By injectivity of $\sigma$, we get $b_m=0$ if $m > 0$. Comparing coefficients of degree $n$ in the relation $ba = b_0 a = 1$ we get that $b_0 a_n = 0$ if $n > 0$. Since $b_0 = b \neq 0$ and $a_n$ is regular, we get that $n=0$. Hence $m=n=0$ and $a,b \in R$. \end{proof} \begin{prop}\label{sumdegrees} If $a,b \in S$, then $\deg(ab) \leq \deg(a) + \deg(b)$. Moreover, if $b$ is monic or $a$ is monic and $\sigma$ is injective, then equality holds. \end{prop} \begin{proof} Suppose that $\deg(a)=m$ and $\deg(b)=n$. Let $a_m$ and $b_n$ denote the leading coefficients of $a$ and $b$ respectively. Then $ab = a_m \sigma^m(b_n) X^{m+n} + [\mbox{lower terms}]$. So $\deg(ab) \leq m+n = \deg(a) + \deg(b)$. Equality holds if and only if $a_m \sigma^m(b_n) \neq 0$. This holds in particular if $b_n=1$ or if $a_m=1$ and $\sigma$ is injective. \end{proof} Next we show that there in some cases is a Euclidean algorithm for $S$. \begin{prop}\label{euclideanalgorithm} If $a,b \in S$ where $b$ is monic, then $a = qb + r$ for suitable $q,r \in S$ such that either $r=0$ or $\deg(r) < \deg(b)$. \end{prop} \begin{proof} We follow closely the proof in \cite[p. 94]{rowen1988} for the associative case. Without loss of generality, we may assume that $a\neq 0$. Suppose that $\deg(a)=m$ and $\deg(b)=n$. Let $a_m$ denote the leading coefficient of $a$. Case 1: $m < n$. Then we can put $q=0$ and $r=a$. Case 2: $m \geq n$. Put $c = a - (a_m X^{m-n}) b$. Then $\deg(c) < \deg(a)$. 
By induction there are $q',r' \in S$ with $c = q'b + r'$ and $r' = 0$ or $\deg(r') < n$. This implies that $a = (a_m X^{m-n}) b + c = (a_m X^{m-n}) b + q'b + r' = (a_m X^{m-n} + q')b + r'.$ So we can put $q = a_m X^{m-n} + q'$ and $r=r'$. \end{proof} \subsection*{Proof of Theorem \ref{maintheorem}} \subsubsection*{Proof of {\rm (a)}} Let $I$ be an ideal of $D$. Suppose that $m$ is the minimal degree of non-zero elements of $I$. Put $J = \{ r \in R \mid \exists r_0,r_1, \ldots, r_{m-1} \in R : rX^m + r_{m-1} X^{m-1} + \ldots + r_0 \in I \}.$ It is clear that $J$ is an ideal of $R$. From the fact that $XI - IX \subseteq I$ it follows that $J$ is $\delta$-invariant. Since $R$ is $\delta$-simple and $J$ is non-zero, we can conclude that $J = R$. In particular, $1 \in J$. Therefore there is a monic $a \in I$ of degree $m$. Now we show that $a \in Z(D)$. To this end, we check (i), (ii) and (iii) of Corollary \ref{corcenter}. Since the leading coefficient of $[X,a]$ is $\delta(1) = 0$, we get that $\deg([X,a]) < m$, which, since $[X,a] \in I$, implies that $[X,a]=0$, by minimality of $m$. Thus (i) holds. Now we check (ii). Take $r \in R$. Since $a$ is monic the leading coefficient of $[a,r]$ is $[1,r] = 0$. Thus $\deg([a,r]) < m$ which, since $[a,r] \in I$, implies that $[a,r]=0$, by minimality of $m$. Now we check (iii). Take $r,s \in R$. Since $a$ is monic and the leading coefficients of all the polynomials $(a,r,s)$, $(r,a,s)$ and $(r,s,a)$ equal zero, all of them have degree less than $m$. By minimality of $m$ and the fact that all of these polynomials belong to $I$, we get that they are zero. Thus (iii) holds. Next we show that $I = Da$. The inclusion $I \supseteq Da$ is clear. Now we show the reversed inclusion. Take a non-zero $c \in I$. Since $\deg(c) \geq \deg(a)$, we can use Proposition \ref{euclideanalgorithm} to conclude that $c = qa + r$, for some $q,r \in D$ with $\deg(r) < \deg(a)$. But then $r = c - qa \in I$, which, by minimality of $m$, implies that $r = 0$. Therefore $c = qa \in Da$. Hence $I \subseteq Da$. Finally we show uniqueness of $a$. Suppose that $d \in D$ is monic and $I = Dd$. From the relations $a \in Dd$ and $d \in Da$ we get, respectively from Proposition \ref{sumdegrees}, that $\deg(a) \geq \deg(d)$ and $\deg(d) \geq \deg(a)$, which together imply that $\deg(a) = \deg(d)$. Since $a$ and $d$ are monic, we get that $\deg(a-d) < m$, which, by $a-d \in I$ and minimality of $m$, implies that $a=d$. \subsubsection*{Proof of {\rm (b)}} Case 1: $Z(D)$ only contains polynomials of degree zero. Then $Z(D) \subseteq Z(R)_{\delta}$. But since $Z(R)_{\delta} \subseteq Z(D)$ we get that $Z(D) = Z(R)_{\delta}$ and we can choose $b=1$. Case 2: $Z(D)$ contains polynomials of degree greater than zero. Let $n$ denote the least degree of non-constant polynomials in $Z(D)$. Take $b \in Z(D)$ such that $\deg(b)=n$. Now we show that we may choose $b$ to be monic. Since $I = Db$ is an ideal of $D$, by (a), we may choose a monic $f \in I \cap Z(D)$ such that $I = Df$. But then $b = cf$ for some $c \in D$. Since $f$ is monic we get that $n = \deg(b) = \deg(c) + \deg(f)$ which implies that $\deg(f) \leq n$. By minimality of $n$ we get that $\deg(f) = n$ and we may choose $b$ to be the monic $f$. Now take $g \in Z(D)$ of degree $m$. We will show by induction over the degree of $g$ that $g \in Z(R)_{\delta}[b]$. Base case: $m=0$, i.e. $g$ is constant. Then $g \in R \cap Z(D) = Z(R)_{\delta} \subseteq Z(R)_{\delta}[b]$. Induction step: suppose that $m > 0$ and that we have shown the claim for all $m' < m$. Since $b$ is monic, we can write $g = hb + k$ for some $h,k \in D$ with $\deg(k) < \deg(b)$.
Note that, since $b$ is monic, we get that $\deg(h) < \deg(g)$. We claim that $h,k \in Z(D)$. If we assume that the claim holds, then, by the induction hypothesis, we are done. Now we show the claim. To this end, we will check (i),(ii) and (iii) in Corollary \ref{corcenter}. First we check (i). Note that $0 = [X,g] = [X,h]b + [X,k]$. Seeking a contradiction, suppose that $[X,h] \neq 0$. Since $b$ is monic and $\deg([X,k]) \leq \deg(k)$, we get the contradiction $-\infty = \deg(0) = \deg([X,g])= \deg( [X,h]b + [X,k] ) \geq n$. Therefore $[X,h]=0$ and hence $[X,k]=0$. In other words $h,k \in R_{\delta}[X]$. Now we show (ii). To this end, note that $0 = [r,g] = [r,h]b + [r,k]$. Seeking a contradiction, suppose that $[r,h] \neq 0$. Since $b$ is monic and $\deg([r,k]) \leq \deg(k)$, we get the contradiction $-\infty = \deg(0) = \deg([r,g]) = \deg( [r,h]b + [r,k] ) \geq n$. Therefore $[r,h]=0$ and hence $[r,k]=0$. Finally, we show (iii). Take $r,s \in R$. Let $\alpha(\cdot)$ denote either of the maps $(\cdot,r,s)$, $(r,\cdot,s)$ or $(r,s,\cdot)$. Then $0 = \alpha(g) = \alpha(h)b + \alpha(k)$. Seeking a contradiction, suppose that $\alpha(h) \neq 0$. Since $b$ is monic and $\deg(\alpha(k)) \leq \deg(k)$, we get the contradiction $-\infty = \deg(0) = \deg(\alpha(g)) = \deg( \alpha(h)b + \alpha(k) ) \geq n$. Therefore $\alpha(h)=0$ and hence $\alpha(k)=0$. This completes the induction step. Now we show uniqueness of $b$ up to addition by an element from $Z(R)_{\delta}$. Case 1: $Z(D)$ only contains polynomials of degree zero. Then there is only one monic polynomial in $Z(D)$, namely $b=1$. Case 2: $Z(D)$ contains polynomials of degree greater than zero i.e. $n > 0$. Suppose that there is another monic $b' \in R_{\delta}[X]$ such that $Z(D) = Z(R)_{\delta}[b']$. Then there is a polynomial $p \in Z(R)_{\delta}[X]$ such that $b = p(b')$. Hence $n = \deg(b) = \deg(p(b)) \geq \deg(b')$. By minimality of $n$, we get that $\deg(b')=n$. But then $b-b'$ is a polynomial in $Z(D)$ of degree less than $n$, which, by minimality of $n$, implies that $b - b' \in Z(R)_{\delta}$. \subsubsection*{Proof of {\rm (c)}} First we show the ''only if'' statement. Suppose that $D$ is simple. By Proposition \ref{sigmadeltasimple}, we get that $R$ is $\delta$-simple. By Proposition \ref{centerfield}, we get that $Z(D)$ is a field. Next we show the ''if'' statement. Suppose that $R$ is $\delta$-simple and that $Z(D)$ is a field. Let $I$ be a non-zero ideal of $D$. By (a) and Proposition \ref{regular}, this implies that the polynomial in $Z(D)$ corresponding to $I$ is $1$. This implies that $I = D$. By (b) and Proposition \ref{regular}, the ring $Z(R)_{\delta}[b]$ is a field precisely when $b=1$. \subsubsection*{Proofs of {\rm (d)} and {\rm (e)}} By Proposition \ref{rightbasis}, we can write $b = \sum_{i=0}^n b_i X^i$, where $b_i \in R$, for $i\in \{1,\ldots,n\}$, with $b_n = 1$. Since $b \in Z(D)$, we get, in particular, that $Xb = bX$. This implies that $\delta(b_i)=0$, for $i\in\{1,\ldots,n\}$. Therefore $b = \sum_{i=0}^n X^i b_i$. For every $j \in \{ 1,\ldots,n \}$ define the polynomial $c_j = \sum_{i=j}^n X^{i-j} {i \choose j} b_i$. We claim that each $c_j \in Z(D)$. If we assume that the claim holds, then, by minimality of $n$, we get that $b_j = c_j \in Z(R)_{\delta}$ and that ${i \choose j} b_i = 0$ whenever $1 \leq j < i \leq n$. In the case when the characteristic of $Z(R)_{\delta}$ is zero, we therefore get that $b=1$ or $b = b_0 + X$. The relation $br=rb$ now gives us that $\delta = \delta_{b_0}$. 
Now suppose that the characteristic of $Z(R)_{\delta}$ is a prime $p$. Fix $i \in \{ 1,\ldots,n \}$ such that $b_i$ is non-zero. Then ${i \choose j} = 0$ when $1 \leq j < i$. By Lucas' Theorem (see e.g. \cite{fine1947}) this implies that $i$ must be a power of $p$. Choose the largest $q \in \mathbb{N}$ such that $p^q \leq n$. For each $i \in \mathbb{N}$ put $c_i = b_{p^i}$. Also put $c = b_0$. Then $b = c + \sum_{i = 0}^q c_i X^{p^i}$. The relation $br=rb$ now gives us that $\sum_{i=0}^q c_i \delta^{p^i} = \delta_c$. Now we show the claim. To this end, we will check conditions (i), (ii) and (iii) of Corollary \ref{corcenter}. Since $\delta(b_i)=0$ we know that (i) holds. Now we show (ii). Take $r \in R$. First note that since $br = rb$, we can use Proposition \ref{rightformula} to conclude that \begin{equation}\label{notethat} b_v r = \sum_{i=v}^n (-1)^{i-v} {i \choose i-v} \delta^{i-v}(r) b_i \end{equation} for each $v \in \{ 0,\ldots,n \}$. Thus, \begin{align*} r c_j &= \sum_{i=j}^n r \left( X^{i-j} {i \choose j} b_i \right) \stackrel{[X \in N_m(D)]}{=} \sum_{i=j}^n \left( r X^{i-j} \right) {i \choose j} b_i \\ &= \sum_{i=j}^n \left( \sum_{k=0}^{i-j} X^{i-j-k} (-1)^k {i-j \choose k} \delta^k(r) \right) {i \choose j} b_i \stackrel{[X \in N_l(D)]}{=} \sum_{i=j}^n \sum_{k=0}^{i-j} X^{i-j-k} (-1)^k {i-j \choose k} \delta^k(r) {i \choose j} b_i \\ &\stackrel{[v = i-k]}{=} \sum_{v=j}^n \sum_{i=v}^n X^{v-j} {i \choose j}{i-j \choose i-v} (-1)^{i-v} \delta^{i-v}(r) b_i = \sum_{v=j}^n \sum_{i=v}^n X^{v-j} {v \choose j}{i \choose i-v} (-1)^{i-v} \delta^{i-v}(r) b_i \\ &\stackrel{[{\rm Eq.} \ \eqref{notethat}]}{=} \sum_{v=j}^n X^{v-j} {v \choose j} b_v r = c_j r. \end{align*} Finally, we show (iii). Take $r,s \in R$. From the relations $(r,s,b)=0$ and $(b,r,s)=0$ it follows that $(r,s,b_i)=(b_i,r,s)=0$. Hence we get that $(r,s,c_j)=(c_j,r,s)=0$. Thus $c_j \in N_r(R) \cap N_l(R)$. Since $c_j \in C(R)$, we now automatically get that $(r,c_j,s) = (r c_j)s - r(c_j s) = (c_j r) s - r (s c_j) = c_j (rs) - (rs) c_j = 0$. Hence $c_j \in N_m(R)$. $\qed$ \begin{rem} Our proof of Theorem \ref{maintheorem}(d)(e) follows closely the proof of Amitsur \cite[Theorems 3 and 4]{amitsur1957} from the associative situation. We also remark that Amitsur's proof is much simpler in characteristic $p>0$ than the proofs given later by Jordan \cite[Theorem 4.1.6]{jordan1975} in the $\delta$-simple situation, although, as we show, Amitsur's original proof can be adapted to this situation. \end{rem} \section{Non-associative Weyl Algebras}\label{sectionweyl} In this section, we show that there are lots of natural examples of non-associative diffe\-rential polynomial rings. To this end, we introduce non-associative versions of the first Weyl algebra (see Definition \ref{definitionweyl}) and we show that they are often simple regardless of the characteristic (see Theorem \ref{theoremweyl}). Throughout this section, $T$ denotes a non-associative ring and $T[Y]$ denotes the polynomial ring over the indeterminate $Y$. In other words $T[Y] = T[Y ; \id_R , 0]$ as a generalized polynomial ring. \begin{defn}\label{definitionweyl} If $\delta : T[Y] \rightarrow T[Y]$ is a $T$-linear map such that $\delta(1)=0$, then the non-associative differential polynomial ring $T[Y] [X ; \id_R , \delta]$ is called a {\it non-associative Weyl algebra}. \end{defn} \begin{rem} A non-associative Weyl algebra is a generalization of the classical (associative) first Weyl algebra, hence the name.
Recall that the first Weyl algebra, $A_1(\mathbb{C})=\mathbb{C}\langle X,Y\rangle / (XY-YX-1)$ may be regarded as a differential polynomial ring $\mathbb{C}[Y][X;\identity_\mathbb{C},\delta]$, where $\delta : \mathbb{C}[Y] \to \mathbb{C}[Y]$ is the standard derivation on $\mathbb{C}[Y]$. \end{rem} \begin{thm}\label{theoremweyl} If $T$ is simple and there for each positive $n \in \mathbb{N}$ is a non-zero $k_n \in Z(T)$ such that $\delta(Y^n) = k_n Y^{n-1}$, then the non-associative Weyl algebra $T[Y] [X ; \id_R , \delta]$ is simple. \end{thm} \begin{proof} Put $R = T[Y]$ and $S = R[X ; \id_R , \delta]$. First we show that $R$ is $\delta$-simple. Let $I$ be a non-zero $\delta$-invariant ideal of $R$. Take a non-zero $a \in I$. Suppose that the degree of $a$ is $n$. From the definition of $\delta$ it follows that $\delta^n(a)$ is a non-zero element of $I$ of degree zero. This means that $I \cap T$ is non-zero. By simplicity of $T$ it follows that $I \cap T = T$. In particular $1 \in T = I \cap T \subseteq I$. Hence $I = R$. It is clear that $\delta$ is a kernel derivation. Therefore, by Theorem \ref{maintheorem}(b)(c) we are done if we can show that every non-zero monic $b \in R_{\delta}[X] \cap Z(S)$ is of degree zero. It is clear that $R_{\delta} = T$. Therefore $b \in T[X] \cap Z(S)$. Seeking a contradiction, suppose that the degree of $b$ is $n > 0$. Put $b = X^n + c X^{n-1} + [\mbox{lower terms}]$. From $b \in Z(S)$ it follows that $c \in Z(T)$. Take $r \in R$. Then $0 = br - rb = (\delta(r) + cr - r c)X^{n-1} + [\mbox{lower terms}] = [c \in Z(T)] = \delta(r) X^{n-1} + [\mbox{lower terms}]$. Thus $\delta(r) = 0$, for all $r \in R$, which is a contradiction since e.g. $\delta(Y) = k_1 \neq 0$. \end{proof} \begin{cor} If $T$ is simple and $\delta$ is the classical derivative on $T[Y]$, then the non-associative Weyl algebra $T[Y] [X ; \id_R , \delta]$ is simple if and only if ${\rm char}(T)=0$. \end{cor} \begin{proof} The ''if'' statement follows immediately from Theorem \ref{theoremweyl} where $k_n = n$, for $n > 0$. Now we show the ''only if'' statement. Suppose that ${\rm char}(T)=p>0$. Then $Y^p \in Z(T[Y] [X ; \id_R , \delta])$. In particular, from Proposition \ref{regular}, we get that $Z(T[Y] [X ; \id_R , \delta])$ is not a field. By Theorem \ref{maintheorem}(c) we get that $T[Y] [X ; \id_R , \delta]$ is not simple. As an alternative proof it is easy to see that the proper non-zero ideal in $T[Y]$ generated by $Y^p$ is $\delta$-invariant. Thus $T[Y]$ is not $\delta$-simple. By Theorem \ref{maintheorem}(c), we get that $T[Y] [X ; \id_R , \delta]$ is not simple. \end{proof} \section{Kernel Derivations Defined by Automorphisms}\label{sectiondynamics} In this section, we show simplicity results for a differential polynomial ring version of the quantum plane (see Theorem \ref{theoremquantumtorus}) and for differential polynomial rings defined by actions on compact Hausdorff spaces (see Theorem \ref{theoremdynamics}). To this end, we introduce a class of $\sigma$-kernel derivations defined by ring morphisms (see Definition \ref{definitionkernel}). Throughout this section, $R$ denotes a non-associative ring. \begin{prop}\label{definitionkernel} If $\alpha : R \rightarrow R$ is a ring morphism, then the map $\delta_{\alpha} : R \rightarrow R$ defined by $\delta_{\alpha}(r) = \alpha(r) - r$, for $r \in R$, is a left and right $R_{\delta_{\alpha}}^{\identity_R}$-linear $\alpha$-kernel derivation. Moreover, an ideal $I$ of $R$ is $\delta_{\alpha}$-simple if and only if it is $\alpha$-simple. 
\end{prop} \begin{proof} It follows immediately that $\delta_{\alpha}(1)=0$ and that $\delta_{\alpha}$ is additive. Now we will show that $\delta_{\alpha}$ in fact is $R_{\delta_{\alpha}}^{\identity_R}$-linear both from the left and the right. In particular, $\delta_{\alpha}$ is an $\alpha$-kernel derivation. Take $r \in R$ and $s \in \ker(\delta_\alpha)$. Then $\delta_{\alpha}(rs) = \alpha(rs) - rs = \alpha(r)\alpha(s) - rs = \alpha(r)s - rs = (\alpha(r) - r)s = \delta_{\alpha}(r)s$. In the same way we get that $\delta_{\alpha}(sr) = s \delta_{\alpha}(r)$. The last statement is clear since if $a \in I$, then $\delta_\alpha(a) \in I$ if and only if $\alpha(a) - a \in I$. \end{proof} \begin{rem}\label{remarkderivation} The $\alpha$-kernel derivation $\delta_{\alpha}$ from Proposition \ref{definitionkernel} is seldom a derivation. In fact, suppose that $\delta_{\alpha}$ is a derivation. Take $r,s \in R$. Then the relation $\delta_{\alpha}(rs) = \delta_{\alpha}(r)s + r \delta_{\alpha}(s)$ may be rewritten as $\delta_{\alpha}(r) \delta_{\alpha}(s) = 0$. So in particular, we get that $\delta_{\alpha}(r)^2 = 0$. Hence, if $R$ is a reduced ring, i.e. a ring with no non-zero nilpotent elements, then $\delta_{\alpha}$ is a derivation if and only if $\alpha = \id_R$. Thus, in that case, $\delta_\alpha$ would have to be the zero map. \end{rem} Let $T$ be a simple non-associative ring and suppose that $q \in Z(T) \setminus \{ 0 \}$. Let $T[Y]$ denote the polynomial ring in the indeterminate $Y$ over $T$. Define a ring automorphism $\alpha_q : T[Y] \rightarrow T[Y]$ by the $T$-algebra extension of the relation $\alpha_q(Y) = qY$. By Proposition \ref{definitionkernel}, $\alpha_q$ in turn defines an $\alpha_q$-kernel derivation $\delta_{\alpha_q} : T[Y] \rightarrow T[Y]$. It is not hard to show, using Remark \ref{remarkderivation}, that $\delta_{\alpha_q}$ is a classical derivation if and only if $q = 1$. \begin{prop}\label{proprootunity} If $T$ is simple, then $T[Y]$ is $\delta_{\alpha_q}$-simple if and only if $q$ is not a root of unity. \end{prop} \begin{proof} Put $R = T[Y]$. First we show the ''only if'' statement. Suppose that $q$ is a root of unity. Take a non-zero $n \in \mathbb{N}$ with $q^n = 1$. Then the ideal of $R$ generated by $Y^n$ is a non-zero proper $\alpha_q$-invariant ideal. Thus, $R$ is not $\alpha_q$-simple. By Proposition \ref{definitionkernel} we get that $R$ is not $\delta_{\alpha_q}$-simple. Now we show the ''if'' statement. Suppose that $q$ is not a root of unity. Take a non-zero $\delta_{\alpha_q}$-invariant ideal $I$ of $R$. We wish to show that $I = R$. By Proposition \ref{definitionkernel} $I$ is $\alpha_q$-invariant. Take a non-zero $a \in I$ of least degree $m$. Seeking a contradiction, suppose that $m > 0$. Write $a = \sum_{i=0}^m a_i Y^i$, for some $a_i \in T$, for $i \in \{0,\ldots,m\}$. Then $\alpha_q(a) - q^m a$ is a non-zero element of $I$ of degree less than $m$. This contradicts the minimality of $m$. Thus $m=0$ and thus $a \in I \cap T$. Since $T$ is simple, we get that the ideal $J$ of $T$ generated by $a$ equals $T$. In particular, we get that $I \supseteq T \ni 1$. Thus $I = R$. \end{proof} \begin{thm}\label{theoremquantumtorus} If $T$ is simple and ${\rm char}(T)=0$, then the non-associative differential polynomial ring $D = T[Y][X ; \id_{T[Y]} , \delta_{\alpha_q}]$ is simple if and only if $q$ is not a root of unity. In that case, $Z(D) = Z(T)$. \end{thm} \begin{proof} The ''only if'' statement follows from Theorem \ref{maintheorem}(c) and Proposition \ref{proprootunity}.
Now we show the ''if'' statement. Put $R = T[Y]$ and $\delta = \delta_{\alpha_q}$. Suppose that $q$ is not a root of unity. By Proposition \ref{proprootunity}, we get that $R$ is $\delta$-simple. By Theorem \ref{maintheorem}(c), we are done if we can show that $Z(S)$ is a field. To this end, we first note that, by Theorem \ref{maintheorem}(b), there is a unique monic $b \in Z(D)$ of least degree $n$. Seeking a contradiction, suppose that $n > 0$. Then $b = \sum_{i=0}^n b_i X^i$, for some $b_i \in R_{\delta}$. But since $q$ is not a root of unity, it follows that $R_{\delta} = T$. Thus $b \in T[X]$. From the fact that $b \in Z(D)$, we get that $bt = tb$, for $t \in T$, which in turn implies that $b_i \in Z(T)$, for $i\in\{0,\ldots,n\}$. By looking at the degree $n-1$ coefficient in the relations $br = rb$, for $r \in R$, we get that $\alpha_q = \id_R$, which contradicts the fact that $q \neq 1$. Thus $n=0$ and it follows that $b=1$. By Theorem \ref{maintheorem}(b), we get that $Z(D) = Z(R)_{\delta}[1] = Z(T)$. \end{proof} \begin{rem} Given a field $\mathbb{F}$ and $q\in \mathbb{F} \setminus\{0\}$, we may define the so called \emph{quantum plane} (see e.g. \cite[Chapter IV]{Kassel}) as $\mathbb{F}_q[X,Y] = \mathbb{F}\langle X,Y\rangle/(YX-qXY)$. The quantum plane is an associative algebra and it can be realized as a classical Ore extension. Indeed, if we define $\sigma : \mathbb{F}[X] \to \mathbb{F}[X]$ by $\sigma(X)=qX$, then the quantum plane $\mathbb{F}_q[X,Y]$ is isomorphic to the Ore extension $\mathbb{F}[X][Y,\sigma,0]$. While the quantum plane can be seen as a $q$-deformation, the non-associative Ore extension $D = T[Y][X ; \id_{T[Y]} , \delta_{\sigma_q}]$ that we study in Theorem \ref{theoremquantumtorus} can be seen as a non-associative deformation of the plane. \end{rem} There are several ways to associate an (associative) algebra to a dynamical system $(G,X)$, where $G$ is a group acting on a topological space $X$. By associating a skew group algebra (see \cite{oinert2014}) or a crossed product $C^*$-algebra (see \cite{Power}) to the dynamical system, it is possible to encode the dynamical system into the algebra in such a way that dynamical features (faithfulness, freeness, minimality etc) of the dynamical system correspond to algebraical properties of the algebra. We shall now show how to associate a non-associative differential polynomial ring to a dynamical system and exhibit a correspondence between minimality of the dynamical system and simplicity of the non-associative ring. For the rest of this section, let $K$ denote any of the real algebras $\mathbb{R}$ (real numbers), $\mathbb{C}$ (complex numbers), $\mathbb{H}$ (Hamilton's quaternions), $\mathbb{O}$ (Graves' octonions), $\mathbb{S}$ (sedenions), etc. obtained by iterating the classical Cayley-Dickson doubling procedure of the real numbers (for more details concerning this construction, see e.g. \cite{baez2002}). It is well known that $K$ is then a reduced ring. Also, apart from the cases when $K$ equals $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$, $K$ is not associative. Furthermore, there is an $\mathbb{R}$-linear involution $\overline{\cdot} : K \rightarrow K$ and a norm $| \cdot | : K \rightarrow \mathbb{R}_{\geq 0}$ satisfying $k \overline{k} = |k|^2$, for $k \in K$. For the rest of this section, let $Y$ be a compact Hausdorff space and let $g : Y \to Y$ be a continuous map. A closed subspace $Z$ of $Y$ is called $g$-invariant if $g(Z) \subseteq Z$. 
The action of $g$ on $Y$ is called {\it minimal} if $\emptyset$ and $Y$ are the only closed $g$-invariant subspaces of $Y$. By abuse of notation, we let $C(Y)$ denote the ring of continuous functions $Y \rightarrow K$. Since $K$ is reduced, we get that $C(Y)$ is also reduced. The continuous map $g : Y \rightarrow Y$ defines a ring homomorphism $\sigma(g) : C(Y) \rightarrow C(Y)$, where $\sigma(g)(f) = f \circ g$, for $f \in C(Y)$. By Proposition \ref{definitionkernel}, $\sigma(g)$ in turn defines a $\sigma(g)$-kernel derivation $\delta_{\sigma(g)} : C(Y) \rightarrow C(Y)$. Note that, by Remark \ref{remarkderivation}, $\delta_{\sigma(g)}$ is a classical derivation if and only if $g = \id_Y$. \begin{prop}\label{NyPropMinimal} If the action of $g$ on $Y$ is minimal, then the ring $C(Y)$ is $\delta_{\sigma(g)}$-simple. \end{prop} \begin{proof} Suppose that $Y$ is $g$-minimal. We show that $C(Y)$ is $\delta_{\sigma(g)}$-simple. Suppose that $I$ is a non-zero $\delta_{\sigma(g)}$-invariant ideal of $C(Y)$. For a subset $J$ of $I$ define $N_J = \cap_{f \in J} f^{-1}(0)$. Since $I$ is $\sigma(g)$-invariant it follows that $N_I$ is $g$-invariant. It is clear that $N_J$ is closed. Since $I$ is non-zero it follows that $N_I$ is a proper subset of $Y$. By $g$-minimality of $Y$, we get that $N_I$ is empty. By compactness of $Y$ we get that there is some finite subset $J$ of $I$ such that $N_J$ is empty. Define $h \in I$ by $h = \sum_{f \in J} f \overline{f} = \sum_{f \in J} |f|^2$. Since $N_J$ is empty we get that $h(x) \neq 0$ for all $x \in Y$. Therefore $I$ contains the invertible element $h$ and hence $I = C(Y)$. \end{proof} \begin{thm}\label{NYtheoremdynamics} The non-associative differential polynomial ring $D = C(Y)[X ; \id_{C(Y)} , \delta_{\sigma(g)}]$ is simple if the action of $g$ on $Y$ is minimal and the topology on $Y$ is non-discrete. In that case, $Z(D) = \mathbb{C}$, if $K = \mathbb{C}$, and $Z(D) = \mathbb{R}$, otherwise. \end{thm} \begin{proof} Put $R = C(Y)$. Suppose that the action of $g$ on $Y$ is minimal. By Proposition \ref{NyPropMinimal} we get that $R$ is $\delta_{\sigma(g)}$-simple. By Theorem \ref{maintheorem}(c) we are done if we can show that $Z(D)$ is a field. To this end, we first note that, by Theorem \ref{maintheorem}(b), there is a unique monic $b \in Z(D)$ (up to addition of elements of $Z(R)_\delta$, which are of degree $0$) of least degree $n$. Seeking a contradiction, suppose that $n > 0$. Then $b = \sum_{i=0}^n b_i X^i$, for some $b_i \in R_{\delta_{\sigma(g)}}$. Take $i \in \{ 0,\ldots,n \}$ and $k_i \in b_i(Y)$. Since $b_i \in R_{\delta_{\sigma(g)}}$, we get that the set $b_i^{-1}(k_i)$ is non-empty and $g$-invariant. By $g$-minimality of $Y$, we get that $Y = b_i^{-1}(k_i)$, i.e. $b_i$ is the constant function $k_i$. Thus $b = \sum_{i=0}^n k_i X^i$. From the fact that $b \in Z(D)$, we get that $bk = kb$, for $k \in K$, which in turn implies that $b_i \in Z(K)$, for $i\in \{0,\ldots,n\}$. By looking at the degree $n-1$ coefficient in the relations $br = rb$, for $r \in R$, we get that $\sigma(g) = \id_R$. Since the topology on $Y$ is non-discrete, this contradicts $g$-minimality of $Y$. Thus $n=0$ and it follows that $b=1$. Thus, by Theorem \ref{maintheorem}(b), we get that $Z(D) = Z(R)_{\delta_{\sigma(g)}}[1] = Z(K)$. It is well known that $Z(K)=\mathbb{R}$ for all $K$ except $K = \mathbb{C}$. \end{proof} \begin{prop}\label{propminimal} Suppose that $g : Y \to Y$ is a homeomorphism.
The ring $C(Y)$ is $\delta_{\sigma(g)}$-simple if and only if the action of $g$ on $Y$ is minimal. \end{prop} \begin{proof} The ''if'' statement follows from Proposition \ref{NyPropMinimal}. Now we show the ''only if'' statement. Suppose that $C(Y)$ is $\delta_{\sigma(g)}$-simple. We show that $Y$ is $g$-minimal. Suppose that $Z$ is a closed $g$-invariant subset of $Y$ with $Z \subsetneq Y$. We wish to show that $Z = \emptyset$. To this end, let $I_Z$ denote the set of continuous functions $Y \rightarrow K$ that vanish outside $Z$. It is clear that $I_Z$ is an ideal of $C(Y)$. It is also clear that $I_Z \subsetneq C(Y)$ since all non-zero constant maps belong to $C(Y) \setminus I_Z$. Now we show that $I_Z$ is $\delta_{\sigma(g)}$-invariant. Take $f \in I_Z$ and $x \in Y \setminus Z$. Then $\delta_{\sigma(g)}(f)(x) = \sigma(g)(f)(x) - f(x) = [f \in I_Z \Rightarrow f(x)=0] = \sigma(g)(f)(x) = f(g(x))=0$. The last equality follows since $g(x) \in Y\setminus Z$. Now we prove this. Seeking a contradiction, suppose that $g(x) \in Z$. Then, by the $g$-invariance of $Z$, we get $g^{-1}(Z)=Z$, and $x = g^{-1}(g(x)) \in Z$, which is a contradiction. By $\delta_{\sigma(g)}$-simplicity of $C(Y)$ this implies that $I_Z = \{ 0 \}$. Since $Y$ is compact, it is completely regular. Therefore, we get that $Z = \emptyset$. \end{proof} \begin{thm}\label{theoremdynamics} Suppose that $g : Y \to Y$ is a homeomorphism. The non-associative differential polynomial ring $D = C(Y)[X ; \id_{C(Y)} , \delta_{\sigma(g)}]$ is simple if and only if the action of $g$ on $Y$ is minimal and the topology on $Y$ is non-discrete. In that case, $Z(D) = \mathbb{C}$, if $K = \mathbb{C}$, and $Z(D) = \mathbb{R}$, otherwise. \end{thm} \begin{proof} Put $R = C(Y)$. The ''if'' statement follows from Theorem \ref{NYtheoremdynamics}. Now we show the ''only if'' statement. Suppose that $D$ is simple. By Theorem \ref{maintheorem} and Proposition \ref{propminimal}, it follows that the action of $g$ on $Y$ is minimal. Seeking a contradiction, suppose that the topology on $Y$ is discrete. Since the topology is Hausdorff it follows that $Y$ is a one-element set. Thus $D$ equals the polynomial ring $K[X]$, which is not simple. \end{proof} \section{Associative Coefficients}\label{sectionassociative} In this section, we show that if the ring of coefficients is associative, then we can often obtain simplicity of the differential polynomial ring just from the assumption that the map $\delta$ is not a derivation. \begin{thm}\label{theoremassociative} Suppose that $D = R[X ; \id_R , \delta]$ is a non-associative differential polynomial ring such that $R$ is associative and all positive integers are regular in $R$. If $R$ is $\delta$-simple but $\delta$ is not a derivation, then $D$ is simple. \end{thm} \begin{proof} Let $I$ be a non-zero ideal of $D$. We wish to show that $I=D$. Pick a non-zero element $b \in I$ of least degree $n$. Let $b=\sum_{i=0}^n c_i X^i$, for some $c_0,\ldots,c_n \in R$. By mimicking the proof of Theorem \ref{maintheorem}(a), we can conclude that we may choose $c_n=1$. Seeking a contradiction, suppose that $n > 0$. We claim that $(b,d,e)=0$, for all $d,e \in R$. If we assume that the claim holds, then by extracting the terms of degree $n-1$ from the relation $(b,d,e)=0$ we get that $n d\delta(e) + n\delta(d)e + (c_{n-1}d)e - n\delta(de) - c_{n-1}(de) = 0$. But since $R$ is associative and $n$ is regular, this implies that $d \delta(e) + \delta(d)e = \delta(de)$, which contradicts the fact that $\delta$ is not a derivation.
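(To illustrate the extraction of the degree $n-1$ terms: for $n = 1$, so that $b = X + c_0$, a direct computation using $Xr = \delta(r) + rX$ and the fact that $X \in N_m(D) \cap N_r(D)$ gives $(b,d,e) = d\delta(e) + \delta(d)e + (c_0 d)e - \delta(de) - c_0(de)$, which is exactly the displayed relation for $n=1$.)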
Thus $n=0$ and hence $1 = b \in I$, which in turn implies that $I=D$. Now we show the claim. The degree $n$ part of $(b,d,e)$ equals $(c_n d)e - c_n (de) = (1 \cdot d)e - 1 \cdot (de) = 0$. Thus $(b,d,e)$ has degree less than $n$; since $(b,d,e) \in I$, the minimality of $n$ forces $(b,d,e)=0$. \end{proof} \begin{rem} In the cases when $T$ is associative, i.e., when $K=\mathbb{R}$, $K = \mathbb{C}$ or $K = \mathbb{H}$, Theorem \ref{theoremassociative} can be used to simplify the proofs of Theorem \ref{theoremquantumtorus} and Theorem \ref{theoremdynamics}. \end{rem} \end{document}
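As a supplement to the dynamical-systems results above, the following small Python sketch (an editorial illustration, not part of the original text; the choice of $Y$ as the unit circle, $K=\mathbb{C}$, and $g$ an irrational rotation, the standard example of a minimal action on a compact space, is an assumption made purely for illustration) evaluates the map $\delta_{\sigma(g)}(f)=f\circ g-f$ on a pair of characters and checks that the Leibniz rule fails whenever $g$ is a non-trivial rotation, in line with the remark that $\delta_{\sigma(g)}$ is a classical derivation if and only if $g=\id_Y$.

\begin{verbatim}
# Illustration (assumed setup): Y = unit circle in C, K = C,
# g(z) = e^{i*alpha} z.  delta(f) = f o g - f is the sigma-kernel map
# delta_{sigma(g)}; it obeys the Leibniz rule only when g = id.
import cmath, math

def make_delta(alpha):
    """delta(f)(z) = f(g(z)) - f(z) with g(z) = e^{i*alpha} z."""
    rot = cmath.exp(1j * alpha)
    return lambda f: (lambda z: f(rot * z) - f(z))

def leibniz_defect(alpha, f, h, z):
    """delta(f*h)(z) - (delta(f)(z)*h(z) + f(z)*delta(h)(z))."""
    delta = make_delta(alpha)
    fh = lambda w: f(w) * h(w)
    return delta(fh)(z) - (delta(f)(z) * h(z) + f(z) * delta(h)(z))

f = lambda z: z          # the identity character on the circle
h = lambda z: z ** 2     # another character
z0 = cmath.exp(1j * 0.3)

print(leibniz_defect(0.0, f, h, z0))                     # g = id: defect is 0
print(leibniz_defect(math.sqrt(2) * math.pi, f, h, z0))  # irrational rotation: nonzero
\end{verbatim}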
\begin{document} \articletype{Research Article} \author[1]{Gary McGuire} \author[2]{Daniela Mueller} \affil[1]{School of Mathematics and Statistics, University College Dublin, Ireland} \affil[2]{School of Mathematics and Statistics, University College Dublin, Ireland} \title{Some Results on Linearized Trinomials that Split Completely} \runningtitle{...} \abstract{Linearized polynomials over finite fields have been much studied over the last several decades. Recently there has been a renewed interest in linearized polynomials because of new connections to coding theory and finite geometry. We consider the problem of calculating the rank or nullity of a linearized polynomial $L(x)=\sum_{i=0}^{d}a_i x^{q^i}$ (where $a_i\in \mathbb{F}_{q^n}$) from the coefficients $a_i$. The rank and nullity of $L(x)$ are the rank and nullity of the associated $\mathbb{F}_q$-linear map $\mathbb{F}_{q^n} \longrightarrow \mathbb{F}_{q^n}$. McGuire and Sheekey \cite{MCGUIRE201968} defined a $d\times d$ matrix $A_L$ with the property that $$\mbox{nullity} (L)=\mbox{nullity} (A_L -I).$$ We present some consequences of this result for some trinomials that split completely, i.e., trinomials $L(x)=x^{q^d}-bx^q-ax$ that have nullity $d$. We give a full characterization of these trinomials for $n\le d^2-d+1$.} \keywords{Linearized Polynomials, Finite Field, ECDLP, elliptic curves, cryptography} \received{...} \accepted{...} \journalname{...} \journalyear{...} \journalvolume{..} \journalissue{..} \startpage{1} \aop \DOI{...} \maketitle \section{Introduction} Let $\mathbb{F}_{q^n}$ be the finite field with $q^n$ elements, where $q$ is a prime power. Let $$L(x)=a_0x+a_1x^q+a_2 x^{q^2}+\cdots+a_d x^{q^d}$$ be a $q$-linearized polynomial with coefficients in $\mathbb{F}_{q^n}$. The roots of $L(x)$ that lie in the field $\mathbb{F}_{q^n}$ form an $\mathbb{F}_q$-vector space, which can have dimension anywhere between 0 and $d$. The dimension of the space of roots of $L$ that lie in $\mathbb{F}_{q^n}$ is equal to the nullity of $L$ considered as an $\mathbb{F}_q$-linear map from $\mathbb{F}_{q^n}$ to $\mathbb{F}_{q^n}$. McGuire and Sheekey \cite{MCGUIRE201968} defined a $d\times d$ matrix $A_L$ with the property that $$\mbox{nullity} (L)=\mbox{nullity} (A_L -I_d).$$ The entries of $A_L$ can be computed directly from the coefficients of $L$. In this paper we focus on the case of largest possible nullity, i.e., the case that $L(x)$ has all its roots in $\mathbb{F}_{q^n}$. In this case, $\mbox{nullity}(L)=d$, and so $A_L-I_d$ has rank 0 and is therefore the zero matrix. Thus we will be studying when $A_L=I_d$. This case of largest possible nullity was also obtained in \cite{CMPZu}. We also restrict to trinomials. When computing the rank or nullity, we may assume without loss of generality that $L(x)$ is monic. We will study polynomials of the form $$L(x)=x^{q^d}-bx^q-ax\in\mathbb{F}_{q^n}[x]$$ where $q$ is a prime power and $n\geq1$. We want to find $a,b\in\mathbb{F}_{q^n}$ such that $L$ splits completely over $\mathbb{F}_{q^n}$, i.e., $L$ has $q^d$ roots in $\mathbb{F}_{q^n}$. Thus, the problem becomes finding $a,b\in\mathbb{F}_{q^n}$ such that $A_L=I_d$. We will provide a full characterization of this situation for $n\le d(d-1)+1$. Our results are summarized and stated in the following theorem. \begin{Theorem}\label{all} \begin{enumerate} \item If $n\leq(d-1)d$ and $d$ does not divide $n$, then there is no polynomial $L=x^{q^d}-bx^q-ax$ with $a,b\in\mathbb{F}_{q^n}$ that splits completely over $\mathbb{F}_{q^n}$. 
\item Let $n=id$ with $i\in\{1,\dots,d-1\}$. Let $L=x^{q^d}-bx^q-ax\in\mathbb{F}_{q^n}[x]$. Then $L$ has $q^d$ roots in $\mathbb{F}_{q^n}$ if and only if $a^{1+q^d+\dots+q^{(i-1)d}}=1$ and $b=0$. \item Let $n=(d-1)d+1$. Let $L=x^{q^d}-bx^q-ax\in\mathbb{F}_{q^n}[x]$. Then $L$ has $q^d$ roots in $\mathbb{F}_{q^n}$ if and only if all the following hold: \hskip1in $\bullet$ $N(a)=(-1)^{d-1}$ \hskip1in $\bullet$ $b=-a^{qe_1}$ where $e_1=\sum_{i=0}^{d-1} q^{id}$ \hskip1in $\bullet$ $d-1$ is a power of the characteristic of $\mathbb{F}_{q^n}$\\ where $N(a)=a^{1+q+\dots+q^{(d-1)d}}=a^{(q^n-1)/(q-1)}$. \end{enumerate} \end{Theorem} We will prove part 1 in Section \ref{nless}, part 2 in Section \ref{ddivn} and part 3 in Sections \ref{nexact} and \ref{converse}. In Section \ref{crypt} we present a possible application to elliptic curve cryptography. Our result generalizes a result of Csajbok et al.\ \cite{CMPZh} which states that $a_0x+a_1x^q+a_3x^{q^3}$ (where $a_i\in \mathbb{F}_{q^7}$) cannot have $q^3$ roots in $\mathbb{F}_{q^7}$ if $q$ is odd. This is the $d=3$ case of our theorem. Also in that paper, the authors give one example of a trinomial that does split completely when $d=3$, $n=7$, and $q=2$. Our theorem characterizes fully the trinomials that split completely and allows us to count their number (for each nonzero $a$ of norm 1 there is one polynomial, so there are $\frac{q^n-1}{q-1}$ such trinomials). One can trivially obtain some results by taking $q$-th powers. For example, when $n=2d-2$, the trinomial $x^{q^d}-bx^q-ax$ cannot have $q^d$ roots in $\mathbb{F}_{q^n}$. This follows by taking the $q^{d-2}$-th power of the trinomial. Our theorem extends this to a larger range of values of $n$. One recent application of calculating the rank of linearized polynomials concerns rank metric codes and MRD codes, see \cite{Sheekey}. In particular, we would obtain an $\mathbb{F}_{q^n}$-linear MRD code from a space of linearized polynomials of dimension $kn$ over $\mathbb{F}_q$, with the property that every nonzero element has rank at least $n-k+1$. For example, in the case $k=3$, we would obtain an MRD code from the set of all trinomials $cx^{q^d}-bx^q-ax$ ($a,b,c\in\mathbb{F}_{q^n}$) if all of them have nullity 0 or 1 or 2. Finally, we set the scene for our results. We are seeking $n\geq1$ and $a,b\in\mathbb{F}_{q^n}$ such that $L=x^{q^d}-bx^q-ax$ splits over $\mathbb{F}_{q^n}$. The companion matrix $C_L$ of $L(x)=x^{q^d}-bx^q-ax$ as defined in \cite{MCGUIRE201968} is the $d\times d$ matrix $$ \begin{pmatrix} 0 & 0 & \ldots & 0 & a\\ 1 & 0 & \ldots & 0 & b\\ 0 & 1 & \ldots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \ldots & 1 & 0\\ \end{pmatrix}. $$ We define $A_L=A_{L,n}=C_LC_L^q\cdots C_L^{q^{n-1}}$, where $C_L^q$ means the matrix obtained by raising every entry of $C_L$ to the power of $q$. As stated above, $L$ splits completely over $\mathbb{F}_{q^n}$ if and only if $A_L=I_d$. \section{Fixed $d$ not dividing $n$ and $n\leq(d-1)d$}\label{nless} In this section we will prove the first part of Theorem \ref{all}. \begin{Theorem}\label{small_n} If $n\leq(d-1)d$ and $d$ does not divide $n$, then there is no polynomial $L=x^{q^d}-bx^q-ax$ with $a,b\in\mathbb{F}_{q^n}$ that splits completely over $\mathbb{F}_{q^n}$. \end{Theorem} \begin{proof} We will write $A_{n}$ instead of $A_{L,n}$ as $L$ is fixed throughout the proof. If $n=1$ then $A_{1}=C_L\neq I_d$. Indeed, if $n\leq d-1$ then the $(1,1)$ entry of $A_{n}$ is $0$, so $A_{n}\neq I_d$. Note that $A_{n}=A_{n-1}C_L^{q^{n-1}}$.
But the first column of $C_L^{q^{n-1}}$ is $\begin{pmatrix}0 & 1 & 0 & \dots & 0 \end{pmatrix}^T$. Thus, the $(1,1)$ entry of $A_{n}$ is the $(1,2)$ entry of $A_{n-1}$. If $n\geq d$ then the $(1,1)$ entry of $A_{n}$ is also the $(1,d)$ entry of $A_{n-d+1}$. Let $M_k$ denote the $(1,d)$ entry of $A_{k}$. Then $M_1=a$, and $M_k=0$ for $k=2,\dots,d-1$, since $M_k=\begin{pmatrix} 0 & \dots & 0 & M_1 & \dots & M_{k-1} \end{pmatrix}\cdot\begin{pmatrix} a^{q^{k-1}} & b^{q^{k-1}} & 0 & \dots & 0 \end{pmatrix}^T$. Set $M_0=0$. Then for $k\geq d$, we have a recursive formula, which follows directly from matrix multiplication: \begin{equation}\label{recursion} M_k=M_{k-d}a^{q^{k-1}}+M_{k-d+1}b^{q^{k-1}}. \end{equation} \underline{Claim:} $M_j=0$ for $j=id+2,\dots,(i+1)d-(i+1)$ and $i=0,\dots,d-3$. \underline{Proof of Claim:} We prove the claim by induction on $i$. The base case $i=0$ was done above. Note that if $M_{k-d}=M_{k-d+1}=0$ then $M_k=0$. So if the claim is true for $i$, then we have $M_{id+2+d}=0,\dots,M_{(i+1)d-(i+1)+d-1}=0$, i.e. the claim is true for $i+1$. This completes the proof of the claim. Note that when $i=d-3$, then $id+2=(i+1)d-(i+1)$, so the claim is not true for $i=d-2$. For the remaining $n$ not divisible by $d$, we will show that the $(1,1)$ entry of $A_{n}$ cannot be 1 if the $(1,j)$ entry is 0 for some $j\in\{2,\dots,d\}$, and thus $A_{n}$ cannot be the identity matrix. Note that the $(1,j)$ entry of $A_{n}$ is $M_{n-d+j}$. For $i=1,\dots,d-2$, we have $M_{(i-1)d+2}=0$ and thus $$M_{id+1}=M_{(i-1)d+1}a^{q^{id}}.$$ Since $M_1=a$, we have $M_{id+1}=a^{1+q^d+\dots+q^{id}}$. But $M_{id+1}$ is the $(1,(i+1)d+1-n)$ entry of $A_{n}$ for $n=id+1,\dots,(i+1)d-1$. If $A_{n}=I_d$ then the $(1,(i+1)d+1-n)$ entry must be $0$, so we must have $M_{id+1}=a^{1+q^d+\dots+q^{id}}=0$, and thus $a=0$. Recall that the $(1,1)$ entry of $A_{n}$ is $M_{n-d+1}$. But $M_{n-d+1}$ must be either 0 or a power of $a$, since all initial values $M_0,\dots,M_{d-1}$ of the recursive formula are either $a$ or $0$. Therefore, if $a=0$, we have $M_{n-d+1}=0$ and so $A_{n}\neq I_d$. \end{proof} \begin{Remark}\label{A_id} The proof is not valid when $d$ divides $n$. If $n=id$ with $i\in\{1,\dots,d-1\}$, the $(1,1)$ entry of $A_{id}$ is $M_{(i-1)d+1}=a^{1+q^d+\dots+q^{(i-1)d}}$ for $i\geq1$, and so we have the equation $a^{1+q^d+\dots+q^{(i-1)d}}=1$ and cannot deduce that $a=0$. \end{Remark} The recursive formula \eqref{recursion} established in the proof of Theorem \ref{small_n} is valid in greater generality: Set $M_{l,l-d}=1$, and $M_{l,k}=0$ for $k\leq0$ and $k\ne l-d$. For $1\leq l\leq d$ and $k\geq1$, let \begin{equation}\label{allrecursion} M_{l,k}=M_{l,k-d}a^{q^{k-1}}+M_{l,k-d+1}b^{q^{k-1}}. \end{equation} Then $M_{l,k}$ is the $(l,d)$ entry of $A_{L,k}$. Furthermore, the $(l,j)$ entry of $A_{L,k}$ is $M_{l,k-d+j}$. \section{Fixed $d$ dividing $n$ and $n\leq(d-1)d$}\label{ddivn} In the case that $d$ divides $n$, we have a solution, namely $a=1$ and $b=0$, i.e., the polynomial $x^{q^d}-x$ splits completely because $\mathbb{F}_{q^n}$ has a subfield $\mathbb{F}_{q^d}$. We now characterize exactly which polynomials split completely. \begin{Theorem}\label{d divides n} Let $n=id$ with $i\in\{1,\dots,d-1\}$. Let $L=x^{q^d}-bx^q-ax\in\mathbb{F}_{q^n}[x]$. Then $L$ has $q^d$ roots in $\mathbb{F}_{q^n}$ if and only if $a^{1+q^d+\dots+q^{(i-1)d}}=1$ and $b=0$. \end{Theorem} \begin{proof} By Remark \ref{A_id}, if $L$ splits completely, we have $a^{1+q^d+\dots+q^{(i-1)d}}=1$. Now the $(1,d+1-i)$ entry of $A_{id}$ is $M_{1,i(d-1)+1}$.
For $i=1$, this is $M_{1,d}=ab^{q^{d-1}}$. For $i\geq2$, we have $M_{1,i(d-1)+1}=M_{1,(i-1)d-(i-1)}a^{q^{i(d-1)}}+M_{1,(i-1)(d-1)+1}b^{q^{i(d-1)}}$. But by the claim in the proof of Theorem \ref{small_n}, we have $M_{1,(i-1)d-(i-1)}=0$ for $i=2,\dots,d-1$. Thus \begin{align*} M_{1,i(d-1)+1}&=M_{1,(i-1)(d-1)+1}b^{q^{i(d-1)}}\\ &=M_{1,(i-2)(d-1)+1}b^{q^{(i-1)(d-1)}+q^{i(d-1)}}\\ &=\dots\\ &=ab^{q^{d-1}+q^{2(d-1)}+\cdots+q^{i(d-1)}}. \end{align*} But if $A_{id}=I_d$ then $M_{1,i(d-1)+1}=0$, and since $a\neq0$, we must have $b=0$. To show the converse, assume that $a^{1+q^d+\dots+q^{(i-1)d}}=1$ and $b=0$. Then the $(l,1)$ entry of $A_{id}$ is \begin{align*} M_{l,(i-1)d+1}&=M_{l,(i-2)d+1}a^{q^{(i-1)d}}\\ &=\dots\\ &=M_{l,1-d}a^{1+q^d+\cdots+q^{(i-1)d}}\\ &=M_{l,1-d}\\ &= \begin{cases} 1 \text{ for } l=1,\\ 0 \text{ for } l=2,\dots,d. \end{cases} \end{align*} By \cite[Corollary 3.2]{CMPZu}, this implies that $A_{id}=I_d$. \end{proof} \section{Fixed $d$ and $n=(d-1)d+1$}\label{nexact} In this section we will prove some preliminary results which are part of the proof of Theorem \ref{all} part 3. \subsection{Assuming $L$ splits completely} If $n=(d-1)d+1$, then, the $(1,j)$ entry of $A_{L,n}$ is $M_{1,(d-2)d+j+1}$ (where $j=1,\dots,d$). So to get $A_{L,n}=I_d$, the following system of equations has to be satisfied for $l=1,\dots,d$ \begin{equation}\label{system1} \begin{cases} M_{l,(d-2)d+l+1}=1\\ M_{l,(d-2)d+j+1}=0 \text{ for } j=1,\dots,l-1,l+1,\dots,d \end{cases} \end{equation} \begin{Lemma}\label{a_11} $\displaystyle{M_{1,(d-2)d+2}=ab^{e_2}}$ where $e_2=\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1}$. \end{Lemma} \begin{proof} By the recursive formula \eqref{allrecursion}, $M_{1,(d-2)d+2}=M_{1,(d-3)d+2}a^{q^{(d-2)d+1}}+M_{1,(d-3)d+3}b^{q^{(d-2)d+1}}$. But it follows from the claim in the proof of Theorem \ref{small_n} that $M_{1,(d-j-1)d+j}=0$ for $j=2,\dots,d-1$. Thus \begin{align*} M_{1,(d-2)d+2}&=M_{1,(d-3)d+3}b^{q^{(d-2)d+1}}= M_{1,(d-4)d+4}b^{q^{(d-2)d+1}+q^{(d-3)d+2}}=\dots \\ &=M_{1,d} b^{q^{(d-2)d+1}+q^{(d-3)d+2}+\dots+q^{2d-2}}\\ &=ab^{q^{(d-2)d+1}+q^{(d-3)d+2}+\dots+q^{2d-2}+q^{d-1}}\\ &=ab^{\sum_{i=1}^{d-1}q^{i(d-1)}}\\ &=ab^{e_2}. \end{align*} \end{proof} \begin{Lemma}\label{a_1d} $M_{1,(d-1)d+1}=a^{e_1}+ab^{e_2+q^{(d-1)d}}$ where $e_1=\frac{q^{d^2}-1}{q^d-1}$ and $e_2=\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1}$. \end{Lemma} \begin{proof} By the recursive formula \eqref{allrecursion}, \[ M_{1,(d-1)d+1}=M_{1,(d-2)d+1}a^{q^{(d-1)d}}+M_{1,(d-2)d+2}b^{q^{(d-1)d}}. \] By Lemma \ref{a_11}, $M_{1,(d-2)d+2}=ab^{e_2}$. Also $M_{1,(d-2)d+1}=a^{1+q^d+\dots+q^{(d-2)d}}$ as established in the proof of Theorem \ref{small_n}. Therefore \begin{align*} M_{1,(d-1)d+1}&=a^{\sum_{i=0}^{d-1}q^{id}}+ab^{e_2+q^{(d-1)d}}\\ &=a^{e_1}+ab^{e_2+q^{(d-1)d}}. \end{align*} \end{proof} \begin{Theorem}\label{direction1} Let $n=(d-1)d+1$. Let $L=x^{q^d}-bx^q-ax\in\mathbb{F}_{q^n}[x]$. If $L$ has $q^d$ roots in $\mathbb{F}_{q^n}$ then \begin{enumerate} \item $a^{1+q+\dots+q^{(d-1)d}}=(-1)^{d-1} \mbox{ and } $ \item $a^{1+q e_1 e_2}=(-1)^{d-1} \mbox{ and }$ \item $b=-a^{qe_1},$ \end{enumerate} where $e_1=\frac{q^{d^2}-1}{q^d-1}$ and $e_2=\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1}$. \end{Theorem} \begin{proof} If $A_{L,n}=I_d$, then equation \eqref{system1} has to be satisfied. By Lemma \ref{a_11}, we have $ab^{e_2}=1$ (the $(1,1)$ entry of $A_{L,n}$), and by Lemma \ref{a_1d}, we have $a^{e_1}+ab^{e_2+q^{(d-1)d}}=0$ (the $(1,d)$ entry of $A_{L,n}$). 
But if $ab^{e_2}=1$, then $a^{e_1}+ab^{e_2+q^{(d-1)d}}=a^{e_1}+b^{q^{(d-1)d}}$, and thus we have $b^{q^{(d-1)d}}=-a^{e_1}$. Raising both sides to the power of $q$ gives us $b^{q^n}=(-1)^q a^{qe_1}$. Since $q$ is a prime power, $(-1)^q=-1$ in $\mathbb{F}_{q^n}$. Thus, $b=-a^{qe_1}$ which proves the third conclusion. Lemma \ref{a_11} says $ab^{e_2}=1$ which now implies $$a^{-1}=b^{e_2}=(-a^{qe_1})^{e_2}=(-1)^{e_2}a^{q e_1 e_2},$$ and so $a^{1+q e_1 e_2}=(-1)^{e_2}$. (Note that $a\ne0$ since $ab^{e_2}=1$.) Recall that $e_2=\sum_{i=1}^{d-1}q^{i(d-1)}$. So if $q$ is even, then $e_2$ is even. If $q$ is odd, then $q^{i(d-1)}$ is odd for all $i=1,\dots,d-1$. So if $d-1$ is even, then $e_2$ is an even sum of odd numbers and thus even, and if $d-1$ is odd, then $e_2$ is an odd sum of odd numbers and thus odd. Thus, $(-1)^{e_2}=(-1)^{q(d-1)}$. Since $(-1)^q=-1$ in $\mathbb{F}_{q^n}$ we have $(-1)^{e_2}=(-1)^{d-1}$. By \cite[Corollary 1]{MCGUIRE201968}, if $L$ splits, then $N(-a)=(-1)^{nd}N(1)$, where $N$ is the norm function over $\mathbb{F}_{q^n}$. So we have the additional condition $N(a)=(-1)^{n(d-1)}$ or $a^{\frac{q^n-1}{q-1}}=(-1)^{n(d-1)}$. But $n=(d-1)d+1$, so $n$ is always odd. Consequently, $(-1)^{n(d-1)}=(-1)^{d-1}$. Hence, $a$ satisfies the equations \begin{equation}\label{system2} \begin{cases} a^{1+q e_1 e_2}=(-1)^{d-1}\\ a^{\frac{q^n-1}{q-1}}=(-1)^{d-1}. \end{cases} \end{equation} \end{proof} In the next section we will show that conclusion 1 of this theorem actually implies conclusion 2. \subsection{GCD of $x^k\pm1$ and $x^l\pm1$} The GCD of $x^k-1$ and $x^l-1$ is well known to be $x^{\gcd(k,l)}-1$, but we are interested in the GCD of $x^k+1$ and $x^l+1$. The following is surely well known, but we include a proof. \begin{Theorem}\label{gcd} The GCD of $x^k+1$ and $x^l+1$ is $x^{\gcd(k,l)}+1$ if $\frac{k}{\gcd(k,l)}$ and $\frac{l}{\gcd(k,l)}$ are both odd, and $1$ otherwise. \end{Theorem} \begin{proof} Let $d=\gcd(k,l)$ and let $s,t$ be B\'{e}zout Coefficients for $k$ and $l$, i.e. $sk+tl=d$. Let $g=\gcd(x^k+1,x^l+1)$. Then $x^k\equiv-1 \Mod{g}$ and $x^l\equiv-1 \Mod{g}$. Thus $x^{sk+tl}\equiv(-1)^{s+t} \Mod{g}$. So $g$ divides $x^{sk+tl}-(-1)^{s+t}=x^d-(-1)^{s+t}$. We need to check if $x^d-(-1)^{s+t}$ divides $x^k+1$ and $x^l+1$. Let $e=\frac{k}{d}$ and $f=\frac{l}{d}$. Then $x^k+1=x^{ed}+1=((-1)^{s+t})^e+1 \Mod{x^d-(-1)^{s+t}}$ and similarly, $x^l+1=((-1)^{s+t})^f+1 \Mod{x^d-(-1)^{s+t}}$. So we need to have $(-1)^{(s+t)e}+1=0$ and $(-1)^{(s+t)f}+1=0$, i.e. $e,f,s+t$ all need to be odd. But $sk+tl=d$ implies $se+tf=1$, so $e,f$ odd implies $s+t$ odd. Thus if $e,f$ are odd, then $g=x^d-(-1)^{s+t}=x^d+1$. \end{proof} \begin{Remark} Similarly, one can show that $\gcd(x^k-1,x^l+1)=x^{\gcd(k,l)}+1$ if $\frac{k}{\gcd(k,l)}$ is even and $\frac{l}{\gcd(k,l)}$ is odd. \end{Remark} \begin{Lemma}\label{expos} Let $n=(d-1)d+1$ and let $e_1=\frac{q^{d^2}-1}{q^d-1}$ and $e_2=\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1}$. Then \[ \gcd(1+qe_1e_2,\frac{q^n-1}{q-1})=\frac{q^n-1}{q-1}. \] \end{Lemma} \begin{proof} We first show that $\frac{q^n-1}{q-1}=1+q(\frac{q^{d^2}-1}{q^d-1})(\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1}) \Mod{q^n-1}$. Recall that $\frac{q^n-1}{q-1}=\sum_{i=0}^{n-1}q^i=1+q+q^2+\dots+q^{n-1}$ and $n=d^2-d+1$. Then \begin{align*} 1+q(\frac{q^{d^2}-1}{q^d-1})(\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1})&=1+q(\sum_{i=0}^{d-1}q^{id})(\sum_{j=1}^{d-1}q^{j(d-1)})\\ &=1+\sum_{i=0}^{d-1}\sum_{j=1}^{d-1}q^{id+j(d-1)+1}. 
\end{align*} We claim that $id+j(d-1)+1 \Mod{n}$ with $i=0,\dots,d-1$ and $j=1,\dots,d-1$ gives us exactly the numbers $\{1,\dots,n-1\}$. Assuming the truth of this claim, $1+q(\frac{q^{d^2}-1}{q^d-1})(\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1}) \Mod{q^n-1}=1+q+q^2+\dots+q^{n-1}=\frac{q^n-1}{q-1}$. Since $\frac{q^n-1}{q-1}$ divides $q^n-1$, $\frac{q^n-1}{q-1}$ divides $1+q(\frac{q^{d^2}-1}{q^d-1})(\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1})$ and thus $\gcd(1+q(\frac{q^{d^2}-1}{q^d-1})(\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1}),\frac{q^n-1}{q-1})=\frac{q^n-1}{q-1}$ and the result is proved. It remains to prove the claim. To see this, we will show that the sets $$\{(i+j)d-(j-1)\mid i=0,\dots,d-1; j=1,\dots,d-1\}$$ and $$\{kd-m\mid m=0,\dots,d-2; k=m+1,\dots,m+d\}$$ are equal, and it is easy to see that all values in the second set are distinct. Fixing $j$ and varying $i=0,\dots,d-1$ gives us the numbers $$jd-(j-1),(j+1)d-(j-1),\dots,(j+d-1)d-(j-1).$$ When $i+j\leq d-1$ then $(i+j)d-(j-1)\leq n$ and all these numbers are of the form $kd-m$ where $m\in\{0,\dots,d-2\}$ and $m<k\leq d-1$.\\ When $i+j\geq d$, then $(i+j)d-(j-1)>n$ and we subtract $n$ to get $(i+j-d+1)d-j$. Now $i+j-d+1\leq j$ since $i\leq d-1$, and thus $(i+j-d+1)d-j$ is not of the above form $kd-m$ with $m<k\leq d-1$. \end{proof} \begin{Corollary} Let $n=(d-1)d+1$ and let $e_1=\frac{q^{d^2}-1}{q^d-1}$ and $e_2=\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1}$. Then \[ \gcd(x^{1+qe_1e_2}+(-1)^d,x^{\frac{q^n-1}{q-1}}+(-1)^d)=x^{\frac{q^n-1}{q-1}}+(-1)^d. \] \end{Corollary} \begin{proof} If $q$ is even, then both $1+q e_1 e_2$ and $\frac{q^n-1}{q-1}$ are odd. Recall that $\frac{q^n-1}{q-1}=\sum_{i=0}^{n-1}q^i$. So if $q$ is odd, then $\frac{q^n-1}{q-1}$ is odd if $n$ is odd, and even if $n$ is even. But $n=(d-1)d+1$ is always odd, so $\frac{q^n-1}{q-1}$ is odd. We have already established in the proof of Theorem \ref{direction1} that if $q$ is odd, then $e_2$ is odd if $d-1$ is odd, and even if $d-1$ is even. Now $e_1=\sum_{i=0}^{d-1}q^{id}$ is odd if $q$ and $d$ are odd and even if $q$ is odd but $d$ is even. But either $d$ or $d-1$ is always even, so $e_1e_2$ is even. Thus $1+q e_1 e_2$ is odd. Consequently, by Theorem \ref{gcd} $$\gcd(x^{1+q e_1 e_2}+(-1)^d,x^{\frac{q^n-1}{q-1}}+(-1)^d)=x^{\gcd(1+q e_1 e_2,\frac{q^n-1}{q-1})}+(-1)^d$$ for any $q,d$. \end{proof} \begin{Corollary}\label{direction1_2} In the conclusions of Theorem \ref{direction1}, conclusion 1 implies conclusion 2. \end{Corollary} \section{The Main Result}\label{converse} In this section, we will prove the third part of the main theorem as stated in the introduction. The following Lemma is surely well known but we include a short proof. \begin{Lemma}\label{binomodp} $\binom{n}{i}=0 \Mod{p}$ for all $i=1,2,\dots,n-1$ if and only if $n$ is a power of $p$. \end{Lemma} \begin{proof} If $n$ is a power of $p$, then the above binomial coefficients are divisible by $p$. On the other hand, we claim that if $n=p^kw$, where $p\nmid w$, $w>1$ and $k\geq0$, then $\binom{p^kw}{p^k}$ is not divisible by $p$. First note that $\binom{n}{m}=\frac{n!}{m!(n-m)!}=\frac{\prod_{i=1}^n i}{(\prod_{i=1}^m i)(\prod_{i=1}^{n-m}i)}=\frac{\prod_{i=n-m+1}^n i}{\prod_{i=1}^m i}=\frac{\prod_{i=0}^{m-1}(n-i)}{\prod_{i=1}^m i}=\frac{n}{m}\prod_{i=1}^{m-1}\frac{n-i}{i}$. Thus $\binom{p^kw}{p^k}=w\prod_{i=1}^{p^k-1}\frac{p^kw-i}{i}$. Now write $i=lp^j$ with $p\nmid l$. Then $\frac{p^kw-i}{i}=\frac{p^kw-lp^j}{lp^j}=\frac{(p^{k-j}w-l)p^j}{lp^j}=\frac{p^{k-j}w-l}{l}$ which is not divisible by $p$. 
\end{proof} Finally, we present the last part of the proof of the main theorem. \begin{Theorem}\label{gen_ab} Let $n=(d-1)d+1$ and $e_1=\frac{q^{d^2}-1}{q^d-1}$. Let $L=x^{q^d}-bx^q-ax\in\mathbb{F}_{q^n}[x]$. Then $L$ has $q^d$ roots in $\mathbb{F}_{q^n}$ if and only if each of the following holds: \begin{enumerate} \item $a^{1+q+\dots+q^{(d-1)d}}=(-1)^{d-1}$ \item $b=-a^{qe_1}$ \item $d-1$ is a power of the characteristic of $\mathbb{F}_{q^n}$. \end{enumerate} \end{Theorem} \begin{proof} Recall that the $(l,1)$ entry of $A_{L,n}$ is $M_{l,n-d+1}$. We will first show that \[ M_{l,n-d+1}=\begin{cases} 1 \text{ for } l=1,\\ 0 \text{ for } l=2,\dots,d \end{cases} \] whenever the three conditions of the theorem are fulfilled. By \cite[Corollary 3.2]{CMPZu}, this implies that $A_{L,n}=I_d$.\\ Let $k\geq d+1$. By the recursion \eqref{allrecursion}, \begin{align} M_{l,k}&=M_{l,k-d}a^{q^{k-1}}+M_{l,k-d+1}b^{q^{k-1}}\nonumber \\ &=(M_{l,k-2d}a^{q^{k-d-1}}+M_{l,k-2d+1}b^{q^{k-d-1}})a^{q^{k-1}}+\nonumber \\ &\qquad (M_{l,k-2d+1}a^{q^{k-d}}+M_{l,k-2d+2}b^{q^{k-d}})b^{q^{k-1}}\nonumber \\ &=M_{l,k-2d}a^{q^{k-d-1}+q^{k-1}}+M_{l,k-2d+1}(a^{q^{k-1}}b^{q^{k-d-1}}+a^{q^{k-d}}b^{q^{k-1}}) \nonumber \\ &\qquad +M_{l,k-2d+2}b^{q^{k-d}+q^{k-1}}.\label{hhh} \end{align} Since $b=-a^{qe_1}=-a^{1+\sum_{i=0}^{d-2} q^{id+1}} \Mod{a^{q^n}-a}$ (condition 2 in the statement of the theorem) we have \begin{align*} a^{q^{k-1}}b^{q^{k-d-1}} &=-a^{q^{k-1}+q^{k-d-1}(1+\sum_{i=0}^{d-2} q^{id+1})}\\ &=-a^{q^{k-1}+q^{k-d}+q^{k-d-1}+\sum_{i=1}^{d-2} q^{k+(i-1)d}}\\ &=-a^{q^{k-1}+q^{k-d}+q^{k-d-1}+\sum_{i=0}^{d-3} q^{k+id}}\\ &=-a^{q^{k-1}+q^{k-d}+q^{k+(d-2)d}+\sum_{i=0}^{d-3} q^{k+id}} \Mod{a^{q^n}-a}\\ &=-a^{q^{k-d}+q^{k-1}(1+\sum_{i=0}^{d-2} q^{id+1})}\\ &=a^{q^{k-d}}b^{q^{k-1}}. \end{align*} So the coefficient of $M_{l,k-2d+1}$ in \eqref{hhh} that comes from expanding $M_{l,k-d}$ is the same as the coefficient that comes from expanding $M_{l,k-d+1}$. \begin{center} \Tree [.$M_{l,k}$ [.$M_{l,k-d}$ [.$M_{l,k-2d}$ ] $M_{l,k-2d+1}$ ][.$M_{l,k-d+1}$ $M_{l,k-2d+1}$ $M_{l,k-2d+2}$ ] ] \end{center} Let $c_{2,0}=a^{q^{k-1}+q^{k-d-1}}$, $c_{2,1}=a^{q^{k-d}}b^{q^{k-1}}$, and $c_{2,2}=b^{q^{k-1}+q^{k-d}}$. Thus \eqref{hhh} is saying that $$M_{l,k}=c_{2,0}M_{l,k-2d}+2c_{2,1}M_{l,k-2d+1}+c_{2,2}M_{l,k-2d+2}.$$ One can see Pascal's triangle emerging. We claim that $$M_{l,k}=\sum_{i=0}^{j} \binom{j}{i} c_{j,i}M_{l,k-jd+i}$$ for all $j=0,\dots,\lfloor\frac{k-1}{d}+1\rfloor$, where $c_{j,i}$ are expressions in $a$ and $b$, determined by the following recursion: \[ c_{j,i} = \begin{cases} 1 & \text{for } j=i=0, \\ c_{j-1,0}a^{q^{k-(j-1)d-1}} & \text{for } i=0, \\ c_{j-1,i}a^{q^{k-(j-1)d+i-1}}=c_{j-1,i-1}b^{q^{k-(j-1)d+i-2}} & \text{for } 0<i<j,\\ c_{j-1,j-1}b^{q^{k-(j-1)d+j-2}} & \text{for } i=j. \end{cases} \] We have shown the statement for $j=2$. Assume that the statement is true for any index less than $j$. Then \begin{align*} M_{l,k}&=\sum_{i=0}^{j-1} \binom{j-1}{i} c_{j-1,i}M_{l,k-(j-1)d+i}\\ &=\sum_{i=0}^{j-1} \binom{j-1}{i} c_{j-1,i}(M_{l,k-jd+i}a^{q^{k-(j-1)d+i-1}}+M_{l,k-jd+i+1}b^{q^{k-(j-1)d+i-1}})\\ &=\binom{j-1}{0}c_{j-1,0}a^{q^{k-(j-1)d-1}}M_{l,k-jd}+\binom{j-1}{j-1}c_{j-1,j-1}b^{q^{k-(j-1)d+j-2}}M_{l,k-jd+j}\\ &+\sum_{i=1}^{j-1}M_{l,k-jd+i}\left(\binom{j-1}{i-1}c_{j-1,i-1}b^{q^{k-(j-1)d+i-2}}+\binom{j-1}{i}c_{j-1,i}a^{q^{k-(j-1)d+i-1}}\right). \end{align*} Let $m=k-(j-2)d+i-1$. Then $a^{q^{m-1}}b^{q^{m-d-1}}=a^{q^{m-d}}b^{q^{m-1}}$, i.e. 
\begin{align*} a^{q^{k-(j-2)d+i-2}}b^{q^{k-(j-1)d+i-2}}=a^{q^{k-(j-1)d+i-1}}b^{q^{k-(j-2)d+i-2}} \end{align*} and hence \begin{align}\label{c_m} c_{j-2,i-1}a^{q^{k-(j-2)d+i-2}}b^{q^{k-(j-1)d+i-2}}=c_{j-2,i-1}a^{q^{k-(j-1)d+i-1}}b^{q^{k-(j-2)d+i-2}}. \end{align} Then $$c_{j-1,i-1}b^{q^{k-(j-1)d+i-2}}=c_{j-2,i-1}a^{q^{k-(j-2)d+i-2}}b^{q^{k-(j-1)d+i-2}}$$ and $$c_{j-1,i}a^{q^{k-(j-1)d+i-1}}=c_{j-2,i-1}b^{q^{k-(j-2)d+i-2}}a^{q^{k-(j-1)d+i-1}}$$ and by \eqref{c_m}, these two expressions are equal.\\ Thus \begin{align*} M_{l,k}&=\binom{j}{0}c_{j-1,0}a^{q^{k-(j-1)d-1}}M_{l,k-jd}+\binom{j}{j}c_{j-1,j-1}b^{q^{k-(j-1)d+j-2}}M_{l,k-jd+j}\\ &+\sum_{i=1}^{j-1}\binom{j}{i}c_{j-1,i}a^{q^{k-(j-1)d+i-1}}M_{l,k-jd+i} \end{align*} as desired. This completes the proof of the claim. We now have \begin{align*} M_{l,n-d+1}&=M_{l,(d-2)d+2}=\sum_{i=0}^{d-1} \binom{d-1}{i} c_{d-1,i}M_{l,2-d+i}\\ &=\sum_{i=0}^{d-2} \binom{d-1}{i} c_{d-1,i}M_{l,2-d+i}+c_{d-1,d-1}M_{l,1}\\ &=\begin{cases} c_{d-1,d-1}M_{l,1} & \text{for } l=1, \\ \binom{d-1}{l-2} c_{d-1,l-2}M_{l,l-d}+c_{d-1,d-1}M_{l,1} & \text{for } l\geq2\\ \end{cases} \end{align*} since $M_{l,2-d+i}=0$ when $i\neq l-2$. As before, let $e_1=\frac{q^{d^2}-1}{q^d-1}$ and $e_2=\frac{q^{(d-1)d}-q^{d-1}}{q^{d-1}-1}$. Then \begin{align*} c_{d-1,d-1}&=b^{q^{k-1}+q^{k-d}+q^{k-2d+1}+\dots+q^{k-(d-2)d+d-3}}\\ &=b^{q^{(d-2)d+1}+q^{(d-3)d+2}+\dots+q^{d-1}}\text{ for }k=(d-2)d+2\\ &=b^{q^{d-1}+q^{2d-2}+\dots+q^{(d-1)(d-1)}}\\ &=b^{e_2}\\ &=(-a^{qe_1})^{e_2}\\ &=(-1)^{d-1} a^{qe_1e_2}\\ &=(-1)^{d-1} a^{\frac{q^n-1}{q-1}-1} \Mod{a^{q^n}-a} \text{ by Lemma \ref{expos}}. \end{align*} Also \begin{align*} c_{d-1,0}&=a^{q^{k-1}+q^{k-d-1}+\dots+q^{k-(d-2)d-1}}\\ &=a^{q^{(d-2)d+1}+q^{(d-3)d+1}+\dots+q} \text{ for }k=(d-2)d+2. \end{align*} Thus \begin{align*} c_{d-1,d-1}M_{l,1}&=(-1)^{d-1} a^{\frac{q^n-1}{q-1}-1}(aM_{l,1-d}+bM_{l,2-d})\\ &=(-1)^{d-1} a^{\frac{q^n-1}{q-1}}M_{l,1-d}+(-1)^{d} a^{\frac{q^n-1}{q-1}+\sum_{i=0}^{d-2} q^{id+1}}M_{l,2-d}\\ &=(-1)^{d-1} (-1)^{d-1}M_{l,1-d}+(-1)^{d} (-1)^{d-1}c_{d-1,0}M_{l,2-d}\\ &=M_{l,1-d}-c_{d-1,0}M_{l,2-d} \end{align*} since $a^{\frac{q^n-1}{q-1}}=(-1)^{d-1}$ (condition 1 in the statement of the theorem). Hence, \begin{align} M_{l,n-d+1}&=\begin{cases} M_{l,1-d}-c_{d-1,0}M_{l,2-d} & \text{for } l=1, \\ \binom{d-1}{l-2} c_{d-1,l-2}M_{l,l-d}+M_{l,1-d}-c_{d-1,0}M_{l,2-d} & \text{for } l\geq2\\ \end{cases}\nonumber \\ &= \begin{cases} 1 & \text{for } l=1, \\ 0 & \text{for } l=2, \\ \binom{d-1}{l-2} c_{d-1,l-2} & \text{for } l\geq3 \label{jjj} \end{cases} \end{align} since $M_{l,l-d}=1$ and $M_{l,k}=0$ when $k\neq l-d$ and $k\leq0$. So far we have only used conditions 1 and 2 in the statement of the theorem (so note for later that conditions 1 and 2 imply \eqref{jjj}). Assume now that condition 3 holds. By Lemma \ref{binomodp}, $M_{l,n-d+1}=0$ for all $l\geq3$ because $\binom{d-1}{l-2} =0$. This completes the proof that if the three conditions in the statement hold, then $L$ splits completely. Now we complete the proof of the theorem by showing the converse, i.e. we show that if $L$ splits completely then the three conditions in the statement hold. Theorem \ref{direction1} and Corollary \ref{direction1_2} show that if $L$ splits completely, then conditions 1 and 2 of the theorem hold. Because conditions 1 and 2 hold, we know that \eqref{jjj} holds. On the other hand, since $L$ splits completely, $M_{l,n-d+1}=0$ for all $l\geq3$. Therefore $\binom{d-1}{l-2} c_{d-1,l-2}=0$ for all $3 \leq l\leq d$. 
We now use the fact that $c_{d-1,l-2}$ is a power of $a$, and is therefore nonzero because $a$ is nonzero. We are forced to conclude that $\binom{d-1}{l-2} =0$ for all $3 \leq l\leq d$. This implies that $d-1$ is a power of the characteristic of $\mathbb{F}_{q^n}$ by Lemma \ref{binomodp}. \end{proof} \section{Possible Application to Cryptography}\label{crypt} \subsection{Quasi-Subfield Polynomials} The recent work \cite{Quasi} explored the use of quasi-subfield polynomials to solve the Elliptic Curve Discrete Logarithm Problem (ECDLP). They define quasi-subfield polynomials as polynomials of the form $x^{q^d}-\lambda(x)\in\mathbb{F}_{q^n}[x]$ which divide $x^{q^n}-x$ and where $\log_q(\deg(\lambda))<d^2/n$. For appropriate choices of $n$ and $d$, linearized polynomials have a chance of being quasi-subfield polynomials. We first observe that the polynomials in Theorem \ref{gen_ab} are quasi-subfield polynomials. \begin{Lemma}\label{linqs} The linearized polynomial $L=x^{q^d}-bx^q-ax\in\mathbb{F}_{q^{(d-1)d+1}}[x]$ is a quasi-subfield polynomial when all the following conditions are satisfied. \begin{enumerate} \item $a^{1+q+\dots+q^{(d-1)d}}=(-1)^{d-1}$ \item $b=-a^{qe_1}$ \item $d-1$ is a power of the characteristic of $\mathbb{F}_{q^n}$. \end{enumerate} \end{Lemma} \begin{proof} Here, $\log_q(\deg(\lambda))=1$ and $d^2>n=(d-1)d+1$ so the condition $\log_q(\deg(\lambda))<d^2/n$ is satisfied. By Theorem \ref{gen_ab}, $L(x)$ divides $x^{q^n}-x$. \end{proof} \subsection{The ECDLP} Let $E$ be an elliptic curve over a finite field $\mathbb{F}_q$, where $q$ is a prime power. In practice, $q$ is often a prime number or a large power of 2. Let $P$ and $Q$ be $\mathbb{F}_q$-rational points on $E$. The Elliptic Curve Discrete Logarithm Problem (ECDLP) is finding an integer $l$ (if it exists) such that $Q=lP$. The integer $l$ is called the discrete logarithm of $Q$ to base $P$. The ECDLP is a hard problem that underlies many cryptographic schemes and is thus an area of active research. The introduction of summation polynomials by \cite{Semaev04} has led to algorithms that resemble the index calculus algorithm of the DLP over finite fields. The algorithm to solve the ECDLP in \cite{Quasi} also uses summation polynomials, so we recall their definition. \begin{Definition}\cite{Semaev04} Let $E$ be an elliptic curve over a field $K$. For $m\geq 1$, we define the summation polynomial $S_{m+1}=S_{m+1}(X_0,X_1,\ldots,X_m)\in K[X_0,X_1,\ldots,X_m]$ of $E$ by the following property. Let $x_0,x_1,\ldots,x_m\in\overline{K}$, then $S_{m+1}(x_0,x_1,\ldots,x_m)=0$ if and only if there exist $y_0,y_1,\ldots,y_m\in\overline{K}$ such that $(x_i,y_i)\in E(\overline{K})$ and $(x_0,y_0)+(x_1,y_1)+\ldots+(x_m,y_m)=\mathcal{O}$, where $\mathcal{O}$ is the identity element of $E$. \end{Definition} The summation polynomials $S_m$ have many terms and have only been computed for $m\le 9$. \cite{Quasi} develop an algorithm to solve the ECDLP over the field $\mathbb{F}_{q^n}$ using a quasi-subfield polynomial $X^{q^d}-\lambda(X)\in\mathbb{F}_{q^n}[X]$ and the summation polynomial $S_{m+1}(X_0,X_1,\ldots,X_m)\in\mathbb{F}_{q^n}[X_0,X_1,\ldots,X_m]$. By \cite[Theorem 3.2]{Quasi} (see also Appendix A1) their algorithm has complexity $$m!q^{n-d(m-1)}\tilde{O}(m^{5.188}2^{7.376m(m-1)}\deg(\lambda)^{4.876m(m-1)})+mq^{2d}.$$ \subsection{Linearized Quasi-Subfield Polynomials} One of the problems outlined in \cite{Quasi} is to find suitable quasi-subfield polynomials that give optimal complexity in their algorithm. 
So in this section, we will investigate whether the linearized polynomials in this paper are a suitable choice. In our notation the field is $\mathbb{F}_{q^n}$ so brute force algorithms have $O(q^n)$ complexity and generic algorithms (Pollard Rho or Baby-Step-Giant-Step) have $O(q^{n/2})$ complexity. If $n=(d-1)d+1$ and we use $L=x^{q^d}-bx^q-ax\in\mathbb{F}_{q^{(d-1)d+1}}[x]$ as in Lemma \ref{linqs} as our quasi-subfield polynomial, then we get complexity $$m!q^{d^2-dm+1}\tilde{O}(m^{5.188}2^{7.376m(m-1)}q^{4.876m(m-1)})+mq^{2d}$$ for the algorithm in \cite{Quasi}. However, since $d^2-dm+1+4.876m(m-1)>n/2$ for any $d,m$, this will not beat generic discrete log algorithms. Thus it appears that the polynomials of Theorem \ref{gen_ab} will not lead to an ECDLP algorithm that beats generic algorithms, although they can beat brute force algorithms. \begin{Remark} We briefly discuss adding another term of small degree, for example, an $x^{q^2}$ term. Suppose we have a linearized polynomial $L=x^{q^d}-cx^{q^2}-bx^q-ax\in\mathbb{F}_{q^n}[x]$ which splits completely and with $d^2>2n$ (so $L(x)$ is a quasi-subfield polynomial). Then the algorithm of \cite{Quasi} has complexity $$m!q^{n-d(m-1)}\tilde{O}(m^{5.188}2^{7.376m(m-1)}(q^2)^{4.876m(m-1)})+mq^{2d}.$$ To beat generic discrete log algorithms, we require at least $n-d(m-1)\leq n/2$ and $2d\leq n/2$, which implies $\frac{n}{2(m-1)}\leq d\leq\frac{n}{4}$ and therefore $m\geq3$. As an example, if we choose $q=2$ and $m=4$ then we have $2^{7.376m(m-1)}(q^2)^{4.876m(m-1)}\approx1.45\cdot q^{205}$ inside the $\tilde{O}$. This means that the overall complexity can beat generic algorithms over $\mathbb{F}_{2^n}$ (for $n$ sufficiently large). For example, a choice of $d$ around $n/5$ when $q=2$, $m=4$, would give a complexity $O(q^{0.4n})$ for $n>500$. To obtain an estimate for smaller field sizes we may try $m=3$, which implies that $d\approx\frac{n}{4}$. These choices would give us complexity $$q^{n/2}\tilde{O}(3^{5.188}2^{44.256}q^{58.512})+3q^{n/2}$$ which is not better than generic algorithms. One example of a linearized polynomial which splits completely and matches these choices ($q=2$, $d\approx\frac{n}{4}$) is $L=x^{1024}+x^{4}+x\in\mathbb{F}_{2^{42}}[x]$. \end{Remark} \section{Conclusion and open questions} We have provided necessary and sufficient conditions for $L=x^{q^d}-bx^q-ax\in\mathbb{F}_{q^{(d-1)d+1}}[x]$ to have all $q^d$ roots in $\mathbb{F}_{q^{(d-1)d+1}}$. The recursive formula that we found for trinomial linearized polynomials is valid for more general linearized polynomials too: Let $L=x^{q^d}-\sum_{i=0}^{d-1}a_i x^{q^i}$. Set $M_{l,l-d}=1$, and $M_{l,k}=0$ for $k\leq0$ and $k\ne l-d$. For $1\leq l\leq d$ and $k\geq1$, let \begin{equation} M_{l,k}=M_{l,k-d}a_0^{q^{k-1}}+M_{l,k-d+1}a_1^{q^{k-1}}+\cdots+M_{l,k-1}a_{d-1}^{q^{k-1}} =\sum_{i=0}^{d-1} M_{l,k-d+i}a_i^{q^{k-1}}. \end{equation} Then $M_{l,k}$ is the $(l,d)$ entry of $A_{L,k}$. Furthermore, the $(l,j)$ entry of $A_{L,k}$ is $M_{l,k-d+j}$. We are currently working on extending these results to this more general case, for example, to polynomials of the form $L=x^{q^d}-cx^{q^2}-bx^q-ax$. \begin{funding} This research was supported by a Postgraduate Government of Ireland Scholarship from the Irish Research Council. \end{funding} \end{document}
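The following Python sketch (an editorial illustration appended to the paper above, not part of the original) brute-force checks part 3 of the main theorem in the case $q=2$, $d=3$, $n=7$: here $d-1=2$ is a power of the characteristic and $N(a)=a^{127}=1$ for every nonzero $a$, so the theorem predicts that $L(x)=x^{8}-bx^{2}-ax$ splits completely over $\mathbb{F}_{2^{7}}$ whenever $b=-a^{qe_1}=a^{146}$, giving $(q^n-1)/(q-1)=127$ such trinomials. The field $\mathbb{F}_{2^7}$ is realized here with the irreducible polynomial $t^7+t+1$; this particular modulus is a choice made only for the illustration, and the script checks one direction of the theorem (that the predicted trinomials do split).

\begin{verbatim}
def gf_mul(x, y, mod=0b10000011, deg=7):
    # multiply two elements of GF(2^7) = GF(2)[t]/(t^7 + t + 1),
    # elements encoded as 7-bit integers
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
    for i in range(2 * deg - 2, deg - 1, -1):   # reduce modulo t^7 + t + 1
        if r & (1 << i):
            r ^= mod << (i - deg)
    return r

def gf_pow(x, e):
    r = 1
    while e:                                    # square-and-multiply
        if e & 1:
            r = gf_mul(r, x)
        x = gf_mul(x, x)
        e >>= 1
    return r

q, d, n = 2, 3, 7
e1 = sum(q ** (i * d) for i in range(d))        # e1 = 1 + q^d + q^{2d} = 73
count = 0
for a in range(1, 2 ** n):
    b = gf_pow(a, q * e1)                       # b = -a^{q e1}; the sign is + in characteristic 2
    # count the roots of L(x) = x^{q^d} - b x^q - a x in GF(2^7); addition is XOR
    roots = sum(1 for x in range(2 ** n)
                if gf_pow(x, q ** d) ^ gf_mul(b, gf_mul(x, x)) ^ gf_mul(a, x) == 0)
    if roots == q ** d:
        count += 1
print(count, "of", 2 ** n - 1, "predicted trinomials split completely")
\end{verbatim}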
\begin{document} \centerline{\bf The First Derivative of Ramanujan's Cubic Continued Fraction}\vskip .10in \centerline{\bf Nikos Bagis} \centerline{Department of Informatics} \centerline{Aristotle University of Thessaloniki, Greece} \centerline{[email protected]} \begin{quote} \begin{abstract} We give the complete evaluation of the first derivative of Ramanujan's cubic continued fraction using elliptic functions. The elliptic functions are easy to handle and give the results in terms of Gamma functions and radicals from tables. \end{abstract} \bf keywords \rm{Ramanujan's Cubic Fraction; Jacobian Elliptic Functions; Continued Fractions; Derivative} \end{quote} \section{Introduction} \label{intro} Ramanujan's cubic continued fraction is (see [3], [7], [8], [9], [11]): \begin{equation} V(q):=\frac{q^{1/3}}{1+}\frac{q+q^2}{1+}\frac{q^2+q^4}{1+}\frac{q^3+q^6}{1+}\ldots \end{equation} Our main result is the evaluation of the first derivative of Ramanujan's cubic fraction. For this, we follow a different route from previous works and use the theory of elliptic functions. Our method consists in finding the complete polynomial equation of the cubic fraction, which is a quartic equation solvable in radicals, in terms only of the inverse elliptic nome $k_r$. Using the derivative of $k_r$, which we evaluate in Section 2 of this article, we find the desired formula for the first derivative. We begin with some definitions.\\ Let \begin{equation} \left(a;q\right)_k=\prod^{k-1}_{n=0}(1-aq^n) \end{equation} Then we define \begin{equation} f(-q)=(q;q)_\infty \end{equation} and \begin{equation} \Phi(-q)=(-q;q)_\infty \end{equation} Also let \begin{equation} K(x)=\int^{\pi/2}_{0} \frac{1}{\sqrt{1-x^2\sin^2(t)}}dt \end{equation} be the complete elliptic integral of the first kind.\\ We denote by \begin{equation} \theta_4(u,q)=\sum^{\infty}_{n=-\infty}(-1)^nq^{n^2}e^{2nui} \end{equation} the elliptic theta function of the fourth kind. The following relations also hold (see [16]): \begin{equation} \prod^{\infty}_{n=1}(1-q^{2n})^6=\frac{2kk'K(k)^3}{\pi^3q^{1/2}} \end{equation} \begin{equation} q^{1/3}\prod^{\infty}_{n=1}(1+q^n)^8=2^{-4/3}\left(\frac{k}{1-k^2}\right)^{2/3} \end{equation} and \begin{equation} f(-q)^8=\prod^{\infty}_{n=1}(1-q^n)^8=\frac{2^{8/3}}{\pi^4}q^{-1/3}k^{2/3}(k')^{8/3}K(k)^4 \end{equation} The variable $k$ is defined from the equation \begin{equation} \frac{K(k')}{K(k)}=\sqrt{r} \end{equation} where $r$ is positive, $q=e^{-\pi \sqrt{r}}$ and $k'=\sqrt{1-k^2}$. Note also that whenever $r$ is a positive rational, the $k=k_r$ are algebraic numbers. \section{The Derivative $\left\{r,k\right\}$} \textbf{Lemma 1.}\\ If $\left|t\right|<\pi a/2$ and $q=e^{-\pi a}$ then \begin{equation} \sum^{\infty}_{n=1}\frac{\cosh(2tn)}{n\sinh(\pi a n)}=\log(f(-q^2))-\log\left(\theta_4(it,e^{-a\pi})\right) \end{equation} \textbf{Proof.}\\ From the Jacobi Triple Product Identity (see [4]) we have \begin{equation} \theta_4(z,q)=\prod^{\infty}_{n=1}(1-q^{2n})(1-q^{2n-1}e^{2iz})(1-q^{2n-1}e^{-2iz}) \end{equation} By taking the logarithm of both sides and expanding the logarithms of the individual factors in power series, it is simple to derive (11) from (12).
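Identity (11) is easy to test numerically. The following Python snippet (an editorial addition, not part of the original paper; the truncation orders and the sample values $a=1$, $t=0.7$ are arbitrary choices) evaluates both sides of (11) directly from the series and product definitions.

\begin{verbatim}
# Numerical check of (11):
# sum_{n>=1} cosh(2tn)/(n sinh(pi a n)) = log f(-q^2) - log theta_4(it,q),  q = e^{-pi a}
import math

a, t = 1.0, 0.7                      # any a > 0 and |t| < pi*a/2
q = math.exp(-math.pi * a)

lhs = sum(math.cosh(2 * t * n) / (n * math.sinh(math.pi * a * n)) for n in range(1, 60))

f_minus_q2 = 1.0                     # f(-q^2) = prod_{n>=1} (1 - q^{2n})
for n in range(1, 60):
    f_minus_q2 *= 1 - q ** (2 * n)

# theta_4(it, q) = 1 + 2 sum_{n>=1} (-1)^n q^{n^2} cosh(2nt)
theta4 = 1.0 + 2 * sum((-1) ** n * q ** (n * n) * math.cosh(2 * n * t) for n in range(1, 60))

rhs = math.log(f_minus_q2) - math.log(theta4)
print(lhs, rhs)                      # the two values agree to high precision
\end{verbatim}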
\[ \] \textbf{Lemma 2.}\\ Let $q=e^{-\pi \sqrt{r}}$ with $r$ real and positive, and set \begin{equation} \phi(x)=2\frac{d}{dx}\left(\frac{\partial}{\partial t}\log\left(\vartheta_4\left(\frac{it\pi}{2},e^{-2\pi x}\right)\right)_{t=x}\right) \end{equation} Then \begin{equation} \frac{d(\sqrt{r})}{dk}=\frac{K^{(1)}(k)}{\phi\left(\frac{K(\sqrt{1-k^2})}{K(k)}\right)}= \frac{K^{(1)}(k)}{\phi\left(\frac{K(k')}{K(k)}\right)} \end{equation} where $K^{(1)}(k)$ is the first derivative of $K$.\\ \textbf{Proof.}\\ From Lemma 1 we have $$2\frac{\partial}{\partial t}\log\left(\vartheta_4\left(\frac{it\pi}{2},e^{-2\pi x}\right)\right)_{t=x}=-\pi\sum^{\infty}_{n=1}\frac{1}{\cosh\left(n\pi x\right)}=\frac{\pi}{2}-K(k_x)$$ then $$\sqrt{x(k_2)}-\sqrt{x(k_1)}=-\int^{k_2}_{k_1}\frac{K^{(1)}(k)}{\phi\left(\frac{K\left(\sqrt{1-k^2}\right)}{K(k)}\right)}dk$$ Differentiating the above relation with respect to $k$, we get the result. \[ \] \textbf{Lemma 3.}\\ Set $q=e^{-\pi \sqrt{r}}$ and $$\left\{r,k\right\}:=\frac{dr}{dk}=2\frac{K(k')K^{(1)}(k)}{K(k)\phi\left(\frac{K(k')}{K(k)}\right)}$$ Then \begin{equation} \left\{r,k\right\}=\frac{\pi\sqrt{r}}{K^2(k_r)k_rk'^2_r} \end{equation} \textbf{Proof.}\\ Taking the logarithmic derivative of (9) with respect to $k$ and using Lemma 2, we get: \begin{equation} \pi\left\{r,k\right\}\left(1-24\sum^{\infty}_{n=1}\frac{nq^n}{1-q^n}\right)=\left(\frac{1-5k^2}{(k-k^3)}+\frac{6K^{(1)}}{K}\right)\frac{4K'}{K} \end{equation} But it is known that \begin{equation} \sum^{\infty}_{n=1}\frac{nq^n}{1-q^n}=\frac{1}{24}+\frac{K}{6\pi^2}((5-k^2)K-6E) \end{equation} Hence \begin{equation} \left\{r,k\right\}=\frac{\pi K'}{K^2}\frac{\frac{1-5k^2}{k-k^3}+\frac{6K^{(1)}}{K}}{(k^2-5)K+6E} \end{equation} Also $$a(r)=\frac{\pi}{4K^2}+\sqrt{r}-\frac{E\sqrt{r}}{K},$$ where $a(r)$ is the elliptic alpha function. Using the above relations we get the result.\\ \textbf{Note.}\\ 1) The first derivative of $K$ is $$K^{(1)}=\frac{E}{k_r\cdot k'^2_r}-\frac{K}{k_r}$$ where $k=k_r$ and $k'=k'_r=\sqrt{1-k^2_r}$.\\ 2) In the same way we can find, from the relation \begin{equation} k_{4r}=\frac{1-k'_r}{1+k'_r} \end{equation} the degree-2 modular equation of the derivative.\\ Noting first that (the proof is easy) \begin{equation} \left\{r,k'_{r}\right\}=\frac{k'_r}{k_{r}}\left\{r,k_r\right\} \end{equation} we have \begin{equation} \left\{r,k_{4r}\right\}=\frac{k'_r(1+k'_r)^2}{2k_r}\left\{r,k_r\right\} \end{equation} \section{Ramanujan's Cubic Continued Fraction} Let \begin{equation} V(q):=\frac{q^{1/3}}{1+}\frac{q+q^2}{1+}\frac{q^2+q^4}{1+}\frac{q^3+q^6}{1+}\ldots \end{equation} be Ramanujan's cubic continued fraction. Then the following results hold. \[ \] \textbf{Lemma 4.} \begin{equation} V(q)=\frac{2^{-1/3}(k_{9r})^{1/4}(k'_{r})^{1/6}}{(k_r)^{1/12}(k'_{9r})^{1/2}} \end{equation} where $k_{9r}$ is given by (see [7]): \begin{equation} \sqrt{k_rk_{9r}}+\sqrt{k'_rk'_{9r}}=1 \end{equation} \textbf{Proof.}\\ The proof can be found in [18]. \[ \] \textbf{Lemma 5.}\\ If $$G(x)=\frac{x}{\sqrt{2\sqrt{x}-3x+2x^{3/2}-2\sqrt{x}\sqrt{1-3\sqrt{x}+4x-3x^{3/2}+x^2}}}$$ and \begin{equation} k_r=G(w) \end{equation} then $$k_{9r}=\frac{w}{k_r}$$ and $$k'_{9r}=\frac{(1-\sqrt{w})^2}{k'_r}$$ \textbf{Proof.}\\ See [18]. \[ \] \textbf{Theorem 1.}\\ Set $T=\sqrt{1-8V^3(q)}$; then \begin{equation} (k_r)^2=\frac{(1-T)(3+T)^3}{(1+T)(3-T)^3} \end{equation} \textbf{Proof.}\\ See [18].
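Relation (26) and the value (27) below can be checked numerically. In the following Python sketch (an editorial illustration, not in the original; it assumes the standard expression $k_r=\theta_2^2(0,q)/\theta_3^2(0,q)$ for the modulus in terms of the nome, and uses the case $r=1$, where $k_1=1/\sqrt{2}$), the continued fraction (22) is evaluated by truncation at $q=e^{-\pi}$ and both sides of (26), as well as (27), are compared.

\begin{verbatim}
# Numerical check of Theorem 1, eq. (26), and of eq. (27) at r = 1.
import math

def cubic_cf(q, depth=40):
    """Truncation of Ramanujan's cubic continued fraction V(q), eq. (1)/(22)."""
    tail = 1.0
    for m in range(depth, 0, -1):
        tail = 1.0 + (q ** m + q ** (2 * m)) / tail
    return q ** (1.0 / 3.0) / tail

q = math.exp(-math.pi)                      # r = 1
V = cubic_cf(q)
T = math.sqrt(1 - 8 * V ** 3)

# modulus from theta series: k = theta_2(0,q)^2 / theta_3(0,q)^2
theta2 = 2 * sum(q ** ((n + 0.5) ** 2) for n in range(0, 40))
theta3 = 1 + 2 * sum(q ** (n * n) for n in range(1, 40))
k1 = theta2 ** 2 / theta3 ** 2              # ~ 1/sqrt(2) for r = 1

lhs = k1 ** 2
rhs = (1 - T) * (3 + T) ** 3 / ((1 + T) * (3 - T) ** 3)
print(lhs, rhs)                             # eq. (26): both ~ 0.5

closed_form = 0.5 * (-2 - math.sqrt(3) + math.sqrt(3 * (3 + 2 * math.sqrt(3))))
print(V, closed_form)                       # eq. (27): V(e^{-pi})
\end{verbatim}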
\[ \] Equation (26) is a solvable quartic equation with respect to $T$.\\ An example of evaluation is \begin{equation} V(e^{-\pi})=\frac{1}{2}\left(-2-\sqrt{3}+\sqrt{3(3+2\sqrt{3})}\right) \end{equation} \[ \] \textbf{Main Theorem.}\\ Let $q=e^{-\pi\sqrt{r}}$. Then \begin{equation} V'(q)=\frac{dV(q)}{dq}=\frac{-2\sqrt{r}}{q\pi}\frac{dV}{dr}=\frac{4K^2(k_r)k'^2_r(V(q)+V^4(q))}{3q\pi^2\sqrt{1-8V^3(q)}} \end{equation} \textbf{Proof.}\\ Differentiating (26) with respect to $r$, we get \begin{equation} \sqrt{\frac{2k_r}{\left\{k,r\right\}}}=\frac{4T(3+T)}{(3-T)^2(1+T)}\sqrt{\frac{dT}{dr}} \end{equation} or \begin{equation} T_r=\frac{dT}{dr}=\frac{1}{8k_r\left\{r,k\right\}}\frac{(9-T^2)(1-T^2)}{T^2} \end{equation} Using the relation $T=\sqrt{1-8V(q)^3}$, we get $$ \frac{dV(q)}{dr}=-\frac{2}{3}\frac{V(q)+V^4(q)}{k_r \left\{r,k\right\} \sqrt{1-8V^3(q)}} \eqno{(a)}$$ which is the result.\\ Hence the problem of finding $V(q)$ and $V'(q)$ is completely solvable in radicals when we know $k_r$ and $K(k_r)$ (see [12]), $r\in\bf Q\rm$, $r>0$. \[ \] We often use the notations $V[r]:=V(e^{-\pi\sqrt{r}})$, $T[r]:=T(e^{-\pi\sqrt{r}})=t$. \[ \] \textbf{Proposition 1.} \begin{equation} V[4r]=\frac{1-T[r]}{4V[r]} \end{equation} \textbf{Proof.}\\ See [9]. \[ \] \textbf{Proposition 2.}\\ Set $T'[4r]=u$ and $T'[r]=\nu$; then \begin{equation} \frac{u}{\nu}=\frac{(1-t)(3+t)}{8\sqrt{t}(1+t)^{5/3}(3-t)^{1/2}} \end{equation} \textbf{Proof.}\\ From (19), (20), (21) and (28) we get \begin{equation} V'[4r]=\frac{-2\left\{k,r\right\}}{3\frac{1-k'}{1+k'}\frac{k'(1+k')^2}{2k}}\frac{V[4r]+V[4r]^4}{T[r]} \end{equation} If we use the duplication formula (31) we get the result. \[ \] \textbf{Evaluations}.\\ 1) We can now easily calculate the values of $V'(q)$ from (28) using (26). As an example of evaluation, for $r=1$ we have $$k_1=\frac{1}{\sqrt{2}},$$ $$E(k_1)=\frac{4\pi^{3/2}}{\Gamma(-1/4)^2}+\frac{\Gamma(3/4)^2}{2\sqrt{\pi}}$$ and $$K(k_1)=\frac{8\pi^{3/2}}{\Gamma(-1/4)^2}$$ When $r=1$ we get $$\left\{r,k\right\}=\frac{8\sqrt{2}\Gamma(3/4)^4}{\pi^2}$$ Hence \begin{equation} V'(e^{-\pi})=-\frac{64 \left(-26-15 \sqrt{3}+10 \sqrt{3+2 \sqrt{3}}+6 \sqrt{9+6 \sqrt{3}}\right)}{\sqrt{45+26 \sqrt{3}-18 \sqrt{3+2 \sqrt{3}}-10 \sqrt{9+6 \sqrt{3}}}}\frac{e^{\pi } \pi}{\Gamma\left(-\frac{1}{4}\right)^4} \end{equation} 2) We have $$T_1=T(e^{-\pi\sqrt{3}})=-39+22\sqrt{3}-\frac{2\cdot 6^{2/3}(-123+71\sqrt{3})}{\left(-4725+2728\sqrt{3}-\sqrt{4053-2340\sqrt{3}}\right)^{1/3}}+ $$ $$ +2\cdot6^{1/3}\left(-4725+2728\sqrt{3}-\sqrt{4053-2340\sqrt{3}}\right)^{1/3}$$ and $V_1=V(e^{-\pi\sqrt{3}})=\frac{1}{2}\sqrt[3]{1-T_1^2}$.\\ From tables and (15) we have: $$ \left\{3,k_3\right\}=\frac{192 \sqrt{2} \left(-1+\sqrt{3}\right) \pi ^2}{\Gamma\left(\frac{1}{6}\right)^2 \Gamma\left(\frac{1}{3}\right)^2}$$ We find the value of $V'(e^{-\pi\sqrt{3}})$ in terms of the Gamma function and algebraic numbers: $$V'(e^{-\pi\sqrt{3}})=\frac{4\sqrt{3}e^{\pi\sqrt{3}}}{3\pi}\frac{V_1+V^4_1}{k_3 \left\{3,k_3\right\} \sqrt{1-8 V^3_1}}$$ \[ \] \centerline{\bf References}\vskip .2in \noindent [1]: M. Abramowitz and I. A. Stegun. 'Handbook of Mathematical Functions'. Dover Publications, New York. 1972. [2]: C. Adiga, T. Kim. 'On a Continued Fraction of Ramanujan'. Tamsui Oxford Journal of Mathematical Sciences 19(1) (2003) 55-56, Aletheia University. [3]: C. Adiga, T. Kim, M. S. Naika and H. S. Madhusudhan. 'On Ramanujan's Cubic Continued Fraction and Explicit Evaluations of Theta-Functions'. arXiv:math/0502323v1 [math.NT] 15 Feb 2005. [4]: G. E. Andrews. 'Number Theory'. Dover Publications, New York. 1994.
[5]: B. C. Berndt. 'Ramanujan's Notebooks Part I'. Springer Verlag, New York (1985). [6]: B. C. Berndt. 'Ramanujan's Notebooks Part II'. Springer Verlag, New York (1989). [7]: B. C. Berndt. 'Ramanujan's Notebooks Part III'. Springer Verlag, New York (1991). [8]: Bruce C. Berndt, Heng Huat Chan and Liang-Cheng Zhang. 'Ramanujan's class invariants and cubic continued fraction'. Acta Arithmetica LXXIII.1 (1995). [9]: Heng Huat Chan. 'On Ramanujan's Cubic Continued Fraction'. Acta Arithmetica 73 (1995), 343-355. [10]: I. S. Gradshteyn and I. M. Ryzhik. 'Table of Integrals, Series and Products'. Academic Press (1980). [11]: Megadahalli Sidda Naika Mahadeva Naika, Mugur Chin [12]: Habib Muzaffar and Kenneth S. Williams. 'Evaluation of Complete Elliptic Integrals of The First Kind and Singular Moduli'. Taiwanese Journal of Mathematics, Vol. 10, No. 6, pp. 1633-1660, Dec 2006. [13]: L. Lorentzen and H. Waadeland. 'Continued Fractions with Applications'. Elsevier Science Publishers B.V., North Holland (1992). [14]: S. H. Son. 'Some integrals of theta functions in Ramanujan's lost notebook'. Proc. Canad. Number Theory Assoc. No. 5 (R. Gupta and K. S. Williams, eds.), Amer. Math. Soc., Providence. [15]: H. S. Wall. 'Analytic Theory of Continued Fractions'. Chelsea Publishing Company, Bronx, N.Y. 1948. [16]: E. T. Whittaker and G. N. Watson. 'A Course of Modern Analysis'. Cambridge U.P. (1927). [17]: I. J. Zucker. 'The summation of series of hyperbolic functions'. SIAM J. Math. Anal. 10 (1979), 192. [18]: Nikos Bagis. 'The complete evaluation of Rogers-Ramanujan and other continued fractions with elliptic functions'. arXiv:1008.1304v1. \end{document}
\begin{document} \title{High-order Foldy-Wouthuysen transformations of the Dirac and Dirac-Pauli Hamiltonians in the weak-field limit} \author{Tsung-Wei Chen} \email{[email protected]}\affiliation{Department of Physics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan} \author{Dah-Wei Chiou} \email{[email protected]} \affiliation{Department of Physics and Center for Condensed Matter Sciences, National Taiwan University, Taipei 10617, Taiwan} \begin{abstract} The low-energy and weak-field limit of the Dirac equation can be obtained by an order-by-order block diagonalization approach to any desired order in the parameter $\boldsymbol{\pi}/mc$ ($\boldsymbol{\pi}$ is the kinetic momentum and $m$ is the mass of the particle). In previous work, it has been shown that, up to the order of $(\boldsymbol{\pi}/mc)^8$, the Dirac-Pauli Hamiltonian in the Foldy-Wouthuysen (FW) representation may be expressed in a closed form and is consistent with the classical Hamiltonian, which is the sum of the classical relativistic Hamiltonian for orbital motion and the Thomas-Bargmann-Michel-Telegdi (T-BMT) Hamiltonian for spin precession. In order to investigate the exact validity of the correspondence between the classical and Dirac-Pauli descriptions, it is necessary to proceed to higher orders. In this paper, we investigate the FW representation of the Dirac and Dirac-Pauli Hamiltonians by using Kutzelnigg's diagonalization method. We show that the Kutzelnigg diagonalization method can be further simplified if nonlinear effects of static and homogeneous electromagnetic fields are neglected (in the weak-field limit). Up to the order of $(\boldsymbol{\pi}/mc)^{14}$, we find that the FW transformation for both Dirac and Dirac-Pauli Hamiltonians is in agreement with the classical Hamiltonian with the gyromagnetic ratio given by $g=2$ and $g\neq2$, respectively. Furthermore, with higher-order terms at hand, it is demonstrated that the unitary FW transformation admits a closed form in the low-energy and weak-field limit. \end{abstract} \pacs{03.65.Pm, 11.10.Ef, 71.70.Ej} \maketitle \section{Introduction} The relativistic quantum theory for a spin-1/2 particle is described by a spinor satisfying the Dirac equation \cite{Dirac1928, Dirac1982}. The four-component spinor of the Dirac particle is composed of two two-component Weyl spinors which correspond to the particle and antiparticle parts. Rigorously, because of the non-negligible probability of creation/annihilation of particle-antiparticle pairs, the Dirac equation is self-consistent only in the context of quantum field theory. For the purpose of obtaining the low-energy limit of the Dirac equation without accounting for the field-theory particle-antiparticle interaction, the Dirac equation is converted to a two-component equation. The Pauli subtraction method eliminates the two small components from the four-component spinor of the Dirac equation and leads to a block-diagonal but energy-dependent effective Hamiltonian in which some non-hermitian terms may appear. To circumvent these difficulties, in the seminal paper \cite{Foldy1950}, Foldy and Wouthuysen (FW) established a series of successive unitary transformations by decomposing the Hamiltonian into even and odd matrices; a block-diagonalized effective Hamiltonian can be constructed up to a certain order of $\boldsymbol{\pi}/mc$.
The series of successive unitary transformations in the FW method can be replaced by a single transformation via the L\"{o}wdin partitioning method \cite{Lowdin1951,Winkler2003}.\footnote{It should be emphasized that the FW transformation is not meant to be used for second quantization, and furthermore it exists only in the weak-field limit or in some special cases (as studied in \secref{sec:exact solution}). If we ruthlessly try to second quantize the theory in the FW representation, whether we can succeed or not, we should perform the second quantization in a way very different from the conventional approach. This is because, in the FW representation, we encounter the non-locality due to \emph{zitterbewegung} (see also P.\ Strange in Ref.~\cite{Winkler2003}).} Furthermore, Eriksen developed a systematic derivation of the unitary transformation and gave an exact FW transformation for a charged spin-1/2 particle in interaction with a field that is not explicitly time-dependent~\cite{Eriksen1958}. The validity of the Eriksen method is investigated in Ref.\ \cite{Vries1968}. In Ref.\ \cite{Kutz1990}, Kutzelnigg developed a single unitary transformation that allows one to obtain the block-diagonalized Dirac Hamiltonian without invoking the decomposition into even and odd matrices used in the FW method. Alternatively, the Dirac Hamiltonian can also be diagonalized via expansion in powers of the Planck constant $\hbar$ \cite{Silenko03,Bliokh05,Goss07}, which enables us to investigate the influences of quantum corrections on the classical dynamics in strong fields \cite{Silenko08}. On the other hand, the classical relativistic dynamics for a charged particle with intrinsic spin in static and homogeneous electromagnetic fields is well understood. The orbital motion is governed by the classical relativistic Hamiltonian \begin{equation}\label{H orbit} H^c_\mathrm{orbit} =\sqrt{c^2\boldsymbol{\pi}^2+m^2c^4}\, +V(\mathbf{x}), \end{equation} where $\boldsymbol{\pi}=\mathbf{p}-q\mathbf{A}/c$ is the kinetic momentum operator with $\mathbf{A}$ being the magnetic vector potential~\cite{Jackson} and $V(\mathbf{x})$ the electric potential energy. The spin motion is governed by the Thomas-Bargmann-Michel-Telegdi (T-BMT) equation which describes the precession of spin as measured by the laboratory observer~\cite{BMT59}, \begin{equation}\label{Thomas} \frac{d\mathbf{s}}{dt}=\frac{q}{mc}\,\mathbf{s}\times\mathbf{F}(\mathbf{x}) \end{equation} with \begin{equation}\label{ThomasF} \begin{split} \mathbf{F}&=\left(\frac{g}{2}-1+\frac{1}{\gamma}\right)\mathbf{B}-\left(\frac{g}{2}-1\right)\frac{\gamma}{\gamma+1}(\boldsymbol{\beta}\cdot\mathbf{B})\boldsymbol{\beta}\\ &~~-\left(\frac{g}{2}-\frac{\gamma}{\gamma+1}\right)\boldsymbol{\beta}\times\mathbf{E}, \end{split} \end{equation} where $g$ is the gyromagnetic ratio, $\boldsymbol{\beta}$ the boost velocity, $\gamma=1/\sqrt{1-\boldsymbol{\beta}^2}$ the Lorentz factor and $\mathbf{E}$ and $\mathbf{B}$ are electric and magnetic fields measured in the laboratory frame. The intrinsic spin $\mathbf{s}$ in Eq.~(\ref{Thomas}) is the spin observed in the rest frame of the particle. Because $\{s_i,s_j\}=\epsilon_{ijk}s_k$, Eq.~(\ref{Thomas}) can be recast as Hamilton's equation: \begin{equation} \frac{d\mathbf{s}}{dt}=\{\mathbf{s},H^{\mathrm{c}}_\mathrm{spin}\} \end{equation} with \begin{equation}\label{H dipole} H^{\mathrm{c}}_\mathrm{spin}=-\frac{q}{mc}\,\mathbf{s}\cdot \mathbf{F} \end{equation} called the T-BMT Hamiltonian.
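As a simple illustration of Eqs.~(\ref{Thomas}) and (\ref{H dipole}), the following Python sketch (an editorial addition, not part of the original article; illustrative units with $q/mc=1$ and an arbitrary constant $\mathbf{F}$ are assumed) integrates the precession equation $d\mathbf{s}/dt=\mathbf{s}\times\mathbf{F}$ with a classical fourth-order Runge-Kutta step and confirms the two defining features of T-BMT precession: $|\mathbf{s}|$ is conserved and $\mathbf{s}$ returns to its initial value after one period $2\pi/|\mathbf{F}|$.

\begin{verbatim}
import numpy as np

F = np.array([0.0, 0.0, 2.0])          # constant precession vector (q/mc = 1)
s = np.array([1.0, 0.0, 0.5])          # initial rest-frame spin vector

def rhs(s_vec):
    return np.cross(s_vec, F)          # ds/dt = s x F

dt, steps = 1e-4, 31416                # total time ~ one period 2*pi/|F| = pi
s_num = s.copy()
for _ in range(steps):                 # classical 4th-order Runge-Kutta
    k1 = rhs(s_num)
    k2 = rhs(s_num + 0.5 * dt * k1)
    k3 = rhs(s_num + 0.5 * dt * k2)
    k4 = rhs(s_num + dt * k3)
    s_num = s_num + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(np.linalg.norm(s_num), np.linalg.norm(s))  # |s| is conserved
print(s_num, s)                        # spin returns to its initial value after one period
\end{verbatim}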
The combination of Eqs.~(\ref{H orbit}) and (\ref{H dipole}) is hereafter called the classical Hamiltonian $H_{\mathrm{c}}$, \begin{equation}\label{H classical} \begin{split} H_{\mathrm{c}}&=H^{\mathrm{c}}_\mathrm{orbit}+H^{\mathrm{c}}_\mathrm{spin}\\ &=\sqrt{c^2\boldsymbol{\pi}^2+m^2c^4}\, +V-\boldsymbol{\mu}\cdot \mathbf{F}, \end{split} \end{equation} where $\boldsymbol{\mu}=q\mathbf{s}/mc$ is the intrinsic magnetic moment of an electron. The connection between the Dirac equation and the classical Hamiltonian has been investigated by several authors \cite{Foldy1950, Rubinow1963, Rafa1964, Froh1993, Silenko1995}. For a free Dirac particle, it has been shown that the exactly diagonalized Dirac Hamiltonian corresponds to the classical relativistic Hamiltonian \cite{Foldy1950}. In Refs.~\cite{Rubinow1963,Rafa1964}, it was shown that the T-BMT equation may be derived from the WKB wavefunction solutions to the Dirac equation. In the presence of external electromagnetic fields, the Dirac Hamiltonian in the FW representation has been block-diagonalized up to the order of $(\boldsymbol{\pi}/mc)^4$, but the connection is not explicit \cite{Froh1993}. Recently, in Ref.~\cite{TWChen2010}, it has been shown that up to $(\boldsymbol{\pi}/mc)^8$, the resulting FW transformed Dirac, or more generally Dirac-Pauli \cite{Pauli1941}, Hamiltonian in the presence of static and homogeneous electromagnetic fields may agree with the classical Hamiltonian [Eq.~(\ref{H classical})] in the weak-field limit. Order-by-order block-diagonalization methods to higher orders of $\boldsymbol{\pi}/mc$ can be used to investigate the validity of the connection. Furthermore, if the connection can indeed be established (in a closed form), corrections to the classical T-BMT equation due to field inhomogeneity, if any, could also be included. Motivated by these considerations, we adopt a systematic method that can substantially simplify the calculation of the FW transformation of the Dirac Hamiltonian to any higher orders. It must be stressed that block diagonalization of a four-component Hamiltonian into two uncoupled two-component Hamiltonians is not unique, as any composition with additional unitary transformations that act separately on the positive and negative energy blocks will also do the job. Different block-diagonalization transformations are however unitarily equivalent to one another, and thus yield the same physics.\footnote{Once the Hamiltonian is block-diagonalized, further unitary transformations that do not mix the positive and negative energy blocks merely rotate the $2\times2$ Pauli matrices independently for the two blocks, keeping the physics unchanged.} The truly vexing question is whether the unitary block-diagonalization transformation exists at all. In the absence of electric fields, we will show that the answer is affirmative. On the other hand, in the presence of electric fields, the answer seems to be negative, as the energy interacting with electromagnetic fields renders the probability of creation/annihilation of particle-antiparticle pairs non-negligible and thus the particle-antiparticle separation inconsistent. Nevertheless, in the weak-field limit, the interaction energy is well below the Dirac energy gap ($2mc^2$) and we will demonstrate that the unitary transformation exists and indeed admits a closed form in the low-energy and weak-field limit.
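For orientation, the kinetic part of the classical Hamiltonian (\ref{H classical}) can be expanded in powers of $\boldsymbol{\pi}/mc$; these are precisely the coefficients that the FW transformed Hamiltonian is expected to reproduce order by order. The following sympy sketch (an editorial illustration, not part of the original article; $\boldsymbol{\pi}$ is treated as a c-number, so operator ordering and all spin and field terms are ignored) generates the expansion up to $(\boldsymbol{\pi}/mc)^{14}$, the order reached in this paper.

\begin{verbatim}
import sympy as sp

x = sp.symbols('x', positive=True)          # x stands for |pi|/(m c)
m, c = sp.symbols('m c', positive=True)

# sqrt(c^2 pi^2 + m^2 c^4) = m c^2 sqrt(1 + x^2)
H_orbit = m * c**2 * sp.sqrt(1 + x**2)
expansion = sp.series(H_orbit, x, 0, 16)    # binomial series up to x**14
print(expansion)
# -> m*c**2 + m*c**2*x**2/2 - m*c**2*x**4/8 + m*c**2*x**6/16 - 5*m*c**2*x**8/128 + ...
\end{verbatim}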
In this article, we derive the FW transformed Dirac Hamiltonian up to the order of $(\boldsymbol{\pi}/mc)^{14}$ by using Kutzelnigg's diagonalization method \cite{Kutz1990}. The key feature of the Kutzelnigg approach is that it provides an exact block-diagonalized form of the Dirac Hamiltonian involving a self-consistent equation [see Eq.~(\ref{EqX})]. The explicit form of the FW transformed Dirac Hamiltonian can be obtained by solving the self-consistent equation. We will show that the Kutzelnigg method can be further simplified in the weak-field limit, and this simplification enables us to obtain the higher-order terms systematically. We will show that the block diagonalization of the Dirac and Dirac-Pauli Hamiltonians up to the order of $(\boldsymbol{\pi}/mc)^{14}$ in the Foldy-Wouthuysen representation is in agreement with the classical Hamiltonian, and that a closed form of the unitary transformation can be found. This article is organized as follows. In Sec.~\ref{sec:method}, we construct a unitary operator based on the Kutzelnigg method to obtain the exact FW transformed Dirac Hamiltonian and the self-consistent equation. The exact solution of the self-consistent equation is discussed. The FW transformed Dirac Hamiltonian in the presence of inhomogeneous electromagnetic fields is derived in Sec.~\ref{sec:fields}. The effective Hamiltonian up to $(\boldsymbol{\pi}/mc)^4$ for inhomogeneous electromagnetic fields is in agreement with the previous results of Refs.~\cite{Foldy1950,Froh1993}. Static and homogeneous electromagnetic fields are considered in Sec.~\ref{sec:HFW}, where the simplification of the effective Hamiltonian is discussed and the FW transformed Dirac Hamiltonian is obtained up to $(\boldsymbol{\pi}/mc)^{14}$. In Sec.~\ref{sec:TBMT}, the comparison with the classical relativistic Hamiltonian and the T-BMT equation with $g=2$ is discussed. The FW transformed Dirac-Pauli Hamiltonian is shown in Sec.~\ref{sec:TBMT2}. In Sec.~\ref{sec:EUT}, we demonstrate that the exact unitary transformation matrix in the low-energy and weak-field limit can be formally obtained. The conclusions are summarized in Sec.~\ref{sec:conclusions}. Some calculational details are supplemented in the Appendices. \section{Kutzelnigg diagonalization method for Dirac Hamiltonian}\label{sec:method} In this section, we construct a unitary operator based on the Kutzelnigg diagonalization method \cite{Kutz1990} and apply it to the Dirac Hamiltonian. We obtain the formally exact Foldy-Wouthuysen transformed Hamiltonian by requiring that the unitary transformation yield a block-diagonal form. The Dirac Hamiltonian in the presence of electromagnetic fields can be written as \begin{equation}\label{Dirac} \begin{split} H_D&=\left(\begin{array}{cc} V+mc^2& c\boldsymbol{\sigma}\cdot\boldsymbol{\pi}\\ c\boldsymbol{\sigma}\cdot\boldsymbol{\pi}&V-mc^2 \end{array}\right)\\ &\equiv\left(\begin{array}{cc}h_+& h_0\\ h_0&h_- \end{array}\right) \end{split} \end{equation} where $\boldsymbol{\pi}=\mathbf{p}-q\mathbf{A}/c$ is the kinetic momentum operator and $V=q\phi$. The electric field and magnetic field are $\mathbf{E}=-\nabla\phi$ and $\mathbf{B}=\nabla\times\mathbf{A}$, respectively. We note that in the static case, $\nabla\times\mathbf{E}=0$, and thus, $\boldsymbol{\pi}\times\mathbf{E}=-\mathbf{E}\times\boldsymbol{\pi}$. The wave function of the Dirac equation $H_D\psi=i\hbar\frac{\partial}{\partial t}\psi$ consists of two two-spinors, \begin{equation} \psi=\left(\begin{array}{c} \psi_+\\ \psi_- \end{array}\right).
\end{equation} A unitary operator $U$ which \emph{formally} decouples positive and negative energy states can be written in the following form \cite{Kutz1990} \begin{equation}\label{U} U=\left(\begin{array}{cc} Y&YX^{\dag}\\ -ZX&Z \end{array}\right), \end{equation} where operators $Y$ and $Z$ are defined as \begin{equation}\label{Def:YandZ} Y=\frac{1}{\sqrt{1+X^{\dag}X}}, \qquad Z=\frac{1}{\sqrt{1+XX^{\dag}}}. \end{equation} Applying the unitary transformation Eq.~(\ref{U}) to Eq.~(\ref{Dirac}), $UH_DU^{\dag}$ is of the form \begin{equation}\label{UHU} UH_DU^{\dag}=\left(\begin{array}{cc} H_{\mathrm{FW}}&H_{X^{\dag}}\\ H_{X}&H' \end{array}\right). \end{equation} The unitary transformation transforms the wave function $\psi$ as \begin{equation} U\left(\begin{array}{c} \psi_+\\ \psi_- \end{array}\right)=\left(\begin{array}{c} \psi_{\mathrm{FW}}\\ 0 \end{array}\right), \end{equation} where the negative-energy component vanishes and the FW transformed wave function is given by \begin{equation} \psi_{\mathrm{FW}}=\sqrt{1+X^{\dag}X}\,\psi_+. \end{equation} We require that the transformed Hamiltonian takes the block-diagonal form: \begin{equation}\label{UHU-D} UH_DU^{\dag}=\left(\begin{array}{cc} H_{\mathrm{FW}}&0\\ 0&H' \end{array}\right). \end{equation} We find that the requirement that the off-diagonal term vanish, $H_{X}=0$, yields the constraint on the operator $X$: \begin{equation}\label{EqX} X=\frac{1}{2mc^2}\left\{-Xh_0X+h_0+[V,X]\right\}. \end{equation} Equation (\ref{EqX}) is a self-consistent formula for the operator $X$. The resulting FW transformed Hamiltonian $H_{\mathrm{FW}}$ is given by \begin{equation}\label{Un-HFW} \begin{split} H_{\mathrm{FW}}=Y\left(h_++X^{\dag}h_0+h_0X+X^{\dag}h_-X\right)Y. \end{split} \end{equation} Because the operator $X$ plays an important role in generating the FW transformed Hamiltonian and the corresponding unitary operator, the operator $X$ for the Dirac Hamiltonian is hereafter called the \emph{Dirac generating operator}. To our knowledge, the exact solution of Eq.\ (\ref{EqX}) for a general potential is still unknown except for two cases: a free particle and a particle subject only to magnetic fields. For the case with a nontrivial electric potential, we assume that the solution of Eq.\ (\ref{EqX}) can be obtained by a series expansion.\footnote{The series solution of the Dirac generating operator $X$ is not unique because any unitary transformation would lead to a satisfactory $X$ as long as it does not mix positive and negative energy states. This implies that the form of the block-diagonalized Hamiltonian is not unique. In this regard, we focus only on the series solution of $X$ that can correctly generate the FW transformed Dirac Hamiltonian linear in the EM fields and up to the order of $(\boldsymbol{\pi}/mc)^4$, as shown in Sec.~\ref{sec:fields}.} \subsection{Exact solution of Dirac generating operator}\label{sec:exact solution} For a free particle ($\mathbf{A}=0$ and $V=0$), it can be shown that Eq.\ (\ref{EqX}) has an exact solution \begin{equation}\label{free-X} X=\frac{c(\boldsymbol{\sigma}\cdot\mathbf{p})}{mc^2+E_p}, \end{equation} where $E_p=\sqrt{m^2c^4+\mathbf{p}^2c^2}$. Using Eqs.\ (\ref{U}) and (\ref{Def:YandZ}), the unitary transformation matrix can be written as \begin{equation}\label{U-free} U=\frac{1}{\sqrt{2E_p(E_p+mc^2)}}\left(\begin{array}{cc} E_p+mc^2&c\boldsymbol{\sigma}\cdot\mathbf{p}\\ -c\boldsymbol{\sigma}\cdot\mathbf{p}&E_p+mc^2\\ \end{array}\right).
\end{equation} Equation (\ref{U-free}) is the same as the result obtained from the standard FW transformation~\cite{Foldy1950}. The resulting FW transformed free-particle Dirac Hamiltonian is block-diagonalized, \begin{equation}\label{FWfree} H_{\mathrm{FW}}=\left(\begin{array}{cc} E_p&0\\ 0&-E_p\\ \end{array}\right). \end{equation} It is interesting to note that in the absence of an electric field (i.e., $V=\text{const}$), Eq.\ (\ref{EqX}) also admits an exact solution \begin{equation}\label{Magnetic-X} X=\frac{1}{mc^2+E_{\pi}}(c\boldsymbol{\sigma}\cdot\boldsymbol{\pi}), \end{equation} where $E_{\pi}=\sqrt{m^2c^4+c^2(\boldsymbol{\sigma}\cdot\boldsymbol{\pi})^2}$. This can be proved by directly substituting Eq.\ (\ref{Magnetic-X}) into Eq.\ (\ref{EqX}) with $V=\text{const}$. The exact unitary transformation matrix can be formally constructed and the resulting FW transformed Hamiltonian can be obtained. In the presence of a nontrivial electric potential, it is difficult to obtain an exact solution because the term $[V,X]$ does not vanish. Therefore, the diagonalization procedure for the Dirac Hamiltonian must be performed order by order. It is necessary to choose a dimensionless quantity as the order-expanding parameter. We note that Eq.\ (\ref{Magnetic-X}) can be rewritten as $X=\boldsymbol{\sigma}\cdot\boldsymbol{\xi}/[1+\sqrt{1+(\boldsymbol{\sigma}\cdot\boldsymbol{\xi})^2}\,]$ with the dimensionless quantity $\boldsymbol{\xi}=\boldsymbol{\pi}/mc$. The order-by-order block-diagonalized Hamiltonian can be expressed in terms of the order parameter $\boldsymbol{\xi}$. In this paper, we further focus on the weak-field limit in order to compare the block-diagonalized Hamiltonian with its classical counterpart. \subsection{Series expansion of Dirac generating operator} The upper-left diagonal term of $UH_DU^{\dag}$ is the FW transformed Hamiltonian $H_{\mathrm{FW}}$ under the constraint Eq.~(\ref{EqX}), and it can be written as (see Appendix \ref{App:Ham}) \begin{equation}\label{HFW} \begin{split} H_{\mathrm{FW}}=mc^2+e^{G/2}Ae^{-G/2}, \end{split} \end{equation} where operators $A$ (hereafter called the \emph{Dirac energy operator}) and $G$ (hereafter called the \emph{Dirac exponent operator}) are defined as \begin{equation}\label{AG} A\equiv V+h_0X, \qquad G\equiv\ln\left(1+X^{\dag}X\right). \end{equation} The other requirement, $H_{X^{\dag}}=0$, gives the constraint on the hermitian conjugate $X^{\dag}$ of the Dirac generating operator, which is simply the hermitian conjugate of Eq.~(\ref{EqX}), since $H_{X}^{\dag}=H_{X^{\dag}}$. It can be shown that $H_{X}=0$ and $H_{X^{\dag}}=0$ imply that (see Appendix \ref{App:Ham}) \begin{equation}\label{HFWdag} H_{\mathrm{FW}}^{\dag}=mc^2+e^{-G/2}A^{\dag}e^{G/2} \end{equation} hence Eq.~(\ref{HFW}) is a hermitian operator, since the Dirac exponent operator is hermitian. Equation (\ref{HFW}) can be simplified if we rewrite the Dirac energy operator $A$ as the sum of its hermitian ($A^H$) and anti-hermitian ($A^N$) parts, $A=A^H+A^N$, where \begin{equation} \begin{split} A^H=\frac{A+A^{\dag}}{2}, \qquad A^N=\frac{A-A^{\dag}}{2}.
\end{split} \end{equation} Combining Eq.~(\ref{HFW}) with Eq.~(\ref{HFWdag}) via $H_{\mathrm{FW}}=\left(H_{\mathrm{FW}}+H_{\mathrm{FW}}^{\dag}\right)/2$, the FW transformed Dirac Hamiltonian can be written as \begin{equation}\label{HFW NE} H_{\mathrm{FW}}=mc^2+A^H+S, \end{equation} where the \emph{Dirac string operator} $S$ is given by \begin{equation}\label{EqS NE} \begin{split} S&=\frac{1}{2}[G,A^N]+\frac{1}{2!2^2}[G,[G,A^H]]+\frac{1}{3!2^3}[G,[G,[G,A^N]]]\\ &~~+\frac{1}{4!2^4}[G,[G,[G,[G,A^H]]]]+\cdots, \end{split} \end{equation} where we have used the Baker-Campbell-Hausdorff formula \cite{Sak}: $e^{B}De^{-B}=D+[B,D]+[B,[B,D]]/2!+[B,[B,[B,D]]]/3!+\cdots$. We note that the anti-hermitian part of the Dirac energy operator always appears in those terms with odd numbers of Dirac exponent operators, and the hermitian part of the Dirac energy operator always appears in those terms with even numbers of Dirac exponent operators. Since the commutator of two hermitian operators must be an anti-hermitian operator, it can be shown that the Dirac string operator is a hermitian operator. In order to compare the FW transformed Dirac Hamiltonian with the classical Hamiltonian, we solve the self-consistent equation for $X$ [Eq.~(\ref{EqX})] by a power series expansion in $1/c$, \begin{equation}\label{SeriesX} X=\frac{X_1}{c}+\frac{X_2}{c^2}+\frac{X_3}{c^3}+\cdots. \end{equation} $X_1$ is the first order of the Dirac generating operator, $X_2$ the second, and so on. Substituting Eq.~(\ref{SeriesX}) into Eq.~(\ref{EqX}), we can obtain each order of the Dirac generating operator $X_{\ell}$. At orders $1/c$ and $1/c^2$, we have \begin{equation}\label{App:EqX1} \begin{split} &2mX_1=\boldsigma\cdot\boldpi,\\ &2mX_{2}=0. \end{split} \end{equation} The expanding terms of the Dirac generating operator of third and higher order are determined by the following equations, \begin{equation}\label{App:EqXoe} \begin{split} 2mX_{2j}&=-\sum_{k_1+k_2=2j-1}X_{k_1}\boldsigma\cdot\boldpi X_{k_2}+[V,X_{2j-2}],\\ 2mX_{2j+1}&=-\sum_{k_1+k_2=2j}X_{k_1}\boldsigma\cdot\boldpi X_{k_2}+[V,X_{2j-1}], \end{split} \end{equation} where $j=1,2,3,\cdots$. Consider the terms of even order in $1/c$, namely, $X_{2j}$. The fourth order of the Dirac generating operator $X_{4}$ is determined by $2mX_4=-(X_{1}\boldsigma\cdot\boldpi X_{2}+X_{2}\boldsigma\cdot\boldpi X_{1})+[V,X_2]$. Since the second order of $X$ is zero ($X_2=0$), the fourth order of $X$ also vanishes, i.e., $X_4=0$. The sixth order of $X$ is obtained by $2mX_6=-(X_{1}\boldsigma\cdot\boldpi X_{4}+X_{2}\boldsigma\cdot\boldpi X_{3}+X_{3}\boldsigma\cdot\boldpi X_{2}+X_{4}\boldsigma\cdot\boldpi X_{1})+[V,X_4]$. Because both the second and the fourth order of the Dirac generating operator vanish, the sixth order $X_6$ is also zero, as are $X_8$, $X_{10}$, and so on. Therefore, we have \begin{equation} X_2=X_4=X_6=\cdots=0, \end{equation} and the non-zero terms are the expanding terms of the Dirac generating operator with odd subscripts, namely, $X=X_1/c+X_3/c^3+X_5/c^5+\cdots$.
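As an independent cross-check of this pattern (it is not needed for the derivation), the recursion relations (\ref{App:EqX1}) and (\ref{App:EqXoe}) can be iterated with a computer-algebra system, keeping $\boldsigma\cdot\boldpi$ and $V$ as noncommuting symbols. A minimal sketch using the \texttt{sympy} library (our choice for this illustration only; the symbol names are ours) is:
\begin{verbatim}
from sympy import Symbol, expand

m  = Symbol('m', positive=True)
sp = Symbol('sp', commutative=False)  # sigma.pi
V  = Symbol('V', commutative=False)   # potential

def comm(a, b):        # commutator [a, b]
    return a*b - b*a

# 2m X_k = -sum_{k1+k2=k-1} X_k1 (sigma.pi) X_k2
#          + [V, X_{k-2}],  with X_0 = 0
X = {0: 0, 1: sp/(2*m)}
for k in range(2, 9):
    s = sum(X[k1]*sp*X[k - 1 - k1]
            for k1 in range(1, k - 1))
    X[k] = expand((-s + comm(V, X[k - 2]))/(2*m))

print(X[2], X[4], X[6], X[8])  # all zero
print(X[3])  # -sp**3/(8 m^3) + [V, sp]/(4 m^2)
\end{verbatim}
The even orders indeed vanish identically, and the printed $X_3$ reproduces Eq.~(\ref{App:EqX3}) below once $[V,\boldsigma\cdot\boldpi]=-iq\hbar\boldsigma\cdot\mathbf{E}$ is inserted.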
Furthermore, since the operator $h_0=c\boldsigma\cdot\boldpi$ is of the order of $c$, the series expansion of $A=V+h_0X$ contains only even powers of $1/c$: \begin{equation} A=A_0+\frac{A_2}{c^2}+\frac{A_4}{c^4}+\cdots, \end{equation} where the $\ell$th order of the Dirac energy operator $A_{\ell}$ is related to the $(\ell+1)$th order of the Dirac generating operator $X_{\ell+1}$ by \begin{equation}\label{EqA} \begin{split} &A_0=V+\frac{h_0}{c}X_1,\\ &A_{\ell}=\frac{h_0}{c}X_{\ell+1}, \end{split} \end{equation} where $\ell=2,4,6,\cdots$. On the other hand, the series expansion of $\ln(1+y)$ is $\ln(1+y)=y-y^2/2+y^3/3-y^4/4+\cdots$. Because $y=X^{\dag}X$, the power series of $y$ contains only even powers of $1/c$: $y=y_2/c^2+y_4/c^4+y_6/c^6+\cdots$, where $y_{\ell}$ are given by \begin{equation}\label{y ell} y_{\ell}=\sum_{k_1+k_2=\ell}X^{\dag}_{k_1}X_{k_2}. \end{equation} For example, $y_6=X_1^{\dag}X_5+X_3^{\dag}X_3+X_5^{\dag}X_1$. Consequently, the Dirac exponent operator can only have terms with even powers of $1/c$ (we note that $G_0=0$) \begin{equation} G=\frac{G_2}{c^2}+\frac{G_4}{c^4}+\frac{G_6}{c^6}+\cdots, \end{equation} where the $\ell$th order of the Dirac exponent operator $G_{\ell}$ can be expressed in terms of $y_{\ell}$: \begin{equation}\label{G ell} \begin{split} G_{\ell}&=y_{\ell}-\frac{1}{2}\sum_{k_1+k_2=\ell}y_{k_1}y_{k_2}+\frac{1}{3}\sum_{k_1+k_2+k_3=\ell}y_{k_1}y_{k_2}y_{k_3}\\ &~~-\frac{1}{4}\sum_{k_1+\cdots+k_4=\ell}y_{k_1}y_{k_2}y_{k_3}y_{k_4}+\cdots. \end{split} \end{equation} For example, $G_6=y_6-(y_2y_4+y_4y_2)/2+y_2^3/3$. Therefore, the FW transformed Dirac Hamiltonian can be expanded in terms of $A_{\ell}$ and $G_{\ell}$ and contains only even powers of $1/c$. That is, \begin{equation}\label{HFW sum} H_{\mathrm{FW}}=mc^2+\sum_{\ell}H^{(\ell)}_{\mathrm{FW}}, \end{equation} where the $\ell$th order of the FW transformed Dirac Hamiltonian, denoted by $H_{\mathrm{FW}}^{(\ell)}$ ($\ell=0,2,4,6,\cdots$), is given by (up to $1/c^{12}$) \begin{equation}\label{EqExpandH} \begin{split} H_{\mathrm{FW}}^{(0)}&=A^H_0,\\ c^{\ell}H_{\mathrm{FW}}^{(\ell)}&=A^H_{\ell}+S_{\ell},\\ \end{split} \end{equation} where $\ell=2,4,6,\cdots,12$. The $\ell$th order of the Dirac string operator $S_{\ell}$ is given by \begin{equation}\label{EqS} \begin{split} &S_{\ell}=\frac{1}{2}\mathop{\sum_{\ell_1+\ell_2=\ell}}[G_{\ell_1},A^N_{\ell_2}]\\ &~~+\frac{1}{2!2^2}\sum_{\ell_1+\ell_2+\ell_3=\ell}[G_{\ell_1},[G_{\ell_2},A^H_{\ell_3}]]\\ &~~+\frac{1}{3!2^3}\sum_{\ell_1+\cdots+\ell_4=\ell}[G_{\ell_1},[G_{\ell_2},[G_{\ell_3},A^N_{\ell_4}]]]\\ &~~+\frac{1}{4!2^4}\sum_{\ell_1+\cdots+\ell_5=\ell}[G_{\ell_1},[G_{\ell_2},[G_{\ell_3},[G_{\ell_4},A^H_{\ell_5}]]]]\\ &~~+\cdots. \end{split} \end{equation} As mentioned above, any unitary transformation would lead to a satisfactory generating operator as long as it does not mix positive and negative energy states. The non-uniqueness of the generating operator can be easily seen as follows. If we perform the Kutzelnigg diagonalization method upon Eq.\ (\ref{UHU-D}) again, then we obtain another block-diagonalized Hamiltonian with a new operator equation for the generating operator. The new diagonalized Hamiltonian $H_{\mathrm{FW}}'$ is determined by Eq.\ (\ref{Un-HFW}) with the replacements: $h_-\rightarrow H'$, $h_0\rightarrow 0$, and $h_+ \rightarrow H_{\mathrm{FW}}$. The form of the new diagonalized Hamiltonian depends on the solution of the new generating operator.
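The coefficient pattern of Eq.~(\ref{G ell}), e.g.\ the quoted form of $G_6$, can likewise be cross-checked by expanding $\ln(1+y)$ with a commuting bookkeeping symbol for $1/c^2$ and noncommuting symbols for the $y_{\ell}$; a minimal \texttt{sympy} sketch (again only an illustration) is:
\begin{verbatim}
from sympy import Symbol, expand

eps = Symbol('eps')  # bookkeeping symbol for 1/c^2
y2, y4, y6 = [Symbol(n, commutative=False)
              for n in ('y2', 'y4', 'y6')]

y = y2*eps + y4*eps**2 + y6*eps**3
# ln(1+y) truncated at y^3: higher powers of y
# only contribute beyond order eps^3
G = expand(y - y*y/2 + y*y*y/3)

G6 = (G.diff(eps, 3)/6).subs(eps, 0)
print(G6)  # y6 - y2*y4/2 - y4*y2/2 + y2**3/3
\end{verbatim}
in agreement with the expression for $G_6$ quoted above.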
We use a series expansion to construct the generating operator and require that the resulting generating operator reduce to the exact solution in the free-particle case, where the Hamiltonian is block-diagonalized to Eq.\ (\ref{FWfree}). In this representation, the positive- and negative-energy states are decoupled and take the classical relativistic energy form [\textit{cf.}\ Eqs.\ (\ref{FWfree}) and (\ref{H orbit})]; this is \emph{the} FW representation adopted in this article. Importantly, we will show that the series expansion of the generating operator [Eq.\ (\ref{SeriesX})] can indeed generate the FW representation. In this sense, interestingly, we can obtain an exact solution of the generating operator and find that the spin part of the resulting block-diagonalized Hamiltonian is equivalent to the T-BMT Hamiltonian. In the next section, we will show, by using Eqs.\ (\ref{EqExpandH}) and (\ref{EqS}), that the effective Hamiltonian resulting from the Foldy-Wouthuysen diagonalization method is equivalent to that from the Kutzelnigg diagonalization method up to terms of order $(\boldsymbol{\pi}/mc)^4$, from which the fine structure, the Darwin term and the spin-orbit interaction can be deduced. \section{Inhomogeneous fields}\label{sec:fields} Up to this step, only two assumptions have been made: (1) the electromagnetic fields are static, and (2) the Dirac generating operator $X$ can be solved for by a series expansion. We calculate the first two terms, $H^{(0)}_{\mathrm{FW}}$ and $H^{(2)}_{\mathrm{FW}}$, and show that the resulting Hamiltonian $H_{\mathrm{FW}}=mc^2+H^{(0)}_{\mathrm{FW}}+H^{(2)}_{\mathrm{FW}}$ is in agreement with the previous result. The zeroth order of the FW transformed Dirac Hamiltonian is \begin{equation}\label{H0FW_1} H_{\mathrm{FW}}^{(0)}=A^H_0, \end{equation} where $A_0=V+(h_0/c)X_1$. The first order of the Dirac generating operator $X_1$ is given in Eq.~(\ref{App:EqX1}), which is valid for inhomogeneous fields. Using $[\pi_i,\pi_j]=\frac{iq\hbar}{c}\epsilon_{ijk}B_k$, we have $(\boldsigma\cdot\boldpi)^2=\boldsymbol{\pi}^2-\frac{q\hbar}{c}\boldsigma\cdot\mathbf{B}$, and $H_{\mathrm{FW}}^{(0)}$ [Eq.~(\ref{H0FW_1})] becomes \begin{equation}\label{H0FW_2} H^{(0)}_{\mathrm{FW}}=V+\frac{\boldsymbol{\pi}^2}{2m}-\frac{q\hbar}{2mc}\boldsigma\cdot\mathbf{B}. \end{equation} We note that $A_0$ is already a hermitian operator, and thus $A^N_0=0$. The second and third terms of Eq.~(\ref{H0FW_2}) are the kinetic energy and the Zeeman energy. The second order $H_{\mathrm{FW}}^{(2)}$ is given by \begin{equation}\label{H2FW_1} c^2H^{(2)}_{\mathrm{FW}}=A^H_2+S_2, \end{equation} where $A_2=(h_0/c)X_3$ and $S_2=[G_2,A_0^N]/2$. For $X_3$, from Eq.~(\ref{App:EqXoe}) we have $2mX_3=-X_1\boldsigma\cdot\boldpi X_1+[V,X_1]$. Using $[V,\,\boldsigma\cdot\boldpi]=-iq\hbar\boldsigma\cdot\mathbf{E}$, we obtain \begin{equation}\label{App:EqX3} X_3=-\frac{1}{4}\frac{T}{m^2}\boldsigma\cdot\boldpi-\frac{1}{4}\frac{iq\hbar}{m^2}\boldsymbol{\sigma}\cdot\mathbf{E}, \end{equation} where \begin{equation} T\equiv(\boldsigma\cdot\boldpi)^2/2m \end{equation} is the kinetic energy operator. The operator $X_3$ is valid for inhomogeneous fields. From Eqs.~(\ref{y ell}) and (\ref{G ell}), the operator $G_2$ is $X^{\dag}_1X_1=T/2m$. Since $A^N_0=0$, we have $S_2=[G_2,A^N_0]/2=0$. Substituting Eq.~(\ref{App:EqX3}) into $A_2$, we have \begin{equation} A_2=-\frac{(\boldsigma\cdot\boldpi)^4}{8m^3}-\frac{iq\hbar}{4m^2}(\boldsigma\cdot\boldpi)(\boldsigma\cdot\mathbf{E}).
\end{equation} The hermitian part of $A_2$ is given by $A^H_2=(A_2+A^{\dag}_2)/2$, \begin{equation}\label{H2FW_2} A^H_2=-\frac{(\boldsigma\cdot\boldpi)^4}{8m^3}-\frac{iq\hbar}{8m^2}[\boldsigma\cdot\boldpi,\boldsigma\cdot\mathbf{E}]. \end{equation} Using $\sigma_i\sigma_j=\delta_{ij}+i\epsilon_{ijk}\sigma_k$, we have $[\boldsigma\cdot\boldpi,\boldsigma\cdot\mathbf{E}]=-i\hbar\nabla\cdot\mathbf{E}+i\boldsymbol{\sigma}\cdot\boldsymbol{\pi}\times\mathbf{E}-i\boldsymbol{\sigma}\cdot\mathbf{E}\times\boldsymbol{\pi}$. For the static case, we have $\boldsymbol{\pi}\times\mathbf{E}=-\mathbf{E}\times\boldsymbol{\pi}$. Therefore, neglecting terms of second order in the magnetic field and using $c^2H^{(2)}_{\mathrm{FW}}=A^H_2$ [Eq.~(\ref{H2FW_1})], Eq.~(\ref{H2FW_2}) becomes \begin{equation}\label{H2FW_3} \begin{split} H^{(2)}_{\mathrm{FW}}&=-\frac{\boldsymbol{\pi}^4}{8m^3c^2}+\frac{q\hbar}{8m^3c^3}\left[\boldsymbol{\pi}^2(\boldsigma\cdot\mathbf{B})+(\boldsigma\cdot\mathbf{B})\boldsymbol{\pi}^2\right]\\ &~~-\frac{q\hbar}{4m^2c^2}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})-\frac{q\hbar^2}{8m^2c^2}\nabla\cdot\mathbf{E}. \end{split} \end{equation} The first term of Eq.~(\ref{H2FW_3}) is the relativistic correction to the kinetic energy. The second term of Eq.~(\ref{H2FW_3}) is the relativistic correction to the Zeeman energy. The third and fourth terms of Eq.~(\ref{H2FW_3}) are the spin-orbit interaction and the Darwin term, which provides heuristic evidence of the \emph{zitterbewegung} phenomenon \cite{Darwin1928}. Combining Eqs.\ (\ref{H0FW_2}) and (\ref{H2FW_3}), we obtain the Foldy-Wouthuysen transformed Dirac Hamiltonian up to terms of order $(\boldsymbol{\pi}/mc)^4$: \begin{equation}\label{HFW02} \begin{split} H_{\mathrm{FW}}&=mc^2+H^{(0)}_{\mathrm{FW}}+H^{(2)}_{\mathrm{FW}}\\ &=mc^2+V+\frac{\boldsymbol{\pi}^2}{2m}-\frac{q\hbar}{2mc}\boldsigma\cdot\mathbf{B}-\frac{\boldsymbol{\pi}^4}{8m^3c^2}\\ &~~+\frac{q\hbar}{8m^3c^3}\left[\boldsymbol{\pi}^2(\boldsigma\cdot\mathbf{B})+(\boldsigma\cdot\mathbf{B})\boldsymbol{\pi}^2\right]\\ &~~-\frac{q\hbar^2}{8m^2c^2}\nabla\cdot\mathbf{E}-\frac{q\hbar}{4m^2c^2}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}). \end{split} \end{equation} Equation (\ref{HFW02}) is in agreement with the earlier results \cite{Foldy1950, Froh1993}, which were obtained by the standard FW method. If we take into account the terms of second order in the electromagnetic fields, our result gives $-\frac{e^2\hbar^2}{8m^3c^4}\mathbf{B}^2$. However, the FW diagonalization method shows that the terms of second order in the electromagnetic fields should be $\frac{e^2\hbar^2}{8m^3c^4}(\mathbf{E}^2-\mathbf{B}^2)$. This discrepancy with the standard FW transformation suggests that the assumed series expansion of $X$ [Eq.~(\ref{SeriesX})] is valid only in the low-energy and weak-field limit, or in the absence of an electric field. In the following sections, we will obtain the FW transformed Dirac and Dirac-Pauli Hamiltonians in the low-energy and weak-field limit. \section{FW transformed Dirac Hamiltonian}\label{sec:HFW} The previous section shows that the Kutzelnigg diagonalization method is valid when we keep only terms linear in the electromagnetic fields. Since the comparison with the T-BMT equation requires only such terms, we focus on them in the following. In this section, we consider static and homogeneous electromagnetic fields and neglect products of fields in the FW transformed Dirac Hamiltonian. The FW transformed Dirac Hamiltonian contains the Dirac energy operator and the Dirac string operator [see Eqs.\ (\ref{EqExpandH}) and (\ref{EqS})].
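A relation used repeatedly below is the linearization of powers of the kinetic energy operator: for homogeneous fields $[\boldsymbol{\pi}^2,\boldsigma\cdot\mathbf{B}]=0$, so that, to first order in the fields,
\begin{equation*}
T^n=\left(\frac{\boldsymbol{\pi}^2}{2m}-\frac{q\hbar}{2mc}\boldsigma\cdot\mathbf{B}\right)^n
\simeq\left(\frac{\boldsymbol{\pi}^2}{2m}\right)^n
-n\left(\frac{\boldsymbol{\pi}^2}{2m}\right)^{n-1}\frac{q\hbar}{2mc}\,\boldsigma\cdot\mathbf{B}.
\end{equation*}
This identity is what converts the $T^n$ terms of Eq.~(\ref{EqAsH}) below into the $\boldsymbol{\xi}^{2n}$ and $\boldsymbol{\xi}^{2(n-1)}\boldmu\cdot\mathbf{B}$ terms of Eq.~(\ref{HFWs}).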
We will calculate $H_{\mathrm{FW}}^{(\ell)}$ from $\ell=0$ to $\ell=12$. We emphasize that Eq.~(\ref{EqA}) implies that the $\ell$th order of the Dirac energy operator is obtained from the next order of the Dirac generating operator. Therefore, we have to obtain the expanding terms of the generating operator up to the order of $1/c^{13}$, i.e., $X_{13}$. The explicit forms of the expanding terms of the generating operator can be derived by using Eqs.\ (\ref{App:EqX1}) and (\ref{App:EqXoe}). Up to the order of $1/c^{13}$, we have \begin{widetext} \begin{equation}\label{App:SolveX} \begin{split} &X_1=\frac{\boldsigma\cdot\boldpi}{2m},~X_3=-\frac{1}{4}\frac{T}{m^2}\boldsigma\cdot\boldpi-\frac{1}{4}\frac{iq\hbar}{m^2}\boldsymbol{\sigma}\cdot\mathbf{E},~X_5=\frac{1}{4}\frac{T^2}{m^3}\boldsigma\cdot\boldpi+\frac{3}{16}\frac{iq\hbar}{m^4}\boldsymbol{\pi}^2(\boldsigma\cdot\mathbf{E})+\frac{1}{8}\frac{iq\hbar}{m^4}(\mathbf{E}\cdot\boldsymbol{\pi})(\boldsigma\cdot\boldpi),\\ &X_7=-\frac{5}{16}\frac{T^3}{m^4}\boldsigma\cdot\boldpi-\frac{5}{32}\frac{iq\hbar}{m^6}\boldsymbol{\pi}^4(\boldsigma\cdot\mathbf{E})-\frac{3}{16}\frac{iq\hbar}{m^6}\boldsymbol{\pi}^2(\mathbf{E}\cdot\boldsymbol{\pi})(\boldsigma\cdot\boldpi),\\ &X_9=\frac{7}{16}\frac{T^4}{m^5}\boldsigma\cdot\boldpi+\frac{35}{256}\frac{iq\hbar}{m^8}\boldsymbol{\pi}^6(\boldsigma\cdot\mathbf{E})+\frac{29}{128}\frac{iq\hbar}{m^8}\boldsymbol{\pi}^4(\mathbf{E}\cdot\boldsymbol{\pi})(\boldsigma\cdot\boldpi),\\ &X_{11}=-\frac{21}{32}\frac{T^5}{m^6}\boldsigma\cdot\boldpi-\frac{63}{1024}\frac{iq\hbar}{m^{10}}\boldsymbol{\pi}^8(\boldsigma\cdot\mathbf{E})-\frac{65}{256}\frac{iq\hbar}{m^{10}}\boldsymbol{\pi}^6(\mathbf{E}\cdot\boldsymbol{\pi})(\boldsigma\cdot\boldpi),\\ &X_{13}=\frac{33}{32}\frac{T^6}{m^7}\boldsigma\cdot\boldpi+\frac{231}{2048}\frac{iq\hbar}{m^{12}}\boldsymbol{\pi}^{10}(\boldsigma\cdot\mathbf{E})+\frac{281}{1024}\frac{iq\hbar}{m^{12}}\boldsymbol{\pi}^{8}(\mathbf{E}\cdot\boldsymbol{\pi})(\boldsigma\cdot\boldpi), \end{split} \end{equation} \end{widetext} where $T=(\boldsigma\cdot\boldpi)^2/2m$. The forms of $X_1$ and $X_3$ in Eq.~(\ref{App:SolveX}) are also valid for inhomogeneous fields. The expanding terms of the Dirac generating operator from $X_5$ to $X_{13}$ in Eq.~(\ref{App:SolveX}) are valid only for homogeneous fields. Inserting Eq.~(\ref{App:SolveX}) into Eq.~(\ref{EqA}), we can obtain each order of the Dirac energy operator. Furthermore, we rewrite each order of the Dirac energy operator $A_{\ell}$ as the sum of its hermitian part ($A^H_{\ell}$) and anti-hermitian part ($A^{N}_{\ell}$), \begin{equation}\label{EqAHN} A_{\ell}=A^{H}_{\ell}+A^{N}_{\ell}, \end{equation} where $A^H_{\ell}$ and $A^N_{\ell}$ satisfy $A^{H\dag}_{\ell}=A^H_{\ell}$ and $A^{N\dag}_{\ell}=-A^N_{\ell}$.
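For illustration, the lowest nontrivial case is $\ell=2$: using $A_2=(h_0/c)X_3$, the identity $(\boldsigma\cdot\mathbf{a})(\boldsigma\cdot\mathbf{b})=\mathbf{a}\cdot\mathbf{b}+i\boldsigma\cdot(\mathbf{a}\times\mathbf{b})$, and $\boldsymbol{\pi}\times\mathbf{E}=-\mathbf{E}\times\boldsymbol{\pi}$ for static homogeneous fields, one finds
\begin{equation*}
\begin{split}
A_2&=-\frac{T^2}{2m}-\frac{iq\hbar}{4m^2}(\boldsigma\cdot\boldpi)(\boldsigma\cdot\mathbf{E})\\
&=\underbrace{-\frac{T^2}{2m}-\frac{q\hbar}{4m^2}\,\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})}_{A^H_2}
\underbrace{\,-\frac{iq\hbar}{4m^2}\,\mathbf{E}\cdot\boldsymbol{\pi}}_{A^N_2},
\end{split}
\end{equation*}
which reproduces the $\ell=2$ entries of Eqs.~(\ref{EqAsH}) and (\ref{EqAsN}) below.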
The hermitian parts of the Dirac energy operator from $A^H_0$ to $A^H_{12}$ are given by \begin{equation}\label{EqAsH} \begin{split} &A^H_0=T+V,~A^H_2=-\frac{T^2}{2m}-\frac{q\hbar}{4m^2}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A^H_4=\frac{T^3}{2m^2}+\frac{3}{16}\frac{q\hbar}{m^4}\boldsymbol{\pi}^2\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A^H_6=-\frac{5}{8}\frac{T^4}{m^3}-\frac{5}{32}\frac{q\hbar}{m^6}\boldsymbol{\pi}^4\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A^H_8=\frac{7}{8}\frac{T^5}{m^4}+\frac{35}{256}\frac{q\hbar}{m^8}\boldsymbol{\pi}^6\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A^H_{10}=-\frac{21}{16}\frac{T^6}{m^5}-\frac{63}{512}\frac{q\hbar}{m^{10}}\boldsymbol{\pi}^8\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A^H_{12}=\frac{33}{16}\frac{T^7}{m^6}+\frac{231}{2048}\frac{q\hbar}{m^{12}}\boldsymbol{\pi}^{10}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}). \end{split} \end{equation} The anti-hermitian parts of the Dirac energy operator from $A^N_0$ to $A^N_{12}$ are given by \begin{equation}\label{EqAsN} \begin{split} &A^N_0=0,~A^N_2=-\frac{iq\hbar}{4m^2}\mathbf{E}\cdot\boldsymbol{\pi},~A^N_4=+\frac{5}{16}\frac{iq\hbar}{m^4}\boldsymbol{\pi}^2\mathbf{E}\cdot\boldsymbol{\pi},\\ &A^N_6=-\frac{11}{32}\frac{iq\hbar}{m^6}\boldsymbol{\pi}^4\mathbf{E}\cdot\boldsymbol{\pi},~A^N_8=+\frac{93}{256}\frac{iq\hbar}{m^8}\boldsymbol{\pi}^6\mathbf{E}\cdot\boldsymbol{\pi},\\ &A^N_{10}=-\frac{193}{512}\frac{iq\hbar}{m^{10}}\boldsymbol{\pi}^8\mathbf{E}\cdot\boldsymbol{\pi},~A^N_{12}=+\frac{793}{2048}\frac{iq\hbar}{m^{12}}\boldsymbol{\pi}^{10}\mathbf{E}\cdot\boldsymbol{\pi}. \end{split} \end{equation} We emphasize that terms of second and higher order in the electromagnetic fields will be neglected in Eqs.\ (\ref{EqAsH}) and (\ref{EqAsN}). To keep the expressions compact, the form of $A_{\ell}$ written above still contains terms that are nonlinear in the electromagnetic fields, because the operator $T$ can be written as $T=(\boldsigma\cdot\boldpi)^2/2m=\frac{1}{2m}(\boldsymbol{\pi}^2-\frac{q\hbar}{c}\boldsigma\cdot\mathbf{B})$. We will neglect these higher-order terms when constructing the Hamiltonian. On the other hand, to evaluate the Dirac string operator, we have to obtain the Dirac exponent operator by expanding $\ln(1+X^{\dag}X)$. After straightforward calculations, the expanding terms of the Dirac exponent operator $G_{\ell}$ (up to $1/c^{12}$) are given by \begin{equation}\label{EqGs} \begin{split} &G_2=\frac{T}{2m},~G_4=-\frac{5}{8}\frac{T^2}{m^2}-\frac{1}{4}\frac{q\hbar}{m^3}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &G_6=\frac{11}{12}\frac{T^3}{m^3}+\frac{5}{16}\frac{q\hbar}{m^5}\boldsymbol{\pi}^2\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &G_8=-\frac{93}{64}\frac{T^4}{m^4}-\frac{11}{32}\frac{q\hbar}{m^7}\boldsymbol{\pi}^4\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &G_{10}=\frac{193}{80}\frac{T^5}{m^5}+\frac{93}{256}\frac{q\hbar}{m^9}\boldsymbol{\pi}^6\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &G_{12}=-\frac{793}{192}\frac{T^6}{m^6}-\frac{193}{512}\frac{q\hbar}{m^{11}}\boldsymbol{\pi}^8\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}).
\end{split} \end{equation} Since we always neglect terms with $E^2$, $B^2$, $EB$ and multiple products of them, the kinetic energy operator $T$ commutes with $\boldsymbol{\pi}^{2k}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})$ and $\boldsymbol{\pi}^{2k}\mathbf{E}\cdot\boldsymbol{\pi}$, and we have \begin{equation}\label{T comm} \begin{split} &[T,\boldsymbol{\pi}^{2k}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})]=0+o(f^2),\\ &[T,\boldsymbol{\pi}^{2k}\mathbf{E}\cdot\boldsymbol{\pi}]=0+o(f^2),\\ &[\boldsymbol{\pi}^{2k}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\boldsymbol{\pi}^{2n}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})]=0+o(f^2), \end{split} \end{equation} where $o(f^2)$ represents the second and higher orders of homogeneous electromagnetic fields. Applying Eqs.\ (\ref{EqAsH}), (\ref{EqAsN}), (\ref{EqGs}) and (\ref{T comm}) to the Dirac string operators [Eq.~(\ref{EqS})], we find that all contributions to the Dirac string operators $S_{\ell}$ (from $\ell=2$ to $\ell=12$) are proportional to second and higher orders of the electric and magnetic fields, which are neglected here. This can also be proved as follows. Firstly, consider the Dirac string operator with only one Dirac exponent operator, $S=[G,A^N]/2+o(G^2)$. The Dirac exponent operator is $G=\sum_{\ell}G_{\ell}/c^{\ell}=G_T+G_{\mathrm{so}}$, where $G_T$ collects the terms containing only the kinetic energy operator $T$, i.e., $G_T=T/2mc^2-(5/8)T^2/m^2c^4+(11/12)T^3/m^3c^6+\cdots$, and $G_{\mathrm{so}}=(-1/4m^3c^4+5\boldsymbol{\pi}^2/16m^5c^6-11\boldsymbol{\pi}^4/32m^7c^8+\cdots)q\hbar\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})=F(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})$, where $F(\boldsymbol{\pi}^2)$ represents a power series in $\boldsymbol{\pi}^2$. The anti-hermitian part of the Dirac energy operator is $A^N=\sum_{\ell}A^{N}_{\ell}/c^{\ell}=(-1/4m^2c^2+5\boldsymbol{\pi}^2/16m^4c^4-11\boldsymbol{\pi}^4/32m^6c^6+\cdots)iq\hbar\mathbf{E}\cdot\boldsymbol{\pi}=g(\boldsymbol{\pi}^2)\mathbf{E}\cdot\boldsymbol{\pi}$, where $g(\boldsymbol{\pi}^2)$ represents a power series in $\boldsymbol{\pi}^2$. Therefore, $[G,A^N]$ can be written as $[G,A^N]=[G_T,A^N]+[G_{\mathrm{so}},A^N]$. Since $[T,\boldsymbol{\pi}^{2k}]=[T,\mathbf{E}\cdot\boldsymbol{\pi}]=0+o(f^2)$, we have $[G_T,A^N]=0+o(f^2)$. The commutator $[G_{\mathrm{so}},A^N]=[F(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),g(\boldsymbol{\pi}^2)\mathbf{E}\cdot\boldsymbol{\pi}]$ also vanishes up to second-order terms of homogeneous electromagnetic fields, because we have $[\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),g(\boldsymbol{\pi}^2)]=0+o(f^2)$, $[\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\mathbf{E}\cdot\boldsymbol{\pi}]=0+o(f^2)$, $[F(\boldsymbol{\pi}^2),\mathbf{E}\cdot\boldsymbol{\pi}]=0+o(f^2)$ and $[F(\boldsymbol{\pi}^2),g(\boldsymbol{\pi}^2)]=0$. We obtain $[G,A^N]=0+o(f^2)$, and thus the terms containing odd numbers of $G$ in the Dirac string operator [see Eq.~(\ref{EqS NE})] always vanish up to second-order terms of homogeneous electromagnetic fields. Secondly, consider the term containing two Dirac exponent operators in the Dirac string operator, $[G,[G,A^H]]$.
The hermitian part of the Dirac energy operator can be written as $A^H=\sum_{\ell}A^{H}_{\ell}/c^{\ell}=V+A^H_T+A^H_{\mathrm{so}}$, where $A^H_{T}=T-T^2/2mc^2+T^3/2m^2c^4-5T^4/8m^3c^6+\cdots$ and $A^H_{\mathrm{so}}=K(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})$, where $K(\boldsymbol{\pi}^2)$ is a power series in $\boldsymbol{\pi}^2$. The commutator $[G,A^H]$ becomes $[G,A^H]=[G_T,V]+[G_T,A^H_T]+[G_T,A^{H}_{\mathrm{so}}]+[G_{\mathrm{so}},V]+[G_{\mathrm{so}},A^H_T]+[G_{\mathrm{so}},A^H_{\mathrm{so}}]$. Since we have $[T,\boldsymbol{\pi}^{2k}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})]=0+o(f^2)$ and $[\boldsymbol{\pi}^{2k}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\boldsymbol{\pi}^{2n}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})]=0+o(f^2)$, the commutators $[G_T,A^{H}_{\mathrm{so}}]$, $[G_{\mathrm{so}},A^H_T]$ and $[G_{\mathrm{so}},A^H_{\mathrm{so}}]$ vanish up to second-order terms of homogeneous electromagnetic fields, as does $[G_{\mathrm{so}},V]$. For the commutator $[G_T,V]$, using $[T,V]=iq\hbar\mathbf{E}\cdot\boldsymbol{\pi}/m$, we find that $[G_T,V]=R(\boldsymbol{\pi}^2)\mathbf{E}\cdot\boldsymbol{\pi}+o(f^2)$, where $R(\boldsymbol{\pi}^2)$ is a power series in $\boldsymbol{\pi}^2$. That is, $[G,A^H]=R(\boldsymbol{\pi}^2)\mathbf{E}\cdot\boldsymbol{\pi}+o(f^2)$, which has the same structure as $A^N=g(\boldsymbol{\pi}^2)\mathbf{E}\cdot\boldsymbol{\pi}$; hence, by the same argument as for $[G,A^N]$, we find $[G,[G,A^H]]=0+o(f^2)$. Therefore, the terms containing even numbers of the Dirac exponent operators in the Dirac string operator [see Eq.~(\ref{EqS NE})] always vanish up to second-order terms of homogeneous electromagnetic fields. In short, it can be shown that from $\ell=2$ to $\ell=12$, the expanding terms of the Dirac string operator satisfy \begin{equation}\label{EqVanS} S_{\ell}=0+o(f^2), \end{equation} where $o(f^2)$ represents the second and higher orders of electromagnetic fields. Considering Eq.~(\ref{EqExpandH}) together with Eq.~(\ref{EqVanS}), we find that $c^{\ell}H^{(\ell)}_{\mathrm{FW}}$ is exactly equal to $A^H_{\ell}$, i.e., \begin{equation}\label{EqHFW} c^{\ell}H_{\mathrm{FW}}^{(\ell)}=A^H_{\ell}+o(f^2), \end{equation} where $\ell=0,2,4,\cdots,12$. Equation (\ref{EqHFW}) is the main result of this paper. It implies that the FW transformed Dirac Hamiltonian is determined solely by the hermitian part of the Dirac energy operator, regardless of the Dirac exponent operator $G$. We have shown that Eq.~(\ref{EqHFW}) is valid at least up to $1/c^{12}$. We believe that this result is valid to all higher orders of $1/c$. Equation (\ref{EqHFW}) enables us to focus solely on the hermitian part of the Dirac energy operator, since the anti-hermitian part is exactly cancelled by the string operators. As a consequence, this result provides a method to obtain higher-order terms faster than the traditional Foldy-Wouthuysen transformation. Comparing the form of the resulting FW transformed Dirac Hamiltonian with the classical Hamiltonian, we define the magnetic moment $\boldsymbol{\mu}$ and the scaled kinetic momentum $\boldsymbol{\xi}$ as \begin{equation}\label{def} \boldsymbol{\mu}=\frac{q\hbar}{2mc}\boldsymbol{\sigma}, \qquad \boldsymbol{\xi}=\frac{\boldsymbol{\pi}}{mc}.
\end{equation} On the other hand, the kinetic energy operator $T$ in Eq.~(\ref{EqAsH}) can be replaced by $T=\frac{\boldsymbol{\pi}^2}{2m}-\frac{q\hbar}{2mc}\boldsigma\cdot\mathbf{B}$, and after neglecting second and higher orders of the electromagnetic fields, the FW transformed Hamiltonian [Eq.~(\ref{EqHFW})] becomes \begin{widetext} \begin{equation}\label{HFWs} \begin{split} H^{(0)}_{\mathrm{FW}}&=V+\frac{1}{2}mc^2\boldsymbol{\xi}^2-\boldmu\cdot\mathbf{B},\\ H^{(2)}_{\mathrm{FW}}&=-\frac{1}{8}mc^2\boldsymbol{\xi}^4+\frac{1}{2}\boldsymbol{\xi}^2\boldmu\cdot\mathbf{B}-\frac{1}{2}\boldmu\cdot(\mathbf{E}\times\boldxi),\\ H^{(4)}_{\mathrm{FW}}&=+\frac{1}{16}mc^2\boldsymbol{\xi}^6-\frac{3}{8}\boldsymbol{\xi}^4\boldmu\cdot\mathbf{B}+\frac{3}{8}\boldsymbol{\xi}^2\boldmu\cdot(\mathbf{E}\times\boldxi),\\ H^{(6)}_{\mathrm{FW}}&=-\frac{5}{128}mc^2\boldsymbol{\xi}^8+\frac{5}{16}\boldsymbol{\xi}^6\boldmu\cdot\mathbf{B}-\frac{5}{16}\boldsymbol{\xi}^4\boldmu\cdot(\mathbf{E}\times\boldxi),\\ H^{(8)}_{\mathrm{FW}}&=+\frac{7}{256}mc^2\boldsymbol{\xi}^{10}-\frac{35}{128}\boldsymbol{\xi}^8\boldmu\cdot\mathbf{B}+\frac{35}{128}\boldsymbol{\xi}^6\boldmu\cdot(\mathbf{E}\times\boldxi),\\ H^{(10)}_{\mathrm{FW}}&=-\frac{21}{1024}mc^2\boldsymbol{\xi}^{12}+\frac{63}{256}\boldsymbol{\xi}^{10}\boldmu\cdot\mathbf{B}-\frac{63}{256}\boldsymbol{\xi}^8\boldmu\cdot(\mathbf{E}\times\boldxi),\\ H^{(12)}_{\mathrm{FW}}&=+\frac{33}{2048}mc^2\boldsymbol{\xi}^{14}-\frac{231}{1024}\boldsymbol{\xi}^{12}\boldmu\cdot\mathbf{B}+\frac{231}{1024}\boldsymbol{\xi}^{10}\boldmu\cdot(\mathbf{E}\times\boldxi). \end{split} \end{equation} \end{widetext} After substituting Eq.~(\ref{HFWs}) into Eq.~(\ref{HFW sum}), the FW transformed Dirac Hamiltonian becomes a sum of two terms: \begin{equation}\label{HFW OandS} \begin{split} H_{\mathrm{FW}}&=mc^2+\sum_{\ell}H^{(\ell)}_{\mathrm{FW}}\\ &=H_{\mathrm{orbit}}+H_{\mathrm{spin}}, \end{split} \end{equation} where the orbital Hamiltonian $H_{\mathrm{orbit}}$ is the kinetic energy (including the rest mass energy) plus the potential energy, \begin{equation}\label{EqHo} \begin{split} H_{\mathrm{orbit}}&=mc^2(1+\frac{1}{2}\boldsymbol{\xi}^2-\frac{1}{8}\boldsymbol{\xi}^4+\frac{1}{16}\boldsymbol{\xi}^6-\frac{5}{128}\boldsymbol{\xi}^8\\ &~~+\frac{7}{256}\boldsymbol{\xi}^{10}-\frac{21}{1024}\boldsymbol{\xi}^{12}+\frac{33}{2048}\boldsymbol{\xi}^{14})+V, \end{split} \end{equation} and the spin Hamiltonian $H_{\mathrm{spin}}$ is the Hamiltonian of the intrinsic magnetic moment in the electromagnetic fields, \begin{equation}\label{EqHs} \begin{split} H_{\mathrm{spin}}&=-(1-\frac{1}{2}\boldsymbol{\xi}^2+\frac{3}{8}\boldsymbol{\xi}^4-\frac{5}{16}\boldsymbol{\xi}^6+\frac{35}{128}\boldsymbol{\xi}^8-\frac{63}{256}\boldsymbol{\xi}^{10}\\ &~~+\frac{231}{1024}\boldsymbol{\xi}^{12})\boldmu\cdot\mathbf{B}+(-\frac{1}{2}+\frac{3}{8}\boldsymbol{\xi}^2-\frac{5}{16}\boldsymbol{\xi}^4+\frac{35}{128}\boldsymbol{\xi}^6\\ &~~-\frac{63}{256}\boldsymbol{\xi}^8+\frac{231}{1024}\boldsymbol{\xi}^{10})\boldmu\cdot(\mathbf{E}\times\boldxi).\\ \end{split} \end{equation} In the following section, we will show that the FW transformed Dirac Hamiltonian is equivalent to the Hamiltonian obtained from the T-BMT equation with $g=2$. \section{FW transformed Dirac Hamiltonian and classical Hamiltonian}\label{sec:TBMT} The orbital Hamiltonian $H_{\mathrm{orbit}}$ [Eq.~(\ref{EqHo})] is expected to be equivalent to the classical relativistic energy $\gamma mc^2+V$. However, the boost velocity in the T-BMT equation is not $\boldsymbol{\xi}$ \cite{TWChen2010}.
Taking the series expansion of $(1+\boldsymbol{\xi}^2)^{1/2}$ into account, \begin{equation}\label{series_xi} \begin{split} (1+\boldsymbol{\xi}^2)^{1/2}&=1+\frac{1}{2}\boldsymbol{\xi}^2-\frac{1}{8}\boldsymbol{\xi}^4+\frac{1}{16}\boldsymbol{\xi}^6-\frac{5}{128}\boldsymbol{\xi}^8\\ &~~+\frac{7}{256}\boldsymbol{\xi}^{10}-\frac{21}{1024}\boldsymbol{\xi}^{12}+\frac{33}{2048}\boldsymbol{\xi}^{14}\\ &~~-\frac{429}{32768}\boldsymbol{\xi}^{16}+\cdots, \end{split} \end{equation} we find that the series in Eq.~(\ref{EqHo}) coincides with Eq.~(\ref{series_xi}) up to $\boldsymbol{\xi}^{14}$. This enables us to define the boost operator $\widehat{\boldsymbol{\beta}}$ via the Lorentz operator $\widehat{\gamma}$, \begin{equation}\label{xiboost} (1+\boldsymbol{\xi}^2)^{1/2}=\widehat{\gamma}=\frac{1}{\sqrt{1-\widehat{\boldsymbol{\beta}}^2}}. \end{equation} In this sense, the orbital Hamiltonian can now be written as $H_{\mathrm{orbit}}=\widehat{\gamma} mc^2+V$. In classical relativistic theory, the Lorentz factor $\gamma$ is related to the boost velocity by $\gamma=1/\sqrt{1-\beta^2}$. However, in relativistic quantum mechanics, since different components of the kinetic momentum operator $\boldsymbol{\pi}$ do not commute with one another, the boost operator $\widehat{\boldsymbol{\beta}}$ does not automatically satisfy the relation $\widehat{\gamma}=1/\sqrt{1-\widehat{\boldsymbol{\beta}}^2}$. We will come back to this point when discussing the spin Hamiltonian. The boost operator $\widehat{\boldsymbol{\beta}}$ plays an important role in showing the agreement between the spin Hamiltonian $H_{\mathrm{spin}}$ and the T-BMT equation. The spin Hamiltonian can be written as a sum of the Zeeman Hamiltonian $H_{\mathrm{ze}}$ and the spin-orbit interaction $H_{\mathrm{so}}$, \begin{equation} H_{\mathrm{spin}}=H_{\mathrm{ze}}+H_{\mathrm{so}}. \end{equation} The Zeeman Hamiltonian $H_{\mathrm{ze}}$ is the relativistic correction to the Zeeman energy: \begin{equation}\label{Hze} \begin{split} H_{\mathrm{ze}}&=-(1-\frac{1}{2}\boldsymbol{\xi}^2+\frac{3}{8}\boldsymbol{\xi}^4-\frac{5}{16}\boldsymbol{\xi}^6+\frac{35}{128}\boldsymbol{\xi}^8-\frac{63}{256}\boldsymbol{\xi}^{10}\\ &~~+\frac{231}{1024}\boldsymbol{\xi}^{12})\boldmu\cdot\mathbf{B}. \end{split} \end{equation} The spin-orbit interaction $H_{\mathrm{so}}$ is the interaction of the electric field with the electric dipole moment arising from the boost on the intrinsic spin magnetic moment: \begin{equation}\label{Hso} \begin{split} H_{\mathrm{so}}&=-(\frac{1}{2}-\frac{3}{8}\boldsymbol{\xi}^2+\frac{5}{16}\boldsymbol{\xi}^4-\frac{35}{128}\boldsymbol{\xi}^6\\ &~~+\frac{63}{256}\boldsymbol{\xi}^8-\frac{231}{1024}\boldsymbol{\xi}^{10})\boldmu\cdot(\mathbf{E}\times\boldxi). \end{split} \end{equation} We first focus on the series in the Zeeman Hamiltonian. Considering the series expansion of $(1+\boldsymbol{\xi}^2)^{-1/2}$, \begin{equation}\label{Series1} \begin{split} (1+\boldsymbol{\xi}^2)^{-1/2}&=1-\frac{1}{2}\boldsymbol{\xi}^2+\frac{3}{8}\boldsymbol{\xi}^4-\frac{5}{16}\boldsymbol{\xi}^6+\frac{35}{128}\boldsymbol{\xi}^8\\ &~~-\frac{63}{256}\boldsymbol{\xi}^{10}+\frac{231}{1024}\boldsymbol{\xi}^{12}-\frac{429}{2048}\boldsymbol{\xi}^{14}+\cdots, \end{split} \end{equation} we find that the series in $H_{\mathrm{ze}}$ is exactly equal to $(1+\boldsymbol{\xi}^2)^{-1/2}$ up to $\boldsymbol{\xi}^{12}$. Therefore, the Zeeman Hamiltonian Eq.~(\ref{Hze}) can be written as \begin{equation}\label{Hze_boost} H_{\mathrm{ze}}=-\frac{1}{\widehat{\gamma}}\boldmu\cdot\mathbf{B}.
\end{equation} On the other hand, the spin-orbit term in the T-BMT Hamiltonian transforms like $[g/2-\gamma/(1+\gamma)]$, and $g=2$ for the Dirac Hamiltonian. Therefore, considering the series expansion of $(1-\widehat{\gamma}/(1+\widehat{\gamma}))(1/\widehat{\gamma})$, we have \begin{equation}\label{Series2} \begin{split} \left(1-\frac{\widehat{\gamma}}{1+\widehat{\gamma}}\right)\frac{1}{\widehat{\gamma}}&=\frac{1}{\sqrt{1+\boldsymbol{\xi}^2}}-\frac{1}{1+\sqrt{1+\boldsymbol{\xi}^2}}\\ &=\frac{1}{2}-\frac{3}{8}\boldsymbol{\xi}^2+\frac{5}{16}\boldsymbol{\xi}^4-\frac{35}{128}\boldsymbol{\xi}^6\\ &~~+\frac{63}{256}\boldsymbol{\xi}^8-\frac{231}{1024}\boldsymbol{\xi}^{10}+\frac{429}{2048}\boldsymbol{\xi}^{12}+\cdots, \end{split} \end{equation} where $\widehat{\gamma}(1+\widehat{\gamma})^{-1}=(1+\widehat{\gamma})^{-1}\widehat{\gamma}$ was used.\footnote{The identity can be shown as follows. $\widehat{\gamma}(1+\widehat{\gamma})^{-1}=[(1+\widehat{\gamma})\widehat{\gamma}^{-1}]^{-1}=(\widehat{\gamma}^{-1}+1)^{-1}=[\widehat{\gamma}^{-1}(1+\widehat{\gamma})]^{-1}=(1+\widehat{\gamma})^{-1}\widehat{\gamma}$.} The series in Eq.~(\ref{Hso}) is in agreement with Eq.~(\ref{Series2}) up to $\boldsymbol{\xi}^{10}$. Therefore, we have \begin{equation}\label{Hso_boost} H_{\mathrm{so}}=-\left(1-\frac{\widehat{\gamma}}{1+\widehat{\gamma}}\right)\frac{1}{\widehat{\gamma}}\boldmu\cdot(\mathbf{E}\times\boldxi). \end{equation} We note that if Eq.~(\ref{Hso_boost}) is to be in complete agreement with the T-BMT equation, the boost velocity operator $\widehat{\boldsymbol{\beta}}$ must be defined by $\widehat{\boldsymbol{\beta}}=\frac{1}{\widehat{\gamma}}\boldsymbol{\xi}$. In general, the commutator $[\boldsymbol{\xi},1/\widehat{\gamma}]$ does not vanish, and Eq.~(\ref{xiboost}) would then not be satisfied. However, since we keep the FW transformed Dirac Hamiltonian $H_{\mathrm{FW}}$ only to linear order in the electromagnetic fields, the magnetic-field terms arising from the commutator $[\boldsymbol{\xi},1/\widehat{\gamma}]$ should be neglected. In this sense, the commutator $[\boldsymbol{\xi},1/\widehat{\gamma}]$ can be set to zero here, and the boost operator can be written as \begin{equation}\label{xiboost1} \widehat{\boldsymbol{\beta}}=\frac{1}{\widehat{\gamma}}\boldsymbol{\xi}=\boldsymbol{\xi}\frac{1}{\widehat{\gamma}}. \end{equation} It can be shown that Eq.~(\ref{xiboost1}) satisfies Eq.~(\ref{xiboost}). Therefore, with the substitutions of Eqs.~(\ref{Hze_boost}) and (\ref{Hso_boost}), the spin Hamiltonian Eq.~(\ref{EqHs}) becomes \begin{equation}\label{Hspin} \begin{split} H_{\mathrm{spin}}&=H_{\mathrm{ze}}+H_{\mathrm{so}}\\ &=-\boldsymbol{\mu}\cdot\left[\frac{1}{\widehat{\gamma}}\mathbf{B}-\left(1-\frac{\widehat{\gamma}}{1+\widehat{\gamma}}\right)\widehat{\boldsymbol{\beta}}\times\mathbf{E}\right]. \end{split} \end{equation} Up to the order considered here, i.e., $(\boldsymbol{\pi}/mc)^{14}$, there is complete agreement between the spin part of the FW transformed Dirac Hamiltonian and the T-BMT equation with $g=2$. The FW transformed Hamiltonian is given by \begin{equation}\label{HFW total} \begin{split} H_{\mathrm{FW}}&=H_{\mathrm{orbit}}+H_{\mathrm{spin}}\\ &=V+\widehat{\gamma} mc^2-\boldsymbol{\mu}\cdot\left[\frac{1}{\widehat{\gamma}}\mathbf{B}-\left(1-\frac{\widehat{\gamma}}{1+\widehat{\gamma}}\right)\widehat{\boldsymbol{\beta}}\times\mathbf{E}\right], \end{split} \end{equation} which is in agreement with the classical Hamiltonian with $g=2$.
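The three momentum series used above [Eqs.~(\ref{series_xi}), (\ref{Series1}) and (\ref{Series2})] are easily verified with a one-line series expansion, treating $\boldsymbol{\xi}^2$ as a commuting scalar (which suffices for the coefficients); a minimal \texttt{sympy} sketch is:
\begin{verbatim}
from sympy import symbols, sqrt, series

u = symbols('u', positive=True)  # stands for xi^2
g = sqrt(1 + u)                  # gamma-hat

print(series(g, u, 0, 8))        # Eq. (series_xi)
print(series(1/g, u, 0, 7))      # Eq. (Series1)
print(series(1/g - 1/(1 + g),
             u, 0, 7))           # Eq. (Series2)
\end{verbatim}
The printed coefficients coincide with those of Eqs.~(\ref{EqHo}), (\ref{Hze}) and (\ref{Hso}) up to the orders quoted there.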
In the next section, we take into account the Pauli anomalous magnetic moment and show that the classical correspondence of the Dirac-Pauli Hamiltonian is the classical Hamiltonian with $g\neq2$. \section{FW transformation for Dirac-Pauli Hamiltonian}\label{sec:TBMT2} In the previous section, the agreement with the classical Hamiltonian was shown to be complete up to terms of the order $(\boldsymbol{\pi}/mc)^{14}$ in the absence of an anomalous electron magnetic moment, i.e., for $g=2$. A Dirac electron including the Pauli anomalous magnetic moment can be described by the Dirac-Pauli Hamiltonian, denoted by $\mathcal{H}$, which contains the Dirac Hamiltonian as well as the anomalous magnetic interaction $V_B$ and the anomalous electric interaction $V_E$, \begin{equation} \begin{split} \mathcal{H}&=H_D+\left(\begin{array}{cc} V_B&iV_E\\ -iV_E&-V_B \end{array}\right)\\ &=\left(\begin{array}{cc} H_+&H_0\\ H_0^{\dag}&H_- \end{array}\right) \end{split} \end{equation} where $H_+=V+V_B+mc^2$, $H_-=V-V_B-mc^2$ and $H_0=h_0+iV_E$. The Dirac Hamiltonian $H_D$ is given in Eq.~(\ref{Dirac}), and \begin{equation} V_B=-\mu'\boldsigma\cdot\mathbf{B},~V_E=\mu'\boldsigma\cdot\mathbf{E}. \end{equation} The coefficient $\mu'$ is defined as \begin{equation} \mu'=\left(\frac{g}{2}-1\right)\frac{q\hbar}{2mc}. \end{equation} For an electron with $g=2$, we have $V_B=0$ and $V_E=0$. Applying the unitary transformation Eq.~(\ref{U}), \begin{equation}\label{UHDPU} U=\left(\begin{array}{cc} \mathcal{Y}&\mathcal{Y}\mathcal{X}^{\dag}\\ -\mathcal{Z}\mathcal{X}&\mathcal{Z} \end{array}\right), \end{equation} to the Dirac-Pauli Hamiltonian, the self-consistent equation for the Dirac-Pauli generating operator $\mathcal{X}$ is given by the requirement that the off-diagonal block of $U\mathcal{H}U^{\dag}$ vanish, i.e., \begin{equation}\label{DP X} \begin{split} 2mc^2\mathcal{X}&=[V,\mathcal{X}]+h_0-\mathcal{X}h_0\mathcal{X}\\ &~~-iV_E-i\mathcal{X}V_E\mathcal{X}-\{\mathcal{X},V_B\}, \end{split} \end{equation} where $h_0=c\boldsigma\cdot\boldpi$. The FW transformed Dirac-Pauli Hamiltonian can be obtained from the upper-left block-diagonal term of $U\mathcal{H}U^{\dag}$, and it is given by \begin{equation}\label{HFWDP0} \mathcal{H}_{\mathrm{FW}}= \mathcal{Y}\left(H_++\mathcal{X}^{\dag}H_0^{\dag} +H_0\mathcal{X}+\mathcal{X}^{\dag}H_-\mathcal{X}\right)\mathcal{Y}. \end{equation} Similar to the derivation of Eq.~(\ref{HFW}), we find that the FW transformed Dirac-Pauli Hamiltonian [Eq.~(\ref{HFWDP0})] can also be simplified as (see Appendix~\ref{App:Ham2}) \begin{equation}\label{H FWDP} \mathcal{H}_{\mathrm{FW}}=mc^2+e^{\mathcal{G}/2}\mathcal{A}e^{-\mathcal{G}/2}, \end{equation} where the Dirac-Pauli energy operator $\mathcal{A}$ and the Dirac-Pauli exponent operator $\mathcal{G}$ are given by \begin{equation}\label{DP AG} \begin{split} &\mathcal{A}=V+h_0\mathcal{X}+V_B+iV_E\mathcal{X},\\ &\mathcal{G}=\ln\left(1+\mathcal{X}^{\dag}\mathcal{X}\right).
\end{split} \end{equation} Similar to Eq.~(\ref{HFW NE}), obtained from the requirement of hermiticity of $H_{\mathrm{FW}}$, we find that the FW transformed Dirac-Pauli Hamiltonian also satisfies $\mathcal{H}_{\mathrm{FW}}=\mathcal{H}_{\mathrm{FW}}^{\dag}=mc^2+e^{-\mathcal{G}/2}\mathcal{A}e^{\mathcal{G}/2}$, and the FW transformed Dirac-Pauli Hamiltonian can be rewritten as \begin{equation} \mathcal{H}_{\mathrm{FW}}=mc^2+\mathcal{A}^H+\mathcal{S}, \end{equation} where $\mathcal{A}^H$ is the hermitian part of the Dirac-Pauli energy operator and the Dirac-Pauli string operator $\mathcal{S}$ is the same as Eq.~(\ref{EqS NE}) with the replacements $A\rightarrow\mathcal{A}$ and $G\rightarrow\mathcal{G}$, i.e., \begin{equation}\label{EqS DP NE} \begin{split} \mathcal{S}&=\frac{1}{2}[\mathcal{G},\mathcal{A}^N]+\frac{1}{2!2^2}[\mathcal{G},[\mathcal{G},\mathcal{A}^H]]+\frac{1}{3!2^3}[\mathcal{G},[\mathcal{G},[\mathcal{G},\mathcal{A}^N]]]\\ &~~+\frac{1}{4!2^4}[\mathcal{G},[\mathcal{G},[\mathcal{G},[\mathcal{G},\mathcal{A}^H]]]]+\cdots. \end{split} \end{equation} Similar to the Dirac string operator, the anti-hermitian part of the Dirac-Pauli energy operator always appears in those terms with odd numbers of Dirac-Pauli exponent operators, and the hermitian part of the Dirac-Pauli energy operator always appears in those terms with even numbers of Dirac-Pauli exponent operators. The power series solution to the Dirac-Pauli generating operator can be obtained from Eq.~(\ref{DP X}) via substitution of the series expansion $\mathcal{X}=\sum_k\mathcal{X}_k/c^k$, $k=1,2,3,\cdots$, and each order of the Dirac-Pauli energy operator $\mathcal{A}_k$ can be obtained from $\mathcal{A}=\sum_k\mathcal{A}_k/c^k$ by using Eq.~(\ref{DP AG}). Each order of the Dirac-Pauli energy operator can be decomposed into hermitian ($\mathcal{A}^H_k$) and anti-hermitian ($\mathcal{A}^N_k$) parts, $\mathcal{A}_k=\mathcal{A}^H_k+\mathcal{A}^N_k$. As a result, the FW transformed Dirac-Pauli Hamiltonian can be written as \begin{equation} \mathcal{H}_{\mathrm{FW}}=mc^2+\sum_{k=0,1,2,\cdots}\mathcal{H}^{(k)}_{\mathrm{FW}} \end{equation} with \begin{equation} c^k\mathcal{H}^{(k)}_{\mathrm{FW}}=\mathcal{A}^H_{k}+\mathcal{S}_k. \end{equation} To obtain the FW transformed Dirac-Pauli Hamiltonian up to $k=12$, the largest order of the Dirac-Pauli generating operator must be of order $k=13$, i.e., $\mathcal{X}_{13}$. This is because the operator $h_0=c\boldsymbol{\sigma}\cdot\boldsymbol{\pi}$ is of order $c$, so that the order of the Dirac-Pauli energy operator is one lower than that of the Dirac-Pauli generating operator. Furthermore, since each order of the Dirac-Pauli generating operator must reduce to that of the Dirac generating operator when $g=2$, we can rewrite the Dirac-Pauli generating operator ($\mathcal{X}_k$) as the sum of the Dirac generating operator ($X_k$) and the anomalous generating operator ($X_k'$), namely, \begin{equation}\label{XX'} \mathcal{X}_k=X_k+X_k'+o(f^2), \end{equation} where the anomalous generating operator $X_k'$ vanishes when $g=2$.
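For illustration, the lowest anomalous orders follow directly from Eq.~(\ref{DP X}). Writing $\mu''\equiv c\mu'=(g/2-1)q\hbar/2m$, we have $V_E=\mu''\boldsigma\cdot\mathbf{E}/c$ and $V_B=-\mu''\boldsigma\cdot\mathbf{B}/c$, so the term $-iV_E$ first contributes at order $1/c$ and $-\{\mathcal{X},V_B\}$ at order $1/c^2$, giving (for homogeneous fields)
\begin{equation*}
2mX_3'=-i\mu''\,\boldsigma\cdot\mathbf{E},\qquad
2mX_4'=\frac{\mu''}{2m}\{\boldsigma\cdot\boldpi,\boldsigma\cdot\mathbf{B}\}=\frac{\mu''}{m}\,\mathbf{B}\cdot\boldsymbol{\pi},
\end{equation*}
in agreement with the first nonvanishing entries of Eq.~(\ref{X'k}) below.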
Similar to the derivation of the power series solution to the Dirac generating operator shown in the previous section, the explicit forms of the different orders of the anomalous generating operator $X_k'$ are given by ($k=1,2,\cdots,13$) \begin{widetext} \begin{equation}\label{X'k} \begin{split} &X_1'=0,~X_2'=0,~X_3'=-\frac{i\mu''}{2m}\boldsigma\cdot\mathbf{E},~X_4'=\frac{\mu''}{2m^2}\mathbf{B}\cdot\boldsymbol{\pi},~X_5'=\frac{3}{8}\frac{i\mu''}{m^3}\boldsymbol{\pi}^2(\boldsigma\cdot\mathbf{E})-\frac{i\mu''}{4m^3}(\boldsigma\cdot\boldpi)(\mathbf{E}\cdot\boldsymbol{\pi}),\\ &X_6'=-\frac{3}{8}\frac{\mu''}{m^4}\boldsymbol{\pi}^2(\mathbf{B}\cdot\boldsymbol{\pi}),~X_7'=-\frac{5}{16}\frac{i\mu''}{m^5}\boldsymbol{\pi}^4(\boldsigma\cdot\mathbf{E})+\frac{1}{4}\frac{i\mu''}{m^5}\boldsymbol{\pi}^2(\boldsigma\cdot\boldpi)(\mathbf{E}\cdot\boldsymbol{\pi}),\\ &X_8'=\frac{5}{16}\frac{\mu''}{m^6}\boldsymbol{\pi}^4(\mathbf{B}\cdot\boldsymbol{\pi}),~X_9'=\frac{35}{128}\frac{i\mu''}{m^7}\boldsymbol{\pi}^6(\boldsigma\cdot\mathbf{E})-\frac{15}{64}\frac{i\mu''}{m^7}\boldsymbol{\pi}^4(\boldsigma\cdot\boldpi)(\mathbf{E}\cdot\boldsymbol{\pi}),\\ &X_{10}'=-\frac{35}{128}\frac{\mu''}{m^8}\boldsymbol{\pi}^6(\mathbf{B}\cdot\boldsymbol{\pi}),~X_{11}'=-\frac{63}{256}\frac{i\mu''}{m^9}\boldsymbol{\pi}^8(\boldsigma\cdot\mathbf{E})+\frac{7}{32}\frac{i\mu''}{m^9}\boldsymbol{\pi}^6(\boldsigma\cdot\boldpi)(\mathbf{E}\cdot\boldsymbol{\pi}),\\ &X_{12}'=\frac{63}{256}\frac{\mu''}{m^{10}}\boldsymbol{\pi}^8(\mathbf{B}\cdot\boldsymbol{\pi}),~X_{13}'=\frac{231}{1024}\frac{i\mu''}{m^{11}}\boldsymbol{\pi}^{10}(\boldsigma\cdot\mathbf{E})-\frac{105}{512}\frac{i\mu''}{m^{11}}\boldsymbol{\pi}^8(\boldsigma\cdot\boldpi)(\mathbf{E}\cdot\boldsymbol{\pi}). \end{split} \end{equation} \end{widetext} where $\mu''=(g/2-1)q\hbar/2m=c\mu'$. We note that, since the anomalous coupling $\mu''$ always accompanies terms linear in the electric or magnetic fields, the operator $X_k'$ is proportional to the electromagnetic fields and contains the kinetic momentum operator. Substituting Eq.~(\ref{XX'}) into Eq.~(\ref{DP AG}), the Dirac-Pauli energy operator ($\mathcal{A}_k$) can be written as the sum of the energy operator for the Dirac Hamiltonian ($A_k$) and the anomalous energy operator ($A'_k$), \begin{equation} \mathcal{A}_k=A_k+A'_k+o(f^2), \end{equation} where the expanding terms of the Dirac energy operator $A_k$ from $k=0$ to $k=12$ are given in Eqs.~(\ref{EqAsH}) and (\ref{EqAsN}). The $k$th order of the anomalous energy operator is related to the Dirac and anomalous generating operators by \begin{equation} \begin{split} &A'_0=0,~A'_1=cV_B,\\ &A'_{k}=(h_0/c)X'_{k+1}+icV_EX_{k-1}, \quad k=2,4,\cdots,12,\\ &A'_{k}=(h_0/c)X'_{k+1}, \quad k=3,5,\cdots,11.
\end{split} \end{equation} Using Eqs.~(\ref{App:SolveX}) and (\ref{X'k}), the hermitian parts of the expanding terms of the anomalous energy operator from the zeroth to the twelfth order are given by \begin{equation}\label{A'k H} \begin{split} &A'^H_0=0,~A'^H_1=-\mu''\boldsigma\cdot\mathbf{B},\\ &A'^H_2=-\frac{\mu''}{m}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A'^H_3=\frac{1}{2}\frac{\mu''}{m^2}(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),\\ &A'^H_4=\frac{\mu''}{m^3}\boldsymbol{\pi}^2\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A'^H_5=-\frac{3}{8}\frac{\mu''}{m^4}\boldsymbol{\pi}^2(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),\\ &A'^H_6=-\frac{3}{8}\frac{\mu''}{m^5}\boldsymbol{\pi}^4\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A'^H_7=\frac{5}{16}\frac{\mu''}{m^6}\boldsymbol{\pi}^4(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),\\ &A'^H_8=\frac{5}{16}\frac{\mu''}{m^7}\boldsymbol{\pi}^6\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A'^H_9=-\frac{35}{128}\frac{\mu''}{m^8}\boldsymbol{\pi}^6(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),\\ &A'^H_{10}=-\frac{35}{128}\frac{\mu''}{m^9}\boldsymbol{\pi}^8\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &A'^H_{11}=\frac{63}{256}\frac{\mu''}{m^{10}}\boldsymbol{\pi}^8(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),\\ &A'^H_{12}=\frac{63}{256}\frac{\mu''}{m^{11}}\boldsymbol{\pi}^{10}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}).\\ \end{split} \end{equation} For example, consider the twelfth order of the anomalous energy operator, $A'_{12}$, which is given by $A'_{12}=(h_0/c)X'_{13}+icV_EX_{11}$. Substituting $V_E$, $X'_{13}$ and $X_{11}$ into $A'_{12}$ and neglecting second-order terms of homogeneous electromagnetic fields, we find that \begin{equation} \begin{split} A'_{12}&=\frac{231}{1024}\frac{i\mu''}{m^{11}}\boldsymbol{\pi}^{10}(\boldsigma\cdot\boldpi)(\boldsigma\cdot\mathbf{E})-\frac{21}{32}\frac{i\mu''}{m^6}T^5(\boldsigma\cdot\mathbf{E})(\boldsigma\cdot\boldpi)\\ &~~-\frac{105}{512}\frac{i\mu''}{m^{11}}\boldsymbol{\pi}^{10}\mathbf{E}\cdot\boldsymbol{\pi}\\ &=\left(\frac{231}{1024}+\frac{21}{32}\times\frac{1}{32}\right)\frac{\mu''}{m^{11}}\boldsymbol{\pi}^{10}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})\\ &~~+\left(-\frac{105}{512}+\frac{231}{1024}-\frac{21}{32}\times\frac{1}{32}\right)\frac{i\mu''}{m^{11}}\boldsymbol{\pi}^{10}\mathbf{E}\cdot\boldsymbol{\pi}\\ &=\frac{63}{256}\frac{\mu''}{m^{11}}\boldsymbol{\pi}^{10}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}), \end{split} \end{equation} where in the second equality we have used $(\boldsigma\cdot\mathbf{E})(\boldsigma\cdot\boldpi)=\mathbf{E}\cdot\boldsymbol{\pi}+i\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})$ and $\mathbf{E}\times\boldsymbol{\pi}=-\boldsymbol{\pi}\times\mathbf{E}$ for homogeneous fields, and the kinetic energy operator is replaced by $T\rightarrow\boldsymbol{\pi}^2/2m$. The anti-hermitian part of $A'_{12}$ is proportional to $i\mu''\boldsymbol{\pi}^{10}\mathbf{E}\cdot\boldsymbol{\pi}/m^{11}$, and its numerical coefficient is zero. Interestingly, we find that all the anti-hermitian parts of $A'_k$ from $k=0$ to $k=12$ vanish up to second-order terms of homogeneous electromagnetic fields, i.e., \begin{equation}\label{A'k N} A'^N_k=0+o(f^2).
\end{equation} On the other hand, the series expansion of the Dirac-Pauli exponent operator $\mathcal{G}=\ln(1+\mathcal{X}^{\dag}\mathcal{X})$ can also be written as $\mathcal{G}=\sum_{k}\mathcal{G}_k/c^k$ and $\mathcal{G}_{k}=G_k+G'_k$, where $G_k$ (the $k$th order of the Dirac exponent operator) is given in Eq.~(\ref{EqGs}) and $G'_k$ is the $k$th order of the anomalous exponent operator. The expanding terms of the anomalous exponent operator $G'_k$ from $k=1$ to $k=12$ are as follows: \begin{equation}\label{G'k} \begin{split} &G'_1=0,~G'_2=0,~G'_3=0,~G'_4=-\frac{\mu''}{2m^2}\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &G'_5=\frac{\mu''}{2m^3}(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi), G'_6=\frac{5}{8}\frac{\mu''}{m^4}\boldsymbol{\pi}^2\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &G'_7=-\frac{5}{8}\frac{\mu''}{m^5}\boldsymbol{\pi}^2(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),\\ &G'_8=-\frac{11}{16}\frac{\mu''}{m^6}\boldsymbol{\pi}^4\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &G'_9=\frac{11}{16}\frac{\mu''}{m^7}\boldsymbol{\pi}^4(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),\\ &G'_{10}=\frac{93}{128}\frac{\mu''}{m^8}\boldsymbol{\pi}^6\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ &G'_{11}=-\frac{93}{128}\frac{\mu''}{m^9}\boldsymbol{\pi}^6(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),\\ &G'_{12}=-\frac{193}{256}\frac{\mu''}{m^{10}}\boldsymbol{\pi}^8\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\\ \end{split} \end{equation} which all vanish when $g=2$; moreover, each order of the anomalous exponent operator is proportional to the electromagnetic fields. Substituting Eqs.~(\ref{A'k H}), (\ref{A'k N}) and (\ref{G'k}) into Eq.~(\ref{EqS DP NE}), it can be shown that, similar to the result for the Dirac string operator, the Dirac-Pauli string operator also vanishes up to second-order terms in homogeneous electromagnetic fields; i.e., we have \begin{equation} \mathcal{S}_{k}=0+o(f^2). \end{equation} This can be proved as follows. Firstly, consider the term containing only one Dirac-Pauli exponent operator in the Dirac-Pauli string operator [see Eq.~(\ref{EqS DP NE})]. It is given by $[\mathcal{G},\mathcal{A}^{N}]/2$. Since $\mathcal{G}=G+G'$ and $\mathcal{A}^{N}=A^N+A'^N$, we have $[\mathcal{G},\mathcal{A}^{N}]/2=[G,A^N]/2+[G,A'^N]/2+[G',A^N]/2+[G',A'^N]/2$, where $[G,A^N]/2$ is the term of the Dirac string operator containing only one Dirac exponent operator, and it has been shown that $[G,A^N]/2=0+o(f^2)$. The anomalous exponent operator can be written as $G'=\sum_{k}G'_k/c^k=F_1(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})+F_2(\boldsymbol{\pi}^2)(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi)$, where $F_1(\boldsymbol{\pi}^2)$ and $F_2(\boldsymbol{\pi}^2)$ denote power series in $\boldsymbol{\pi}^2$. We note that $A'^N=\sum_{k}A'^N_k/c^k=0+o(f^2)$ [see Eq.~(\ref{A'k N})]. It is obvious that the second term $[G,A'^N]$ and the fourth term $[G',A'^N]$ vanish up to second-order terms in homogeneous electromagnetic fields.
The third term $[G',A^N]=[F_1(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})+F_2(\boldsymbol{\pi}^2)(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),g(\boldsymbol{\pi}^2)\mathbf{E}\cdot\boldsymbol{\pi}]$ also vanishes because we have $[F_1(\boldsymbol{\pi}^2),g(\boldsymbol{\pi}^2)]=[F_2(\boldsymbol{\pi}^2),g(\boldsymbol{\pi}^2)]=0$ and $[\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),\mathbf{E}\cdot\boldsymbol{\pi}]=[(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),\mathbf{E}\cdot\boldsymbol{\pi}]=[\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),g(\boldsymbol{\pi}^2)]=[(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),g(\boldsymbol{\pi}^2)]=[F_1(\boldsymbol{\pi}^2),\mathbf{E}\cdot\boldsymbol{\pi}]=[F_2(\boldsymbol{\pi}^2),\mathbf{E}\cdot\boldsymbol{\pi}]=0+o(f^2)$. Therefore, we have $[\mathcal{G},\mathcal{A}^N]/2=0+o(f^2)$. Since the commutator $[\mathcal{G},\mathcal{A}^N]/2$ always appears in those terms with odd numbers of Dirac-Pauli exponent operators [see Eq.~(\ref{EqS DP NE})], this implies that the terms with odd numbers of $\mathcal{G}$ always vanish up to second-order terms in homogeneous electromagnetic fields. Secondly, consider the terms with two Dirac-Pauli exponent operators in the Dirac-Pauli string operator. This contribution is given by $[\mathcal{G},[\mathcal{G},\mathcal{A}^{H}]]/(2!\,2^2)=[G,[G,A^H]]/(2!\,2^2)+[G,[G,A'^H]]/(2!\,2^2)+[G,[G',A^H]]/(2!\,2^2)+[G',[G,A^H]]/(2!\,2^2)+o(f^2)$, where we have neglected the second-order terms of electromagnetic fields, such as $[G,[G',A'^H]]$, $[G',[G,A'^H]]$, $[G',[G',A^H]]$ and $[G',[G',A'^H]]$. The first term $[G,[G,A^H]]$ is the term of the Dirac string operator containing two Dirac exponent operators, and it has been shown that $[G,[G,A^H]]=0+o(f^2)$. The anomalous energy operator can be written as $A'^H=\sum_kA'^H_k/c^k=K_1(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})+K_2(\boldsymbol{\pi}^2)(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi)$, where $K_1(\boldsymbol{\pi}^2)$ and $K_2(\boldsymbol{\pi}^2)$ denote power series in $\boldsymbol{\pi}^2$. The commutator $[G,A'^H]$ can be written as $[G,A'^H]=[G_T+G_{\mathrm{so}},A'^H]=[G_T,A'^H]+[G_{\mathrm{so}},A'^H]$, where $G_{\mathrm{so}}=F(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})$ and $G_T=T/2m-(5/8)T^2/m^2+(11/12)T^3/m^3+\cdots$. Using Eq.~(\ref{T comm}), it can be shown that $[G_T,A'^H]=0+o(f^2)$ and $[G_{\mathrm{so}},A'^H]=0+o(f^2)$, and thus the second term $[G,[G,A'^H]]$ vanishes up to second-order terms in homogeneous electromagnetic fields. Consider the third term $[G,[G',A^H]]$, where the commutator $[G',A^H]$ becomes $[G',A^H]=[F_1(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi})+F_2(\boldsymbol{\pi}^2)(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),V+A^H_T+A^H_{\mathrm{so}}]=[F_1(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),V]+[F_2(\boldsymbol{\pi}^2)(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),V]+o(f^2)$. However, the two commutators $[F_1(\boldsymbol{\pi}^2)\boldsigma\cdot(\mathbf{E}\times\boldsymbol{\pi}),V]$ and $[F_2(\boldsymbol{\pi}^2)(\boldsigma\cdot\boldpi)(\mathbf{B}\cdot\boldpi),V]$ are of second order in the homogeneous electromagnetic fields, since $[\boldsymbol{\pi},V]=iq\hbar\mathbf{E}$; thus we have $[G,[G',A^H]]=0+o(f^2)$.
The fourth term $[G',[G,A^H]]$ also vanishes up to $o(f^2)$, since it has been shown that $[G,A^H]=R(\boldsymbol{\pi}^2)\mathbf{E}\cdot\boldsymbol{\pi}+o(f^2)$ and $G'$ is itself of first order in the homogeneous electromagnetic fields. Therefore, we have shown that $[\mathcal{G},[\mathcal{G},\mathcal{A}^{H}]]/(2!\,2^2)=0+o(f^2)$. Since the commutator $[\mathcal{G},[\mathcal{G},\mathcal{A}^H]]$ always appears in those terms with even numbers of Dirac-Pauli exponent operators [see Eq.~(\ref{EqS DP NE})], this implies that the terms with even numbers of $\mathcal{G}$ always vanish up to second order in the homogeneous electromagnetic fields. As a consequence, the FW transformed Dirac-Pauli Hamiltonian is determined only by the hermitian part of the Dirac-Pauli energy operator, i.e., \begin{equation} c^k\mathcal{H}^{(k)}_{\mathrm{FW}}=\mathcal{A}_k^H+o(f^2). \end{equation} Since the Dirac-Pauli energy operator is composed of the Dirac energy operator and the anomalous energy operator, $\mathcal{A}^{H}_k=A^H_k+A'^H_k$, and the Dirac energy operator is related to the FW transformed Dirac Hamiltonian by $c^kH^{(k)}_{\mathrm{FW}}=A^H_k$, the FW transformed Dirac-Pauli Hamiltonian can be written as the sum of the FW transformed Dirac Hamiltonian and the anomalous Hamiltonian: \begin{equation} \mathcal{H}_{\mathrm{FW}}=H_{\mathrm{FW}}+H'_{\mathrm{FW}}. \end{equation} The $k$th order of the FW transformed Dirac-Pauli Hamiltonian can be written as \begin{equation} \mathcal{H}^{(k)}_{\mathrm{FW}}=H^{(k)}_{\mathrm{FW}}+H'^{(k)}_{\mathrm{FW}}, \end{equation} where the $k$th order of the anomalous Hamiltonian $H'_{\mathrm{FW}}$, denoted $H'^{(k)}_{\mathrm{FW}}$, is determined by the $k$th order of the anomalous energy operator: \begin{equation} c^kH'^{(k)}_{\mathrm{FW}}=A'^H_k. \end{equation} Using Eqs.~(\ref{A'k H}), (\ref{A'k N}) and (\ref{def}), the terms $H'^{(k)}_{\mathrm{FW}}$ from $k=0$ to $k=12$ are given by \begin{equation} \begin{split} &H'^{(0)}_{\mathrm{FW}}=0,~H'^{(1)}_{\mathrm{FW}}=-\left(\frac{g}{2}-1\right)\boldsymbol{\mu}\cdot\mathbf{B},\\ &H'^{(2)}_{\mathrm{FW}}=-\left(\frac{g}{2}-1\right)\boldmu\cdot(\mathbf{E}\times\boldxi),\\ &H'^{(3)}_{\mathrm{FW}}=\frac{1}{2}\left(\frac{g}{2}-1\right)(\boldsymbol{\mu}\cdot\boldsymbol{\xi})(\mathbf{B}\cdot\boldsymbol{\xi}),\\ &H'^{(4)}_{\mathrm{FW}}=\left(\frac{g}{2}-1\right)\boldsymbol{\xi}^2\boldmu\cdot(\mathbf{E}\times\boldxi),\\ &H'^{(5)}_{\mathrm{FW}}=-\frac{3}{8}\left(\frac{g}{2}-1\right)\boldsymbol{\xi}^2(\boldsymbol{\mu}\cdot\boldsymbol{\xi})(\mathbf{B}\cdot\boldsymbol{\xi}),\\ &H'^{(6)}_{\mathrm{FW}}=-\frac{3}{8}\left(\frac{g}{2}-1\right)\boldsymbol{\xi}^4\boldmu\cdot(\mathbf{E}\times\boldxi),\\ &H'^{(7)}_{\mathrm{FW}}=\frac{5}{16}\left(\frac{g}{2}-1\right)\boldsymbol{\xi}^4(\boldsymbol{\mu}\cdot\boldsymbol{\xi})(\mathbf{B}\cdot\boldsymbol{\xi}),\\ &H'^{(8)}_{\mathrm{FW}}=\frac{5}{16}\left(\frac{g}{2}-1\right)\boldsymbol{\xi}^6\boldmu\cdot(\mathbf{E}\times\boldxi),\\ &H'^{(9)}_{\mathrm{FW}}=-\frac{35}{128}\left(\frac{g}{2}-1\right)\boldsymbol{\xi}^6(\boldsymbol{\mu}\cdot\boldsymbol{\xi})(\mathbf{B}\cdot\boldsymbol{\xi}),\\ &H'^{(10)}_{\mathrm{FW}}=-\frac{35}{128}\left(\frac{g}{2}-1\right)\boldsymbol{\xi}^8\boldmu\cdot(\mathbf{E}\times\boldxi),\\ &H'^{(11)}_{\mathrm{FW}}=\frac{63}{256}\left(\frac{g}{2}-1\right)\boldsymbol{\xi}^8(\boldsymbol{\mu}\cdot\boldsymbol{\xi})(\mathbf{B}\cdot\boldsymbol{\xi}),\\ &H'^{(12)}_{\mathrm{FW}}=\frac{63}{256}\left(\frac{g}{2}-1\right)\boldsymbol{\xi}^{10}\boldmu\cdot(\mathbf{E}\times\boldxi).
\end{split} \end{equation} By using Eqs.~(\ref{Series1}) and (\ref{Series2}), the anomalous Hamiltonian can be written as \begin{equation}\label{HFW' total} \begin{split} H'_{\mathrm{FW}}&=\sum_{k=0}^{12}H'^{(k)}_{\mathrm{FW}}\\ &=-\left(\frac{g}{2}-1\right)\boldsymbol{\mu}\cdot\mathbf{B}-\left(\frac{g}{2}-1\right)\frac{1}{\widehat{\gamma}}\boldmu\cdot(\mathbf{E}\times\boldxi)\\ &~~+\left(\frac{g}{2}-1\right)\left(1-\frac{\widehat{\gamma}}{1+\widehat{\gamma}}\right)\frac{1}{\widehat{\gamma}}(\boldsymbol{\mu}\cdot\boldsymbol{\xi})(\mathbf{B}\cdot\boldsymbol{\xi}). \end{split} \end{equation} Combining the FW transformed Dirac Hamiltonian [Eq.~(\ref{HFW total})] and the anomalous Hamiltonian [Eq.~(\ref{HFW' total})], we have (up to terms of $(\boldsymbol{\pi}/mc)^{14}$) \begin{equation}\label{HDP FW} \begin{split} \mathcal{H}_{\mathrm{FW}}&=H_{\mathrm{FW}}+H_{\mathrm{FW}}'\\ &=V+\widehat{\gamma} mc^2-\left(\frac{g}{2}-1+\frac{1}{\widehat{\gamma}}\right)\boldsymbol{\mu}\cdot\mathbf{B}\\ &~~+\left(\frac{g}{2}-\frac{\widehat{\gamma}}{1+\widehat{\gamma}}\right)\boldsymbol{\mu}\cdot(\widehat{\boldsymbol{\beta}}\times\mathbf{E})\\ &~~+\left(\frac{g}{2}-1\right)\frac{\widehat{\gamma}}{1+\widehat{\gamma}}(\boldsymbol{\mu}\cdot\widehat{\boldsymbol{\beta}})(\mathbf{B}\cdot\widehat{\boldsymbol{\beta}}). \end{split} \end{equation} Equation (\ref{HDP FW}) is in agreement with the classical Hamiltonian with $g\neq2$. The FW transformed Dirac-Pauli Hamiltonian [Eq.~(\ref{HDP FW})] can also be obtained by directly evaluating Eq.~(\ref{HFWDP0}). Since Eq.~(\ref{HFWDP0}) is explicitly hermitian, the calculation can be done without accounting for the Dirac-Pauli string operator and the separation of hermitian and anti-hermitian parts of the Dirac-Pauli energy operator \cite{CLChang}. Up to $(\boldsymbol{\pi}/mc)^{14}$, we find that the result shown in Ref.~\cite{CLChang} is in agreement with the present result. To find the classical correspondence of the quantum theory of a charged spin-1/2 particle, we have to perform the FW transformation on the quantum Hamiltonian. The procedure presented in this paper provides a more systematic and efficient method for obtaining higher-order expansions in the FW representation. \section{Exact unitary transformation}\label{sec:EUT} We now turn to the discussion of the \emph{exact} closed forms of the Dirac and Dirac-Pauli generating operators. The exact unitary transformation of the free-particle Dirac Hamiltonian has been given in Eq.~(\ref{U-free}). In the presence of electromagnetic fields, the series of successive FW transformations becomes much more complicated. However, it is still possible to obtain the exact unitary transformation by deducing the closed form from the finite-order series expansion, provided the expansion is carried to sufficiently high order. For example, the exact unitary transformation of the free-particle Dirac Hamiltonian can be obtained from the successive FW transformations if enough terms of the series expansion are available to determine the closed form. Therefore, in order to find the closed form for generic cases, we must proceed to higher orders. On the other hand, it has been proposed that the low-energy and weak-field limit of the Dirac (resp.\ Dirac-Pauli) Hamiltonian is consistent with the classical Hamiltonian, which is the sum of the classical relativistic Hamiltonian and the T-BMT Hamiltonian with $g=2$ (resp.\ $g\neq2$). This suggests that there exists an exact unitary transformation for the low-energy and weak-field limit.
In this section, we find the closed forms of the unitary transformation from the high-order series expansions of the generating operators. The unitary transformation matrix is related to the generating operator by Eqs.\ (\ref{U}) and (\ref{Def:YandZ}) in Kutzelnigg's diagonalization method; if the closed form of the generating operator is found, the exact unitary transformation matrix can be obtained. For the low-energy and weak-field limit of the Dirac Hamiltonian, the Dirac generating operator can be written as \begin{equation}\label{EUT-X} \begin{split} X=\frac{X_1}{c}+\frac{X_3}{c^3}+\frac{X_5}{c^5}+\cdots. \end{split} \end{equation} In Sec.~\ref{sec:HFW}, we have obtained the terms $X_{\ell}$ up to order $\ell=13$, which are given in Eq.\ (\ref{App:SolveX}). We find that Eq.\ (\ref{EUT-X}) with Eq.\ (\ref{App:SolveX}) can be cast into the closed form \begin{widetext} \begin{equation}\label{EUT-X-exact} \begin{split} X&=\frac{1}{1+\sqrt{1+(\boldsymbol{\sigma}\cdot\boldsymbol{\xi})^2}}\boldsymbol{\sigma}\cdot\boldsymbol{\xi}+\left(\frac{1}{\sqrt{1+\boldsymbol{\xi}^2}}-\frac{1}{1+\sqrt{1+\boldsymbol{\xi}^2}}\right)\frac{-i}{mc^2}\boldsymbol{\mu}\cdot\mathbf{E}+\left(\frac{1}{\sqrt{1+\boldsymbol{\xi}^2}}\frac{1}{1+\sqrt{1+\boldsymbol{\xi}^2}}\right)^2(\boldsymbol{\mu}\cdot\boldsymbol{\xi})(\mathbf{E}\cdot\boldsymbol{\xi}). \end{split} \end{equation} \end{widetext} The magnetic-field contribution generated by the operator $(\boldsymbol{\sigma}\cdot\boldsymbol{\pi})^2$ is included in the first term of Eq.\ (\ref{EUT-X-exact}). In the absence of electromagnetic fields, the kinetic momentum $\boldsymbol{\pi}$ is replaced by the canonical momentum $\mathbf{p}$. In this case, Eq.\ (\ref{EUT-X-exact}) becomes $c\boldsymbol{\sigma}\cdot\mathbf{p}/[mc^2+\sqrt{m^2c^4+c^2\mathbf{p}^2}]$, which is the same as Eq.\ (\ref{free-X}), and the resulting unitary transformation is exactly Eq.\ (\ref{U-free}). We also note that in the absence of an electric field, Eq.\ (\ref{EUT-X-exact}) becomes Eq.\ (\ref{Magnetic-X}). Taking the anomalous magnetic moment into account, the Dirac-Pauli generating operator can be written as \begin{equation} \mathcal{X}=X+X', \end{equation} where $X$ is given in Eq.\ (\ref{EUT-X-exact}). We find that the anomalous generating operator $X'$ with Eq.\ (\ref{X'k}) can be cast into the closed form \begin{widetext} \begin{equation}\label{EUT-X'-exact} \begin{split} X'&=\frac{X_3'}{c^3}+\frac{X_4'}{c^4}+\frac{X_5'}{c^5}+\cdots\\ &=\left(\frac{1}{\sqrt{1+\boldsymbol{\xi}^2}}-\frac{1}{1+\sqrt{1+\boldsymbol{\xi}^2}}\right)\left(\frac{g}{2}-1\right)\frac{1}{mc^2}\left(-i\boldsymbol{\mu}\cdot\mathbf{E}+\frac{q\hbar}{2mc}\mathbf{B}\cdot\boldsymbol{\xi}\right)\\ &~~+\frac{1}{\sqrt{1+\boldsymbol{\xi}^2}}\left(\frac{1}{1+\sqrt{1+\boldsymbol{\xi}^2}}\right)^2\left[-\frac{i}{mc^2}\left(\frac{g}{2}-1\right)(\boldsymbol{\mu}\cdot\boldsymbol{\xi})(\mathbf{E}\cdot\boldsymbol{\xi})\right]. \end{split} \end{equation} \end{widetext} The closed forms of $X$ and $X'$ have been deduced from the high-order series expansions in Eqs.\ (\ref{App:SolveX}) and (\ref{X'k}), respectively, but rigorous proofs are still missing. The merit of obtaining the closed forms is nevertheless enormous: it allows us to guess the generic forms of $X_\ell$ and $X'_\ell$ in the series expansions, which in turn enables us to conduct rigorous proofs by mathematical induction \cite{DWChiou2014}. With Eqs.\ (\ref{EUT-X-exact}) and (\ref{EUT-X'-exact}) at hand, we can formally construct the exact unitary transformation.
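As a quick consistency check (an illustrative remark added here for the reader; it is not part of the original derivation), one can insert the free-particle form $X=c\,\boldsymbol{\sigma}\cdot\mathbf{p}/(mc^2+E_p)$, with $E_p=\sqrt{m^2c^4+c^2\mathbf{p}^2}$, into Eq.~(\ref{App:HFW4}) of Appendix~\ref{App:Ham} with $V=0$. Since all operators involved commute in this case and $(\boldsymbol{\sigma}\cdot\mathbf{p})^2=\mathbf{p}^2$, we obtain
\begin{equation*}
H_{\mathrm{FW}}=mc^2+h_0X=mc^2+\frac{c^2\mathbf{p}^2}{mc^2+E_p}
=\frac{m^2c^4+c^2\mathbf{p}^2+mc^2E_p}{mc^2+E_p}
=\frac{E_p\left(E_p+mc^2\right)}{mc^2+E_p}=E_p,
\end{equation*}
which is the expected relativistic free-particle energy, consistent with the exact Foldy-Wouthuysen result.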
However, the main problem to be addressed is that the resulting exact unitary matrix is valid only in the low-energy and weak-field limit. In this regard, when we apply the exact unitary transformation to the Dirac or Dirac-Pauli Hamiltonian, we have to neglect nonlinear electromagnetic effects. In strong fields, the interaction energy of the particle with the electromagnetic fields could exceed the Dirac energy gap ($2mc^2$), and it is no longer adequate to describe the relativistic quantum dynamics without taking into account the field-theoretic coupling to the antiparticle. In fact, some doubts have been cast on the mathematical rigour of the FW transformation \cite{Thaller1992}. The study in this paper nevertheless suggests that the exact FW transformation indeed exists and is valid in the low-energy and weak-field limit, and furthermore that the FW transformed Hamiltonian agrees with its classical counterpart (see \cite{DWChiou2014} for closer investigations). \section{Conclusions and Discussion}\label{sec:conclusions} The motion of a particle endowed with charge and intrinsic spin is governed by the classical Lorentz equation and the T-BMT equation. Assuming that the canonical relation of classical spins (via Poisson brackets) is the same as that of quantum spins (via commutators), the T-BMT equation can be recast as Hamilton's equations, and the T-BMT Hamiltonian is obtained. By treating positions, momenta, and spins as independent variables in phase space, the classical Hamiltonian describing the motion of a spin-1/2 charged particle is the sum of the classical relativistic Hamiltonian and the T-BMT Hamiltonian. On the other hand, the correspondence between the classical Hamiltonian and the low-energy and weak-field limit of the Dirac equation has been investigated by several authors. For a free particle, the Foldy-Wouthuysen transformation of the Dirac equation was shown to lead exactly to the classical relativistic Hamiltonian of a free particle. Intriguingly, when spin precession and interaction with electromagnetic fields are also taken into account, it was found that the connection between the Dirac equation and the classical Hamiltonian becomes explicit if the order-by-order block diagonalization of the Dirac Hamiltonian is carried to higher-order terms. The low-energy and weak-field limit of the relativistic quantum theory of a spin-1/2 charged particle is investigated by applying the Kutzelnigg diagonalization method to the Dirac Hamiltonian. We show that in the presence of inhomogeneous electromagnetic fields the Foldy-Wouthuysen transformed Dirac Hamiltonian up to terms of $(\boldsymbol{\pi}/mc)^4$ can be reproduced by the Kutzelnigg diagonalization method. When the electromagnetic fields are homogeneous and nonlinear effects are neglected, the Foldy-Wouthuysen transformation of the Dirac Hamiltonian is obtained up to terms of $(\boldsymbol{\pi}/mc)^{14}$. The series expansion of the orbital part of the transformed Dirac Hamiltonian in terms of the kinetic momentum enables us to define the boost velocity operator. According to the correspondence between the kinetic momentum and the boost velocity operator, we found that up to terms of $(\boldsymbol{\pi}/mc)^{14}$ the Foldy-Wouthuysen transformed Dirac Hamiltonian is consistent with the classical Hamiltonian with the gyromagnetic ratio given by $g=2$.
Furthermore, when the anomalous magnetic moment is considered as well, we found that up to terms of $(\boldsymbol{\pi}/mc)^{14}$ the Foldy-Wouthuysen transformed Dirac-Pauli Hamiltonian is in agreement with the classical Hamiltonian with $g\neq2$. The investigation in this paper reveals that the classical Hamiltonian (the classical relativistic Hamiltonian plus the T-BMT Hamiltonian) must be the low-energy and weak-field limit of the Dirac-Pauli equation. As shown in the above sections, we can establish the connection order by order in the FW representation. Moreover, this implies that, in the low-energy and weak-field limit, there must exist an exact FW transformation that can block-diagonalize the Dirac-Pauli Hamiltonian to the form corresponding to the classical Hamiltonian. For a free particle, the exact unitary transformation was obtained by Foldy and Wouthuysen; alternatively, it can also be obtained by the order-by-order method. We found that the generating operators can be written in closed form, and consequently we can formally construct the exact unitary transformation that block-diagonalizes the Dirac and Dirac-Pauli Hamiltonians. However, it should be emphasized that the exact unitary transformation is valid only in the low-energy and weak-field limit, and its existence demands a rigorous proof \cite{DWChiou2014}. On the other hand, it is true that even if the unitary FW transformation exists, it is far from unique, as one can easily perform further unitary transformations on the block-diagonalized Hamiltonian which preserve the block decomposition (see also \secref{sec:method}). While different block-diagonalization transformations are unitarily equivalent to one another and thus yield the same physics, the pertinent operators $\boldsymbol{\sigma}$, $\mathbf{x}$, and $\mathbf{p}$ may represent very different physical quantities in different representations. To figure out the operators' physical interpretations, it is crucial to compare the resulting FW transformed Hamiltonian to the classical counterpart in a certain classical limit via the \emph{correspondence principle}. In Kutzelnigg's method, $\boldsymbol{\sigma}$, $\mathbf{x}$, and $\mathbf{p}$ simply represent the spin, position, and conjugate momentum of the particle (as decoupled from the antiparticle) in the resulting FW representation. In other words, Kutzelnigg's method does not introduce, beyond the block diagonalization itself, further transformations that obscure the operators' interpretations. The correspondence we observed may be extended to the case of inhomogeneous electromagnetic fields (except that the Darwin term has no classical correspondence) \cite{TWChen2013}, but inhomogeneity gives rise to complications which make it cumbersome to obtain the FW transformation in any order-by-order scheme, including the Kutzelnigg method. We wish to tackle this problem in future research. \begin{acknowledgments} The authors are grateful to C.-L.\ Chang for sharing his calculations. T.W.C.\ would like to thank G.\ Y.\ Guo, R.\ Winkler and M.-C.\ Chang for valuable discussions. T.W.C.\ is supported by the National Science Council of Taiwan under Contract No.\ NSC 101-2112-M-110-013-MY3; D.W.C.\ is supported by the Center for Advanced Study in Theoretical Sciences at National Taiwan University.
\end{acknowledgments} \appendix \section{Hermiticity of FW transformed Dirac Hamiltonian}\label{App:Ham} Under the unitary transformation [Eq.~(\ref{UHU})], the Foldy-Wouthuysen transformed Dirac Hamiltonian is given by the upper-left block of $UH_DU^{\dag}$, which is \begin{equation}\label{App:HFW} \begin{split} H_{\mathrm{FW}}&=\left(Yh_++YX^{\dag}h_0\right)Y+\left(Yh_0+YX^{\dag}h_-\right)XY\\ &=Y\left(h_++X^{\dag}h_0+h_0X+X^{\dag}h_-X\right)Y. \end{split} \end{equation} Since the operators $Y$, $h_{\pm}$ and $h_0$ are hermitian, it is easy to show that $H_{\mathrm{FW}}$ in this form satisfies $H_{\mathrm{FW}}=H_{\mathrm{FW}}^{\dag}$. The two off-diagonal terms are given by \begin{equation}\label{App:HFWX} \begin{split} &H_{X}=Z\left(-Xh_++h_0-Xh_0X+h_-X\right)Y,\\ &H_{X^{\dag}}=Y\left(-h_+-X^{\dag}h_0X^{\dag}+h_0+X^{\dag}h_-\right)Z. \end{split} \end{equation} Equation (\ref{App:HFW}) can be further simplified by using $H_{X}=0$ and $H_{X^{\dag}}=0$. The bracketed expression in the second equality of Eq.~(\ref{App:HFW}) can be rewritten as \begin{equation}\label{App:HFW1} \begin{split} &\left(h_++X^{\dag}h_0+h_0X+X^{\dag}h_-X\right)\\ &=V+mc^2+X^{\dag}h_0+h_0X+X^{\dag}\left(V-mc^2\right)X\\ &=V+h_0X+mc^2\left(1-X^{\dag}X\right)+\left(X^{\dag}VX+X^{\dag}h_0\right). \end{split} \end{equation} On the other hand, we have \begin{equation}\label{App:HFW2} \begin{split} &\left(X^{\dag}VX+X^{\dag}h_0\right)\\ &=X^{\dag}\left(VX+h_0\right)\\ &=X^{\dag}\left([V,X]+XV+h_0\right)\\ &=X^{\dag}\left(2mc^2X+Xh_0X+XV\right)\\ &=2mc^2X^{\dag}X+X^{\dag}Xh_0X+X^{\dag}XV, \end{split} \end{equation} where Eq.~(\ref{EqX}) was used in the third equality of Eq.~(\ref{App:HFW2}). Substituting Eq.~(\ref{App:HFW2}) into Eq.~(\ref{App:HFW1}), we have \begin{equation}\label{App:HFW3} \begin{split} &\left(h_++X^{\dag}h_0+h_0X+X^{\dag}h_-X\right)\\ &=V+h_0X+mc^2\left(1-X^{\dag}X\right)+2mc^2X^{\dag}X\\ &~~+X^{\dag}Xh_0X+X^{\dag}XV\\ &=\left(1+X^{\dag}X\right)V+\left(1+X^{\dag}X\right)h_0X+mc^2\left(1+X^{\dag}X\right)\\ &=Y^{-2}\left(V+h_0X+mc^2\right). \end{split} \end{equation} Inserting Eq.~(\ref{App:HFW3}) into Eq.~(\ref{App:HFW}), we obtain \begin{equation}\label{App:HFW4} \begin{split} H_{\mathrm{FW}}&=YY^{-2}\left(V+h_0X+mc^2\right)Y\\ &=mc^2+Y^{-1}\left(V+h_0X\right)Y. \end{split} \end{equation} The condition $H_{X^{\dag}}=0$ implies \begin{equation}\label{App:EqXdag} X^{\dag}=\frac{1}{2mc^2}\left(h_0-X^{\dag}h_0X^{\dag}+[X^{\dag},V]\right). \end{equation} Applying Eq.~(\ref{App:EqXdag}) to Eq.~(\ref{App:HFW}), we have \begin{equation} \begin{split} H_{\mathrm{FW}}&=Y\left(h_++X^{\dag}h_0+h_0X+X^{\dag}h_-X\right)Y\\ &=Y\left[V+X^{\dag}h_0+mc^2(1-X^{\dag}X)+\left(X^{\dag}V+h_0\right)X\right]Y\\ &=Y[V+X^{\dag}h_0+mc^2(1-X^{\dag}X)+2mc^2X^{\dag}X\\ &~~+VX^{\dag}X+X^{\dag}h_0X^{\dag}X]Y\\ &=Y\left(VY^{-2}+X^{\dag}h_0Y^{-2}+mc^2Y^{-2}\right)Y, \end{split} \end{equation} where Eq.~(\ref{App:EqXdag}) was used in the second equality. We obtain \begin{equation}\label{App:HFW5} H_{\mathrm{FW}}=mc^2+Y\left(V+X^{\dag}h_0\right)Y^{-1}. \end{equation} Because $Y$, $h_0$ and $V$ are hermitian operators, the hermitian conjugate of Eq.~(\ref{App:HFW4}) is $H^{\dag}_{\mathrm{FW}}=mc^2+Y\left(V+X^{\dag}h_0\right)Y^{-1}$, which is the same as Eq.~(\ref{App:HFW5}). As a consequence, we have $H_{\mathrm{FW}}^{\dag}=H_{\mathrm{FW}}$.
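For readers who wish to check Eq.~(\ref{App:HFW}) numerically, the following minimal Python sketch (an editorial illustration, not part of the original paper; it relies only on \texttt{numpy} and \texttt{scipy}) evaluates the upper-left block for a free particle ($V=0$) with the free-particle generating operator and confirms that it equals $\sqrt{m^2c^4+c^2\mathbf{p}^2}$ times the $2\times2$ identity and is hermitian.
\begin{verbatim}
# Numerical sanity check of Eq. (App:HFW) for a free particle (V = 0):
# the block Y (h_+ + X^dag h_0 + h_0 X + X^dag h_- X) Y should equal
# sqrt(m^2 c^4 + c^2 p^2) * I_2 when X = c sigma.p / (m c^2 + E_p).
import numpy as np
from scipy.linalg import sqrtm

m, c = 1.0, 1.0
p = np.array([0.3, -0.4, 0.5])                  # arbitrary momentum (units of m*c)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
sp = sum(pi * si for pi, si in zip(p, sigma))   # sigma . p (2x2 hermitian matrix)
I2 = np.eye(2, dtype=complex)

Ep = np.sqrt(m**2 * c**4 + c**2 * p @ p)        # relativistic energy E_p
h_plus, h_minus, h0 = m * c**2 * I2, -m * c**2 * I2, c * sp
X = c * sp / (m * c**2 + Ep)                    # free-particle generating operator
Y = np.linalg.inv(sqrtm(I2 + X.conj().T @ X))   # Y = (1 + X^dag X)^(-1/2)

HFW = Y @ (h_plus + X.conj().T @ h0 + h0 @ X + X.conj().T @ h_minus @ X) @ Y
assert np.allclose(HFW, Ep * I2)                # upper-left block equals E_p * I_2
assert np.allclose(HFW, HFW.conj().T)           # hermiticity of the block
\end{verbatim}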
\section{Hermiticity of FW transformed Dirac-Pauli Hamiltonian}\label{App:Ham2} In this appendix, we will show that the FW transformed Dirac-Pauli Hamiltonian can be written as Eq.~(\ref{H FWDP}) and that it is a hermitian operator. Under the unitary transformation [Eq.~(\ref{UHDPU})], the Foldy-Wouthuysen transformed Dirac-Pauli Hamiltonian is given by the upper-left block of $U\mathcal{H}U^{\dag}$: \begin{equation}\label{App:HDP FW} \mathcal{H}_{\mathrm{FW}}=\mathcal{Y}\left(H_++\mathcal{X}^{\dag}H^{\dag}_0+H_0\mathcal{X}+\mathcal{X}^{\dag}H_-\mathcal{X}\right)\mathcal{Y}, \end{equation} where $H_+=V+V_B+mc^2$, $H_0=h_0+iV_E$ and $H_-=V-V_B-mc^2$. The operator $h_0$ is $h_0=c\,\boldsigma\cdot\boldpi$. Since the operators $\mathcal{Y}$, $H_+$ and $H_-$ are hermitian, it is easy to show that Eq.~(\ref{App:HDP FW}) satisfies $\mathcal{H}_{\mathrm{FW}}=\mathcal{H}_{\mathrm{FW}}^{\dag}$. The two off-diagonal terms are required to vanish; they are given by \begin{equation}\label{App:HDP FWX1} -\mathcal{X}H_++H_0^{\dag}-\mathcal{X}H_0\mathcal{X}+H_-\mathcal{X}=0, \end{equation} and \begin{equation}\label{App:HDP FWX2} -H_+\mathcal{X}^{\dag}-\mathcal{X}^{\dag}H_0^{\dag}\mathcal{X}^{\dag}+H_0+\mathcal{X}^{\dag}H_-=0. \end{equation} Multiplying Eq.~(\ref{App:HDP FWX1}) from the left by $\mathcal{X}^{\dag}$, we have \begin{equation}\label{App:HDP FWX3} \left(\mathcal{X}^{\dag}H_0^{\dag}+\mathcal{X}^{\dag}H_-\mathcal{X}\right)=\mathcal{X}^{\dag}\mathcal{X}H_++\mathcal{X}^{\dag}\mathcal{X}H_0\mathcal{X}. \end{equation} Substituting Eq.~(\ref{App:HDP FWX3}) into Eq.~(\ref{App:HDP FW}) to eliminate $\left(\mathcal{X}^{\dag}H_0^{\dag}+\mathcal{X}^{\dag}H_-\mathcal{X}\right)$, we obtain \begin{equation}\label{App:HDP FW1} \mathcal{H}_{\mathrm{FW}}=\mathcal{Y}^{-1}\left(H_++H_0\mathcal{X}\right)\mathcal{Y}, \end{equation} where the definition of the operator $\mathcal{Y}=1/\sqrt{1+\mathcal{X}^{\dag}\mathcal{X}}$ was used. On the other hand, multiplying Eq.~(\ref{App:HDP FWX2}) from the right by $\mathcal{X}$, we have \begin{equation}\label{App:HDP FWX4} \left(H_0\mathcal{X}+\mathcal{X}^{\dag}H_-\mathcal{X}\right)=H_+\mathcal{X}^{\dag}\mathcal{X}+\mathcal{X}^{\dag}H_0^{\dag}\mathcal{X}^{\dag}\mathcal{X}. \end{equation} Substituting Eq.~(\ref{App:HDP FWX4}) into Eq.~(\ref{App:HDP FW}) and eliminating the term $\left(H_0\mathcal{X}+\mathcal{X}^{\dag}H_-\mathcal{X}\right)$, we obtain \begin{equation}\label{App:HDP FW2} \mathcal{H}_{\mathrm{FW}}=\mathcal{Y}\left(H_++\mathcal{X}^{\dag}H_0^{\dag}\right)\mathcal{Y}^{-1}. \end{equation} Because $\mathcal{Y}$ and $H_+$ are hermitian operators, the hermitian conjugate of Eq.~(\ref{App:HDP FW1}) is $\mathcal{H}^{\dag}_{\mathrm{FW}}=\mathcal{Y}\left(H_++\mathcal{X}^{\dag}H_0^{\dag}\right)\mathcal{Y}^{-1}$, which is the same as Eq.~(\ref{App:HDP FW2}). As a consequence, we have $\mathcal{H}_{\mathrm{FW}}^{\dag}=\mathcal{H}_{\mathrm{FW}}$. On the other hand, $H_++H_0\mathcal{X}$ can be written as $\left(H_++H_0\mathcal{X}\right)=mc^2+V+V_B+(h_0+iV_E)\mathcal{X}$. Equation (\ref{App:HDP FW1}) can be simplified as \begin{equation} \mathcal{H}_{\mathrm{FW}}=mc^2+e^{\mathcal{G}/2}\mathcal{A}e^{-\mathcal{G}/2}, \end{equation} where the operators $\mathcal{A}$ and $\mathcal{G}$ are defined as $\mathcal{A}=V+h_0\mathcal{X}+V_B+iV_E\mathcal{X}$ and $\mathcal{G}=\ln\left(1+\mathcal{X}^{\dag}\mathcal{X}\right)$, respectively. \end{document}
\begin{document} \title{A Survey on Temporal Graph Representation Learning and Generative Modeling} \begin{abstract} Temporal graphs represent the dynamic relationships among entities and occur in many real-life applications such as social networks, e-commerce, communication, road networks, biological systems, and many more. They necessitate research beyond that on static graphs in terms of generative modeling and representation learning. In this survey, we comprehensively review the neural time-dependent graph representation learning and generative modeling approaches proposed in recent times for handling temporal graphs. Finally, we identify the weaknesses of existing approaches and discuss the research proposal of our recently published paper \textsc{Tigger}~\cite{tigger}. \end{abstract} \section{Introduction} Traditionally, static graphs have been the de facto data structure in many real-world settings such as social networks, biological networks, computer networks, routing networks, geographical weather networks, interaction networks, co-citation networks, traffic networks, and knowledge graphs \cite{10.5555/2361850,10.5555/1971972,10.5555/1050985}. These graphs are used to represent the relationships between various entities. Major tasks like community detection, graph classification, entity classification, link prediction, and combinatorial optimization are established research areas in this domain. These tasks have applications in recommendation systems \cite{graphrecsys}, anomaly detection \cite{graph_anamoly}, information retrieval using knowledge graphs \cite{Frber2018LinkedDQ}, drug discovery \cite{graphgen}, traffic prediction \cite{traffic_prediction}, molecule fingerprinting \cite{NIPS2015_f9be311e}, protein interface prediction \cite{NIPS2017_f5077839} and combinatorial optimization \cite{Peng2021Jun}. Recently, graph neural networks \cite{kipf2017semisupervised,hamilton2018inductive, gat, gin,pgnn} have been developed to improve the state of the art in these applications. Moreover, much success has been achieved in terms of quality and scalability by graph generative modeling methods \cite{you2018graphrnn,pgnn,liao2020efficient}. However, most of these datasets often have the added dimension of time. Researchers typically marginalize the temporal dimension to generate a static graph before executing the above tasks. Nonetheless, many tasks like future link prediction \cite{timeawarelinkprediction}, time of future link prediction \cite{knowevolve}, and dynamic node classification \cite{jodie} require temporal attributes as well. This has led to recent advancements in temporal graphs in terms of defining and computing temporal properties \cite{Holme_2015}. Moreover, algorithmic problems like the travelling salesman problem \cite{MICHAIL20161}, minimum spanning trees \cite{10.1145/2723372.2723717}, core decomposition \cite{7363809}, and maximum clique \cite{himmel2017adapting} have been adapted to temporal graphs. Recent research on graphs has focused on dynamic representation learning \cite{tgn,tigecmn,dyrep,tgat} and achieved high fidelity on the downstream tasks. Research in the temporal graph generative space \cite{TagGen,DYMOND} is in its early stages and requires attention, especially regarding scalability. Many surveys exist that separately study techniques on graph representation learning \cite{sg1,sg2,sg3,sg4,sg5}, temporal graph representation learning \cite{tg1,tg2,tg3}, and graph generative modeling \cite{gg1,gg2}. This survey, however, is a first attempt to unify these inter-related areas.
It aims to be a starting point for beginners interested in the temporal graph machine learning domain. In this report, we outline the following: \begin{itemize} \item We initiate the discussion with definitions and preliminaries of temporal graphs. \item We then present a summary of the graph representation learning methods and the static graph generative methods that are prerequisites for exploring similar approaches for temporal graphs. \item We discuss in depth the temporal graph representation approaches proposed in recent literature. \item Subsequently, we outline the existing temporal graph generative methods and highlight their weaknesses. \item Finally, we propose the problem formulation of our recently published temporal graph generative model \textsc{Tigger}~\cite{tigger}. \end{itemize} \section{Definitions and Preliminaries} This section will formalize the definition of temporal graphs and their various representations. Furthermore, we will explain a temporal graph's node and edge attributes. We will then discuss the various tasks in the temporal graph setting and the metrics frequently used in the literature. \subsection{Temporal Graph} Temporal graphs are an effective data structure to represent the evolving topology/relationships between various entities across the time dimension. Temporal graphs are used to describe not only the evolving links themselves but also the underlying processes that trigger the addition or removal of entities from the topology. In addition, they also represent the evolution of entity/edge attributes over time. For example, temporal graph-based modeling can explain the sign-up behavior of a user in a shopping network and the causes of churn, apart from their shopping behavior. In this survey, we also use the term temporal graph for dynamic graphs that are no longer evolving. A temporal graph is represented either in continuous time or in discrete time. In the subsections below, we describe the distinction between the two. \subsubsection{Continuous Time Temporal Graph} A continuous-time temporal graph is a stream of dyadic events happening sequentially. \begin{equation} G = \{(e_1,t_1,x_1),(e_2,t_2,x_2),(e_3,t_3,x_3)....(e_n,t_n,x_n)\} \end{equation} Each $e_{i}$ is a temporal event tuple at timestamp $t_i$ with attributes $x_i \in \mathcal{R}^F$. These attributes can be continuous, categorical, both, or none at all. Each event is defined depending on the type of dataset and task. For interaction networks such as transaction, shopping, and communication networks, each $e_i = (u,v)$ is an interaction between nodes $u$ and $v$ at time $t_i$, where $u,v \in V$ and $V$ is the collection of entities/nodes in the network. In a general framework, $|V|$ varies across time: nodes can be added and deleted, which is also represented by events. An event $e_i =(u,"add")$ is a node addition event, in which a node $u$ with attributes $x_i \in \mathcal{R}^F$ is added to the network at timestamp $t_i$. A repeated node addition event is interpreted as a node update event with new attributes if the node already exists in the network. For example, in a university network the role of an assistant professor node can change to associate professor. Similarly, a node deletion event $e_i = (v_i,"delete")$ is also possible.
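To make the event-stream representation concrete, the following minimal Python sketch (our own illustration; the \texttt{Event} class and field names are hypothetical and not tied to any particular library) stores an interaction network together with node addition and deletion events.
\begin{verbatim}
# Illustrative event-stream storage for a continuous-time temporal graph.
# The Event class and its field names are hypothetical, not from any library.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Event:
    kind: str                            # "interaction", "add", or "delete"
    time: float                          # timestamp t_i
    u: int                               # node involved (source for interactions)
    v: Optional[int] = None              # second node (interactions only)
    attrs: Optional[List[float]] = None  # attribute vector x_i

# A tiny shopping-style interaction network: sign-ups followed by interactions.
G = [
    Event("add", 0.0, u=1, attrs=[0.1, 0.9]),
    Event("add", 0.5, u=2, attrs=[0.7, 0.2]),
    Event("interaction", 1.2, u=1, v=2, attrs=[1.0]),
    Event("interaction", 2.8, u=2, v=1, attrs=[0.0]),
    Event("delete", 4.0, u=1),           # node 1 leaves the network
]
\end{verbatim}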
In temporal graphs like knowledge graphs, co-citation networks, biological networks, and transport networks, each edge has a time-span, i.e., an edge $e_i$ can be added to the network at time $t_1$ and removed at time $t_2$, where $0 \leq t_1, t_2 \leq T$ and $T$ is the last observed timestamp in the network. In such cases, we will assume that each edge event is either a link addition event ($e_i =(u,v,"add")$) at $t_i$ in the network or a link deletion event ($e_i =(u,v,"delete")$) at $t_j$ in the network. Figure \ref{fig:ctg} shows an example interaction network. In the existing literature, the event representation is often designed as per requirement. For example, \cite{dyrep} adds a variable $k$ to the event tuple, $(e_i,t_i,k_i)$. Here, $k_i$ is a binary variable signifying whether this event is an association event (a permanent link between two nodes in the network) or a communication event (an interaction between two nodes). We encapsulate such intricacies in the variable $e_i$ to simplify the presentation. \begin{figure} \caption{Temporal Interaction Network} \label{fig:ctg} \end{figure} \subsubsection{Discrete-Time Temporal Graph} Generally, a discrete-time temporal graph is generated by accumulating the evolution across windows of consecutive timestamps, either to extract the desired information or to apply static graph modeling techniques. We divide the time axis into windows of equal length and aggregate the events within each window into a graph. Figure \ref{fig:dtg} displays one such aggregated temporal graph. This representation is known as a discrete-time temporal graph. \begin{figure} \caption{Evolving Temporal Network} \label{fig:dtg} \end{figure} A discrete-time temporal graph is defined as follows: \begin{equation} G = \{(G_1,t_1,{X_{1}}^v,{X_{1}}^e),(G_2,t_2,{X_{2}}^v,{X_{2}}^e)....(G_n,t_n,{X_{n}}^v,{X_{n}}^e) \} \end{equation} Here, each $G_i = (V_i,E_i)$ is a static network at time $t_i$, where $V_i$ is the collection of nodes in time-window $t_i$ and $E_i$ is the collection of edges $e=(u,v)$, $u,v \in V_i$. ${X_{i}}^v \in \mathbb{R}^{|V_i| \times F_v}$ is the node feature matrix, where $F_v$ is the dimension of the feature vector for each node. Similarly, ${X_{i}}^e \in \mathbb{R}^{|E_i| \times F_e}$ is the edge feature matrix and $F_e$ is the dimension of the edge feature vector. Please note that the meaning of the notation $t_i$ is dataset-specific. Essentially, it is a custom representation of the period over which the graph $G_i$ has been observed. For example, $t_i$ can be a month if each snapshot is collected for each month. In many cases, it can simply be the mean of the time-window or the last timestamp in the window. \subsection{Attributes} Attributes are a rich source of information beyond the structure of a graph. For example, a non-attributed co-citation network will only provide information about frequent co-authors and similarities between them, but an attributed network containing the roles and titles of every node will also offer a holistic view of the co-authors' evolving interests and help in predicting the next co-author or the next research area for a co-author. Attributes, in general, can take a categorical form, like gender and location, which is represented as one-hot vectors. They can also take a continuous form, like age. For example, in a Wikipedia network \cite{jodie}, each interaction's attributes are the word vector of the text edits made in the wiki article, and the class (label) of each user indicates whether the user has been banned from editing the page.
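To relate the two representations introduced above, the short sketch below (again an illustrative example, reusing the hypothetical \texttt{Event} class from the earlier sketch) buckets an event stream into equal-length windows, yielding the edge sets $E_i$ of the snapshots $G_i$ in the discrete-time definition; the attributes carried by the events can be collected analogously into the feature matrices ${X_i}^v$ and ${X_i}^e$.
\begin{verbatim}
# Aggregate a continuous-time event stream into discrete-time snapshots
# of equal window length (illustrative sketch; Event is defined above).
from collections import defaultdict

def to_snapshots(events, window):
    """Return {window_index: set_of_edges} built from interaction events."""
    snapshots = defaultdict(set)
    for e in events:
        if e.kind == "interaction":
            i = int(e.time // window)     # index of the time window
            snapshots[i].add((e.u, e.v))  # edge set E_i of snapshot G_i
    return snapshots

# Example: with window = 2.0, the events G above yield
# {0: {(1, 2)}, 1: {(2, 1)}}.
\end{verbatim}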
Sometimes these attributes are present as meta-data in the network. The airline network dataset of \cite{Frey972} contains meta-information in the form of each city's latitude, longitude, and population. \subsection{Tasks and Evaluation Metrics} The primary objective in representation learning is to project each node and edge into a $d$-dimensional vector space. This is achieved by learning a time-dependent function $f(G,v,t) : \mathcal{G} \times V \times R^+ \rightarrow \mathcal{R}^d$, $\forall v \in V$, where $V$ is the set of nodes in the temporal network $G$ observed until time $T$. Generally $t > T$ for future prediction tasks, but the time-dependence requirement can be dropped to learn only a node-specific representation \cite{CTDNE}. If the function $f$ allows only an existing node $v \in V$ as an argument, then this setting is generally known in the literature as \textbf{transductive representation learning} \cite{hamilton2018inductive}. Often it is desirable not to have this restriction, since frequent model retraining is not possible in many real-life systems, and the trained model is usually required to generate representations for unseen nodes $v \not\in V$. This setting is known as \textbf{inductive representation learning} \cite{hamilton2018inductive}. The argument $G$ of the function $f$ encapsulates the graph/node/edge attributes and is simplified according to design choices. If we assume the temporal graph is in the continuous domain, then $G$ can be approximated by the stream of events, or by only those events in which the argument node $v$ and its neighbors are involved. Neighbors of a node $v$ in a temporal graph do not have a universal definition, unlike in static graphs. Most papers define the neighbourhood of a node $v$ at a given time $t$ as the set of nodes that are within $k_d$ hops of node $v$ in the topology and whose interaction times lie within $k_t$ of the given time $t$ \cite{dyrep,tgat,tgn}; $k_d$ and $k_t$ are application-specific parameters. This definition typically selects the recent interactions of node $v$ before time $t$. \cite{tigecmn,jodie} use all of node $v$'s previous interactions to learn its representation at time $t$; this is the special case $k_d = 1$ and $k_t = \infty$. Most representation learning methods primarily focus on learning node representations and then use these representations to learn edge embeddings. The representation of an edge $e=(u,v,t)$ is learnt via a function $g(G,u,v,t): \mathcal{G} \times V \times V \times R^+ \rightarrow \mathcal{R}^d$ which aggregates the representations of its two end nodes and their attributes, including those of the edge under consideration. Most often, $g$ is a concatenate/min/max/mean operator. It can also be a neural network-based function that learns the aggregation. We note that $t$ is an argument of $g$, allowing it to learn a time-dependent embedding function. Similarly, discrete-time temporal graph snapshots are encoded into low-dimensional representations. Like the edge aggregator function $g$, a graph-level aggregator is learned, taking all nodes in the graph as input. The most frequent tasks using graph/node/edge representations are graph classification, node classification, and future link prediction. These are known as downstream tasks. These tasks cover many applications in recommendation systems, traffic prediction, anomaly detection, and combinatorial optimization.
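Before turning to the individual tasks, the following sketch illustrates the temporal neighbourhood definition above in the simple case $k_d=1$ (an illustrative example; the function name is ours, and it reuses the hypothetical \texttt{Event} class from the earlier sketches).
\begin{verbatim}
# Temporal neighbourhood of node v at time t for k_d = 1: interaction
# partners of v whose interaction times lie within k_t of t
# (illustrative sketch; Event is the hypothetical class defined earlier).
def temporal_neighbours(events, v, t, k_t=float("inf")):
    neighbours = []
    for e in events:
        if e.kind != "interaction" or e.time >= t:
            continue                      # only past interactions count
        if t - e.time > k_t:
            continue                      # outside the time window k_t
        if e.u == v:
            neighbours.append((e.v, e.time))
        elif e.v == v:
            neighbours.append((e.u, e.time))
    return neighbours                     # [(neighbour, interaction time), ...]

# k_t = inf recovers the special case that uses all previous interactions of v.
\end{verbatim}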
In recent work, we have also observed additional tasks like event time prediction \cite{dyrep} and clustering \cite{ige}. We will now detail each task and the metrics used to evaluate model performance on it. Please note that we overload the notation $f$ for each task. \subsubsection{Node Classification} Given a temporal graph $G$ and a node $v$, a function $f$ is trained to output its label. Formally, $$ f(G,v,t): \mathcal{G}\times V \times R^+ \rightarrow C, $$ where $C$ is the set of possible node categories in the temporal graph $G$. Since a node $v$ can often be represented as a time-dependent $d$-dimensional vector $\mathbf{h}\xspace_v(t)$, $f$ can be approximated as $$ f(\mathbf{h}\xspace_v(t)): R^d \rightarrow C. $$ Node classification can be conducted in both the transductive and the inductive setting, depending upon the argument node of the function $f$. In the transductive setting, the argument node is already seen during training. It is unseen in the case of inductive learning. Accuracy, F1, and AUC are frequent metrics for evaluating node classification. AUC is often preferred since it provides a reliable evaluation even in the case of high class imbalance. Anomaly detection is a typical case where class imbalance is observed, and the positive class typically comprises less than $1\%$ of the population. \subsubsection{Future Link Prediction} Given a temporal graph $G$, two nodes $u$ and $v$, and a future timestamp $t$, a function $f$ is learned which predicts the probability of these two nodes linking at a time $t > T$, where $T$ is the latest timestamp observed in $G$. $$ f(G,u,v,t): \mathcal{G} \times V \times V \times R^+ \rightarrow R $$ Similarly, we can also predict the attributes of this future link. As in node classification, since the node representations $\mathbf{h}\xspace_u(t)$ and $\mathbf{h}\xspace_v(t)$ are dependent on $G$ and time $t$, we can write $f$ as follows: $$ f(\mathbf{h}\xspace_u(t),\mathbf{h}\xspace_v(t)): R^d \times R^d \rightarrow R $$ We further observe that in most future link prediction settings, node representations for $t > T$ are approximated as $$ \mathbf{h}\xspace_v(t) \approx \mathbf{h}\xspace_v(T) \ \forall v \in V $$ The argument $t$ is often not required in future link prediction functions for such settings, since $f$ simply predicts the probability of link formation in the future. In most methods, $f$ is the cosine similarity between node embeddings or a neural function that aggregates the information from these embeddings. In some instances \cite{dyrep}, $f$ is modeled using temporal point processes \cite{rasmussen2018lecture}, which also allows predicting the time of link formation. Future link prediction is evaluated in two main settings, and researchers typically choose one of the two. In the first setup, link prediction is treated as a classification task. The test data consists of an equal number of positive and negative links. Positive links are the actual links present in the future subgraph or test graph. Generally, a test graph is split chronologically from the training graph to evaluate the model performance. Therefore, edges in this test graph or subgraph are considered positive examples.
From the test graph, an equal number of non-edge node pairs are sampled as negative links. Accuracy, F1, or AUC is used to evaluate the performance of $f$. In the other setting, future link prediction is seen as a ranking problem. For every test node, its most probable future neighbors are ranked and compared with the actual future neighbors. In a slightly different framework, the top-$K$ possible edges in a test graph are ranked and compared with the ground-truth edges. Preferred metrics in these ranking tasks are mean reciprocal rank (MRR), mean average precision (MAP) \cite{10.1145/2939672.2939753}, precision@K, and recall@K. \subsubsection{Event Time Prediction} \cite{dyrep} introduces a rather novel task of predicting the time of the link under consideration. This task has applications in recommendation systems: learning which items will be purchased by a user at a particular time $t$ enables better recommendations, optimized product shipment routing, and a better user experience. Mean absolute error (MAE) is the metric used for evaluating this task. \section{Literature Review} We first summarize the prevalent static graph representation learning methods. We will later see that temporal graph representation methods are direct extensions of these approaches. \subsection{Static Graph Representation Learning Methods} These methods are divided into two main categories: (a) random walk based methods and (b) graph neural network based methods. There are other categories as well, like factorization-based approaches \cite{10.1145/2488388.2488393,NIPS2001_f106b7f9,10.1145/2939672.2939751}. However, these are generally not used due to the associated scalability problems and their inability to use available attributes; moreover, random walk and GNN-based methods are also superior in quality. For this subsection, we assume $G=(V,E)$ is a static graph, where $V$ is the node set and $E=\{(u,v) \mid u,v \in V \}$ is the edge set; $N$ is the number of nodes and $M$ is the number of edges. We denote the $1$-hop neighbourhood of node $v$ by $\mathcal{N}_v$ and the input feature vector of node $v$ by $\mathbf{x}\xspace_v$. Bold lower-case variables denote vectors, and bold upper-case variables denote matrices. \subsubsection{Random Walk based Methods} Node representations often reflect the graph structure, i.e., the more similar the representations, the higher the chance that the corresponding nodes co-occur in random walks. This intuition provides an unsupervised learning objective for learning node representations. Building on it, \textsc{DeepWalk}\xspace \cite{Perozzi:2014:DOL:2623330.2623732} provided the first random walk-based method. It runs random walks $RW_v$ from each node $v$. Suppose one such length-$k$ random walk sequence is $RW_v = \{v_1,v_2, \ldots, v_k\}$. Using the skip-gram objective \cite{word2vec}, it learns the representations $\mathbf{z}\xspace_{v} \; \forall v \in V$ by optimizing the following loss. \begin{equation} \begin{gathered} L_{RW_v} = -\sum_{v_i \in RW_v}\log P(v_{i-w}\ldots v_{i+w} \mid {v_i}) = -\sum_{v_i \in RW_v} \sum_{v \in v_{i-w}\ldots v_{i+w} \wedge v \neq v_i} \log P(v \mid v_i)\\ p(v \mid u)= \frac{\exp({\mathbf{z}\xspace'}_v^T \mathbf{z}\xspace_u)}{\sum_{v' \in V}\exp({\mathbf{z}\xspace'}_{v'}^T\mathbf{z}\xspace_u)} \end{gathered} \end{equation} where $\mathbf{z}\xspace_v$ is the node representation and $\mathbf{z}\xspace'_v$ is a second (context) representation of the node that is not used in the downstream tasks. This setup is similar to \cite{word2vec}.
$w$ is the size of the window centred at $v_i \in RW_v$. $\log p(v\mid u)$ is often re-written using the negative sampling method \cite{NIPS2013_9aa42b31} to avoid the computationally expensive operation in the denominator of the softmax, as follows: \begin{equation} \log p(v \mid u) = \log \sigma(\mathbf{z}\xspace_v^T\mathbf{z}\xspace_u) + \sum_{k=1}^{k=K} \mathbb{E}_{v_n \sim P_n(v)} \log \sigma(-\mathbf{z}\xspace_{v_n}^T \mathbf{z}\xspace_u) \label{eq:negative_sampling} \end{equation} where $\sigma$ is the sigmoid function, $K$ is the number of negative samples (typically 5), and $P_n(v)$ is a probability distribution over $v \in V$, often based on the degree of $v$ and the task. \textsc{DeepWalk}\xspace computes these losses for every node in each random walk and updates $\mathbf{z}\xspace_v \; \forall v \in V$ using gradient descent methods \cite{pmlr-v28-sutskever13}. We note that the next node is selected uniformly from the current node's neighborhood in each random walk. \textsc{DeepWalk}\xspace also shows that these learned representations can be utilized in downstream tasks like node classification and missing link prediction. \textsc{LINE}\xspace is a direct extension of \textsc{DeepWalk}\xspace. It modifies the \textsc{DeepWalk}\xspace loss by restricting co-occurring nodes to be directly connected. Furthermore, it adds the following loss as well. \begin{equation} \begin{gathered} L = -\sum_{(u,v) \in E} \log(f(u,v))\\ f(u,v) = \frac{1}{1+\exp(-\mathbf{z}\xspace_u^T\mathbf{z}\xspace_v)} \end{gathered} \end{equation} where $E$ is the edge set. This loss forces the neighbouring nodes to have similar representations. \textsc{Node2Vec}\xspace \cite{node2vec} uses negative sampling \cite{word2vec} instead of hierarchical softmax to avoid the expensive operation in the denominator of the softmax. Furthermore, it introduces breadth-first search (BFS) and depth-first search (DFS) biased random walks to learn the node representations. In BFS-based random walks, nodes that are near in terms of hop distance are sampled more frequently. This biased sampling leads to learning community structure, with similar embeddings for nearby nodes. In DFS-based random walks, nodes that are farther away are more likely to be sampled. This type of walk assigns similar embeddings to nodes having similar roles/structural positions in the network. \par These methods work directly with node IDs and do not factor in node features and associated meta-data. Thus, these approaches do not extend to unseen nodes in the network, since new node IDs are absent during training. Furthermore, these are unsupervised approaches, so embeddings cannot be learned using available supervision on the nodes/edges either. These challenges limit the practical use of the above methods. \subsubsection{Graph Neural Network based Methods} \cite{kipf2017semisupervised} introduced the Graph Convolutional Network (\textsc{GCN}\xspace) to learn node representations based on the graph adjacency matrix and node features. The equation below represents the layer-wise message passing in a multi-layer graph neural network. \begin{equation} \begin{gathered} \mathbf{H}\xspace^{l+1} = \sigma (\tilde{\mathbf{A}\xspace}\mathbf{H}\xspace^l\mathbf{W}\xspace^l)\\ \mathbf{H}\xspace^0 = \mathbf{X}\xspace \end{gathered} \label{eq:gcn} \end{equation} where $\mathbf{X}\xspace$ is the node feature matrix, i.e., its $i$th row is the feature vector $\mathbf{x}\xspace_i$ of node $i$, and $\mathbf{H}\xspace^l$ is the node representation matrix at the $l$th layer.
$\mathbf{W}\xspace^l$ is a trainable weight matrix for message passing from layer $l$ to $l+1$. $\tilde{\mathbf{A}\xspace}= \mathbf{D}\xspace^{-\frac{1}{2}}(\mathbf{A}\xspace+\mathbf{I}\xspace_N)\mathbf{D}\xspace^{-\frac{1}{2}}$, where $\mathbf{A}\xspace$ is the adjacency matrix corresponding to the graph $G$ and $\mathbf{D}\xspace$ is a diagonal matrix in which each diagonal element is the degree of the corresponding node in the matrix $\mathbf{A}\xspace+\mathbf{I}\xspace_N$. $\sigma$ is a non-linear activation like sigmoid, ReLU, etc. Note that $\mathbf{I}\xspace_N$ is added to include self-loops in the formulation. This formulation allows the node representation at the next layer to be an aggregation of a node's own features and the features of its neighbour nodes. An $L$-layer network allows a node's representation to be influenced by its $L$-hop neighbourhood, as is evident from the formulation. \par This approach requires supervision, as the input network needs to have a few labeled nodes to learn the $\mathbf{W}\xspace^l \; \forall l \in \{0..L-1\}$. The authors train the $\mathbf{W}\xspace$ for each layer with a cross-entropy loss after applying a softmax over the last-layer representation of each labeled node $v$. This approach cannot incorporate unseen nodes, as training requires the full adjacency matrix. This requirement also causes scalability issues for large graphs. \cite{hamilton2018inductive} identified that equation \ref{eq:gcn} essentially averages the representations of the target node and its 1-hop neighbor nodes to compute the representation of each node at the next layer. So, a complete matrix formulation is not needed. Furthermore, the $\mathbf{W}\xspace$ matrix is also not dependent on node identity. This relaxation motivates the inductive setting as well. The authors of \cite{hamilton2018inductive} proposed a method, \textsc{GraphSage}\xspace, with the node-level layer-wise message propagation formulation given below. \begin{equation} \mathbf{h}\xspace_v^{l+1} = \sigma (\mathbf{W}\xspace^l \text{CONCAT}(\mathbf{h}\xspace^l_v,\text{AGGREGATE}_l(\{\mathbf{h}\xspace^l_u \; \mid \forall u \in \mathcal{N}_v\}))) \end{equation} where $\mathbf{h}\xspace_v^0 = \mathbf{x}\xspace_v$ and $\mathbf{x}\xspace_v$ is the feature vector of node $v$. $\text{CONCAT}$ and $\text{AGGREGATE}_l$ are functions defined as per the requirement: AGGREGATE learns a single representation from the set of neighbour node representations, and CONCAT is simply the concatenation of two embeddings. \textsc{GraphSage}\xspace uses MEAN, MAX and RNN based aggregator functions. Also, note that after each layer of propagation, $\mathbf{h}\xspace_v^l$ is normalized using the $\ell_2$ norm. The above formulation is effective since it allows the weight matrices to adjust the relative importance of the target node's and the neighbour nodes' representations when computing the target node's representation at the next layer. Additionally, this formulation is inductive, since neither node identity nor the adjacency matrix is required. \textsc{GraphSage}\xspace utilizes a supervised loss on node labels by applying an MLP followed by a softmax to convert the node embedding into a probability vector over the node label space. \textsc{GraphSage}\xspace additionally proposes an unsupervised loss similar to equation \ref{eq:negative_sampling}, where nodes $u$ and $v$ co-occur on short sampled random walks.\par In the above formulation, neighbours are treated equally in aggregator functions like max or mean pooling. \cite{gat} proposed an attention-based aggregator, namely \textsc{GAT}\xspace, to compute the relative importance of each neighbour.
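To make the message-passing pattern concrete, the following minimal NumPy sketch (an illustrative implementation of a single \textsc{GraphSage}\xspace-style layer with a mean aggregator; the variable and function names are ours, and no specific library API is implied) computes $\mathbf{h}\xspace_v^{l+1}$ for every node from the previous-layer representations.
\begin{verbatim}
# One GraphSAGE-style layer with a mean aggregator (illustrative sketch).
# H: (N, d) previous-layer node representations; neighbours: list of index
# lists; W: (d_out, 2*d) trainable matrix applied to [h_v || mean of N(v)].
import numpy as np

def graphsage_mean_layer(H, neighbours, W):
    N, d = H.shape
    out = np.zeros((N, W.shape[0]))
    for v in range(N):
        if neighbours[v]:
            agg = H[neighbours[v]].mean(axis=0)  # AGGREGATE: mean of neighbours
        else:
            agg = np.zeros(d)                    # isolated node: no message
        z = W @ np.concatenate([H[v], agg])      # CONCAT followed by linear map
        out[v] = np.maximum(z, 0.0)              # sigma chosen here as ReLU
    out /= np.linalg.norm(out, axis=1, keepdims=True) + 1e-12  # l2 normalisation
    return out

# Example usage:
# H = np.random.rand(4, 3); W = np.random.rand(2, 6)
# neighbours = [[1, 2], [0], [0, 3], [2]]
# H1 = graphsage_mean_layer(H, neighbours, W)
\end{verbatim}
In practice, $\mathbf{W}\xspace$ is learned end-to-end with the supervised or unsupervised losses described above.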
Specifically, the authors proposed the following to compute the representation of node $v$ at layer $l+1$. \begin{equation} \begin{gathered} \mathbf{h}\xspace_v^{l+1} = \sigma \left(\sum_{u \in \mathcal{N}_v \cup v}\alpha_{vu}\mathbf{W}\xspace\mathbf{h}\xspace_u^l\right)\\ \alpha_{vu} = \frac{\exp(\textbf{a}^T\text{LeakyRELU}(\mathbf{W}\xspace\mathbf{h}\xspace_v^l \Vert \mathbf{W}\xspace\mathbf{h}\xspace_u^l))}{\sum_{i \in \mathcal{N}_v \cup v}\exp(\textbf{a}^T\text{LeakyRELU}(\mathbf{W}\xspace\mathbf{h}\xspace_v^l \Vert \mathbf{W}\xspace\mathbf{h}\xspace_i^l))} \end{gathered} \label{eq:gat} \end{equation} $\alpha_{vu}$ indicates the importance of the message from node $u$ to node $v$. Here, \textbf{a} is a trainable weight vector. Additionally, \textsc{GAT}\xspace introduces multi-head attention, similar to \cite{attentionisallyouneed}, to exploit self-attention based learning. With $K$ attention heads, the final aggregation becomes: \begin{equation} \mathbf{h}\xspace_v^{l+1} = \|_{i=1}^{i=K} \sigma \left( \sum_{u \in \mathcal{N}_v \cup v}\alpha_{vu}^i\mathbf{W}\xspace^i\mathbf{h}\xspace_u^l\right) \end{equation} Finally, \cite{DBLP:conf/iclr/XuHLJ19} proves that aggregator functions used in \textsc{GraphSage}\xspace and \textsc{GCN}\xspace, such as max-pool and mean-pool, are less powerful than the sum aggregator for the graph isomorphism task, by showing that mean-pool and max-pool can produce identical representations for different node multi-sets. They propose the following simpler GNN formulation: \begin{equation} \begin{gathered} \mathbf{h}\xspace_v^{l+1} = \text{MLP}^{l+1}\left( (1+\epsilon^{l+1})\mathbf{h}\xspace_v^l+ \sum_{u \in \mathcal{N}_v}\mathbf{h}\xspace_u\right)\\ \textbf{g} = \|_{l=0}^{l=L} \left(\sum_{v \in V}(\mathbf{h}\xspace_v^l)\right) \end{gathered} \end{equation} where $\textbf{g}$ is the graph embedding, $L$ is the number of GNN layers, and $\epsilon$ is a learnable scalar parameter. They also show that this formulation is as powerful as the Weisfeiler-Lehman (WL) test \cite{leman1968reduction} for graph isomorphism when node features come from a countable set. \par The above formulations assign similar embeddings to nodes whose neighbourhoods have similar structure and attributes, even if the nodes are distant in the network. However, in many settings the embeddings need to account for a node's position in the network. Applications like routing, where the number of hops/distance between nodes matters for the end objective, are examples of this requirement; the node's position in the network should also be a factor in the learned embeddings. \textsc{PGNN}\xspace \cite{pgnn} proposed the concept of anchor nodes to learn position-aware node embeddings. Given $K$ anchor nodes $\{v_1,v_2,\ldots,v_K\}$ and the distances $\{d_{uv_1},d_{uv_2},\ldots,d_{uv_K}\}$ of node $u$ from these anchors, node $u$ can be represented by a position-encoded vector $[d_{uv_1},d_{uv_2},\ldots,d_{uv_K}]$ of size $K$. More anchor nodes provide better location estimates across different regions of the network. \textsc{PGNN}\xspace generalizes the concept of an anchor node to an anchor set, which contains a set of nodes; node $v$'s distance from an anchor set is the minimum of its distances to all nodes in the set. We denote the $i^{th}$ anchor set as $S_i$. Each $S_i$ contains nodes sampled from $G$, and different anchor sets may contain different numbers of nodes. These $K$ anchor sets create a $K$-dimensional position-encoded vector for every node.
These vectors are used along with the original node features to encode each node. However, since each dimension of the position vector is linked to an anchor set, permuting the anchor sets (and hence the position vector) should not change its meaning. This constraint requires a permutation-invariant aggregation function in the GNN. \textsc{PGNN}\xspace therefore introduces the following formulation for position-aware node representations. \begin{equation} \begin{gathered} \mathbf{h}\xspace_v^l = \sigma(M_v^l\textbf{w})\\ M_v^l[i] = \text{MEAN}(\{f(v,u,\mathbf{z}\xspace_v^{l-1},\mathbf{z}\xspace_u^{l-1}) \;\mid \forall u \in S_i\})\\ f(v,u,\mathbf{z}\xspace_v^{l-1},\mathbf{z}\xspace_u^{l-1}) = s(v,u)\text{CONCAT}(\mathbf{z}\xspace_v^{l-1},\mathbf{z}\xspace_u^{l-1}) \\ s(v,u) = \frac{1}{d_{sp}(v,u)+1}\\ \mathbf{z}\xspace_v^{l-1} = \text{MEAN}(\{M_v^{l-1}[i] \; \mid \; \forall i \in (0...K-1) \}) \\ \mathbf{z}\xspace_v^0 = \mathbf{x}\xspace_v \end{gathered} \end{equation} where $\mathbf{h}\xspace^L_v$ is the $K$-dimensional position-aware representation of node $v$ and $\textbf{w}$ is a trainable parameter vector. $d_{sp}(v,u)$ is the shortest-path distance between nodes $v$ and $u$; if $d_{sp}$ exceeds a certain threshold, it is treated as infinite, which speeds up the all-pairs shortest path computation. The above GNN methods work well on graphs with high homophily. \cite{Zhu2020BeyondHI} defines \textit{homophily} as the ratio of the number of edges connecting nodes with the same label to the total number of edges. Many networks, such as citation networks, have high homophily, whereas networks such as dating networks have very low homophily, i.e. heterophily \cite{Zhu2020BeyondHI}. They show that state-of-the-art GNN methods work very well on high-homophily networks but perform worse than simple MLPs on heterophilous networks. Their method, \textsc{H2GCN}\xspace, proposes the following three simple modifications to GNNs to improve their performance on heterophilous networks. \begin{enumerate} \item The target node $v$'s embedding (ego embedding) should not be averaged with the neighbourhood embeddings when computing the embedding at the next layer, as done in \textsc{GCN}\xspace; \textsc{GraphSage}\xspace-style concatenation is better. This is similar to the skip-connections used in deep neural networks to increase network depth. \begin{equation} \mathbf{h}\xspace_v^l = \text{COMBINE}(\mathbf{h}\xspace_v^{l-1},\text{AGGR}(\{\mathbf{h}\xspace_u^{l-1}, \forall u \in \mathcal{N}_v\})) \end{equation} \item Instead of using only the 1-hop neighbours' embeddings at each layer to compute the ego-embedding, \textsc{H2GCN}\xspace proposes to use higher-order neighbourhoods as well: \begin{equation} \mathbf{h}\xspace_v^l = \text{COMBINE}(\mathbf{h}\xspace_v^{l-1},\text{AGGR}(\{\mathbf{h}\xspace_u^{l-1}, \forall u \in \mathcal{N}_v^1\}),\text{AGGR}(\{\mathbf{h}\xspace_u^{l-1}, \forall u \in \mathcal{N}_v^2\})...) \end{equation} where $\mathcal{N}^i_v$ denotes the set of nodes that are $i$ hops away from node $v$. \item Instead of using only the last-layer embedding as the final embedding of each node, \textsc{H2GCN}\xspace proposes to combine the representations from all GNN layers. \begin{equation} \mathbf{h}\xspace_v^{\text{final}} = \text{COMBINE}(\mathbf{h}\xspace_v^0,\mathbf{h}\xspace_v^1 \ldots \mathbf{h}\xspace_v^L) \end{equation} where $\mathbf{h}\xspace_v^0=\mathbf{x}\xspace_v$ is the input feature representation of node $v$.
\end{enumerate} \cite{10.1007/978-3-319-93417-4_38} observed that the GNN architectures above apply only to homogeneous networks, where every node and edge is of a single type, for example friendship networks. They therefore proposed a \textsc{GCN}\xspace extension, namely \textsc{RGCN}\xspace, applicable to heterogeneous networks such as knowledge graphs, where entities and relations can be of multiple types. \textsc{RGCN}\xspace suggests that, instead of using the same weight matrix for every neighbor during neighborhood aggregation at each layer, a separate weight matrix should be used for each relation type. Since the number of unique relations and entities can run into the millions, they propose regularizing these matrices either as block-diagonal matrices or by decomposing them over shared basis matrices and learning only the per-relation coefficients. Finally, \cite{Chiang_2019} proposed \textsc{ClusterGCN}\xspace to train GNNs on large-scale networks with millions of nodes and edges. The idea is to first cluster the nodes using any standard clustering approach, then randomly select multiple node groups from the clusters and create the induced subgraph on the chosen nodes. The GNN weights are updated by running gradient descent on this subgraph, and the process of sampling node groups, creating an induced subgraph, and training the GNN is repeated until convergence. \subsection{Deep Generative Models for Static Graphs} So far, we have reviewed static graph representation approaches. We now briefly summarize deep generative models for static graphs. Given a collection of input graphs $\{G_1,G_2,\ldots\}$ assumed to be sampled from an unknown underlying distribution $p_{\text{data}}(G)$, the goal of generative methods is to learn a probability distribution $p_\theta(G)$ that is highly similar to $p_{\text{data}}(G)$ and produces graphs with structural properties close to those of the input graphs. Traditional graph generative models assume some prior structural form of the graph, such as its degree distribution, diameter, community structure, or clustering coefficients. Examples include Erdős-Rényi~\cite{Karonski1997} graphs, small-world models~\cite{small_world}, and scale-free graphs~\cite{albert2002statistical}. Prior assumptions about graph structure can be encoded with these approaches, but they do not suit practical applications like drug discovery, molecular property prediction, or modeling a friendship network, because they cannot automatically learn from data. Learning a generative model over graphs is challenging: the number of nodes varies across the input graphs, the search space and running time for generating edges are often quadratic in the number of nodes, and, under a na\"ive representation, a single graph can be represented by up to $N!$ node orderings (each observed ordering having probability $\frac{1}{N!}$ under a uniform choice). The learned generative model must therefore navigate this large space, a difficulty that does not arise with images, text, and other domains. We initiate the discussion with \textsc{NetGAN}\xspace \cite{netgan}, which learns a generative model from a single input network. Then we compare \textsc{MolGAN}\xspace \cite{molgan}, \textsc{DeepGMG}\xspace \cite{deepgmg}, \textsc{GraphRNN}\xspace \cite{graphrnn}, \textsc{GraphGen}\xspace \cite{graphgen} and \textsc{GRAN}\xspace \cite{gran}.
These methods take a collection of graphs as input to learn the generative model. Among them, \textsc{GraphGen}\xspace and \textsc{GRAN}\xspace are currently the state of the art. \textsc{NetGAN}\xspace samples a collection of random walks of maximum length $T$ using the sampling approach of \textsc{Node2Vec}\xspace and trains a WGAN \cite{arjovsky2017wasserstein} on this collection. A GAN architecture primarily consists of a generator and a discriminator. The discriminator scores the probability that a given random walk is real; it is trained on real sampled walks and walks produced by the generator, and its task is to assign high probabilities to real walks and low probabilities to synthetic walks. The discriminator and generator are trained in tandem until the generator produces walks indistinguishable from real walks and confuses the discriminator. The discriminator is LSTM-based: each node in a sequence is encoded as a one-hot vector of size $N$, and once the LSTM has processed a sequence it outputs a logit that scores the sequence. The generator architecture is trickier since it involves the stochastic operation of sampling the next node during random-walk generation, i.e., $(v_1,v_2 \ldots v_T) \sim \mathcal{G}$. First, a vector $\mathbf{z}\xspace \sim Normal(\textbf{0},\textbf{1})$ is sampled and transformed into a memory vector $\mathbf{m}\xspace_0=f_z(\mathbf{z}\xspace)$, where $f_z$ is an MLP-based function. This $\mathbf{m}\xspace_0$ initializes the LSTM cell which, together with a $\mathbf{0}$ input vector, outputs the next memory state $\mathbf{m}\xspace_1$ and a probability distribution $p_1$ over the next node $v_1$. The next node $v_1$ is sampled from this multinomial distribution $p_1$ and, along with $\mathbf{m}\xspace_1$, is passed to the LSTM cell to output $(\mathbf{m}\xspace_2,p_2)$. This process repeats until the generator has sampled a node sequence of length $T$. To enable backpropagation, sampling the next node $v_i$ from the multinomial distribution $p_i=(p_i(1),p_i(2)\ldots p_i(N))$ is replaced with the Gumbel-softmax trick \cite{jang2016categorical}, which is essentially $p_i^{\ast} = \text{softmax}(\frac{p_i(1)+g_1}{\tau},\frac{p_i(2)+g_2}{\tau} \ldots \frac{p_i(N)+g_N}{\tau})$, where the $g_k$ are sampled from a standard Gumbel distribution (location 0, scale 1). The forward pass uses $\text{argmax}(p_i^{\ast})$, while backpropagation uses the continuous $p_i^{\ast}$. $\tau$ is a temperature parameter: a low $\tau$ behaves like argmax, while a very high $\tau$ approaches a uniform distribution. Once the WGAN is trained, \textsc{NetGAN}\xspace samples a collection of random-walk sequences from the generator and creates a synthetic graph by selecting the top $M$ edges by frequency count. \textsc{NetGAN}\xspace uses multiple graph properties, such as maximum node degree, assortativity, triangle count, power-law exponent, clustering coefficient, and characteristic path length, to compare the synthetic graph $\tilde{G}$ with the original graph $G$. We note that \textsc{NetGAN}\xspace's node space is the same as the node space $V$ of the input graph $G$; due to this limitation, its use is restricted to applications that require samples with properties similar to the input graph over the same node set. Next, we look at \textsc{MolGAN}\xspace \cite{molgan}, which is similar to \textsc{NetGAN}\xspace in its use of a GAN.
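Before that, we give a minimal, self-contained sketch of the Gumbel-softmax relaxation described above. It is written in the standard logit-based form with a straight-through forward pass; this is an illustration under our own naming, not \textsc{NetGAN}\xspace's implementation.
\begin{verbatim}
# Minimal sketch of the Gumbel-softmax relaxation (standard logit form)
# with a straight-through forward pass. Illustrative only.
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=np.random.default_rng(0)):
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))                   # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y_soft = np.exp(y - y.max())
    y_soft = y_soft / y_soft.sum()            # relaxed (differentiable) sample
    y_hard = np.zeros_like(y_soft)
    y_hard[np.argmax(y_soft)] = 1.0           # forward pass: one-hot argmax
    return y_hard, y_soft                     # backward pass would use y_soft

logits = np.log(np.array([0.1, 0.2, 0.7]))    # unnormalized log-probabilities
hard, soft = gumbel_softmax(logits, tau=0.5)
print(hard, soft.round(3))
\end{verbatim}
Low values of the temperature make the relaxed sample nearly one-hot, matching the argmax used in the forward pass, while high temperatures push it towards a uniform distribution.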
\textsc{MolGAN}\xspace takes a collection of graphs $\{G_1, G_2, \ldots\}$ as input instead of a single graph. Further, \textsc{MolGAN}\xspace is specifically designed for molecular graphs, which becomes clear from its design choices.\par \textsc{MolGAN}\xspace utilizes the GAN architecture to learn the generator. The generator $g$ takes a noise vector $\mathbf{z}\xspace \sim Normal(\textbf{0},\textbf{1})$ and transforms it using MLPs into a probabilistic adjacency matrix and node feature matrix. Using the Gumbel-softmax trick, discrete adjacency and node feature matrices are then sampled from these probability matrices. Finally, a GNN-based discriminator tries to distinguish real graphs from generated graphs. Note that in \textsc{MolGAN}\xspace the generator produces a whole graph, whereas in \textsc{NetGAN}\xspace it produced a random walk. Finally, \textsc{MolGAN}\xspace notes that software packages\footnote{http://www.rdkit.org/} are available to evaluate the generated molecules in terms of desired chemical properties; these scores act as rewards that provide additional supervision to the generator. The model's parameters are trained using a deep deterministic reinforcement-learning framework \cite{lillicrap2019continuous}. This method is not scalable since it requires $O(N^2)$ computation and memory. Furthermore, a graph can be represented by $N!$ adjacency matrices, leading to significant training challenges. We now describe methods that address these problems.\par \textsc{DeepGMG}\xspace \cite{deepgmg} views graph generation as a sequential process that generates one node at a time and decides whether to connect the new node to existing nodes based on the current graph state and the new node's state. \textsc{DeepGMG}\xspace employs GNNs to model these states. Specifically, given an existing graph $G=(V, E)$ with $N$ nodes and $M$ edges and a newly added node $v$, it works as follows: \begin{equation} \begin{aligned} ((\mathbf{v}\xspace_1,\mathbf{v}\xspace_2 \ldots \mathbf{v}\xspace_N),\mathbf{v}\xspace_G) &= \text{GNN}^L(G)\\ v_{\text{addnode}} &= \text{MLP}(\mathbf{v}\xspace_G)\\ v_{\text{addedge}} &= \text{MLP}((\mathbf{v}\xspace_1,\mathbf{v}\xspace_2 \ldots \mathbf{v}\xspace_N),\mathbf{v}\xspace_G)\\ s_{u} &= \text{MLP}(\mathbf{v}\xspace_u,\mathbf{v}\xspace_v) \forall u \in V \\ v_{\text{edges}} &= softmax(\textbf{s}) \end{aligned} \end{equation} In each round of node addition, an $L$-layer GNN is run on the existing graph to compute node and graph embeddings. Using the graph embedding, a decision is first made whether to add a new node. If yes, a further decision is made whether to add edges from this node. If so, a score is computed from the embeddings of the new node and each existing node, a probability distribution over the candidate edges of node $v$ is derived, and edges are sampled from it. The process repeats until the model decides not to add a new node. The embedding of a new node is initialized from its features and the graph state. This approach is also computationally expensive, $O(N(M+N))$, since it runs a GNN and an $O(N)$ softmax over existing nodes for every new node; however, it has lower memory requirements since it no longer needs to store the whole adjacency matrix. The model is trained by maximum likelihood over the training graphs.
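To make the control flow of this sequential process concrete, the sketch below is a purely illustrative schematic: untrained random vectors stand in for the trained GNN read-outs and MLPs of the equations above, and all names are hypothetical.
\begin{verbatim}
# Purely illustrative schematic of DeepGMG-style sequential generation.
# Random vectors (w_addnode, w_addedge, w_score) stand in for trained
# read-outs; the mean of node states stands in for the graph embedding.
import numpy as np

rng = np.random.default_rng(1)
D = 8
w_addnode = rng.normal(size=D)     # "add a new node?" scorer
w_addedge = rng.normal(size=D)     # "add edges from the new node?" scorer
w_score = rng.normal(size=2 * D)   # edge-endpoint scorer over (h_u, h_v)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
softmax = lambda x: np.exp(x - x.max()) / np.exp(x - x.max()).sum()

H, E = [], []                                     # node states, edge list
for _ in range(15):                               # cap on generation rounds
    h_G = np.mean(H, axis=0) if H else np.zeros(D)        # graph read-out
    if H and sigmoid(w_addnode @ h_G) < 0.5:              # decide to stop
        break
    v, h_v = len(H), rng.normal(size=D)                   # new node, its state
    H.append(h_v)
    for _ in range(3):                                    # a few edge proposals
        if len(H) < 2 or sigmoid(w_addedge @ h_v) < 0.5:
            break
        scores = np.array([w_score @ np.concatenate([h_u, h_v])
                           for h_u in H[:-1]])
        u = int(rng.choice(len(H) - 1, p=softmax(scores)))  # endpoint choice
        if (u, v) not in E:
            E.append((u, v))
print("generated", len(H), "nodes; edges:", E)
\end{verbatim}
A trained model would replace the random scorers with the GNN and MLP outputs described above and would be fitted by maximum likelihood.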
\textsc{MolGAN}\xspace and \textsc{DeepGMG}\xspace evaluate the performance of the generative models by visual inspection and by quality scores available offline from chemical software packages. \par \textsc{GraphRNN}\xspace \cite{graphrnn} follows a graph representation approach similar to \textsc{DeepGMG}\xspace but replaces the GNN with a recurrent neural architecture and uses a Breadth-First Search (BFS) based graph representation. Furthermore, it introduced a comprehensive evaluation pipeline based on graph structural properties. Its contributions are summarized as follows: \begin{itemize} \item \textbf{Graph Representation:} Unlike previous methods, \textsc{GraphRNN}\xspace works with the BFS node ordering of each permutation, which reduces the number of orderings to handle since many permutations map to a single BFS ordering. Although a graph can still have multiple BFS sequences, the permutation space is drastically reduced. This approach has two benefits. \begin{enumerate} \item Training needs to be performed over the possible BFS sequences instead of all possible graph permutations. \item A major issue with \textsc{DeepGMG}\xspace was that possible edges are computed between each new node and all previous nodes, causing $O(N^2)$ computations. \textsc{GraphRNN}\xspace makes the following observation about any BFS sequence: \textit{whenever a new node $v_i$ is added to the BFS node sequence $(v_1,v_2 \ldots v_{i-1})$ and it does not form an edge with some node $v_j$, $j< i$, then no node $v_k$ with $k\leq j$ will form an edge with $v_i$.} This observation implies that we do not need to consider all previously generated BFS nodes as possible endpoints for edges with the new node. We can empirically estimate a window size $W$ such that only the latest $W$ generated nodes need to be considered for possible edges with the new node, which reduces the computation to $O(WN)$. \end{enumerate} \item \textbf{Hierarchical Recurrent Architecture:} \textsc{GraphRNN}\xspace uses a two-level RNN to model each BFS sequence. The primary RNN decides the new node and its type, where the node types include a stop token that signals completion of the graph generation process. The hidden state of the primary RNN initializes the hidden state of the secondary RNN, which sequentially processes the latest $W$ nodes in the BFS sequence to create possible edges. The two RNNs are trained jointly using a maximum likelihood objective. \item \textbf{Metrics:} \textsc{GraphRNN}\xspace introduced Maximum Mean Discrepancy (MMD) \cite{10.5555/2188385.2188410} based metrics to quantify the performance of a graph generator. A node degree distribution is calculated for each graph in the input set and in the generated set, and MMD is used to compute the distance between the two sets of distributions. \textsc{GraphRNN}\xspace reports MMD distances for degree distributions, clustering coefficient distributions, and counts of 4-node orbits. \end{itemize} \textsc{GRAN}\xspace \cite{gran} also follows a procedure similar to \textsc{DeepGMG}\xspace, learning edges between each new node and previously generated nodes during the generation process. \textsc{GRAN}\xspace views this generation process as creating the lower-triangular part of the adjacency matrix, i.e., generating one row at a time starting from the first row, which requires $O(N)$ sequential computations.
To scale this approach to large graphs ($\sim 5$k nodes), instead of generating one row at a time, they generate blocks of $B$ rows, i.e., $O(N/B)$ sequential steps. Furthermore, they drop the recurrent architecture used in \textsc{DeepGMG}\xspace and \textsc{GraphRNN}\xspace to model the sequential steps, which facilitates parallel training across steps. At each sequential step, the main task is to discover edges among the nodes of the new block and between existing nodes and new nodes. To do this, \textsc{GRAN}\xspace creates augmented edges between the nodes of the new block, as well as augmented edges between the new nodes and all existing nodes. It uses the rows of the lower-triangular adjacency matrix as features for existing nodes and $\textbf{0}$ for new nodes, transforms these features to a low dimension with a transformation matrix, and runs $r$ rounds of GNN updates on this augmented graph to compute each node's embedding. Using these embeddings, a Bernoulli distribution is learned over each augmented edge, and edges are sampled from these learned distributions. We note that \textsc{GRAN}\xspace applies an MLP to the difference of node embeddings when computing messages and attention coefficients. The overall time complexity of \textsc{GRAN}\xspace is the same as that of \textsc{DeepGMG}\xspace, but since the architecture is parallelizable, \textsc{GRAN}\xspace can train on and generate graphs for much larger datasets than \textsc{GraphRNN}\xspace. \par Finally, \textsc{GraphGen}\xspace introduces a graph sequence representation based on minimum DFS codes. The minimum DFS code is a canonical label of a graph that captures both its structure and its node/edge labels; the canonical labels of two isomorphic graphs are identical. A DFS code sequence has length $M$, where $M$ is the number of edges. Each edge $(u,v)$ with node labels $label(u),label(v)$ and edge label $label(uv)$ is represented in the sequence as the tuple $(t_u,t_v, label(u),label(uv),label(v))$, where $t_u$ and $t_v$ are the discovery times of nodes $u$ and $v$ during the DFS traversal. DFS codes can be ordered lexicographically, and the smallest among all possible DFS codes of a graph is called its minimum DFS code. Minimum DFS codes are an interesting concept, and for thorough details we refer to \textsc{GraphGen}\xspace \cite{graphgen}. This representation drastically reduces the number of possible sequence representations per graph, which speeds up training. An LSTM-based sequence generator is trained with a maximum likelihood objective over the minimum DFS codes generated from the input graphs. \subsection{Temporal Graph Representation Learning Methods} We now summarize temporal graph representation learning methods. Overall, these methods fall into two major categories: \textbf{snapshot/discrete-time graph based methods} \cite{dysat,dyngem,dynamictriad,tNodeEmbed,evolvegcn} and \textbf{continuous-time/event-stream based methods} \cite{MDNE,FiTNE,HTNE,jodie,tigecmn,dyrep,tgat,tgn,caw,ige}. We discuss the two categories separately. \subsubsection{Snapshot/Discrete Graph based Methods} For the discussion in this subsection, we assume the following notation. \par A dynamic graph $G$ is represented as a collection of snapshots
$\{G_1,G_2, \ldots, G_T\}$, where each $G_t = (V_t,E_t,\mathbf{A}\xspace_t,\mathbf{X}\xspace_t)$; $V_t$ is the node set at time $t$ and, similarly, $E_t$, $\mathbf{A}\xspace_t$ and $\mathbf{X}\xspace_t$ are the edge set, adjacency matrix and node feature matrix at time $t$. $N_t$ and $M_t$ are the numbers of nodes and edges in the graph $G_t$. \par A na\"ive method runs a static graph embedding approach over each snapshot and aligns the resulting embeddings across snapshots using certain heuristics \cite{hamilton-etal-2016-diachronic}; this is a very expensive operation. \textsc{DynGem}\xspace \cite{dyngem} proposes an autoencoder-based approach that initializes the weights and node embeddings for $G_{t}$ from those of $G_{t-1}$. An MLP-based autoencoder takes the two endpoints $u,v$ of an edge in $G_t$, represented by their neighbourhood vectors $\textbf{s}_u \in \mathcal{R}^{N_t},\textbf{s}_v \in \mathcal{R}^{N_t}$, and computes $d$-dimensional representations $\mathbf{h}\xspace_u,\mathbf{h}\xspace_v$. A decoder then takes $\mathbf{h}\xspace_u,\mathbf{h}\xspace_v$ as input and reconstructs the original $\textbf{s}_u,\textbf{s}_v$ as $\hat{\textbf{s}}_u,\hat{\textbf{s}}_v$. The following loss is optimized at each timestamp $t \in [1\ldots T]$ to learn the parameters. \begin{equation} \begin{gathered} L_t = L^{\text{global}}_t + \beta_1L^{\text{local}}_t + \beta_2L^1_t + \beta_3L^2_t \\ L_t^{\text{global}} = \sum_{u,v \in E_t} \| \hat{\textbf{s}}_u - \textbf{s}_u\|^2 + \| \hat{\textbf{s}}_v - \textbf{s}_v\|^2 \\ L^{\text{local}}_t = \sum_{u,v \in E_t}\|h_u - h_v\|^2 \end{gathered} \end{equation} where $L^1_t,L^2_t$ are $L_1$ and $L_2$ regularization terms on the network weights to reduce over-fitting, $L_t^{\text{global}}$ is the autoencoder reconstruction loss at snapshot $t$, and $L^{\text{local}}_t$ is a first-order proximity loss that preserves the local structure. We note that the autoencoder parameters for $G_t$ are initialized from those for $G_{t-1}$, which stabilizes the graph/node embeddings across consecutive snapshots and reduces the training cost, under the assumption that the graph structure does not change drastically between consecutive snapshots. \textsc{tNodeEmbed}\xspace \cite{tNodeEmbed} is a similar method, but instead of an autoencoder it directly learns a node embedding matrix $\mathbf{W}\xspace_t \in \mathcal{R}^{N\times d}$ for each snapshot by optimizing a node classification or edge reconstruction task. It also uses the following objective to align the consecutive matrices $\mathbf{W}\xspace_t,\mathbf{W}\xspace_{t+1}$: \begin{equation} \begin{gathered} \mathbf{R}\xspace_{t+1} = \argmin_{\mathbf{R}\xspace} (\|\mathbf{W}\xspace_{t+1}\mathbf{R}\xspace - \mathbf{W}\xspace_t\| + \lambda \|\mathbf{R}\xspace^T\mathbf{R}\xspace-\textbf{I} \|)\\ \mathbf{W}\xspace_{t+1} = \mathbf{W}\xspace_{t+1}\mathbf{R}\xspace_{t+1} \end{gathered} \end{equation} where the first term encourages stable consecutive embeddings and the second term requires $\mathbf{R}\xspace_{t+1}$ to be (approximately) a rotation matrix. Further, a recurrent neural network is employed over each node's per-timestamp embeddings to connect the time-dependent embeddings and learn its final representation. \textsc{DynamicTriad}\xspace \cite{dynamictriad} is a similar method based on regularizing the embeddings of consecutive snapshots, but it additionally models the \textit{triadic closure} process. Consider three nodes $(u,v,w)$ in an evolving social network where $(u,w)$ and $(v,w)$ are connected but $(u,v)$ is not, i.e. $w$ is a common friend of $u$ and $v$.
Depending on $w$'s social habits, $w$ may or may not introduce $u$ and $v$ to each other. This implies that the more common neighbours $u$ and $v$ have, the higher the chance that they become connected in the next snapshot. \textsc{DynamicTriad}\xspace models this phenomenon by defining the strength of $w$'s ties with $u,v$ at snapshot $t$ as follows: \begin{equation} \mathbf{s}\xspace_{uvw}^t= \mathbf{w}\xspace_{uw}^t(\mathbf{h}\xspace_w^t-\mathbf{h}\xspace_u^t) + \mathbf{w}\xspace_{vw}^t(\mathbf{h}\xspace_w^t-\mathbf{h}\xspace_v^t) \end{equation} where $\mathbf{s}\xspace_{uvw}^t \in \mathcal{R}^d$ and $\mathbf{w}\xspace_{uw}^t,\mathbf{w}\xspace_{vw}^t$ denote the tie strengths of $w$ with $u$ and $v$ respectively at time $t$. Using this, \textsc{DynamicTriad}\xspace defines the following probability that $(u,v,w)$ becomes a closed triad at snapshot $t+1$, given that it is an open triad at snapshot $t$ with $w$ as the common neighbour. \begin{equation} p^t(u,v,w) = \frac{1}{1+\exp{(-\boldsymbol{\theta}\mathbf{s}\xspace_{uvw}^t)}} \end{equation} Since $u$ and $v$ can have multiple common neighbours, any of them can introduce $u$ and $v$, thereby closing the triads with all of the common neighbours. In the real world, however, it is unknown which neighbour(s) actually closed the triad. To accommodate this, they introduce a vector $\boldsymbol{\alpha_{uv}^t}$ of length $B$, where $B$ is the number of common neighbours of $u,v$, i.e. the size of the set $B^t(u,v) = \{w \mid \;(w,u) \in E_t \wedge (w,v) \in E_t \wedge (u,v) \notin E_t\} $. Formally, $\boldsymbol{\alpha_{uv}^t}=(\alpha_{uvw})_{w \in B^t(u,v) }$, where $\alpha_{uvw}=1$ denotes that $u,v$ connect at time $t+1$ under the influence of $w$. Finally, they introduce the probabilities that $u,v$ do or do not connect at $t+1$, given that they are not connected and participate in open triad(s) at time $t$. \begin{equation} \begin{aligned} p^t_{+}(u,v) &= \sum_{\boldsymbol{\alpha_{uv}^t}\neq \textbf{0}}\prod_{w\in B^t(u,v)} p^t(u,v,w)^{(\alpha_{uvw})} \times (1-p^t(u,v,w))^{(1-\alpha_{uvw})}\\ p^t_{-}(u,v) &= \prod_{w\in B^t(u,v)}(1-p^t(u,v,w)) \end{aligned} \end{equation} where $p_+$ and $p_-$ denote the probabilities of $u,v$ connecting or not connecting at the next snapshot $t+1$. The summation over $\boldsymbol{\alpha}$ iterates over all possible configurations of common neighbour(s) causing the connection between $u$ and $v$; note that at least one entry of $\boldsymbol{\alpha}$ must be non-zero for an edge to be created between $u,v$ at the next step. Finally, their optimization objective is: \begin{equation} \begin{aligned} L &= \sum_{t=1}^{t=T} L^t_{triad} +\beta_0 L^t_{ranking} + \beta_1 L^t_{smooth}\\ L^t_{triad} &= -\sum_{(u,v) \in S^t_+}p^t_{+}(u,v) -\sum_{(u,v) \in S^t_-}p^t_{-}(u,v)\\ L^t_{ranking} &= \sum_{u,v \in E_t,\hat{u},\hat{v} \notin E_t}\mathbf{w}\xspace^t_{uv}\max(0,\|\mathbf{h}\xspace_u^t - \mathbf{h}\xspace_v^t \|^2_2 - \|\mathbf{h}\xspace_{\hat{u}}^t - \mathbf{h}\xspace_{\hat{v}}^t \|^2_2)\\ L^t_{smooth} &= \sum_{u \in V_t}(\| \mathbf{h}\xspace_u^t - \mathbf{h}\xspace_u^{t-1} \|^2_2) \end{aligned} \end{equation} where $S^t_+$ is the set of edges that form at $t+1$ and $S^t_-$ is the set of node pairs that do not form an edge at $t+1$. $L^t_{ranking}$ is a ranking loss over edges that preserves the structural information of the corresponding snapshot, and $L^t_{smooth}$ is an embedding stability constraint on each node's embeddings. These methods cannot directly utilize node features, and they focus only on first-order proximity of the graph structure, ignoring higher-order structure.
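Before moving to GNN-based snapshot models, the following minimal sketch (an illustration, not the authors' code) shows the snapshot-alignment step used by \textsc{tNodeEmbed}\xspace above; it solves the hard orthogonality-constrained variant of the rotation objective in closed form via the SVD (orthogonal Procrustes), which approximates the soft-penalty formulation.
\begin{verbatim}
# Orthogonal Procrustes alignment of consecutive snapshot embeddings.
# Closed-form solution of  argmin_R ||W_{t+1} R - W_t||  s.t.  R^T R = I,
# given by R = U V^T with U S V^T = SVD(W_{t+1}^T W_t). Illustrative only.
import numpy as np

def align(W_next, W_prev):
    U, _, Vt = np.linalg.svd(W_next.T @ W_prev)
    R = U @ Vt                        # orthogonal alignment matrix
    return W_next @ R                 # aligned embeddings for snapshot t+1

rng = np.random.default_rng(0)
W_t = rng.normal(size=(100, 16))      # embeddings at snapshot t
Q = np.linalg.qr(rng.normal(size=(16, 16)))[0]        # random rotation
W_t1 = W_t @ Q + 0.01 * rng.normal(size=(100, 16))    # rotated + noisy t+1
aligned = align(W_t1, W_t)
print(np.linalg.norm(W_t1 - W_t), "->", np.linalg.norm(aligned - W_t))
\end{verbatim}
In practice the aligned per-snapshot matrices are then fed to the recurrent network described above to produce each node's final representation.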
\textsc{EvolveGCN}\xspace \cite{evolvegcn} and \textsc{DySAT}\xspace \cite{dysat} utilize graph neural networks to compute the node embeddings of each snapshot. These methods are therefore additionally capable of using node features and of modelling higher-order neighbourhood structure. We note that the node features themselves may also be dynamic, i.e. they can evolve over time. \par \textsc{EvolveGCN}\xspace is a natural temporal extension of \textsc{GCN}\xspace. Equation \ref{eq:gcn} is rewritten as follows: \begin{equation} \begin{gathered} \mathbf{H}\xspace_t^{l+1} = \sigma (\tilde{\mathbf{A}\xspace}_t\mathbf{H}\xspace^l_t\mathbf{W}\xspace^l_t)\\ \mathbf{H}\xspace_t^{0} = \mathbf{X}\xspace \end{gathered} \end{equation} where the subscript $t$ denotes the corresponding snapshot. We note that in the \textsc{EvolveGCN}\xspace formulation the weights $\mathbf{W}\xspace^l_t \; \forall t \in [1\ldots T],l \in [1..L]$ are not learned as GNN parameters but are the output of an external recurrent network that incorporates information from the current as well as past snapshots. In other words, the parameters of the \textsc{GCN}\xspace at each snapshot are controlled by a recurrent model, while the node embeddings at the corresponding snapshot are computed using these parameters. Specifically, \textsc{EvolveGCN}\xspace proposes two variants for computing $\mathbf{W}\xspace^l_t$. \begin{enumerate} \item This variant treats $\mathbf{W}\xspace^l_t$ as the hidden state of an RNN cell. Specifically, \begin{equation} \mathbf{W}\xspace^l_t = \text{RNN}(H_t^l,\mathbf{W}\xspace^l_{t-1}) \end{equation} where $H_t^l$ is the input to the RNN and $\mathbf{W}\xspace^l_{t-1}$ is the hidden state at the previous timestamp. This variant is useful when node features are informative and play an important role in the end task. \item This variant treats $\mathbf{W}\xspace^l_{t}$ as the output of an RNN cell. Specifically, \begin{equation} \mathbf{W}\xspace^l_t = \text{RNN}(\mathbf{W}\xspace^l_{t-1}) \end{equation} where the input $\mathbf{W}\xspace^l_{t-1}$ is the output of the RNN cell at the previous timestamp. \end{enumerate} The parameters are trained end-to-end using any loss associated with node classification, link classification or link prediction tasks. \par Similar to \textsc{EvolveGCN}\xspace, \textsc{DySAT}\xspace \cite{dysat} is a GNN-based model, built on \textsc{GAT}\xspace. \textsc{DySAT}\xspace runs two attention blocks. First, it runs a \textsc{GAT}\xspace-style GNN over each snapshot separately; this provides static node embeddings for each snapshot that capture structural and attribute-based information. Then, \textsc{DySAT}\xspace runs a temporal \textit{self-attention} block: for each node at time $t$, this module takes all of the node's previous embeddings and computes a new embedding using self-attention, thereby also incorporating the temporal dimension.
Specifically, \begin{itemize} \item \textbf{Structural Attention}: For each $G_t \in G=\{G_1\ldots G_T\}$, $h_v$ in equation \ref{eq:gat} is modified as: \begin{equation} \begin{gathered} \mathbf{h}\xspace_{v_t}^{l+1} = \sigma \left(\sum_{u \in \mathcal{N}_v }\alpha_{vu}\mathbf{W}\xspace\mathbf{h}\xspace_{u_t}^l\right)\\ \alpha_{vu} = \frac{\exp(a_{vu}\textbf{a}^T\text{LeakyRELU}(\mathbf{W}\xspace\mathbf{h}\xspace_{v_t}^l \Vert \mathbf{W}\xspace\mathbf{h}\xspace_{u_t}^l))}{\sum_{i \in \mathcal{N}_v }\exp(a_{vi}\textbf{a}^T\text{LeakyRELU}(\mathbf{W}\xspace\mathbf{h}\xspace_{v_t}^l \Vert \mathbf{W}\xspace\mathbf{h}\xspace_{i_t}^l))} \end{gathered} \end{equation} where $a_{vu}$ is a graph input denoting the weight of the edge $(v,u) \in E_t$. Note that, unlike in \textsc{GAT}\xspace, the node's own message is not included in the aggregation here. \item \textbf{Temporal Self-attention:} Let $\mathbf{H}\xspace_v=\{\mathbf{h}\xspace_{v_1}^L,\mathbf{h}\xspace_{v_2}^L \ldots \mathbf{h}\xspace_{v_T}^L\},\; \mathbf{h}\xspace_{v_t} \in \mathcal{R}^d$ denote the representations of node $v \in G$ learned by structural attention, and let $\mathbf{Z}\xspace_v=\{\mathbf{z}\xspace_{v_1}^L,\mathbf{z}\xspace_{v_2}^L \ldots \mathbf{z}\xspace_{v_T}^L\},\; \mathbf{z}\xspace_{v_t} \in \mathcal{R}^{d'}$ denote the output. $\mathbf{H}\xspace_v \in \mathcal{R}^{T \times d}$ is the input to the temporal self-attention block and $\mathbf{Z}\xspace_v \in \mathcal{R}^{T \times d'}$ is its output; $\mathbf{z}\xspace_{v_t}$ is the final representation of node $v$ at snapshot $t$, which is used for downstream tasks. Following a design similar to \cite{attentionisallyouneed}, $\mathbf{H}\xspace_v$ is used as query, key and value, hence the name \textit{self-attention}. Specifically, $\mathbf{W}\xspace_q,\mathbf{W}\xspace_k,\mathbf{W}\xspace_v \in \mathcal{R}^{d\times d'}$ are matrices that transform $\mathbf{H}\xspace_v$ into the corresponding query, key and value spaces. The query and key are used to compute an attention score of each timestamp $t$ with respect to the previous timestamps (including $t$), and the new value for timestamp $t$ is computed by aggregating the values of those timestamps with these attention weights. Specifically, \begin{equation} \begin{gathered} \mathbf{Z}\xspace_v = \beta_v(\mathbf{H}\xspace_v\mathbf{W}\xspace_v) ,\quad \beta_v^{ij}=\frac{\alpha_v^{ij}}{\sum_{k=1}^{k=T}\alpha_v^{ik}} \\ \alpha_v^{ij} = \frac{(\mathbf{H}\xspace_v\mathbf{W}\xspace_k)(\mathbf{H}\xspace_v\mathbf{W}\xspace_q)^T_{ij}}{\sqrt{d'}} + M_{ij} \end{gathered} \end{equation} where $M_{ij}=0 \; \forall i \leq j$ and $M_{ij}=-\infty\; \forall i > j$. \end{itemize} We note that the entire architecture is trained end-to-end by running random walks over each snapshot and using an unsupervised loss similar to that of \textsc{DeepWalk}\xspace for nodes co-occurring in the walks. \subsubsection{Continuous Time Graph/Event-Stream Graph-based Methods} We now turn our focus to temporal embedding methods for continuous-time graphs, where each edge between two nodes is an instantaneous event. We first discuss methods that model only the evolving network topology \cite{FiTNE, HTNE, MDNE}, and then methods that additionally model graph attributes. The latter include methods restricted to bipartite interactions \cite{tigecmn,jodie} and methods for general interaction networks \cite{tgat,caw,dyrep,ige,tgn}.
\textsc{HTNE}\xspace \cite{HTNE} notes that snapshot-based methods model the temporal network at pre-defined windows, thus ignoring the network/neighborhood formation process. \textsc{HTNE}\xspace remarks that neighborhood formation for each node is a vital process that excites neighborhood formation for other nodes. For example, in a co-authorship network, the co-authors of a Ph.D. student will initially be her advisor or colleagues in the same research lab. As that student later becomes a professor, her students might collaborate with the students of her previous colleagues, thus exciting edges between nodes. Furthermore, the sequence of a Ph.D. student's co-authors indicates her evolving research interests, which in turn influence future co-authors. \par For the discussion below, we denote a temporal graph as $G= (V,E,\mathbf{X}\xspace)$ where $V$ is the set of nodes in the network, $E = \{(u,v,t)\mid u,v \in V,\; t\in \mathcal{R}^{+}\}$, and $\mathbf{X}\xspace \in \mathcal{R}^{N\times F}$ is the node feature matrix, with $N=|V|$ and $F$ the feature dimension. \textsc{HTNE}\xspace views neighbourhood formation as a sequence of events, where each event is an edge formation between a source and a target node at time $t$. The neighbourhood formation sequence $\mathcal{N}(v)$ of a node $v$ can be represented as $\{(u_i,t_i)\mid i=1,2,\ldots,I\}$, where $I$ is the number of interactions/edges of node $v$. Note that a node can interact multiple times with the same node, but at different timestamps. \textsc{HTNE}\xspace treats $\mathcal{N}(v)$ as an event sequence for each node $v$ and models these event sequences using marked temporal point processes (TPPs) \cite{rizoiu2017tutorial}. A TPP is a natural tool for modelling an event sequence $\{(e_1,t_1),(e_2,t_2), \ldots, (e_T,t_T)\}$ in which past events can influence the next event in the sequence. A TPP is usually characterized by its conditional intensity function $\lambda(t)$, which defines the event arrival rate at time $t$ given the past event history $\mathcal{H}_t$, i.e. the expected number of events in an infinitesimal time interval $[t,t+\Delta t]$: \begin{equation} \lambda(t \mid \mathcal{H}_{t})=\lim _{\Delta t \rightarrow 0} \frac{\mathbb{E}\left[N(t+\Delta t)-N(t) \mid \mathcal{H}_{t}\right]}{\Delta t} \end{equation} In particular, \textsc{HTNE}\xspace uses a Hawkes point process to model the neighbourhood formation sequence of each node $v$ in the network. The formulation below defines the Hawkes conditional intensity for an edge between node $v$ and node $u$ at time $t$. \begin{equation} \begin{gathered} \tilde{\lambda}_{u \mid v}(t) = \mu_{u,v} \quad +\quad \sum_{\mathclap{w,t_w \in \{(w,t_w) \in N(v) \wedge t_w <t\}}}\alpha_{w,u}\kappa_v(t-t_w)\\ \mu_{u,v} = -\| \mathbf{h}\xspace_u - \mathbf{h}\xspace_v \|^2 \quad \kappa_v(t-t_w) = \exp(-\delta_v(t-t_w)) \quad \alpha_{w,u} = -\gamma_{w,v}\| \mathbf{h}\xspace_u - \mathbf{h}\xspace_w \|^2\\ \gamma_{w,v} = \frac{\exp(-\| \mathbf{h}\xspace_w - \mathbf{h}\xspace_v \|^2)}{\sum_{{w' \in \{(w,t_w) \in N(v) \wedge t_w <t\}}}\exp(-\| \mathbf{h}\xspace_{w'} - \mathbf{h}\xspace_v \|^2)} \label{eq:htne} \end{gathered} \end{equation} where $\mu_{u,v}$ is the base rate of the edge formation event between nodes $u$ and $v$, $\alpha_{w,u}$ is the importance of node $v$'s historical neighbour $w$ for the possible edge with node $u$, and $\kappa_v$ is a time-decay kernel that reduces the influence of old neighbours on current edge formation.
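The following minimal numerical sketch (illustrative only; the embeddings are random and all names are ours) shows how the conditional intensity $\tilde{\lambda}_{u\mid v}(t)$ defined above can be evaluated from node embeddings and a neighbourhood-formation history.
\begin{verbatim}
# Evaluate the Hawkes conditional intensity of eq. (htne) for a toy example:
# base rate from embedding distance plus history terms weighted by the
# softmax attention gamma and an exponential time-decay kernel. Illustrative.
import numpy as np

def intensity(h, v, u, history, t, delta=1.0):
    """history: list of (w, t_w) in N(v) with t_w < t; h: dict of embeddings."""
    mu = -np.sum((h[u] - h[v]) ** 2)                     # base rate mu_{u,v}
    if not history:
        return mu
    sims = np.array([-np.sum((h[w] - h[v]) ** 2) for w, _ in history])
    gamma = np.exp(sims - sims.max())
    gamma = gamma / gamma.sum()                          # gamma_{w,v} (softmax)
    excite = 0.0
    for (w, t_w), g in zip(history, gamma):
        alpha = -g * np.sum((h[u] - h[w]) ** 2)          # alpha_{w,u}
        excite += alpha * np.exp(-delta * (t - t_w))     # kappa_v(t - t_w)
    return mu + excite

rng = np.random.default_rng(0)
h = {i: rng.normal(size=8) for i in range(5)}            # toy embeddings
hist = [(1, 0.5), (2, 1.2)]                              # (neighbour, timestamp)
lam_tilde = intensity(h, v=0, u=3, history=hist, t=2.0)
print("lambda_tilde =", lam_tilde, " lambda =", np.exp(lam_tilde))
\end{verbatim}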
Finally, the following loss is optimized to train the node embeddings. \begin{equation} \log L = \sum_{v\in V}\sum_{(u,t) \in N(v)} \log\frac{\lambda_{u\mid v}(t)}{\sum_{w\in V}\lambda_{w \mid v}(t)} \quad \lambda_{u\mid v}(t) = \exp(\tilde{\lambda}_{u \mid v}(t)) \label{eq:loss_htne} \end{equation} \textsc{HTNE}\xspace applies an exponential to $\tilde{\lambda}$ so that the conditional intensity is positive, as negative event rates are not possible. Furthermore, the exponential turns the objective into a softmax, which can be optimized using negative sampling, as in the previously discussed approaches, for faster training. \textsc{MDNE}\xspace \cite{MDNE} is a similar approach which, in addition to the edge formation process, models the growth of the network, i.e. the number of edges $e(t)$ at each timestamp $t$. Specifically, given the number of nodes $n(t)$ by time $t$, the number of additional new edges at time $t$ is modelled as: \begin{equation} \Delta e'(t) = r(t)n(t)(\zeta(n(t)-1)^\gamma) \quad r(t)=\frac{\frac{1}{|E|}\sum_{(u,v,t) \in E}\sigma(-\|\mathbf{h}\xspace_u - \mathbf{h}\xspace_v\|^2)}{t^{\theta}} \end{equation} where $\zeta,\gamma,\theta$ are learnable parameters and $r(t)$ encodes the linking rate between nodes in the network. Additionally, neighborhood factors of nodes $u$ and $v$ are included when calculating the intensity function for edge formation between $v$ and $u$ in eq. \ref{eq:htne}. Finally, a loss term on the mean squared error between the predicted and ground-truth number of new edges at each time $t$ is added to eq. \ref{eq:loss_htne}. \textsc{FiTNE}\xspace \cite{FiTNE} is a random walk-based method that defines a $k$-length temporal walk $W$ over the temporal graph $G$ as $\{w_1,w_2\ldots w_k\},\;w_i \in E$, without constraints on the timestamps of consecutive edges. Essentially, \textsc{FiTNE}\xspace follows an approach similar to \textsc{DeepWalk}\xspace and \textsc{Node2Vec}\xspace: it collects a set of random walks from the temporal graph $G$ and learns node embeddings in a skip-gram based framework with a similar unsupervised loss. When running a random walk, a transition probability over consecutive edges is defined as follows: \begin{equation} p(e_{out}\mid e_{in}) = \frac{w(e_{out}\mid e_{in})}{\sum_{e\in I(v_{e_{in}})} w(e \mid e_{in})} \end{equation} where $I(v_{e_{in}})$ is the set of outgoing edges from the target node of $e_{in}$. They propose unbiased as well as time-difference based formulations for the weight $w$. These methods do not utilize graph attributes and generate static node embeddings; in particular, the learned node embeddings are not a function of time. We therefore now summarize methods that utilize graph attributes and learn node embeddings as a function of time $t$. \par \textsc{JODIE}\xspace \cite{jodie} and \textsc{TigeCMN}\xspace \cite{tigecmn} are methods for bipartite interaction networks, with a specific focus on the user-item interaction setting. Their architectures involve design choices specific to users and items, and consequently these methods are not applicable to non-bipartite interaction networks. A major difference between bipartite and non-bipartite networks is the absence of interactions between nodes of the same type (user-user, item-item). \textsc{JODIE}\xspace remarks that existing temporal embedding methods produce a static embedding from the temporal network, as we saw in the previous approaches. In recommendation applications, this means a user receives similar recommendations whether they revisit the site after one hour, one day or one month.
This implies that node embeddings should be a function of time $t$, i.e., a user's node embedding should model the user's changing intent over time. Additionally, \textsc{JODIE}\xspace models the stationary component of the user's intent. Moreover, since users interact with millions of items every day, recommendation time should be sublinear in the number of items. \textsc{JODIE}\xspace's architecture addresses all of these aspects. We use the following notation for a bipartite graph $G=(U,I,E)$: $U$ is the set of user nodes, $I$ is the set of item nodes and $E$ is the set of interactions, where each $e \in E$ denotes an observed interaction and is represented as $e=(u,i,t,\mathbf{x}\xspace), u \in U, i \in I, t \in \mathcal{R}^{+},\mathbf{x}\xspace \in \mathcal{R}^F$. Let $\mathbf{h}\xspace_v(t)$ be the embedding of node $v$ at time $t$, let $\mathbf{h}\xspace_v(t^-)$ be the embedding of node $v$ just before time $t$, i.e. the embedding obtained after its latest interaction at some time $t' < t$, and let $\mathbf{h}\xspace_v$ be the static embedding of node $v$. \textsc{JODIE}\xspace updates the user and item embeddings after observing an interaction $(u,i,t,\mathbf{x}\xspace)$ as follows: \begin{equation} \begin{gathered} \mathbf{h}\xspace_u(t) = \sigma(\mathbf{W}\xspace_1^\text{user}\mathbf{h}\xspace_u(t^-)+\mathbf{W}\xspace_2^\text{user}\mathbf{h}\xspace_i(t^-)+\mathbf{W}\xspace_3^\text{user}\mathbf{x}\xspace+\mathbf{W}\xspace_4^\text{user}\Delta t_u)\\ \mathbf{h}\xspace_i(t) = \sigma(\mathbf{W}\xspace_1^\text{item}\mathbf{h}\xspace_u(t^-)+\mathbf{W}\xspace_2^\text{item}\mathbf{h}\xspace_i(t^-)+\mathbf{W}\xspace_3^\text{item}\mathbf{x}\xspace+\mathbf{W}\xspace_4^\text{item}\Delta t_i)\\ \end{gathered} \end{equation} where $\Delta t_u,\Delta t_i$ are the time differences between $t$ and the last interactions of $u$ and $i$, respectively. The $\mathbf{W}\xspace^\text{user}$ matrices are the parameters of a user RNN and the $\mathbf{W}\xspace^\text{item}$ matrices of an item RNN. We note that the RNNs of all users share the same parameters, and similarly the RNNs of all items share the same parameters. Finally, \textsc{JODIE}\xspace proposes the following projection operator to calculate the projected node embedding at $t+\Delta t$, where $t$ is the last interaction time and $\Delta t$ is the time elapsed since that interaction. \begin{equation} \mathbf{h}\xspace_v^{projected}(t+\Delta t) = (1+\mathbf{W}\xspace_p\Delta t)\mathbf{h}\xspace_v(t) \end{equation} To train the network, \textsc{JODIE}\xspace utilizes the future interactions of user nodes: given user $u$'s last interaction, with item $i$ at time $t$, and its next interaction, with item $j$ at time $t+\Delta t$, the model predicts the item embedding $\mathbf{h}\xspace_j^{predicted}(t+\Delta t)$ just before $t+\Delta t$, which should match the actual embedding of item $j$, $\mathbf{h}\xspace_j \| \mathbf{h}\xspace_j((t+\Delta t)^-)$. The predicted item embedding at time $t+\Delta t$ is calculated from the last interaction of node $u$ at time $t$ with node $i$ as: \begin{equation} \mathbf{h}\xspace_j^{predicted}(t+\Delta t) = \mathbf{W}\xspace_1\mathbf{h}\xspace_u^{projected}(t+\Delta t) +\mathbf{W}\xspace_2\mathbf{h}\xspace_i((t+\Delta t)^-) + \mathbf{W}\xspace_3\mathbf{h}\xspace_u + \mathbf{W}\xspace_4\mathbf{h}\xspace_i+ b \end{equation} The architecture is trained end-to-end using the mean squared error between the predicted and actual item embeddings.
Further, it adds a regularization term on the change between consecutive dynamic embeddings of users and items. \textsc{JODIE}\xspace notes that, since the architecture directly predicts the future item embedding, the nearest item can be retrieved in constant time using \textit{LSH} techniques \cite{7025604}. \textsc{TigeCMN}\xspace \cite{tigecmn} utilizes memory networks to store each node's past interactions, instead of a single latent vector, in order to improve performance. Specifically, \textsc{TigeCMN}\xspace creates a value memory matrix $\mathbf{M}\xspace_u$ for each user $u$ and a value memory matrix $\mathbf{M}\xspace_i$ for each item $i$. These memory matrices have $K$ slots of dimension $d$. Furthermore, there is a key memory matrix $\mathbf{M}\xspace^U$ shared by all users and another key memory matrix $\mathbf{M}\xspace^I$ shared by all items. Given an interaction with feature vector $\mathbf{x}\xspace_{u,i}$ between user $u$ and item $i$ at time $t$, the value memory matrix $\mathbf{M}\xspace_u$ of user $u$ is updated as follows: \begin{equation} \begin{gathered} \mathbf{h}\xspace_{u,i} = g(\mathbf{W}\xspace_2(\text{DROPOUT}(\;\mathbf{W}\xspace_1[\mathbf{x}\xspace_{ui}\| \mathbf{h}\xspace_i \| \Delta u]+b)))\\ s_k = \frac{\mathbf{M}\xspace^U(k)\mathbf{h}\xspace_{ui}^T}{\|\mathbf{M}\xspace^U(k)\|_2\|\mathbf{h}\xspace_{ui}\|_2} \quad w_k = \frac{\exp(s_k)}{\sum_{j=1}^{j=K}\exp(s_j)} \quad k = 1,2 \ldots K \\ \mathbf{e}\xspace_{u,i} = \sigma (\mathbf{w}\xspace_e\mathbf{h}\xspace_{u,i}+ \mathbf{b}\xspace_1) \quad \mathbf{M}\xspace_u(k) = \mathbf{M}\xspace_u(k)\odot (1-w_k\mathbf{e}\xspace_{u,i}) \quad k = 1,2\ldots K\\ \mathbf{a}\xspace_{u,i} = \text{tanh}(\mathbf{w}\xspace_a\mathbf{h}\xspace_{u,i}) \quad \mathbf{M}\xspace_u(k) = \mathbf{M}\xspace_u(k) + \mathbf{a}\xspace_{u,i}w_k \quad k=1,2 \ldots K\\ \end{gathered} \end{equation} where $\mathbf{e}\xspace_{u,i}$ is an erase vector that removes information from $\mathbf{M}\xspace_u$ based on the embedding $\mathbf{h}\xspace_{u,i}$ of the interaction $(u,i,t,\mathbf{x}\xspace_{u,i})$, and $\mathbf{a}\xspace_{u,i}$ is an add vector that writes to $\mathbf{M}\xspace_u$. The same procedure is followed to update the value memory matrix $\mathbf{M}\xspace_i$ of item $i$. To compute user $u$'s embedding at time $t$, its value matrix $\mathbf{M}\xspace_u \in \mathcal{R}^{K\times d}$ is passed through a self-attention layer that provides a context-aware embedding for each slot $k=1,2 \ldots K$; these are then mean-pooled into a dynamic embedding $\mathbf{h}\xspace'_u$, which is concatenated with a static node embedding (computed by a transformation of the one-hot node representation) to give $\mathbf{h}\xspace_u$. Finally, the network is trained end-to-end using the cosine similarity between the user and item of each interaction, together with negative interaction samples, similar to \textsc{GraphSage}\xspace's unsupervised loss. We note that the node embeddings learnt by \textsc{TigeCMN}\xspace are not a function of time, i.e. a node's embedding does not change after its last interaction. Another method, \textsc{IGE}\xspace \cite{ige}, uses a skip-gram \cite{word2vec} style loss to train the embeddings. It first builds an induced list $S_u=\{(u,v_1,t_1),(u,v_2,t_2) \ldots (u,v_L,t_L)\}$ containing all interactions of each node $u$. \textsc{IGE}\xspace defines a context window $W(v)$ of size $C$ for each node $v \in S_u$ using its neighbours in the induced list $S_u$, similar to the word2vec method. Finally, \textsc{IGE}\xspace uses a skip-gram style loss with negative sampling over these induced lists.
Essentially, \textsc{IGE}\xspace treats each induced list as a sentence, i.e. a sequence of words, to train the embeddings. \par We now summarize temporal graph representation methods that utilize both the dynamic graph topology and the associated attributes to learn node representations of general, non-bipartite temporal graphs. Specifically, \textsc{DyRep}\xspace \cite{dyrep} remarks that most temporal graphs exhibit two dynamic processes, possibly evolving at different timescales: an \textit{association} process and a \textit{communication} process. The \textit{association} process captures topological changes in the graph, while the \textit{communication} process captures interactions/information exchange between connected (or possibly disconnected) nodes. \textsc{DyRep}\xspace observes that these two processes are interleaved: an association event impacts future communication between nodes, and a communication event between nodes can excite an association event. \textsc{DyRep}\xspace notes that association events have a more global impact since they change the topology, whereas communication events are local, although they can have indirect global impact by exciting topological changes in the network. Assume $G = (V,E)$, with $E = \{(u,v,t,k)\}$, $u,v \in V$, $t\in [0,T]$, where $k=0$ denotes an association event and $k=1$ a communication event; permanent topological changes thus correspond to $k=0$. The following formulation shows the node embedding update corresponding to an event $e=(u,v,t,k)$. \begin{equation} \mathbf{h}\xspace_v(t) = \sigma(\mathbf{W}\xspace_1\mathbf{h}\xspace_v(t_p^v)+\mathbf{W}\xspace_2\mathbf{z}\xspace_u(t^-)+\mathbf{W}\xspace_3(t-t_p^v)) \quad \mathbf{h}\xspace_v(0) = \mathbf{x}\xspace_v \end{equation} where $t_p^v$ is the time of node $v$'s previous event and $\mathbf{z}\xspace_u(t^-)$ is the aggregated embedding of node $u$'s neighbourhood just before time $t$. The node embedding update embodies three main principles. \begin{enumerate} \item \textbf{Self-Propagation:} A node's representation should evolve from its previous representation. \item \textbf{Localized embedding propagation:} An event (association/communication) between two nodes is the outcome of the neighbourhood of the other node involved, through which information propagates. \item \textbf{Exogenous Drive:} Finally, a global process can update the node representation between successive events involving that node. \end{enumerate} $\mathbf{z}\xspace_u(t^-)$ is calculated from the neighbourhood of node $u$ as: \begin{equation} \mathbf{z}\xspace_u(t^-)=max(\{\sigma(q_{ui}(\mathbf{W}\xspace_4(\mathbf{W}\xspace_5\mathbf{h}\xspace_i(t^-)+\mathbf{b}\xspace))), i \in \mathcal{N}(u)\}) \end{equation} where $\mathbf{h}\xspace_i(t^-)$ is the latest embedding of node $i$ before time $t$, and $q_{ui}$ denotes the weight of each structural neighbour $i$ of node $u$, reflecting the tendency of node $u$ to communicate more with certain associated nodes; i.e. $q_{ui}$ is higher for those neighbours with which node $u$ has communicated more frequently. We note, however, that among frequently communicating neighbours, more weight could arguably be given to those with the most recent exchanges. For the detailed calculation of $q_{ui}$, we refer to \textbf{Algorithm 1} of \textsc{DyRep}\xspace. Finally, \textsc{DyRep}\xspace utilizes temporal point processes to model an event $e=(u,v,t,k)$.
Formally, \begin{equation} \lambda_k^{u,v}(t) = f_k\left(\mathbf{W}\xspace_6^k(\mathbf{h}\xspace_u(t^-)\|\mathbf{h}\xspace_v(t^-))\right) \quad f_k(x) = \phi_k\log(1+\exp(\frac{x}{\phi_k})) \end{equation} where $\mathbf{h}\xspace_v(t^-)$ denotes the most recently updated representation of node $v$ and $f_k$ is a softplus function parameterized for each event type $k$. Finally, \textsc{DyRep}\xspace uses the following loss to train the model parameters. \begin{equation} L = \sum_{(u,v,t,k)\in E} -\log(\lambda_k^{u,v}(t)) + \int_{t=0}^{t=T}\sum_{u \in V}\sum_{v \in V}\lambda^{u,v}_k(t) dt \end{equation} The second term of the loss corresponds to the survival probability of events that do not happen up to time $T$; it is computed using sampling techniques, and for more details on the sampling procedure we refer to Algorithm 2 of \textsc{DyRep}\xspace. Further, this approach can handle unseen nodes, i.e., it supports both transductive and inductive settings. We note that \textsc{DyRep}\xspace's methodology requires datasets that contain both association (evolution) events and communication events. We see this as a major limitation, since most of the available datasets do not have association events. We now focus on methods that require only interaction events and do not explicitly differentiate between association and communication events. \textsc{TGAT}\xspace \cite{tgat} is a \textit{self-attention} based method similar to \textsc{GAT}\xspace, but it uses a novel functional time encoding to map time into an embedding space. While previous approaches projected time into the embedding space using separate weight matrices, \textsc{TGAT}\xspace utilizes \textit{Bochner's theorem} to propose the following time encoding: \begin{equation} \mathbf{h}\xspace_T(t) = \frac{1}{\sqrt{d}}(\cos(\omega_1t),\sin(\omega_1t),\cos(\omega_2t),\sin(\omega_2t) \ldots \cos(\omega_dt),\sin(\omega_dt)) \end{equation} where $d$ is a hyperparameter and $\{\omega_1 \ldots \omega_d\}$ are learnable parameters. For more details on the derivation of this encoding from \textit{Bochner's theorem}, we refer to \cite{xu2019self}. Finally, \textsc{TGAT}\xspace defines the temporal neighbourhood of node $u$ at time $t$ as $\mathcal{N}_u(t)=\{(v,t') \mid (u,v,t')\in E \wedge t' < t\}$.
\textsc{TGAT}\xspace defines the neighbourhood of node $u$ at time $t$ as $\mathcal{N}_u(t)=\{(v,t') \mid (u,v,t')\in E \wedge t' < t\}$ and builds the following temporal \textit{GAT} layer $l$ at time $t$: \begin{equation} \begin{gathered} \tilde{\mathbf{h}\xspace}^{l-1}_v(t) = \mathbf{h}\xspace_v^{l-1}(t) \| \mathbf{h}\xspace_T(t) \\ \alpha_u = \frac{\exp((\mathbf{W}\xspace_{query}\tilde{\mathbf{h}\xspace}^{l-1}_v(t))(\mathbf{W}\xspace_{key}\tilde{\mathbf{h}\xspace}^{l-1}_u(t))^T)}{\sum_{w\in \mathcal{N}_v(t)}\exp((\mathbf{W}\xspace_{query}\tilde{\mathbf{h}\xspace}^{l-1}_v(t))(\mathbf{W}\xspace_{key}\tilde{\mathbf{h}\xspace}^{l-1}_w(t))^T)} \quad u \in \mathcal{N}_v(t)\\ \mathbf{h}\xspace^l_v(t)^{\text{attention}} = \sum_{u\in \mathcal{N}_v(t)} \alpha_u \mathbf{W}\xspace_{value}\tilde{\mathbf{h}\xspace}^{l-1}_u(t)\\ \mathbf{h}\xspace^l_v(t) = \mathbf{W}\xspace^l_2\text{ReLU}(\mathbf{W}\xspace^l_1[\mathbf{h}\xspace^l_v(t)^{\text{attention}}\|\mathbf{x}\xspace_v]+\mathbf{b}\xspace_1^l) +\mathbf{b}\xspace_2^l \quad \quad \mathbf{h}\xspace^0_v(t) = \mathbf{x}\xspace_v \end{gathered} \end{equation} \textsc{TGAT}\xspace extends this formulation to multi-head attention (with $k$ heads) by using separate matrices $\mathbf{W}\xspace_{query},\mathbf{W}\xspace_{key},\mathbf{W}\xspace_{value}$ for each attention head as follows: \begin{equation} \mathbf{h}\xspace^l_v(t) = \mathbf{W}\xspace^l_2\text{ReLU}(\mathbf{W}\xspace^l_1[\mathbf{h}\xspace^l_v(t)^{\text{attention}_1}\|\mathbf{h}\xspace^l_v(t)^{\text{attention}_2}\|\ldots \mathbf{h}\xspace^l_v(t)^{\text{attention}_k} \|\mathbf{x}\xspace_v]+\mathbf{b}\xspace_1^l) +\mathbf{b}\xspace_2^l \end{equation} Finally, $\mathbf{h}\xspace^L_v(t)$ is the node representation of node $v$ at time $t$ produced by an $L$-layer \textsc{TGAT}\xspace. The model is trained, similarly to \textsc{GraphSage}\xspace, using a link prediction or node classification loss. \textsc{TGAT}\xspace's temporal GNN layer is inductive, i.e., it can predict the embedding of unseen nodes since its parameters are not node-dependent. Furthermore, it can incorporate edge features as well as evolving node attributes $\mathbf{x}\xspace_v(t)$. \par \textsc{TGN}\xspace \cite{tgn} attempts to unify the ideas proposed in the previous approaches and provides a general framework for representation learning in continuous-time temporal graphs. This framework is inductive and consists of independent, exchangeable modules. For example, the module that embeds time into a vector is independent of the rest and can be replaced with a domain-specific encoding function. \textsc{TGN}\xspace also defines a new event type, node addition/update $e=(v,t,\mathbf{x}\xspace)$, where a node $v$ with attributes $\mathbf{x}\xspace$ is added or updated in the temporal graph $G$ at time $t$. \textsc{TGN}\xspace defines the following modules (a code sketch of this pipeline is given below): \begin{itemize} \item \textbf{Memory:} A memory vector $\mathbf{s}\xspace_v$ is kept for each node $v$. It stores the compressed information about node $v$ and is updated only during an event involving node $v$. The memory of a new node is initialized to $\mathbf{0}$. \item \textbf{Message Function:} The following messages are computed when an interaction $e=(u,v,t,\mathbf{x}\xspace_e)$ occurs.
\begin{equation} \mathbf{m}\xspace_u(t) = \text{MLP}(\mathbf{s}\xspace_u,\mathbf{s}\xspace_v,\Delta t_u ,\mathbf{x}\xspace_e) \quad \mathbf{m}\xspace_v(t) = \text{MLP}(\mathbf{s}\xspace_v,\mathbf{s}\xspace_u,\Delta t_v ,\mathbf{x}\xspace_e) \end{equation} where $\mathbf{x}\xspace_e$ denotes the corresponding edge attributes and $\Delta t_u$ is the time difference between $t$ and the last interaction time of node $u$. If the event is a node addition/update $e=(v,t,\mathbf{x}\xspace)$, the message $\mathbf{m}\xspace_v = \text{MLP}(\mathbf{s}\xspace_v,\Delta t_v,\mathbf{x}\xspace)$ is computed instead. \item \textbf{Message Aggregation:} \textsc{TGN}\xspace processes interactions in batches rather than one by one in order to parallelize computation. Given $B$ messages for a node $v$ in a batch spanning the interval from $t^{start}$ to $t$, its aggregated message at time $t$ is \begin{equation} \mathbf{m}\xspace^{agg}_v(t) = \text{AGGREGATOR}(\mathbf{m}\xspace_v(t_1),\mathbf{m}\xspace_v(t_2) \ldots \mathbf{m}\xspace_v(t_B)) \quad t^{start} \leq t_1 \leq t_2 \ldots t_B \leq t \end{equation} \item \textbf{Memory Updater:} After computing the aggregated messages for the nodes involved in the events of the current batch, the memory is updated via $\mathbf{s}\xspace_v = \text{MEM}(\mathbf{m}\xspace^{agg}_v(t),\mathbf{s}\xspace_v)$, where $\text{MEM}$ can be any neural architecture. \textsc{TGN}\xspace notes that a recurrent architecture is most suitable since the current memory depends on the previous memory. \item \textbf{Embedding:} This last module computes the embedding of the required nodes using their neighbourhoods and memory states. Specifically, \textsc{TGN}\xspace proposes the following general formulation for the embedding of a node $v$ at time $t$: \begin{equation} \mathbf{h}\xspace_v(t) = \sum_{u\in \mathcal{N}_v(t)}f(\mathbf{s}\xspace_v,\mathbf{s}\xspace_u,\mathbf{x}\xspace_{v,u}(t),\mathbf{x}\xspace_v(t),\mathbf{x}\xspace_u(t)) \end{equation} where $\mathbf{x}\xspace_v(t)$ denotes the latest attributes of node $v$ and $\mathbf{x}\xspace_{v,u}(t)$ denotes the features of the latest interaction between nodes $v$ and $u$. \textsc{TGN}\xspace remarks that a \textsc{TGAT}\xspace layer can be used here, or any simpler approximation depending on the requirements. \textsc{TGN}\xspace also proposes the following simple and fast $L$-layer GNN formulation: \begin{equation} \begin{gathered} \tilde{\mathbf{h}\xspace}^l_v(t) = \text{ReLU}(\sum_{u\in \mathcal{N}_v(t)}\mathbf{W}\xspace_1[\mathbf{h}\xspace_u^{l-1}(t)\| \mathbf{x}\xspace_{v,u} \| \text{TIME-ENCODER}(t-t_{v,u})]) \\ \mathbf{h}\xspace_v^l(t) = \mathbf{W}\xspace_2[\tilde{\mathbf{h}\xspace}^l_v(t) \| \mathbf{h}\xspace_v^{l-1}(t)] \quad \mathbf{h}\xspace_v(t) = \mathbf{h}\xspace_v^{L}(t) \end{gathered} \end{equation} \end{itemize} This architecture can be trained on link prediction or node classification tasks. \textsc{TGN}\xspace observes that, during training, there is a risk of target leakage when predicting the links of the current batch, since the memory of the nodes would be updated using the events of that very batch. To counter this, \textsc{TGN}\xspace maintains a message store which holds, for each node in the graph, the events of the previous batch. Given the current batch during training, the memory of its nodes is first updated using the events from the message store, the loss on the current batch is computed with the link prediction objective, and this loss is used to update the model parameters. The message store is then refreshed for the nodes occurring in the current batch by replacing their stored events with the events of the current batch.
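The following is a minimal NumPy sketch of the message/memory pipeline described above, under simplifying assumptions rather than \textsc{TGN}\xspace's actual implementation: the message function is a placeholder random projection, the aggregator is a plain mean, and the memory updater is a toy gated update standing in for a recurrent cell.
\begin{verbatim}
import numpy as np

d = 8                                      # memory dimension (illustrative)
rng = np.random.default_rng(0)
W_msg = rng.normal(scale=0.1, size=(d, 3 * d + 1))

memory = {}        # node id -> memory vector s_v
last_update = {}   # node id -> time of the last event involving the node

def message(s_a, s_b, dt, x_e):
    # placeholder MLP: concatenate inputs and apply a fixed random projection
    x = np.concatenate([s_a, s_b, [dt], x_e])
    return W_msg @ x

def process_batch(events):
    """events: list of interactions (u, v, t, x_e) within one batch."""
    messages = {}
    for (u, v, t, x_e) in events:
        for a, b in ((u, v), (v, u)):
            s_a = memory.setdefault(a, np.zeros(d))
            s_b = memory.setdefault(b, np.zeros(d))
            dt = t - last_update.get(a, 0.0)
            messages.setdefault(a, []).append(message(s_a, s_b, dt, x_e))
            last_update[a] = t
    for node, msgs in messages.items():
        agg = np.mean(msgs, axis=0)              # AGGREGATOR: mean over the batch
        gate = 1.0 / (1.0 + np.exp(-agg))        # MEM: toy gated update
        memory[node] = gate * agg + (1.0 - gate) * memory[node]

# toy usage: one batch with two interactions carrying d-dimensional edge features
process_batch([(0, 1, 1.0, np.ones(d)), (1, 2, 2.0, np.zeros(d))])
\end{verbatim}
The embedding module (a \textsc{TGAT}\xspace-style layer or the simpler GNN formulation above) would then read these memory vectors together with the node and edge features to produce $\mathbf{h}\xspace_v(t)$.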
\par \textsc{CAW}\xspace \cite{caw} argues that the currently proposed methods work well in the inductive setting only when rich node attributes are available. \textsc{CAW}\xspace remarks that any temporal representation learning method, even in the absence of node features and identities, should, when trained on a temporal graph governed by certain dynamic laws, perform well on an unseen graph governed by the same laws. For example, if a triadic closure process and a feed-forward loop process\footnote{If node $a$ excites nodes $b$ and $c$, then nodes $b$ and $c$ will also link in the future.} govern link formation in the training graph, then a model trained on such data should achieve a similar performance on an unseen graph governed by these two laws, even if node attributes are not available. Current methods rely heavily on attributes and thus, in the inductive setting, cannot model these processes in the absence of node identities/attributes. \textsc{CAW}\xspace proposes a method that anonymizes node identities by exploiting temporal random walks. Specifically, a temporal random walk $W = ((w_0,t_0),(w_1,t_1),\ldots, (w_k,t_k))$, with $t_0 >t_1>\ldots > t_k$ and $(w_{i-1},w_i)\in E$ for all $i$, backtracks in time, i.e., each consecutive edge in the walk has a lower timestamp than the previous one. Further, $S_v(t)$ denotes a collection of $K$ such random walks of length $k$ starting from node $v$ at time $t$, and $g(w,S_v(t))$ is a vector of size $k+1$ whose $i^{\text{th}}$ entry counts the occurrences of node $w$ in the $i^{\text{th}}$ position across all walks $W\in S_v(t)$. Given a target link $(u,v)$ at time $t$, $I_{caw}(w,(S_u(t),S_v(t))) = \{g(w,S_u(t)),g(w,S_v(t))\}$ denotes the relative identity of node $w$ with respect to $S_u(t)$ and $S_v(t)$; here $w$ must occur in at least one random walk $W \in S_u(t) \cup S_v(t)$. \textsc{CAW}\xspace notes that such a representation traces the evolution of motifs in the random walks without explicitly using node identities or attributes. Finally, each random walk $W\in S_u(t) \cup S_v(t)$ can be represented as: \begin{equation} \begin{aligned} W = ((I_{caw}(w_0,(S_u(t),S_v(t))),t_0),\ &(I_{caw}(w_1,(S_u(t),S_v(t))),t_1),\\ &\ldots,(I_{caw}(w_k,(S_u(t),S_v(t))),t_k)) \end{aligned} \end{equation} The following formulation is used to encode the target link involving nodes $u,v$ at time $t$. \begin{equation} \begin{gathered} \mathbf{h}\xspace_W = \text{RNN}(\{(f_1(I_{caw}(w,(S_u(t),S_v(t)))),f_2(t-t')), \forall (w,t') \in W\})\\ f_1(I_{caw}(w,(S_u(t),S_v(t)))) = \text{MLP}(g(w,S_u(t))) + \text{MLP}(g(w,S_v(t))) \\ f_2(\Delta t) = [\cos(\omega_1\Delta t),\sin(\omega_1\Delta t) \ldots \cos(\omega_d\Delta t),\sin(\omega_d\Delta t)]\\ \mathbf{h}\xspace_{S_u(t),S_v(t)} = \frac{1}{2K}\sum_{W\in S_u(t)\cup S_v(t)} \mathbf{h}\xspace_W \end{gathered} \end{equation} These target link embeddings can then be used for link prediction. Finally, \textsc{CAW}\xspace observes that the $f_1$/RNN layers can additionally incorporate node/edge attributes. We note that the \textsc{CAW}\xspace model is not applicable to node classification.
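To illustrate the anonymization step, the following is a minimal NumPy sketch of the positional counting $g(\cdot,\cdot)$ and the relative identity $I_{caw}$; it assumes the temporal walks have already been sampled and omits the walk sampler and the RNN/MLP encoders.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def position_counts(walks, walk_len):
    # g(w, S): for each node w, count its occurrences at each walk position
    counts = defaultdict(lambda: np.zeros(walk_len + 1, dtype=int))
    for walk in walks:                    # walk: [(w_0, t_0), ..., (w_k, t_k)]
        for pos, (node, _) in enumerate(walk):
            counts[node][pos] += 1
    return counts

def relative_identity(node, S_u, S_v, walk_len):
    # I_caw(w, (S_u(t), S_v(t))): pair of positional count vectors
    return (position_counts(S_u, walk_len)[node],
            position_counts(S_v, walk_len)[node])

# toy example: K = 2 backtracking walks of length k = 2 from each endpoint
S_u = [[(0, 9.0), (3, 7.0), (4, 5.0)], [(0, 9.0), (4, 6.0), (3, 2.0)]]
S_v = [[(1, 9.0), (4, 8.0), (3, 4.0)], [(1, 9.0), (3, 6.5), (5, 1.0)]]
print(relative_identity(3, S_u, S_v, walk_len=2))
\end{verbatim}
Each walk is then rewritten with these count-vector pairs in place of node identities and fed to the RNN encoder above.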
\subsection{Deep Generative Models for Temporal Graphs} Research on deep generative models for temporal graphs is scarce and still at a nascent stage. We briefly overview the existing non-neural and neural methods below.\par The Erdős--Rényi model and the small-world model are de-facto generative models for static graphs, but similar models are not available for temporal graphs. A few works have attempted to formulate the generation process of a network in order to understand the spread of diseases via contact in temporal networks. \cite{10.1371/journal.pcbi.1003142} proposes an approach that starts by generating a static network and, for each link of this network, generates an active time span and a start time. It then samples a sequence of contact times from the inter-event time distribution and overlays this sequence with the active time span of each link to obtain an interaction network. \cite{perra2012a} introduces an activity-driven generative approach for temporal networks. At each timestamp, it starts with $N$ disconnected nodes; each node $i$ becomes active with probability $p_i$, its activity potential, and connects to $m$ randomly chosen nodes. The same process is repeated at the next timestamp. \cite{Vestergaard_2014} proposes a memory-based method for both node and link activation. It stores the time elapsed since the last interaction, $\tau_{i}$ for every node $i$ and $\tau_{ij}$ for every link between nodes $i$ and $j$. Initially, all $N$ nodes of the network are inactive. A node becomes active with probability $bf_{node}(\tau_{i})$ and initiates a link with an inactive node $j$ with probability $g_{node}(\tau_{j})g_{link}(\tau_{ij})$; a link deactivates with probability $zf_{link}(\tau_{ij})$. Here $b$ and $z$ are control parameters, and $f$ and $g$ are power-law memory kernels. More recently, \textsc{DYMOND}\xspace \cite{DYMOND} presented a non-neural, 3-node-motif-based approach for the same problem. It assumes that each motif type has a time-independent, exponentially distributed arrival rate and learns the parameters that fit the observed arrival rates. \textsc{TagGen}\xspace \cite{TagGen} models a temporal graph by converting it into an equivalent static graph: node IDs are merged with the timestamps of their interactions (temporal edges), and only those nodes of the resulting static graph that satisfy a specified temporal neighborhood constraint are connected. It performs random walks on this transformed graph and modifies them with heuristic local operations to generate many synthetic random walks. A discriminator is then trained to differentiate actual random walks from synthetic ones. The synthetic random walks classified as real by the discriminator are collected and combined into a synthetic static graph, which is finally converted back to a temporal interaction graph by detaching node identities from time. These approaches suffer from the following limitations: \noindent $\bullet$ {\textbf{Weak Temporal Modeling:} \textsc{DYMOND}\xspace makes two key assumptions: first, the arrival rate of motifs is exponential; and second, the structural configuration of a motif remains the same throughout the observed time horizon. These assumptions do not hold in practice -- motifs may evolve with time and could arrive at time-dependent rates. This type of modeling leads to poor fidelity of the structural and temporal properties of the generated graph. \textsc{TagGen}\xspace, on the other hand, does not model the graph evolution rate explicitly. It assumes that the timestamps in the input graph are discrete random variables, prohibiting \textsc{TagGen}\xspace from generating new (unseen in the source graph) timestamps.
More critically, the generated graph duplicates many edges from the source graph -- our experiments found up to $80\%$ edge overlap between the generated and the source graph. While \textsc{TagGen}\xspace generates graphs that exhibit high fidelity of structural and temporal interaction properties, it unfortunately achieves this by generating graphs that are indistinguishable from the source graph, due to its poor modeling of interaction times.} \noindent $\bullet$ \textbf{Poor Scalability to Large Graphs:} Both \textsc{TagGen}\xspace and \textsc{DYMOND}\xspace are limited to graphs where the number of nodes is less than $\approx$10000 and the number of unique timestamps is below $\approx$200. However, real graphs are not only much larger but also grow with a significantly higher interaction frequency \cite{Paranjape_2017}. In such scenarios, the critical design choice of \textsc{TagGen}\xspace to convert the temporal graph into a static graph fails to scale over long time horizons, since the number of nodes in the resulting static graph grows linearly with the number of timestamps. Further, \textsc{TagGen}\xspace requires computing the inverse of an $N' \times N'$ matrix, where $N'$ is the number of nodes in the equivalent static graph, to impute node-node similarity. This adds a quadratic increase in memory consumption and an even higher cost for the matrix inversion, thus making \textsc{TagGen}\xspace unscalable. On the other hand, \textsc{DYMOND}\xspace has $O(N^3T)$ complexity, where $N$ is the number of nodes and $T$ is the number of timestamps. \noindent $\bullet$ \textbf{Lack of Inductive Modelling:} Inductivity allows the transfer of knowledge to unseen graphs~\cite{hamilton2018inductive}. In the context of generative graph modeling, inductive modeling is required to \textbf{(1)} upscale or downscale the source graph to a generated graph of a different size, and \textbf{(2)} prevent leakage of node identity from the source graph. Both \textsc{TagGen}\xspace and \textsc{DYMOND}\xspace rely on a one-to-one mapping from source graph node IDs to the generated graph and hence are non-inductive. \section{Proposed Work: Scalable Generative Modeling for Temporal Interaction Graphs} We now state the problem for which \textsc{Tigger}~\cite{tigger} proposes a solution. \subsection{Problem formulation} \begin{prob}[Temporal Interaction Graph Generator] \noindent \begin{defn}[Temporal Interaction Graph] \label{def:def_tempInt} A temporal interaction graph is defined as $\mathcal{G} = (\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ is a set of $N$ nodes and $\mathcal{E}$ is a set of $M$ temporal edges $\{(u,v,t) \mid u,v \in \mathcal{V},t \in [0, T] \}$; $T$ is the maximum interaction time. \end{defn} \noindent \textbf{Input:} A temporal interaction graph $\mathcal{G}$. \\ \textbf{Output:} Assume there is a hidden joint distribution of structural and temporal properties from which the given $\mathcal{G}$ has been sampled. Our goal is to learn this hidden distribution; towards that end, we want to learn a generative model $p(\mathcal{G})$ that maximizes the likelihood of generating $\mathcal{G}$.
This generative model, in turn, can be used to generate new graphs that come from the same distribution as $\mathcal{G}$, but not $\mathcal{G}$ itself. \end{prob} The above problem formulation is motivated by the \textit{one-shot generative modelling} paradigm, i.e., it only requires one temporal graph $\mathcal{G}$ to learn the hidden \textit{joint} distribution of structural and temporal interaction graph properties. Defining this joint distribution of temporal and structural properties is hard. In general, these properties are characterized by the inter-interaction time distribution and the evolution of static graph properties such as the degree distribution, the power-law exponent, the number of connected components, the size of the largest connected component, the distribution of pairwise shortest distances, closeness centrality, etc. Typically, a generative model optimizes over one of these properties under the assumption that the remaining properties are correlated and hence implicitly modelled. For example, \textsc{DYMOND}\xspace uses small structural motifs and \textsc{TagGen}\xspace uses random walks over the transformed static graph. \section{Conclusion} This survey presents a unified view of graph representation learning, temporal graph representation learning, graph generative modeling, and temporal graph generative modeling. We first introduced a taxonomy of temporal graphs and their tasks. We then explained static graph embedding techniques and, building on them, temporal graph embedding techniques. Finally, we surveyed graph generative methods and highlighted the lack of similar research on temporal graph generative models. Inspired by this, we introduced the problem statement of our recently published paper \textsc{Tigger}. \end{document}
\begin{document} \title[GM varieties with many symmetries]{ Gushel--Mukai varieties with many symmetries and an explicit irrational Gushel--Mukai threefold} \author[O.\ Debarre]{Olivier Debarre} \thanks{This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Project HyperK --- grant agreement 854361).} \address{Universit\'e de Paris, CNRS, IMJ-PRG, F-75013 Paris, France} \email{{\tt [email protected]}} \author[G.\ Mongardi]{Giovanni Mongardi} \address{Dipartimento di Matematica, Universit\`a degli studi di Bologna, Piazza Di Porta San Donato 5, Bologna, Italia 40126} \email{{\tt [email protected]}} \date{\today} \subjclass[2020]{14E08, 14J45, 14J42, 14J30, 14J35, 14J40, 14J45, 14J50, 14K22, 14K30, 14C25, 14H52, 14J70 } \keywords{Fano varieties, Gushel--Mukai varieties, hyperk\"ahler varieties, EPW sextics, automorphisms, rationality, intermediate Jacobians, abelian varieties with complex multiplication. } \begin{abstract} We construct an explicit complex smooth Fano threefold with Picard number 1, index 1, and degree 10 (also known as a Gushel--Mukai threefold) and prove that it is not rational by showing that its intermediate Jacobian has a faithful $\PSL(2,{\bf F}_{11}) $-action.\ Along the way, we construct Gushel--Mukai varieties of various dimensions with rather large (finite) automorphism groups.\ The starting point of all these constructions is an Eisenbud--Popescu--Walter sextic with a faithful $\PSL(2,{\bf F}_{11}) $-action discovered by the second author in 2013. \end{abstract} \maketitle {\it To Fabrizio Catanese, on the occasion of his 70+1st birthday} \section{Introduction} The problem of the rationality of complex unirational smooth Fano threefolds has now been solved in most cases but there are still some unanswered questions.\ For example, Beauville established in \cite[Theorem.~5.6(ii)]{bea1}, by a degeneration argument using the Clemens--Griffiths criterion, that a {\em general} Fano threefold with Picard number 1, index 1, and degree 10 (also known as a Gushel--Mukai, or GM, threefold) is irrational, but not a single smooth example was known, although it is expected that all of these Fano threefolds are irrational.\ One of the main results of this article is the construction of a complete 2-dimensional family of such examples (Corollary~\ref{coro52}), including one such threefold defined (over~${\bf Q}$) by explicit equations (Section~\ref{se23}, Corollary~\ref{coro53}). 
Our starting point was a remarkable EPW (for Eisenbud--Popescu--Walter) sextic hypersurface $Y_{\mathbb A}\subset {\bf P}^5$, constructed in \cite{monphd}, with a faithful action by the simple group $\mathbb{G}:=\PSL(2,{\bf F}_{11}) $ of order $660$ (Section~\ref{sect32}).\ We prove that the automorphism group of~$Y_{\mathbb A}$ is exactly~$\mathbb{G}$ (Proposition~\ref{prop:all_autom_A}) and that it is the only quasi-smooth EPW sextic with an automorphism of order $11$ (Theorem~\ref{th47}).\ From this sextic, one can construct GM varieties of various dimensions with exotic properties.\ Using \cite{dkeven}, we obtain for example families of GM varieties of dimensions $4$ or~$6$ with middle-degree Hodge groups of maximal rank~22 (Section~\ref{sect46}).\ Another application is the construction of GM varieties with large (finite) automorphism groups.\ The foremost example is a GM fivefold $X^5_{\mathbb A}$ with automorphism group $\mathbb{G}$ (Corollary~\ref{cor48}(2)) but we also construct GM varieties of various dimensions with automorphism groups ${\bf Z}/11{\bf Z}$, $D_{12}$, ${\bf Z}/6{\bf Z}$, ${\bf Z}/3{\bf Z}$, $D_{10}$, ${\bf Z}/5{\bf Z}$, $\mathfrak A_4$, $({\bf Z}/2{\bf Z})^2$, or ${\bf Z}/2{\bf Z}$ (Table~\ref{tabaut}).\ By \cite{dkij}, the intermediate Jacobians of the GM varieties of dimension $3$ or $5$ obtained from the sextic $Y_{\mathbb A}$ are all isomorphic to a fixed principally polarized abelian variety\ $({\mathbb{J}},\theta)$ of dimension~$10$.\ This applies in particular to $X^5_{\mathbb A}$, and the $\mathbb{G}$-action on $X^5_{\mathbb A}$ induces a faithful $\mathbb{G}$-action on $({\mathbb{J}},\theta)$.\ We use this fact to prove that the GM threefolds that we construct from $Y_{\mathbb A}$ are not rational: by the Clemens--Griffiths criterion (\cite[Corollary~3.26]{cg}), it suffices to prove that their (common) intermediate Jacobian $({\mathbb{J}},\theta)$ is not a product of Jacobians of curves.\ For this, we follow \cite{bea2,bea3} and use the fact that $({\mathbb{J}},\theta)$ has ``too many automorphisms'' (because of the $\mathbb{G}$-action).\ Note that the GM threefolds themselves may have no nontrivial automorphisms.\ This is how we produce a complete 2-dimensional family of irrational GM threefolds, all mutually birationally isomorphic. 
The 10-dimensional principally polarized abelian variety\ $({\mathbb{J}},\theta)$ seems an interesting object of study.\ The 10-dimensional complex representation attached to the $\mathbb{G}$-action is irreducible and defined over ${\bf Q}$.\ This implies that $({\mathbb{J}},\theta)$ is indecomposable and isogeneous to the product of $10$ copies of an elliptic curve (Propositions~\ref{prop61} and~\ref{prop62}).\ We conjecture, but were unable to prove, that~$({\mathbb{J}},\theta)$ is isomorphic to an explicit 10-dimensional principally polarized abelian variety\ that we construct in Proposition~\ref{prop63}.\ The situation is reminiscent of that of the Klein cubic threefold $W\subset{\bf P}^4$: Klein proved in~\cite{kle} that $W$ has a faithful linear $\mathbb{G}$-action; one hundred years later, Adler proved in \cite{adl} that the automorphism group of $W$ is exactly~$\mathbb{G}$ and Roulleau showed in \cite{rou} that $W$ is the only smooth cubic threefold with an automorphism of order 11.\ The intermediate Jacobian of~$W$ is a principally polarized abelian variety\ of dimension $5$ isomorphic to the product of $5$ copies of an elliptic curve with complex multiplication and Adler proved in \cite{adls} that it is the only abelian variety\ of dimension 5 with a faithful action of $\mathbb{G}$.\ This is the reason why we call our sextic $Y_{\mathbb A}$ the Klein EPW sextic.\ We also refer to \cite{cks} for the construction of a one-dimensional family of threefolds with $\mathfrak S_6$-actions whose intermediate Jacobians are isogeneous to the product of $5$ copies of varying elliptic curves (\cite[Remark~4.5]{cks}). Our proofs heavily use the construction by O'Grady in~\cite{og7} of canonical double covers of quasi-smooth EPW sextics called double EPW sextics (see also \cite{dkcovers}).\ They are smooth {hyperk\"ahler}\ fourfolds whose automorphisms may, thanks to Verbitsky's Torelli Theorem, be determined using lattice theory.\ We also use the close relationship between EPW sextics and GM varieties developed in \cite{im,dkclass,dkeven,dkmoduli,dkij} and surveyed in \cite{debsur}. The article is organized as follows.\ In Section~\ref{sect2}, we recall basic facts about EPW sextics and GM varieties.\ In Section~\ref{se3}, we describe explicitly the {Klein}\ Lagrangian ${\mathbb A}$ and the Klein EPW sextic~$Y_{\mathbb A}$, and we prove that the EPW sextic $Y_{\mathbb A}$ is quasi-smooth.\ In Section~\ref{sect4}, we prove that the automorphism group of $Y_{\mathbb A}$ is $\mathbb{G}$; we also prove that $Y_{\mathbb A}$ is the only quasi-smooth EPW sextic with an automorphism of order~$11$.\ We also discuss the possible automorphism groups and some Hodge groups of the various GM varieties that can be constructed from the Lagrangian~${\mathbb A}$.\ In Section~\ref{sect5}, we introduce the important surface~$\widetilde{Y}_A^{\ge 2}$ (a double \'etale cover of the singular locus of $Y_{\mathbb A}$) and its Albanese variety $({\mathbb{J}},\theta)$.\ We prove our irrationality results for GM threefolds and discuss the structure of the 10-dimensional principally polarized abelian variety\ $({\mathbb{J}},\theta)$.\ The rest of the article consists of appendices.\ In the long Appendix~\ref{appC}, we gather old and new general results on automorphisms of double EPW sextics and of double EPW surfaces.\ Appendix~\ref{sea2} recalls a few classical facts about representations of the group $\mathbb{G}$.\ Appendix~\ref{b2} discusses decomposition results for abelian varieties with automorphisms. 
\noindent{\bf Notation.} Let $m$ be a positive integer; throughout this article, $V_m$ denotes a complex vector space of dimension~$m$ and we set $\zeta_m:=e^{\frac{2\pi i}{m}}$.\ As we did above, we denote by $\mathbb{G}$ the simple group $\PSL(2,{\bf F}_{11}) $ of order $660$.\ \noindent{\bf Acknowledgements.} We would like to thank B.~Gross, G.~Nebe, D.~Prasad, Yu.~Prokhorov, and O.~Wittenberg for fruitful exchanges.\ Special thanks go to A.~Kuznetsov, whose numerous comments and suggestions helped improve the exposition and the results of this article; in particular, Propositions~\ref{split1} and~\ref{split2} are his. \section{Eisenbud--Popescu--Walter sextics and Gushel--Mukai varieties}\label{sect2} We recall in this section a few basic facts about Eisenbud--Popescu--Walter (or EPW for short) sextics and Gushel--Mukai (or GM for short) varieties. \subsection{EPW sextics and their automorphisms}\label{se1} Let $V_6$ be a $6$-dimensional complex vector space.\ We endow $\bw3V_6$ with the $\bw6V_6$-valued symplectic form defined by wedge product.\ Given a Lagrangian subspace $A\subset \bw3V_6$ and a nonnegative integer $\ell$, one defines (see \cite[Section 2]{og1} or \cite[Appendix~B]{dkclass}) in ${\bf P}(V_6)$ the closed subschemes \begin{equation*}\label{yabot} Y_A^{\ge \ell}:=\bigl\{[x]\in{\bf P}(V_6) \mid \dim\bigl(A\cap (x \wedge\bw{2}{V_6} )\bigr)\ge \ell\bigr\} \end{equation*} and the locally closed subschemes \begin{equation*}\label{yaell} Y_A^\ell :=\bigl\{[x]\in{\bf P}(V_6) \mid \dim\bigl(A\cap (x \wedge\bw{2}{V_6} )\bigr)= \ell\bigr\} = Y_A^{\ge \ell} \smallsetminus Y_A^{\ge \ell + 1}. \end{equation*} We henceforth assume that $A$ contains no decomposable vectors (that is, no nonzero products $x\wedge y\wedge z$).\ The scheme $Y_A:=Y_A^{\ge 1}$ is then an integral sextic hypersurface (called an {\em EPW sextic}) whose singular locus is the integral surface $Y_A^{\ge 2}$; the singular locus of that surface is the finite set $Y_A^{\ge 3}$ (see \cite[Theorem~B.2]{dkclass}) which is empty for $A$ general.\ One has moreover (\cite[Proposition~B.9]{dkclass}) \begin{equation}\label{autya} \Aut(Y_A)=\{ g\in \PGL(V_6)\mid (\bw3g)(A)=A\} \end{equation} and this group is finite. \subsection{GM varieties and their automorphisms}\label{se22n} A (smooth ordinary) GM variety of dimension $n\in\{3,4,5\}$ is the smooth complete intersection, in ${\bf P}(\bw2V_5)$, of the Grassmannian~$\Gr(2,V_5)$ in its Pl\"ucker embedding, a linear space ${\bf P}^{n+4}$, and a quadric.\ It is a Fano variety with Picard number~$1$, index~$n-2$, and degree~$10$. There is a bijection between the set of isomorphism classes of (smooth ordinary) GM varieties~$X$ of dimension $n$ and the set of isomorphism classes of triples $(V_6,V_5,A)$, where $A\subset\bw3 V_6$ is a Lagrangian subspace with no decomposable vectors and $V_5\subset V_6$ is a hyperplane such that \begin{equation}\label{yperp} \dim (A\cap \bw3V_5)=5-n \end{equation} (this bijection was first described in the proof of~\cite[Proposition~2.1]{im} when $n=5$; for the general case, see \cite[Theorem~3.10 and Proposition~3.13(c)]{dkclass} or~\cite[(2)]{debsur}). By \cite[Lemma~2.29 and Corollary~3.11]{dkclass}, we have \begin{equation}\label{autxa} \Aut(X)\simeq \{ g\in \Aut(Y_A)\mid g(V_5)=V_5\}. 
\end{equation} \section{The {Klein}\ Lagrangian}\label{se3} The following construction of an EPW sextic with a faithful $\mathbb{G}$-action first appeared in \cite[Example~4.5.2]{monphd}.\ \subsection{The {Klein}\ Lagrangian ${\mathbb A}$ and the GM fivefold $X_{\mathbb A}^5$}\label{se31} Let $\xi\colon\mathbb{G}\to\GL(V_\xi)$ be the irreducible representation of~$ \mathbb{G}$ of dimension 5 described in Appendix~\ref{sea2}.\ From the existence of a unique (up to multiplication by a nonzero scalar) $\mathbb{G}$-equivariant symmetric isomorphism \begin{equation}\label{defw} w\colon \bw2V_\xi\isomlra \bw2V_\xi^\vee \end{equation} as in~\eqref{defu}, we infer that there is a unique $\mathbb{G}$-invariant quadric \begin{equation}\label{defq} \mathsf{Q} \subset {\bf P}(\bw2V_\xi) \end{equation} and that it is smooth.\ Since its equation does not lie in the image of the $\mathbb{G}$-equivariant morphism $$V_\xi\simeq \bw4V_\xi^\vee \ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow} \Sym^2(\bw2V_\xi^\vee), $$ which is the space of Pl\"ucker quadrics, the quadric $\mathsf{Q}$ does not contain the Grassmannian~$\Gr(2,V_\xi)$.\ Therefore, it defines a GM fivefold \begin{equation}\label{defx} X_{\mathbb A}^5:=\mathsf{Q}\cap \Gr(2,V_\xi) \end{equation} with a faithful $\mathbb{G}$-action (we will show below that $X_{\mathbb A}^5$ is smooth).\ The group $\mathbb{G}$ being simple nonabelian, the representation $\bw5\xi$ is trivial.\ The isomorphism~$w$ from~\eqref{defw} therefore induces an isomorphism of representations \begin{equation}\label{defv} v\colon \bw2V_\xi\isomlra \bw2V_\xi^\vee \otimes \bw5V_\xi \isomlra \bw3 V_\xi. \end{equation} Since $w$ is symmetric, $v$ satisfies $v(x)\wedge y=x\wedge v(y)$ for all $x,y\in \bw2V_\xi$.\ Let $\chi_0\colon \mathbb{G}\to V_{\chi_0}$ be the trivial representation and consider the $\mathbb{G}$-representation $$V_6:=V_{\chi_0}\oplus V_\xi.$$ The decomposition of~$\bw3V_6$ into irreducible $\mathbb{G}$-representations is \begin{equation}\label{deco} \bw3V_6=( V_{\chi_0}\wedge \bw2V_\xi)\oplus \bw3V_\xi \end{equation} and, if $e_0$ is a generator of $V_{\chi_0}$, the Lagrangian subspace ${\mathbb A}\subset \bw3V_6$ associated with the GM fivefold $X^5_{\mathbb A}$ according to the general procedure outlined in Section~\ref{se22n} is the graph $${\mathbb A}:=\{ e_0\wedge x+ v(x)\mid x\in \bw2V_\xi\}$$ of $v$.\ Conversely, $X^5_{\mathbb A}$ is the GM fivefold associated with the Lagrangian ${\mathbb A}$ and the hyperplane \mbox{$V_\xi\subset V_6$} (referring to~\eqref{yperp}, note that ${\mathbb A}\cap \bw3V_\xi=0$). We will use the following notation.\ Let $c$ and $a$ be the elements of $\mathbb{G}$ defined in Appendix~\ref{sea2} and let $(e_1,\dots,e_5)$ be a basis of $V_\xi$ in which $\xi(c)$ and $\xi(a)$ have matrices as in~\eqref{real}.\ Let $(e^\vee_1,\dots,e^\vee_5)$ be the dual basis of~$V_\xi^\vee$.\ We also set $e_{i_1\cdots i_r}=e_{i_1}\wedge \dots \wedge e_{i_r}\in \bw{r}V_6$. \begin{prop}\label{prop:GM5_smooth} The GM fivefold $X^5_{\mathbb A}$ is smooth and the Lagrangian subspace ${\mathbb A}$ contains no decomposable vectors. 
\end{prop} \begin{proof} The basis $(e_{ij})_{1\le i<j\le 5}$ of $\bw{2}V_\xi$ consists of eigenvectors of $\bw{2}\xi(c)$, with eigenvalues all the primitive $11^{\textnormal{th}}$ roots of $1$, and similarly for the dual basis $(e_{ij}^\vee)_{1\le i<j\le 5}$ of $\bw{2}V_\xi^\vee$.\ Looking at the corresponding eigenvalues, we see that we may normalize the isomorphism $w$ in~\eqref{defw} so that it satisfies $w(e_{12})=-e_{13}^\vee$ (both are eigenvectors of $\bw{2}\xi(c)$ with eigenvalue~$\zeta_{11}^{5}$).\ Applying $\bw{2}\xi(a)$, we find $$ w(e_{12})=-e_{13}^\vee,\ w(e_{23})=-e_{24}^\vee,\ w(e_{34})=-e_{35}^\vee,\ w(e_{45})=e_{14}^\vee,\ w(e_{15})=-e_{25}^\vee. $$ Since $w$ is symmetric, we also have $$ w(e_{13})= -e_{12}^\vee,\ w(e_{24})=- e_{23}^\vee,\ w(e_{35})=- e_{34}^\vee,\ w(e_{14})= e_{45}^\vee,\ w(e_{25})= -e_{15}^\vee.$$ The quadric $\mathsf{Q}$ from~\eqref{defq} is therefore defined by \begin{equation}\label{eqQ} x_{12}x_{13}+x_{23}x_{24}+x_{34}x_{35}-x_{45}x_{14}+x_{15}x_{25}=0. \end{equation} A computer check with \cite{m2} now ensures that the GM fivefold $X^5_{\mathbb A}$ defined by~\eqref{defx} is smooth.\ It follows from \cite[Theorem~3.16]{dkclass} that ${\mathbb A}$ contains no decomposable vectors. \end{proof} The group $\mathbb{G}$ acts faithfully on the GM fivefold $X^5_{\mathbb A}$.\ Using the isomorphism~\eqref{autxa}, we see that it also acts faithfully on the EPW sextic~$Y_{\mathbb A}$ by linear automorphisms that fix the hyperplane $V_\xi$.\ More precisely, the representation $\chi_0\oplus \xi\colon \mathbb{G}\hookrightarrow \GL(V_6)$ induces an embedding $ \mathbb{G} \hookrightarrow \Aut(Y_{\mathbb A})\subset \PGL(V_6)$.\ We will prove in Proposition \ref{prop:all_autom_A} that the embedding $ \mathbb{G} \hookrightarrow \Aut(Y_{\mathbb A})$ is in fact an isomorphism. \subsection{Explicit equations}\label{sect32} As we saw in the proof of Proposition~\ref{prop:GM5_smooth}, and with the notation of that proof, the isomorphism $v\colon \bw2V_\xi\isomto \bw3 V_\xi$ from~\eqref{defv} may be defined by \begin{equation}\label{v2} \begin{aligned} v(e_{12})=e_{245},\ v(e_{23})=e_{135},\ v(e_{34})&=e_{124},\ v(e_{45})=e_{235},\ v(e_{15})=-e_{134},\\ v(e_{13})= -e_{345},\ v(e_{24})= -e_{145},\ v(e_{35})&=- e_{125},\ v(e_{14})= e_{123},\ v(e_{25})=e_{234}. \end{aligned} \end{equation} This gives \begin{equation}\label{defA} \begin{aligned} {\mathbb A}= \langle & e_{012}+ e_{245}, e_{013} - e_{345}, e_{014} + e_{123}, e_{015} - e_{134}, e_{023} + e_{135},\\ &\qquad e_{024}- e_{145} , e_{025} + e_{234} , e_{034}+ e_{124} , e_{035}- e_{125} , e_{045}+ e_{235} \rangle. \end{aligned} \end{equation} One can readily see from this that the isomorphism $V_6\isomto V_6^\vee$ that sends $e_0$ to $-e_0^\vee$ and $e_j$ to~$e_j^\vee$ for $j\in\{1,\dots,5\}$ maps ${\mathbb A}$ onto its orthogonal ${\mathbb A}^\bot$, a Lagrangian subspace of $\bw3V_6^\vee$; we say that ${\mathbb A}$ is {\em self-dual.}\ Also, if one starts from the dual representation~$\xi^\vee$, one obtains the same Lagrangian~${\mathbb A}$. 
\begin{prop}\label{yaqs} The EPW sextic~$Y_{\mathbb A}$ is defined by the equation \begin{equation}\label{sextic_equation} \begin{aligned} &x_0^6+ 2x_0^3(x_1x_3^2+x_2x_4^2+x_3x_5^2+x_4x_1^2+x_5x_2^2)-4x_0(x_1^3x_2^2+ x_2^3x_3^2+ x_3^3x_4^2+x_4^3x_5^2+x_5^3x_1^2)\\ &{}+4x_0(x_1x_3x_4^3+x_2x_4x_5^3+x_3x_5x_1^3+ x_4x_1x_2^3 + x_5x_2x_3^3)-12x_0x_1x_2x_3x_4x_5\\ &{}+x_1^2x_3^4+x_2^2x_4^4 +x_3^2x_5^4+x_4^2x_1^4+x_5^2x_2^4 -4(x_1x_4x_5^4+x_2x_5x_1^4+x_3x_1x_2^4+x_4x_2x_3^4+x_5x_3x_4^4) \\ &{}-2(x_1x_3^3x_5^2+x_2x_4^3x_1^2+x_3x_5^3x_2^2+x_4x_1^3x_3^2+x_5x_2^3x_4^2)\\ &{}+6(x_1x_2x_3^2x_4^2+x_2x_3x_4^2x_5^2+x_3x_4x_5^2x_1^2+x_4x_5 x_1^2x_2^2 + x_5x_1x_2^2x_3^2)=0 \end{aligned} \end{equation} in ${\bf P}(V_6)$.\ The scheme $ Y^{\ge 2}_{\mathbb A}$ is a smooth irreducible surface, so that the scheme $Y^{\ge 3}_{\mathbb A}$ is empty.\ \end{prop} \begin{proof} The scheme $Y_{\mathbb A}$ is the locus in ${\bf P}(V_6)$ where the map $$ x\wedge \bw2 V_6\longrightarrow \bw3 V_6/{\mathbb A}$$ drops rank.\ In the decomposition~\eqref{deco}, the second summand is transverse to ${\mathbb A}$ and we can identify $\bw3 V_6/A$ with $\bw3 V_\xi$.\ Moreover, in the affine open subset $U_0$ of ${\bf P}(V_6)$ defined by $x_0\neq 0$, one has $x\wedge \bw2 V_6=x\wedge \bw2 V_\xi$.\ In $U_0$, the scheme $Y_{\mathbb A}$ is therefore the locus where the map $$ x\wedge \bw2 V_\xi\longrightarrow \bw3 V_\xi\xrightarrow{\ v^{-1}\ } \bw2 V_\xi$$ drops rank.\ Concretely, if $x=e_0+x_1e_1+\dots+x_5e_5$, we see, using~\eqref{defA} and~\eqref{v2}, that it maps \begin{equation*} \begin{aligned} e_{12}&\longmapsto x\wedge e_{12}=e_{012}+x_3e_{123}+x_4e_{124}+x_5e_{125}\\ &\longmapsto -e_{245}+x_3e_{123}+x_4e_{124}+x_5e_{125}\\ &\longmapsto -e_{12}+x_3e_{14}+x_4e_{34}-x_5e_{35}. \end{aligned} \end{equation*} All in all, using the basis $(e_{12},e_{13},e_{14},e_{15},e_{23},e_{24},e_{25},e_{34},e_{35},e_{45})$ of $\bw2V_\xi$, one sees that $Y_{\mathbb A}\cap U_0$ is defined as the determinant of the $10\times 10$ matrix \begin{equation*}\label{matrixA} \left( \begin{smallmatrix} -1&0&0&0&0&x_5&-x_4&0&0&x_2\\ 0&-1&0&0&0&0&0&-x_5&x_4&-x_3\\ x_3&-x_2&-1&0&x_1&0&0&0&0&0\\ 0&-x_4&x_3&-1&0&0&0&-x_1&0&0\\ 0&x_5&0&-x_3&-1&0&0&0&x_1&0\\ 0&0&-x_5&x_4&0&-1&0&0&0&-x_1\\ 0&0&0&0&x_4&-x_3&-1&x_2&0&0\\ x_4&0&-x_2&0&0&x_1&0&-1&0&0\\ -x_5&0&0&x_2&0&0&-x_1&0&-1&0\\ 0&0&0&0&x_5&0&-x_3&0&x_2&-1 \end{smallmatrix}\right). \end{equation*} We obtain the equation~\eqref{sextic_equation} by homogenizing this determinant, computed with Macaulay2 (\cite{m2}).\ We then check with Macaulay2 that $\Sing(Y_{\mathbb A})$ is a smooth surface (this reproves that ${\mathbb A}$ contains no decomposable vectors and proves in addition that $Y^{\ge 3}_{\mathbb A}$ is empty). \end{proof} \subsection{The GM threefold $X_{\mathbb A}^3$}\label{se23} We keep the notation above.\ By Proposition \ref{yaqs}, $Y^{\ge 3}_{\mathbb A} $ is empty and, since ${\mathbb A}$ is self-dual, so is $Y^{\ge 3}_{{\mathbb A}^\bot}$.\ For all hyperplanes $V_5\subset V_6$, we thus have \begin{equation}\label{y3vide} \dim ({\mathbb A}\cap \bw3V_5)\le 2. 
\end{equation} Consider the hyperplane $V_5\subset V_6$ spanned by $e_0,\dots,e_4$.\ From the description~\eqref{defA}, one sees that there is an inclusion \begin{equation*} \langle e_{014} + e_{123}, e_{034}+ e_{124}\rangle \subset {\mathbb A}\cap \bw3 V_5 \end{equation*} of vector spaces which, because of the inequality~\eqref{y3vide}, is an equality.\ The associated GM variety is therefore smooth of dimension~$3$ (see Section~\ref{se22n}).\ Using the automorphism $\xi(a)$ of $V_6$ that permutes the vectors $e_1,\dots, e_5$, we see that we get isomorphic GM threefolds if we start from hyperplanes spanned by $e_0$ and any four vectors among $e_1,\dots, e_5$.\ We denote it by~$X^3_{\mathbb A}$. Going through the procedure mentioned in Section~\ref{se22n}, A.~Kuznetsov found that $X^3_{\mathbb A}$ is the intersection, in ${\bf P}(\bw2 V_5)$, of the Grassmannian $\Gr(2,V_5)$, the linear space ${\bf P}^7$ with equations $$ x_{03} + x_{12} = x_{04} - x_{23} = 0, $$ and the quadric with equation $$ x_{01}x_{02} - x_{13}x_{14} - x_{24}x_{34} = 0. $$ \section{EPW sextics and GM varieties with many automorphisms}\label{sect4} As in Section~\ref{se1}, let $V_6$ be a $6$-dimensional complex vector space and let $A\subset \bw3V_6$ be a Lagrangian subspace with no decomposable vectors.\ It defines an integral EPW sextic $Y_A\subset {\bf P}(V_6)$.\ As explained in more detail in Appendix~\ref{se41}, there is a canonical double covering $\pi_A\colon \widetilde{Y}_A\to Y_A $ and, when $Y^{\ge 3}_A=\varnothing$, the fourfold $\widetilde{Y}_A$ is a smooth {hyperk\"ahler}\ variety of K3$^{[2]}$-type.\ \subsection{Automorphisms of the EPW sextic $Y_{\mathbb A}$}\label{sec43} We constructed at the end of Section~\ref{se31} an injection $\mathbb{G}\hookrightarrow \Aut(Y_{\mathbb A})$.\ It follows from Proposition~\ref{yaqs} that the double EPW sextic $\widetilde{Y}_{\mathbb A}$ is smooth and, by Proposition~\ref{split1}, the group $\Aut(Y_{\mathbb A})$ is isomorphic to the group~$\Aut_H^s(\widetilde{Y}_{\mathbb A}) $ of symplectic isomorphisms of $\widetilde{Y}_{\mathbb A}$ that preserve the polarization $H$.\ \begin{prop}\label{prop:all_autom_A} The automorphism group of the Klein EPW sextic $Y_{\mathbb A}$ is isomorphic to $\mathbb{G}$. 
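For readers who wish to double-check the determinant computation in the proof of Proposition~\ref{yaqs} without access to Macaulay2, the following is a minimal SymPy sketch (an independent re-implementation given for illustration, not the computation used in this article): it rebuilds the $10\times 10$ matrix above, computes its determinant, and homogenizes it with respect to $x_0$; the output should agree with~\eqref{sextic_equation} up to a nonzero scalar.
\begin{verbatim}
# Sketch: recompute the affine determinant from the proof of Proposition (yaqs)
# and homogenize it with x0 (SymPy used here in place of Macaulay2).
from sympy import symbols, Matrix, Poly, expand

x0, x1, x2, x3, x4, x5 = symbols('x0 x1 x2 x3 x4 x5')

M = Matrix([
    [ -1,   0,   0,   0,   0,  x5, -x4,   0,   0,  x2],
    [  0,  -1,   0,   0,   0,   0,   0, -x5,  x4, -x3],
    [ x3, -x2,  -1,   0,  x1,   0,   0,   0,   0,   0],
    [  0, -x4,  x3,  -1,   0,   0,   0, -x1,   0,   0],
    [  0,  x5,   0, -x3,  -1,   0,   0,   0,  x1,   0],
    [  0,   0, -x5,  x4,   0,  -1,   0,   0,   0, -x1],
    [  0,   0,   0,   0,  x4, -x3,  -1,  x2,   0,   0],
    [ x4,   0, -x2,   0,   0,  x1,   0,  -1,   0,   0],
    [-x5,   0,   0,  x2,   0,   0, -x1,   0,  -1,   0],
    [  0,   0,   0,   0,  x5,   0, -x3,   0,  x2,  -1],
])

det = Poly(expand(M.det()), x1, x2, x3, x4, x5)
deg = det.total_degree()                    # expected to be 6
sextic = sum(c * x0**(deg - sum(e)) * x1**e[0] * x2**e[1]
             * x3**e[2] * x4**e[3] * x5**e[4]
             for e, c in det.terms())
print(expand(sextic))
\end{verbatim}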
\end{prop} \begin{proof} It is enough to prove that $ \Aut^s_H(\widetilde{Y}_{\mathbb A}) $ is isomorphic to $\mathbb{G}$.\ Let $g\in\Aut^s_H(\widetilde{Y}_{\mathbb A})$.\ It acts on the orthogonal of $H$ in $\Pic(\widetilde{Y}_{\mathbb A} )$ which, by Corollary \ref{th14}, is the rank-20 lattice~${\mathsf S} $ discussed in Section~\ref{secc1} and the action is faithful.\ Let us prove that $g$ acts trivially on the discriminant group~$\Disc({\mathsf S})$.\ By Corollary~\ref{th14}, the lattice $H^\perp\simeq (-2)^{\oplus 2}\oplus E_8(-1)^{\oplus 2}\oplus U^{\oplus 2}\subset H^2(\widetilde{Y}_{\mathbb A},{\bf Z})$ (see~\eqref{defhperp}) primitively contains the lattices $\Tr(\widetilde{Y}_{\mathbb A})\simeq (22)^{\oplus 2}$ and ${\mathsf S}$ and it is a finite extension of their direct sum.\ This extension is obtained by adding to $\Tr(\widetilde{Y}_{\mathbb A})\oplus{\mathsf S}$ two elements $\frac{a_1+b_1}{11}$ and $\frac{a_2+b_2}{11}$, where~$a_1$ and~$a_2$ are orthogonal generators of $\Tr(\widetilde{Y}_{\mathbb A})$ of square $22$, and $b_1$ and~$b_2$ are classes in~${\mathsf S}$ of divisibility 11.\ Since $g$ preserves $H^\perp$ and $\Tr(\widetilde{Y}_{\mathbb A})$, it follows readily that $g(b_i)=b_i+11c_i$ for some $c_i\in{\mathsf S}$, which implies that $g$ acts trivially on $\Disc({\mathsf S})$, as claimed. The proposition follows since, by \cite[Table 1, line 120]{HM}, the group of isometries of ${\mathsf S}$ that act trivially on $\Disc({\mathsf S})$ coincides with $\mathbb{G}$.\ \end{proof} \subsection{GM varieties with many symmetries}\label{sec42n} Proposition~\ref{prop:all_autom_A} can be used to determine the automorphism groups of the GM varieties constructed from the Lagrangian ${\mathbb A}$, and in particular the varieties~$X^5_{\mathbb A}$ and $X^3_{\mathbb A}$ defined in Sections~\ref{se31} and~\ref{se23}.\ By~\eqref{autxa}, all we have to do is determine the stabilizers of hyperplanes in $V_6$ under the $\mathbb{G}$-action.\ Since this action is conjugate to its dual, we might as well determine the stabilizers of lines in $V_6={\bf C} e_0\oplus V_\xi$.\ We proceed in three steps: \begin{itemize} \item determine the various fixed-point sets of all subgroups of $\mathbb{G}$, listed up to conjugacy in \cite[Figure~1]{bue}; \item compute the stabilizers of these fixed-points; \item find in which stratum $Y_{\mathbb A}^\ell$ they lie. 
\end{itemize} A first useful remark is the following: {\em if $g\in\mathbb{G}$ is a nontrivial element of odd order, the fixed-point set of $g$ in~$Y_{\mathbb A}$ is finite.}\ Indeed, we will see below by a case-by-case analysis that the fixed-point set $\Fix(g)$ of $g$ in ${\bf P}(V_6)$ is a union of lines and isolated points.\ Assume that a line $\Delta\subset \Fix(g)$ is contained in $Y_{\mathbb A}$.\ By Proposition~\ref{split1}, $g$ lifts to a symplectic automorphism~$\tilde g$ of~$\widetilde{Y}_{\mathbb A}$ which commutes with its covering involution $\iota$.\ For any $x$ in the curve $ \pi_{\mathbb A}^{-1}(\Delta)\subset \widetilde{Y}_{\mathbb A}$, one has either $\tilde g(x)=x$ or $\tilde g(x)=\iota(x)$, hence $\tilde g^2(x)=x$.\ The curve $ \pi_{\mathbb A}^{-1}(\Delta)\subset \widetilde{Y}_A$ is therefore contained in the fixed-point set of the nontrivial symplectic automorphism $\tilde g^2$.\ But this fixed-point set is, on the one hand, a disjoint union of surfaces and isolated points and, on the other hand, contained in $ \pi_{\mathbb A}^{-1}(\Fix(g^2))$, whose dimension is at most $1$ (because~$g^2$ is again nontrivial of odd order), so we reach a contradiction.\ Moreover, $1$ is not an eigenvalue for the action of $g$ on the tangent space at a fixed-point, hence any line in $\Fix(g)$ meets $Y^1_{\mathbb A}$ and $Y^2_{\mathbb A}$ transversely. Furthermore, since $g$ itself can be written as a square, we see that the fixed-point set of its symplectic lift $\tilde g$ (which has the same order) is the inverse image in $\widetilde{Y}_A$ of $\Fix(g)$. Our second tool will be the Lefschetz topological fixed-point theorem for an automorphism~$g$ {\em with finite fixed-point set} on the regular surface $Y_{\mathbb A}^{ \ge2}$.\ This theorem reads \begin{equation*} \#(\Fix(g)\cap Y_{\mathbb A}^{ \ge2})=\sum_{i=0}^4(-1)^i\Tr (g^*\vert_{H^i(Y_{\mathbb A}^{ \ge2},{\bf Q})})=2+\Tr (g^*\vert_{H^2(Y_{\mathbb A}^{ \ge2},{\bf Q})}). \end{equation*} The group $ \mathbb{G}$ acts on ${\mathbb A}$ (via the representation $\bw2 V_\xi$) and $Y_A^{ \ge2}$ and, by Proposition~\ref{propc7}, the isomorphism $H^2(Y_{\mathbb A}^{ \ge2},{\bf C})\simeq \bw2({\mathbb A}\oplus \bar {\mathbb A})$ from~\eqref{h1} is equivariant for these actions.\ Using the fact that the representation $\bw2 V_\xi$ is self-dual and the formula $$\chi_{\sbw2(\sbw2 V_\xi\oplus \sbw2 V_\xi)}(g) =2\chi_{\sbw2(\sbw2 V_\xi)}(g)+\chi_{ \sbw2V_\xi\otimes \sbw2V_\xi}(g)=2\chi_{ \sbw2 V_\xi}(g)^2-\chi_{ \sbw2V_\xi}(g^2), $$ one can then compute the numbers of fixed points of $g$ in $Y_{\mathbb A}^{ \ge2}$ given in Table~\ref{tabf}. The Lefschetz theorem was also used to the same effect in \cite[Section~6.2]{monphd} on {hyperk\"ahler}\ varieties of K3$^{[2]}$-type.\ It gives, for symplectic automorphisms of $\widetilde{Y}_{\mathbb A}$ of prime order, the number (when finite) of fixed-points on $\widetilde{Y}_{\mathbb A}$.\ By the remark made above, this is the number of fixed points on $Y^{\ge 2}_{\mathbb A}$ (which we get from Table~\ref{tabf}) plus twice the number of fixed points on~$Y^1_{\mathbb A}$.\ So we get from \cite[Section~6.2]{monphd} the following numbers (except for the information between parentheses (when~$g$ has order $2$ or $6$), which will be a consequence of the discussion below---where it will not be used). 
\begin{table}[h] \renewcommand\arraystretch{1.5} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline order of $g$&$11$&$5$&$6$&$3$&$2$ \\ \hline $\# (\Fix (g)\cap Y^{ \ge2}_{\mathbb A})$ &$5$&$2 $&$ 3$ &$3 $ & $(\dim 1)$ \\ \hline $\# (\Fix (g)\cap Y_{\mathbb A})$ &$5$&$8 $&$(7) $&$ 15$& $(\dim 2)$ \\ \hline \end{tabular} \captionsetup{justification=centering} \caption{Number (when finite) of fixed-points on\\ the surface $Y^{ \ge2}_{\mathbb A}$ and the fourfold $Y_{\mathbb A}$}\label{tabf} \end{table} We will see in the discussion below that these sets are in fact always finite, except when~$g$ has order 2.\ We can now go through the list of all subgroups of $\mathbb{G}$ from \cite[Figure~1]{bue} and determine their various fixed-point sets.\ We will use the notation and results of Appendix~\ref{sea2}. \subsubsection{The subgroups $\mathbb{G}$ and $ {\bf Z}/11{\bf Z}\rtimes {\bf Z}/5{\bf Z}$}\label{sec421} The subgroups $ {\bf Z}/11{\bf Z}\rtimes {\bf Z}/5{\bf Z}$ of $\mathbb{G}$ are all conjugate to the subgroup generated by the elements $a$ and $c$ of $\mathbb{G}$.\ We see from~\eqref{real} that their only fixed-point is~$[e_0]$.\ It is on $Y_{\mathbb A}^0$ hence defines a GM fivefold,~$X^5_{\mathbb A}$, already defined in Section~\ref{se31}, with automorphism group $\mathbb{G}$. \subsubsection{The subgroups $ {\bf Z}/11{\bf Z}$}\label{sec422} The subgroups $ {\bf Z}/11{\bf Z} $ of $\mathbb{G}$ are all conjugate to the subgroup generated by the element $c$ of $\mathbb{G}$.\ We see from~\eqref{real} that there are $6$ fixed-points: the point~$[e_0]$ (on~$Y_{\mathbb A}^0$) and 5 other points.\ For these $5$ points, which are all in the same~$\mathbb{G}$-orbit, the stabilizers are exactly $ {\bf Z}/11{\bf Z}$ (because the only nontrivial oversubgroups are $ {\bf Z}/11{\bf Z}\rtimes {\bf Z}/5{\bf Z}$ and $\mathbb{G}$).\ Furthermore, using Table~\ref{tabf}, one sees that they are in $Y_{\mathbb A}^2$ (this was already observed in Section~\ref{se23}).\ So we get isomorphic GM threefolds,~$X^3_{\mathbb A}$, already defined in Section~\ref{se23}, with automorphism groups~$ {\bf Z}/11{\bf Z}$. 
\subsubsection{The subgroups $ {\bf Z}/3{\bf Z}$, $ {\bf Z}/6{\bf Z}$, and $D_{12}$}\label{sec423} The elements of order 6 of $\mathbb{G}$ are all conjugate to the element $b$ of $\mathbb{G}$.\ Since its character in the representation $\xi$ is $1$, it acts on $V_\xi$ with eigenvalues $1,\zeta_6,\zeta_6^2,\zeta_6^4,\zeta^5_6$, for which we choose eigenvectors $w_0,w_1,w_2,w_4,w_5$.\ The fixed-point set of $b$ consists of the line~$\Delta_6=\langle [e_0],[w_0]\rangle$ and the $4$ isolated points $[w_1]$, $[w_2]$, $[w_4]$, $[w_5]$.\ Any involution $\tau$ in $\mathbb{G}$ that, together with~$b$, generates a dihedral group $D_{12}$, exchanges the eigenspaces corresponding to conjugate eigenvalues.\ Looking at the subgroup pattern of $\mathbb{G}$, one sees that the stabilizers of the $4$ isolated points are~$ {\bf Z}/6{\bf Z}$, whereas those of points of $\Delta_6\smallsetminus\{[e_0]\}$ are $D_{12}$ (a maximal proper subgroup).\ The fixed-point set of an element of $\mathbb{G}$ of order 3 (such as~$b^2$; they are all conjugate) is the union of~$\Delta_6$ and two other disjoint lines,~$\Delta_3=\langle [w_1],[w_4]\rangle$ and~$\Delta'_3=\tau(\Delta_3)=\langle [w_2],[w_5]\rangle$.\ The fixed-point set of the subgroup $D_{6}=\langle b^2,\tau\rangle$ is therefore the line $\Delta_6$.\ Consider now the isomorphism of representations $v\colon \bw2V_\xi\isomto \bw3 V_\xi$ from~\eqref{defv}.\ Looking at the eigenspaces for the action of $b$, we see that we can write $$ v(w_0\wedge w_2)=\alpha w_1\wedge w_2\wedge w_5 $$ for some $\alpha\in {\bf C}$.\ By definition of ${\mathbb A}$, this implies $w_2\wedge (e_0\wedge w_0-\alpha w_1\wedge w_5)\in {\mathbb A}$.\ Similarly, one can write $$ v(w_2\wedge w_5)= \beta w_1\wedge w_2\wedge w_4+\gamma w_0\wedge w_2\wedge w_5, $$ for some $\beta,\gamma\in {\bf C} $, so that $w_2\wedge (e_0\wedge w_5+\beta w_1 \wedge w_4+\gamma w_0 \wedge w_5)\in {\mathbb A}$.\ This proves that $[w_2]$ is in~$Y^{\ge2}_{\mathbb A}$, and so is $[w_4]=\tau([w_2])$. Consider the length-$18$ scheme $\Fix(g^2)\cap Y_{\mathbb A}=Y_{\mathbb A}\cap (\Delta_6\cup \Delta_3\cup \Delta'_3)$.\ We see from Table~\ref{tabf} that it has 15 points, 3 of them in $Y^{\ge2}_{\mathbb A}$ (hence nonreduced) and fixed by $g$, therefore $12$ of them in $Y^1_{\mathbb A} $ (reduced by the remark made above), none fixed by $g$.\ Since the set $\Fix(g^2)\cap Y^{\ge2}_{\mathbb A}$ is $\tau$-invariant and contains $[w_2]$ and $ [w_4]$, and $g$ acts as an involution with no fixed-points on the set $\Fix(g^2)\cap Y^1_{\mathbb A}\cap \Delta_3$, whose cardinality is thus even, we see that each line $\Delta_6$, $ \Delta_3$, $ \Delta'_3$ contains a single point of $Y^{\ge2}_{\mathbb A}$ and $4$ points of~$Y^1_{\mathbb A}$; the points $[w_1]$ and $[w_5]$ are in $Y^{0}_{\mathbb A}$.\ In particular, the set $ \Fix (g)\cap Y_{\mathbb A}$ has 7 points, as claimed in Table~\ref{tabf}. So altogether, we get GM varieties of dimensions $3$, $4$, or $5$, with automorphism groups ${\bf Z}/3{\bf Z}$, of dimensions $3$ or $5$ with automorphism groups~${\bf Z}/6{\bf Z}$, and of dimensions $3$ or $4$ with automorphism groups $D_{12}$, and we see that no GM varieties $X_{{\mathbb A},V_5}$ have automorphism groups the dihedral group~$D_{6}$ or the alternating group~$\mathfrak A_5$. 
\subsubsection{The subgroups $ {\bf Z}/5{\bf Z}$ and $D_{10}$} The subgroups $ {\bf Z}/5{\bf Z} $ of $\mathbb{G}$ are all conjugate to the subgroup generated by the element $a$ of $\mathbb{G}$.\ Since its character is $0$, it acts on $V_\xi$ with eigenvalues $1,\zeta_5,\zeta_5^2,\zeta_5^3,\zeta_5^4$.\ Its fixed-point set in ${\bf P}(V_6)$ therefore consists of a line~$\Delta_5$ passing through~$[e_0]$ and~$4$ isolated points.\ Any involution $\tau$ in $\mathbb{G}$ that, together with~$a$, generates a dihedral group~$D_{10}$, exchanges the eigenspaces corresponding to conjugate eigenvalues.\ Looking at the subgroup pattern of $\mathbb{G}$, one sees that the stabilizers of the $4$ isolated points are~$ {\bf Z}/5{\bf Z}$, whereas those of points of~$\Delta_5$ contain $D_{10}$.\ Since we saw above that $\mathfrak A_5$-stabilizers are not possible, the stabilizers are therefore ~$D_{10}$ for all points of $\Delta_5\smallsetminus\{[e_0]\}$. Since $\# (\Fix (g)\cap Y_{\mathbb A})=8$ (Table~\ref{tabf}), one sees that the line $\Delta_5$ meets $Y_{\mathbb A}$ in only 4 points.\ Since $Y^1_{\mathbb A}\cap \Delta_5$ is reduced, at least one of them must be in~$Y_{\mathbb A}^{\ge2}$.\ Among the $4$ isolated fixed-points, the involution $\tau$ acts with no fixed-points on the set of those that are in $Y_{\mathbb A}^{\ge2}$, hence its cardinality is even.\ Since $\# (\Fix (g)\cap Y_{\mathbb A}^{\ge2})=2$ (Table~\ref{tabf}), the only possibility is that $\Delta_5$ contain~$2$ points in~$Y_{\mathbb A}^1$ and 2 points in~$Y_{\mathbb A}^2$, and the $4$ isolated points are in~$Y_{\mathbb A}^1$.\ So altogether, we get GM fourfolds with automorphism groups~${\bf Z}/5{\bf Z}$ and GM varieties of dimensions 3, 4, or~$5$ with automorphism groups $D_{10}$.\ \subsubsection{The subgroups $ {\bf Z}/2{\bf Z}$, $ ({\bf Z}/2{\bf Z})^2$, and $\mathfrak A_4$} Since its character is $1$, any order-2 element $g$ of~$\mathbb{G}$ acts on $V_\xi$ with eigenvalues $1,1,1,-1,-1$.\ Its fixed-point set in ${\bf P}(V_6)$ therefore consists of the disjoint union of a 3-space ${\bf P}(V_4)$ passing through $[e_0]$ and a line $\Delta_2$.\ Double EPW sextics with a symplectic involution were studied in \cite[Theorem~5]{cam} and \cite[Theorem~6.2.3]{monphd}: they prove that the fixed-point set is always the union of a smooth K3 surface and $28$ isolated points.\ By \cite[Proposition~17]{cam} (which holds under some generality assumptions which are satisfied by~${\mathbb A}$ because it contains no decomposable vectors), we obtain: \begin{itemize} \item $\Fix(g)\cap Y_{\mathbb A}$ is the union of a smooth quadric $Q$ and a Kummer quartic $S$, both contained in ${\bf P}(V_4)$, and the 6 distinct points of $Y_{\mathbb A}\cap \Delta_2$; \item $\Fix(g)\cap Y_{\mathbb A}^{\ge2}$ is contained in $E_2$ and is the disjoint union of the smooth curve $Q\cap S$ and the $16$ singular points of $S$.\end{itemize} The fixed K3 surface in $\widetilde{Y}_{\mathbb A}$ mentioned above is a double cover of $Q$ branched along $Q\cap S$.\ The images in $Y_{\mathbb A}$ of the $28$ fixed-points are the $6$ points of $Y_{\mathbb A}\cap \Delta_2$ and the $16$ singular points of $S$. 
The fixed-point set of any subgroup $ ({\bf Z}/2{\bf Z})^2$ of $\mathbb{G}$ is a plane~$\Pi_4$ passing through $[e_0]$ and~$3$ isolated points.\ This plane is contained in ${\bf P}(V_4)$ and contains the line $\Delta_6$ fixed by any $D_{12}$ containing $ ({\bf Z}/2{\bf Z})^2$.\ For points in $\Pi_4\smallsetminus \Delta_6$ and the $3$ isolated points, the stabilizers are either $ ({\bf Z}/2{\bf Z})^2$ or~$\mathfrak A_4$.\ As an $\mathfrak A_4$-representation, $V_6$ splits as the direct sum of the $3$ characters (which span the plane~$\Pi_4$) and the one irreducible representation of dimension~$3$.\ It follows that the fixed-point set of any $\mathfrak A_4$ containing $ ({\bf Z}/2{\bf Z})^2$ has $3$ points (corresponding to the $3$ characters), all in $\Pi_4$.\ One of them is~$[e_0]$ and the stabilizer of the other two is indeed $\mathfrak A_4$. The plane $\Pi_4$ meets $Y_{\mathbb A}$ along the union of the conic $\Pi_4\cap Q$ and the quartic curve $\Pi_4\cap S$.\ Since the $1$-dimensional part of $\Fix(g)\cap Y_A^{\ge2}$ is the smooth octic curve $Q\cap S$, its intersection with the plane~$\Pi_4$ is finite nonempty.\ So we get points in $\Pi_4\smallsetminus \Delta_6$ (with stabilizers $({\bf Z}/2{\bf Z})^2$) in each of the strata. Finally, the two fixed-points of~$\mathfrak A_4$ are not in $Y_{\mathbb A}$: if they were, we would obtain a point of $\widetilde{Y}_{\mathbb A}$ fixed by a symplectic action of $\mathfrak A_4$; however there are no representations of $\mathfrak A_4$ in $\Sp({\bf C}^4)$ without trivial summands, so there are no points in $\widetilde{Y}_{\mathbb A}$ fixed by $\mathfrak A_4$. Therefore, we only get GM varieties of dimension~$5$ with automorphism groups~$\mathfrak A_4$. We sum up our results in a table: \begin{table}[h] \renewcommand\arraystretch{1.5} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline aut. groups&$\mathbb{G}$&${\bf Z}/11{\bf Z}$&$D_{12}$&${\bf Z}/6{\bf Z}$&${\bf Z}/3{\bf Z}$&$D_{10}$&${\bf Z}/5{\bf Z}$&$\mathfrak A_4$&$({\bf Z}/2{\bf Z})^2$&${\bf Z}/2{\bf Z}$&$\{1\}$ \\ \hline $\dim(X_{{\mathbb A},V_5})$ &$5$& $3 $& $ 3$, $4$ & $ 3$, $5$ &$ 3$, $ 4$, $5$&$ 3$, $4$, $5$& $ 4$& $5$& $ 3$, $4$, $5$& $ 3$, $4$, $5$&$ 3$, $4$, $5$ \\ \hline \end{tabular} \captionsetup{justification=centering} \caption{Possible automorphisms groups of (ordinary) GM varieties associated with the Lagrangian ${\mathbb A}$} \label{tabaut} \end{table} \subsection{EPW sextics with an automorphism of order $11$}\label{sect44} We use the injectivity of the period map~\eqref{defp} to characterize quasi-smooth EPW sextics with an automorphism of prime order at least~$11$. \begin{theo}\label{th47} The only quasi-smooth EPW sextic with an automorphism of prime order~$p\ge11$ is the EPW sextic $Y_{\mathbb A}$, and $p=11$. \end{theo} \begin{proof} Let $Y_A$ be a quasi-smooth EPW sextic with an automorphism $g$ of prime order~$p\ge11$.\ By Proposition~\ref{split1}, $g$ lifts to a symplectic automorphism of the same order of the smooth double EPW sextic $\widetilde{Y}_A$ which fixes the polarization~$H$.\ By Corollary~\ref{th14}, the transcendental lattice $\Tr(\widetilde{Y}_A)$ is isomorphic to the lattice $T:= (22)^{\oplus 2}$ and is primitively embedded in the lattice~$H^\bot$, with orthogonal complement isomorphic to ${\mathsf S}$. \begin{lemm} Any two primitive embeddings of $T$ into the lattice $h^\bot$ with orthogonal complements isomorphic to ${\mathsf S}$ differ by an isometry in~$\widetilde O(h^\bot)$. 
\end{lemm} \begin{proof} According to~\cite[Proposition~1.5.1]{nik} (see also \cite[Proposition~2.7]{bcs}), to primitively embed the lattice $T$ into the lattice $h^\bot$, one needs subgroups $K_T\subset \Disc(T)\simeq ({\bf Z}/22{\bf Z})^2$ and $K_{h^\bot}\subset \Disc(h^\bot)\simeq ({\bf Z}/2{\bf Z})^2$ and an isometry $u\colon K_T\isomto K_{h^\bot}$ for the canonical ${\bf Q}/2{\bf Z}$-valued quadratic forms on these groups.\ The discriminant of the orthogonal complement is then \mbox{$22^2\cdot 2^2/\Card(K_T)^2$.}\ In our case, we want this orthogonal complement to be ${\mathsf S}$, with discriminant group $({\bf Z}/11{\bf Z})^2$.\ The only choice is therefore to take $K_T$ to be the $2$-torsion part of $\Disc(T)$ and $K_{h^\bot}=\Disc(h^\bot)$.\ There are only two choices for $u$ and they correspond to switching the two factors of $({\bf Z}/2{\bf Z})^2$.\ Any two such embeddings $T\hookrightarrow h^\bot$ therefore differ by an isometry of $h^\bot$ and, upon composing with the involution of $h^\bot$ that switches the two $(-2)$-factors, we may assume that this isometry is in~$\widetilde O(h^\bot)$. \end{proof} If we fix any embedding $T\hookrightarrow h^\bot$ as in the lemma, the period of $\widetilde{Y}_A$ therefore belongs to the (uniquely defined) image in the quotient $ \widetilde O(h^\bot)\backslash \Omega_{h}$ of the set $ {\bf P}(T\otimes {\bf C})\cap \Omega_h$.\ This set consists of two conjugate points, one on each component of $\Omega_h$, hence they are mapped to the same point in the period domain $ \widetilde O(h^\bot)\backslash \Omega_{h}$.\ The theorem now follows from the injectivity of the polarized period map, which implies that~$\widetilde{Y}_A$ and $\widetilde{Y}_{\mathbb A}$ are isomorphic by an isomorphism that respects the polarizations.\ Since these polarizations define the double covers $\pi_A$ and $\pi_{\mathbb A}$, this isomorphism descends to an isomorphism between~$Y_A$ and $Y_{\mathbb A}$. \end{proof} \begin{coro}\label{cor48} {\rm(1)} The only smooth double EPW sextic with a symplectic automorphism of prime order~$p\ge11$ fixing the polarization $H$ is the {Klein}\ double sextic $\widetilde{Y}_{\mathbb A}$, and $p=11$. {\rm(2)} The only (smooth ordinary) GM varieties with an automorphism of prime order~$p\ge11$ are the GM varieties $X_{\mathbb A}^3$ and $X_{\mathbb A}^5$, and $p=11$. \end{coro} \begin{proof} Part (1) is only a rephrasing of Theorem~\ref{th47}, using the isomorphism $ \Aut_H^s(\widetilde{Y}_A)\isomto \Aut(Y_A)$ from Proposition~\ref{split1}.\ For part (2), let $X$ be a (smooth ordinary) GM variety with an automorphism of prime order~$p\ge11$ and let $A$ be an associated Lagrangian.\ By \eqref{autxa}, the quasi-smooth EPW sextic $Y_A$ also has an automorphism of order~$p$.\ It follows from Theorem~\ref{th47} that we can take $A={\mathbb A}$ and that $p=11$.\ The result now follows from Sections~\ref{sec421} and~\ref{sec422}. 
\end{proof} \subsection{GM varieties of dimensions 4 and 6 with many Hodge classes}\label{sect46} GM sixfolds do not appear in the definition given in Section~\ref{se22n}.\ This is because they are {\em special} (as opposed to {\em ordinary}): they are double covers $\gamma\colon X\to \Gr(2,V_5)$ branched along the smooth intersection of $\Gr(2,V_5)$ with a quadric (a GM fivefold!).\ To the GM fivefold correspond a Lagrangian $A$ and a hyperplane $V_5\subset V_6$ such that $A\cap \bw3V_5=\{0\}$.\ When $X$ is a GM fourfold, we let $\gamma\colon X\to \Gr(2,V_5)$ be the inclusion (in both cases, $\gamma$ is called the Gushel map in \cite{dkclass}). One can use the results of~\cite{dkeven} to construct explicit GM varieties $X$ of even dimensions~$2m\in\{4,6\}$ with groups $\Hdg^m(X):=H^{m,m}(X)\cap H^{2m}(X,{\bf Z})$ of Hodge classes of maximal rank $h^{m,m}(X)=22$ (\cite[Proposition~3.1]{dkeven}).\ The main ingredient is~\cite[Theorem~5.1]{dkeven}: there is an isomorphism $$(H^{2m}(X,{\bf Z})_{00},\smile)\simeq (H^2(\widetilde{Y}_A,{\bf Z})_0,(-1)^{ m-1}q_{BB}) $$ of polarized Hodge structures, where $$H^{2m}(X,{\bf Z})_{00}:=\gamma^*H^{2m}(\Gr(2,V_5),{\bf Z})^\bot\subset H^{2m}(X,{\bf Z})$$ and $H^2(\widetilde{Y}_A,{\bf Z})_0$ is, in our previous notation, $H^\bot\subset H^2(\widetilde{Y}_A,{\bf Z})$.\ If we start from the Lagrangian ${\mathbb A}$ and any hyperplane $V_5\subset V_6$ that satisfies the condition $\dim(A\cap \bw3V_5)=3-m$, we obtain, by Corollary~\ref{th14}, a family (parametrized by the fourfold~$Y_{\mathbb A}^1 $ when $m=2$ and by the fivefold~${\bf P}(V_6)\smallsetminus Y_{\mathbb A}$ when $m=3$) of GM $2m$-folds $X$ that satisfy $$ \begin{aligned} \Hdg^m(X)( (-1)^{m-1})&\simeq \gamma^* H^{2m}(\Gr(2,V_5),{\bf Z})( (-1)^{m-1})\oplus {\mathsf S}\\ &\simeq (2)^{\oplus 2} \oplus {\mathsf S} \\ &\simeq (2)^{\oplus 2}\oplus E_8( -1)^{\oplus 2}\oplus \begin{pmatrix} -2 &-1 \\ -1 & -6 \end{pmatrix}^{\oplus 2}\!\!\!\! , \end{aligned}$$ a rank-$ 22$ lattice, the maximal possible rank (the last isomorphism follows from the last isomorphism in the statement of Corollary~\ref{th14}).\ Indeed, $\Hdg^m(X)((-1)^{ m-1})$ contains the lattice on the right and the latter has no overlattices (its discriminant group has no nontrivial isotropic elements; see Section~\ref{secc1}). \begin{rema}\upshape Take $m=2$.\ The integral Hodge conjecture in degree $2$ for GM fourfolds was recently proved in \cite[Corollary~1.2]{per}.\ Therefore, we get a family (parametrized by the fourfold~$Y_{\mathbb A}^1 $) of GM fourfolds $X$ such that all classes in $\Hdg^2(X)$ are classes of algebraic cycles. \end{rema} \begin{exam}\upshape Take $m=3$ and $V_5=V_\xi$.\ We get a GM sixfold $X^6_{\mathbb A}$ which can be defined inside ${\bf P}({\bf C} e_{00}\oplus \bw2V_\xi)$ by the quadratic equation \begin{equation*} x_{00}^2=x_{12}x_{13}+x_{23}x_{24}+x_{34}x_{35}-x_{45}x_{14}+x_{15}x_{25} \end{equation*} (the right side is the equation~\eqref{eqQ} of the $\mathbb{G}$-invariant quadric $\mathsf{Q}\subset {\bf P}(\bw2V_\xi)$) and the Pl\"ucker quadrics in the $(x_{ij})_{1\le i<j\le 5}$ that define $\Gr(2,V_\xi)$ in ${\bf P}(\bw{2}V_\xi)$.\ Since the equation of $\mathsf{Q}$ is $\mathbb{G}$-invariant, we see that ${\bf Z}/2{\bf Z}\times \mathbb{G}$ acts on ${\bf C} e_{00}\oplus \bw2V_\xi$ component-wise, and this group is~$\Aut(X^6_{\mathbb A})$. 
The integral Hodge conjecture in degree $3$ is not known in general for GM sixfolds $X$, but it was proved in \cite[Corollary~8.4]{per} that the cokernel $V^3(X)$ (the {\em Voisin group}) of the cycle map $$\CH^3(X)\longrightarrow \Hdg^3(X) $$ is $2$-torsion.\ When $X=X^6_{\mathbb A}$, since the cycle map is surjective for $\Gr(2,V_\xi)$, the image of the cycle map, modulo $\gamma^* H^{2m}(\Gr(2,V_5),{\bf Z})$, is a $\mathbb{G}$-invariant, not necessarily saturated, sublattice of ${\mathsf S}$ of index a power of $2$. \end{exam} \section{Irrational GM threefolds}\label{sect5} \subsection{Double EPW surfaces and their automorphisms}\label{se51} Let $Y_A\subset {\bf P}(V_6)$ be a quasi-smooth EPW sextic, where $A\subset \bw3V_6$ is a Lagrangian subspace with no decomposable vectors.\ Its singular locus is the smooth surface $Y_A^{\ge 2}$ and, as explained in Appendix~\ref{sec3}, there is a canonical connected \'etale double covering $\widetilde{Y}_A^{\ge 2}\to Y_A^{\ge 2}$. Let $X$ be any (smooth) GM variety of dimension $3$ or $5$ associated with $A$ and let~$\Jac(X)$ be its intermediate Jacobian.\ It is a 10-dimensional abelian variety\ endowed with a canonical principal polarization $\theta_X$.\ By \cite[Theorem~1.1]{dkij}, there is a canonical principal polarization $\theta$ on $\Alb (\widetilde{Y}_A^{\ge 2})$ and a canonical isomorphism \begin{equation}\label{jxa} (\Jac(X),\theta_X)\isomlra (\Alb (\widetilde{Y}_A^{\ge 2}),\theta) \end{equation} between $10$-dimensional principally polarized abelian varieties.\ By~\eqref{h1}, the tangent spaces at the origin of these abelian varieties\ are isomorphic to $A$. The subgroup $\Aut(X)$ of $\Aut(Y_A)$ (see~\eqref{autxa}) acts faithfully on both $\Jac(X) $ and $\Alb (\widetilde{Y}_A^{\ge 2})$ and, by~Proposition~\ref{propc8}, the isomorphism above is $\Aut(X)$-equivariant. \subsection{Explicit irrational GM threefolds}\label{sec52} Consider the {Klein}\ Lagrangian ${\mathbb A}$.\ By Proposition~\ref{prop:all_autom_A}, we have $\Aut(Y_{\mathbb A})\simeq \mathbb{G}$ and the analytic representation of the action of that group on $\Alb (\widetilde{Y}_{\mathbb A}^{\ge 2})$ is, by~Proposition~\ref{propc7}, the representation of $\mathbb{G}$ on ${\mathbb A}$, that is, the irreducible representation~$\bw2\xi$ of $\mathbb{G}$ (Section~\ref{se31}).\ In particular,~$\mathbb{G}$ acts faithfully on the $10$-dimensional principally polarized abelian variety\ \begin{equation}\label{defj} ({\mathbb{J}},\theta):=(\Alb (\widetilde{Y}_{\mathbb A}^{\ge 2}),\theta) \end{equation} by automorphisms that preserve the principal polarization~$\theta$.\ By Lemma~\ref{lb3}, any $\mathbb{G}$-invariant polarization on ${\mathbb{J}}$ is proportional to~$\theta$.\ \begin{prop}\label{prop61} The principally polarized abelian variety\ $({\mathbb{J}},\theta)$ is indecomposable. \end{prop} \begin{proof} If $({\mathbb{J}},\theta)$ is isomorphic to a product of $m\ge 2$ nonzero indecomposable principally polarized abelian varieties, such a decomposition is unique up to the order of the factors, hence induces a morphism $u\colon\mathbb{G}\to\mathfrak S_m$ (the group $\mathbb{G}$ permutes the factors).\ Since the analytic representation is irreducible, the image of $u$ is nontrivial and, the group $\mathbb{G}$ being simple, $u$ is injective; but this is impossible because~$\mathbb{G}$ contains elements of order $11$, whereas $\mathfrak S_m$ does not, since $m\le 10$. \end{proof} We can now prove our main result.
\begin{theo}\label{main} Any smooth GM threefold associated with the Lagrangian ${\mathbb A}$ is irrational. \end{theo} \begin{proof} Let $X$ be such a threefold.\ By Proposition~\ref{propc7}, the isomorphism $(\Jac(X),\theta_X)\isomto ( {\mathbb{J}},\theta)$ in~\eqref{jxa} is $\mathbb{G}$-equivariant.\ We follow \cite{bea2,bea3}: to prove that~$X$ is not rational, we apply the Clemens--Griffiths criterion (\cite[Corollary~3.26]{cg}); in view of Proposition~\ref{prop61}, it suffices to prove that $( {\mathbb{J}},\theta)$ is not the Jacobian of a smooth projective curve.\ Suppose $( {\mathbb{J}},\theta)\simeq (\Jac(C),\theta_C)$ for some smooth projective curve $C$ of genus $10$.\ The group~$\mathbb{G}$ then embeds into the group of automorphisms of $(\Jac(C),\theta_C)$; by the Torelli theorem, this group is isomorphic to $\Aut(C)$ if $C$ is hyperelliptic and to $\Aut(C)\times{\bf Z}/2{\bf Z}$ otherwise.\ Since any morphism from $\mathbb{G}$ to ${\bf Z}/2{\bf Z}$ is trivial, we see that $\mathbb{G}$ is a subgroup of $\Aut(C)$.\ This contradicts the fact that the automorphism group of a curve of genus $10$ has order at most $432$ (\cite{lmfd}). \end{proof} \begin{coro}\label{coro52} There exists a complete family, with finite moduli morphism, parametrized by the smooth projective surface $Y^{\ge2}_{{\mathbb A}}$, of irrational smooth ordinary GM threefolds. \end{coro} \begin{proof} This follows from the theorem and \cite[Example~6.8]{dkmoduli}. \end{proof} The theorem applies in particular to the GM threefold $X^3_{\mathbb A}$ defined in Section~\ref{se23}. \begin{coro}\label{coro53} The GM threefold $X^3_{\mathbb A}$ is irrational. \end{coro} \begin{rema}\label{rema53} It is a general fact that all smooth GM varieties of the same dimension constructed from the same Lagrangian are birationally isomorphic (\cite[Corollary~4.16]{dkclass}); in particular, all threefolds in the family of Corollary~\ref{coro52} are mutually birationally isomorphic.\ \end{rema} \begin{rema}\label{rema56a} The Clemens--Griffiths component of a principally polarized abelian variety\ is the product of its indecomposable factors that are not isomorphic to Jacobians of smooth projective curves, and the Clemens--Griffiths component of a Fano threefold is the Clemens--Griffiths component of its intermediate Jacobian; it follows from the Clemens--Griffiths method that the Clemens--Griffiths component of a Fano threefold is a birational invariant.\ By Proposition~\ref{prop61}, the Clemens--Griffiths component of the GM threefolds constructed from the Lagrangian~${\mathbb A}$ is $({\mathbb{J}},\theta)$; in particular, these threefolds are not birationally isomorphic to any smooth cubic threefold (because their Clemens--Griffiths components all have dimension $5$). \end{rema} \begin{rema}\label{rema53a} All GM fivefolds are rational (\cite[Proposition~4.2]{dkclass}).\ We do not know whether the smooth GM fourfolds associated with the Lagrangian ${\mathbb A}$ are rational (folklore conjectures say that they should be irrational, because they have no associated K3 surfaces; see Proposition~\ref{assoc}). 
\end{rema} Let us go back to the 10-dimensional principally polarized abelian variety\ $({\mathbb{J}},\theta)$ defined by~\eqref{defj}.\ It is acted on faithfully by the group~$\mathbb{G}$, and the associated analytic representation $\mathbb{G}\to \GL(T_{{\mathbb{J}},0})$ is the irreducible representation $\bw2\xi$ of $\mathbb{G}$ (Sections~\ref{se51} and~\ref{sec52}).\ \begin{prop}\label{prop62} The abelian variety ${\mathbb{J}} $ is isogeneous to $E^{10}$, for some elliptic curve $E$. \end{prop} \begin{proof} Since the analytic representation is irreducible and defined over ${\bf Q}$ (Appendix~\ref{sea2}), the proposition follows from~Proposition~\ref{propb1}. \end{proof} Unfortunately, we were not able to say more about the elliptic curve $E$ in the proposition: as explained in Remark~\ref{remb4}, the mere existence of a $\mathbb{G}$-action on $E^{10}$ with prescribed analytic representation and of a $\mathbb{G}$-invariant polarization does not put any restriction on $E$.\ We suspect that this curve $E$ is isomorphic to the elliptic curve $E_\lambda:={\bf C}/{\bf Z}[\lambda]$, which has complex multiplication by~${\bf Z}[\lambda]$, where $ \lambda:=\tfrac12(-1+\sqrt{-11})$.\ More precisely, we conjecture that~$({\mathbb{J}},\theta)$ is isomorphic to the principally polarized abelian variety\ constructed in Proposition~\ref{prop63}. \appendix\section{Automorphisms of double EPW sextics}\label{appC} \subsection{Double EPW sextics and their automorphisms}\label{se41} As in Section~\ref{se1}, let $V_6$ be a $6$-dimensional complex vector space and let $A\subset \bw3V_6$ be a Lagrangian subspace with no decomposable vectors, with associated EPW sextic $Y_A\subset {\bf P}(V_6)$.\ There is a canonical double covering \begin{equation}\label{piA} \pi_A\colon \widetilde{Y}_A\longrightarrow Y_A \end{equation} branched along the integral surface $Y^{\ge 2}_A$.\ The fourfold $\widetilde{Y}_A$ is called a {\em double EPW sextic} and its singular locus is the finite set~$\pi_A^{-1}(Y^{\ge 3}_A)$ (\cite[Section~1.2]{og7} or \cite[Theorem~B.7]{dkclass}).\ It carries the canonical polarization $H:=\pi_A^*\mathcal{O}_{Y_A}(1)$ and the image of the associated morphism $\widetilde{Y}_A\to {\bf P}(H^0(\widetilde{Y}_A,H)^\vee)$ is isomorphic to $Y_A$.\ When $Y_A^{\ge3}=\varnothing$, we say that $Y_A$ is {\em quasi-smooth} and~$\widetilde{Y}_A$ is a smooth {hyperk\"ahler}\ variety of K3$^{[2]}$-type. Every automorphism of $Y_A$ induces an automorphism of~$\widetilde{Y}_A$ (see the proof of \cite[Proposition~B.8(b)]{dkclass}) that fixes the class~$H$.\ Conversely, let $\Aut_H(\widetilde{Y}_A)$ be the group of automorphisms of $\widetilde{Y}_A$ that fix the class~$H$.\ It contains the covering involution $\iota$ of~$\pi_A$.\ Any element of $\Aut_H(\widetilde{Y}_A)$ induces an automorphism of ${\bf P}(H^0(\widetilde{Y}_A,H)^\vee)\simeq {\bf P}( V_6)$ hence descends to an automorphism of~$Y_A$.\ This gives a central extension \begin{equation}\label{central} 0\to \langle \iota\rangle \to \Aut_H(\widetilde{Y}_A) \to \Aut(Y_A)\to 1. \end{equation} As we will check in~\eqref{h2o}, the space $H^2(\widetilde{Y}_A, \mathcal{O}_{\widetilde{Y}_A}) $ has dimension 1.\ It is acted on by the group of automorphisms of $\widetilde{Y}_A$ and this defines another extension \begin{equation}\label{defm} 1\to \Aut_H^s(\widetilde{Y}_A) \to \Aut_H(\widetilde{Y}_A) \to{\boldsymbol \mu}_r\to 1. 
\end{equation} The image of~$\iota$ in ${\boldsymbol \mu}_r$ is $-1$ and $\Aut_H^s(\widetilde{Y}_A)$ is the subgroup of elements of $\Aut_H(\widetilde{Y}_A)$ that act trivially on $H^2(\widetilde{Y}_A, \mathcal{O}_{\widetilde{Y}_A}) $ (when $Y_A^{\ge3}=\varnothing$, these are exactly, by Hodge theory, the symplectic automorphisms---those that leave any symplectic $2$-form on $\widetilde{Y}_A$ invariant).\ We will show in the next proposition (which was kindly provided by A. Kuznetsov) that these extensions are both trivial.\ For that, we construct an extension \begin{equation}\label{exttilde} 1 \to {\boldsymbol \mu}_2 \to \widetilde\Aut(Y_A) \to \Aut(Y_A) \to 1 \end{equation} as follows.\ Recall from~\eqref{autya} that there is an embedding $\Aut(Y_A) \hookrightarrow \PGL(V_6)$.\ Let $G$ be the inverse image of $\Aut(Y_A)$ via the canonical map $\SL(V_6)\to \PGL(V_6)$.\ It is an extension of~$\Aut(Y_A)$ by~${\boldsymbol \mu}_6$ and we set $\widetilde\Aut(Y_A):=G/{\boldsymbol \mu}_3$.\ The action of $G$ on $V_6$ induces an action on $\bw3V_6$ such that ${\boldsymbol \mu}_6$ acts through its cube, hence the latter action factors through an action of $ \widetilde\Aut(Y_A) $.\ The subspace $A \subset \bw3V_6$ is preserved by this action, hence we have a morphism of central extensions \begin{equation}\label{esss} \begin{aligned} \xymatrix @R=5mm@M=2mm { 1\ar[r]&{\boldsymbol \mu}_2\ar[r]\ar@{_(->}[d]&\widetilde\Aut(Y_A)\ar[r]\ar[d]&\Aut(Y_A)\ar[r]\ar[d]&1\\ 1\ar[r]&{\bf C}^\times\ar[r] &\GL(A)\ar[r]&\PGL(A)\ar[r]&1. } \end{aligned} \end{equation} \begin{lemm}\label{nlem} The vertical morphisms in~\eqref{esss} are injective. \end{lemm} \begin{proof} Let $g\in G\subset \SL(V_6)$.\ Assume that $g$ acts trivially on $A$.\ Then it also acts trivially on~$A^\vee$.\ There is a $G$-equivariant exact sequence $0 \to A \to \bw3V_6 \to A^\vee \to 0$ which splits $G$-equivariantly because~$G$ is finite.\ It follows that $g$ also acts trivially on $\bw3V_6$.\ The natural morphism $\PGL(V_6) \to \PGL(\bw3V_6)$ being injective, $g$ is in ${\boldsymbol \mu}_6$.\ Finally, ${\boldsymbol \mu}_6/{\boldsymbol \mu}_3 $ acts nontrivially on $A$, hence $g$ is in ${\boldsymbol \mu}_3$ and its image in $ \widetilde\Aut(Y_A)$ is~$1$.\ This proves that the middle vertical map in~\eqref{esss} is injective. Assume now that $g $ acts as $\lambda \Id_A$ on $A$.\ Its eigenvalues on $\bw3V_6$ are then~$\lambda$ and $\lambda^{-1}$, both with multiplicity $10$.\ Let $\lambda_1,\dots,\lambda_6$ be its eigenvalues on $V_6$.\ For all $1\le i<j<k\le 6$, one then has $\lambda_i\lambda_j\lambda_k= \lambda$ or $\lambda^{-1}$.\ It follows that if $i,j,k,l,m$ are all distinct, $\lambda_i\lambda_j\lambda_k,\lambda_i\lambda_j\lambda_l,\lambda_i\lambda_j\lambda_m$ can only take 2 values, hence $\lambda_k,\lambda_l,\lambda_m$ can only take 2 values.\ So, there are at most $2$ distinct eigenvalues and one of the eigenspaces, say $E_{\lambda_1}$, has dimension at least~$ 3$.\ If $\lambda\ne \lambda^{-1}$, the eigenspace in $\bw3V_6$ for the eigenvalue $\lambda_1^3$, which is either $A$ or $A^\vee$, contains $\bw3E_{\lambda_1}$.\ This contradicts the fact that $A$ and $A^\vee$ contain no decomposable vectors.\ Therefore, $\lambda= \lambda^{-1}$ and~$g$ acts as $\pm \Id_A$, and the first part of the proof implies that the image of $\pm g$ in $ \widetilde\Aut(Y_A)$ is~$ 1$.\ This proves that the rightmost vertical map in~\eqref{esss} is injective.
\end{proof} \begin{prop}[Kuznetsov]\label{split1} Let $A\subset \bw3V_6$ be a Lagrangian subspace with no decomposable vectors.\ The extensions~\eqref{central} and~\eqref{defm} are trivial and $r=2$; more precisely, there is an isomorphism $$\Aut_H(\widetilde{Y}_A)\simeq \Aut(Y_A)\times \langle \iota\rangle$$ that splits~\eqref{central} and the factor $\Aut(Y_A)$ corresponds to the subgroup $\Aut_H^s(\widetilde{Y}_A)$ of $\Aut_H(\widetilde{Y}_A)$. \end{prop} \begin{proof} We briefly recall from \cite[Section~1.2]{og7} (see also \cite{dkcovers}) the construction of the double cover $\pi_A\colon \widetilde{Y}_A\to Y_A$.\ In the terminology of the latter article, one considers the Lagrangian subbundles $\mathcal{A}_1:=A \otimes \mathcal{O}_{{\bf P}(V_6)}$ and $\mathcal{A}_2:=\bw2T_{{\bf P}(V_6)}(-3)$ of the trivial vector bundle $\bw3V_6 \otimes \mathcal{O}_{{\bf P}(V_6)}$, and the first Lagrangian cointersection sheaf $ \mathcal{R}_1 := \coker(\mathcal{A}_2\hookrightarrow \mathcal{A}_1^\vee) $, a rank-$1$ sheaf with support~$Y_A$.\ One sets (\cite[Theorem~5.2(1)]{dkcovers}) $$ \widetilde{Y}_A = \Spec(\mathcal{O}_{Y_A} \oplus \mathcal{R}_1(-3)). $$ In particular, one has \begin{equation}\label{h2o} H^2(\widetilde{Y}_A, \mathcal{O}_{\widetilde{Y}_A}) \simeq H^2(Y_A, \mathcal{R}_1(-3)) \simeq H^3({\bf P}(V_6), \mathcal{A}_2(-3))= H^3({\bf P}(V_6), \bw2T_{{\bf P}(V_6)}(-6))\simeq {\bf C}. \end{equation} The subbundles $\mathcal{A}_1$ and $\mathcal{A}_2$ are invariant for the action of $\widetilde\Aut(Y_A)$ on~$\bw3V_6$, hence the sheaf~$\mathcal{R}_1 $ is $\widetilde\Aut(Y_A)$-equivariant.\ Finally, the line bundle $\mathcal{O}_{{\bf P}(V_6)}(-1) $ has a $G$-linearization (the subgroup $G\subset \SL(V_6)$ was defined right before Lemma~\ref{nlem}).\ It follows that $\mathcal{O}_{{\bf P}(V_6)}(-3)$ has an $\widetilde\Aut(Y_A)$-linearization, hence the same is true for the sheaf $\mathcal{R}_1(-3)$.\ Therefore, the group $\widetilde\Aut(Y_A)$ acts on $\widetilde{Y}_A$ and fixes the polarization $H$. Observe now that, since the nontrivial element of $ {\boldsymbol \mu}_2 \subset \widetilde\Aut(Y_A)$ acts by $-1$ on $A$, hence also on $ \mathcal{R}_1$, and by $-1$ on $\mathcal{O}(-1)$, hence also on $\mathcal{O}(-3)$, it acts trivially on $\mathcal{R}_1(-3)$, hence also on $\widetilde{Y}_A$.\ Therefore, the morphism $\widetilde\Aut(Y_A)\to \Aut_H(\widetilde{Y}_A)$ factors through the quotient $\widetilde\Aut(Y_A)/{\boldsymbol \mu}_2 = \Aut(Y_A)$.\ In other words, the surjection $\Aut_H(\widetilde{Y}_A) \to \Aut(Y_A)$ in~\eqref{central} has a section and this central extension is trivial. The action of the group $\Aut(\widetilde{Y}_A)$ on the 1-dimensional vector space $H^2(\widetilde{Y}_A, \mathcal{O}_{\widetilde{Y}_A})$ defines a morphism $\Aut(\widetilde{Y}_A)\to{\bf C}^\star$ that maps $\iota$ to $-1$.\ The lift $\widetilde\Aut(Y_A)\to \Aut(Y_A) \hookrightarrow \Aut_H(\widetilde{Y}_A)$ acts trivially on $H^2(\widetilde{Y}_A, \mathcal{O}_{\widetilde{Y}_A})$ because its action is induced by the action of $\PGL(V_6)$, which has no nontrivial characters.\ This gives a surjection $\Aut_H(\widetilde{Y}_A) \to \langle \iota\rangle$ which is trivial on the image of the section $\Aut(Y_A) \hookrightarrow \Aut_H(\widetilde{Y}_A)$.\ This implies that the extension~\eqref{defm} is also trivial and $r=2$.\ The proposition is therefore proved.
\end{proof} \subsection{Moduli space and period map of (double) EPW sextics} \label{sec42} Quasi-smooth EPW sextics admit an affine coarse moduli space ${\mathbf{M}^{\mathrm{EPW},0}}$, constructed in \cite{og5} as a GIT quotient by $\PGL(V_6)$ of an affine open dense subset of the space of Lagrangian subspaces in~$\bw3V_6$.\ Let $\widetilde{Y}$ be a hyperk\"ahler fourfold of K3$^{[2]}$-type (such as a double EPW sextic).\ The lattice $H^2(\widetilde{Y},{\bf Z})$ (endowed with the Beauville--Bogomolov quadratic form $q_{BB}$) is isomorphic to the lattice \begin{equation}\label{defL} L:=U^{\oplus 3}\oplus E_8(-1)^{\oplus 2}\oplus (-2), \end{equation} where $U$ is the hyperbolic plane $\bigl( {\bf Z}^2, \bigl(\begin{smallmatrix} 0& 1\\ 1 & 0 \end{smallmatrix}\bigr)\bigr)$, $ E_8(-1)$ is the negative definite even rank-8 lattice, and $(m)$ is the rank-$1$ lattice with generator of square $m$.\ Fix a class $h\in L$ with $h^2=2$.\ These classes are all in the same $O(L)$-orbit and \begin{equation}\label{defhperp} h^\bot \simeq U^{\oplus 2}\oplus E_8(-1)^{\oplus 2}\oplus (-2)^{\oplus 2}. \end{equation} The space $$ \begin{aligned} \Omega_{h} :={}& \{ [x]\in {\bf P}(L \otimes {\bf C})\mid x\cdot h=0,\ x\cdot x=0,\ x\cdot \bar x>0\}\\ {}={}& \{ [x]\in {\bf P}(h^\bot \otimes {\bf C})\mid x\cdot x=0,\ x\cdot \bar x>0\} \end{aligned} $$ has two connected components, interchanged by complex conjugation, which are Hermitian symmetric domains.\ It is acted on by the group $$ \{g\in O(L)\mid g(h)=h\},$$ also with two connected components, which is also the index-2 subgroup $\widetilde O(h^\bot)$ of $O(h^\bot)$ that consists of isometries that act trivially on the discriminant group $\Disc(h^\bot)\simeq ({\bf Z}/2{\bf Z})^2$.\ The quotient is an irreducible quasi-projective variety (Baily--Borel) and the {\em period map} \begin{equation}\label{defp} \wp\colon {\mathbf{M}^{\mathrm{EPW},0}} \longrightarrow \widetilde O(h^\bot)\backslash \Omega_{h},\quad [\widetilde{Y}]\longmapsto [H^{2,0}(\widetilde{Y})] \end{equation} is algebraic (Griffiths).\ It is an open embedding by Verbitsky's Torelli theorem (\cite{ver, marsur, huybki}).\ If $A\subset \bw3V_6$ is a Lagrangian such that $\widetilde{Y}_A$ is smooth with period $[x]\in{\bf P}(L \otimes {\bf C})$ (well defined only up to the action of $ \widetilde O(h^\bot)$), the Picard group $\Pic(\widetilde{Y}_A)$ is, by Hodge theory, isomorphic to $x^\bot\cap L$.\ It contains the class $h$ (of square $2$) but, as explained in \cite[Theorem~5.1]{dm}, no class orthogonal to $h$ of square $-2$. 
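Note that the discriminant group appearing in the description of $\widetilde O(h^\bot)$ above can be read off~\eqref{defhperp}: since $U$ and $E_8(-1)$ are unimodular, one gets $$\Disc(h^\bot)\simeq \Disc\bigl((-2)^{\oplus 2}\bigr)\simeq ({\bf Z}/2{\bf Z})^2,$$ generated by the classes of the half-generators of the two $(-2)$-summands, each of square $-\tfrac12$ in ${\bf Q}/2{\bf Z}$.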
\subsection{Automorphisms of prime order}\label{secc1} Let $\widetilde{Y}$ be a hyperk\"ahler fourfold of K3$^{[2]}$-type.\ In the lattice $(H^2(\widetilde{Y},{\bf Z}),q_{BB})$ mentioned in Appendix~\ref{sec42}, we consider the {\em transcendental lattice} $$\Tr(\widetilde{Y}) :=\Pic(\widetilde{Y})^\bot\subset H^2(\widetilde{Y},{\bf Z}) .$$ The automorphism group $\Aut(\widetilde{Y})$ acts faithfully by isometries on the lattice $(H^2(\widetilde{Y},{\bf Z}),q_{BB})$ and preserves the sublattices $\Pic(\widetilde{Y})$ and $\Tr(\widetilde{Y})$.\ If $G$ is a subset of~$\Aut(\widetilde{Y})$, we denote by $T_G(\widetilde{Y})$ the invariant lattice (of elements of~$H^2(\widetilde{Y},{\bf Z}) $ that are invariant by all elements of $G$) and by $S_G(\widetilde{Y}):=T_G(\widetilde{Y})^\bot$ its orthogonal in $H^2(\widetilde{Y},{\bf Z}) $.\ Many results are known about automorphisms of prime order $p$ of {hyperk\"ahler}\ fourfolds.\ We restrict ourselves to the case $p\ge 11$.\ In the statement below, the rank-$20$ lattice ${\mathsf S}$ was defined in \cite[Example~2.9]{mon3} by an explicit $20\times 20$ Gram matrix (see also \cite[Example~2.5.9]{monphd}); it is negative definite, even, contains no $(-2)$-classes, its discriminant group is $({\bf Z}/11{\bf Z})^2$, and its discriminant form is $\left(\begin{smallmatrix} -2/11 & 0\\ 0 & -2/11 \end{smallmatrix}\right)$.\ \begin{theo}\label{thc1} Let $\widetilde{Y}$ be a projective hyperk\"ahler fourfold of K3$^{[2]}$-type and let $g$ be a symplectic automorphism of~$\widetilde{Y}$ of prime order $p\ge11$.\ There are inclusions $\Tr(\widetilde{Y})\subset T_g(\widetilde{Y})$ and $S_g(\widetilde{Y})\subset \Pic(\widetilde{Y})$, and \mbox{$p= 11$.}\ The lattice $S_g(\widetilde{Y})$ is isomorphic to ${\mathsf S}$ and $\rho(\widetilde{Y})=21$.\ The possible lattices~$T_g(\widetilde{Y})$ are $$ \begin{pmatrix} 2 &1 &0 \\ 1 &6&0\\ 0&0&22 \end{pmatrix}\quad\text{or}\quad \begin{pmatrix} 6&2 &2 \\ 2 &8&-3\\ 2&-3&8 \end{pmatrix}.$$ \end{theo} \begin{proof} The proof is a compilation of previously known results on symplectic automorphisms.\ The bound $p\le 11$ is proved in \cite[Corollary~2.13]{mon3}.\ The inclusions and the properties of the lattice~$S_g(\widetilde{Y})$ are in \cite[Lemma~3.5]{mon2}, the equality $\rho(\widetilde{Y})=21$ is in \cite[Proposition~1.2]{mon3}, the lattice $S_g(\widetilde{Y})$ is determined in \cite[Theorem~7.2.7]{monphd}, and the possible lattices $T_g(\widetilde{Y})$ in \cite[Section~5.5.2]{bns}. \end{proof} This theorem applies in particular to (smooth) double EPW sextics~$\widetilde{Y}_A$.\ We are interested in automorphisms that preserve the canonical degree-2 polarization $H$.\ By Proposition~\ref{split1}, the group of these automorphisms, modulo the covering involution $\iota$, is isomorphic to the group of automorphisms of the EPW sextic~$Y_A$.
\begin{coro}\label{th14} Let $\widetilde{Y}_A $ be a smooth double EPW sextic and let $g$ be an automorphism of $\widetilde{Y}_A$ of prime order $p\ge 11$ that fixes the polarization~$H$.\ Then $p=11$ and\,\footnote{In the given decomposition of the lattice $\Pic(\widetilde{Y}_A)$, the summand $(2)$ is {\em not} generated by the polarization $H$, because~${\mathsf S}$ contains no $(-2)$-classes.} $$ \begin{aligned} S_{g}(\widetilde{Y}_A) \simeq {\mathsf S},\qquad T_{g}(\widetilde{Y}_A)&\simeq \begin{pmatrix} 2 &1 \\ 1 &6 \end{pmatrix}\oplus (22),\qquad \Tr(\widetilde{Y}_A)\simeq (22)^{\oplus 2}, \\ \Pic(\widetilde{Y}_A)= {\bf Z} H \oplus {\mathsf S}&\simeq (2)\oplus E_8(-1)^{\oplus 2}\oplus \begin{pmatrix} -2 &-1 \\ -1 & -6 \end{pmatrix}^{\oplus 2} . \end{aligned} $$ In particular, the fourfold $\widetilde{Y}_A$ has maximal Picard number $21$.\ \end{coro} \begin{proof} By Proposition~\ref{split1}, the automorphism $g$ is symplectic (all nonsymplectic automorphisms have even order).\ Since $H\in T_{g}(\widetilde{Y}_A)$ and $q_{BB}(H)=2$, and the second lattice in Theorem~\ref{thc1} contains no classes of square $2$, there is only one possibility for~$T_{g}(\widetilde{Y}_A)$ (see also \cite[Section~7.4.4]{monphd}).\ There are only two (opposite) classes of square 2 in that lattice, so we find $ \Tr(\widetilde{Y}_A)$ as their orthogonal.\ We know that $\Pic(\widetilde{Y}_A)$ is an overlattice of ${\bf Z} H\oplus S_{g} (\widetilde{Y}_A)$.\ Since the latter has no nontrivial overlattices (its discriminant group has no nontrivial isotropic elements), they are equal.\ Finally, the negative definite lattices ${\mathsf S}$ and $$S:= E_8(-1)^{\oplus 2}\oplus \begin{pmatrix} -2 &-1 \\ -1 & -6 \end{pmatrix}^{\oplus 2}$$ are in the same genus.\footnote{By Nikulin's celebrated result \cite[Corollary~1.9.4]{nik}, this means that they have the same ranks, the same signatures, and that their discriminant forms coincide.}\ They are not isomorphic (because ${\mathsf S}$ does not represent $-2$) but the indefinite lattices $(2)\oplus {\mathsf S}$ and $(2)\oplus S$ are isomorphic, by \cite[Corollary~1.13.3]{nik}. \end{proof} We prove in Theorem~\ref{th47} that the double EPW sextic $\widetilde{Y}_{\mathbb A}$ is the only smooth double EPW sextic with an automorphism of order $ 11$ that fixes the polarization $H$.\ In Hassett's terminology (recalled in \cite[Section~4]{dm}), a (smooth) double EPW sextic $\widetilde{Y}_A$ is {\em special of discriminant $d$} if there exists a primitive rank-2 lattice $K\subset \Pic(\widetilde{Y}_A)$ containing the polarization $H$ such that $\disc(K^\bot)=-d$ (the orthogonal is taken in $(H^2(\widetilde{Y}_A,{\bf Z}),q_{BB})$); this may only happen when $d\equiv 0,2,4\pmod{8}$ and $d>8$ (\cite[Proposition~4.1 and Remark~6.3]{dm}).\ The fourfold $\widetilde{Y}_A$ has an {\em associated K3 surface} if moreover the lattice $K^\bot$ is isomorphic to the opposite of the primitive cohomology lattice of a pseudo-polarized K3 surface (necessarily of degree $d$); a necessary condition for this to happen is $d\equiv 2,4\pmod{8}$ (this was proved in \cite[Proposition~6.6]{dim} for GM fourfolds but the computation is the same). \begin{prop}\label{assoc} The double EPW sextic $\widetilde{Y}_{\mathbb A}$ is special of discriminant $d $ if and only if $d$ is a multiple of $8$ greater than~$8$.\ In particular, it has no associated K3 surfaces.
\end{prop} \begin{proof} Assume that $\widetilde{Y}_{\mathbb A}$ is special of discriminant $d $.\ Since $\Pic(\widetilde{Y}_{\mathbb A})\simeq {\bf Z} H\oplus {\mathsf S}$, the required lattice~$K$ as above is of the form $\langle H,\kappa\rangle$, where $\kappa\in {\mathsf S}$ is primitive.\ Since $\Disc({\mathsf S})\simeq ({\bf Z}/11{\bf Z})^2$, the divisibility $\mathop{\rm div}\nolimits_{{\mathsf S}}(\kappa)$ divides $11$ and, since $\Disc(H^\bot)\simeq ({\bf Z}/2{\bf Z})^2$ (see \cite[(1)]{dm}), the divisibility $\mathop{\rm div}\nolimits_{H^\bot}(\kappa)$ divides~$2$, but also divides $\mathop{\rm div}\nolimits_{{\mathsf S}}(\kappa)$ (because ${\mathsf S}\subset H^\bot$).\ It follows that $\mathop{\rm div}\nolimits_{H^\bot}(\kappa)=1$.\ The lattice $\langle H,\kappa\rangle^\bot$ therefore has discriminant~$4\kappa^2$ by the formula \cite[(4)]{dm}.\ It follows that $\widetilde{Y}_{\mathbb A}$ is special of discriminant $d$ if and only if $d\equiv 0 \pmod8$ and ${\mathsf S}$ primitively represents $-d/4$.\ A direct computation shows that the lattice ${\mathsf S}$ contains the rank-$5$ lattice with diagonal quadratic form $(-4,-4,-4,-6,-8)$.\ By \cite[Section 6(iii)]{Bharg}, the quadratic form on the last four variables represents every even negative integer with the exception of $-2$, and the first variable can be used to ensure that all these integers can be primitively represented.\ This proves the proposition. \end{proof} \subsection{Double EPW surfaces and their automorphisms}\label{sec3} Let $Y_A\subset {\bf P}(V_6)$ be an EPW sextic, where $A\subset \bw3V_6$ is a Lagrangian subspace with no decomposable vectors.\ By \cite[Theorem~5.2(2)]{dkcovers}, there is a canonical connected double covering \begin{equation}\label{piA2} \widetilde{Y}_A^{\ge 2}\longrightarrow Y_A^{\ge 2} \end{equation} between integral surfaces, with covering involution $\tau$, branched over the finite set $Y_A^{\ge 3}$. We compare automorphisms of $Y_A$ with those of $\widetilde{Y}_A^{\ge 2}$.\ Any automorphism of $Y_A$ induces an automorphism of its singular locus $Y_A^{\ge 2}$.\ This defines a morphism $\Aut(Y_A)\to \Aut(Y_A^{\ge 2})$.\ Since $\Aut(Y_A)$ is a subgroup of $\PGL(V_6)$ and the surface $Y_A^{\ge 2}$ is not contained in a hyperplane, this morphism is injective. \begin{prop}[Kuznetsov]\label{split2} Let $A\subset \bw3V_6$ be a Lagrangian subspace with no decomposable vectors.\ Any element of $\Aut(Y_A)$ lifts to an automorphism of $\widetilde{Y}_A^{\ge 2}$.\ These lifts form a subgroup of $ \Aut(\widetilde{Y}_A^{\ge 2})$ which is isomorphic to the group $\widetilde\Aut(Y_A)$ in the extension~\eqref{exttilde} via an isomorphism that takes $\langle \tau\rangle$ to ${\boldsymbol \mu}_2$. 
\end{prop} \begin{proof} The proof follows the exact same steps as the proof of Proposition~\ref{split1}, whose notation we keep.\ By \cite[Theorem~5.2(2)]{dkcovers}, the surface $\widetilde{Y}_A^{\ge 2}$ is defined as \begin{equation}\label{y2} \widetilde{Y}^{\ge 2}_A = \Spec(\mathcal{O}_{Y^{\ge 2}_A} \oplus \mathcal{R}_2(-3)), \end{equation} where $ \mathcal{R}_2 = (\bw2\mathcal{R}_1\vert_{Y^{\ge 2}_A})^{\vee\vee}$.\ As in the proof of Proposition~\ref{split1}, the group $\widetilde\Aut(Y_A) $ acts on $\widetilde{Y}^{\ge 2}_A$ and the nontrivial element of $ {\boldsymbol \mu}_2 $ acts by~$-1$ on both~$\mathcal{R}_1$ and $\mathcal{O}(-3)$.\ It follows that it acts by~$1$ on $ \mathcal{R}_2$ and by $-1$ on $ \mathcal{R}_2(-3)$, hence as the involution~$\tau$ on $\widetilde{Y}^{\ge 2}_A$.\ This proves the proposition. \end{proof} It is possible to deform the double cover~\eqref{piA2} to the canonical double \'etale covering associated with the (smooth) variety of lines on a quartic double solid (see the proof of \cite[Proposition~2.5]{dkij}), so we can use Welters' calculations in \cite[Theorem (3.57) and Proposition~(3.60)]{wel}.\ In particular, the abelian group $H_1(\widetilde{Y}_A^{\ge 2},{\bf Z})$ is free of rank $20$ (and $\tau$ acts as $-\Id$) and there are canonical isomorphisms (\cite[Proposition~2.5]{dkij}) \begin{equation} \begin{aligned} T_{\Alb (\widetilde{Y}_A^{\ge 2}),0}&\simeq H^1(\widetilde{Y}_A^{\ge 2},\mathcal{O}_{\widetilde{Y}_A^{\ge 2}})\simeq A, \label{h1}\\ H^2(Y_A^{\ge 2},{\bf C})& \simeq \bw2 H^1(\widetilde{Y}_A^{\ge 2},{\bf C})\simeq \bw2(A\oplus \bar A). \end{aligned} \end{equation} The Albanese variety $\Alb (\widetilde{Y}_A^{\ge 2})$ is thus an abelian variety\ of dimension 10 and one can consider the analytic representation (see Section~\ref{sectb1}) $$\rho_a\colon \Aut(\widetilde{Y}_A^{\ge 2})\longrightarrow \GL(T_{\Alb (\widetilde{Y}_A^{\ge 2}),0})\simeq\GL(A). $$ Recall from Proposition~\ref{split2} that there is an injective morphism $\widetilde\Aut(Y_A)\hookrightarrow \Aut(\widetilde{Y}_A^{\ge 2})$. \begin{prop}\label{propc7} Let $Y_A$ be a quasi-smooth EPW sextic.\ The restriction of the analytic representation $\rho_a$ to the subgroup $\widetilde\Aut(Y_A)$ of $ \Aut(\widetilde{Y}_A^{\ge 2})$ is the injective middle vertical map in the diagram~\eqref{esss}. \end{prop} \begin{proof} The morphism $\rho_a$ is the representation of the group $\Aut(\widetilde{Y}_A^{\ge 2})$ on the vector space $$ T_{\Alb (\widetilde{Y}_A^{\ge 2}),0}\simeq H^1(\widetilde{Y}_A^{\ge 2},\mathcal{O}_{\widetilde{Y}_A^{\ge 2}}). $$ As in the proof of \cite[Proposition~2.5]{dkij}), there are canonical isomorphisms $$ H^1(\widetilde{Y}_A^{\ge 2},\mathcal{O}_{\widetilde{Y}_A^{\ge 2}})\simeq H^1(Y_A^{\ge 2},\mathcal{R}_2(-3)) \simeq H^1(Y_A^{\ge 2},\mathcal{O}_{Y_A^{\ge 2}}(3))^\vee , $$ where the first isomorphism comes from~\eqref{y2} and the second one from Serre duality (because~$ \mathcal{R}_2 $ is the canonical sheaf of $Y_A^{\ge 2} $). 
As in the proof of Proposition~\ref{split2}, the sheaf $ \mathcal{O}_{Y_A^{\ge 2}}(3)$ has an $\widetilde\Aut(Y_A) $-linearization, where~$\Aut(Y_A)$ acts on $Y^{\ge 2}_A$ by restriction and the nontrivial element of~$ {\boldsymbol \mu}_2 $ acts by~$-1$ on~$ \mathcal{O}_{Y_A^{\ge 2}}(3)$.\ By construction, the resolution $$0\to (\bw2\mathcal{A}_2)(-6)\to (\mathcal{A}_1^\vee\otimes \mathcal{A}_2)(-6)\to (\Sym^2\!\mathcal{A}_1)(-6)\oplus\mathcal{O}_{{\bf P}(V_6)}(-6)\to \mathcal{O}_{{\bf P}(V_6)}\to \mathcal{O}_{Y_A^{\ge 2}}\to 0 $$ given in \cite[(33)]{dkeven} is $\widetilde\Aut(Y_A) $-equivariant, hence induces an $\widetilde\Aut(Y_A)$-equivariant isomorphism $$H^1(Y_A^{\ge 2},\mathcal{O}_{Y_A^{\ge 2}}(3))\simeq H^3({\bf P}(V_6),(\mathcal{A}_1^\vee\otimes \mathcal{A}_2)(-3)) =A^\vee\otimes H^3({\bf P}(V_6), \mathcal{A}_2(-3)). $$ As already noted during the proof of Proposition~\ref{split1}, $\widetilde\Aut(Y_A)$ acts trivially on the $1$-dimensional vector space $H^3({\bf P}(V_6), \mathcal{A}_2(-3))=H^3({\bf P}(V_6), \bw2T_{{\bf P}(V_6)}(-6))$.\ All this proves that the action of~$\Aut(\widetilde{Y}_A^{\ge 2})$ on $T_{\Alb (\widetilde{Y}_A^{\ge 2}),0}$ is indeed given by the desired morphism. \end{proof} \subsection{Automorphisms of GM varieties}\label{appc4} Let as before $V_6$ be a 6-dimensional vector space and let $A\subset \bw3V_6$ be a Lagrangian subspace with no decomposable vectors.\ Let $V_5\subset V_6$ be a hyperplane and let $X$ be the associated (smooth ordinary) GM variety (Section~\ref{se22n}).\ One has (see~\eqref{autxa}) \begin{equation*} \Aut(X)\simeq \{ g\in \PGL(V_6)\mid \bw3g(A)=A,\ g(V_5)=V_5\}. \end{equation*} Since the extension~\eqref{exttilde} splits (Proposition~\ref{split1}), there is a lift \begin{equation}\label{repx2} \Aut(X)\longrightarrow \GL(A) \end{equation} (see~\eqref{esss}) which is injective by Lemma~\ref{nlem}. When the dimension of $X$ is either $3$ or $5$, its intermediate Jacobian~$\Jac(X)$ is a 10-dimensional abelian variety.\ By \cite[Theorem~1.1]{dkij}, it is canonically isomorphic to $ \Alb (\widetilde{Y}_A^{\ge 2}) $ (see~\eqref{jxa}).\ Therefore, there is an isomorphism \begin{equation*} T_{\Jac(X),0}\isomlra T_{\Alb (\widetilde{Y}_A^{\ge 2}),0}. \end{equation*} Together with the isomorphism~\eqref{h1}, this gives an analytic representation $$\rho_{a,X}\colon \Aut(X)\longrightarrow \GL(T_{\Jac(X),0})\isomlra \GL(A).$$ \begin{prop}\label{propc8} The analytic representation $\rho_{a,X}$ coincides with the injective morphism~\eqref{repx2}.\ Equivalently, the isomorphism ~\eqref{jxa} is $\Aut(X)$-equivariant. \end{prop} \begin{proof} Assume $\dim(X)=3$ and choose a line $L_0\subset X$.\ The isomorphism $ \Alb (\widetilde{Y}_A^{\ge 2})\isomto \Jac(X) $ was then constructed in \cite[Theorem~4.4]{dkij} from the Abel--Jacobi map $$\AJ_{Z_{L_0}}\colon H_1( \widetilde{Y}_A^{\ge 2},{\bf Z})\longrightarrow H_3(X,{\bf Z}) $$ associated with a family $Z_{L_0}\subset X\times \widetilde{Y}_A^{\ge 2}$ of curves on $X$ parametrized by $\widetilde{Y}_A^{\ge 2}$.\ Although the family $Z_{L_0}$ does depend on the choice of $L_0$, the map $\AJ_{Z_{L_0}}$ does not. 
Let $g\in \Aut(X)$ (also considered as an automorphism of $\widetilde{Y}_A^{\ge 2}$).\ By the functoriality properties of the Abel--Jacobi map (\cite[Lemma~3.1]{dkij}), we obtain $$\AJ_{Z_{L_0}}\circ g_*=\AJ_{(\Id_{X}\times g)^*(Z_{L_0})}= \AJ_{( g\times \Id_{\widetilde{Y}_A^{\ge 2}})_*(Z_{g^{-1}(L_0)})}=g_*\circ \AJ_{Z_{g^{-1}(L_0)}}, $$ which proves the proposition.\ When $\dim(X)=5$, the proof is similar, except that $Z_{\Pi_0}$ is now a family of surfaces in $X$ that depends on a plane $\Pi_0\subset X$. \end{proof} \section{Representations of the group $\mathbb{G}$}\label{sea2} The group $\mathbb{G}:=\PSL(2,{\bf F}_{11})$ is the only simple group of order $660=2^2\cdot 3\cdot 5\cdot 11$.\ It can be generated by the classes $$ a=\begin{pmatrix} 5 &0 \\ 0 & 9 \end{pmatrix},\quad b=\begin{pmatrix} 3 &5 \\ -5 & 3 \end{pmatrix},\quad c=\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, $$ and $a^5=-b^6=c^{11}=I_2$, the identity matrix. The group $\mathbb{G}$ has 8 irreducible ${\bf C}$-representations, of dimensions $1$, $5$, $5$, $10$, $10$, $11$, $12$, and~$12$.\ Here is a character table for four of these irreducible representations. \begin{table}[h] \renewcommand\arraystretch{1.5} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Conjugation class&$[I_2]$&$[c]$&$[c^2]$&$[a]=[a^4]$&$[a^2]=[a^3]$&$[b]=[b^5]$&$[b^2]=[b^4]$&$[b^3]$ \\ Cardinality&$1$&$60$&$60$&$132$&$132$&$110$&$110$&$55$ \\ Order&$1$&$11$&$11$&$5$&$5$&$6$&$3$&$2$ \\ \hline $\chi_0$ &$1$&$1 $&$1 $ &$1$ &$1 $ &$1 $&$1 $&$1 $ \\ \hline $\xi$&$5$&$\lambda$&$\bar\lambda$&$0$&$0$&$ 1$&$-1$&$1$ \\ \hline $\xi^\vee$&$5$&$\bar\lambda$&$\lambda$&$0$ &$0$ &$1$&$-1$&$1$ \\ \hline $\bw2\xi $&$ 10$&$ -1$&$ -1$&$ 0$&$ 0$&$1 $&$ 1$&$ -2$ \\ \hline \end{tabular} \captionsetup{justification=centering} \caption{Partial character table for $\mathbb{G}$}\label{tab1} \end{table} As before, we have set (where $\zeta_{11}=e^{\frac{2i\pi}{11}}$) \begin{equation*}\label{defgamma} \lambda:=\zeta_{11}^{1^2}+\zeta_{11}^{2^2}+\zeta_{11}^{3^2}+\zeta_{11}^{4^2}+\zeta_{11}^{5^2}=\zeta_{11}+\zeta_{11}^3+\zeta_{11}^4+\zeta_{11}^5+\zeta_{11}^9=\tfrac12(-1+\sqrt{-11}). \end{equation*} The representation $\xi$ has a realization in the matrix ring $\mathcal{M}_5({\bf C})$ for which \begin{equation}\label{real} \xi(a)= \begin{pmatrix} 0&0&0&0&1\\ 1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0 \end{pmatrix},\quad \xi(c)= \begin{pmatrix} \zeta_{11}&0&0&0&0\\ 0&\zeta_{11}^4&0&0&0\\ 0&0&\zeta_{11}^5&0&0\\ 0&0&0&\zeta_{11}^9&0\\ 0&0&0&0&\zeta_{11}^3 \end{pmatrix} . 
\end{equation} Every irreducible character of $\mathbb{G}$ has Schur index 1 (\cite[\S~12.2]{ser}, \cite[Theorem~6.1]{fei}).\ In particular, the representation $\bw2\xi$, having an integral character, can be defined over ${\bf Q}$ and even, by a theorem of Burnside (\cite{bur}), over ${\bf Z}$, that is, by a morphism $\mathbb{G}\to \GL(10,{\bf Z})$.\ The representation $\bw2\xi$ is self-dual, so there is a $\mathbb{G}$-equivariant isomorphism \begin{equation}\label{defu} w\colon \bw2V_\xi\isomlra \bw2V_\xi^\vee, \end{equation} unique up to multiplication by a nonzero scalar, and it is symmetric (\cite[prop.~38]{ser}).\ \section{Decomposition of abelian varieties with automorphisms}\label{b2} We gather here a few very standard notation and facts about abelian varieties.\ Let $X$ be a complex abelian variety.\ We denote by $\Pic(X)$ the group of isomorphism classes of line bundles on~$X$, by $\Pic^0(X)\subset \Pic(X)$ the subgroup of classes of line bundles that are algebraically equivalent to $0$, and by $\NS(X)$ the N\'eron--Severi group $\Pic(X)/\Pic^0(X)$, a free abelian group of finite rank.\ The group $\Pic^0(X)$ has a canonical structure of an abelian variety; it is called the dual abelian variety.\ Any endomorphism $u$ of $X$ induces an endomorphism $\widehat u$ of $\Pic^0(X)$. Given the class $\theta\in \NS(X)$ of a line bundle $L$ on $X$, we let $\varphi_{\theta}$ be the morphism $$ \begin{aligned} X&\longrightarrow \Pic^0(X) \\ x& \longmapsto \tau_x^*L\otimes L^{-1} \end{aligned} $$ of abelian varieties, where $\tau_x$ is the translation by $x$ (it is independent of the choice of the representative $L$ of $\theta$).\ When $\theta$ is a polarization, that is, when $L$ is ample, $\varphi_{\theta}$ is an isogeny.\ We say that $\theta$ is a principal polarization when $\varphi_\theta$ is an isomorphism.\ If $n:=\dim(X)$, this is equivalent to saying that the self-intersection number $\theta^n$ is $n!$.\ The associated {\em Rosati involution} on $\End(X)$ is then defined by $u\mapsto u':=\varphi_{\theta}^{-1}\circ \widehat u \circ \varphi_{\theta}$.\ The map $$ \begin{aligned} \iota_{\theta}\colon \NS(X) &\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow} \End(X) \\ \theta'& \longmapsto \varphi_{\theta}^{-1}\circ \varphi_{\theta'} \end{aligned} $$ is an injective morphism of free abelian groups whose image is the group $\End^s(X)$ of symmetric elements for the Rosati involution (\cite[Theorem~5.2.4]{bl}).\ If $u\in \End(X)$, one has $\varphi_{u^*\theta'}=\widehat u\circ \varphi_{\theta'}\circ u$ hence \begin{equation}\label{for} \iota_{\theta}(u^*\theta')=\varphi_{\theta}^{-1}\circ \varphi_{u^*\theta'} =\varphi_{\theta}^{-1}\circ \widehat u\circ \varphi_{\theta'}\circ u=u'\circ \varphi_{\theta}^{-1} \circ \varphi_{\theta'}\circ u= u'\circ \iota_{\theta}(\theta')\circ u. \end{equation} Set $\NS_{\bf Q}(X)=\NS(X)\otimes {\bf Q}$ and $\End_{\bf Q}(X)=\End(X)\otimes {\bf Q}$ (both are finite-dimensional ${\bf Q}$-vector spaces).\ If the polarization $\theta$ is no longer principal, or if $\theta\in \NS_{\bf Q}(X)$ is only a ${\bf Q}$-polarization, the Rosati involution is still defined on $\End_{\bf Q}(X)$ by the same formula and we may view $\iota_{\theta}$ as an injective morphism $$ \begin{aligned} \iota_{\theta}\colon \NS(X)_{\bf Q} &\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow} \End_{\bf Q}(X) \end{aligned} $$ with image $\End^s_{\bf Q}(X)$ (\cite[Remark~5.2.5]{bl}).\ Formula \eqref{for} remains valid for $u\in \End(X)$ and $\theta'\in \NS(X)_{\bf Q}$. 
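For instance, $\iota_{\theta}(\theta)=\varphi_{\theta}^{-1}\circ \varphi_{\theta}=\Id_X$, so that if $u\in\End(X)$ satisfies $u^*\theta=\theta$, formula~\eqref{for} applied with $\theta'=\theta$ gives $u'\circ u=\Id_X$; this elementary consequence is the form in which~\eqref{for} will be used below.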
We will also need the so-called {\em analytic} representation \begin{equation*} \rho_a\colon \End_{\bf Q}(X) \ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}\End_{\bf C}(T_{X,0}). \end{equation*} It sends an endomorphism of $X$ to its tangent map at $0$. \subsection{${\bf Q}$-actions on abelian varieties}\label{sectb1} Let $X$ be an abelian variety\ and let $G$ be a finite group.\ A ${\bf Q}$-action of $G$ on $X$ is a morphism $\rho\colon {\bf Q}[G]\to \End_{\bf Q}(X)$ of ${\bf Q}$-algebras.\ The composition $$ G\xrightarrow{\ \rho\ } \End_{\bf Q}(X) \xrightarrow{\ \rho_a\ } \End_{\bf C}(T_{X,0}) $$ is called the analytic representation of $G$. \begin{prop}\label{propb1} Let $X$ be an abelian variety\ of dimension~$n$ with a ${\bf Q}$-action of a finite group~$G$.\ Assume that the analytic representation of $G$ is irreducible and defined over~${\bf Q}$.\ Then~$X$ is isogeneous to the product of $n$ copies of an elliptic curve. \end{prop} \begin{proof} This follows from~\cite[(3.1)--(3.4)]{es} (see also \cite[Section~1]{kr} and \cite[Proposition~13.6.2]{bl}).\ This reference assumes that we have a bona fide action of $G$ on $X$ but only uses the induced morphism ${\bf Q}[G]\to \End_{\bf Q}(X)$ of ${\bf Q}$-algebras. \end{proof} In the situation of Proposition~\ref{propb1}, we prove that any $G$-invariant ${\bf Q}$-polarization is essentially unique. \begin{lemm}\label{lb3} Let $X$ be an abelian variety\ with a ${\bf Q}$-action of a finite group $G$ and let~$\theta $ be a $G$-invariant polarization on $X$.\ If the analytic representation of $G$ is irreducible, any $G$-invariant ${\bf Q}$-polarization on $X$ is a rational multiple of $\theta$. \end{lemm} \begin{proof} Let $g\in G$, which we view as an invertible element of $\End_{\bf Q}(X) $.\ Since~$\theta$ is $g$-invariant, identity~\eqref{for} (applied with $\theta'=\theta$ and $u=g$) implies $g'\circ g=\Id_X$.\ Let $\theta'\in \NS(X)_{\bf Q}$.\ Applying~\eqref{for} again, we get $$\iota_{\theta}(g^*\theta')=g'\circ \iota_{\theta}(\theta')\circ g=g^{-1}\circ \iota_{\theta}(\theta')\circ g.$$ If $\theta'$ is $G$-invariant, we obtain $\iota_{\theta}(\theta')=g^{-1}\circ \iota_{\theta}(\theta')\circ g$ for all $g\in G$.\ If the analytic representation of $G$ is irreducible, $\rho_a(\iota_{\theta}(\theta'))$ must, by Schur's lemma, be a multiple of the identity, hence $\theta'$ must be a multiple of $\theta$. \end{proof} \subsection{Polarizations on self-products of elliptic curves} Let $E$ be an elliptic curve, so that $\mathfrak o_E:=\End(E)$ is either ${\bf Z}$ or an order in an imaginary quadratic extension of ${\bf Q}$.\ We have $$\End(E^n)\simeq \mathcal{M}_{n}(\mathfrak o_E)\quad\textnormal{and}\quad \End_{\bf Q}(E^n)\simeq \mathcal{M}_{n}(\mathfrak o_E\otimes{\bf Q}), $$ and $\rho_a$ is the embedding of these matrix rings into the ring $\mathcal{M}_n({\bf C})$ induced by the choice of an embedding $\mathfrak o_E\hookrightarrow {\bf C}$. Polarizations on $E^n$ were studied in particular by Lange in~\cite{lan}.\ We denote by $\theta_0$ the product principal polarization on $E^n$. \begin{prop}\label{propb3} Let $E$ be an elliptic curve.\ \begin{itemize} \item The Rosati involution defined by $\theta_0$ on $\End(E^n)$ corresponds to the involution $M\mapsto \overline M^T$ on $\mathcal{M}_{n}(\mathfrak o_E)$. 
\item Via the embedding $\iota_{\theta_0}$, polarizations $\theta$ on $E^n$ correspond to positive definite Hermitian matrices $M_\theta\in\mathcal{M}_{n}(\mathfrak o_E)$ and the degree of the polarization $\theta$ is $ \det(M_\theta)$. \item The group of automorphisms $\Aut(E^n,\theta)$ is the unitary group $${\bf U}(n,M_\theta):= \{M\in \mathcal{M}_{n}(\mathfrak o_E)\mid \overline M^T M_\theta\, M= M_\theta\}.$$ \end{itemize} \end{prop} \begin{proof} If we write $E={\bf C}/({\bf Z}\oplus \tau{\bf Z})$, the period matrix for $E^n$ is $\begin{pmatrix}I_n&\tau I_n\end{pmatrix}$.\ The first item then follows from \cite[Lemma~2.3]{lan} and elements of $ \NS(E^n)$ correspond to Hermitian matrices.\ By \cite[Theorem~5.2.4]{bl}, polarizations correspond to positive definite Hermitian matrices and the degree of the polarization is the determinant of the matrix.\ More precisely, one has (\cite[Proposition~5.2.3]{bl}) $$\det(T I_n-M_\theta)=\sum_{j=0}^n (-1)^{n-j}\frac{\theta_0^j\cdot \theta^{n-j}}{j!(n-j)!}\,T^j .$$ The last item follows from~\eqref{for}. \end{proof} \begin{rema}\label{remb4} Let $G$ be a finite group with a ${\bf Q}$-representation $\rho\colon {\bf Q}[G]\to \mathcal{M}_{n}({\bf Q})$.\ For any elliptic curve $E$, this defines a ${\bf Q}$-action of $G$ on $E^n$.\ It follows from the proposition that any positive definite symmetric matrix $M_\theta\in\mathcal{M}_{n}({\bf Q})$ such that, for all $g\in G$, $$\rho(g)^T M_\theta\, \rho(g)= M_\theta$$ defines a $G$-invariant ${\bf Q}$-polarization on $E^n$.\ Such a matrix always exists: take for example $M_\theta:=\sum_{g\in G} \rho(g)^T \rho(g)$ (it corresponds to the~${\bf Q}$-polarization $ \sum_{g\in G} g^*\theta_0$).\ The analytic representation is $\rho_{\bf C}\colon {\bf C}[G]\to \mathcal{M}_{n}({\bf C})$.\ If it is irreducible, every $G$-invariant ${\bf Q}$-polarization on $E^{ n}$ is, by Lemma~\ref{lb3}, a rational multiple of $\theta$.\ \end{rema} We end this section with the construction of an explicit abelian variety\ of dimension~$10$ with a $\mathbb{G}$-action, such that the associated analytic representation is the irreducible representation $\bw2\xi$, together with a $\mathbb{G}$-invariant {\em principal} polarization.\ Set $ \lambda:=\tfrac12(-1+\sqrt{-11})$ and consider the elliptic curve $E_\lambda:={\bf C}/{\bf Z}[\lambda]$, which has complex multiplication by~${\bf Z}[\lambda]$.\ \begin{prop}\label{prop63} There exists a principal polarization $\theta$ on the abelian variety\ $E_\lambda^{10}$ and a faithful action $\mathbb{G}\hookrightarrow\Aut(E_\lambda^{10},\theta )$ such that the associated analytic representation is the irreducible representation $\bw2\xi$ of $\mathbb{G}$. \end{prop} \begin{proof} By \cite[Table~1]{sch}), there is a positive definite unimodular ${\bf Z}[\lambda]$-sesquilinear Hermitian form~$H'$ on~${\bf Z}[\lambda]^5$ with an automorphism of order $11$.\ Its Gram matrix in the canonical ${\bf Z}[\lambda]$-basis $(e_1,\dots,e_5)$ of~${\bf Z}[\lambda]^5$ is $$ \begin{pmatrix} 3 & 1-\bar\lambda &-\lambda&1&-\bar\lambda \\ 1-\lambda & 3 & -1& - \lambda&1 \\ -\bar\lambda & -1 & 3&\lambda&-1+\lambda \\ 1 & -\bar\lambda& \bar\lambda&3&1-\bar \lambda \\ - \lambda & 1 & -1+\bar\lambda& 1-\lambda&3 \end{pmatrix} $$ and its unitary group has order $2^{3}\cdot 3\cdot 5\cdot 11=1\,320 $ (\cite{sch}). 
By Proposition~\ref{propb3}, this form defines a principal polarization $\theta'$ on the abelian variety~$E_\lambda^5$ and the group $\Aut(E_\lambda^5,\theta')$ has order $1\,320 $; in particular, it contains an element of order~$11$.\ It follows from \cite{bb} that the group $\Aut(E_\lambda^5,\theta')$ is isomorphic to $\mathbb{G}\times\{\pm 1\} $ and the faithful representation $\mathbb{G}\hookrightarrow\Aut(E_\lambda^5,\theta')\hookrightarrow {\bf U}(5,H')$ given by Proposition~\ref{propb3} is $\xi$.\footnote{The principally polarized abelian fivefold $(E_\lambda^5,\theta')$ was studied in \cite{adl,adls,gon,rou}: it is the intermediate Jacobian of the Klein cubic threefold with equation $x_1^2x_2+x_2^2x_3+x_3^2x_4+x_4^2x_5 +x_5^2x_1 =0$ in ${\bf P}^4$.} The Hermitian form $H'$ on ${\bf Z}[\lambda]^5$ induces a positive definite unimodular Hermitian form~$H$ on $\bw2{\bf Z}[\lambda]^5={\bf Z}[\lambda]^{10}$ by the formula $$ H(x_1\wedge x_2,x_3\wedge x_4):=H'(x_1,x_3)H'(x_2,x_4)-H'(x_1,x_4)H'(x_2,x_3). $$ The matrix of $H$ (in the basis $(e_{12},e_{13},e_{14},e_{15},e_{23},e_{24},e_{25},e_{34},e_{35},e_{45})$) is \begin{equation}\label{mat10} \left(\begin{smallmatrix} 4& 2\lambda&-1 -2\lambda&-1-\lambda&-2+2\lambda &-\lambda &-1-2 \lambda&-2-\lambda &1& -2 \\ 2\bar\lambda&6 &-1+2\lambda &-1+2\lambda&6+2\lambda &-2+\lambda &-4+\lambda&\lambda &-\lambda &2+\lambda \\ -1-2\bar\lambda&-1+2\bar\lambda & 8&5+2\lambda&-2-2\lambda & 5+2\lambda &3+2\lambda&1-2\lambda &1&-1-2\lambda \\ -1-\bar\lambda&-1+2\bar\lambda & 5+2\bar\lambda& 6&-1-2 \lambda&4&5+2\lambda&-1-\lambda &-1-\lambda&-1-\lambda \\ -2+2\bar\lambda& 6+2\bar\lambda&-2-2\bar\lambda &-1-2\bar \lambda & 8&2\lambda & -2+3\lambda& 2\lambda& -2-\lambda&3+\lambda \\ -\bar\lambda & -2+\bar\lambda& 5+2\bar\lambda &4 & 2\bar\lambda& 6& 5+2\lambda&0 &-1 &-\lambda \\ -1-2 \bar\lambda&-4+\bar\lambda &3+2\bar\lambda & 5+2\bar\lambda&-2+3\bar\lambda &5+2\bar\lambda & 8&2 &-1+\lambda&-1 -2\lambda \\ -2-\bar\lambda &\bar\lambda &1-2\bar\lambda &-1-\bar\lambda & 2\bar\lambda &0 &2 & 6&2+2 \lambda & -2\lambda \\ 1&-\bar\lambda &1 &-1-\bar\lambda & -2-\bar\lambda & -1&-1+\bar\lambda &2+2\bar \lambda &4 & -2 \\ -2&2+\bar\lambda&-1-2\bar\lambda&-1-\bar\lambda&3+\bar\lambda &-\bar\lambda &-1 -2\bar\lambda & -2\bar\lambda &-2 & 4 \end{smallmatrix}\right). \end{equation} By Proposition~\ref{propb3} again, the form $H$ defines a principal polarization $\theta$ on the abelian variety~$E_\lambda^{10}$, the group $\Aut(E_\lambda^{10},\theta)$ contains $\mathbb{G} $, and the corresponding analytic representation is~$\bw2\xi$.\ \end{proof} The $\mathbb{G}$-action on~$E_\lambda^{10}$ in the proposition is not the $\mathbb{G}$-action described in Remark~\ref{remb4} (otherwise, since $\mathbb{G}$-invariant polarizations are proportional, the matrix~\eqref{mat10} would, by Lemma~\ref{lb3}, have rational coefficients): these actions are only conjugate by a ${\bf Q}$-automorphism of $E_\lambda^{10}$. \end{document}
\begin{document} \title[Universal Dynamical Decoherence Control]{Universal dynamical decoherence control of noisy single- and multi-qubit systems} \author{Goren Gordon, Noam Erez, Gershon Kurizki} \address{Department of Chemical Physics, Weizmann Institute of Science, 76100 Rehovot, Israel} \begin{abstract} In this article we develop, step by step, the framework for universal dynamical control of two-level systems (TLS) or qubits experiencing amplitude or phase noise (AN or PN) due to coupling to a thermal bath. A comprehensive arsenal of modulation schemes is introduced and applied to either AN or PN, resulting in completely analogous formulae for the decoherence rates, thus underscoring the unified nature of this universal formalism. We then address the extension of this formalism to multipartite decoherence control, where symmetries are exploited to overcome decoherence. \end{abstract} \maketitle \section{Introduction} \label{sec-intro} In-depth study of the mechanisms of decoherence and disentanglement and their prevention in bipartite or multipartite open systems is an essential prerequisite for applications involving quantum information processing or communications \cite{nie00}. The present article is aimed at furthering our understanding, scanty at best, of these formidable issues. It is based on recent progress by our group, as well as others, towards a unified approach to the dynamical control of decoherence and disentanglement. This unified approach culminates in universal formulae allowing the design of the required control fields. The topic of multipartite decoherence has been well-investigated in two limits. One of these is the relaxation toward steady state of the one-body coherence of spins, atoms, excitons, quantum dots, etc., in contact with a much larger reservoir. The other is the collective decoherence of a small (\emph{localized}) two-body or many-body system, which typically occurs more rapidly than one-body decoherence \cite{coh92,scu97}. By contrast, more general problems of decay of \emph{non-local} mutual entanglement of two or more small systems are less well understood. This decoherence process may occur on a time scale much shorter than the time for either body to undergo local decoherence, but much larger than the time each takes to become disentangled from its environment. The disentanglement of individual particles from their environment is dynamically controlled by interactions on non-Markovian time-scales, as discussed below \cite{aku05}. Their disentanglement from each other, however, may be purely Markovian \cite{ban04a,yu04,yu06}, in which case the present non-Markovian approach to dynamical control/prevention is insufficient. \subsection{Dynamical control of decay and decoherence on non-Markovian time scales} \label{ch-1-sec-1} Quantum-state decay to a continuum or changes in its population via coupling to a thermal bath is known as amplitude noise (AN). It characterizes decoherence processes in many quantum systems, e.g., spontaneous emission of photons by excited atoms \cite{coh92}, vibrational and collisional relaxation of trapped ions \cite{sac00} and the relaxation of current-biased Josephson junctions \cite{cla88}. Another source of decoherence in the same systems is proper dephasing or phase noise (PN) \cite{scu97}, which does not affect the populations of quantum states, but randomizes their energies or phases.
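To fix ideas, and in schematic notation introduced only for this illustration (it need not coincide with the notation of the following sections), the two noise types correspond to the two limiting forms of the qubit-bath coupling in a Hamiltonian of the generic form
\[
H=\frac{\hbar\omega_{a}}{2}\sigma_{z}+H_{B}+\sigma_{x}\otimes B_{\mathrm{AN}}+\sigma_{z}\otimes B_{\mathrm{PN}},
\]
where $H_{B}$ is the bath Hamiltonian and $B_{\mathrm{AN}}$, $B_{\mathrm{PN}}$ are bath operators: the off-diagonal ($\sigma_{x}$) coupling induces transitions between the two levels and hence population decay (AN), whereas the diagonal ($\sigma_{z}$) coupling leaves the populations intact and only randomizes the relative phase (PN).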
A thoroughly studied approach to suppression of decoherence is the ``dynamical decoupling'' of the system from the bath \cite{aga99,aga01a,alicki2004oss,vio98,shi04,vit01,fac01,fac04,zan03}. In particular, ``bang-bang'' (BB) pulses have been proposed for \emph{stroboscopic} suppression of proper dephasing: $\pi$-phase flips of the coupling via strong and sufficiently fast resonant pulses applied to the system \cite{vio98,shi04,vit01}. The identification of a decoherence-free subspace (DFS), wherein symmetrically degenerate states are decoupled from the bath, constitutes a complementary approach \cite{zan03,zan97,lid98,wu02,kof96}. Our group has purported to substantially expand the arsenal of decay and decoherence control. We have presented a {\em universal form of the decay rate} of unstable states into {\em any} reservoir (continuum), dynamically modified by perturbations with arbitrary time dependence, focusing on non-Markovian time-scales \cite{kof96,kof00,kof01,kof04,kof05}. An analogous form has been obtained by us for the dynamically modified rate of proper dephasing \cite{kof04,kof05,kof01a}. Our unified, optimized approach reduces to the BB method in the particular case of proper dephasing or decay via coupling to {\em spectrally symmetric} (e.g., Lorentzian or Gaussian) noise baths with limited spectral width (see below). The type of phase modulation advocated for the suppression of coupling to {\em asymmetric} baths (e.g., phonon or photon baths with frequency cutoff \cite{pel04}) is, however, drastically different from the BB method. Other situations to which our approach applies, but not the BB method, include {\em amplitude modulation} of the coupling to the continuum, as in the case of decay from quasibound states of a periodically tilted washboard potential \cite{kof01}: such modulation has been experimentally shown \cite{fis01} to give rise to either slowdown of the decay (Zeno-like behavior) or its speedup (anti-Zeno-like behavior), depending on the modulation rate. The theory has been generalized by us to finite temperatures and to qubits driven by an {\em arbitrary} time-dependent field, which may cause the failure of the rotating-wave approximation \cite{kof04}. It has also been extended to the analysis of {\em multi-level systems}, where quantum interference between the levels may either inhibit or accelerate the decay \cite{gor06a}. Our general approach \cite{kof01} to dynamical control of states coupled to an arbitrary ``bath'' or continuum has reaffirmed the intuitive anticipation that, in order to suppress their decay, we must modulate the system-bath coupling at a rate exceeding the spectral interval over which the coupling is significant. Yet our analysis can serve as a general recipe for {\em optimized} design of the modulation aimed at an effective use of the fields for decay and decoherence suppression or enhancement. The latter is useful for the control of chemical reactions \cite{pre00}. \subsection{Control of symmetry-breaking multipartite decoherence} \label{ch-1-sec-2} Symmetry is a powerful means of protecting entangled quantum states against decoherence, since it allows the existence of a decoherence-free subspace or a decoherence-free subsystem \cite{cla88,scu97,aga99,aga01,aga01a,vio98,shi04,vit01,fac01,fac04,zan03,zan97,lid98,wu02,kof96,vio99,vio00,aku05}. In multipartite systems, this requires that all particles be perturbed by the {\em same} environment. 
In keeping with this requirement, quantum communication protocols based on entangled two-photon states have been studied under {\em collective} depolarization conditions, namely, {\em identical} random fluctuations of the polarization for both photons \cite{ban04}. Entangled qubits that reside at the same site or at equivalent sites of the system, e.g. atoms in optical lattices, have likewise been assumed to undergo identical decoherence \cite{aku05}. Locally-decohering entangled states of two or more particles, such that each particle travels along a different channel or is stored at a different site in the system, may break the state symmetry. A possible consequence of this symmetry breaking is the abrupt ``death'' of the entanglement \cite{ban04a,yu04,yu06}. Such systems, composed of particles undergoing individual or ``local'' decoherence, do not possess a natural DFS and thus present more challenging problems insofar as decoherence effects are concerned \cite{lis02}. Our group has recently addressed these challenges by developing a generalized treatment of multipartite entangled states (MES) decaying into zero-temperature baths and subject to {\em arbitrary} external perturbations whose role is to provide {\em dynamical protection} from decay and decoherence \cite{gor06a, gor06b}. Our treatment applies {\em to any difference} between the couplings of individual particles to the baths. It does not assume the perturbations to be stroboscopic, i.e. sufficiently strong or fast, but rather allows them to act concurrently with the particle-bath interactions. Our main result is that, by applying {\em local} (selective) perturbations to multilevel particles, i.e. by {\em addressing each level and each particle individually}, one can create a decoherence-free system of many entangled qubits. Alternatively, one may reduce the problem of locally decohering MES to that of a single decohering particle, whose dynamical control has been thoroughly investigated \cite{vio98,shi04,kof01,kof04,kof05}. On the other hand, the combined effect of dephasing and relaxation (phase and amplitude noises) on MES and its control, which constitute a much more formidable problem, have not yet been studied by us. \subsection{Outline} \label{Subsec-layout} In this article we develop, step by step, the framework for universal dynamical control of two-level systems or qubits by modulating fields, aimed at suppressing or preventing their noise, decoherence or relaxation in the presence of a thermal bath. To this end, a comprehensive treatment is developed in Sec.~\ref{Sec-ME} in a more complete and transparent fashion than its brief sketch in Ref.~\cite{kof04}. Its crux is the derivation of a more general master equation (ME) than in previous treatments of a multilevel, multipartite system, weakly coupled to an arbitrary bath and subject to arbitrary temporal driving or modulation. The present ME, derived by the Nakajima-Zwanzig technique \cite{nakajima1958qtt,zwanzig1960emt}, is more general than the ones obtained previously in that it does not invoke the rotating wave approximation and therefore applies at arbitrarily short times or for arbitrarily fast modulations. Remarkably, when our general ME is applied to either AN or PN in Sec.~\ref{Sec-Bloch}, the resulting dynamically-controlled relaxation or decoherence rates obey \emph{analogous formulae} provided the corresponding density-matrix (generalized Bloch) equations are written in the appropriate basis. This underscores the universality of our treatment.
The choice of an appropriate time-dependent basis allows here to simplify the AN treatment of Ref. \cite{kof04}. More importantly, it allows us to present a PN treatment that does not describe noise phenomenologically as in Ref. \cite{kof04}, but rather dynamically, starting from the ubiquitous spin-boson Hamiltonian. We then discuss in Sec.~\ref{Sec-modulation}, more comprehensively than in previous treatments, the possible modulation arsenal for either AN or PN control. The present formalism is applicable in a natural and straightforward manner to multipartite and/or multilevel systems \cite{gor06b}. It allows us to focus in Sec.~\ref{Sec-multi} on the ability of symmetries to overcome multipartite decoherence\cite{zan03,zan97,lid98,wu02,vio00}. Our conclusions are presented in Sec.~\ref{Sec-conc}. \section{Master equation (ME) for dynamically controlled systems coupled to thermal baths} \label{Sec-ME} \subsection{Derivation of the reduced density matrix ME by the Nakajima-Zwanzig method} We shall consider the most general unitary evolution of a system coupled to a thermal reservoir, governed by the Liouville operator equation (we shall take $\hbar=1$ throughout the rest of the paper): \begin{eqnarray} \dot{\rho}_{\text{tot}}(t) &=&-i [H(t),\rho_{tot}(t)] \equiv -i \mc{L}(t) \rho_{tot}(t), \label{Leq} \end{eqnarray} \begin{subequations} \begin{eqnarray} H(t) &=& H_0(t)+H_I(t), \\ H_0(t)&\equiv& H_S(t) +H_B, \end{eqnarray} \end{subequations} \begin{eqnarray} \mc{L}(t)&=& \mc{L}_S(t)+\mc{L}_B+\mc{L}_I(t) \end{eqnarray} where $\rho_{tot}$ is the density matrix of the system+reservoir and $H_S,~H_B,~H_I$ are the Hamiltonians of the system, bath and their interaction, respectively. As usual, $\mc{L}$ denotes the Liouville operator, which acts linearly on \emph{operators} on our Hilbert space. We shall use the notation $H_0(t)$ for the ``unperturbed'' Hamiltonian, assuming weak system-bath coupling, and $\mc{L}_0(t)$ for its associated Liouville operator. We seek a master equation for the reduced density matrix of the system alone $\rho\equiv Tr_B \rho_{tot}$, allowing for \emph{arbitrary} time dependence of $H(t)$ \emph{without} resorting to the rotating-wave approximation \cite{coh92,scu97}. This can be accomplished by the Nakajima-Zwanzig \cite{nakajima1958qtt,zwanzig1960emt,prigogine1962nes} projection-operator technique (see also \cite{breuer2002toq,yan2005qmd}). Let us define $\rho_B\equiv Z^{-1}e^{-H_B/k_B T}$ ($T$ being the reservoir temperature, $Z$ normalization to unit trace), the projection operator $\mathcal{P}(\cdot)\equiv \text{Tr}_B(\cdot)\otimes \rho_B$ (satisfying $\mathcal{P}^2=\mathcal{P}$), and the complementary (projection) operator $\mathcal{Q} \equiv 1-\mathcal{P}$. 
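Indeed, the projection property quoted above is immediate from these definitions (we spell it out for completeness): for any operator $A$ on the combined Hilbert space, $\text{Tr}_B\,\rho_B=1$ implies
\begin{equation*}
\mathcal{P}^2 A=\text{Tr}_B\!\bigl[\text{Tr}_B(A)\otimes\rho_B\bigr]\otimes\rho_B
=\text{Tr}_B(A)\,\bigl(\text{Tr}_B\,\rho_B\bigr)\otimes\rho_B=\mathcal{P}A .
\end{equation*}
In particular, $\mathcal{P}\rho_{\text{tot}}=\rho\otimes\rho_B$, so that the reduced density matrix $\rho$ is carried entirely by the projected part $\mathcal{P}\rho_{\text{tot}}$.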
In terms of these definitions, Eq.\eqref{Leq} is equivalent to: \begin{subequations} \begin{eqnarray} \label{e2a}\mathcal{P} \dot{\rho}_{\text{tot}} (t) &=& -i\mathcal{P} \mc{L}(t) \mathcal{P} \rho_{\text{tot}}(t) -i \mathcal{P} \mc{L}(t) \mathcal{Q} \rho_{\text{tot}} (t) \\ \mathcal{Q} \dot{\rho}_{\text{tot}} (t) &=& -i\mathcal{Q} \mc{L}(t) \mathcal{P} \rho_{\text{tot}}(t) -i \mathcal{Q} \mc{L}(t) \mathcal{Q} \rho_{\text{tot}} (t) \label{e2b} \end{eqnarray} \end{subequations} Equation \eqref{e2b} is then formally integrated to give: \begin{equation} \mathcal{Q} \rho_{\text{tot}}(t) = -i\int_0^t \mc{K}_+(t,\tau) \mathcal{Q} \mc{L}(\tau) \mathcal{P} \rho_{\text{tot}}(\tau)d\tau +\mc{K}_+(t,0)\mathcal{Q} \rho_{\text{tot}}(0) \label{e3} \end{equation} where \begin{equation} \mc{K}_+(t,\tau) = \text{T}_+e^{-i\mathcal{Q}\int_\tau^t \mc{L}(s)ds}, \end{equation} $\text{T}_+$ denoting time-ordering. If this expression is then plugged into Eq.\eqref{e2a} we get a non-Markovian ME for $P\rho_{\text{tot}}$: \begin{equation} \mathcal{P} \dot{\rho}_{\text{tot}}(t) = -i\mathcal{P} \mc{L}(t)\mathcal{P} \rho_{\text{tot}}(t) - \int_0^t \mathcal{P} \mc{L}(t) \mc{K}_+(t,\tau) \mathcal{Q} \mc{L}(\tau)\mathcal{P} \rho_{\text{tot}} (\tau) d\tau -i\mathcal{P} \mc{L} \mc{K}_+(t,0)\mathcal{Q} \rho_{\text{tot}}(0) \end{equation} which yields a ME for $\rho$ after tracing out the bath. Rather than apply a perturbative treatment directly to this equation, it is useful to first transform it to a ``time-convolutionless'' form (TCL) \cite{hashitsumae1977qme,shibata1977gsl,chaturvedi1979tcp,shibata1980efn}. In this form, the memory effect (the presence of $\rho_{\text{tot}}(\tau)$ in the integrand) is transferred to the integration kernel and $\rho(t)$ is taken out of the integral. Only then, is the perturbative expansion (in $\mc{L}_I$) applied. Formally, \begin{equation} \rho_{\text{tot}}(\tau) = \mc{G}_-(t,\tau) \rho_{\text{tot}}(t) \end{equation} where, writing $\text{T}_-$ for anti-chronological ordering: \begin{equation} \mc{G}_-(t,\tau) \equiv \text{T}_-e^{+i\int_\tau^t \mc{L}(s) ds}. \end{equation} Substituting this expression for $\rho_{\text{tot}}(\tau)$ into Eq.\eqref{e3}, one obtains: \begin{subequations} \begin{equation} \mathcal{Q} \rho_{\text{tot}}(t) = - \int_0^t \mc{K}_+(t,\tau)i\mathcal{Q} \mc{L}(\tau)\mathcal{P} \mc{G}_-(t,\tau)d\tau \left(\mathcal{P}+\mathcal{Q} \right) \rho_{\text{tot}} (t) + \mc{K}_+(t,0)\mathcal{Q} \rho_{\text{tot}}(0) \end{equation} Collecting all $\mathcal{Q} \rho_{\text{tot}}$ terms on the left, we obtain: \begin{eqnarray} &\mc{F}(t)&\mathcal{Q} \rho_{\text{tot}} (t) = \left\{ 1-\mc{F}(t)\right\} \mathcal{P} \rho_{\text{tot}} (t) +\mc{K}_+(t,0)\mathcal{Q} \rho_{\text{tot}} (0) \\ &\mc{F}(t)& = 1 + \int_0^t \mc{K}_+(t,\tau) i \mathcal{Q} \mc{L}(\tau) \mathcal{P} \mc{G}_-(t,\tau)d \tau \equiv 1+ \Sigma(t) \label{e10c} \end{eqnarray} Assuming $\mc{F}(t)$ can be inverted (which is expected to hold for short times in the weak coupling limit), and writing $\Theta(t) = \mc{F}(t)^{-1}$, one obtains the equation: \begin{equation} \mathcal{Q} \rho_{\text{tot}} (t) = \left\{ \Theta(t)-1\right\} \mathcal{P} \rho_{\text{tot}} (t) +\Theta(t) \mc{K}_+(t,0) \mathcal{Q} \rho_{\text{tot}} (0). 
\end{equation} \end{subequations} Finally, plugging this expression for $\mathcal{Q} \rho_{\text{tot}}$ into Eq.\eqref{e2a}, the formal TCL ME is obtained: \begin{subequations} \begin{equation} \mathcal{P} \dot{\rho}_{\text{tot}} (t) = -i \mathcal{P} \mc{L}(t)\mathcal{P} \rho_{\text{tot}}(t) -i \mathcal{P} \mc{L}\left\{ \Theta(t)-1 \right\} \mathcal{P}\rho_{\text{tot}}(t) - i\mathcal{P} \mc{L} \Theta(t)\mc{K}_+(t,0) \mathcal{Q} \rho_{\text{tot}}(0). \label{e11a} \end{equation} If the initial condition is such that $\mathcal{Q}\rho_{\text{tot}}(0)=0$, so that the last term vanishes (as will indeed be our assumption below), then all memory effects are contained in $\Theta(t)$. In what follows, \emph{we shall always assume that} \begin{equation} \rho_{\text{tot}}(0)=\rho_S(0)\otimes\rho_B, \end{equation} so that this condition is fulfilled. \end{subequations} The operators $\mc{L}_S$ and $\mc{L}_B$ both commute with $\mathcal{P}$ and $\mathcal{Q}$, and $\mathcal{P} \mc{L}_B =0$. This implies $\mathcal{P} \mc{L} \mathcal{Q} = \mathcal{P} \mc{L}_I \mathcal{Q}$ (and note that $\Theta - 1 = \mathcal{Q} (\Theta -1)$). With the notation $\langle \cdot \rangle_B \equiv \text{Tr}_B\left(\cdot \rho_B \right)$, the ME for the reduced density matrix of the system can be written in the form: \begin{subequations} \begin{equation} \dot{\rho}(t) = -i \left[\mc{L}_S +\langle \mc{L}_I \rangle_B \right]\rho(t) - \Xi(t)\rho(t) \label{ME} \end{equation} \begin{equation} \Xi (t) = \langle i\mc{L}_I\left\{\Theta(t)-1\right\} \rangle_B \label{Xi} \end{equation} \end{subequations} \subsection{Born approximation} At this point, it is expedient to follow the perturbational method of \cite{emch1968nsm,PhysRev.178.2025}. We begin by noting that (cf. Eqs.\eqref{e10c},\eqref{Xi}): \begin{equation} \Xi (t) = -i \langle \mc{L}_I\left\{\Sigma(t) + \mc{O}(\Sigma^2)\right\} \rangle_B ;~~\Sigma(t)=\mc{O}(\mc{L}_I). \end{equation} Therefore, to expand Eq.\eqref{Xi} to second order in $\mc{L}_I$, we need to evaluate $\mc{K}_+$ and $G$ only to 0th order! The operator $\mc{K}_+(t,\tau)$ can be factored: \begin{subequations} \begin{equation} \mc{K}_+(t,\tau) = \mc{V}_0(t,\tau) \text{T}_+ e^{-i\mathcal{Q}\int_\tau^t \mc{V}_0(s,\tau)^{-1}\mc{L}_I(s)\mc{V}_0(s,\tau) ds} \end{equation} where \begin{equation} \mc{V}_0(t,\tau)=\text{T}_+ e^{-i\mathcal{Q}\int_\tau^t \mc{L}_0(s) ds}. \end{equation} \emph{To zeroth order in $\mc{L}_I$, $\mc{K}_+$ is just $\mc{V}_0$}. Furthermore, it is important to note that \end{subequations} \begin{subequations} \begin{equation} \mc{V}_0 \mathcal{Q} = \mathcal{Q}\mc{U}_0\mathcal{Q} = \mathcal{Q}\mc{U}_0 \label{e14a} \end{equation} where \begin{equation} \mc{U}_0(t,\tau) = \text{T}_+e^{-i\int_\tau^t \mc{L}_0(s)ds}, \end{equation} and in Eq.\eqref{e14a} we have used the fact that $\mc{U}_0$ commutes with $\mathcal{Q}$. \end{subequations} Similarly, \begin{equation} \mc{G}_-(t,\tau) = \left( 1+ \mc{O}(\mc{L}_I) \right) \text{T}_- e^{+i \int_\tau^t \mc{L}_0(s) ds} = \mc{U}_0(t,\tau)^{-1} + \mc{O}(\mc{L}_I). \end{equation} We have for $\Sigma(t)$ in the Born approximation: \begin{equation} \Sigma(t) = i\int_0^t \mc{V}_0(t,\tau)\mathcal{Q} \mc{L}_I(\tau) \mathcal{P} \mc{U}_0(t,\tau)^{-1}d\tau. 
\end{equation} Finally, after making use of Eq.\eqref{e14a}, we get the ME for $\rho$ in the Born approximation, which implies the neglect of the back-effect of the system on the bath, consistently with the weak-coupling assumption: \begin{equation} \dot{\rho}(t) = -i\left( \mc{L}_S(t)+\langle \mc{L}_I\rangle_B\right) \rho(t) - \int_0^t\langle \mc{L}_I(t)\mathcal{Q} \mc{U}_0(t,\tau)\mc{L}_I(\tau)\mathcal{P} \mc{U}_0(t,\tau)^{-1}\rangle_Bd\tau \rho(t) \label{MEBorn} \end{equation} for $\mathcal{Q}\rho_{\text{tot}}(0)=0$. \subsection{Explicit equations for factorizable interaction Hamiltonians} We now wish to write the ME explicitly for time-dependent Hamiltonians of the following form \cite{kof04}: \begin{subequations} \begin{eqnarray} \label{general} H(t) &=& H_S(t)+H_B+H_I(t), \\ H_I(t) &=& S(t) B; \end{eqnarray} \end{subequations} where $H_S$ and $H_B$ are the system and bath Hamiltonians, respectively; and $H_I$, the interaction Hamiltonian, is the product of operators $S,B$ which act on the system (resp. bath) alone. We assume $\langle \mc{L}_I \rangle_B =0$, by virtue of $\langle B \rangle_B =0$. This also implies $\mathcal{P} \mc{L}_I\mathcal{P} =0$. Equation \eqref{MEBorn} now simplifies to: \begin{equation} \dot\rho(t) = -i\mc{L}_S(t)\rho(t) - \int_0^t d\tau \langle \mc{L}_I(t) \mathcal{U}_0(t,\tau) \mc{L}_I(\tau) \mathcal{P} \mathcal{U}_0(t,\tau)^{-1} \rangle_B \rho(t). \label{e20} \end{equation} Let us now write out the action of the operator $\mc{U}_0$ in terms of the unitary evolution operators of the system and bath: \begin{subequations} \begin{equation} \mc{U}_0(t,\tau)A = U_S(t,\tau) U_B(t-\tau)A U_B(\tau-t) U_S(t,\tau)^\dagger \\ \end{equation} \begin{equation} \label{U-s-def} U_S(t,\tau) \equiv \text{T}_+e^{-i\int_\tau^t H_S(t')dt'} \end{equation} \begin{equation} U_B(t) \equiv e^{-iH_Bt}. \end{equation} \end{subequations} The integrand of the second term in Eq.\eqref{e20}, can now be written explicitly as: \begin{eqnarray} I(t,\tau)&=&\text{Tr}_B \left[ S(t)B, \mc{U}_0(t,\tau) \left[S(\tau)B,\mc{U}_0(t,\tau)^{-1}\rho(t)\rho_B \right]\right] \nonumber \\ &=& \text{Tr}_B \left[ S(t)B, \left[\tilde{S}(t,\tau)\tilde{B}(t-\tau),\rho(t) \rho_B \right]\right] \end{eqnarray} where $\tilde{S}$ and $\tilde{B}$ are defined as: \begin{eqnarray} \label{S-tilde-def} \tilde{S}(t,\tau)&\equiv& U_S(t,\tau)S(\tau)U_S(t,\tau)^\dagger, \\ \tilde{B}(\tau)&\equiv& U_B(\tau)BU_B(-\tau). \end{eqnarray} Using the commutativity of $S$ and $B$, and of $\rho_B$ and $H_B$, as well as the cyclic property of the trace, this gives after some rearrangement: \begin{equation} I(t,\tau) = \langle B\tilde{B}(t-\tau)\rangle_B \left[S(t),\tilde{S}(t,\tau)\rho(t)\right] + H.c. \end{equation} Finally, defining the correlation function for the bath, \begin{subequations} \begin{equation} \Phi_T(t) = \langle B \tilde{B}(t) \rangle_B, \end{equation} we obtain the ME for $\rho$ in the Born approximation: \begin{equation} \label{gen-ME} \dot{\rho}(t) = -i\left[H_S,\rho(t)\right]+\int_0^t d\tau \left\{ \Phi_T(t-\tau) \left[\tilde{S}(t,\tau)\rho(t),S(t)\right] +H.c. \right\}. \end{equation} \end{subequations} \section{Generalized Bloch equations} \label{Sec-Bloch} Having derived the master equation, we focus on two regimes: a two-level system coupled to either an amplitude- or phase-noise (AN or PN) thermal bath. 
The bath Hamiltonian (in either regime) will be explicitly taken to consist of harmonic oscillators and be linearly coupled to the system (generalizations to other baths and couplings are obvious): \begin{eqnarray} &H_B=\sum_\lambda\omega_\lambda a_\lambda^\dagger a_\lambda\\ &B=\sum_\lambda(\kappa_\lambda a_\lambda+\kappa_\lambda^*a_\lambda^\dagger). \end{eqnarray} Here $a_\lambda,a_\lambda^\dagger$ are the annihilation and creation operators of mode $\lambda$, respectively, and $\kappa_\lambda$ is the coupling amplitude to mode $\lambda$. We use different modulation schemes for each regime, namely, dynamical {\em off-resonant} fields for the AN regime and time-dependent {\em resonant} fields for the PN regime. We derive the generalized Bloch equations for the two cases. \subsection{Two-level system coupled to a thermal amplitude-noise bath} \label{Subsec-amplitude-bloch} We first consider the AN regime of a two-level system coupled to a thermal bath. We will use off-resonant dynamic modulations, resulting in AC-Stark shifts. The Hamiltonian (Eq.~\eqref{general}) then assumes the following form: \begin{eqnarray} \label{sigma-x-1} &H_S(t)=(\omega_a+\delta_a(t))\ket{e}\bra{e}\\ \label{sigma-x-2} &S(t)=\tilde\epsilon(t)\sigma_x \end{eqnarray} where $\delta_a(t)$ is the dynamical AC-Stark shifts, $\tilde{\epsilon}(t)$ is the time-dependent modulation of the interaction strength, and the Pauli matrix $\sigma_x=\ket{e}\bra{g}+\ket{g}\bra{e}$. We derive the Bloch equations for the explicit case discussed above. Inserting Eqs.~\eqref{sigma-x-1}-\eqref{sigma-x-2} into Eq.~\eqref{U-s-def} and Eq.~\eqref{S-tilde-def}, we get: \begin{eqnarray} U_S(t,\tau)&=&e^{-i\omega_a(t-\tau)-i\int_\tau^t dt_1 \delta_a(t_1)}\ket{e}\bra{e}+\ket{g}\bra{g}\\ \tilde{S}(t,\tau)&=&e^{-i\omega_a(t-\tau)-i\int_\tau^t dt_1 \delta_a(t_1)}\tilde{\epsilon}(\tau)\ket{e}\bra{g}+H.c. \end{eqnarray} Plugging this into the ME \eqref{gen-ME}, we arrive at the following modified Bloch equations: \begin{eqnarray} \label{AN-Bloch-1} \dot\rho_{ee}=-\dot\rho_{gg}&=&-R_e(t)\rho_{ee}+R_g(t)\rho_{gg} \\ \label{AN-Bloch-2} \dot\rho_{eg}=\dot\rho_{ge}^*&=&-\left\{(R(t)+i\Delta_a(t))+i[\omega_a+\delta_a(t)]\right\}\rho_{eg} \nonumber\\&&+\left\{R(t)-i\Delta_a(t)\right\}\rho_{ge}, \end{eqnarray} where \begin{eqnarray} &&R(t)=[R_{e}(t)+R_{g}(t)]/2\\ &&\Delta_a(t)=\Delta_{e}(t)-\Delta_{g}(t)\\ \label{R-eg} &&R_{e(g)}(t)/2+i\Delta_{e(g)}(t)=\int_0^tdt'\Phi_T(t-t')K_{e(g)}(t,t')e^{\pm i\omega_a(t-t')}\\ &&K_e(t,t')=K_g^*(t,t')=\epsilon(t)\epsilon^*(t')\\ &&\epsilon(t)=\tilde{\epsilon}(t)e^{i\int_0^tdt_1\delta_a(t_1)}. \end{eqnarray} $R_{e(g)}(t)$ is the modified downward (upward) transition rate of the excited (ground) state to the ground (excited) state. Their half-rate contributes to the decoherence rate, and $\Delta_a(t)$ is the resonance (transition frequency) shift in energy due to the modified coupling to the bath. \subsection{Two-level system coupled to thermal phase-noise bath} \label{Subsec-phase-bloch} Next, we consider the PN regime of a two-level system coupled to thermal bath, where we will use near-resonant fields with time-varying amplitude as our control. 
The Hamiltonians (Eq.~\eqref{general}) then assume the following forms: \begin{eqnarray} \label{sigma-z} &H_S(t)=\omega_a\ket{e}\bra{e}+V(t)\sigma_x\\ &S(t)=\tilde\epsilon(t)\sigma_z \end{eqnarray} where $V(t)=V_0(t)e^{-i\omega_at}+c.c$ is the time-dependent resonant field, with real envelope $V_0(t)$, $\tilde{\epsilon}(t)$ is the time-dependent modulation of the interaction strength, $\sigma_z=\ket{e}\bra{e}-\ket{g}\bra{g}$. Since we are interested in dephasing, phases due to the (unperturbed) energy difference between the levels are immaterial. We eliminate this dependence by moving to the rotating frame. To avoid the need to time-order the propagator of the system Hamiltonian we tilt the rotating frame to the time-dependent basis: \begin{equation} \ket{\uparrow}=\frac{1}{\sqrt{2}}\left( e^{-i\omega_at}\ket{e}+\ket{g}\right)\quad\ket{\downarrow}=\frac{1}{\sqrt{2}}\left( e^{-i\omega_at}\ket{e}-\ket{g}\right) \end{equation} In this frame, the system and bath Hamiltonians become: \begin{eqnarray} \label{sigma-hat-z-1} &\hat{H}_S(t)=\frac{V_0(t)}{2}\hat\sigma_z\\ \label{sigma-hat-z-2} &\hat{S}(t)=\tilde\epsilon(t)\hat\sigma_x \end{eqnarray} where $\hat{}$ denotes the rotated and tilted frame, $\hat\sigma_z=\ket{\uparrow}\bra{\uparrow}-\ket{\downarrow}\bra{\downarrow}$ and $\hat\sigma_x=\ket{\uparrow}\bra{\downarrow}+\ket{\downarrow}\bra{\uparrow}$. We can now derive the Bloch equations for the PN regime discussed above, and demonstrate their analogy to their AN counterparts \eqref{AN-Bloch-1},\eqref{AN-Bloch-2}. To this end we insert Eqs.~\eqref{sigma-hat-z-1}-\eqref{sigma-hat-z-2} into Eq.~\eqref{U-s-def} and Eq.~\eqref{S-tilde-def}, to get: \begin{eqnarray} U_S(t,\tau)&=&e^{-i\int_\tau^t dt_1 V_0(t_1)/2}\ket{\uparrow}\bra{\uparrow}+e^{i\int_\tau^t dt_1 V_0(t_1)/2}\ket{\downarrow}\bra{\downarrow}\\ \tilde{S}(t,\tau)&=&e^{-i\int_\tau^t dt_1 V_0(t_1)}\tilde{\epsilon}(t)\ket{\uparrow}\bra{\downarrow}+H.c. \end{eqnarray} Plugging this into the ME \eqref{gen-ME}, we arrive at the following modified Bloch equations: \begin{eqnarray} \dot\rho_{\uparrow\uparrow}&=&-\dot\rho_{\downarrow\downarrow}=-R_\uparrow(t)\rho_{\uparrow\uparrow}+R_\downarrow(t)\rho_{\downarrow\downarrow} \\ \dot\rho_{\uparrow\downarrow}&=&\dot\rho_{\downarrow\uparrow}^*=-\left\{(R(t)+i\Delta_a(t))+iV_0(t)/2\right\}\rho_{\uparrow\downarrow} +\left\{R(t)-i\Delta_a(t)\right\}\rho_{\downarrow\uparrow}, \end{eqnarray} where \begin{eqnarray} &&R(t)=[R_{\uparrow}(t)+R_{\downarrow}(t)]/2\\ &&\Delta_a(t)=\Delta_{\uparrow}(t)-\Delta_{\downarrow}(t)\\ \label{R-arrows} &&R_{\uparrow(\downarrow)}(t)/2+i\Delta_{\uparrow(\downarrow)}(t)=\int_0^tdt'\Phi_T(t-t')K_{\uparrow(\downarrow)}(t,t')\\ &&K_\uparrow(t,t')=K_\downarrow^*(t,t')=\epsilon(t)\epsilon^*(t')\\ \label{PN-epsilon} &&\epsilon(t)=\tilde{\epsilon}(t)e^{i\int_0^tdt_1V_0(t_1)}. \end{eqnarray} As can be clearly seen, these modified Bloch equations are completely analogous to their AN counterparts, Eqs.~\eqref{AN-Bloch-1},\eqref{AN-Bloch-2}, provided we change the basis as follows: \begin{equation} e\leftrightarrow\uparrow\quad g\leftrightarrow\downarrow. \end{equation} Despite their analogy, Eqs.~\eqref{R-eg} and \eqref{R-arrows} are \emph{not identical}, due to the use of the rotating frame in the PN case. Nevertheless, this analogy underscores the universality of our approach. \subsection{Spectral domain representation} For both AN and PN regime, one can have a more insightful representation of the modified rates by transforming them to the frequency domain. 
In the long-time limit (see Eq.~\eqref{long-times} below), one arrives at the form
\begin{eqnarray} \label{53} &&R_{e(g)}(t)=2\pi\int_{-\infty}^\infty d\omega G_T(\pm(\omega_a+\omega))F_t(\omega)\\ &&R_{\uparrow(\downarrow)}(t)=2\pi\int_{-\infty}^\infty d\omega G_T(\pm\omega)F_t(\omega) \end{eqnarray}
where the difference is due to the fact that we used the rotating and tilted frame in the PN regime. Here $G_T(\omega)$ is the temperature-dependent bath coupling spectrum given by
\begin{equation} G_T(\omega)=(2\pi)^{-1}\int_{-\infty}^\infty dt\Phi_T(t)e^{i\omega t}. \end{equation}
Introducing the control-field fluence $Q(t)$, the spectral modulation $F_t(\omega)$ can be normalized to unity:
\begin{eqnarray} \label{TLS-Fluence} &Q(t)=\int_0^t d\tau |\epsilon(\tau)|^2,\\ &F_t(\omega)=\frac{|\epsilon_t(\omega)|^2}{Q(t)}, \ea{TLS-F}
where
\begin{equation} \epsilon_t(\omega)=\frac{1}{\sqrt{2\pi}}\int_{0}^td\tau \epsilon(\tau) e^{i\omega \tau} \end{equation}
is the finite-time Fourier transform of $\epsilon(t)$. One can consider a more specific scenario, namely, coupling to a zero-temperature ($T=0$) AN bath. The effects of the bath then amount to the decay of the excited state's population, which can be written as:
\begin{equation} P_e(t)=\exp[-R_e(t)Q(t)]. \e{55}
\section{Modulation arsenal for AN and PN} \label{Sec-modulation} Any modulation with a quasi-discrete, finite spectrum is deemed quasiperiodic, implying that it can be expanded as
\begin{equation} \label{quasi-def} \epsilon(t)=\sum_k\epsilon_{k}e^{-i\nu_{k}t} \end{equation}
where $\nu_{k}\,(k=0,\pm1,...)$ are arbitrary discrete frequencies such that
\begin{equation} \label{quasi-cond} |\nu_{k}-\nu_{k'}|\geq \Omega \quad \forall k\neq k', \end{equation}
where $\Omega$ is the minimal spectral interval. One can define the long-time limit of the quasi-periodic modulation, when
\begin{equation} \label{long-times} \Omega t\gg 1 \quad{\rm and}\quad t\gg t_c, \end{equation}
where $t_c$ is the bath-memory (correlation) time, defined as the inverse of the largest spectral interval over which $G_T(\omega)$ and $G_T(-\omega)$ change appreciably near the relevant frequencies $\omega_a+\nu_k$. In this limit, the fluence is given by
\begin{equation} \label{eppsilon-c} Q(t)\approx\epsilon_ct\quad\epsilon_c=\sum_k|\epsilon_k|^2, \end{equation}
resulting in the average decay rate:
\begin{eqnarray} \label{75} &&R_e=2\pi\sum_k|\lambda_k|^2G(\omega_a+\nu_k), \\ &&\lambda_k=\epsilon_k/\epsilon_c. \end{eqnarray}
\subsection{Phase modulation (PM) of the coupling} \label{Sec-Qubit-PM} \subsubsection{Monochromatic perturbation} \label{Sec-Mono} Let
\begin{equation} \epsilon(t)=\epsilon_0e^{-i\Delta t}. \e{3.1}
Then
\begin{equation} R_e=2\pi G_T(\omega_a+\Delta), \e{3.2}
where $\Delta=\mbox{const.}$ is a frequency shift, induced by the AC Stark effect (in the case of atoms) or by the Zeeman effect (in the case of spins). In principle, such a shift may drastically enhance or suppress $R$ relative to the Golden-Rule decay rate, i.e. the decay rate without any perturbation
\begin{equation} R_{\rm GR} = 2\pi G_T(\omega_a). \e{3.2.1}
Equation \r{3.2} provides the {\em maximal change} of $R$ achievable by an external perturbation, since it does not involve any averaging (smoothing) of $G(\omega)$ incurred by the width of $F_t(\omega)$: the modified $R$ can even {\em vanish}, if the shifted frequency $\omega_a+\Delta$ is beyond the cutoff frequency of the coupling, where $G(\omega)=0$ (Figure \ref{Fig-1}a).
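The magnitude of this effect is easily gauged numerically. The following minimal sketch (included only as an illustration; it assumes an ohmic-like coupling spectrum $G_T(\omega)\propto\omega$ with a hard upper cutoff and arbitrarily chosen parameter values) evaluates the ratio $R_e/R_{\rm GR}=G_T(\omega_a+\Delta)/G_T(\omega_a)$ of Eqs.~\r{3.2} and \r{3.2.1} for a few shifts $\Delta$:
\begin{verbatim}
import math

def G(w, w_cut=10.0):
    # Assumed coupling spectrum (illustration only): ohmic, G_T(w) = w,
    # with a hard upper cutoff at w_cut, beyond which G_T vanishes.
    return w if 0.0 < w < w_cut else 0.0

w_a = 8.0                           # bare transition frequency (arbitrary units)
R_GR = 2.0 * math.pi * G(w_a)       # Golden-Rule rate, R_GR = 2*pi*G_T(w_a)

for delta in (0.0, 1.0, 3.0):       # monochromatic spectral shifts Delta
    R_e = 2.0 * math.pi * G(w_a + delta)   # R_e = 2*pi*G_T(w_a + Delta)
    print(f"Delta = {delta:3.1f} :  R_e/R_GR = {R_e / R_GR:.3f}")
# prints 1.000, then 1.125 (enhancement), then 0.000 (complete suppression)
\end{verbatim}
A shift toward the peak of the assumed spectrum enhances the decay, whereas a shift beyond the cutoff suppresses it completely.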
This would accomplish the goal of dynamical decoupling \cite{aga99,aga01a,alicki2004oss,vio98,shi04,vit01,fac01,fac04,zan03}. Conversely, the increase of $R$ due to a shift can be much greater than that achievable by repeated measurements, i.e. the anti-Zeno effect \cite{kof00,kof01b,kof01a,kof96}. In practice, however, AC Stark shifts are usually small for (cw) monochromatic perturbations, whence pulsed perturbations should often be used, resulting in multiple $\nu_k$ shifts, as per Eq.~\eqref{75}. \subsubsection{Impulsive phase modulation} \label{Qubit-Impulsive} Let the phase of the modulation function periodically jump by an amount $\phi$ at times $\tau,2\tau,\dots$. Such modulation can be achieved by a train of identical, equidistant, narrow pulses of nonresonant radiation, which produce pulsed AC Stark shifts of $\omega_a$. Now \begin{equation} \epsilon(t)=e^{i[t/\tau]\phi}, \e{3.3} where $[\dots]$ is the integer part. One then obtains that \begin{equation} Q(t)=t,\ \ \epsilon_c=1, \e{3.4} \begin{equation} F_{n\tau}(\omega)=\frac{2\sin^2(\omega\tau/2) \sin^2[n(\phi+\omega\tau)/2]} {\pi n\tau\omega^2\sin^2[(\phi+\omega\tau)/2]}. \e{87} The excited-state decay, according to equation \r{55}, has then the form (at $t=n\tau$) \begin{equation} P_e(n\tau)=\exp[-R_e(n\tau)n\tau], \e{63} where $R_e(n\tau)$ is defined by Eqs. \r{53} and \r{87}. For sufficiently long times (Eq.~\eqref{long-times}) one can use Eq.~\eqref{75}, with \begin{equation} \nu_k=\frac{2k\pi}{\tau}-\frac{\phi}{\tau},\ \ |\lambda_k|^2=\frac{4\sin^2(\phi/2)}{(2k\pi-\phi)^2} \e{89} For {\em small phase shifts}, $\phi\ll 1$, the $k=0$ peak dominates, \begin{equation} |\lambda_0|^2\approx 1-\frac{\phi^2}{12}, \e{3.8} whereas \begin{equation} |\lambda_k|^2\approx\frac{\phi^2}{4\pi^2k^2}\ \ (k\ne 0). \e{3.9} In this case one can retain only the $k=0$ term in Eq.~\eqref{75}, unless $G(\omega)$ is changing very fast with frequency. Then the modulation acts as a constant shift, (Fig.~\ref{Fig-1}a) \begin{equation} \Delta=-\phi/\tau. \e{3.10} As $|\phi|$ increases, the difference between the $k=0$ and $k=1$ peak heights diminishes, {\em vanishing} for $\phi=\pm\pi$. Then \begin{equation} |\lambda_0|^2=|\lambda_1|^2=4/\pi^2, \e{3.11} i.e., $F_t(\omega)$ for $\phi=\pm\pi$ contains {\em two identical peaks symmetrically shifted in opposite directions} (Figure \ref{Fig-1}b) [the other peaks $|\lambda_k|^2$ decrease with $k$ as $(2k-1)^{-2}$, totaling 0.19]. The foregoing features allow one to adjust the modulation parameters for a given scenario to obtain an {\em optimal} decrease or increase of $R$. Thus, the phase-modulation (PM) scheme with a small $\phi$ is preferable near a continuum edge (Figure \ref{Fig-1}a,b), since it yields a spectral shift in the required direction (positive or negative). The adverse effect of $k\ne 0$ peaks in $F_t(\omega)$ then scales as $\phi^2$ and hence can be significantly reduced by decreasing $|\phi|$. On the other hand, if $\omega_a$ is near a {\em symmetric} peak of $G(\omega)$, $R$ is reduced more effectively for $\phi\simeq\pi$, as in Refs. \cite{aga01a,aga01}, since the main peaks of $F_t(\omega)$ at $\omega_0$ and $\omega_1$ then shift stronger with $\tau^{-1}$ than the peak at $\omega_0=-\phi/\tau$ for $\phi\ll 1$. \begin{figure} \caption{Spectral representation of the bath coupling, $G(\omega)$, and the modulation, $F_t(\omega)$. (a) Monochromatic modulation, or impulsive phase modulation, with small phase shifts, $\phi\ll1$, and $1/\tau$ repetition rate. 
(b) Impulsive phase modulation, ($\pi$-pulses), $\phi=\pi$. (c) On-off modulation, with $1/\tau_1$ repetition rate for $\tau_1\ll\tau_0$. } \label{Fig-1} \end{figure} \subsection{Amplitude modulation (AM) of the coupling} \label{Sec-Qubit-AM} Amplitude modulation (AM) of the coupling may be applicable to certain AN or PN scenarios. It arises, e.g., for radiative-decay modulation due to atomic motion through a high-$Q$ cavity or a photonic crystal \cite{she92,jap96} or for atomic tunneling in optical lattices with time-varying lattice acceleration \cite{fis01,niu98}. \subsubsection{On-off modulation} \label{Sec-On-Off} The simplest form of AM is to let the coupling be turned on and off periodically, for the time $\tau_1$ and $\tau_0-\tau_1$, respectively, i.e., \begin{equation} \epsilon(t)=\left\{\begin{array}{ll} 1&\mbox{for}\ n\tau_0<t<n\tau_0+\tau_1,\\ 0&\mbox{for}\ n\tau_0+\tau_1<t<(n+1)\tau_0 \end{array}\right. \e{3.12} ($n=0,1,\dots$). Now $Q(t)$ in \r{55} is the total time during which the coupling is switched on, whereas \begin{equation} F_{n\tau_0}(\omega)=\frac{2\sin^2(\omega\tau_1/2) \sin^2(n\omega\tau_0/2)}{\pi n\tau_1\omega^2\sin^2(\omega\tau_0/2)}, \e{91} so that \begin{equation} P(n\tau_0)=\exp[-R(n\tau_0)n\tau_1], \e{3.21} where $R(n\tau_0)$ is given by Eqs. \r{53} and \r{91}. This case is also covered by \r{75}, where the parameters are now found to be \begin{equation} \epsilon_c^2=\frac{\tau_1}{\tau_0},\ \ \nu_k=\frac{2k\pi}{\tau_0},\ \ |\lambda_k|^2=\frac{\tau_1}{\tau_0}\mbox{sinc}^2\left( \frac{k\pi\tau_1}{\tau_0}\right). \e{3.13} It is instructive to consider the limit wherein $\tau_1\ll\tau_0$ and $\tau_0$ is much greater than the correlation time of the continuum, i.e., $G(\omega)$ does not change significantly over the spectral intervals $(2\pi k/\tau_0,2\pi(k+1)/\tau_0)$. In this case one can approximate the sum \r{75} by the integral \r{53} with \begin{equation} F_t(\omega)\approx(\tau_1/2\pi)\mbox{sinc}^2(\omega\tau_1/2), \e{3.14} characterized by the spectral broadening $\sim 1/\tau_1$ (figure \ref{Fig-1}c). Then equation \r{53} for $R$ reduces to that obtained when ideal projective measurements are performed at intervals $\tau_1$ \cite{kof00}. Thus the AM on-off coupling scheme {\em can imitate measurement-induced (dephasing) effects} on quantum dynamics, if the interruption intervals $\tau_0$ {\em exceed the correlation time of the continuum}. \section{Multipartite decoherence control} \label{Sec-multi} Multipartite decoherence control, for many qubits coupled to thermal baths, is a much more challenging task than single-qubit control since: (i) entanglement between the qubits is typically more vulnerable and more rapidly destroyed by the environment than single qubit coherence \cite{ban04a,yu04,yu06}; (ii) the possibility of cross-decoherence, whereby qubits are coupled to each other through the baths, considerably complicates the control. We have recently analyzed this situation and extended \cite{gor06a,gor06b} the decoherence control approach of Sec.~\ref{Sec-ME}-\ref{Sec-modulation} to multipartite scenarios, where the qubits are either coupled to zero-temperature baths or undergoing proper dephasing. 
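Before turning to these multipartite scenarios, we note that the spectral weights $|\lambda_k|^2$ governing the single-qubit schemes of Sec.~\ref{Sec-modulation} are easily checked by elementary numerics. The following minimal sketch (an illustration only, with arbitrarily chosen parameters, and not part of the control protocols themselves) sums the weights of Eq.~\r{89} for impulsive phase modulation and of Eq.~\r{3.13} for on-off modulation, confirming that both sets are normalized to unity and reproducing the value $|\lambda_0|^2=|\lambda_1|^2=4/\pi^2\approx 0.405$ quoted above for $\phi=\pi$:
\begin{verbatim}
import math

def pm_weight(k, phi):
    # Impulsive phase modulation: |lambda_k|^2 = 4 sin^2(phi/2)/(2 pi k - phi)^2
    return 4.0 * math.sin(phi / 2.0) ** 2 / (2.0 * math.pi * k - phi) ** 2

def onoff_weight(k, tau1, tau0):
    # On-off modulation: |lambda_k|^2 = (tau1/tau0) * sinc^2(k pi tau1/tau0)
    x = k * math.pi * tau1 / tau0
    sinc = 1.0 if x == 0.0 else math.sin(x) / x
    return (tau1 / tau0) * sinc ** 2

ks = range(-2000, 2001)     # truncation of the infinite sums (illustration only)
phi = math.pi               # pi phase flips

print(round(pm_weight(0, phi), 3), round(pm_weight(1, phi), 3))      # 0.405 0.405
print(round(sum(pm_weight(k, phi) for k in ks), 3))                  # 1.0
print(round(sum(onoff_weight(k, 0.2, 1.0) for k in ks), 3))          # 1.0
\end{verbatim}
The residual weight $1-8/\pi^2\approx0.19$ carried by the higher-order peaks agrees with the value quoted in Sec.~\ref{Sec-Qubit-PM}.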
\subsection{Multipartite AN control by off-resonant modulation: singly excited systems coupled to $T=0$ baths} \label{Subsec-multi-amplitude} The decay of a singly excited multi-qubit system (under amplitude noise) to the ground state, in the presence of off-resonant modulating fields is described by the following relaxation matrix \cite{gor06a,gor06b}: \begin{eqnarray} \label{zero-gen-J-def} &&J_{jj'}(t) = 2\pi \int_{-\infty}^\infty d\omega G_{jj'}(\omega)F_{t,jj'}(\omega)\\ &&G_{jj'}(\omega)=\hbar^{-2}\sum_k\mu_{k,j}\mu^*_{k,j'} \delta(\omega-\omega_k)\\ \label{K-def} &&F_{t,jj'}(\omega)= \epsilon^*_{t,j}(\omega-\omega_{j}) \epsilon_{t,j'}(\omega-\omega_{j'}) \end{eqnarray} Here $G_{jj'}(\omega)$ is the coupling spectrum matrix given by nature and $F_{t,jj'}(\omega)$ is the dynamical modulation matrix, which we design at will to suppress the decoherence. The diagonal elements of the decoherence matrix are the time-integrated individual qubits' decay rates, while the off-diagonal elements are the cross-relaxation rates, pertaining to the coupling of the different qubits through the bath: virtual emission into the bath by qubit $j$ and its virtual reabsorption by qubit $j'$. As an example, we may control the relaxation matrix elements by local (qubit-addressing) impulsive phase modulation, (see Sec.~\ref{Qubit-Impulsive}), described by \begin{equation} \label{epsilon-omega} \epsilon_{t,j}(\omega)=\frac{\left(e^{i\omega\tau_{j}}-1\right) \left(e^{i(\phi_{j}+\omega\tau_{j})[t/\tau_{j}]}-1\right)} {i\omega\left(e^{i(\phi_{j}+\omega\tau_{j})}-1\right)}. \end{equation} Here $[...]$ denote the integer part, $\tau_{j}$ and $\phi_{j}$ are the pulse duration and the phase change for particle $j$, respectively. In the limit of weak pulses, of area $|\phi_{j}|\ll\pi$, Eq.~\eqref{epsilon-omega} yields $\epsilon_{t,j}(\omega)\cong\epsilon_{t,j}\delta(\omega-\Delta_{j})$, where $\Delta_{j}=\phi_{j}/\tau_{j}$ is the effective spectral shift caused by the pulses. One can define the fidelity, $F(t)$, total excitation probability, $F_p(t)$, and the autocorrelation function, $F_c(t)$ as follows: \begin{eqnarray} &&F(t)=Tr_{\{j\}}\left(\rho(0)\rho(t)\right)\\ &&F_p(t)=Tr_{\{j'\neq j\}}\left({}_j\bra{e}\rho(t)\ket{e}_j\right)\\ &&F_c(t)=F(t)/F_p(t) \end{eqnarray} where $Tr_{\{j\}}$ denotes tracing over all qubits. In the absence of dynamical control, the autocorrelation decays much faster than the total excitation probability, and is much more sensitive to the asymmetry between local particle-bath couplings. Thus, for initial Bell singlet and triplet states, which do not experience cross-decoherence but only different local decoherence rates, we find: \begin{eqnarray} &\ket{\Psi(0)}=1/\sqrt{2}(\ket{g}_A\ket{e}_B\pm\ket{e}_A\ket{g}_B),\\ &F_p(t)=(e^{-2J_A(t)}+e^{-2J_B(t)})/2;\\ &F_c(t)=(1+C(t))/2=1/2+e^{-\Delta J(t)}/(1+e^{-2\Delta J(t)}),\\ &\Delta J(t) = J_A(t)-J_B(t) \end{eqnarray} where $C(t)$ is the concurrence \cite{woo98}. Without any modulations, decoherence in this scenario has no inherent symmetry. Our point is that one can symmetrize the decoherence by appropriate modulations. The key is that different, ``local'', phase-locked modulations applied to the individual particles, according to Eq.~\eqref{K-def}, can be chosen to cause {\em controlled interference} and/or spectral shifts between the particles' couplings to the bath. The $F_{t,jj'}(\omega)$ matrices (cf.\eqref{K-def}) can then satisfy $2N$ requirements at all times and be tailored to impose the advantageous symmetries described below. 
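To appreciate how much is at stake, it is instructive to evaluate the expressions for $F_p(t)$ and $F_c(t)$ above for a few arbitrarily chosen, real decoherence exponents (a purely numerical illustration). Noting that $e^{-\Delta J}/(1+e^{-2\Delta J})=\tfrac{1}{2}\,{\rm sech}\,\Delta J$, the autocorrelation can be written as $F_c(t)=\tfrac{1}{2}\left[1+{\rm sech}\,\Delta J(t)\right]$, so that, for instance,
\begin{equation*}
J_A=0.1,\ J_B=0.5:\quad F_p\approx0.59,\ \ F_c\approx0.96;
\qquad\qquad
J_A=0.1,\ J_B=2.1:\quad F_p\approx0.42,\ \ F_c\approx0.63 .
\end{equation*}
A growing mismatch between the local decoherence exponents thus rapidly destroys the two-qubit correlations even when the total excitation probability has decayed only moderately; this is precisely what suitably tailored local modulations are designed to prevent.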
By contrast, a ``global'' (identical) modulation, characterized by $F_{t,jj'}(\omega)=|\epsilon_t(\omega)|^2$, is not guaranteed to satisfy $N\gg1$ symmetrizing requirements at all times (Fig.~\ref{Fig-2}a). \begin{figure} \caption{Two two-level particles in a cavity, coupled to the cavity modes (thin lines) and subject to local control fields (thick lines). (a,b) Frequency domain overlap of coupling spectrum (dotted) and modulation matrix elements(solid), resulting in modified decoherence matrix elements (shaded), for: (a) global modulation (ICP symmetry), (b) cross-decoherence elimination (IIP symmetry). (c) General modulation scheme.} \label{Fig-2} \end{figure} The most desirable symmetry is that of {\em identically coupled particles} (ICP), which would emerge if all the modulated particles could acquire the {\em same} dynamically modified decoherence and cross-decoherence yielding the following $N\times N$ fully symmetrized decoherence matrix \begin{equation} \label{J-ICP} J_{jj'}^{\rm ICP}(t)=r(t)\quad\forall j,j'. \end{equation} ICP would then give rise to a $(N-1)$-dimensional decoherence-free subspace: the entire single-excitation sector less the totally symmetric entangled state. An initial state in this DFS \cite{zan97} would neither lose its population nor its initial correlations (or entanglement). Unfortunately, it is generally impossible to ensure this symmetry, since it amounts to satisfying $N(N-1)/2$ conditions using $N$ modulating fields. Even if we accidentally succeed with $N$ particles, the success is not scalable to $N+1$ or more particles. Moreover, the ability to impose the ICP symmetry by local modulation fails completely if not all particles are coupled to all other particles through the bath, i.e. if some $G_{jj'}(\omega)$ elements vanish. A more limited symmetry that we may {\em ensure} for $N$ qubits is that of {\em independent identical particles} (IIP). This symmetry is formed when spectral shifts and/or interferences imposed by $N$ modulations cause the $N$ different qubits to acquire the {\em same} single-qubit decoherence $r(t)$ and experience no cross-decoherence. To this end, we may choose $\epsilon_{t,j}(\omega)\simeq\epsilon_{t,j}\delta(\omega-\Delta_j)$. We shall deal with $N$ identical qubits, and set $\omega_j\equiv\omega_0$. We also require that at any chosen time $t=T$, the AC Stark shifts satisfy $\int_0^Td\tau\delta_j(\tau)=2\pi m$, where $m=0,\pm1,...$. This requirement ensures that modulations only affect the decoherence matrix \eqref{zero-gen-J-def}, but do not change the relative phases of the entangled qubits when their MES is probed or manipulated by logic operations at $t=T$. The spectral shifts $\Delta_j$ can be different enough to couple each particle to a different spectral range of bath modes so that their cross-coupling vanishes: \begin{equation} \label{no-cross} J_{jj'}(t)=2\pi\epsilon^*_{t,j}\epsilon_{t,j'}\int d\omega G_{jj'}(\omega_0+\omega)\delta(\omega-\Delta_j)\delta(\omega-\Delta_{j'})\rightarrow0. \end{equation} Here, the vanishing of $G_{jj'}(\omega)$ for some $j,j'$ is not a limitation. The $N$ single-particle decoherence rates can be equated by an appropriate choice of $N$ parameters $\{\Delta_j\}$: \begin{equation} J_{jj'}^{\rm IIP}(t)=2\pi|\epsilon_{t,j}|^2G_{jj}(\omega_0+\Delta_j)=\delta_{jj'}r(t), \end{equation} where $\delta_{jj'}$ is Kronecker's delta (Fig.~\ref{Fig-1}b). The IIP symmetry results in complete correlation preservation, i.e. 
$F_c(t)=1$, but still permits excited-state population loss, $F_p(t)=e^{-2{\rm Re} \{r(t)\}}$ (Fig.~\ref{Fig-fidelity}). If the single-particle $r(t)$ can be dynamically suppressed, i.e. if the spectrally shifted bath response $G_{jj}(\omega_j+\Delta_j)$ is small enough, this $F_p(t)$ will be kept close to $1$. \begin{figure} \caption{Fidelity of the IIP symmetry for two TLS coupled to zero-temperature baths. The initial state is entangled, $\ket{\psi(0)}=1/\sqrt{2}\left(\ket{g}_A\ket{e}_B+\ket{e}_A\ket{g}_B\right)$.} \label{Fig-fidelity} \end{figure} \subsection{Multipartite PN control by resonant modulation} \label{Subsec-multi-phase} One can describe phase-noise, or proper dephasing, by a stochastic fluctuation of the excited-state energy, $\omega_a\rightarrow\omega_a+\delta_r(t)$, where $\delta_r(t)$ is a stochastic variable with zero mean, $\mean{\delta_r(t)}=0$, and $\mean{\delta_r(t)\delta_r(t')}=\Phi^P(t-t')$ is the second moment. For multipartite systems, where each qubit can undergo different proper dephasing, $\delta_j(t)$, one has an additional second moment for the cross-dephasing, $\mean{\delta_j(t)\delta_{j'}(t')}=\Phi^P_{jj'}(t-t')$. A general treatment of multipartite systems undergoing this type of proper dephasing is given in Ref.~\cite{gor06a}. Here we give the main results for the case of two qubits. Let us take two TLS, or qubits, which are initially prepared in a Bell state. We wish to obtain the conditions that will preserve it. In order to do that, we change to the Bell basis, which is given by \begin{eqnarray} \label{dec-two-TLS-Bell-basis} &&\ket{B_{1,2}}=1/\sqrt{2}e^{i\omega_at}\left(\ket{e}_1\ket{g}_2 \pm \ket{g}_1\ket{e}_2\right)\\ \label{proper-bell-def} &&\ket{B_{3,4}}=1/\sqrt{2}\left(e^{i2\omega_at}\ket{e}_1\ket{e}_2 \pm \ket{g}_1\ket{g}_2\right). \end{eqnarray} For an initial Bell-state $\overline{\4\rho}_l(0)=\ket{B_l}\bra{B_l}$, where $l=1...4$, one can then obtain the fidelity, $F_l(t)=\bra{B_l}\overline{\4\rho}_l(t)\ket{B_l}$, as: \begin{eqnarray} \label{proper-two-TLS-F} &F_{l}(t)=\cos(\phi_\pm(t)){\rm Re}\left[e^{i\phi_\pm(t)} \left(1-\frac{1}{2}\sum_{jj'}J^P_{jj',l}(t)\right)\right], \end{eqnarray} where \begin{eqnarray} &\phi_j(t)=2\int_0^td\tau V_{0,j}(\tau)\\ \label{proper-two-TLS-J-def} &J^P_{jj',l}(t)=2\pi\int_{-\infty}^\infty d\omega G^P_{jj'}(\omega)F_{t,jj',l}(\omega)\\ &G^P_{jj'}(\omega)=\int_{-\infty}^\infty dt \Phi^P_{jj'}(t)e^{i\omega t}\\ \label{proper-two-TLS-Lambda-k-2} &F_{t,jj,l}(\omega)=|\epsilon_{t,j}(\omega)|^2\\ \label{proper-two-TLS-Lambda-k-3} &F_{t,jj',3}(\omega)=-F_{t,jj',1}(\omega)=\epsilon^*_{t,j}(\omega)\epsilon^*_{t,j'}(\omega)\\ \label{proper-two-TLS-Lambda-k-4} &F_{t,jj',4}(\omega)=-F_{t,jj',2}(\omega)=\epsilon_{t,j}(\omega)\epsilon^{*}_{t,j'}(\omega) \end{eqnarray} where $V_{0,j}(t)$ is the amplitude of the resonant field applied to qubit $j$, $\phi_\pm(t)=(\phi_1(t)\pm\phi_2(t))/2$, and $\phi_+$ corresponds to $l=1,3$ while $\phi_-$ corresponds to $l=2,4$. Expressions \eqref{proper-two-TLS-F}-\eqref{proper-two-TLS-Lambda-k-4} provide our recipe for minimizing the Bell-state fidelity losses. They hold for {\em any} dephasing time-correlations and {\em arbitrary} modulation. One can choose between two modulation schemes, depending on one's goals. When one wishes to preserve an initial quantum state, one can equate the modified dephasing and cross-dephasing rates of all qubits, $J^P_{jj',l}(t)=J(t)$. This results in complete preservation of the singlet only, i.e. $F_2(t)=1$, for all $t$, but reduces the fidelity of the triplet state.
On the other hand, if one wishes to equate the fidelity for all initial states, one can eliminate the cross-dephasing terms, by applying different modulations to each qubit (Fig.~\ref{Fig-cross}), causing $F_{t,jj',l}(\omega)=0$ $\forall j\neq j'$. This requirement can be important for quantum communication schemes. \begin{figure} \caption{Cross-decoherence as a function of local modulation. Here two qubits are modulated by continuous resonant fields, with amplitudes $\Omega_{1,2}$. The cross-decoherence decays as the two qubits' modulations become increasingly different. The bath parameters are $\Phi_T(t)=e^{-t/t_c}$, where $t_c=0.5$ is the correlation time; and $\Omega_1=3$.} \label{Fig-cross} \end{figure} \section{Conclusions} \label{Sec-conc} In this paper we have expounded our universal approach to the dynamical control of qubits subject to AN and PN, by either off- or on-resonant modulating fields, respectively. It is based on a general non-Markovian master equation valid for weak system-bath coupling and arbitrary modulations, since it does not invoke the rotating wave approximation. The resulting universal convolution formulae provide intuitive clues as to the optimal tailoring of modulation and noise spectra. Our analysis of multiple, field-driven, qubits which are coupled to partly correlated or independent baths or undergo locally varying random dephasing has resulted in the universal formula \eqref{zero-gen-J-def} for coupling to zero-temperature bath, and \eqref{proper-two-TLS-F} for Bell-state preservation under local proper dephasing. Our general analysis allows one to come up with an optimal choice between global and local control, based on the observation that the maximal suppression of decoherence is not necessarily the best one. Instead, we demand an optimal {\em phase-relation} between different, but {\em synchronous} local modulations of each particle. The merits of local vs. global modulations have been shown to be essentially twofold: \begin{itemize} \item Local modulation can effectively {\em decorrelate} the different proper dephasings of the multiple TLS, resulting in equal dephasing rates for all states. For two TLS, we have shown that the singlet and triplet Bell-states acquire the same dynamically-modified dephasing rate. This should be beneficial compared to the standard global ``Bang-Bang'' ($\pi$-phase flips) if both states are used (intermittently) for information transmission or storage. \item For different couplings to a zero-temperature bath, one can better preserve any initial state by using local modulation which can reduce the decay as well as the mixing with other states, than by using global modulation. It was shown that local modulation which eliminates the cross-decoherence terms, increases the fidelity more than the global modulation alternative. For two TLS, it was shown that local modulation better preserves an initial Bell-state, whether a singlet or a triplet, compared to global $\pi$-phase ``parity kicks''. \end{itemize} {\bf Acknowledgement.} We acknowledge the support of the EC (SCALA NoE). \end{document}
\begin{document} \author{Nicki Holighaus\texorpdfstring{$^\dag$}{}} \address{$^\dag$ Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12--14, A-1040 Vienna, Austria} \thanks{This work was supported by the Austrian Science Fund (FWF): I\,3067--N30. NH is grateful for the hospitality and support of the Katholische Universität Eichstätt-Ingolstadt during his visit. FV would like to thank the Acoustics Research Institute for the hospitality during several visits, which were partially supported by the Austrian Science Fund (FWF): 31225--N32.} \email{[email protected]} \author{Felix Voigtlaender\texorpdfstring{$^{*\S}$ $^\ddag$}{}} \address{$^*$ Lehrstuhl Wissenschaftliches Rechnen, Katholische Universität Eichstätt-Ingolstadt, Ostenstraße 26, 85072 Eichstätt, Germany} \address{$^{\S}$ Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1 A-1090 Vienna, Austria} \email{[email protected]} \title[Schur-type Banach modules acting on mixed-norm Lebesgue spaces] {\texorpdfstring{Schur-type Banach modules of integral kernels\\{} acting on mixed-norm Lebesgue spaces} {Schur-type Banach modules of integral kernels acting on mixed-norm Lebesgue spaces}} \subjclass[2010]{47G10, 47L80, 46E30, 47L10} \keywords{integral operators, mixed-norm Lebesgue spaces, Schur's test, algebras of integral operators, coorbit spaces, spaces of operators} \begin{abstract} Schur's test for integral operators states that if a kernel $K : X \times Y \to \mathbb{C}$ satisfies $\int_Y |K(x,y)| \, d \nu(y) \leq C$ and $\int_X |K(x,y)| \, d \mu(x) \leq C$, then the associated integral operator is bounded from $\bd{L}^p (\nu)$ into $\bd{L}^p(\mu)$, simultaneously for all $p \in [1,\infty]$. We derive a variant of this result which ensures that the integral operator acts boundedly on the (weighted) \emph{mixed-norm} Lebesgue spaces $\bd{L}_w^{p,q}$, simultaneously for all $p,q \in [1,\infty]$. For \emph{non-negative} integral kernels our criterion is sharp; that is, the integral operator satisfies our criterion \emph{if and only if} it acts boundedly on all of the mixed-norm Lebesgue spaces. Motivated by this new form of Schur's test, we introduce solid Banach modules $\mathcal{B}_m(X,Y)$ of integral kernels with the property that all kernels in $\mathcal{B}_m(X,Y)$ map the mixed-norm Lebesgue spaces $\bd{L}_w^{p,q}(\nu)$ boundedly into $\bd{L}_v^{p,q}(\mu)$, for arbitrary $p,q \in [1,\infty]$, provided that the weights $v,w$ are $m$-moderate. Conversely, we show that if $\mathbf A$ and $\mathbf B$ are non-trivial solid Banach spaces for which all kernels $K \in \mathcal{B}_m(X,Y)$ define bounded maps from $\mathbf A$ into $\mathbf B$, then $\mathbf A$ and $\mathbf B$ are related to mixed-norm Lebesgue-spaces, in the sense that \( \left(\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}\right)_v \hookrightarrow \mathbf B \) and \( \mathbf A \hookrightarrow \left( \bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1} \right)_{1/w} \) for certain weights $v,w$ depending on the weight $m$ used in the definition of $\mathcal B_m$. The kernel algebra $\mathcal{B}_m(X,X)$ is particularly suited for applications in (generalized) coorbit theory. Usually, a host of technical conditions need to be verified to guarantee that the coorbit space $\operatorname{Co}_\Psi (\mathbf A)$ associated to a continuous frame $\Psi$ and a solid Banach space $\mathbf A$ are well-defined and that the discretization machinery of coorbit theory is applicable. 
As a simplification, we show that it is enough to check that certain integral kernels associated to the frame $\Psi$ belong to $\mathcal{B}_m(X,X)$; this ensures that the spaces $\operatorname{Co}_\Psi (\bd{L}_\kappa^{p,q})$ are well-defined for all $p,q \in [1,\infty]$ and all weights $\kappa$ compatible with $m$. Further, if some of these integral kernels have sufficiently small norm, then the discretization theory is also applicable. \end{abstract} \maketitle \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \footnotetext[3]{Corresponding author} \renewcommand*{\fnsymbol{footnote}}{\arabic{footnote}} \markright{} \section{Introduction} \label{sec:Intro} For integral kernels that do not exhibit cancellations---in particular for non-negative kernels--- Schur's test is one of the most important criteria to verify that the associated integral operator acts boundedly on the Lebesgue space $\bd{L}^p$. More precisely, given $\sigma$-finite measure spaces $(X,\mathcal{F},\mu)$ and $(Y,\mathcal{G},\nu)$ and a measurable integral kernel $K : X \times Y \to \mathbb{C}$, the \emph{integral operator} $\Phi_K$ associated to the kernel $K$ is defined by \[ [\Phi_K \, f] (x) := \int_{Y} K(x,y) \, f(y) \, d \nu(y) \text{ for measurable } f : Y \to \mathbb{C} \text{ and } x \in X, \text{ if the integral exists}; \] one then wants to study conditions on $K$ which guarantee that $\Phi_K$ acts boundedly on a given function space. \subsection{The different versions of \texorpdfstring{Schur's}{Schurʼs} test} Schur's test provides such a criterion for operators acting on the Lebesgue spaces $\bd{L}^p$. In fact, there are two somewhat different results in the literature that are commonly referred to as ``Schur's test''. The first variant yields boundedness of $\Phi_K$ on $\bd{L}^p$ for a \emph{specific} choice of $p \in (1,\infty)$: \begin{thm*}[Schur's test, specific version] Under the above assumptions, let $p,q \in (1,\infty)$ be conjugate exponents. Assume that there exist $A,B > 0$ and measurable functions $g : Y \to (0,\infty)$ and $h : X \to (0,\infty)$ satisfying \[ \int_Y |K(x,z)| \cdot [g(z)]^q \, d \nu (z) \leq A \cdot [h(x)]^q \qquad \text{and} \qquad \int_X |K(z,y)| \cdot [h(z)]^p \, d \mu(z) \leq B \cdot [g(y)]^p \] for $\mu$-almost all $x \in X$ and $\nu$-almost all $y \in Y$. Then $\Phi_K : \bd{L}^p(\nu) \to \bd{L}^p(\mu)$ is bounded with $\| \Phi_K \|_{\bd{L}^p \to \bd{L}^p} \leq A^{1/q} \, B^{1/p}$. \end{thm*} The above formulation of Schur's test appeared for the first time in \mbox{\cite[Pages~239--240]{AronszajnSpacesOfPotentialsConnectedWithLP}}; it is a highly generalized version of a result by Schur \cite{Schur1911}, which considered matrix operators acting on $\ell^2$. Gagliardo showed for non-negative kernels $K$ that the sufficient condition given above is ``almost'' necessary; see \cite{GagliardoIntegralTransformstionWithPositiveKernel} for the details. While the above theorem is very flexible and in particular allows to prove boundedness of operators which act boundedly on $\bd{L}^p$ only for \emph{some but not all} exponents $p$, the second version of Schur's test---which yields boundedness on $\bd{L}^p$ \emph{simultaneously} for all $p \in [1,\infty]$---is more frequently used in applications related to time-frequency analysis \cite[Lemma~6.2.1]{gr01} and coorbit theory \cite{FeichtingerCoorbit0,FeichtingerCoorbit1,FeichtingerCoorbit2,GeneralizedCoorbit1,RauhutCoorbitQuasiBanach,kempka2015general}. 
This second version reads as follows: \begin{thm*}[Schur's test, uniform version] Assume that the integral kernel $K : X \times Y \to \mathbb{C}$ is measurable and such that \begin{equation}\label{eq:schurtest0} \int_X |K(z,y)| \, d \mu(z) \leq C \,\,\, \text{and} \,\,\, \int_Y |K(x,z)| \, d \nu(z) \leq C \,\,\, \text{for almost all } x \in X \text{ and } y \in Y. \end{equation} Then $\Phi_K : \bd{L}^p(\nu) \to \bd{L}^p(\mu)$ is bounded for all $p \in [1,\infty]$, with $\| \Phi_K \|_{\bd{L}^p \to \bd{L}^p} \leq C$. \end{thm*} This second version of Schur's test is again a generalization of an estimate in Schur's original work \cite{Schur1911}; it is a folklore result and can be found for instance in \cite[Theorem~6.18]{FollandRA}. It also seems to be folklore that the above form of Schur's test is \emph{sharp}; that is, \eqref{eq:schurtest0} holds \emph{if and only if} $\Phi_K : \bd{L}^p(\nu) \to \bd{L}^p(\mu)$ is bounded for all $p \in [1,\infty]$. Since we could not locate a reference for this fact in the setting of general measure spaces, we provide a proof in Appendix~\ref{sec:SharpnessComplexValued}. {} An important application of this second form of Schur's test occurs in \emph{generalized coorbit theory} \cite{GeneralizedCoorbit1}, where a \emph{Schur-type Banach algebra of integral kernels} is considered. More precisely, in the setting where $(X,\mathcal{F},\mu) = (Y,\mathcal{G},\nu)$, define \[ \mathcal A(X) := \bigl\{ K : X \times X \to \mathbb{C} \quad\colon\quad K \text{ measurable and } \| K \|_{\mathcal A} < \infty \bigr\}, \] where \[ \| K \|_{\mathcal A} := \max \left\{ \mathop{\operatorname{ess~sup}}_{y \in X} \int_X |K(x,y)| \, d \mu(x), \quad \mathop{\operatorname{ess~sup}}_{x \in X} \int_X |K(x,y)| \, d \mu(y) \right\} \in [0,\infty] . \] Clearly, $\mathcal A(X)$ contains all kernels that satisfy Schur's test, so that $\mathcal A(X) \hookrightarrow \mathscr{B}(\bd{L}^p(\mu))$. Moreover, $\mathcal A(X)$ is a Banach algebra with multiplication given by \[ L \odot K(x,z) := \int_Y L(x,y) \, K(y,z) d\mu(y), \text{ for (almost) all } x,z \in X. \] \subsection{Our contribution} \label{sub:IntroContribution} We are concerned with extending the ``uniform version'' of Schur's test for integral operators acting on the Lebesgue spaces $\bd{L}^p$ to a version for operators acting on the \emph{mixed-norm} Lebesgue spaces $\bd{L}^{p,q}$, which were originally introduced in \cite{MixedLpSpaces}. Precisely, given a measure space $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ which is the product of two $\sigma$-finite measure spaces $(X_i,\mathcal{F}_i,\mu_i)$, the mixed Lebesgue-norm with exponents $p,q \in [1,\infty]$ is given by \begin{equation} \|f\|_{\bd{L}^{p,q}(\mu)} := \Big\| x_2 \mapsto \big\| f(\bullet, x_2) \big\|_{\bd{L}^p (\mu_1)} \Big\|_{\bd{L}^q (\mu_2)} \in [0,\infty] \quad \text{for} \quad f : X \to \mathbb{C} \text{ measurable} . \label{eq:MixedLebesgueNormDefinition} \end{equation} As usual, $\bd{L}^{p,q}(\mu)$ is the set of all (equivalence classes of) measurable functions for which this norm is finite. One can show that $\bd{L}^{p,q}(\mu)$ is indeed a Banach space; see \cite{MixedLpSpaces}. {} We make the following contributions regarding Schur's test for mixed-norm Lebesgue spaces: \begin{itemize} \item We derive a variant of the condition \eqref{eq:schurtest0} which guarantees that $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is bounded simultaneously for all $p,q \in [1,\infty]$. 
\item This sufficient condition turns out to be reasonably sharp, meaning that for \emph{non-negative} integral kernels $K$, our sufficient condition is in fact necessary. For complex-valued integral kernels, however, this is no longer true in general. \item In the same way that the Banach algebra $\mathcal A(X)$ relates to the Schur-type condition \eqref{eq:schurtest0}, we introduce a novel family $\mathcal B(X,Y)$ of Banach spaces of integral kernels related to our generalized Schur-type condition and study its properties. \item In particular, we study \emph{necessary} conditions that a function space $\mathbf{A}$ has to satisfy in order for $\mathcal B(X,X)$ to act boundedly on it. Our main result in this direction shows that such a space $\mathbf{A}$ necessarily satisfies \[ \bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1} \hookrightarrow \mathbf{A} \hookrightarrow \bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}. \] \item We indicate how our results can be used to obtain streamlined conditions for the applicability of generalized coorbit theory using the kernel spaces $\mathcal B(X,Y)$. \end{itemize} In the above list, we only restricted ourselves to the unweighted spaces $\mathcal B(X,Y)$ for simplicity. In fact, each of the listed questions is also considered for the weighted spaces $\mathcal B_m(X,Y)$, acting boundedly on weighted mixed-norm Lebesgue spaces $\bd{L}^{p,q}_w$. \subsection{Related work} Mixed-norm Lebesgue spaces---originally introduced in \cite{MixedLpSpaces}---appear naturally whenever one considers functions of more than one variable where the different variables are related to fundamentally different notions or physical quantities. Such is the case in time-frequency analysis~\cite{gr01}, where the variables represent time (or space) and frequency, respectively. In particular, the \emph{modulation space norms} \cite{gr01,feichtinger1983modulation,feichtinger1989atomic,grochenig1999modulation} are defined by putting a (weighted) mixed Lebesgue norm on the short-time Fourier transform of the considered functions. Generalizing from this, Feichtinger and Gröchenig developed \emph{coorbit theory} \cite{FeichtingerCoorbit0,FeichtingerCoorbit1,FeichtingerCoorbit2,GroechenigDescribingFunctions}, a general framework for defining function spaces by putting certain function space norms on suitable integral transforms of the functions under consideration, and for discretizing such function spaces. The prototypical examples of coorbit spaces are the modulation spaces and the Besov spaces \cite{TriebelTheoryOfFunctionSpaces,TriebelTheoryOfFunctionSpaces3}, which can be described by putting a weighted mixed Lebesgue norm on either the short-time Fourier transform or the continuous wavelet transform of the given functions. Other examples include Triebel-Lizorkin spaces; see for instance \cite{UllrichContinuousCharacterizationsOfBesovTLSpaces,UllrichYangNewCharacterizationsOfBesovTLSpaces}. The coorbit description of Besov spaces was generalized in \cite{FuehrVoigtlaenderCoorbitSpacesAsDecompositionSpaces} to function spaces associated to more general wavelet-type transforms, including the \emph{anisotropic} Besov spaces studied in \cite{BownikAnisotropicBesovSpaces}. In the original setup of Feichtinger and Gröchenig, the considered integral transforms were required to stem from an irreducible, integrable group representation. 
This assumption has been significantly relaxed by the combined work of several authors \cite{GeneralizedCoorbit1,GeneralizedCoorbit2,kempka2015general}, leading to the theory of \emph{general coorbit spaces}, for which the integral transform is merely required to be induced by a continuous (Parseval) frame; see Sections~\ref{sec:IntroCoorbitTheory} and \ref{sub:CoorbitReview}. However, while mixed-norm Lebesgue spaces have seen substantial interest and applications for group-based coorbit theory, they have been mostly neglected in \emph{general} coorbit theory. In Sections~\ref{sec:IntroCoorbitTheory} and \ref{sec:CoorbitTheory}, we demonstrate how our results can be applied to fill this gap. Extensions of Schur's test to mixed-norm Lebesgue spaces have been studied in \cite{TaylorPhDThesis} and \cite{samarah2005schur}. In \cite{TaylorPhDThesis}, Taylor considers generalizations of the ``specific version'' of Schur's test to the setting of $n$-variable mixed-norm Lebesgue spaces $\bd{L}^{P}$ with $P = (p_1,\dots,p_n) \in (1,\infty)^n$. In addition to generalizing the sufficient conditions to this setting, most of \cite{TaylorPhDThesis} is concerned with showing that these sufficient conditions are also (almost) necessary, similar to the results of Gagliardo \cite{GagliardoIntegralTransformstionWithPositiveKernel} for the usual Lebesgue spaces. Motivated by applications in time-frequency analysis, the work \cite{samarah2005schur} by Samarah et al.~studies sufficient criteria regarding the integral kernel $K$ which ensure that $\Phi_K : \bd{L}^{p,q} \to \bd{L}^{p,q}$ is bounded\footnote{It should be observed that while the theorem statements in \cite{samarah2005schur} seem to consider boundedness of ${\Phi_K : \bd{L}^{p,q} \to \bd{L}^{p',q'}}$, an analysis of the proofs shows that actually $\Phi_K : \bd{L}^{p,q} \to \bd{L}^{p,q}$ is meant.} for $p,q \in (1,\infty)$. Again, these sufficient conditions are in the spirit of the ``specific version'' of Schur's test. Furthermore, it should be noted that most of the results in \cite{samarah2005schur} assume one of the measure spaces under consideration to have \emph{finite} measure; the only result which does not do so is \cite[Proposition~4]{samarah2005schur}, in which it is assumed instead that $1 < p \leq q < \infty$, leaving open the case where $q < p$. For the Lebesgue spaces $\bd{L}^p$, the ``uniform'' version of Schur's test (for $p \in (1,\infty)$) is a straightforward consequence of the ``specific'' version, derived by simply choosing $h \equiv g \equiv 1$. However, it is unclear how such a derivation can be obtained from the results in \cite{TaylorPhDThesis,samarah2005schur} in the case of the \emph{mixed-norm} Lebesgue spaces. Besides formalizing the ``uniform'' version of Schur's test in that setting, our work confirms its necessity for non-negative kernels and studies the properties of the closely related kernel spaces $\mathcal B_m(X,Y)$, thereby complementing the results in \cite{TaylorPhDThesis,samarah2005schur} by the addition of a new toolset for the study of integral operators on mixed-norm Lebesgue spaces. \subsection{Structure of the paper} \label{sub:Structure} In the next section we state the main results of this paper. Namely, we formulate the ``uniform version'' of Schur's test for mixed-norm Lebesgue spaces and discuss its relevance as a sufficient and necessary criterion for integral operators mapping $\bd{L}^{p,q}(\nu)$ into $\bd{L}^{p,q}(\mu)$. 
We also introduce the Schur-type Banach modules $\mathcal B(X,Y)$ and their weighted variants, and we discuss the properties of these spaces. In Section~\ref{sec:IntroCoorbitTheory}, we present an overview of general coorbit theory with respect to mixed-norm Lebesgue spaces. All proofs are deferred to the later sections: Section~\ref{sec:MixedNormSchur} covers all proofs related directly to the proposed extension of Schur's test, while Sections~\ref{sec:KernelModulePropertiesProof} and \ref{sec:StructureOfBBmCompatibleSpaces} are concerned with proving the properties of $\mathcal B(X,Y)$ and the kernels contained therein. In Section~\ref{sec:EmbeddingIntoWeightedLInfty} we consider additional mapping properties of the integral operators derived from kernels in $\mathcal B(X,Y)$. Finally, Section~\ref{sec:CoorbitTheory} closes our treatment with a more detailed account of general coorbit theory using mixed-norm Lebesgue spaces. Several technical proofs are deferred to the Appendices. In particular, we show in Appendix~\ref{sec:DualCharacterizationOfSumSpace} that $\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}$ is the associate space of the space $\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}$, which might be of independent interest. \section{\texorpdfstring{Schur's}{Schurʼs} test for mixed-norm Lebesgue spaces} \label{sec:IntroMixedNormSchur} As for the classical Schur test, we aim for readily verifiable conditions concerning the kernel $K : X \times Y \to \mathbb{C}$ which guarantee that the associated integral operator $\Phi_K$ defines a bounded linear map $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$, simultaneously for all $p,q \in [1,\infty]$. To conveniently state our result, given a measurable function $K : X \times Y \to \mathbb{C}$, we define \begin{equation} \begin{split} C_1 (K) & := \mathop{\operatorname{ess~sup}}_{x \in X} \int_Y |K(x,y)| \, d \nu (y) \in [0,\infty] , \\ C_2 (K) & := \mathop{\operatorname{ess~sup}}_{y \in Y} \int_X |K(x,y)| \, d \mu(x) \in [0,\infty] , \\ C_3 (K) & := \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \left[ \int_{Y_2} \bigg( \mathop{\operatorname{ess~sup}}_{y_1 \in Y_1} \int_{X_1} \big| K \big( (x_1, x_2), (y_1,y_2) \big) \big| \, d \mu_1 (x_1) \bigg) \, d \nu_2 (y_2) \right] \in [0,\infty] , \\ C_4 (K) & := \mathop{\operatorname{ess~sup}}_{y_2 \in Y_2} \left[ \int_{X_2} \bigg( \mathop{\operatorname{ess~sup}}_{x_1 \in X_1} \int_{Y_1} \big| K \big( (x_1, x_2), (y_1,y_2) \big) \big| \, d \nu_1 (y_1) \bigg) \, d \mu_2 (x_2) \right] \in [0,\infty] . \end{split} \label{eq:MixedNormSchurConstants} \end{equation} It should be observed that $C_3(K), C_4(K) \in [0,\infty]$ are well-defined ---that is, all appearing integrands are indeed measurable---as follows by combining Tonelli's theorem with Lemma~\ref{lem:CountableLInfinityCharacterization}. Using the quantities $C_1(K),\dots,C_4(K)$, we can now conveniently state our version of Schur's test for mixed Lebesgue spaces: \begin{theorem}\label{thm:SchurTestSufficientUnweighted} Let $(X,\mathcal{F},\mu) \!=\! (X_1 \! \times \! X_2, \mathcal{F}_1 \! \otimes \! \mathcal{F}_2, \mu_1 \! \otimes \! \mu_2)$ and $(Y,\mathcal{G},\nu) \!=\! (Y_1 \! \times \! Y_2, \mathcal{G}_1 \! \otimes \! \mathcal{G}_2, \nu_1 \! \otimes \! \nu_2)$ and assume that $\mu_1,\mu_2,\nu_1,\nu_2$ are all $\sigma$-finite. 
If $K : X \times Y \to \mathbb{C}$ is measurable and satisfies $C_i (K) < \infty$ for all $i \in \{1,2,3,4\}$, where the $C_i (K)$ are as in Equation~\eqref{eq:MixedNormSchurConstants}, then the integral operator $\Phi_K$ is well-defined and bounded as an operator $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ for all $p,q \in [1,\infty]$, with absolute convergence a.e.~of the defining integral. More precisely: \begin{itemize} \item If $p \leq q$ and $C_i (K) < \infty$ for $i \in \{1,2,3\}$, then \( \vertiii{\Phi_K}_{\bd{L}^{p,q} \to \bd{L}^{p,q}} \leq {\displaystyle \max_{i \in \{ 1,2,3 \}} C_i (K) } < \infty . \) \item If $p > q$ and $C_i (K) < \infty$ for $i \in \{1,2,4\}$, then \( \vertiii{\Phi_K}_{\bd{L}^{p,q} \to \bd{L}^{p,q}} \leq {\displaystyle \max_{i \in \{ 1,2,4 \}} C_i (K) } < \infty . \) \end{itemize} \end{theorem} \begin{proof} The proof is given in Section~\ref{sub:MixedNormSchurSufficient}. \end{proof} Even though the developed criterion is very convenient, one might wonder how sharp it is. At first sight, it might be possible that there are kernels $K$ for which $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q} (\mu)$ is bounded for all $p,q \in [1,\infty]$, but for which the generalized Schur test does not prove this boundedness. At least for kernels without cancellations---that is, non-negative kernels---this does not happen. \begin{theorem}\label{thm:SchurNecessity} Let $(X,\mathcal{F},\mu) \!=\! (X_1 \! \times \! X_2, \mathcal{F}_1 \! \otimes \! \mathcal{F}_2, \mu_1 \! \otimes \! \mu_2)$ and $(Y,\mathcal{G},\nu) \!=\! (Y_1 \! \times \! Y_2, \mathcal{G}_1 \! \otimes \! \mathcal{G}_2, \nu_1 \! \otimes \! \nu_2)$, and assume that $\mu_1,\mu_2,\nu_1,\nu_2$ are all $\sigma$-finite. Let $K : X \times Y \to [0,\infty]$ be measurable, and let the constants $C_i(K)$ be as defined in Equation~\eqref{eq:MixedNormSchurConstants}. Then the following hold: \begin{enumerate} \item If $\Phi_K : \bd{L}^1(\nu) \to \bd{L}^1(\mu)$ is well-defined and bounded, then $C_2 (K) \leq \vertiii{\Phi_K}_{\bd{L}^1 \to \bd{L}^1}$. \par \item If $\Phi_K : \bd{L}^\infty(\nu) \to \bd{L}^\infty(\mu)$ is well-defined and bounded, then $C_1(K) \leq \vertiii{\Phi_K}_{\bd{L}^\infty \to \bd{L}^\infty}$. \par \item If $\Phi_K : \bd{L}^{1,\infty}(\nu) \to \bd{L}^{1,\infty}(\mu)$ is well-defined and bounded, then $C_3 (K) \leq \vertiii{\Phi_K}_{\bd{L}^{1,\infty} \to \bd{L}^{1,\infty}}$. \par \item If $\Phi_K : \bd{L}^{\infty,1}(\nu) \to \bd{L}^{\infty,1}(\mu)$ is well-defined and bounded, then $C_4 (K) \leq \vertiii{\Phi_K}_{\bd{L}^{\infty,1} \to \bd{L}^{\infty,1}}$. \end{enumerate} \end{theorem} \begin{proof} The proof is given in Section~\ref{sub:MixedNormSchurNecessary}. \end{proof} \begin{rem*} a) This shows in particular that if $K$ is non-negative, then $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is well-defined and bounded simultaneously for \emph{all} $p,q \in [1,\infty]$ if and only if this holds for all the ``boundary cases'' $p,q \in \{1, \infty\}$. b) The non-negativity assumption regarding $K$ cannot be dropped in general. Indeed, in Appendix~\ref{sec:SharpnessComplexValued} we provide an example of a complex-valued integral kernel for which $\Phi_K$ acts boundedly on the mixed Lebesgue space $\bd{L}^{p,q}$ for all $p,q \in [1,\infty]$, but such that $C_3 (K) = \infty$. \end{rem*} The stated results readily generalize to the \emph{weighted} mixed-norm Lebesgue spaces $\bd{L}^{p,q}_w$. To describe this, let us first define these spaces. 
In general, given any Banach space $(\mathbf{A}, \| \bullet \|_{\mathbf{A}})$ whose elements are (equivalence classes of) measurable functions $f : X \to \mathbb{C}$ on a measure space $(X,\mathcal{A},\mu)$, and given any measurable function $w : X \to (0,\infty)$ (called a \emph{weight}), we define the \emph{weighted space} $\mathbf{A}_w$ as \begin{equation} \mathbf{A}_w := \big\{ f : X \to \mathbb{C} \colon f \text{ measurable and } w \cdot f \in \mathbf{A} \big\} \quad \text{ with norm } \quad \| f \|_{\mathbf{A}_w} := \| w \cdot f \|_{\mathbf{A}} . \label{eq:WeightedSpaceDefinition} \end{equation} Now, given weights $v : X \to (0,\infty)$ and $w : Y \to (0,\infty)$, and an integral kernel $K : X \times Y \to \mathbb{C}$, define \begin{equation} K_{v,w} : X \times Y \to \mathbb{C}, (x,y) \mapsto \frac{v(x)}{w(y)} \cdot K(x,y) . \label{eq:WeightedKernelDefinition} \end{equation} It is then straightforward to verify that for a given measurable function $f : Y \to \mathbb{C}$, $\Phi_{K} f$ is well-defined if and only if $\Phi_{K_{v,w}} (w \cdot f)$ is well-defined, and in this case we have \[ v \cdot (\Phi_K \, f) = \Phi_{K_{v,w}} (w \cdot f). \] Then, by applying the results from above for the unweighted spaces to the weighted kernel $K_{v,w}$, we obtain the following generalized Schur-test concerning the boundedness of the integral operator $\Phi_K$ acting on weighted mixed-norm Lebesgue spaces. \begin{theorem}\label{thm:SchurTestWeighted} Let $(X,\mathcal{F},\mu) \!=\! (X_1 \! \times \! X_2, \mathcal{F}_1 \! \otimes \! \mathcal{F}_2, \mu_1 \! \otimes \! \mu_2)$ and $(Y,\mathcal{G},\nu) \!=\! (Y_1 \! \times \! Y_2, \mathcal{G}_1 \! \otimes \! \mathcal{G}_2, \nu_1 \! \otimes \! \nu_2)$, and assume that $\mu_1,\mu_2,\nu_1,\nu_2$ are all $\sigma$-finite. Let $v : X \to (0,\infty)$ and $w : Y \to (0,\infty)$, as well as $K : X \times Y \to \mathbb{C}$ be measurable, and let the weighted kernel $K_{v,w}$ be as defined in Equation~\eqref{eq:WeightedKernelDefinition}. Then the following hold for $p,q \in [1,\infty]$: \begin{itemize} \item If $p \leq q$ and if $C_i (K_{v,w}) < \infty$ for $i \in \{1,2,3\}$, then $\Phi_K : \bd{L}^{p,q}_w (\nu) \to \bd{L}^{p,q}_v (\mu)$ is well-defined and bounded, with \( \vertiii{\Phi_K}_{\bd{L}^{p,q}_w \to \bd{L}^{p,q}_v} \leq {\displaystyle \max_{1 \leq i \leq 3} C_i (K_{v,w}) } \). \item If $p > q$ and if $C_i (K_{v,w}) < \infty$ for $i \in \{ 1,2,4 \}$ then $\Phi_K : \bd{L}^{p,q}_w (\nu) \to \bd{L}^{p,q}_v (\mu)$ is well-defined and bounded, with \( \vertiii{\Phi_K}_{\bd{L}^{p,q}_w \to \bd{L}^{p,q}_v} \leq {\displaystyle \max_{i \in \{ 1,2,4 \}} C_i (K_{v,w}) } \). \end{itemize} Finally, if $K : X \times Y \to [0,\infty]$ is non-negative, then the following hold: \begin{itemize} \item If $\Phi_K : \bd{L}^{1}_w (\nu) \to \bd{L}^{1}_v (\mu)$ is well-defined and bounded, then $C_2 (K_{v,w}) \leq \vertiii{\Phi_K}_{\bd{L}^1_w \to \bd{L}^1_v}$. \item If $\Phi_K : \bd{L}^{\infty}_w (\nu) \to \bd{L}^{\infty}_v (\mu)$ is well-defined and bounded, then $C_1 (K_{v,w}) \leq \vertiii{\Phi_K}_{\bd{L}^\infty_w \to \bd{L}^\infty_v}$. \item If $\Phi_K : \bd{L}^{1,\infty}_w (\nu) \to \bd{L}^{1,\infty}_v (\mu)$ is well-defined and bounded, then $C_3 (K_{v,w}) \leq \vertiii{\Phi_K}_{\bd{L}^{1,\infty}_w \to \bd{L}^{1,\infty}_v}$. \item If $\Phi_K : \bd{L}^{\infty,1}_w (\nu) \to \bd{L}^{\infty,1}_v (\mu)$ is well-defined and bounded, then $C_4 (K_{v,w}) \leq \vertiii{\Phi_K}_{\bd{L}^{\infty,1}_w \to \bd{L}^{\infty,1}_v}$. 
\end{itemize} \end{theorem} \subsection{A novel Banach module of integral kernels} \label{sub:IntroKernelAlgebras} Motivated by the classical form of Schur's test, Fornasier and Rauhut considered in \cite{GeneralizedCoorbit1} the algebra $\mathcal A$ of integral kernels, and also its weighted variant $\mathcal A_m$. Precisely, given $\sigma$-finite measure spaces $(X,\mathcal{F},\mu)$ and $(Y,\mathcal{G},\nu)$, and an integral kernel $K : X \times Y \to \mathbb{C}$, the $\mathcal A$-norm of $K$ is given by \begin{equation}\label{eq:normA1} \|K\|_{\mathcal A} := \|K\|_{\mathcal A (X,Y)} := \max \left\{ \mathop{\operatorname{ess~sup}}_{x \in X} \| K(x,\bullet) \|_{\bd{L}^1(\nu)} , \quad \mathop{\operatorname{ess~sup}}_{y \in Y} \| K(\bullet, y) \|_{\bd{L}^1(\mu)} \right\} \in [0,\infty] . \end{equation} Given this norm, the associated space is defined as \[ \mathcal A := \mathcal A (X,Y) := \big\{ K : X \times Y \to \mathbb{C} \,\,\colon\, K \text{ measurable and } \| K \|_{\mathcal A} < \infty \big\} . \] In \cite{GeneralizedCoorbit1}, it was observed that if $(X,\mathcal{F},\mu) = (Y,\mathcal{G},\nu)$, then $\mathcal A (X,X)$ is a Banach algebra which satisfies $\mathcal A(X,X) \hookrightarrow \mathscr{B} (\bd{L}^p, \bd{L}^p)$ for all $p \in [1,\infty]$, where we denote by $\mathscr{B} (\mathbf{A}, \mathbf{B})$ the space of all bounded linear operators $T : \mathbf{A} \to \mathbf{B}$, for given Banach spaces $\mathbf{A}, \mathbf{B}$. Here and in the following, we frequently identify a space of kernels, e.g., $\mathcal A(X,X)$, with the space of integral operators induced by such kernels, e.g., $\{ \Phi_K \colon K \!\in\! \mathcal A(X,X)\}$. Hence, the statement $\mathcal A(X,X) \hookrightarrow \mathscr{B} (\mathbf{A}, \mathbf{B})$ is to be interpreted as follows: For all $K\in \mathcal A(X,X)$, the operator $\Phi_K \colon \mathbf{A} \rightarrow \mathbf{B}$ is well-defined and satisfies $\vertiii{\Phi_K}_{\mathbf{A} \rightarrow \mathbf{B}} \lesssim \|K\|_{\mathcal A}$, where the implied constant is independent of the choice of $K \in \mathcal A(X,X)$. \begin{rem*} The space $\mathcal A$ can also be expressed in terms of mixed-norm Lebesgue spaces. In fact, using the notation $K^T (y,x) := K(x,y)$ for the \emph{transposed kernel}, it follows directly from the definition that \( \| K \|_{\mathcal A} = \max \big\{ \| K^T \|_{\bd{L}^{1,\infty}(\nu \otimes \mu)}, \| K \|_{\bd{L}^{1,\infty}(\mu \otimes \nu)} \big\} . \) \end{rem*} Given our generalized form of Schur's test, it is natural to introduce an associated kernel algebra similar to the algebra $\mathcal A$, and to study its basic properties. In fact, we will \emph{not} assume that $(X,\mathcal{F},\mu) = (Y,\mathcal{G},\nu)$, so that in general we will not obtain a \emph{Banach algebra} of kernels, but rather a \emph{Banach module} of kernels. The formal definition is as follows: \begin{definition}\label{def:NewKernelModule} Let $(X_i,\mathcal{F}_i,\mu_i)$ and $(Y_i,\mathcal{G}_i,\nu_i)$ be $\sigma$-finite measure spaces for $i \in \{ 1,2 \}$, and let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ and $(Y,\mathcal{G},\nu) = (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$. Given a measurable kernel $K : X \times Y \to \mathbb{C}$ (or $K : X \times Y \to [0,\infty]$), we define \begin{equation} K^{(x_2,y_2)} (x_1, y_1) := K \big( (x_1,x_2), (y_1,y_2) \big) \quad \text{ for } \quad (x_1,x_2) \in X \text{ and } (y_1,y_2) \in Y. 
\label{eq:PartialKernelDefinition} \end{equation} Using this notation, we define \[ \| K \|_{\mathcal B} := \| K \|_{\mathcal B (X,Y)} := \Big\| (x_2,y_2) \mapsto \| K^{(x_2,y_2)} \|_{\mathcal A (X_1, Y_1)} \Big\|_{\mathcal A (X_2, Y_2)} \in [0,\infty] , \] and \( \mathcal B (X,Y) := \{ K : X \times Y \to \mathbb{C} \, \colon K \text{ measurable and } \| K \|_{\mathcal B} < \infty \} . \) Finally, given a weight $m : X \times Y \to (0,\infty)$, we define $\| K \|_{\mathcal B_m} := \| m \cdot K \|_{\mathcal B} \in [0,\infty]$ and \( \mathcal B_m (X,Y) := \{ K : X \times Y \to \mathbb{C} \, \colon m \cdot K \in \mathcal B (X,Y) \}. \) \end{definition} \begin{rem*} It should be observed that the norm $\| K \|_{\mathcal B} \in [0,\infty]$ is indeed well-defined, since ${X_2 \times Y_2 \to [0,\infty], (x_2, y_2) \mapsto \| K^{(x_2,y_2)} \|_{\mathcal A(X_1,Y_1)}}$ is measurable, as can be seen by recalling the definition of $\| \bullet \|_{\mathcal A}$ and by combining Tonelli's theorem with Lemma~\ref{lem:CountableLInfinityCharacterization}. \end{rem*} We now collect important basic properties of the spaces $\mathcal B_m(X,Y)$. The proofs of Propositions~\ref{prop:NewKernelModuleBasicProperties1}--\ref{prop:NewKernelModuleBasicProperties3} are given in Section~\ref{sec:KernelModulePropertiesProof}. \begin{proposition}\label{prop:NewKernelModuleBasicProperties1} Let $(X_i,\mathcal{F}_i,\mu_i)$ and $(Y_i,\mathcal{G}_i,\nu_i)$ be $\sigma$-finite measure spaces for $i \in \{ 1,2 \}$, and let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ and $(Y,\mathcal{G},\nu) = (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$. The following hold for any (measurable) weight $m : X \times Y \to (0,\infty)$: \begin{enumerate} \item \label{enu:NewKernelModuleSolid} $\mathcal B_m(X,Y)$ is solid. That is, if $K \in \mathcal B_m(X,Y)$ and if $L : X \times Y \to \mathbb{C}$ is measurable with $|L| \leq |K|$ almost everywhere, then $L \in \mathcal B_m(X,Y)$ and $\| L \|_{\mathcal B_m} \leq \| K \|_{\mathcal B_m}$. \item \label{enu:NewKernelModuleFatouProperty} $\| \bullet \|_{\mathcal B_m}$ satisfies the \emph{Fatou property}: If $K_n : X \times Y \to [0,\infty]$ with $K_n \leq K_{n+1}$ and if $K := \lim_{n \to \infty} K_n$ (pointwise), then $\| K_n \|_{\mathcal B_m} \to \| K \|_{\mathcal B_m}$. \item \label{enu:NewKernelModuleComplete} $\big( \mathcal B_m(X,Y), \| \bullet \|_{\mathcal B_m} \big)$ is a Banach space. \item \label{enu:NewKernelModuleEmbedsInOld} Each $K \in \mathcal B_m (X,Y)$ satisfies $K \in \mathcal A_m(X,Y)$ and $\| K \|_{\mathcal A_m} \leq \| K \|_{\mathcal B_m}$. \end{enumerate} \end{proposition} Proposition~\ref{prop:NewKernelModuleBasicProperties2} below concerns transpositions and compositions of kernels. In particular, it shows that the spaces $\mathcal B_m(X,Y)$ are compatible with the usual \emph{product of kernels}, given by \[ (K \odot L) (x,z) := \int_Y K(x,y) \, L(y,z) \, d \nu (y) \quad \text{for all } (x,z) \in X \times Z \text{ for which the integral exists}, \] for arbitrary measurable kernels $K : X \times Y \to \mathbb{C}$ and $L : Y \times Z \to \mathbb{C}$. 
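The product $\odot$ is the kernel-side counterpart of operator composition: whenever Fubini's theorem applies to the iterated integral (for instance, if $\int_Z \int_Y |K(x,y) \, L(y,z) \, f(z)| \, d \nu(y) \, d \varrho(z) < \infty$ for almost every $x \in X$, with $\varrho$ denoting the measure on $Z$, as in the next proposition), one has, for measurable $f : Z \to \mathbb{C}$,
\[
  \Phi_K (\Phi_L \, f) (x)
  = \int_Y K(x,y) \int_Z L(y,z) \, f(z) \, d \varrho(z) \, d \nu(y)
  = \int_Z (K \odot L)(x,z) \, f(z) \, d \varrho(z)
  = \Phi_{K \odot L} \, f (x) .
\]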
\begin{proposition}\label{prop:NewKernelModuleBasicProperties2} Let $(X_i,\mathcal{F}_i,\mu_i)$ and $(Y_i,\mathcal{G}_i,\nu_i)$ be $\sigma$-finite measure spaces for $i \in \{ 1,2 \}$, and let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ and $(Y,\mathcal{G},\nu) = (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$. The following hold for any (measurable) weight $m : X \times Y \to (0,\infty)$: \begin{enumerate} \item \label{enu:NewKernelModuleAdjoint} Define $m^T : Y \times X \to (0,\infty), (y,x) \mapsto m(x,y)$. If $K \in \mathcal B_m(X,Y)$, then the transposed kernel $K^T : Y \times X \to \mathbb{C}, (y,x) \mapsto K(x,y)$ satisfies $K^T \in \mathcal B_{m^T}(Y,X)$ and furthermore \( \| K^T \|_{\mathcal B_{m^T}(Y,X)} = \| K \|_{\mathcal B_m(X,Y)} . \) \item \label{enu:NewKernelModuleMultiplicationProperty} Let \( (Z,\mathcal{H},\varrho) = (Z_1 \times Z_2, \mathcal{H}_1 \otimes \mathcal{H}_2, \varrho_1 \otimes \varrho_2) , \) where $(Z_1, \mathcal{H}_1, \varrho_1)$ and $(Z_2, \mathcal{H}_2, \varrho_2)$ are $\sigma$-finite measure spaces. Let $\omega : X \times Y \to (0,\infty)$, $\sigma : Y \times Z \to (0,\infty)$ and $\tau : X \times Z \to (0,\infty)$ be measurable and such that $\tau (x,z) \leq C \cdot \omega(x,y) \cdot \sigma(y,z)$ for all $x \in X, y \in Y$ and $z \in Z$, and some $C > 0$. If $K \in \mathcal B_\omega (X,Y)$ and $L \in \mathcal B_\sigma (Y,Z)$, then \[ K \odot L \in \mathcal B_\tau (X,Z) \quad \text{and} \quad \| K \odot L \|_{\mathcal B_\tau} \leq C \, \| K \|_{\mathcal B_\omega} \, \| L \|_{\mathcal B_\sigma} . \] Furthermore, the integral defining $(K \odot L)(x,z)$ converges absolutely for almost all $(x,z) \in X \times Z$. In particular, if the assumptions hold for $X = Y = Z$ and $\tau = \omega = \sigma$, then $\mathcal B_\tau(X,X)$ is an algebra with respect to $\odot$, and a \emph{Banach} algebra if $C = 1$. \end{enumerate} \end{proposition} Finally, the integral operators $\Phi_K$ induced by kernels $K\in \mathcal B_m(X,Y)$ act boundedly on mixed-norm Lebesgue spaces. \begin{proposition}\label{prop:NewKernelModuleBasicProperties3} Let $(X_i,\mathcal{F}_i,\mu_i)$ and $(Y_i,\mathcal{G}_i,\nu_i)$ be $\sigma$-finite measure spaces for $i \in \{ 1,2 \}$, and let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ and $(Y,\mathcal{G},\nu) = (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$. Let further $v : X \to (0,\infty)$, $w : Y \to (0,\infty)$, and $m: X\times Y \rightarrow (0,\infty)$ be measurable, and assume that there is $C > 0$ such that $\frac{v(x)}{w(y)} \leq C \cdot m(x,y)$ for all $(x,y) \in X \times Y$. For any $K \in \mathcal B_m(X,Y)$ and $p,q \in [1,\infty]$, the integral operator $\Phi_K$ is well-defined and bounded as an operator $\Phi_K : \bd{L}^{p,q}_w (\nu) \to \bd{L}^{p,q}_v (\mu)$, with absolute convergence almost everywhere of the defining integral. Finally, \( \vertiii{\Phi_K}_{\bd{L}^{p,q}_w (\nu) \to \bd{L}^{p,q}_v (\mu)} \leq C \cdot \| K \|_{\mathcal B_m (X,Y)}. \) \end{proposition} \subsection{Necessary conditions for spaces compatible with the kernel modules \texorpdfstring{$\mathcal B_m$}{B_m}} \label{sub:RestrictionsOnSolidSpaces} Proposition~\ref{prop:NewKernelModuleBasicProperties3} shows in particular that kernels belonging to $\mathcal B_m$ induce integral operators that act boundedly on mixed-norm Lebesgue spaces. 
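As a concrete illustration of Proposition~\ref{prop:NewKernelModuleBasicProperties3}, consider the unweighted case $v \equiv w \equiv m \equiv 1$ (with $C = 1$) on $X = Y = \mathbb{R} \times \mathbb{R}$ with Lebesgue measure, together with the product convolution kernel $K \big( (x_1,x_2), (y_1,y_2) \big) := \varphi(x_1 - y_1) \, \psi(x_2 - y_2)$ for $\varphi, \psi \in \bd{L}^1(\mathbb{R})$. A direct computation based on Definition~\ref{def:NewKernelModule} gives
\[
  \| K^{(x_2,y_2)} \|_{\mathcal A(\mathbb{R}, \mathbb{R})}
  = |\psi(x_2 - y_2)| \cdot \| \varphi \|_{\bd{L}^1}
  \qquad \text{and hence} \qquad
  \| K \|_{\mathcal B}
  = \| \varphi \|_{\bd{L}^1} \cdot \| \psi \|_{\bd{L}^1} ,
\]
and since $\Phi_K f = (\varphi \otimes \psi) \ast f$ with $(\varphi \otimes \psi)(x_1,x_2) := \varphi(x_1) \, \psi(x_2)$, Proposition~\ref{prop:NewKernelModuleBasicProperties3} recovers Young's inequality $\| (\varphi \otimes \psi) \ast f \|_{\bd{L}^{p,q}} \leq \| \varphi \|_{\bd{L}^1} \, \| \psi \|_{\bd{L}^1} \, \| f \|_{\bd{L}^{p,q}}$, simultaneously for all $p,q \in [1,\infty]$.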
In addition to these spaces, there certainly exist other function spaces $\mathbf{A}, \mathbf{B}$ such that every kernel $K \in \mathcal B_m$ gives rise to a bounded integral operator $\Phi_K : \mathbf{A} \to \mathbf{B}$. To mention just one possibility, we note that if $\mathbf{A} = \bd{L}^{1,\infty} (\nu) \cap \bd{L}^{\infty,1} (\nu)$ and $\mathbf{B} = \bd{L}^{1,\infty} (\mu) \cap \bd{L}^{\infty,1} (\mu)$, then each kernel $K \in \mathcal B (X,Y)$ defines a bounded integral operator $\Phi_K : \mathbf{A} \to \mathbf{B}$. Here, we equip the space $\mathbf{A}$ with the norm \( \| f \|_{\mathbf{A}} = \max \big\{ \| f \|_{\bd{L}^{1,\infty} (\nu)} , \, \| f \|_{\bd{L}^{\infty,1}(\nu)} \big\} \), and similarly for $\mathbf{B}$. Of course, a host of other choices for $\mathbf{A}, \mathbf{B}$ are possible as well. Still, the following question appears natural: Given function spaces $\mathbf{A}, \mathbf{B}$ such that each kernel $K \in \mathcal B_m (X,Y)$ induces a bounded integral operator $\Phi_K : \mathbf{A} \to \mathbf{B}$, what can be said about the spaces $\mathbf{A}, \mathbf{B}$? Are they ``similar'' to (weighted) mixed-norm Lebesgue spaces in some sense? We will give a (partial) answer to this question for the case that $\mathbf{A}, \mathbf{B}$ are \emph{solid function spaces}. Intuitively, the membership of a function $f$ in a solid space does not depend on any regularity properties of $f$, but only on the magnitude of the function values $f(x)$. Formally, we say that a normed vector space $(\mathbf{A}, \| \bullet \|_{\mathbf{A}})$ is a solid function space on a measure space $(X,\mathcal{F},\mu)$ if $\mathbf{A}$ consists of (equivalence classes of almost everywhere equal) measurable functions $f : X \to \mathbb{C}$, with the additional property that if $f \in \mathbf{A}$ and if $g : X \to \mathbb{C}$ is measurable with $|g| \leq |f|$ almost everywhere, then $g \in \mathbf{A}$ and $\| g \|_\mathbf{A} \leq \| f \|_\mathbf{A}$. In case that a solid function space $\mathbf{A}$ is complete, we say that it is a \emph{solid Banach function space}. For more details on such spaces, we refer to \cite{ZaanenIntegration} and \cite{BennettSharpleyInterpolationOfOperators}, or to \cite[Section~2.2]{VoigtlaenderPhDThesis} for readers interested in Quasi-Banach spaces. We will show in Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces} below that Banach function spaces ``compatible'' with the kernel modules $\mathcal B_m(X,Y)$ are indeed somewhat similar to certain weighted mixed-norm Lebesgue spaces. To formulate this conveniently, we first introduce two special spaces related to mixed-norm Lebesgue spaces. \begin{definition}\label{def:SumAndIntersectionSpaces} Let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ where $\mu_1,\mu_2$ are $\sigma$-finite. We denote by \( \bd{L}^1 (\mu) + \bd{L}^\infty (\mu) + \bd{L}^{1,\infty} (\mu) + \bd{L}^{\infty,1} (\mu) \) the space of all (equivalence classes of almost everywhere equal) measurable functions $f: X \rightarrow \mathbb{C}$, for which the norm \begin{equation} \begin{split} & \| f \|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} \\ & := \inf \Big\{ \| f_1 \|_{\bd{L}^1} + \| f_2 \|_{\bd{L}^\infty} + \| f_3 \|_{\bd{L}^{1,\infty}} + \| f_4 \|_{\bd{L}^{\infty,1}} \,\,\colon \begin{array}{l} f_1,f_2,f_3,f_4 : X \to \mathbb{C} \text{ measurable} \\ \text{and } f = f_1 + f_2 + f_3 + f_4 \end{array} \Big\} \end{split} \label{eq:SumNormDefinition} \end{equation} is finite. 
We further denote by \( \bd{L}^1 (\mu) \cap \bd{L}^\infty (\mu) \cap \bd{L}^{1,\infty} (\mu) \cap \bd{L}^{\infty,1} (\mu) \) the space of all (equivalence classes of almost everywhere equal) measurable functions $f: X \rightarrow \mathbb{C}$, for which the norm \begin{equation} \| f \|_{\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}} := \IntersectionNorm{f} \label{eq:IntersectionNormDefinition} \end{equation} is finite. \end{definition} \begin{rem*} It is not quite obvious that the functional $\| \bullet \|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$ is indeed definite, and hence a norm. We verify definiteness as part of Theorem~\ref{thm:DualOfIntersection}. \end{rem*} \begin{theorem}\label{thm:NecessaryConditionsForCompatibleSpaces} Let $(X,\mathcal{F},\mu) \!=\! (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ and $(Y,\mathcal{G},\nu) \!=\! (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$, where $\mu_1,\mu_2, \nu_1,\nu_2$ are all $\sigma$-finite. Let $m : X \times Y \to (0,\infty)$ be measurable, and let $\mathbf{A}$ be a solid Banach function space on $Y$ and $\mathbf{B}$ a solid Banach function space on $X$. Finally, let $v : X \to (0,\infty)$ and $w : Y \to (0,\infty)$ be measurable and such that $m(x,y) \leq C \cdot v(x) \cdot w(y)$ for all $(x,y) \in X \times Y$ and some $C > 0$. Then the following hold: \begin{enumerate} \item \label{enu:KernelCoDomainEmbedding} If $\mathbf{A} \neq \{0\}$ and if each $K \in \mathcal B_m(X,Y)$ induces a well-defined operator $\Phi_K : \mathbf{A} \to \mathbf{B}$, then \[ \big( \bd{L}^1(\mu) \cap \bd{L}^\infty (\mu) \cap \bd{L}^{1,\infty}(\mu) \cap \bd{L}^{\infty,1}(\mu) \big)_v \hookrightarrow \mathbf{B} . \] \item \label{enu:KernelDomainEmbedding} If $\mu(X) \neq 0$ and if for each non-negative kernel $K \in \mathcal B_m (X,Y)$ and each non-negative $f \in \mathbf{A}$, the function $\Phi_K f : X \to [0,\infty]$ is almost-everywhere finite-valued, then \[ \mathbf{A} \hookrightarrow \big( \bd{L}^1 (\nu) + \bd{L}^\infty (\nu) + \bd{L}^{1,\infty} (\nu) + \bd{L}^{\infty,1} (\nu) \big)_{1/w} . \] \end{enumerate} \end{theorem} \begin{proof} The proof of Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces} is given in Section~\ref{sec:StructureOfBBmCompatibleSpaces}. \end{proof} \begin{remark}\label{rem:NecessaryConditionsSharpnessRemark} In many cases, \( \mathbf{C} = \mathscr{H} := \big( \bd{L}^1(\nu) + \bd{L}^\infty (\nu) + \bd{L}^{1,\infty}(\nu) + \bd{L}^{\infty,1} (\nu) \big)_{1/w} \) turns out to be the \emph{minimal} space with the property that $\mathbf{A} \hookrightarrow \mathbf{C}$ for every solid Banach function space $\mathbf{A}$ on $Y$ for which $\Phi_K f$ is almost-everywhere finite-valued for all $0 \leq K \in \mathcal B_m (X,Y)$ and $0 \leq f \in \mathbf{A}$. Indeed, let us assume that $\mathbf{C}$ is a space with this property and that there is a weight ${u : X \to (0,\infty)}$ satisfying \begin{equation} \frac{u(x)}{1/w(y)} = u(x) \cdot w(y) \leq m(x, y) \qquad \text{for all} \qquad (x,y) \in X \times Y . \label{eq:NecessaryConditionsSharpnessAssumption} \end{equation} Then, Proposition~\ref{prop:NewKernelModuleBasicProperties3} shows for $K \in \mathcal B_m(X,Y)$ that $\Phi_K : \bd{L}^{p,q}_{1/w}(\nu) \to \bd{L}^{p,q}_u (\mu)$ is well-defined and bounded, for arbitrary $p,q \in \{1,\infty\}$. 
In particular, this means that $\Phi_K f (x) < \infty$ almost everywhere for all $0 \leq K \in \mathcal B_m(X,Y)$ and $0 \leq f \in \bd{L}^{p,q}_{1/w} (\nu)$, with $p,q \in \{1,\infty\}$. By our assumption on $\mathbf{C}$, this means $\bd{L}^{p,q}_{1/w} (\nu) \hookrightarrow \mathbf{C}$ for all $p,q \in \{1,\infty\}$, and hence $\mathscr{H} \hookrightarrow \mathbf{C}$. The existence of $u$ with $u(x) \cdot w(y) \leq m(x,y)$ is for example satisfied in the common case that $X = Y$ and \begin{equation} m(x,y) = m_w (x,y) := \max \Big\{ \frac{w(x)}{w(y)}, \frac{w(y)}{w(x)} \Big\} \quad \text{for a weight} \quad w : X \to (c,\infty) \text{ with } c > 0. \label{eq:SeparableMatrixWeight} \end{equation} Indeed, if we set $u : X \to (0,\infty), x \mapsto 1/w(x)$, then $u(x) \cdot w(y) \leq m(x,y) \leq \frac{1}{c^2} \cdot w(x) \cdot w(y)$. \end{remark} \subsection{\texorpdfstring{$\mathcal A_m$}{A_m} as a special case of \texorpdfstring{$\mathcal B_m$}{B_m}} \label{sub:OldAlgebraIsSpecialCase} The classical spaces $\mathcal A_m (X_1, Y_1)$ arise as special cases of the newly introduced spaces $\mathcal B_m(X,Y)$. Indeed, given $\sigma$-finite measure spaces $\XIndexTuple{1}$ and $\YIndexTuple{1}$, define $X_2 := Y_2 := \{0\}$ and $\mu_2 := \nu_2 := \delta_0$, and as usual $X := X_1 \times X_2$ and $Y := Y_1 \times Y_2$, as well as $\mu := \mu_1 \otimes \mu_2$ and $\nu := \nu_1 \otimes \nu_2$. Now, let us identify a function ${f : X_1 \to \mathbb{C}}$ with $\widetilde{f} : X \to \mathbb{C}, (x_1, x_2) \mapsto f(x_1)$, and $g : Y_1 \to \mathbb{C}$ with $\widetilde{g} : Y \to \mathbb{C}, (y_1, y_2) \mapsto g(y_1)$. A direct calculation shows that $\| \widetilde{f} \, \|_{\bd{L}^{p,q}(\mu)} = \| f \|_{\bd{L}^p (\mu_1)}$ and similarly $\| \widetilde{g} \|_{\bd{L}^{p,q}(\nu)} = \| g \|_{\bd{L}^p (\nu_1)}$, for arbitrary $p,q \in [1,\infty]$. In other words, ${\bd{L}^{p,q}(\mu) = \bd{L}^{p}(\mu_1)}$ and ${\bd{L}^{p,q}(\nu) = \bd{L}^p (\nu_1)}$, up to canonical identifications. Finally, an easy calculation shows that if we identify a given kernel ${K : X_1 \times Y_1 \to \mathbb{C}}$ with ${\widetilde{K} : X \times Y \to \mathbb{C}, \big( (x_1,x_2), (y_1,y_2) \big) \mapsto K(x_1, y_1)}$, then ${\| \widetilde{K} \|_{\mathcal B(X,Y)} = \| K \|_{\mathcal A(X_1, Y_1)}}$, and thus $\mathcal A_m(X_1,Y_1) = \mathcal B_m(X,Y)$, up to canonical identifications. By using this identification of the spaces $\mathcal A_m(X_1, Y_1)$ as special $\mathcal B_m(X,Y)$ spaces, all of our results imply a corresponding result for $\mathcal A_m(X_1, Y_1)$. Most of the conclusions obtained in this way are already well-known, but some seem to be new. As an example of such a result, we explicitly state the consequences of Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces} for the spaces $\mathcal A_m$. \begin{corollary}\label{cor:NecessaryConditionsForSpacesCompatibleWithAAm} Let $(X,\mathcal{F},\mu)$ and $(Y,\mathcal{G},\nu)$ be $\sigma$-finite measure spaces, let $\mathbf{A}$ be a solid Banach function space on $Y$, and $\mathbf{B}$ a solid Banach function space on $X$. Finally, let $m : X \times Y \to (0,\infty)$, $v : X \to (0,\infty)$, and $w : Y \to (0,\infty)$ be measurable and such that $m(x,y) \leq C \cdot v(x) \cdot w(y)$ for all $(x,y) \in X \times Y$ and some $C > 0$. 
Then the following hold: \begin{enumerate} \item If $\mathbf{A} \neq \{0\}$ and if each $K \in \mathcal A_m(X,Y)$ induces a well-defined operator $\Phi_K : \mathbf{A} \to \mathbf{B}$, then \( \big( \bd{L}^1(\mu) \cap \bd{L}^\infty(\mu) \big)_{v} \hookrightarrow \mathbf{B} . \) \item If $\mu(X) \neq 0$ and if for each $0 \leq K \in \mathcal A_m(X,Y)$ and each $0 \leq f \in \mathbf{A}$, the function $\Phi_K f : X \to [0,\infty]$ is almost-everywhere finite-valued, then \( \mathbf{A} \hookrightarrow \big( \bd{L}^1(\nu) + \bd{L}^\infty (\nu) \big)_{1/w} . \) \end{enumerate} \end{corollary} \begin{rem*} Part~(2) of the corollary is strictly stronger than the inclusion $\mathbf{A} \hookrightarrow \mathcal{D} \big( \mathcal{U}, L^1, (L^\infty_{1/w})^\natural \big)$ obtained in \cite[Lemma~8(a)]{GeneralizedCoorbit1}. Furthermore, the space $(\bd{L}^1 + \bd{L}^\infty)_{1/w}$ seems more natural than the space $\mathcal{D} \big( \mathcal{U}, L^1, (L^\infty_{1/w})^\natural \big)$, the definition of which is more involved (see \mbox{\cite[Pages 261 and 265]{GeneralizedCoorbit1}}). To see that the above result is indeed a strict improvement, first note that Remark~\ref{rem:NecessaryConditionsSharpnessRemark} shows \( (\bd{L}^1 + \bd{L}^\infty)_{1/w} \hookrightarrow \mathcal{D} \big( \mathcal{U}, L^1, (L^\infty_{1/w})^\natural \big) . \) Here, Condition~\eqref{eq:NecessaryConditionsSharpnessAssumption} is satisfied under the assumptions imposed in \cite{GeneralizedCoorbit1}, since these imply that $X = Y$ and that $\frac{w(y)}{w(x)} \leq m(x,y) \leq w(x) \cdot w(y)$ for $w(x) := m(x,z)$ with $z \in X$ fixed; see \cite[Equations~(3.2), (3.3), and (3.7)]{GeneralizedCoorbit1}. Finally, if one chooses $(X,\mu) = (\mathbb{R}, \lambda)$ with the Lebesgue measure $\lambda$, and $m \equiv w \equiv 1$, as well as $\mathcal{U} = (U_k)_{k \in \mathbb{Z}} = \big( (k-1,k+1) \big)_{k \in \mathbb{Z}}$, then the function $f := \sum_{n=1}^\infty 2^n \cdot {\mathds{1}}_{[n,n+2^{-n}]}$ satisfies $f \notin \bd{L}^1 + \bd{L}^\infty$, but $f \in \mathcal{D} \big( \mathcal{U}, L^1, (L^\infty_{1/w})^\natural \big)$. To see this, note that $\| f \cdot {\mathds{1}}_{U_k} \|_{\bd{L}^1} \leq 3$ for all $k \in \mathbb{Z}$ and hence $f \in \mathcal{D} \big( \mathcal{U}, L^1, (L^\infty_{1/w})^\natural \big)$, but that $\int_E f \, d \lambda = \infty$ for the finite measure set $E := \bigcup_{n \in \mathbb{N}} [n, n + 2^{-n}]$; this shows $f \notin \bd{L}^1 + \bd{L}^\infty$. \end{rem*} \subsection{Boundedness of \texorpdfstring{$\Phi_K : \mathbf{A} \to \bd{L}^\infty_{1/v}$} {ΦK from A into a weighted L∞ space}} \label{sub:IntroEmbeddingIntoWeightedLInfty} In some applications it is not enough to merely know that an integral kernel $K$ induces a bounded integral operator $\Phi_K : \mathbf{A} \to \mathbf{A}$. Instead, it might be required that $\Phi_K : \mathbf{A} \to \mathbf{C}$ is well-defined and bounded, for some space $\mathbf{C}$ that does not necessarily contain $\mathbf{A}$. In particular for general coorbit theory---see Section~\ref{sec:IntroCoorbitTheory}--- such a property is required for $\mathbf{C} = \bd{L}^\infty_{1/v}$, for a suitable weight $v$. Having to verify this additional condition can be a serious obstruction for the application of coorbit theory. Using the kernel modules $\mathcal B_m(X,X)$, it is possible to simplify proving that $\Phi_K : \mathbf{A} \to \bd{L}^\infty_{1/v}$ is indeed bounded for a suitable choice of $v$. 
All one needs to verify are the following two conditions: \begin{enumerate} \item $\mathcal B_m(X,X) \hookrightarrow \mathscr{B}(\mathbf{A}, \mathbf{A})$, which holds for all (suitably) weighted mixed-norm Lebesgue spaces $\mathbf{A} = \bd{L}^{p,q}_w$; and \item a certain \emph{maximal kernel} $\MaxKernel{\mathcal{U}} K$ of the kernel $K$ belongs to $\mathcal B_m(X,X)$. \end{enumerate} The maximal kernel $\MaxKernel{\mathcal{U}} K$ is defined using a suitable covering $\mathcal{U}$ of $X = X_1 \times X_2$. The following definition clarifies the conditions that this covering has to satisfy, and formally introduces the maximal kernel $\MaxKernel{\mathcal{U}} K$. \begin{definition}\label{def:ProductAdmissibleCovering} Let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$, where $\XIndexTuple{i}$ is a $\sigma$-finite measure space for $i \in \{ 1, 2 \}$. A family $\mathcal{U} = (U_j)_{j \in J}$ is said to be a \emph{product-admissible covering} of $X$, if it satisfies the following conditions: \begin{enumerate} \item the index set $J$ is countable; \item we have $X = \bigcup_{j \in J} U_j$; \item each $U_j$ is of the form $U_j = V_j \times W_j$ with $V_j \in \mathcal{F}_1$ and $W_j \in \mathcal{F}_2$ and we have $\mu(U_j) > 0$; \item there is a constant $C > 0$ such that the \emph{covering weight} $w_{\mathcal{U}}$ defined by \begin{equation} (w_{\mathcal{U}})_j := \min \big\{ 1, \mu_1 (V_j), \mu_2 (W_j), \mu(U_j) \big\} \qquad \text{for } j \in J \label{eq:CoveringWeightDefinition} \end{equation} satisfies $(w_{\mathcal{U}})_i \leq C \cdot (w_{\mathcal{U}})_j$ for all $i,j \in J$ for which $U_i \cap U_j \neq \emptyset$. \end{enumerate} Given such a product-admissible covering $\mathcal{U} = (U_j)_{j \in J}$ and any kernel $K : X \times X \to \mathbb{C}$, the associated \emph{maximal kernel} $\MaxKernel{\mathcal{U}} K$ is defined as \begin{equation} \MaxKernel{\mathcal{U}} K : X \times X \to [0,\infty], (x,y) \mapsto \sup_{z \in \mathcal{U}(x)} |K(z,y)| \quad \text{where} \quad \mathcal{U}(x) := \bigcup_{j \in J \text{ with } x \in U_j} U_j . \label{eq:MaximalKernelDefinition} \end{equation} Finally, given a weight $u : X \to (0,\infty)$, we say that $u$ is \emph{$\mathcal{U}$-moderate}, if there is a constant $C' > 0$ such that $u(x) \leq C' \cdot u(y)$ for all $j \in J$ and all $x,y \in U_j$. \end{definition} Our precise sufficient criterion for ensuring that $\Phi_K : \mathbf{A} \to \bd{L}^\infty_{1/v}$ reads as follows: \begin{theorem}\label{thm:BoundednessIntoWeightedLInfty} Let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$, where $\XIndexTuple{i}$ is a $\sigma$-finite measure space for $i \in \{ 1, 2 \}$. Let $m : X \times X \to (0,\infty)$ and $u : X \to (0,\infty)$ be measurable and such that $m(x,y) \leq C \cdot u(x) \cdot u(y)$ for all $x,y \in X$ and some constant $C > 0$. Let $\mathcal{U} = (U_j)_{j \in J}$ be a product-admissible covering of $X$ for which $u$ is \emph{$\mathcal{U}$-moderate}. Then the following hold: \begin{enumerate} \item There is a measurable weight $w_{\mathcal{U}}^c : X \to (0,\infty)$ (a ``continuous version'' of the discrete weight $w_{\mathcal{U}}$ defined in \eqref{eq:CoveringWeightDefinition}) such that \begin{equation} \sup_{j \in J} \sup_{x \in U_j} \bigg[ \frac{(w_{\mathcal{U}})_j}{w_{\mathcal{U}}^c (x)} + \frac{w_{\mathcal{U}}^c (x)}{(w_{\mathcal{U}})_j} \bigg] < \infty . 
\label{eq:ContinuousCoveringWeightCondition} \end{equation} For any two such weights $w_{\mathcal{U}}^c, v_{\mathcal{U}}^c$, we have $w_{\mathcal{U}}^c \asymp v_{\mathcal{U}}^c$. \item Let $\mathbf{A}$ be a solid Banach function space on $X$ such that $\mathcal B_m(X,X) \hookrightarrow \mathscr{B}(\mathbf{A})$. Let the kernel ${K : X \times X \to \mathbb{C}}$ be measurable, and assume that there is a measurable kernel ${L : X \times X \to [0,\infty]}$ satisfying $\MaxKernel{\mathcal{U}} K \leq L$ and $\| L \|_{\mathcal B_m} < \infty$. Define \begin{equation} v : X \to (0,\infty), x \mapsto \frac{u(x)}{w_{\mathcal{U}}^c (x)} . \label{eq:SpecialLInftyWeight} \end{equation} Then $K \in \mathcal B_m(X,X)$ (so that $\Phi_K : \mathbf{A} \to \mathbf{A}$ is bounded), and $K$ induces a well-defined and bounded integral operator ${\Phi_K : \mathbf{A} \to \bd{L}^\infty_{1/v}}$. \end{enumerate} \end{theorem} \begin{rem*} The slightly convoluted formulation involving the additional kernel $L$ is only chosen to avoid having to assume that the maximal kernel $\MaxKernel{\mathcal{U}} K$ is measurable. \end{rem*} \begin{proof} The proof of Theorem~\ref{thm:BoundednessIntoWeightedLInfty} is given in Section~\ref{sec:EmbeddingIntoWeightedLInfty}. \end{proof} \section{Application: Simplified conditions for coorbit theory using \texorpdfstring{$\mathcal B_m(X)$}{B_m(X)}} \label{sec:IntroCoorbitTheory} The main idea of coorbit theory is to quantify the ``niceness'' of a function in terms of the decay of its frame coefficients with respect to a fixed continuous frame $\Psi$. This approach gives rise to a family of function spaces---the so-called \emph{coorbit spaces}--- and an associated discretization theory for these spaces. Coorbit theory was originally introduced by Feichtinger and Gröchenig in the seminal papers~\cite{FeichtingerCoorbit0,FeichtingerCoorbit1,FeichtingerCoorbit2} for frames generated by an integrable, irreducible group representation. It was later generalized to arbitrary continuous frames; see \cite{GeneralizedCoorbit1,GeneralizedCoorbit2}. Our presentation is based on the recent contribution by Kempka et al.~\cite{kempka2015general}, which generalizes and streamlines coorbit theory for continuous frames (\emph{general coorbit theory}). In this introduction, we only present a sketch of the theory and give an honest but simplified account of how the kernel spaces $\mathcal B_m$ can be used to simplify the process of verifying the necessary assumptions. The reader interested in the finer details is referred to Section~\ref{sec:CoorbitTheory}. Let $\mathcal{H}$ be a separable Hilbert space, and let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$, where $\XIndexTuple{j}$ ($j \in \{1,2\}$) is a $\sigma$-compact, second countable, locally compact Hausdorff space with Borel $\sigma$-algebra $\mathcal{F}_j$, and a Radon measure $\mu_j$ with $\mathop{\operatorname{supp}} \mu_j = X_j$. Furthermore, fix a \emph{continuous Parseval frame} $\Psi = (\psi_x)_{x \in X} \subset \mathcal{H}$, which by definition means that the \emph{voice transform} \begin{equation} V_\Psi f : X \to \mathbb{C}, x \mapsto \langle f, \psi_x \rangle_{\mathcal{H}} \label{eq:VoiceTransformDefinition} \end{equation} is measurable for each $f \in \mathcal{H}$ and satisfies \begin{equation} \| f \|_{\mathcal{H}}^2 = \| V_\Psi f \|_{\bd{L}^2 (\mu)}^2 \qquad \forall \, f \in \mathcal{H} . 
\label{eq:ParsevalFrameCondition} \end{equation} We will only be working with Parseval frames as above; for a detailed treatment of general continuous frames, we refer to \cite{AliContinuousFrames,SpeckbacherReproducingPairs1}. The idea of coorbit theory is to generalize the description \eqref{eq:ParsevalFrameCondition} of the $\mathcal{H}$-norm in terms of the $L^2$-norm of the voice transform to different decay conditions regarding the voice transform. Somewhat more precisely, we say that a solid Banach function space $\mathbf{A}$ on $X$ is \emph{rich}, if ${\mathds{1}}_K \in \mathbf{A}$ for all compact sets $K \subset X$. Given such a rich solid Banach function space $\mathbf{A}$ satisfying certain additional technical conditions, the associated \emph{coorbit space} $\operatorname{Co} (\mathbf{A})$ is defined as \[ \operatorname{Co} (\mathbf{A}) := \operatorname{Co}_\Psi (\mathbf{A}) := \big\{ f \in \mathscr{R} \,\,\colon\, V_\Psi f \in \mathbf{A} \big\} , \qquad \text{with norm} \qquad \| f \|_{\operatorname{Co} (\mathbf{A})} := \| V_\Psi f \|_{\mathbf{A}} . \] Here, the space $\mathscr{R}$---the ``reservoir'' from which the elements of $\operatorname{Co}(\mathbf{A})$ are taken---can be thought of as a generalization of the space of (tempered) distributions to the general setting considered here; see Lemma~\ref{lem:ReservoirConditions} for the precise definition. The first component of coorbit theory is a set of ``compatibility criteria'' between the \emph{reproducing kernel} (or Gramian kernel) \begin{equation} K_\Psi : X \times X \to \mathbb{C}, (x,y) \mapsto \langle \psi_y, \psi_x \rangle_{\mathcal{H}} \label{eq:ReproducingKernelDefinition} \end{equation} of the frame $\Psi$, and the Banach function space $\mathbf{A}$, which ensure that the coorbit spaces are actually well-defined Banach spaces. In their usual formulation, these conditions are quite technical. Using the kernel spaces $\mathcal B_m$, however, one can formulate a set of conditions which is less tedious to verify. To conveniently formulate these conditions, we will assume the following: \begin{equation} \begin{split} & \mathcal{U} = (U_j)_{j \in J} \text{ is a product admissible covering of } X, \\ & u : X \to (0,\infty) \text{ is $\mathcal{U}$-moderate and locally bounded}, \\ & v : X \to [1,\infty) \text{ is measurable with } v(x) \gtrsim \max \big\{ \| \psi_x \|_{\mathcal{H}}, \,\, [w_{\mathcal{U}}^c (x)]^{-1} \, u(x) \big\} , \\ & m_0 \!:\! X \!\times\! X \to (0,\infty) \text{ is measurable with } m_0(x,y) \!\leq\! C \, u(x) \, u(y) \text{ and } m_0(x,y) \!=\! m_0(y,x), \\ & \mathbf{A} \neq \{ 0 \} \text{ is a solid Banach function space on $X$, and } \| \Phi_K \|_{\mathbf{A} \to \mathbf{A}} \leq \| K \|_{\mathcal B_{m_0}} \,\,\, \forall \, K \in \mathcal B_{m_0} . \end{split} \label{eq:CoorbitWeightConditions} \end{equation} Here, $w_{\mathcal{U}}^c$ is as defined in Part~(1) of Theorem~\ref{thm:BoundednessIntoWeightedLInfty}. \begin{rem*} \emph{(i):} By means of Proposition~\ref{prop:NewKernelModuleBasicProperties3}, we see that the condition $\| \Phi_K \|_{\mathbf{A} \to \mathbf{A}} \leq \| K \|_{\mathcal B_{m_0}}$ is satisfied if $\mathbf{A} = \bd{L}_\kappa^{p,q}(\mu)$ and $\frac{\kappa(x)}{\kappa(y)} \leq m_0 (x,y)$. {} \emph{(ii):} If only $\| \Phi_K \|_{\mathbf{A} \to \mathbf{A}} \leq C_0 \, \| K \|_{\mathcal B_{m_0}}$ holds, then one can achieve $C_0 = 1$ by replacing $m_0$ with $C_0 \cdot m_0$. Thus, there is no real loss of generality in assuming $C_0 = 1$. 
\end{rem*} Our simplified condition for the well-definedness of the coorbit spaces reads as follows: \begin{theorem}\label{thm:IntroductionCoorbitWellDefinedConditions} Let $(X,\mathcal{F},\mu)$ and $\Psi$ be as above, and assume that Condition~\eqref{eq:CoorbitWeightConditions} is satisfied. With $m_v$ as in Equation~\eqref{eq:SeparableMatrixWeight} and $M_{\mathcal{U}} K_\Psi$ as in Equation~\eqref{eq:MaximalKernelDefinition}, assume that \begin{equation} K_\Psi \in \mathcal A_{m_v} \quad \text{and} \quad \text{there exists } L \in \mathcal B_{m_0} \text{ satisfying } M_{\mathcal{U}} K_\Psi \leq L . \label{eq:CoorbitWellDefinedConditions} \end{equation} Then all technical assumptions imposed in \cite[Sections 2.3 and 2.4]{kempka2015general} are satisfied; consequently, the coorbit space $\operatorname{Co}_\Psi (\mathbf{A})$ is a well-defined Banach space. \end{theorem} \begin{rem*} The conclusion of the theorem is deliberately left somewhat vague; the finer details are given in Proposition~\ref{pro:coorbitsWithBm}. \end{rem*} The second component of coorbit theory is the \emph{discretization theory}. This provides conditions under which a sampled version $\Psi_d = (\psi_{x_i})_{i \in I}$ of the continuous frame $\Psi = (\psi_x)_{x \in X}$ can be used to describe the coorbit space $\operatorname{Co}_{\Psi} (\mathbf{A})$. More precisely, if the discretization theory is applicable, then there are solid sequence spaces $\mathbf{A}^{\flat}, \mathbf{A}^{\sharp} \subset \mathbb{C}^{I}$ associated to $\mathbf{A}$ (see Equation~\eqref{eq:SequenceSpaceNorms}) such that the coefficient and synthesis maps \begin{equation} C_{\Psi_d} : \operatorname{Co}_\Psi(\mathbf{A}) \to \mathbf{A}^{\flat}, f \mapsto \big( \langle f, \psi_{x_i} \rangle \big)_{i \in I} \quad \text{and} \quad D_{\Psi_d} : \mathbf{A}^{\sharp} \to \operatorname{Co}_\Psi (\mathbf{A}), (c_i)_{i \in I} \mapsto \sum_{i \in I} c_i \, \psi_{x_i} \label{eq:CoefficientSynthesisOperator} \end{equation} are well-defined and bounded, and such that $C_{\Psi_d}$ has a bounded linear left-inverse, while $D_{\Psi_d}$ has a bounded linear right-inverse. In the language of \cite{GroechenigDescribingFunctions}, the former means that the sampled frame $\Psi_d$ forms a \emph{Banach frame} and the latter implies that $\Psi_d$ is an \emph{atomic decomposition} for $\operatorname{Co}_\Psi (\mathbf{A})$. Put briefly, we say that $\Psi_d$ forms a \emph{Banach frame decomposition} of $\operatorname{Co}_\Psi (\mathbf{A})$ if both conditions are satisfied. To conveniently state our simplified criteria for the applicability of the discretization theory, we require the notion of the \emph{oscillation} ${\mathrm{osc}}_{\UU,\Gamma} (K)$ of a kernel $K : X \times X \to \mathbb{C}$ with respect to a covering $\mathcal{U} = (U_i)_{i \in I}$ and a (not necessarily measurable) \emph{phase function} $\Gamma : X \times X \to S^1$, where $S^1 := \{ z \in \mathbb{C} \colon |z| = 1 \}$. This oscillation is defined as \begin{equation} {\mathrm{osc}}_{\UU,\Gamma}(K) : X \times X \rightarrow [0,\infty], \quad (x, y) \mapsto \sup_{z \in \mathcal{U}(y)} \big| K(x, y) - \Gamma(y, z) K(x, z) \big|, \label{eq:OscillationDefinition} \end{equation} with $\mathcal{U}(y) := \bigcup_{i \in I \text{ with } y \in U_i} U_i$ as in Equation~\eqref{eq:MaximalKernelDefinition}. Furthermore, we need the notion of an \emph{admissible covering}. 
Precisely, a family $\mathcal{U} = (U_i)_{i \in I}$ of subsets of $X$ is an \emph{admissible covering} of $X$, if it satisfies the following properties: \begin{enumerate} \item $\mathcal{U}$ is a covering of $X$; that is, $X = \bigcup_{i \in I} U_i$; \item $\mathcal{U}$ is locally finite, meaning that each $x \in X$ has an open neighborhood intersecting only finitely many of the $U_i$; \item each $U_i$ is measurable, relatively compact, and has non-empty interior; \item the intersection number \( \sigma(\mathcal{U}) := \sup_{i \in I} |\{ \ell \in I \colon U_\ell \cap U_i \neq \emptyset \}| \) is finite. \end{enumerate} \begin{theorem}\label{thm:IntroductionCoorbitDiscretizationConditions} Suppose that the assumptions of Theorem~\ref{thm:IntroductionCoorbitWellDefinedConditions} hold, and define ${m := m_v + m_0}$. If there exist an admissible covering $\widetilde{\mathcal{U}} = (\widetilde{U}_i)_{i \in I}$ of $X$, a phase function $\Gamma : X \times X \to S^1$, and some $L \in \mathcal B_m$ such that \[ {\mathrm{osc}}_{\widetilde{\UU},\Gamma} (K_\Psi) \leq L \quad \text{ and } \quad \| L \|_{\mathcal B_m} \cdot \bigl( 2 \, \| K_\Psi \|_{\mathcal B_m} + \| L \|_{\mathcal B_m} \bigr) < 1, \] and if for each $i \in I$ some $x_i \in \widetilde{U}_i$ is chosen, then all technical assumptions imposed in \mbox{\cite[Theorem~2.48]{kempka2015general}} are satisfied; consequently, $\Psi_d = (\psi_{x_i})_{i \in I}$ is a Banach frame decomposition for $\operatorname{Co}_\Psi(\mathbf{A})$. \end{theorem} \begin{rem*} Again, the conclusion of the theorem is deliberately not completely precise; a rigorous version is given in Proposition~\ref{pro:coorbitsWithBm2}. \end{rem*} \section{Proving \texorpdfstring{Schur's}{Schurʼs} test for weighted mixed-norm Lebesgue spaces} \label{sec:MixedNormSchur} In this section, we prove our generalization of Schur's test to (weighted) mixed-norm Lebesgue spaces; that is, we prove Theorems~\ref{thm:SchurTestSufficientUnweighted} and \ref{thm:SchurNecessity}. We will assume throughout that $(X_i, \mathcal{F}_i, \mu_i)$ and $(Y_i, \mathcal{G}_i, \nu_i)$ are $\sigma$-finite measure spaces for $i = 1,2$. Define $X := X_1 \times X_2$ and $Y := Y_1 \times Y_2$, which we equip with the respective product $\sigma$-algebras $\mathcal{F} := \mathcal{F}_1 \otimes \mathcal{F}_2$ or $\mathcal{G} := \mathcal{G}_1 \otimes \mathcal{G}_2$, and with the product measures $\mu := \mu_1 \otimes \mu_2$ and $\nu := \nu_1 \otimes \nu_2$. \subsection{Sufficiency of the generalized Schur condition} \label{sub:MixedNormSchurSufficient} In this subsection, we prove that the integral operator $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is indeed well-defined and bounded if $C_i (K) < \infty$ for all $i \in \{1,2,3,4\}$, with $C_i (K)$ as defined in Equation~\eqref{eq:MixedNormSchurConstants}. To simplify the subsequent proofs, we first show that it suffices to only consider non-negative kernels $K \geq 0$ and functions $f \geq 0$, and show that \( \|\Phi_K \, f\|_{\bd{L}^{p,q}(\mu)} \leq [\max_{i \in J} C_i (K)] \cdot \|f\|_{\bd{L}^{p,q}(\nu)} \) for some subset $J \subset \{1,2,3,4\}$ that might depend on $p,q$. To see why this is enough, suppose we know that \( \|\Phi_{|K|} \, f\|_{\bd{L}^{p,q}(\mu)} \leq \bigl[ \max_{i \in J} C_i (|K|) \bigr] \cdot \|f\|_{\bd{L}^{p,q}(\nu)} \) for all $f \geq 0$. Now, first note that $C_i (K) = C_i (|K|)$; hence, if $C_i (K) < \infty$, then $C_i (|K|) < \infty$ as well. 
Next, note that if $C_i (K) < \infty$ for all $i \in J$, then \[ |\Phi_K f (x)| \leq \int_{Y} |K|(x,y) \, |f|(y) \, d \nu(y) = [\Phi_{|K|} \, |f|](x) < \infty \] for $f \in \bd{L}^{p,q}(\nu)$ and almost all $x \in X$. Hence, $\Phi_K \, f$ is an almost-everywhere well-defined function which is measurable by Fubini's theorem, and which satisfies \[ \|\Phi_K \, f\|_{\bd{L}^{p,q}(\mu)} \leq \big\| \, \Phi_{|K|} |f| \, \big\|_{\bd{L}^{p,q}(\mu)} \leq \big[ \max_{i \in J} C_i (|K|) \big] \cdot \| \, |f| \, \|_{\bd{L}^{p,q}(\nu)} = \big[ \max_{i \in J} C_i (K) \big] \cdot \| f \|_{\bd{L}^{p,q}(\nu)}, \] as desired. Therefore, in the following proofs we will concentrate on the case where $K,f$ are non-negative. This has the advantage that $\Phi_K \, f : X \to [0,\infty]$ is always a well-defined measurable function, as a consequence of Tonelli's theorem. {} We start our proof by considering the case $p \leq q$. In the proof and in the remainder of the paper, we will frequently use the (elementary) estimate \begin{equation} \mathop{\operatorname{ess~sup}}_{\omega \in \Omega} \int_{\Lambda} F(\omega,\lambda) \, d \mu(\lambda) \leq \int_{\Lambda} \mathop{\operatorname{ess~sup}}_{\omega \in \Omega} F(\omega,\lambda) \, d \mu(\lambda), \label{eq:EsssupOfIntegral} \end{equation} which holds for any measurable function $F : \Omega \times \Lambda \to [0,\infty]$, where $(\Omega,\nu)$ and $(\Lambda,\mu)$ are $\sigma$-finite measure spaces. To see this, first note that $\lambda \mapsto \mathop{\operatorname{ess~sup}}_{\omega \in \Omega} F(\omega,\lambda)$ is measurable, for instance as a consequence of Lemma~\ref{lem:CountableLInfinityCharacterization}. Next, Equation~\eqref{eq:EsssupOfIntegral} is trivial if the right-hand side is infinite. On the other hand, if the right-hand side is finite, we see for arbitrary $g \in \bd{L}^1 (\nu)$ with $g \geq 0$ that \[ \int_\Omega \! g(\omega) \!\! \int_\Lambda \!\! F(\omega,\lambda) \, d\mu(\lambda) \, d \nu(\omega) = \!\! \int_\Lambda \! \int_\Omega \! g(\omega) F(\omega,\lambda) \, d \nu(\omega) \, d\mu(\lambda) \leq \! \|g\|_{\bd{L}^1(\nu)} \! \int_\Lambda \! \mathop{\operatorname{ess~sup}}_{\omega \in \Omega} F(\omega,\lambda) \, d\mu(\lambda). \] In view of the characterization of the $\bd{L}^\infty(\nu)$-norm by duality (see \mbox{\cite[Theorem~6.14]{FollandRA}}), this implies \( \|\omega \mapsto \int_\Lambda F(\omega,\lambda) \, d \mu(\lambda)\|_{\bd{L}^\infty(\nu)} \leq \int_\Lambda \mathop{\operatorname{ess~sup}}_{\omega \in \Omega} F(\omega,\lambda) \, d \mu (\lambda) \), which is precisely \eqref{eq:EsssupOfIntegral}. With these preparations, we now prove Theorem~\ref{thm:SchurTestSufficientUnweighted} for the case $p \leq q$. \begin{proposition}\label{prop:SchurTestMixedCase1} Let $\XIndexTuple{i}$ and $\YIndexTuple{i}$ be $\sigma$-finite measure spaces for $i \in \{ 1, 2 \}$, and let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ and $(Y,\mathcal{G},\nu) = (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$. Assume that $K : X \times Y \to \mathbb{C}$ is measurable and satisfies $C_i (K) < \infty$ for $i \in \{1,2,3\}$, where $C_i (K)$ is as defined in Equation~\eqref{eq:MixedNormSchurConstants}. Finally, assume that $p,q \in [1, \infty]$ with $p \leq q$. 
Then \[ \Phi_{K} : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu) \] is well-defined (with absolute convergence of the defining integral for $\mu$-almost all $x \in X$) and bounded, with \( \vertiii{\Phi_{K}}_{\bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)} \leq \max_{1 \leq i \leq 3} C_i (K) . \) \end{proposition} \begin{proof} As discussed at the beginning of Section~\ref{sub:MixedNormSchurSufficient}, it suffices to consider non-negative kernels $K \geq 0$, and to show that \( \| \Phi_K \, f \|_{\bd{L}^{p,q}(\mu)} \leq \max \{ C_1 (K) , C_2 (K), C_3(K) \} \cdot \|f\|_{\bd{L}^{p,q}(\nu)} \) for $f \in \bd{L}^{p,q} (\nu)$ with $f \geq 0$. We divide the proof of this fact into three steps. {} \noindent \textbf{Step 1} \emph{(the case $p = \infty$):} This implies $q = \infty$, since $p \leq q$. For $f \in \bd{L}^{p,q}(\nu) = \bd{L}^\infty(\nu)$ with $f \geq 0$, we have \[ 0 \leq \Phi_K f (x) \leq \int_Y K(x,y) \, f(y) \, d \nu (y) \leq \int_Y K(x,y) \, d \nu (y) \cdot \|f\|_{\bd{L}^{\infty}(\nu)} \leq C_1 (K) \cdot \|f\|_{\bd{L}^{p,q}(\nu)} < \infty \] for $\mu$-almost all $x \in X$, and hence $\| \Phi_K f \|_{\bd{L}^{p,q}(\mu)} \leq C_1 (K) \cdot \| f \|_{\bd{L}^{p,q}(\nu)}$. {} \noindent \textbf{Step 2} \emph{(the case $p = 1$):} If we also have $q = 1$, then $\bd{L}^{p,q} = \bd{L}^1$, so that the standard version of Schur's test (see \cite[Theorem~6.18]{FollandRA}) shows that $\Phi_K : \bd{L}^1(\nu) \to \bd{L}^1 (\mu)$ is well-defined and bounded, with \( \vertiii{\Phi_K}_{\bd{L}^{p,q} \to \bd{L}^{p,q}} = \vertiii{\Phi_K}_{\bd{L}^1 \to \bd{L}^1} \leq \max \{ C_1(K), C_2(K) \} , \) as desired. For the case $q \in (1,\infty]$, define \[ H : X_2 \times Y \to [0,\infty], (x_2, y) \mapsto \int_{X_1} K \big( (x_1,x_2), y \big) \, d \mu_1 (x_1), \] and note that \( \int_{X_2} H(x_2, y) \, d \mu_2(x_2) \leq C_2 (K) \) for $\nu$-almost all $y \in Y$. Now, we distinguish two cases. First, if $q = \infty$, we see for $f \in \bd{L}^{p,q}(\nu)$ with $f \geq 0$ that \begin{align*} \|\Phi_K f\|_{\bd{L}^{p,q}(\mu)} & = \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \int_{X_1} \int_Y K \big( (x_1,x_2), y \big) f(y) \, d \nu(y) \, d \mu_1 (x_1) \\ ({\scriptstyle{\text{Tonelli's theorem}}}) & = \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \int_{Y_2} \int_{Y_1} H \big( x_2, (y_1,y_2) \big) f(y_1,y_2) \, d\nu_1(y_1) \, d \nu_2(y_2) \\ ({\scriptstyle{\text{Hölder}}}) & \leq \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \int_{Y_2} \|f(\bullet,y_2)\|_{\bd{L}^1 (\nu_1)} \mathop{\operatorname{ess~sup}}_{y_1 \in Y_1} H \big( x_2, (y_1,y_2) \big) \, d \nu_2(y_2) \\ & \leq \|f\|_{\bd{L}^{1,\infty}(\nu)} \cdot C_3 (K) < \infty. \end{align*} It remains to consider the case $q \in (1,\infty)$. Here, we see by repeated applications of Tonelli's theorem and Hölder's inequality for $f \in \bd{L}^{p,q}(\nu)$ and $h \in \bd{L}^{q'}(\mu_2)$ with $f, h \geq 0$ that \begin{align*} & \int_{X_2} \big\| (\Phi_K f)(\bullet, x_2) \big\|_{\bd{L}^1(\mu_1)} \cdot h(x_2) \, d \mu_2 (x_2) \\ ({\scriptstyle{\text{Tonelli}}}) & = \int_Y f(y) \int_{X_2} h(x_2) \cdot [H(x_2, y)]^{\frac{1}{q'} + \frac{1}{q}} \, d \mu_2 (x_2) \, d \nu(y) \\ ({\scriptstyle{\text{Hölder}}}) & \leq \int_Y f(y) \left[ \int_{X_2} [h(x_2)]^{q'} H(x_2, y) \, d \mu_2(x_2) \right]^{\frac{1}{q'}} \left[ \int_{X_2} H(x_2,y) \, d \mu_2(x_2) \right]^{\frac{1}{q}} \, d \nu(y) \\ ({\scriptstyle{\text{Tonelli}}}) & \leq [C_2 (K)]^{\frac{1}{q}} \int_{Y_2} \int_{Y_1} \!\! f(y_1, y_2) \left[ \int_{X_2} [h(x_2)]^{q'} H \big( x_2, (y_1, y_2) \big) \, d \mu_2(x_2) \right]^{\frac{1}{q'}} \!\!\! 
\, d \nu_1(y_1) \, d \nu_2(y_2) \\ ({\scriptstyle{\text{Eq.~}\eqref{eq:EsssupOfIntegral}, \, p = 1}}) & \leq [C_2 (K)]^{\frac{1}{q}} \! \int_{Y_2} \!\! \|f(\bullet,y_2)\|_{\bd{L}^{p}(\nu_1)} \! \left[ \! \int_{X_2} \!\! [h(x_2)]^{q'} \mathop{\operatorname{ess~sup}}_{y_1 \in Y_1} H \big( x_2, (y_1,y_2) \big) \, d \mu_2(x_2) \right]^{\!\frac{1}{q'}} \!\!\!\! d \nu_2(y_2) \\ ({\scriptstyle{\text{Hölder, Tonelli}}}) & \leq [C_2 (K)]^{\frac{1}{q}} \, \|f\|_{\bd{L}^{p,q}(\nu)} \left[ \int_{X_2} [h(x_2)]^{q'} \int_{Y_2} \! \mathop{\operatorname{ess~sup}}_{y_1 \in Y_1} H \big( x_2, (y_1,y_2) \big) \, d \nu_2(y_2) d \mu_2(x_2) \right]^{\frac{1}{q'}} \\ & \leq [C_2 (K)]^{\frac{1}{q}} \, \|f\|_{\bd{L}^{p,q}(\nu)} [C_3 (K)]^{\frac{1}{q'}} \left[ \int_{X_2} [h(x_2)]^{q'} d \mu_2(x_2) \right]^{\frac{1}{q'}} \\ & = [C_2(K)]^{\frac{1}{q}} [C_3(K)]^{\frac{1}{q'}} \|f\|_{\bd{L}^{p,q}(\nu)} \|h\|_{\bd{L}^{q'} (\mu_2)} < \infty. \end{align*} Using the characterization of the $\bd{L}^{q}(\mu_2)$-norm by duality (see \cite[Theorem~6.14]{FollandRA}), and recalling again that $p = 1$, this implies \( \|\Phi_K f\|_{\bd{L}^{p,q}(\mu)} \leq [C_2(K)]^{\frac{1}{q}} \, [C_3(K)]^{\frac{1}{q'}} \, \|f\|_{\bd{L}^{p,q}(\nu)} < \infty \). This completes the proof for the case $p =1$. {} \noindent \textbf{Step 3} \emph{(the case $p \in (1,\infty)$):} Define $r := \tfrac{q}{p}$ (with the understanding that $r = \infty$ if $q = \infty$), and note $r \in [1,\infty]$ since $p \leq q$. Let $f \in \bd{L}^{p,q}(\nu)$ with $f \geq 0$, and set $g := f^p$. A straightforward calculation shows that $g \in \bd{L}^{1,r}(\nu)$, with $\|g\|_{\bd{L}^{1,r}(\nu)} = \|f\|_{\bd{L}^{p,q}(\nu)}^{p}$. Furthermore, we see by Hölder's inequality and by definition of $C_1 (K)$ that \begin{align*} 0 \leq (\Phi_K f)(x) & = \int_{Y} [K(x,y)]^{\frac{1}{p'} + \frac{1}{p}} f(y) \, d \nu (y) \leq \biggl[ \int_Y K(x,y) \, d \nu(y) \biggr]^{\frac{1}{p'}} \biggl[ \int_Y K(x,y) g(y) \, d \nu(y) \biggr]^{\frac{1}{p}} \\ & \leq [C_1(K)]^{\frac{1}{p'}} \cdot [(\Phi_K \, g)(x)]^{\frac{1}{p}} \end{align*} for $\mu$-almost all $x \in X$. Therefore, by employing the result from Step~2, we see that \begin{align*} \|\Phi_K f\|_{\bd{L}^{p,q}(\mu)} & \leq [C_1(K)]^{\frac{1}{p'}} \cdot \Big\| x_2 \mapsto \big\| [(\Phi_K \, g) (\bullet, x_2)]^{1/p} \big\|_{\bd{L}^p(\mu_1)} \Big\|_{\bd{L}^q(\mu_2)} \\ & = [C_1(K)]^{\frac{1}{p'}} \cdot \Big\| x_2 \mapsto \big\| (\Phi_K \, g) (\bullet, x_2) \big\|_{\bd{L}^1(\mu_1)} \Big\|_{\bd{L}^r(\mu_2)}^{1/p} \\ & \leq [C_1(K)]^{\frac{1}{p'}} \cdot \Big[ \|g\|_{\bd{L}^{1,r}} \cdot \max_{1 \leq i \leq 3} C_i (K) \Big]^{1/p} \leq \|f\|_{\bd{L}^{p,q}(\nu)} \cdot \max_{1 \leq i \leq 3} C_i (K) < \infty. \qedhere \end{align*} \end{proof} For the proof of the case $p > q$, we will make use of the following duality result: \begin{lemma}\label{lem:AdjointBoundedness} Let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ and $(Y,\mathcal{G},\nu) = (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$ and assume that $\mu_1,\mu_2,\nu_1,\nu_2$ are $\sigma$-finite. Let $K : X \times Y \to [0,\infty]$ be measurable, and define the \emph{transposed kernel} as \begin{equation} K^T : Y \times X \to [0,\infty], (y,x) \mapsto K(x,y) . \label{eq:TransposedKernelDefinition} \end{equation} Let $p,q \in [1,\infty]$. Then $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is well-defined and bounded if and only if $\Phi_{K^T} : \bd{L}^{p',q'}(\mu) \to \bd{L}^{p',q'}(\nu)$ is. 
In this case, \( \vertiii{\Phi_K}_{\bd{L}^{p,q} \to \bd{L}^{p,q}} = \vertiii{\Phi_{K^T}}_{\bd{L}^{p',q'} \to \bd{L}^{p',q'}} . \) \end{lemma} The proof of the above result is based on the following characterization of the mixed $\bd{L}^{p,q}$-norm by duality. \begin{theorem}\label{thm:MixedNormDuality}(easy consequence of \cite[Theorem~2 in Section~2]{MixedLpSpaces}) Let $(Y_1, \mathcal{G}_1, \nu_1)$ and $(Y_2, \mathcal{G}_2, \nu_2)$ be $\sigma$-finite measure spaces. Let $f : Y_1 \times Y_2 \to \mathbb{C}$ be measurable, and let $p,q \in [1, \infty]$. Then \[ \|f\|_{\bd{L}^{p,q}(\nu)} = \sup_{\substack{g : Y_1 \times Y_2 \to [0,\infty)\\ \|g\|_{\bd{L}^{p',q'}} \leq 1}} \int_{Y} |f(y)| \cdot g(y) \, d \nu (y) . \] \end{theorem} \begin{proof}[Proof of Lemma~\ref{lem:AdjointBoundedness}] By symmetry (noting that $(K^T)^T = K$), it is enough to prove that if the operator ${\Phi_{K^T} : \bd{L}^{p',q'}(\mu) \to \bd{L}^{p',q'}(\nu)}$ is bounded, then so is $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$, with operator norm \( \vertiii{\Phi_K}_{\bd{L}^{p,q} \to \bd{L}^{p,q}} \leq \vertiii{\Phi_{K^T}}_{\bd{L}^{p',q'} \to \bd{L}^{p',q'}} . \) Furthermore, as explained at the beginning of Section~\ref{sub:MixedNormSchurSufficient}, it is enough to prove \( \| \Phi_K f \|_{\bd{L}^{p,q}(\mu)} \leq \vertiii{\Phi_{K^T}}_{\bd{L}^{p',q'} \to \bd{L}^{p',q'}} \cdot \| f \|_{\bd{L}^{p,q}(\nu)} \) for $f \in \bd{L}^{p,q}(\nu)$ with $f \geq 0$; here we use that $K$ is non-negative in Lemma~\ref{lem:AdjointBoundedness}. To see that the last inequality holds, let $f \in \bd{L}^{p,q}(\nu)$ and $g \in \bd{L}^{p',q'}(\mu)$ with $f,g \geq 0$. Tonelli's theorem (which is applicable since all involved functions are non-negative), combined with Hölder's inequality for the mixed-norm Lebesgue spaces (see \cite[Equation~(1) in Section~2]{MixedLpSpaces}) shows that \begin{equation} \begin{split} \int_X [\Phi_K \, f] (x) \cdot g(x) \, d \mu(x) & = \int_Y \int_X K^T(y,x) \, f(y) \, g(x) \, d \mu(x) \, d \nu(y) \\ & = \int_{Y} f(y) \cdot [\Phi_{K^T} \, g] (y) \, d \nu(y) \leq \|f\|_{\bd{L}^{p,q}(\nu)} \cdot \|\Phi_{\! K^T} \, g\|_{\bd{L}^{p',q'}(\nu)} \\ & \leq \vertiii{\Phi_{K^T}}_{\bd{L}^{p',q'} \to \bd{L}^{p',q'}} \cdot \|f\|_{\bd{L}^{p,q}(\nu)} \cdot \|g\|_{\bd{L}^{p',q'}(\mu)} < \infty . \end{split} \label{eq:TransposedKernelBoundednessProof} \end{equation} On the one hand, this implies \footnote{Otherwise, since $\mu_1,\mu_2$ are $\sigma$-finite, there would be sets $M \in \mathcal{F}$, as well as $A \in \mathcal{F}_1$, $B \in \mathcal{F}_2$ with $\mu_1(A) < \infty$ and $\mu_2(B) < \infty$ and such that $\mu(M \cap (A \times B)) > 0$ and $\Phi_K f \equiv \infty$ on $M \cap (A \times B)$. But then the left-hand side of the inequality \eqref{eq:TransposedKernelBoundednessProof} would be infinite for the choice $g := {\mathds{1}}_{M \cap (A \times B)}$, although $g \in \bd{L}^{p',q'}(\mu)$.} that $[\Phi_K f](x) < \infty$ for $\mu$-almost all $x \in X$. Then, using the dual characterization of the $\bd{L}^{p,q}$-norm from Theorem~\ref{thm:MixedNormDuality}, we see that $\Phi_K f \in \bd{L}^{p,q} (\mu)$, and furthermore \({ \|\Phi_K f\|_{\bd{L}^{p,q}(\mu)} \leq \vertiii{\Phi_{K^T}}_{\bd{L}^{p',q'} \to \bd{L}^{p',q'}} \cdot \|f\|_{\bd{L}^{p,q}(\nu)} }\). \end{proof} Finally, we will also use the following relation between the constants $C_i (K)$ from Equation~\eqref{eq:MixedNormSchurConstants} for the kernel $K$ and the constants $C_i (K^T)$ for the transposed kernel. 
\begin{lemma}\label{lem:SchurConstantsForAdjointKernel} Let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ and $(Y,\mathcal{G},\nu) = (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$ and assume that $\mu_1,\mu_2,\nu_1,\nu_2$ are $\sigma$-finite. Let $K : X \times Y \to [0,\infty]$ be measurable, and let the transposed kernel $K^T : Y \times X \to [0,\infty]$ be as in Equation~\eqref{eq:TransposedKernelDefinition}. Then the constants $C_i$ introduced in Equation~\eqref{eq:MixedNormSchurConstants} satisfy \[ C_1(K^T) = C_2 (K), \quad C_2 (K^T) = C_1 (K), \quad C_3 (K^T) = C_4(K), \quad \text{and} \quad C_4 (K^T) = C_3 (K). \] \end{lemma} \begin{proof} The assertion follows easily from the definitions. \end{proof} With this, we can now handle the case $p > q$. \begin{proposition}\label{prop:SchurTestMixedCase2} Let $\XIndexTuple{i}$ and $\YIndexTuple{i}$ be $\sigma$-finite measure spaces for $i \in \{ 1,2 \}$, and let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$ and $(Y,\mathcal{G},\nu) = (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$. Assume that $K : X \times Y \to \mathbb{C}$ is measurable, and that the constants $C_1(K)$, $C_2(K)$, and $C_4(K)$ introduced in Equation~\eqref{eq:MixedNormSchurConstants} are finite. Furthermore, let $p,q \in [1, \infty]$ with $p > q$. Then \[ \Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu) \] is well-defined and bounded (with absolute convergence of the defining integral for $\mu$-almost all $x \in X$), with \( \vertiii{\Phi_K}_{\bd{L}^{p,q} \to \bd{L}^{p,q}} \leq \max \big\{ C_1(K), C_2(K), C_4(K) \big\} . \) \end{proposition} \begin{proof} Let $L := |K|^T : Y \times X \to [0,\infty)$ denote the transposed kernel of the (pointwise) absolute value of $K$. By Lemma~\ref{lem:SchurConstantsForAdjointKernel} and the explanation at the beginning of Subsection~\ref{sub:MixedNormSchurSufficient}, we see that ${C_1 (L) = C_2(|K|) = C_2(K) < \infty}$, $C_2 (L) = C_1(|K|) = C_1(K) < \infty$, and finally also ${C_3(L) = C_4(|K|) = C_4 (K) < \infty}$. Furthermore, since $p^{-1} < q^{-1}$, we see that the conjugate exponents $p', q'$ satisfy $\tfrac{1}{p'} = 1 - p^{-1} > 1 - q^{-1} = \tfrac{1}{q'}$, and hence $p' < q'$. Therefore, Proposition~\ref{prop:SchurTestMixedCase1} shows that $\Phi_L : \bd{L}^{p',q'}(\mu) \to \bd{L}^{p',q'}(\nu)$ is well-defined and bounded, with \( \vertiii{\Phi_L}_{\bd{L}^{p',q'} \to \bd{L}^{p',q'}} \leq \max_{1 \leq i \leq 3} C_i (L) = \max\{ C_1(K), C_2(K), C_4(K) \} =: C . \) Since $L^T = |K|$, Lemma~\ref{lem:AdjointBoundedness} shows that $\Phi_{|K|} : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is bounded, with $\vertiii{\Phi_{|K|}}_{\bd{L}^{p,q} \to \bd{L}^{p,q}} \leq C$. Finally, the reasoning from the beginning of Subsection~\ref{sub:MixedNormSchurSufficient} shows that $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is bounded, with \( \vertiii{\Phi_K}_{\bd{L}^{p,q} \to \bd{L}^{p,q}} \leq \vertiii{\Phi_{|K|}}_{\bd{L}^{p,q} \to \bd{L}^{p,q}} \leq C , \) as claimed. \end{proof} By combining Propositions~\ref{prop:SchurTestMixedCase1} and \ref{prop:SchurTestMixedCase2}, we obtain Theorem~\ref{thm:SchurTestSufficientUnweighted}. \subsection{Necessity of the generalized Schur condition} \label{sub:MixedNormSchurNecessary} In this subsection, we prove Theorem~\ref{thm:SchurNecessity}. 
For this, we will need the following technical result, the proof of which we defer to Appendix~\ref{sec:CountableDualityCharacterization}. \begin{lemma}\label{lem:CountableDualityCharacterization} Let $(\Omega, \mathcal{C})$ be a measurable space, and let $(Y, \mathcal{G}, \nu) = (Y_1 \times Y_2, \mathcal{G}_1 \otimes \mathcal{G}_2, \nu_1 \otimes \nu_2)$, where $\nu_1, \nu_2$ are $\sigma$-finite measures. Finally, let $H : \Omega \times Y \to [0,\infty]$ be measurable. Then there is a countable family $(h_n)_{n \in \mathbb{N}}$ of measurable functions $h_n : Y \to [0,\infty)$ such that $\| h_n \|_{\bd{L}^{1,\infty}(\nu)} \leq 1$, $h_n \in \bd{L}^1(\nu)$, and \[ \| H(\omega, \bullet) \|_{\bd{L}^{\infty,1}(\nu)} = \sup_{n \in \mathbb{N}} \int_{Y} H(\omega, y) \cdot h_n (y) \, d \nu (y) \qquad \forall \, \omega \in \Omega. \] \end{lemma} \begin{rem*} The claim is non-trivial, since usually neither of the spaces $\bd{L}^{\infty,1} (\nu)$ or $\bd{L}^{1,\infty} (\nu)$ is separable. Furthermore, it should be noted that \emph{the countability of the family $(h_n)_{n \in \mathbb{N}}$ is crucial for our purposes}. Indeed, for each $\omega \in \Omega$ and $k \in \mathbb{N}$, it follows by the dual characterization of the mixed Lebesgue norm (see Theorem~\ref{thm:MixedNormDuality}) that there is a function $h_{\omega,k} \geq 0$ satisfying $\| h_{\omega,k} \|_{\bd{L}^{1,\infty} (\nu)} \leq 1$ and \( \int_Y H(\omega,y) h_{\omega,k}(y) \, d \nu(y) \geq (1-k^{-1}) \| H(\omega,\bullet) \|_{\bd{L}^{\infty,1}(\nu)} . \) Yet, in the proof of Part~(3) of Theorem~\ref{thm:SchurNecessity}, we will have to introduce an exceptional null-set $N_n \subset \Omega$ for each function $h_n$, where in the setting of the proof, $\Omega = X_2$ is equipped with a measure. Since the family $(h_n)_{n \in \mathbb{N}}$ is countable, we know that $\bigcup_{n \in \mathbb{N}} N_n$ is still a null-set. If instead of the $h_n$ we would use the \emph{uncountable} family $(h_{\omega,k})_{\omega \in \Omega, k \in \mathbb{N}}$, it could happen that the union of the exceptional null-sets $N_{\omega,k}$ is no longer a null-set---in fact, these sets could cover all of $\Omega$. \end{rem*} \begin{proof}[Proof of Theorem~\ref{thm:SchurNecessity}] \textbf{Ad (1):} Define $H : Y \to [0,\infty], y \mapsto \int_X K(x,y) \, d \mu(x)$ and furthermore ${C := \vertiii{\Phi_K}_{\bd{L}^1 \to \bd{L}^1}}$. For arbitrary $f \in \bd{L}^1 (\nu)$ with $f \geq 0$, we see by Tonelli's theorem that \[ \int_Y H(y) \cdot f(y) \, d \nu(y) = \int_{X} \int_Y K(x,y) \, f(y) \, d \nu(y) \, d \mu(x) = \|\Phi_K \, f\|_{\bd{L}^1 (\mu)} \leq C \cdot \|f\|_{\bd{L}^1 (\nu)} . \] In view of the dual characterization of the $\bd{L}^\infty$-norm (see \cite[Theorem~6.14]{FollandRA}), the preceding estimate implies that $C_2 (K) = \|H\|_{\bd{L}^\infty (\nu)} \leq C < \infty$, as desired. Note that it is enough to consider only non-negative functions for the dual characterization of the $\bd{L}^\infty$-norm of $H$, since $H \geq 0$. Also recall that $\nu$ is $\sigma$-finite, so that the dual characterization is indeed applicable. {} \noindent \textbf{Ad (2):} Let $f : Y \to [0,\infty), y \mapsto 1$, and note $f \in \bd{L}^\infty(\nu)$. We then get \[ \vertiii{\Phi_K}_{\bd{L}^\infty \to \bd{L}^\infty} \geq \vertiii{\Phi_K}_{\bd{L}^\infty \to \bd{L}^\infty} \cdot \|f\|_{\bd{L}^\infty} \geq \|\Phi_K \, f\|_{\bd{L}^\infty (\mu)} = \mathop{\operatorname{ess~sup}}_{x \in X} \int_Y K(x,y) \, d \nu(y) = C_1 (K). 
\] {} \noindent \textbf{Ad (3):} Let $C := \vertiii{\Phi_K}_{\bd{L}^{1,\infty} \to \bd{L}^{1,\infty}}$ and \( H : X_2 \times Y \! \to [0,\infty], (x_2, y) \mapsto \! \int_{X_1} \! K \big( (x_1,x_2) ,y \big) \, d \mu_1(x_1) . \) By Tonelli's theorem, $H$ is $\mathcal{F}_2 \otimes \mathcal{G}$-measurable. Thus, Lemma~\ref{lem:CountableDualityCharacterization} yields a sequence $(h_n)_{n \in \mathbb{N}}$ of measurable functions $h_n : Y \to [0,\infty)$ with $\| h_n \|_{\bd{L}^{1,\infty}(\nu)} \leq 1$ and such that \begin{equation} \| H(x_2, \bullet) \|_{\bd{L}^{\infty,1} (\nu)} = \sup_{n \in \mathbb{N}} \int_Y H(x_2, y) \cdot h_n (y) \, d \nu (y) \qquad \forall \, x_2 \in X_2 . \label{eq:SchurNecessityCountableCharacterization} \end{equation} For each $n \in \mathbb{N}$, there is a $\mu_2$-null-set $N_n \subset X_2$ with \({ \| (\Phi_K \, h_n) (\bullet,x_2) \|_{\bd{L}^1(\mu_1)} \leq \! \| \Phi_K \, h_n \|_{\bd{L}^{1,\infty} (\mu)} \leq C }\) for all $x_2 \in X_2 \setminus N_n$. Define $N := \bigcup_{n \in \mathbb{N}} N_n$. For $x_2 \in X_2 \setminus N$, we then see by definition of $H$ and by Tonelli's theorem that \[ \int_Y \! H(x_2,y) \cdot h_n (y) \, d \nu (y) \!=\! \int_{X_1} \! \int_Y \! K \big( (x_1,x_2) , y \big) h_n (y) \, d \nu(y) \, d \mu_1 (x_1) = \| (\Phi_K \, h_n) (\bullet, x_2) \|_{\bd{L}^1 (\mu_1)} \!\leq\! C . \] In view of Equation~\eqref{eq:SchurNecessityCountableCharacterization}, this implies $\| H(x_2, \bullet) \|_{\bd{L}^{\infty,1}(\nu)} \leq C$ for all $x_2 \in X_2 \setminus N$. Directly from the definitions of $H$ and of $C_3(K)$, we see that this implies $C_3(K) \leq C = \vertiii{\Phi_K}_{\bd{L}^{1,\infty} \to \bd{L}^{1,\infty}} < \infty$, as claimed. {} \noindent \textbf{Ad (4):} Let $K^T$ denote the transposed kernel of $K$. By Lemma~\ref{lem:AdjointBoundedness}, $\Phi_{K^T} : \bd{L}^{1,\infty} (\mu) \to \bd{L}^{1,\infty}(\nu)$ is bounded, with \( \vertiii{\Phi_{K^T}}_{\bd{L}^{1,\infty} \to \bd{L}^{1,\infty}} \leq \vertiii{\Phi_{K}}_{\bd{L}^{\infty,1} \to \bd{L}^{\infty,1}} =: C < \infty. \) By Part~(3) (applied to $K^T$, with interchanged roles of $\mu$ and $\nu$), this implies $C_3(K^T) \leq C$. Finally, Lemma~\ref{lem:SchurConstantsForAdjointKernel} shows that $C_4(K) = C_3(K^T) \leq C < \infty$, as claimed. \end{proof} \section{Proofs for the properties of the kernel modules \texorpdfstring{$\mathcal B_m(X,Y)$}{𝓑ₘ(X,Y)}} \label{sec:KernelModulePropertiesProof} In this section, we prove the properties of the kernel module $\mathcal B_m (X,Y)$ that are stated in Propositions~\ref{prop:NewKernelModuleBasicProperties1}--\ref{prop:NewKernelModuleBasicProperties3}. \begin{proof}[Proof of Proposition~\ref{prop:NewKernelModuleBasicProperties1}] \textbf{Ad (\ref{enu:NewKernelModuleSolid}):} This is an immediate consequence of the definitions, once one notes that if $|L| \leq |K|$ holds $\mu \otimes \nu$-almost everywhere, then Tonelli's theorem shows that for $\mu_2 \otimes \nu_2$-almost every $(x_2, y_2) \in X_2 \times Y_2$, we have $|L^{(x_2,y_2)}| \leq |K^{(x_2,y_2)}|$ $\mu_1 \otimes \nu_1$-almost everywhere. {} \noindent \textbf{Ad (\ref{enu:NewKernelModuleFatouProperty}):} We first prove that $\|\bullet\|_{\mathcal A(X,Y)}$ satisfies the Fatou property, for \emph{arbitrary} $\sigma$-finite measure spaces $(X,\mathcal{F},\mu), (Y,\mathcal{G},\nu)$ (not necessarily of product structure).
Indeed, if a sequence of kernels ${K_n : X \times Y \to [0,\infty]}$ satisfies $K_n \nearrow K$ pointwise, then the monotone convergence theorem shows that \( \| K_n (x,\bullet) \|_{\bd{L}^1(\nu)} \nearrow \| K (x,\bullet) \|_{\bd{L}^1(\nu)} \) and \( \| K_n (\bullet,y) \|_{\bd{L}^1(\mu)} \nearrow \| K (\bullet,y) \|_{\bd{L}^1(\mu)} . \) Next, it is easy to see that if $0 \leq G_n \nearrow G$, then $\| G_n \|_{\bd{L}^\infty} \nearrow \| G \|_{\bd{L}^\infty}$. If we combine this with the preceding observations, we see \( \mathop{\operatorname{ess~sup}}_{x \in X} \| K_n (x,\bullet) \|_{\bd{L}^1(\nu)} \nearrow \mathop{\operatorname{ess~sup}}_{x \in X} \| K (x,\bullet) \|_{\bd{L}^1(\nu)} \) and \( \mathop{\operatorname{ess~sup}}_{y \in Y} \| K_n (\bullet,y) \|_{\bd{L}^1(\mu)} \nearrow \mathop{\operatorname{ess~sup}}_{y \in Y} \| K (\bullet,y) \|_{\bd{L}^1(\mu)} . \) Recalling the definition of $\| \bullet \|_{\mathcal A}$, this implies $\| K_n \|_{\mathcal A} \nearrow \| K \|_{\mathcal A}$. Now we prove the actual claim, first for the unweighted case. Clearly, $K_n^{(x_2,y_2)} \nearrow K^{(x_2,y_2)}$. Therefore, the preceding considerations show that \[ \Gamma_n (x_2,y_2) := \| K_n^{(x_2,y_2)} \|_{\mathcal A (X_1,Y_1)} \nearrow \| K^{(x_2,y_2)} \|_{\mathcal A (X_1,Y_1)} =: \Gamma(x_2,y_2). \] Applying the Fatou property for $\|\bullet\|_{\mathcal A}$, we see \( \| K_n \|_{\mathcal B} = \| \Gamma_n \|_{\mathcal A (X_2,Y_2)} \nearrow \| \Gamma \|_{\mathcal A (X_2,Y_2)} = \| K \|_{\mathcal B} . \) Finally, for the weighted case, note that $\| K_n \|_{\mathcal B_m} = \| m \cdot K_n \|_{\mathcal B} \nearrow \| m \cdot K \|_{\mathcal B} = \| K \|_{\mathcal B_m}$, since $m \cdot K_n \nearrow m \cdot K$. {} \noindent \textbf{Ad (\ref{enu:NewKernelModuleComplete}):} It is not hard to see that $\| \bullet \|_{\mathcal B_m}$ is a function-norm in the sense of Zaanen (see \cite[Section 63]{ZaanenIntegration}); that is, for $K,L : X \times Y \to [0,\infty]$ measurable and $\alpha \in [0,\infty)$, the following hold: \begin{align*} & \| K \|_{\mathcal B_m} = 0 \quad \Longleftrightarrow \quad K = 0 \text{ almost everywhere}, \\ & \| K + L \|_{\mathcal B_m} \leq \| K \|_{\mathcal B_m} + \| L \|_{\mathcal B_m} \qquad \text{and} \qquad \| \alpha K \|_{\mathcal B_m} = \alpha \, \| K \|_{\mathcal B_m}, \\ \text{as well as} \quad & \| K \|_{\mathcal B_m} \leq \| L \|_{\mathcal B_m} \text{ if } K \leq L \text{ almost everywhere}. \end{align*} Since we just showed that $\| \bullet \|_{\mathcal B_m}$ also satisfies the Fatou property, and since $\| K \|_{\mathcal B_m} = \| \,|K|\, \|_{\mathcal B_m}$ for $K : X \times Y \to \mathbb{C}$ measurable, it follows from the theory of normed Köthe spaces that $\big( \mathcal B_m(X,Y), \| \bullet \|_{\mathcal B_m} \big)$ is indeed a Banach space; see \cite[Section~65, Theorem~1]{ZaanenIntegration}. {} \noindent \textbf{Ad (\ref{enu:NewKernelModuleEmbedsInOld}):} We start with the unweighted case. On the one hand, we have for each $x_2 \in X_2$ that \begin{align*} \mathop{\operatorname{ess~sup}}_{x_1 \in X_1} \int_Y |K \big( (x_1,x_2),y \big)| \, d \nu(y) & = \mathop{\operatorname{ess~sup}}_{x_1 \in X_1} \int_{Y_2} \| K^{(x_2,y_2)} (x_1, \bullet) \|_{\bd{L}^1(\nu_1)} \, d \nu_2 (y_2) \\ ({\scriptstyle{\text{Eq. } \eqref{eq:EsssupOfIntegral}}}) & \leq \int_{Y_2} \mathop{\operatorname{ess~sup}}_{x_1 \in X_1} \| K^{(x_2,y_2)} (x_1, \bullet) \|_{\bd{L}^1(\nu_1)} \, d \nu_2(y_2) \\ & \leq \int_{Y_2} \| K^{(x_2,y_2)} \|_{\mathcal A} \, d \nu_2 (y_2) . 
\end{align*} By definition of $\| \bullet \|_{\mathcal B}$, we have $\int_{Y_2} \| K^{(x_2,y_2)} \|_{\mathcal A} \, d \nu_2 (y_2) \leq \| K \|_{\mathcal B}$ for almost all $x_2 \in X_2$. Overall, we thus see \[ C_1(K) = \mathop{\operatorname{ess~sup}}_{x \in X} \int_Y |K(x,y)| \, d \nu(y) \leq \| K \|_{\mathcal B}. \] Now, combine the identity $C_2(K) = C_1(K^T)$ from Lemma~\ref{lem:SchurConstantsForAdjointKernel} with Part~(\ref{enu:NewKernelModuleAdjoint}) of Proposition~\ref{prop:NewKernelModuleBasicProperties2} (which will be proven independently) to get \({ C_2(K) = C_1(K^T) \leq \| K^T \|_{\mathcal B(Y,X)} = \| K \|_{\mathcal B(X,Y)} . }\) But directly from the definition of $\| \bullet \|_{\mathcal A}$, we see $\| K \|_{\mathcal A(X,Y)} = \max \{ C_1(K), C_2(K) \} \leq \| K \|_{\mathcal B(X,Y)}$. The weighted case is now a direct consequence of the definitions. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:NewKernelModuleBasicProperties2}] \textbf{Ad (\ref{enu:NewKernelModuleAdjoint}):} We start with the unweighted case. By definition, we see $\| L^T \|_{\mathcal A(W,V)} = \| L \|_{\mathcal A(V,W)}$ for arbitrary ($\sigma$-finite) measure spaces $(V,\mathcal{V},\gamma)$ and $(W,\mathcal{W},\theta)$, and any measurable function $L : V \times W \to \mathbb{C}$. Also, it is easy to see that ${(K^T)^{(y_2,x_2)} = (K^{(x_2,y_2)})^T}$. By combining these observations, we see \[ \Psi(y_2,x_2) := \big\| (K^T)^{(y_2,x_2)} \big\|_{\mathcal A(Y_1,X_1)} = \big\| (K^{(x_2,y_2)})^T \big\|_{\mathcal A(Y_1,X_1)} = \big\| K^{(x_2,y_2)} \big\|_{\mathcal A(X_1,Y_1)} =: \Gamma(x_2,y_2). \] In other words, $\Psi = \Gamma^T$. This finally implies $K^T \in \mathcal B (Y,X)$, since \[ \| K^T \|_{\mathcal B(Y,X)} = \| \Psi \|_{\mathcal A(Y_2,X_2)} = \| \Gamma^T \|_{\mathcal A(Y_2,X_2)} = \| \Gamma \|_{\mathcal A(X_2,Y_2)} = \| K \|_{\mathcal B(X,Y)} < \infty. \] For the weighted case, note that $K^T \in \mathcal B_{m^T}(Y,X)$ if $m^T K^T = (m \, K)^T \in \mathcal B(Y,X)$, which holds by the unweighted case, since $m \, K \in \mathcal B(X,Y)$. Finally, we also see \[ \| K^T \|_{\mathcal B_{m^T}(Y,X)} = \| m^T \, K^T \|_{\mathcal B(Y,X)} = \| (m \, K)^T \|_{\mathcal B(Y,X)} = \| m \, K \|_{\mathcal B(X,Y)} = \| K \|_{\mathcal B_m(X,Y)}. \] {} \noindent \textbf{Ad (\ref{enu:NewKernelModuleMultiplicationProperty}):} We start with the unweighted case and with non-negative kernels ${K : X \times Y \to [0,\infty]}$ and $L : Y \times Z \to [0,\infty]$. For brevity, let us define $K_0 (x_2,y_2) := \| K^{(x_2,y_2)} \|_{\mathcal A (X_1,Y_1)}$ and $L_0(y_2,z_2) := \| L^{(y_2,z_2)} \|_{\mathcal A (Y_1 , Z_1)}$. By definition of the product $K \odot L$, by Tonelli's theorem, and since $\| L^{(y_2,z_2)} (y_1, \bullet) \|_{\bd{L}^1(\varrho_1)} \leq L_0(y_2,z_2)$ for almost every $y_1 \in Y_1$, we see for all $(x_2, z_2) \in X_2 \times Z_2$ that \begin{align*} & \mathop{\operatorname{ess~sup}}_{x_1 \in X_1} \big\| (K \odot L)^{(x_2,z_2)} (x_1,\bullet) \big\|_{\bd{L}^1(\varrho_1)} \\ ({\scriptstyle{\text{Definitions, Tonelli}}}) & = \mathop{\operatorname{ess~sup}}_{x_1 \in X_1} \int_{Y_2} \int_{Y_1} K^{(x_2,y_2)} (x_1,y_1) \cdot \| L^{(y_2,z_2)} (y_1, \bullet) \|_{\bd{L}^1(\varrho_1)} \, d \nu_1 (y_1) \, d \nu_2(y_2) \\ ({\scriptstyle{\text{Eq. } \eqref{eq:EsssupOfIntegral}, \text{ Def.~of } L_0}}) & \leq \int_{Y_2} L_0(y_2,z_2) \cdot \mathop{\operatorname{ess~sup}}_{x_1 \in X_1} \| K^{(x_2,y_2)}(x_1, \bullet) \|_{\bd{L}^1(\nu_1)} \, d \nu_2(y_2) \\ ({\scriptstyle{\text{Def. 
of } K_0}}) & \leq \int_{Y_2} L_0(y_2, z_2) \cdot K_0(x_2,y_2) \, d \nu_2 (y_2) . \end{align*} Using an almost identical calculation, we see for every $(x_2, z_2) \in X_2 \times Z_2$ that \[ \mathop{\operatorname{ess~sup}}_{z_1 \in Z_1} \big\| (K \odot L)^{(x_2,z_2)} (\bullet,z_1) \big\|_{\bd{L}^1 (\mu_1)} \leq \int_{Y_2} K_0(x_2, y_2) \cdot L_0(y_2, z_2) \, d \nu_2(y_2) . \] By definition of $\| \cdot \|_{\mathcal A(X_1,Z_1)}$, we have thus shown \[ \Gamma(x_2,z_2) := \| (K \odot L)^{(x_2,z_2)} \|_{\mathcal A(X_1,Z_1)} \leq \int_{Y_2} K_0(x_2, y_2) \cdot L_0(y_2, z_2) \, d \nu_2(y_2) . \] By another application of Tonelli's theorem, and since $\| L_0(y_2, \bullet) \|_{\bd{L}^1(\varrho_2)} \leq \| L_0 \|_{\mathcal A} = \| L \|_{\mathcal B}$ for almost all $y_2 \in Y_2$, this implies \begin{align*} \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \| \Gamma (x_2, \bullet) \|_{\bd{L}^1(\varrho_2)} & \leq \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \int_{Y_2} K_0 (x_2,y_2) \cdot \| L_0(y_2, \bullet) \|_{\bd{L}^1(\varrho_2)} \, d \nu_2 (y_2) \\ & \leq \| L \|_{\mathcal B} \cdot \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \| K_0 (x_2, \bullet) \|_{\bd{L}^1(\nu_2)} \leq \| L \|_{\mathcal B} \cdot \| K_0 \|_{\mathcal A} = \| L \|_{\mathcal B} \cdot \| K \|_{\mathcal B} . \end{align*} Using similar arguments, we also see \( \mathop{\operatorname{ess~sup}}_{z_2 \in Z_2} \| \Gamma (\bullet, z_2) \|_{\bd{L}^1(\mu_2)} \leq \| K \|_{\mathcal B} \cdot \| L \|_{\mathcal B} . \) By definition of $\Gamma$ and of $\| \Gamma \|_{\mathcal A}$, we have thus shown \( \| K \odot L \|_{\mathcal B} = \| \Gamma \|_{\mathcal A} \leq \| K \|_{\mathcal B} \cdot \| L \|_{\mathcal B} < \infty , \) which completes the proof for the unweighted case and non-negative kernels. Note in particular that we get $(K \odot L) (x,z) < \infty$ for almost all $(x,z) \in X \times Z$, since $\| K \odot L \|_{\mathcal B}$ is finite. Finally, we consider complex-valued kernels including weights. Since ${K_0 := \omega \cdot |K| \in \mathcal B (X,Y)}$ and $L_0 := \sigma \cdot |L| \in \mathcal B (Y,Z)$ are non-negative, the above considerations imply $K_0 \odot L_0 \in \mathcal B (X,Z)$ and \( \| K_0 \odot L_0 \|_{\mathcal B} \leq \| K_0 \|_{\mathcal B} \cdot \| L_0 \|_{\mathcal B} = \| K \|_{\mathcal B_\omega} \cdot \| L \|_{\mathcal B_\sigma}. \) Since $\tau(x,z) \leq C \cdot \omega(x,y) \cdot \sigma(y,z)$, we have $\tau(x,z) \cdot |K(x,y)| \cdot |L(y,z)| \leq C \cdot K_0(x,y) \cdot L_0(y,z)$, and thus \begin{align*} \tau(x,z) \cdot |K \odot L (x,z)| & \leq \tau(x,z) \cdot \int_Y |K(x,y)| \cdot |L(y,z)| \, d \nu(y) \\ & \leq C \cdot \int_Y K_0(x,y) \cdot L_0(y,z) \, d \nu(y) = C \cdot (K_0 \odot L_0) (x,z) < \infty \end{align*} for almost all $(x,z) \in X \times Z$. Finally, we see by solidity of $\mathcal B(X,Z)$ that $K \odot L \in \mathcal B_\tau (X,Z)$ with \( \| K \odot L \|_{\mathcal B_\tau} = \| \tau \cdot (K \odot L) \|_{\mathcal B} \leq C \cdot \| K_0 \odot L_0 \|_{\mathcal B} \leq C \cdot \| K \|_{\mathcal B_\omega} \cdot \| L \|_{\mathcal B_\sigma}. \) \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:NewKernelModuleBasicProperties3}] We again start with the unweighted case $v,w,m \equiv 1$ and $C = 1$. By Theorem~\ref{thm:SchurTestSufficientUnweighted}, it suffices to prove $C_i (K) \leq \| K \|_{\mathcal B(X,Y)}$ for $i \in \{1,2,3,4\}$, with $C_i(K)$ as defined in Equation~\eqref{eq:MixedNormSchurConstants}. 
The cases $i \in \{1,2\}$ were already handled in the proof of Part~(\ref{enu:NewKernelModuleEmbedsInOld}) of Proposition~\ref{prop:NewKernelModuleBasicProperties1}. Furthermore, once we show $C_3(K) \leq \| K \|_{\mathcal B(X,Y)}$, a combination of Lemma~\ref{lem:SchurConstantsForAdjointKernel} and Part~(\ref{enu:NewKernelModuleAdjoint}) of Proposition~\ref{prop:NewKernelModuleBasicProperties2} will show that $C_4(K) = C_3(K^T) \leq \| K^T \|_{\mathcal B(Y,X)} = \| K \|_{\mathcal B(X,Y)}$, so that it suffices to consider the case $i = 3$. Let $\Gamma(x_2,y_2) := \| K^{(x_2,y_2)} \|_{\mathcal A(X_1,Y_1)}$, and note $\Gamma(x_2,y_2) \geq \mathop{\operatorname{ess~sup}}_{y_1 \in Y_1} \| K^{(x_2,y_2)}(\bullet,y_1) \|_{\bd{L}^1(\mu_1)}$, as well as $\| K \|_{\mathcal B} = \| \Gamma \|_{\mathcal A}$. By definition of $C_3(K)$, this implies as desired that \begin{align*} C_3(K) & = \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \int_{Y_2} \mathop{\operatorname{ess~sup}}_{y_1 \in Y_1} \| K^{(x_2,y_2)}(\bullet, y_1) \|_{\bd{L}^{1}(\mu_1)} \, d \nu_2 (y_2) \\ & \leq \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \| \Gamma(x_2, \bullet) \|_{\bd{L}^1(\nu_2)} \leq \| \Gamma \|_{\mathcal A} = \| K \|_{\mathcal B} . \end{align*} Finally, we handle the weighted case. As noted in the discussion around Equation~\eqref{eq:WeightedKernelDefinition}, if we define $K_{v,w} : X \times Y \to \mathbb{C}, (x,y) \mapsto \frac{v(x)}{w(y)} \cdot K(x,y)$, then $\Phi_K : \bd{L}^{p,q}_w (\nu) \to \bd{L}^{p,q}_v (\mu)$ is well-defined and bounded if and only if $\Phi_{K_{v,w}} : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is, and in this case we have \( \| \Phi_K \|_{\bd{L}^{p,q}_w (\nu) \to \bd{L}^{p,q}_v (\mu)} = \| \Phi_{K_{v,w}} \|_{\bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)} . \) Next, by our assumptions on $v,w,m$, and by the unweighted case, we see that $\Phi_{K_{v,w}} : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is indeed well-defined and bounded, with \[ \| \Phi_{K_{v,w}} \|_{\bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)} \leq \| K_{v,w} \|_{\mathcal B} \vphantom{\overset{a}{\leq}} \,\smash{\overset{(\dagger)}{\leq}}\, C \cdot \| m \cdot K \|_{\mathcal B} = C \cdot \| K \|_{\mathcal B_m} . \] Here, the step marked with $(\dagger)$ used that $|K_{v,w}(x,y)| = \big| \frac{v(x)}{w(y)} \cdot K(x,y) \big| \leq C \cdot |(m \cdot K)(x,y)|$. \end{proof} \section{\texorpdfstring{Spaces compatible with the kernel modules $\mathcal B_m$} {Spaces compatible with the new kernel modules}} \label{sec:StructureOfBBmCompatibleSpaces} In this section, we prove the necessary conditions for spaces compatible with $\mathcal B_m$ that we stated in Section~\ref{sub:RestrictionsOnSolidSpaces}. The proof itself is presented in Section~\ref{sub:StructureOfBBmCompatibleSpaces}, preceded by a \emph{richness result} for the space $\mathcal B_m (X,Y)$ which is given in Section~\ref{sub:KernelModuleRichnessResults} below. This result, which provides sufficient conditions on $f : X \to \mathbb{C}$ and $g : Y \to \mathbb{C}$ which guarantee that the \emph{tensor product} ${f \otimes g : X \times Y \to \mathbb{C}, (x,y) \mapsto f(x) \cdot g(y)}$ belongs to $\mathcal B_m (X,Y)$, will be useful for proving Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces} in Section~\ref{sub:StructureOfBBmCompatibleSpaces}. 
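\begin{rem*} For orientation, we record what the norm $\| \bullet \|_{\mathcal B}$ gives for the simplest tensor products. In the unweighted case $m \equiv 1$, if $A_i \subset X_i$ and $B_i \subset Y_i$ are measurable sets with $0 < \mu_i (A_i) < \infty$ and $0 < \nu_i (B_i) < \infty$ for $i \in \{ 1,2 \}$, then a direct computation with the definitions (along the lines of the proof of Lemma~\ref{lem:KernelModuleRichnessResult} below) shows that \[ \big\| {\mathds{1}}_{A_1 \times A_2} \otimes {\mathds{1}}_{B_1 \times B_2} \big\|_{\mathcal B (X,Y)} = \max \big\{ \mu_1(A_1), \, \nu_1(B_1) \big\} \cdot \max \big\{ \mu_2(A_2), \, \nu_2(B_2) \big\} ; \] that is, the $(X_1,Y_1)$-components and the $(X_2,Y_2)$-components contribute separate factors. \end{rem*}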
\subsection{A richness result for the kernel modules \texorpdfstring{$\mathcal B_m (X,Y)$}{𝓑ₘ(X,Y)}} \label{sub:KernelModuleRichnessResults} Before we state our richness result, we fix the following notation for the spaces introduced in Definition~\ref{def:SumAndIntersectionSpaces}: \begin{equation}\label{eq:IntersectionSpaceSymbol} \mathscr{G} := \mathscr{G} (\mu) := \bd{L}^1 (\mu) \cap \bd{L}^\infty(\mu) \cap \bd{L}^{1,\infty}(\mu) \cap \bd{L}^{\infty,1}(\mu), \quad \text{with}\quad \| \bullet \|_{\mathscr{G}} := \| \bullet \|_{\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}}. \end{equation} Likewise, $\mathscr{H} := \mathscr{H}(\mu)$, where \begin{equation}\label{eq:SumSpaceSymbol} \mathscr{H} (\mu) := \bd{L}^1 (\mu) + \bd{L}^\infty (\mu) + \bd{L}^{1,\infty} (\mu) + \bd{L}^{\infty,1} (\mu), \quad \text{with}\quad \| \bullet \|_{\mathscr{H}} := \| \bullet \|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}. \end{equation} Given a measurable function $w : X \to (0,\infty)$, the weighted spaces $\mathscr{G}_w$ and $\mathscr{H}_w$ are defined as in Equation~\eqref{eq:WeightedSpaceDefinition}. This notation will allow for a succinct formulation of the following results. \begin{rem*} We will see in Theorem~\ref{thm:DualOfIntersection} that $\| \bullet \|_{\mathscr{H}}$ is indeed a norm, and that with this norm $\mathscr{H}$ becomes a Banach space. \end{rem*} Having introduced the proper notation, we can now state and prove the announced richness result for the kernel module $\mathcal B_m (X,Y)$. \begin{lemma}\label{lem:KernelModuleRichnessResult} With notation and assumptions as in Definition~\ref{def:NewKernelModule}, let $m : X \times Y \to (0,\infty)$, $v : X \to (0,\infty)$, and $w : Y \to (0,\infty)$ be measurable and such that \( m(x,y) \leq C \cdot {v}(x) \cdot {w}(y) \) for all $x \in X$ and $y \in Y$ and some $C > 0$. If $f \in \mathscr{G}_{v}(\mu)$ and $g \in \mathscr{G}_{w}(\nu)$, then $f \otimes g \in \mathcal B_m (X,Y)$ with \( \| f \otimes g \|_{\mathcal B_m} \leq 2C \, \| f \|_{\mathscr{G}_{v}} \cdot \| g \|_{\mathscr{G}_{w}}. \) Here, \( f \otimes g : X \times Y \to \mathbb{C}, (x,y) \mapsto f(x) \cdot g(y) . \) \end{lemma} \begin{proof} We first consider the case where $m,{v},{w} \equiv 1$ and $C = 1$. Directly from the definitions, we see for arbitrary $(x_2,y_2) \in X_2 \times Y_2$ that \begin{align*} & \mathop{\operatorname{ess~sup}}_{x_1 \in X_1} \big\| (f \otimes g)^{(x_2,y_2)} (x_1, \bullet) \big\|_{\bd{L}^1 (\nu_1)} = \| f(\bullet,x_2) \|_{\bd{L}^\infty} \cdot \| g(\bullet,y_2) \|_{\bd{L}^1} \\ \quad \text{and} \quad & \mathop{\operatorname{ess~sup}}_{y_1 \in Y_1} \big\| (f \otimes g)^{(x_2,y_2)} (\bullet, y_1) \big\|_{\bd{L}^1 (\mu_1)} = \| f(\bullet, x_2) \|_{\bd{L}^1} \cdot \| g(\bullet, y_2) \|_{\bd{L}^\infty} .
\end{align*} Therefore, \[ \Gamma(x_2,y_2) := \| (f \otimes g)^{(x_2,y_2)} \|_{\mathcal A (X_1 , Y_1)} \leq \| f(\bullet,x_2) \|_{\bd{L}^\infty} \cdot \| g(\bullet,y_2) \|_{\bd{L}^1} + \| f(\bullet, x_2) \|_{\bd{L}^1} \cdot \| g(\bullet, y_2) \|_{\bd{L}^\infty} , \] which easily implies \begin{align*} & \mathop{\operatorname{ess~sup}}_{x_2 \in X_2} \big\| \Gamma(x_2,\bullet) \big\|_{\bd{L}^1(\nu_2)} \leq \| f \|_{\bd{L}^\infty} \, \| g \|_{\bd{L}^1} + \| f \|_{\bd{L}^{1,\infty}} \, \| g \|_{\bd{L}^{\infty,1}} \\ \quad \text{and} \quad & \mathop{\operatorname{ess~sup}}_{y_2 \in Y_2} \big\| \Gamma(\bullet,y_2) \big\|_{\bd{L}^1(\mu_2)} \leq \| f \|_{\bd{L}^{\infty,1}} \, \| g \|_{\bd{L}^{1,\infty}} + \| f \|_{\bd{L}^{1}} \, \| g \|_{\bd{L}^\infty} . \end{align*} Overall, this implies \( \| f \otimes g \|_{\mathcal B} = \| \Gamma \|_{\mathcal A} \leq 2 \, \| f \|_{\mathscr{G}(\mu)} \, \| g \|_{\mathscr{G} (\nu)} . \) It remains to consider the case including the weights. But by assumption on the weights, we have $|m \cdot (f \otimes g)| \leq C \cdot ({v} \cdot |f|) \otimes ({w} \cdot |g|)$, where ${v} \cdot |f| \in \mathscr{G}(\mu)$ and ${w} \cdot |g| \in \mathscr{G}(\nu)$. By the unweighted case, this implies $m \cdot (f \otimes g) \in \mathcal B$. In other words, $f \otimes g \in \mathcal B_m$, and \[ \| f \otimes g \|_{\mathcal B_m} = \big\| m \cdot (f \otimes g) \big\|_{\mathcal B} \leq C \cdot \big\| ({v} \cdot |f|) \otimes ({w} \cdot |g|) \big\|_{\mathcal B} \leq 2C \cdot \big\| {v} \cdot |f| \big\|_{\mathscr{G}} \cdot \big\| {w} \cdot |g| \big\|_{\mathscr{G}} = 2C \cdot \| f \|_{\mathscr{G}_{v}} \cdot \| g \|_{\mathscr{G}_{w}} , \] as claimed. \end{proof} In view of the preceding lemma, it is natural to ask for which spaces $\mathbf{A}$ we can guarantee that $\mathbf{A} \cap \mathscr{G}_{v}$ is nontrivial. The next lemma shows that this is always the case if $\mathbf{A}$ is a non-trivial \emph{solid} function space on $(X,\mathcal{F},\mu)$; the definition of these spaces was given in Section~\ref{sub:RestrictionsOnSolidSpaces}. \begin{lemma}\label{lem:SolidSpacesContainGoodFunctions} Let ${(X,\mathcal{F},\mu) := (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)}$, where $\XIndexTuple{1}$ and $\XIndexTuple{2}$ are $\sigma$-finite measure spaces. Let $\mathbf{A}$ be a solid function space on $X$ with $\mathbf{A} \neq \{0\}$, and let ${{v} : X \to (0,\infty)}$ be measurable. Then there exists a measurable set $E \subset X$ such that $\mu(E) > 0$ and ${{\mathds{1}}_E \in \mathbf{A} \cap \mathscr{G}_{v} \cap \mathscr{G}}$. \end{lemma} \begin{proof} $\mathbf{A}$ is non-trivial and solid; hence, there exists a non-negative function $g \in \mathbf{A}$ such that ${\{ x \in X \colon g(x) > 0 \}}$ has positive measure. Thus, ${E_{1} := \{ x \in X \colon g(x) \geq n_0^{-1} \}}$ has positive measure for a suitable $n_0 \in \mathbb{N}$. Next, since $X = \bigcup_{k \in \mathbb{N}} \{ x \in X \colon {v}(x) \leq k \}$, we see by $\sigma$-additivity that there is $k \in \mathbb{N}$ such that $E_2 := E_1 \cap \{ x \in X \colon {v}(x) \leq k \}$ has positive measure. Furthermore, since $X_1, X_2$ are $\sigma$-finite, we have $X_i = \bigcup_{n \in \mathbb{N}} X_i^{(n)}$ for certain $X_i^{(n)} \in \mathcal{F}_i$ satisfying $X_i^{(n)} \subset X_i^{(n+1)}$ and $\mu_i \big( X_i^{(n)} \big) < \infty$ for all $n \in \mathbb{N}$ and $i \in \{ 1,2 \}$. 
Because of $X = \bigcup_{n \in \mathbb{N}} \big( X_1^{(n)} \times X_2^{(n)} \big)$, there is thus some $n \in \mathbb{N}$ such that $E := E_2 \cap \big( X_1^{(n)} \times X_2^{(n)} \big)$ has positive measure. Define $f := {\mathds{1}}_E$, and note that $g \geq n_0^{-1}$ on $E_1 \supset E_2 \supset E$, so that $0 \leq f \leq n_0 \cdot g$. By solidity of $\mathbf{A}$ and since $g \in \mathbf{A}$, this implies $f \in \mathbf{A}$. Furthermore, $0 \leq (1 + {v}) \cdot f \leq (1+k) \cdot {\mathds{1}}_{X_1^{(n)} \times X_2^{(n)}}$, so that \begin{align*} \| (1+{v}) \cdot f\|_{\bd{L}^1(\mu)} & \leq (1+k) \cdot \mu_1 \big( X_1^{(n)} \big) \cdot \mu_2 \big( X_2^{(n)} \big) < \infty , \\ \| (1+{v}) \cdot f\|_{\bd{L}^\infty(\mu)} & \leq 1+k < \infty, \\ \| (1+{v}) \cdot f\|_{\bd{L}^{1,\infty}(\mu)} & \leq (1+k) \cdot \mu_1 \big( X_1^{(n)} \big) < \infty, \\ \| (1+{v}) \cdot f\|_{\bd{L}^{\infty,1}(\mu)} & \leq (1+k) \cdot \mu_2 \big( X_2^{(n)} \big) < \infty , \end{align*} and thus $f \in \mathscr{G}_{v} \cap \mathscr{G}$. Finally, $\mu(E) > 0$ holds by our choice of $n_0, k$, and $n$. \end{proof} \subsection{\texorpdfstring{Necessary conditions for spaces compatible with the kernel modules $\mathcal B_m$} {Necessary conditions for spaces compatible with the new kernel modules}} \label{sub:StructureOfBBmCompatibleSpaces} In this section, we prove Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces}, considering both parts of the theorem individually. {} \noindent \textbf{Proof of Part~(\ref{enu:KernelCoDomainEmbedding}) of Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces}:} Since $\mathbf{B}$ is a solid function space, convergence in $\mathbf{B}$ implies local convergence in measure (that is, convergence in measure on sets of finite measure); see \mbox{\hspace{1sp}\cite[Lemma~2.2.8]{VoigtlaenderPhDThesis}}. The same holds for the space $\mathscr{G}_v(\mu)$ introduced in Definition~\ref{def:SumAndIntersectionSpaces} and Equation~\eqref{eq:IntersectionSpaceSymbol}. Since $X$ is $\sigma$-finite, local convergence in measure determines the limit uniquely, up to changes on a set of measure zero. Thus, the closed graph theorem shows that the continuous embedding $\mathscr{G}_v(\mu) \hookrightarrow \mathbf{B}$ holds if and only if we have $\mathscr{G}_v(\mu) \subset \mathbf{B}$ as sets, which we now prove. Since $\mathbf{A} \neq \{0\}$ is a solid function space on $Y$, Lemma~\ref{lem:SolidSpacesContainGoodFunctions} produces a measurable set $E \subset Y$ satisfying $\nu(E) \in (0,\infty)$ and such that ${\mathds{1}}_E \in \mathbf{A} \cap \mathscr{G}_w(\nu)$. Define $g := {\mathds{1}}_E / \nu(E)$. Now, let $f \in \mathscr{G}_v(\mu)$ be arbitrary and note $|f| \in \mathscr{G}_v(\mu)$ as well. According to Lemma~\ref{lem:KernelModuleRichnessResult}, we have $K := |f| \otimes g \in \mathcal B_m(X,Y)$. By assumption, this implies that $\Phi_K : \mathbf{A} \to \mathbf{B}$ is well-defined, and thus $h := \Phi_K [{\mathds{1}}_E] \in \mathbf{B}$. But we have \[ (\Phi_K {\mathds{1}}_E)(x) = |f(x)| \cdot \int_Y g(y) \cdot {\mathds{1}}_E (y) \, d \nu(y) = |f(x)| \qquad \forall \, x \in X, \] and thus $|f| = \Phi_K [{\mathds{1}}_E] \in \mathbf{B}$. Since $\mathbf{B}$ is solid, we see $f \in \mathbf{B}$, and thus $\mathscr{G}_v(\mu)\subset \mathbf{B}$, since $f \in \mathscr{G}_v(\mu)$ was arbitrary. As seen above, this proves $\mathscr{G}_v(\mu) \hookrightarrow \mathbf{B}$. 
$\square$ {} \noindent \textbf{Proof of Part~(\ref{enu:KernelDomainEmbedding}) of Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces}:} Our proof of Part~(\ref{enu:KernelDomainEmbedding}) of Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces} is crucially based on the following description of the space $\mathscr{H}(\mu)$ as the associate space of $\mathscr{G}(\mu)$. The proof of this result is surprisingly involved, and thus postponed to Appendix~\ref{sec:DualCharacterizationOfSumSpace}. \begin{theorem}\label{thm:DualOfIntersection} Let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$, where $\mu_1,\mu_2$ are $\sigma$-finite. Let furthermore $\mathscr{G}$ and $\mathscr{H}$ be as in \eqref{eq:IntersectionSpaceSymbol} and \eqref{eq:SumSpaceSymbol}, respectively. Then the space $\big( \mathscr{H}, \|\bullet\|_{\mathscr{H}} \big)$ is a Banach space. Furthermore, if $f : X \to \mathbb{C}$ is measurable, then $f \in \mathscr{H}$ if and only if $f \cdot g \in \bd{L}^1 (\mu)$ for all $g \in \mathscr{G}$. Finally, for $f \in \mathscr{H}$, we have \begin{equation*} \frac{1}{16} \, \|f\|_{\mathscr{H}} \leq \sup \Big\{ \int_{X} |f \cdot g| \, d\mu \, \colon \, g \in \mathscr{G} \text{ with } \|g\|_{\mathscr{G}} \leq 1 \Big\} \leq 16 \, \|f\|_{\mathscr{H}}. \end{equation*} \end{theorem} With this dual characterization of $\mathscr{H} = \bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}$ at hand, we can now complete the proof of Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces}. \begin{proof}[Proof of Part~(\ref{enu:KernelDomainEmbedding}) of Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces}] By the same reasoning as in the proof of Part~(\ref{enu:KernelCoDomainEmbedding}), the \emph{continuous} embedding $\mathbf{A} \hookrightarrow \mathscr{H}_{1/w}(\nu)$ holds if and only if we have $\mathbf{A} \subset \mathscr{H}_{1/w}(\nu)$ as sets. Thus, let $f \in \mathbf{A}$. Proving $f \in \mathscr{H}_{1/w}(\nu)$ means proving that $g := \frac{1}{w} \cdot f \in \mathscr{H}(\nu)$. For this, it suffices by Theorem~\ref{thm:DualOfIntersection} to prove that $g \cdot h \in \bd{L}^{1}(\nu)$ for all \( h \in \mathscr{G}(\nu). \) Thus, let $h \in \mathscr{G}(\nu)$ be arbitrary, and define $\psi := \frac{1}{w} \cdot |h|$, noting that $\psi \in \mathscr{G}_w(\nu)$. The rest of the proof again proceeds similarly to the proof of Part~(\ref{enu:KernelCoDomainEmbedding}): By applying Lemma~\ref{lem:SolidSpacesContainGoodFunctions} (to the measure space $(X,\mu)$ and with $\mathbf{A} = \bd{L}^1(\mu)$, noting that $\bd{L}^1(\mu)$ is non-trivial since $\mu(X) \neq 0$ and $\mu$ is $\sigma$-finite), we obtain a measurable set $E \subset X$ such that $\mu(E) > 0$ and such that $\varphi := {\mathds{1}}_E \in \mathscr{G}(\mu) \cap \mathscr{G}_v(\mu)$. Finally, Lemma~\ref{lem:KernelModuleRichnessResult} shows that $K := \varphi \otimes \psi \in \mathcal B_m(X,Y)$; by our assumption, this means that $\big( \Phi_K |f| \big) (x) < \infty$ for $\mu$-almost all $x \in X$, since $|f| \in \mathbf{A}$. In particular, since $\mu(E) > 0$, there is some $x \in E$ such that \[ \infty > \big( \Phi_K |f| \big) (x) = \varphi(x) \cdot \int_Y |f(y)| \cdot \psi(y) \, d \nu(y) = \int_Y |g(y)| \cdot |h(y)| \, d \nu(y) , \] which means that indeed $g \cdot h \in \bd{L}^1(\nu)$, as was to be shown.
\end{proof} \section{Boundedness of \texorpdfstring{$\Phi_K : \mathbf{A} \to \bd{L}^\infty_{1/v}$} {ΦK from 𝐀 into a weighted 𝑳∞ space}} \label{sec:EmbeddingIntoWeightedLInfty} In this section, we prove Theorem~\ref{thm:BoundednessIntoWeightedLInfty}. For doing so, we will need the following auxiliary result, the proof of which can be found in Appendix~\ref{sub:CartesianProductSumNormProof}. \begin{lemma}\label{lem:CartesianProductSumNorm} Let $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$, where $\XIndexTuple{i}$ is a $\sigma$-finite measure space for $i \in \{ 1, 2 \}$. For $V \in \mathcal{F}_1$ and $W \in \mathcal{F}_2$, we have \[ \| {\mathds{1}}_{V \times W} \|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} \geq \min \{ 1, \mu_1(V), \mu_2(W), \mu(V \times W) \}. \] \end{lemma} Using this estimate, we can now prove Theorem~\ref{thm:BoundednessIntoWeightedLInfty}. \begin{proof}[Proof of Theorem~\ref{thm:BoundednessIntoWeightedLInfty}] \textbf{Ad (1):} Since $J$ is countable, there exists a (not necessarily injective) surjection $\mathbb{N} \to J, n \mapsto j_n$. Define $\Omega_n := U_{j_n} \setminus \bigcup_{\ell = 1}^{n-1} U_{j_\ell}$, noting that the $\Omega_n$ are pairwise disjoint and satisfy $\biguplus_{n \in \mathbb{N}} \Omega_n = \bigcup_{n \in \mathbb{N}} U_{j_n} = \bigcup_{j \in J} U_j = X$, as well as $\Omega_n \subset U_{j_n}$. Using this partition, define \[ w_{\mathcal{U}}^c : X \to (0,\infty), x \mapsto \sum_{n \in \mathbb{N}} \Big[ (w_{\mathcal{U}})_{j_n} \cdot {\mathds{1}}_{\Omega_n} (x) \Big] . \] Clearly, $w_{\mathcal{U}}^c$ is well-defined (and in particular $(0,\infty)$-valued) and measurable. To prove Equation~\eqref{eq:ContinuousCoveringWeightCondition}, first note that since $\mathcal{U}$ is a product-admissible covering, there is $C_0 > 0$ satisfying $(w_{\mathcal{U}})_i \leq C_0 \cdot (w_{\mathcal{U}})_j$ for all $i,j \in J$ for which $U_i \cap U_j \neq \emptyset$. Now, let $j \in J$ and $x \in U_j$ be arbitrary. We have $x \in \Omega_n$ for a unique $n \in \mathbb{N}$, and then $w_{\mathcal{U}}^c (x) = (w_{\mathcal{U}})_{j_n}$. Furthermore, since $x \in U_j \cap \Omega_n \subset U_j \cap U_{j_n}$, we have $(w_{\mathcal{U}})_{j_n} \leq C_0 \cdot (w_{\mathcal{U}})_j$ and $(w_{\mathcal{U}})_{j} \leq C_0 \cdot (w_{\mathcal{U}})_{j_n}$, meaning \( \frac{w_{\mathcal{U}}^c (x)}{(w_{\mathcal{U}})_j} + \frac{(w_{\mathcal{U}})_j}{w_{\mathcal{U}}^c (x)} \leq 2 \, C_0 . \) Overall, we see \( \sup_{j \in J} \sup_{x \in U_j} \Big[ \frac{w_{\mathcal{U}}^c (x)}{(w_{\mathcal{U}})_j} + \frac{(w_{\mathcal{U}})_j}{w_{\mathcal{U}}^c (x)} \Big] \leq 2 \, C_0 < \infty , \) as desired. Finally, let $v_{\mathcal{U}}^c : X \to (0, \infty)$ satisfy \eqref{eq:ContinuousCoveringWeightCondition}; that is, \({ C_v := \sup_{j \in J} \sup_{x \in U_j} \Big[ \frac{(w_{\mathcal{U}})_j}{v_{\mathcal{U}}^c (x)} + \frac{v_{\mathcal{U}}^c (x)}{(w_{\mathcal{U}})_j} \Big] < \infty . }\) Let $x \in X$ and note that $x \in U_j$ for some $j \in J$. Hence, \( \vphantom{\sum^i} v_{\mathcal{U}}^c (x) \leq C_v \cdot (w_{\mathcal{U}})_j \leq 2 C_0 C_v \cdot w_{\mathcal{U}}^c (x) . \) Conversely, \( w_{\mathcal{U}}^c (x) \leq 2 C_0 \cdot (w_{\mathcal{U}})_j \leq 2 C_0 C_v \cdot v_{\mathcal{U}}^c (x) . \) Overall, this shows that $w_{\mathcal{U}}^c \asymp v_{\mathcal{U}}^c$, as claimed.
{} \noindent \textbf{Ad (2):} First, note that for arbitrary $x,y \in X$, we have $x \in U_j \subset \mathcal{U}(x)$ for some $j \in J$, and hence $|K(x,y)| \leq \MaxKernel{\mathcal{U}} K (x,y) \leq L(x,y)$. Since $K$ is measurable and because of $\| L \|_{\mathcal B_m} < \infty$, this implies $K, |K| \in \mathcal B_m(X,X) \hookrightarrow \mathscr{B}(\mathbf{A})$. In case of $\mathbf{A} = \{ 0 \}$, the operator $\Phi_K : \mathbf{A} \to \bd{L}_{1/v}^\infty$ is trivially bounded, so that we can assume in the following that $\mathbf{A} \neq \{ 0 \}$. Note that this implies $\mu(X) > 0$. As in Definition~\ref{def:ProductAdmissibleCovering}, let us write $U_j = V_j \times W_j$. Now, let $f \in \mathbf{A}$. For arbitrary $j \in J$ and $x,y \in U_j$, we then have $x \in U_j \subset \mathcal{U}(y)$, and hence $|K(x,z)| \leq \MaxKernel{\mathcal{U}} K (y,z) \leq L(y,z)$ for arbitrary $z \in X$. Therefore, \[ |\Phi_K f (x)| \leq \int_X |K(x,z)| \cdot |f(z)| \, d \mu(z) \leq \int_X L(y,z) \cdot |f(z)| \, d \mu (z) = (\Phi_L \, |f|) (y) . \] Since this holds for all $x,y \in U_j$, we see that if we define \[ \Theta_j := \sup_{x \in U_j} |\Phi_K f (x)| \qquad \text{for } j \in J, \] then $\Theta_j \leq (\Phi_L \, |f|)(y)$ for all $y \in U_j$, meaning that \( \Theta_j \cdot {\mathds{1}}_{U_j} (y) \leq (\Phi_L \, |f|) (y) \) for all $j \in J$ and $y \in X$. Since $\Phi_L \, |f| \in \mathbf{A}$ and by solidity of $\mathbf{A}$, this implies \begin{equation} \Theta_j \cdot \big\| {\mathds{1}}_{U_j} \big\|_{\mathbf{A}} \leq \big\| \Phi_L \, |f| \big\|_{\mathbf{A}} \leq C^{(1)} \cdot \| \Phi_L \|_{\mathcal B_m} \cdot \| f \|_{\mathbf{A}} =: C^{(2)} \cdot \| f \|_{\mathbf{A}} \qquad \forall \, j \in J . \label{eq:WeightedLInfityEmbeddingStep1} \end{equation} Here, the constant $C^{(1)} = C^{(1)}(\mathbf{A}, m)$ is provided by our assumption $\mathcal B_m(X,X) \hookrightarrow \mathscr{B}(\mathbf{A})$. Next, Part~(\ref{enu:KernelDomainEmbedding}) of Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces} shows because of $\mathcal B_m(X,X) \hookrightarrow \mathscr{B}(\mathbf{A})$ and $\mu(X) \neq 0$, and thanks to our assumption $m(x,y) \leq C \cdot u(x) \cdot u(y)$ that $\mathbf{A} \hookrightarrow \mathscr{H}_{1/u}$. As before, we use the notation $\mathscr{H} := \bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}$ for brevity. Hence, there is $C^{(3)} = C^{(3)} (\mathbf{A}, u) > 0$ such that \( \| {\mathds{1}}_{U_j} \|_{\mathscr{H}_{1/u}} \leq C^{(3)} \cdot \| {\mathds{1}}_{U_j} \|_{\mathbf{A}} \) for all $j \in J$. Now, since $u$ is $\mathcal{U}$-moderate, there is $C^{(4)} > 0$ satisfying $u (y) \leq C^{(4)} \cdot u(x)$ and hence also $\frac{1}{u(x)} \leq C^{(4)} \cdot \frac{1}{u(y)}$ for all $j \in J$ and all $x, y \in U_j$. This entails $\frac{1}{u(x)} {\mathds{1}}_{U_j} (y) \leq C^{(4)} \cdot \frac{1}{u(y)} {\mathds{1}}_{U_j}(y)$ for all $x \in U_j$ and all $y \in X$. In combination with Lemma~\ref{lem:CartesianProductSumNorm}, this implies \begin{align*} \frac{(w_{\mathcal{U}})_j}{u(x)} & = \frac{\min \{ 1, \mu_1 (V_j), \mu_2 (W_j), \mu(V_j \times W_j) \}}{u(x)} \leq \frac{1}{u(x)} \cdot \| {\mathds{1}}_{V_j \times W_j} \|_{\mathscr{H}} = \frac{1}{u(x)} \cdot \| {\mathds{1}}_{U_j} \|_{\mathscr{H}} \\ & \leq C^{(4)} \cdot \Big\| \frac{1}{u} \, {\mathds{1}}_{U_j} \Big\|_{\mathscr{H}} = C^{(4)} \cdot \| {\mathds{1}}_{U_j} \|_{\mathscr{H}_{1/u}} \leq C^{(3)} C^{(4)} \cdot \| {\mathds{1}}_{U_j} \|_{\mathbf{A}} \qquad \forall \, j \in J \text{ and } x \in U_j . 
\end{align*} By construction of the weight $w_{\mathcal{U}}^c$, the constant $C^{(5)} := \sup_{j \in J} \sup_{x \in U_j} \frac{w_{\mathcal{U}}^c (x)}{(w_{\mathcal{U}})_j}$ is finite; furthermore, by definition of the weight $v$ (see Equation~\eqref{eq:SpecialLInftyWeight}), we see $\frac{1}{v(x)} = \frac{w_{\mathcal{U}}^c (x)}{u(x)} \leq C^{(5)} \cdot \frac{(w_{\mathcal{U}})_j}{u(x)}$ for all $x \in U_j$. By combining these observations with Equation~\eqref{eq:WeightedLInfityEmbeddingStep1} and recalling the definition of $\Theta_j$, we conclude that \[ \frac{1}{v(x)} \, |\Phi_K f (x)| \leq \Theta_j \cdot \frac{1}{v(x)} \leq C^{(3)} C^{(4)} C^{(5)} \cdot \Theta_j \cdot \| {\mathds{1}}_{U_j} \|_{\mathbf{A}} \leq C^{(2)} C^{(3)} C^{(4)} C^{(5)} \cdot \| f \|_{\mathbf{A}} \] for all $j \in J$ and $x \in U_j$. Since $X = \bigcup_{j \in J} U_j$, this implies $\| \Phi_K f \|_{\bd{L}^{\infty}_{1/v}} \leq C^{(6)} \cdot \| f \|_{\mathbf{A}}$. Note that the constant $C^{(6)} := C^{(2)} C^{(3)} C^{(4)} C^{(5)}$ satisfies $C^{(6)} = C^{(6)} (\mathbf{A}, \mathcal{U}, u, m, w_{\mathcal{U}}^c, L)$; that is, it is independent of the choice of $f \in \mathbf{A}$. \end{proof} \section{Application: General coorbit theory with mixed-norm Lebesgue spaces} \label{sec:CoorbitTheory} In this section, we first give a more precise exposition of coorbit theory, based on the article \cite{kempka2015general} by Kempka et al. We then give rigorous formulations and proofs of Theorems~\ref{thm:IntroductionCoorbitWellDefinedConditions} and \ref{thm:IntroductionCoorbitDiscretizationConditions}. \subsection{A formal review of general coorbit theory} \label{sub:CoorbitReview} As in Section~\ref{sec:IntroCoorbitTheory}, we assume throughout that $\mathcal{H}$ is a separable Hilbert space, and that $(X,\mathcal{F},\mu) = (X_1 \times X_2, \mathcal{F}_1 \otimes \mathcal{F}_2, \mu_1 \otimes \mu_2)$, where $\XIndexTuple{j}$ ($j \in \{1,2\}$) is a $\sigma$-compact, locally compact Hausdorff space with Borel $\sigma$-algebra $\mathcal{F}_j$, and a Radon measure $\mu_j$ with $\mathop{\operatorname{supp}} \mu_j = X_j$. Furthermore, we assume that $\mathcal{F}_1 \otimes \mathcal{F}_2$ is the Borel $\sigma$-algebra on $X$ and that $\mu$ is a Radon measure; this holds for instance if $X_1, X_2$ are second-countable; see \cite[Theorem~7.20]{FollandRA}. Finally, we assume that $\Psi = (\psi_x)_{x \in X} \subset \mathcal{H}$ is a continuous Parseval frame (see Equation~\eqref{eq:ParsevalFrameCondition}) with reproducing kernel $K_\Psi$ as in Equation~\eqref{eq:ReproducingKernelDefinition}. As mentioned in Section~\ref{sec:IntroCoorbitTheory}, the reproducing kernel $K_\Psi$ and the Banach function space $\mathbf{A}$ have to satisfy certain technical conditions to ensure that the coorbit space $\operatorname{Co}_\Psi (\mathbf{A})$ is well-defined. In the following two lemmas, we first collect the conditions from \cite{kempka2015general} which ensure that the reservoir $\mathscr{R}$ is well-defined, and then the conditions which guarantee that also the coorbit spaces $\operatorname{Co}_\Psi (\mathbf{A})$ are well-defined. Simplifications of these conditions will be presented in the next subsection. As previously in Equation~\eqref{eq:SeparableMatrixWeight}, for a measurable function $v : X \to (0,\infty)$, we define ${ m_v : X \times X \rightarrow (0,\infty), (x,y) \mapsto \max\{v(x)/v(y),v(y)/v(x)\} }$. 
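For orientation, we mention a simple illustrative special case, which is not used in the sequel: if one takes $X = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$ with Lebesgue measure and the polynomial weight $v(x) = (1 + |x|)^s$ for some $s \geq 0$, then the elementary estimate $1 + |x| \leq (1 + |y|) \cdot (1 + |x - y|)$ yields
\[
  m_v (x,y)
  = \max \Big\{ \frac{(1+|x|)^s}{(1+|y|)^s}, \; \frac{(1+|y|)^s}{(1+|x|)^s} \Big\}
  \leq (1 + |x - y|)^s
  \qquad \text{for all } x, y \in \mathbb{R}^{d_1 + d_2} ,
\]
so that $m_v$ is dominated by a function of the difference $x - y$ alone. In such a setting, conditions involving the weight $m_v$ (such as the requirement $K_\Psi \in \mathcal A_{m_v}$ in Lemma~\ref{lem:ReservoirConditions} below) can be checked in terms of the off-diagonal decay of the kernel in question.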
\begin{lemma}[{\cite[Section 2.3]{kempka2015general}}]\label{lem:ReservoirConditions} Let $v : X \to [1,\infty)$ be measurable and assume that there is a constant $C_B > 0$ such that \begin{equation} \|\psi_x\|_{\mathcal{H}} \leq C_B \, v(x) \quad \text{for all} \quad x \in X, \qquad \text{and} \qquad K_\Psi \in \mathcal A_{m_v}. \label{eq:CoorbitReservoirAssumptions} \end{equation} Then all results in \cite[Section~2.3]{kempka2015general} apply. In particular, the following hold: \begin{itemize} \item[(1)] The space \( \mathcal{H}^1_v := \{ f \in \mathcal{H} \colon \| V_\Psi f \|_{\bd{L}^1_v (\mu)} < \infty \} , \) equipped with the norm \( \|f\|_{\mathcal{H}^1_v} := \|V_\Psi f\|_{\bd{L}^1_v(\mu)} , \) is a Banach space which satisfies $\mathcal{H}^1_v \hookrightarrow \mathcal{H}$. Furthermore, there is a null-set $N \subset X$ such that $\psi_x \in \mathcal{H}^1_v$ for all $x \in X \setminus N$. Finally, the map $X \setminus N \to \mathcal{H}^1_v, x \mapsto \psi_x$ is Bochner measurable. \item[(2)] The extension of $V_\Psi$ to the antidual\footnote{That is, $\mathscr{R} = \{ \varphi : \mathcal{H}_v^1 \to \mathbb{C} \colon \varphi \text{ continuous and antilinear} \}$, where $\varphi$ is called antilinear if $\varphi(x + y) = \varphi(x) + \varphi(y)$ and $\varphi(\alpha x) = \overline{\alpha} \, \varphi(x)$ for all $x,y \in \mathcal{H}_v^1$ and $\alpha \in \mathbb{C}$.} $\mathscr{R} := (\mathcal{H}^1_v)^\urcorner$ of $\mathcal{H}^1_v$, given by \begin{equation} V_{\Psi} : \mathscr{R} \to \bd{L}^\infty_{1/v}(\mu), f \mapsto V_{\Psi} f \quad \text{with} \quad V_{\Psi} f \colon X \setminus N \to \mathbb{C}, x \mapsto \langle f,\psi_x\rangle_{(\mathcal{H}^1_v)^\urcorner,\mathcal{H}^1_v} \end{equation} is a well-defined, continuous and injective map from $(\mathcal{H}^1_v)^\urcorner$ into $\bd{L}^\infty_{1/v}(\mu)$, and $f \mapsto \| V_\Psi f \|_{\bd{L}_{1/v}^\infty}$ is an equivalent norm on $\mathscr{R}$. \item[(3)] For ${F \in \bd{L}^\infty_{1/v}(\mu)}$, we have $F = \Phi_{K_\Psi}(F)$ if and only if $F = V_\Psi f$ for some $f \in (\mathcal{H}^1_v)^\urcorner$. \end{itemize} \end{lemma} \begin{rem*} The possible issue that $\psi_x \notin \mathcal{H}_v^1$ on a null-set can be circumvented by redefining $\psi_x = 0$ on this null-set. Thus, as in \cite{kempka2015general} (see \cite[Remark~2.16]{kempka2015general}), we assume in the following that $\psi_x \in \mathcal{H}_v^1$ for \emph{all} $x \in X$. \end{rem*} \begin{lemma}[{\cite[Section 2.4]{kempka2015general}}]\label{lem:CoorbitConditions} Suppose that the assumptions of Lemma~\ref{lem:ReservoirConditions} are satisfied. Let $\mathbf{A}$ be a rich solid Banach function space on $X$, and assume that \begin{enumerate} \item The integral operator $\Phi_{K_\Psi}$ acts continuously on $\mathbf{A}$; \item We have $\Phi_{K_\Psi}(\mathbf{A}) \hookrightarrow \bd{L}^\infty_{1/v} (\mu)$, meaning that there is a constant $C > 0$ such that \begin{equation} \| \Phi_{K_\Psi} (f) \|_{\bd{L}^\infty_{1/v}} \leq C \cdot \| \Phi_{K_\Psi} (f) \|_{\mathbf{A}} \qquad \forall \, f \in \mathbf{A} .
\label{eq:CoorbitLInftyEmbeddingAssumption} \end{equation} \end{enumerate} Then all results in \cite[Section~2.4]{kempka2015general} apply; in particular, the space \[ \operatorname{Co} (\mathbf{A}) = \operatorname{Co}_{\Psi,v} (\mathbf{A}) := \big\{ f \in (\mathcal{H}^1_v)^{\urcorner} \colon V_\Psi f \in \mathbf{A} \big\} \quad \text{with norm} \quad \| f \|_{\operatorname{Co} (\mathbf{A})} := \| V_\Psi f \|_{\mathbf{A}} \] is a Banach space; also, $F \in \mathbf{A}$ is of the form $F = V_\Psi f$ for some $f \in \operatorname{Co}(\mathbf{A})$ if and only if ${F = \Phi_{K_\Psi}(F)}$. \end{lemma} \begin{remark} By definition, $\operatorname{Co}_{\Psi,v} (\mathbf{A})$ is dependent on the weight $v$ and the analyzing frame $\Psi$. However, in \cite[Lemma~2.26]{kempka2015general}, it is shown that for any other weight $\tilde{v}$ for which the assumptions of Lemmas~\ref{lem:ReservoirConditions}--\ref{lem:CoorbitConditions} are satisfied, $\operatorname{Co}_{\Psi,v} (\mathbf{A})= \operatorname{Co}_{\Psi,\tilde{v}} (\mathbf{A})$. Furthermore, in \cite[Lemma~2.29]{kempka2015general}, it is shown that $\operatorname{Co}_{\Psi,v} (\mathbf{A}) = \operatorname{Co}_{\widetilde{\Psi},v} (\mathbf{A})$ for any continuous Parseval frame $\widetilde{\Psi} = \{\widetilde{\psi}_x\}_{x\in X}$ for which the mixed kernel \( K_{\Psi,\widetilde{\Psi}} : X \times X \rightarrow \mathbb{C}, (x,y) \mapsto \langle \psi_y, \widetilde{\psi}_x \rangle \) and its transpose $K_{\Psi,\widetilde{\Psi}}^T$ are both contained in $\mathcal A_{m_v}$ and induce bounded integral operators on $\mathbf{A}$. All in all, this justifies the compact notation $\operatorname{Co} (\mathbf{A})$ for $\operatorname{Co}_{\Psi,v} (\mathbf{A})$. \end{remark} To formally state the discretization theory for the coorbit spaces, we first need to properly define the sequence spaces $\mathbf{A}^\flat$ and $\mathbf{A}^\sharp$ that occur in the definition of the analysis and synthesis operators; see Equation~\eqref{eq:CoefficientSynthesisOperator}. \begin{definition}[{see \cite[Section~2.2]{kempka2015general}}]\label{def:SequenceSpaces} Let $\mathcal{U} = (U_i)_{i \in I}$ be an admissible covering of $X$ and let $\mathbf{A}$ be a rich solid Banach function space on $X$. The spaces $\mathbf{A}^\flat = \mathbf{A}^\flat(\mathcal{U})$ and $\mathbf{A}^\sharp = \mathbf{A}^\sharp(\mathcal{U})$ consist of the sequences $(\lambda_i)_{i\in I} \in \mathbb{C}^I$ for which the norms \begin{equation} \| (\lambda_i)_{i \in I} \|_{\mathbf{A}^\flat} := \bigg\| \sum_{i \in I} |\lambda_i| \, {\mathds{1}}_{U_i} \bigg\|_{\mathbf{A}} \qquad \text{and} \qquad \| (\lambda_i)_{i \in I} \|_{\mathbf{A}^\sharp} := \bigg\| \sum_{i \in I} \frac{|\lambda_i|}{\mu(U_i)} \cdot {\mathds{1}}_{U_i} \bigg\|_{\mathbf{A}} \label{eq:SequenceSpaceNorms} \end{equation} are finite, respectively. \end{definition} As a convenient shorthand, we will use the notation $\|K\|_{\mathcal A_{v,\mathbf{A}}} := \max \{\|m_v \, K\|_{\mathcal A}, \| \Phi_K \|_{\mathbf{A} \to \mathbf{A}}\}$ for a given kernel $K$, where $m_v$ is as defined in Equation~\eqref{eq:SeparableMatrixWeight}. \begin{theorem}[{\cite[Theorem 2.48]{kempka2015general}}]\label{thm:coorbitsdisc} Suppose that the continuous frame $\Psi = (\psi_x)_{x \in X}$, the weight $v : X \to [1,\infty)$ and the solid Banach function space $\mathbf{A}$ satisfy the assumptions of Lemma~\ref{lem:CoorbitConditions} (and thus also those of Lemma~\ref{lem:ReservoirConditions}). Assume $\| \, |K_\Psi| \, \|_{{\mathcal A_{v,\mathbf{A}}}} < \infty$.
Let $\mathcal{U} = (U_i)_{i\in I}$ be an admissible covering of $X$, let ${\Gamma : X \times X \to S^1}$ be a phase function, let ${\mathrm{osc}}_{\UU,\Gamma}$ be as defined in Equation~\eqref{eq:OscillationDefinition}, and let $L : X \times X \to [0,\infty]$ be measurable with ${\mathrm{osc}}_{\UU,\Gamma} (K_\Psi) \leq L$. Define \[ \delta = \delta(v, \mathbf{A}, L) := \max \big\{ \| L \|_{\mathcal A_{v,\mathbf{A}}}, \quad \big\| L^T \big\|_{\mathcal A_{v,\mathbf{A}}} \big\} \in [0,\infty] . \] If \begin{equation}\label{eq:DiscretizationInvertibilityEstimate} \delta \cdot \big( 2 \, \| \, | K_\Psi | \, \|_{{\mathcal A_{v,\mathbf{A}}}} + \delta \big) < 1, \end{equation} and if for each $i \in I$ some $x_i \in U_i$ is chosen arbitrarily, then there exists a ``dual'' family ${\Phi_d = (\varphi_i)_{i\in I} \subset \mathcal{H}^1_v \cap \operatorname{Co}(\mathbf{A})}$, such that the following are true: \begin{itemize} \item[(1)] A given $f \in (\mathcal{H}^1_v)^\urcorner$ is an element of $\operatorname{Co}(\mathbf{A})$ if and only if \( \big( \langle f, \psi_{x_i} \rangle_{(\mathcal{H}^1_v)^{\urcorner}, \mathcal{H}_v^1} \big)_{i\in I} \in \mathbf{A}^\flat(\mathcal U) , \) if and only if \( \big( \langle f, \varphi_{i} \rangle_{(\mathcal{H}_v^1)^{\urcorner}, \mathcal{H}_v^1} \big)_{i\in I} \in \mathbf{A}^\sharp(\mathcal U) . \) In this case, we have \begin{equation} \|f\|_{\operatorname{Co}(\mathbf{A})} \asymp \big\| \big( \langle f, \psi_{x_i}\rangle_{(\mathcal{H}^1_v)^{\urcorner}, \mathcal{H}_v^1} \big)_{i \in I} \big\|_{\mathbf{A}^\flat(\mathcal U)} \asymp \big\| \big( \langle f, \varphi_{i} \rangle_{(\mathcal{H}^1_v)^{\urcorner}, \mathcal{H}_v^1} \big)_{i \in I} \big\|_{\mathbf{A}^\sharp (\mathcal U)} . \end{equation} \item[(2)] If $(\lambda_i)_{i\in I} \in \mathbf{A}^\sharp(\mathcal U)$, then $\sum_{i\in I} \lambda_i \, \psi_{x_i} \in \operatorname{Co}(\mathbf{A})$ with \( \big\| \sum_{i \in I} \lambda_i \, \psi_{x_i} \big\|_{\operatorname{Co}(\mathbf{A})} \lesssim \|(\lambda_i)_{i\in I}\|_{\mathbf{A}^\sharp(\mathcal U)} , \) with unconditional convergence of the series in the weak-$\ast$ topology induced by $(\mathcal{H}^1_v)^\urcorner$.\\ If $(\lambda_i)_{i \in I} \in \mathbf{A}^\flat(\mathcal U)$, then $\sum_{i\in I} \lambda_i \, \varphi_{i} \in \operatorname{Co}(\mathbf{A})$ with \( \big\| \sum_{i\in I} \lambda_i \, \varphi_{i} \big\|_{\operatorname{Co}(\mathbf{A})} \lesssim \|(\lambda_i)_{i \in I}\|_{\mathbf{A}^\flat(\mathcal U)} . \) \item[(3)] For all $f\in\operatorname{Co}(\mathbf{A})$, we have \( \sum_{i \in I} \, \langle f, \varphi_{i} \rangle_{(\mathcal{H}_v^1)^{\urcorner}, \mathcal{H}_v^1} \,\, \psi_{x_i} = f = \sum_{i \in I} \, \langle f, \psi_{x_i}\rangle_{(\mathcal{H}_v^1)^{\urcorner}, \mathcal{H}_v^1} \,\, \varphi_{i} . \) \end{itemize} \end{theorem} \begin{rem*} In \cite{kempka2015general}, the above conditions are formulated directly in terms of ${\mathrm{osc}}_{\UU,\Gamma} (K_\Psi)$ and not using the auxiliary kernel $L$. This, however, has the minor issue that in general ${\mathrm{osc}}_{\UU,\Gamma} (K_\Psi)$ might not be measurable, since $\Gamma$ might not be measurable and also since the definition of ${\mathrm{osc}}_{\UU,\Gamma}(K_\Psi)$ involves taking a supremum over a (potentially) uncountable set. The above formulation circumvents these problems; it can be obtained via straightforward minor modifications of the arguments given in \cite{kempka2015general}.
\end{rem*} \subsection{Simplification of the well-definedness conditions} \label{sub:CoorbitWelldefinedSimplification} While Lemma~\ref{lem:ReservoirConditions} relies only on properties of the reproducing kernel $K_\Psi$, Lemma~\ref{lem:CoorbitConditions} must be verified individually for every target space $\mathbf{A}$, except when we generally have $\mathcal A_{m_v} \hookrightarrow \mathscr{B}(\mathbf{A},\mathbf{A})$ and $\mathcal A_{m_v}(\mathbf{A}) \hookrightarrow \bd{L}^\infty_{1/v}(\mu)$. As shown in \cite[Corollary~4]{GeneralizedCoorbit1}, both conditions are satisfied if $\mathbf{A} = \bd{L}^p(\mu)$ and if there is an admissible covering $\mathcal{U} = (U_i)_{i \in I}$ such that the maximal kernel $\MaxKernel{\mathcal{U}}(K_\Psi)$ is in $\mathcal A_{m_v}$ and ${\sup_{i \in I} \sup_{x,y \in U_i} v(x) / v(y) < \infty}$. For more general choices of $\mathbf{A}$, either condition may be violated. In particular, the embedding $\mathcal A_{m_v} \hookrightarrow \mathscr{B}(\mathbf{A},\mathbf{A})$ is not generally true when $\mathbf{A} = \bd{L}^{p,q}_w(\mu)$. This is a significant obstruction to the development of a coherent coorbit theory for such spaces. Using the kernel algebras $\mathcal B_m$, we will now state a set of conditions which ensures that Lemmas~\ref{lem:ReservoirConditions} and \ref{lem:CoorbitConditions} can be applied for a larger family of function spaces, including weighted, mixed-norm Lebesgue spaces. \begin{proposition}\label{pro:coorbitsWithBm} Let $(X,\mathcal{F},\mu)$ and $\Psi = (\psi_x)_{x \in X} \subset \mathcal{H}$ be as at the beginning of Section~\ref{sub:CoorbitReview}. Furthermore, assume that Conditions~\eqref{eq:CoorbitWeightConditions} and \eqref{eq:CoorbitWellDefinedConditions} are satisfied. Then the conditions of Lemmas~\ref{lem:ReservoirConditions} and \ref{lem:CoorbitConditions} are satisfied. Therefore, $\operatorname{Co}(\mathbf{A}) = \operatorname{Co}_{\Psi,v}(\mathbf{A})$ is a well-defined Banach space. \end{proposition} \begin{proof} Condition~\eqref{eq:CoorbitReservoirAssumptions} is an immediate consequence of Conditions~\eqref{eq:CoorbitWeightConditions} and \eqref{eq:CoorbitWellDefinedConditions}. Thus, all assumptions of Lemma~\ref{lem:ReservoirConditions} are satisfied. Next, since $\mathcal B_{m_0}(X) \hookrightarrow \mathscr{B}(\mathbf{A})$ and $\mathbf{A} \neq \{ 0 \}$ as well as $m_0(x,y) \leq C \cdot u(x) \, u(y)$ by Condition~\eqref{eq:CoorbitWeightConditions}, Theorem~\ref{thm:NecessaryConditionsForCompatibleSpaces}(\ref{enu:KernelCoDomainEmbedding}) shows that \( (\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1})_u \hookrightarrow \mathbf{A}. \) Since $u$ is locally bounded and since each compact set $K \subset X$ satisfies $K \subset K_1 \times K_2$ for suitable compact sets $K_i \subset X_i$, we see that the space on the left-hand side contains ${\mathds{1}}_K$ for each compact $K \subset X$, which shows that $\mathbf{A}$ is rich. Further, note that for each $x \in X$, we have $x \in U_j \subset \mathcal{U}(x)$ for some $j \in J$, and hence ${L(x,y) \geq (M_{\mathcal{U}} K_\Psi ) (x,y) \geq |K_\Psi (x,y)|}$. Since $L \in \mathcal B_{m_0}$ and since $K_\Psi$ is measurable, this implies by solidity of $\mathcal B_{m_0}$ that $K_\Psi, |K_\Psi| \in \mathcal B_{m_0} (X) \hookrightarrow \mathscr{B}(\mathbf{A})$, so that $\Phi_{K_\Psi} : \mathbf{A} \to \mathbf{A}$ and $\Phi_{|K_\Psi|} : \mathbf{A} \to \mathbf{A}$ are bounded.
Finally, since $u$ is $\mathcal{U}$-moderate with $m_0(x,y) \leq C \cdot u(x) \, u(y)$, and since ${M_{\mathcal{U}} K_\Psi \leq L \in \mathcal B_{m_0}(X)}$ and $\mathcal B_{m_0}(X) \hookrightarrow \mathscr{B}(\mathbf{A})$, Part~(2) of Theorem~\ref{thm:BoundednessIntoWeightedLInfty} shows for $v_0 : X \to (0,\infty), x \mapsto [w_{\mathcal{U}}^c (x)]^{-1} \, u(x)$ that $\Phi_{K_\Psi} : \mathbf{A} \to \bd{L}_{1/v_0}^\infty (\mu)$ is well-defined and bounded. Note that Condition~\eqref{eq:CoorbitWeightConditions} implies that $v \gtrsim v_0$, so that also $\Phi_{K_\Psi} : \mathbf{A} \to \bd{L}_{1/v}^\infty (\mu)$ is well-defined and bounded. By Lemma~\ref{lem:BoundednessEmbeddingEquivalence}, this implies that $\Phi_{K_\Psi} (\mathbf{A}) \hookrightarrow \bd{L}_{1/v}^\infty (\mu)$. We have thus verified all assumptions of Lemma~\ref{lem:CoorbitConditions}. \end{proof} \subsection{Simplification of the discretization conditions} \label{sub:CoorbitDiscretizationSimplification} Here we present our unified conditions for discretization in coorbit spaces $\operatorname{Co}(\mathbf{A})$ for the setting where $\mathcal B_{m_{0}}(X) \hookrightarrow \mathscr{B}(\mathbf{A})$. \begin{proposition}\label{pro:coorbitsWithBm2} Suppose that the assumptions of Proposition~\ref{pro:coorbitsWithBm} hold, and set ${m := m_v + m_0}$. If there exist an admissible covering $\widetilde{\mathcal{U}} = ( \widetilde{U}_i )_{i \in I}$ of $X$, a phase function ${\Gamma : X \times X \to S^1}$, and some $L \in \mathcal B_m$ such that \begin{equation}\label{eq:KernelsInSmallAlgebra} K_\Psi, L \in \mathcal B_m \quad \text{and} \quad {\mathrm{osc}}_{\widetilde{\UU},\Gamma}(K_\Psi) \leq L \end{equation} and \begin{equation}\label{eq:IdMinusDiscNormEstimate} \| L \|_{\mathcal B_{m}} \cdot (2 \, \| K_\Psi \|_{\mathcal B_{m}} + \| L \|_{\mathcal B_{m}}) < 1, \end{equation} then all assumptions of Theorem~\ref{thm:coorbitsdisc} (with $\widetilde{\mathcal{U}}$ instead of $\mathcal{U}$) are satisfied. In particular, if for each $i \in I$ a point $x_i \in \widetilde{U}_i$ is chosen arbitrarily, then there exists a ``dual'' family $\Phi_d = (\varphi_i)_{i\in I} \subset \mathcal{H}^1_{v} \cap \operatorname{Co}(\mathbf{A})$, such that the following are true: \begin{itemize} \item[(1)] $f\in (\mathcal{H}^1_{v})^\urcorner$ is an element of $\operatorname{Co}(\mathbf{A})$ if and only if \( \big( \langle f, \psi_{x_i} \rangle_{(\mathcal{H}_v^1)^{\urcorner}, \mathcal{H}_v^1} \big)_{i \in I} \in \mathbf{A}^\flat(\widetilde{\mathcal U}) \) if and only if \( \big( \langle f, \varphi_{i} \rangle_{(\mathcal{H}_v^1)^{\urcorner}, \mathcal{H}_v^1} \big)_{i\in I} \in \mathbf{A}^\sharp(\widetilde{\mathcal U}) , \) and in this case \begin{equation} \| f \|_{\operatorname{Co}(\mathbf{A})} \asymp \big\| \big( \langle f, \psi_{x_i} \rangle_{(\mathcal{H}_v^1)^{\urcorner}, \mathcal{H}_v^1} \big)_{i \in I} \big\|_{\mathbf{A}^\flat(\widetilde{\mathcal U})} \asymp \big\| \big( \langle f, \varphi_{i} \rangle_{(\mathcal{H}_v^1)^{\urcorner}, \mathcal{H}_v^1} \big)_{i\in I} \big\|_{\mathbf{A}^\sharp(\widetilde{\mathcal U})} . \end{equation} \item[(2)] If $(\lambda_i)_{i\in I} \in \mathbf{A}^\sharp(\widetilde{\mathcal U})$, then $\sum_{i\in I} \lambda_i \, \psi_{x_i} \in \operatorname{Co}(\mathbf{A})$ with \( \| \sum_{i \in I} \lambda_i \, \psi_{x_i} \|_{\operatorname{Co}(\mathbf{A})} \lesssim \|(\lambda_i)_{i \in I}\|_{\mathbf{A}^\sharp(\widetilde{\mathcal U})} , \) with unconditional convergence of the series in the weak-$\ast$ topology induced by $(\mathcal{H}^1_{v})^\urcorner$.
Further, if $(\lambda_i)_{i\in I} \! \in \! \mathbf{A}^\flat(\widetilde{\mathcal U})$, then $\sum_{i\in I} \lambda_i \, \varphi_{i} \! \in \! \operatorname{Co}(\mathbf{A})$ and \( \|\sum_{i \in I} \lambda_i \, \varphi_{i}\|_{\operatorname{Co}(\mathbf{A})} \lesssim \|(\lambda_i)_{i \in I}\|_{\mathbf{A}^\flat(\widetilde{\mathcal U})} . \) \item[(3)] For all $f\in\operatorname{Co}(\mathbf{A})$, we have \( \sum_{i \in I} \, \langle f, \varphi_{i}\rangle_{(\mathcal{H}_v^1)^{\urcorner}, \mathcal{H}_v^1} \,\, \psi_{x_i} = f = \sum_{i \in I} \, \langle f, \psi_{x_i}\rangle_{(\mathcal{H}_v^1)^{\urcorner}, \mathcal{H}_v^1} \,\, \varphi_{i} . \) \end{itemize} \end{proposition} \begin{proof} Since by assumption Proposition~\ref{pro:coorbitsWithBm} is applicable, that proposition shows that the assumptions of Lemma~\ref{lem:CoorbitConditions} are satisfied. Next, recall from Part~(4) of Proposition~\ref{prop:NewKernelModuleBasicProperties1} that $\| \bullet \|_{\mathcal A_{m_v}} \leq \| \bullet \|_{\mathcal B_{m_v}} \leq \| \bullet \|_{\mathcal B_m}$. Furthermore, Condition~\eqref{eq:CoorbitWeightConditions} shows that \( \| \Phi_\bullet \|_{\mathbf{A} \to \mathbf{A}} \leq \| \bullet \|_{\mathcal B_{m_0}} \leq \| \bullet \|_{\mathcal B_m} . \) Overall, we thus see with $\| \bullet \|_{{\mathcal A_{v,\mathbf{A}}}}$ as defined before Theorem~\ref{thm:coorbitsdisc} that $\| K \|_{{\mathcal A_{v,\mathbf{A}}}} \leq \| K \|_{\mathcal B_m}$ for each measurable kernel $K$. Since $K_\Psi, L \in \mathcal B_m$, we thus see that $|K_\Psi|, L \in {\mathcal A_{v,\mathbf{A}}}$, where we note that ${\mathrm{osc}}_{\widetilde{\UU},\Gamma}(K_\Psi) \leq L$, as required in Theorem~\ref{thm:coorbitsdisc}. Furthermore, since $m_v^T = m_v$ and $m_0^T = m_0$, we also have $m^T = m$, so that Proposition~\ref{prop:NewKernelModuleBasicProperties2} shows $\| K^T \|_{\mathcal B_m} = \| K \|_{\mathcal B_m}$ for each measurable kernel $K$. Overall, we thus see that the constant $\delta$ in Theorem~\ref{thm:coorbitsdisc} satisfies \[ \delta = \max \big\{ \| L \|_{{\mathcal A_{v,\mathbf{A}}}}, \| L^T \|_{{\mathcal A_{v,\mathbf{A}}}} \big\} \leq \max \big\{ \| L \|_{\mathcal B_m}, \| L^T \|_{\mathcal B_m} \big\} = \| L \|_{\mathcal B_m}, \] and hence \[ \delta \cdot \big( 2 \, \| \,|K_\Psi|\, \|_{{\mathcal A_{v,\mathbf{A}}}} + \delta \big) \leq \| L \|_{\mathcal B_m} \cdot \big( 2 \, \| \, |K_\Psi| \, \|_{\mathcal B_m} + \| L \|_{\mathcal B_m} \big) = \| L \|_{\mathcal B_m} \cdot \big( 2 \, \| K_\Psi \|_{\mathcal B_m} + \| L \|_{\mathcal B_m} \big) < 1; \] see Equation~\eqref{eq:IdMinusDiscNormEstimate}. This completes the proof. \end{proof} \appendix \section{A dual characterization of the space \texorpdfstring{$\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}$} {L¹ + L∞ + L¹∞ + L∞¹}} \label{sec:DualCharacterizationOfSumSpace} The main objective of this appendix is to prove Theorem~\ref{thm:DualOfIntersection}. That is, we show that if $F : X_1 \times X_2 \to \mathbb{C}$ satisfies $F \cdot G \in \bd{L}^1 (\mu_1 \otimes \mu_2)$ for all $G \in \bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}$, then ${F \in \bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$, with a corresponding norm estimate. Along the way, we will also obtain everything necessary to prove Lemma~\ref{lem:CartesianProductSumNorm}; see Appendix~\ref{sub:CartesianProductSumNormProof}.
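Before turning to the proof, we note, purely as an illustration in the special case $X_1 = X_2 = \mathbb{R}$ equipped with Lebesgue measure (this special case is not assumed anywhere below), that the four-term sum is genuinely larger than the classical space $\bd{L}^1 + \bd{L}^\infty$: the function
\[
  F(x,y) := (1 + |y|) \cdot {\mathds{1}}_{[0, (1+|y|)^{-1}]} (x)
\]
satisfies $\| F(\bullet, y) \|_{\bd{L}^1} = 1$ for every $y \in \mathbb{R}$, and hence $F \in \bd{L}^{1,\infty}$ with norm one. On the other hand, for any decomposition $F = G + H$ with $\lambda := \|G\|_{\bd{L}^\infty} < \infty$, we have $|H| \geq (F - \lambda)_+$ almost everywhere, and therefore
\[
  \|H\|_{\bd{L}^1}
  \geq \int_{\mathbb{R}^2} (F - \lambda)_+ \, d(x,y)
  = \int_{\{ y \,\colon\, 1 + |y| > \lambda \}} \Big( 1 - \frac{\lambda}{1 + |y|} \Big) \, d y
  = \infty ,
\]
so that $F \notin \bd{L}^1 + \bd{L}^\infty$.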
The general structure of the proof of Theorem~\ref{thm:DualOfIntersection} is as follows: First, we show that there is an equivalent norm $\| \bullet \|_\ast$ for the space ${\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$, such that with this norm, ${\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$ is a so-called \emph{normed Köthe space}, whose defining \emph{function norm} satisfies the \emph{Fatou property}. Once this is established, the claim of Theorem~\ref{thm:DualOfIntersection} is proven in Appendix~\ref{sub:DualOfIntersectionProof} as a consequence of the \emph{Luxemburg representation theorem}. All of these notions (Köthe spaces, function norms, etc) will be recalled below. In the following, we will always use the convention $\infty - \lambda = \infty$ for $\lambda \in \mathbb{R}$. Note that this implies $(\theta - \lambda) + \lambda = \theta = (\theta + \lambda) - \lambda$ for all $\theta \in [0,\infty]$ and $\lambda \in [0,\infty)$. We begin by studying the norm that defines the space $\bd{L}^1 + \bd{L}^\infty$. \begin{lemma}\label{lem:LebesgueSumFunctionNorm} Let $(X,\mathcal{F},\mu)$ be a measure space. For any measurable function $f : X \to [0,\infty]$, define \[ \varrho(f) := \inf \big\{ \|g\|_{\bd{L}^\infty} + \|h\|_{\bd{L}^1} \colon g, h : X \to [0,\infty] \text{ measurable and } f = g + h \big\} \in [0,\infty] \] and $X_{f,\lambda} := \{ x \in X \colon f(x) > \lambda \}$, for $\lambda \in [0,\infty]$. Then the following properties hold: \begin{enumerate} \item We have \( \varrho(f) = \min_{\lambda \in [0,\infty)} \big[ \lambda + \|{\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda)\|_{\bd{L}^1} \big] \). In particular, the minimum is attained. \item We have \( \varrho(f) = \inf_{\lambda \in [0,\infty) \cap \mathbb{Q}} \big[ \lambda + \|{\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda)\|_{\bd{L}^1} \big] \). \item If $F : X \to \mathbb{C}$ is measurable and $\alpha := \varrho(|F|) \in [0,\infty]$, then \[ \big\| F \cdot (1-{\mathds{1}}_{X_{|F|,2\alpha}}) \big\|_{\bd{L}^\infty} \leq 2 \, \varrho(|F|) \quad \text{and} \quad \big\| F \cdot {\mathds{1}}_{X_{|F|,2\alpha}} \big\|_{\bd{L}^1} \leq 2 \, \varrho(|F|) . \] \end{enumerate} \end{lemma} \begin{proof} We start by showing that the auxiliary function \[ \Psi_f : [0,\infty) \to [0,\infty], \lambda \mapsto \lambda + \|{\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda) \|_{\bd{L}^1} \] is lower semicontinuous; this means that if $(\lambda_n)_{n \in \mathbb{N}} \subset [0,\infty)$ satisfies $\lambda_n \to \lambda \in [0,\infty)$, then $\Psi_f (\lambda) \leq \liminf_{n \to \infty} \Psi_f (\lambda_n)$. Since the identity map $\lambda \mapsto \lambda$ is continuous, it suffices to consider only the second summand in the definition of $\Psi_f (\lambda)$. If $x \in X_{f,\lambda}$, then $f(x) > \lambda_n$ for all $n \geq n_x$, for a suitable $n_x \in \mathbb{N}$. From this, we derive \[ {\mathds{1}}_{X_{f,\lambda}} (x) \cdot (f(x) - \lambda) \leq \liminf_{n \to \infty} \big[ {\mathds{1}}_{X_{f,\lambda_n}} (x) \cdot (f(x) - \lambda_n) \big] \qquad \forall \, x \in X, \] where all involved functions are non-negative. Thus, an application of Fatou's lemma shows as claimed that \({ \|{\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda)\|_{\bd{L}^1} \leq \liminf_{n \to \infty} \|{\mathds{1}}_{X_{f,\lambda_n}} \cdot (f - \lambda_n)\|_{\bd{L}^1} }\). {} We now prove each claim individually. 
{} \noindent \textbf{Ad (1):} Define \( \varrho^\ast (f) := \inf_{\lambda \in [0,\infty)} \, [\lambda + \|{\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda)\|_{\bd{L}^1}] = \inf_{\lambda \geq 0} \Psi_f (\lambda) \). We first show ${\varrho (f) \leq \varrho^\ast (f)}$. Indeed, if $\lambda \in [0,\infty)$ is arbitrary, define \( g_\lambda := \min\{f, \lambda\} = f \cdot {\mathds{1}}_{f \leq \lambda} + \lambda \cdot {\mathds{1}}_{X_{f,\lambda}} \), and $h_\lambda := {\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda)$. Then $g_\lambda, h_\lambda : X \to [0,\infty]$ are measurable and satisfy $f = g_\lambda + h_\lambda$. Therefore, \[ \varrho(f) \leq \|g_\lambda\|_{\bd{L}^\infty} + \|h_\lambda\|_{\bd{L}^1} \leq \lambda + \|{\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda)\|_{\bd{L}^1}. \] Since this holds for all $\lambda \in [0,\infty)$, we see $\varrho(f) \leq \varrho^\ast (f)$. In case of $\varrho(f) = \infty$, the preceding derivations imply \( \infty = \varrho(f) \leq \varrho^\ast(f) \leq \lambda + \| {\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda) \|_{\bd{L}^1} \) for \emph{every} $\lambda\in [0,\infty)$, which shows that the desired equality holds and that the infimum is attained. Therefore, let us assume $\varrho(f) < \infty$, and let $g,h : X \to [0,\infty]$ be measurable with $f = g + h$ and $\|g\|_{\bd{L}^\infty} + \|h\|_{\bd{L}^1} < \infty$. Define $\lambda := \|g\|_{\bd{L}^\infty}$. For $\mu$-almost every $x \in X$, we then have $|g(x)| \leq \lambda < \infty$, and hence $h (x) = f(x) - g(x) \geq f(x) - \lambda$. This implies $0 \leq {\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda) \leq h$ $\mu$-almost everywhere, and hence \[ \varrho^\ast (f) \leq \lambda + \big\| {\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda) \big\|_{\bd{L}^1} \leq \|g\|_{\bd{L}^\infty} + \|h\|_{\bd{L}^1}. \] Since this holds for all admissible choices of $g,h$, we get $\varrho^\ast (f) \leq \varrho(f)$. It remains to show that the infimum in the definition of $\varrho^\ast (f)$ is attained. Choose a sequence $(\lambda_n)_{n \in \mathbb{N}} \subset [0,\infty)$ satisfying $\Psi_f (\lambda_n) \to \varrho^\ast (f)$. Note that $0 \leq \lambda_n \leq \Psi_f (\lambda_n) \to \varrho^\ast(f) \leq \varrho(f) < \infty$, so that the sequence $(\lambda_n)_{n \in \mathbb{N}}$ is bounded. Therefore, there is a subsequence $(\lambda_{n_\ell})_{\ell \in \mathbb{N}}$ and some $\lambda \in [0,\infty)$ satisfying ${\lambda_{n_\ell} \to \lambda}$. By lower semicontinuity of $\Psi_f$, this implies \({ \varrho^\ast (f) \leq \Psi_f (\lambda) \leq {\displaystyle{\liminf_{\ell \to \infty}}} \, \Psi_f(\lambda_{n_\ell}) = \varrho^\ast(f) , }\) proving that the infimum is attained. {} \noindent \textbf{Ad (2):} Let $\varrho^{\natural} (f) := \inf_{\lambda \in [0,\infty) \cap \mathbb{Q}} \Psi_f (\lambda)$. By Part (1), we see $\varrho(f) = \varrho^\ast (f) \leq \varrho^{\natural}(f)$. To prove the converse estimate, we first show that $\Psi_f$ is right continuous. To see this, let $(\lambda_n)_{n \in \mathbb{N}} \subset [0,\infty)$ be a non-increasing sequence. Then $(f(x) - \lambda_n)_{n \in \mathbb{N}}$ is a non-decreasing sequence which converges pointwise to $f(x) - \lambda$, where $\lambda = \inf_{n \in \mathbb{N}} \lambda_n = \lim_{n \to \infty} \lambda_n$. Furthermore, we have ${\mathds{1}}_{X_{f,\lambda_n}} \leq {\mathds{1}}_{X_{f,\lambda_{n+1}}} \leq {\mathds{1}}_{X_{f,\lambda}}$. 
Finally, if $x \in X$ is such that $f(x) > \lambda$, then $f(x) > \lambda_n$ for all $n \geq n_x$ (for a suitable $n_x \in \mathbb{N}$), proving that ${\mathds{1}}_{X_{f,\lambda_n}} \nearrow {\mathds{1}}_{X_{f,\lambda}}$ pointwise. Overall, we see \( 0 \leq {\mathds{1}}_{X_{f,\lambda_n}} \cdot (f - \lambda_n) \nearrow {\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda) \), so that the monotone convergence theorem shows \( \|{\mathds{1}}_{X_{f,\lambda_n}} \cdot (f - \lambda_n)\|_{\bd{L}^1} \to \|{\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda)\|_{\bd{L}^1} \). In view of this, we easily see that $\Psi_f$ is right continuous. Now, let $\lambda \in [0,\infty)$. There is a non-increasing sequence $(\lambda_n)_{n \in \mathbb{N}} \subset [0,\infty) \cap \mathbb{Q}$ such that $\lambda_n \to \lambda$. Note that $\varrho^\natural (f) \leq \Psi_f (\lambda_n)$ for all $n \in \mathbb{N}$, and hence $\varrho^{\natural} (f) \leq \lim_{n \to \infty} \Psi_f(\lambda_n) = \Psi_f (\lambda)$. Since $\lambda \in [0,\infty)$ was arbitrary, this implies \( \varrho^{\natural} (f) \leq \inf_{\lambda \in [0,\infty)} \Psi_f(\lambda) = \varrho^\ast (f) = \varrho (f) . \) {} \noindent \textbf{Ad (3)} The claim is trivial in case of $\alpha = \infty$, so that we can assume $\alpha < \infty$. Part~(1) shows that there is some $\lambda \in [0,\infty)$ satisfying $\alpha = \lambda + \|{\mathds{1}}_{X_{|F|,\lambda}} \cdot (|F| - \lambda)\|_{\bd{L}^1}$. In particular, this implies that $2 \alpha \geq \alpha \geq \lambda$. Next, note that if $|F(x)| > 2 \alpha$, then $|F(x)| \leq 2 \cdot (|F(x)| - \alpha) \leq 2 \cdot (|F(x)| - \lambda)$. Therefore, \[ \big\| F \cdot {\mathds{1}}_{X_{|F|,2\alpha}} \big\|_{\bd{L}^1} \leq 2 \cdot \big\| {\mathds{1}}_{X_{|F|,2\alpha}} \cdot (|F| - \lambda) \big\|_{\bd{L}^1} \leq 2 \cdot \big\| {\mathds{1}}_{X_{|F|,\lambda}} \cdot (|F| - \lambda) \big\|_{\bd{L}^1} \leq 2 \alpha . \] The estimate $\|F \cdot (1-{\mathds{1}}_{X_{|F|,2\alpha}})\|_{\bd{L}^\infty} \leq 2 \alpha$ is trivial. \end{proof} As a first corollary of the preceding lemma, we can now show that $\varrho$ is a so-called \emph{function-norm} which satisfies the \emph{Fatou property}. Before we prove this, we recall the pertinent definitions for the convenience of the reader. \begin{definition}\label{def:FunctionNorm}(see \cite[§§ 63 and 65]{ZaanenIntegration}) Let $(X,\mathcal{F},\mu)$ be a $\sigma$-finite measure space. We denote by $\mathcal{M}^+$ the set of all equivalence classes of measurable functions $f : X \to [0,\infty]$, where two functions are equivalent if they agree $\mu$-almost everywhere. By the usual abuse of notation, we will often not distinguish between a function and its equivalence class. A map $\varrho : \mathcal{M}^+ \to [0,\infty]$ is called a \emph{function seminorm} if it satisfies the following properties: \begin{enumerate} \item $\varrho(f) = 0$ if $f \in \mathcal{M}^+$ with $f = 0$ almost everywhere; \item $\varrho(a \, f) = a \, \varrho(f)$ for all $f \in \mathcal{M}^+$ and $a \in [0,\infty)$; \item $\varrho (f + g) \leq \varrho(f) + \varrho(g)$ for all $f,g \in \mathcal{M}^+$; \item if $f,g \in \mathcal{M}^+$ satisfy $f \leq g$ almost everywhere, then $\varrho(f) \leq \varrho(g)$. \end{enumerate} A function seminorm is called a \emph{function norm} if it has the additional property that $f = 0$ almost everywhere for every $f \in \mathcal{M}^+$ with $\varrho(f) = 0$. 
A function seminorm $\varrho$ is said to have the \emph{Fatou property} if for every sequence $(f_n)_{n \in \mathbb{N}} \subset \mathcal{M}^+$ with $\liminf_{n \to \infty} \varrho(f_n) < \infty$, we have $\varrho(\liminf_{n \to \infty} f_n) \leq \liminf_{n \to \infty} \varrho(f_n)$. \end{definition} \begin{rem*} The definition of the Fatou property given above is not the one given in \cite{ZaanenIntegration}, but it is equivalent, as shown in \cite[§65, Theorem 3]{ZaanenIntegration}. \end{rem*} \begin{proposition}\label{prop:LebesgueSumFunctionNormFatouProperty} Let $(X,\mathcal{F},\mu)$ be a $\sigma$-finite measure space. The map $\varrho : \mathcal{M}^+ \to [0,\infty]$ introduced in Lemma~\ref{lem:LebesgueSumFunctionNorm} is a function norm which satisfies the Fatou property. \end{proposition} \begin{proof} The first two properties in Definition~\ref{def:FunctionNorm} are trivially satisfied. Next, if $f, g \in \mathcal{M}^+$ with $f \leq g$, then \( 0 \leq {\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda) \leq {\mathds{1}}_{X_{g,\lambda}} \cdot (g - \lambda) , \) and hence \( \lambda + \|{\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda)\|_{\bd{L}^1} \leq \lambda + \|{\mathds{1}}_{X_{g,\lambda}} \cdot (g - \lambda) \|_{\bd{L}^{1}} \) for all $\lambda \in [0,\infty)$. In view of the first part of Lemma~\ref{lem:LebesgueSumFunctionNorm}, this implies $\varrho(f) \leq \varrho(g)$. Furthermore, if $f,g \in \mathcal{M}^+$ and $f = f_1 + f_2$ as well as $g = g_1 + g_2$ for measurable functions $f_1, f_2, g_1, g_2 : X \to [0,\infty]$, then $f + g = (f_1 + g_1) + (f_2 + g_2)$. By definition of $\varrho$, this implies \[ \varrho(f+g) \leq \|f_1 + g_1\|_{\bd{L}^\infty} + \|f_2 + g_2\|_{\bd{L}^1} \leq \big( \|f_1\|_{\bd{L}^\infty} + \|f_2\|_{\bd{L}^1} \big) + \big( \|g_1\|_{\bd{L}^\infty} + \|g_2\|_{\bd{L}^1} \big). \] Since this holds for all admissible $f_1,f_2,g_1,g_2$, we get by definition of $\varrho$ that $\varrho(f+g) \leq \varrho(f) + \varrho(g)$. To verify that $\varrho$ is a function norm, let $f \in \mathcal{M}^+$ satisfy $\varrho(f) = 0$. By the first part of Lemma~\ref{lem:LebesgueSumFunctionNorm}, there is $\lambda \in [0,\infty)$ such that $0 = \varrho(f) = \lambda + \|{\mathds{1}}_{X_{f,\lambda}} \cdot (f - \lambda)\|_{\bd{L}^1}$. This implies $\lambda = 0$, and then $0 = \|f\|_{\bd{L}^1}$, so that we see $f = 0$ almost everywhere. {} Finally, we verify the Fatou property. Let $(f_n)_{n \in \mathbb{N}} \subset \mathcal{M}^+$ with $\theta := \liminf_{n \to \infty} \varrho(f_n) < \infty$. Set $f := \liminf_{n \to \infty} f_n$. Choose a subsequence $(f_{n_k})_{k \in \mathbb{N}}$ such that $\varrho(f_{n_k}) \to \theta$. The first part of Lemma~\ref{lem:LebesgueSumFunctionNorm} yields a sequence $(\lambda_k)_{k \in \mathbb{N}} \subset [0,\infty)$ satisfying \( \varrho(f_{n_k}) = \lambda_k + \|{\mathds{1}}_{f_{n_k} > \lambda_k} \cdot (f_{n_k} - \lambda_k)\|_{\bd{L}^1} \) for all $k \in \mathbb{N}$. In particular, $0 \leq \lambda_k \leq \varrho(f_{n_k}) \to \theta$, so that $(\lambda_k)_{k \in \mathbb{N}}$ is a bounded sequence. Thus, there is a subsequence $(\lambda_{k_\ell})_{\ell \in \mathbb{N}}$ and some $\lambda \in [0,\infty)$ satisfying $\lambda_{k_\ell} \to \lambda$. Note $f \leq \liminf_{\ell \to \infty} f_{n_{k_\ell}}$, and hence \( f - \lambda \leq \liminf_{\ell \to \infty} ( f_{n_{k_\ell}} - \lambda_{k_\ell} ) = {\raisebox{0.1cm}{$\displaystyle{\sup_{L \in \mathbb{N}}}$} \,\, \raisebox{0.05cm}{$\displaystyle{\inf_{\ell \geq L}}$}} \, (f_{n_{k_{\ell}}} - \lambda_{k_\ell}) \). 
Thus, for each $x \in X$ satisfying $f(x) - \lambda > 0$, there is $L_x \in \mathbb{N}$ such that $f_{n_{k_\ell}} (x) - \lambda_{k_\ell} \geq \frac{f(x) - \lambda}{2} > 0$ for all $\ell \geq L_x$. Overall, we see \({ {\mathds{1}}_{X_{f,\lambda}} (x) \cdot (f(x) - \lambda) \leq \liminf_{\ell \to \infty} \Big[ {\mathds{1}}_{f_{n_{k_\ell}} > \lambda_{k_\ell}} (x) \cdot \big( f_{n_{k_\ell}} (x) -\! \lambda_{k_\ell} \big) \! \Big] , }\) where all involved functions are non-negative. Therefore, Fatou's lemma and Part~(1) of Lemma~\ref{lem:LebesgueSumFunctionNorm} imply that \begin{align*} \varrho(f) & \leq \lambda + \| (f - \lambda) \cdot {\mathds{1}}_{X_{f,\lambda}}\|_{\bd{L}^1} \leq \lim_{\ell \to \infty} \lambda_{k_\ell} + \Big\| \liminf_{\ell \to \infty} \big[ (f_{n_{k_\ell}} - \lambda_{k_\ell}) \cdot {\mathds{1}}_{f_{n_{k_\ell}} > \lambda_{k_\ell}} \big] \Big\|_{\bd{L}^1} \\ & \leq \liminf_{\ell \to \infty} \Big( \lambda_{k_\ell} + \big\| (f_{n_{k_\ell}} - \lambda_{k_\ell}) \cdot {\mathds{1}}_{f_{n_{k_\ell}} > \lambda_{k_\ell}} \big\|_{\bd{L}^1} \Big) = \lim_{\ell \to \infty} \varrho(f_{n_{k_\ell}}) = \theta = \liminf_{n \to \infty} \varrho(f_n). \qedhere \end{align*} \end{proof} Up to now, we have shown that the norm defining the space $\bd{L}^1 + \bd{L}^\infty$ is a function norm satisfying the Fatou property. Our next goal is to construct a function-norm $\varrho_{\otimes}$ satisfying the Fatou property and such that $F \mapsto \varrho_{\otimes}(|F|)$ is equivalent to the norm defining the space ${\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$. For this, the following property will be crucial. \begin{lemma}\label{lem:LebesgueSumNormMeasurable} Let $(X, \mathcal{F}, \mu)$ be a $\sigma$-finite measure space, and let $(Y,\mathcal{G})$ be a measurable space. Let $\varrho$ be as defined in Lemma~\ref{lem:LebesgueSumFunctionNorm}. If $F : X \times Y \to [0,\infty]$ is measurable with respect to the product $\sigma$-algebra $\mathcal{F} \otimes \mathcal{G}$, then the map \[ Y \to [0,\infty], y \mapsto \varrho \big(F (\bullet, y) \big) \] is measurable. \end{lemma} \begin{proof} For each $\lambda \in [0,\infty)$, the map \[ Y \to [0,\infty], y \mapsto \Big\| {\mathds{1}}_{F (\bullet,y) > \lambda} \cdot \big( F(\bullet,y) - \lambda \big) \Big\|_{\bd{L}^1} = \int_X {\mathds{1}}_{F(x,y) > \lambda} \cdot \bigl( F(x,y) - \lambda \bigr) \, d \mu (x) \] is measurable as a consequence of the Fubini-Tonelli theorem. Now, the formula \[ \varrho \big( F(\bullet,y) \big) = \inf_{\lambda \in [0,\infty) \cap \mathbb{Q}} \Big[ \lambda + \big\| {\mathds{1}}_{F(\bullet,y) > \lambda} \cdot \big( F(\bullet,y) - \lambda \big) \big\|_{\bd{L}^1} \Big] \] given in the second part of Lemma~\ref{lem:LebesgueSumFunctionNorm} shows that $y \mapsto \varrho \big( F(\bullet,y) \big)$ is measurable as the infimum of a countable family of non-negative measurable functions. \end{proof} In view of Lemma~\ref{lem:LebesgueSumNormMeasurable}, we see that if $(X, \mathcal{F}, \mu)$ and $(Y, \mathcal{G}, \nu)$ are $\sigma$-finite measure spaces, and if $F : X \times Y \to [0,\infty]$ is measurable, then the map $y \mapsto \varrho \big( F(\bullet,y) \big)$ is a measurable non-negative function to which we can again apply $\varrho$, which is as defined in Lemma~\ref{lem:LebesgueSumFunctionNorm}, but now on $Y$ instead of on $X$. Thus, the following definition makes sense. \begin{definition}\label{def:IteratedLebesgueSumNorm} Let $(X, \mathcal{F}, \mu)$ and $(Y, \mathcal{G}, \nu)$ be $\sigma$-finite measure spaces. 
Let $\varrho_X$ and $\varrho_Y$ be as defined in Lemma~\ref{lem:LebesgueSumFunctionNorm}, applied to the measure spaces $(X,\mathcal{F},\mu)$ or $(Y,\mathcal{G},\nu)$, respectively. For every measurable function $F : X \times Y \to [0,\infty]$, define \[ \varrho_{\otimes} (F) := \varrho_Y \Big( y \mapsto \varrho_X \big( F(\bullet,y) \big) \Big) \in [0,\infty]. \] \end{definition} \begin{lemma}\label{lem:IteratedLebesgueSumNormIsFunctionNorm} Let $(X, \mathcal{F}, \mu)$ and $(Y, \mathcal{G}, \nu)$ be $\sigma$-finite measure spaces. The map $\varrho_{\otimes}$ introduced in Definition~\ref{def:IteratedLebesgueSumNorm} is a function norm on $(X \times Y, \mathcal{F} \otimes \mathcal{G}, \mu \otimes \nu)$ which satisfies the Fatou property. \end{lemma} \begin{proof} The first two properties in Definition~\ref{def:FunctionNorm} are clear. Next, if ${F,G : X \times Y \to [0,\infty]}$ are measurable, then \( \varrho_X \big( [F+G](\bullet,y) \big) \leq \varrho_X \big( F(\bullet,y) \big) + \varrho_X \big( G(\bullet,y) \big) \) for all $y \in Y$, since $\varrho_X$ is a function norm. By the monotonicity and subadditivity of $\varrho_Y$, this entails \begin{align*} \varrho_{\otimes} (F + G) & = \varrho_Y \Big( y \mapsto \varrho_X \big( [F+G](\bullet,y) \big) \Big) \\ & \leq \varrho_Y \Big( y \mapsto \varrho_X \big( F(\bullet,y) \big) \Big) + \, \varrho_Y \Big( y \mapsto \varrho_X \big( G(\bullet,y) \big) \Big) = \varrho_{\otimes}(F) + \varrho_{\otimes} (G), \end{align*} as desired. The monotonicity of $\varrho_{\otimes}$ follows easily from that of $\varrho_X$ and $\varrho_Y$. Next, if $\varrho_{\otimes} (F) = 0$, then since $\varrho_Y$ is a function-norm, there is a $\nu$-null-set $N \subset Y$ such that $\varrho_X (F(\bullet,y)) = 0$ for all $y \in Y \setminus N$. Since $\varrho_X$ is a function-norm, this implies $F(\bullet,y) = 0$ $\mu$-almost everywhere for $y \in Y \setminus N$. Hence, Tonelli's theorem shows $\|F\|_{\bd{L}^1} = \int_Y \int_X F(x,y) d \mu(x) d \nu (y) = 0$, and hence $F = 0$ almost everywhere with respect to $\mu \otimes \nu$. It remains to verify the Fatou property. If $(F_n)_{n \in \mathbb{N}}$ is a sequence of measurable functions $F_n : X \times Y \to [0,\infty]$ and $F = \liminf_{n \to \infty} F_n$, then $F(\bullet,y) = \liminf_{n \to \infty} F_n (\bullet,y)$, so that the Fatou property for $\varrho_X$ implies $\varrho_X \big( F(\bullet,y) \big) \leq \liminf_{n \to \infty} \varrho_X \big( F_n (\bullet, y) \big)$ for all $y \in Y$. Strictly speaking, this only follows from the Fatou property if the right-hand side is finite; but otherwise the estimate is trivially satisfied. By the monotonicity and the Fatou property of $\varrho_Y$, we thus see \begin{align*} \varrho_\otimes (F) & = \varrho_Y \Big( y \mapsto \varrho_X \big( F(\bullet,y) \big) \Big) \leq \varrho_Y \Big( y \mapsto \liminf_{n \to \infty} \varrho_X \big( F_n (\bullet,y) \big) \Big) \\ & \leq \liminf_{n \to \infty} \varrho_Y \Big( y \mapsto \varrho_X \big( F_n (\bullet,y) \big) \Big) = \liminf_{n \to \infty} \varrho_\otimes (F_n). \qedhere \end{align*} \end{proof} The following proposition shows that the norm $F \mapsto \varrho_{\otimes}(|F|)$ is equivalent to the defining norm of the space $\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}$. \begin{proposition}\label{prop:IteratedFunctionNormEquivalentToSumNorm} Let $(X, \mathcal{F}, \mu)$ and $(Y, \mathcal{G}, \nu)$ be $\sigma$-finite measure spaces. 
With $\varrho_{\otimes}$ as in Definition~\ref{def:IteratedLebesgueSumNorm} and $\| \bullet \|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$ as introduced in Definition~\ref{def:SumAndIntersectionSpaces}, we then have \[ \frac{1}{16} \cdot \|F\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} \leq \varrho_{\otimes} (|F|) \leq \|F\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} \] for each measurable $F : X \times Y \to \mathbb{C}$. \end{proposition} \begin{rem*} In the terminology of solid function spaces, the proposition shows (in combination with Lemma~\ref{lem:IteratedLebesgueSumNormIsFunctionNorm}) that the space ${\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$ with its canonical norm satisfies the \emph{weak Fatou property}; see \mbox{\cite[§ 65]{ZaanenIntegration}} for the definition. Here, we would like to remark that there is a characterization of (possibly infinite) families $(X_i)_{i \in I}$ of solid Banach spaces for which the sum $\sum_{i \in I} X_i$ with its natural norm satisfies the \emph{Fatou property}; see \cite{BanachLatticeSumsFatouProperty}. This characterization, however, is quite technical, and we were unable to verify it in our setting. Thus, although we know that $\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}$ with its natural norm satisfies the \emph{weak} Fatou property, we could not confirm whether it actually satisfies the Fatou property as well. \end{rem*} \begin{proof} We first show $\varrho_{\otimes} (|F|) \leq \|F\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$, which is trivial if the right-hand side is infinite. Thus, we can assume $\|F\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} < \infty$. Let $F_1,\dots,F_4 : X \times Y \to \mathbb{C}$ be measurable with $F = F_1 + \cdots + F_4$ and such that \({ \|F_1\|_{\bd{L}^1} + \|F_2\|_{\bd{L}^\infty} + \|F_3\|_{\bd{L}^{1,\infty}} + \|F_4\|_{\bd{L}^{\infty,1}} < \infty. }\) With $F_y^{(1)} := F_2 (\bullet,y) + F_4 (\bullet,y)$ and $F_y^{(2)}:= F_1 (\bullet,y) + F_3(\bullet,y)$, we have \({ F(\bullet,y) = F_y^{(1)} + F_y^{(2)} , }\) and therefore $\big| F(\bullet, y) \big| \leq \big| F_y^{(1)} \big| + \big| F_y^{(2)} \big|$. Using the monotonicity and the definition of $\varrho_X$ (see Lemma~\ref{lem:LebesgueSumFunctionNorm}), this implies for each $y \in Y$ that \begin{align*} \varrho_X \big( |F(\bullet,y)| \big) & \leq \varrho_X \big( \, \big| F_y^{(1)} \big| + \big| F_y^{(2)} \big| \, \big) \leq \big\| F_y^{(1)} \big\|_{\bd{L}^\infty} + \big\| F_y^{(2)} \big\|_{\bd{L}^1} \\ & \leq \big(\underbrace{ \| F_2(\bullet, y) \|_{\bd{L}^\infty} + \| F_3 (\bullet, y) \|_{\bd{L}^1} }_{\textstyle =:G_1 (y)}\big) + \big(\underbrace{ \| F_1(\bullet, y) \|_{\bd{L}^1} + \| F_4 (\bullet, y) \|_{\bd{L}^{\infty}} }_{\textstyle =:G_2 (y)}\big). \end{align*} By using the monotonicity of $\varrho_Y$ and the definition of $\varrho_Y$, we finally arrive at \begin{align*} \varrho_{\otimes} (|F|) & = \varrho_Y \Big( y \mapsto \varrho_X \big( |F(\bullet,y)| \big) \Big) \leq \varrho_Y \big( G_1 + G_2 \big) \\ & \leq \|G_1\|_{\bd{L}^\infty} + \|G_2\|_{\bd{L}^1} \leq \|F_2\|_{\bd{L}^\infty} + \|F_3\|_{\bd{L}^{1,\infty}} + \|F_1\|_{\bd{L}^1} + \|F_4\|_{\bd{L}^{\infty,1}} . \end{align*} Since this holds for all admissible $F_1,\dots,F_4$, we see $\varrho_{\otimes} (|F|) \leq \|F\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$. 
{} We now prove $\|F\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} \leq 16 \, \varrho_{\otimes} (|F|)$. In case of $\varrho_{\otimes}(|F|) = \infty$ this is trivial, so that we can assume $\alpha := \varrho_{\otimes}(|F|) \in [0,\infty)$. Define $G : Y \to [0,\infty], y \mapsto \varrho_X \big( |F(\bullet,y)| \big)$ and note $\alpha = \varrho_Y (G)$. Note that $G$ is measurable by Lemma~\ref{lem:LebesgueSumNormMeasurable}. Let \[ A := \big\{ y \in Y \colon G(y) > 2 \alpha \big\} \quad \text{and} \quad B := \big\{ (x,y) \in X \times Y \colon |F(x,y)| > 2 G(y) \big\}. \] Finally, define $F_1,\dots,F_4 : X \times Y \to \mathbb{C}$ by \[ F_1 (x,y) := F(x,y) \cdot {\mathds{1}}_{A}(y) \cdot {\mathds{1}}_{B}(x,y), \qquad \quad F_2 (x,y) := F(x,y) \cdot {\mathds{1}}_{A^c}(y) \cdot {\mathds{1}}_{B^c}(x,y) \] and \[ F_3 (x,y) := F(x,y) \cdot {\mathds{1}}_{A^c}(y) \cdot {\mathds{1}}_{B}(x,y), \qquad \quad F_4 (x,y) := F(x,y) \cdot {\mathds{1}}_{A}(y) \cdot {\mathds{1}}_{B^c}(x,y). \] We clearly have $F = F_1 + \cdots + F_4$. Furthermore, by the last part of Lemma~\ref{lem:LebesgueSumFunctionNorm}, and since $G(y) = \varrho_X \big( |F(\bullet,y)| \big)$ and $\alpha = \varrho_Y (G)$, we see the following: \begin{enumerate} \item \( \|F_1 (\bullet,y)\|_{\bd{L}^1} = {\mathds{1}}_A (y) \cdot \big\| F(\bullet,y) \cdot {\mathds{1}}_{|F(\bullet,y)| > 2 G(y)} \big\|_{\bd{L}^1} \leq {\mathds{1}}_A (y) \cdot 2 \varrho_X \big( |F(\bullet,y)| \big) = {\mathds{1}}_A (y) \cdot 2 G(y) \), and hence \( \|F_1\|_{\bd{L}^1(\mu \otimes \nu)} \leq 2 \| G \cdot {\mathds{1}}_A \|_{\bd{L}^1 (\nu)} = 2 \| G \cdot {\mathds{1}}_{G > 2 \alpha} \|_{\bd{L}^1(\nu)} \leq 4 \varrho_Y (G) = 4 \, \varrho_{\otimes} (|F|) \). \item \( |F_2 (x,y)| \leq 2 G(y) \, {\mathds{1}}_{A^c} (y) \leq 4 \, \alpha = 4 \, \varrho_{\otimes} (|F|) \) for all $(x,y) \in X \times Y$, which shows that $\|F_2\|_{\bd{L}^\infty} \leq 4 \, \varrho_{\otimes} (|F|)$. \item Similar to the estimate for $\|F_1\|_{\bd{L}^1}$, we see that \[ \quad \qquad \|F_3 (\bullet,y)\|_{\bd{L}^1} \leq {\mathds{1}}_{A^c}(y) \cdot \| F(\bullet,y) \cdot {\mathds{1}}_{|F(\bullet,y)| > 2 G(y)} \|_{\bd{L}^1} \leq {\mathds{1}}_{A^c}(y) \cdot 2 G(y) \leq 4 \, \alpha = 4 \, \varrho_{\otimes} (|F|) \] for all $y \in Y$, and hence $\|F_3\|_{\bd{L}^{1,\infty}} \leq 4 \, \varrho_{\otimes}(|F|)$. \item $|F_4(x,y)| \leq 2 G(y) \cdot {\mathds{1}}_A (y)$ for all $x \in X$ and $y \in Y$. From this estimate, it follows that \( \|F_4\|_{\bd{L}^{\infty,1}} \leq 2 \, \| G \cdot {\mathds{1}}_{A}\|_{\bd{L}^1} \leq 4 \, \varrho_{\otimes} (|F|) \). Here, the last step was justified in the estimate of $\|F_1\|_{\bd{L}^1}$ above. \end{enumerate} Overall, we see \( \|F\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} \leq \|F_1\|_{\bd{L}^1} + \|F_2\|_{\bd{L}^\infty} + \|F_3\|_{\bd{L}^{1,\infty}} + \|F_4\|_{\bd{L}^{\infty,1}} \leq 16 \, \varrho_{\otimes} (|F|) \), which completes the proof. \end{proof} \subsection{Proof of Lemma \ref{lem:CartesianProductSumNorm}} \label{sub:CartesianProductSumNormProof} We will now use Lemma~\ref{lem:LebesgueSumFunctionNorm} and Proposition~\ref{prop:IteratedFunctionNormEquivalentToSumNorm} to prove Lemma~\ref{lem:CartesianProductSumNorm}. 
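Before giving the proof, we include a small numerical sketch that may help to illustrate the two ingredients used below; it is not part of the argument. Assuming finite, discrete measure spaces, the quantity $\varrho$ from Lemma~\ref{lem:LebesgueSumFunctionNorm} reduces to a one-dimensional minimization over the values of the function (since the objective is convex and piecewise linear in $\lambda$), and the iterated quantity $\varrho_{\otimes}$ of the indicator of a rectangle factorizes exactly as in the proof below. The helper names (\texttt{rho}, \texttt{rho\_tensor}) are ad hoc.
\begin{verbatim}
import numpy as np

def rho(f, mu):
    # rho(f) = min_{lambda >= 0} [ lambda + || (f - lambda)_+ ||_{L^1(mu)} ],
    # cf. Lemma lem:LebesgueSumFunctionNorm.  On a finite discrete measure
    # space the objective is convex and piecewise linear in lambda, so it
    # suffices to test lambda in {0} together with the values of f.
    f, mu = np.asarray(f, float), np.asarray(mu, float)
    candidates = np.concatenate(([0.0], f))
    return min(lam + np.sum(np.maximum(f - lam, 0.0) * mu) for lam in candidates)

def rho_tensor(F, mu1, mu2):
    # Iterated norm rho_otimes(F) = rho_Y( y -> rho_X( F(., y) ) ),
    # for a matrix F with F[i, j] = F(x_i, y_j).
    inner = np.array([rho(F[:, j], mu1) for j in range(F.shape[1])])
    return rho(inner, mu2)

rng = np.random.default_rng(0)
mu1, mu2 = rng.uniform(0.1, 2.0, 5), rng.uniform(0.1, 2.0, 7)
V = np.array([1, 1, 0, 1, 0], float)          # indicator of V in X_1
W = np.array([0, 1, 1, 0, 0, 1, 0], float)    # indicator of W in X_2
F = np.outer(V, W)                            # indicator of V x W
muV, muW = np.sum(mu1 * V), np.sum(mu2 * W)

lhs = rho_tensor(F, mu1, mu2)
print(lhs, min(1.0, muV) * min(1.0, muW))            # agree up to rounding
print(lhs >= min(1.0, muV, muW, muV * muW) - 1e-12)  # bound from the lemma
\end{verbatim}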
\begin{proof}[Proof of Lemma~\ref{lem:CartesianProductSumNorm}] With the function norm $\varrho_{\otimes}$ introduced in Definition~\ref{def:IteratedLebesgueSumNorm}, Proposition~\ref{prop:IteratedFunctionNormEquivalentToSumNorm} shows that $\| {\mathds{1}}_{V \times W} \|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} \geq \varrho_{\otimes} ({\mathds{1}}_{V \times W})$. Next, with $\varrho$ as in Lemma~\ref{lem:LebesgueSumFunctionNorm}, and writing $\varrho_{X_1}$ and $\varrho_{X_2}$ to indicate the space on which $\varrho$ acts, a direct computation shows that \[ \varrho_{\otimes} ({\mathds{1}}_{V \times W}) = \varrho_{X_2} \Big( y \mapsto \varrho_{X_1} \big( {\mathds{1}}_{V \times W} (\bullet, y) \big) \Big) = \varrho_{X_2} \Big( y \mapsto {\mathds{1}}_W (y) \cdot \varrho_{X_1} ({\mathds{1}}_V) \Big) = \varrho_{X_1} ({\mathds{1}}_V) \cdot \varrho_{X_2} ({\mathds{1}}_W) . \] To complete the proof, it therefore suffices to show that $\varrho_{X_1} ({\mathds{1}}_V) \geq \min\{ 1, \mu_1(V) \}$ and likewise $\varrho_{X_2} ({\mathds{1}}_W) \geq \min\{ 1, \mu_2(W) \}$, since this implies the claimed estimate \begin{align*} \| {\mathds{1}}_{V \times W} \|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} & \geq \varrho_{\otimes} ({\mathds{1}}_{V \times W}) = \varrho_{X_1} ({\mathds{1}}_V) \cdot \varrho_{X_2} ({\mathds{1}}_W) \\ & \geq \min \big\{ 1, \mu_1 (V) \big\} \cdot \min \big\{ 1, \mu_2 (W) \big\} \\ & \geq \min \big\{ 1, \mu_1 (V), \mu_2 (W), \mu_1 (V) \cdot \mu_2 (W) \big\} \\ & = \min \big\{ 1, \mu_1 (V), \mu_2 (W), \mu(V \times W) \big\} . \end{align*} We only prove that $\varrho_{X_1} ({\mathds{1}}_V) \geq \min\{ 1, \mu_1(V) \}$, since $\varrho_{X_2} ({\mathds{1}}_W) \geq \min\{ 1, \mu_2(W) \}$ can be shown with the same arguments. Let $V_{\lambda} := \{x_1\in X_1 \colon {\mathds{1}}_V(x_1) > \lambda \}$, for $\lambda\geq 0$. Recall from Lemma~\ref{lem:LebesgueSumFunctionNorm} that \[ \varrho_{X_1} ({\mathds{1}}_V) = \inf_{\lambda \in [0,\infty)} \Big[ \lambda + \| {\mathds{1}}_{V_{\lambda}} \cdot ({\mathds{1}}_V - \lambda) \|_{\bd{L}^1} \Big] . \] Now, in case of $\lambda \geq 1$, we trivially have \( \lambda + \| {\mathds{1}}_{V_{\lambda}} \cdot ({\mathds{1}}_V - \lambda) \|_{\bd{L}^1} \geq \lambda \geq 1 \geq \min \{ 1, \mu_1 (V) \}. \) Finally, if $0 \leq \lambda < 1$ then ${\mathds{1}}_{V_{\lambda}} = {\mathds{1}}_V$, and hence \( {\mathds{1}}_{V_{\lambda}} \cdot ({\mathds{1}}_V - \lambda) = {\mathds{1}}_V \cdot ({\mathds{1}}_V - \lambda) = (1 - \lambda) \cdot {\mathds{1}}_V , \) which implies that \[ \lambda + \big\| {\mathds{1}}_{V_{\lambda}} \cdot ({\mathds{1}}_V - \lambda) \big\|_{\bd{L}^1} = \lambda + (1 - \lambda) \cdot \big\| {\mathds{1}}_V \big\|_{\bd{L}^1} = \lambda + (1 - \lambda) \cdot \mu_1 (V) \geq \min \{ 1, \mu_1 (V) \}. \] In combination, the two cases show that indeed $\varrho_{X_1}({\mathds{1}}_V) \geq \min\{ 1, \mu_1 (V) \}$. \end{proof} \subsection{Proving Theorem~\ref{thm:DualOfIntersection} using the Lorentz-Luxemburg representation theorem} \label{sub:DualOfIntersectionProof} We now proceed with the proof of Theorem~\ref{thm:DualOfIntersection}, based on the Lorentz-Luxemburg representation theorem. To that end, we first recall the concept of associate function seminorms. \begin{definition}\label{def:AssociateNorm}(see \cite[§ 68]{ZaanenIntegration}) Let $(X,\mathcal{F},\mu)$ be a $\sigma$-finite measure space, and let $\varrho$ be a function seminorm on $X$. 
The \emph{associate function seminorm} $\varrho'$ of $\varrho$ is defined as \[ \varrho ' (f) := \sup \Big\{ \int_X f \cdot g \, d \mu \quad \colon \quad g : X \to [0,\infty] \text{ measurable and } \varrho(g) \leq 1 \Big\} \in [0, \infty] \] for $f : X \to [0,\infty]$ measurable. \end{definition} As shown in \cite[Theorem 1 in § 68]{ZaanenIntegration}, $\varrho'$ is always a function seminorm which satisfies the Fatou property. We now compute---up to a constant factor---the associate seminorm $\varrho_{\otimes}'$ of the function norm $\varrho_{\otimes}$. \begin{lemma}\label{lem:LebesgueSumNormAssociateNorm} Let $(X,\mathcal{F},\mu)$ and $(Y,\mathcal{G},\nu)$ be $\sigma$-finite measure spaces, and let $\varrho_{\otimes}$ be as in Definition~\ref{def:IteratedLebesgueSumNorm}. For $F : X \times Y \to [0,\infty]$ measurable, let us write \[ \|F\|_{\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}} := \IntersectionNorm{F}. \] Then, the associate seminorm $\varrho_{\otimes}'$ satisfies \[ \|F\|_{\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}} \leq \varrho_{\otimes}' (F) \leq 16 \cdot \|F\|_{\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}} \] for any measurable function $F : X \times Y \to [0,\infty]$. In particular, $\varrho_{\otimes}'$ is a function norm, not just a function seminorm. \end{lemma} \begin{proof} We first prove the right-hand estimate. If the right-hand side is infinite, this estimate is trivial; hence, we can assume that ${\theta := \|F\|_{\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}} < \infty}$. Let $G : X \times Y \to [0,\infty]$ be measurable with $\varrho_{\otimes} (G) \leq 1$. Since $\varrho_{\otimes}$ is a function norm (see Lemma~\ref{lem:IteratedLebesgueSumNormIsFunctionNorm}), this implies that ${G < \infty}$ almost everywhere; see \cite[Theorem~1 in § 63]{ZaanenIntegration}. Hence, we can assume ${G < \infty}$ everywhere. Let $\varepsilon > 0$. By Proposition~\ref{prop:IteratedFunctionNormEquivalentToSumNorm}, there exist measurable functions ${G_1,\dots,G_4 : X \times Y \to \mathbb{C}}$ such that $G = G_1 + \dots + G_4$ with \( \|G_1\|_{\bd{L}^1} + \|G_2\|_{\bd{L}^\infty} + \|G_3\|_{\bd{L}^{1,\infty}} + \|G_4\|_{\bd{L}^{\infty,1}} \leq 16 + \varepsilon \). Therefore, Hölder's inequality for the mixed-norm Lebesgue spaces (see \cite[Equation~(1) in Section~2]{MixedLpSpaces}) shows that \begin{align*} \int_{X \times Y} \!\!\! F \!\cdot\! G \, d (\mu \otimes \nu) & \leq \sum_{j=1}^4 \int_{X \times Y} F \cdot |G_j| \, d (\mu \otimes \nu) \\ & \leq \|F\|_{\bd{L}^\infty} \, \|G_1\|_{\bd{L}^1} + \|F\|_{\bd{L}^1} \, \|G_2\|_{\bd{L}^\infty} + \|F\|_{\bd{L}^{\infty,1}} \, \|G_3\|_{\bd{L}^{1,\infty}} + \|F\|_{\bd{L}^{1,\infty}} \, \|G_4\|_{\bd{L}^{\infty,1}} \\ & \leq \theta \cdot (\|G_1\|_{\bd{L}^1} + \|G_2\|_{\bd{L}^\infty} + \|G_3\|_{\bd{L}^{1,\infty}} + \|G_4\|_{\bd{L}^{\infty,1}}) \\ & \leq (16 + \varepsilon) \cdot \| F \|_{\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}} . \end{align*} Since $\varepsilon > 0$ was arbitrary, and by definition of the associate norm, this proves the right-hand estimate. {} To prove the left-hand estimate, we can assume that $\theta := \varrho_{\otimes}' (F) < \infty$. First, note that if $G \in \bd{L}^1 (\mu \otimes \nu)$ with $\|G\|_{\bd{L}^1} \leq 1$, then $\varrho_{\otimes}(|G|) \leq 1$, by Proposition~\ref{prop:IteratedFunctionNormEquivalentToSumNorm}.
Therefore, the definition of $\varrho_{\otimes}'$ yields \( \big| \int_{X \times Y} F \cdot G \, d (\mu \otimes \nu) \big| \leq \int_{X \times Y} F \cdot |G| \, d (\mu \otimes \nu) \leq \varrho_{\otimes}' (F) \). By the dual characterization of the $\bd{L}^\infty$-norm (see \cite[Theorem~6.14]{FollandRA}), this implies $\|F\|_{\bd{L}^\infty} \leq \varrho_{\otimes}' (F) < \infty$. In the same way (taking $G$ such that $\|G\|_{\bd{L}^\infty} \leq 1$), we also see $\|F\|_{\bd{L}^1} \leq \varrho_{\otimes}' (F) < \infty$. In particular, this implies that $F$ is finite almost everywhere. Finally, note that if $G \in \bd{L}^{1,\infty} (\mu \otimes \nu)$ with $\|G\|_{\bd{L}^{1,\infty}} \leq 1$, then $\varrho_{\otimes}(|G|) \leq 1$, by Proposition~\ref{prop:IteratedFunctionNormEquivalentToSumNorm}. Therefore, \( \big| \int_{X \times Y} F \cdot G \, d (\mu \otimes \nu) \big| \leq \int_{X \times Y} F \cdot |G| \, d (\mu \otimes \nu) \leq \varrho_{\otimes}' (F) \). By the dual characterization of the $\bd{L}^{\infty,1}$-norm (see Theorem~\ref{thm:MixedNormDuality}, or \cite[Theorem~2 in Section~2]{MixedLpSpaces}), this implies ${\|F\|_{\bd{L}^{\infty,1}} \leq \varrho_{\otimes} '(F) < \infty}$. Again, we get in the same way (by taking $G$ such that $\|G\|_{\bd{L}^{\infty,1}} \leq 1$) that $\|F\|_{\bd{L}^{1,\infty}} \leq \varrho_{\otimes}' (F) < \infty$. Overall, these considerations establish the left-hand estimate. {} The left-hand estimate also shows that $\varrho_{\otimes}'$ is a function norm, since if $\varrho_{\otimes}'(F) = 0$, then in particular $\|F\|_{\bd{L}^\infty} = 0$, and thus $F = 0$ almost everywhere. \end{proof} We will derive Theorem~\ref{thm:DualOfIntersection} as a consequence of the preceding lemma and of three beautiful results from the theory of Köthe spaces that we now recall. \begin{theorem}\label{thm:LorentzLuxemburg} (Lorentz-Luxemburg representation theorem; see \cite[Theorem~1 in § 71]{ZaanenIntegration}) Let $(X,\mathcal{F},\mu)$ be a $\sigma$-finite measure space, and let $\varrho$ be a function seminorm on $X$ that satisfies the Fatou property. Then $\varrho = \varrho''$; that is, $\varrho$ coincides with the associate seminorm of the associate seminorm $\varrho'$ of $\varrho$. \end{theorem} \begin{proposition}\label{prop:SecondDualEasyCharacterization} Let $(X, \mathcal{F}, \mu)$ be a $\sigma$-finite measure space, and let $\varrho$ be a function seminorm on $X$. If the associate seminorm $\varrho'$ of $\varrho$ is in fact a function \emph{norm}, then we have the following equivalence for every measurable function $f : X \to \mathbb{C}$: \[ \varrho '' (|f|) < \infty \quad \Longleftrightarrow \quad \forall \, g : X \to \mathbb{C} \text{ measurable with } \varrho' (|g|) < \infty: \int_X |f \cdot g| \, d \mu < \infty. \] \end{proposition} \begin{proof} Since $\varrho'$ is a function norm, \cite[Theorem~4 in § 68]{ZaanenIntegration} shows that $\varrho$ is \emph{saturated}. Therefore, \cite[Corollary in § 71]{ZaanenIntegration} yields the claim. \end{proof} \begin{proposition}\label{prop:FunctionSpaceCompleteness} (consequence of \cite[Theorem~1 in § 65]{ZaanenIntegration}) Let $(X,\mathcal{F},\mu)$ be a $\sigma$-finite measure space, and let $\varrho$ be a function \emph{norm} on $X$ which satisfies the Fatou property. Then the space \[ L_\varrho := \big\{ f : X \to \mathbb{C} \quad \colon \quad f \text{ measurable and } \varrho(|f|) < \infty \big\} \] is a Banach space when equipped with the norm $\|f\|_{L_\varrho} := \varrho(|f|)$. 
As usual, one identifies two elements of $L_\varrho$ if they agree almost everywhere. \end{proposition} We can finally prove Theorem~\ref{thm:DualOfIntersection}. \begin{proof}[Proof of Theorem~\ref{thm:DualOfIntersection}] We first show that $(\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}, \|\bullet\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}})$ is a Banach space. It is not hard to verify that $\| \bullet \|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$ is a seminorm. Further, Proposition~\ref{prop:IteratedFunctionNormEquivalentToSumNorm} states (in the language of Proposition~\ref{prop:FunctionSpaceCompleteness}) that ${\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1} = L_{\varrho_{\otimes}}}$, and that the (semi)-norms $\|\bullet\|_\ast := \varrho_{\otimes}(|\bullet|)$ and $\|\bullet\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$ are equivalent. Furthermore, Lemma~\ref{lem:IteratedLebesgueSumNormIsFunctionNorm} shows that the function \emph{semi}-norm $\varrho_{\otimes}$ is in fact a function \emph{norm}, and that $\varrho_{\otimes}$ satisfies the Fatou property. In particular, this implies that $\| \bullet \|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}}$ is definite, and hence a norm. Since the function norm $\varrho_{\otimes}$ satisfies the Fatou property, Proposition~\ref{prop:FunctionSpaceCompleteness} shows that $(L_{\varrho_{\otimes}}, \|\bullet\|_\ast)$ is a Banach space. Hence so is $(\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}, \|\bullet\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}})$. {} Next, since $\varrho_{\otimes}$ satisfies the Fatou property (Lemma~\ref{lem:IteratedLebesgueSumNormIsFunctionNorm}), the Lorentz-Luxemburg representation theorem (Theorem~\ref{thm:LorentzLuxemburg}) shows that $\varrho_{\otimes}'' = \varrho_{\otimes}$. Furthermore, we saw above that $\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1} = L_{\varrho_{\otimes}}$. Finally, Lemma~\ref{lem:LebesgueSumNormAssociateNorm} shows that $\varrho_{\otimes}'$ is a function \emph{norm}. Therefore, Proposition~\ref{prop:SecondDualEasyCharacterization} shows for every measurable function $F : X_1 \times X_2 \to \mathbb{C}$ that \begin{align*} F \! \in \! \bd{L}^1 \! + \! \bd{L}^\infty \! + \! \bd{L}^{1,\infty} \! + \! \bd{L}^{\infty,1} & \Longleftrightarrow F \in L_{\varrho_{\otimes}} \quad \overset{\varrho_{\otimes} = \varrho_{\otimes}''}{\Longleftrightarrow} \quad \varrho_{\otimes} '' (|F|) < \infty \\ & \Longleftrightarrow F \!\cdot\! G \in \bd{L}^1 (\mu_1 \!\otimes\! \mu_2) \quad \forall \, G \!:\! X_1 \!\times\! X_2 \to \mathbb{C} \text{ meas.~with } \varrho_{\otimes}' (|G|) < \infty \\ ({\scriptstyle{\text{Lemma~}\ref{lem:LebesgueSumNormAssociateNorm}}}) & \Longleftrightarrow F \cdot G \in \bd{L}^1 (\mu_1 \otimes \mu_2) \quad \forall \, G \in \bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}. \end{align*} Here, we used in the last step that Lemma~\ref{lem:LebesgueSumNormAssociateNorm} shows that $\varrho_{\otimes}' (|G|) < \infty$ if and only if ${|G| \in \bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}}$ if and only if ${G \in \bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}}$ (if $G$ is measurable). {} It remains to prove the norm equivalence. 
To this end, note as a consequence of Proposition~\ref{prop:IteratedFunctionNormEquivalentToSumNorm}, Lemma~\ref{lem:IteratedLebesgueSumNormIsFunctionNorm}, Theorem~\ref{thm:LorentzLuxemburg}, and Lemma~\ref{lem:LebesgueSumNormAssociateNorm} that \begin{align*} & \|F\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} \leq 16 \, \varrho_{\otimes} (|F|) = 16 \, \varrho_{\otimes} '' (|F|) \\ & = 16 \, \sup \Big\{ \int_{X_1 \times X_2} \!\!\! |F| \cdot G \, d(\mu_1 \otimes \mu_2) \, \Big| \, G : X_1 \!\times\! X_2 \to [0,\infty] \text{ meas.~and } \varrho_{\otimes}' (G) \leq 1 \Big\} \\ ({\scriptstyle{\text{Lemma}~\ref{lem:LebesgueSumNormAssociateNorm}}}) & \leq 16 \, \sup \Big\{ \int_{X_1 \times X_2} |F \cdot G| \, d(\mu_1 \otimes \mu_2) \, \Big| \, \begin{array}{l} G \in \bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1} \\ \text{and } \|G\|_{\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}} \leq 1 \end{array} \Big\}, \end{align*} which is precisely the first estimate claimed in Theorem~\ref{thm:DualOfIntersection}. In a similar way, we get \begin{align*} & \qquad \sup \Big\{ \int_{X_1 \times X_2} |F \cdot G| \, d(\mu_1 \otimes \mu_2) \, \Big| \, \begin{array}{l} G \in \bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1} \\ \text{and } \|G\|_{\bd{L}^1 \cap \bd{L}^\infty \cap \bd{L}^{1,\infty} \cap \bd{L}^{\infty,1}} \leq 1 \end{array} \Big\} \\ ({\scriptstyle{\text{Lemma } \ref{lem:LebesgueSumNormAssociateNorm}}}) & \leq 16 \, \sup \Big\{ \int_{X_1 \times X_2} \!\!\! |F| \cdot G \, d(\mu_1 \!\otimes\! \mu_2) \, \Big| \, G : X_1 \!\times\! X_2 \to [0,\infty] \text{ meas.~and } \varrho_{\otimes}' (G) \leq 1 \Big\} \\ & = 16 \cdot \varrho_{\otimes} '' (|F|) = 16 \cdot \varrho_{\otimes} (|F|) \leq 16 \cdot \|F\|_{\bd{L}^1 + \bd{L}^\infty + \bd{L}^{1,\infty} + \bd{L}^{\infty,1}} \,\,\,. \qedhere \end{align*} \end{proof} \section{Proof of Lemma~\ref{lem:CountableDualityCharacterization}} \label{sec:CountableDualityCharacterization} For the proof of Lemma~\ref{lem:CountableDualityCharacterization}, we need two auxiliary results from measure theory that we first collect. \begin{lemma}\label{lem:LebesgueSpaceSeparability}(see \cite[Proposition~3.4.5]{CohnMeasureTheory}) Let $(X,\mathcal{F},\mu)$ be a $\sigma$-finite measure space, and assume that $\mathcal{F}$ is \emph{countably generated} (meaning that there is a countable set $\mathcal{F}_0 \subset \mathcal{F}$ such that $\mathcal{F}$ is generated by $\mathcal{F}_0$; that is, $\sigma_X (\mathcal{F}_0) = \mathcal{F}$). Then the space $\bd{L}^1 (\mu)$ is separable. \end{lemma} In connection with this criterion, the following result will also turn out to be helpful: \begin{lemma}\label{lem:EverySetOnlyNeedsCountableGenerator} Let $X$ be a set, and let $\mathcal{F}_0 \subset 2^X$ be an arbitrary subset of the power set of $X$. Let $\mathcal{F} := \sigma_X (\mathcal{F}_0)$ be the $\sigma$-algebra generated by $\mathcal{F}_0$. For each $A \in \mathcal{F}$, there is a countable family $\mathcal{F}_A \subset \mathcal{F}_0$ such that $A \in \sigma_X (\mathcal{F}_A)$. \end{lemma} \begin{proof}(see \cite[Exercise~7 in Section~1.1]{CohnMeasureTheory}) Define \[ \mathcal{G} := \big\{ A \in \mathcal{F} \quad\colon\quad \exists \, \mathcal{F}_A \subset \mathcal{F}_0 \text{ countable such that } A \in \sigma_X (\mathcal{F}_A) \big\}. \] It is straightforward to verify that $\mathcal{G}$ is a $\sigma$-algebra. 
Furthermore, $\mathcal{F}_0 \subset \mathcal{G}$, because one can choose $\mathcal{F}_A := \{A\}$ for $A \in \mathcal{F}_0$. Therefore, $\mathcal{F} = \sigma_X (\mathcal{F}_0) \subset \mathcal{G} \subset \mathcal{F}$. \end{proof} We will heavily use the following consequence of Lemma~\ref{lem:EverySetOnlyNeedsCountableGenerator}. \begin{lemma}\label{lem:MeasurableFunctionCountablyGeneratedSigmaAlgebras} Let $(\Theta,\mathcal{A})$ and $(\Lambda,\mathcal{B})$ be measurable spaces. If $F : \Theta \times \Lambda \to [0,\infty]$ is $\mathcal{A} \otimes \mathcal{B}$-measurable, then there are countably generated $\sigma$-algebras $\mathcal{A}_0 \subset \mathcal{A}$ and $\mathcal{B}_0 \subset \mathcal{B}$ such that $F$ is $\mathcal{A}_0 \otimes \mathcal{B}_0$-measurable. Furthermore, if $\mu : \mathcal{B} \to [0,\infty]$ is a $\sigma$-finite measure, then $\mathcal{B}_0$ can be chosen in such a way that $\mu|_{\mathcal{B}_0}$ is still $\sigma$-finite. \end{lemma} \begin{proof} First, note that \[ \sigma(F) := \big\{ F^{-1} (M) \colon M \subset [0,\infty] \text{ measurable} \big\} \subset \mathcal{A} \otimes \mathcal{B} \] is countably generated. One way to see this is that---since $F$ is $\sigma(F)$-measurable---there is a sequence $(F_n)_{n \in \mathbb{N}}$ of simple, non-negative, $\sigma(F)$-measurable functions such that $F_n \nearrow F$ pointwise; see \cite[Proposition~2.1.8]{CohnMeasureTheory}. Let us write ${F_n = \sum_{\ell=1}^{N_n} \alpha_\ell^{(n)} {\mathds{1}}_{L_\ell^{(n)}}}$ with $\alpha_\ell^{(n)} \in [0,\infty)$ and $L_\ell^{(n)} \in \sigma(F)$. Then, each $F_n$ is $\Sigma$-measurable, where $\Sigma := \sigma(\{ L_\ell^{(n)} \colon n \in \mathbb{N}, 1 \leq \ell \leq N_n \}) \subset \sigma(F)$ is a countably generated $\sigma$-algebra. As a pointwise limit of the $F_n$, also $F$ is $\Sigma$-measurable, and hence $\sigma(F) \subset \Sigma \subset \sigma(F)$. Therefore, $\sigma(F) = \Sigma$ is indeed countably generated. Next, note that \[ L_\ell^{(n)} \in \sigma(F) \subset \mathcal{A} \otimes \mathcal{B} = \sigma ( \{ A \times B \colon A \in \mathcal{A}, B \in \mathcal{B} \}) \quad \text{for each } n \in \mathbb{N} \text{ and } 1 \leq \ell \leq N_n . \] In combination with Lemma~\ref{lem:EverySetOnlyNeedsCountableGenerator}, this implies that there are countable families $(A_m)_{m \in \mathbb{N}} \subset \mathcal{A}$ and $(B_m)_{m \in \mathbb{N}} \subset \mathcal{B}$ such that $L_\ell^{(n)} \in \sigma (\{ A_m \times B_m \colon m \in \mathbb{N} \})$ for all $n \in \mathbb{N}$ and $1 \leq \ell \leq N_n$. Therefore, if we define $\mathcal{A}_0 := \sigma(\{ A_m \colon m \in \mathbb{N} \})$ and $\mathcal{B}_0 := \sigma(\{ B_m \colon m \in \mathbb{N} \})$, then both $\mathcal{A}_0 \subset \mathcal{A}$ and $\mathcal{B}_0 \subset \mathcal{B}$ are countably generated, and \( \sigma(F) = \sigma(\{ L_\ell^{(n)} \colon n \in \mathbb{N}, 1 \leq \ell \leq N_n \}) \subset \mathcal{A}_0 \otimes \mathcal{B}_0 . \) Since $F$ is $\sigma(F)$-measurable, this implies that $F$ is $\mathcal{A}_0 \otimes \mathcal{B}_0$-measurable. Finally, if $\mu : \mathcal{B} \to [0,\infty]$ is $\sigma$-finite, then $\Lambda = \bigcup_{n \in \mathbb{N}} E_n$ for suitable $E_n \in \mathcal{B}$ with $\mu(E_n) < \infty$. 
Instead of the definition of $\mathcal{B}_0$ from above, we then define $\mathcal{B}_0 := \sigma(\{ B_m \colon m \in \mathbb{N} \} \cup \{ E_n \colon n \in \mathbb{N} \})$, so that $\mathcal{B}_0 \subset \mathcal{B}$ is still countably generated, $\mu|_{\mathcal{B}_0}$ is $\sigma$-finite, and one sees precisely as before that $F$ is $\mathcal{A}_0 \otimes \mathcal{B}_0$-measurable. \end{proof} Using Lemmas~\ref{lem:LebesgueSpaceSeparability} and \ref{lem:MeasurableFunctionCountablyGeneratedSigmaAlgebras}, we prove the following final technical ingredient that we need for the proof of Lemma~\ref{lem:CountableDualityCharacterization}. \begin{lemma}\label{lem:CountableLInfinityCharacterization} Let $(\Theta, \mathcal{A})$ be a measurable space and let $(\Lambda, \mathcal{B}, \mu)$ be a $\sigma$-finite measure space with $\mu(\Lambda) > 0$. If $F : \Theta \times \Lambda \to [0,\infty]$ is measurable with respect to the product $\sigma$-algebra $\mathcal{A} \otimes \mathcal{B}$, then there is a countable family $(M_n)_{n \in \mathbb{N}} \subset \mathcal{B}$ of sets of finite, positive measure such that if we set $f_n := {\mathds{1}}_{M_n} / \mu(M_n)$, then \[ \mathop{\operatorname{ess~sup}}_{\lambda \in \Lambda} F(\theta, \lambda) = \sup_{n \in \mathbb{N}} \int_\Lambda F(\theta, \lambda) \cdot f_n (\lambda) \, d \mu(\lambda) \qquad \forall \, \theta \in \Theta . \] In particular, the map \( \theta \mapsto \mathop{\operatorname{ess~sup}}_{\lambda \in \Lambda} F(\theta, \lambda) \) is measurable. \end{lemma} \begin{proof} First, Lemma~\ref{lem:MeasurableFunctionCountablyGeneratedSigmaAlgebras} yields countably generated sub-$\sigma$-algebras $\mathcal{A}_0 \subset \mathcal{A}$ and $\mathcal{B}_0 \subset \mathcal{B}$ such that $F$ is $\mathcal{A}_0 \otimes \mathcal{B}_0$-measurable and such that $\mu|_{\mathcal{B}_0}$ is still $\sigma$-finite. The remainder of the proof proceeds in two steps. {} \noindent \textbf{Step 1} \emph{(Constructing the sets $M_n$):} Since $\mathcal{B}_0$ is countably generated and $\mu|_{\mathcal{B}_0}$ is $\sigma$-finite, Lemma~\ref{lem:LebesgueSpaceSeparability} shows that $\bd{L}^1 (\mu|_{\mathcal{B}_0})$ is separable. Since non-empty subsets of separable metric spaces are again separable (see \cite[Corollary~3.5]{Hitchhiker}), this implies that \[ \mathscr{F} := \big\{ {\mathds{1}}_M / \mu(M) \quad\colon\quad M \in \mathcal{B}_0 \text{ and } 0 < \mu(M) < \infty \big\} \subset \bd{L}^1 (\mu|_{\mathcal{B}_0}) \] is separable, so that there is a sequence of sets $(M_n)_{n \in \mathbb{N}} \subset \mathcal{B}_0$ such that $0 < \mu(M_n) < \infty$ for all $n \in \mathbb{N}$, and such that $(f_n)_{n \in \mathbb{N}} := \big( {\mathds{1}}_{M_n} / \mu(M_n) \big)_{n \in \mathbb{N}} \subset \mathscr{F}$ is dense. Here, we implicitly used that $\mu(\Lambda) > 0$, which implies that $\mathscr{F} \neq \emptyset$, by $\sigma$-finiteness of $\mu|_{\mathcal{B}_0}$. Now, for each $n \in \mathbb{N}$, let us define \[ \Psi_n : \Theta \to [0,\infty], \theta \mapsto \int_\Lambda F(\theta, \lambda) \cdot f_n (\lambda) \, d \mu(\lambda) \qquad \text{and} \qquad \Psi : \Theta \to [0,\infty], \theta \mapsto \sup_{n \in \mathbb{N}} \Psi_n (\theta), \] as well as \( \widetilde{\Psi} : \Theta \to [0,\infty], \theta \mapsto \mathop{\operatorname{ess~sup}}_{\lambda \in \Lambda} F(\theta, \lambda) \). 
{} Since \( \Theta \times \Lambda \to [0,\infty], (\theta, \lambda) \mapsto F(\theta, \lambda) \cdot f_n(\lambda) \) is measurable, it follows from Tonelli's theorem (see \cite[Proposition~5.2.1]{CohnMeasureTheory}) that each $\Psi_n$ is measurable, so that $\Psi$ is measurable as well; see \cite[Proposition~2.1.5]{CohnMeasureTheory}. Note that strictly speaking, we need to have a $\sigma$-finite measure $\nu$ on $(\Theta,\mathcal{A})$ to apply Tonelli's theorem, but we can simply take $\nu \equiv 0$. {} \noindent \textbf{Step 2} \emph{(Proving the claim of the lemma, that is, $\Psi = \widetilde{\Psi}$):} Since $\| f_n \|_{\bd{L}^1 (\mu)} = 1$ for all $n \in \mathbb{N}$, it is clear that $\Psi_n \leq \widetilde{\Psi}$ for all $n \in \mathbb{N}$, and hence $\Psi \leq \widetilde{\Psi}$. Now, assume towards a contradiction that $\Psi (\theta) < \widetilde{\Psi} (\theta)$ for some $\theta \in \Theta$. Then we can choose $\alpha \in \mathbb{R}$ with $0 \leq \Psi(\theta) < \alpha < \widetilde{\Psi} (\theta)$. By definition of $\widetilde{\Psi}$, this implies that \[ M := [F(\theta,\bullet)]^{-1} ( [\alpha,\infty]) = \{ \lambda \in \Lambda \colon F(\theta,\lambda) \geq \alpha \} \] has positive measure. Furthermore, since $F$ is $\mathcal{A}_0 \otimes \mathcal{B}_0$-measurable, \cite[Lemma 5.1.2]{CohnMeasureTheory} shows that $F(\theta,\bullet)$ is $\mathcal{B}_0$-measurable, and hence $M \in \mathcal{B}_0$. Since $\mu|_{\mathcal{B}_0}$ is $\sigma$-finite, we see that there is some set $M' \in \mathcal{B}_0$ satisfying $M' \subset M$ and furthermore $0 < \mu(M') < \infty$. Hence, $f := {\mathds{1}}_{M'} / \mu(M') \in \mathscr{F}$, so that there is a sequence $(n_k)_{k \in \mathbb{N}}$ such that $f_{n_k} \to f$, with convergence in $\bd{L}^1(\mu|_{\mathcal{B}_0})$. By \cite[Propositions 3.1.3 and 3.1.5]{CohnMeasureTheory}, it follows that there is a further subsequence $(n_{k_\ell})_{\ell \in \mathbb{N}}$ such that $f_{n_{k_\ell}} \to f$ $\mu$-almost everywhere. Therefore, recalling that $F(\theta,\bullet) \geq \alpha$ on $M' \subset M$, while $f \equiv 0$ on $\Lambda \setminus M'$, and recalling that $\int_{\Lambda} f_{n_k} (\lambda) \, d \mu(\lambda) = 1$, we see as a consequence of Fatou's lemma that \begin{align*} \alpha & = \alpha \cdot \int_\Lambda \, f d \mu \leq \int_\Lambda f(\lambda) \cdot F(\theta,\lambda) \, d \mu (\lambda) = \int_\Lambda \liminf_{\ell \to \infty} \Big( f_{n_{k_\ell}} (\lambda) \cdot F(\theta, \lambda) \Big) \, d \mu (\lambda) \\ & \leq \liminf_{\ell \to \infty} \int_\Lambda f_{n_{k_\ell}} (\lambda) \cdot F(\theta, \lambda) \, d \mu (\lambda) = \liminf_{\ell \to \infty} \Psi_{n_{k_\ell}} (\theta) \leq \Psi (\theta) , \end{align*} which is the desired contradiction, since $\Psi(\theta) < \alpha$. \end{proof} Finally, we prove Lemma~\ref{lem:CountableDualityCharacterization}. \begin{proof}[Proof of Lemma~\ref{lem:CountableDualityCharacterization}] In case of $\nu_1(Y_1) = 0$ or $\nu_2(Y_2) = 0$, we have $\| H(\omega,\bullet) \|_{\bd{L}^{\infty,1}(\nu)} = 0$ for all $\omega \in \Omega$, so that we can simply take $h_n \equiv 0$ for all $n \in \mathbb{N}$. In the following, we can thus assume that $\nu_1(Y_1) > 0$ and $\nu_2(Y_2) > 0$. The proof is divided into three steps. {} \noindent \textbf{Step 1} \emph{(Writing $\| H \big(\omega, (\bullet,y_2) \big) \|_{\bd{L}^{\infty}(\nu_1)}$ as a countable supremum of integrals):} We aim to apply Lemma~\ref{lem:CountableLInfinityCharacterization}. 
To this end, define $(\Theta, \mathcal{A}) := (\Omega \times Y_2, \mathcal{C} \otimes \mathcal{G}_2)$ and $(\Lambda,\mathcal{B},\mu) := (Y_1, \mathcal{G}_1,\nu_1)$. Finally, set \( F : \Theta \times \Lambda \to [0,\infty], \big( (\omega, y_2), y_1 \big) \mapsto H \big(\omega, (y_1,y_2) \big) \). By applying Lemma~\ref{lem:CountableLInfinityCharacterization} with these choices, we obtain a sequence $(f_n)_{n \in \mathbb{N}} \subset \bd{L}^1(\nu_1)$ of non-negative functions which satisfy $\| f_n \|_{\bd{L}^1 (\nu_1)} = 1$ and \begin{equation} \begin{split} \big\| H \big( \omega, (\bullet,y_2) \big) \big\|_{\bd{L}^\infty (\nu_1)} & = \big\| F \big( (\omega,y_2), \bullet \big) \big\|_{\bd{L}^\infty (\mu)} = \sup_{n \in \mathbb{N}} \int_\Lambda F \big( (\omega,y_2), \lambda \big) \cdot f_n (\lambda) \, d \mu (\lambda) \\ & = \sup_{n \in \mathbb{N}} \int_{Y_1} H \big( \omega, (y_1,y_2) \big) \cdot f_n (y_1) \, d \nu_1 (y_1) =: \sup_{n \in \mathbb{N}} \Psi_n (\omega, y_2) \end{split} \label{eq:EssentialSupremumAsCountableSupremum} \end{equation} for all $(\omega,y_2) \in \Omega \times Y_2$. Here, it follows from Tonelli's theorem that $\Psi_n : (\Omega \times Y_2, \mathcal{C} \otimes \mathcal{G}_2) \to [0,\infty]$ is measurable. {} \noindent \textbf{Step 2} \emph{(Finding $g_k : \Omega \times Y \to [0,\infty)$ such that \( \int_Y H(\omega,y) g_k(\omega,y) \, d \nu(y) \xrightarrow[k\to\infty]{} \| H(\omega,\bullet) \|_{\bd{L}^{\infty,1}} \) and $\| g_k (\omega, \bullet) \|_{\bd{L}^{1,\infty}} \leq 1$, as well as $g_k (\omega, \bullet) \in \bd{L}^1(\nu)$):} Since $\nu_2$ is $\sigma$-finite, we have $Y_2 = \bigcup_{n \in \mathbb{N}} E_n$ for certain $E_n \in \mathcal{G}_2$ with $\nu_2 (E_n) < \infty$ and $E_n \subset E_{n+1}$ for all $n \in \mathbb{N}$. Now, given $k \in \mathbb{N}$, define $g_k : \Omega \times Y \to [0,\infty)$ by \[ g_k \big( \omega, (y_1,y_2) \big) := {\mathds{1}}_{E_k} (y_2) \cdot \sum_{\ell=1}^k \bigg( f_\ell (y_1) \cdot \prod_{m=1}^{\ell - 1} {\mathds{1}}_{\Psi_\ell > \Psi_m} (\omega, y_2) \cdot \prod_{m = \ell+1}^k {\mathds{1}}_{\Psi_\ell \geq \Psi_m} (\omega, y_2) \bigg) . \] The significance of this convoluted-seeming definition will be explained shortly. Before that, however, it should be noted that $g_k$ is $\mathcal{C} \otimes \mathcal{G}$-measurable. Now, given fixed $(\omega, y_2) \in \Omega \times Y_2$, let $\ell_0 = \ell_0 (\omega, y_2, k) \in \{ 1,\dots,k \}$ be minimal with $\Psi_{\ell_0} (\omega,y_2) = \max_{1 \leq \ell \leq k} \Psi_\ell (\omega, y_2)$. We claim that then $g_k (\omega, (\bullet, y_2)) = {\mathds{1}}_{E_k} (y_2) \cdot f_{\ell_0}$. Indeed, for $\ell \in \{ 1,\dots,k \}$, there are three cases: \begin{itemize} \item If $\ell < \ell_0$, then $\Psi_\ell (\omega,y_2) < \Psi_{\ell_0} (\omega,y_2)$, while $\ell_0 \in \{ \ell+1,\dots,k \}$. Hence, the $\ell$-th summand in the definition of $g_k \big( \omega, (\bullet, y_2) \big)$ vanishes. \item If $\ell > \ell_0$, then $\Psi_\ell (\omega,y_2) \leq \Psi_{\ell_0} (\omega,y_2)$ and $\ell_0 \in \{1,\dots,\ell-1\}$. Hence, the $\ell$-th summand in the definition of $g_k \big (\omega, (\bullet,y_2) \big)$ again vanishes. \item If $\ell = \ell_0$, then $\Psi_{\ell_0} (\omega,y_2) > \Psi_m (\omega,y_2)$ for $m \in \{ 1,\dots,\ell_0 - 1 \}$ and $\Psi_{\ell_0} (\omega,y_2) \geq \Psi_m (\omega, y_2)$ for $m \in \{ \ell_0+1,\dots,k \}$. Therefore, the $\ell_0$-th summand in the definition of $g_k \big( \omega, (\bullet,y_2) \big)$ is simply $f_{\ell_0}$. 
\end{itemize} This representation of $g_k(\omega, (\bullet,y_2))$ has two crucial implications: \begin{enumerate} \item With $\ell_0 = \ell_0(\omega, y_2, k)$ as above, we have $g_k (\omega, (\bullet,y_2)) = {\mathds{1}}_{E_k} (y_2) \cdot f_{\ell_0}$, which implies that $\| g_k (\omega, (\bullet,y_2)) \|_{\bd{L}^1(\nu_1)} \leq {\mathds{1}}_{E_k}(y_2)$, since $\| f_{\ell_0} \|_{\bd{L}^1 (\nu_1)} = 1$. Therefore, \( \| g_k (\omega, \bullet) \|_{\bd{L}^{1,\infty}(\nu)} \leq 1 \) and \[ \| g_k (\omega, \bullet) \|_{\bd{L}^1 (\nu)} = \int_{Y_2} \big\| g_k \big( \omega, (\bullet,y_2) \big) \big\|_{\bd{L}^1(\nu_1)} \, d \nu_2(y_2) \leq \nu_2 (E_k) < \infty. \] \item For fixed $(\omega,y_2) \in \Omega \times Y_2$ and with $\ell_0 = \ell_0(\omega, y_2, k)$ as above, we have \begin{align*} \qquad \int_{Y_1} \!\! H \big( \omega, (y_1,y_2) \big) \cdot g_k \big( \omega, (y_1,y_2) \big) \, d \nu_1 (y_1) & = {\mathds{1}}_{E_k} (y_2) \,\, \int_{Y_1} H \big( \omega, (y_1,y_2) \big) \cdot f_{\ell_0} (y_1) \, d \nu_1(y_1) \\ & = {\mathds{1}}_{E_k} (y_2) \cdot \Psi_{\ell_0} (\omega, y_2) = {\mathds{1}}_{E_k} (y_2) \cdot \max_{1 \leq \ell \leq k} \Psi_\ell (\omega,y_2) \\ & \nearrow \sup_{n \in \mathbb{N}} \Psi_n (\omega,y_2) = \big\| H \big( \omega, (\bullet,y_2) \big) \big\|_{\bd{L}^\infty(\nu_1)} \end{align*} as $k \to \infty$, thanks to Equation~\eqref{eq:EssentialSupremumAsCountableSupremum}. By the monotone convergence theorem and Tonelli's theorem, this shows for arbitrary $\omega \in \Omega$ that \begin{align*} \qquad \qquad \int_Y H ( \omega, y ) \cdot g_k ( \omega, y ) \, d \nu(y) & = \int_{Y_2} \int_{Y_1} H \big( \omega, (y_1,y_2) \big) \cdot g_k \big( \omega, (y_1, y_2) \big) \, d \nu_1(y_1) \, d \nu_2 (y_2) \\ & \xrightarrow[k\to\infty]{} \int_{Y_2} \big\| H \big( \omega, (\bullet,y_2) \big) \big\|_{\bd{L}^\infty (\nu_1)} \, d \nu_2(y_2) = \| H(\omega,\bullet) \|_{\bd{L}^{\infty,1}(\nu)}. \end{align*} \end{enumerate} {} \noindent \textbf{Step 3} \emph{(Completing the proof):} Since $g_k$ is $\mathcal{C} \otimes \mathcal{G}$-measurable, Lemma~\ref{lem:MeasurableFunctionCountablyGeneratedSigmaAlgebras} yields for each $k \in \mathbb{N}$ countably generated $\sigma$-algebras $\mathcal{C}^{(k)} \subset \mathcal{C}$ and $\mathcal{G}^{(k)} \subset \mathcal{G}$ ---say $\mathcal{G}^{(k)} = \sigma \big( \big\{ M_n^{(k)} \colon n \in \mathbb{N} \big\} \big)$--- such that $g_k$ is $\mathcal{C}^{(k)} \otimes \mathcal{G}^{(k)}$-measurable and such that $\nu|_{\mathcal{G}^{(k)}}$ is $\sigma$-finite. Define \[ \mathcal{G}_0 := \sigma(\{ M_n^{(k)} \colon n,k \in \mathbb{N} \}) \subset \mathcal{G}, \] and note that $\nu|_{\mathcal{G}_0}$ is $\sigma$-finite. Lemma~\ref{lem:LebesgueSpaceSeparability} shows that $\bd{L}^1(\nu|_{\mathcal{G}_0})$ is separable. Since any non-empty subset of a separable metric space is again separable (see \cite[Corollary~3.5]{Hitchhiker}), this implies that \[ \mathscr{F} := \big\{ h \in \bd{L}^1(\nu|_{\mathcal{G}_0}) \quad\colon\quad h \geq 0 \text{ and } \| h \|_{\bd{L}^{1,\infty}} \leq 1 < \infty \big\} \] is separable as well. Thus, let $\{ h_n \colon n \in \mathbb{N} \} \subset \mathscr{F}$ be dense with respect to $\| \bullet \|_{\bd{L}^1(\nu)}$. Note that since $g_k$ is measurable with respect to $\mathcal{C}^{(k)} \otimes \mathcal{G}^{(k)} \subset \mathcal{C} \otimes \mathcal{G}_0$, and by the properties of $g_k$ derived in Step~2, we have $g_k (\omega, \bullet) \in \mathscr{F}$ for all $\omega \in \Omega$. 
For brevity, write $\Psi (\omega) := \| H(\omega, \bullet) \|_{\bd{L}^{\infty,1}}$ and $\widetilde{\Psi} (\omega) := \sup_{n \in \mathbb{N}} \int_Y H(\omega,y) \, h_n(y) \, d \nu(y)$. We want to prove that $\Psi = \widetilde{\Psi}$. First, by the Hölder inequality for the mixed-norm Lebesgue spaces (see \cite[Equation~(1) in Section~2]{MixedLpSpaces}), we see that \[ 0 \leq \int_Y H(\omega, y) \, h_n (y) \, d \nu(y) \leq \| H(\omega, \bullet) \|_{\bd{L}^{\infty,1}} \| h_n \|_{\bd{L}^{1,\infty}} \leq \Psi(\omega) \] for all $n \in \mathbb{N}$, and hence $\widetilde{\Psi} \leq \Psi$. To prove the converse, let $\omega \in \Omega$ and $k \in \mathbb{N}$ be arbitrary. Since $g_k (\omega, \bullet) \in \mathscr{F}$, there is a sequence $(n_m)_{m \in \mathbb{N}}$ satisfying $h_{n_m} \to g_k (\omega, \bullet)$ as $m \to \infty$, with convergence in $\bd{L}^1 (\nu)$. It is well-known (see for instance \mbox{\cite[Propositions 3.1.3 and 3.1.5]{CohnMeasureTheory}}) that this implies that there is a subsequence $(n_{m_\ell})_{\ell \in \mathbb{N}}$ such that $h_{n_{m_\ell}} \to g_k (\omega, \bullet)$ as $\ell \to \infty$, with convergence $\nu$-almost everywhere. By Fatou's lemma, this implies that \begin{align*} \int_Y H(\omega, y) \cdot g_k (\omega,y) \, d \nu(y) & = \int_Y \liminf_{\ell \to \infty} \Big( H(\omega, y) \cdot h_{n_{m_\ell}} (y) \Big) \, d \nu(y) \\ & \leq \liminf_{\ell \to \infty} \int_Y H(\omega,y) \cdot h_{n_{m_\ell}} (y) \, d \nu(y) \leq \widetilde{\Psi} (\omega) . \end{align*} Since we saw in Step~2 that \( \int_Y H(\omega, y) \cdot g_k (\omega,y) \, d \nu(y) \smash{\xrightarrow[k\to\infty]{}} \Psi(\omega), \) we arrive at $\Psi(\omega) \leq \widetilde{\Psi}(\omega)$, as desired. \end{proof} \section{A technical result concerning the embedding \texorpdfstring{$\Phi_{K_\Psi} (\mathbf{A}) \hookrightarrow \bd{L}_{1/v}^{\infty}$} {into a weighted L-infinity space}} \label{sec:BoundednessEmbeddingEquivalence} In \cite{kempka2015general}, it is assumed that $\Phi_{K_\Psi}(\mathbf{A}) \hookrightarrow \bd{L}_{1/v}^\infty (\mu)$, meaning that Equation~\eqref{eq:CoorbitLInftyEmbeddingAssumption} holds. For general integral kernels $K$ instead of $K_\Psi$, this would be a much stronger condition than boundedness of $\Phi_K : \mathbf{A} \to \bd{L}_{1/v}^\infty (\mu)$. Since $K_\Psi$ is a reproducing kernel, however, the two conditions are actually equivalent, as we now show. \begin{lemma}\label{lem:BoundednessEmbeddingEquivalence} Let $(X,\mathcal{F},\mu)$ be a $\sigma$-finite measure space, let $\mathcal{H}$ be a separable Hilbert space, and let $\Psi = (\psi_x)_{x \in X} \subset \mathcal{H}$ be a continuous Parseval frame for $\mathcal{H}$. Finally, let $\mathbf{A}$ be a solid Banach function space on $X$ and let $v : X \to (0,\infty)$ be measurable. With $K_\Psi$ as defined in Equation~\eqref{eq:ReproducingKernelDefinition}, assume that $\Phi_{|K_\Psi|} : \mathbf{A} \to \mathbf{A}$ and $\Phi_{K_\Psi} : \mathbf{A} \to \bd{L}_{1/v}^\infty$ are well-defined and bounded. Then $\Phi_{K_\Psi}(\mathbf{A}) \hookrightarrow \bd{L}_{1/v}^\infty$, meaning that Equation~\eqref{eq:CoorbitLInftyEmbeddingAssumption} holds.
\end{lemma} \begin{proof} Note that if $(\varphi_n)_{n \in I}$ is a \emph{countable} orthonormal basis for $\mathcal{H}$ (which exists by separability), then \( K_\Psi (x,y) = \langle \psi_y, \psi_x \rangle_{\mathcal{H}} = \sum_{n \in I} \langle \psi_y, \varphi_n \rangle_{\mathcal{H}} \langle \varphi_n, \psi_x \rangle_{\mathcal{H}} , \) where $x \mapsto \langle \varphi_n, \psi_x \rangle_{\mathcal{H}}$ and $y \mapsto \langle \psi_y, \varphi_n \rangle_{\mathcal{H}}$ are measurable by definition of a continuous frame. Hence, $K := K_\Psi : X \times X \to \mathbb{C}$ is measurable. By definition of a continuous Parseval frame, the voice transform \( V_\Psi : \mathcal{H} \to \bd{L}^2(\mu), f \mapsto V_\Psi f \) with $V_\Psi f (x) = \langle f, \psi_x \rangle_{\mathcal{H}}$ is an isometry, so that $V_\Psi^\ast V_\Psi = \mathrm{id}_{\mathcal{H}}$. Thus, ${P := V_\Psi V_\Psi^\ast : \bd{L}^2(\mu) \to \bd{L}^2(\mu)}$ satisfies $P P = P$. Now, note that $V_\Psi^\ast F = \int_{X} F(y) \, \psi_y \, d \mu (y)$ (with the integral understood in the weak sense), so that \( (P F)(x) = \langle V_\Psi^\ast F, \psi_x \rangle = \int_X F(y) \langle \psi_y, \psi_x \rangle \, d \mu (y) = (\Phi_K F) (x), \) meaning that $P = \Phi_K$. Because of $P P = P$, this means that $\Phi_K \Phi_K F = \Phi_K F$ for all $F \in \bd{L}^2(\mu)$. We now claim that $\Phi_K \Phi_K F = \Phi_K F$ also holds for all $F \in \mathbf{A}$. Once we show this, we immediately get the claim of the lemma, since then \[ \| \Phi_{K} F \|_{\bd{L}_{1/v}^\infty} = \| \Phi_K \Phi_{K} F \|_{\bd{L}_{1/v}^\infty} \leq C \cdot \| \Phi_K F \|_{\mathbf{A}} \qquad \forall \, F \in \mathbf{A} , \] since $\Phi_K : \mathbf{A} \to \bd{L}_{1/v}^\infty(\mu)$ is bounded by assumption of the lemma. Let $F \in \mathbf{A}$ be arbitrary. Since $X$ is $\sigma$-finite, we have $X = \bigcup_{n=1}^\infty X_n$ where $X_n \subset X_{n+1}$ and $\mu(X_n) < \infty$. Define $Y_n := \{ x \in X_n \colon |F(x)| \leq n \}$, and note that $Y_n \subset Y_{n+1}$, $\mu(Y_n) < \infty$, and $X = \bigcup_{n =1}^\infty Y_n$, so that $F_n := {\mathds{1}}_{Y_n} \cdot F \in \mathbf{A} \cap \bd{L}^2(\mu)$ satisfies $F_n \to F$ pointwise. Note that $\Phi_K \Phi_K F_n = \Phi_K F_n$ and $|F_n| \leq |F|$. Since $\Phi_{|K|} : \mathbf{A} \to \mathbf{A}$ is well-defined, we have ${G := \Phi_{|K|} |F| \in \mathbf{A}}$, and in particular $(\Phi_{|K|} |F|) (x) < \infty$ for $\mu$-almost all $x \in X$. For each such $x$, we see by the dominated convergence theorem that \[ G_n (x) := \Phi_K F_n (x) = \int_{X} K(x,y) \, F_n (y) \, d \mu(y) \xrightarrow[n\to\infty]{} \int_X K(x,y) \, F(y) \, d \mu(y) = \Phi_K F (x) . \] Next, we also have $\int_X |K(x,y)| \, G(y) \, d \mu(y) = (\Phi_{|K|} G)(x) < \infty$ for $\mu$-almost all $x \in X$. Since $|G_n (y)| \leq G (y)$, the dominated convergence theorem thus shows for $x \in X$ with $(\Phi_{|K|} G) (x) < \infty$ (and hence for $\mu$-almost all $x \in X$) that \[ \Phi_K G_n(x) = \int_X K(x,y) \, G_n (y) \, d \mu(y) \xrightarrow[n\to\infty]{} \int_X K(x,y) \, \Phi_K F(y) \, d \mu(y) = \Phi_K [\Phi_K F] (x) . \] Since we also have $\Phi_K G_n = \Phi_K \Phi_K F_n = \Phi_K F_n \xrightarrow[n\to\infty]{} \Phi_K F$ almost everywhere, we thus see ${\Phi_K [\Phi_K F] = \Phi_K F}$ almost everywhere, as desired. 
\end{proof} \section{Sharpness of Schur's test for complex-valued kernels} \label{sec:SharpnessComplexValued} The classical form of Schur's test gives a \emph{complete characterization} of the boundedness of $\Phi_K : \bd{L}^p \to \bd{L}^p$ for $p \in [1,\infty]$, even for \emph{complex-valued} integral kernels $K : X \times Y \to \mathbb{C}$. This seems to be a folklore result, but we could not locate an appropriate reference. Therefore, we provide a proof below. Somewhat surprisingly, it turns out that our generalized form of Schur's test for mixed Lebesgue spaces does not provide such a complete characterization of the boundedness of $\Phi_K$ for complex-valued kernels $K$, as we will show using an example. \begin{proposition}\label{prop:ClassicalSchurSharpness} Let $(X,\mathcal{F},\mu)$ and $(Y,\mathcal{G},\nu)$ be $\sigma$-finite measure spaces, and let $K : X \times Y \to \mathbb{C}$ be measurable. Let $(Y_n)_{n \in \mathbb{N}} \subset \mathcal{G}$ with $Y = \bigcup_{n=1}^\infty Y_n$ and $\nu(Y_n) < \infty$, as well as $Y_n \subset Y_{n+1}$, and such that $\Phi_K f : X \to \mathbb{C}$ is well-defined ($\mu$-almost everywhere) for every $f \in \mathscr{G}$, for the vector space \[ \mathscr{G} := \bigl\{ f \in \bd{L}^\infty(\nu) \quad \colon \quad \exists \, n \in \mathbb{N}: f = 0 \text{ a.e.~on } Y \setminus Y_n \bigr\} \subset \bd{L}^1(\nu) \cap \bd{L}^\infty (\nu) . \] Finally, assume for each $p \in \{ 1, \infty \}$ that $\Phi_K : (\mathscr{G}, \| \bullet \|_{\bd{L}^p (\nu)}) \to \bd{L}^p(\mu)$ is bounded, with operator norm $\theta_p$. Then we have $C_1 (K) \leq \theta_\infty < \infty$ and $C_2(K) \leq \theta_1 < \infty$. \end{proposition} \begin{rem*} The space $\mathscr{G}$ is introduced to avoid having to assume a priori that $\Phi_K f$ is a well-defined function for $f \in \bd{L}^1 (\nu) + \bd{L}^\infty (\nu)$, which would be unnecessarily restrictive. \end{rem*} \begin{proof} \textbf{Step~1:} (\emph{Showing $C_1(K) \leq \theta_\infty$}): The claim is clear in case of $\nu(Y) = 0$; therefore, let us assume $\nu(Y) > 0$. Since $[\Phi_K {\mathds{1}}_{Y_\ell}] (x)$ is well-defined almost everywhere, for each $\ell \in \mathbb{N}$ there is a $\mu$-null-set $M_\ell \subset X$ such that $\int_{Y_\ell} |K (x,y)| \, d \nu (y) < \infty$ for all $x \in X \setminus M_\ell$. Next, by splitting $K$ into the positive and negative parts of its real and imaginary parts, Lemma~\ref{lem:MeasurableFunctionCountablyGeneratedSigmaAlgebras} yields a countably generated $\sigma$-algebra $\mathcal{G}_0 \subset \mathcal{G}$ such that $K$ is $\mathcal{F} \otimes \mathcal{G}_0$-measurable. By enlarging $\mathcal{G}_0$, we can assume that $Y_n \in \mathcal{G}_0$ for all $n \in \mathbb{N}$. Now, Lemma~\ref{lem:LebesgueSpaceSeparability} shows that $\bd{L}^1(\nu|_{\mathcal{G}_0})$ is separable. Since nonempty subsets of separable metric spaces are again separable (see \cite[Corollary~3.5]{Hitchhiker}), setting \( \mathscr{G}_0 := \{ f \in \bd{L}^1(\nu|_{\mathcal{G}_0}) \cap \mathscr{G} \colon \| f \|_{\bd{L}^\infty} \leq 1 \} , \) we can find a countable family $(g_n)_{n \in \mathbb{N}} \subset \mathscr{G}_0$ which is dense (with respect to $\| \bullet \|_{\bd{L}^1(\nu)}$) in $\mathscr{G}_0$. Define $h_{\ell,n} := {\mathds{1}}_{Y_\ell} \cdot g_n \in \mathscr{G}_0 \subset \mathscr{G}$, and note by our assumptions that for each $\ell,n \in \mathbb{N}$, there is a $\mu$-null-set $N_{\ell,n} \subset X$ satisfying $|(\Phi_K h_{\ell,n}) (x)| \leq \theta_\infty < \infty$ for all $x \in X \setminus N_{\ell,n}$.
Define ${N := \bigcup_{\ell=1}^\infty M_\ell \cup \bigcup_{\ell,n=1}^\infty N_{\ell,n}}$ and note $\mu(N) = 0$. Fix $x \in X \setminus N$ and $\ell \in \mathbb{N}$ for the moment, and define ${f_\ell := {\mathds{1}}_{Y_\ell} \cdot \overline{\mathrm{sign} \bigl( K(x,\bullet) \bigr)}}$, noting that $f_\ell \in \mathscr{G}_0$. Since convergence in $\bd{L}^1(\nu)$ implies existence of a subsequence converging almost everywhere, we thus obtain a sequence $(n_k)_{k \in \mathbb{N}}$ such that $g_{n_k} \to f_\ell$ $\nu$-almost everywhere, and hence also $h_{\ell, n_k} = {\mathds{1}}_{Y_\ell} \cdot g_{n_k} \to f_\ell$ $\nu$-almost everywhere as $k \to \infty$. Since also $|K(x,y) \, h_{\ell,n_k}(y)| \leq {\mathds{1}}_{Y_\ell}(y) \, |K(x,y)|$ and $\int_{Y_\ell} |K(x,y)| \, d \nu(y) < \infty$, we can apply the dominated convergence theorem to conclude \begin{align*} \int_{Y_\ell} |K(x,y)| \, d \nu(y) & = \bigg| \int_Y f_\ell(y) \, K(x,y) \, d \nu(y) \bigg| \\ & = \lim_{k \to \infty} \bigg| \int_Y K(x,y) \, h_{\ell,n_k}(y) \, d \nu(y) \bigg| = \lim_{k \to \infty} \bigl| [\Phi_K \, h_{\ell,n_k}](x) \bigr| \leq \theta_\infty . \end{align*} Since this holds for all $\ell \in \mathbb{N}$, while $Y = \bigcup_{\ell = 1}^\infty Y_\ell$ and $Y_\ell \subset Y_{\ell+1}$, we see $\int_Y |K(x,y)| \, d \nu(y) \leq \theta_\infty$ for all $x \in X \setminus N$, and hence $C_1 (K) \leq \theta_\infty < \infty$. It should be noted that this step did not use that $\theta_1 < \infty$. {} \noindent \textbf{Step~2:} (\emph{Showing $C_2(K) \leq \theta_1$}): This is essentially a duality argument. Since $\mu$ is $\sigma$-finite, we can find $(X_n)_{n \in \mathbb{N}} \subset \mathcal{F}$ with $\mu(X_n) < \infty$, $X_n \subset X_{n+1}$ and $X = \bigcup_{n=1}^\infty X_n$. Let us define \({ \widetilde{\mathscr{G}} := \bigl\{ g \in \bd{L}^\infty (\mu) \quad \colon \quad \exists \, n \in \mathbb{N} : g = 0 \text{ a.e.~on } X \setminus X_n \bigr\} . }\) Since $C_1 (K) < \infty$ by the previous step, we have \[ \int_Y \int_{X_\ell} |K^T (y,x)| \, d \mu(x) \, d \nu(y) = \int_{X_\ell} \int_Y |K(x,y)| \, d \nu(y) \, d \mu(x) \leq C_1(K) \cdot \mu(X_\ell) < \infty , \] which easily implies that $(\Phi_{K^T} g) (y)$ is well-defined for each $g \in \widetilde{\mathscr{G}}$ and $\nu$-almost all $y \in Y$. Finally, if $\ell \in \mathbb{N}$ and if $f : Y \to \mathbb{C}$ is a simple function (i.e., a finite linear combination of indicator functions of measurable sets) with $f = 0$ on $Y \setminus Y_\ell$ (so that in particular $f \in \mathscr{G}$), and if $g \in \widetilde{\mathscr{G}}$, then \[ \int_X \int_Y |K(x,y)| \, |f(y)| \, d \nu(y) \, |g(x)| \, d \mu(x) \leq C_1(K) \| f \|_{\bd{L}^\infty} \| g \|_{\bd{L}^1} < \infty , \] so that Fubini's theorem is applicable in the following calculation: \begin{align*} \bigg| \int_Y f(y) \cdot (\Phi_{K^T} g) (y) \, d \nu(y) \bigg| & = \bigg| \int_X g(x) \int_Y K(x,y) \, f (y) \, d \nu(y) \, d \mu(x) \bigg| \\ & \leq \| g \|_{\bd{L}^\infty(\mu)} \cdot \| \Phi_K f \|_{\bd{L}^1(\mu)} \leq \| g \|_{\bd{L}^\infty(\mu)} \cdot \theta_1 \cdot \| f \|_{\bd{L}^1(\nu)} . \end{align*} By the usual characterization of the $\bd{L}^\infty$-norm by duality (see \cite[Theorem~6.14]{FollandRA}) applied on $Y_\ell$, this implies $\| (\Phi_{K^T} g)|_{Y_\ell} \|_{\bd{L}^\infty(\nu)} \leq \theta_1 \cdot \| g \|_{\bd{L}^\infty(\mu)}$. Since this holds for every $\ell \in \mathbb{N}$, we get $\| \Phi_{K^T} g \|_{\bd{L}^\infty(\nu)} \leq \theta_1 \cdot \| g \|_{\bd{L}^\infty(\mu)}$ for all $g \in \widetilde{\mathscr{G}}$. 
Therefore, applying Step~1 to $K^T$ instead of $K$, we see that $C_1(K^T) \leq \theta_1 < \infty$. Since $C_2(K) = C_1(K^T)$ by (an obvious variation of) Lemma~\ref{lem:SchurConstantsForAdjointKernel}, we are done. \end{proof} Now, we provide an example showing that for mixed Lebesgue spaces and \emph{complex-valued} kernels, our generalized form of Schur's test is in general only sufficient---but not necessary--- for the boundedness of $\Phi_K : \bd{L}^{p,q} \to \bd{L}^{p,q}$ for all $p,q \in [1,\infty]$. \begin{example}\label{exa:ComplexKernelNoCharacterization} Define $X_1 := [0,1]$ with the Borel $\sigma$-algebra, and $X_2 := Y_1 := Y_2 := \mathbb{Z}$, all equipped with the power set $2^\mathbb{Z}$ as the $\sigma$-algebra. Next, let $\mu_1 := \lambda$ be the Lebesgue measure on $[0,1]$ and $\nu_2$ the counting measure on $\mathbb{Z}$. Furthermore, define $\mu_2 (A) := \sum_{n \in A} e^{-|n|}$ for $A \subset \mathbb{Z}$, noting that this is a finite measure on $\mathbb{Z}$ satisfying $\mu_2(\{ k \}) > 0$ for all $k \in \mathbb{Z}$. Finally, choose a positive sequence $c = (c_m)_{m \in \mathbb{Z}} \in \ell^2(\mathbb{Z}) \setminus \ell^1(\mathbb{Z})$, say $c_m = (1 + |m|)^{-2/3}$, and define $\nu_1 (A) := \sum_{n \in A} \beta_n$ for $A \subset \mathbb{Z}$, where $\beta_n := \bigl( (1 + n^2) \cdot \sum_{|m| \leq |n|} c_m \bigr)^{-1}$. Note that ${\nu_1(\mathbb{Z}) \leq c_0^{-1} \sum_{n \in \mathbb{Z}} (1 + n^2)^{-1} < \infty}$. Now, with $X = X_1 \times X_2$ and $Y = Y_1 \times Y_2$ and with $\mu = \mu_1 \otimes \mu_2$ and $\nu = \nu_1 \otimes \nu_2$, define \[ K : X \times Y \to \mathbb{C}, \big( (x,k), (n,m) \big) \mapsto c_m \cdot e^{-2 \pi i m x} \cdot {\mathds{1}}_{|m| \leq |n|} \cdot {\mathds{1}}_{|m| \leq |k|} . \] We claim that $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is well-defined and bounded for all $p,q \in [1,\infty]$, but that $C_3 (K) = \infty$, while $C_i (K) < \infty$ for $i \in \{ 1,2,4 \}$. This will show that for complex-valued kernels, our generalized form of Schur's test does not yield a complete characterization, in contrast to the case of non-negative kernels. First of all, note for fixed $p,q \in [1,\infty]$, $k \in \mathbb{Z}$, and $f \in \bd{L}^{p,q}(\nu)$ that \begin{equation} \begin{split} \bigl| (\Phi_K f) (x,k) \bigr| & \leq (\Phi_{|K|} |f|) (x,k) \leq \sum_{|m| \leq |k|} c_m \sum_{n \in \mathbb{Z}} \beta_n \, |f(n,m)| \\ & \leq \sum_{|m| \leq |k|} c_m \, \| f(\bullet, m) \|_{\bd{L}^1(\nu_1)} \lesssim \sum_{|m| \leq |k|} c_m \, \| f(\bullet,m) \|_{\bd{L}^p(\nu_1)} < \infty . \end{split} \label{eq:ComplexKernelOperatorWellDefined} \end{equation} Here, we used that $\| \bullet \|_{\bd{L}^1(\nu_1)} \lesssim \| \bullet \|_{\bd{L}^p(\nu_1)}$ since $\nu_1$ is a finite measure. The estimate \eqref{eq:ComplexKernelOperatorWellDefined} shows that $\Phi_K f : [0,1] \times \mathbb{Z} \to \mathbb{C}$ is well-defined and also that ${\Phi_{|K|} |f| (\bullet, k) \in \bd{L}^\infty([0,1])}$. Finally, it shows that $\bd{L}^{p,q}(\nu) \to \mathbb{C}, f \mapsto (\Phi_K f) (x,k)$ is a bounded linear functional for arbitrary $(x,k) \in X$. 
Next, note that \[ C_1 (K) = \mathop{\operatorname{ess~sup}}_{(x,k) \in [0,1] \times \mathbb{Z}} \int_{\mathbb{Z}^2} \bigl| K \bigl( (x,k), (n,m) \bigr) \bigr| \, d \nu(n,m) \leq \sum_{n \in \mathbb{Z}} \beta_n \sum_{|m| \leq |n|} c_m = \sum_{n \in \mathbb{Z}} (1 + n^2)^{-1} < \infty , \] as well as \begin{align*} C_2 (K) & = \mathop{\operatorname{ess~sup}}_{(n,m) \in \mathbb{Z}^2} \int_{[0,1] \times \mathbb{Z}} \big| K \big( (x,k), (n,m) \big) \big| \, d \mu(x,k) \\ & \leq \sup_{m \in \mathbb{Z}} \int_{\mathbb{Z}} \int_{[0,1]} c_m \, d x \, d \mu_2(k) \leq \| c \|_{\ell^\infty} \cdot \mu_2 (\mathbb{Z}) < \infty , \end{align*} and \[ C_4 (K) = \mathop{\operatorname{ess~sup}}_{m \in \mathbb{Z}} \int_{\mathbb{Z}} \mathop{\operatorname{ess~sup}}_{x \in [0,1]} \int_{\mathbb{Z}} \big| K \big( (x,k), (n,m) \big) \big| \, d \nu_1 (n) \, d \mu_2(k) \leq \| c \|_{\ell^\infty} \cdot \nu_1(\mathbb{Z}) \cdot \mu_2 (\mathbb{Z}) < \infty . \] In view of Theorem~\ref{thm:SchurTestSufficientUnweighted}, this implies that $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is well-defined and bounded for all $p,q \in [1,\infty]$ with $p \geq q$, and in particular for $p = q = 1$, $p = q = \infty$ and for $(p,q) = (\infty, 1)$. Next, for $f \in \bd{L}^{1,\infty}(\nu)$, $k \in \mathbb{Z}$ and $g \in \bd{L}^2 ([0,1])$, we have \begin{equation} \begin{split} \bigg| \int_0^1 \Phi_K f (x,k) \, g (x) \, d x \bigg| & = \bigg| \sum_{n \in \mathbb{Z}} \,\, \sum_{|m| \leq \min \{ |k|, |n| \}} \!\!\! c_m \, \beta_n \, f (n,m) \, \int_0^1 g(x) \, e^{- 2 \pi i m x} \, d x \bigg| \\[0.2cm] & \leq \sum_{m \in \mathbb{Z}} c_m \, |\widehat{g}(m)| \sum_{n \in \mathbb{Z}} \beta_n \, |f(n,m)| \leq \| f \|_{\bd{L}^{1,\infty}(\nu)} \cdot \| c \|_{\ell^2 (\mathbb{Z})} \cdot \| \widehat{g} \|_{\ell^2(\mathbb{Z})} \\ & = \| f \|_{\bd{L}^{1,\infty}(\nu)} \cdot \| c \|_{\ell^2 (\mathbb{Z})} \cdot \| g \|_{\bd{L}^2 ([0,1])} , \end{split} \label{eq:ComplexKernelOperatorBoundedness} \end{equation} where we used Plancherel's theorem $\| \widehat{g} \|_{\ell^2(\mathbb{Z})} = \| g \|_{\bd{L}^2}$ in the last step. The interchange of the series with the integral above can be justified using the dominated convergence theorem and the estimate in \eqref{eq:ComplexKernelOperatorWellDefined}. Since we know from above that $\Phi_K f (\bullet,k) \in \bd{L}^{\infty}([0,1]) \subset \bd{L}^2([0,1])$, the estimate \eqref{eq:ComplexKernelOperatorBoundedness} easily implies \( \| \Phi_K f (\bullet,k) \|_{\bd{L}^1} \leq \| \Phi_K f (\bullet,k) \|_{\bd{L}^2} \leq \| f \|_{\bd{L}^{1,\infty}(\nu)} \cdot \| c \|_{\ell^2 (\mathbb{Z})}, \) which shows that ${\Phi_K : \bd{L}^{1,\infty} (\nu) \to \bd{L}^{1,\infty}(\mu)}$ is bounded, as claimed. Now, a version of the Riesz-Thorin interpolation theorem for mixed Lebesgue-spaces (see \cite[Section~7, Theorem~2]{MixedLpSpaces}) shows that $\Phi_K : \bd{L}^{p,q}(\nu) \to \bd{L}^{p,q}(\mu)$ is in fact well-defined and bounded for all $p,q \in [1,\infty]$. Finally, to see $C_3 (K) = \infty$, recall that $\nu_2$ is the counting measure on $\mathbb{Z}$, so that \[ C_3 (K) = \mathop{\operatorname{ess~sup}}_{k \in \mathbb{Z}} \int_{\mathbb{Z}} \mathop{\operatorname{ess~sup}}_{n \in \mathbb{Z}} \int_0^1 \big| K \big( (x,k), (n,m) \big) \big| \, d x \, d \nu_2 (m) = \mathop{\operatorname{ess~sup}}_{k \in \mathbb{Z}} \sum_{|m| \leq |k|} c_m = \| c \|_{\ell^1} = \infty. \] \end{example} \let\section\origsection \markleft{References} \markright{} \end{document}
\begin{document} \preprint{AIP/123-QED} \title[]{Quantum sensing with superconducting circuits} \author{S. Danilin} \email{[email protected]} \author{M. Weides} \affiliation{James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, United Kingdom} \date{\today} \begin{abstract} Sensing and metrology play an important role in fundamental science and applications, by fulfilling the ever-present need for more precise data sets, and by allowing more reliable conclusions to be drawn about the validity of theoretical models. Sensors are ubiquitous: they are used in applications across a diverse range of fields including gravity imaging, geology, navigation, security, timekeeping, spectroscopy, chemistry, magnetometry, healthcare, and medicine. Current progress in quantum technologies inevitably triggers the exploration of quantum systems to be used as sensors with new and improved capabilities. This perspective first provides a brief review of existing and tested quantum sensing systems, before discussing possible future directions for the use of superconducting quantum circuits in sensing and metrology: superconducting sensors comprising many entangled qubits and schemes employing Quantum Error Correction. The perspective also lists future research directions that could be of great value beyond quantum sensing, e.g. for applications in quantum computation and simulation. \end{abstract} \maketitle \section{\label{sec:Introduction}Introduction} Quantum sensing is the procedure of measuring an unknown quantity of an observable using a quantum object as a probe. Quantum objects \textemdash\ those in which quantum-mechanical effects can manifest and be observed \textemdash\ are known to be highly sensitive to even tiny changes in their environment, which is inevitably coupled to them. These changes can be so small that it is extremely challenging, or even impossible, to detect them employing classical measurements. Consequently, the fact that the probe/sensor is quantum endows it with extreme sensitivity. Measurement back-action imposes a random change on the system state during the measurement. The probability of an outcome depends not only on the initial state of the system but also on the strength of the measurement~\cite{Back-action_general}. In the case of a quantum sensor, the back-action is quantum-limited, and measurement schemes where it can be evaded have been demonstrated, e.g. in Ref.~[\onlinecite{Back-action_evading}]. Quantum sensors are highly engineered systems for measurements ranging from gravitational pull to magnetic and electric fields and propagating photons. Different quantum systems have been employed for sensing to date; we give a short overview in the following. Thermal vapors of alkali atoms enclosed in a cell, pumped, and interrogated by near-resonant light are used to measure magnetic fields~\cite{Budker_OpticalMagnetometry}. This method is also known as nonlinear magneto-optical rotation magnetometry. Magnetometers of this type do not have intrinsic $1/f$-noise due to the absence of nearly degenerate energy states and do not require cryogenic cooling for operation; they offer millimetre spatial resolution and sensitivity exceeding ${\rm fT}/\sqrt{\rm Hz}$~\cite{Kominis_subfemtotelsa}. Their accuracy is shot-noise limited and scales as~\cite{Budker_magnetometry_review,Allred_SERF_magnetometry} $\delta B\sim1/\sqrt{NT_{2}t}$, where $N$ is the number of atoms, $T_{2}$ is the transverse relaxation (dephasing) time, and $t$ is the signal acquisition time.
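As a rough illustration of this scaling, consider the dimensionful shot-noise form $\delta B \approx 1/(\gamma\sqrt{N T_{2} t})$ with atomic gyromagnetic ratio $\gamma$; the numbers used here are indicative assumptions rather than figures taken from the cited works. Taking $\gamma/2\pi \approx 7\ {\rm Hz/nT}$ (representative of $^{87}$Rb), $N \sim 10^{12}$ atoms, $T_{2} \sim 10\ {\rm ms}$, and $t = 1\ {\rm s}$ of averaging,
\[
\delta B \approx \frac{1}{\gamma\sqrt{N T_{2} t}} \approx \frac{1}{2\pi \times 7\times10^{9}\ {\rm Hz/T} \times \sqrt{10^{12}\times 0.01\ {\rm s} \times 1\ {\rm s}}} \approx 0.2\ {\rm fT},
\]
i.e. sub-femtotesla resolution after one second of averaging, consistent with the sensitivities quoted above.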
Spin-exchange relaxation-free (SERF) operation can be achieved by increasing the gas density, and improves the sensitivity of atomic magnetometers~\cite{Allred_SERF_magnetometry}. Another type of magnetic field sensor utilises ensembles of nuclear spins~\cite{Waters_magnetometer_NMR}. Although they are not as sensitive as atomic vapor sensors, they find applications in a variety of areas from archaeology to MRI systems~\cite{Degen_sensor_review} due to their simplicity and robustness. Nitrogen vacancy centres (NV-centres) in diamond \textemdash electron spin defects \textemdash have recently attracted a lot of attention as quantum sensors, with a predicted sensitivity for spin ensembles of $\sim0.25\cdot {\rm fT}/\sqrt{{\rm Hz}\cdot {\rm cm}^3}$~\cite{Taylor_diamond_magnetometer}, and experimentally achieved sensitivities of $\sim 1 {\rm pT}/\sqrt{\rm Hz}$~\cite{Wolf_NVensemble}. With the advent of single-spin readout in diamond~\cite{Gruber_single_defect_spectroscopy,Dobrovitski_single_spin_control}, it became possible to use such single spins for magnetometry~\cite{Taylor_diamond_magnetometer,Balasubramanian_nanoscale_imaging_magnetometry,Cole_decoherence_microscopy}, electric field sensing~\cite{Dolde_electric_field_sensing_NV_spin}, or pressure measurement~\cite{Doherty_NV_centre_pressure_sensor}. Demonstrations of frequency standards based on NV defect centres in diamond~\cite{Hodges_NV_centre_frequency_standard} and of nanoscale thermometry with sensitivities down to $5 {\rm mK}/\sqrt{\rm Hz}$ have been made~\cite{Kucsko_NV_centre_thermomtry,Neumann_NV_centre_thermomtry,Toyli_NV_centre_thermometry}. The main advantages of this type of sensor are its stability in nanostructures and its superior $10-100$ nm spatial resolution. Trapped ions have been employed to detect extremely small forces and displacements. To increase the solid angle of the field access to the trapped ion, an enhanced-access ion trap geometry was demonstrated~\cite{Maiwald_enhanced_access_ion_trap}. A force sensitivity of $\sim 100 {\rm yN}/\sqrt{\rm Hz}$ has been reached with crystals of trapped atomic ions, with the ability to discriminate ion displacements of $\sim 18 {\rm nm}$~\cite{Biercuk_trapped_ions_force_detection}. Their enhanced force and displacement sensitivity is often traded against reduced resolution. Rydberg atoms are another physical system for quantum sensing of electric fields. Their high sensitivity stems from the huge dipole moments of highly excited electronic states~\cite{Osterwalder_Rydberg_states_sensing}. Rubidium atoms prepared in circular Rydberg states were used for non-destructive (quantum nondemolition~\cite{Braginsky_QND_measurements,Levenson_QND_optics,Braginsky_QND,Grangier_QND_meaurement_optics}) measurement of single microwave photons~\cite{Nogues_QND_photon_measurement,Gleyzes_quantum_jumps_of_light}, and sensitivities reaching $3 {\rm mV}/({\rm m}\sqrt{\rm Hz})$ were achieved when Schr{\"o}dinger-cat states~\cite{Hacker_NaturePhotonics_cat_states_Rb,Vlastakis_100Photon_cat_state} were involved in the protocol~\cite{Facon_SchrodingerCat_electrometer}. The reader is directed to reviews on sensors based on atomic spectroscopy and interferometry~\cite{Kitching_review}, quantum metrology with single spins in diamond~\cite{Chen_spins_in_diamond_review}, comparative analysis of magnetic field sensors~\cite{Lenz_magnetic_sensors_review}, and a more general and comprehensive review on quantum sensing~\cite{Degen_sensor_review}.
Superconducting quantum circuits are among the leading approaches to real-world applications with quantum computers due to their controllability and reproducibility. Here we review their past and explore their future use as quantum sensors. \begin{figure*} \caption{Principal measurement schemes for magnetometers based on Superconducting Quantum Interference Devices (SQUIDs) for dc- (a) and rf- (b) measurement strategies. (c) The circuit of a frequency-tunable transmon device. (d) ac Stark shifts of transmon qudit transition frequencies induced by an external microwave drive of frequency $\omega_D$. The shifts depend on the frequency detuning of the signal from the transitions and on the amplitude $A_D$ of the signal. (e) Dependence of the energy spectrum of a tunable transmon device on the normalized external magnetic flux $\lambda=\Phi_{\rm ext}/\Phi_0$.} \label{figure_1} \end{figure*} The long-established Superconducting Quantum Interference Devices (SQUIDs) for magnetic field sensing should be distinguished from the relatively new quantum sensors based on superconducting qubits. Due to noise, the accuracy of the measured parameter usually scales with sensing time as $\sim1/\sqrt{t}$, known as the Standard Quantum Limit (SQL) scaling. The ultimate accuracy scaling law, known as the Heisenberg Limit (HL), is given by the uncertainty principle and improves as $\sim1/t$. The figures of merit of any sensor are: i) its accuracy limit, ii) the time it takes to reach this limit, and iii) the dynamic range of values it can measure. With this in mind, this perspective aims to draw attention to, and trigger the development and experimental testing of, sensing protocols that make it possible to improve upon the SQL in the time domain without loss of sensor dynamic range. The article is structured as follows: Section \Romannum{2} briefly describes magnetometers based on Superconducting Quantum Interference Devices; sensors employing superconducting qubits are discussed in Section \Romannum{3}; the utility of quantum entanglement for sensing is considered in Section \Romannum{3}.1, and prospects for using Quantum Error Correction are discussed in Section \Romannum{3}.2; finally, Section \Romannum{4} summarises the conclusions of this perspective and provides an outlook on future experiments. \section{\label{sec:SQUIDs}Superconducting Quantum Interference Devices} Shortly after the theoretical prediction of the Josephson effect~\cite{Josephson_superconductive_tunneling} and its experimental observation~\cite{Anderson_tunneling_current_exp}, the quantum interference of currents was demonstrated. This interference is at the core of any SQUID magnetometer operation~\cite{Jaclevic_currents_interference}. There are two types of SQUID-based magnetometers: the dc-SQUID~(Fig.~\ref{figure_1}(a)), with a pair of Josephson junctions connected in parallel in a superconducting loop, and the rf-SQUID~(Fig.~\ref{figure_1}(b)), with a single junction in a loop. The experimental methods and measurement schemes of magnetic field sensing with SQUID systems are diverse and extensively studied~\cite{Clarke_handbook,Fagaly_SQUID_review}. SQUIDs became the most sensitive tools for magnetic field measurements, with applications in geophysics and neuroscience~\cite{Clarke_highTc_SQUIDs}, for example, and record magnetic flux sensitivities of $\sim 50\ {\rm n}\Phi_0/\sqrt{\rm Hz}$ ($\sim 50\ {\rm nT}/\sqrt{\rm Hz}$) at $100\ {\rm Hz}$ and $\sim 50\ {\rm nm}$ loop diameter~\cite{Vasyukov_nanoSQUID}.
SQUID superiority in sensitivity has only recently been challenged by the advent of SERF atomic vapor magnetometers~\cite{Allred_SERF_magnetometry}. Despite SQUIDs' high sensitivity, the accuracy of measured results at low frequencies is shot-noise limited and improves as $\sim1/\sqrt{t}$~\cite{Clarke_handbook}. \section{\label{sec:Qubits}Superconducting Quantum Circuit Based Sensors} Superconducting circuits comprising macroscopic, human-designed, many-level anharmonic systems (qubits/qudits, Fig.~\ref{figure_1}(c)) are a well-established experimental technology platform in the field of quantum computation and simulation. The development of the field gained significant momentum when limitations of the conventional classical paradigm of computation became apparent in the early 1980s~\cite{Feynman_simulations_computer}. At present, the field is undergoing a transition to the so-called Noisy Intermediate-Scale Quantum~\cite{NISQ2018_Preskill} regime, and new applications of superconducting circuits comprising qubits/qudits in quantum sensing and metrology are emerging. The first experimental works in which such circuits are used as quantum sensors have recently appeared. The frequency and amplitude of a microwave signal were determined by spectroscopic means~\cite{Schneider_spectroscopy_sensing} and with time-domain measurements~\cite{Kristen_time-domain_sensing} via ac Stark shifts of the higher energy levels of a qudit (Fig.~\ref{figure_1}(d)). Here, an external microwave signal with frequency $\omega_D$ and amplitude $A_D$ shifts the transitions from their unperturbed values. The change in transition frequencies allows a measurement of $A_D$ and $\omega_D$ of the applied signal to be made. Furthermore, the absolute power flowing along a transmission line~\cite{Honigl-Decrinis_power_sensor} and distortions of microwave control pulses~\cite{Bylander_distortion_sensing,Gustavsson_distortion_sensing} were measured by strong coupling to a flux qubit. Methods to use a transmon qubit as a vector network analyzer (VNA) for in situ characterization of the transfer function of xy-control lines~\cite{Jerger_PRL}, and as a cryoscope to compensate distortions of z-control pulses~\cite{Rol_APL}, were demonstrated recently. These methods are useful for the calibration of microwave lines and for deducing the power reaching the circuit at millikelvin temperatures. They allow for the correction of pulse imperfections and increase the fidelities of control gates used in quantum computation and simulation. All of these methods are implemented on a superconducting structure comprising a single qubit/qudit. In quantum information processing, precise dynamic control of the quantum states is key to increasing the circuit depth. Conventional qubit frequency tuning is achieved by applying a well-controlled magnetic flux through a split-junction loop within the quantum circuit. In turn, the quantum circuit can sense these externally generated static or dynamic fields. By replacing the flux-threaded split junction with a voltage-biased junction (gatemon~\cite{Larsen_PRL,Lange_PRL}), the sensed external quantity becomes a voltage instead of a current. In what follows we use magnetic flux as the external parameter, but the sensed quantity could equally well be a voltage.
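To make the flux dependence sketched in Fig.~\ref{figure_1}(e) concrete, the short Python sketch below evaluates the standard transmon approximation $\hbar\omega_{01}\approx\sqrt{8E_{C}E_{J}(\Phi_{\rm ext})}-E_{C}$, with $E_{J}(\Phi_{\rm ext})=E_{J,\rm max}\left|\cos(\pi\Phi_{\rm ext}/\Phi_{0})\right|$ for a symmetric split junction. The energy values are assumptions chosen purely for illustration and do not correspond to any device discussed in the text.
\begin{verbatim}
import numpy as np

# Assumed, illustrative transmon parameters in GHz (E/h):
EC = 0.25        # charging energy
EJ_MAX = 20.0    # maximal Josephson energy of the symmetric split junction

def f01(phi_ext, ec=EC, ej_max=EJ_MAX):
    """Approximate 0-1 transition frequency (GHz) versus the normalized
    external flux phi_ext = Phi_ext/Phi_0. Valid only while EJ >> EC,
    i.e. away from |phi_ext| = 0.5 where the approximation breaks down."""
    ej = ej_max * np.abs(np.cos(np.pi * phi_ext))
    return np.sqrt(8.0 * ec * ej) - ec

phi = np.linspace(-0.4, 0.4, 9)
print(np.round(f01(phi), 3))  # frequency decreases away from the sweet spot
\end{verbatim}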
Superconducting circuits comprising qubits/qudits possess all the properties required to construct external field sensing quantum systems~\cite{Degen_sensor_review}: they have quantized energy levels; it is possible to initialize, coherently control, and read out their quantum states; and the energy levels of the circuit, $E_i(\lambda)$, can be made dependent on the external parameter, $\lambda$, to be measured (Fig.~\ref{figure_1}(e)). For frequency-tunable qubits with a split junction, the parameter $\lambda$ is an external flux, $\Phi_{\rm ext}$. If the qubit is prepared in a superposition of the basis states $\{|0\rangle,|1\rangle\}$ and placed in an external field, its state accumulates a phase $\phi(\Phi_{\rm ext}) = \Delta\omega(\Phi_{\rm ext})\cdot\tau$, which depends on the flux $\Phi_{\rm ext}$. Here $\Delta\omega(\Phi_{\rm ext})=\omega_q(\Phi_{\rm ext})-\omega_{\rm d}$ is the detuning between the qubit frequency and the control pulse frequency used for the state preparation. By applying a second control pulse, identical to the first one, after some time $\tau$, and measuring the population of the qubit basis states, it is possible to reveal the accumulated phase in the oscillating dependencies of $P_{|0\rangle}$ and $P_{|1\rangle}$. This measurement, known as {\it Ramsey fringes interferometry}, can be employed for field sensing tasks. An equal superposition state $(|0\rangle+|1\rangle)/\sqrt{2}$ provides the maximal pattern visibility here, and hence the best sensitivity to the field. The Ramsey fringes pattern $P_{|1\rangle}(\Phi_{\rm ext},\tau)$ can be simulated or directly measured as a calibration pattern before the field sensing routine. In this scenario, the outcome $P_m$ measured during the sensing procedure is used in conjunction with the calibration pattern to determine the unknown flux value. Fig.~\ref{figure_3} shows the simulated dependence of the probability $P_{|1\rangle}(\Phi_{\rm ext},\tau)$ on the external flux at different delay times $\tau_i$. One can see that the longer the delay time, the higher the sensitivity of $P_{|1\rangle}$ to the external flux. This is only the case if the delay time is shorter than the coherence time $T_2^*$ of the qubit; for longer delay times, the sensitivity is reduced. However, two issues should be noted here. Firstly, for an unknown flux value, it is not possible to choose {\it a priori} the delay time $\tau^*$ with the best sensitivity. Secondly, for longer delay times, it is not possible to unambiguously determine the measured flux based on a single outcome. As shown in Fig.~\ref{figure_3}, the same result $P_m$ can correspond to many flux values $\{\Phi_1,\Phi_2,\Phi_3,\Phi_4\}$, and to make the measurement unambiguous, one has to reduce the dynamic range of the sensor to the interval highlighted in the figure. This interval is substantially shorter than for the shortest delay time $\tau_1$, where the measurement is single-valued (one-to-one correspondence). \begin{figure} \caption{Ramsey fringes pattern $P_{|1\rangle}(\Phi_{\rm ext},\tau)$ at different delay times $\tau_i$. $P_m$ is an outcome measured during the sensing procedure and used for the determination of the unknown flux value.} \label{figure_3} \end{figure} Phase estimation algorithms~\cite{Giovannetti_1,Giovannetti_4} (PEAs) can be employed to address both issues. They gradually tune the delay time to the value $\tau^*=T_2^*$ with the highest available sensitivity, without a reduction in the dynamic range of the sensor, and appear to be powerful tools in sensing.
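The sketch below illustrates, under simplifying assumptions, how such a calibration pattern can be generated numerically: it evaluates an idealized Ramsey probability $P_{|1\rangle}=\tfrac{1}{2}\left[1+e^{-\tau/T_2^*}\cos\left(\Delta\omega(\Phi_{\rm ext})\tau\right)\right]$ assuming perfect $\pi/2$ pulses, purely exponential loss of coherence, and a linear flux-to-detuning conversion; none of the numerical values are taken from the text.
\begin{verbatim}
import numpy as np

# Assumed, illustrative parameters (not from the text):
T2_STAR = 5e-6                # qubit coherence time T2*, s
SLOPE = 2 * np.pi * 100e6     # detuning slope, rad/s per Phi_0 (assumed linear)

def ramsey_p1(phi_ext, tau, t2=T2_STAR, slope=SLOPE):
    """Idealized Ramsey fringe probability P(|1>) versus normalized flux
    phi_ext = Phi_ext/Phi_0 and delay time tau (perfect pi/2 pulses)."""
    delta_omega = slope * phi_ext
    return 0.5 * (1.0 + np.exp(-tau / t2) * np.cos(delta_omega * tau))

# Calibration pattern on a flux/delay grid, as used before the sensing routine
phi_grid = np.linspace(-0.05, 0.05, 201)
tau_grid = np.array([0.2e-6, 1e-6, 5e-6])      # short ... comparable to T2*
pattern = np.array([ramsey_p1(phi_grid, t) for t in tau_grid])
print(pattern.shape)   # (3, 201): one fringe curve per delay time
\end{verbatim}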
Kitaev~\cite{Kitaev} and Fourier~\cite{Quantum_Fourier} phase estimation algorithms were used with a single tunable transmon qubit to measure external flux, and accuracy scaling beyond the SQL was demonstrated experimentally~\cite{Danilin_magnetometry}. These algorithms follow a stepped strategy. At each step of the Kitaev algorithm, the interval of possible fluxes is reduced by a factor of two based on the measurement outcome, and a new optimal delay time providing improved sensitivity is found for the next step. The optimal delay times grow from step to step on average, and gradually tend to the coherence time. PEAs thus allow the accuracy scaling to approach $\sim1/t$ (the HL). The qubit coherence time $T_2^*$ serves as a quantum resource: the longer it is, the more steps of the algorithm can be made before the delay time approaches $T_2^*$, and the higher the accuracy that can be achieved in the same sensing time. Quantum sensing algorithms employing qutrits instead of qubits have also recently been considered~\cite{Shlyakhov_metrology}. \subsubsection{Adding Entanglement} Quantum entanglement can provide improvements in attainable sensitivity~\cite{Giovannetti_2,Giovannetti_3} for short interrogation times $\tau$, since an entangled state of $N$ qubits, used as the probe state, allows for an $N$-fold speed-up in phase accumulation. Experimentally, this has been demonstrated for systems including three trapped $^9{\rm Be}^+$ ions~\cite{Leibfried_Science}, four entangled photons~\cite{Nagata_Science}, ten nuclear spins~\cite{Jones_Science}, and a single bosonic mode of a superconducting resonator~\cite{Wang_NatCommun}. Though these experiments clearly demonstrate the improvement of sensitivity beyond the SQL with the number of entangled qubits, they do not yet provide an explicit metrological routine allowing the measurement of an unknown external field. To this end, we analyze the probability pattern $P_{|10\rangle}$ of a two-qubit state for sensing with PEAs. We use entangling conditional phase gates~\cite{DiCarlo_cPhase} and simulate the evolution of the two-qubit state in QuTiP~\cite{QuTiP}. Fig.~\ref{figure_2}(a) shows the time scheme used for the simulation of the pattern, where $CP_{ij}$ denotes the c-Phase gate inverting the sign of only the $|ij\rangle$ state. Relaxation $\sqrt{\Gamma_1}\hat\sigma_+$ and pure dephasing $\sqrt{\Gamma_\phi/2}\hat\sigma_z$ processes were taken into account in the simulation, with identical values used for both qubits' decoherence rates. The flux dependence of the qubit transition frequency is assumed to be the same for both qubits. They are equally detuned to a flux point where $d\Delta\omega/d\Phi_{\rm ext}\ne 0$. Starting from both qubits in the ground state, we create the $|\Phi^+\rangle$ Bell state, apply the external flux we want to measure to both qubits, and allow the system to evolve for a variable time $\tau$. After that, we convert the entangled state to a separable state, projecting the entangled-state phase onto the phase of the first qubit, as shown in Eq.~(\ref{evolution}). \begin{figure} \caption{(a) Sequence of control operations used in the simulation. (b) Simulated probability pattern $P_{|10\rangle}(\Phi_{\rm ext},\tau)$ of the two-qubit state $|10\rangle$ with $\Gamma_1=0.2\ {\rm MHz}$, $\Gamma_\phi=0.034\ {\rm MHz}$, and $f_{01}=8.943\ {\rm GHz}$ for both qubits.
(c) Comparison of the time dependencies of $P_{|1\rangle}$, the probability for a single qubit, and $P_{|10\rangle}$, the probability for the two-qubit case, at a fixed external flux.} \label{figure_2} \end{figure} \begin{multline} |00\rangle\xrightarrow[CP_{10}]{\rm Entangler}\frac{|00\rangle+|11\rangle}{\sqrt{2}}\underset{\tau}{\Longrightarrow}\frac{|00\rangle+e^{i\phi(\Phi_{\rm ext},\tau)}|11\rangle}{\sqrt{2}}\\ \xrightarrow[CP_{00}]{\rm Projector}\left(\frac{-1+e^{i\phi}}{2}|0\rangle+\frac{-1-e^{i\phi}}{2}|1\rangle\right)\otimes|0\rangle. \label{evolution} \end{multline} Subsequent measurement of both qubit states, for different delay times $\tau$ and different external fluxes $\Phi_{\rm ext}$, results in the pattern $P_{|10\rangle}$ shown in Fig.~\ref{figure_2}(b), and allows for the determination of the probabilities of all four possible two-qubit states. The pattern closely resembles that of the {\it Ramsey fringes}, but the $P_{|10\rangle}$ oscillations have double the frequency, $\phi=2\times\Delta\omega\times\tau$ (Fig.~\ref{figure_2}(c)). The doubling of the phase accumulation speed results in a two times better accuracy of flux sensing at the same short sensing times. However, the pattern contrast also reduces more quickly (Fig.~\ref{figure_2}(c)) in comparison with the single-qubit case, making the advantage less impressive for long measurement times. The quicker reduction of the pattern contrast originates from the shortening of the coherence time $T_{2,N}^*$ with the growth of the system size $N$. It was pointed out~\cite{Huelga_PRL} that, for an entangled probe subject to uncorrelated pure dephasing, the maximal sensitivity is the same as in the standard {\it Ramsey fringes} scheme and is merely reached at a shorter delay time $\tau$, because the coherence time shortens proportionally to the size of the system ($\sim N$). However, experimental tests on up to eight trapped-ion qubits under the influence of correlated noise~\cite{Monz_PRL} demonstrated a quicker coherence reduction ($\sim N^2$). So the pure dephasing rate can be proportional to $N^\alpha$, with $\alpha=1$ representing non-correlated noise and $\alpha=2$ representing correlated noise acting on all qubits. Experimental investigations into noise correlations between two or more superconducting qubits have only recently started to appear~\cite{Harper_NatPhys,Lupke_PRX,Han_FundRes}. These results are important for quantum computation and quantum-enhanced sensing, and further experiments would be of great value. The dependence of the entangled-state coherence time on the number of entangled qubits, which is useful for quantum sensing, has not been studied for superconducting qubits thus far. Next, we simulate the flux sensing routines based on the Kitaev PEA run with a single qubit, and with two and three qubits prepared in the GHZ entangled state. We compare the accuracy of flux sensing achieved by employing entangled states to that of a single qubit, for the cases when $\alpha = 1\ \textrm{and}\ 2$ in the pure dephasing rates. To perform the simulation we compute the probability patterns $P_{|10...0\rangle}$ of the $N$-qubit states for $N = 1,2,\ \textrm{and}\ 3$ as \begin{equation} P_{|10...0\rangle}(N,\Delta\omega,\tau)=\frac{1}{2}+\frac{1}{2}e^{-\left(\frac{N\Gamma_1}{2}+N^\alpha\Gamma_\phi\right)\tau}\cos(N\Delta\omega\tau). \label{P_pattern} \end{equation} These probabilities are obtained after projecting the phase accumulated by the GHZ $N$-qubit state during the evolution in the external magnetic field onto the first qubit.
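A direct Python transcription of Eq.~(\ref{P_pattern}) is given below; it generates the probability patterns for $N=1,2,3$ and either value of $\alpha$, and reduces to the single-qubit Ramsey pattern for $N=1$. The decoherence rates are those quoted above (interpreted here as inverse seconds), while the example detuning is an illustrative assumption.
\begin{verbatim}
import numpy as np

GAMMA_1 = 0.2e6      # relaxation rate Gamma_1, 1/s
GAMMA_PHI = 0.034e6  # pure dephasing rate Gamma_phi, 1/s

def p_pattern(N, delta_omega, tau, alpha=1,
              gamma1=GAMMA_1, gamma_phi=GAMMA_PHI):
    """P_|10...0>(N, delta_omega, tau) from the equation above: an N-qubit
    GHZ probe accumulates phase N*delta_omega*tau while decaying with
    relaxation N*Gamma_1 and pure dephasing N**alpha * Gamma_phi
    (alpha = 1: uncorrelated noise, alpha = 2: correlated noise)."""
    decay = np.exp(-(N * gamma1 / 2.0 + N**alpha * gamma_phi) * tau)
    return 0.5 + 0.5 * decay * np.cos(N * delta_omega * tau)

# Compare a single qubit with a 3-qubit GHZ probe under correlated dephasing
tau = np.linspace(0.0, 10e-6, 5)
print(np.round(p_pattern(1, 2 * np.pi * 0.5e6, tau), 3))
print(np.round(p_pattern(3, 2 * np.pi * 0.5e6, tau, alpha=2), 3))
\end{verbatim}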
The dependencies of the qubits' spectra on the flux are assumed to be identical. The total relaxation rate and the total pure dephasing rate are $\Gamma_{1,N}=N\Gamma_1$ and $\Gamma_{\phi,N}=N^\alpha\Gamma_\phi$, with $\Gamma_1 = 0.2\ \textrm{MHz}$ and $\Gamma_\phi = 0.034\ \textrm{MHz}$. We use the equidistant flux grids in the computation of probability patterns with 2048, 3072, and 6144 values for 1-, 2-, and 3-qubit cases, respectively. If the sensor is exposed to the measured field only during the phase accumulation time, the dynamic range of fluxes measured with $N$ entangled qubits is $\Delta\Phi_{\rm ext}\sim\pi/N\tau_{\rm min}$, where $\tau_{\rm min}$ is the minimal time required for switching the external field on and off. Thus, for a sensor with $N$ entangled qubits, the dynamic range is reduced as $\sim1/N$ in comparison with a single qubit sensor. The flux grids for 2- and 3-qubit sensors form the subsets of the grid for the single-qubit sensor (Fig.\ref{figure_4}(a)). We choose $F=256$ flux values to be measured from the flux grid of the 3-qubit sensor so that it is also possible to measure them with the two other sensors. As the flux interval of possible values is reduced by 2 at each step of the Kitaev PEA, the chosen flux grids allow us to make 10 steps of the algorithm. We repeat the algorithm $M=24$ times at each of the $F=256$ flux values. Fig.~\ref{figure_4}(b) shows the obtained delay times for every step of the algorithm averaged, first, over all $M=24$ repetitions and, then, over all $F=256$ flux values. One can see that the delay times grow on average from step to step, and tend toward the coherence time of the sensor $T_2^*$. With the reduction of the coherence time for $N=2,\ 3$ or $\alpha$ going from 1 to 2 the delay times start to saturate at the earlier steps. Fig.~\ref{figure_4}(c) shows the results of the simulations. We compute the phase accumulation time $\tau_{j,k,l}$ for every flux value $(j)$, repetition $(k)$, and the step $(l)$, and then the averaged total phase accumulation time, $\overline{\tau_l}$, for every step as \begin{equation} \overline{\tau_l}=\frac{1}{F}\sum_{j=1}^F\frac{1}{M}\sum_{k=1}^M\tau_{j,k,l},\quad \tau_{j,k,l}=\sum_{i=1}^l\tau_i^{(j,k)}n_i^{(j,k)}. \label{phase_acc_time_eq} \end{equation} Here, $\tau_i^{(j,k)}$ and $n_i^{(j,k)}$ are the delay time for the step number $i$ and the number of measurements done at this step for the $j$-th flux value in the $k$-th repetition, respectively. \begin{figure} \caption{(a) Calibration patterns at the minimal delay time used in the simulations for the single-qubit sensor, and for the sensors with 2 and 3 entangled qubits. (b) Averaged delay times for different steps of the Kitaev PEA and sensors comprising the different number of qubits. (c) Comparison of the flux sensing accuracy scaling with the phase accumulation time for the sensors comprising the different number of qubits. The coherence time $T_2^*$ is set to infinity for the QEC case.} \label{figure_4} \end{figure} In our simulations, we use $\sigma_0=\sigma_1=1.5$ for the widths of measurement outcome normal distributions for states $|0\rangle$ and $|1\rangle$, and $\epsilon=0.01\%$ for the error probability. These determine the number of measurements $n_i$ done at each step, and also the condition to terminate the step and discard less probable flux values. 
By the end of step $l$ of the algorithm, we have a probability distribution for the remaining most probable fluxes for every flux value $\Phi_j, j\in[1,F]$ chosen to be measured, and every repetition $k\in[1,M]$. We use this distribution to compute the mean flux values $\hat\Phi_{jkl}$ and find the averaged flux accuracy for every step as \begin{equation} \overline{\left(\frac{\delta\Phi}{\Phi_0}\right)_l}=\sqrt{\frac{1}{\Phi_0^2F}\sum_{j=1}^F\frac{1}{M-1}\sum_{k=1}^M(\hat\Phi_{jkl}-\Phi_j)^2}. \label{accuracy_eq} \end{equation} Dependencies of the averaged flux accuracy $\overline{\left(\delta\Phi/\Phi_0\right)_l}$ on the averaged total phase accumulation time $\overline{\tau_l}$ are shown in Fig.~\ref{figure_4}(c). We compare the improvements of the flux sensing accuracy with the phase accumulation time for the sensors comprising a single qubit, or 2 and 3 entangled qubits with $\alpha=1$ or $2$, in Fig.~\ref{figure_4}. One can see that for the first algorithm steps, the scaling of the flux accuracy is close to the HL scaling for all considered sensors. When the averaged delay time approaches the coherence time and starts to saturate (Fig.~\ref{figure_4}(b)), the accuracy scaling deviates from the HL scaling and returns gradually back to the SQL scaling. The shorter the coherence time of the sensor, the sooner this transition happens, so that the sensors with 2 and 3 entangled qubits deviate from the HL scaling at the earlier steps of the algorithm. Nevertheless, the accuracies at the same phase accumulation time achieved by the sensors with the entangled qubits are always better than that of the sensor based on a single qubit, and the sensor with 3 entangled qubits proves to be better than with 2 entangled qubits. The advantage in the accuracy is reduced as the crossover from the HL scaling to the SQL scaling occurs, but the advantage from the earlier steps of the algorithm is not completely lost at the later steps even when $\alpha>1$. Importantly, for all pure dephasing rates with $\alpha\in[1,2]$, there is an accuracy improvement caused by the use of the entangled sensor. In practice, the calibration of a sensor employing PEA -- the measurement of the probability pattern $P_{|10...0\rangle}$ -- can take a long time. To mitigate this, FPGA-based electronics can be used for fast reset of the sensor qubits~\cite{Gebauer_AIP}. If the duration of control pulses and the time to read out and reset the qubits are much shorter than the coherence time of the sensor, the total sensing time will almost entirely consist of the phase accumulation time. This will noticeably shorten the calibration and speed up the sensing itself. Another experimental aspect of employing entangled states for sensing is the sensitivity of the control pulses to the external field being measured. Conditional phase gates realized via flux control pulses are very sensitive to external magnetic fields, which makes it necessary to allow the external field to act on the system only during the phase accumulation time. Otherwise, the initial entangled $N$-qubit state will not be the desired one. With regard to this, all-microwave entangling gates~\cite{all_microwave_gates} can be considered as an alternative way of the preparation of the sensing state. If they appear to be more resilient to the field being measured, it will be possible to keep the field continuously present, simplifying the operation. 
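For completeness, the short numpy sketch below spells out the averaging of Eqs.~(\ref{phase_acc_time_eq}) and (\ref{accuracy_eq}) that produces the curves in Fig.~\ref{figure_4}(c); the arrays of accumulated times and flux estimates are filled with random placeholder numbers here, since in practice they come from the simulated (or measured) runs of the algorithm.
\begin{verbatim}
import numpy as np

F, M = 256, 24      # number of measured flux values and of repetitions
PHI_0 = 1.0         # flux quantum in the chosen flux units
rng = np.random.default_rng(0)

# Placeholder inputs for a single step l of the algorithm:
tau_jkl = rng.uniform(0.1e-6, 5e-6, (F, M))  # accumulated times tau_{j,k,l}
phi_true = rng.uniform(-0.5, 0.5, F)         # the F flux values Phi_j
phi_hat = phi_true[:, None] + rng.normal(0.0, 1e-3, (F, M))  # estimates

# Averaged total phase accumulation time for this step:
tau_bar = tau_jkl.mean(axis=1).mean()

# Averaged flux accuracy for this step:
per_flux = ((phi_hat - phi_true[:, None]) ** 2).sum(axis=1) / (M - 1)
accuracy = np.sqrt(per_flux.sum() / (PHI_0 ** 2 * F))

print(tau_bar, accuracy)
\end{verbatim}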
\subsubsection{Quantum Error Correction} Instead of increasing the phase accumulation speed in a system of many entangled sensors, one can try to improve the coherence time $T_2^*$ of the available sensor qubits. This can be achieved with Quantum Error Correction (QEC) strategies~\cite{QEC_theor_1,QEC_theor_2}, already demonstrated with superconducting qubits~\cite{QEC_exp_1,QEC_exp_2,QEC_exp_3}. In this case, one has a set of sensor qubits and entangles them with ancillary qubits, which are periodically measured to detect any possible errors. Entanglement of a sensor-ancillary pair allows a judgment on the state of the sensor to be made based on the measurement of the ancillary. Once an error is detected in one of the ancillary qubits, the state of the corresponding sensor qubit is corrected. It was shown that for a $d$-dimensional sensor space it is sufficient to have an ancillary space of the same dimensionality, and that a minimalistic two-qubit QEC setup (one sensor qubit and one ancillary qubit) can outperform, in resolution, schemes involving large-scale entanglement~\cite{Sekatski_theory}. This strategy is experimentally very attractive due to its relative simplicity and the fact that it has not yet been tried with superconducting qubits. In the limiting case of an infinitely long coherence time $T_2^*$ of the sensor qubits achieved with QEC, the accuracy scaling will follow the HL law for all steps of the algorithm (Fig.~\ref{figure_4}(c)). The number of steps and the final accuracy reached will depend on the number of flux values used in the calibration pattern. Recently, important results were obtained on the attainability of the HL scaling of precision in time, depending on the noise present. For a superconducting qubit sensor with the Hamiltonian $\omega_q\hat{\sigma}_z/2$, it is possible to use QEC to compensate ``perpendicular'' $\{\hat{\sigma}_x,\hat{\sigma}_y\}$ noise (relaxation) and achieve the HL precision scaling. ``Parallel'' $\{\hat{\sigma}_z\}$ noise (pure dephasing) cannot be compensated by QEC, and the SQL scaling remains unsurpassed~\cite{HL_in_metrology}. As a consequence, superconducting qubits with coherence time limited by the relaxation rate are more suitable for sensing tasks. The design of a superconducting circuit and a metrological scheme including QEC for sensing have already been suggested to overcome the limit imposed by relaxation~\cite{QEC_against_relaxation}. It is important to emphasize that QEC schemes rely on the realization of fast and full quantum control, where qubit readout, analysis, and reaction times are much shorter than the coherence time of the sensor. The development of experiments where PEAs are combined with QEC is of great importance, as such experiments can contribute substantially to the future progress of quantum sensing and metrology. \section{\label{sec:Conclusions}Conclusions} Two strategies involving entanglement for sensing and metrology with superconducting quantum circuits are considered, with the aim of going beyond the SQL scaling in the time domain. The first is based on the increased speed of phase accumulation for a large-scale entangled state of $N$ sensors. The advantages seen for this strategy depend on the characteristics of the noise seen by the entangled state. Future experimental studies of the coherence reduction with the size of the entangled state $N$ are interesting and necessary, but have not yet been undertaken with superconducting qubits.
The second strategy is to use QEC on entangled pairs (sensor-ancillary) of superconducting qubits, with the idea of enhancing the coherence time of the sensor qubits. To this end, the implementation of a metrological protocol combining QEC with one of the PEAs could experimentally demonstrate magnetic field sensing beyond the SQL scaling in time. The proposed experiments are of high value for quantum metrology and sensing. \begin{acknowledgments} The authors are thankful to Prof A.V. Lebedev for valuable discussions and to P.G. Baity and J. Brennan for the careful reading of the manuscript. They acknowledge the financial support from the EPSRC grant number EP/T018984/1. \end{acknowledgments} \ {\bf\noindent Data availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \end{document}
\begin{document} \title{Accelerated Hierarchical Density Clustering} \begin{abstract} We present an accelerated algorithm for hierarchical density based clustering. Our new algorithm improves upon HDBSCAN*, which itself provided a significant qualitative improvement over the popular DBSCAN algorithm. The accelerated HDBSCAN* algorithm provides comparable performance to DBSCAN, while supporting variable density clusters, and eliminating the need for the difficult to tune distance scale parameter $\epsilon$. This makes accelerated HDBSCAN* the default choice for density based clustering. \end{abstract} \section{Introduction} Clustering is the attempt to group data in a way that meets with human intuition. Unfortunately, our intuitive ideas of what makes a `cluster' are poorly defined and highly context sensitive \cite{Hennig2015clusters}. This results in a plethora of clustering algorithms, each of which matches a slightly different intuitive notion of what a natural grouping is. Despite the uncertainty underlying the clustering process, it continues to be used in a multitude of scientific domains. The fundamental problem of finding groupings is pervasive, and results, however poor, are still important and informative. It is used in diverse fields such as molecular dynamics \cite{melvin2016uncovering}, airplane flight path analysis \cite{wilson2016exploratory}, crystallography \cite{spackman2016high}, and social analytics \cite{korakakis2016xenia}, among many others. While clustering has many uses to many people, our particular focus is on clustering for the purpose of exploratory data analysis. By exploratory data analysis we mean the process of looking for ``interesting patterns'' in a data set, primarily with the goal of generating new hypotheses or research questions about the data set in question. This necessitates minimal parameter selection and few a priori assumptions about the data. In this use case, it is highly desirable that solutions have informative failure modes. Specifically, when data is poorly clustered or does not contain clusters, it is necessary to have some indication of this from the clustering algorithm itself. Many traditional clustering algorithms are poorly suited to exploratory data analysis tasks. In particular, most clustering algorithms suffer from the problems of difficult parameter selection, insufficient robustness to noise in the data, and distributional assumptions about the clusters themselves. Many algorithms require the selection of the number of clusters, either explicitly, or implicitly through proxy parameters. In the majority of use cases we have encountered, selecting the number of clusters is very difficult a priori. Methods to determine the number of clusters, such as the elbow method and silhouette method, are often subjective and can be hard to apply in practice. Ultimately these methods all hinge on the clustering quality measure chosen; such measures are diverse and often closely associated with particular clustering algorithms \cite{Hennig2015clusters}. Many practitioners fail to distinguish between partitioning and clustering, to the point where the terms are now often used interchangeably. By clustering we specifically mean finding subsets of the data which group ``naturally'', without necessarily assigning a cluster to every point. Partitioning, on the other hand, requires that every data point be associated with a particular cluster. In the presence of noise the partitioning approach can be problematic.
Even without noise, if clear clusters are not present, partitioning will simply return a poor solution. \begin{figure} \caption{A qualitative comparison of some candidate clustering algorithms on synthetic data. Colors indicate cluster membership (grey denotes noise). We advocate density based clustering methods when performing exploratory data analysis as they require fewer assumptions about the data distribution, and can refuse to cluster points. Unclustered ``noise'' points for both DBSCAN and HDBSCAN* are depicted in gray. The above clustering results represent the result of a qualitative hand tuned search for optimal parameters.\protect\footnotemark} \label{fig:qualitative_clustering} \end{figure} \footnotetext{See \url{https://github.com/lmcinnes/hdbscan_paper/blob/master/Qualitative\%20clustering\%20results.ipynb} for code used to generate these plots} Distributional assumptions on the data are difficult to make in exploratory data analysis. As a result we examine density based clustering since it has few implicit assumptions about the distribution of clusters within the data. Among density based clustering techniques DBSCAN \cite{ester1996dbscan} is attractive in that it is efficient and is robust to the presence of noise within data. Its primary difficulties include parameter selection and the handling of variable density clusters. In \cite{campello2013density} and \cite{campello2015hierarchical} Campello et al. propose the HDBSCAN* algorithm, which addresses both of these problems, but its major difficulty is that it sacrifices performance to do so. In Figure \ref{fig:qualitative_clustering} we compare three candidate clustering algorithms: K-Means, DBSCAN, and HDBSCAN*. The archetypal clustering algorithm, K-Means, suffers from all three of the problems mentioned previously: requiring the selection of the number of clusters; partitioning the data, and hence assigning noise to clusters; and the implicit assumption that clusters have Gaussian distributions. In comparison, being a density based approach, DBSCAN only suffers from the difficulty of parameter selection. Finally, HDBSCAN* resolves many of the difficulties in parameter selection by requiring only a small set of intuitive and fairly robust parameters. Section \ref{explain} introduces the HDBSCAN* algorithm. We provide three different descriptions of the algorithm: the first description follows Chaudhuri et al. \cite{Chaudhuri:2010:RCC:2997189.2997228}, \cite{chaudhuri2014consistent} and Stuetzle et al. \cite{stuetzle2003estimating}, \cite{stuetzle2010generalized}, viewing the algorithm as a statistically motivated extension of Single Linkage clustering; the second description follows Campello et al. \cite{campello2013density}, \cite{campello2015hierarchical}, viewing the algorithm as a natural hierarchical extension of the popular DBSCAN algorithm; the third, novel, description is in terms of techniques from topological data analysis \cite{carlsson2009topology}. All three descriptions are valid, and collecting them here serves to bring these diverse fields together. Both the statistical and computational descriptions of HDBSCAN* have been published before (though little comparison has been drawn). We believe the topological description is a significant new contribution of this paper, and offers the opportunity to bring new and powerful mathematical tools to bear on the problem. The major contribution of this paper is section \ref{accel}, which describes a new algorithm for computing HDBSCAN* clustering results.
This new algorithm, building on the work of March et al. \cite{march2010fast} and Curtin et al. \cite{curtin2015faster}, \cite{curtin2013tree}, offers significant improvements in average case asymptotic performance. In section \ref{perf} we compare the performance of our new HDBSCAN* algorithm against other clustering algorithms. In particular, we demonstrate the asymptotic performance improvement over the reference HDBSCAN* algorithm, and show our new algorithm provides HDBSCAN* with comparable asymptotic performance to DBSCAN, one of the fastest extant clustering algorithms. \section{HDBSCAN* Explained Three Ways}\label{explain} Algorithms like HDBSCAN* lie at the convergence of several lines of research from different fields. To highlight this convergence we will describe the HDBSCAN* algorithm from three different perspectives: from a statistically motivated point of view; with a computationally motivated mindset; and in a topologically motivated framework. Through this repetition we hope to both provide a sound introduction to how the algorithm works, and to place it in a richer context of ideas. We also hope that explanations that are less familiar will become easier to follow by analogy to explanations closer to the reader's field of expertise. Finally, we hope to bring together disparate fields of research that are attacking the same problem and arriving at nearly the same solution, and unify their approaches. \subsection{Statistically Motivated HDBSCAN*}\label{stats} A statistically oriented view of density clustering begins with the assumption that there exists some unknown density function from which the observed data is drawn. From the density function $f$, defined on a metric space $(\mathcal{X}, d)$, one can construct a hierarchical cluster structure, where a cluster is a connected subset of an $f$-level set $\{x \in (\mathcal{X}, d) \mid f(x) \geq \lambda\}$. As $\lambda\geq 0$ varies these $f$-level sets nest in such a way as to construct an infinite tree, which is referred to as the \emph{cluster tree} (see figure \ref{fig:cluster_tree} for an example). Each cluster is a branch of this tree, extending over the range of $\lambda$ values for which it is distinct. The goal of a clustering algorithm is to suitably approximate the cluster tree, converging to it in the limit of infinite observed data points. \begin{figure} \caption{The cluster tree (red) induced by a density function (blue).} \label{fig:cluster_tree} \end{figure} This idea dates back at least to Hartigan \cite{hartigan1981consistency}, and has become an increasingly popular way to frame the clustering problem; see \cite{rinaldo2012stability}, \cite{rinaldo2010generalized}, \cite{stuetzle2003estimating}, \cite{stuetzle2010generalized} and \cite{von2005towards} for examples. Our description of HDBSCAN* in these terms follows Chaudhuri et al. \cite{Chaudhuri:2010:RCC:2997189.2997228}, \cite{chaudhuri2014consistent} and their description of Robust Single Linkage. The motivation for the approach is based on Hartigan's work on consistency results for single linkage clustering \cite{hartigan1981consistency}. Hartigan's results while impressive, only apply to one dimensional data. The commonly cited drawback of single linkage clustering is that it is not robust to noise and suffers from chaining effects (spurious points merging clusters prematurely) \cite{martinez2016properties}, \cite{wishart1969mode}. Wishart proposed a heuristic algorithm as a potential solution to this in \cite{wishart1969mode}. 
The Robust Single Linkage algorithm \cite{Chaudhuri:2010:RCC:2997189.2997228}, \cite{chaudhuri2014consistent} extends Wishart's basic approach, and provides suitable theoretical underpinnings. The Robust Single Linkage algorithm assumes that the data set \[ X = \{X_1, X_2, \ldots, X_N\} \] is sampled from an unknown density $f$ on some metric space $(\mathcal{X}, d)$. We then define $B(X_i, \varepsilon)$ to be the open ball of radius $\varepsilon$ in $(\mathcal{X}, d)$. The algorithm takes two inputs, $k$ and $\alpha$. For each $X_i \in X$ define $$r_k(X_i) = \inf \{\varepsilon \mid B(X_i, \varepsilon) \text{ contains }k\text{ points}\}.$$ For each $\varepsilon \geq 0$ define a graph $G_\varepsilon$ with vertices $\{X_i \in X \mid r_k(X_i) \leq \varepsilon\}$ and an edge $(X_i, X_j)$ if $d(X_i, X_j) \leq \alpha\varepsilon$. Define the clusters at level $\varepsilon$ of the tree to be the connected components of $G_\varepsilon$. In \cite{Chaudhuri:2010:RCC:2997189.2997228} and \cite{chaudhuri2014consistent} Chaudhuri et al. provide a number of results on the consistency and convergence of this algorithm in Euclidean space ($\mathbb{R}^d$) for $k \sim d \log N$ with $\sqrt{2} \leq \alpha \leq 2$. Eldridge et al. \cite{eldridge2015beyond} provide even stronger consistency results by introducing stricter notions of consistency. This provides a sound statistical basis for the approach. A remaining issue with this algorithm is that the resulting cluster tree with $N$ leaves is highly complex, making analysis difficult for large data set sizes. This is, of course, an issue faced by many hierarchical clustering algorithms. Several authors, including Stuetzle et al. \cite{stuetzle2003estimating}, \cite{stuetzle2010generalized}, and Chaudhuri et al. \cite{chaudhuri2014consistent} have proposed approaches to pruning the cluster tree to simplify presentation and analysis. While Chaudhuri et al. provide consistency guarantees for their approach, we find the required parameters to be less intuitive, and harder to tune. We therefore will follow the ``runt pruning'' algorithm of Stuetzle \cite{stuetzle2003estimating}. Tree simplification begins with the introduction of a new parameter $m$, the minimum cluster size. Any branch of the cluster tree that represents a cluster of less than $m$ points is pruned out of the tree, and we record the $\varepsilon$ value of the split, defining it as the $\varepsilon$ value when the points of the pruned branch left the parent branch. That is, for each branch $C_i$ of the cluster tree there is an associated set of points $\{X_{i_1}, X_{i_2}, \ldots, X_{i_t}\} \subseteq X$, and for each point $X_{i_\ell}$ in $\{X_{i_1}, X_{i_2}, \ldots, X_{i_t}\}$ there exists a value $\varepsilon_\ell$ for which the point $X_{i_\ell}$ is deemed to have left the cluster (including because the cluster $C_i$ split, or because it was removed). The resulting pruned tree has many fewer branches, and hence fewer leaves. Furthermore, each remaining branch has a record of the points remaining in the branch at each $\varepsilon$ value for which the branch exists. The result is a far simpler tree of clusters, amenable to further analysis, but still containing rich information about the actual cluster structures at a point-wise level. Finally, it is often desirable to extract a flat clustering -- selecting a set of non-overlapping clusters from the tree. 
For hierarchical cluster schemes this often takes the form of choosing a ``cut level'' (in our case a choice of $\varepsilon$) and using the clustering at that level of the tree. When we wish to consider variable density clusters, the cut level varies through the tree, and thus we must choose a different approach to selecting a flat clustering. Notionally our goal is to determine the clusters that persist over the largest ranges of distance scales. To do this we require a measure of the persistence of a cluster. To make this concrete we refer again to Hartigan \cite{hartigan1987estimation}, and also to M\"uller and Sawitzki \cite{muller1991excess}, for the notion of excess of mass. Given a density function $f$, let $C$ be a subset of the domain of $f$, and define the \emph{excess of mass} of $C$ at a level $\lambda$ to be \[ E(C, \lambda) = \int_{C_\lambda} (f(x) - \lambda) dx , \] where $C_\lambda = \{x\in C \mid f(x) \geq \lambda\}$. Given a cluster tree for $f$, we can define the excess of mass of a cluster $C_i$ that exists at level $\lambda_{C_i}$ of the cluster tree as follows: Let $\lambda_{\text{min}}(C_i)$ be the minimal $\lambda$ value for the branch associated to $C_i$ in the cluster tree. Then define the excess of mass of $C_i$ to be \[ E(C_i) = \int_{C_i} (f(x) - \lambda_{\text{min}}(C_i)) dx . \] Next we follow \cite{campello2013density} in defining the \emph{relative excess of mass} for a cluster $C_i$. First we define $\lambda_{\text{max}}(C_i)$ to be the maximal lambda value for which $C_i$ exists as a distinct cluster (i.e. before it splits into sub-clusters in the cluster tree). Then the relative excess of mass is \[ E_R(C_i) = \int_{C_i} \left(\min(f(x), \lambda_{\text{max}}(C_i)) - \lambda_{\text{min}}(C_i)\right) dx . \] Alternatively, if $C_{i_1}, C_{i_2}, \ldots, C_{i_k}$ are the children of $C_i$ in the cluster tree then \[ E_R(C_i) = E(C_i) - \sum_{j=1}^k E(C_{i_j}) . \] That is, the relative excess of mass of a cluster is the total mass of the cluster \emph{not including} the mass of any descendant clusters in the cluster tree. We see this demonstrated in figure \ref{fig:excess_of_mass}, with the shaded areas indicating the relative excess of mass of each of the clusters from the cluster tree. \begin{figure} \caption{Relative excess of mass for the cluster tree from figure \ref{fig:cluster_tree}. In (a) we label the clusters, and (b) depicts the relative excess of mass as shaded areas, using a different colour for each cluster's relative excess of mass.} \label{fig:excess_of_mass} \end{figure} We can translate these notions to the empirical pruned tree described above. The pruned tree can be used to construct a discrete density function ranging over data points. In order to do this we require two things. Firstly, a density associated to each data point. This is simply the inverse of the $\varepsilon$ value at which the point left the tree (this was the data we recorded in addition to pruning branches of the tree). Secondly, we need an ordering on the data points such that the cluster tree of the density function is isomorphic to the pruned tree. This is simply a matter of sorting the points via a depth first search of the pruned tree (making use of the per point $\varepsilon$ values to order data points within a branch of the cluster tree). Explicitly we have an empirical density \[ \hat{f}(X_j) = \frac{1}{\varepsilon_{X_j}} \] where $\varepsilon_{X_j}$ is the $\varepsilon$ value at which we recorded the point $X_j$ leaving the tree.
Further we have a cluster tree associated with $\hat{f}$, isomorphic to the pruned tree, and we can define $\lambda_{\text{min}}(C_i)$ for any cluster $C_i$ in the pruned tree accordingly. This allows us to compute the excess of mass for $\hat{f}$ and any cluster $C_i$ from the pruned tree as \[ E(C_i) = \sum_{X_j \in C_i} (\hat{f}(X_j) - \lambda_{\text{min}}(C_i)) \] and consequently we have a persistence score provided by the relative excess of mass. That is, given a cluster $C_i$ having children $C_{i_1}, C_{i_2}, \ldots, C_{i_k}$, we define the persistence score \[ \sigma(C_i) = E(C_i) - \sum_{j=1}^k E(C_{i_j}) . \] The optimal flat clustering can then be described as the solution to a constrained optimization problem. If the set of clusters is $\{C_1,C_2, \ldots , C_n\}$ then we wish to select $I \subseteq \{1, 2, \ldots, n\}$ to maximize \[ \sum_{i\in I} \sigma(C_i) \] subject to the constraint that, for all $i, j \in I$ with $i\neq j$, we have \[ C_i\cap C_j = \emptyset . \] That is, we wish to maximize the total persistence score over chosen clusters, subject to the constraint that clusters must not overlap. This constrained optimization problem can be solved in a straightforward manner \cite{campello2015hierarchical}. \subsection{Computationally Motivated HDBSCAN*}\label{comp} HDBSCAN* can be thought of as a natural extension of the popular DBSCAN algorithm. We begin, following \cite{campello2013density} and \cite{campello2015hierarchical}, by describing a modified version of DBSCAN, denoted DBSCAN*, that will make the relationship clearer. This algorithm is an adaptation of standard DBSCAN which removes the notion of border points. Removing border points provides clarity and improves consistency with the statistical interpretation of clustering in section \ref{stats}. DBSCAN* takes two parameters, $\varepsilon$ and $k$, where $\varepsilon$ is a distance scale, and $k$ is a density threshold expressed in terms of a minimum number of points. Extending to HDBSCAN* can be conceptually considered as searching over all $\varepsilon$ values for DBSCAN* to find the clusters that persist for many values of $\varepsilon$. This selection of clusters, which persist over many distance scales, provides the benefits of not only eliminating the need to select the $\varepsilon$ parameter but also of dealing with the problem of variable density clustering, something which classical DBSCAN struggles with. We will describe DBSCAN* in the same terms used by Campello et al. \cite{campello2013density}. Again we will be working with a set $X = \{X_1, X_2, \ldots, X_N\}$ of data points in a metric space $(\mathcal{X}, d)$. A point $X_i$ is called a \emph{core point} with respect to $\varepsilon$ and $k$ if its $\varepsilon$-neighbourhood contains at least $k$ many points, i.e. if $|B(X_i,\varepsilon)\cap X| \geq k$. That is, the open ball of radius $\varepsilon$ contains at least $k$ many points from $X$. Two core points $X_i$ and $X_j$ are \emph{$\varepsilon$-reachable} with respect to $\varepsilon$ and $k$ if $X_i \in B(X_j,\varepsilon)$ and $X_j \in B(X_i,\varepsilon)$. That is, they are both core points with respect to $k$, and are both contained within each other's $\varepsilon$-neighbourhood. Two core points $X_i$ and $X_j$ are \emph{density-connected} with respect to $\varepsilon$ and $k$ if they are directly or transitively $\varepsilon$-reachable.
A \emph{cluster} $C$, with respect to $\varepsilon$ and $k$, is a non-empty maximal subset of $X$ such that every pair of points in $C$ is density-connected. This definition of cluster results in the DBSCAN* algorithm. To extend the algorithm to get HDBSCAN* we need to build a hierarchy of DBSCAN* clusterings for varying $\varepsilon$ values. The key to doing this is to redefine how we measure distance between points in $X$. For a given fixed value $k$, we define a new distance metric derived from the metric $d$, called the \emph{mutual reachability distance}, as follows. For any point $X_i$ we define the \emph{core-distance} of $X_i$, denoted $\kappa(X_i)$, to be the distance to the $k^\text{th}$ nearest neighbor of $X_i$; then, given points $X_i$ and $X_j$ we define \[ d_{\text{mreach}}(X_i, X_j) = \begin{cases} \max \{ \kappa(X_i), \kappa(X_j), d(X_i, X_j) \} & X_i \neq X_j\\ 0 & X_i = X_j \end{cases}. \] It is straightforward to show that this is indeed a metric on $X$. We can then apply standard Single Linkage Clustering \cite{sibson1973slink} to the discrete metric space $(X, d_{\text{mreach}})$ to obtain a hierarchical clustering of $X$. The clusters at level $\varepsilon$ of this hierarchical clustering are precisely the clusters obtained by DBSCAN* for the parameter choices $k$ and $\varepsilon$; in this sense we have derived a hierarchical DBSCAN* clustering. The goal of a density based algorithm, such as DBSCAN, is to find areas of the greatest density. To do this we need to shift from the notion of distance to a notion of density. An efficient estimate of the local density at a point can be provided by the reciprocal of the distance to its $k^\text{th}$ nearest neighbor. This is simply the inverse of the core-distance of that point. With this in mind we will work in terms of varying density instead of varying distance by constructing our cluster tree with respect to $\lambda = \frac{1}{\varepsilon}$ and considering the $\lambda$ value at which cluster splits occur. The next step in the algorithm is to produce a condensed tree that simplifies the hierarchy. For this we introduce a new parameter $m$ which will denote the minimum cluster size that will be accepted. We process the tree from the root downward. At each cluster split we consider the child clusters. Any child cluster containing fewer than $m$ points is considered a spurious split, and we denote those points as ``falling out of the parent cluster'' at the given $\lambda$ value. If only one child cluster contains more than $m$ points we consider it the continuation of the parent, persisting the parent cluster's label/identity to it. If more than a single child cluster contains more than $m$ points then we consider the split to be a ``true'' split. In this fashion we arrive at a tree with a much smaller number of clusters which ``shrink'' in size as they persist over increasing $\lambda$ values of the tree. One can consider this a form of smoothing of the tree. We can now define the stability of a cluster to be the sum of the ranges of $\lambda$ values for the points in the cluster. Explicitly, we define $\lambda_{\text{max}, C_i}(X_j)$ to be the $\lambda$ value at which the point $X_j$ falls out of the cluster $C_i$ (either as an individual point, or as a cluster split in the condensed tree). Similarly we define $\lambda_{\text{min}, C_i}(X_j)$ as the minimum lambda value for which $X_j$ is present in $C_i$.
Then the stability of the cluster $C_i$ is defined as \[ \sigma(C_i) = \sum_{X_j \in C_i} \left(\lambda_{\text{max}, C_i}(X_j) - \lambda_{\text{min}, C_i}(X_j)\right) . \] The optimal flat clustering can then be described as the solution to a constrained optimization problem. If the set of clusters is $\{C_1,C_2, \ldots , C_n\}$ then we wish to select $I \subseteq \{1, 2, \ldots, n\}$ to maximize \[ \sum_{i\in I} \sigma(C_i) \] subject to the constraint that, for all $i, j \in I$ with $i\neq j$, we have \[ C_i\cap C_j = \emptyset \] That is, we wish to maximize the total persistence score over chosen clusters, subject to the constraint that clusters must not overlap. We should note that while this explanation is the most compact of the three, it is also the least formal, and most heuristically motivated. \subsection{Topologically Motivated HDBSCAN*}\label{top} Topological data analysis \cite{carlsson2009topology}, \cite{wasserman2016topology}, \cite{zomorodian2012topological} is a suite of techniques bringing the powerful tool of topology to bear on data analysis problems. Recently the techniques of topological data analysis have been brought to bear on clustering problems \cite{carlsson2010multiparameter}, \cite{carlsson2013classifying}, \cite{chazal2014robust}, \cite{rinaldo2012stability}, \cite{rinaldo2010generalized}. A number of insights can be gained by looking at HDBSCAN* through the lens of topological data analysis. The primary technique employed in topological data analysis is persistent homology \cite{edelsbrunner2008persistent}, \cite{zomorodian2005computing}. Although our description of HDBSCAN* makes use of persistent homology techniques, the full details of that subject are beyond the scope of this paper. Please see \cite{carlsson2009topology} and \cite{ghrist2014elementary} for a good introduction to the topic. We will also make use of the language of sheaves. Again, details are beyond the scope of this paper; see Ghrist \cite{ghrist2014elementary}, Mac Lane and Moerdijk \cite{maclane2012sheaves} or Bredon \cite{bredon2012sheaf} for an introduction to sheaves. Persistent homology for analysis of ``point cloud data'' begins with the assumption we are presented with a set of data points $X$ living in a metric space $(\mathcal{X}, d)$. One can then construct a simplicial complex from this data. The standard approaches to this are the \emph{\v{C}ech complex} construction and the \emph{Vietoris-Rips complex} construction. Both approaches make use of a scale parameter $\varepsilon$ and open balls of radius $\varepsilon$ centered at a point $X_i$, denoted $B(X_i, \varepsilon) = \{x\in \mathcal{X} \mid d(X_i, x) < \varepsilon\}$. The \v{C}ech complex $C_\varepsilon$ is constructed by taking all elements of $X$ as 0-dimensional simplices, and adding an $n$-dimensional simplex spanning $X_{i_1}, X_{i_2}, \ldots X_{i_n}$ if the intersection $B(X_{i_1}, \varepsilon) \cap B(X_{i_2}, \varepsilon) \cap \cdots \cap B(X_{i_n}, \varepsilon)$ is non-empty. While the \v{C}ech complex has nice topological properties, it is often too computationally expensive to work with directly. The Vietoris-Rips complex $V_\varepsilon$ is constructed by, again, taking all elements of $X$ as 0-dimensional simplices, but adding an $n$-dimensional simplex spanning $X_{i_1}, X_{i_2}, \ldots X_{i_n}$ if all \emph{pairwise} intersections $B(X_{i_j}, \varepsilon) \cap B(X_{i_k}, \varepsilon)$ for $1\leq j \leq k \leq n$ are non-empty. 
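As a small illustration of the Vietoris-Rips construction just described, the sketch below builds only its 1-skeleton for a toy Euclidean point set: for Euclidean data, two open balls of radius $\varepsilon$ intersect exactly when their centres are closer than $2\varepsilon$, so the 1-skeleton is the graph with an edge between every such pair. The data and the value of $\varepsilon$ are illustrative assumptions, and scipy is used only for the pairwise distances.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy point cloud in the plane (illustrative only)
X = np.array([[0.0, 0.0], [0.3, 0.1], [0.2, 0.4], [2.0, 2.0], [2.2, 2.1]])
eps = 0.3

D = squareform(pdist(X))   # pairwise Euclidean distances
# Edge (i, j) belongs to the 1-skeleton of V_eps iff B(X_i, eps) and
# B(X_j, eps) intersect; for Euclidean data that means d(X_i, X_j) < 2*eps.
edges = [(i, j) for i in range(len(X)) for j in range(i + 1, len(X))
         if D[i, j] < 2 * eps]
print(edges)  # higher simplices of V_eps span any set of mutually adjacent points
\end{verbatim}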
A family of simplicial complexes that have a natural ``nested" structure, such as $\{V_{\epsilon}\}_{\epsilon \geq 0}$, are called filtered simplicial complexes. There is a natural homology theory on filtered simplicial complexes, called persistent homology \cite{edelsbrunner2008persistent} \cite{zomorodian2005computing}. If we consider the 0-th homology, which computes groups with rank equal to the number of connected components of a topological space, we see that the persistent homology of the Vietoris-Rips complex associated to a point cloud provides a computation very similar to that of single-linkage clustering. As in the case of Robust Single Linkage we seek to make such a computation more robust to noise. Ultimately this falls to the method of construction of a simplicial complex from the point cloud data, and the metric of the space in which it resides. The goal is to make use of information about density in this construction. Intuitively, for both Vietoris-Rips and \v{C}ech complexes, higher dimensional simplices occur in denser regions of the space. Thus, the natural approach is to start with the Vietoris-Rips complex $V_\varepsilon$ and then remove all simplices that are not faces of simplices of dimension $k$. This gives a two parameter complex $W_{\varepsilon, k}$ where $k$ provides a density threshold. Unfortunately this approach, much like the \v{C}ech complex, is too computationally expensive to construct for all but trivial cases. Other alternative, but similar, approaches are proposed in \cite{martinez2012density} and \cite{martinez2016properties}, however we will follow Lesnick and Wright \cite{lesnick2015interactive} for a computationally tractable density-sensitive simplicial complex construction. We begin with some notation. Given a simplicial complex $A$, define the $n$-skeleton of $A$, denoted $sk_n(A)$, to be the sub-complex of $A$ containing all simplices of $A$ of dimension less than or equal to $n$. Thus the 1-skeleton of a complex can be viewed as a graph, and the 0-skeleton as a discrete set of points. We define the Lesnick complex $L_{\varepsilon, k}$ as follows. Let $V_\varepsilon$ be the Vietoris-Rips complex associated to $X$ and define a graph $G_{\varepsilon, k}$ to be the subgraph of the 1-skeleton $sk_1(V_\varepsilon)$ induced by the vertices with degree at least $k$. The Lesnick complex $L_{\varepsilon, k}$ of $X$ is the maximal simplicial complex having 1-skeleton $G_{\varepsilon, k}$. For a fixed choice of $k$ we now have a filtered simplicial complex based on the family of complexes $\{L_{\varepsilon, k}\}_{\varepsilon > 0}$, and so we can apply standard persistent homology. To extend this topological approach to the full HDBSCAN* algorithm however we will take a slightly different approach to that described above. We draw upon the same fundamental ideas and intuitions, but use the language of sheaves. Intuitively a sheaf is a set that ``varies continuously'' over a topological space; thus each open set of the topological space has a set of \emph{sections} that lie above it. See \cite{ghrist2014elementary} or \cite{maclane2012sheaves} for further details. Consider the set of non-negative reals $\mathbb{R}_{\geq 0} = \{x\in\mathbb{R} \mid x \geq 0\}$ with the following topology: for each non-negative real $x$ define an open set $\mathring{x} = \{y\in\mathbb{R}_{\geq 0} \mid y \geq x\}$ and take the topology formed by all such sets. 
Now define a sheaf $\mathscr{F}$ over $\mathbb{R}_{\geq 0}$ by defining \begin{equation} \mathscr{F}(\mathring{x}) = \{ sk_0(C) : C \in \Pi_0(L_{x, k}) \} , \end{equation} where $\Pi_0(L_{x, k})$ is the set of connected components of the simplicial complex. That is, for an open set associated to the non-negative real $x$ we associate the set of 0-skeletons of connected components of the Lesnick complex $L_{x, k}$ associated to $X$. Since $\Pi_0$ is a functor, the maps from the filtered complex $L_{x, k} \hookrightarrow L_{y, k}$ for $x \leq y$ naturally induce restriction maps $res_{x,y}:\mathscr{F}(\mathring{x}) \to \mathscr{F}(\mathring{y})$. Verification that this is a sheaf is straightforward given the nested nature of the topology. The sheaf is the structure we will use to capture persistence information; it can be seen as similar to a tree of clusters. While the specific sheaf described here is equivalent to a tree, the sheaf formalism allows the description of similar structures that cannot be described by trees. We now wish to condense the sheaf with regard to a parameter $m$ denoting the minimum cluster size. To do this we first consider the subsheaf $\mathscr{G}$ defined by \begin{equation} \mathscr{G}(\mathring{x}) = \{ s \in \mathscr{F}(\mathring{x})\: \mid\: |s| \geq m \}. \end{equation} This definition creates a simpler object by removing any sections that contain fewer than $m$ data points. Next we need to identify clusters from the sheaf. Since clusters must persist over a range distance scales (the $x$ in $L_{x,k}$) we must identify sections from different open sets. This can be viewed as the construction of an equivalence relation across the set of all sections in the sheaf \[ S = \bigcup_{x\in \mathbb{R}_{\geq 0}} \mathscr{G}(\mathring{x}) . \] We define the equivalence relation as follows: given sections $s\in \mathscr{G}(\mathring{x})$ and $s'\in \mathscr{G}(\mathring{y})$ where (without loss of generality) $x \leq y$, we say $s$ is equivalent to $s'$ (denoted as $s\sim s'$) if and only if $res_{x,y}^{-1}(s') = \{s\}$ and for all $z$ with $x \leq z \leq y$ we have $|res_{z,y}^{-1}(s')| = 1$. That is, we consider sections at different distance scales equivalent if the section at the smaller distance scale is the \emph{only} section that restricts to the section at the larger distance scale, and this remains true for all intervening distance scales. A cluster can then be identified with an equivalence class of sections under this equivalence relation. Such clusters necessarily overlap on the data points which they cover. If we wish to obtain a flat clustering we need to be able to score and compare clusters. We can score clusters in terms of their persistence over distance scales. Let $[s]$ be a cluster, and let $s_t \in S$ be the element of $S$ in the equivalence class $[s]$ that lies in $\mathscr{G}(\mathring{t})$, or the empty set if there is no representative of the equivalence class $[s]$ in $\mathscr{G}(\mathring{t})$. Define a function $\hat{s}(t) = |s_t|$, and then define the persistence score $\sigma$ of $[s]$ to be \begin{equation} \sigma([s]) = \int_0^\infty \frac{\hat{s}(t)}{t^2} dt . \end{equation} The inclusion of the $\frac{1}{t^2}$ term provides the equivalent transformation to the shift from $\varepsilon$ to $\lambda = \frac{1}{\varepsilon}$ and ensures that this definition of $\sigma$ computes the same values as the definitions given in sections \ref{stats} and \ref{comp}. 
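As a concrete, if deliberately naive, illustration of the constructions in this subsection, the sketch below builds the 1-skeleton $G_{\varepsilon,k}$ of the Lesnick complex over a grid of scales, tracks the size of the connected component containing a fixed anchor point, and approximates $\sigma([s]) = \int_0^\infty \hat{s}(t)/t^2\, dt$ by a Riemann sum. Tracking a single anchor point is a crude stand-in for the equivalence classes of sections described above, and all parameter choices here (the anchor, $k$, the grid of scales) are purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform

def lesnick_one_skeleton(distances, epsilon, k):
    """1-skeleton G_{eps,k}: the subgraph of the Vietoris-Rips 1-skeleton
    induced by the vertices whose degree in that skeleton is at least k."""
    rips = (distances < 2 * epsilon) & ~np.eye(len(distances), dtype=bool)
    keep = rips.sum(axis=1) >= k
    induced = rips & keep[:, None] & keep[None, :]
    return induced, keep

def component_size_curve(points, anchor, k, scales):
    """A crude stand-in for hat{s}: the size of the connected component of
    G_{eps,k} containing the point `anchor` (0 once the anchor is filtered
    out as insufficiently dense)."""
    distances = squareform(pdist(points))
    sizes = []
    for eps in scales:
        induced, keep = lesnick_one_skeleton(distances, eps, k)
        if not keep[anchor]:
            sizes.append(0)
            continue
        _, labels = connected_components(csr_matrix(induced), directed=False)
        same = (labels == labels[anchor]) & keep
        sizes.append(int(same.sum()))
    return np.array(sizes)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(4, 0.2, (50, 2))])
scales = np.linspace(0.05, 1.0, 40)
s_hat = component_size_curve(X, anchor=0, k=5, scales=scales)
score = np.trapz(s_hat / scales ** 2, scales)  # Riemann-sum version of the integral
print(score)
\end{verbatim}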
To compare clusters for overlap it is necessary to be able to talk about the data points `in' a cluster. While the set of data points that make up a section is well defined, a cluster formed as an equivalence class of sections over different open sets has no natural assignment of data points to it. Instead we will define the points of a cluster $[s]$ to be the union of points in the sections within the equivalence class; thus we have a `points' function $p$ acting on clusters as \[ p([s]) = \bigcup_{t=0}^\infty s_t . \] The optimal flat clustering can then be described as the solution to a constrained optimization problem. If the set of clusters is $\{[s_1], [s_2], \ldots , [s_n]\}$ then we wish to select $I \subseteq \{1, 2, \ldots, n\}$ to maximize \[ \sum_{i\in I} \sigma([s_i]) \] subject to the constraint that, for all $i, j \in I$ with $i\neq j$, we have \[ p([s_i])\cap p([s_j]) = \emptyset \] That is, we wish to maximize the total persistence score over chosen clusters, subject to the constraint that clusters must not overlap. This is a complete description of the HDBSCAN* algorithm in topological terms. One of the major advantages of viewing HDBSCAN* through this lens is that it allows for generalisations that were not previously possible to describe. For example one could consider a new algorithm computing persistence across both $\varepsilon$ and $k$ simultaneously via techniques of multidimensional persistent homology, and making use of the more general structure of sheaves instead of trees. Such an approach provides a concrete realization of the techniques initially described in \cite{carlsson2010multiparameter}. This would provide a new clustering algorithm, \emph{Persistent Density Clustering \cite{healy2017pdc}}, that is nearly parameter free. \section{Accelerating HDBSCAN*}\label{accel} As described in \cite{campello2013density} and \cite{campello2015hierarchical} the HDBSCAN* algorithm on $N$ data points has $O(N^2)$ run-time. To be competitive with other high performance clustering algorithms a sub-quadratic run-time is required, with an $O(N\log N)$ run-time strongly preferred. The run-time analysis of HDBSCAN* in \cite{campello2015hierarchical} identified three steps having $O(N^2)$ time complexity: the computation of core-distances (and mutual reachability distances); the computation of a minimum spanning tree (MST) used for single linkage computation; and the tree condensing. We propose to improve each of these steps, and in so doing, approach an average case complexity that grows approximately proportionally to $N \log N$. One of the most common techniques for asymptotic performance improvement in the face of pairwise statistical problems (in our case pairwise distance computations) are space tree algorithms \cite{ram2009linear}. Indeed, these techniques are the basis for the impressive asymptotic performance of the DBSCAN and Mean Shift clustering algorithms, and are even used to accelerate some versions of K-Means. These techniques can also be applied to HDBSCAN* whenever the input data is provided as points in some metric space. The computation of core-distances is a query for the $k^\text{th}$ nearest neighbor of each point in the input data set. The use of space tree algorithms for efficient nearest neighbor computations is well established. 
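As a concrete illustration of the core-distance step, the sketch below phrases it as a $k$-nearest-neighbor query against scikit-learn's \texttt{KDTree} and then forms the dense matrix of mutual reachability distances $d_{\text{mreach}}(a,b) = \max\{\text{core}_k(a), \text{core}_k(b), d(a,b)\}$. This is an illustration only: forming the dense matrix is exactly the quadratic cost that the accelerated approach described below avoids, and the convention of counting a point as its own first neighbor is an assumption of this sketch.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import KDTree

def core_distances(points, k):
    """Distance from each point to its k-th nearest neighbor
    (counting the point itself as the first neighbor)."""
    tree = KDTree(points)
    distances, _ = tree.query(points, k=k)   # distances sorted ascending
    return distances[:, -1]

def mutual_reachability(points, k):
    """Dense matrix of mutual reachability distances
    d_mreach(a, b) = max(core(a), core(b), d(a, b)).
    Purely illustrative: the accelerated algorithm never builds this."""
    core = core_distances(points, k)
    pairwise = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    return np.maximum(pairwise, np.maximum(core[:, None], core[None, :]))

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
print(mutual_reachability(X, k=5).shape)   # (200, 200)
\end{verbatim}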
In particular kd-trees \cite{bentley1975multidimensional} in euclidean space, and ball-trees \cite{omohundro1989five} or cover trees \cite{beygelzimer2006cover} for generic metric spaces, provide fast asymptotic performance for nearest neighbor computation. Strict asymptotic run-time bounds for such algorithms are often complicated by properties of the data set. For example, cover tree nearest neighbor computation is dependent upon the expansion constant of the data, and the performance of kd-trees and ball-trees are similarly dependent upon the data distribution. However, an all points nearest neighbor query algorithm for cover trees with ``linear'' run-time complexity $O(c^{16} N)$, where $c$ is the expansion constant for the cover tree, is presented by Ram et al. \cite{ram2009linear}. Claims of empirical run-time complexity of approximately $O(N \log N)$ for kd-trees and ball-trees are also common. While explicitly stating a run-time complexity for the core-distance computation is difficult, we feel confident in stating that, except for carefully constructed pathological examples, we can achieve sub-quadratic complexity. With core-distance computation improved, the next challenge is the efficient computation of single linkage clustering using mutual reachability distance. In \cite{campello2015hierarchical}, Campello et al. use Prim's algorithm \cite{prim1957shortest} to compute a minimum spanning tree of the complete graph with edges weighted by the mutual reachability distance. Campello et al. then sort the edges, and use that data to construct the single linkage tree. Such an approach is similar to the SLINK algorithm \cite{mullner2011modern}, \cite{sibson1973slink} which essentially uses a modified version of Prim's algorithm (that does not explicitly compute an MST). For the purposes of computing a MST, Prim's is among the fastest available algorithms, however it is targeted toward graphs where the number of edges is some small multiple of the number of vertices, rather than complete graphs with $O(|V|^2)$ edges. In particular, if we have extra information about the vertices of the graph, other algorithms such as Bor\r{u}vka's algorithm \cite{boruuvka1926jistem} become more appealing. This is because if vertices are points in some metric space and edge weights are distances, Bor\r{u}vka's algorithm resembles a series of repeated all points nearest neighbor queries. In \cite{march2010fast} March et al. make use of this observation and describe the Dual-Tree Bor\r{u}vka algorithm for computing minimum spanning trees of points in a metric space. Given points $X$ in $(\mathcal{X}, d)$, they provide an algorithm to compute a minimum spanning tree of the weighted complete graph with vertices $X$ and edges $(X_i, X_j)$ with weight $d(X_i, X_j)$, where $X_i, X_j \in X$. The algorithm makes explicit use of space trees to provide impressive asymptotic performance. In particular, if cover trees are used, March et al. prove a run-time complexity of $O(\max\{c^6, c_p^2, c_l^2\} c^{10} N \log N \alpha(N))$, where $c$, $c_p$, and $c_l$ are data dependent constants and $\alpha$ is the inverse Ackermann function \cite{ackermann1928hilbertschen}. Here we provide a (minor) adaptation of the algorithm to compute a MST of the mutual reachability distances, resulting in a computation with sub-quadratic complexity. In describing the algorithm we follow the approach of Curtin et al. 
in \cite{curtin2013tree} where they provide a version of March's algorithm adapted to a generic space partitioning tree framework. We begin by introducing notation that allows for easier statements of the required algorithms. For our purposes, a \emph{space tree} on a data set $X \subset (\mathcal{X}, d)$ is a rooted tree with the following properties: \begin{itemize} \item Each node holds a number of points (possibly zero), has a single parent and has some number of children (possibly zero); \item each $X_i \in X$ is contained in at least one node of the tree; \item each node of the tree has an associated convex subset of $(\mathcal{X}, d)$ that contains all the points in the node, and the convex subsets associated with all of its children. \end{itemize} Notationally we will use a number of short-form conventions to make discussions of points, children, descendants, and distances between nodes more convenient. Again following Curtin et al., we will use the following notation: \begin{itemize} \item The set of child nodes of a node $\mathscr{N}_i$ will be denoted $\mathscr{C}(\mathscr{N}_i)$ or simply $\mathscr{C}_i$ if the context allows. \item The parent node of a node $\mathscr{N}_i$ will be denoted $\mathscr{U}(\mathscr{N}_i)$. \item The set of points held in a node $\mathscr{N}_i$ will be denoted $\mathscr{P}(\mathscr{N}_i)$ or simply $\mathscr{P}_i$ if the context allows. \item The convex subset of $(\mathcal{X}, d)$ associated to a node $\mathscr{N}_i$ will be denoted $\mathscr{S}(\mathscr{N}_i)$ or simply $\mathscr{S}_i$ if the context allows. \item The set of \emph{descendant nodes} of a node $\mathscr{N}_i$, denoted by $\mathscr{D}^n(\mathscr{N}_i)$ or $\mathscr{D}^n_i$, is the set of nodes $\mathscr{C}(\mathscr{N}_i)\cup\mathscr{C}(\mathscr{C}(\mathscr{N}_i))\cup \ldots$ . \item The set of \emph{descendant points} of a node $\mathscr{N}_i$, denoted $\mathscr{D}^p(\mathscr{N}_i)$ or $\mathscr{D}^p_i$, is the set of points $\mathscr{P}(\mathscr{N}_i)\cup\bigcup_{\mathscr{N}\in\mathscr{D}^n_i}\mathscr{P}(\mathscr{N})$, that is, the points held in $\mathscr{N}_i$ or in any of its descendant nodes. \item The \emph{minimum distance} between two nodes $\mathscr{N}_i$ and $\mathscr{N}_j$, denoted $d_{\text{min}}(\mathscr{N}_i, \mathscr{N}_j)$, is defined as $\min \{ d(p_i, p_j) \mid p_i \in \mathscr{D}_i^p, p_j \in \mathscr{D}^p_j\}$. \item The \emph{maximum child distance} of a node $\mathscr{N}_i$, denoted $\rho(\mathscr{N}_i)$, is the maximum distance from the centroid of $\mathscr{S}(\mathscr{N}_i)$ to any point in $\mathscr{P}(\mathscr{N}_i)$. \item The \emph{maximum descendant distance} of a node $\mathscr{N}_i$, denoted $\lambda(\mathscr{N}_i)$, is the maximum distance from the centroid of $\mathscr{S}(\mathscr{N}_i)$ to any \emph{descendant point} of $\mathscr{N}_i$. \end{itemize} In general, the minimum distance between nodes can be bounded below statically without having to compute all the point-to-point distances. For example, in kd-trees we have $d_{\text{min}}(\mathscr{N}_i, \mathscr{N}_j)$ bounded below by the minimum distance between $\mathscr{S}_i$ and $\mathscr{S}_j$, which can be computed at the time of tree construction without computing any point-to-point distances. Other types of space trees offer similar methods to bound node distances. In \cite{curtin2013tree} Curtin et al. provide a generic algorithm from which specific dual tree algorithms can be constructed. This provides a simple breakdown of a dual tree algorithm into core constituent parts, which the authors of this paper found particularly helpful in understanding March's algorithm.
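As noted above, for kd-trees the static lower bound on $d_{\text{min}}$ reduces to the distance between two axis-aligned bounding boxes; a minimal sketch of that bound follows (the function name and box representation are ours and are not tied to any particular kd-tree implementation).
\begin{verbatim}
import numpy as np

def box_min_distance(lower_a, upper_a, lower_b, upper_b):
    """Lower bound on d_min between two kd-tree nodes, computed only from
    their axis-aligned bounding boxes [lower_a, upper_a] and
    [lower_b, upper_b]; no point-to-point distance is evaluated."""
    # Per-axis gap between the boxes; zero where the intervals overlap.
    gap = np.maximum(0.0, np.maximum(lower_a - upper_b, lower_b - upper_a))
    return float(np.sqrt((gap ** 2).sum()))

# Two unit boxes separated by 1.0 along the first axis: the bound is 1.0.
print(box_min_distance(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                       np.array([2.0, 0.0]), np.array([3.0, 1.0])))
\end{verbatim}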
We therefore work within the same general framework here. Dual tree algorithms make use of two different space trees, a \emph{query tree} $\mathscr{T}_q$ and a \emph{reference tree} $\mathscr{T}_r$. Curtin et al. break dual tree algorithms into three components. The first component is a \emph{pruning dual tree traversal}. This is a method of traversing a query and reference tree pair, pruning branches along the way. At each stage of such a pruning traversal we apply two procedures: the first, called \textsc{Score}, determines whether a branch is to be pruned (and potentially prioritises child branches); the second, called \textsc{BaseCase}, performs some algorithm specific operation on the pair of nodes at that stage of the traversal. A simple approach to a dual tree traversal is a depth first traversal with no prioritisation of child nodes to explore. Algorithm \ref{alg:traversal} describes such an approach. In practice, one may want a more finely tailored traversal algorithm, with concomitant complexity of description, but for our explanatory purposes, this simple traversal is sufficient.
\begin{algorithm}
\caption{Depth First Dual Tree Traversal}\label{alg:traversal}
\begin{algorithmic}[0]
\Procedure{DepthFirstTraversal}{$\mathscr{N}_q, \mathscr{N}_r$}
\If{\textsc{Score}($\mathscr{N}_q, \mathscr{N}_r$) = $\infty$}
\State \Return \Comment{This pair of subtrees has been pruned}
\EndIf
\ForAll{$p_q \in \mathscr{P}_q, p_r \in \mathscr{P}_r$} \Comment{All point pairs held directly in this node pair}
\State \textsc{BaseCase}($p_q, p_r$)
\EndFor
\ForAll{$\mathscr{N}_{qc} \in \mathscr{C}_q, \mathscr{N}_{rc} \in \mathscr{C}_r$}
\State \textsc{DepthFirstTraversal}($\mathscr{N}_{qc}, \mathscr{N}_{rc}$)
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
Given a traversal algorithm, the specifics of March's Dual Tree Bor\r{u}vka algorithm now fall to the \textsc{BaseCase} and \textsc{Score} procedures. To explicate these we begin by describing Bor\r{u}vka's original algorithm, and then explain how we reconstruct it within a dual tree framework. The general idea for Bor\r{u}vka's algorithm (Algorithm \ref{alg:boruvka}) is to build a forest, adding minimum weight edges to connect trees in iterative rounds. Bor\r{u}vka's algorithm starts with a weighted graph $G$, and initializes a forest $T$ to have the vertices of $G$, and no edges. Each pass of Bor\r{u}vka's algorithm finds minimum weight edges that span distinct connected components of $T$, and then adds those edges to $T$. As the algorithm proceeds, $T$ has larger but fewer connected components. The algorithm terminates when the forest $T$ is a single connected component, and thus a tree.
\begin{algorithm}
\caption{Classical Bor\r{u}vka's algorithm}\label{alg:boruvka}
\begin{algorithmic}[0]
\Procedure{MST}{$G = (V, E)$}
\State $T \gets (V, \emptyset)$ \Comment{Initialize a graph $T$ with vertices from $G$ and no edges}
\While{$T$ has more than one connected component}
\ForAll{components $C$ of $T$}
\State $S \gets \emptyset$
\ForAll{vertices $v$ in $C$}
\State $D \gets \{a \in E \mid a\text{ meets }v\text{ and is not wholly contained in }C\}$ \Comment{Edges leaving $C$ at $v$}
\State $e \gets $ minimum weight edge in $D$
\State $S \gets S\cup\{e\}$
\EndFor
\State $e \gets $ minimum weight edge in $S$ \Comment{The lightest edge leaving $C$}
\State Add $e$ to the graph $T$
\EndFor
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
To convert Bor\r{u}vka's algorithm to a dual tree algorithm employing the spatial nature of the data, we make use of the space trees to find, for each point in the dataset, its nearest neighbor lying in a different component of the current forest.
We then compile this information together to update the forest, and then reapply the nearest neighbor search. Notationally we are building a forest $F$ with connected components $F_i$. At initialization $F$ has no edges, and there are $N$ connected components. At each pass of the algorithm we will add edges to $F$ and update the list of connected components accordingly. To keep track of state during processing, a number of associative arrays are required. First we require a mapping from points to the connected component of $F$ in which they currently reside. We denote this $\mathcal{F}$ and define $\mathcal{F}(p)$ to be the component $F_i$ which contains the point $p$. During the tree traversal we keep track of the nearest candidate point for each component with an associative array $\mathcal{N}$ such that $\mathcal{N}(F_i)$ is the candidate point (not in component $F_i$) nearest to component $F_i$ found so far. To keep track of which point in the component $F_i$ is closest to the candidate point we use an associative array $\mathcal{P}$ such that $\mathcal{P}(F_i)$ is the point in component $F_i$ nearest to $\mathcal{N}(F_i)$. Finally we keep track of the distance to a nearest neighbor for each component through an associative array $\mathcal{D}$ such that $\mathcal{D}(F_i)$ is the distance between $\mathcal{N}(F_i)$ and $\mathcal{P}(F_i)$. To perform passes of the algorithm we need to use a modified nearest neighbor approach that looks for the nearest neighbor in a different component. Since we are searching for the ``nearest neighbors'' of the reference points each time, the query tree and reference tree are the same. After such an all-points nearest neighbor style tree search we can collate the results found for $\mathcal{N}$, $\mathcal{P}$ and $\mathcal{D}$ and use that to update the forest, and the associative array $\mathcal{F}$. This allows us to reset $\mathcal{N}, \mathcal{P}$ and $\mathcal{D}$ and make another pass with the same nearest neighbor style search. Each pass reduces the number of connected components in $F$ until we have a minimal spanning tree. With this in mind, the \textsc{BaseCase} (algorithm \ref{alg:base}) needs to find points in different components that have a shorter distance separating them than the current value stored for the component under consideration. If such a pair is found we update $\mathcal{N}$, $\mathcal{P}$ and $\mathcal{D}$ accordingly.
\begin{algorithm}
\caption{Bor\r{u}vka's algorithm base case}\label{alg:base}
\begin{algorithmic}[0]
\Procedure{BaseCase}{$p_q, p_r$}
\If{$p_q = p_r$}
\State \Return \Comment{A point cannot supply an edge to itself}
\EndIf
\If{$\mathcal{F}(p_q) \neq \mathcal{F}(p_r)$ \textbf{and} $d(p_q, p_r) < \mathcal{D}(\mathcal{F}(p_q))$} \Comment{Candidate edge spanning two distinct components}
\State $\mathcal{D}(\mathcal{F}(p_q)) \gets d(p_q, p_r)$
\State $\mathcal{N}(\mathcal{F}(p_q)) \gets p_r$
\State $\mathcal{P}(\mathcal{F}(p_q)) \gets p_q$
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
The benefit of the tree based approach is that we are able to prune branches from our tree search which we know will not yield useful results. Since our queries are closely related to nearest neighbor queries we can make use of similar bounding approaches. The simplest such bound will prune the node pair $(\mathscr{N}_q, \mathscr{N}_r)$ if and only if the minimal distance between the nodes $d_{\text{min}}(\mathscr{N}_q, \mathscr{N}_r)$ is greater than the maximum of the nearest neighbor distances found so far for any point in $\mathscr{D}^p_q$.
That is, if the closest any point in the query node (or its descendants) can be to any point in the reference node (or its descendants) is greater than all the current query node nearest neighbors found, clearly we do not need to descend any further. In practice, with care and use of the triangle inequality, better bounds can be derived. We refer the reader to \cite{curtin2013tree} for the derivation, but note that we can define a bound \[ \begin{split} B(\mathscr{N}_q) = \min\Big\{ & \max\{ \max_{p\in\mathscr{P}_q} D_p, \max_{\mathscr{N}_c \in \mathscr{C}_q} B(\mathscr{N}_c) \},\\ & \min_{p\in \mathscr{P}_q} \big(D_p + \rho(\mathscr{N}_q) + \lambda(\mathscr{N}_q)\big),\\ & \min_{\mathscr{N}_c\in\mathscr{C}_q} \bigg( B(\mathscr{N}_c) + 2\big(\lambda(\mathscr{N}_q) - \lambda(\mathscr{N}_c)\big)\bigg),\\ & B(\mathscr{U}(\mathscr{N}_q)) \Big\}, \end{split} \] where $D_p$ is the distance to the nearest neighbor of $p$ found so far. Given this bound, if $d_{\text{min}}(\mathscr{N}_q, \mathscr{N}_r) \geq B(\mathscr{N}_q)$ then we can safely prune the pair $(\mathscr{N}_q, \mathscr{N}_r)$. In practice pruning will be done based on the pre-computed lower bound estimate for $d_{\text{min}}(\mathscr{N}_q, \mathscr{N}_r)$. Furthermore, as the bound is expressed recursively, we can cache previous computations and calculate $B(\mathscr{N}_q)$ efficiently. We can improve our pruning further by using component membership to prune: if all the descendant points in the query and reference nodes are in the same component then we do not need to descend and check any of those points. Again, this is a computation that can be done recursively and cached. With that in mind we can define a \textsc{Score} function (Algorithm \ref{alg:score}) that prunes away unnecessary branches, resulting in far fewer distance computations being required. \begin{algorithm} \caption{Bor\r{u}vka's algorithm scoring}\label{alg:score} \begin{algorithmic} \Procedure{Score}{$\mathscr{N}_q, \mathscr{N}_r$} \If{$d_{\text{min}}(\mathscr{N}_q, \mathscr{N}_r) < B(\mathscr{N}_q)$} \If{$\forall (p_q\in\mathscr{D}^p_q, p_r\in\mathscr{D}^p_r) : \mathcal{F}(p_q) = \mathcal{F}(p_r)$} \State \Return $\infty$ \EndIf \State \Return $d_{\text{min}}(\mathscr{N}_q, \mathscr{N}_r)$ \EndIf \State \Return $\infty$ \EndProcedure \end{algorithmic} \end{algorithm} Combining all these pieces together provides us with an algorithm to compute a minimum spanning tree of the distance weighted complete graph of points in a metric space. In practice, we wish to compute a MST using mutual reachability distance, and want to compute as few distances as possible. We can do this by using precomputed core-distances as a filter on the pairs of points passed to \textsc{BaseCase}, and only compute distances (and mutual reachability distances) for a subset of points. Furthermore, by prioritising the order in which we perform our dual tree traversal we can construct tighter bounds $B$ sooner, and thus perform more tree pruning, resulting in even fewer distances being computed. We can express this in a more detailed tree traversal algorithm. The traversal algorithm is assumed to have access to the associative arrays $\mathcal{F}$ and $\mathcal{D}$. We also introduce a new associative array $\mathcal{C}$ such that $\mathcal{C}(p)$ is the core-distance (i.e. distance to the $k^{\text{th}}$ nearest neighbor) for the point $p$. 
We can then expand out the loop over pairs in Algorithm \ref{alg:traversal} into a pair of nested for loops over points in the query node, and points in the reference node. This allows us to check if the core-distance of a point exceeds the current best distance for the component the point lies in. If the core-distance is larger, then the mutual reachability distance is necessarily also larger, and hence this point can be eliminated from consideration.
\begin{algorithm}[!ht]
\caption{Tailored dual tree traversal}\label{alg:traversal2}
\begin{algorithmic}
\Procedure{DualTreeTraversal}{$\mathscr{N}_q, \mathscr{N}_r$}
\If{\textsc{Score}($\mathscr{N}_q, \mathscr{N}_r$) = $\infty$}
\State \Return
\EndIf
\ForAll{$p_q \in \mathscr{P}_q$}
\If{$\mathcal{C}(p_q) < \mathcal{D}(\mathcal{F}(p_q))$} \Comment{Skip points whose core-distance already exceeds the best known distance}
\ForAll{$p_r \in \mathscr{P}_r$}
\If{$\mathcal{C}(p_r) < \mathcal{D}(\mathcal{F}(p_q))$}
\State \textsc{BaseCase}($p_q, p_r$)
\EndIf
\EndFor
\EndIf
\EndFor
\State $L \gets \textsc{Sort}([(\mathscr{N}_{qc}, \mathscr{N}_{rc}) \mid \mathscr{N}_{qc} \in \mathscr{C}_q, \mathscr{N}_{rc} \in \mathscr{C}_r],\,\, d_\text{min}(\cdot, \cdot))$ \Comment{Visit closer node pairs first}
\For{$(\mathscr{N}_{qc}, \mathscr{N}_{rc})$ \textbf{in} $L$}
\State \textsc{DualTreeTraversal}($\mathscr{N}_{qc}, \mathscr{N}_{rc}$)
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
We can also prioritise the tree descent based on, for example, the distance between nodes, descending to nodes that are closer together first such that bounds get updated earlier. Algorithm \ref{alg:traversal2} gives an example of such a tailored algorithm that takes advantage of core-distances and prioritises descent down the tree. In practice the exact traversal and descent strategy can be more carefully tuned according to the exact space tree used. It only remains to adapt the \textsc{BaseCase} procedure presented in Algorithm \ref{alg:base} to use core-distances to compute $d_{\text{mreach}}$ and use it instead of $d$ (as seen in Algorithm \ref{alg:base2}), and we have a Dual Tree Bor\r{u}vka algorithm adapted to perform HDBSCAN*. Such an algorithm allows us to compute a minimum spanning tree of the mutual reachability distance weighted complete graph without having to compute all pairwise distances. This results in an asymptotically sub-quadratic MST computation. While the data dependent nature of complexity analysis for tree based algorithms makes it difficult to place an explicit bound on the run-time complexity, analyses such as that of March et al. \cite{march2010fast} suggest we can certainly approach $O(N \log N)$ asymptotic performance for many data sets.
\begin{algorithm}[!ht]
\caption{HDBSCAN* tailored Bor\r{u}vka's algorithm base case}\label{alg:base2}
\begin{algorithmic}[0]
\Procedure{BaseCase}{$p_q, p_r$}
\If{$p_q = p_r$}
\State \Return
\EndIf
\If{$\mathcal{F}(p_q) \neq \mathcal{F}(p_r)$}
\State $\text{dist} \gets \max\{d(p_q, p_r), \mathcal{C}(p_q), \mathcal{C}(p_r)\}$ \Comment{Mutual reachability distance $d_{\text{mreach}}(p_q, p_r)$}
\If{$\text{dist} < \mathcal{D}(\mathcal{F}(p_q))$}
\State $\mathcal{D}(\mathcal{F}(p_q)) \gets \text{dist}$
\State $\mathcal{N}(\mathcal{F}(p_q)) \gets p_r$
\State $\mathcal{P}(\mathcal{F}(p_q)) \gets p_q$
\EndIf
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
One notable feature of mutual reachability distance is that it can result in many equal distances. We can exploit this fact within our modified Dual Tree Bor\r{u}vka algorithm. After a tree traversal the algorithm updates the forest $F$ and then resets the bounds $B$.
Since there are many equal distances we can run the tree search again with the same bounds $B$ and find new potential edges to add to $F$. Such a run is extremely efficient as it has very tight bounds and thus rapidly prunes branches. We can repeat these runs until no new edges are found, and only then reset the values for $B$, forcing the algorithm to make fast progress in the face of ties and near-ties. Unfortunately this breaks the guarantee that the algorithm will always find a true minimal spanning tree. However, in practice the result is a close approximation of a minimal spanning tree. We trade off a small loss in accuracy for a significantly faster algorithm. Furthermore, the minor differences in the MST get smoothed out in the tree condensing and flat cluster extraction process, resulting in very small deviations in the final cluster results. This trade-off of performance for accuracy is particularly relevant for higher dimensional data sets when using kd-trees or ball-trees. Given a minimal spanning tree it is possible to generate a single linkage cluster tree. This proceeds in two stages. The first stage is to sort the edges of the MST by weight (in this case, the mutual reachability distance between the pair of points the edge spans). Such an operation can be performed in $O(N \log N)$ run-time. In the second stage we process the edges in order using a union-find data structure \cite{tarjan1975efficiency}. This allows us to build the single linkage tree, providing the cluster merges and weights at which they occur, by progressively merging points and clusters by increasing weight. Since an MST has $O(N)$ edges we can complete this in $O(N\alpha(N))$ time (using union by rank and path compression in our union-find algorithm). The next step is to process the single linkage tree into a condensed tree. We can do this in a single pass working from the root in a breadth first traversal, building an associative array mapping single linkage cluster identifiers to new condensed tree cluster identifiers. At each node we need only check on the sizes of the child nodes, update the associative array accordingly, and record any data points falling out of the cluster. Since the single linkage tree has $O(N)$ nodes, the condensed tree processing can be completed in $O(N \log N)$ run-time. In summary, the overall asymptotic run-time performance of the algorithm is bounded by the core-distance and minimum spanning tree computation stages, both of which now have sub-quadratic performance, and can be expected to approach $O(N \log N)$ performance for many data sets. This represents a significant improvement in potential scaling performance for HDBSCAN* clustering. To test these algorithmic improvements we have implemented our accelerated HDBSCAN* algorithm in Python \cite{McInnes2017}. Our Python implementation builds from, and conforms to, the scikit-learn \cite{scikit-learn} software, making use of the kd-tree and ball tree data structures provided. Making use of scikit-learn has enabled our implementation to support a wide variety of distance metrics, as well as the ability to fall back to fast $O(N^2)$ algorithms when provided with a (sparse) distance matrix rather than vector space data. In the following section we will make use of our accelerated HDBSCAN* implementation to compare the scaling of run-time performance with data set size against classic HDBSCAN* and other popular clustering algorithms.
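The second stage just described, turning the sorted MST edges into single linkage merges with a union-find structure, is illustrated by the following sketch. It uses union by rank together with path halving (a form of path compression), which gives the $O(N\alpha(N))$ behaviour quoted above; the flat list of merge records is a simplification of a full dendrogram encoding, and the names used are ours.
\begin{verbatim}
class UnionFind:
    """Union-find with union by rank and path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return rx
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return rx

def single_linkage_merges(mst_edges, n_points):
    """Process MST edges in order of increasing weight and record the
    sequence of cluster merges (root_a, root_b, weight), mirroring the
    construction of the single linkage tree."""
    uf = UnionFind(n_points)
    merges = []
    for a, b, weight in sorted(mst_edges, key=lambda e: e[2]):
        ra, rb = uf.find(a), uf.find(b)
        if ra != rb:
            merges.append((ra, rb, weight))
            uf.union(ra, rb)
    return merges

# Tiny example: a path graph as the MST of four points.
print(single_linkage_merges([(0, 1, 0.5), (1, 2, 2.0), (2, 3, 0.7)], 4))
\end{verbatim}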
\section{Performance Comparisons}\label{perf} In this section we will analyse the performance of our accelerated HDBSCAN*. For the purpose of this paper we will not be considering the quality of clustering results as that has been adequately covered in \cite{campello2013density} and \cite{campello2015hierarchical}. Instead we will demonstrate the computational competitiveness of our accelerated HDBSCAN* against other existing high performance clustering algorithms. We are mindful of the difficulties of run-time analyses \cite{kriegel2016black}. We therefore focus on scaling trends with data set size (and dimension), and speak to the comparability of algorithms rather than making claims of strict superiority. All our run-time benchmarking was performed on a Macbook Pro with a 3.1 GHz Intel Core i7 processor and 8GB of RAM. Furthermore the benchmarking was performed in Jupyter notebooks which we have made available at \url{https://github.com/lmcinnes/hdbscan_paper}. We encourage others to verify and extend these benchmarks. \subsection{Comparisons with HDBSCAN* reference implementation} As a baseline we compare the performance of our Python HDBSCAN* implementation against the reference implementation in Java from the original authors. Given two very different implementations in different languages our focus is on demonstrating that overall scalability and asymptotic performance can be improved through the spatial indexing acceleration techniques described. We compare the performance on data sets of varying size for both 2-dimensional and 50-dimensional data. The results can be seen in Figure \ref{fig:reference_impl_compare}. The left hand column demonstrates raw performance times for both 2-dimensional and 50-dimensional data, while the right hand column provides a log-log plot that makes clear the different asymptotic performance of the algorithms. The accelerated Python version shows significantly improved performance, both in absolute terms, and asymptotically (having significantly lower linear slope in the log-log plot), clearly demonstrating sub $O(N^2)$ performance. Furthermore, in both the 2-dimensional and 50-dimensional cases, the accelerated Python version demonstrates roughly two orders of magnitude better absolute run-time performance on data set sizes of 200,000 points. \begin{figure} \caption{We compare the reference implementation in Java with the accelerated version implemented in Python. \protect\footnotemark} \label{fig:reference_impl_compare} \end{figure} \footnotetext{See \url{https://github.com/lmcinnes/hdbscan_paper/blob/master/Performance\%20data\%20generation.ipynb} for the code used to generate this plot} \subsection{Comparisons among clustering algorithms} In order to gain an overview of the performance landscape of clustering algorithms in general, we compare a number of the more popular clustering algorithms found in scikit-learn\footnote{Benchmarking was performed using scikit-learn v0.18.1.} \cite{scikit-learn} \cite{scikit-learn-github}. Since we recognise that implementation can have a significant effect on run-time performance, our goal here is merely to provide a sample of the performance space rather than direct comparisons to specific algorithms. We chose scikit-learn as it provides a number of techniques that all rest on a common implementation foundation (including our scikit-learn compatible HDBSCAN* implementation). 
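The timing harness underlying such comparisons is conceptually simple; the sketch below shows the kind of loop involved, using \texttt{make\_blobs} data, a few representative estimators, and wall-clock timing. The linked notebooks remain the authoritative record of the actual benchmarks; the data set sizes, parameters, and estimator choices here are illustrative only, and \texttt{hdbscan} refers to the Python implementation cited above.
\begin{verbatim}
import time
import numpy as np
import hdbscan                                  # accelerated HDBSCAN* implementation
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs

def time_clusterer(build, sizes, dim=2, repeats=3, seed=0):
    """Average wall-clock fit time of a clustering estimator over data sets
    of increasing size, using Gaussian blobs as the test distribution."""
    rng = np.random.RandomState(seed)
    results = []
    for n in sizes:
        X, _ = make_blobs(n_samples=n, n_features=dim, centers=10,
                          random_state=rng)
        times = []
        for _ in range(repeats):
            model = build()
            start = time.perf_counter()
            model.fit(X)
            times.append(time.perf_counter() - start)
        results.append((n, float(np.mean(times))))
    return results

sizes = [2000, 4000, 8000, 16000]
print(time_clusterer(lambda: hdbscan.HDBSCAN(min_cluster_size=15), sizes))
print(time_clusterer(lambda: DBSCAN(eps=0.3), sizes))
print(time_clusterer(lambda: KMeans(n_clusters=10), sizes))
\end{verbatim}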
For the initial comparison we consider the following algorithms as implemented in scikit-learn: Affinity Propagation \cite{frey2007clustering}, Birch \cite{zhang1996birch}, Complete Linkage \cite{defays1977efficient}, DBSCAN \cite{ester1996dbscan}, KMeans \cite{macqueen1967kmeans}, Mean Shift\cite{fukunaga1975meanshift}, Spectral Clustering \cite{ng2001spectral}, and Ward Clustering \cite{ward1963hierarchical}. We compare these with our HDBSCAN* implementation. Since this is a broad comparison of overall performance characteristics, each algorithm will be initialized with default scikit-learn parameters. In the next section, we do a more detailed comparison; carefully considering the impact of clustering algorithm parameters on performance. \begin{figure} \caption{Comparison of scaling performance for scikit-learn implementations of a number of different clustering algorithms. Vertical bars present the range of run-times obtained over several runs at a given data set size.\protect\footnotemark} \label{fig:all_clustering_compare} \end{figure} \footnotetext{See \url{https://github.com/lmcinnes/hdbscan_paper/blob/master/Perfomance\%20comparisons\%20among\%20clustering\%20algorithms.ipynb} for the code used to generate this plot} As demonstrated in Figure \ref{fig:all_clustering_compare}, there are three classes of implementation. The first is Affinity Propagation, Spectral Clustering, and Mean Shift, which all had poor performance beyond a few thousand data points. Some of this is undoubtedly implementation specific (particularly in the case of Spectral Clustering and Mean Shift). The next class of implementations are Ward, Complete Linkage and Birch, which performed better, but still scaled poorly for larger data set sizes. Finally, there was the group of DBSCAN, K-Means and HDBSCAN*, which are difficult to tell apart from one another in Figure \ref{fig:all_clustering_compare}. \begin{figure} \caption{Comparison of scaling performance for scikit-learn implementations of a number of different clustering algorithms plotted on a log-log scale to demonstrate asymptotic performance more clearly.\protect\footnotemark} \label{fig:log_log_all_clustering_compare} \end{figure} \footnotetext{See \url{https://github.com/lmcinnes/hdbscan_paper/blob/master/Perfomance\%20comparisons\%20among\%20clustering\%20algorithms.ipynb} for the code used to generate this plot} If we consider a log-log plot of the same data (Figure \ref{fig:log_log_all_clustering_compare}) in order to better see and understand the asymptotic scaling, we see algorithms ranging from K-Means impressive approximately $O(N)$ performance, through to the traditional $O(N^2)$ algorithms. DBSCAN and HDBSCAN* demonstrate similar asymptotics to each other, and are the closest in performance to K-Means. Also worth noting is that Mean Shift, while having poorer performance in general, has similar asymptotic performance to DBSCAN and HDBSCAN*. K-Means, while being the fastest and most scalable algorithm (Figures \ref{fig:all_clustering_compare} and \ref{fig:log_log_all_clustering_compare}) explicitly fails to meet our desiderata. Although K-Means has only a single parameter, the selection of that parameter is difficult. K-Means also has implicit apriori assumptions about the data distribution -- specifically that clusters are Gaussian. Finally, K-Means is explicitly a partitioning algorithm and does not cope well with noise or outliers. This leaves DBSCAN as the main competitor to our accelerated HDBSCAN*. 
We therefore seek a more detailed comparison of performance between DBSCAN and HDBSCAN*, specifically considering how parameter selection can affect performance. \subsection{Comparisons with DBSCAN} The difficulty with DBSCAN run-time comparisons using default parameters is that very small values of $\varepsilon$ will return few or no core points. This results in a very fast run with virtually all the data being relegated to background noise. Conversely, for large values of $\varepsilon$ DBSCAN will have very poor performance. Our desire is to not misrepresent DBSCAN's run-times for real world use cases. To circumvent this problem we will perform a search over the parameter space of DBSCAN in order to find the parameters which best match our HDBSCAN* results on a particular data set. This is reasonable because, as described in section \ref{comp}, HDBSCAN* can be viewed as a natural extension to DBSCAN. Once suitable parameters have been discovered we will benchmark the run-time of DBSCAN using those specific parameters against our HDBSCAN* run-time. Of course, in practice a user may not, apriori, know the optimal parameter values for DBSCAN; that issue is not addressed in this experiment. As is the case for all tree based algorithms, run-time and run-time complexity are data dependent. As indicated in \cite{kriegel2016black} this raises significant difficulties when benchmarking algorithms or implementations. Our interest is in demonstrating the comparability of the scaling performance of these algorithms. Under the assumption that both algorithms are tree based, they should have similar performance changes under different data distributions. As such, we will examine the run-time behaviour of both algorithms with respect to a fairly simple data set. One could extend this experimental framework to more complex data sets including those containing background noise. For this simple scaling experiment our data will consist of mixtures of Gaussian distributions laid down within a fixed diameter hypercube. We use variable numbers of constant variance Gaussian balls for simplicity and to not unfairly penalize DBSCAN. DBSCAN, as has been previously mentioned, does not support variable density clusters and thus could not match the output of HDBSCAN* in such cases. We vary dimension, number of clusters and number of data points to determine their effect on run-time. Although we are building a generative model on which to compare the performance of DBSCAN and HDBSCAN*, it should be noted that we are comparing the clustering results of the algorithms directly against each other and not against the underlying generative model. This is intentional, since for any given instantiation of our generative model the generative model is not necessarily the most likely model (see \cite{Hennig2015clusters}). We avoid this issue entirely by ignoring the generative model used to create the data for these experiments. For the purposes of this experiment we chose to use scikit-learn's implementation of DBSCAN\footnote{Benchmarks were run using scikit-learn v0.18.1}. We did this for two reasons. First, scikit-learn's DBSCAN implementation is among the fastest of available DBSCAN implementations (see \cite{kriegel2016black} for DBSCAN implementation comparisons). Second, our Python HDBSCAN* implementation was built using scikit-learn\footnote{Our HDBSCAN* implementation has a similar level of genericity to DBSCAN, supporting the same distance metrics etc.}. 
This means that both the DBSCAN and HDBSCAN* implementations will be using the same underlying library implementations and, in particular, the same implementation of kd-trees, which accounts for a significant part of any performance gains. A common implementation base aids in extending results from implementations to algorithms. \begin{figure} \caption{Comparison of scaling performance for scikit-learn's implementation of DBSCAN and our accelerated HDBSCAN*. Axes are on a $\log_{10}$ scale. Each individual plot provides a log-log plot of run-time against data set size, with individual plots for each combination of data set dimension and number of clusters. For each parameter combination ten random data sets were generated in order to assess the variation due to data distribution. The plot shows that accelerated HDBSCAN* and DBSCAN exhibit comparable performance. \protect\footnotemark} \label{fig:dbscanVShdbscan} \end{figure} \footnotetext{A more detailed supplemental notebook can be found at \url{https://github.com/lmcinnes/hdbscan_paper/blob/master/Benchmark_vs_DBSCAN.ipynb} } In order to find the DBSCAN parameters of best fit to our HDBSCAN* clustering, we make use of the Gaussian process optimization framework within scikit-optimize \cite{scikit-optimize-github}. We treat the background noise identified by both DBSCAN and HDBSCAN* as a single extra ``cluster'' which yields a partition, allowing us to use the adjusted Rand index, as proposed by \cite{Hubert1985partitions}, to compute a similarity between the partitionings generated by each algorithm. We then perform Gaussian process optimization to find the $\varepsilon$ and $k$ for DBSCAN which optimize this partition similarity score. Due to the expense of this parameter search this optimization was distributed across multiple nodes of a large memory cluster\footnote{The results of this optimization can be found at \url{https://github.com/lmcinnes/hdbscan_paper/blob/master/optimizationResults.csv}. Due to the fact that we only care about relative timings between DBSCAN and HDBSCAN* we omit the exact specifications of this cluster.}. The run-time comparison can be found in Figure \ref{fig:dbscanVShdbscan}. Each individual figure provides a log-log plot of run-time against data set size, with individual figures for each combination of data set dimension and number of clusters. It is worth noting that, particularly in the two dimensional case, due to crowding, the generative model can result in overlapping Gaussians, and as a result HDBSCAN* produces a variable density clustering different from the constant density generative model. DBSCAN can have difficulty reproducing such a clustering. This leaves some open questions about the complete accuracy of the 2-dimensional run-times. However, the optimization process still often chose reasonable $\varepsilon$ parameters, so we feel that it is still representative of the broad expected performance for DBSCAN. Figure \ref{fig:dbscanVShdbscan} clearly indicates that our accelerated algorithm has comparable asymptotic performance to DBSCAN. Furthermore, our implementation has comparable absolute performance. This is a significant achievement considering that HDBSCAN* can be thought of as computing DBSCAN for all values of $\varepsilon$. In fact, a single HDBSCAN* run allows a user to easily extract the DBSCAN clustering for any given $\varepsilon$.
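The parameter-matching step amounts to scoring candidate DBSCAN parameters by how well the induced partition (noise treated as one extra cluster) agrees with the HDBSCAN* partition under the adjusted Rand index. The sketch below substitutes a plain grid search for the Gaussian process optimizer actually used (scikit-optimize), and its data set and parameter ranges are illustrative only.
\begin{verbatim}
import numpy as np
import hdbscan
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=4000, centers=6, cluster_std=0.5, random_state=0)
reference = hdbscan.HDBSCAN(min_cluster_size=25).fit_predict(X)

def match_score(eps, min_samples):
    """Adjusted Rand index between the DBSCAN partition and the HDBSCAN*
    partition; noise points (label -1) simply act as one extra cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    return adjusted_rand_score(reference, labels)

# Simple grid search over (eps, min_samples); the Gaussian process optimizer
# from scikit-optimize plays this role in the experiments reported here.
grid = [(eps, k) for eps in np.linspace(0.1, 1.5, 15) for k in (5, 10, 20)]
best = max(grid, key=lambda params: match_score(*params))
print(best, match_score(*best))
\end{verbatim}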
More importantly, parameter selection and variable density clusters are DBSCAN's challenges; our accelerated HDBSCAN* algorithm has overcome both of these challenges without sacrificing performance. \section{Future work} A number of avenues for significant future work exist. First, there are several ways that our current Python implementation could be improved. The effects of approximate nearest neighbor search via spill trees \cite{liu2004investigation}, bounding adjustments \cite{curtin2015faster}, or RP-trees with local neighborhood exploration \cite{tang2016visualizing}, both on core-distance computation and within March's algorithm, remain unexplored. Approximate nearest neighbor computations may offer significant performance improvements for a small trade-off in the accuracy of results. Secondly, since there is no cover tree implementation for scikit-learn, our Python implementation does not support cover trees. Cover trees offer better scaling with ambient dimension (cover trees scale according to the expansion constant of the data, related to its intrinsic dimension), and support arbitrary distance metrics. A high performance cover tree implementation may provide significant benefits for our Python implementation of HDBSCAN*. A significant weakness of our accelerated HDBSCAN* algorithm as described is that it is inherently serial. The inability to parallelise the algorithm is an obstacle for its use on large distributed data sets. We believe that partitioning the space via spill trees \cite{liu2004investigation}, building MSTs on the partitioned data in parallel, and then using the techniques of Karger, Klein and Tarjan \cite{karger1995randomized} to reconcile the overlapping trees may result in such a parallel algorithm. This is a topic of continued research. Finally, the topological presentation of HDBSCAN* in section \ref{top} provides the opportunity to use multi-dimensional persistent homology to eliminate the parameter $k$. In such an approach no condensed tree interpretation is possible; instead the relevant structure is a sheaf over a partially ordered set (with the supremum topology). The resulting algorithm, \emph{Persistent Density Clustering}, is the subject of a forthcoming paper \cite{healy2017pdc}. \section{Conclusions} The HDBSCAN* clustering algorithm lies at the confluence of several threads of research from diverse fields. As a density based algorithm with a small number of intuitive parameters and few assumptions about data distribution, it is ideally suited to exploratory data analysis. In this paper we have described an accelerated HDBSCAN* algorithm that can provide comparable performance to the popular DBSCAN clustering algorithm. Since it has more intuitive parameters and can find variable density clusters, HDBSCAN* is clearly superior to DBSCAN from a qualitative clustering perspective. As the improvements of our accelerated HDBSCAN* make its computational scalability comparable to that of DBSCAN, HDBSCAN* should be the default choice for clustering. \end{document}
\begin{document} \title*{Null recurrence and transience of random difference equations in the contractive case} \titlerunning{Null recurrence and transience of RDE in the contractive case} \author{Gerold Alsmeyer, Dariusz Buraczewski and Alexander Iksanov} \institute{Gerold Alsmeyer \at Institute of Mathematical Stochastics, Department of Mathematics and Computer Science, University of M\"unster, Einsteinstrasse 62, D-48149 M\"unster, Germany.\at \email{[email protected]}\\ Dariusz Buraczewski \at Institute of Mathematics, University of Wroclaw, pl. Grunwaldzki 2/4, 50-384 Wroclaw, Poland.\at \email{[email protected]}\\ Alexander Iksanov \at Faculty of Computer Science and Cybernetics, Taras Shevchenko National University of Kyiv, 01601 Kyiv, Ukraine and Institute of Mathematics, University of Wroclaw, pl. Grunwaldzki 2/4, 50-384 Wroclaw, Poland. \at \email{[email protected]}} \maketitle \abstract{Given a sequence $(M_{k}, Q_{k})_{k\ge 1}$ of independent, identically distributed ran\-dom vectors with nonnegative components, we consider the recursive Markov chain $(X_{n})_{n\ge 0}$, defined by the random difference equation $X_{n}=M_{n}X_{n-1}+Q_{n}$ for $n\ge 1$, where $X_{0}$ is independent of $(M_{k}, Q_{k})_{k\ge 1}$. Criteria for the null recurrence/transience are provided in the situation where $(X_{n})_{n\ge 0}$ is contractive in the sense that $M_{1}\cdot\ldots\cdot M_{n}\to 0$ a.s., yet occasional large values of the $Q_{n}$ overcompensate the contractive behavior so that positive recurrence fails to hold. We also investigate the attractor set of $(X_{n})_{n\ge 0}$ under the sole assumption that this chain is locally contractive and recurrent.} {\noindent \textbf{AMS 2000 subject classifications:} 60J10; 60F15\ } {\noindent \textbf{Keywords:} attractor set, null recurrence, perpetuity, random difference equation, transience} \section{Introduction}\label{sec:intro} Let $(M_{n}, Q_{n})_{n\ge 1}$ be a sequence of independent, identically distributed (iid) $\mathbb{R}_+^{2}$-valued random vectors with common law $\mu$ and generic copy $(M, Q)$, where $\mathbb{R}_+:=[0,\infty)$. Further, let $X_{0}$ be a nonnegative random variable which is independent of $(M_{n},Q_{n})_{n\ge 1}$. Then the sequence $(X_{n})_{n\ge 0}$, recursively defined by the random difference equation (RDE) \begin{equation}\label{chain} X_{n}\ :=\ M_{n}X_{n-1}+Q_{n},\quad n\ge 1, \end{equation} forms a temporally homogeneous Markov chain with transition kernel $P$ given by $$ Pf(x)\ =\ \int f(mx+q)\ {\rm d}\mu(m,q) $$ for bounded measurable functions $f:\mathbb{R}\to\mathbb{R}$. The operator $P$ is Feller because it maps bounded continuous $f$ to functions of the same type. To underline the role of the starting point we occasionally write $X_{n}^{x}$ when $X_{0} = x$ a.s. Since $M$, $Q$ and $X_{0}$ are nonnegative, $(X_{n})_{n\ge 0}$ has state space $\mathbb{R}_{+}$. The sequence $(X_{n})_{n\ge 0}$ may also be viewed as a \emph{forward iterated function system}, viz. $$ X_{n}\ =\ \Psi_{n}(X_{n-1})\ =\ \Psi_{n}\circ\ldots\circ\Psi_{1}(X_{0}),\quad n\ge 1, $$ where $\Psi_{n}(t):=Q_{n}+M_{n} t$ for $n\ge 1$ and $\circ$ denotes composition, and thus opposed to its closely related counterpart of \emph{backward iterations} $$\widehat{X}_{0}\ :=\ X_{0}\quad\text{and}\quad \widehat{X}_{n}\ :=\ \Psi_{1}\circ\ldots\circ\Psi_{n}(X_{0}),\quad n\ge 1.
$$ The relation is established by the obvious fact that $X_{n}$ has the same law as $\widehat{X}_{n}$ for each $n$, regardless of the law of $X_{0}$. Put $$\Pi_{0}\ :=\ 1\quad\text{and}\quad\Pi_{n}\ :=\ M_{1}M_{2}\cdot\ldots\cdot M_{n},\quad n\ge 1.$$ Assuming that \begin{equation}\label{trivial} \mathbb{P}(M=0)\ =\ 0\quad \text{and}\quad\mathbb{P} (Q=0)\ <\ 1 \end{equation} and \begin{equation}\label{trivial2} \mathbb{P}(Mr+Q=r)\ <\ 1\quad\text{for all }r\ge 0, \end{equation} Goldie and Maller \cite[Theorem 2.1]{GolMal:00} showed (actually, these authors did not assume that $M$ and $Q$ are nonnegative) that the series $\sum_{k\ge 1}\Pi_{k-1}Q_{k}$, called \emph{perpetuity}, is a.s.\ convergent provided that \begin{equation}\label{33} \lim_{n\to\infty}\Pi_{n}\ =\ 0\quad\text{a.s.\quad and}\quad I_{Q}\ :=\ \int_{(1,\,\infty)}J_{-}(x)\ \mathbb{P}(\log Q\in {\rm d}x)\ <\ \infty, \end{equation} where \begin{equation}\label{jx} J_{-}(y):=\frac{y}{\mathbb{E}(y\wedge\log_-M)},\quad y>0 \end{equation} and $\log_- x=-\min(\log x, 0)$. Equivalently, the Markov chain $(X_{n})_{n\ge 0}$ is then positive recurrent with unique invariant distribution given by the law of the perpetuity. It is also well-known what happens in the ``trivial cases'' when at least one of the conditions \eqref{trivial} and \eqref{trivial2} fails \cite[Theorem 3.1]{GolMal:00}: \begin{description}[(b)] \item[(a)] If $\mathbb{P}(M=0)>0$, then $\tau:=\inf\{k\ge 1:M_{k}=0\}$ is a.s. finite, and the perpetuity trivially converges to the a.s.\ finite random variable $\sum_{k=1}^{\tau}\Pi_{k-1}Q_{k}$, its law being the unique invariant distribution of $(X_{n})_{n\ge 0}$. \item[(b)] If $\mathbb{P}(Q=0)=1$, then $\sum_{k\ge 1}\Pi_{k-1}Q_{k}=0$ a.s. \item[(c)] If $\mathbb{P}(Q+Mr=r)=1$ for some $r\ge 0$ and $\mathbb{P}(M=0)=0$, then either $\delta_{r}$, the Dirac measure at $r$, is the unique invariant distribution of $(X_{n})_{n\ge 1}$, or every distribution is invariant. \end{description} Further information on RDE and perpetuities can be found in the recent books \cite{BurDamMik:16} and \cite{Iksanov:17}. If \eqref{trivial}, \eqref{trivial2}, \begin{equation}\label{30} \lim_{n\to\infty}\Pi_{n}=0\quad\text{a.s.\quad and}\quad I_{Q}\,=\,\infty \end{equation} hold, which are assumptions in most of our results hereafter (with the exception of Section \ref{attr}) and particularly satisfied if \begin{equation}\label{eq:log} -\infty\ \le\ \mathbb{E}\log M\ <\ 0\quad\text{and}\quad\mathbb{E}\log_{+} Q\ =\ \infty, \end{equation} where $\log_+ x=\max(\log x, 0)$, then the afore-stated result \cite[Theorem 2.1]{GolMal:00} by Goldie and Maller implies that $(X_{n})_{n\ge 0}$ must be either null recurrent or transient. Our purpose is to provide conditions for each of these alternatives and also to investigate the path behavior of $(X_{n})_{n\ge 0}$. We refer to \eqref{30} as the \emph{divergent contractive case} because, on the one hand, $\Pi_{n}\to 0$ a.s. still renders $\Psi_{n}\circ\ldots\circ\Psi_{1}$ to be contractions for sufficiently large $n$, while, on the other hand, $I_Q=\infty$ entails that occasional large values of the $Q_{n}$ overcompensate this contractive behavior in such a way that positive recurrence does no longer hold. As a consequence, $\sum_{k\ge 1}\Pi_{k-1}Q_{k}=\infty$ a.s. and so the backward iterations $\widehat{X}_{n}=\Pi_{n}X_{0}+\sum_{k=1}^{n}\Pi_{k-1}Q_{k}$ diverge to $\infty$ a.s. regardless of whether the chain $(X_{n})_{n\ge 0}$ is null recurrent or transient. 
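Purely as a numerical illustration of the divergent contractive case, and not used anywhere in the arguments below, one may simulate the forward iterations $X_{n}$ and the backward iterations $\widehat{X}_{n}$ for a concrete law of $(M,Q)$ satisfying \eqref{30}, here the ad hoc choice of a deterministic $M=e^{-2}$ and $\mathbb{P}(\log Q>t)=1/t$ for $t\ge 1$. The computation is carried out on the logarithmic scale to avoid overflow, since $\log Q$ itself has a heavy tail.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 10**5
log_M = np.full(n, -2.0)                 # deterministic M = e^{-2}, E[log M] = -2 < 0
log_Q = np.exp(rng.exponential(size=n))  # P(log Q > t) = 1/t for t >= 1, E[log_+ Q] = inf

# Forward iterations, tracked on the log scale, started from X_0 = 1:
# log X_n = logaddexp(log M_n + log X_{n-1}, log Q_n).
log_X = np.empty(n)
current = 0.0
for i in range(n):
    current = np.logaddexp(log_M[i] + current, log_Q[i])
    log_X[i] = current

# Backward iterations: partial sums of the perpetuity sum_k Pi_{k-1} Q_k
# (the Pi_n X_0 term is omitted), also tracked on the log scale.
log_Pi_prev = np.concatenate(([0.0], np.cumsum(log_M)[:-1]))   # log Pi_{k-1}
log_X_hat = np.logaddexp.accumulate(log_Pi_prev + log_Q)

print("median of log X_n over the path :", np.median(log_X))
print("log of X-hat_n at n = 100000    :", log_X_hat[-1])
\end{verbatim}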
The question of which alternative occurs relies on a delicate interplay between the $\Pi_{n}$ and the $Q_{n}$. Our main results (Theorems \ref{main11} and \ref{main12}), for simplicity here confined to the situation when \eqref{trivial}, \eqref{trivial2}, \eqref{eq:log} hold and $s:=\lim_{t\to\infty}t\,\mathbb{P}(\log Q>t)$ exists, assert that $(X_{n})_{n\ge 0}$ is null recurrent if $s<-\mathbb{E}\log M$ and transient if $s>-\mathbb{E}\log M$. For deterministic $M\in (0,1)$, i.e., autoregressive sequences $(X_{n})_{n\ge 0}$, this result goes already back to Kellerer \cite[Theorem 3.1]{Kellerer:92} and was later also proved by Zeevi and Glynn \cite[Theorem 1]{ZeeviGlynn:04}, though under a further extra assumption, namely that $Q$ has log-Cauchy tails with scale parameter $s$, i.e. $$ \mathbb{P}(\log(1+Q)>t)\ =\ \frac{1}{1+st}\quad\text{for all }t>0. $$ On the other hand, they could show null recurrence of $(X_{n})_{n\ge 0}$ even in the boundary case $s=-\log M$. Kellerer's result will be of some relevance here because we will take advantage of it in combination with a stochastic comparison technique (see Section \ref{M<=gamma<1}, in particular Proposition \ref{Kellerer's result}). Finally, we mention work by Bauernschubert \cite{Bauernschubert:13}, Buraczewski and Iksanov \cite{Buraczewski+Iksanov:15}, Pakes \cite{Pakes:83} and, most recently, by Zerner \cite{Zerner:2016+} on the divergent contractive case, yet only the last one studies the recurrence problem and is in fact close to our work. We will therefore comment on the connections in more detail in Remark \ref{zer}. In the critical case $\mathbb{E}\log M=0$ not studied here, when $\limsup_{n\to\infty}\Pi_{n}=\infty$ a.s. and thus non-contraction holds, a sufficient criterion for the null recurrence of $(X_{n})_{n\ge 0}$ and the existence of an essentially unique invariant Radon measure $\nu$ was given by Babillot et al. \cite{BabBouElie:97}, namely $$ \mathbb{E}|\log M|^{2+\delta}\,<\,\infty\quad\text{and}\quad\mathbb{E}(\log_{+}Q)^{2+\delta}\,<\,\infty\quad\text{for some }\delta>0. $$ For other aspects like the tail behavior of $\nu$ or the convergence $\widehat{X}_{n}$ after suitable normalization see \cite{Brofferio:03, Buraczewski:07, Grincev:76, HitczenkoWes:11,Iksanov+Pilipenko+Samoilenko:17,RachSamo:95}. The paper is organized as follows. In Section \ref{sec:background}, we review known results about general locally contractive Markov chains which form the theoretical basis of the present work. Our main results are stated in Section \ref{results} and proved in Sections \ref{M<=gamma<1}, \ref{sec:tail lemma} and \ref{sec:main11 and main12}. In Section \ref{attr} we investigate the attractor set of the Markov chain $(X_{n})_{n\ge 0}$ under the sole assumption that $(X_{n})_{n\ge 0}$ is locally contractive and recurrent. \section{Theoretical background}\label{sec:background} We start by giving some useful necessary and sufficient conditions for the transience and recurrence of the sequence $(X_{n})_{n\ge 0}$. The following definition plays a fundamental role in the critical case $\mathbb{E}\log M=0$, see \cite{BabBouElie:97,Benda:98b,Brofferio:03,BroBura:13,Buraczewski:07,PeigneWoess:11a}. 
A general Markov chain $(X_{n})_{n\ge 0}$, possibly taking values of both signs, is called \emph{locally contractive} if, for any compact set $K$ and all $x,y\in\mathbb{R}$, \begin{equation}\label{eq: local contraction} \lim_{n\to\infty}\big| X_{n}^{x}-X_{n}^{y}\big| \cdot\vec{1}_{\{X_{n}^{x}\in K \}}\ =\ 0\quad\text{a.s.} \end{equation} For the chain $(X_{n})_{n\ge 0}$ to be studied here, we observe that, under \eqref{30}, $$ \big| X_{n}^{x}-X_{n}^{y} \big|\ =\ \Pi_{n}|x-y|\ \underset{n\to\infty}{\longrightarrow} \ 0\quad\text{ a.s.} $$ for all $x,y\in\mathbb{R}$. This means that $(X_{n})_{n\ge 0}$ is contractive and hence locally contractive. Yet, it may hold that $$ \mathbb{P}\left(\lim_{n\to\infty}|X_{n}^{x}-x|=\infty\right)\ =\ 1$$ for any $x\in\mathbb{R}$ in which case the chain is called \emph{transient}. We quote the following result from \cite[Lemma 2.2]{PeigneWoess:11a}. \begin{Lemma}\label{lem:1} If $(X_{n})_{n\ge 0}$ is locally contractive, then the following dichotomy holds: either \begin{equation}\label{eq:3} \mathbb{P}\left(\lim_{n\to\infty}|X_{n}^{x}-x|=\infty\right)\ =\ 0\quad\text{for all }x\in\mathbb{R} \end{equation} or \begin{equation}\label{eq:4} \mathbb{P}\left(\lim_{n\to\infty}|X_{n}^{x}-x|=\infty\right)\ =\ 1\quad\text{for all }x\in\mathbb{R}. \end{equation} \end{Lemma} The lemma states that either $(X_{n})_{n\ge 0}$ is transient or visits a large interval infinitely often (i.o.). The Markov chain $(X_{n})_{n\ge 0}$ is called \emph{recurrent} if there exists a nonempty closed set $L\subset\mathbb{R}$ such that $\mathbb{P}(X_{n}^{x}\in U\text{ i.o.})=1$ for every $x\in L$ and every open set $U$ that intersects $L$. Plainly, recurrence is a local property of the path of $(X_{n})_{n\ge 0}$. The next lemma can be found in \cite[Theorem 3.8]{Benda:98b} and \cite[Theorem 2.13]{PeigneWoess:11a}. \begin{Lemma}\label{lem:3} If $(X_{n})_{n\ge 0}$ is locally contractive and recurrent, then there exists a unique (up to a multiplicative constant) invariant locally finite measure $\nu$. \end{Lemma} The Markov chain $(X_{n})_{n\ge 0}$ is called \emph{positive recurrent} if $\nu(L)<\infty$ and \emph{null recurrent}, otherwise. Our third lemma was stated as Proposition 1.3 in \cite{Benda:98b}. Since this report has never been published, we present a short proof. \begin{Lemma}\label{lem:2} Let $(X_{n})_{n\ge 0}$ be a locally contractive Markov chain and $U$ an open subset of $\mathbb{R}$. Then $\mathbb{P}(X_{n}^{x}\in U~{\rm i.o.})<1$ for some $x\in\mathbb{R}$ implies $\sum_{n\ge 0} \mathbb{P}(X_{n}^{y}\in K)<\infty$ for all $y\in\mathbb{R}$ and all compact $K\subset U$. \end{Lemma} \begin{proof} Take $x$ such that $\mathbb{P}(X_{n}^{x}\in U\ \text{i.o.})<1$. Then there exists $n_{1}\in\mathbb{N}$ such that $$ \mathbb{P}(X_{n}^{x}\notin U\text{ for all }n\ge n_{1})\ >\ 0. $$ Now fix an arbitrary $y\in \mathbb{R}$ and a compact $K\subset U$. Defining the compact set $K_{y}:=K \cup\{y\}$, the local contractivity implies that for some $n_{2}\in\mathbb{N}$ \begin{equation}\label{star} \mathbb{P}\left(X_{n}^{z}\notin K\text{ for all } n\ge n_{2}\ \text{ and some } z\in K_{y}\right)\ =:\ \delta\ >\ 0.
\end{equation} For $z\in K_{y}$, consider the sequence of stopping times \begin{align*} T_{0}^{z}\ &=\ 0\quad\text{and}\quad T_{n}^{z}\ =\ \inf\{ k> T_{n-1}^{z}:\; X_{k}^{z}\in K\}\quad\text{for }n\ge 1. \end{align*} Then \eqref{star} implies that $\mathbb{P}(T^{z}_{n_{2}}<\infty)\le 1-\delta$ for each $z\in K_{y}$. Consequently, $$ \mathbb{P}\left(T^{y}_{nn_{2}}<\infty\right)\ \le\ (1-\delta) \mathbb{P}\left(T_{(n-1)n_{2}}^{y}<\infty\right)\ \le\ (1-\delta)^{n} $$ for all $n\ge 1$ and thus \begin{align*} \sum_{n\ge 0} \mathbb{P}(X_{n}^{y} \in K)\ &=\ \mathbb{E}\left(\sum_{n\ge 0}\vec{1}_{\{X_{n}^{y}\in K\}}\right)\ \le\ \sum_{n\ge 0}\mathbb{P}\left(T_{n}^{y} <\infty\right)\\ &\le\ \sum_{n\ge 0} n_{2}\,\mathbb{P}\left(T_{n n_{2}}^{y} <\infty\right)\ \le\ n_{2}/\delta\ <\ \infty.\tag*\qed \end{align*} \end{proof} A combination of Lemmata \ref{lem:1} and \ref{lem:2} provides us with \begin{Prop}\label{transient} For a locally contractive Markov chain $(X_{n})_{n\ge 0}$ on $\mathbb{R}$, the following assertions are equivalent: \begin{description}[(b)] \item[(a)] The chain is transient. \item[(b)] $\lim_{n\to\infty}|X_{n}^{x}|=\infty$ a.s. for all $x\in\mathbb{R}$. \item[(c)] $\mathbb{P}(X_{n}^{x}\in U~{\rm i.o.})< 1$ for any bounded open $U\subset\mathbb{R}$ and some/all $x\in\mathbb{R}$. \item[(d)] $\sum_{n\ge 0}\mathbb{P}(X_{n}^{x}\in K)<\infty$ for any compact $K\subset\mathbb{R}$ and some/all $x\in\mathbb{R}$. \end{description} \end{Prop} \begin{proof} The equivalence of (a), (b) and (c) is obvious. By Lemma \ref{lem:2}, (c) entails (d), while the Borel-Cantelli lemma gives the converse.\qed \end{proof} Now we consider the case when \eqref{eq:3} is satisfied. For any $\omega$, we define $L^{x}(\omega)$ to be the set of accumulation points of $(X_{n}^x(\omega))_{n\ge 0}$, i.e. $$ L^{x}(\omega)\ :=\ \bigcap_{m\ge 1}\overline{\{X_{n}^x(\omega):n\ge m\}}, $$ where $\overline{C}$ denotes the closure of a set $C$. It is known \cite{Benda:98b,PeigneWoess:11a} that $L^x(\omega)$ does not depend on $x$ and $\omega$. In fact, there exists a deterministic set $L\subset\mathbb{R}$ (called the \emph{attractor set} or \emph{limit set}) such that $$ \mathbb{P}\{L^x(\cdot) = L \ \text{ for all } x\in \mathbb{R}\}=1. $$ \begin{Prop}\label{recur} For a locally contractive Markov chain $(X_{n})_{n\ge 0}$ on $\mathbb{R}$, the following assertions are equivalent: \begin{description}[(b)] \item[(a)] The chain is recurrent. \item[(b)] $\liminf_{n\to\infty}|X_{n}^x-x|<\infty$ a.s. for all $x\in\mathbb{R}$. \item[(c)] $\liminf_{n\to\infty}|X_{n}|<\infty$ a.s. \item[(d)] $\sum_{n\ge 0}\mathbb{P}\{X_{n}^{x}\in K\}=\infty$ for a nonempty compact set $K$ and some/all $x\in\mathbb{R}$. \end{description} \end{Prop} \begin{proof} In view of the contrapositive of Proposition \ref{transient}, we must only verify for ``(a)$\Rightarrow$(d)'' that the sum in (d) is indeed infinite for some compact $K\ne\emptyset$ and \emph{all} $x\in\mathbb{R}$. W.l.o.g. let $K=[-2b,2b]$ for some $b>0$ and $y\in\mathbb{R}$ such that, by (a), $\sum_{n\ge 0}\vec{1}_{\{|X_{n}^{y}|\le b\}}=\infty$ a.s.\ and thus $\sum_{n\ge 0}\mathbb{P}(|X_{n}^{y}|\le b)=\infty$.
Local contractivity implies that $\sigma_{x}:=\sup\{n\ge 0:|X_{n}^{x}-X_{n}^{y}|>b,\,|X_{n}^{y}|\le b\}$ is a.s.\ finite for \emph{all} $x\in\mathbb{R}$. Consequently, $X_{n}^{x}$ hits $[-2b,2b]$ whenever $X_{n}^{y}$ hits $[-b,b]$ for $n>\sigma_{x}$, in particular $\sum_{n\ge 0}\vec{1}_{\{|X_{n}^{x}|\le 2b\}}=\infty$ a.s.\ and thus $\sum_{n\ge 0}\mathbb{P}(|X_{n}^{x}|\le 2b)=\infty$ for all $x\in\mathbb{R}$.\qed \end{proof} \section{Results}\label{results} In order to formulate the main result, we need \begin{align} \begin{split}\label{eq:def of s_* and s^*} s_{*}\ :=\ &\liminf_{t\to\infty}\,t\,\mathbb{P}(\log Q>t),\\ s^{*}\ :=\ &\limsup_{t\to\infty}\,t\,\mathbb{P}(\log Q>t) \end{split} \end{align} for which $0\le s_{*}\le s^{*}\le\infty$ holds true. In some places, the condition \begin{equation}\label{tail2} \lim_{t\to\infty}t\,\mathbb{P}(\log Q>t)\ =\ s\ \in\ [0,\infty] \end{equation} will be used. Finally, put $\fm^{\pm}:=\mathbb{E}\log_{\pm}M$ and, if $\fm^{+}\wedge\fm^{-}<\infty$, $$ \fm\ :=\ \mathbb{E}\log M\ =\ \fm^{+}-\fm^{-} $$ which is then in $[-\infty,0)$ by our standing assumption $\Pi_{n}\to 0$ a.s. \begin{Theorem}\label{main11} Let $\fm\in [-\infty,0)$ and \eqref{trivial}, \eqref{trivial2}, \eqref{30} be valid. Then the following assertions hold: \begin{description}[(b)]\itemsep2pt \item[(a)] $(X_{n})_{n\ge 0}$ is null recurrent if $s^{*}<-\fm$. \item[(b)] $(X_{n})_{n\ge 0}$ is transient if $s_{*}>-\fm\ ($thus $\fm>-\infty)$. \end{description} \end{Theorem} \begin{Rem}\label{zer}\rm In the recent paper \cite{Zerner:2016+}, Zerner studies the recurrence/transience of $(X_{n})_{n\ge 0}$ defined by \eqref{chain} in the more general setting when $M$ is a nonnegative $d\times d$ random matrix and $Q$ an $\mathbb{R}_{+}^{d}$-valued random vector. A specialization of his Theorem 5 to the one-dimensional case $d=1$ reads as follows. Suppose that \begin{equation}\label{boun} M\in [a,b]\quad\text{a.s.} \end{equation} for some $0<a<b<\infty$ and that either $\lim_{t\to\infty}\,t^\beta\,\mathbb{P}(\log Q>t)=0$ for some $\beta\in (2/3,1)$, or $s_{*}>-\fm$. Let $y\in (0,\infty)$ be such that $\mathbb{P}(Q\le y)>0$. Then $(X_{n})_{n\ge 0}$ is recurrent if, and only if, \begin{equation}\label{eq:Zerner condition} \sum_{n\ge 0}\prod_{k=0}^{n}\mathbb{P}(Q\le ye^{-k\fm})\ =\ \infty. \end{equation} It is not difficult to verify that \eqref{eq:Zerner condition} holds if $s^{*}<-\fm$ and that it fails if $s_{*}>-\fm$. Therefore, Zerner's result contains our Theorem \ref{main11} under the additional assumption \eqref{boun}. \end{Rem} \begin{Rem}\label{Lyapunov functions}\rm If $\log M$ and $\log Q$ are both integrable and $D(x):=\log X_{1}^{x}-\log X_{0}^{x}$, then $$ \mathbb{E} D(x)\ =\ \mathbb{E}\log(M+x^{-1}Q)\ \xrightarrow{x\to\infty}\ 0 $$ shows that $(\log X_{n})_{n\ge 0}$ forms a Markov chain with asymptotic drift zero. Such chains are studied at length by Denisov, Korshunov and Wachtel in a recent monograph-like publication \cite{DenKorWach:16}. They also provide conditions for recurrence and transience in terms of truncated moments of $D(x)$, see their Corollaries 2.11 and 2.16, but these appear to be more complicated and more restrictive than ours. \end{Rem} \begin{Rem}\rm Here is a comment on the boundary case $s=-\fm$ not covered by Theorem \ref{main11}.
Assuming $M=e^\fm$ a.s., it can be shown that the null recurrence/transience of $(X_{n})_{n\ge 0}$ is equivalent to the divergence/convergence of the series $$ \sum_{n\ge 1}\mathbb{P}\left(\max_{1\le k\le n}\,e^{\fm (k-1)}Q_{k}\le e^{x}\right)\ =\ \sum_{n\ge 1}\prod_{k=0}^{n-1} F(x-\fm k) $$ for some/all $x\ge 0$, where $F(y):=\mathbb{P}(\log Q\le y)$. Indeed, assuming $X_{0}=0$, the transience assertion follows when using $\mathbb{P}(X_{n}\le e^{x})\le \mathbb{P}(\max_{1\le k\le n}\,e^{\fm (k-1)}Q_{k}\le e^{x})$, while the null recurrence claim is shown by a thorough inspection and adjustment of the proof of Theorem \ref{main11}(a). Using Kummer's test as stated in \cite{Tong:94}, we then further conclude that $(X_{n})_{n\ge 0}$ is null recurrent if, and only if, there exist positive $p_{1},p_{2},\ldots$ such that $F(-\fm k)\ge p_{k}/p_{k+1}$ and $\sum_{k\ge 1}(1/p_{k})=\infty$. For applications, the following sufficient condition, which is a consequence of Bertrand's test \cite[p.~408]{Stromberg:81}, may be more convenient. If $$ 1-F(x)\ =\ \frac{-\fm}{x}+\frac{f(x)}{x\log x}, $$ then $(X_{n})_{n\ge 0}$ is null recurrent if $\displaystyle\limsup_{x\to\infty}f(x)<-\fm$, and transient if $\displaystyle\liminf_{x\to\infty}f(x)>-\fm$. \end{Rem} If $\fm^{-}=\fm^{+}=\infty$ and $s^{*}<\infty$, then $(X_{n})_{n\ge 0}$ is \emph{always} null recurrent as the next theorem will confirm. Its proof will be based on finding an appropriate subsequence of $(X_{n})_{n\ge 0}$ which satisfies the assumptions of Theorem \ref{main11}(a). \begin{Theorem}\label{main12} Let $\fm^{+}=\fm^{-}=\infty$, $s^\ast<\infty$ and \eqref{trivial}, \eqref{trivial2}, \eqref{30} be valid. Then $(X_{n})_{n\ge 0}$ is null recurrent. \end{Theorem} The two theorems are proved in Section \ref{sec:main11 and main12} after some preparatory work in Sections \ref{M<=gamma<1} and \ref{sec:tail lemma}. \begin{Rem}\label{rem:main12}\rm It is worthwhile to point out that the assumptions of the previous theorem impose some constraint on the tails of $\log_{+}M$. Namely, given these assumptions, the negative divergence of the random walk $S_{n}:=\log\Pi_{n}$, $n\ge 0$, that is $S_{n}\to-\infty$ a.s., entails \begin{align*} I_{M}\ =\ \int_{(1,\,\infty)}J_{-}(x)\ \mathbb{P}(\log M\in {\rm d}x)\ <\ \infty \end{align*} by Erickson's theorem \cite[Theorem 2]{Erickson:73}. But this in combination with $I_{Q}=\infty$ and $s^{*}<\infty$ further implies by stochastic comparison that $$ r_{*}\ :=\ \liminf_{t\to\infty}t\,\mathbb{P}(\log M>t)\ =\ 0. $$ Indeed, if the latter failed to hold, i.e. $r_{*}>0$, then $$ \mathbb{P}(\log M>t)\ \ge\ \frac{r_{*}}{2t}\ \ge\ \frac{r_{*}}{4s^{*}}\,\mathbb{P}(\log Q>t) $$ for all sufficiently large $t$, say $t\ge t_{0}$, which in turn would entail the contradiction $$ I_{M}-J_-(0+)\ \ge\ \frac{r_{*}}{4s^{*}}\int_{(t_{0},\infty)}J_{-}'(x)\ \mathbb{P}(\log Q>x)\ {\rm d}x\ =\ \infty. $$ \end{Rem} \section{The cases $M\le\gamma$ and $M\geq \gamma$: Two comparison lemmata and Kellerer's result}\label{M<=gamma<1} This section collects some useful results for the cases when $M\le\gamma$ or $M\geq \gamma$ a.s.\ for a constant $\gamma\in (0,1)$, in particular Kellerer's unpublished recurrence result \cite{Kellerer:92} for this situation, see Proposition \ref{Kellerer's result} below. Whenever given iid nonnegative $Q_{1},Q_{2},\ldots$ with generic copy $Q$, let $(X_{n}(\gamma))_{n\ge 0}$ be defined by $$ X_{n}(\gamma)\ =\ \gamma X_{n-1}(\gamma)+Q_{n},\quad n\ge 1,$$ where $X_{0}(\gamma)$ is independent of $(Q_{n})_{n\ge 1}$. 
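Note that $(X_{n}(\gamma))_{n\ge 0}$ is simply the chain \eqref{chain} with $M=\gamma$ a.s.; in particular, if $X_{0}(\gamma)=0$, then the corresponding backward iterations take the explicit form $$ \widehat{X}_{n}(\gamma)\ =\ \sum_{k=1}^{n}\gamma^{k-1}Q_{k},\quad n\ge 1, $$ a formula that will be used repeatedly below.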
We start with two comparison lemmata which treat two RDEs with identical $M\le\gamma$ but different $Q$. \begin{Lemma}\label{comparison lemma} Let $(M_{n},Q_{n},Q_{n}')_{n\ge 1}$ be a sequence of iid random vectors with nonnegative components and generic copy $(M,Q,Q')$ such that $M\le\gamma$ a.s. for some $\gamma\in (0,1)$ and \begin{equation}\label{eq:tail condition Q'} \mathbb{P}(Q'>t)\ \ge\ \mathbb{P}(Q>t) \end{equation} for some $t_{0}\ge 0$ and all $t\ge t_{0}$. Define $$ X_{n}\,:=\,M_{n}X_{n-1}+Q_{n}\quad\text{and}\quad X_{n}'\,:=\,M_{n}X_{n-1}'+Q_{n}' $$ for $n\ge 1$, where $X_{0}^\prime$ is independent of $X_{0}$ and $(M_{k}, Q_{k})_{k\ge 1}$. Then \begin{align*} (X_{n})_{n\ge 0}\text{ transient}\quad\Longrightarrow\quad (X_{n}')_{n\ge 0}\text{ transient}\\ \shortintertext{or, equivalently,} (X_{n}')_{n\ge 0}\text{ recurrent}\quad\Longrightarrow\quad (X_{n})_{n\ge 0}\text{ recurrent}. \end{align*} \end{Lemma} \begin{proof} The tail condition \eqref{eq:tail condition Q'} ensures that we may choose a coupling $(Q,Q')$ such that $Q'\ge Q-t_{0}$ a.s. Then, with $(Q_{n},Q_{n}')_{n\ge 1}$ being iid copies of $(Q,Q')$, it follows that \begin{align*} X_{n}'-X_{n}\ &=\ M_{n}(X_{n-1}'-X_{n-1})\ +\ Q_{n}'-Q_{n}\\ &\ge\ M_{n}(X_{n-1}'-X_{n-1})\ -\ t_{0}\\ \ldots\ &\ge\ \left(\prod_{k=1}^{n}M_{k}\right)(X_{0}'-X_{0})\ -\ t_{0}\sum_{k=0}^{n-1}\gamma^{k}\quad\text{a.s.} \shortintertext{and thereby} &\liminf_{n\to\infty}\,(X_{n}'-X_{n})\ \ge\ -\frac{t_{0}}{1-\gamma}\quad\text{a.s.} \end{align*} which obviously proves the asserted implication.\qed \end{proof} \begin{Lemma}\label{comparison lemma 2} Replace condition \eqref{eq:tail condition Q'} in Lemma \ref{comparison lemma} with \begin{equation} Q'\ =\ \vec{1}_{\{Q>\beta\}}Q \end{equation} for some $\beta>0$, thus $\mathbb{P}(Q'=0)=\mathbb{P}(Q\le\beta)$. Then $$ (X_{n}')_{n\ge 0}\text{ recurrent}\quad\Longleftrightarrow\quad (X_{n})_{n\ge 0}\text{ recurrent}. $$ \end{Lemma} \begin{proof} Here it suffices to point out that \begin{align*} |X_{n}'-X_{n}|\ &=\ |M_{n}(X_{n-1}'-X_{n-1})\ +\ Q_{n}'-Q_{n}|\\ &\le\ M_{n}|X_{n-1}'-X_{n-1}|\ +\ \vec{1}_{\{Q_{n}\le\beta\}}Q_{n}\\ \ldots\ &\le\ \left(\prod_{k=1}^{n}M_{k}\right)|X_{0}'-X_{0}|\ +\ \beta\sum_{k=0}^{n-1}\gamma^{k}\\ &\le\ \gamma^{n}|X_{0}'-X_{0}|\ +\ \frac{\beta}{1-\gamma}\quad\text{a.s.} \end{align*} for all $n\ge 1$.\qed \end{proof} The announced result by Kellerer including its proof (with some minor modifications), taken from his unpublished Technical Report \cite[Theorem~3.1]{Kellerer:92}, is given next. \begin{Prop}\label{Kellerer's result} Let $0<\gamma<1$. Then the following assertions hold: \begin{description} \item[(a)] $(X_{n})_{n\ge 0}$ is transient if $M\ge\gamma$ a.s. and \begin{equation}\label{lower tail condition} s_{*}\ =\ \liminf_{t\to\infty}t\,\mathbb{P}(\log Q>t)\ >\ \log(1/\gamma). \end{equation} \item[(b)] $(X_{n})_{n\ge 0}$ is null recurrent if $M\le\gamma$ a.s. and \begin{equation}\label{upper tail condition} s^{*}\ =\ \limsup_{t\to\infty}t\,\mathbb{P}(\log Q>t)\ <\ \log(1/\gamma). \end{equation} \end{description} \end{Prop} \begin{proof} It is enough to consider (in both parts) the case when $M=\gamma$ a.s. and thus the Markov chain $(X_{n}(\gamma))_{n\ge 0}$ as defined above. We may further assume that $X_{0}(\gamma)=0$ and put $\theta:=\log(1/\gamma)$. \emph{Transience.} It suffices to show that $\sum_{n\ge 1}\mathbb{P}(\widehat{X}_{n}(\gamma)\le e^{t})<\infty$ for all $t\ge 0$.
Fixing $t$ and any $\varepsilon>0$ with $(1+\varepsilon)\theta<s_{*}$, pick $m\in\mathbb{N}$ so large that $$ \inf_{k\ge m+1}k\theta\,\mathbb{P}(\log Q>t+k\theta)\ \ge \ (1+\varepsilon)\theta. $$ Then we infer for all $n>m$ \begin{align*} \mathbb{P}(\widehat{X}_{n}(\gamma)\le e^{t})\ &=\ \mathbb{P}\left(\sum_{k=1}^{n}\gamma^{k-1}Q_{k}\le e^{t}\right)\\ &\le\ \mathbb{P}\big(\log Q_{k}\le t+(k-1)\theta,\,1\le k\le n\big)\\ &\le\ \prod_{k=m+1}^{n}\big(1-\mathbb{P}(\log Q>t+k\theta)\big)\\ &\le\ \prod_{k=m+1}^{n}\left(1-\frac{1+\varepsilon}{k}\right)\\ &\le\ \prod_{k=m+1}^{n}\left(1-\frac{1}{k}\right)^{1+\varepsilon}\ =\ \left(\frac{m}{n}\right)^{1+\varepsilon} \end{align*} where $(1-x)^{1+\varepsilon}\ge 1-(1+\varepsilon)x$ for all $x\in [0,1]$ has been utilized for the last inequality. Consequently, $\sum_{n\ge 1}\mathbb{P}(\widehat{X}_{n}(\gamma)\le e^{t})<\infty$, and the transience of $(X_{n}(\gamma))_{n\ge 0}$ follows by Proposition \ref{transient}. \emph{Null recurrence.} By Lemma \ref{comparison lemma 2}, we may assume w.l.o.g. that, for some sufficiently small $\varepsilon>0$, $\delta\,:=\,\mathbb{P}(Q=0)\,\ge\,\gamma^{\varepsilon}$ and \begin{align*} &\sup_{t\ge 1}t\,\mathbb{P}(\log Q>t)\ \le\ (1-\varepsilon)\theta. \end{align*} Put also $m_{n}:=\theta^{-1}(m+\log n)$ for integer $m\ge 1$ so large that $$g(x,n):=(x-1)\theta-\log n\ge 1\vee (1-\varepsilon)\theta$$ for all $x\in (m_{n},\infty)$. Note that $\delta^{m_{n}}\ge (e^mn)^{-\varepsilon}$. For all $n\ge 1$ so large that $g(n,n)>\theta$, we then infer \begin{align*} \mathbb{P}(\widehat{X}_{n}(\gamma)\le 1)\ &\ge\ \mathbb{P}\left(\max_{1\le k\le n}\gamma^{k-1}Q_{k}\le\frac{1}{n}\right)\\ &\ge\ \mathbb{P}(Q=0)^{m_{n}}\prod_{m_{n}+1\le k\le n}\mathbb{P}(\log Q\le g(k,n))\\ &\ge\ \delta^{m_{n}}\prod_{m_{n}+1\le k\le n}\left(1-\frac{(1-\varepsilon)\theta}{g(k,n)}\right)\\ &\ge\ (e^m n)^{-\varepsilon}\prod_{k=m}^{n}\left(1-\frac{1-\varepsilon}{k}\right)\\ &\ge\ (e^m n)^{-\varepsilon}\prod_{k=2}^{n}\left(1-\frac{1}{k}\right)^{1-\varepsilon}\ =\ \frac{e^{-m\varepsilon}}{n}. \end{align*} Here $(1-x)^{1-\varepsilon}\le 1-(1-\varepsilon)x$ for all $x\in [0,1]$ has been utilized for the last inequality. Hence, $\sum_{n\ge 1}\mathbb{P}(\widehat{X}_{n}(\gamma)\le 1)=\infty$, giving the recurrence of $(X_{n}(\gamma))_{n\ge 0}$ by Proposition \ref{recur}.\qed \end{proof} Given a Markov chain $(Z_{n})_{n\ge 0}$, a sequence $(\sigma_{n})_{n\ge 0}$ is called a \emph{renewal stopping sequence} for this chain if the following conditions hold: \begin{description}[(R2)] \item[(R1)] $\sigma_{0}=0$ and the $\tau_{n}:=\sigma_{n}-\sigma_{n-1}$ are iid for $n\ge 1$. \item[(R2)] There exists a filtration $\mathcal{F}=(\mathcal{F}_{n})_{n\ge 0}$ such that $(Z_{n})_{n\ge 0}$ is Markov-adapted and each $\sigma_{n}$ is a stopping time with respect to $\mathcal{F}$. \end{description} We define $$ S_{n}\ :=\ \log\Pi_{n}\ =\ \sum_{k=1}^{n}\log M_{k} $$ for $n\ge 0$ and recall that, by our standing assumption, $(S_{n})_{n\ge 0}$ is a negative divergent random walk $(S_{n}\to-\infty$ a.s.). 
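A simple example of a renewal stopping sequence for $(X_{n})_{n\ge 0}$ is the deterministic sequence $\sigma_{n}=nm$, $n\ge 0$, for any fixed $m\in\mathbb{N}$; here the $\tau_{n}$ are all equal to $m$, and (R2) holds, for instance, with $\mathcal{F}_{n}=\sigma\big(X_{0},(M_{k},Q_{k})_{1\le k\le n}\big)$. The ladder epochs introduced next provide the genuinely random examples relevant for our purposes.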
For $c\in\mathbb{R}$, let $(\sigma_{n}^{>}(c))_{n\ge 0}$ and $(\sigma_{n}^{<}(c))_{n\ge 0}$ denote the possibly defective renewal sequences of ascending and descending ladder epochs associated with the random walk $(S_{n}+cn)_{n\ge 0}$, in particular \begin{align*} \sigma^{>}(c)\ &=\ \sigma^{>}_{1}(c)\ :=\ \inf\{n\ge 1:S_{n}+cn>0\}\ =\ \inf\{n\ge 1:\Pi_{n}>e^{-cn}\},\\ \sigma^{<}(c)\ &=\ \sigma^{<}_{1}(c)\ :=\ \inf\{n\ge 1:S_{n}+cn<0\}\ =\ \inf\{n\ge 1:\Pi_{n}<e^{-cn}\}. \end{align*} Plainly, these are renewal stopping sequences for $(X_{n})_{n\ge 0}$ whenever nondefective. \begin{Lemma}\label{lem:bounding lemma} Let $c\ge 0$ and $\gamma=e^{-c}$. \begin{description}[(b)] \item[(a)] If $c$ is such that $\sigma^{<}(c)<\infty$ a.s., then, with $(\sigma_{n})_{n\ge 0}:=(\sigma_{n}^{<}(c))_{n\ge 0}$, $$ X_{\sigma_{n}}\ \le\ X_{\sigma_{n}}(\gamma)\quad\text{and}\quad\widehat{X}_{\sigma_{n}}\ \le\ \widehat{Y}_{n}\quad\text{a.s.} $$ for all $n\ge 0$, where $X_{0}=X_{0}(\gamma)=\widehat{Y}_{0}$ and $$ \widehat{Y}_{n}\ :=\ \sum_{k=1}^{n}\gamma^{\sigma_{k-1}}Q_{k}^{*}\quad\text{with}\quad Q_{n}^{*}\ :=\ \sum_{k=1}^{\sigma_{n}-\sigma_{n-1}}\frac{\Pi_{\sigma_{n-1}+k-1}}{\Pi_{\sigma_{n-1}}}Q_{\sigma_{n-1}+k} $$ for $n\ge 1$ denotes the sequence of backward iterations pertaining to the recursive Markov chain $Y_{n}=\gamma^{\sigma_{n}-\sigma_{n-1}}Y_{n-1}+Q_{n}^{*}$. \item[(b)] If $c$ is such that $\sigma^{>}(c)<\infty$ a.s., then, with $(\sigma_{n})_{n\ge 0}:=(\sigma_{n}^{>}(c))_{n\ge 0}$, $$ X_{\sigma_{n}}\ \ge\ X_{\sigma_{n}}(\gamma)\quad\text{and}\quad\widehat{X}_{\sigma_{n}}\ \ge\ \widehat{Y}_{n}\quad\text{a.s.} $$ for all $n\ge 0$, where $X_{0}=X_{0}(\gamma)=\widehat{Y}_{0}$ and $\widehat{Y}_{n}$ is defined as in (a) for the $\sigma_{n}$ given here. \end{description} \end{Lemma} Plainly, one can take $c\in (0,-\fm)$ in (a) and $c\in (-\fm,\infty)$ in (b) if $-\infty<\fm<0$. \begin{proof} (a) Suppose that the $\sigma_{n}^{<}(c)$ are a.s. finite. To prove our claim for $X_{\sigma_{n}}$, we use induction over $n$. Since $\sigma_{0}=0$, we have $X_{\sigma_{0}}=X_{\sigma_{0}}(\gamma)$. For the inductive step suppose that $X_{\sigma_{n-1}}\le X_{\sigma_{n-1}}(\gamma)$ for some $n\ge 1$. Observe that, with $\tau_{n}=\sigma_{n}-\sigma_{n-1}$, \begin{align} \begin{split}\label{eq:crucial estimate} M_{\sigma_{n-1}+k+1}\cdot\ldots\cdot M_{\sigma_{n}}\ &=\ \frac{\Pi_{\sigma_{n}}}{\Pi_{\sigma_{n-1}+k}}\\ &=\ e^{(S_{\sigma_{n}}+c\sigma_{n})-(S_{\sigma_{n-1}+k}+c(\sigma_{n-1}+k))-c(\tau_{n}-k)}\ \le\ \gamma^{\tau_{n}-k} \end{split} \end{align} for all $0\le k\le\tau_{n}$. Using this and the inductive hypothesis, we obtain \begin{align*} X_{\sigma_{n}}\ &=\ \frac{\Pi_{\sigma_{n}}}{\Pi_{\sigma_{n-1}}}X_{\sigma_{n-1}}+\sum_{k=1}^{\tau_{n}}\frac{\Pi_{\sigma_{n}}}{\Pi_{\sigma_{n-1}+k}}\,Q_{\sigma_{n-1}+k}\\ &\le\ \gamma^{\tau_{n}}X_{\sigma_{n-1}}(\gamma)+\sum_{k=1}^{\tau_{n}}\gamma^{\tau_{n}-k}\,Q_{\sigma_{n-1}+k}\ =\ X_{\sigma_{n}}(\gamma)\quad\text{a.s.} \end{align*} as asserted. Regarding the backward iteration $\widehat{X}_{\sigma_{n}}$, we find more directly that \begin{align*} \widehat{X}_{\sigma_{n}}\ &=\ \sum_{k=1}^{n}\Pi_{\sigma_{k-1}}\sum_{j=1}^{\tau_{k}}\frac{\Pi_{\sigma_{k-1}+j-1}}{\Pi_{\sigma_{k-1}}}Q_{\sigma_{k-1}+j}\\ &\le\ \sum_{k=1}^{n}\gamma^{\sigma_{k-1}}\sum_{j=1}^{\tau_{k}}\frac{\Pi_{\sigma_{k-1}+j-1}}{\Pi_{\sigma_{k-1}}}Q_{\sigma_{k-1}+j}\ =\ \widehat{Y}_{n}\quad\text{a.s.} \end{align*} for each $n\ge 1$. (b) If $c$ is such that the $\sigma_{n}^{>}(c)$ are a.s.
finite, then \eqref{eq:crucial estimate} turns into \begin{equation*} M_{\sigma_{n-1}+k+1}\cdot\ldots\cdot M_{\sigma_{n}}\ \ge\ \gamma^{\tau_{n}-k} \end{equation*} for all $n\in\mathbb{N}$ and $0\le k\le\tau_{n}$. Now it is easily seen that the inductive argument in (a) remains valid when reversing inequality signs and the same holds true for $\widehat{X}_{\sigma_{n}}$.\qed \end{proof} \section{Tail lemmata}\label{sec:tail lemma} In order to prove our results, we need to verify that the tail condition \eqref{tail2} is preserved under stopping times with finite mean. To be more precise, let $\sigma$ be any such stopping time for $(M_{k},Q_{k})_{k\ge 1}$ and consider $$ \widehat{X}_{\sigma}\ =\ \sum_{k=1}^{\sigma}\Pi_{k-1}Q_{k}. $$ Obviously, \begin{equation}\label{Qsigma* bounds} \max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}\ \le\ \widehat{X}_{\sigma}\ \le\ \sigma\,\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}. \end{equation} \begin{Lemma}\label{tail lemma} Assuming \eqref{trivial}, \eqref{trivial2} and $\fm<0$, condition \eqref{tail2} entails \begin{equation*} \lim_{t\to\infty}t\,\mathbb{P}(\log \widehat{X}_{\sigma}>t)\ =\ s\,\mathbb{E}\sigma, \end{equation*} where the right-hand side equals $0$ if $s=0$, and $\infty$ if $s=\infty$. \end{Lemma} \begin{proof} It suffices to prove \begin{equation*} \lim_{t\to\infty}t\,\mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\ =\ s\,\mathbb{E} \sigma \end{equation*} because \eqref{Qsigma* bounds} in combination with $\mathbb{E}\sigma<\infty$ entails \begin{align*} \mathbb{P}&\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\ \le\ \mathbb{P}(\log \widehat{X}_{\sigma}>t)\\ &\le\ \mathbb{P}(\log\sigma>\varepsilon t)\ +\ \mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>(1-\varepsilon)t\right)\\ &=\ o(t^{-1})\ +\ \mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>(1-\varepsilon)t\right) \end{align*} for all $\varepsilon>0$. (a) We first prove that \begin{equation}\label{eq:upper bound} \limsup_{t\to\infty}\,t\,\mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\ \le\ s\,\mathbb{E}\sigma \end{equation} which is nontrivial only when assuming $s\in [0,\infty)$. Put $\eta_{n}:=\log Q_{n}$ for $n\in\mathbb{N}$. For any $\varepsilon\in (0,1)$, we then have \begin{align*} \mathbb{P}&\left(\log\max_{1\le k\le \sigma}\Pi_{k-1}Q_{k}>t\right)\ =\ \mathbb{P}\left(\max_{1\le k\le \sigma}\,(S_{k-1}+\eta_{k})>t\right)\\ &\le\ \mathbb{P}\left(\max_{0\le k\le \sigma}\,S_{k}>\varepsilon t\right)\ +\ \mathbb{P}\left(\max_{1\le k\le \sigma}\,\eta_{k}>(1-\varepsilon)t\right)\\ &=\ I_{1}(t)\ +\ I_{2}(t). \end{align*} Regarding $I_{1}(t)$, notice that $$ \max_{0\le k\le\sigma}S_{k}\ \le\ \sum_{k=1}^{\sigma}\log_{+}M_{k}. $$ Since $\fm\in [-\infty,0)$ entails $\mathbb{E}\log_{+}M<\infty$ and thus, by Wald's identity, $$ \mathbb{E}\left(\max_{0\le k\le\sigma}S_{k}\right)\ \le\ \mathbb{E}\left(\sum_{k=1}^{\sigma}\log_{+}M_{k}\right)\ =\ \mathbb{E}\sigma\,\mathbb{E}\log_{+}M\ <\ \infty. $$ As a consequence, $$ \lim_{t\to\infty}t\,I_{1}(t)\ =\ 0. 
$$ Turning to $I_2(t)$, we obtain \begin{align*} t\,I_{2}(t)\ &\le\ t\,\mathbb{E}\sum_{k=1}^{\sigma}\vec{1}_{\{\eta_{k}>(1-\varepsilon)t\}}\\ &=\ t\,\mathbb{E}\sum_{k\ge 1}\vec{1}_{\{\eta_{k}>(1-\varepsilon)t,\,\sigma\ge k\}}\\ &=\ t\,\mathbb{P}(\eta_{1}>(1-\varepsilon)t)\sum_{k\ge 1}\mathbb{P}(\sigma\ge k)\\ &=\ t\,\mathbb{E}\sigma\,\mathbb{P}(\eta_{1}>(1-\varepsilon)t)\ <\ \infty \end{align*} and thereupon $$ \limsup_{t\to\infty}\,t\,(I_{1}(t)+I_{2}(t))\ =\ \limsup_{t\to\infty}\,t\,I_{2}(t)\ \le\ \frac{s\,\mathbb{E}\sigma}{1-\varepsilon}. $$ Hence \eqref{eq:upper bound} follows upon letting $\varepsilon$ tend to 0. (b) It remains to show the inequality \begin{equation}\label{eq:lower bound} \liminf_{t\to\infty}\,t\,\mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\ \ge\ s\,\mathbb{E}\sigma \end{equation} which is nontrivial only when assuming $s\in (0,\infty]$. To this end observe that $$ \log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}\ =\ \max_{1\le k\le\sigma}(S_{k-1}+\eta_{k})\ \ge\ \max_{1\le k\le\sigma\wedge\tau(c)}\eta_{k}-c $$ for any $c>0$, where $\tau(c):=\inf\{n\ge 1:S_{n}<-c\}$. Since, furthermore, \begin{align*} \mathbb{P}\left(\max_{1\le k\le \sigma\wedge\tau(c)}\,\eta_{k}>t\right)\ &=\ \mathbb{E}\left(\sum_{k=1}^{\sigma\wedge\tau(c)}\vec{1}_{\{\eta_{1}\vee...\eta_{k-1}\le t,\eta_{k}>t\}}\right)\\ &=\ \sum_{k\ge 1}\mathbb{P}\left(\max_{1\le j\le k-1}\eta_{j}\le t,\,\eta_{k}>t,\,\sigma\wedge\tau(c)\ge k\right)\\ &=\ \mathbb{P}(\eta_{1}>t)\sum_{k\ge 1}\mathbb{P}\left(\max_{1\le j\le k-1}\eta_{j}\le t,\,\sigma\wedge\tau(c)\ge k\right), \end{align*} we find \begin{align*} t\,&\mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\\ &\ge\ t\,\mathbb{P}(\eta_{1}>t+c)\sum_{k\ge 1}\mathbb{P}\left(\max_{1\le j\le k-1}\eta_{j}\le t,\,\sigma\wedge\tau(c)\ge k\right)\\ &\underset{t\to\infty}{\longrightarrow}\ s\,\mathbb{E}(\sigma\wedge\tau(c)), \end{align*} and this implies \eqref{eq:lower bound} upon letting $c$ tend to $\infty$, for $\sigma\wedge\tau(c)\uparrow\sigma$.\qed \end{proof} By combining the previous result with a simple stochastic majorization argument, we obtain the following extension. \begin{Lemma}\label{tail lemma 2} Let $s_{*}$ and $s^{*}$ be as defined in \eqref{eq:def of s_* and s^*}. Then \begin{align} \limsup_{t\to\infty}\,t\,\mathbb{P}(\log \widehat{X}_{\sigma}>t)\ &\le\ s^{*}\,\mathbb{E}\sigma\label{565+}\\ \text{and}\quad\liminf_{t\to\infty}\,t\,\mathbb{P}(\log \widehat{X}_{\sigma}>t)\ &\ge\ s_{*}\,\mathbb{E}\sigma.\label{565-} \end{align} \end{Lemma} \begin{proof} For \eqref{565+}, we may assume $s^{*}<\infty$. Recall the notation $F(t)=\mathbb{P}(\log Q\le t)$ and put $\overline{F}:=1-F$. Then define the new distribution function $G$ by $$ \overline{G}(t)\ :=\ \vec{1}_{(-\infty,0]}(t)+\left(\overline{F}(t)\vee\frac{s}{s+t}\right)\vec{1}_{(0,\infty)}(t) $$ for some arbitrary $s>s^{*}\ ($we can even choose $s=s^{*}$ unless $s^{*}=0)$. Since $\overline{G}\ge\overline{F}$, we may construct (on a possibly enlarged probability space) random variables $Q',\,Q_{1}',\,Q_{2}',\ldots$ such that $(M,Q,Q'),\,(M_{1},Q_{1},Q_{1}'),\,(M_{2},Q_{2},Q_{2}'),\ldots$ are iid, the distribution function of $\log Q'$ is $G$, and $Q'\ge Q$, thus $$ \widehat{X}_{\sigma}'\ :=\ \sum_{k=1}^{\sigma}\Pi_{k-1}Q_{k}'\ \ge\ \widehat{X}_{\sigma}. 
$$ On the other hand, $\overline{G}(t)=\mathbb{P}(\log Q'>t)$ satisfies the tail condition \eqref{tail2}, whence, by an appeal to Lemma \ref{tail lemma}, $$ \limsup_{t\to\infty}\,t\,\mathbb{P}(\log \widehat{X}_{\sigma}>t)\ \le\ \lim_{t\to\infty}\,t\,\mathbb{P}(\log \widehat{X}_{\sigma}'>t)\ =\ s\,\mathbb{E}\sigma. $$ This proves \eqref{565+} because $s-s^{*}$ can be chosen arbitrarily small. Assertion \eqref{565-} for $s>0$ is proved in a similar manner. Indeed, pick any $s\in (0,s_{*})\ ($or even $s_{*}$ itself unless $s_{*}=\infty)$ and define $$ \overline{G}(t)\ :=\ \left(\overline{F}(t)\wedge\frac{s}{s+t}\right)\vec{1}_{[0,\infty)}(t) $$ which obviously satisfies $\overline{G}\le\overline{F}$. In the notation from before, we now have $Q'\le Q$ and thus $\widehat{X}_{\sigma}'\le \widehat{X}_{\sigma}$. Since again $\overline{G}(t)=\mathbb{P}(\log Q'>t)$ satisfies the tail condition \eqref{tail2}, we easily arrive at the desired conclusion by another appeal to Lemma \ref{tail lemma}.\qed \end{proof} Our last tail lemma will be crucial for the proof of Theorem \ref{main12}. Given any $0<\gamma<1$, recall that $X_{0}(\gamma)=X_{0}$ and $$ X_{n}(\gamma)\ =\ \gamma X_{n-1}(\gamma)+Q_{n} $$ for $n\ge 1$. Let $\sigma$ be any integrable stopping time for $(X_{n})_{n\ge 0}$ and note that $$ X_{\sigma}(\gamma)\ =\ \gamma^{\sigma}X_{0}+Q(\gamma), $$ where $$ Q(\gamma)\ :=\ \sum_{k=1}^{\sigma}\gamma^{\sigma-k}Q_{k}. $$ More generally, if $(\sigma_{n})_{n\ge 0}$ denotes a renewal stopping sequence for $(X_{n})_{n\ge 0}$ with $\sigma=\sigma_{1}$, then $$ X_{\sigma_{n}}(\gamma)\ =\ \gamma^{\sigma_{n}-\sigma_{n-1}}X_{\sigma_{n-1}}(\gamma)+Q_{n}(\gamma) $$ for $n\ge 1$ with iid $(\gamma^{\sigma_{n}-\sigma_{n-1}},Q_{n}(\gamma))_{n\ge 1}$ and $Q_{\sigma_{1}}(\gamma)=Q(\gamma)$. \begin{Lemma}\label{tail lemma 3} Let $\gamma\in (0,1)$ and $\sigma,Q(\gamma)$ be as just introduced. If $Q$ satisfies condition \eqref{tail2}, then \begin{equation*} \lim_{t\to\infty}t\,\mathbb{P}(\log Q(\gamma)>t)\ =\ s\,\mathbb{E}\sigma, \end{equation*} where the right-hand side equals $0$ if $s=0$, and $\infty$ if $s=\infty$. More generally, with $s_{*},s^{*}$ as defined in \eqref{eq:def of s_* and s^*}, it is always true that \begin{align*} \begin{split} s_{*}\,\mathbb{E}\sigma\ &\le\ \liminf_{t\to\infty}t\,\mathbb{P}(\log Q(\gamma)>t)\\ &\le\ \limsup_{t\to\infty}t\,\mathbb{P}(\log Q(\gamma)>t)\ \le\ s^{*}\,\mathbb{E}\sigma. \end{split} \end{align*} \end{Lemma} \begin{proof} Embarking on the obvious inequality (compare \eqref{Qsigma* bounds}) $$ \max_{1\le k\le\sigma}\gamma^{\sigma-k}Q_{k}\ \le\ Q(\gamma)\ \le\ \sigma\max_{1\le k\le\sigma}\gamma^{\sigma-k}Q_{k}, $$ the arguments are essentially the same and even slightly simpler than those given for the proofs of Lemmata \ref{tail lemma} and \ref{tail lemma 2}. We therefore omit further details.\qed \end{proof} \section{Proof of Theorems \ref{main11} and \ref{main12}}\label{sec:main11 and main12} \begin{proof}[of Theorem \ref{main11}] (a) \emph{Null recurrence}: We keep the notation of the previous sections, in particular $S_{n}=\log\Pi_{n}$ and $\eta_{n}=\log Q_{n}$ for $n\ge 1$. For an arbitrary $c>0$, let $(\sigma_{n})_{n\ge 0}$ be the integrable renewal stopping sequence with $$ \sigma\ =\ \sigma_{1}\ :=\ \inf\{n\ge 1:S_{n}<-c\}. 
$$ Then $$ \big(M_{n}^{*},Q_{n}^{*}\big)\ :=\ \left(\frac{\Pi_{\sigma_{n}}}{\Pi_{\sigma_{n-1}}},\sum_{k=\sigma_{n-1}+1}^{\sigma_{n}}\frac{\Pi_{k-1}}{\Pi_{\sigma_{n-1}}}Q_{k}\right),\quad n\ge 1, $$ are independent copies of $(\Pi_{\sigma},\sum_{k=1}^{\sigma}\Pi_{k-1}Q_{k})$. Put $$ \Pi_{0}^{*}\ :=\ 1\quad\text{and}\quad\Pi_{n}^{*}\ :=\ \prod_{k=1}^{n}M_{k}^{*}\quad\text{for }n\ge 1. $$ By Lemma \ref{tail lemma 2}, $$ \limsup_{t\to\infty}\,t\,\mathbb{P}(\log Q_{1}^{*}>t)\ \le\ s^{*}\,\mathbb{E}\sigma. $$ As already pointed out in the Introduction, validity of \eqref{trivial}, \eqref{trivial2} and \eqref{30} implies that $(X_{n})_{n\ge 0}$ cannot be positive recurrent. We will always assume $X_{0}=\widehat{X}_{0}=0$ hereafter. By Proposition \ref{recur}, the null recurrence of $(X_{n})_{n\ge 0}$ follows if we can show that \begin{equation}\label{eq:to show} \sum_{n\ge 1}\mathbb{P}(X_{n}\le t)\ =\ \sum_{n\ge 1}\mathbb{P}(\widehat{X}_{n}\le t)\ =\ \infty \end{equation} for some $t>0$ or, a fortiori, \begin{equation}\label{eq2:to show} \sum_{n\ge 1}\mathbb{P}(\widehat{X}_{\sigma_{n}}\le t)\ =\ \infty. \end{equation} We note that $\widehat{X}_{\sigma_{n}}=\sum_{k=1}^{\sigma_{n}}\Pi_{k-1}Q_{k}=\sum_{k=1}^{n}\Pi_{k-1}^{*}Q_{k}^{*}$ and pick an arbitrary nondecreasing sequence $0=a_{0}\le a_{1}\le\ldots$ such that $$ a\ :=\ \sum_{n\ge 0}e^{-a_{n}}\ <\ \infty. $$ Fix any $t>0$ so large that \begin{equation*} \mathbb{P}\left(Q_{1}^{*}\le\frac{t}{a}\right)\ >\ 0. \end{equation*} Using $M_{n}^{*}<1$ for all $n\ge 1$, we then infer that \begin{align*} \mathbb{P}\left(\max_{1\le k\le n}e^{a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right)\ &\ge\ \mathbb{P}\left(\max_{1\le k\le n}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right)\\ &\ge\ \mathbb{P}\left(Q_{1}^{*}\le\frac{t}{a}\right)^{n}\ >\ 0. \end{align*} Furthermore, \begin{equation*} \mathbb{P}(\widehat{X}_{\sigma_{n}}\le t)\ =\ \mathbb{P}\left(\sum_{k=1}^{n}\Pi_{k-1}^{*}Q_{k}^{*}\le t\right)\ \ge\ \mathbb{P}\left(\max_{1\le k\le n}e^{\,a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right), \end{equation*} because $\max_{1\le k\le n}e^{\,a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}$ implies $$ \sum_{k=1}^{n}\Pi_{k-1}^{*}Q_{k}^{*}\ \le\ \frac{t}{a}\sum_{k=1}^{n}e^{-a_{k-1}}\ \le\ t. $$ Consequently, \begin{equation}\label{eq3:to show} \sum_{n\ge 1}\mathbb{P}\left(\max_{1\le k\le n}e^{a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right)\ =\ \infty \end{equation} implies \eqref{eq2:to show}, and thus \eqref{eq:to show}. By choice of the $\sigma_{n}$, we have $\log\Pi_{k}^{*}\le -ck$ a.s. Putting $x=\log t-\log a$, we have with $a_{k}=o(k)$ as $k\to\infty\ ($choose e.g. $a_{k}=2\,\log(1+k))$ \begin{align*} \sum_{n\ge 1}\,&\mathbb{P}\left(\max_{1\le k\le n}e^{a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right)\\ &\ge\ \sum_{n\ge 1}\mathbb{P}\left(\max_{1\le k\le n}\big(-c(k-1)+a_{k-1}+\log Q_{k}^{*}\big)\le x\right)\\ &=\ \sum_{n\ge 1}\prod_{k=1}^{n}\mathbb{P}\big(\log Q_{1}^{*}\le x-a_{k-1}+c(k-1)\big). \end{align*} Defining $b_{n}$ as the $n$th summand in the previous sum and writing $\sigma=\sigma(c)$ to show the dependence on $c$, Lemma \ref{tail lemma 2} provides us with $$ \liminf_{n\to\infty}\,n\left(\frac{b_{n+1}}{b_{n}}-1\right)\ \ge\ -s^{*}\,\frac{\mathbb{E}\sigma(c)}{c}, $$ hence Raabe's test entails \eqref{eq3:to show} if we can fix $c>0$ such that \begin{equation}\label{eq4:to show} s^{*}\,\frac{\mathbb{E}\sigma(c)}{c}\ <\ 1. \end{equation} Plainly, the latter holds true for any $c>0$ if $s^{*}=0$.
But if $s^{*}\in (0,\infty)$, then use the elementary renewal theorem to infer (also in the case $\fm=-\infty$) $$ \lim_{c\to\infty}\,s^{*}\,\frac{\mathbb{E}\sigma(c)}{c}\ =\ \frac{s^{*}}{-\fm}\ <\ 1. $$ Hence, \eqref{eq4:to show} follows by our assumption $s^{*}<-\fm$.\qed \noindent (b) \emph{Transience}: By Proposition \ref{transient}, it must be shown that $$ \sum_{n\ge 0}\mathbb{P}(\widehat{X}_{n}\le t)\ <\ \infty $$ for any $t>0$. We point out first that it suffices to show \begin{equation}\label{reduced sum finite} \sum_{n\ge 0}\mathbb{P}(\widehat{X}_{\sigma_{n}}\le t)\ <\ \infty \end{equation} for some integrable renewal stopping sequence $(\sigma_{n})_{n\ge 0}$. Namely, since $(\widehat{X}_{n})_{n\ge 0}$ is nondecreasing, it follows that \begin{align*} \sum_{n\ge 0}\mathbb{P}(\widehat{X}_{n}\le t)\ &=\ \sum_{n\ge 0}\mathbb{E}\left(\sum_{k=\sigma_{n}}^{\sigma_{n+1}-1}\vec{1}_{\{\widehat{X}_{k}\le t\}}\right)\\ &\le\ \sum_{n\ge 0}\mathbb{E}\big(\sigma_{n+1}-\sigma_{n}\big)\vec{1}_{\{\widehat{X}_{\sigma_{n}}\le t\}}\\ &=\ \mathbb{E}\sigma\sum_{n\ge 0}\mathbb{P}(\widehat{X}_{\sigma_{n}}\le t), \end{align*} where we have used that $\sigma_{n+1}-\sigma_{n}$ is independent of $\widehat{X}_{\sigma_{n}}$ for each $n\ge 0$. Choosing $(\sigma_{n})_{n\ge 0}=(\sigma_{n}^{>}(c))_{n\ge 0}$ as defined before Lemma \ref{lem:bounding lemma} for an arbitrary $c\in (-\fm,s_{*})$, part (b) of this lemma provides us with \begin{align}\label{wh(X)_{n}>=wh(Y)_{n}} \widehat{X}_{\sigma_{n}}\ \ge\ \widehat{Y}_{n}\ =\ \sum_{k=1}^{n}\gamma^{\sigma_{k-1}}Q_{k}^{*}\quad\text{a.s.} \end{align} for all $n\ge 0$, where the $Q_{n}^{*}$ are formally defined as in (a) for the $\sigma_{n}$ given here and the $\widehat{Y}_{n}$ are the backward iterations of the Markov chain defined by the RDE $$ Y_{n}\ =\ \gamma^{\sigma_{n}-\sigma_{n-1}}Y_{n-1}+Q_{n}^{*},\quad n\ge 1. $$ By Lemma \ref{tail lemma 2}, $$ \liminf_{t\to\infty}t\,\mathbb{P}(\log Q^{*}>t)\ \ge\ s_{*}\mathbb{E}\sigma. $$ Let $(Q_{n}')_{n\ge 1}$ be a further sequence of iid random variables with generic copy $Q'$, independent of all other occurring random variables and such that \begin{equation}\label{eq:tail Q'} \lim_{t\to\infty}t\,\mathbb{P}(\log Q'>t)\ =:\ s\ \in\ (c,s_{*}). \end{equation} Put $\gamma:=e^{-c}$. Then Kellerer's result (Proposition \ref{Kellerer's result}) implies the transience of the Markov chain $X_{n}'(\gamma)=\gamma X_{n-1}'(\gamma)+Q_{n}'$, $n\ge 1$, and thus also of the subchain $(X_{\sigma_{n}}'(\gamma))_{n\ge 0}$. Since $\widehat{X}_{\sigma_{n}}'(\gamma)\ =\ \sum_{k=1}^{n}\gamma^{\sigma_{k-1}}\widehat{Q}_{k}$ with $$ \widehat{Q}_{n}\ =\ \sum_{k=\sigma_{n-1}+1}^{\sigma_{n}}\gamma^{k-\sigma_{n-1}-1}Q_{k}' $$ for $n\ge 1$ and since, by \eqref{eq:tail Q'} and Lemma \ref{tail lemma}, $$ \lim_{t\to\infty}t\,\mathbb{P}(\log\widehat{Q}>t)\ =\ s\,\mathbb{E}\sigma, $$ thus $\mathbb{P}(Q^{*}>t)\ge\mathbb{P}(\widehat{Q}>t)$ for all sufficiently large $t$, we now infer by invoking our Comparison Lemma \ref{comparison lemma} that the transience of $(X_{\sigma_{n}}'(\gamma))_{n\ge 0}$ entails the transience of $(Y_{n})_{n\ge 0}$ given above and thus $$ \sum_{n\ge 0}\mathbb{P}(\widehat{Y}_{n}\le t)\ <\ \infty $$ for all $t>0$. Finally, use \eqref{wh(X)_{n}>=wh(Y)_{n}} to arrive at \eqref{reduced sum finite}. This completes the proof of part (b).\qed \end{proof} \begin{proof}[of Theorem \ref{main12}] Fix $c>s^{*}$ and put as before $\gamma=e^{-c}$. Since $S_{n}=\log\Pi_{n}\to-\infty$ a.s. 
and $\fm^{+}=\fm^{-}=\infty$, we have $$ \lim_{n\to\infty}\frac{S_{n}}{n}\ =\ \lim_{n\to\infty}\frac{S_{n}+an}{n}\ =\ -\infty\quad\text{a.s. for all }a\in\mathbb{R} $$ due to Kesten's trichotomy (see e.g. \cite[p.~3]{KesMal:96}) and hence in particular $S_{n}+cn\to -\infty$ a.s. As a consequence, the sequence $(\sigma_{n})_{n\ge 0}=(\sigma_{n}^{<}(c))_{n\ge 0}$ as defined before Lemma \ref{lem:bounding lemma} is an integrable renewal stopping sequence for $(X_{n})_{n\ge 0}$. Part (a) of this lemma implies $$ X_{\sigma_{n}}\ \le\ X_{\sigma_{n}}(\gamma)\ =\ \gamma^{\sigma_{n}-\sigma_{n-1}}X_{\sigma_{n-1}}(\gamma)+Q_{n}(\gamma)\quad\text{a.s.} $$ for all $n\ge 0$, where $Q_{n}(\gamma)=\sum_{k=\sigma_{n-1}+1}^{\sigma_{n}}\gamma^{\sigma_{n}-k}Q_{k}$ for $n\ge 1$. Hence it is enough to prove the null recurrence of $(X_{\sigma_{n}}(\gamma))_{n\ge 0}$. To this end, note first that $\fm(\gamma):=\mathbb{E}\log\gamma^{\sigma_{1}}=-c\,\mathbb{E}\sigma_{1}\in (-\infty,0)$. Moreover, Lemma \ref{tail lemma 3} provides us with $$ \limsup_{t\to\infty}t\,\mathbb{P}(\log Q_{1}(\gamma)>t)\ \le\ s^{*}\mathbb{E}\sigma_{1}\ <\ c\,\mathbb{E}\sigma_{1}\ =\ -\fm(\gamma), $$ and so the null recurrence of $(X_{\sigma_{n}}(\gamma))_{n\ge 0}$ follows from Theorem \ref{main11}.\qed \end{proof} \section{On the structure of the attractor set}\label{attr} The purpose of this section is to investigate the structure of the attractor set $L$ for the Markov chain $(X_{n})_{n\ge 0}$ defined by \eqref{chain}. Unlike before, we assume hereafter that $(X_{n})_{n\ge 0}$ is locally contractive and recurrent, the latter being an inevitable assumption for $L\ne\emptyset$. To exclude the ``trivial case'' (as explained in the Introduction) we assume $\mathbb{P}(M=0)=0$. Recall from the paragraph preceding Proposition \ref{recur} that $L$ consists of all accumulation points of $(X_{n}^{x}(\omega))_{n\ge 0}$ which turns out to be the same for all $x\in\mathbb{R}_{+}$ and $\mathbb{P}$-almost all $\omega$. As already mentioned in the Introduction, $(X_{n})_{n\ge 0}$ possesses a unique invariant distribution, say $\nu$, if \eqref{trivial2} and \eqref{33} hold. The attractor set then coincides with the support of $\nu$. In the positive recurrent case the structure of $L$ was analyzed in \cite{BurDamMik:16}. According to Theorem 2.5.5 from there, $L$ necessarily equals a half-line $[a,\infty)$ for some $a\ge 0$ if it is unbounded. If $L$ is bounded, no general results concerning local properties of $L$ are known. It may equally well be a fractal (for instance, a Cantor set) or an interval. Below we consider both the positive and null recurrent case. The second one is implied by the hypotheses of Theorem \ref{main11}(a), but also holds when $\mathbb{E} \log M = 0$ (see \cite{BabBouElie:97,Benda:98b} for more details). For $(m,q)\in\mathbb{R}_{+}^{2}$, let $g$ be the affine transformation of $\mathbb{R}$ defined by \begin{equation*} g(x)=mx+q\,,\quad x\in\mathbb{R}. \end{equation*} We will write $g=(m,q)$, thereby identifying $g$ with $(m,q)$. The affine transformations constitute a group ${\sf Aff}(\mathbb{R})$ with identity $(1,0)$ and multiplication defined by $$g_{1}g_{2}=(m_{1},q_{1})\,(m_{2},q_{2})=(m_{1}m_{2},q_{1}+m_{1}q_{2})$$ for $g_{i}=(m_{i}, q_{i})$, $i=1,2$. The inverse of $g=(m,q)$ is given by $g^{-1}=(m^{-1},-m^{-1}q)$.
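Indeed, with this multiplication rule one checks directly that $$ (m,q)\,(m^{-1},-m^{-1}q)\ =\ \big(mm^{-1},\,q+m\cdot(-m^{-1}q)\big)\ =\ (1,0), $$ in accordance with the stated identity element.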
Assuming $m\ne1$, let $x_{0} = x_{0}(g)=q/(1-m)$ be the unique fixed point of $g$, that is the unique solution to the equation $g(x)=x$. Then $$ g(x)\ =\ m \,(x-x_{0})+x_{0}\,,\quad x\in\mathbb{R}$$ and similarly \begin{equation}\label{eq:x0} g^{n}(x)\ =\ m^{n}x+q_{n}= m^{n}\, (x-x_{0}) + x_{0}\,,\quad x\in \mathbb{R},\ n\ge 1, \end{equation} where $q_{n}=\sum _{i=0}^{n-1}m^i\,q$. Formula \eqref{eq:x0} tells us that, modulo $x_{0}$, the action of $g$ is either contractive or expanding depending on whether $m<1$ or $m>1$, respectively. We interpret $\mu$, the distribution of $(M,Q)$, as a probability measure on ${\sf Aff}(\mathbb{R})$ hereafter and let ${\rm supp}\,\mu$ denote its support. Consider the subsemigroup $T$ of ${\sf Aff}(\mathbb{R})$ generated by ${\rm supp}\,\mu$, i.e. $$ T\ :=\ \{ g_{1}\cdot \ldots \cdot g_{n}: g_{i} \in {\rm supp}\,\mu,\ i=1,\ldots,n,\,n\ge 1\}\,, $$ and let $\overline T$ be its closure. A set $S\subset \mathbb{R}$ is said to be {\em $\overline T$-invariant} if for every $g\in \overline T$ and $x\in S$, $g(x)=mx+q \in S$. The following result was stated in a slightly different setting as Proposition 2.5.3 in \cite{BurDamMik:16} and can be proved by the same arguments after minor changes. \begin{Lemma}\label{lem:attractor} Let $(X_{n})_{n\ge 0}$ be locally contractive and recurrent. Then $L=\overline S_{0}$, where $$ S_{0}\ :=\ \{(1-m)^{-1}q: g=(m,q)\in T \,, m<1\}. $$ Moreover, $L$ equals the smallest $\overline T$-invariant subset of $\mathbb{R}$. \end{Lemma} For positive recurrent $(X_{n})_{n\ge 0}$, we have already pointed out that $L$, if unbounded, must be a half-line $[a,\infty)$ ($a\ge 0$). The subsequent theorem provides the extension of this fact to any locally contractive and recurrent $(X_{n})_{n\ge 0}$. \begin{Theorem} Let $(X_{n})_{n\ge 0}$ be locally contractive and recurrent with unbounded attractor set $L$. If $\mathbb{P}(M=0)=0$, then $L=[a,\infty)$ for some $a\ge 0$. \end{Theorem} \begin{proof} By Lemma \ref{lem:attractor}, the set $L$ is uniquely determined by ${\rm supp}\,\mu$ and does not depend on the values $\mu(A)$ for any particular sets $A$. Consequently, any modification of $\mu$ with the same support leaves $L$ invariant. We will use this observation and define a tilting $\widetilde\mu$ of $\mu$ of the form $$ \widetilde\mu({\rm d}m,{\rm d}q)\ =\ f(m)h(q)\,\mu({\rm d}m,{\rm d}q) $$ for suitable positive functions $f,h$ such that, if $(M,Q)$ has law $\widetilde\mu$, then the corresponding Markov chain $(\widetilde X_{n})_{n\ge 0}$ is positive recurrent with unique invariant distribution $\widetilde\nu$. We thus conclude ${\rm supp}\,\widetilde\nu = L$ and thereupon the claim $L=[a,\infty)$ if $L$ is unbounded. Put \begin{align*} f(m)\ &:=\ \begin{cases} \displaystyle\frac{c_{0}}{|\log m|},&\text{if }0<m<\displaystyle\frac{1}{e},\\[1.5mm] c_{0},&\text{if }\displaystyle\frac{1}{e}\le m<1,\\[1.5mm] c_{0}\,c_{1},&\text{if }1\le m<e,\\[1.5mm] \displaystyle\frac{c_{0}\,c_{1}}{\log m},&\text{if }m\ge e, \end{cases} \shortintertext{and} h(q)\ &:=\ \begin{cases} c_{2},&\text{if }0\le q<e,\\[1.5mm] \displaystyle\frac{c_{2}}{\log q},&\text{if }q\ge e, \end{cases} \end{align*} and fix $c_{0},c_{1},c_{2}>0$ such that $$ \int f(m) h(q)\ \mu({\rm d}m,{\rm d}q)\ =\ 1.
$$ Observe that, if $\mathbb{P}_{\widetilde\mu}$ is such that $(M,Q)$ has law $\widetilde\mu$ under this probability measure, then \begin{align}\label{int} \begin{split} \mathbb{E}_{\widetilde\mu}\log M\ &=\ c_{0}\left[-\int_{(0,1]\times\mathbb{R}_{+}}(1\wedge|\log m|)\, h(q)\ \mu({\rm d}m,{\rm d}q)\right.\\ &\hspace{2cm}+\ c_{1}\left.\int_{(1,\infty)\times\mathbb{R}_{+}}(1\wedge\log m)\,h(q)\ \mu({\rm d}m,{\rm d}q)\right], \end{split} \end{align} and from this it is readily seen that we can specify $c_{0},c_{1}$ further so as to have $$ \mathbb{E}_{\widetilde\mu}\log M\ <\ 0. $$ Regarding $\mathbb{E}_{\widetilde\mu}\log^{+}Q$, we find \begin{align*} \mathbb{E}_{\widetilde\mu} \log^{+}Q\ &=\ c_{2}\left[\int_{\mathbb{R}_{+}\times [1,e]}\log q\,f(m)\ \mu({\rm d}m,{\rm d}q)\ +\ \int_{\mathbb{R}_+\times (e,\infty)} f(m)\ \mu({\rm d}m,{\rm d}q)\right]\ <\ \infty. \end{align*} Hence, if $(M,Q)$ has law $\widetilde\mu$, then the corresponding Markov chain $(\widetilde X_{n})_{n\ge 0}$ defined by \eqref{chain} is indeed positive recurrent. This completes the proof of the theorem.\qed \end{proof} The next lemma provides some conditions on $\mu$ that are easily checked and sufficient for $L$ to be unbounded. \begin{Lemma}\label{lem: 3.4} If $\mathbb{P}(M=0)=0$, then each of the following conditions on the law of $(M,Q)$ implies that $L$ is unbounded. \begin{description}[(C2)]\itemsep2pt \item[(C1)] The law of $Q$ has unbounded support. \item[(C2)] $\mathbb{P}(M>1)>0$ and $\mathbb{P}(Q=0)<1$. \end{description} \end{Lemma} \begin{proof} Assume first (C1), put $\beta:=\sup\{x: x\in L\}$ and recall from Lemma \ref{lem:attractor} that $L$ is invariant under the action of ${\rm supp}\,\mu$, i.e., if $(m,q)\in {\rm supp}\,\mu$ and $x\in L$, then $mx+q\in L$. In particular, \begin{equation}\label{eq:5} m\beta+q\,\le\,\beta\quad\text{ for any } (m,q)\in {\rm supp}\,\mu. \end{equation} Hence, if $\beta>0$, we have $\beta\ge m\beta + q \ge q$ and conclude $\beta=\infty$, for $q$ can be chosen arbitrarily large. Assuming now (C2), pick $g=(m,q)\in {\rm supp}\,\mu$ such that $m>1$. Notice that $x=x_{0}(g)=q/(1-m)$, the unique fixed point of $g$, is negative or zero because $m>1$. Since, under our hypothesis, the attractor set consists of at least two points, one can choose some positive $y\in L$. Using \eqref{eq:x0}, we then infer $$ g^{n}(y)\ =\ m^{n}(y-x)\,+\,x\ \to\ \infty $$ as $n\to\infty$ which completes the proof.\qed \end{proof} The assumptions of Lemma \ref{lem: 3.4} are not optimal. Even if $\mathbb{P}(M<1)=1$ and the support of the distribution of $Q$ is bounded, the attractor set may be unbounded, as demonstrated by the next lemma. \begin{Lemma} Assume that $\mathbb{P}(M<1)=1$. Then the attractor set $L$ is bounded if, and only if, the set $$ S_{1}\ =\ \big\{x_{0}=x_{0}(g):g\in {\rm supp\,}\mu\big\} $$ is bounded or, equivalently, $Q/(1-M)$ is a.s. bounded. \end{Lemma} \begin{proof} Assuming that $S_{1}$ is bounded, denote by $a$ and $b$ its infimum and supremum, respectively. Since the closed interval $[a,b]$ is obviously $\overline T$-invariant, it must contain $L$ by Lemma \ref{lem:attractor} which implies that $L$ is bounded. If $S_{1}$ is unbounded, then $S_{1}\subset S_{0}$ implies that $S_{0}$ and thus also $L=\overline S_{0}$ is unbounded by another appeal to Lemma \ref{lem:attractor}. \qed \end{proof} Finally, we turn to the case when the attractor set $L$ is bounded.
As already mentioned, the local structure of $L$ cannot generally be described precisely. If $\mu$ is supported by $(a,0)$ and $(a,1-a)$ for some $0<a<1/2$, then $L\subset [0,1]$ equals the Cantor set obtained by initially removing $(a,1-a)$ from $[0,1]$ and successive self-similar repetitions of this action for the remaining intervals (see also \cite[Remark 7]{BurtonRoesler:95}). So the Cantor ternary set is obtained if $a=1/3$. On the other hand, we have the following result. \begin{Lemma} For $\alpha,\beta<1$ with $\alpha+\beta\ge 1$ suppose that $(\alpha,q_{\alpha}), (\beta,q_{\beta})\in {\rm supp }\,\mu$ and further $x_{\alpha}:=q_{\alpha}/(1-\alpha)\le q_{\beta}/(1-\beta) =:x_{\beta}$. Then the interval $[x_{\alpha},x_{\beta}]$ is contained in $L$. \end{Lemma} \begin{proof} W.l.o.g. we assume that $x_{\alpha}=0$ and $x_{\beta}=1$ so that the points in ${\rm supp}\,\mu$ are $f_{\alpha}:=(\alpha,0)$ and $f_{\beta}:=(\beta,1-\beta)$ rather than $(\alpha,q_{\alpha}), (\beta,q_{\beta})$ and $[0,1]\subset L$ must be verified. Pick any $x\in (0,1)$. Let $U$ be the subsemigroup of ${\sf Aff}(\mathbb{R})$ generated by $f_{\alpha}$ and $f_{\beta}$. To prove that $x\in L$, it is sufficient by Lemma \ref{lem:attractor} to find a sequence $(g_{n})_{n\ge 1}$ in $U$ such that $x$ is an accumulation point of $(g_{n}(0))_{n\ge 1}$. We construct this sequence inductively. Observe first that $\alpha+\beta \ge 1$ implies $$ x\,\in\,(0,1)\,\subset\, [0,\alpha] \cup [1-\beta,1]. $$ If $x$ is an element of $[0,\alpha]$, take $g_{1} = f_{\alpha}$, otherwise take $g_{1}=f_{\beta}$. In both cases, $$ x\,\in\, [g_{1}(0), g_{1}(1)]\quad\text{and} \quad |g_{1}(1) -g_{1}(0)|\,\le\,\alpha\vee\beta. $$ Assume we have found $g_{n}=(a_{n},b_{n})$ such that $$ x\,\in\,[g_{n}(0), g_{n}(1)]\quad\text{and}\quad |g_{n}(1) -g_{n}(0)|\,=\, a_{n}\,\le\,\big(\alpha \vee \beta\big)^n. $$ Using again $\alpha+\beta\ge 1$, we have \begin{align*} x\ &\in\ [g_{n}(0), g_{n}(1)]\ =\ [b_{n}, a_{n} + b_{n}]\\ &\subset\ [b_{n}, \alpha a_{n} + b_{n}] \cup [(1-\beta)a_{n} + b_{n}, a_{n} + b_{n}]\\ &=\ [g_{n}f_{\alpha}(0), g_{n}f_{\alpha}(1)] \cup [g_{n}f_{\beta}(0), g_{n}f_{\beta}(1)]. \end{align*} Thus $x$ must belong to one of these intervals. If $x\in [g_{n}f_{\alpha}(0), g_{n}f_{\alpha}(1)]$, put $g_{n+1}=g_{n} f_{\alpha}$, otherwise put $g_{n+1}=g_{n} f_{\beta}$. In both cases, $$ x\,\in\,[g_{n+1}(0), g_{n+1}(1)]\quad\text{and} \quad |g_{n+1}(1)-g_{n+1}(0)|\,=\,a_{n} (\alpha \vee \beta)\,\le\,\big(\alpha \vee \beta\big)^{n+1}. $$ Hence, $x$ is indeed an accumulation point of the sequence $(g_{n}(0))_{n\ge 1}$ and therefore an element of $L$.\qed \end{proof} \footnotesize \noindent {\bf Acknowledgements.} The authors wish to thank two anonymous referees for various helpful remarks that helped to improve the presentation and for bringing reference \cite{DenKorWach:16} to our attention. G. Alsmeyer was partially supported by the Deutsche Forschungsgemeinschaft (SFB 878) "Geometry, Groups and Actions". D. Buraczewski was partially supported by the National Science Centre, Poland (Sonata Bis, grant number DEC-2014/14/E/ST1/00588). Part of this work was done while A.~Iksanov was visiting M\"unster in January, February and July 2015, 2016. He gratefully acknowledges hospitality and financial support. \end{document}
\begin{document} \title{Quantum projective measurements and the CHSH inequality in Isabelle/HOL} \author{Mnacho Echenim and Mehdi Mhalla\\ {Universit\'e Grenoble Alpes, Grenoble INP}\\ {CNRS, LIG, F-38000 Grenoble, France} } \date{March 2021} \maketitle \begin{abstract} We present a formalization in Isabelle/HOL of quantum projective measurements, a class of measurements involving orthogonal projectors that is frequently used in quantum computing. We also formalize the CHSH inequality, a result that holds on arbitrary probability spaces, which can be used to disprove the existence of a local hidden-variable theory for quantum mechanics. \end{abstract} \section{Introduction} One of the (many) counterintuitive aspects of quantum mechanics involves the measurement postulates, which state that: \begin{itemize} \item Measurements of equally prepared systems such as photons or electrons do not always output the same value, which implies that it is only possible to make statistical predictions about properties of these objects; \item Once the measurement of an object has produced an outcome $\lambda$, all subsequent measurements of the same object also produce the outcome $\lambda$, a phenomenon known as the \emph{collapse of the wave function}. \end{itemize} These postulates are now widely accepted, but this has not always been the case, as evidenced by the well-known EPR paradox, named after Einstein, Podolsky and Rosen, who suggested in a paper published in 1935 that quantum mechanics was an incomplete theory \cite{EPR}. In their thought experiment, the authors considered two entangled particles that are separated. When one of them is measured, the collapse of the wave function guarantees that the measurement of the other particle will produce the same result. Because instantaneous transmission of information is impossible, the authors viewed this phenomenon as evidence that there existed hidden variables that predetermined the measurement outcomes but were not accounted for by quantum mechanics. The search for so-called \emph{local hidden-variable theories} for quantum mechanics came to a halt in 1964 when Bell proved \cite{Bell64} that these theories entailed upper bounds on the correlations of measurement outcomes of entangled particles, and that these upper bounds are violated by the correlations predicted by quantum mechanics; a violation that has since then been experimentally confirmed \cite{aspect}. In this paper we present the formalization in Isabelle/HOL of \emph{projective} measurements, also known as von Neumann measurements, along with the related notion of \emph{observables}. Projective measurements involve complete sets of orthogonal projectors that are used to compute the probabilities of measurement outcomes and to determine the \emph{state collapse} after the measurement. Although there are more general forms of measurements in quantum mechanics, projective measurements are quite common: for example, textbooks on quantum mechanics frequently mention ``measuring a state in a given orthonormal basis'', an operation that means performing a projective measurement with the projectors on the elements of the orthonormal basis. These measurements are especially used in quantum computing and quantum information, and are thus of particular interest to computer scientists. Next we formalize the \emph{CHSH inequality}, named after Clauser, Horne, Shimony and Holt \cite{CHSH}.
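In one common probabilistic formulation (the precise statement that is formalized may differ in the placement of signs), the inequality asserts that for random variables $A,A',B,B'$ defined on a common probability space and taking values in $[-1,1]$, \[ \mathbb{E}(AB)\ +\ \mathbb{E}(AB')\ +\ \mathbb{E}(A'B)\ -\ \mathbb{E}(A'B')\ \le\ 2, \] a bound that follows from the pointwise estimate $A(B+B')+A'(B-B')\le |B+B'|+|B-B'|\le 2$.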
This inequality involving expectations of random variables can be used to prove Bell's theorem that quantum mechanics cannot be formalized in a probabilistic setting involving local hidden variables, thus showing the intrinsically statistical nature of quantum mechanics. \paragraph{Related work.} There are several lines of ongoing research on the use of formal tools for the analysis of quantum algorithms and protocols, such as an extension of attack trees with probabilities \cite{Kammuller19}, or the development of dedicated quantum Hoare logics \cite{Unruh_2019,liu19}. Approaches closer to ours involve the formalization of quantum notions and algorithms in proof assistants including Coq \cite{Boender_2015,Coq_Quantum} and Isabelle \cite{isa-dirac}. In this paper we extend the effort started in \cite{isa-dirac} in two ways. First, our formalization is based on so-called \emph{density operators} rather than pure quantum states. Although both notions are equivalent, density operators are a more convenient way of representing quantum systems that are in a mixed state, i.e., in one of several pure quantum states with associated probabilities that sum to 1. They also permit to obtain more general and natural statements on quantum mechanics. Second, although notions related to measurements are formalized in \cite{isa-dirac}, these are specific and in particular, only involve measurements in the standard basis. We develop the full formalization of projective measurements and observables in this paper. To the best of our knowledge, no such formalization is available in any proof assistant. Our formalization is available on the \emph{Archive of Formal Proofs}, at the address \url{https://www.isa-afp.org/entries/Projective_Measurements.html}. It is decomposed in three parts: \begin{inparaenum}[(i)]\item A formalization of necessary notions from Linear Algebra. We heavily rely on the results developed in \cite{liu19}, especially those involving complex matrices and the decomposition of Hermitian matrices for this part, as well as the \emph{types-to-sets} transfer tool \cite{types-sets} to use general results on types in our setting. \item The formalization of projective measurements and their relationship to observables. In particular we show how to construct a projective measurement starting from an observable and how to recover the observable starting from the projective measurement. \item The formalization of the CHSH inequality. Along with the proof of the inequality on an arbitrary probability space, we prove that assuming the existence of a local hidden variable to explain the outcome of measurements leads to a contradiction. \end{inparaenum} \section{Preliminaries} We review the formalization of probability theory in Isabelle, as well as basic notions from linear algebra and quantum mechanics. A more detailed presentation on this last topic can be found in, e.g., \cite{mathQuantum, nielsen-book}. As we will only formalize quantum notions in finite dimension, we present the standard general definitions and illustrate them in the finite dimensional case. \subsection{Probability theory in Isabelle} We begin by briefly presenting the syntax of the interactive theorem prover Isabelle/HOL; this tool can be downloaded at \url{https://isabelle.in.tum.de/}, along with tutorials and documentations. Additional material on Isabelle can be found in \cite{ConcreteSemantics}. 
This prover is based on higher-order logic; terms are built using types that can be: \begin{itemize} \item simple types, denoted with the Greek letters $\alpha, \beta,\ldots$ \item types obtained from type constructors, represented in postfix notation (e.g. the type $\alpha$ \texttt{set} denotes the type of sets containing elements of type $\alpha$), or in infix notation (e.g., the type $\alpha\rightarrow \beta$ denotes the type of total functions from $\alpha$ to $\beta$). \end{itemize} Functions are curried, and function application is written without parentheses. Anonymous functions are represented with the lambda notation: the function $x\mapsto t$ is denoted by $\lambda x.\,t$. We will use mathematical notations for standard terms; for example, the set of reals will be denoted by $\mathbb{R}$ and the set of booleans by $\mathbb{B}$. The application of function $f$ to argument $x$ may be written $f\ x$, $f(x)$ or $f_x$ for readability. \newcommand{\texttt{carrier-mat}}{\texttt{carrier-mat}} \newcommand{\texttt{mat}}{\texttt{mat}} A recent tool \cite{types-sets} permits to transfer type-based statements which hold on an entire type universe to their set-based counterparts. This tool is particularly useful to apply lemmata in contexts where the assumptions only hold on a strict subset of the considered universe. For example generalized summation over sets is defined for types that represent abelian semigroups with a neutral element. Such an algebraic structure is straightforward to define on any set of matrices that all have the same dimensions. In our formalization, we define a \emph{locale} \cite{Ballarin14} for such a set of matrices in which the number of rows and columns is fixed: \[\begin{array}{l} \keyw{locale}\ \texttt{fixed-carrier-mat}\ =\ \keyw{fixes}\ \texttt{fc-mats}\ \texttt{dimR}\ \texttt{dimC}\\ \quad\keyw{assumes}\ \texttt{fc-mats}\ =\ \texttt{carrier-mat}\ \texttt{dimR}\ \texttt{dimC} \end{array}\] The set of matrices with $n$ rows and $m$ columns is represented in Isabelle by $\texttt{carrier-mat}\ n\ m$, all matrices in {\texttt{fc-mats}} thus admit $\texttt{dimR}$ rows and $\texttt{dimC}$ columns. After proving that $\texttt{fc-mats}$ along with the standard addition on matrices and the matrices with $\texttt{dimR}$ rows and $\texttt{dimC}$ columns consisting only of zeroes is an abelian semigroup with a neutral element, we can define a generalized summation of matrices on this locale: \[\begin{array}{lcl} \texttt{sum-mat} & :: & (\alpha \rightarrow \beta\ \texttt{mat}) \rightarrow \alpha\ \texttt{set}\ \rightarrow \beta\ \texttt{mat}\\ \texttt{sum-mat}\ \mathcal{A}\ I &= & \texttt{sum-with}\ (+)\ \mathbf{0}\ \mathcal{A}\ I \end{array}\] The types-to-sets transfer tool then permits with no effort to transfer theorems that hold on abelian semigroups, and especially those involving sums, to this locale. A large part of the formalization of measure and probability theory in Isabelle was carried out in \cite{hoelzl2012thesis} and is included in Isabelle's distribution. We briefly recap some of the notions that will be used throughout the paper and the way they are formalized in Isabelle. We assume the reader has knowledge of fundamental concepts of measure and probability theory; any missing notions can be found in \cite{Durrett} for example. Probability spaces are particular \emph{measure spaces}. A measure space over a set $\Omega$ consists of a function $\mu$ that associates a nonnegative number {or $+\infty$} to some subsets of $\Omega$. 
The subsets of $\Omega$ that can be measured are closed under complement and countable unions and make up a \emph{$\sigma$-algebra}. In Isabelle the measure type with elements of type $\alpha$ is denoted by $\alpha\ \texttt{measure}$. \newcommand{\texttt{measurable}}{\texttt{measurable}} A function between two measurable spaces is \emph{measurable} if the preimage of every measurable set is measurable. In Isabelle, sets of measurable functions are defined as follows: \[\begin{array}{lcl} \texttt{measurable} & :: & \alpha\,\texttt{measure} \rightarrow \beta\,\texttt{measure}\rightarrow \left(\alpha \rightarrow \beta\right) \texttt{set}\\ \texttt{measurable}\ \mathcal{M}\ \mathcal{N} & =& \setof{f: \Omega_\mathcal{M} \rightarrow \Omega_\mathcal{N}}{\forall A\in \mathcal{A}_\mathcal{N}.\, f^{-1}(A) \cap \Omega_\mathcal{M} \in \mathcal{A}_\mathcal{M}} \end{array}\] Measurable functions that map the elements of a measurable space into real numbers, such as random variables which are defined below, are measurable on Borel sets: \[\begin{array}{l} \keyw{abbreviation}\ \texttt{borel-measurable}\ \mathcal{M}\ \equiv\ \texttt{measurable}\ \mathcal{M}\ \texttt{borel} \end{array}\] \newcommand{\texttt{prob-space}}{\texttt{prob-space}} \newcommand{\texttt{finite-measure}}{\texttt{finite-measure}} \newcommand{\textsc{AE}}{\textsc{AE}} Probability measures are measure spaces on which the measure of $\Omega$ is finite and equal to $1$. In Isabelle, they are defined in a locale. \[\begin{array}{l} \keyw{locale}\ \texttt{prob-space} = \texttt{finite-measure}\ +\ \keyw{assumes}\ \mu_\mathcal{M}(\Omega_\mathcal{M})\ =\ 1 \end{array}\] A \emph{random variable} on a probability space $\mathcal{M}$ is a measurable function with domain $\Omega_\mathcal{M}$. The average value of a random variable $f$ is called its \emph{expectation}; it is denoted by\footnote{The superscript is omitted when there is no confusion.} $\mathbb{E}^\mathcal{M}[f]$, and defined by $\mathbb{E}^\mathcal{M}[f] \isdef \int_{\Omega_\mathcal{M}}f\mathrm{d}\mu_\mathcal{M}$. In what follows, we will consider properties that hold \emph{almost surely} (or \emph{almost everywhere}), i.e., are such that the elements for which they do not hold reside within a set of measure $0$: \begin{align*} \keyw{lemma}\ \textsc{AE-iff}& :\\\ (\textsc{AE}_\mathcal{M}\,x.\ P\ x)& \Leftrightarrow (\exists N\in \mathcal{A}_\mathcal{M}.\, \mu_\mathcal{M}(N) = 0 \wedge \setof{x}{\neg P\ x}\subseteq N) \end{align*} We will formalize results involving local hidden-variable hypotheses under the more general assumption that properties hold almost everywhere, rather than on the entire probability space under consideration. \subsection{On linear algebra} \newcommand{\texttt{complex }\gmat}{\texttt{complex }\texttt{mat}} \newcommand{\spct}[1]{\mathrm{spct}(#1)} We recap the core notions from linear algebra that will be used in our formalism. A more detailed treatment can be found, e.g., in \cite{linalg}. A \emph{Hilbert space} $\mathcal{H}$ is a complete vector space over the field of complex numbers, equipped with an \emph{inner product} $\inner{\cdot}{\cdot}: \mathcal{H} \times \mathcal{H} \rightarrow \mathbb{C}$, i.e., a function such that for all $\varphi, \psi\in \mathcal{H}$, $\inner{\varphi}{\psi} = \conj{\inner{\psi}{\varphi}}$, $\inner{\varphi}{\varphi} \geq 0$ and $\inner{\varphi}{\varphi} = 0$ iff $\varphi = \mathbf{0}$. 
The \emph{norm} induced by the inner product is defined by $\norm{\varphi} \isdef \sqrt{\inner{\varphi}{\varphi}}$, and $\varphi$ is \emph{normalized} if $\norm{\varphi} = 1$. The elements of a Hilbert space of dimension $n$ are represented as column vectors, and the inner product of $\varphi$ and $\psi$ is $\inner{\varphi}{\psi} = \sum_{i = 1}^n \conj{\varphi_i}\psi_i$. An \emph{operator} is a linear map on a Hilbert space. We denote by $\mathbf{I}$ the identity operator; we may also write $\mathbf{I}_n$ to specify that the considered Hilbert space is of dimension $n$. If $A$ is an operator on $\mathcal{H}$, then the operator $B$ such that for all $\varphi, \psi \in \mathcal{H}$, $\inner{\varphi}{A\psi} = \inner{B\varphi}{\psi}$, is called the \emph{adjoint of $A$}, and denoted by $\adj{A}$. If $A = \adj{A}$ then we say that $A$ is \emph{self-adjoint} or \emph{Hermitian}, and if $A\adj{A} = \adj{A}A = \mathbf{I}$, then we say that $A$ is \emph{unitary}. We identify operators with their matrix representation. The set of complex matrices is represented in Isabelle by \texttt{complex }\gmat. For convenience, we denote by $\cmat{n}{m}$ the set of complex matrices with $n$ rows and $m$ columns and by $\mathbf{0}_{n,m}$ the matrix in $\cmat{n}{m}$ containing only zeroes. When there is no ambiguity, we will write $\mathbf{0}$ instead of $\mathbf{0}_{n,m}$. The adjoint of the (square) matrix $A$ satisfies $(\adj{A})_{i,j} = \conj{A_{j,i}}$, and if $A$ is self-adjoint then we say that $A$ is a \emph{Hermitian matrix}. The \emph{trace} of a square matrix is the sum of its diagonal elements: if $A\in \cmat{n}{n}$ then $\trc{A} = \sum_{i=1}^n A_{i,i}$. An operator $A$ is a \emph{projector} if $A^2 = A$, and a projector $A$ is an \emph{orthogonal projection} if, in addition, $\adj{A} = A$. We say that $A$ is \emph{positive} if, for all $\varphi \in \mathcal{H}$, we have $\inner{\varphi}{A\varphi} \geq 0$. We say that the vector $\varphi$ is an \emph{eigenvector} of operator $A$ if there exists $\lambda\in \mathbb{C}$ such that $A\varphi = \lambda \varphi$. In this case, we say that $\lambda$ is an \emph{eigenvalue} of $A$. The set of eigenvalues of $A$ is called the \emph{spectrum} of $A$ and denoted by $\spct{A}$. When $A$ is Hermitian, all its eigenvalues are necessarily real and it is possible to associate every eigenvalue $\lambda \in \spct{A}$ with a projector $P_\lambda$ such that \[P_\lambda\cdot P_{\lambda'} = \mathbf{0}\ \text{if}\ \lambda \neq \lambda',\ \sum_{\lambda \in \spct{A}}P_\lambda = \mathbf{I}\ \text{and}\ A\ =\ \sum_{\lambda \in \spct{A}}\lambda P_\lambda.\] \newcommand{\texttt{hermitian}}{\texttt{hermitian}} \newcommand{\texttt{dim-row}}{\texttt{dim-row}} \newcommand{\texttt{unitary}}{\texttt{unitary}} The \emph{tensor product} of two vector spaces $U$ and $V$ is denoted by $U\otimes V$. We also consider the tensor product of vectors $u\in U$ and $v\in V$, and denote it by $u\otimes v$; similarly, for matrices $A$ and $B$, we denote by $A\otimes B$ their tensor product. 
Two properties of interest in this context are the following: \[\begin{array}{l} \keyw{lemma}\ \textsc{tensor-mat-hermitian}:\\ \quad \keyw{assumes}\ A \in \texttt{carrier-mat}\ n\ n\ \keyw{and}\ B\in \texttt{carrier-mat}\ n'\ n'\\ \quad \keyw{and}\ n > 0\ \keyw{and}\ n' > 0\\ \quad \keyw{and}\ \texttt{hermitian}\ A\ \keyw{and}\ \texttt{hermitian}\ B\\ \sindent \keyw{shows}\ \texttt{hermitian}\ A\otimes B \\ \keyw{lemma}\ \textsc{tensor-mat-unitary}:\\ \quad \keyw{assumes}\ \texttt{dim-row}\ A > 0\ \keyw{and}\ \texttt{dim-row}\ B > 0\\ \quad \keyw{and}\ \texttt{unitary}\ A\ \keyw{and}\ \texttt{unitary}\ B\\ \sindent \keyw{shows}\ \texttt{unitary}\ A\otimes B\\ \end{array}\] \newcommand{\texttt{rank-1-proj}}{\texttt{rank-1-proj}} \newcommand{\texttt{adjoint}}{\texttt{adjoint}} \newcommand{\texttt{projector}}{\texttt{projector}} \newcommand{\texttt{trace}}{\texttt{trace}} It is standard in quantum mechanics to represent the elements of $\mathcal{H}$ using the Dirac notation: these elements are called \emph{ket-vectors}, and are denoted by $\ket{u}$. In what follows, we will represent the tensor product $\ket{u}\otimes \ket{v}$ by $\ket{uv}$. We also consider the dual notion of \emph{bra-vectors}, denoted by $\bra{u}$. Formally, a bra-vector is the linear map that maps the vector $\ket{v}$ to the complex number $\inner{u}{v}$. In the finite-dimensional setting, if \[\ket{u} = \begin{pmatrix} u_{1} \\ u_{2} \\ \vdots \\ u_{n} \end{pmatrix}\] then $\bra{u} = (\conj{u_1}, \conj{u_2}, \ldots, \conj{u_n})$. We will write $\bra{uv}$ instead of $\bra{u}\otimes \bra{v}$, and by a slight abuse of notation, we will identify the application of $\bra{u}$ to $\ket{v}$ with the inner product $\inner{u}{v}$. We define the \emph{outer product} of two vectors, denoted by $\outerp{u}{v}$, as the linear map such that, for all $\ket{v'} \in \mathcal{H}$, $(\outerp{u}{v})\ket{v'} = \ket{u}\cdot\inner{v}{v'} = \inner{v}{v'}\cdot \ket{u}$. In the finite-dimensional setting, this outer product is represented by the matrix $M$, where $M_{i,j} = u_i\conj{v_j}$. When $\ket{v}$ is normalized, the outer product $\outerp{v}{v}$ is called the \emph{rank-1 projection on $v$}. It is in fact an orthogonal projection with trace 1: \[\begin{array}{l} \keyw{lemma}\ \textsc{rank-1-proj-adjoint}:\\ \sindent \keyw{shows}\ \texttt{adjoint}\ (\texttt{rank-1-proj}\ v)\ = \texttt{rank-1-proj}\ v \\ \keyw{lemma}\ \textsc{rank-1-proj-unitary}:\\ \quad \keyw{assumes}\ \norm{v} = 1\\ \sindent \keyw{shows}\ \texttt{projector}\ (\texttt{rank-1-proj}\ v) \\ \keyw{lemma}\ \textsc{rank-1-proj-trace}:\\ \quad \keyw{assumes}\ \norm{v} = 1\\ \sindent \keyw{shows}\ \texttt{trace}\ (\texttt{rank-1-proj}\ v)\ = 1 \end{array}\] Many introductory textbooks on quantum computing present quantum postulates on states, i.e., normalized vectors on the underlying Hilbert space. The more general statement of these postulates involves so-called \emph{density operators}, which are represented by positive matrices of trace 1. This notion has already been formalized in \cite{liu19}: \newcommand{\texttt{density-operator}}{\texttt{density-operator}} \[\begin{array}{lcl} \texttt{density-operator} & :: & \texttt{complex }\gmat \rightarrow \mathbb{B}\\ \texttt{density-operator}\ \rho &\equiv & \texttt{positive}\ \rho\ \wedge\ \texttt{trace}\ \rho\ = 1 \end{array}\] \begin{example} Given a Hilbert space $\mathcal{H}$, consider a set of normalized vectors $\ket{u_1}, \ldots \ket{u_n}$ and assume that for $i\in \interv{1}{n}$, $p_i\geq 0$ and $\sum_{i = 1}^n p_i = 1$. 
Then \[\rho \ \isdef\ \sum_{i=1}^n p_i \outerp{u_i}{u_i}\] is a density operator (lemma \textsc{rank-1-proj-sum-density}). In quantum terminology, when $n=1$, the matrix $\rho$ is referred to as a \emph{pure state} and otherwise, to a \emph{mixed state}. \end{example} A density operator that will be used in the formalization of the measurement postulate is the so-called \emph{maximally mixed state}. It is formalized as follows in Isabelle: \newcommand{\texttt{max-mix-density}}{\texttt{max-mix-density}} \[\begin{array}{lcl} \texttt{max-mix-density} & :: & \mathbb{N} \rightarrow \texttt{complex }\gmat\\ \texttt{max-mix-density}\ n &= & \frac{1}{n}\cdot \mathbf{I}_n \end{array}\] Intuitively, this density operator is the one with the maximal von Neumann entropy; in other words, its spectrum admits the maximum Shannon entropy. All the matrices we consider in what follows are nontrivial complex square matrices, we thus define a locale to work in this context: \[\begin{array}{l} \keyw{locale}\ \texttt{cpx-sq-mat}\ =\ \texttt{fixed-carrier-mat}\ (\texttt{fc-mats}:: \texttt{complex }\gmat\ \texttt{set})\, +\\ \quad\keyw{assumes}\ \texttt{dimR} = \texttt{dimC}\ \keyw{and}\ \texttt{dimR} > 0 \end{array}\] \subsection{Quantum mechanics postulates} We present the postulates of quantum mechanics that are used in quantum computation and information, following \cite{nielsen-book}. \begin{description} \item[State postulate] Associated to an isolated physical system is a Hilbert space, which is referred to as the \emph{state space}. The system itself is completely described by its density operator; we thus identify the state of a system with its density operator. \item[Evolution postulate] The evolution of a closed quantum system is described by a unitary transformation: the state $\rho$ at time $t$ of the system is related to the state $\rho'$ at time $t'$ of the system by a unitary operator $U\isdef U(t,t')$, by the equation $\rho' = U\rho \adj{U}$. \item[Measurement postulate] Quantum measurements are described by collections of so-called \emph{measurement operators}. These collections are of the form $\setof{M_\alpha}{\alpha\in I}$, where $M_\alpha$ is the matrix associated to the measurement outcome $\alpha$, and they satisfy the completeness equation: \[\sum_{\alpha\in I}\adj{M_\alpha}M_\alpha\ =\ \mathbf{I}.\] When the state $\rho$ of a quantum system is measured with the collection $\setof{M_\alpha}{\alpha\in I}$, the probability that the outcome $\alpha$ occurs is $\trc{\adj{M_\alpha}M_\alpha\rho}$, and the state after the measurement \emph{collapses} into \[\rho'\ \isdef\ \frac{M_\alpha\rho\adj{M_\alpha}}{\trc{\adj{M_\alpha}M_\alpha\rho}}.\] \item[Composite postulate] The state space of a composite physical system is the tensor product of the state spaces of the component physical systems. If the system consists of $n$ individual systems and system $i$ is prepared in state $\rho_i$ for $i\in \interv{1}{n}$, then the joint state of the composite system is $\rho_1\otimes \rho_2\otimes \cdots \otimes \rho_n$. \end{description} \begin{example} The simplest quantum system is described by a two-dimensional Hilbert space. It is standard to note an orthonormal basis for such a vector space as $\set{\ket{0}, \ket{1}}$; the two elements of this orthonormal basis are called the \emph{computational basis states}. 
In the density operator terminology, a \emph{qubit} is a state of the form $\outerp{\varphi}{\varphi}$, where $\ket{\varphi} = a \ket{0} + b \ket{1}$, for some $a,b\in \mathbb{C}$ such that $\card{a}^2 + \card{b}^2 = 1$. \end{example} In Section \ref{sec:chsh}, we will consider composite systems involving two qubits. Consider two physical systems $A$ and $B$, to which are associated the Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$. Although the state space of the composite system consisting of $A$ and $B$ is simply $\mathcal{H}_A\otimes \mathcal{H}_B$, properties of measurements involving this composite system can be quite counterintuitive, because of the notion of entanglement: \begin{definition} A state $\rho$ is \emph{separable} if it can be written as \[\rho\ =\ \sum_{i = 1}^n p_i \rho^i_A\otimes \rho^i_B,\] where for $i \in \interv{1}{n}$, $0 \leq p_i\leq 1$ and $\sum_{i=1}^n p_i = 1$. Otherwise, it is \emph{entangled}. \end{definition} For systems involving two qubits, the \emph{Bell states} are examples of entangled pure states: \begin{eqnarray*} \ket{\Phi^+} & \isdef & \frac{1}{\sqrt{2}}\left(\ket{00} + \ket{11}\right)\\ \ket{\Phi^-} & \isdef & \frac{1}{\sqrt{2}}\left(\ket{00} - \ket{11}\right)\\ \ket{\Psi^+} & \isdef & \frac{1}{\sqrt{2}}\left(\ket{01} + \ket{10}\right)\\ \ket{\Psi^-} & \isdef & \frac{1}{\sqrt{2}}\left(\ket{01} - \ket{10}\right) \end{eqnarray*} The Bell state $\ket{\Psi^-}$ will be used in Section \ref{sec:chsh} to contradict the local hidden-variable hypothesis. \section{Projective measurements} \subsection{The projective measurement postulate} A notion related to collections of measurement operators is that of \emph{observables}. Intuitively, an observable represents a physically measurable quantity of a quantum system, such as the spin of an electron or the polarization of a photon. Formally, observables are represented by Hermitian operators; this is the approach that was used by von Neumann in his axiomatic treatment of quantum mechanics. This is why it is common to see the Measurement postulate stated as follows: \begin{description} \item[Projective measurement postulate] A projective measurement is described by an observable, which is a Hermitian operator $M$ on the state space of the observed system. The measurement outcomes for an observable are its eigenvalues. Given the spectral decomposition \[M\ =\ \sum_{\lambda \in \spct{M}} \lambda\cdot P_\lambda,\] where $P_\lambda$ is the projector onto the eigenspace of $M$ with eigenvalue $\lambda$, the probability of obtaining outcome $\lambda$ when measuring the density operator $\rho$ is $\trc{\rho P_\lambda}$, and the resulting state is \[\rho'\ =\ \frac{P_\lambda\rho P_\lambda}{\trc{\rho P_\lambda}}.\] \end{description} Projective measurements are also known as von Neumann measurements. The Projective measurement postulate can be derived from the Measurement postulate. 
Indeed, using the fact that projectors are Hermitian, by the spectral theorem we have \[\sum_{\lambda \in \spct{M}} \adj{P_\lambda}P_\lambda\ =\ \sum_{\lambda \in \spct{M}} {P_\lambda}^2\ =\ \sum_{\lambda \in \spct{M}} {P_\lambda}\ =\ \mathbf{I},\] and by invariance of traces under cyclic permutations, we have: \[\trc{\adj{P_\lambda}P_\lambda\rho}\ =\ \trc{P_\lambda^2\rho}\ =\ \trc{P_\lambda\rho}\ =\ \trc{\rho P_\lambda},\] so that \[\frac{P_\lambda\rho\adj{P_\lambda}}{\trc{\adj{P_\lambda}P_\lambda\rho}}\ =\ \frac{P_\lambda\rho P_\lambda}{\trc{\rho P_\lambda}}.\] \newcommand{\texttt{measure-outcome}}{\texttt{measure-outcome}} \newcommand{\texttt{proj-measurement}}{\texttt{proj-measurement}} \newcommand{\texttt{inj-on}}{\texttt{inj-on}} \newcommand{\moval}[1]{#1^\mathrm{v}} \newcommand{\moprj}[1]{#1^\mathrm{p}} \newcommand{\texttt{meas-outcome-prob}}{\texttt{meas-outcome-prob}} Projective measurements could be formalized in many ways. We chose to stick with a formalization that is as close as possible to the one used in \cite{liu19} for their quantum programs, for the sake of future reusability. We consider a measure outcome as a couple $(\lambda, P_\lambda)$, where $\lambda$ represents the output of the measure and $P_\lambda$ the associated projector, and introduce a binary predicate that characterizes projective measurements. The first parameter of this argument represents the number of possible measure outcomes and the second parameter is the collection of measure outcomes. We require that the values of the measure outcomes are pairwise distinct, that the associated projectors have the correct dimensions and are orthogonal projectors, that sum to the identity. For the sake of readability, if $M_i = (\lambda, P_\lambda)$ is a measure outcome, then we denote $\lambda$ by $\moval{M_i}$ and $P_\lambda$ by $\moprj{M_i}$. \[\begin{array}{l} \keyw{type-synonym}\ \texttt{measure-outcome}\ =\ \mathbb{R}\times \texttt{complex }\gmat \end{array}\] \[\begin{array}{lcl} \texttt{proj-measurement} & :: & \mathbb{N} \rightarrow (\mathbb{N} \rightarrow \texttt{measure-outcome}) \rightarrow \mathbb{B} \\ \texttt{proj-measurement}\ n\ M\ & \Leftrightarrow & \texttt{inj-on}\ (\lambda i.\ \moval{M_i})\ \ \interv{0}{n-1}\ \wedge\\ & & \forall j < n.\, \moprj{M_j} \in \texttt{fc-mats} \wedge \texttt{projector}\ \moprj{M_j}\ \wedge\\ & & \forall i,j < n.\, i\neq j \Rightarrow \moprj{M_i}\cdot \moprj{M_j} = \mathbf{0}\ \wedge\\ & & \sum_{j = 0}^{n-1} \moprj{M_j} = \mathbf{I} \end{array}\] According to the projective measurement predicate, the probability of obtaining result $\lambda$ when measuring the density operator $\rho$ is $\trc{\rho P_\lambda}$. We prove that, although $\rho$ and $P_\lambda$ are complex matrices, these traces are real positive numbers that sum to 1. 
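As a purely illustrative sanity check, independent of the Isabelle development, the following Python/numpy sketch mirrors these facts for a small, arbitrarily chosen Hermitian observable and density operator: it builds one spectral projector per eigenvalue, verifies that the outcome probabilities $\trc{\rho P_\lambda}$ are real, nonnegative and sum to 1, and computes the collapsed state for one outcome. The matrices below are our own examples and are not taken from the formalization.
\begin{verbatim}
import numpy as np

# Illustrative sketch only (not part of the Isabelle development).
A = np.array([[1, 1 - 1j], [1 + 1j, 0]])                      # Hermitian observable
rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)   # positive, trace 1

evals, U = np.linalg.eigh(A)                   # A = U diag(evals) U^dagger
spectrum = np.unique(np.round(evals, 10))      # distinct eigenvalues
projectors = {lam: sum(np.outer(U[:, j], U[:, j].conj())
                       for j in range(len(evals)) if np.isclose(evals[j], lam))
              for lam in spectrum}

probs = {lam: np.trace(rho @ P) for lam, P in projectors.items()}
assert all(abs(p.imag) < 1e-12 and p.real >= -1e-12 for p in probs.values())
assert np.isclose(sum(p.real for p in probs.values()), 1.0)

lam = spectrum[0]                              # one possible outcome
P = projectors[lam]
collapsed = P @ rho @ P / np.trace(rho @ P)    # post-measurement state
\end{verbatim}
The Isabelle counterparts of these properties are the definition of \texttt{meas-outcome-prob} and the lemmas \textsc{meas-outcome-prob-real}, \textsc{meas-outcome-prob-pos} and \textsc{meas-outcome-prob-sum} stated below, together with the definition of \texttt{density-collapse}.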
\[\begin{array}{lcl} \texttt{meas-outcome-prob} & :: & \texttt{complex }\gmat \rightarrow (\mathbb{N} \rightarrow \texttt{measure-outcome}) \rightarrow \\ & & \quad \quad \mathbb{N} \rightarrow \mathbb{C}\\ \texttt{meas-outcome-prob}\ \rho\ M\ i & = & \trc{\rho \cdot \moprj{M_i}} \end{array} \] \[\begin{array}{l} \keyw{lemma}\ \textsc{meas-outcome-prob-real}:\\ \quad \keyw{assumes}\ \rho \in \texttt{fc-mats}\ \keyw{and}\ \texttt{density-operator}\ \rho\\ \quad \keyw{and}\ \texttt{proj-measurement}\ n\ M\ \keyw{and}\ i < n\\ \sindent \keyw{shows}\ \texttt{meas-outcome-prob}\ \rho\ M\ i \in \mathbb{R} \\ \keyw{lemma}\ \textsc{meas-outcome-prob-pos}:\\ \quad \keyw{assumes}\ \rho \in \texttt{fc-mats}\ \keyw{and}\ \texttt{density-operator}\ \rho\\ \quad \keyw{and}\ \texttt{proj-measurement}\ n\ M\ \keyw{and}\ i < n\\ \sindent \keyw{shows}\ \texttt{meas-outcome-prob}\ \rho\ M\ i \geq 0 \\ \keyw{lemma}\ \textsc{meas-outcome-prob-sum}:\\ \quad \keyw{assumes}\ \rho \in \texttt{fc-mats}\ \keyw{and}\ \texttt{density-operator}\ \rho\\ \quad \keyw{and}\ \texttt{proj-measurement}\ n\ M\\ \sindent \keyw{shows}\ \sum_{j = 1}^{n-1} (\texttt{meas-outcome-prob}\ \rho\ M\ j) = 1 \end{array}\] When the result of the measurement of $\rho$ is $\lambda$, $\rho$ collapses into $\frac{P_\lambda\rho P_\lambda}{\trc{\rho P_\lambda}}$. When formalizing this collapse in Isabelle, some care must be taken to handle the case of results that occur with probability zero. Although such cases are never meant to be analyzed when reasoning on quantum algorithms, it is still necessary to provide a reasonable definition of the state $\rho$ collapses into. We have chosen to make $\rho$ collapse into the maximally mixed state in this case: \newcommand{\texttt{density-collapse}}{\texttt{density-collapse}} \[\begin{array}{lcl} \texttt{density-collapse} & :: & \texttt{complex }\gmat \rightarrow \texttt{complex }\gmat \rightarrow \texttt{complex }\gmat\\ \texttt{density-collapse}\ \rho\ P & = & \keyw{if}\ \trc{\rho \cdot P} = 0\ \keyw{then}\ \texttt{max-mix-density}\ \texttt{dimR}\\ & & \quad\quad\quad\quad\quad\quad\quad\keyw{else}\ \frac{P\cdot\rho\cdot P}{\trc{\rho\cdot P}} \end{array} \] \subsection{Projective measurements for observables} \newcommand{\texttt{diag-elems}}{\texttt{diag-elems}} \newcommand{\texttt{diag-idx-to-el}}{\texttt{diag-idx-to-el}} \newcommand{\texttt{diag-elem-indices}}{\texttt{diag-elem-indices}} \newcommand{\texttt{project-vecs}}{\texttt{project-vecs}} \newcommand{\texttt{mk-meas-outcome}}{\texttt{mk-meas-outcome}} \newcommand{\texttt{eigvals}}{\texttt{eigvals}} \newcommand{\texttt{make-pm}}{\texttt{make-pm}} \newcommand{\texttt{unitary-schur-decomposition}}{\texttt{unitary-schur-decomposition}} \newcommand{\texttt{dist-el-card}}{\texttt{dist-el-card}} We develop the construction of a projective measurement for an observable, i.e., a Hermitian matrix. This construction relies on the fact that a Hermitian matrix $A$ can be decomposed as $A = U\cdot B\cdot \adj{U}$, where $B$ is a diagonal matrix and $U$ is unitary. A construction of $B$ and $U$ based on the Schur decomposition theorem is available in Isabelle; this theorem was developed in \cite{jordan} and extended in \cite{liu19}. The projective measurement for $A$ is constructed using the fact that the spectrum of $A$ consists of the diagonal elements of $B$, and because $U$ is unitary, its column vectors are necessarily normalized and pairwise orthogonal. 
More specifically, assume $A\in \cmat{n}{n}$, let $D_B\isdef \setof{B_{i,i}}{i = 1, \ldots, n}$ (represented by $\texttt{diag-elems}\ B$ in our formalization), and let $p\isdef \card{D_B}$ (represented by $\texttt{dist-el-card}\ B$ in our formalization). The number $p$ represents the size of the spectrum of $A$; we let $\mathcal{C}$ (represented by $\texttt{diag-idx-to-el}\ B$ in our formalization) be a bijection between $\interv{0}{p-1}$ and $D_B$, and $\mathcal{E}$ (represented by $\texttt{diag-elem-indices}\ B$ in our formalization) associate to $\lambda \in D_B$ the set $\setof{i\leq n}{B_{i,i} = \lambda}$. Then the set $\set{\mathcal{E}\circ\mathcal{C}(0), \ldots, \mathcal{E}\circ\mathcal{C}(p-1)}$ is a partition of $\interv{1}{n}$ (lemmas \textsc{diag-elem-indices-disjoint} and \textsc{diag-elem-indices-union} in our formalization). This partition is used, along with the unitary matrix $U$, to construct projectors for the eigenspaces of $A$ as follows. We define the function $\texttt{project-vecs}$ which, for $i\in \interv{0}{p-1}$, constructs the matrix \[P_i\isdef \sum_{j\in \mathcal{E}\circ\mathcal{C}(i)} \outerp{U_j}{U_j},\] where $U_j$ denotes column $j$ of $U$. For $i\in \interv{0}{p-1}$, the function $\texttt{mk-meas-outcome}\ i$ constructs the couple $(\mathcal{C}(i), P_i)$. We obtain the definition of the projective measurement associated to the Hermitian matrix $A$ whose eigenvalues are represented by $\texttt{eigvals}\ A$: \[\begin{array}{lcl} \texttt{make-pm} & :: & \texttt{complex }\gmat \rightarrow \mathbb{N} \times (\mathbb{N} \rightarrow \texttt{measure-outcome})\\ \texttt{make-pm}\ A & = & \keyw{let}\ (B,U,\_) = \texttt{unitary-schur-decomposition}\ A\ (\texttt{eigvals}\ A)\\ & & \keyw{in}\ (\texttt{dist-el-card}\ B,\ \texttt{mk-meas-outcome}\ B\ U) \end{array} \] The resulting couple represents a projective measurement, and the original matrix can be recovered by summing the projectors scaled by the corresponding eigenvalues: \[\begin{array}{l} \keyw{lemma}\ \textsc{make-pm-proj-measurement}:\\ \quad \keyw{assumes}\ A \in \texttt{fc-mats}\ \keyw{and}\ \texttt{hermitian}\ A\\ \quad \keyw{and}\ \texttt{make-pm}\ A = (n, M)\\ \sindent \keyw{shows}\ \texttt{proj-measurement}\ n\ M \\ \keyw{lemma}\ \textsc{make-pm-sum}:\\ \quad \keyw{assumes}\ A \in \texttt{fc-mats}\ \keyw{and}\ \texttt{hermitian}\ A\\ \quad \keyw{and}\ \texttt{make-pm}\ A = (n, M)\\ \sindent \keyw{shows}\ \sum_{i = 0}^{n-1} \moval{M_i}\cdot \moprj{M_i} = A \end{array}\] \section{The CHSH inequality}\label{sec:chsh} \newcommand{\texttt{integrable}}{\texttt{integrable}} \newcommand{\jprob}[2]{\mathrm{p}(#1\mid #2)} \newcommand{\jexp}[1]{\mathrm{E}(#1)} The fact that a physical system is in a superposition of states and that, instead of revealing a pre-existing value, a measurement ``brings the outcome into being'' (Mermin, \cite{Mermin93}) was the cause of many controversies between the pioneers of quantum mechanics. Famously, Einstein did not believe in the intrinsically statistical nature of quantum mechanics. According to him, quantum mechanics was an incomplete theory, and the postulates on probabilistic measure outcomes actually reflected statistical outcomes of a deterministic underlying theory (see \cite{Dalton20,scarani2019bell} for detailed considerations on these controversies). The EPR paradox \cite{EPR} was designed to evidence the incompleteness of quantum mechanics. It involves two entangled and separated particles which are sent in opposite directions. 
If one of the particles is measured, then the outcome of the measurement of the other particle will be known with certainty. For example, if the entangled system is represented by the Bell state $\ket{\Phi^+} = \frac{1}{\sqrt{2}}\left(\ket{00} + \ket{11}\right)$ and a measurement of the first particle returns $0$, then it is certain that a measurement of the second one will also output $0$. This phenomenon is known as \emph{nonlocality}, and it may leave the impression that information traveled from the first to the second particle instantaneously, which would contradict the theory of relativity\footnote{Since then, it has been proven that this phenomenon does not contradict the theory of relativity and does not imply faster-than-light communication.}. Einstein called this phenomenon ``spooky action at a distance''. A suggested solution to this phenomenon is that the measurement outcomes are actually properties that existed before the measurement was performed, and that deterministic underlying theories for quantum mechanics should thus be developed. Efforts to develop such theories are called \emph{hidden-variable programs}. The theories that take into account the fact that information cannot travel instantaneously, thus also requiring that distant events are independent, are called \emph{local hidden-variable theories}. The fact that there can be no local hidden-variable underlying theory for quantum mechanics was proved by Bell \cite{Bell64}, who derived inequalities (the \emph{Bell inequalities}) that hold in a probabilistic setting, and showed that they are violated by measurements in quantum mechanics. Recently, Aspect \cite{aspect} showed that this violation can be experimentally verified, even when taking experimental errors into account, i.e., regardless of the possible outcomes of the particles lost during the experiment. In what follows we formalize a probabilistic inequality that is violated in quantum mechanics: the \emph{CHSH inequality}, named after Clauser, Horne, Shimony and Holt \cite{CHSH}. This inequality is an upper-bound involving expectations of products of random variables. It involves two parties, Alice and Bob, who each receive and measure a particle that is part of an entangled system originating from a common source. After repeating this operation a large number of times, they can compute the frequencies of the different outcomes. These frequencies are referred to as \emph{correlations}, or \emph{joint probabilities}, and denoted by $\jprob{a,b}{A,B}$, where $a$ and $b$ represent the outcomes and $A$ and $B$ represent the measuring devices used by Alice and Bob, respectively. In the CHSH setting, Alice and Bob have two measuring devices each, represented by the observables $A_0$ and $A_1$ for Alice, and $B_0$ and $B_1$ for Bob. All observables have $\pm 1$ as eigenvalues. At each round, Alice and Bob independently choose one measuring device; running the experiment sufficiently many times makes it possible to construct the expectation values for observables $A_i$ and $B_j$, where $i,j \in \set{0,1}$: \[\jexp{A_i,B_j}\ \isdef\ \sum_{a,b = \pm 1} a\cdot b\cdot \jprob{a,b}{A_i,B_j}.\] Consider the quantity \[\jexp{A_1,B_1} + \jexp{A_1,B_0} + \jexp{A_0, B_1} - \jexp{A_0, B_0}.\] As we will see, under the local hidden-variable assumption, this quantity admits an upper-bound, but for a suitable choice of density operator and observables, this upper-bound is violated; a violation that has been confirmed experimentally \cite{aspect}. 
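Two remarks may help to fix intuition before the formal statement. First, the classical bound ultimately rests on the elementary observation that for real numbers $a_0,a_1,b_0,b_1\in[-1,1]$ we have \[a_1 b_1 + a_1 b_0 + a_0 b_1 - a_0 b_0\ =\ a_1(b_0+b_1) + a_0(b_1-b_0)\ \in\ [-2,2];\] the lemma \textsc{chsh-expect} below is the expectation-level version of this fact. Second, the quantum violation can be checked numerically. The following Python/numpy sketch, which is purely illustrative and independent of the formalization, computes the quantum expectation values $\trc{(A\otimes B)\,\rho}$ for the Bell state $\ket{\Psi^-}$ and the observables introduced at the end of this section, and evaluates the above combination to $2\sqrt{2} > 2$.
\begin{verbatim}
import numpy as np

# Illustrative check only: CHSH value for the Bell state |Psi-> and the
# observables Z, X, XpZ = -(X+Z)/sqrt(2), ZmX = (Z-X)/sqrt(2) used below,
# with the identification A0 = Z, A1 = X, B0 = ZmX, B1 = XpZ.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
XpZ = -(X + Z) / np.sqrt(2)
ZmX = (Z - X) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)  # |Psi->
rho = np.outer(psi, psi.conj())                                 # density operator

def corr(A, B):
    # Quantum expectation value tr((A tensor B) rho) of the joint measurement.
    return np.trace(np.kron(A, B) @ rho).real

chsh = corr(X, XpZ) + corr(X, ZmX) + corr(Z, XpZ) - corr(Z, ZmX)
print(chsh)   # approximately 2.828 = 2*sqrt(2), above the classical bound 2
\end{verbatim}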
Under the local hidden-variable assumption, the quantities $\jexp{A_i,B_j}$ are expectations in a suitable probability space. We prove the following inequality which holds in any probability space $\mathcal{M}$, with relaxed conditions on the upper-bounds of random variables compared to standard statements of the result, which are assumed to hold almost everywhere rather than for all samples: \[\begin{array}{l} \keyw{lemma}\ \textsc{chsh-expect}:\\ \quad \keyw{assumes}\ \textsc{AE}_\mathcal{M}\,x.\ \card{A_0(x)} \leq 1\ \keyw{and}\ \textsc{AE}_\mathcal{M}\,x.\ \card{A_1(x)} \leq 1\\ \quad \keyw{and}\ \textsc{AE}_\mathcal{M}\,x.\ \card{B_0(x)} \leq 1\ \keyw{and}\ \textsc{AE}_\mathcal{M}\,x.\ \card{B_1(x)} \leq 1\\ \quad \keyw{and}\ \texttt{integrable}\ \mathcal{M}\ (A_0\cdot B_0)\\ \quad \keyw{and}\ \texttt{integrable}\ \mathcal{M}\ (A_0\cdot B_1)\\ \quad \keyw{and}\ \texttt{integrable}\ \mathcal{M}\ (A_1\cdot B_0)\\ \quad \keyw{and}\ \texttt{integrable}\ \mathcal{M}\ (A_1\cdot B_1)\\ \sindent \keyw{shows}\ \card{\mathbb{E}[A_1\cdot B_0] + \mathbb{E}[A_0\cdot B_1] + \mathbb{E}[A_1\cdot B_1] - \mathbb{E}[A_0\cdot B_0]} \leq 2 \end{array}\] The local hidden-variable assumption on a system states that there exists a probability space and independent random variables such that, when performing simultaneous measurements on the system, the probabilities of the outcomes, the values of which are given by the Measurement postulate, are expectations in the probability space. This statement is often found in articles and textbooks under the assumption that the probability space admits a density\footnote{Including in the original paper on the CHSH inequality \cite{CHSH}.}, but it is defined in our formalization in a more general case, where no assumption on the existence of a density is made, and properties on the considered random variables are assumed to hold almost everywhere rather than on the entire probability space. \[\begin{array}{lcl} \texttt{pos-rv} & :: & \alpha\ \texttt{measure} \rightarrow (\alpha\rightarrow \mathbb{R}) \rightarrow \mathbb{B}\\ \texttt{pos-rv}\ \mathcal{M}\ X & \equiv & X\in \texttt{borel-measurable}\ \mathcal{M}\ \wedge\ \textsc{AE}_\mathcal{M}\, x.\ X(x) \geq 0 \\ \texttt{prv-sum} & :: & \alpha\ \texttt{measure} \rightarrow \texttt{complex }\gmat \rightarrow (\mathbb{C}\rightarrow \alpha \rightarrow \mathbb{R}) \rightarrow \mathbb{B}\\ \texttt{prv-sum}\ \mathcal{M}\ A\ X & \equiv & \textsc{AE}_\mathcal{M}\, x. 
\sum_{a\in \spct{A}} X_a(x) = 1 \\ \texttt{lhv} & :: & \alpha\ \texttt{measure} \rightarrow \texttt{complex }\gmat \rightarrow\\ & & \quad \texttt{complex }\gmat \rightarrow \texttt{complex }\gmat \rightarrow\\ & & \quad (\mathbb{C} \rightarrow \alpha \rightarrow \mathbb{R}) \rightarrow (\mathbb{C} \rightarrow \alpha \rightarrow \mathbb{R}) \rightarrow \mathbb{B}\\ \texttt{lhv}\ \mathcal{M}\ A\ B\ \rho\ X\ Y & \equiv & \texttt{prob-space} \mathcal{M}\ \wedge\\ & & \texttt{prv-sum}\ \mathcal{M}\ A\ X\ \wedge\ \texttt{prv-sum}\ \mathcal{M}\ B\ Y\ \wedge\\ & & \forall a\in \spct{A}.\, \texttt{pos-rv}\ \mathcal{M}\ X_a\ \wedge\\ & & \forall b\in \spct{B}.\, \texttt{pos-rv}\ \mathcal{M}\ Y_b\ \wedge\\ & & \forall a\in \spct{A}.\, \forall b\in \spct{B}.\\ & & \quad (\texttt{integrable}\ \mathcal{M}\ (X_a\cdot Y_b)\ \wedge\\ & & \quad \mathbb{E}[X_a\cdot Y_b] = \trc{P_a\cdot P_b\cdot \rho}) \end{array}\] \newcommand{\texttt{qt-expect}}{\texttt{qt-expect}} The \emph{quantum expectation value} of a measurement represents the average value of the projective measurement of an observable. In other words, given an observable $A$, if the probability of obtaining result $a\in \spct{A}$ after a measurement of some state is $p_a$, then the expectation value of $A$ is $\sum_{a\in \spct{A}} a\cdot p_a$. More generally, given a density operator $\rho$ and an observable $A$, the (quantum) expectation value of $A$ is \[\tuple{A}_\rho\ \isdef\ \trc{A\cdot\rho}.\] Under the local hidden-variable hypothesis, when $X$ represents observable $A$, we can define the random variable related to the expectation value of $A$: \[\begin{array}{lcl} \texttt{qt-expect} & :: & \texttt{complex }\gmat\rightarrow (\mathbb{C}\rightarrow \alpha \rightarrow \mathbb{R}) \rightarrow \alpha \rightarrow \mathbb{R}\\ \texttt{qt-expect}\ A\ X & = & \left(\lambda x.\, \sum_{a\in \spct{A}} a \cdot X_a(x)\right) \end{array}\] We obtain the following equality relating the expectation of the product of random variables and the quantum expectation of the corresponding observables: \[\begin{array}{l} \keyw{lemma}\ \textsc{sum-qt-expect}:\\ \quad \keyw{assumes}\ \texttt{lhv}\ \mathcal{M}\ A\ B\ \rho\ X\ Y\\ \quad \keyw{and}\ A\in \texttt{fc-mats}\ \keyw{and}\ B\in \texttt{fc-mats}\ \keyw{and}\ \rho\in \texttt{fc-mats}\\ \quad \keyw{and}\ \texttt{hermitian}\ A\ \keyw{and}\ \texttt{hermitian}\ B\\ \sindent \keyw{shows}\ \mathbb{E}[(\texttt{qt-expect}\ A\ X)\cdot(\texttt{qt-expect}\ B\ Y)] = \trc{A\cdot B\cdot \rho} \end{array}\] \newcommand{\texttt{X}}{\texttt{X}} \newcommand{\texttt{Z}}{\texttt{Z}} \newcommand{\texttt{XpZ}}{\texttt{XpZ}} \newcommand{\texttt{ZmX}}{\texttt{ZmX}} \newcommand{\texttt{X-I}}{\texttt{X-I}} \newcommand{\texttt{Z-I}}{\texttt{Z-I}} \newcommand{\texttt{I-XpZ}}{\texttt{I-XpZ}} \newcommand{\texttt{I-ZmX}}{\texttt{I-ZmX}} The goal becomes finding a suitable density operator and suitable observables so that the combination of their traces violates the CHSH inequality. To this purpose, we consider the density operator \[\rho_C\ \isdef\ \outerp{\Psi^-}{\Psi^-},\ \text{where } \ket{\Psi^-}\ =\ \frac{1}{\sqrt{2}}\left(\ket{01} - \ket{10}\right) \text{ is one of the Bell states}.\] We consider bipartite measurements of this entangled state. 
These measurements involve the following observables: \[\begin{array}{rclcrcl} \texttt{Z} & \isdef & \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} & \quad & \texttt{X} & \isdef & \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \\ \texttt{XpZ} & \isdef & -\frac{1}{\sqrt{2}}(\texttt{X} + \texttt{Z}) & & \texttt{ZmX} & \isdef & \frac{1}{\sqrt{2}}(\texttt{Z} - \texttt{X}) \end{array}\] These are all Hermitian matrices, and they are also unitary. The corresponding separated measurements are represented by the following tensor products: \[\begin{array}{rclcrcl} \texttt{Z-I} & \isdef & \texttt{Z}\otimes \mathbf{I} & \quad & \texttt{X-I} & \isdef & \texttt{X} \otimes \mathbf{I}\\ \texttt{I-XpZ} & \isdef & \mathbf{I} \otimes \texttt{XpZ} &\quad & \texttt{I-ZmX} & \isdef & \mathbf{I} \otimes \texttt{ZmX} \end{array}\] Under the local hidden-variable hypothesis, we can compute the following expectations: \[\begin{array}{l} \keyw{lemma}\ \textsc{Z-I-XpZ-chsh}:\\ \quad \keyw{assumes}\ \texttt{lhv}\ \mathcal{M}\ \texttt{Z-I}\ \texttt{I-XpZ}\ \rho\ V_z\ V_p\\ \sindent \keyw{shows}\ \mathbb{E}[(\texttt{qt-expect}\ \texttt{Z-I}\ V_z)\cdot(\texttt{qt-expect}\ \texttt{I-XpZ}\ V_p)] = \frac{1}{\sqrt{2}} \\ \keyw{lemma}\ \textsc{X-I-XpZ-chsh}:\\ \quad \keyw{assumes}\ \texttt{lhv}\ \mathcal{M}\ \texttt{X-I}\ \texttt{I-XpZ}\ \rho\ V_x\ V_p\\ \sindent \keyw{shows}\ \mathbb{E}[(\texttt{qt-expect}\ \texttt{X-I}\ V_x)\cdot(\texttt{qt-expect}\ \texttt{I-XpZ}\ V_p)] = \frac{1}{\sqrt{2}} \\ \keyw{lemma}\ \textsc{X-I-ZmX-chsh}:\\ \quad \keyw{assumes}\ \texttt{lhv}\ \mathcal{M}\ \texttt{X-I}\ \texttt{I-ZmX}\ \rho\ V_x\ V_m\\ \sindent \keyw{shows}\ \mathbb{E}[(\texttt{qt-expect}\ \texttt{X-I}\ V_x)\cdot(\texttt{qt-expect}\ \texttt{I-ZmX}\ V_m)] = \frac{1}{\sqrt{2}} \\ \keyw{lemma}\ \textsc{Z-I-ZmX-chsh}:\\ \quad \keyw{assumes}\ \texttt{lhv}\ \mathcal{M}\ \texttt{Z-I}\ \texttt{I-ZmX}\ \rho\ V_z\ V_m\\ \sindent \keyw{shows}\ \mathbb{E}[(\texttt{qt-expect}\ \texttt{Z-I}\ V_z)\cdot(\texttt{qt-expect}\ \texttt{I-ZmX}\ V_m)] = -\frac{1}{\sqrt{2}} \end{array}\] Summing the first three expectation values and subtracting the last one returns $2\sqrt{2} > 2$, and the CHSH inequality is violated. We conclude that the local hidden-variable assumption cannot hold: \[\begin{array}{l} \keyw{lemma}\ \textsc{no-lhv}:\\ \quad \keyw{assumes}\ \texttt{lhv}\ \mathcal{M}\ \texttt{Z-I}\ \texttt{I-XpZ}\ \rho\ V_z\ V_p\\ \quad \keyw{and}\ \texttt{lhv}\ \mathcal{M}\ \texttt{X-I}\ \texttt{I-XpZ}\ \rho\ V_x\ V_p\\ \quad \keyw{and}\ \texttt{lhv}\ \mathcal{M}\ \texttt{X-I}\ \texttt{I-ZmX}\ \rho\ V_x\ V_m\\ \quad \keyw{and}\ \texttt{lhv}\ \mathcal{M}\ \texttt{Z-I}\ \texttt{I-ZmX}\ \rho\ V_z\ V_m\\ \sindent \keyw{shows}\ \text{False} \end{array}\] \section{Conclusion} We have formalized the essential notion of quantum projective measurements and the way they are obtained from observables. This formalization was carried out in a setting that is as general as possible. For example, the local hidden-variable hypothesis is formalized with as few conditions as possible and contrarily to many textbooks, makes no assumption on the existence of a density function on the underlying probability space. We also took care of formalizing necessary notions in a way that should make them simple to reuse in other formalizations. For instance, the way projective measurements are defined is close to their usage in the quantum language used in \cite{liu19}, which could permit to consider extensions of this language in which it is possible to reason about measurement outcomes. 
This is a direction we are currently exploring: to the best of our knowledge, there are currently no formalized quantum languages that permit to perform probabilistic reasoning on quantum algorithms. Yet, this form of reasoning is common in textbooks on quantum mechanics, where it is obvious that if a large number of systems in the state $\frac{1}{\sqrt{2}}(\ket{0} + \ket{1})$ are measured in the standard basis, then approximately half of the outputs will be equal to 0; a simple consequence of the Law of large numbers which is formalized in Isabelle \cite{eberl}. Although the CHSH inequality shows that it is not possible to model quantum mechanics in a probabilistic setting, we are currently investigating how to associate a probability space to a quantum algorithm, in order to use the large corpus of results on measure theory that have already been formalized in Isabelle for the subsequent reasoning tasks. The CHSH inequality turned out to be difficult to formalize, because although it is presented in a similar way in \cite{nielsen-book, mathQuantum}, we were unable to find a justification why their presentation entails the required result without making an additional assumption on the relationship between expectations of the random variables they consider and the quantum expectations of the related measurements. This is why the treatment of the CHSH inequality that is formalized in this paper is the one from \cite{scarani2019bell}, in which the local hidden-variable formulation is the same as in the original paper \cite{CHSH}. This inequality is also an important first step toward the formalization of \emph{device-independent} quantum cryptography protocols; i.e., protocols that are \emph{unconditionally secure}, even in the case where the devices used are noisy or malicious. Indeed a key point of these protocols is the notion of \emph{self-testing}, which guarantees that the verification of inequality violations with classical interaction is enough to certify quantum properties that are stronger than entanglement. \paragraph{Acknowledgments.} The authors thank St\'ephane Attal for his feedback on the relationship between quantum and standard probabilities. This work benefited from the funding \emph{``Investissements d'avenir''} (ANR-15-IDEX-02) program of the French National Research Agency. \end{document}
\begin{document} \title{On the EOS formulation for light scattering.\\ Stability, Singularity and Parallelization} \author{Aihua Lin} \author{Per Kristen Jakobsen} \affil{Department of Mathematics and Statistics, UIT the Arctic University of Norway, 9019 Troms\o, Norway} \date{ } \renewcommand\Authands{ and } \maketitle \begin{abstract} In this paper we discuss some of the mathematical and numerical issues that have to be addressed when calculating wave scattering using the EOS approach. The discussion is framed in the context of light scattering by objects whose optical response can be of a nonlinear and/or inhomogeneous nature. The discussion addresses two issues that, more likely than not, will be part of any investigation of wave scattering using the EOS approach. \end{abstract} \section{Introduction} A new hybrid numerical approach for solving linear and nonlinear scattering problems, the Ewald Oseen Scattering (EOS) formulation, has recently been introduced and applied to the cases of 1D transient wave scattering \cite{Aihua} and 3D light scattering \cite{Aihua2}. The approach combines a domain-based method and a boundary integral representation in such a way that the wave fields inside the scattering objects are updated in time using the domain-based method, while the integral representation is used to update the boundary values of the fields, which are required by the inside domain-based method. In this way, for the numerical implementation, no numerical grids outside the scattering objects are needed. This greatly reduces the computational complexity and cost compared to fully domain based methods like the Finite Difference Time Domain (FDTD) method or the Finite Element Method. The method can handle inhomogeneous and/or nonlinear optical response, and includes the time dependent Boundary Element Method (TBEM) as a special case. For the case of 1D transient wave scattering \cite{Aihua}, the method solves the model equations accurately and efficiently, but we don't expect the 1D case to be fully representative of the problems and issues that need to be resolved when using the EOS formulation to calculate wave scattering. We do, however, expect the case of 3D light scattering \cite{Aihua2} to be fairly representative with respect to which problems arise, and also the computational and mathematical severity of these problems. We have seen three types of mathematical and computational issues arise for the case of light scattering which we believe are to be found in any nontrivial application of the EOS formulation to wave scattering. Firstly, we have the issue of numerical stability. Instabilities in numerical implementations of the EOS formulation can arise from discretization of the domain part of the algorithm but also from discretization of the boundary update part of the algorithm. The numerical instability arising from the boundary part of the algorithm has been noted earlier in the context of transient light scattering from objects that have a linear homogeneous optical response. For this situation, realized for example in antenna theory, the boundary part of the EOS algorithm can be disconnected from the domain part of the algorithm, which in this case can be discarded. The EOS formulation then becomes a pure boundary update algorithm which solves a set of integro-differential equations located on the boundary of the scattering objects. 
These integro-differential equations, which are the defining equations for TBEM, are subject to an instability that, in many common situations, strikes at late times. This late time instability is a major nuisance, and has prevented TBEM from being more widely applied than it is today. The sources of these instabilities are not yet fully understood, but we believe that our investigation of light scattering using the EOS approach gives some new insight into the origin of these instabilities. Even without a true understanding of the underlying causes of the late time instability, efforts have been made and several techniques have, over the last several decades, been developed with the goal of improving the stability of the numerical schemes designed to solve the integro-differential equations underlying TBEM. Broadly speaking, there are two different directions that have been pursued. One direction is to delay or remove the late time instability by applying increasingly accurate spatial integration schemes \cite{Weile, Weile2, Weile3, Weile4, Shanker, Walker, Walker2}. For instance, Daniel S. Weile and his co-authors have published a series of articles focused on illustrating the dependence of the stability on the different numerical integration schemes \cite{Weile, Weile2, Weile3, Weile4}. The other direction is aimed at designing more stable time discretization schemes. M. J. Bluck and his co-authors developed a stable, but implicit, numerical method \cite{Walker, Walker2} for the integro-differential equations underlying TBEM, for the case when the magnetic response is the dominating one; these are the so-called magnetic field integral equations. Some authors have reported some success in mitigating the instability by both making better approximations to the integrals and also applying improved algorithms for the time derivatives \cite{Zhao, Huang}. Our work has not been directly aimed at contributing to this discussion, but, as already noted above, the integro-differential equations discussed by these authors can be seen as a special case of our general EOS approach, and we therefore believe that the insights we have gained on how this long time instability depends on the different pieces of the EOS algorithm, in particular how it depends on the material parameters describing the optical response of the scattering object, do have some relevance to the discussion described above. Secondly, there is the issue of the singular integrals that appear when the integral part of the EOS algorithm is discretized. This issue is very much present in BEM and in TBEM \cite{singular1,singular2,singular3,singular4}, but the singular integrals are more prevalent and severe for the EOS formulation, where we have to tackle both surface integrals and volume integrals. We believe that the types of singular integrals, and how to treat them for the case of light scattering, are fairly representative of the level of complexity one will encounter when applying the EOS approach to wave scattering problems. For this reason we find it appropriate to include a section in this paper where we discuss the relevant types of integrals and how to treat them. Thirdly, the fundamental equations underlying both the TBEM and our more general EOS approach to transient wave scattering are retarded in time. This retardation is unavoidable since these equations can only be derived using space-time Green's functions. 
Thus the solutions at a certain time depend on the values of the solutions over a potentially very long previous interval of time. Computationally this means that the method can be very demanding with respect to memory, and it also means that the updating of the boundary values of the fields, which is done by the boundary part of the EOS algorithm, can be very costly. Parallel processing, either using a computational cluster or a shared memory machine, can take on these computational tasks. However, whenever large scale parallel processing is needed, the issue of appropriate partitioning of the problem and load balancing inevitably comes into play. In our work the EOS algorithm was implemented on a large cluster, but we will not in this paper report on any of the parallel issues that our EOS approach for light scattering gave rise to. These kinds of considerations, which are important in practical terms but typically have fairly low generality, are somewhat distinct from the mathematical and numerical issues that are the focus of the current paper, and will therefore be reported elsewhere at a later time. However, the high memory requirement of the EOS approach to light scattering is something that should be addressed at this point. On the one hand, the EOS approach represents a large, potentially very large, reduction in memory use compared to fully domain based methods, since only the surface and inside of the scattering objects have to be discretized. On the other hand, because of the retardation, there is a large, potentially very large, increase in memory use compared to the memory usage needed by the domain part of the algorithm. It is appropriate to ask whether anything has been gained with respect to memory usage compared to a fully domain based method like the FDTD method. We don't, as of yet, know the answer to this question, and the answer is almost certainly not going to be a simple one. It will probably depend on the detailed structure of the problem, like the nature of the source, the number, shape and distribution of the scattering objects, etc. However, even if the memory usage for purely domain based methods and our EOS approach is roughly the same for many problems of interest, our approach avoids many of the sources of problems that need to be taken into account when using purely domain based methods. These are problems like stair-casing at sharp interfaces defining the scattering objects, issues of accuracy, stability and complexity associated with the use of multiple grids in order to accommodate the possibly different geometric shapes of the scattering objects, and the need to minimize the reflection from the boundary of the finite computational box. The EOS approach is not subject to any of these problems. In this paper our efforts are aimed towards testing the EOS formulation of light scattering with respect to implementation complexity and numerical stability. Thus we illustrate the method in the simplest situation, where we have a single scattering object in the form of a rectangular box. In section \ref{stabilities3d} we analyze the numerical stability of our EOS scheme for light scattering by using eigenvalues of the matrix defining the linearized version of the scheme, exactly as for the case of 1D wave scattering \cite{Aihua}. We find, just like for the 1D case, that the internal numerical scheme, Lax-Wendroff in our case, determines a stability interval for the time step. 
In the 1D case, the stability interval of the EOS formulation is determined purely by the internal numerical scheme. In the 3D case, however, the integral part of the scheme imposes an additional constraint, so that the lower limit of the stability interval is determined by the integral equations while the upper limit is determined by the internal numerical scheme. We find that the late time instability is highly dependent on the properties of the scattering materials; specifically, it is directly related to the values of the relative magnetic permeability $\mu_1$ and the relative electric permittivity $\varepsilon_1$. Using this we show that, for relative permeability and permittivity in a certain range, the numerical scheme for our EOS formulation of light scattering works well and is free of any late time instability. The late time instability is only observed for high relative electric permittivity or high relative magnetic permeability. We also observe that the lower limit of the stability interval for the time step is more sensitive to relative differences in the magnetic permeability $\mu_1$ than in the electric permittivity $\varepsilon_1$ between the inside and the outside of the scattering objects. In section \ref{Asingularity} we present the singular integrals that appear in our EOS formulation for light scattering and the techniques we use to reduce their calculation to a singular core, which we calculate exactly, and a regular part, which we calculate numerically. \section{Stability}\label{stabilities3d} In this section we discuss the instabilities showing up at late times when we discretize the EOS formulation for light scattering. Whether or not the late time instability shows up depends on the values of the material parameters defining the problem. The overall method is far too complex for an analytical investigation of the stability to be feasible, but using numerical calculations of the eigenvalues of a linearization of the system of difference equations defining the numerical implementation of the EOS formulation, supplemented by runs of the full algorithm, we find that the domain part and the boundary part of the algorithm contribute to the instability separately and in different ways. The focus of this section is to disentangle these two contributions to the instability. For the domain part of the algorithm we use Lax-Wendroff, which is an explicit method. The discrete grid inside the scattering object must, for the EOS formulation of light scattering, support both discrete versions of the partial derivatives and discretizations of the integrals defining the boundary update part of the algorithm. For this reason the grid is nonuniform close to the boundary. The discretization of the domain part of the algorithm takes the form of a vector iteration \begin{equation}\label{3dmatrixequation} Q^{n+1}=MQ^n, \end{equation} where $Q$ is a vector containing the components of the electric field and the magnetic field at all points of the grid, of size $6\times N_x \times N_y \times N_z$, where $N_x$, $N_y$ and $N_z$ are the numbers of grid points in the $x$, $y$ and $z$ directions.
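
As will be discussed next, stability of the iteration (\ref{3dmatrixequation}) hinges on the magnitude of the largest eigenvalue of $M$. For large grids the matrix $M$ can be too large to assemble and diagonalize directly; in that situation the relevant eigenvalue magnitude can be estimated matrix-free, for instance by power iteration. The C sketch below is purely illustrative: it uses a one-dimensional periodic Lax-Wendroff step as a stand-in for the full three-dimensional update operator, and the size \texttt{N} and the Courant number \texttt{TAU} are arbitrary choices of ours, not values from our implementation.
\begin{lstlisting}[style=CStyle]
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N   400       /* size of the stand-in state vector            */
#define TAU 0.45      /* Courant number  c*dt/dx                      */

/* Stand-in for the update matrix M: one step of 1D periodic
   Lax-Wendroff advection.  In the full 3D scheme this routine would
   instead apply one interior update of the linearized EOS scheme.    */
static void apply_M(const double *q, double *Mq)
{
    for (int i = 0; i < N; i++) {
        int ip = (i + 1) % N, im = (i - 1 + N) % N;
        Mq[i] = q[i] - 0.5 * TAU * (q[ip] - q[im])
                     + 0.5 * TAU * TAU * (q[ip] - 2.0 * q[i] + q[im]);
    }
}

/* Matrix-free power iteration: the returned norm approaches the
   magnitude of the largest eigenvalue of M; the vector iteration
   Q^{n+1} = M Q^n is unstable when this magnitude exceeds 1.         */
int main(void)
{
    double q[N], Mq[N], norm = 0.0;
    for (int i = 0; i < N; i++)
        q[i] = rand() / (double)RAND_MAX - 0.5;     /* random start   */

    for (int k = 0; k < 5000; k++) {
        apply_M(q, Mq);
        norm = 0.0;
        for (int i = 0; i < N; i++) norm += Mq[i] * Mq[i];
        norm = sqrt(norm);
        for (int i = 0; i < N; i++) q[i] = Mq[i] / norm;  /* rescale  */
    }
    printf("estimated magnitude of largest eigenvalue = %f\n", norm);
    return 0;
}
\end{lstlisting}
For this one-dimensional stand-in the estimate settles at one for $\tau\le 1$ and exceeds one for $\tau>1$; for the full scheme the same test would be run with the actual update routine in place of the stand-in.
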
\iffalse \begin{equation}\label{3dmatrixequation} \begin{bmatrix} (e_{1,i,j,k} )_{\Lambda_1 } \\ (e_{2,i,j,k} )_{\Lambda_1 } \\ (e_{3,i,j,k} )_{\Lambda_1 } \\ (b_{1,i,j,k} )_{\Lambda_1} \\ (b_{2,i,j,k} )_{\Lambda_1 } \\ (b_{3,i,j,k} )_{\Lambda_1} \\ \end{bmatrix}^{n+1} = M \begin{bmatrix} (e_{1,i,j,k} )_{\Lambda_1 } \\ (e_{2,i,j,k} )_{\Lambda_1 } \\ (e_{3,i,j,k} )_{\Lambda_1 } \\ (b_{1,i,j,k} )_{\Lambda_1} \\ (b_{2,i,j,k} )_{\Lambda_1 } \\ (b_{3,i,j,k} )_{\Lambda_1} \\ \end{bmatrix}^n \end{equation} \fi The entries of the matrix $M$ are presented in Appendix \ref{Amatrix}. In order to get a stable numerical solution, as discussed in \cite{Aihua}, the largest eigenvalues of the matrix $M$ must have a norm smaller than 1. For the non-uniform grids and the discretizations in \cite{Aihua2}, we find that the vector iteration (\ref{3dmatrixequation}) is stable if $$ 0.005< \tau <0.48, $$ where $\tau=c_1 \Delta t/\Delta x.$ \begin{figure} \caption{Numerical solutions from different values of $ \tau .$ $\mu_1=1.0, \varepsilon_1=1.5, \mu_0=1.0, \varepsilon_0=1.0.$ } \label{unstablei} \end{figure} \noindent Figure \ref{unstablei} illustrates the intensity of the electric field at a specific point inside the object, as a function of time, for different values of $\tau$. The instability, which in the TBEM literature is called the late time instability, is illustrated in the second panel of figure \ref{unstablei}. As we mentioned in the introduction of this paper, the term late time instability has been much used in the community focused on the time dependent boundary element method. We believe that in their domain of application, like antenna theory, the physical parameters are such that the largest eigenvalue for the iteration is always only slightly bigger than 1, as it is in panel two of figure \ref{unstablei}. This is why the instability always shows up at late times. In panel three of the figure we are deeper into the unstable domain for $\tau$, and the largest eigenvalue is now so large that it destroys the whole calculation. The late time instability has thus been transformed into an early time instability. Note that the outside source in figure \ref{unstablei} is the same as in \cite{Aihua2}. In our numerical experiments, we found that the stable range of the EOS formulation is not only restricted by the eigenvalues of the matrix $M$, but is also restricted by the boundary integral identities through the relative electric permittivity $\varepsilon_1$ and the relative magnetic permeability $\mu_1$. Figure \ref{unstables} shows how the stability depends on the values of $\varepsilon_1$, and figure \ref{ucompare} shows how it depends on the values of $\mu_1$. Together, they tell us that increasing the electric permittivity or the magnetic permeability narrows the stable range. \begin{figure} \caption{Numerical solutions from different values of $ \varepsilon_1 .$ $\tau=0.45,$ $\mu_1=1.0, \mu_0=1.0, \varepsilon_0=1.0.$ } \label{unstables} \end{figure} \begin{figure} \caption{Numerical solutions from different values of $ \mu_1 .$ $\tau=0.45,$ $\mu_0=1.0, \varepsilon_0=1.0.$} \label{ucompare} \end{figure} Figure \ref{ucompare} also tells us that $\mu_1$ and $\varepsilon_1$ do not affect the stability of the full scheme in the same way.
It seems that the method is more sensitive to $\mu_1$ than to $\varepsilon_1.$ After a series of numerical experiments, our conclusion is that, for an explicit numerical method like the one we are using, the lower limit of the stable range of the EOS formulation is restricted by the electric permittivity $\varepsilon_1$ and the magnetic permeability $\mu_1$, while the upper limit of the stable range is determined by the inside domain-based method. This conjecture is verified by the following two tests. \subsection{Instabilities coming from the domain-based method} For the first test we consider a homogeneous model without current and charge inside the object, which implies $\mu_1=\mu_0,$ $\varepsilon_1=\varepsilon_0,$ ${\bf J}_1=0$ and $\rho_1=0.$ Under these assumptions, the electric field and the magnetic field are continuous across the surface, \begin{equation*} \begin{split} {\bf E}_-&={\bf E}_+,\\ {\bf B}_-&={\bf B}_+, \end{split} \end{equation*} where ${\bf E}_\pm$ and ${\bf B}_\pm$ are the integral representations of the solutions on the surface, obtained by taking the limit from the inside and the outside of the object, respectively. The electric field inside the object can be calculated directly from the outside sources, \begin{equation}\label{htestequation2} \begin{split} {\bf E}_1({\bf x},t)=-\partial_t\frac{\mu_0}{4\pi}\int_{V_0}\,\mathrm{d}V'\frac{{\bf J}_0({\bf x'},T)}{|\bf x'-\bf x|}-\nabla\frac{1}{4\pi\varepsilon_0}\int_{V_0}\,\mathrm{d}V'\frac{\rho_0({\bf x'},T)}{|{\bf x'}-{\bf x}|}, \end{split} \end{equation} where ${\bf x}\in V_1.$ Equation (\ref{htestequation2}) expresses the exact solution for the inside fields. Also, from \cite{Aihua2} we have the boundary integral identities \begin{equation} \begin{split}\label{htestb} {\bf E}_+({\bf x},t)=-\partial_t\frac{\mu_0}{4\pi}\int_{V_0}\,\mathrm{d}V'\frac{{\bf J}_0({\bf x'},T)}{|\bf x'-\bf x|}-\nabla\frac{1}{4\pi\varepsilon_0}\int_{V_0}\,\mathrm{d}V'\frac{\rho_0({\bf x'},T)}{|{\bf x'}-{\bf x}|}, \end{split} \end{equation} and \begin{equation}\label{htestbb} {\bf B}_+({\bf x},t)=\nabla\times\frac{\mu_0}{4\pi}\int_{V_0}\,\mathrm{d}V'\frac{{\bf J}_0({\bf x'},T)}{|{\bf x'}-{\bf x}|}, \end{equation} for ${\bf x} \in S$, where ${\bf E}_+({\bf x},t)$ and ${\bf B}_+({\bf x},t)$ represent the limits obtained by letting ${\bf x}$ approach the surface from the inside of the scattering object. On the other hand, \cite{Aihua2} gives the integral representation for the inside domain as \begin{equation}\label{htestequation1} \begin{split} {\bf E}_1({\bf x},t)&=\partial_t[\frac{1}{4\pi}\int_{S}\,\mathrm{d}S^{'}\{\frac{1}{c_1|{\bf x'}-{\bf x}|} ({{\bf n}'}\times {\bf E}_+({\bf x'},T))\times \nabla'|{\bf x'}-{\bf x}|\\ &+\frac{1}{c_1|{\bf x'}-{\bf x}|} ({{\bf n}'}\cdot {\bf E}_+({\bf x'},T))\nabla'|{\bf x'}-{\bf x}|+\frac{1}{|{\bf x'}-{\bf x}|} {{\bf n}'}\times {\bf B}_+({\bf x'},T) \}]\\ &-\frac{1}{4\pi}\int_{S}\,\mathrm{d}S^{'}\{ ({{\bf n}'}\times {\bf E}_+({\bf x'},T))\times \nabla'\frac{1}{|{\bf x'}-{\bf x}|}\\ &+({{\bf n}'}\cdot {\bf E}_+({\bf x'},T))\nabla'\frac{1}{|{\bf x'}-{\bf x}|}\}. \end{split} \end{equation} \begin{figure} \caption{Comparison of the intensity of the electric field inside the object at a specific point calculated by three methods. $t_0=1.5,$ $x_0=-2.0,$ $y_0=0.0,$ $z_0=0.0,$ $\tau=0.45,$ $\mu_1=1.0,$ $ \varepsilon_1=1.0,$ $\mu_0=1.0, $ $\varepsilon_0=1.0.$ } \label{htest1} \end{figure} Thus the solution for the domain inside the scattering object can now be calculated in three ways.
The first is the exact solution expressed by (\ref{htestequation2}); the second, Method 2, is the Lax-Wendroff method supplied with the exact boundary values (\ref{htestb}) and (\ref{htestbb}); and the third, Method 3, is to calculate the solution using formula (\ref{htestequation1}), which expresses the field values inside the scattering object in terms of the values of the fields on the boundary. Note that Method 3 uses the same surface integral expressions as the ones that form the boundary part of the full implementation of our EOS formulation of light scattering. Thus, instabilities in the full algorithm originating from the boundary part of the algorithm should appear as an instability in Method 3. Figure \ref{htest1} compares the solutions calculated in these three ways, where $\mu_1, \varepsilon_1$ and $\tau$ have been fixed in the stable range. Both Method 2 and Method 3 are stable and give solutions that agree with the exact solution to high accuracy. \begin{figure} \caption{Comparison of the intensity of the electric field inside the object at a specific point calculated by three methods. $t_0=1.5,$ $x_0=-2.0,$ $y_0=0.0,$ $z_0=0.0,$ $\tau=0.49,$ $\mu_1=1.0,$ $ \varepsilon_1=1.0,$ $\mu_0=1.0, $ $\varepsilon_0=1.0.$} \label{htest2} \end{figure} In figure \ref{htest2}, $\tau$ has been set to 0.49, and is thus larger than the upper limit of the stable range. The figure shows that Method 2 is now unstable, but Method 3 is still stable and equal to the exact solution to high accuracy. The outside source in figure \ref{htest1} and figure \ref{htest2} is the same as in \cite{Aihua2}, and the values of the parameters are given in the figure captions. \subsection{Instabilities coming from the boundary integral identities} In order to investigate the dependence of the stability on $\mu_1$ and $\varepsilon_1,$ we set up a test based on the use of artificial sources as in \cite{Aihua2}. The idea is to choose functional forms for an electromagnetic field and then calculate the sources, a charge density and a current density, that make the chosen fields solutions of Maxwell's equations driven by the calculated sources. We now calculate the electromagnetic field inside the scattering object in two different ways. In Method 1 we use the discretization of the EOS formulation developed in \cite{Aihua2}, which combines the Lax-Wendroff method for the domain part of the algorithm and our discretization of the integral representations of the boundary fields for the boundary part of the algorithm. Method 2 is to calculate the inside field values by only using the Lax-Wendroff method, supplemented by the exact boundary values of the electromagnetic field, which are the ones we chose while setting up the artificial sources. \begin{figure} \caption{Comparison of the intensity of the electric field inside the object at a specific point between the exact solution and the numerical results calculated by two methods. $\tau=0.45, $ $\mu_1=1.0,$ $ \varepsilon_1=2.5,$ $\mu_0=1.0, $ $\varepsilon_0=1.0.$} \label{uhtest} \end{figure} Figure \ref{uhtest} shows the numerical result when the upper limit of the stable range is respected while the values of $\mu_1$ and $\varepsilon_1$ have been chosen to violate the lower limit of the stable range of the EOS formulation. It shows that even though the lower limit of the stable range has been violated, Method 2, which only involves the Lax-Wendroff method, works perfectly.
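
The artificial sources used in this test follow the general recipe of choosing the fields first and deriving the sources afterwards. As a self-contained illustration of that recipe (with a field choice that is our own and not the artificial source of \cite{Aihua2}), take a profile $f$ travelling at an arbitrary speed $v$ in a homogeneous medium with parameters $\mu$ and $\varepsilon$: the pair ${\bf E}=(0,0,f(x-vt))$, ${\bf B}=(0,-f(x-vt)/v,0)$ satisfies Faraday's law and $\nabla\cdot{\bf B}=0$ identically, and the remaining two Maxwell equations then fix the sources to $\rho=0$ and ${\bf J}=(0,0,\varepsilon(v^2-c^2)f'(x-vt)/v)$, where $c=1/\sqrt{\mu\varepsilon}$. The short C program below evaluates these expressions for a Gaussian profile; all parameter values are illustrative.
\begin{lstlisting}[style=CStyle]
#include <stdio.h>
#include <math.h>

/* Hypothetical material and pulse parameters (our own illustrative
   choices, not the values used in the tests of this paper).          */
static const double mu  = 1.0;   /* magnetic permeability             */
static const double eps = 2.5;   /* electric permittivity             */
static const double v   = 0.8;   /* prescribed pulse speed            */
static const double w   = 0.3;   /* pulse width                       */

/* Chosen field profile f and its derivative. */
static double f (double u){ return exp(-u*u/(w*w)); }
static double fp(double u){ return -2.0*u/(w*w)*exp(-u*u/(w*w)); }

int main(void)
{
    double c = 1.0/sqrt(mu*eps);   /* light speed in the medium       */
    double t = 1.0;

    /* E=(0,0,f(x-vt)) and B=(0,-f(x-vt)/v,0) satisfy Faraday's law and
       div B = 0 exactly; the remaining Maxwell equations then require
       rho = 0 and Jz = (eps/v)*(v^2-c^2)*f'(x-vt).                    */
    for (double x = -1.0; x <= 3.0; x += 0.5) {
        double u  = x - v*t;
        double Jz = eps/v*(v*v - c*c)*fp(u);
        printf("x = %5.2f   Ez = %9.5f   Jz = %9.5f\n", x, f(u), Jz);
    }
    return 0;
}
\end{lstlisting}
When $v=c$ the current vanishes and the chosen pair is a free wave, which provides a convenient consistency check for an implementation of this construction.
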
Figures \ref{htest2} and \ref{uhtest} tell us that crossing the lower limit does not affect the stability of the Lax-Wendroff method, and that crossing the upper limit does not affect the stability of the surface integrals. For a general application, where the source is located outside the object and there is a current density and a charge density inside the scattering object, the EOS formulation does have a range of time steps for which the numerical implementation is stable. The upper limit of this range is determined by the Lax-Wendroff method on the non-uniform grid, and the lower limit is determined by the values of $\mu_1$ and $\varepsilon_1.$ The artificial sources and the parameter values used in figure \ref{uhtest} are the same as in \cite{Aihua2}. From figure \ref{htest2} and figure \ref{uhtest} we can also see that, before the instabilities show up, both the EOS formulation and the Lax-Wendroff method solve the equations accurately. \section{Calculations of the singular integrals}\label{Asingularity} In this section we introduce a technique for accurately calculating integrals with singularities, which can be applied to both the singular volume integrals and the singular surface integrals occurring in the EOS formulation of the 3D Maxwell's equations. Here we illustrate the technique by calculating one type of singular volume integral, \begin{equation}\label{f1} f_1=\iiint_{V_{i,j,k}} \frac{1}{|{\bf x}'-{\bf x}_p|}\,\mathrm{d}V=\iiint_{V_{i,j,k}} \frac{1}{r}\,\mathrm{d}V, \end{equation} where the integration domain $V_{i,j,k}$ is adjacent to the surface of the scattering object and given by \begin{equation*} V_{i,j,k}=[x_a, x_a+\Delta x]\times [y_j-\frac{\Delta y}{2}, y_j+\frac{\Delta y}{2}] \times [z_k-\frac{\Delta z}{2}, z_k+\frac{\Delta z}{2}], \end{equation*} with surfaces $S_m, $ $ m=1,2,\cdots,6$. Here, $\Delta x,$ $\Delta y$ and $\Delta z$ are the grid parameters in the $x,$ $y$ and $z$ directions, respectively. The point $${\bf x}_p=(x_a, y_j, z_k)$$ is located on the surface of the scattering object, at the center of the face $S_1$ of $V_{i,j,k}$. The geometry is illustrated in figure \ref{box}, where ${\bf n}_m$ is the unit normal vector on surface $S_m$ pointing out of $V_{i,j,k}.$ \begin{figure} \caption{The integral domain of the singular integral } \label{box} \end{figure} The components of the integration variable in (\ref{f1}) are given by $$ {\bf x'}=(x', y', z'), $$ and let us introduce the quantity $$ {\bf r}={\bf x}'-{\bf x}_p, $$ with $r=|{\bf r}|.$ We want to apply the divergence theorem to (\ref{f1}), and therefore need to find a function $\varphi(r)$ that satisfies \begin{equation*} \nabla \cdot ({\bf r} \varphi(r))=\frac{1}{r}, \end{equation*} or equivalently $$3 \varphi(r)+r\varphi'(r)=\frac{1}{r}.$$ Solving the above equation, we get \begin{equation*} \varphi(r)=\frac{1}{2 r}. \end{equation*} Because of the singularity on $S_1,$ we cannot apply the divergence theorem directly; however, we can write $f_1$ as \begin{equation*} \begin{split} f_1= \frac{1}{2 }(\sum_{m=2}^{6} \iint_{S_m} \frac{1}{ r} {\bf r}\cdot {\bf n}_m\,\mathrm{d}S+\lim_{\epsilon \rightarrow 0}\iint_{{S_\epsilon}} \frac{1}{r} {\bf r}\cdot {\bf n}_{\epsilon}\,\mathrm{d}S+\iint_{S_\Omega} \frac{1}{r} {\bf r}\cdot {\bf n}_1 \,\mathrm{d}S), \end{split} \end{equation*} where $S_\epsilon$ is a hemispherical surface of radius $\epsilon$ centered at ${\bf x}_p$ and $S_\Omega$ is the rest of the surface $S_1$ after a disk of radius $\epsilon$ around ${\bf x}_p$ has been removed.
${\bf n}_\epsilon$ is the unit normal vector on $S_\epsilon,$ pointing out of $V_{i,j,k},$ and ${\bf n}_m$ is the unit normal vector on $S_m,$ pointing out of $V_{i,j,k}.$ For the integral over $S_\Omega$, we have $${\bf r}=(0, y'-y_j, z'-z_k)$$ and $$ {\bf n}_1=(-1,0,0), $$ thus we get \begin{equation*} \iint_{S_\Omega}\frac{1}{ r} {\bf r}\cdot {\bf n}_1\,\mathrm{d}S=0. \end{equation*} For the integral over $S_\epsilon$, we use the spherical coordinate system, $${\bf r}=\epsilon (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi),$$ and $$ {\bf n}_{\epsilon}= (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi),$$ where $\epsilon, \varphi, \theta$ are respectively the radial distance, polar angle and azimuthal angle, so that \begin{equation*} \begin{split} \lim_{\epsilon \rightarrow 0} \iint_{S_\epsilon}\frac{1}{ r} {\bf r}\cdot {\bf n}_{\epsilon}\,\mathrm{d}S=&\lim_{\epsilon \rightarrow 0}\frac{1}{\epsilon}\int_{0}^{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \epsilon (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi)\\ &\cdot (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi) \epsilon^2 \sin \varphi \, \mathrm{d} \theta\, \mathrm{d} \varphi\\ &=0. \end{split} \end{equation*} Defining $$ s_m= \iint_{S_m} \frac{1}{ r} {\bf r}\cdot {\bf n}_m\,\mathrm{d}S, $$ $f_1$ can be written as \begin{equation}\label{f1s} f_1= \frac{1}{2 }\sum_{m=2}^{6} s_m. \end{equation} Expression (\ref{f1s}) is no longer singular and can be calculated by 2D Gaussian quadrature. However, we will compute $f_1$ by reducing the surface integrals to line integrals, which is also the approach we use to calculate the singular surface integrals appearing in the implementation discussed in this paper. We first consider the integral over $S_2$. The geometry is shown in figure \ref{s2}. \begin{figure} \caption{Surface $S_2$ } \label{s2} \end{figure} As shown in figure \ref{s2}, the surface $S_2$ is bounded by the union of four straight lines $L_{2n}$, $n=1,2,3,4.$ On this surface we have $${\bf r}=(x'-x_a, -\frac{1}{2}\Delta y, z'-z_k)$$ and the unit normal is $${\bf n}_2=(0,-1,0),$$ so that \begin{equation*} s_2=\frac{1}{2}\Delta y\iint_{S_2} \frac{1}{\sqrt{(x'-x_a)^2+\frac{1}{4}\Delta y^2+(z'-z_k)^2}}\, \mathrm{d}S. \end{equation*} The goal is to use the divergence theorem on this surface integral and thereby reduce it to line integrals over the four lines that form the boundary of $S_2$. We therefore seek a function $\varphi(\bar r)$ that satisfies $$\nabla \cdot (\bar {\bf r} \varphi(\bar r))=\frac{1}{\sqrt{\bar r^2+\frac{1}{4}\Delta y^2}},$$ where $\bar {\bf r}=(x'-x_a,z'-z_k)$ and $\bar r=|\bar {\bf r}|.$ This equation can be rewritten in the form $$2 \varphi(\bar r)+\bar r\varphi'(\bar r)=\frac{1}{\sqrt{\bar r^2+\frac{1}{4}\Delta y^2}}.$$ Solving the above equation, we get \begin{equation*}\label{f1s2} \varphi(\bar r)=\frac{\sqrt{\bar r^2+\frac{1}{4}\Delta y^2}}{\bar r^2} .
\end{equation*} Using the divergence theorem and taking into account of the singularity at $$\bar{ \bf x}=(x_a, z_k)$$ on $L_{21},$ we get \begin{equation*} \begin{split} s_2=\frac{1}{2}\Delta y (\sum_{n=2}^4 \int_{L_{2n}} \varphi(\bar r)\bar {\bf r}\cdot \bar {\bf n}_{n}\,\mathrm{d}L + \lim_{\epsilon \rightarrow 0}\int_{L_\epsilon} \varphi(\bar r) \bar {\bf r}\cdot \bar {\bf n}_{ \epsilon}\, \mathrm{d}L+ \int_{L_\Omega} \varphi(\bar r) \bar {\bf r}\cdot \bar {\bf n}_{1}\, \mathrm{d}L) \end{split} \end{equation*} where $L_\epsilon$ is a semicircle with radius $\epsilon$ centered at point ${\bar {\bf x}}$ and $L_\Omega$ is the rest of $L_{21}$. Here $\bar {\bf n}_{n}$ is the unit normal of $L_{2n}$, pointing out of $S_2,$ and $\bar {\bf n}_{\epsilon}$ is the unit normal of $L_{\epsilon}$, pointing out of $S_2.$ For the integral over $L_\Omega,$ we have $$\bar {\bf r}=(0, z'-z_k),$$ and $$\bar {\bf n}_1=(-1,0),$$ so that \begin{equation}\label{h1s21} \int_{L_\Omega} \frac{\sqrt{\bar r^2+\frac{1}{4}\Delta y^2}}{\bar r^2} (0, z'-z_k)\cdot (-1,0)\, \mathrm{d}L=0. \end{equation} For the integral over $L_\epsilon,$ using the polar coordinates, we have $$\bar {\bf r}=\epsilon (\cos \theta , \sin \theta),$$ and $$\bar {\bf n}_\epsilon=-(\cos \theta , \sin \theta),$$ so that \begin{equation}\label{h1s22} \begin{split} &\lim_{\epsilon \rightarrow 0}\int_{L_\epsilon} \frac{\sqrt{\bar r^2+\frac{1}{4}\Delta y^2}}{\bar r^2} \bar {\bf r}\cdot\bar {\bf n}_\epsilon\, \mathrm{d}L\\ &=-\lim_{\epsilon \rightarrow 0}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \epsilon (\cos \theta , \sin \theta)\cdot (\cos \theta , \sin \theta ) \frac{\sqrt{\epsilon^2+\frac{1}{4}\Delta y^2}}{\epsilon^2} \epsilon\, \mathrm{d} \theta\\ &=-\frac{1}{2} \Delta y \pi. \end{split} \end{equation} Summing up (\ref{h1s21}) and (\ref{h1s22}) gives $$l_{21}=-\frac{1}{2} \Delta y \pi.$$ Thus $s_2$ is expressed by \begin{equation*} \begin{split} s_2=\frac{1}{2}\Delta y\sum_{n=1}^4 l_{2n}, \end{split} \end{equation*} where \begin{equation*} \begin{split} l_{22}&=\frac{1}{2}\Delta z \int_{x_a}^{x_a+\Delta x} \frac{\sqrt{(x'-x_a)^2+\frac{1}{4}\Delta y^2+\frac{1}{4}\Delta z^2}}{(x'-x_a)^2+\frac{1}{4}\Delta z^2}\, \mathrm{d}x',\\ l_{23}&=\Delta x \int_{z_k-\frac{1}{2}\Delta z}^{z_k+\frac{1}{2}\Delta z} \frac{\sqrt{\Delta x^2+\frac{1}{4}\Delta y^2+(z'-z_k)^2}}{\Delta x^2+(z'-z_k)^2}\, \mathrm{d}z', \end{split} \end{equation*} and due to the symmetry of the integrand $ \bar {\bf r} \varphi(\bar r)$ on $xz$ plane $$l_{24}=l_{22}.$$ So finally we have $$s_2=\frac{1}{2}\Delta y (l_{21}+2 l_{22}+l_{23}).$$ Due to the symmetry of ${\bf r}$ in $V_{i,j,k}$ along $y$ direction, we have $$s_5=s_2.$$ The calculation of $s_3$ is similar to the one of $s_2$ with the final result $$s_3=\frac{1}{2}\Delta z (l_{31}+2 l_{32}+l_{33}),$$ where \begin{equation*} \begin{split} l_{31}&=-\frac{1}{2} \Delta z \pi,\\ l_{32}&=\frac{1}{2}\Delta y \int_{x_a}^{x_a+\Delta x} \frac{\sqrt{(x'-x_a)^2+\frac{1}{4}\Delta z^2+\frac{1}{4}\Delta y^2}}{(x'-x_a)^2+\frac{1}{4}\Delta y^2}\, \mathrm{d}x',\\ l_{33}&=\Delta x \int_{y_j-\frac{1}{2}\Delta y}^{y_j+\frac{1}{2}\Delta y} \frac{\sqrt{\Delta x^2+\frac{1}{4}\Delta z^2+(y'-y_j)^2}}{\Delta x^2+(y'-y_j)^2}\, \mathrm{d}y'. 
\end{split} \end{equation*} Also, due to the symmetry of ${\bf r}$ in $V_{i,j,k}$ along the $z$ direction, we have $$s_6=s_3.$$ The only surface integral remaining to be calculated is the one over $S_4.$ On this surface we have $${\bf r}=(\Delta x, y'-y_j, z'-z_k),$$ and $${\bf n}_4=(1,0,0),$$ so that \begin{equation*} s_4=\Delta x\iint_{S_4} \frac{1}{\sqrt{\Delta x^2+(y'-y_j)^2+(z'-z_k)^2}} \, \mathrm{d} S.\label{v1s4} \end{equation*} Defining $$\bar {\bf r}=(y'-y_j,z'-z_k)$$ and $$\bar r=|\bar {\bf r}|,$$ we seek a function $\varphi(\bar r)$ that satisfies $$\nabla \cdot (\bar {\bf r} \varphi(\bar r))=\frac{1}{\sqrt{\bar r^2+\Delta x^2}}.$$ This equation can be written in the form $$2 \varphi(\bar r)+\bar r\varphi'(\bar r)=\frac{1}{\sqrt{\bar r^2+\Delta x^2}}.$$ Solving the above equation gives \begin{equation*}\label{f1s4} \varphi(\bar r)=\frac{\sqrt{\bar r^2+\Delta x^2}}{\bar r^2} . \end{equation*} \begin{figure} \caption{Surface $S_4$ } \label{s4} \end{figure} Applying the divergence theorem, we have \begin{equation*} s_4=\Delta x\lim_{\epsilon \rightarrow 0}\int_{L_\epsilon}\frac{\sqrt{\bar r^2+\Delta x^2}}{\bar r^2}\bar {\bf r}\cdot\bar {\bf n}_\epsilon\, \mathrm{d} L+\Delta x \int_{L_\Omega}\frac{\sqrt{\bar r^2+\Delta x^2}}{\bar r^2}\bar {\bf r}\cdot\bar {\bf n}_\Omega\, \mathrm{d} L, \end{equation*} where $L_\epsilon $ is a circle with radius $\epsilon$ centered at the point ${\bar{ \bf x}}=(y_j,z_k),$ and $L_\Omega $ consists of the four edges of the surface $S_4$. $\bar {\bf n}_\epsilon$ is the unit normal vector of $L_\epsilon$ and $\bar {\bf n}_\Omega$ is the unit normal vector of $L_\Omega,$ as shown in figure \ref{s4}. For the integral over $L_\epsilon,$ we write $$\bar {\bf r}= \epsilon (\cos \theta , \sin \theta), $$ and $$\bar {\bf n}_\epsilon= - (\cos \theta , \sin \theta),$$ then \begin{equation}\label{h1s41} \begin{split} &\lim_{\epsilon \rightarrow 0} \int_{L_\epsilon}\frac{\sqrt{\bar r^2+\Delta x^2}}{\bar r^2}\bar {\bf r}\cdot \bar {\bf n}_\epsilon \, \mathrm{d} L\\ &=-\lim_{\epsilon \rightarrow 0}\int_{0}^{2\pi} \epsilon (\cos \theta , \sin \theta)\cdot (\cos \theta , \sin \theta ) \frac{\sqrt{\epsilon^2+\Delta x^2}}{\epsilon^2} \epsilon \, \mathrm{d} \theta\\ &=-2\Delta x \pi. \end{split} \end{equation} For the integral over $L_\Omega,$ there is no singularity anymore and this leads to \begin{equation}\label{h1s42} \begin{split} &\int_{L_\Omega}\frac{\sqrt{\bar r^2+\Delta x^2}}{\bar r^2}\bar {\bf r}\cdot \bar {\bf n}_\Omega \, \mathrm{d} L\\ &=2l_{41}+2l_{42}, \end{split} \end{equation} with \begin{equation*} \begin{split} l_{41}= \frac{1}{2}\Delta y \int_{z_k-\frac{1}{2}\Delta z}^{z_k+\frac{1}{2}\Delta z}\frac{\sqrt{\Delta x^2+\frac{1}{4}\Delta y^2+(z'-z_k)^2}}{\frac{1}{4}\Delta y^2+(z'-z_k)^2}\, \mathrm{d}z', \end{split} \end{equation*} and \begin{equation*} \begin{split} l_{42}=\frac{1}{2}\Delta z \int_{y_j-\frac{1}{2}\Delta y}^{y_j+\frac{1}{2}\Delta y}\frac{\sqrt{\Delta x^2+\frac{1}{4}\Delta z^2+(y'-y_j)^2}}{\frac{1}{4}\Delta z^2+(y'-y_j)^2}\, \mathrm{d}y'. \end{split} \end{equation*} Summing up (\ref{h1s41}) and (\ref{h1s42}), we obtain $$s_4=2\Delta x(l_{41}+l_{42}- \pi \Delta x).$$ We then finally get the following expression for $f_1$: \begin{equation*} \begin{split} f_1&=\frac{\Delta y}{2}(l_{21}+2l_{22}+l_{23})+\frac{\Delta z}{2}(l_{31}+2l_{32}+l_{33})\\ &+\Delta x (l_{41}+l_{42}- \pi \Delta x). \end{split} \end{equation*} All the remaining line integrals, $l_{22}$, $l_{23}$, $l_{32}$, $l_{33}$, $l_{41}$ and $l_{42}$, are non-singular and can be calculated accurately using numerical integration.
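
To make the reduction concrete, the following C sketch evaluates the expression for $f_1$ above, computing the six non-singular line integrals with a composite Simpson rule. The grid spacings and the number of subintervals are arbitrary illustrative values, and the integration variables are measured relative to $x_a$, $y_j$ and $z_k$.
\begin{lstlisting}[style=CStyle]
#include <stdio.h>
#include <math.h>

static const double PI = 3.14159265358979323846;

/* Hypothetical grid spacings, for illustration only. */
static const double dx = 0.1, dy = 0.1, dz = 0.1;

/* Composite Simpson rule on [a,b] with n (even) subintervals. */
static double simpson(double (*f)(double), double a, double b, int n)
{
    double h = (b - a) / n, s = f(a) + f(b);
    for (int i = 1; i < n; i++)
        s += (i % 2 ? 4.0 : 2.0) * f(a + i * h);
    return s * h / 3.0;
}

/* Integrands of the non-singular edge integrals l_22, l_23, l_32,
   l_33, l_41 and l_42, with the constant prefactors included.       */
static double f22(double x){ return 0.5*dz*sqrt(x*x+0.25*dy*dy+0.25*dz*dz)/(x*x+0.25*dz*dz); }
static double f23(double z){ return dx*sqrt(dx*dx+0.25*dy*dy+z*z)/(dx*dx+z*z); }
static double f32(double x){ return 0.5*dy*sqrt(x*x+0.25*dz*dz+0.25*dy*dy)/(x*x+0.25*dy*dy); }
static double f33(double y){ return dx*sqrt(dx*dx+0.25*dz*dz+y*y)/(dx*dx+y*y); }
static double f41(double z){ return 0.5*dy*sqrt(dx*dx+0.25*dy*dy+z*z)/(0.25*dy*dy+z*z); }
static double f42(double y){ return 0.5*dz*sqrt(dx*dx+0.25*dz*dz+y*y)/(0.25*dz*dz+y*y); }

int main(void)
{
    const int n = 200;                       /* Simpson subintervals  */
    double l21 = -0.5*PI*dy, l31 = -0.5*PI*dz;
    double l22 = simpson(f22,  0.0,    dx,     n);
    double l23 = simpson(f23, -0.5*dz, 0.5*dz, n);
    double l32 = simpson(f32,  0.0,    dx,     n);
    double l33 = simpson(f33, -0.5*dy, 0.5*dy, n);
    double l41 = simpson(f41, -0.5*dz, 0.5*dz, n);
    double l42 = simpson(f42, -0.5*dy, 0.5*dy, n);

    double f1 = 0.5*dy*(l21 + 2.0*l22 + l23)
              + 0.5*dz*(l31 + 2.0*l32 + l33)
              + dx*(l41 + l42 - PI*dx);

    printf("f_1 = %.10f\n", f1);
    return 0;
}
\end{lstlisting}
Since the singular behaviour has been removed analytically, the integrands are smooth on the closed integration intervals and any standard quadrature rule converges rapidly.
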
\section{Summary}\label{summary3d} In this paper we have, by considering 3D light scattering, discussed some important issues that we believe will be generic for numerical implementations of the EOS formulation for wave scattering. We have shown that the numerical instabilities can be thought of as arising separately from the domain part and the boundary update part of the algorithm. We have argued that the instability arising from the boundary part of the algorithm is strongly related to the late time instability noted earlier while solving antenna problems using TBEM. We find that our version of the late time instability can be completely removed by suitably chosen material values; in particular, the jump in material values at the boundary of the scattering object should not be too severe. In the limit where the material parameters simulate the properties of highly conductive metallic surfaces, we observe that our version of the late time instability is always present. Thus the stability interval vanishes in this limit. We take this as an indication that in situations like those arising in antenna theory, the late time instability should always be present, which indeed it is. We are not aware of work where it has been noted that the instability can be removed by manipulating the material parameters defining the scattering objects. The EOS formulation thus gives a different window into the late time instability that might be useful. In our discretization we have used explicit methods. It would not be easy, but we believe that it is possible to construct a fully implicit method for the EOS formulation; such an approach might remove all instabilities, which is the ultimate goal both for TBEM and for our EOS formulation. In this paper we have also discussed how to calculate singular volume and surface integrals for light scattering. The reason for including this discussion is that we think the types of singular integrals we discuss are generic for the singular integrals that will arise while calculating wave scattering using the EOS approach. \begin{appendices} \section{Matrix elements}\label{Amatrix} In this section we detail the entries of the updating matrix $M$ in (\ref{3dmatrixequation}), where $Q$ is a vector containing the components of the electric field and the magnetic field at all points of the grid, of size $6\times N_x \times N_y \times N_z$, where $N_x$, $N_y$ and $N_z$ are the numbers of grid points in the $x$, $y$ and $z$ directions. To simplify the writing, we denote \begin{equation*} \Lambda_1=N_x\times N_y\times N_z, \end{equation*} \begin{equation*} \Lambda_2= N_y\times N_z, \end{equation*} \begin{equation*} \Lambda_3= 6\Lambda_1, \end{equation*} \begin{equation*} \Gamma_1=\Lambda_2 i+N_z j+k, \end{equation*} \begin{equation*} \Gamma_2=\Lambda_1+\Gamma_1, \end{equation*} \begin{equation*} \Gamma_3=2\Lambda_1+\Gamma_1, \end{equation*} \begin{equation*} \Gamma_4=3\Lambda_1+\Gamma_1, \end{equation*} \begin{equation*} \Gamma_5=4\Lambda_1+\Gamma_1, \end{equation*} \begin{equation*} \Gamma_6=5\Lambda_1+\Gamma_1. \end{equation*} Thus $Q$ is expressed by \begin{equation*} Q= \left(\! \begin{array}{c} {[e_{1,i,j,k}]}_{\Lambda_1}\\{[e_{2,i,j,k}]}_{\Lambda_1}\\{[e_{3,i,j,k}]}_{\Lambda_1}\\{[b_{1,i,j,k}]}_{\Lambda_1}\\{[b_{2,i,j,k}]}_{\Lambda_1}\\{[b_{3,i,j,k}]}_{\Lambda_1} \end{array} \!\right)^{n+1}=\left(\!
\begin{array}{c} {[Q_{\Gamma_1}]}_{\Lambda_1}\\{[Q_{\Gamma_2}]}_{\Lambda_1}\\{[Q_{\Gamma_3}]}_{\Lambda_1}\\{[Q_{\Gamma_4}]}_{\Lambda_1}\\{[Q_{\Gamma_5}]}_{\Lambda_1}\\{[Q_{\Gamma_6}]}_{\Lambda_1} \end{array} \!\right)^{n+1}, \end{equation*} where ${[e_{1,i,j,k}]}_{\Lambda_1}$ represents the vector containing the components of the electric field $e_1$ at all points of the grid indexing in $k, j, i$ order. ${[e_{2,i,j,k}]}_{\Lambda_1}$ and so on follow the same rule. Due to the complexity of the matrix, here we only illustrate the entries of the rows of $M$ corresponding to the components $Q_{\Gamma_1}^{n+1}.$ Other entries of the matrix can be expressed in the same way. After applying the Lax-Wendroff method, we have \begin{equation*} \begin{split} e_{1,i,j,k}^{n+1}&=e_{1,i,j,k}^{n}+w_1 (e_{1,i,j,k}^n)_{yy}+w_1 (e_{1,i,j,k}^n)_{zz}-w_1 (e_{2,i,j,k}^n)_{xy}\\ &-w_1 (e_{3,i,j,k}^n)_{xz}+w_2(b_{3,i,j,k}^n)_y-w_2(b_{2,i,j,k}^n)_z, \end{split} \end{equation*} $$\Downarrow$$ \begin{equation}\label{onerowmatrix} \begin{split} Q_{\Gamma_1}^{n+1}&=Q_{\Gamma_1}+w_1 (Q_{\Gamma_1})_{yy}+w_1 (Q_{\Gamma_1})_{zz}-w_1 (Q_{\Gamma_2})_{xy}\\ &-w_1 (Q_{\Gamma_3})_{xz}+w_2(Q_{\Gamma_6})_{y}-w_2(Q_{\Gamma_5})_{z}, \end{split} \end{equation} where \begin{equation*} \begin{split} w_1&=\frac{c^2\Delta t^2}{2},\\ w_2&=c^2\Delta t. \end{split} \end{equation*} The coefficients of the right side of the equation (\ref{onerowmatrix}) are corresponding to the $\Gamma_1$-th row of the matrix $M$ and the values of them are depended on the values of $i, j $ and $k.$ In order to have a compact and uniform expressions, we write \begin{equation*} \begin{split} &(Q_{\Gamma_6})_{y}=\frac{1}{ \Delta y}(\xi_{-2}Q_{\kappa_{-2}} +\xi_{-1} Q_{\kappa_{-1}}+\xi Q_{\kappa}+\xi_{1}Q_{\kappa_{1}} +\xi_{2} Q_{\kappa_{2}}),\\ &(Q_{\Gamma_1})_{yy}=\frac{1}{ (\Delta y)^2}(\delta_{-2}Q_{\chi_{-2}} +\delta_{-1} Q_{\chi_{-1}}+\delta Q_{\Gamma_1}+\delta_{1}Q_{\chi_{1}} +\delta_{2} Q_{\chi_{2}}),\\ &(Q_{\Gamma_2})_{xy}=\frac{1}{ 3 \Delta x \Delta y}(\omega_{-4}Q_{\Upsilon_{-4}} +\omega_{-3} Q_{\Upsilon_{-3}}+\omega_{-2}Q_{\Upsilon_{-2}} +\omega_{-1} Q_{\Upsilon_{-1}}+\omega Q_{\Upsilon}\\ &\qquad \qquad +\omega_{1}Q_{\Upsilon_{1}} +\omega_{2} Q_{\Upsilon_{2}}+\omega_{3}Q_{\Upsilon_{3}} +\omega_{4} Q_{\Upsilon_{4}}), \end{split} \end{equation*} where \begin{align*} &\chi_{1}=\Gamma_1+N_z, & &\chi_{2}=\Gamma_1+2N_z,& &\chi_{-1}=\Gamma_1-N_z, \\ & \chi_{-2}=\Gamma_1-2N_z, & & \kappa=\Gamma_6, & &\kappa_{1}=\Gamma_6+N_z, \\ &\kappa_{2}=\Gamma_6+2N_z, & &\kappa_{-1}=\Gamma_6-N_z, & &\kappa_{-2}=\Gamma_6-2N_z,\\ & \eta=N_y, & &\Upsilon=\Gamma_{2},& &\Upsilon_{-4}=\Gamma_{2}-\Lambda_2-N_z,\\ &\Upsilon_{-3}=\Gamma_{2}-\Lambda_2,& &\Upsilon_{-2}=\Gamma_{2}-\Lambda_2+N_z, & &\Upsilon_{-1}=\Gamma_{2}-N_z,\\ &\Upsilon_{1}=\Gamma_{2}+N_z,& &\Upsilon_{2}=\Gamma_{2}+\Lambda_2-N_z,& &\Upsilon_{3}=\Gamma_{2}+\Lambda_2,\\ & \Upsilon_{4}=\Gamma_{2}+\Lambda_2+N_z. 
\end{align*} The expressions for $(Q_{\Gamma_5})_{z},$ $(Q_{\Gamma_1})_{zz}$ and $(Q_{\Gamma_3})_{xz}$ have the same forms as $(Q_{\Gamma_6})_{y},$ $(Q_{\Gamma_1})_{yy}$ and $(Q_{\Gamma_2})_{xy}$ respectively, but with \begin{align*} &\chi_{1}=\Gamma_1+1, & &\chi_{2}=\Gamma_1+2,& &\chi_{-1}=\Gamma_1-1, \\ & \chi_{-2}=\Gamma_1-2, & &\kappa=\Gamma_5, & &\kappa_{1}=\Gamma_5+1, \\ &\kappa_{2}=\Gamma_5+2,& &\kappa_{-1}=\Gamma_5-1, & &\kappa_{-2}=\Gamma_5-2,\\ & \eta=N_z, & &\Upsilon=\Gamma_{3},& &\Upsilon_{-4}=\Gamma_{3}-\Lambda_2-1,\\ &\Upsilon_{-3}=\Gamma_{3}-\Lambda_2,& &\Upsilon_{-2}=\Gamma_{3}-\Lambda_2+1, & &\Upsilon_{-1}=\Gamma_{3}-1,\\ &\Upsilon_{1}=\Gamma_{3}+1,& &\Upsilon_{2}=\Gamma_{3}+\Lambda_2-1,& &\Upsilon_{3}=\Gamma_{3}+\Lambda_2,\\ &\Upsilon_{4}=\Gamma_{3}+\Lambda_2+1. \end{align*} After discussing the locations of $i, j$ and $k,$ the values of the coefficients are listed in table \ref{table1} and table \ref{table2}. \begin{center} \captionof{table}{($\frac{\partial}{\partial y}$,$\frac{\partial^2}{\partial y^2}$) or ($\frac{\partial}{\partial z}$,$\frac{\partial^2}{\partial z^2}$) related coefficients} \label{table1} \begin{tabular}{| l | l | l | l | l | l | l | l | l | l |l|} \hline j or k &$\delta_{-2}$ & $\delta_{-1}$ & $\delta$ &$\delta_1$ & $\delta_2$&$\xi_{-2}$ &$\xi_{-1} $& $\xi$ & $\xi_1$ &$\xi_2$ \\ \hline 0 & 0& 0&-5 & 2 & -1/5&0 &0&1/2 & 2/3 &-1/10 \\ \hline $\eta$-1 &-1/5 &2 & -5 & 0&0 & 1/10&-2/3&-1/2 & 0 &0 \\ \hline [1,$\eta$-2] & 0 &1 & -2 &1&0 &0&-1/2& 0 & 1/2 &0 \\ \hline \end{tabular} \end{center} \begin{table*} \caption{$\frac{\partial^2}{\partial x \partial y}$ or $\frac{\partial^2}{\partial x \partial z}$ related coefficients} \label{table2} \begin{tabularx}{\textwidth}{l|l|lllllllll} \toprule i & j or k& $\omega_{-4}$ & $\omega_{-3}$ & $\omega_{-2}$ & $\omega_{-1}$ & $\omega$ & $\omega_1$ & $\omega_2$ & $\omega_3$& $\omega_4$ \\ \hline \multirow{4}{*}{0} &0 & 0 & 0 &0 &0 & 9 & -5 &0 & -5 &1 \\ & $\eta$-1 & 0& 0 & 0 &5 &-9 & 0 & -1 & 5 &0\\ & [1,$\frac{\eta}{2}$) & 0 & 0 & 0 & 0 & 3 & -3 & 1 & -1 &0\\ & $[\frac{\eta}{2},\eta-2]$ & 0 &0 &0&3 & -3 & 0 & 0 & 1 & -1 \\\hline \multirow{4}{*}{ $N_x$-1} &0 & 0 & 5 & -1 & 0 &- 9 & 5 & 0 & 0 &0 \\ & $\eta$-1 &1 & -5 & 0 &-5 &9 &0 & 0 & 0 &0\\ & [1,$\frac{\eta}{2}$) & -1 & 1 &0 & 0 &- 3 & 3 & 0 & 0 &0\\ & $[\frac{\eta}{2},\eta-2]$ & 0 & -1&1&-3 & 3 & 0 & 0 & 0 & 0 \\\hline \multirow{3}{*}{[1,$N_x$-3]} &0 & 0 & 0 &1 &0 & 3 & -1 & 0 & -3 &0 \\ & $\eta$-1 & -1 & 0 & 0 &1 &-3 & 0 & 0 & 3 &0\\ & $[1,\eta-2]$ & -3/4 &0 &3/4& 0 & 0 & 0 &3/4 & 0 & -3/4 \\\hline \multirow{3}{*}{$N_x$-2} &0 & 0 & 3 &0 & 0 &-3 &1 &0 & 0 &-1 \\ & $\eta$-1 & 0 & -3 & 0 &-1 & 3 & 0 & 1 & 0 &0\\ & [1,$\eta$-2] & -3/4 &0 &3/4& 0 & 0 & 0 &3/4 & 0 & -3/4 \\ \bottomrule \end{tabularx} \end{table*} For example, if $i=0, $ $j=0$ and $k=0$, the entries of the $\Gamma_1$-th row of the matrix $M$ are the following \begin{align*} &M_{\Gamma_1,\Gamma_1}=1-5u_1-5v_1, & &M_{\Gamma_1,\Gamma_1+N_z}=2u_1, & &M_{\Gamma_1,\Gamma_1+2 N_z}=-\frac{1}{5}u_1, \\ & M_{\Gamma_1,\Gamma_6}=\frac{1}{2}u_2, & & M_{\Gamma_1,\Gamma_6+N_z}=\frac{2}{3}u_2, & &M_{\Gamma_1,\Gamma_6+2 N_z}=-\frac{1}{10}u_2, \\ &M_{\Gamma_1,\Gamma_2}=9u_3, & &M_{\Gamma_1,\Gamma_2+N_z}=-5u_3, & &M_{\Gamma_1,\Gamma_2+\Lambda_2}=-5u_3, \\ & M_{\Gamma_1,\Gamma_2+\Lambda_2+N_z}=u_3, & &M_{\Gamma_1,\Gamma_1+1}=2v_1, & &M_{\Gamma_1,\Gamma_1+2 }=-\frac{1}{5}v_1,\\ &M_{\Gamma_1,\Gamma_5}=\frac{1}{2}v_2,& & M_{\Gamma_1,\Gamma_5+1}=\frac{2}{3}v_2,& &M_{\Gamma_1,\Gamma_5+2 }=-\frac{1}{10}v_2,\\ 
&M_{\Gamma_1,\Gamma_3}=9v_3,& &M_{\Gamma_1,\Gamma_3+1}=-5v_3,& &M_{\Gamma_1,\Gamma_3+\Lambda_2}=-5v_3,\\ &M_{\Gamma_1,\Gamma_3+\Lambda_2+1}=v_3, \end{align*} otherwise $M_{\Gamma_1, *}=0$, and where \begin{align*} &u_1=\frac{w_1}{(\Delta y)^2}, & &u_2=\frac{w_2}{\Delta y},& &u_3=\frac{w_1}{3 \Delta x\Delta y}, \\ & v_1=\frac{w_1}{(\Delta z)^2},& &v_2=\frac{w_2}{\Delta z},& &v_3=\frac{w_1}{3 \Delta x\Delta z}. \end{align*} \section{Singular integrals}\label{Singularity} In this section we detail the calculations of the other types of singular integrals involved in the EOS formulation of the 3D Maxwell's equations, denoted by ${\bf f}_2,$ $ {\bf f}_3,$ $ g_1,$ ${\bf g}_2,$ ${\bf g}_3$ in \cite{Aihua2}. The techniques are similar to those used in the calculation of $f_1$ in section \ref{Asingularity}. The geometry is illustrated in figure \ref{box}. \subsection{Calculation of ${\bf f}_2$} \begin{equation}\label{f2} {\bf f}_2=\iiint_{V_{i,j,k}} \frac{{\bf x}'-{\bf x}_p}{|{\bf x}'-{\bf x}_p|^2}\,\mathrm{d}V. \end{equation} The components of the integration variable in (\ref{f2}) are given by $$ {\bf x'}=(x', y', z'), $$ and let us introduce the quantity $$ {\bf r}={\bf x}'-{\bf x}_p, $$ with $r=|{\bf r}|.$ We want to apply the divergence theorem to (\ref{f2}), and therefore need to find a function $\varphi(r)$ that satisfies $$\nabla \cdot ({\bf r} {\bf r} \varphi(r))=\frac{{\bf r}}{r^2},$$ or equivalently $$\nabla \cdot ({\bf r} {\bf r}) \varphi(r)+{\bf r} {\bf r}\cdot \nabla\varphi(r)=\frac{{\bf r} }{r^2} .$$ Solving the above equation, we get $$ \varphi(r)=\frac{1}{2r^2}.$$ Because of the singularity on the surface $S_1,$ we cannot apply the divergence theorem directly; however, we can write \begin{equation*}\label{f2i} \begin{split} {\bf f}_2= \frac{1}{2 }(\sum_{m=2}^6 \iint_{S_m} \frac{{\bf r}{\bf r}}{r^2}\cdot {\bf n}_m\, \mathrm{d} S+\lim_{\epsilon \rightarrow 0}\iint_{S_\epsilon}\frac{{\bf r}{\bf r}}{ r^2}\cdot {\bf n}_\epsilon\, \mathrm{d}S+\iint_{S_\Omega}\frac{{\bf r}{\bf r}}{r^2}\cdot {\bf n}_1\,\mathrm{d}S), \end{split} \end{equation*} where $S_\epsilon$ is a hemispherical surface of radius $\epsilon$ centered at ${\bf x}_p$ and $S_\Omega$ is the rest of the surface $S_1$ after a disk of radius $\epsilon$ around ${\bf x}_p$ has been removed. ${\bf n}_\epsilon$ is the unit normal vector on $S_\epsilon,$ pointing out of $V_{i,j,k},$ and ${\bf n}_m$ is the unit normal vector on $S_m,$ pointing out of $V_{i,j,k}.$ For the integral over $S_\Omega$, we have $${\bf r}=(0, y'-y_j, z'-z_k),$$ and $${\bf n}_1=(-1,0,0),$$ thus we get \begin{equation*}\label{h2s11} \begin{split} \iint_{S_\Omega}\frac{{\bf r}{\bf r}}{ r^2}\cdot {\bf n}_1\,\mathrm{d}S=(0, 0, 0).
\end{split} \end{equation*} For the integral over surface $S_\epsilon$, we use the spherical coordinate system, $${\bf r}=\epsilon (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi),$$ and $${\bf n}_\epsilon= (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi),$$ where $\epsilon, \varphi, \theta$ are respectively the radial distance, polar angle and azimuthal angle, so that \begin{equation*}\label{h2s12} \begin{split} \iint_{S_\epsilon}\frac{1}{ r^2} {\bf r}{\bf r}\cdot {\bf n}_\epsilon\,\mathrm{d}S=&\lim_{\epsilon \rightarrow 0}\frac{1}{\epsilon^2}\int_{0}^{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \epsilon (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi)\\ &\epsilon (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi) \\ &\cdot (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi)\\ & \epsilon^2 \sin \varphi \, \mathrm{d} \theta \, \mathrm{d} \varphi=(0,0,0). \end{split} \end{equation*} Defining \begin{equation*} \begin{split} s_m= \iint_{S_m} \frac{{\bf r}{\bf r}}{r^2}\cdot {\bf n}_m\, \mathrm{d} S, \end{split} \end{equation*} $f_2$ can be written as \begin{equation*} \begin{split} {\bf f}_2= \frac{1}{2 } \sum_{m=2}^6 s_m. \end{split} \end{equation*} Due to the symmetry of ${\bf r}$ along $y$ and $z$ directions in $V_{i,j,k}$, we have $$s_5= s_2$$ and $$s_6=s_3.$$ Thus $f_2$ can be written as, \begin{equation*} \begin{split} {\bf f}_2&=\frac{1}{2}(\iint_{S_2}\Delta y \frac{(x'-x_a,0,z'-z_k)}{(x'-x_a)^2+(z'-z_k)^2+\frac{1}{4}\Delta y^2}\, \mathrm{d}x'\, \mathrm{d}z'\\ &+\iint_{S_3}\Delta z \frac{(x'-x_a,y'-y_j,0)}{(x'-x_a)^2+(y'-y_j)^2+\frac{1}{4}\Delta z^2}\, \mathrm{d}x'\, \mathrm{d}y'\\ &+\iint_{S_4}\Delta x^2 \frac{(1,0,0)}{\Delta x^2+(y'-y_j)^2+(z'-z_k)^2}\, \mathrm{d}y'\, \mathrm{d}z'). \end{split} \end{equation*} For computation simplicity, we define $$\bar{s}_2=\iint_{S_2} \frac{(x'-x_a,z'-z_k)}{(x'-x)^2+(z'-z_k)^2+\frac{1}{4}\Delta y^2}\, \mathrm{d}x'\, \mathrm{d}z',$$ $$\bar{s}_3=\iint_{S_3} \frac{(x'-x_a,y'-y_j)}{(x'-x_a)^2+(y'-y_j)^2+\frac{1}{4}\Delta z^2}\, \mathrm{d}x'\, \mathrm{d}y',$$ $$\bar{s}_4=\iint_{S_4} x^2 \frac{1}{\Delta x^2+(y'-y_j)^2+(z'-z_k)^2}\, \mathrm{d}y'\, \mathrm{d}z'.$$ Thus for the calculations of $\bar{s}_2$ and $\bar{s}_3$, we consider a general form \begin{equation}\label{f23} \iint_{S}\frac{\bar {\bf r}}{\bar r^2+A^2}\, \mathrm{d}S, \end{equation} where $A$ is a constant, $\bar {\bf r}$ is a 2-component vector on surface S, and $\bar r=|\bar {\bf r}|.$ We want to apply the divergence theorem on (\ref{f23}), therefore we need to find a function $\varphi(\bar r)$ that satisfies $$\nabla \cdot (\bar{\bf r} \bar{\bf r} \varphi(\bar r))=\frac{\bar{\bf r}}{\bar r^2+A^2}.$$ Solving the above equation, we get \begin{equation*}\label{f2s2} \varphi(\bar r)=-\frac{A\tan^{-1}(\frac{\bar r}{A})}{\bar r^3}+\frac{1}{\bar r^2}. 
\end{equation*} For $\bar{s}_2,$ $$S=S_2, $$ $$A=\frac{1}{2}\Delta y,$$ and $$\bar{\bf r}=(x'-x_a,z'-z_k),$$ and because of the singularity on $S_2,$ we can not use the divergence theorem directly, however we can write \begin{equation*} \begin{split} \bar{s}_2=\sum_{n=2}^4\int_{L_{2n}}\varphi(\bar r)\bar {\bf r}\bar {\bf r}\cdot\bar {\bf n}_n\, \mathrm{d}L+\lim_{\epsilon\rightarrow 0}\int_{L_\epsilon} \varphi(\bar r)\bar {\bf r}\bar {\bf r}\cdot \bar {\bf n}_\epsilon\, \mathrm{d}L+\int_{L_\Omega} \varphi(\bar r)\bar {\bf r}\bar {\bf r}\cdot\bar {\bf n}_1\, \mathrm{d}L, \end{split} \end{equation*} where $L_{2n}$ are edges of $S_2.$ $L_\epsilon$ is a semicircle with radius $\epsilon$ centered at point ${\bar { \bf x}}$ and $L_\Omega$ is the rest of $L_{21}$. $\bar {\bf n}_\epsilon$ is the unit normal of $L_{\epsilon},$ pointing out of $S_2.$ $\bar {\bf n}_n$ is the unit normal of $L_{2n},$ pointing out of $S_2$. Geometry is illustrated in figure \ref{s2}. For the integral over $L_\Omega,$ we have $$\bar {\bf r}=(0, z'-z_k)$$ and $$\bar {\bf n}_1=(-1,0),$$ so that $$\int_{L_\Omega}\varphi(\bar r)\bar {\bf r}\bar {\bf r}\cdot\bar {\bf n}_1\, \mathrm{d}L=(0,0).$$ For the integral over $L_\epsilon,$ using the polar coordinates, we have $${\bf r}=\epsilon (\cos \theta , \sin \theta),$$ and $${\bf n}_\epsilon=-(\cos \theta , \sin \theta),$$ so that \begin{equation*} \begin{split} \int_{L_\epsilon}\varphi(\bar r)\bar {\bf r}\bar {\bf r}\cdot\bar {\bf n}_\epsilon\, \mathrm{d}L&=-\lim_{\epsilon\rightarrow 0}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} -\epsilon^3(\cos\theta,\sin \theta)(\cos\theta,\sin \theta)\\ &(-\frac{\frac{1}{2}\Delta y\tan^{-1}(\frac{\epsilon}{\frac{1}{2}\Delta y})}{\epsilon^3 }+\frac{1}{\epsilon^2}) \cdot (\cos\theta,\sin \theta)\, \mathrm{d}\theta\\ &=(0,0). \end{split} \end{equation*} There is no singularity on $L_{22}, L_{23}$ and $L_{24},$ finally, \begin{equation*} \bar{s}_2=(I_1,0), \end{equation*} where \begin{equation*} I_1=\int_{x_a}^{x_a+\Delta x}\Delta z (x'-x_a)\varphi(r_1)\, \mathrm{d}x'+\int_{z_k-\Delta z}^{z_k+\Delta z}\Delta x^2\varphi(r_2)\, \mathrm{d}z', \end{equation*} with $$ r_1=\sqrt{(x'-x_a)^2+\frac{1}{4}\Delta z^2},$$ and $$ r_2=\sqrt{\Delta x^2+(z'-z_k)^2}.$$ The calculation of $\bar{s}_3$ is similar to the one of $\bar{s}_2$ with the final result \begin{equation*} \bar{s}_3=(I_2,0), \end{equation*} where \begin{equation*} I_2=\int_{x_a}^{x_a+\Delta x}\Delta y (x'-x_a)\varphi(r_3)\, \mathrm{d}x'+\int_{y_j-\Delta y}^{y_j+\Delta y}\Delta x^2\varphi(r_4)\, \mathrm{d}y', \end{equation*} with $$ r_3=\sqrt{(x'-x_a)^2+\frac{1}{4}\Delta y^2},$$ and $$ r_4=\sqrt{\Delta x^2+(y'-y_j)^2}.$$ For the integral $\bar{s}_4,$ defining \begin{equation*} \bar{ \bf r}=(y'-y_j,z'-z_k), \end{equation*} and $${\bar r}=|\bar{ \bf r}|,$$ we need to find a function $\varphi(\bar r)$ that satisfies $$\nabla \cdot (\bar {\bf r}\varphi(\bar r))=\frac{1}{\bar r^2+\Delta x^2}.$$ Solving the above equation gives \begin{equation*}\label{f2s4} \varphi(\bar r)=\frac{\ln(\bar r^2+\Delta x^2)}{2\bar r^2}. 
\end{equation*} Because of the singularity at point ${\bar {\bf x}}=(y_j, z_k),$ we write $$ \bar{s}_4=\lim_{\epsilon\rightarrow 0}\int_{L_{\epsilon}}\varphi(\bar r)\bar {\bf r}\cdot\bar {\bf n}_\epsilon\,\mathrm{d}L+\int_{L_{\Omega}}\varphi(\bar r)\bar {\bf r}\cdot \bar {\bf n}_\Omega\,\mathrm{d}L,$$ where $L_{\epsilon}$ is a circle with radius $\epsilon$ centered at ${\bar {\bf x}}$ and $L_{\Omega}$ is the four edges of surface $S_4.$ $\bar {\bf n}_\epsilon$ is the unit normal vector of $L_\epsilon,$ $ \bar {\bf n}_\Omega$ is the unit normal vector of $L_\Omega,$ as shown in figure \ref{s4}. For the integral over $L_{\epsilon}$, we use the polar coordinates, $$\bar {\bf r}=\epsilon (\cos\theta,\sin\theta),$$ and $${\bf n}_\epsilon=- (\cos\theta,\sin\theta),$$ then \begin{equation*} \begin{split} \lim_{\epsilon\rightarrow 0}\int_{L_{\epsilon}}\varphi(\bar r)\bar {\bf r}\cdot\bar {\bf n}_\epsilon\,\mathrm{d}L&=\lim_{\epsilon\rightarrow 0}\int_{0}^{2\pi}-\epsilon^2(\cos\theta,\sin\theta)\frac{\ln(\epsilon^2+\Delta x^2)}{2\epsilon^2}(\cos\theta,\sin\theta)\, \mathrm{d}\theta\\ &=-\pi\ln(\Delta x^2). \end{split} \end{equation*} For the integral over $L_{\Omega},$ there is no singularity any more and this leads to \begin{equation*} \begin{split} \int_{l_{\Omega}}\varphi(\bar r)\bar {\bf r}\cdot \bar {\bf n}_\Omega\,\mathrm{d}L&=\frac{1}{2}\Delta y\int_{z_k-\Delta z}^{z_k+\Delta z}\frac{\ln(\frac{1}{4}\Delta y^2+\Delta x^2+(z'-z_k)^2)}{\frac{1}{4}\Delta y^2+(z'-z_k)^2}\, \mathrm{d}z'\\ &+\frac{1}{2}\Delta z\int_{y_j-\Delta y}^{y_j+\Delta y}\frac{\ln(\frac{1}{4}\Delta z^2+\Delta x^2+(y'-y_j)^2)}{\frac{1}{4}\Delta z^2+(y'-y_j)^2}\, \mathrm{d}y'\\ &=I_3. \end{split} \end{equation*} Combining all the above calculations, we finally get, $${\bf f}_2=\frac{1}{2}(\Delta y I_1+\Delta z I_2+\Delta x^2 (I_3-\pi\ln(\Delta x^2)), 0, 0).$$ All the line integrals $I_1,$ $I_2,$ $I_3$ are non-singular and can be calculated using numerical integration.\\ \subsection{Calculation of ${\bf f}_3$} \begin{equation}\label{f3} {\bf f}_3=\iiint_{V_{i,j,k}}\frac{{\bf x}'-{\bf x}_p}{|{\bf x}'-{\bf x}_p|^3}\,\mathrm{d}V. \end{equation} The components of the integration variable in (\ref{f3}) are given by $$ {\bf x'}=(x', y', z'), $$ and let us introduce the quantity $$ {\bf r}={\bf x}'-{\bf x}_p, $$ with $r=|{\bf r}|.$ We want to apply the divergence theorem on (\ref{f3}), and therefore need to find a function $\varphi(r)$ that satisfies $$\nabla \cdot ({\bf r} {\bf r} \varphi(r))=\frac{{\bf r}}{r^3},$$ thus we have $$\nabla \cdot ({\bf r} {\bf r}) \varphi(r)+{\bf r} {\bf r}\cdot \nabla\varphi(r)=4{\bf r} \varphi(r)+r{\bf r} \varphi'(r)=\frac{{\bf r} }{r^3} .$$ Solving the above equation gives \begin{equation*} \varphi(r)=\frac{1}{r^3}. \end{equation*} Because of the singularity on surface $S_1$, we can not apply the divergence theorem directly, however we can write \begin{equation*}\label{f3i} \begin{split} {\bf f}_3&=\sum_{m=2}^6 \iint_{S_m} \frac{{\bf r}{\bf r}}{ r^3}\cdot {\bf n}_m\, \mathrm{d} S+\lim_{\epsilon \rightarrow 0}\iint_{S_\epsilon}\frac{{\bf r}{\bf r}}{ r^3}\cdot {\bf n}_\epsilon\,\mathrm{d}S+\iint_{S_\Omega}\frac{{\bf r}{\bf r}}{ r^3}\cdot {\bf n}_1\,\mathrm{d}S, \end{split} \end{equation*} where $S_\epsilon$ is a hemispherical surface of radius $\epsilon$ centered at ${\bf x}_p$ and $S_\Omega$ is the rest of the surface $S_1$ with a disk of radius $\epsilon$ around ${\bf x}_p$ has been removed. 
${\bf n}_\epsilon$ is the unit normal vector on $S_\epsilon,$ pointing out of $V_{i,j,k}.$ ${\bf n}_m$ is the unit normal vector on $S_m,$ pointing out of $V_{i,j,k}.$ For the integral over $S_\Omega$, we have $${\bf r}=(0, y'-y_j, z'-z_k),$$ and $${\bf n}_1=(-1,0,0) ,$$ thus we get \begin{equation*}\label{f3s11} \begin{split} \iint_{S_\Omega}\frac{{\bf r}{\bf r}}{r^3}\cdot {\bf n}\,\mathrm{d}S=(0, 0, 0). \end{split} \end{equation*} For the integral over surface $S_\epsilon$, we use the spherical coordinate system, $${\bf r}=\epsilon (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi),$$ and $${\bf n}_\epsilon= (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi),$$ where $\epsilon, \varphi, \theta$ are respectively the radial distance, polar angle and azimuthal angle, so that \begin{equation*}\label{f3s12} \begin{split} \iint_{S_\epsilon}\frac{1}{2 r^2} {\bf r}{\bf r}\cdot {\bf n}_\epsilon\,\mathrm{d}S=&\lim_{\epsilon \rightarrow 0}\int_{0}^{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{1}{2 r^2} {\bf r}{\bf r}\cdot {\bf n}_\epsilon \epsilon^2 \sin \varphi \,\mathrm{d} \theta \,\mathrm{d} \varphi=(0, 0, 0). \end{split} \end{equation*} Defining \begin{equation*} \begin{split} s_m=\iint_{S_m} \frac{{\bf r}{\bf r}}{ r^3}\cdot {\bf n}_m\, \mathrm{d} S, \end{split} \end{equation*} ${\bf f}_3$ can be written as \begin{equation*} \begin{split} {\bf f}_3=\sum_{m=2}^6 s_m. \end{split} \end{equation*} Due to the symmetry of ${\bf r}$ in $V_{i,j,k}$ along $y $ and $z$ direction , we have $$s_2= s_5$$ and $$s_3=s_6.$$ So we have the following, \begin{equation*} \begin{split} {\bf f}_3&=\iint_{S_2}\Delta y \frac{(x'-x_a,0,z'-z_k)}{((x'-x_a)^2+(z'-z_k)^2+\frac{1}{4}\Delta y^2)^{\frac{3}{2}}}\,\mathrm{d}x'\,\mathrm{d}z'\\ &+\iint_{S_3}\Delta z \frac{(x'-x_a,y'-y_j,0)}{((x'-x_a)^2+(y'-y_j)^2+\frac{1}{4}\Delta z^2)^{\frac{3}{2}}}\,\mathrm{d}x'\,\mathrm{d}y'\\ &+\iint_{S_4}\Delta x \frac{(\Delta x,0,0)}{(\Delta x^2+(y'-y_j)^2+(z'-z_k)^2)^{\frac{3}{2}}}\,\mathrm{d}y'\,\mathrm{d}z', \end{split} \end{equation*} For computation simplicity, we define $${\bar s_2}=\iint_{S_2}\frac{(x'-x_a,z'-z_k)}{((x'-x_a)^2+(z'-z_k)^2+\frac{1}{4}\Delta y^2)^{3/2}}\,\mathrm{d}x'\,\mathrm{d}z',$$ $${\bar s_3}=\iint_{S_3} \frac{(x'-x_a,y'-y_j)}{((x'-x_a)^2+(y'-y_j)^2+\frac{1}{4}\Delta z^2)^{3/2}}\,\mathrm{d}x'\,\mathrm{d}y',$$ and $${\bar s_4}=\iint_{S_4} \frac{\Delta x}{(\Delta x^2+(y'-y_j)^2+(z'-z_k)^2)^{3/2}}\,\mathrm{d}y'\,\mathrm{d}z'.$$ Thus for the calculations of ${\bar s}_2$ and ${\bar s}_3$, we consider a general form \begin{equation}\label{f33} \iint_{S}\frac{\bar {\bf r}}{(\bar r^2+A^2)^{3/2}}\,\mathrm{d}S, \end{equation} where $\bar {\bf r}$ is a 2-component vector, $A$ is a constant and $\bar r=\bar {\bf r}.$ We want to apply the divergence theorem on (\ref{f33}), thus we need to find a function $\varphi(\bar r)$ that satisfies $$\nabla \cdot (\bar {\bf r} \bar {\bf r} \varphi(\bar r))=\frac{\bar {\bf r}}{(\bar r^2+A^2)^{3/2}}.$$ Solving the above equation, we get \begin{equation*}\label{f3s2} \varphi(\bar r)=\frac{\log(\sqrt{\bar r^2+A^2}+\bar r)}{\bar r^3}-\frac{1}{\bar r^2\sqrt{\bar r^2+A^2}}. 
\end{equation*} For ${\bar s_2},$ $$S=S_2,$$ $$A=\frac{1}{2}\Delta y,$$ and $${\bar {\bf x}}=(x'-x_a,z'-z_k),$$ because of the singularity on $ S_2,$ we write \begin{equation*} \begin{split} {\bar s_2}= \sum_{n=2}^4 \int_{L_{2n}}\varphi(\bar r){\bf r}\bar {\bf r}\cdot \bar {\bf n}_n\,\mathrm{d}L+\lim_{\epsilon \rightarrow 0}\int_{L_\epsilon} \varphi(\bar r)\bar {\bf r}\bar {\bf r}\cdot \bar {\bf n}_\epsilon\,\mathrm{d}L+\int_{L_\Omega} \varphi(\bar r)\bar {\bf r} \bar {\bf r}\cdot \bar {\bf n}_1\,\mathrm{d}L, \end{split} \end{equation*} where $L_{2n}$ are edges of $S_2.$ $L_\epsilon$ is a semicircle with radius $\epsilon$ centered at point ${\bar { \bf x}}$ and $L_\Omega$ is the rest of $L_{21}$. $\bar {\bf n}_\epsilon$ is the unit normal of $L_{\epsilon},$ pointing out of $S_2.$ $\bar {\bf n}_n$ is the unit normal of $L_{2n},$ pointing out of $S_2$. Geometry is illustrated in figure \ref{s2}. For the integral over $L_\Omega,$ we have $$\bar {\bf r}=(0, z'-z_k)$$ and $$\bar {\bf n}=(-1,0),$$ so that $$\int_{L_\Omega} \varphi(\bar r)\bar {\bf r}\bar {\bf r}\cdot \bar {\bf n}_1\,\mathrm{d}L=(0,0).$$ For the integral over $L_\epsilon,$ using the polar coordinates, we have $$\bar {\bf r}=\epsilon (\cos \theta , \sin \theta),$$ and $$\bar {\bf n}_\epsilon=-(\cos \theta , \sin \theta),$$ then \begin{equation*} \begin{split} &\int_{L_\epsilon} \varphi(\bar r)\bar {\bf r} \bar {\bf r}\cdot \bar {\bf n}_1\,\mathrm{d}L\\ &=\lim_{\epsilon\rightarrow 0}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}-\epsilon^2(\cos\theta,\sin \theta)^3 (-\frac{1}{\epsilon^2(\epsilon^2+\frac{1}{4}\Delta y^2)^{\frac{1}{2}}}+\frac{\log(\sqrt{\epsilon^2+\frac{1}{4}\Delta y^2}+\epsilon)}{\epsilon^3})\,\mathrm{d}\theta\\ &=(-2\log(\frac{1}{2}\Delta y), 0), \end{split} \end{equation*} There is no singularity on $L_{22}, L_{23}$ and $L_{24}$ any more, finally, $$ {\bar s_2}=(I_1-2\log(\frac{1}{2}\Delta y), 0),$$ where \begin{equation*} I_1=\int_{x_a}^{x_a+\Delta x}\Delta z (x'-x_a)\varphi(r_1)\, \mathrm{d}x'+\int_{z_k-\Delta z}^{z_k+\Delta z}\Delta x^2\varphi(r_2)\, \mathrm{d}z', \end{equation*} with $$ r_1=\sqrt{(x'-x_a)^2+\frac{1}{4}\Delta z^2},$$ and $$ r_2=\sqrt{\Delta x^2+(z'-z_k)^2}.$$ The calculation of $\bar{s}_3$ is similar to the one of $\bar{s}_2$ with the final result \begin{equation*} \bar{s}_3=(I_2-2\log(\frac{1}{2}\Delta z),0), \end{equation*} where \begin{equation*} I_2=\int_{x_a}^{x_a+\Delta x}\Delta y (x'-x_a)\varphi(r_3)\, \mathrm{d}x'+\int_{y_j-\Delta y}^{y_j+\Delta y}\Delta x^2\varphi(r_4)\, \mathrm{d}y', \end{equation*} with $$ r_3=\sqrt{(x'-x_a)^2+\frac{1}{4}\Delta y^2},$$ and $$ r_4=\sqrt{\Delta x^2+(y'-y_j)^2}.$$ For the integral ${\bar s_4},$ defining $$\bar {\bf r}=(y'-y_j, z'-z_k)$$ and $$\bar r=|\bar {\bf r}|,$$ we seek a function that satisfies $$\nabla \cdot (\bar {\bf r}\varphi(\bar r))=\frac{1}{(\bar r^2+\Delta x^2)^{3/2}}.$$ Solving this equation gives \begin{equation*}\label{f3s4} \varphi(\bar r)=-\frac{1}{\bar r^2\sqrt{\bar r^2+\Delta x^2}}. 
\end{equation*} Because of the singularity at the point ${\bar {\bf x}}=(y_j, z_k),$ we write $$ {\bar s_4}=\lim_{\epsilon\rightarrow 0}\int_{L_{\epsilon}}\varphi(\bar r)\bar {\bf r}\cdot \bar {\bf n}_\epsilon\,\mathrm{d}L+\int_{L_{\Omega}}\varphi(\bar r)\bar {\bf r}\cdot \bar {\bf n}_\Omega\,\mathrm{d}L,$$ where $L_{\epsilon}$ is a circle with radius $\epsilon$ centered at ${\bar {\bf x}}$ and $L_{\Omega}$ consists of the four edges of the surface $S_4.$ $\bar {\bf n}_\epsilon$ is the unit normal vector of $L_\epsilon$ and $ \bar {\bf n}_\Omega$ is the unit normal vector of $L_\Omega,$ as shown in figure \ref{s4}. For the integral over $L_{\epsilon}$, we use polar coordinates, $$\bar {\bf r}=\epsilon (\cos\theta,\sin\theta),$$ and $$\bar {\bf n}_\epsilon=- (\cos\theta,\sin\theta),$$ and obtain \begin{equation*} \lim_{\epsilon\rightarrow 0}\int_{L_{\epsilon}}\varphi(\bar r)\bar {\bf r}\cdot \bar {\bf n}_\epsilon\,\mathrm{d}L=\frac{2\pi}{\Delta x}. \end{equation*} For the integral over $L_{\Omega},$ there is no singularity anymore, and this gives \begin{equation*} \begin{split} &\int_{L_{\Omega}}\varphi(\bar r)\bar {\bf r}\cdot \bar {\bf n}_\Omega\,\mathrm{d}L\\ &=-\Delta y\int_{z_k-\frac{1}{2}\Delta z}^{z_k+\frac{1}{2}\Delta z}\frac{1}{(\frac{1}{4}\Delta y^2+(z'-z_k)^2)\sqrt{\Delta x^2+\frac{1}{4}\Delta y^2+(z'-z_k)^2}}\,\mathrm{d}z'\\ &-\Delta z\int_{y_j-\frac{1}{2}\Delta y}^{y_j+\frac{1}{2}\Delta y}\frac{1}{(\frac{1}{4}\Delta z^2+(y'-y_j)^2)\sqrt{\Delta x^2+\frac{1}{4}\Delta z^2+(y'-y_j)^2}}\,\mathrm{d}y'\\ &=I_3. \end{split} \end{equation*} Combining all the calculations above, we finally get \begin{equation*} \begin{split} {\bf f}_3&=(\Delta y I_1+\Delta z I_2-2 \Delta y \log(\frac{1}{2}\Delta y)-2 \Delta z \log(\frac{1}{2}\Delta z)\\ &+\Delta x^2 I_3+2\pi \Delta x , 0, 0). \end{split} \end{equation*} All the line integrals $I_1,$ $I_2,$ $I_3$ are non-singular and can be calculated using numerical integration. \subsection{Calculations of singular surface integrals} When the observation point ${\bf x}_p$ and the integration point are both located on the same surface, for instance $S_1,$ as shown in figure \ref{box}, the surface integrals \begin{equation}\label{ss1} g_1=\iint_{S_1} \frac{1}{|{\bf x}'-{\bf x}_p|}\,\mathrm{d}S, \end{equation} \begin{equation}\label{ss2} {\bf g}_2=\iint_{S_1} \frac{{\bf x}'-{\bf x}_p}{|{\bf x}'-{\bf x}_p|^2}\,\mathrm{d}S, \end{equation} \begin{equation}\label{ss3} {\bf g}_3=\iint_{S_1} \frac{{\bf x}'-{\bf x}_p}{|{\bf x}'-{\bf x}_p|^3}\,\mathrm{d}S, \end{equation} are singular. Here $${\bf x}'-{\bf x}_p=(0, y'-y_j, z'-z_k).$$ Defining $$ \bar {\bf r}=(y'-y_j, z'-z_k),$$ and $$\bar r=|\bar {\bf r}|,$$ we apply the divergence theorem to (\ref{ss1}), and thus need to find a function $\varphi(\bar r)$ that satisfies $$\nabla \cdot (\bar {\bf r}\varphi( \bar r))=\frac{1}{ \bar r},$$ or equivalently $$2\varphi( \bar r)+ \bar r\varphi'( \bar r)=\frac{1}{ \bar r}.$$ Solving the above equation, we get $$\varphi( \bar r)=\frac{1}{ \bar r}.$$ Since the contribution from a small circle around the singular point vanishes in the limit of vanishing radius, $g_1$ is thus turned into \begin{equation*} g_1=\sum_{n=1}^4\int_{L_n}\frac{{\bf x}'-{\bar {\bf x}}}{|{\bf x}'-{\bar {\bf x}}|} \cdot {\bf n}_n\,\mathrm{d} L, \end{equation*} where ${\bar {\bf x}}=(y_j, z_k)$ and ${\bf n}_n$ is the unit normal of $L_n.$ The integrand is no longer singular, and $g_1$ can be calculated using numerical integration.
For (\ref{ss2}) and (\ref{ss3}), due to the symmetry of the vector ${\bf x}'-{\bf x}_p$ on $S_1,$ we have $$ {\bf g}_2=(0, 0, 0),$$ and $$ {\bf g}_3=(0, 0, 0).$$ \section{Parallelization}\label{Parallelization} This paper closely follows \cite{Aihua2}, and we therefore directly address the final numerical solving system of the EOS formulations of the 3D Maxwell's equations. For the inside domain, the updating rule follows (\ref{3dmatrixequation}). For the boundary part, the discretized boundary integral identities are represented by \begin{align}\label{boundarydiscretization3} M_1 \left( \begin{array} [c]{c} {\bf E}_p^n \\ {\bf B}_p^n \end{array} \right)& =\left( \begin{array} [c]{c} {\bf E}_R \\ {\bf B}_R \end{array} \right), \end{align} where $\left( \begin{array} [c]{c} {\bf E}_p^n\\ {\bf B}_p^n \end{array} \right)$ is the solution at the surface point ${\bf x}_p$ at time $t^n$ and $\left( \begin{array} [c]{c} {\bf E}_R \\ {\bf B}_R \end{array} \right)$ collects the integrals in the boundary integral representations after the unknowns have been moved to the left-hand side of the equations. From equation (\ref{3dmatrixequation}) it is easy to see that updating the inside domain at the current time level involves only values from one time step earlier. The solutions at the surface point ${\bf x}_p$ in (\ref{boundarydiscretization3}), on the other hand, require both the historical values of the current density and the charge density and the historical field values of all the surface points, because of the retarded integrals involved. Therefore the part of the code calculating the surface solution dominates both the memory usage and the processor usage. The calculations are therefore parallelized by partitioning the surface into pieces and distributing the pieces to separate processors, while the interior of the scattering object resides on every processor.
The updating processes are illustrated by the following C code, where \begin{equation*} \begin{split} &\text{p} : \quad \text{index of surface point}\\ &\text{n} : \quad \text{index of time level}\\ &\text{es, bs} : \quad \text{field solutions on the surface up to time}\, \,t^{n-1}\\ &\text{e, b} : \quad \text{field solutions of the inside domain at time}\, \,t^{n}\\ &\text{el, bl} : \quad \text{field solutions of the inside domain at time}\, \,t^{n-1}\\ &\text{J, P} : \quad \text{current density and charge density up to time}\, \,t^{n-1}\\ &\text{UpdateS(p,n,J,P,es,bs)}: \quad \text{update surface solutions at ${\bf x}_p$ at time} \, \,t^{n}\\ &\text{UpdateV(e,b,el,bl,es,bs,J,P,n)}: \quad \text{update inside solutions at time} \, \,t^{n} \end{split} \end{equation*} \begin{lstlisting}[style=CStyle]
int rank,size;                 // processor id and number of processors
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&size);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
int Nt,Ns,Nss,lp,lsize;        // Nt: time steps, Ns: surface points, Nss=3*Ns (set elsewhere)
int p,n,i,index,indexeb;
lp=Ns/size;                    // surface points handled by this rank
lsize=3*lp;                    // three field components per surface point
double lse[lsize],lsb[lsize];  // local E and B contributions of this rank
double *esurface,*bsurface;
esurface = (double *)malloc(Nss*sizeof(double));
bsurface = (double *)malloc(Nss*sizeof(double));
for(n=0;n<Nt;n++){
  for(p=rank*lp;p<(rank+1)*lp;p++){
    gsl_vector *lresult;       // six components (E,B) at x_p, allocated by UpdateS
    // update the surface values at each grid point in parallel
    lresult=UpdateS(p,n,J,P,es,bs);
    // local offset of point p in lse/lsb (reconstructed; this statement is truncated in the source)
    index=(p-rank*lp)*3;
    for(i=0;i<3;i++){
      indexeb=index+i;
      lse[indexeb]=gsl_vector_get(lresult,i);
      lsb[indexeb]=gsl_vector_get(lresult,i+3);
    }
    gsl_vector_free(lresult);
  }
  //collect data from all processes
  MPI_Allgather( lse, lsize, MPI_DOUBLE, esurface, lsize, MPI_DOUBLE, MPI_COMM_WORLD);
  MPI_Allgather( lsb, lsize, MPI_DOUBLE, bsurface, lsize, MPI_DOUBLE, MPI_COMM_WORLD);
  //updating the whole surface
  for(p=0;p<Nss;p++){
    gsl_matrix_set(es,p,n,*(esurface+p));
    gsl_matrix_set(bs,p,n,*(bsurface+p));
  }
  //update the inside domain by the domain based method supported by the surface values
  UpdateV(e,b,el,bl,es,bs,J,P,n);
}
MPI_Finalize();
\end{lstlisting} \end{appendices} \end{document}
\begin{document} \title{Zero forcing number versus general position number in tree-like graphs} \author{ Hongbo Hua $^{a}$\,\thanks{corresponding author} \and Xinying Hua $^{b}$ \and Sandi Klav\v zar $^{c,d,e}$ } \date{\today} \maketitle \begin{center} $^a$ Faculty of Mathematics and Physics, Huaiyin Institute of Technology \\ Huai'an, Jiangsu 223003, PR China \\ \texttt{hongbo\[email protected]} \\ $^b$ College of Science, Nanjing University of Aeronautics \& Astronautics \\ Nanjing, Jiangsu 210016, PR China \\ \texttt{[email protected]} \\ $^c$ Faculty of Mathematics and Physics, University of Ljubljana, Slovenia\\ \texttt{[email protected]}\\ $^d$ Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia\\ $^e$ Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia\\ \end{center} \begin{abstract} Let ${\rm Z}(G)$ and ${\rm gp}(G)$ be the zero forcing number and the general position number of a graph $G$, respectively. Known results imply that ${\rm gp}(T)\ge {\rm Z}(T) + 1$ holds for every nontrivial tree $T$. It is proved that the result extends to block graphs. For connected, unicyclic graphs $G$ it is proved that ${\rm gp}(G) \ge {\rm Z}(G)$. The result extends neither to bicyclic graphs nor to quasi-trees. Nevertheless, a large class of quasi-trees is found for which ${\rm gp}(G) \ge {\rm Z}(G)$ holds. \end{abstract} \noindent {\bf Keywords:} zero forcing number; general position number; tree; unicyclic graph; quasi-tree \\ \noindent AMS Subj.\ Class.\ (2020): 05C69, 05C12 \section{Introduction} In linear algebra, the zero forcing number of a graph was introduced in~\cite{SGWGroup} to bound the minimum rank of matrices associated with graphs. In physics, zero forcing was introduced to study controllability of quantum systems~\cite{Burgarth}; in computer science, it appears as the fast-mixed search model for some pursuit-evasion games~\cite{Yang}; in network science, it models the spread of a disease over a population~\cite{Dreyer}. Since its introduction by the ``AIM group'' in~\cite{SGWGroup}, the zero forcing number has become a graph parameter that is widely investigated in its own right. In 2008, Aazami \cite{Aazami} proved the NP-hardness of computing the zero forcing number of a graph. So, it makes sense to establish sharp bounds on the zero forcing number for general graphs and to derive formulas for special graphs, see \cite{Ferrero,Javaid,Kang1,Lu,Oboudi} for a selection of relevant results. The general position number of a graph was introduced in~\cite{Manuel}. A couple of years earlier, however, the invariant was considered, in different terminology, in~\cite{UllasChandran}. Moreover, in the special case of hypercubes it was studied much earlier in~\cite{korner-1995}. In~\cite{Anand}, general position sets in graphs were characterized. Several additional papers on the concept followed, many of them dealing with bounds on the general position number and exact results in product graphs, Kneser graphs, and more, see~\cite{Ghorbani, Klavzar2, Klavzar1, neethu-2021, Patkos, thomas-2020, Tian, tian-2021}. In addition, the concept was very recently extended to the Steiner general position number~\cite{Klavzar-Kuziak}. Motivated by the comparative results between the zero forcing number and the metric dimension, one of the central concepts of metric graph theory, obtained for trees and unicyclic graphs~\cite{Eroh1, Eroh2}, we consider here the relation between the zero forcing number and the general position number.
Now, from~\cite[Theorem 2]{Oboudi} we know that if $T$ is a tree on at least two vertices, then ${\rm Z}(T)\leq \ell(T)-1$, where $\ell(T)$ is the number of leaves of $T$. On the other hand, it was observed in~\cite[Corollary 3.7]{Manuel} that ${\rm gp}(T ) = \ell(T)$. Hence, if $T$ is a tree on at least two vertices, then \begin{equation} \label{eq:starting-point} {\rm gp}(T )\geq {\rm Z}(T)+1\,. \end{equation} This relation prompted us to investigate whether there are additional, larger families for which the zero forcing number is a lower bound for the general position number. We proceed as follows. In the next subsection the concepts studied are formally introduced and additional definitions stated. In Section~\ref{sec:unicyclic} we prove that if $G$ is a connected, unicyclic graph, then ${\rm gp}(G) \ge {\rm Z}(G)$. We also demonstrate that the inequality does not extend to bicyclic graphs. In Section~\ref{sec:block-graphs-quasi-trees} we first prove that~\eqref{eq:starting-point} holds for arbitrary block graphs. Then we show that the zero forcing number and the general position number are in general not related on quasi-trees. On the other hand, a large class of quasi-trees is found for which ${\rm gp}(G) \ge {\rm Z}(G)$ holds. We conclude the paper with three open problems. \subsection{Definitions} The order and the size of a graph $G = (V(G), E(G))$ will be respectively denoted by $n(G)$ and $m(G)$. Let $G$ be a connected graph. If $m(G) = n(G)-1$, then $G$ is a {\em tree}, if $m(G)=n(G)$, then $G$ is a {\em unicyclic graph}, and if $m(G)=n(G)+1$, then $G$ is a {\em bicyclic graph}. If $G$ contains a vertex $v$ such that $G-v$ is a tree, then $G$ is a {\em quasi-tree}, and the vertex $v$ is a {\em quasi-vertex} of $G$. A connected graph $G$ is a {\em block graph} if each 2-connected component of $G$ is a clique. Let $\mathcal{L}(G)$ denote the set of pendent vertices of $G$, so that $\ell(G)=|\mathcal{L}(G)|$. For a graph $G$, assume that each of its vertices is given one of two colors, black or white by convention. Let $S$ denote the (initial) set of black vertices of $G$. The {\em color-change rule} changes the color of a white vertex $y$ to black if $y$ is the only white neighbor of a black vertex $x$; in this case we say that $x$ forces $y$. Obviously, at each step of the color change, there may be two or more vertices capable of forcing the same vertex. The \emph{zero forcing number} ${\rm Z}(G)$ of $G$ is the minimum cardinality of a set $S$ of black vertices (while all vertices of $V(G)\setminus S$ are colored white) such that all vertices of $V(G)$ are turned black after finitely many applications of the color-change rule. The distance $d_{G}(u, v)$ is the length of a shortest $u,v$-path in $G$. The interval $I_{G}(u, v)$ between vertices $u$ and $v$ is the vertex subset consisting of all vertices lying on shortest $u,v$-paths. A vertex subset $R$ of a graph $G$ is a {\em general position set} if no three vertices from $R$ lie on a common shortest path. The \emph{general position number} (${\rm gp}$-number for short) ${\rm gp}(G)$ of $G$ is the number of vertices in a largest general position set of $G$. For convenience, we say that a largest general position set is a ${\rm gp}$-set. For a positive integer $k$ we will use the notation $[k] = \{1,\ldots, k\}$. \section{Unicyclic graphs} \label{sec:unicyclic} In this section we prove a result parallel to~\eqref{eq:starting-point} for unicyclic graphs.
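As a small illustration of the two invariants in the unicyclic setting (this example is not needed for the proofs), consider the graph $G_{0}$, sometimes called the paw, obtained from a triangle $abc$ by attaching a pendent vertex $d$ to $a$. Starting from any single black vertex, the color-change rule performs at most one forcing step, so ${\rm Z}(G_{0})\geq 2$, while $\{b,d\}$ is a zero forcing set ($d$ forces $a$, after which $a$ forces $c$); hence ${\rm Z}(G_{0})=2$. On the other hand, no shortest path contains all three of $b$, $c$, $d$, so $\{b,c,d\}$ is a general position set, while the whole vertex set is not (the shortest $b,d$-path passes through $a$); hence ${\rm gp}(G_{0})=3\geq {\rm Z}(G_{0})$, in accordance with Theorem~\ref{thm:unicyclic} below.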
Before stating and proving the result, we demonstrate that it cannot be extended to bicyclic graphs. Let $H_1$ be the top bicyclic graph from Fig.~\ref{fig:bicyclic}, and let $H_2$ be the bottom bicyclic graph from the same figure. $H_1$ actually represents a two-parameter family of graphs, but we will assume that $s$ and $t$ are fixed and denote the representative simply by $H_1$. \begin{figure} \caption{Bicyclic graphs $H_{1}$ and $H_{2}$} \label{fig:bicyclic} \end{figure} The set $\{u_{1},\ldots, u_{s}, x, y, v_{1}, \ldots, v_{t-1}\}$ is a minimum zero forcing set of $H_{1}$. Also, it can be seen that for $2\leq s+t\leq 3$, the set $\{x, y, z, w\}$ is a gp-set of $H_{1}$, while for $ s+t\geq 4$, the set $\{u_{1}, \ldots, u_{s}, v_{1}, \ldots, v_{t}\}$ is a gp-set of $H_{1}$. Thus, if $s+t\geq 4$, then ${\rm Z}(H_{1})=s+t+1>s+t = {\rm gp}(H_{1})$, and if $2\leq s+t\leq 3$, then ${\rm Z}(H_{1})=s+t+1\leq 4=|\{x, y, z, w\}| = {\rm gp}(H_{1})$. On the other hand, $\{v_{1}, v_{3}, v_{5}\}$ is a minimum zero forcing set of $H_{2}$ and $\{v_{1}, v_{2}, v_{4}, v_{5}\}$ is a gp-set of $H_{2}$. Thus, ${\rm Z}(H_{2}) = 3 < 4 = {\rm gp}(H_{2})$. We have thus seen that the zero forcing number and the general position number are incomparable on bicyclic graphs. On the other hand, the main result of this section asserts that the situation is different for unicyclic graphs. \begin{theorem} \label{thm:unicyclic} If $G$ is a connected, unicyclic graph, then ${\rm gp}(G)\geq{\rm Z}(G)$. \end{theorem} The rest of the section is devoted to the proof of Theorem~\ref{thm:unicyclic}. The \emph{path cover number} ${\rm P}(G)$ of a graph $G$ is the smallest positive integer $k$ such that there are $k$ vertex-disjoint induced paths in $G$ that together contain every vertex of $G$. It was proved in~\cite{Hogben} that ${\rm P}(G)\leq {\rm Z}(G)$ holds for each graph $G$. For unicyclic graphs, Row proved the following stronger result. \begin{theorem}{\rm \cite[Theorem 4.6]{Row}} \label{th0} If $G$ is a connected unicyclic graph, then ${\rm Z}(G)={\rm P}(G)$. \end{theorem} For the proof of Theorem~\ref{thm:unicyclic} we need to recall several concepts and results from~\cite{Barioli, Row}. Let $G$ be a graph and $x$ a vertex of $G$. If $G-x$ has at least two components which are paths, each joined to $x$ in $G$ at only one endpoint, then the vertex $x$ is called \emph{appropriate}. A vertex $x$ is called a \emph{peripheral leaf} if $x$ is adjacent to only one other vertex $y$, and $y$ is adjacent to no more than two vertices. The \emph{trimmed form} $\breve{G}$ of a graph $G$ is an induced subgraph of $G$ obtained by a sequence of deletions of appropriate vertices, isolated paths, and peripheral leaves until no more such deletions are possible. Barioli, Fallat, and Hogben~\cite{Barioli} proved that $\breve{G}$ is unique. If $\breve{G}$ is obtained from $G$ by performing $n_1$ deletions of appropriate vertices, $n_2$ deletions of isolated paths, and $n_3$ deletions of peripheral leaves, then ${\rm P}(G) = {\rm P}(\breve{G}) + n_2-n_1$~\cite{Barioli}. Let $C_n$ be an $n$-cycle and let $U\subseteq V(C_n)$. The graph $H$ obtained from $C_n$ by appending a leaf to each vertex of $U$ is called a \emph{partial sun}. The term \emph{segment} of $H$ will refer to any maximal subset of consecutive vertices in $U$. The segments of $H$ will be denoted $U_{1}, \ldots, U_{t}$.
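For instance (an illustrative example only), if a leaf is appended to three consecutive vertices of $C_{7}$ and to one further vertex of $C_{7}$ that is adjacent to none of them, then the resulting partial sun has $|U|=4$ and two segments, of sizes $3$ and $1$, so the formula recalled below gives the path cover number $\max\{2,\lceil \frac{3}{2}\rceil+\lceil \frac{1}{2}\rceil\}=3$.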
For a partial sun $H$ with segments $U_{1}, \ldots, U_{t}$, it was proved in~\cite{Barioli} that ${\rm P}(H) = \max\{2, \sum_{i=1}^{t}\lceil \frac{|U_{i}|}{2}\rceil\}$. The trimmed form of a unicyclic graph $G$ is either the empty graph or a partial sun \cite{Barioli}. For an example, see Fig.~\ref{fig:trimmed}, and note that $\breve{G}$ is a partial sun. \begin{figure} \caption{A unicyclic graph $G$ (left) and its trimmed form $\breve{G}$ (right)} \label{fig:trimmed} \end{figure} The following properties of unicyclic graphs with respect to their trimmed graphs will enable us to derive Theorem~\ref{thm:unicyclic}. \begin{theorem}\label{th001} Let $G$ be a connected, unicyclic graph and let $\breve{G}$, $U$, $n_{1}$, and $n_{2}$ be defined as above. Then the following hold. \begin{enumerate} \item[(i)] If $\breve{G}$ is a partial sun, then ${\rm gp}(G)\geq \max\{2, |U|\}+n_{2}-n_{1}$. \item[(ii)] If $\breve{G}$ is the empty graph, then $n_{2}-n_{1}\leq \ell(G)$. \end{enumerate} \end{theorem} \begin{proof} (i) Since $\breve{G}$ is a partial sun, $G$ is not a cycle. Let $C_{l}$ be the unique cycle of $G$. For any branch vertex $v$ on the cycle $C_{l}$, we denote by $T_{G}(v)$ the subtree of $G-\{u, w\}$ containing $v$, where $u$ and $w$ are the two neighbors of $v$ on $C_{l}$. Such a subtree $T_{G}(v)$ is called a \emph{root tree} at $v$. Let $U_{1}, \ldots, U_{t}$ be the segments of $\breve{G}$. \begin{claim}\label{clm01} If $G$ has no appropriate vertices and $x$ is a vertex of $V(\breve{G})$ with $d_{\breve{G}}(x)=2$, then $d_{G}(x)=2$. \end{claim} \begin{proof} Suppose to the contrary that $d_{G}(x)\geq 3$. Since $G$ has no appropriate vertices, $x$ is a branch vertex on $C_{l}$. Since $G$ has no appropriate vertices, $T_{G}(x)$ is a path with $x$ being one end-vertex. So, $x$ would have to be turned into a vertex of degree $2$ in $\breve{G}$ by a sequence of deletions of appropriate vertices, isolated paths, and peripheral leaves. By our assumption that $G$ has no appropriate vertices, the only trimming operation available on $T_{G}(x)$ is the repeated deletion of peripheral leaves, and this cannot turn $x$ into a vertex of degree $2$ by the definition of peripheral leaves. Thus, $d_{\breve{G}}(x)=3$, a contradiction. \end{proof} We now distinguish the following two cases. \begin{case} $G$ has no appropriate vertices. \end{case} \noindent In this case $n_{1}=n_{2}=0$. Moreover, for any branch vertex $v$ on $C_{l}$, $T_{G}(v)$ is a path with $v$ being its one end-vertex. Thus, $\ell(G)=\sum_{i=1}^{t}|U_{i}|=|U|$. When $|U|=1$, by Claim~\ref{clm01} and our assumption that $G$ has no appropriate vertices, $G$ has only one branch vertex, say $v$, on $C_{l}$. Let $u$ and $w$ be the two neighbors of $v$ on $C_{l}$. Clearly, $U=\{v\}$. Note that in the current case, $\breve{G}$ can only be obtained from $G$ by a sequence of deletions of peripheral leaves. Then $\ell(G)=1$. So, $\{u, w\}\cup \mathcal{L}(G)$ forms a general position set. Thus, ${\rm gp}(G)\geq |\{u, w\}\cup \mathcal{L}(G)|=3>2=\max\{2, |U|\}+n_{2}-n_{1}$. Now, we assume that $|U|\geq 2$. Then the set of all pendent vertices of $G$ forms a general position set. So, ${\rm gp}(G)\geq \ell(G)=|U|= \max\{2, |U|\}+n_{2}-n_{1}$. \begin{case} $G$ has at least one appropriate vertex. \end{case} \noindent In this case $n_{1}\geq 1$ and $n_{2}\geq 2n_{1}$. Since $\breve{G}$ is a partial sun, $C_{l}$ does not contain appropriate vertices. So, all appropriate vertices of $G$ belong to the set $V(G)\setminus V(C_{l})$.
Let $B(G)$ be the set of branch vertices of $G$. First, we assume that for any vertex $v$ of $U$, the subtree $T_{G}(v)$ does not contain an appropriate vertex. Hence any such subtree $T_{G}(v)$ is a path with $v$ being one end-vertex, and then $\ell(\breve{G})=|U|$. Moreover, since $\breve{G}$ is a partial sun, all appropriate vertices are contained in $\bigcup_{x\in B(G)\setminus U} (V(T_{G}(x))\setminus\{x\})$. Then $|\mathcal{L}(G)\setminus\mathcal{L}(\breve{G})|\geq n_{2}$, that is, $\ell(G)\geq n_{2}+\ell(\breve{G})=n_{2}+|U|$. Since the set of all pendent vertices of $G$ forms a general position set of $G$, ${\rm gp}(G)\geq \ell(G)\geq n_{2}+|U|$. If $|U|=1$, as $n_{1}\geq 1$, then ${\rm gp}(G)\geq n_{2}+1\geq \max\{2, |U|\}+n_{2}-n_{1}$. If $|U|\geq 2$, as $n_{1}\geq 1$, then ${\rm gp}(G)\geq \ell(G)\geq n_{2}+|U|>\max\{2, |U|\}+n_{2}-n_{1}$. Assume that $U$ has $s\in [|U|]$ vertices each of whose root trees contains at least one appropriate vertex. Since for some branch vertex $v\in V(C_{l})\setminus U$, the set $V(T_{G}(v))\setminus \{v\}$ may contain appropriate vertices, we have $s\leq n_{1}$. Thus, $\ell(G)\geq n_{2}+(|U|-s)\cdot 1\geq |U|+n_{2}-n_{1}$. If $|U|\geq 2$, then because the set of all pendent vertices of $G$ forms a general position set of $G$, we get ${\rm gp}(G)\geq \ell(G)\geq |U|+n_{2}-n_{1}= \max\{2, |U|\}+n_{2}-n_{1}$. Suppose now that $|U|=1$. Let $U=\{v\}$, and let $N(v) \cap V(C_l) = \{u, w\}$. Then $T_{G}(v)$ contains at least one appropriate vertex. We first prove the following claim. \begin{claim}\label{clm02} Let $G$ be a unicyclic graph with $\breve{G}$ being a partial sun. If the unique cycle $C_{l}$ of $G$ has $q$ branch vertices, say $v_{1},\,\ldots,\,v_{q}\,(q\geq 1)$, such that each $V(T_{G}(v_{i}))\setminus \{v_{i}\}$ ($i\in [q]$) contains appropriate vertices, then $\ell(G)\geq n_{2}-n_{1}+q$. \end{claim} \begin{proof} Since $\breve{G}$ is a partial sun, we suppose as before that $G$ can be reduced to $\breve{G}$ by performing $n_1$ deletions of appropriate vertices outside $C_{l}$, $n_2$ deletions of isolated paths outside $C_{l}$, and $n_3$ deletions of peripheral leaves outside $C_{l}$. Note that each step of deletion of an old appropriate vertex and the corresponding isolated paths from $G$ will produce at most one new appropriate vertex in the resulting subgraph of $G$. Moreover, if a new appropriate vertex is created during the deletion of an old appropriate vertex in $G$, then a new pendent path must be produced in the resulting subgraph at the same time (this new pendent path later becomes a new isolated path). Hence, if there are $p$ new appropriate vertices produced during the process of trimming $G$, then $G$ has at least $n_{2}-p$ pendent vertices. Since each $V(T_{G}(v_{i}))\setminus \{v_{i}\}$ ($i\in [q]$) contains at least one appropriate vertex, we have $p\leq n_{1}-q$. Therefore $\ell(G)\geq n_{2}-p\geq n_{2}-n_{1}+q$. \end{proof} If $q\geq 2$, then since the set of all pendent vertices of $G$ forms a general position set, we get ${\rm gp}(G)\geq \ell(G)\geq n_{2}-n_{1}+2=\max\{2, |U|\}+n_{2}-n_{1}$ by Claim \ref{clm02} and our assumption that $|U|=1$. Assume hence that $q=1$. Then all appropriate vertices of $G$ belong to $T_{G}(v)$. The fact that $|U|=1$ together with Claim~\ref{clm01} yields that $G$ has $v$ as its unique branch vertex.
Thus, $\{u, w\}\cup \mathcal{L}(G)$ forms a general position set, which in turn implies that ${\rm gp}(G)\geq |\{u, w\}\cup \mathcal{L}(G)|= \ell(G)+2\geq (n_{2}-n_{1}+1)+2>n_{2}-n_{1}+2=\max\{2, |U|\}+n_{2}-n_{1}$ by Claim \ref{clm02}. This proves (i). (ii) Assume that $G$ can be reduced to $\breve{G}$ by performing $n_1$ deletions of appropriate vertices, $n_2$ deletions of isolated paths, and $n_3$ deletions of peripheral leaves. Since $\breve{G}$ is the empty graph, $G$ has at least one appropriate vertex, that is, $n_{1}\geq 1$. Let $C_{l}$ be the unique cycle in $G$. We proceed by induction on $n(G)+n_{1}$. If $n_{1}=1$, then since $\breve{G}$ is the empty graph, the unique appropriate vertex, say $v$, must lie on the cycle $C_{l}$. Moreover, as $n_{1}=1$, the deletion of $v$ results in only isolated paths. Among all isolated paths of $G-v$, there is at most one isolated path whose two end-vertices are not pendent vertices of $G$. So, $n_{2}\leq \ell(G)+1$. Thus, $n_{2}-n_{1}\leq (\ell(G)+1)-1=\ell(G)$. Let now $k>n(G)+1$. Assume that $n'_2-n'_1 \leq \ell(G')$ holds for all unicyclic graphs $G'$ with $n(G')+n'_{1}<k $ and $\breve{G}'$ being the empty graph. Let $G$ be a unicyclic graph, with $n(G)+n_{1}=k$, which can be reduced to the empty graph $\breve{G}$ by performing $n_{1}$ deletions of appropriate vertices. Assume first that there exists an appropriate vertex, say $v$, which lies outside $C_{l}$. Perform one step of the deletion of the appropriate vertex $v$, and the deletion of isolated paths in $G-v$ corresponding to $v$, and denote the resulting unicyclic graph by $G'$. Clearly, $\breve{G}'$ is also the empty graph. Assume that $G'$ can be reduced to $\breve{G}'$ by performing $n'_1$ deletions of appropriate vertices, $n'_2$ deletions of isolated paths, and $n'_3$ deletions of peripheral leaves. Then $n'_{1}=n_{1}-1$. As $n(G')+n'_{1}<k $, by the induction hypothesis, $n'_{2}-n'_{1}\leq \ell(G')$ holds for $G'$. Since $v$ is an appropriate vertex outside the cycle, one step of the deletion of $v$ and the corresponding isolated paths will produce at most one new pendent path in $G'$. Assume that there are $t$ pendent paths attached to $v$ in $G$. Then $n_{2}= n'_{2}+t$. So, $n_{2}-n_{1}= (n'_{2}+t)-(n'_{1}+1)=(n'_{2}-n'_{1})+(t-1)\leq \ell(G')+(t-1)\leq \ell(G)$. Assume second that all the appropriate vertices of $G$ lie on the cycle $C_{l}$. Let $v$ be an arbitrary appropriate vertex. Set $G'=G[V(G)\setminus (V(T_{G}(v))\setminus \{v\})]$. Then $G'$ is a unicyclic graph whose unique cycle is still $C_{l}$ and $d_{G'}(v)=2$. Obviously, $\breve{G}'$ is also the empty graph. Assume that $G'$ can be reduced to $\breve{G}'$ by performing $n'_1$ deletions of appropriate vertices, $n'_2$ deletions of isolated paths, and $n'_3$ deletions of peripheral leaves. Then $n'_{1}= n_{1}$ or $n'_{1}= n_{1}-1$. Since $n(G')+n'_{1}<k $, by the induction hypothesis, $n'_{2}-n'_{1}\leq \ell(G')$. Assume that $v$ is attached to $t$ pendent paths in $G$. By the construction of $G'$ and our assumption that $v$ is an appropriate vertex lying on $C_{l}$, we infer that $n_{2}\leq n'_{2}+t$. So, if $n'_{1}= n_{1}-1$, then $n_{2}-n_{1}\leq (n'_{2}+t)-(n'_{1}+1)=(n'_{2}-n'_{1})+(t-1)< \ell(G')+t= \ell(G)$, and if $n'_{1}= n_{1}$, then $n_{2}-n_{1}\leq (n'_{2}+t)-n'_{1}=(n'_{2}-n'_{1})+t\leq\ell(G')+t= \ell(G)$. \end{proof} Now all is ready to prove Theorem~\ref{thm:unicyclic}. If $G$ is a cycle graph, then ${\rm gp}(G)\geq 2={\rm Z}(G)$. Assume in the rest that $G$ is not a cycle.
Let $\breve{G}$ be obtained from $G$ by a sequence of $n_1$ appropriate vertex deletions, $n_2$ isolated path deletions, and $n_3$ peripheral leaf deletions. Recall that $\breve{G}$ is either the empty graph or a partial sun, and consider the following two cases. Assume that $\breve{G}$ is a partial sun and let $U_{1}, \ldots, U_{t}$ be the segments of $\breve{G}$ with $\sum_{i=1}^{t}|U_{i}|=|U|$. From~\cite{Barioli}, we know that ${\rm P}(G)={\rm P}(\breve{G})+n_{2}-n_{1}$ and ${\rm P}(\breve{G})= \max\{2, \sum_{i=1}^{t} \lceil \frac{|U_{i}|}{2}\rceil\}$. Since $|U_{i}|\geq 1$, we have $\sum_{i=1}^{t}\lceil \frac{|U_{i}|}{2}\rceil\leq |U|$. By Theorem \ref{th001}(i), we have ${\rm gp}(G)\geq \max\{2, |U|\}+n_{2}-n_{1}\geq \max\{2, \sum_{i=1}^{t}\lceil \frac{|U_{i}|}{2}\rceil\}+n_{2}-n_{1}={\rm P}(\breve{G})+n_{2}-n_{1}={\rm P}(G)$. Combining this fact with Theorem~\ref{th0} gives ${\rm gp}(G)\geq {\rm Z}(G)$. Assume second that $\breve{G}$ is the empty graph. By the proof of~\cite[Theorem 4.6]{Row} we have ${\rm Z}(G)={\rm Z}(\breve{G})+n_{2}-n_{1}=n_{2}-n_{1}$. Further, by Theorem~\ref{th001}(ii) we have ${\rm Z}(G)=n_{2}-n_{1}\leq \ell(G)$. On the other hand, since the set of all pendent vertices of $G$ forms a general position set, ${\rm gp}(G)\geq \ell(G)$. Therefore, ${\rm gp}(G)\geq {\rm Z}(G)$. \section{Block graphs and quasi-trees} \label{sec:block-graphs-quasi-trees} In this section we consider two broad generalizations of trees---block graphs and quasi-trees---and relate them to~\eqref{eq:starting-point}. In the main result of the section we prove that~\eqref{eq:starting-point} extends to all block graphs. Then we demonstrate that the zero forcing number and the general position number are in general not related on quasi-trees. On the positive side we show that ${\rm gp}(G)\geq {\rm Z}(G)$ still holds for a rich class of quasi-trees $G$. We conclude the section by showing that~\eqref{eq:starting-point} naturally extends to forests. Before proving the result for block graphs, some preparation is needed. A vertex of a graph is \emph{simplicial} if its neighbors induce a complete subgraph. A block in a graph is said to be a \emph{pendent block} if it has exactly one cut vertex. Note that all vertices but one of a pendent block are simplicial vertices. \begin{theorem}\label{th3} If $G$ is a block graph with $n(G)\ge 2$, then ${\rm gp}(G)\geq {\rm Z}(G)+1$. \end{theorem} \begin{proof} Let $S$ be the set of simplicial vertices of a block graph $G$. It was proved in~\cite[Theorem 3.6]{Manuel} that $S$ is a general position set of $G$ and that ${\rm gp}(G) = |S|$. To prove the theorem it thus suffices to show that ${\rm Z}(G)\leq |S|-1$. Suppose that $G$ has $k$ cut vertices and set $n = n(G)$. Then $|S|=n-k$. We prove that ${\rm Z}(G)\leq n-k-1$ by induction on $n+k$. Clearly, $n+k\geq n$. If $n+k=n$, that is, if $k=0$, then $G\cong K_{n}$ and ${\rm Z}(G)=n-1$, as desired. Suppose next that $k\geq 1$ and let $B$ be a pendent block of $G$ sharing the unique cut vertex $v_{0}$ with a smaller block graph $G'$ of order $n'$ and with $k'$ cut vertices. Note that $n' = n - |B| + 1$. Since $n'+k' < n + k$, the induction hypothesis implies ${\rm Z}(G')\leq n'-k'-1$. Let $S'$ be a zero forcing set of $G'$ with $|S'| = {\rm Z}(G')$. Let $V(B)=\{v_{0}, v_{1}, \ldots, v_{t}\}$. Then $t = |B|-1$. If $t=1$, then $S'$ is also a zero forcing set of $G$. Assume next that $t\geq 2$. Let $S = S' \cup \{v_{1}, \ldots, v_{t-1}\}$. Note that each of the vertices $v_{1}, \ldots, v_{t-1}$ is simplicial.
Moreover, all vertices of $G'$ are forced to be black under $S'$. Thus, $v_{t}$ is the unique white neighbor of $v_{0}$ in $G$. Then $v_{t}$ is forced to be black by $v_{0}$ under $S$ in $G$. So, $S$ is a zero forcing set of $G$. In conclusion, if $t=1$, then $${\rm Z}(G)\leq |S|=|S'|\leq n'-k'-1=(n-1)-k'-1\leq n-k-1\,,$$ and if $t\geq 2$, then since $k'+1\geq k$, $${\rm Z}(G)\leq |S|=|S'|+t-1\leq n'-k'-1+t-1=n-k'-2\leq n-k-1\,,$$ and we are done. \end{proof} We next demonstrate that the zero forcing number and the general position number are not comparable on quasi-trees. For this sake consider the quasi-trees $H_{3}$ and $H_{4}$ from Fig.~\ref{fig:quasi-trees}. Just as $H_1$ in Fig.~\ref{fig:bicyclic}, $H_3$ and $H_4$ each represent an infinite family of graphs, but we consider their parameters as fixed and denote the representatives simply by $H_3$ and $H_4$. \begin{figure} \caption{Quasi-trees $H_{3}$ and $H_{4}$} \label{fig:quasi-trees} \end{figure} It is straightforward to verify that $\{v_{1}, v\}$ is a minimum zero forcing set of $H_{3}$, and that $\{v_{1}, v_{2}, v_{4}, v_{5}, v_{7}, v_{8}\}$ is a gp-set of $H_{3}$. Thus, ${\rm Z}(H_{3})=2<6 = {\rm gp}(H_{3})$. On the other hand, it can be seen that $\{u_{1}, \ldots, u_{s}, x, y, z, w, v_{1}, \ldots, v_{t-1}\}$ is a minimum zero forcing set of $H_{4}$, and that $\{u_{1}, \ldots, u_{s}, v_{1}, \ldots, v_{t}\}$ is a gp-set of $H_{4}$. Thus, ${\rm Z}(H_{4})=s+t+3>s+t = {\rm gp}(H_{4})$. So the zero forcing number and the general position number are not comparable on quasi-trees. But we do have the following result. \begin{theorem} If $G$ is a quasi-tree in which one of the following conditions holds: \begin{enumerate} \item[(i)] $G$ contains no pendent vertices, \item[(ii)] $G$ contains a quasi-vertex $x$ such that $x$ has no neighbors of degree $2$, \end{enumerate} then ${\rm gp}(G)\geq {\rm Z}(G)$. \end{theorem} \begin{proof} If $G$ is itself a tree, then the result holds by~\eqref{eq:starting-point}. Hence assume in the rest that $G$ has at least one cycle. (i) Suppose that $G$ contains no pendent vertices. Let $x$ be an arbitrary quasi-vertex of $G$. Then $G-x$ is a tree and by the assumption we see that $x$ must be adjacent to all the leaves of $G-x$, that is, $\mathcal{L}(G-x)\subseteq N_{G}(x)$. We claim that $\mathcal{L}(G-x)$ is a general position set of $G$. Indeed, if $u$ and $v$ are arbitrary leaves from $\mathcal{L}(G-x)$, then $d_G(u,v) = 2$ as each of them is adjacent to $x$. So $\mathcal{L}(G-x)$ is a set of vertices that are pairwise at distance $2$ and hence forms a general position set. We can now estimate as follows: \begin{align*} {\rm Z}(G)& \leq {\rm Z}(G-x) + 1 \\ & \leq ({\rm gp}(G-x) - 1) + 1 = {\rm gp}(G-x) \\ & = \ell(G-x) \\ & \leq {\rm gp}(G)\,. \end{align*} The first inequality follows by~\cite[Theorem 2.3]{Edholm}, the second inequality by~\eqref{eq:starting-point}, the equality follows because $G-x$ is a tree, while the last inequality follows by the argument above. (ii) Suppose that $G$ contains a quasi-vertex $x$ such that no neighbor of $x$ is of degree $2$. This means that $x$ is adjacent to no leaf of $G-x$, which in turn implies that $\mathcal{L}(G-x) = \mathcal{L}(G)$. Since the set of leaves is a general position set in any graph, this means that ${\rm gp}(G-x)\leq {\rm gp}(G)$. Then, similarly as in (i), we conclude that ${\rm Z}(G) \leq {\rm Z}(G-x) + 1\leq{\rm gp}(G-x) = \ell(G-x) \leq {\rm gp}(G)$.
\end{proof} We conclude the section with the following extension of~\eqref{eq:starting-point}. \begin{proposition} If $F$ is a forest with $k\ge 1$ non-trivial components, then ${\rm gp}(F)\geq {\rm Z}(F)+k$. \end{proposition} \begin{proof} Let $F$ be a forest, let $T_{1}, \ldots, T_{k}$ be its non-trivial components, and let $x_{1}, \ldots, x_{s}$ be its isolated vertices, where $s\ge 0$. By~\eqref{eq:starting-point} we have ${\rm gp}(T_{i})\geq {\rm Z}(T_{i})+1$ for each $i \in [k]$. Let $S_{i}$ be a minimum zero forcing set of $T_{i}$ for each $i\in [k]$. Then $S_{1}\cup \cdots \cup S_{k}\cup \{x_{1},\ldots,x_{s}\}$ is a minimum zero forcing set of $F$ and hence ${\rm Z}(F)=|S_{1}|+\cdots+|S_{k}|+s={\rm Z}(T_{1})+\cdots+{\rm Z}(T_{k})+s\leq ({\rm gp}(T_{1})-1)+\cdots+({\rm gp}(T_{k})-1)+s ={\rm gp}(T_{1})+\cdots+{\rm gp}(T_{k})-k+s$. For each $i\in [k]$, let $R_i$ be a ${\rm gp}$-set of $T_{i}$. Then $R_{1}\cup \cdots \cup R_{k}\cup \{x_{1},\ldots,x_{s}\}$ is a gp-set of $F$. So, ${\rm gp}(T_{1})+\cdots+{\rm gp}(T_{k})+s=|R_{1}\cup \cdots \cup R_{k}\cup \{x_{1}, \ldots, x_{s}\}|\leq {\rm gp}(F)$. We conclude that ${\rm Z}(F)\leq {\rm gp}(F)-k$. \end{proof} \section{Concluding remarks} \label{sec:conclude} We have proved that the zero forcing number (possibly plus 1) is a lower bound for the general position number for trees, unicyclic graphs, block graphs, and special quasi-trees. We have also demonstrated that this does not hold for bicyclic graphs and for quasi-trees, hence we pose the following two problems. \begin{problem} Determine the bicyclic graphs $G$ such that ${\rm gp}(G)\geq {\rm Z}(G)$. \end{problem} \begin{problem} Determine the quasi-trees $G$ such that ${\rm gp}(G)\geq {\rm Z}(G)$. \end{problem} Note that the graphs $H_1$ from Fig.~\ref{fig:bicyclic} and the graphs $H_4$ from Fig.~\ref{fig:quasi-trees} are bipartite. As we have seen, ${\rm Z}(H_1) > {\rm gp}(H_1)$ and ${\rm Z}(H_4) > {\rm gp}(H_4)$. On the other hand, \eqref{eq:starting-point} asserts that ${\rm gp}(T) > {\rm Z}(T)$ holds for trees $T$. Hence the following problem is also relevant. \begin{problem} Determine the bipartite graphs $G$ such that ${\rm gp}(G)\geq {\rm Z}(G)$. \end{problem} \section*{Competing Interests} The authors have no relevant financial or non-financial interests to disclose. \section*{Data Availability Statements} All data generated or analysed during this study are included in this published article (and its supplementary information files). \end{document}
\begin{document} \title{Partition number identities\\which are true for all sets of parts} \author{Kim, Bongju} \address{Department of Mathematics \\Pusan National University\\ Korea} \email{[email protected]} \footnote{The theorems in this paper were stated and proved in the winter of 2012. 2010 \textit{Mathematics Subject Classification.} 11P81, 11P84, 05A17} \keywords{partition number identities, change of set of parts} \begin{abstract} Let $B$ be an infinite subset of $\mathbf{N}$. When we consider partitions of natural numbers into elements of $B$, a partition number without a restriction on the number of equal parts can be expressed by partition numbers with a restriction $\alpha$ on the number of equal parts. Although there are many ways to form such an expression, we prove that there exists an expression form which is true for all possible sets $B$. These identities come from the partition numbers of natural numbers into $\{1,\alpha,\alpha^2,\alpha^3,\cdots\}$. Furthermore, we prove that there exist inverse forms of these expressions, and we prove other similar identities. The proofs in this paper are constructive. \end{abstract} \maketitle Let $p(n)$ be the number of partitions of $n$ into natural numbers, and $d(n)$ be the number of partitions of $n$ into distinct natural numbers. If one calculates $p(5)$ and $d(1)$ through $d(5)$ except $d(4)$, one finds \[p(5)=7, d(1)=1, d(2)=1, d(3)=2, d(5)=3.\] However, $p(5)$ can be expressed by the following sum of products of $d(1)$, $d(2)$, $d(3)$, $d(5)$: \begin{align*} p(5)=7 & =3+1\times1+1\times1+2\times1\\ &=d(5)+d(1)d(1)+d(2)d(1)+d(3)d(1). \end{align*} Next, let $p^{\mathbf{P}}(n)$ be the number of partitions of $n$ into primes, and $p^{\mathbf{P}}_1(n)$ be the number of partitions of $n$ into distinct primes. Then \[p^{\mathbf{P}}(5)=2, p^{\mathbf{P}}_1(1)=0, p^{\mathbf{P}}_1(2)=1, p^{\mathbf{P}}_1(3)=1,p^{\mathbf{P}}_1(5)=2\] and $p^{\mathbf{P}}(5)$ can be expressed by the following sum of products of $p^{\mathbf{P}}_1(1)$, $p^{\mathbf{P}}_1(2)$, $p^{\mathbf{P}}_1(3)$, $p^{\mathbf{P}}_1(5)$: \begin{align*} p^{\mathbf{P}}(5)=2& =2+0\times0+1\times0+1\times0\\ &=p^{\mathbf{P}}_1(5)+p^{\mathbf{P}}_1(1)p^{\mathbf{P}}_1(1)+p^{\mathbf{P}}_1(2)p^{\mathbf{P}}_1(1)+p^{\mathbf{P}}_1(3)p^{\mathbf{P}}_1(1). \end{align*} Finally, let $p^{O}(n)$ be the number of partitions of $n$ into odd numbers, and $p^{O}_1(n)$ be the number of partitions of $n$ into distinct odd numbers. Then \[p^{O}(5)=3, p^{O}_1(1)=1, p^{O}_1(2)=0, p^{O}_1(3)=1, p^{O}_1(5)=1\] and \begin{align*}p^{O}(5)=3&=1+1\times1+0\times1+1\times1\\ &=p^{O}_1(5)+p^{O}_1(1)p^{O}_1(1)+p^{O}_1(2)p^{O}_1(1)+p^{O}_1(3)p^{O}_1(1). \end{align*} If one compares the above three expressions, one can see that the three expression forms are the same even though the set of parts was changed. The numbers in the previous identity are not random but come from all possible binary expressions of $5$, i.e. \begin{align*} 5&=5\times2^0\\ &=1\times2^0+1\times2^2\\ &=1\times2^0+2\times2^1\\ &=3\times2^0+1\times2^1. \end{align*} \begin{definition} Let $\mathbf{N}$ be the set of natural numbers and $\psi$ be a one-to-one function $\psi:\mathbf{N}\rightarrow \mathbf{N}$. We denote the set $\{\psi(n)\;|\;n\in\mathbf{N}\}$ by $A$. $p^{A}_{\alpha}(n)$ is the number of partitions of $n$ into elements of $A$ such that the number of equal parts is less than or equal to $\alpha\in \mathbf{N}\setminus\{0\}$.
$p^{A}_{}(n)$ is the number of partitions of $n$ into elements of $\psi(\mathbf{N})$ without a restriction of the number of equal parts. We define $p^{A}_{\alpha}(0)=p^{A}_{}(0):=1$ for all $\alpha\in\mathbf{N}\setminus\{0\}$. \end{definition} \section{Identities between $p^{A}_{}$ and $p^{A}_{\alpha}$} Before considering the identities between $p^{A}_{}$ and $p^{A}_{\alpha}$, we prove the identities between $p^{A}_{}$ and $p^{A}_{1}$. \begin{definition} Consider all non-negative integer solutions of the indeterminate equation $n=N_{0}+2N_{1}+4N_{2}+\cdots=\sum_{i\geq 0}2^{i}N_{i}$ and denote by $(a^{n}_{11},a^{n}_{12},a^{n}_{13}\cdots)$, $(a^{n}_{21},a^{n}_{22},a^{n}_{23}\cdots)$, $\cdots$. In other word, \begin{align*} n&=a^{n}_{11}+2a^{n}_{12}+4a^{n}_{13}+\cdots\\ &=a^{n}_{21}+2a^{n}_{22}+4a^{n}_{23}+\cdots\\ &\cdots \end{align*} Then the solution matrix of this equation is \[A_n:=(a^{n}_{ij}).\] \end{definition} \begin{proposition} Let $A_n=(a^{n}_{ij})$ be the solution matrix of $n=\sum_{i\geq 0}2^{i}N_{i}$ and let $\psi$ be an one to one function such that $\psi:\mathbf{N}\rightarrow \mathbf{N}$. Then \[ p^{A}_{}(n)=\sum_{i\geq 1}\prod_{j\geq 1}p^{A}_{1}(a^{n}_{ij}) \] for all $n\in\mathbf{N}$. \end{proposition} \begin{proof} It is well known fact that for $|q|<1$, \[\sum_{n\geq 0}p^{A}_{}(n)q^{n}=\prod_{n\geq 1}\frac{1}{1-q^{\psi(n)}}\] and \[\sum_{n\geq 0}p^{A}_{1}(n)q^{n}=\prod_{n\geq 1}(1+q^{\psi(n)})\](see \cite{1} or \cite{2}). On the other hand, \begin{align*} \prod_{n\geq1}(1-q^{2\psi(n)})&=\prod_{n\geq1}(1-q^{\psi(n)})\prod_{n\geq1}(1+q^{\psi(n)})\\ \prod_{n\geq1}(1-q^{4\psi(n)})&=\prod_{n\geq1}(1-q^{\psi(n)})\prod_{n\geq1}(1+q^{\psi(n)})\prod_{n\geq1}(1+q^{2\psi(n)}) \\ \vdots\\ \prod_{n\geq1}(1-q^{2^{I+1}\psi(n)})&=\prod_{n\geq1}(1-q^{\psi(n)})\prod_{i=0}^{I}\prod_{n\geq1}(1+q^{2^{i}\psi(n)}) \end{align*} for all $I \in \mathbf{N}$. So, \begin{align*} 1&=\lim_{I\rightarrow\infty}\prod_{n\geq1}(1-q^{2^{I+1}\psi(n)})\\ &=\prod_{n\geq1}(1-q^{\psi(n)})\prod_{i=0}^{\infty}\prod_{n\geq1}(1+q^{2^{i}\psi(n)}). \end{align*} Therefore, \begin{align*} \prod_{n\geq1}\frac{1}{1-q^{\psi(n)}}=\prod_{i\geq0}\prod_{n\geq1}(1+q^{2^{i}\psi(n)}) \end{align*} and \begin{align*} \sum_{n\geq 0}p^{A}_{}(n)q^{n} =\prod_{i\geq0}(\sum_{n\geq0}p^{A}_{1}(n)q^{2^{i}n}). \end{align*} If one expands the above infinite product to an infinite series, one can express $p^{A}_{}(n)$ by $p^{A}_{1}(n)$, $p^{A}_{1}(n-1)$, $\cdots$, $p^{A}_{1}(1)$ and one can see that the coefficient of $q^n$ is related with non-negative integer solutions of $n=\sum_{i\geq0}2^{i}N_{i}$. If one expands some terms, \begin{align*} \,&\prod_{i\geq0}(\sum_{n\geq 0}p^{A}_{1}(n)q^{2^{i}n})\\ &=1+p^{A}_{1}(1)q+[p^{A}_{1}(2)+p^{A}_{1}(1)]q^2\\ &\;+[p^{A}_{1}(3)+p^{A}_{1}(1)p^{A}_{1}(1)]q^3\\ &\;+[p^{A}_{1}(4)+p^{A}_{1}(2)p^{A}_{1}(1)+p^{A}_{1}(2)+p^{A}_{1}(1)]q^4\\ &\;+[p^{A}_{1}(5)+p^{A}_{1}(1)p^{A}_{1}(2)+p^{A}_{1}(1)p^{A}_{1}(1)+p^{A}_{1}(3)p^{A}_{1}(1)]q^5\\&\quad\;\cdots \end{align*} \end{proof} Now, we prove more general identities between $p^{A}_{}$ and $p^{A}_{\alpha}$. \begin{definition} Let $n \in \mathbf{N}$ and $\{(a^{n,\alpha}_{11},a^{n,\alpha}_{12},a^{n,\alpha}_{13}\cdots)$, $(a^{n,\alpha}_{21},a^{n,\alpha}_{22},a^{n,\alpha}_{23}\cdots), \cdots \}$ be the set of all non-negative integer solutions of the indeterminate equation $n=N_{0}+(\alpha +1)N_{1}+(\alpha +1)^2N_{2}+\cdots=\sum_{i\geq 0}(\alpha +1)^{i}N_{i}$. 
In other words, \begin{align*} n&=a^{n,\alpha}_{11}+(\alpha +1)a^{n,\alpha}_{12}+(\alpha +1)^2a^{n,\alpha}_{13}+\cdots\\ &=a^{n,\alpha}_{21}+(\alpha +1)a^{n,\alpha}_{22}+(\alpha +1)^2a^{n,\alpha}_{23}+\cdots\\ &\cdots \end{align*}where $a^{n,\alpha}_{ij}\in\mathbf{N}$. Then the solution matrix of this equation is \[A_{n,\alpha}:=(a^{n,\alpha}_{ij}).\] \end{definition} \begin{theorem} Let $\psi$ be an one to one function such that $\psi:\mathbf{N}\rightarrow \mathbf{N}$. And let $n\in\mathbf{N}$ and $A_{n,\alpha }=(a_{ij}^{n,\alpha})$ be the solution matrix of $n=\sum_{i\geq 0}(\alpha +1)^{i}N_{i}$. Then \[ p^{A}_{}(n)=\sum_{i\geq 1}\prod_{j\geq 1}p^{A}_{\alpha}(a^{n,\alpha}_{ij}) \] for all $\alpha\in\mathbf{N}\setminus\{0\}$. \end{theorem} \begin{proof} It is well known fact that for $|q|<1$, \begin{align*} \sum_{n\geq 0}p^{A}_{\alpha}(n)q^{n}&=\prod_{n\geq 1}\frac{1-q^{(\alpha+1)\psi(n)}}{1-q^{\psi(n)}}\\ &=\prod_{n\geq 1}(1+q^{\psi(n)}+q^{2\psi(n)}+\cdots+q^{\alpha\psi(n)}) \end{align*}(see \cite{2}). On the other hand, \begin{align*}\prod_{n\geq1}(1-q^{(\alpha+1)\psi(n)})&=\prod_{n\geq1}(1-q^{\psi(n)})\prod_{n\geq1}(1+q^{\psi(n)}+q^{2\psi(n)}+\cdots+q^{\alpha \psi(n)})\\ \prod_{n\geq1}(1-q^{(\alpha+1)^2\psi(n)})&=\prod_{n\geq1}(1-q^{\psi(n)})\prod_{n\geq1}(1+q^{\psi(n)}+q^{2\psi(n)}+\cdots+q^{\alpha \psi(n)})\\ &\,\times\prod_{n\geq1}(1+q^{(\alpha+1)\psi(n)}+q^{(\alpha+1)2\psi(n)}+\cdots+q^{(\alpha+1)\alpha\psi(n)})\\ \vdots\\ \prod_{n\geq1}(1-q^{(\alpha+1)^{I+1}\psi(n)})&=\prod_{n\geq1}(1-q^{\psi(n)})\\&\;\times\prod_{i=0}^{I}\prod_{n\geq1}(1+q^{(\alpha+1)^{i}\psi(n)}+\cdots+q^{(\alpha+1)^{i}\alpha \psi(n)}) \end{align*} for all $I\in\mathbf{N}$. So, \begin{align*} 1&=\lim_{I\rightarrow\infty}\prod_{n\geq1}(1-q^{(\alpha+1)^{I+1}\psi(n)})\\ &=\prod_{n\geq1}(1-q^{\psi(n)})\prod_{i=0}^{\infty}\prod_{n\geq1}(1+q^{(\alpha+1)^{i}\psi(n)}+\cdots+q^{(\alpha+1)^{i}\alpha \psi(n)}). \end{align*} Therefore, \[ \prod_{n\geq1}\frac{1}{1-q^{\psi(n)}}=\prod_{i\geq0}\prod_{n\geq 1}(1+q^{(\alpha+1)^{i}\psi(n)}+\cdots+q^{(\alpha+1)^{i}\alpha\psi(n)}) \] and \[ \sum_{n\geq 0}p^{A}_{}(n)q^{n}=\prod_{i\geq 0}(\sum_{n\geq 0}p^{A}_{\alpha}(n)q^{(\alpha+1)^{i}n}). \] If one expands the above infinite product to an infinite series, one can express $p^{A}_{}(n)$ by $p^{A}_{\alpha}(n)$, $p^{A}_{\alpha}(n-1)$, $\cdots$, $p^{A}_{\alpha}(1)$ and one can see that the coefficient of $q^n$ is related with non-negative integer solutions of $n=\sum_{i\geq 0}(\alpha +1)^{i}N_{i}$ \end{proof} \section{Inverse identities and some other similar identities} In section 1, we found the identities which express $p^{A}_{}(n)$ by $p^{A}_{\alpha}(n)$, $p^{A}_{\alpha}(n-1)$, $\cdots$, $p^{A}_{\alpha}(1)$. Now, we will find the inverse identities. \begin{definition} Let $E^{A}(n)$ be the number of even partitions of $n$ without a restriction of the number of equal parts and $O^{A}(n)$ be the number of odd partitions of $n$ without a restriction of the number of equal parts. Then we define \[\bar{p}^{A}(n):=E^{A}(n)-O^{A}(n)\] and $\bar{p}^{A}(0):=1.$ \end{definition} \begin{definition} Let $n=(\alpha+1)\sum_{i\geq0}2^{i}N_{i}$ be the indeterminate equation for $n, \alpha\in\mathbf{N}\setminus\{0\}$. 
If this equation has solutions $(b^{n,\alpha}_{11}, b^{n,\alpha}_{12}, b^{n,\alpha}_{13}, \cdots), (b^{n,\alpha}_{21}, b^{n,\alpha}_{22}, b^{n,\alpha}_{23}, \cdots), \cdots$ where $b^{n,\alpha}_{ij}\in\mathbf{N}$, then we define the solution matrix of this equation by \[B_{n, \alpha}:=(b^{n,\alpha}_{ij}).\] \end{definition} \begin{definition} Let $n, \alpha\in\mathbf{N}\setminus\{0\}$. If $n=(\alpha+1)\sum_{i\geq 0}2^{i}N_{i}$ has non-negative integer solutions, we define \[\Gamma^{\psi}_{\alpha}(n):=\sum_{i\geq 1}\prod_{j\geq 1}\bar{p}^{A}(b^{n,\alpha}_{ij})\] and if $n=(\alpha+1)\sum_{i\geq 0}2^{i}N_{i}$ does not have a non-negative integer solution, we define $\Gamma^{\psi}_{\alpha}(n):=0$. For $n=0$, we define $\Gamma^{\psi}_{\alpha}(0):=1$. \end{definition} \begin{theorem} Let $\psi$ be an one to one function such that $\psi:\mathbf{N}\rightarrow \mathbf{N}$. Then \[ p^{A}_{\alpha}(n)=\sum^{n}_{i=0}p^{A}_{}(n-i) \Gamma^{\psi}_{\alpha}(i) \] for all $n, \alpha \in \mathbf{N}$. \end{theorem} \begin{proof} For $|q|<1$, \begin{align*} \prod_{n\geq1}(1-q^{2(\alpha+1)\psi(n)})&=\prod_{n\geq 1}(1-q^{(\alpha+1)\psi(n)})\prod_{n\geq 1}(1+q^{(\alpha+1)\psi(n)})\\ \prod_{n\geq1}(1-q^{4(\alpha+1)\psi(n)})&=\prod_{n\geq 1}(1-q^{(\alpha+1)\psi(n)})\\ &\,\times\prod_{n\geq 1}(1+q^{(\alpha+1)\psi(n)})\prod_{n\geq 1}(1+q^{2(\alpha+1)\psi(n)})\\ \vdots\\ \prod_{n\geq1}(1-q^{2^{I+1}(\alpha+1)\psi(n)})&=\prod_{n\geq 1}(1-q^{(\alpha+1)\psi(n)})\prod_{i=0}^{I}\prod_{n\geq 1}(1+q^{2^i(\alpha+1)\psi(n)}) \end{align*} for all $I\in\mathbf{N}$. So, \begin{align*} 1&=\lim_{I\rightarrow\infty}\prod_{n\geq1}(1-q^{2^{I+1}(\alpha+1)\psi(n)})\\ &=\prod_{n\geq 1}(1-q^{(\alpha+1)\psi(n)})\prod_{i=0}^{\infty}\prod_{n\geq 1}(1+q^{2^i(\alpha+1)\psi(n)}) \end{align*} and \[ \prod_{n\geq 1}\frac{1-q^{(\alpha+1)\psi(n)}}{1-q^{\psi(n)}} =\prod_{n\geq 1}\frac{1}{1-q^{\psi(n)}}\prod_{i\geq0}\prod_{n\geq 1}\frac{1}{1+q^{2^{i}(\alpha+1)\psi(n)}}. \]On the other hand, \[ \prod_{n\geq 1}\frac{1}{1+q^{2^{i}(\alpha+1)\psi(n)}}=\sum_{n\geq 0}\bar{p}^{A}(n)q^{2^{i}(\alpha+1)n}. \]Therefore, if we define \[ \prod_{i\geq 0}\prod_{n\geq 1}\frac{1}{1+q^{2^{i}(\alpha+1)\psi(n)}}:=\sum_{n\geq 0}\Gamma^{\psi}_{\alpha}(n)q^{n}, \]then \[ \Gamma^{\psi}_{\alpha}(n)=\sum_{i\geq 1}\prod_{j\geq 1}\bar{p}^{A}(b^{n,\alpha}_{ij}) \] when $n=(\alpha+1)\sum_{i\geq 0}2^{i}N_{i}$ has non-negative integer solutions and $\Gamma^{\psi}_{\alpha}(n)=0$ when $n=(\alpha+1)\sum_{i\geq 0}2^{i}N_{i}$ does not have a non-negative integer solution. \\Finally, \begin{align*} \sum_{n\geq 0}p^{A}_{\alpha}(n)q^{n}&=\prod_{n\geq 1}\frac{1}{1-q^{\psi(n)}}\prod_{i\geq 0}\prod_{n\geq 1}\frac{1}{1+q^{2^{i}(\alpha+1)\psi(n)}}\\ &=(\sum_{n\geq 0}p^{A}_{}(n)q^{n})(\sum_{n\geq 0}\Gamma^{\psi}_{\alpha}(n)q^{n}) \end{align*}and \[ p^{A}_{\alpha}(n)=\sum^{n}_{i=0}p^{A}_{}(n-i) \Gamma^{\psi}_{\alpha}(i). \] \end{proof} Next, we prove two similar theorems. \begin{definition} Let $E^{A}_{\alpha}(n)$ be the number of even partitions of $n$ such that the number of equal parts is less than or equals to $\alpha\in \mathbf{N}$ and $O^{A}_{\alpha}(n)$ be the number of odd partitions of $n$ such that the number of equal parts is less than or equals to $\alpha\in\mathbf{N}\setminus\{0\}$. 
Then we define \[\bar{p}^{A}_{\alpha}(n):=E^{A}_{\alpha}(n)-O^{A}_{\alpha}(n)\] and $\bar{p}^{A}_{\alpha}(0):=1.$ \end{definition} \begin{theorem} Let $A_n=(a^{n}_{ij})$ be the solution matrix of $n=\sum_{i\geq 0}2^{i}N_{i}$ and let $\psi$ be an one to one function such that $\psi:\mathbf{N}\rightarrow \mathbf{N}$. Then \[ \bar{p}^{A}_{1}(n)=\sum_{i\geq 1}\prod_{j\geq 1}\bar{p}^{A}(a^{n}_{ij}) \] for all $n\in\mathbf{N}$. \end{theorem} \begin{proof} We proved that for $|q|<1,$ \[ 1=\prod_{n\geq1}(1-q^{\psi(n)})\prod_{i=0}^{\infty}\prod_{n\geq1}(1+q^{2^{i}\psi(n)}) \] in the proof of theorem 2.1. So, \[ \prod_{n\geq1}(1-q^{\psi(n)})=\prod_{i=0}^{\infty}\prod_{n\geq1}\frac{1}{(1+q^{2^{i}\psi(n)})} \] and \[\sum_{n\geq0}\bar{p}^{A}_{1}(n)q^n=\prod_{i\geq0}(\sum_{n\geq0}\bar{p}^{A}(n)q^{2^{i}n}).\] This proves the theorem. \end{proof} \begin{theorem} Let $A_{n, \alpha}=(a^{n, \alpha}_{ij})$ be the solution matrix of $n=\sum_{i\geq 0}(\alpha+1)^{i}N_{i}$ and let $\psi$ be an one to one function such that $\psi:\mathbf{N}\rightarrow \mathbf{N}$. Then \[ \bar{p}^{A}(n)=\sum_{i\geq 1}\prod_{j\geq 1}\bar{p}^{A}_{\alpha}(a^{n, \alpha}_{ij}) \] for all $n\in\mathbf{N}$ and for all even natural number $\alpha$. \end{theorem} \begin{proof} Let $\alpha$ be an even natural number. If $|q|<1,$ \begin{align*} &\prod_{n\geq1}(1+q^{(\alpha+1)^{I+1}\psi(n)})\\ &=\prod_{n\geq1}(1+q^{\psi(n)})\prod_{i=0}^{I}\prod_{n\geq1}(1-q^{(\alpha+1)^{i}\psi(n)}+q^{(\alpha+1)^{i}2\psi(n)}-\cdots+q^{(\alpha+1)^{i}\alpha\psi(n)}) \end{align*} for all $I\in\mathbf{N}$. So, \begin{align*} 1&=\lim_{I\rightarrow\infty}\prod_{n\geq1}(1+q^{(\alpha+1)^{I+1}\psi(n)})\\ &=\prod_{n\geq1}(1+q^{\psi(n)})\prod_{i=0}^{\infty}\prod_{n\geq1}(1-q^{(\alpha+1)^{i}\psi(n)}+q^{(\alpha+1)^{i}2\psi(n)}-\cdots+q^{(\alpha+1)^{i}\alpha\psi(n)}). \end{align*} Therefore, \begin{align*} \prod_{n\geq1}\frac{1}{(1+q^{\psi(n)})}&=\prod_{i=0}^{\infty}\prod_{n\geq1}(1-q^{(\alpha+1)^{i}\psi(n)}+q^{(\alpha+1)^{i}2\psi(n)}-\cdots+q^{(\alpha+1)^{i}\alpha\psi(n)}) \end{align*} and \[ \sum_{n\geq0}\bar{p}^{A}(n)=\prod_{i\geq0}(\sum_{n\geq0}\bar{p}^{A}_{\alpha}(n)q^{(\alpha+1)^{i}n}). \] This proves the theorem. \end{proof} \pagebreak \section{Appendix: Examples} In this appendix, we will consider two identities for $p^{A}_{}(10)$ and $p^{A}_{1}(10)$. I) Let us consider the expression of $p^{A}_{}(10)$ by $p^{A}_{1}$s. The indeterminate equation for this identity is \[ 10=N_0+2N_1+4N_2+\cdots. \]The solution matrix of this equation and the identity for $p^{A}_{}(10)$ are $\left( \begin{matrix} 0 & 1 & 0 & 1 & \cdots \\ 0 & 1 & 2 & 0 & \cdots \\ 0 & 3 & 1 & 0 & \cdots \\ 0 & 5 & 0 & 0 & \cdots \\ 2 & 2 & 1 & 0 & \cdots \\ 2 & 4 & 0 & 0 & \cdots \\ 2 & 0 & 0 & 1 & \cdots \\ 2 & 0 & 2 & 0 & \cdots \\ 4 & 1 & 1 & 0 & \cdots \\ 4 & 3 & 0 & 0 & \cdots \\ 6 & 2 & 0 & 0 & \cdots \\ 6 & 0 & 1 & 0 & \cdots \\ 8 & 1 & 0 & 0 & \cdots \\ 10 & 0 & 0 & 0 & \cdots \end{matrix}\right) $,$\quad$ $\begin{array}{l} p^{A}_{}(10) \\ \\ =p^{A}_{1}(1)p^{A}_{1}(1)+p^{A}_{1}(1)p^{A}_{1}(2)+p^{A}_{1}(1)p^{A}_{1}(3) \\ +p^{A}_{1}(5)+p^{A}_{1}(2)p^{A}_{1}(2)p^{A}_{1}(1)\\ +p^{A}_{1}(2)p^{A}_{1}(4)+p^{A}_{1}(2)p^{A}_{1}(1) \\ +p^{A}_{1}(2)p^{A}_{1}(2)+p^{A}_{1}(4)p^{A}_{1}(1)p^{A}_{1}(1) \\ +p^{A}_{1}(4)p^{A}_{1}(3)+p^{A}_{1}(6)p^{A}_{1}(2)+p^{A}_{1}(6)p^{A}_{1}(1) \\ +p^{A}_{1}(8)p^{A}_{1}(1)+p^{A}_{1}(10). 
\end{array} $ Now, we calculate $p^{A}_{}(10)$ for three set of parts.\\1) $\psi(\mathbf{N})=\{p\;|\;p\;is\;a\;prime\}$\\Let us calculate partition numbers.\\ $ \begin{array}{ll} 10= & 5+5 \\ & 3+2+5 \\ & 3+3+2+2 \\ & 7+3 \\ & 2+2+2+2+2 \end{array} $, $ \begin{array}{ll} p^{A}_{1}(1)=0 & p^{A}_{1}(5)=2 \\ p^{A}_{1}(2)=1 & p^{A}_{1}(6)=0 \\ p^{A}_{1}(3)=1 & p^{A}_{1}(8)=1 \\ p^{A}_{1}(4)=0 & p^{A}_{1}(10)=2. \end{array} $\\So, $p^{A}_{}(10)=5$ and since $p^{A}_{1}(1)=p^{A}_{1}(4)=p^{A}_{1}(6)=0$, \[ p^{A}_{}(10)=p^{A}_{1}(5)+p^{A}_{1}(2)p^{A}_{1}(2)+p^{A}_{1}(10)=2+1+2=5. \] \\2) $\psi(\mathbf{N})=\{n^2\;|\;n\in\mathbf{N} \} $.\\Let us calculate the partition numbers.\\ $ \begin{array}{ll} 10= & 1+9 \\ & 1+1+1+1+1+1+1+1+1+1 \\ & 4+1+1+1+1+1+1 \\ & 4+4+1+1 \end{array} $, $ \begin{array}{ll} p^{A}_{1}(1)=1 & p^{A}_{1}(5)=1 \\ p^{A}_{1}(2)=0 & p^{A}_{1}(6)=0 \\ p^{A}_{1}(3)=0 & p^{A}_{1}(8)=0 \\ p^{A}_{1}(4)=1 & p^{A}_{1}(10)=1. \end{array} $\\So, $p^{A}_{}(10)=4$ and since $p^{A}_{1}(2)=p^{A}_{1}(3)=p^{A}_{1}(6)=p^{A}_{1}(8)=0$, \begin{align*} p^{A}_{}(10)&=p^{A}_{1}(1)p^{A}_{1}(1)+p^{A}_{1}(5)+p^{A}_{1}(4)p^{A}_{1}(1)p^{A}_{1}(1)+p^{A}_{1}(10)\\ &=1+1+1+1=4. \end{align*} 3) $\psi(\mathbf{N})=\{n\;|\;n\;is\;an\;odd\;number\}$.\\Let us calculate the partition numbers.\\ $ \begin{array}{ll} 10= & 1+1+1+1+1+1+1+1+1+1 \\ & 3+1+1+1+1+1+1+1 \\ & 5+1+1+1+1+1 \\ & 7+1+1+1 \\ & 9+1 \\ & 3+3+1+1+1+1 \\ & 3+3+3+1 \\ & 5+5 \\ & 7+3 \\ & 3+5+1+1 \end{array} $, $ \begin{array}{ll} p^{A}_{1}(1)=1 & p^{A}_{1}(5)=1 \\ p^{A}_{1}(2)=0 & p^{A}_{1}(6)=1 \\ p^{A}_{1}(3)=1 & p^{A}_{1}(8)=2 \\ p^{A}_{1}(4)=1 & p^{A}_{1}(10)=2. \end{array}$\\So, $p^{A}_{}(10)=10$ and since $p^{A}_{1}(2)=0$,\\ \begin{align*} p^{A}_{}(10) & =p^{A}_{1}(1)p^{A}_{1}(1)+p^{A}_{1}(1)p^{A}_{1}(3)+p^{A}_{1}(5) \\ & \,\,+p^{A}_{1}(4)p^{A}_{1}(1)p^{A}_{1}(1)+p^{A}_{1}(4)p^{A}_{1}(3) \\ & \,\,+p^{A}_{1}(6)p^{A}_{1}(1)+p^{A}_{1}(8)p^{A}_{1}(1)+p^{A}_{1}(10) \\ & \,\,=1+1+1+1+1+1+2+2 \\ & \,\,=10. \end{align*} II) Let us consider the identity for $p^{A}_{1}(10)$. The indeterminate equation for this identity is \[n=2N_0+4N_1+8N_2+\cdots\] and solutions of this equation are \[ \begin{tabular}{|ll|} \hline n & solutions \\ \hline 2 & $(1, 0, 0, \cdots)$ \\ \hline 4 & $(2, 0, 0, \cdots)$, $(0, 1, 0, \cdots)$ \\ \hline 6 & $(3, 0, 0, \cdots)$, $(1, 1, 0, \cdots)$ \\ \hline 8 & $(4, 0, 0, \cdots)$, $(2, 1, 0, \cdots)$, $(0, 0, 1, \cdots)$, $(0, 2, 0, \cdots)$ \\ \hline 10 & $(5, 0, 0, \cdots)$, $(1, 2, 0, \cdots)$, $(1, 0, 1, \cdots)$, $(3, 1, 0, \cdots)$ \\ \hline \end{tabular} \]and $\emptyset$ for $n=odd\;number$. So, \[ \begin{array}{ll} n & \Gamma^{\psi}_{1}(n) \\ 0 & 1 \\ 2 & \bar{p}^{A}(1) \\ 4 & \bar{p}^{A}(2)+\bar{p}^{A}(1) \\ 6 & \bar{p}^{A}(3)+\bar{p}^{A}(1)\bar{p}^{A}(1) \\ 8 & \bar{p}^{A}(4)+\bar{p}^{A}(2)\bar{p}^{A}(1)+\bar{p}^{A}(2)+\bar{p}^{A}(1) \\ 10 & \bar{p}^{A}(5)+\bar{p}^{A}(2)\bar{p}^{A}(1)+\bar{p}^{A}(1)\bar{p}^{A}(1)+\bar{p}^{A}(3)\bar{p}^{A}(1) \end{array}\] and $\Gamma^{\psi}_{1}(n)=0$ for $n=odd\,number$.\\Therefore, if we denote $p^{A}_{}(n)$ by $p^{A}(n)$ and $\bar{p}^{A}(n)$ by $\bar{p}^{A}(n)$, \[\,\] $ \begin{array}{ll} p^{A}_{1}(10) & =p^{A}(10)+p^{A}(8)\bar{p}^{A}(1)+p^{A}(6)[\bar{p}^{A}(2)+\bar{p}^{A}(1)] \\ & \,\,+p^{A}(4)[\bar{p}^{A}(3)+\bar{p}^{A}(1)\bar{p}^{A}(1)] \\ & \,\,+p^{A}(2)[\bar{p}^{A}(4)+\bar{p}^{A}(2)\bar{p}^{A}(1)+\bar{p}^{A}(2)+\bar{p}^{A}(1)] \\ & \,\,+\bar{p}^{A}(5)+\bar{p}^{A}(2)\bar{p}^{A}(1)+\bar{p}^{A}(1)\bar{p}^{A}(1)+\bar{p}^{A}(3)\bar{p}^{A}(1). 
\end{array} $ \[\,\] Now, we calculate for two sets of parts $\{p\;|\;p\;is\;a\;prime\}$ and $\{n^{2}\;|\;n\in\mathbf{N}\}$.\\ 1) For $\{p\;|\;p\;is\;a\;prime\}$, $p^{A}_{1}(10)=2$ and\\ $ \begin{array}{ll} p^{A}(2)=1 & \bar{p}^{A}(1)=0 \\ p^{A}(4)=1 & \bar{p}^{A}(2)=-1 \\ p^{A}(6)=2 & \bar{p}^{A}(3)=-1 \\ p^{A}(8)=3 & \bar{p}^{A}(4)=1 \\ p^{A}(10)=5 & \bar{p}^{A}(5)=0. \end{array}$ \\If we calculate $p^{A}_{1}(10)$, \begin{align*} p^{A}_{1}(10) & =2 \\ & =5+3\times0+2\times[(-1)+0]+1\times[(-1)+0\times0] \\ & \,\,+1\times[1+(-1)\times0+(-1)+0]+0\\ &\,\,+0\times(-1)+0\times0+0\times(-1). \end{align*} 2) For $\{n^{2}\;|\;n\in\mathbf{N}\}$, $p^{A}_{1}(10)=1$ and\\ $ \begin{array}{ll} p^{A}(2)=1 & \bar{p}^{A}(1)=-1 \\ p^{A}(4)=2 & \bar{p}^{A}(2)=1 \\ p^{A}(6)=2 & \bar{p}^{A}(3)=-1 \\ p^{A}(8)=3 & \bar{p}^{A}(4)=0 \\ p^{A}(10)=4 & \bar{p}^{A}(5)=0. \end{array}$ \\If we calculate $p^{A}_{1}(10)$, \begin{align*} p^{A}_{1}(10) & =1 \\ & =4+3\times(-1)+2\times[1+(-1)] \\ & \,\,+2\times[(-1)+(-1)\times(-1)] \\ & \,\,+1\times[0+1\times(-1)+1+(-1)] \\ & \,\,+0+(-1)\times1+(-1)\times(-1)+(-1)\times(-1). \end{align*} \end{document}
\begin{document} \input{psfig.sty} \draft \title{Free-space quantum key distribution} \author{W. T. Buttler,$^{1}$ R. J. Hughes,$^{1}$ P. G. Kwiat,$^{1}$ G. G. Luther,$^{1}$ G. L. Morgan,\\$^{1}$ J. E. Nordholt,$^{2}$ C. G. Peterson,$^{1}$ and C. M. Simmons$^{1}$} \address{University of California,\\Los Alamos National Laboratory, \\$^{1}$Physics Division,\\$^{2}$Nonproliferation and International Security,\\Los Alamos, NM 87545} \date{\today} \maketitle \begin{abstract} A working free-space quantum key distribution (QKD) system has been developed and tested over a 205-m indoor optical path at Los Alamos National Laboratory under fluorescent lighting conditions. Results show that free-space QKD can provide secure real-time key distribution between parties who have a need to communicate secretly. \end{abstract} \pacs{PACS Numbers: 42.79.Sz, 03.65-w} Quantum cryptography was introduced in the mid-1980s \cite{BB84} as a new method for generating the shared, secret random number sequences, or cryptographic keys, that are used in crypto-systems to provide communications security. The appeal of quantum cryptography is that its security is based on laws of Nature, in contrast to existing methods of key distribution that derive their security from the perceived intractability of certain problems in number theory, or from the physical security of the distribution process. Since the introduction of quantum cryptography, several groups have demonstrated that quantum key distribution (QKD) can be performed over multi-kilometer distances of optical fiber \cite{1_km}-\cite{Aero97}, but the utility of the method would be greatly enhanced if it could also be performed over free-space paths, such as are used in laser communications systems. Indeed there are certain key distribution problems in this category for which QKD would have definite practical advantages (for example, it is impractical to send a courier to a satellite). We are developing QKD for use over line-of-sight paths, including surface to satellite, and here we report our first results on key generation over indoor paths of up to $205$ m. The feasibility of QKD over free-space paths might be considered problematic because it requires the transmission of single photons through a medium with varying properties and detection of these photons against a high background. However, others have shown that the combination of sub-nanosecond timing, narrow filters \cite{PhotonByPhoton,DaylightPairs}, and spatial filtering can render both of these problems tractable. Furthermore, the atmosphere is essentially non-birefringent at optical wavelengths, allowing faithful transmission of the single-photon polarization states used in QKD. A QKD procedure starts with the sender, ``Alice,'' generating a secret random binary number sequence. For each bit in the sequence, Alice prepares and transmits a single photon to the recipient, ``Bob,'' who measures each arriving photon and attempts to identify the bit value Alice has transmitted. Alice's photon state preparations and Bob's measurements are chosen from sets of non-orthogonal possibilities. For example, in the B92 protocol \cite{B92} Alice agrees with Bob (through public discussion) that she will transmit a horizontally-polarized photon, $|h\rangle$, for each ``0'' in her sequence, and a right-circular-polarized photon, $|rcp\rangle$, for each ``1'' in her sequence. 
Bob agrees with Alice to randomly test the polarization of each arriving photon in one of two ways: he either tests with vertical polarization, $|v\rangle$, to reveal ``1s,'' or left-circular polarization, $|lcp\rangle$, to reveal ``0s.'' Note that Bob will never detect a photon for which he and Alice have used a preparation/measurement pair that corresponds to different bit values, such as $|h\rangle$ and $|v\rangle$, which happens for $50$\% of the bits in Alice's sequence. However, for the other $50$\% of Alice's bits where the preparation and measurement protocols agree, such as $|h\rangle$ and $|lcp\rangle$, there is a 50\% probability that Bob detects the photon, as shown in TABLE I. So, by detecting photons Bob is able to identify a random $25$\% portion of the bits in Alice's sequence, assuming no bit loss in transmission or detection. (This $25$\% efficiency factor is the price that Alice and Bob must pay for secrecy.) Bob then communicates to Alice over a public channel the locations, but not the bit values, in the sequence where he detected photons, and Alice retains only these detected bits from her initial sequence. The resulting detected bit sequences are the raw key material from which a pure key is distilled using classical error detection techniques. An eavesdropper, ``Eve,'' can neither ``tap'' the key transmissions, owing to the indivisibility of a photon \cite{indivis1,indivis2}, nor copy them owing to the quantum ``no-cloning'' \cite{noclone1}-\cite{noclone4} theorem. Furthermore, the non-orthogonal nature of the quantum states ensures that if Eve makes her own measurements she will be detected through the elevated error rate she causes by the irreversible ``collapse of the wavefunction'' \cite{Eavesdropping}. The prototype QKD transmitter (FIG. 1) consisted of a temperature controlled diode laser, a collimating lens, two dielectric mirrors, a fiber to free-space launch system, a single-mode fiber pigtailed polarization neutral beamsplitter, a variable optical attenuator (OA), a $\sim10$-m single-mode optical fiber delay, a $2.5$ nm bandwidth interference filter (IF), a polarizing beamsplitter (PBS), a low-voltage pockels cell (PC), and an $8\times$ beam expander (BE). The diode laser wavelength is temperature selected to $772$ nm, and the laser is configured to emit a short, weak coherent pulse of $\sim1$ ns length, containing approximately $10^{5}$ photons. The free-space QKD receiver (FIG. 2) was comprised of a $3.5$ in. Cassegrain telescope (CT), a free-space to fiber launch system, a single-mode fiber pigtailed polarization neutral beamsplitter, two sets of polarization controllers (each consisting of a quarter-wave retarder and a half-wave retarder), a PBS, and a single photon counting module, or SPCM (EG\&G part number: SPCM-AQ 142-FL). The prototype receiver did not include an interference filter but it is expected that future versions of the receiver will incorporate this feature to reduce background light levels. A computer control system, ``Alice,'' starts the QKD protocol by pulsing the diode laser at a rate previously agreed upon between herself and the receiving computer control system, ``Bob.'' Each laser pulse is launched into a single-mode optical fiber and then split by the beamsplitter with equal probability between the direct path and the delay path. The direct path produces a coherent ``bright pulse'' of $\sim10^{5}$ photons which Bob uses as his system trigger for timing purposes. 
Light traveling along the direct path passes through the IF, the PBS, the PC, and is then launched into free-space from the BE. The IF constrains wavelength, and the PBS is oriented to transmit horizontal polarization. The fiber delay and OA are used to delay the diverted pulse by $\sim50$ ns as well as attenuate the diverted pulse to an average of $\sim1.4$ photons per pulse. This attenuated pulse then impinges again upon the beamsplitter, which transmits a dim-pulse with an average of $\sim0.7$ photons that follows the bright pulse along the direct path through the IF, the PBS, the PC, and the BE. (The attenuated pulse only approximates a ``single-photon'' state; we tested the system with an average of $\sim0.7$ photons per ``dim-pulse.'' This corresponds to a $2$-photon probability of $\sim12$\% and implies that $\sim30$\% of the detectable dim-pulses will contain $2$ or more photons; for example, with a Poisson distribution with an average photon number of $0.7$ there will be $\sim50$ empty sets, $\sim35$ sets of $1$ photon, $\sim12$ sets of $2$ photons, and $\sim3$ sets of $3$ photons for every $100$ dim-pulses.) The PBS transmits the $|h\rangle$ dim-pulse to the PC, which is randomly switched to affect only the dim-pulse polarization. The random switch setting is determined by discriminating a random voltage generated by a white noise source; depending on the random bit value, the PC either passes the dim-pulse unchanged as $|h\rangle$ (zero-wave retardation) or changes it to $|rcp\rangle$ (quarter-wave retardation). The bright pulse's polarization is never altered. Bob then collects the bright- and dim-pulses with the Cassegrain telescope and launches them into single-mode fiber. The bright pulse is split at the beamsplitter along two independent paths---one path [the long path (LP)] is approximately $5$ ns longer than the other path [the short path (SP)]. Each path contains polarization controlling optics which terminate upon the PBS. We configured our system to operate with a single SPCM, but we have also operated with SPCMs at both of the output ports of the PBS. If the dim-pulse of $\sim0.7$ photons is collected and launched into the fiber at the receiver, it will be diverted by the beamsplitter with equal probability along one of the two possible optical paths. In the prototype system the polarization controlling optics were adjusted to behave together as a quarter-wave retarder along the SP, and a zero-wave retarder along the LP. Thus, a dim-pulse of $|rcp\rangle$ traveling the SP is converted to $|v\rangle$ and reflected away from the SPCM. Conversely, a dim-pulse of $|h\rangle$ traveling the SP is converted to $|rcp\rangle$ and is transmitted toward or reflected away from the SPCM with equal probability. Similarly, a dim-pulse of $|h\rangle$ traveling the LP is transmitted away from the SPCM, but a dim-pulse of $|rcp\rangle$ is reflected toward or transmitted away from the SPCM with equal probability. We used the differing path lengths, together with fast timing electronics gated with narrow coincidence windows ($\sim5$ ns), to determine dim-pulse polarizations with a single detector. Specifically, a coincidence observed $50$ ns after the bright pulse (early coincidence) informs Bob that the dim-pulse was of $|rcp\rangle$, while a coincidence observed $55$ ns after the bright pulse (late coincidence) tells Bob that the dim-pulse was of $|h\rangle$. The detector dead time was $\sim35$ ns.
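For readers who want to see the sifting arithmetic of the protocol spelled out, the following toy simulation reproduces the $25$\% sifted-bit fraction discussed earlier. It is our own illustrative sketch under idealized assumptions (lossless channel, no background counts, perfect optics), not the control software used in the experiment, and the function and variable names are hypothetical.
\begin{verbatim}
import random

def simulate_b92(n_pulses, detect_prob=0.5, seed=1):
    """Toy B92 sifting: Alice encodes 0 as |h> and 1 as |rcp>; Bob randomly
    tests for |lcp> (revealing 0s) or |v> (revealing 1s).  A detection can
    occur only when Bob's test value matches Alice's bit, and then only with
    probability 1/2 (ideal, lossless channel assumed)."""
    rng = random.Random(seed)
    alice_key, bob_key = [], []
    for _ in range(n_pulses):
        a = rng.randint(0, 1)          # Alice's random bit
        b = rng.randint(0, 1)          # bit value Bob's analyzer tests for
        if a == b and rng.random() < detect_prob:
            alice_key.append(a)        # Bob announces only the position
            bob_key.append(b)
    return alice_key, bob_key

alice, bob = simulate_b92(100000)
print(len(alice) / 100000)             # ~0.25: the 25% sifting efficiency
print(alice == bob)                    # True in this idealized, error-free model
\end{verbatim}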
A variety of transmitter and receiver configurations were used to evaluate the equipment and test out the optical elements over optical path lengths of $2$-, $36$-, and $205$-m, but here we discuss only the $205$-m results. The $205$-m experiment was performed with the transmitter and receiver colocated in order to simplify data acquisition. The $205$-m optical path was achieved by sending the emitted beam up and down a $\sim17.1$-m laboratory hallway $6$ times with the use of $10$ mirrors, and a corner cube under fluorescent lighting conditions. The corner cube was used to determine the feasibility of transmitting single photons from a ground station to a low earth orbit satellite covered with corner cubes (such as LAGEOS-I and LAGEOS-II) and back. (Note: the primary property of the corner cube is its ability to return light back along the path it came. However, the corner cube also possesses the seldom-noted feature that each of its $6$ possible optical paths will transform a given incident polarization differently\cite{corner_cube}. Because of this, a fully illuminated corner cube cannot be used to perform polarization dependent experiments. Therefore, only one path through the corner cube was used during the experiment.) The coupling efficiency, $\eta$, between the transmitter and receiver for the $205$-m path was $\eta\sim2$\%, where $\eta$ accounts for losses between the transmitter and the power coupled into the single-mode fibers preceding the detector at the receiver. This efficiency led to a bit-rate of $\sim50$ Hz when the transmitter was pulsed at a rate of $\sim20$ kHz over the $205$-m path, with the system operating at an average of $\sim0.7$ photons per dim-pulse. The final bit-rate is the product of $\eta$ and the probabilities that the weak coherent pulse of photons will reach the detector, and the probabilities that the detector will fire when Poisson distributed photons reach the detector. [The detector efficiency is a function of the average photon number per dim-pulse, and accounts for the probability the detector will fire given 1 photon ($p(1)\sim0.65$), $2$ photons ($p(2)\sim0.878$), $3$ photons ($p(3)\sim0.957$), etc., reach the detector. These probabilities are convolved with the probabilities that $1$, $2$, $3$, or more of those photons actually reach the detector and then convolved with the Poisson probabilities for $1$, $2$, $3$, or more photons per dim-pulse ($p(1)=0.348$, $p(2)\sim0.123$, $p(3)\sim0.0284$, etc.). These convolutions are then summed to give the detection efficiency as a function of the Poisson average photon number.] The bit error rate (BER) for the $205$-m path was $\sim6$\%, where the BER is defined as the ratio of the bits received in error to the total number of bits received. A sample of raw key material from the $205$-m experiment, with errors, is shown in TABLE II. The narrow coincidence time windows in Bob's receiver minimized bit errors due to detector dark noise ($\sim80$ Hz); the ambient background was $\sim1$ kHz. These low noise rates amounted to $\sim1$ bit-error every $9$ s. After-pulsing of the SPCMs caused by the bright pulses contributed $\sim2$\% to the total BER---an average rate of $1$ bit-error per second. In addition, bright pulse reflections within the transmitter caused the ``1s'' errors (late coincidence errors) to be about $6$ times higher than the ``0s'' errors (early coincidence). 
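To check that the numbers quoted above are mutually consistent, the following back-of-the-envelope script combines the Poisson photon statistics of the dim-pulse, the coupling efficiency $\eta\sim2$\%, the single-photon detection efficiency, and (in our reading of the receiver description) the passive $50/50$ path split together with the $50$\% polarization routing at Bob, which pass a given photon toward the SPCM with probability $1/4$. Treating all of these losses as independent per-photon events is our own simplified model, not the authors' exact bookkeeping, and all names in the script are illustrative.
\begin{verbatim}
from math import exp, factorial, comb

# Parameters quoted in the text: mean photon number per dim-pulse, coupling
# efficiency eta, single-photon SPCM efficiency p1, and the 20 kHz pulse rate.
mu, eta, p1, rate = 0.7, 0.02, 0.65, 20e3
s = 0.25 * eta      # per-photon chance of reaching the SPCM: coupling losses times
                    # the 50/50 path split and 50% polarization routing at Bob

def poisson(n, lam):
    return exp(-lam) * lam**n / factorial(n)

p_click = 0.0
for n in range(1, 25):                  # photons emitted in the dim-pulse
    for m in range(1, n + 1):           # photons that actually reach the SPCM
        p_reach = comb(n, m) * s**m * (1 - s)**(n - m)
        p_fire = 1 - (1 - p1)**m        # reproduces p(1)=0.65, p(2)=0.878, p(3)=0.957
        p_click += poisson(n, mu) * p_reach * p_fire

print(rate * p_click)                   # roughly 45 bits/s, versus the ~50 Hz quoted
\end{verbatim}
The result, roughly $45$ detected bits per second at a $20$ kHz pulse rate, is consistent with the $\sim50$ Hz bit-rate reported above.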
After-pulsing errors could be reduced by increasing the length of the fiber delay to further separate the bright and dim pulses and should result in a BER of $\sim4$\%---an average rate of $2$ bit-errors per second; reflection errors could be reduced through the use of angle-polished fiber termination and should result in a BER of $\sim2$\%. It is important to eliminate reflection errors because these are weaknesses which could be exploited by Eve. The BER might be further reduced to $\sim1$\% by elimination of the common PBS at the receiver, and by operating the receiver in a $2$-detector configuration. The poor coupling efficiency ($\eta\sim2$\%) together with the constant average bit-errors caused by after-pulsing and reflections (about $3$ bit-errors per second) prevented us from effectively operating the prototype system below an average of $\sim0.7$ photons per dim-pulse. This experiment implemented a two-dimensional parity check scheme which allowed the generation of error-free key material. The error detection program permitted the isolation of error-free bits from key material with BERs exceeding $10$\%. A further stage of ``privacy amplification'' is necessary to reduce any partial knowledge gained by an eavesdropper to less than $1$ bit of information \cite{PrivAmp}. We have not implemented this protocol at this time. Our prototype incorporates ``one-time pad\footnote{One-time pad encryption utilizes a unique random string of key bits to encrypt a single plaintext message. In particular, the key bit string is exactly the same length as the plaintext string and is used only one time. Encryption (decryption) is accomplished by XORing the message bits (encrypted bits) with the key bits.}'' \cite{Vernam} encryption (also known as the Vernam cipher)---the only provably secure encryption method, and could also support any other symmetric-key system. The original form of the B92 protocol \cite{B92} is vulnerable to a ``man-in-the-middle,'' or opaque, attack by Eve. For instance, Eve could measure Alice's photons in Bob's basis and only send a photon, or coherent photon pulse, when she identifies a bit. However, if Eve retransmits each observed bit as a single photon (or a weak coherent pulse) she will noticeably lower Bob's bit-rate. To compensate for the additional attenuation to Bob's bit-rate, Eve could send on a coherent photon pulse of an intensity appropriate to raise Bob's bit-rate to a level similar to her own bit-rate with Alice. [In fact, if Eve sends a bright classical pulse (a pulse of a large average photon number) she guarantees that Bob's bit-rate is equal to her own.] Our system protects against this scenario when operated with two SPCMs. For example, this type of attack would be revealed by an increase in ``dual-fire'' errors, which occur when both SPCMs fire simultaneously. (In a perfect system there would be no ``dual-fire'' errors, regardless of the average photon number per pulse, but in an imperfect experimental system, where bit-errors occur, dual-fire errors will occur.) A better protection would be to use the BB84 \cite{BB84} protocol, which our system also supports. Over the next few months we intend to implement design changes to the transmitter and receiver in order to increase the system efficiency, $\eta$, and increase the total range of the QKD system. Our calculations show that a narrow filter, the spatial filtering, and the narrow coincidence timing provided by this system will allow reliable key distribution under bright daylight conditions.
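As a concrete illustration of the XOR operation described in the one-time-pad footnote above, the following minimal sketch encrypts and decrypts a short bit string. In the actual system the key bits would be the error-corrected output of the QKD link; the helper names below are ours.
\begin{verbatim}
import secrets

def xor_bits(a_bits, b_bits):
    """Bitwise XOR of two equal-length bit lists (the Vernam cipher core)."""
    return [a ^ b for a, b in zip(a_bits, b_bits)]

message    = [0, 1, 1, 0, 1, 0, 0, 1]                # plaintext bits
key        = [secrets.randbits(1) for _ in message]  # same length, used only once
ciphertext = xor_bits(message, key)                  # encryption
assert xor_bits(ciphertext, key) == message          # decryption with the same key
\end{verbatim}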
Our goal is to exchange key bits outdoors over one or two kilometers by the end of $1997$. Finally, we note that somewhat similar results to these presented here are reported in reference \cite{FreeSpace}. However, the protocol of reference \cite{FreeSpace} was implemented with a modulated HeNe laser, utilized long pulse lengths ($\sim100$ ns), and active polarization switching at the receiver, whereas we implemented our protocol over a line-of-sight path $35$\% longer than in reference \cite{FreeSpace} with a system which incorporates short pulse lengths ($\sim1$ ns, and allows the use of narrow coincidence timing windows to minimize ambient background noise allowing daytime applications) and passive polarization switching at the receiver (a simpler design than in reference \cite{FreeSpace} which will be critical for the locating of a receiver, Bob, on a satellite). R. J. H. wishes to extend special thanks to J. G. Rarity for the many helpful discussions regarding free-space quantum cryptography, and W. T. B. extends his appreciation to S. K. Lamoreaux and A. G. White for their helpful conversations. The work described in this letter was performed with U. S. Government funding. \begin{table} \caption{Observation Probabilities} \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline Alice's Bit Value & ``0'' & ``0'' & ``1'' & ``1'' \\ Bob Tests With & ``1'' & ``0'' & ``1'' & ``0'' \\ \hline Observation Probability & p$=0$ & p$=\frac{1}{2}$ & p$=\frac{1}{2}$ & p$=0$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{A 200-Bit Sample of Alice's (A) and Bob's (B) Raw Key Material Generated by Free-Space QKD} \begin{center} \begin{tabular}{|l|l|l|l|l|l|} \hline A & 11111010 & 10100100 & 00110010 & 10011011 & 01010110 \\ B & 1111101{\bf 1} & 1{\bf 1}100{\bf 0}00 & 0011{\bf 1}010 & 10011011 & 01010110 \\ \hline A & 00111101 & 01101111 & 11010000 & 01101111 & 01011011 \\ B & 00111101 & 01101111 & 11010000 & 01101111 & 01011011 \\ \hline A & 11100100 & 01010001 & 10110100 & 10110101 & 01101011 \\ B & 11100100 & 01010001 & {\bf 0}0{\bf 0}10100 & 10110101 & 01101011 \\ \hline A & 10001011 & 11010111 & 10101110 & 10100111 & 00010011 \\ B & {\bf 0}00010{\bf 0}1 & 11010111 & 10101{\bf 0}10 & 10100111 & 00010011 \\ \hline A & 01000010 & 00100011 & 00111001 & 01101100 & 01110001 \\ B & 01000010 & 00100011 & 00111001 & 01101{\bf 0}00 & 01110001 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \caption{Free-Space QKD Transmitter (Alice).} \label{app} \end{figure} \begin{figure} \caption{Free-Space QKD Receiver (Bob).} \label{a1} \end{figure} \end{document}
\begin{document} \title[Geometric Main Conjectures in Function Fields]{Geometric main conjectures in function fields} \author{Werner Bley} \author{Cristian D. Popescu$^\ast$}\thanks{$^\ast$Supported by a Simons Foundation collaboration grant} \maketitle \begin{abstract} We prove an Equivariant Main Conjecture in Iwasawa Theory along any rank one, sign-normalized Drinfeld modular, split at $\infty$ Iwasawa tower of a general function field of characteristic $p$, for the Iwasawa modules recently considered by Greither and Popescu in \cite{GP12}, in their proof of the classical Equivariant Main Conjecture along the (arithmetic) cyclotomic Iwasawa tower. As a consequence, we prove an Equivariant Main Conjecture for a projective limit of certain Ritter--Weiss type modules, along the same Drinfeld modular Iwasawa towers. This generalizes the results of Angl\`es et al. \cite{Angles}, Bandini et al. \cite{Bandini}, and Coscelli \cite{Coscelli}, for the split at $\infty$ piece of the Iwasawa towers considered in loc.cit., and refines the results in \cite{GP12}. \end{abstract} \section{Introduction and Notations}\label{intro} \subsection{Arithmetic Iwasawa Theory} In \cite{GP12}, Greither and the second author considered a set of data $(K/k, S, \Sigma)$ consisting of an abelian extension $K/k$ of global fields of characteristic $p>0$ of Galois group $G$ and two finite, nonempty, disjoint sets of places $S$ and $\Sigma$ in $k$, such that $S$ contains the ramification locus of $K/k$. From this data one can construct a Deligne--Picard $1$--motive $M_{S,\Sigma}$, which is naturally acted upon by the Galois group $G\times\Gamma$, where $\Gamma:=G(\overline{\Bbb F_q}/\Bbb F_q)$ and $\Bbb F_q$ is the exact field of constants of $k$. As a consequence, all the $\ell$--adic realizations $T_{\ell}(M_{S,\Sigma})$ are natural finitely generated modules over the profinite group--algebra $\Bbb Z_{\ell}[[G\times\Gamma]]$, for all prime numbers $\ell$. On the other hand, the set of data $(K/k,S,\Sigma)$ gives rise to a polynomial $\Theta_{S,\Sigma}(u)\in\Bbb Z[G][u]$, which is uniquely determined by the packet of ($S$--incomplete, $\Sigma$--smoothed) Artin $L$--functions $L_{S,\Sigma}(\chi, s)$, for all the complex-valued characters $\chi$ of $G$, via the equalities \[ \chi(\Theta_{S,\Sigma}(u))\mid_{u=q^{-s}}=L_{S, \Sigma}(\chi^{-1}, s), \] for all $s\in\Bbb C$. The main result in \cite{GP12} is the following $G$--equivariant Iwasawa main conjecture, along the arithmetic (cyclotomic) Iwasawa tower $(K\otimes_{\Bbb F_q}\overline{\Bbb F_q})/K$, of Galois group $\Gamma\simeq \widehat{\Bbb Z}$, whose natural topological generator is the $q$--power Frobenius automorphism of $\overline{\Bbb F_q}$, denoted by $\gamma$. \begin{theorem}[Greither--Popescu \cite{GP12}]\label{GP-main-intro} For $(K/k, S, \Sigma)$ as above and all primes $\ell$ we have \begin{enumerate} \item ${\rm pd}_{\Bbb Z_\ell[[G\times\Gamma]]}(T_\ell(M_{S,\Sigma}))=1.$ \item ${\rm Fitt}_{\Bbb Z_\ell[[G\times\Gamma]]}(T_\ell(M_{S,\Sigma}))=\langle \Theta_{S, \Sigma}(\gamma^{-1})\rangle.$ \end{enumerate} \end{theorem} In the statement above, ${\rm pd}_R(M)$ and ${\rm Fitt}_R(M)$ denote the projective dimension, respectively the $0$--th Fitting ideal, of a finitely presented module $M$ over a commutative, unital ring $R$. See \S4 in \cite{GP12} for the relevant definitions and properties of Fitting ideals.
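For orientation, we also recall the prototypical examples (standard facts, included here only for the reader's convenience): if $M$ admits a finite presentation $R^m\stackrel{A}{\longrightarrow}R^n\to M\to 0$, then ${\rm Fitt}_R(M)$ is the ideal of $R$ generated by the $n\times n$ minors of the matrix $A$. In particular,
\[
{\rm Fitt}_R(R/I)=I, \qquad {\rm Fitt}_R(R/I_1\oplus\dots\oplus R/I_n)=I_1\cdots I_n,
\]
for ideals $I, I_1,\dots, I_n$ of $R$. In this sense, statement (2) above may be read as saying that the equivariant $L$--value $\Theta_{S,\Sigma}(\gamma^{-1})$ plays the role of a characteristic element for the Iwasawa module $T_\ell(M_{S,\Sigma})$.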
\subsection{Geometric Iwasawa Theory} The main goal of this paper is to prove analogues of Theorem \ref{GP-main-intro} above along geometric Iwasawa--towers $K_\infty/K$, which are highly ramified and obtained from $K$ essentially by adjoining the $\frak p^n$--torsion points of a sign--normalized, rank one Drinfeld module (a Hayes module), for some place $\frak p$ in $k$ and all $n\in\Bbb Z_{\geq 1}$. This geometric Iwasawa theoretic approach was first considered in \cite{Angles}, in the particular case where $K=k=\Bbb F_q(t)$ and $K_\infty=\cup_{n\geq 0} K_n$ with $K_n$ obtained by adjoining the $\frak p^{n+1}$--torsion $\mathcal C[\frak p^{n+1}]$ of the Carlitz module \[ \mathcal C: \Bbb F_q[t]\to \Bbb F_q[t]\{\tau\}, \qquad \mathcal C(t)=t+\tau, \] for a maximal ideal $\frak p$ in $\Bbb F_q[t]$. The fields $K_n$ are the ray--class fields of $k$ of conductors $(\frak p^{n+1}v_\infty)$, where $v_\infty$ is the valuation of $k$ of uniformizer $1/t$. While using Theorem \ref{GP-main-intro} above and the techniques and results developed in \cite{GP12}, the authors of \cite{Angles} are studying the more classical Iwasawa $\Bbb Z_p[[G(K_\infty/k)]]$--module $$\frak X_p^{(\infty)}:=\varprojlim_n\, ({\rm Pic}^0(K_n)\otimes\Bbb Z_p),$$ where the projective limit is taken with respect to the usual norm maps at the level of the Picard groups of the function fields $K_n$. One has topological group isomorphisms $$G(K_\infty/K)\simeq \Bbb F_{\frak p}^\times\times U_{\frak p}^{(1)}\simeq \Bbb F_{\frak p}^\times\times \Bbb Z_p^{\aleph_0},$$ where $\Bbb F_{\frak p}$ is the residue field of $\frak p$, $U_{\frak p}^{(1)}$ is the group of principal units in the completion of $k$ at $\frak p$ and $\Bbb Z_p^{\aleph_0}$ denotes a product of countably many copies of $\mathbb{Z}_p$. The main Iwasawa theoretic result in \cite{Angles} gives the $0$--th Fitting ideal of $\frak X_p^\infty$, away from the trivial character of $\Bbb F_{\frak p}^\times$, in terms of an element $\Theta^{\infty, \sharp}_{S, \Sigma}\in \Bbb Z_p[[G(K_\infty/K)]]$, which should be viewed as the $\Bbb Z_p[[G(K_\infty/k)]]$--analogue of the special value $\Theta_{S, \Sigma}(1)\in\Bbb Z[G(K/k)]$ of the element $\Theta_{S,\Sigma}(u)$ described above. The work in \cite{Angles} was further developed in \cite{Bandini} and \cite{Coscelli}, see Remark \ref{remark-Bandini-Coscelli}. \\ As opposed to \cite{Angles}, the set-up of this paper is the following. We fix an arbitrary function field $k$ of exact field of constants $\Bbb F_q$ and a place $v_\infty$ of $k$, called the infinite place of $k$ from now on. We let $A$ denote the Dedekind domain consisting of those elements in $k$ which are integral at all places of $k$, except for $v_\infty$. Further, we fix an ideal $\frak f$ and a maximal ideal $\frak p$ of $A$, such that $\frak p\nmid\frak f$. The geometric extensions of $k$ of interest to us are the fields $$L_n:=H_{\frak f\frak p^{n+1}}, \text{ for all }n\geq 0,$$ which are the ray--class fields of $k$ of conductors $\frak f\frak p^{n+1}$ in which $v_\infty$ splits completely (i.e. the {\it{real}} ray--class fields of conductors $\frak f\frak p^{n+1}$.) As proved by Hayes in \cite{Hay85}, the extension $L_n/L_0$ is essentially generated by the $\frak p^{n+1}$--torsion points of a certain type of rank 1, sign-normalized Drinfeld module defined on $A$. (See Section~\ref{Drinfeld modules and cft} for details.) 
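To make the objects in the above discussion concrete, we recall the simplest instance (a standard computation, included only for orientation; see Section~\ref{Drinfeld modules and cft} for the general definitions): for the Carlitz module one has $\mathcal C_t=t+\tau$, where $\tau$ is the $q$--power Frobenius and twisted polynomials are multiplied according to the rule $\tau\omega=\omega^q\tau$. For example,
\[
\mathcal C_{t^2}=\mathcal C_t\cdot\mathcal C_t=(t+\tau)(t+\tau)=t^2+(t+t^q)\tau+\tau^2,
\]
so $t$ acts on an algebraic closure of $k$ by $t\ast z=tz+z^q$, and the torsion module $\mathcal C[\frak p^{n+1}]$ is simply the set of roots of the separable, $\Bbb F_q$--linear polynomial $\mathcal C_a(z)$, where $a$ is a monic generator of the principal ideal $\frak p^{n+1}$ of $\Bbb F_q[t]$.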
The ensuing geometric Iwasawa tower $L_\infty/k$, with $L_\infty=\cup_nL_n$, has Galois group $G_\infty$ which sits in an exact sequence $$0\to G(L_\infty/L_0)\to G_\infty\to G(L_0/k)\to 0,$$ where $G(L_\infty/L_0)\simeq \Bbb Z_p^{\aleph_0}$ and $G(L_0/k)$ is finite. Since the ramification locus of $L_\infty/k$ is finite, namely $S:=\{\frak p\}\cup\{v\,\vert\, v \text{ prime in } A, v\vert\frak f\}$, one can construct the following element $$\Theta_{S, \Sigma}^{(\infty)}(u):=\varprojlim_n\Theta_{S,\Sigma}^{(n)}(u)\in\Bbb Z_p[[G_\infty]][[u]],$$ out of the polynomials $\Theta_{S,\Sigma}^{(n)}(u)\in\Bbb Z[G(L_n/k)][u]$ associated in \cite[\S4.2]{GP12} to the data $(L_n/k,S, \Sigma)$, for any finite, non-empty set $\Sigma$ of primes in $k$, disjoint from $S$. On the other hand, to the set of data $(L_\infty/k, S, \Sigma)$ one can associate the following $\mathbb{Z}_p[[G_\infty\times\Gamma]]$--module $$T_p(M_{S,\Sigma}^{(\infty)}):=\varprojlim_n T_p(M_{S,\Sigma}^{(n)}),$$ where $M_{S,\Sigma}^{(n)}$ is the Picard $1$--motive for $(L_n/k, S, \Sigma)$, and $T_p(M_{S,\Sigma}^{(n)})$ is its $p$--adic Tate module, as defined in \cite[\S2]{GP12}. The projective limit is taken with respect to certain canonical norm maps, described in detail in \S3 below. It turns out that neither $T_p(M_{S, \Sigma}^{(n)})$ nor $T_p(M_{S, \Sigma}^{(\infty)})$ depends on $\Sigma$, which is why we drop $\Sigma$ from those notations. In \S3.2, we prove the following geometric--arithmetic analogue of Theorem \ref{GP-main-intro} above. \begin{theorem}\label{EMC-I-intro} For any finite, non-empty set $\Sigma$ of primes in $k$, disjoint from $S$, \\ the $\Bbb Z_p[[G_\infty\times\Gamma]]$--module $T_p(M_{S}^{(\infty)})$ is finitely generated, torsion and \begin{enumerate} \item ${\rm pd}_{\mathbb{Z}_p[[G_\infty\times\Gamma]]}(T_p(M^{(\infty)}_{S}))=1.$ \item ${\rm Fitt}_{\mathbb{Z}_p[[G_\infty\times\Gamma]]}(T_p(M^{(\infty)}_{S}))=\langle \Theta_{S,\Sigma}^{(\infty)}(\gamma^{-1})\rangle.$ \end{enumerate} \end{theorem} In order to obtain a geometric (along the tower $L_\infty/k$) Iwasawa main conjecture--type result, one has to take $\Gamma$--coinvariants. In \S3.3 we establish a $\Bbb Z_p[[G_\infty]]$--module isomorphism $$T_p(M_S^{(\infty)})_{\Gamma}\simeq \bigtriangledown_S^{(\infty)},$$ where $\bigtriangledown_S^{(\infty)}$ is an arithmetically meaningful $\Bbb Z_p[[G_\infty]]$--module, a projective limit of Ritter--Weiss type modules $\bigtriangledown_S^{(n)}$ which are extensions of divisor groups by class groups. (See the Appendix. Also, see \cite{Ritter-Weiss} for the number field analogues of $\bigtriangledown_S^{(n)}$.) We prove the following. \begin{theorem}\label{EMC-II-intro} For any finite, non-empty set $\Sigma$ of primes in $k$, disjoint from $S$, \\ the $\Bbb Z_p[[G_\infty]]$--module $\bigtriangledown_S^{(\infty)}$ is finitely generated, torsion and \begin{enumerate} \item ${\rm pd}_{\Bbb Z_p[[G_\infty]]}(\bigtriangledown_S^{(\infty)})=1.$ \item ${\rm Fitt}_{\mathbb{Z}_p[[G_\infty]]}(\bigtriangledown_S^{(\infty)})=\langle \Theta_{S,\Sigma}^{(\infty)}(1)\rangle.$ \end{enumerate} \end{theorem} To relate our results to those in \cite{Angles, Bandini,Coscelli} we have to introduce some further notation. We let $\Delta$ denote the maximal subgroup of $G(L_0/k)$ whose order is not divisible by $p$. Then we have a canonical isomorphism $G_\infty \simeq \Delta \times G_\infty^{(p)}$, where $G_\infty^{(p)}$ is the maximal pro-$p$ subgroup of $G_\infty$.
We view the idempotent $e_\Delta := \frac{1}{|\Delta|} \sum_{\delta \in \Delta}\delta$ as an element of $\mathbb{Z}_p[[G_\infty]]$ and consider the exact functor $M \mapsto M^\sharp := (1 - e_\Delta)M$ from the category of $\mathbb{Z}_p[[G_\infty]]$-modules to the category of modules over the quotient ring $\mathbb{Z}_p[[G_\infty]]^\sharp:=(1-e_{\Delta})\Bbb Z_p[[G_\infty]]$. Further, if $\frak f$ is the unit ideal $\frak e$ and the prime $\frak p$ stays inert in the real Hilbert class field $H_{\frak e}$ over $k$, then for $S = \{\mathfrak{p}\}$ one has an isomorphism of $\Bbb Z_p[[G_\infty]]^\sharp$--modules (see \S3.3 below) \[ \bigtriangledown_S^{(\infty), \sharp}\simeq \frak X_{p}^{(\infty), \sharp}. \] Since these additional hypotheses are obviously satisfied when $k=\Bbb F_q(t)$ (as $H_{\frak e}=k$ in that case), Theorem \ref{EMC-II-intro} above implies the main Iwasawa theoretic result in \cite{Angles} discussed above, for the real Carlitz tower, i.e the maximal subfield of $K_\infty$ where $v_\infty$ splits completely. (See Theorem \ref{limit theorem 3 sharp}.) Under slightly stronger hypotheses, we obtain an isomorphism of $\Bbb Z_p[[G_\infty]]$--modules $\bigtriangledown_S^{(\infty)}\simeq \frak X_{p}^{(\infty)}$ which leads to a full description of ${\rm Fitt}_{\Bbb Z_p[[G_\infty]]}(\frak X_{ p}^{(\infty)})$. (See Theorem \ref{limit theorem 3}.) For an even more detailed comparison of our results with those in \cite{Angles, Bandini,Coscelli} we refer the reader to Remark~\ref{remark-Bandini-Coscelli} below. In order to establish the link between the modules $\bigtriangledown_S^{(\infty)}$ and $\frak X_{p}^{(\infty)}$ mentioned above, we needed to provide slight generalizations of the results in \cite{GPff} on Ritter-Weiss modules and Tate sequences for function fields. This is done in the Appendix.\\ \begin{remark} Unlike in classical Iwasawa theory, all Iwasawa algebras $\Bbb Z_p[[G_\infty\times\Gamma]]$, $\Bbb Z_p[[\Gamma]]$ and $\Bbb Z_p[[G_\infty]]$ relevant in this context are not Noetherian. In particular, one has an isomorphism $$\Bbb Z_p[[G_\infty]]\simeq \Bbb Z_p[t(G_\infty)][[X_1, X_2, \dots]]$$ of topological rings, where the power series ring has countably many variables and $t(G_\infty)$ is the (finite) torsion subgroup of $G_\infty$. Throughout, if $R$ is a commutative ring, the ring of power series in countably many variables with coefficients in $R$ is defined by $$R[[X_1, X_2, \dots]]:=\varprojlim_nR[[X_1, X_2, \dots, X_n]],$$ where the transition maps $R[[X_1, X_2, \dots, X_{n+1}]]\to R[[X_1, X_2, \dots, X_{n}]]$ are the $R$--algebra morphisms sending $X_i\mapsto X_i$, for all $i\leq n$, and $X_{n+1}\mapsto 0$. \end{remark} \section{Class--field theory and geometric Iwasawa towers} \iffalse \subsection{The structure of $U_K^{(1)}$ for a local field $K$ of characteristic $p > 0$}\label{Iwasawa iso} Let $K$ be a local field of characteristic $p > 0$, with exponential valuation $v = v_K$ and residue class field $\kappa$. If $t \in K$ is a prime element in $K$, we can view $K$ as the field of Laurent series $\kappa((t))$. Below, $\mathcal O_K$ is the ring of integers of $K$, $\frak m_K$ is its maximal ideal, $U_K=\mathcal O_K^\times$ and $(U_K^{(m)}=1+\frak m_K^n)_{n\geq 0}$ is the usual filtration of $U_K$. We recall the proof of the following structure theorem, due to Iwasawa. (See also \cite[Satz II.5.7]{Neu99}). \begin{theorem} Let $K$ be a local field of characteristic $p > 0$ and let $q = p^f$ denote the cardinality of the residue class field $\kappa$. 
Then there is an isomorphism of topological groups \[ K^\times \simeq \mathbb{Z} \oplus \mathbb{Z}/ (q-1)\mathbb{Z} \oplus \mathbb{Z}_p^\mathbb{N}. \] \end{theorem} \begin{proof} We fix an $\mathbb{F}_p$-basis $\omega_1, \ldots, \omega_f$ of $\mathbb{F}_q/\mathbb{F}_p$. For $n \in \mathbb{N}$ with $(p, n) = 1$ we consider the homomorphism \[ g_n \colon \mathbb{Z}_p^f \longrightarrow U_K^{(n)}, \quad (a_1, \ldots, a_f) \mapsto \prod_{i=1}^f (1 + \omega_it^n)^{a_i}. \] The function $g_n$ has the following properties. For $m = np^s, s \ge 0$ one has \begin{equation}\label{gn 1} U_K^{(m)} = g_n(p^s\mathbb{Z}_p^f) U_K^{(m+1)}. \end{equation} For $\alpha = (a_1, \ldots, a_f) \in \mathbb{Z}_p^f$ one has \begin{equation}\label{gn 2} \alpha \not\in p\mathbb{Z}_p^f \iff g_n(p^s\alpha) \not\in U_K^{(m+1)}. \end{equation} We now put $A_n := \mathbb{Z}_p^f$ for $n \in \mathbb{N}$ with $(p,n) = 1$ and $A := \prod_{n} A_n$. We consider the continuous homomorphism of $\mathbb{Z}_p$-modules \[ g = \prod_n g_n \colon A \longrightarrow U_K^{(1)}, \quad (\alpha_n)_n \mapsto \prod_n g_n(\alpha_n). \] From (\ref{gn 1}) one easily deduces that $g(A)$ is a dense subset of $U_K^{(1)}$. Since $A$ is compact by Tychonov's theorem , $g$ continuous and $g(A) \subseteq U_K^{(1)}$ dense, we see that $g$ is surjective. Indeed, let $y \in U_K^{(1)}$ and $x_i \in A$ such that $\lim\limits_{i \rightarrow \infty}g(x_i) = y$. Since $A$ is compact, we may without loss of generality assume that $x_i \stackrel{i \rightarrow \infty}\longrightarrow x \in A$. Then $g(x) \stackrel{(*)}= \lim\limits_{i \rightarrow \infty} g(x_i) = y$, where $(*)$ holds because $g$ is continuous. We now prove injectivity. Let $0 \ne \xi = (\ldots, \alpha_n, \ldots ) \in A$ with $\alpha_n \in A_n = \mathbb{Z}_p^f$, $\alpha_n \ne 0$. Then $\alpha_n = p^s\beta_n$ with $s = s(\alpha_n) \ge 0$ and $\beta_n \in \mathbb{Z}_p^f \setminus p\mathbb{Z}_p^f$. From (\ref{gn 1}) and (\ref{gn 2}) we deduce \[ g_n(\alpha_n) \in U_K^{(m)} \text{ and } g_n(\alpha_n) \not\in U_K^{(m+1)} \] where $m = m(\alpha_n) = p^sn$. Obviously, the $m(\alpha_n)$ are pairwise distinct for all $\alpha_n \ne 0$. Let $n$ be defined by $m(\alpha_n) < m(\alpha_{n'})$ for all $n' \ne n$ and $\alpha_{n'} \ne 0$. Then we have for all such $n'$ \[ g_{n'}(\alpha_{n'}) \in U_K^{(m+1)} \text{ where } m := m(\alpha_n) < m(\alpha_{n'}). \] Hence $g(\xi) \equiv g_n(\alpha_n) \not\equiv 1 \pmod{U_K^{(m+1)}}$ and, consequently, $g(\xi) \ne 1$. \end{proof} In the following we determine the subgroup of $A = \prod_{p \nmid n}A_n \simeq \mathbb{Z}_p^\mathbb{N}$ which corresponds to $U_K^{(k)}$ under Iwasawa's isomorphism $g \colon A \longrightarrow U_K^{(1)}$. \begin{definition}\label{def Ak} a) For $k \ge 1$ we write $k = lp^t$ with $p \nmid l$ and set \[ s_k(n) := \max\left\{0, \left[ t + \log\left( \frac{l}{n} \right)\right]\right\}. \] b) We set \[ A^{(k)} := \prod_{p \nmid n} p^{s_k(n)}A_n. \] c) We write $\log_\mathrm{Iw} \colon U_K^{(1)} \longrightarrow A$ for the inverse of $g$ \end{definition} Note that $s_k(n) = \min\{s \ge 0 \colon p^sn \ge k \}$ and $A = A^{(1)}$. \begin{prop} \label{level n iso} $g$ induces an isomorphism $A^{(k)} \simeq U_K^{(k)}$, or in other words, \[ \log_\mathrm{Iw}\left( U_K^{(k)} \right) = A^{(k)}. \] \end{prop} \begin{proof} It suffices to show that $g(A^{(k)})$ is a dense subset of $U_K^{(k)}$. By definition of $s_k(n)$ and (\ref{gn 1}) we have \[ g_n(p^{s_k(n)}A_n) \subseteq U_K^{(p^{s_k(n)}n)} \subseteq U_K^{(k)}, \] and hence $g(A^{(k)}) \subseteq U_K^{(k)}$. 
We obviously have $s_k(l) = t$ and by (\ref{gn 1}) the set $g_l(p^tA_l)$ contains a set of representatives of $U_K^{(k)}$ modulo $U_K^{(k+1)}$. Therefore $g(A^{(k)})$ contains a set of representatives of $U_K^{(k)}$ modulo $U_K^{(k+1)}$. More generally, for each $m \ge k$, $g(A^{(m)})$ contains a set of representatives of $U_K^{(m)}$ modulo $U_K^{(m+1)}$. By its very definition, $s_m(n) \ge s_k(n)$ for all $m \ge k$, and hence $A^{(m)} \subseteq A^{(k)}$. It follows that $g(A^{(k)})$ contains a set of representatives of $U_K^{(m)}$ modulo $U_K^{(m+1)}$ for all $m \ge k$, and this in turn, immediately implies that $g(A^{(k)}) \subseteq U_K^{(k)}$ is dense. \end{proof} \fi \subsection{Sign normalized Drinfeld modules and class field theory}\label{Drinfeld modules and cft} This subsection follows the exposition of Hayes \cite{Hay85}. In particular, we recall without proof the results of \S 4 of loc.cit. Let $k$ be a global function field and let $\mathbb{F}_q$ be its field of constants. For a place (discrete, rank $1$ valuation) $v$ of $k$, we let $k_v$ denote the completion of $k$ in the $v$--adic topology. We let $\mathcal O_{k_v}$, $\mathfrak{m}_{k_v}$, $U_{k_v}$ be the ring of integers in $k_v$, its maximal ideal, and its group of units, respectively. As usual, we let $U_{k_v}^{(n)}:=1+\mathfrak{m}_{k_v}^n$, for all $n\geq 1$. We denote by $d_v$ the degree of $v$ relative to $\Bbb F_q$ and by $\Bbb F_v$ its residue field. By definition, we have $\Bbb F_v=\Bbb F_{q^{d_v}}$. Further, we fix a uniformiser $\pi_v\in k$, for all $v$ as above. For every $v$, we have group isomorphisms $$k_v^\times\simeq \pi_v^{\Bbb Z}\times U_{k_v}, \qquad U_{k_v}\simeq \Bbb F_v^\times\times U_{k_v}^{(1)}.$$ Now, we fix once and for all a place $v_\infty$ of $k$ (called the place at infinity) and let $A$ be the Dedekind ring of elements of $k$ that are integral outside $v_\infty$. (In \cite{Hay85} our $A$ is denoted by $A_\infty$.) Note that the places $v$ of $k$, which are different from $v_\infty$, are in one--to--one correspondence with the maximal ideals of $A$. For such a maximal ideal $\frak p$, we denote by $v_{\frak p}$ the corresponding place of $k$, viewed as a rank one, discrete valuation of $k$, normalized so that $v_{\frak p}(k^\times)=\Bbb Z$. In order to simplify notations, we let $k_\infty=k_{v_\infty}$, $\pi_\infty:=\pi_{v_\infty}$, $\Bbb F_\infty:=\Bbb F_{v_\infty}$, $d_\infty:=d_{v_\infty}$, etc. The same notation principle applies to the finite places $v_\frak p$, namely $k_{\frak p}:=k_{v_{\frak p}}$, etc. Finally, for any place $v$, we let ${\rm ord}_v: k_v^\times\to \Bbb Z$ be its associated valuation, normalized so that ${\rm ord}_v(\pi_v)=1$. \\ Let $I_A$ denote the group of fractional ideals of $A$ and let $P_A$ be the subgroup of principal ideals. Then $\mathrm{Pic}(A) = I_A / P_A$ and we write $h_A := |\mathrm{Pic}(A)|$ for the class number of $A$. We write $D_k$ for the group of divisors of $k$ and $D_k^0$ for the subgroup of divisors of degree zero. Let $\mathrm{div} \colon k^\times \longrightarrow D_k^0$ be the divisor map and set $\mathrm{Pic}^0(k) := D_k^0/\mathrm{div}(k^\times)$. We write $h_k := |\mathrm{Pic}^0(k)|$. Recall that we have an exact sequence $$0\to {\rm Pic}^0(k)\to{\rm Pic}(A)\overset{\widehat{\rm deg}}\longrightarrow \Bbb Z/d_\infty\to 0,$$ where $\widehat{\rm deg}$ is the degree modulo $d_\infty$ map. Consequently, we have an equality $h_A = h_kd_\infty$. \begin{definition} As in \cite{Hay85}, we define the following. 
\begin{enumerate} \item A finite, Galois extension $K/k$ is called real (relative to $v_\infty$) if $v_\infty$ splits completely in $K/k$, or, equivalently, if there exists a $k$--embedding $K\hookrightarrow k_\infty$. \item For an integral ideal $\frak m\subseteq A$, $H_{\frak m}$ denotes the real ray--class field of $k$ of conductor $\frak m$. \item If $\frak m=\frak e:=A$ is the unit ideal, then we call $H_{\frak e}$ the real Hilbert class field of $k$. \end{enumerate} \end{definition} Next, we give the id\`ele theoretic description of the class fields $H_\mathfrak{m}$, as in \cite{Hay85}. For that, let $J_k$ denote the group of id\`eles of $k$ and consider the following subgroups of $J_k$. $$U(\mathfrak{m}):= k_\infty^\times\times\prod_{\frak p\mid \frak m}U_{\frak p}^{(v_{\frak p}(\frak m))}\times\prod_{\frak p\nmid\frak m\infty}U_{\frak p},\qquad J_{\frak m}:=k^\times\cdot U(\frak m).$$ The following is proved in \cite{Hay85}. \begin{prop}\label{cft-hm-prop} For all $\frak m$ as above, the Artin reciprocity map gives group isomorphisms $$ J_k/J_{\frak m}\simeq G(H_{\frak m}/k), \qquad J_k/J_{\frak e}\simeq G(H_{\frak e}/H), \qquad J_{\frak e}/J_{\frak m}\simeq G(H_{\frak m}/H_{\frak e}). $$ Further, if $\frak m\ne \frak e$, we have canonical group isomorphisms $$ J_k/J_{\frak e}\simeq {\rm Pic}(A), \qquad J_{\frak e}/J_{\frak m}\simeq (A/\frak m)^\times/\Bbb F_q^\times.$$ \end{prop} The real ray--class fields $H_{\frak m}$ are contained in slightly larger abelian extensions $H_{\frak m}^\ast/k$, of conductor $\frak m\cdot v_\infty$, tamely ramified at $v_\infty$. The advantage of passing from $H_{\frak m}$ to $H_{\frak m}^\ast$ is that the latter can be explicitly constructed by adjoining torsion points of certain rank $1$ $A$--Drinfeld modules to the field of definition $H_{\frak e}^\ast$ of these Drinfeld modules. Next, we give the id\`ele theoretic description and explicit construction of the fields $H_{\frak m}^\ast$, both due to Hayes \cite{Hay85}.\\ As in \cite{Hay85}, let us fix a sign function $${\rm sgn}: k_\infty^\times\to \Bbb F_\infty^\times.$$ By definition, this is a group morphism, such that ${\rm sgn}(U_{\infty}^{(1)})=1$ and ${\rm sgn}\vert_{\Bbb F_\infty^\times}={\rm id}_{\Bbb F_\infty^\times}.$ Note that ${\rm sgn}$ is uniquely determined by the value ${\rm sgn}(\pi_\infty)$ at the fixed uniformiser $\pi_\infty$.\\ For every integral ideal $\frak m\subseteq A$, we define the following subgroups of $J_k$. $$U^*(\mathfrak{m}) := \{\left(\alpha_v\right)_v \in U(\frak m) \mid\, {\rm sgn}(\alpha_\infty)=1\},\quad J_{\frak m}^\ast:=k^\times\cdot U^*(\frak m).$$ \begin{definition} For all integral ideals $\mathfrak{m} \subseteq A$, we define $H_{\frak m}^\ast$ to be the unique abelian extension of $k$ which corresponds to the subgroup $J_{\frak m}^\ast$ of $J_k$ via the standard class--field theoretic correspondence. \end{definition} For all $\frak m$ as above, with $\frak m\ne \frak e$, we have a canonical commutative diagram of group morphisms with exact rows and columns. 
\begin{equation*} \xymatrix{0\ar[r] & \Bbb F_q^\times\ar[r]\ar@{^{(}->}[d] & \Bbb F_\infty^\times\ar[r]\ar@{^{(}->}[d] & \Bbb F_\infty^\times/\Bbb F_q^\times\ar[r]\ar@{^{(}->}[d] &0\\ 0\ar[r] & (A/\frak m)^\times \ar[r]\ar@{>>}[d] & J_k/J_{\frak m}^\ast\ar[r]\ar@{>>}[d] & J_k/J_{\frak e}^\ast\ar[r]\ar@{>>}[d] &0\\ 0\ar[r] & (A/\frak m)^\times/\Bbb F_q^\times \ar[r] & J_k/J_{\frak m}\ar[r] & J_k/J_{\frak e}\ar[r] &0} \end{equation*}\\\\ As a consequence, we have the following diagram of abelian extensions of $k$, whose relative Galois groups are canonically isomorphic to the labels on the connecting line segments. \begin{equation}\label{field-diagram-1} \xymatrix{ & & H_\mathfrak{m}^\ast \\ & H_\mathfrak{e}^*H_\mathfrak{m} \ar@{-}[ur]^{\Bbb F_q^\times} & \\ H_\mathfrak{e}^* \ar@{.}@/^2.0pc/[uurr]^{(A/\mathfrak{m})^\times} \ar@{-}[ur] \ar@{-}[dd]_{\Bbb F_\infty^\times/\Bbb F_q^\times} & & \\ & H_\mathfrak{m} \ar@{-}[uu]\ar@{.}[uuur]_{\Bbb F_\infty^\times}& \\ H_\mathfrak{e} \ar@{-}[ur]_{\frac{(A/\frak m)^\times}{\Bbb F_q^\times}} & & \\ k \ar@{-}[u]^{{\rm Pic}(A)} && } \end{equation}\\ The extensions $H_\mathfrak{e}^\ast/H_\mathfrak{e}$ and $H_\mathfrak{m}^\ast/H_\mathfrak{m}$ are totally and tamely ramified at the primes above $v_\infty$.\\ Next, we describe the explicit Drinfeld modular construction of the fields $H_\mathfrak{e}^\ast$ and $H_\mathfrak{m}^\ast$, for all $\mathfrak{m}\ne \mathfrak{e}$. Let $\Bbb C_\infty$ be the $v_\infty$--completion of the algebraic closure of $k_\infty$. Let $\Bbb C_\infty\{\tau\}$ be the non-commutative ring of twisted polynomials with the rule $\tau\omega = \omega^q\tau$, for $\omega \in \Bbb C_\infty$. We write \[ D \colon \Bbb C_\infty\{\tau\} \longrightarrow \Bbb C_\infty, \quad a_0\tau^0 + a_1\tau^1 + \ldots + a_d\tau^d \mapsto a_0, \] for the constant term map. \begin{definition}[Hayes] A map $\rho \colon A \longrightarrow \Bbb C_\infty\{\tau\}$, $x \mapsto \rho_x$, is called a sgn--normalized Drinfeld module of rank one if the following are satisfied. \begin{itemize} \item [(a)] $\rho$ is an $\mathbb{F}_q$-algebra homomorphism. \item [(b)] $\deg_\tau(\rho_x) = \deg(x):={\rm dim}_{\Bbb F_q}(A/xA)=-{\rm ord}_{v_\infty}(x)d_\infty$, for all $x\in A$. \item [(c)] The map $A \longrightarrow \Bbb C_\infty$, $x \mapsto D(\rho_x)$, is the inclusion $A \subseteq \Bbb C_\infty$. \item [(d)] If $s_\rho(x)$ denotes the leading coefficient of $\rho_x \in \Bbb C_\infty\{\tau\}$, then $s_\rho(x) \in \mathbb{F}_\infty^\times$, for all $x \in A$. \item[(e)] If one extends $s_\rho$ to $k_\infty^\times$ by sending \[ x = \sum_{i=i_0}^\infty a_i \pi_\infty^i \mapsto a_{i_0}s_\rho(\pi_\infty)^{i_0}, \] if $a_{i_0}\ne 0$, then $s_\rho \colon k_\infty^\times \longrightarrow \mathbb{F}_\infty^\times$ is a twist of ${\rm sgn}$, i.e there exists $\sigma \in \mathrm{G}(\mathbb{F}_\infty/\mathbb{F}_q)$, such that $s_\rho = \sigma \circ \mathrm{sgn}$. \end{itemize} \end{definition} Any Drinfeld module as above endows $(\Bbb C_\infty, +)$ with an $A$--module structure given by $$x\ast z:=\rho_x(z), \qquad \text{ for all } x\in A,\, z\in\Bbb C_\infty,$$ where $(\sum_i a_i\tau^i)(z)=\sum_i a_iz^{q^i}$. \begin{definition} Let $\rho:A\to\Bbb C_\infty\{\tau\}$ be a rank $1$, sgn--normalized Drinfeld module as above. \begin{itemize} \item[(a)] The minimal field of definition $k_\rho$ of $\rho$ is the extension of $k$ inside $\Bbb C_\infty$ generated by the coefficients of the twisted polynomials $\rho_x$, for all $x\in A$. 
\item[(b)] For all integral ideals $\mathfrak{m}\subseteq A$ with $\mathfrak{m}\ne\mathfrak{e}$, we let $$\rho[\mathfrak{m}] := \{ \alpha \in \Bbb C_\infty\mid \rho_x(\alpha) = 0 \text{ for all } x \in \mathfrak{m}\}$$ denote the $A$--module of $\mathfrak{m}$--torsion points of $\rho$. \end{itemize} \end{definition} The following gives an explicit construction of the class--fields $H^\ast_\mathfrak{m}$. (See \cite[\S4]{Hay85} for proofs.) \begin{prop}[Hayes \cite{Hay85}] Let $\rho$ be a rank $1$, sgn--normalized Drinfeld module as above. Then, the following hold, for all ideals $\mathfrak{m}\subseteq A$, with $\mathfrak{m}\ne\mathfrak{e}$. \begin{enumerate} \item The minimal field of definition $k_\rho$ of $\rho$ equals $H_\mathfrak{e}^\ast$. \item We have an equality $H_\mathfrak{m}^\ast=H_\mathfrak{e}^\ast(\rho[\mathfrak{m}])$. \item The $A/\mathfrak{m}$--module $\rho[\mathfrak{m}]$ is free of rank $1$ and, via the canonical isomorphism $$(A/\mathfrak{m})^\times\simeq G(H_\mathfrak{m}^\ast/H_\mathfrak{e}^\ast), \qquad \widehat x\to\sigma_{\widehat x},$$ we have $\sigma_{\widehat x}(\alpha)=s_\rho(x)^{-1}\cdot\rho_x(\alpha)$, for all $x\in A$ coprime to $\mathfrak{m}$ and all $\alpha\in\rho[\mathfrak{m}]$. \end{enumerate} \end{prop} \begin{remark} The proposition above should be viewed as the function field analogue of the theory of complex multiplication for quadratic imaginary fields, where the role of $\rho$ is played by an elliptic curve with CM by the ring of integers of a quadratic imaginary field $k$. \end{remark} \iffalse We set $W_k := |\mathbb{F}_q^\times| = q-1$ and $W_\infty := |\mathbb{F}_\infty^\times| = q^{d_\infty}-1$. For any field $K$ we let $\mu(K)$ denote the group of roots of unity in $K$. The following holds. \begin{lemma}\label{roots of 1} a) If $K/k$ is a finite, real extension, then $|\mu(K)| \le W_\infty$. b) For any integral ideal $\mathfrak{m} \sseq A$ one has $|\mu(H_\mathfrak{m})| = W_\infty$. \end{lemma} \iffalse \begin{proof} If $K$ is real, then $K \subseteq k_\infty = \mathbb{F}_\infty((\pi_\infty))$ and the only roots of unity in $ \mathbb{F}_\infty((\pi_\infty))$ are the elements in $\mathbb{F}_\infty^\times$. This shows a). For the proof of b) it suffices to note that $k \subseteq \mathbb{F}_\infty k \subseteq H_\mathfrak{e}$ since constant field extensions are unramified everywhere and $\infty$ splits completely in $\mathbb{F}_\infty k / k$. \end{proof} \fi Let now $\Omega$ be the completion of an algebraic closure of $k_\infty$. We fix a sign function $\mathrm{sgn} \colon k_\infty^\times \longrightarrow \mathbb{F}_\infty^\times$. Then, by definition, the composite map \[ \mathbb{F}_\infty^\times \stackrel\subseteq\longrightarrow k_\infty^\times \stackrel{\mathrm{sgn}}\longrightarrow \mathbb{F}_\infty^\times \] is the identity map and $\mathrm{sgn}(U_{k_\infty}^{(1)})=1$. We write $\Omega\{\tau\}$ for the non-commutative ring of twisted polynomials with the rule $\tau\omega = \omega^q\tau$ for $\omega \in \Omega$. We also write \[ D \colon \Omega\{\tau\} \longrightarrow \Omega, \quad a_0\tau^0 + a_1\tau^1 + \ldots + a_d\tau^d \mapsto a_0, \] for the constant term map. Now, let $\rho \colon A \longrightarrow \Omega\{\tau\}$, $x \mapsto \rho_x$, be a sign-normalized Drinfeld module of rank one. Then, by definition, we have the following properties (see loc.cit.). \begin{itemize} \item [(a)] $\rho$ is an $\mathbb{F}_q$-algebra homomorphism. \item [(b)] $\deg_\tau(\rho_x) = \deg(x)$ for all $x \in A$. 
Here $\deg_\tau$ is the polynomial degree with respect to $\tau$ and $\deg(x) = -v_\infty(x)d_\infty$. \item [(c)] The map $A \longrightarrow \Omega$, $x \mapsto D(\rho_x)$, is the inclusion $A \subseteq \Omega$. \item [(d)] If $s_\rho(x)$ denotes the leading coefficient of $\rho_x \in \Omega\{\tau\}$, then $s_\rho(x) \in \mathbb{F}_\infty^\times$ for all $x \in A$. Actually, one can extend $s_\rho$ to $k_\infty^\times$ by \[ x = \sum_{i=i_0}^\infty a_i \pi_\infty^i \mapsto a_{i_0}s_\rho(\pi_\infty)^{i_0}. \] Then $s_\rho \colon k_\infty^\times \longrightarrow \mathbb{F}_\infty^\times$ is a twisting of $\mathrm{sgn}$, i.e., there exists $\sigma \in \mathrm{G}(\mathbb{F}_\infty/\mathbb{F}_q)$ such that $s_\rho = \sigma \circ \mathrm{sgn}$. \end{itemize} \begin{remark} If we set $\deg(x) := -v_\infty(x)d_\infty$ as above, then $|A / xA| = q^{deg(x)}$. Indeed, if $\mathrm{div}(x) = v_\infty(x)d_\infty \cdot \infty + \sum_{\mathfrak{p} \ne \infty}v_\mathfrak{p}(x)d_\mathfrak{p} \cdot \mathfrak{p}$, then we have an equality of ideals in $A$ \[ xA = \prod_{\mathfrak{p} \ne \infty} \mathfrak{p}^{v_\mathfrak{p}(x)}. \] It follows that \[ |A/xA| = \prod_{\mathfrak{p} \ne \infty} | A / \mathfrak{p} |^{v_\mathfrak{p}(x)} = \prod_{\mathfrak{p} \ne \infty} q^{d_\mathfrak{p} v_\mathfrak{p}(x)} = q^{ \sum_{\mathfrak{p} \ne \infty}d_\mathfrak{p} v_\mathfrak{p}(x)} = q^{ -d_\infty v_\infty(x)}. \] \end{remark} Let now $H_\mathfrak{e}^*$ be the extension of $k$ generated by the coefficients of all $\rho_x, x \in A$. Then $H_\mathfrak{e}^*$ is independent of the sign-normalized Drinfeld module $\rho$. It is called the normalizing field and every sign-normalized rank one Drinfeld module is defined over $H_\mathfrak{e}^*$, i.e., $\rho \colon A \longrightarrow H_\mathfrak{e}^*\{\tau\}$. We recall the class field theoretic description of $H_\mathfrak{e}^*$. To that end, let $J_k$ be the id\`ele group of $k$. Then, via the correspondence of global class field theory, we have \[ H_\mathfrak{e} \leftrightarrow J_\mathfrak{e} := k^\times \left( \prod_{\mathfrak{p} \ne \infty}U_{k_\mathfrak{p}} \times k_\infty^\times \right), \quad H_\mathfrak{e}^* \leftrightarrow J_\mathfrak{e}^* := k^\times \left( \prod_{\mathfrak{p} \ne \infty}U_{k_\mathfrak{p}} \times \ker(\mathrm{sgn}) \right). \] Consequently, the global Artin reciprocity map induces group isomorphisms \[ \mathrm{G}(H_\mathfrak{e}/k) \simeq J_k/J_\mathfrak{e}, \quad \mathrm{G}(H_\mathfrak{e}^*/k) \simeq J_k/J_\mathfrak{e}^*, \quad \mathrm{G}(H_\mathfrak{e}^*/H_\mathfrak{e}) \simeq J_\mathfrak{e}/J_\mathfrak{e}^*. \] It is easy to see that the canonical inclusion $k_\infty^\times \hookrightarrow J_\mathfrak{e}$ induces an isomorphism \[ k_\infty^\times / \mathbb{F}_q^\times \ker(\mathrm{sgn}) \simeq J_\mathfrak{e} / J_\mathfrak{e}^*. \] Furthermore, $\mathrm{sgn}$ induces group isomorphisms $$k_\infty^\times /\ker(\mathrm{sgn}) \simeq \mathbb{F}_\infty^\times, \qquad k_\infty^\times / \mathbb{F}_q^\times\ker(\mathrm{sgn}) \simeq \mathbb{F}_\infty^\times/\mathbb{F}_q^\times.$$ Hence $H_\mathfrak{e}^\ast/H_\mathfrak{e}$ is totally and tamely ramified at $\infty$ and we have equalities \[ [H_\mathfrak{e}^* : H_\mathfrak{e}] = r := W_\infty / W_k = \frac{q^{d_\infty}-1}{q-1}. \]\\\\ Now, we let $A$ act on $\Omega$ via $\rho$, i.e. \[ x.\alpha := \rho_x(\alpha) = a_0\alpha + a_1\alpha^q + \ldots + a_d\alpha^{q^d}, \] if $\rho_x(\tau) = a_0\tau^0 + \ldots + a_d\tau^d$. We write $\Omega_\rho = \Omega$ if we endow $\Omega$ with this $A$-action. 
For an ideal $\mathfrak{m} \sseq A$, with $\mathfrak{m}\ne\mathfrak{e}$, we write \[ \Lambda_\mathfrak{m} := \{ \alpha \in \Omega_\rho \mid \rho_x(\alpha) = 0 \text{ for all } x \in \mathfrak{m}\} \] for the $\mathfrak{m}$-torsion points of $\Omega_\rho$ and set \[ K_\mathfrak{m} := H_\mathfrak{e}^*(\Lambda_\mathfrak{m}). \] Note that $\Lambda_\mathfrak{m} \simeq A/\mathfrak{m}$ as $A$--modules and that $\mathrm{G}(K_m/H_\mathfrak{e}^*)$ acts on $\Lambda_\mathfrak{m}$ in an $A$--linear manner. This action induces an isomorphism of groups \[ \mathrm{G}(K_\mathfrak{m} / H_\mathfrak{e}^*) \simeq \mathrm{Aut}_A(\Lambda_\mathfrak{m}) = \left( A / \mathfrak{m} \right)^\times \] whose inverse is given by \[ \left( A / \mathfrak{m} \right)^\times \longrightarrow \mathrm{G}(K_\mathfrak{m} / H_\mathfrak{e}^*), \quad \bar x \mapsto \sigma_x, \] where $\sigma_x(\lambda) := \rho_x(\lambda)$ for all $\lambda \in \Lambda_\mathfrak{m}$.\\\\ To describe the subgroup of $J_k$ corresponding to $K_\mathfrak{m}$ via global class field theory we define \begin{eqnarray*} U(\mathfrak{m}) &:=& \{ \alpha = \left(\alpha_\mathfrak{p}\right) \in J_k \mid v_\mathfrak{p}(\alpha_\mathfrak{p} - 1) \ge v_\mathfrak{p}(\mathfrak{m}) \text{ for all } \mathfrak{p}\ne\infty\}, \\ U^*(\mathfrak{m}) &:=& \{ \alpha = \left(\alpha_\mathfrak{p}\right) \in J_k \mid v_\mathfrak{p}(\alpha_\mathfrak{p} - 1) \ge v_\mathfrak{p}(\mathfrak{m}) \text{ for all } \mathfrak{p}\ne\infty \text{ and } \mathrm{sgn}(\alpha_\infty) = 1 \}. \end{eqnarray*} Hayes proves in loc.cit that class field theory establishes correspondences $$ H_{\frak m}\leftrightarrow J_\mathfrak{m} := k^\times \cdot U(\mathfrak{m}), \qquad K_{\frak m}\leftrightarrow J_\mathfrak{m}^* := k^\times \cdot U^*(\mathfrak{m}). $$ One can show that \[ \left| J_\mathfrak{e}^* / J_\mathfrak{m}^* \right| = \Phi(\mathfrak{m}), \] where $\Phi(\mathfrak{m}) := | (A/\mathfrak{m})^\times|$ is the Euler phi function of the ring $A$. \begin{remark} a) If $\mathfrak{m} = \mathfrak{p}^e\mathfrak{b}, \mathfrak{p} \nmid \mathfrak{b}$, then the ramification subgroup of $\mathfrak{p}$ is isomorphic to $(A/\mathfrak{p}^e)^\times$ where we view $(A/\mathfrak{p}^e)^\times$ as a subgroup of $\mathrm{G}(K_\mathfrak{m}/H_\mathfrak{e}^*)$ via the canonical isomorphism \[ (A/\mathfrak{p}^e)^\times \times (A/\mathfrak{b})^\times \simeq (A/\mathfrak{p}^e\mathfrak{b})^\times \simeq \mathrm{G}(K_\mathfrak{m}/H_\mathfrak{e}^*). \] b) $\infty$ is ramified in $K_\mathfrak{m}/k$ with ramification index $W_\infty = q^{d_\infty}-1$. We also know that $\infty$ is totally ramified in $H_\mathfrak{e}^*/H_\mathfrak{e}$. Since $[H_\mathfrak{e}^* : H_\mathfrak{e}] = r = W_\infty/W_k $ we deduce that $\infty$ is ramified in $K_\mathfrak{m}/H_\mathfrak{e}^*$ with ramification degree $W_k = q-1$. \end{remark} Now, let us define \[ V_\mathfrak{m} := \{ x \in k^\times \mid v_\mathfrak{p}(x-1) \ge v_\mathfrak{p}(\mathfrak{m}) \text{ for all } \mathfrak{p} \mid \mathfrak{m}\}, \quad G_\infty^* := \{\sigma_x \mid x \in V_\mathfrak{m} \} \le G_\mathfrak{m}^* := \mathrm{G}(K_\mathfrak{m}/k), \] where $\sigma_x := (x, K_\mathfrak{m}/k)$ is the Artin symbol. Then $G_\infty^*$ is cyclic of order $W_\infty$ and, moreover, $G_\infty^*$ is both the decomposition and ramification subgroup at $\infty$ in $K_\mathfrak{m}/k$. Obviously, we have $K_\mathfrak{m}^{G_\infty^*} = H_\mathfrak{m}$. Hence we have the following diagram of abelian extensions of $k$. 
\textcolor{red}{Probably we should only keep the diagram of fields after introducing the relevant notation. All of this is essentially in Hayes, I guess?} \begin{equation*} \xymatrix{ & & K_\mathfrak{m} = H_\mathfrak{e}^*(\Lambda_\mathfrak{m}) \\ & H_\mathfrak{e}^*H_\mathfrak{m} \ar@{-}[ur]^{W_k} & \\ H_\mathfrak{e}^* \ar@{-}[ur] \ar@{-}[dd]_{\frac{W_\infty}{W_k}} & & \\ & H_\mathfrak{m}= K_\mathfrak{m}^{G_\infty^*} \ar@{-}[uu] & \\ H_\mathfrak{e} \ar@{-}[ur]_{\frac{\Phi(\mathfrak{m})}{W_k}} & & \\ k \ar@{-}[u]^{hd_\infty} && } \end{equation*}\\\\ To conclude this section, we describe the action of $G_\mathfrak{m}^* := \mathrm{G}(K_\mathfrak{m}/k)$ on $\mathfrak{m}$-torsion points $\lambda \in \Lambda_\mathfrak{m}$. Let $\mathfrak{a} \sseq A$ be an integral ideal with $(\mathfrak{a}, \mathfrak{m}) = 1$ and let $\sigma_\mathfrak{a} = (\mathfrak{a}, K_\mathfrak{m}/k)$ denote the Artin automorphism associated with $\mathfrak{a}$. Let $\rho_\mathfrak{a} \in \Omega\{\tau\}$ be the monic generator of the left $\Omega\{\tau\}$-ideal which is generated by the set $\{\rho_x \colon x \in \mathfrak{a}\}$. Then $\rho_\mathfrak{a}$ is the unique isogeny from $\rho$ to $\sigma_\mathfrak{a}\rho = \mathfrak{a} * \rho$, i.e., \[ \rho_\mathfrak{a} \circ \rho = \left( \sigma_\mathfrak{a}\rho \right) \circ \rho_\mathfrak{a}. \] By \cite[Th.~4.12]{Hay85} we have for all $\lambda \in \Lambda_\mathfrak{m}$ \[ \sigma_\mathfrak{a}(\lambda) = \rho_\mathfrak{a}(\lambda). \] For future reference we also recall from \cite[Lemma 7.4.5]{GossBook} that $\rho$ is defined over $\mathcal{O}_{H_\mathfrak{e}^*}$, i.e., for all $x \in A$ we have $\rho_x \in \mathcal{O}_{H_\mathfrak{e}^*}\{\tau\}$. As a consequence, one easily derives the following. \begin{lemma} For any integral ideal $\mathfrak{a} \sseq A$ one has $\rho_\mathfrak{a} \in \mathcal{O}_{H_\mathfrak{e}^*}\{\tau\}$. \end{lemma} Finally, we recall the ideal theoretic description of the abelian extensions $H_\mathfrak{e}, H_\mathfrak{e}^*, K_\mathfrak{m}$ and $H_\mathfrak{m}$. References here are \cite{Hay92} or \cite[Sec.~7.3 - 7.5]{GossBook}. As before, we write $I_A$ for the group of fractional ideals and now define for an integral ideal $\mathfrak{m}$ \begin{eqnarray*} P_A(\mathfrak{m}) &:=& \{ xA \colon x \in k^\times, v_\mathfrak{p}(x-1) \ge v_\mathfrak{p}(\mathfrak{m}) \text{ for all } \mathfrak{p}\mid\mathfrak{m} \}, \\ P_A^+(\mathfrak{m}) &:=& \{ xA \colon x \in k^\times, \mathrm{sgn}(x) = 1, v_\mathfrak{p}(x-1) \ge v_\mathfrak{p}(\mathfrak{m}) \text{ for all } \mathfrak{p}\mid\mathfrak{m} \}. \end{eqnarray*} Then we define the raly class group and narrow ray class group mod $\mathfrak{m}$ by \[ \mathrm{Pic}_A(\mathfrak{m}) := I_A(\mathfrak{m}) / P_A(\mathfrak{m}), \quad \mathrm{Pic}_A^+(\mathfrak{m}) := I_A(\mathfrak{m}) / P_A^+(\mathfrak{m}), \] respectively. 
The Artin reciprocity map induces group isomorphisms \[ \mathrm{Pic}_A(\mathfrak{m}) \simeq \mathrm{G}(H_\mathfrak{m} / k), \quad \mathrm{Pic}_A^+(\mathfrak{m}) \simeq \mathrm{G}(K_\mathfrak{m} / k),\] and it is clear from the definitions that the following diagram commutes \begin{equation*} \xymatrix{ 0 \ar[r] & P_A(\mathfrak{m})/P_A^+(\mathfrak{m}) \ar[r] \ar[d] & \mathrm{Pic}_A^+(\mathfrak{m}) \ar[r] \ar[d] & \mathrm{Pic}_A(\mathfrak{m}) \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \mathrm{G}(K_\mathfrak{m}/H_\mathfrak{m}) \ar[r] & \mathrm{G}(K_\mathfrak{m}/k) \ar[r] & \mathrm{G}(H_\mathfrak{m}/k) \ar[r] & 0 \\ } \end{equation*} It follows that we have equalities \[ \left| P_A(\mathfrak{m}) / P_A^+(\mathfrak{m}) \right| = \begin{cases} W_\infty / W_k, & \text{ if } \mathfrak{m} = \mathfrak{e}, \\ W_\infty, & \text{ if } \mathfrak{m} \ne \mathfrak{e}. \end{cases} \] \fi \subsection{$\mathbb{Z}_p^{\aleph_0}$--extensions (Geometric Iwasawa towers)}\label{section-geometric-Iwasawa-towers} We continue to use the notation of Subsection \ref{Drinfeld modules and cft}. We fix a prime ideal $\mathfrak{p}$ of $A$ and an integral ideal $\mathfrak{f} \sseq A$ which is coprime to $\mathfrak{p}$. For all $n \ge 0$, we consider the following abelian extensions of $k$, viewed as subfields of $\Bbb C_\infty$: \[ L_n := H_{\mathfrak{f}\mathfrak{p}^{n+1}}, \quad L_n^\ast := H^\ast_{\mathfrak{f}\mathfrak{p}^{n+1}} \] and set \[ L_\infty := \bigcup_{n \ge 0} L_n, \quad L_\infty^\ast := \bigcup_{n \ge 0} L_n^\ast. \] For all $n\geq 0$, we let $G_n:=G(L_n/k)$, $\Gamma_n:=G(L_n/L_0)$. Also, we let $G_\infty:=G(L_\infty/k)$, $\Gamma_\infty:=G(L_\infty/L_0)$. The results in the previous section show that we have the following commutative diagrams of abelian groups with exact rows and canonical vertical isomorphisms.\\ \begin{equation*} \xymatrix{ 0 \ar[r] & U_{k_\mathfrak{p}}^{(1)}/U_{k_\mathfrak{p}}^{(n+1)} \ar[r] \ar[d]^\wr & (A/\mathfrak{f}\mathfrak{p}^{n+1})^\times/\Bbb F_q^\times \ar[r] \ar[d]^\wr &(A/\mathfrak{f}\mathfrak{p})^\times/\Bbb F_q^\times \ar[r] \ar[d]^\wr & 0 \\ 0 \ar[r] & \Gamma_n \ar[r] & G(L_n/H_{\frak e})\ar[r] & \mathrm{G}(L_0/H_\mathfrak{e})\ar[r] & 0 } \end{equation*}\\ Consequently, for all $n\geq 0$, we obtain the following diagram of field extensions whose relative Galois groups are canonically isomorphic to the labels of the connecting line segments.\\ \begin{equation}\label{field-diagram-2} \xymatrix{ & & H^\ast_{\mathfrak{f}\mathfrak{p}^{n+1}} \\ & H_{\mathfrak{f}\mathfrak{p}}^\ast \ar@{-}[ur]^{U_{k_\mathfrak{p}}^{(1)}/U_{k_\mathfrak{p}}^{(n+1)}} & \\ & H_\mathfrak{e}^* H_{\mathfrak{f}\mathfrak{p}} \ar@{-}[u]^{\Bbb F_q^\times} & H_{\mathfrak{f}\mathfrak{p}^{n+1}} \ar@{-}[uu]_{\Bbb F_\infty^\times} \\ H_\mathfrak{e}^* \ar@{-}[ur] & H_{\mathfrak{f}\mathfrak{p}} \ar@{-}[ur]_{U_{k_\mathfrak{p}}^{(1)}/U_{k_\mathfrak{p}}^{(n+1)}} \ar@{-}[u] & \\ H_\mathfrak{e} \ar@{-}[ur]_{\frac{(A/\mathfrak{f}\mathfrak{p})^\times}{\Bbb F_q^\times}} \ar@{-}[u]^{\Bbb F_\infty^\times/\Bbb F_q^\times} & & \\ k \ar@{-}[u]^{{\rm Pic}(A)} \ar@{-}@{.}@/_3pc/[uuurr]_{G_n} \ar@{-}@{.}@/_2pc/[uur]_{G_0}& & } \end{equation}\\ Further, we obtain topological group isomorphisms \[ \Gamma_\infty := \mathrm{G}(L_\infty/L_0) \simeq \mathrm{G}(L_\infty^\ast/L_0^\ast) \simeq \varprojlim_n \frac{U_{k_\mathfrak{p}}^{(1)}}{U_{k_\mathfrak{p}}^{(n+1)}} \simeq U_{k_\mathfrak{p}}^{(1)}. \] Now, we recall the following structure theorem, due to Iwasawa. (See also \cite[Satz II.5.7]{Neu99}). 
\begin{theorem}[{Iwasawa \cite{Iw86}}] Let $K$ be a local field of characteristic $p > 0$ and let $U_K^{(1)}$ denote its group of principal units. Then, there is an isomorphism of topological groups \[ U_K^{(1)}\simeq \mathbb{Z}_p^{\aleph_0}, \] where the right side denotes a direct product of countably many copies of $(\Bbb Z_p, +)$, endowed with the product of the $p$--adic topologies. \end{theorem} As a consequence, we have an isomorphism of topological groups \begin{equation}\label{Gamma-infinity-isomorphism}\Gamma_\infty:=G(L_\infty/L_0)\simeq \Bbb Z_p^{\aleph_0}.\end{equation}\\ The following result gives a description of the Iwasawa algebras relevant to our considerations below. \begin{prop}\label{big-Iwasawa-algebra-prop} Let $(\mathcal O, \frak m_{\mathcal O})$ be a local, compact $\Bbb Z_p$--algebra, which is $\frak m_{\mathcal O}$--adically complete. If $\mathcal G$ is an abelian pro--$p$ group, topologically isomorphic to $\Bbb Z_p^{\aleph_0}$, then the following hold. \begin{enumerate} \item There is an isomorphism of topological $\mathcal O$--algebras $$\mathcal O[[\mathcal G]]\simeq \mathcal O[[X_1, X_2, \dots]],$$ where the left side is endowed with the profinite limit topology and the right side with the projective limit of the $(\frak m_{\mathcal O}, X_1, \dots, X_n)$--adic topologies on each $\mathcal O[[X_1, \dots, X_n]]$, as $n\to\infty$. \item If $\mathcal O$ is an integral domain, then $\mathcal O[[\mathcal G]]$ is a local, integral domain. \item If $\mathcal O$ is a PID, then $\mathcal O[[\mathcal G]]$ is a UFD and, therefore, normal. \end{enumerate} \end{prop} \begin{proof}{\it (Sketch.)} (1) Use induction on $n$ and the Weierstrass Preparation Theorem (see Thm. 2.1 in \cite[Ch.5, \S2]{Lang-cyclo}) to show that one has an isomorphism of topological $\mathcal O$--algebras $$\mathcal O[[\Bbb Z_p^n]]\simeq \mathcal O[[X_1, \dots, X_n]].$$ Then, pass to a projective limit with respect to $n$ to get the desired isomorphism. (2) This is Lemma 1 in \cite{Nishimura-I}. Note that with the notations and definitions of loc.~cit.\ we have $\mathcal O[[X_1, X_2, \dots]]=\mathcal O\{X\}_{\aleph_0}$, where $X$ is a set of cardinality $\aleph_0$. (3) This is Theorem 1 in \cite{Nishimura-I}. See the note above regarding the notations in loc.~cit. \end{proof} \begin{remark} Typical examples of $\Bbb Z_p$--algebras $\mathcal O$ as in the Proposition above are rings of integers $\mathcal O_F$ in finite extensions $F$ of $\Bbb Q_p$. Also, group rings $\mathcal O_F[P]$, where $P$ is a finite, abelian $p$--group, satisfy the hypotheses of part (1), but not those of parts (2)--(3) of the proposition above. Note that in the latter case the maximal ideal of $\mathcal O_F[P]$ is given by $\frak m_{\mathcal O_F[P]}=(\frak m_{\mathcal O_F}, I_P)$, where $I_P$ is the augmentation ideal of $\mathcal O_F[P]$. Note that if $P$ is a product of $r$ cyclic groups of orders $p^{n_1}, \dots, p^{n_r}$, respectively, then we have isomorphisms of topological $\mathcal O_F$--algebras \begin{eqnarray*} \mathcal O_F[P]&\simeq &\mathcal O_F[X_1, \dots, X_r]/\left((X_1+1)^{p^{n_1}}-1, \dots, (X_r+1)^{p^{n_r}}-1\right)\\ &\simeq& \mathcal O_F[[X_1, \dots, X_r]]/\left((X_1+1)^{p^{n_1}}-1, \dots, (X_r+1)^{p^{n_r}}-1\right), \end{eqnarray*} where the first isomorphism sends the generators of $P$ to $\widehat{X_1+1}, \dots, \widehat{X_r+1}$, respectively, and the second is a consequence of the Weierstrass preparation theorem cited above, applied inductively.
Since the right-most algebra is clearly complete in its $(\frak m_{\mathcal O_F}, X_1, \dots, X_r)$--adic topology, the left-most algebra is also complete in its $\frak m_{{\mathcal O}_F[P]}$--adic topology. \end{remark} We end this section with a result on the decomposition groups $G_v(L_\infty/L_n)$ in the extension $L_\infty/L_n$, for all primes $v\vert\frak{f}$ and a fixed $n\geq 0$. This will be used in the proof of Proposition \ref{not a zero divisor prop} below. To that end, fix $n\geq 0$ and for every prime $v\vert\frak{f}$, let $U_{S_v}$ be the group of $S_v$--units in $k^\times$, where $S_v:=\{v, \infty\}$. We remind the reader that these are the elements of $k^\times$ whose divisor is supported at $S_v$. Consequently, we have a group isomorphism $$U_{S_v}\simeq\Bbb F_q^\times\times \Bbb Z.$$ Further we let $U_{S_v}^{(n+1)}:=\{x\in U_{S_v}\mid x\equiv 1 \mod (\frac{\frak{f}}{v^{{\rm ord}_v(\frak f)}}\cdot\frak p^{n+1})\}.$ This is a subgroup of finite index in $U_{S_v}$ which is torsion free. Therefore, it is infinite cyclic $$U_{S_v}^{(n+1)}=x_v^{\Bbb Z},$$ generated by some $x_v\in k^\times$, which obviously satisfies the following \begin{equation}\label{divisors}{\rm div}(x_v)={\rm ord}_v(x_v)\cdot v+{\rm ord}_{\infty}(x_v)\cdot\infty, \quad {\rm ord}_v(x_v)={-(d_\infty/d_v)}\cdot{\rm ord}_\infty(x_v)\ne 0.\end{equation} In what follows, we let $U(\frak f\frak p^\infty):=\bigcap_n U(\frak f\frak p^n)$ and let $i_v:k_v^\times\to J_k/k^\times U(\frak f\frak p^\infty)$ be the standard morphism (sending $x\in k_v^\times$ into the class of the id\`ele having $x$ in the $v$--component and $1$ everywhere else), for all primes $v$ of $k$. Now, we consider the topological group isomorphism $$\rho_{\frak p}^{(n)}: U_{k_{\frak p}}^{(n+1)}\simeq G(L_\infty/L_n)$$ obtained by composing the Artin reciprocity isomorphism $\rho \colon J_k/k^\times U({\frak f\frak p^\infty})\simeq G(L_\infty/k)$ with the standard embedding $U_{k_{\frak p}}^{(n+1)}\subseteq k_{\frak p}^\times\overset{i_{\frak p}}\longrightarrow J_k/k^\times U({\frak f\frak p^\infty})$. Note that $i_{\frak p}$ restricted to $U_{k_{\frak p}}^{(n+1)}$ is indeed injective, for all $n\geq 0$. \begin{prop}\label{decomposition groups prop} With notations as above, the following hold. \begin{enumerate} \item For all primes $v\vert\frak f$, if we let $x_v^{\Bbb Z_p}$ denote the cyclic $\Bbb Z_p$--submodule of $U_{k_{\frak p}}^{(n+1)}$ generated by $x_v$, then $\rho_{\frak p}^{(n)}$ gives an isomorphism of topological groups $$\rho_{\frak p}^{(n)}: x_v^{\Bbb Z_p}\simeq G_v(L_\infty/L_n).$$ \item Let $G_{\frak f}(L_\infty/L_n)$ be the subgroup of $G(L_\infty/L_n)$ generated by $G_v(L_\infty/L_n)$, for all $v\vert\frak f$. Then, if we let $f:={\rm card}\{v\mid v\vert\frak f\}$, we have topological group isomorphisms $$G_{\frak f}(L_\infty/L_n)\simeq \prod_{v\mid\frak f} G_{v}(L_\infty/L_n)\simeq \Bbb Z_p^f$$ \end{enumerate} \end{prop} \begin{proof} (1) A well--known class--field theoretical fact gives an equality of groups $$G_v(L_\infty/L_n)=\overline{\rho(i_v(k_v^\times))\cap\rho(i_{\frak p}(U_{k_{\frak p}}^{(n+1)}))},$$ where $\overline{X}$ denotes the pro-$p$ completion (topological closure) of the subgroup $X$ inside the pro-$p$ group $G(L_\infty/L_n)$. 
However, it is easily seen that we have $$\rho(i_v(k_v^\times))\cap\rho(i_{\frak p}(U_{k_{\frak p}}^{(n+1)}))=\rho_{\frak p}^{(n)}(x_v^{\Bbb Z}),$$ which, after taking the pro--$p$ completion of both sides, concludes the proof of part (1).\\ (2) According to part (1), it suffices to show that the elements $\{x_v\mid v\vert\frak f\}$ are $\Bbb Z_p$--linearly independent in $U_{k_{\frak p}}^{(n+1)}$. However, since their divisors are clearly $\Bbb Z$--linearly independent (see \eqref{divisors} above), these elements are $\Bbb Z$--linearly independent in $k^\times$. Now, the function field (strong) analogue of Leopoldt's Conjecture, proved in \cite{Kisilevsky}, implies that the elements in question are $\Bbb Z_p$--linearly independent in $U_{k_{\frak p}}^{(n+1)}$, as desired. \end{proof} \iffalse If $L/k$ is abelian and $\mathfrak{a} \sseq A$ prime to the conductor of $L/k$, then we write $\sigma_\mathfrak{a} = (\mathfrak{a}, L/k)$ for the Artin automorphism. We usually write $\sigma_a = \sigma_{aA}$ for elements $a \in A$. \begin{prop} Let $\mathfrak{f} \ne \mathfrak{e}$ and $\mathfrak{p}$ as above. Then the Artin map induces isomorphisms \begin{enumerate} \item \[ \left( A / \mathfrak{p}^{n+1} \right)^\times \longrightarrow \mathrm{G}(K_{\mathfrak{p}^{n+1}} / H_\mathfrak{e}^*), \quad \bar a \mapsto \sigma_x, \] where $x \in k^\times$ satisfies $x \equiv a (\mathrm{mod}^\times{\mathfrak{p}^{n+1}})$ and $\mathrm{sgn}(x) = 1$. \item \[ \left( A / \mathfrak{p}^{n+1} \right)^\times / \mathbb{F}_q^\times \longrightarrow \mathrm{G}(H_{\mathfrak{p}^{n+1}} / H_\mathfrak{e}), \quad \bar a \mathbb{F}_q^\times \mapsto \sigma_a. \] \item \[ \left( A / \mathfrak{p}^{n+1} \right)^\times \longrightarrow \mathrm{G}(K_{\mathfrak{f}\mathfrak{p}^{n+1}} / K_\mathfrak{f}), \quad \bar a \mapsto \sigma_x, \] where $x \in k^\times$ satisfies $x \equiv a (\mathrm{mod}^\times {\mathfrak{p}^{n+1}})$, $x \equiv 1 (\mathrm{mod}^\times{\mathfrak{f}})$ and $\mathrm{sgn}(x) = 1$. \item \[ \left( A / \mathfrak{p}^{n+1} \right)^\times \longrightarrow \mathrm{G}(H_{\mathfrak{f}\mathfrak{p}^{n+1}} / H_\mathfrak{f}), \quad \bar a \mapsto \sigma_x, \] where $x \in k^\times$ satisfies $x \equiv a (\mathrm{mod}^\times {\mathfrak{p}^{n+1}})$ and $x \equiv 1 (\mathrm{mod}^\times{\mathfrak{f}})$. \end{enumerate} \end{prop} \begin{proof} (1) We have a commutative diagram \begin{equation*} \xymatrix{ 0 \ar[r] & \left( A / \mathfrak{p}^{n+1} \right)^\times \ar[r]^f \ar[d] & \mathrm{Pic}_A^+(\mathfrak{p}^{n+1}) \ar[r] \ar[d] & \mathrm{Pic}_A^+ \ar[r] \ar[d] & 0 \\ 0 \ar[r] & 0 \ar[r] & \mathrm{Pic}_A^+ \ar[r] & \mathrm{Pic}_A^+ \ar[r] & 0 } \end{equation*} where $f(\bar a) := xA$ with $x \in k^\times$ such that $v_\mathfrak{p}(x-a) \ge n+1$ and $\mathrm{sgn}(x) = 1$. Note that the map $f$ is well-defined and that the existence of $x$ is guaranteed by the Strong Approximation Theorem. It follows that \[ \left( A / \mathfrak{p}^{n+1} \right)^\times \stackrel\simeq\longrightarrow \mathrm{G}(K_{\mathfrak{p}^{n+1}}/ H_\mathfrak{e}^*), \quad \bar a \mapsto \sigma_x, \] where $v_\mathfrak{p}(x-a) \ge n+1$ and $\mathrm{sgn}(x) = 1$. 
(2) We have a commutative diagram \begin{equation*} \xymatrix{ 0 \ar[r] & \mathbb{F}_q^\times \ar[r] \ar[d] & \left( A / \mathfrak{p}^{n+1} \right)^\times \ar[r] \ar[d] & \mathrm{Pic}_A(\mathfrak{p}^{n+1}) \ar[r] \ar[d] & \mathrm{Pic}_A \ar[r] \ar[d] & 0 \\ 0 \ar[r] & 0 \ar[r] & 0 \ar[r] & \mathrm{Pic}_A^+ \ar[r] & \mathrm{Pic}_A^+ \ar[r] & 0 } \end{equation*} It follows that \[ \left( A / \mathfrak{p}^{n+1} \right)^\times / \mathbb{F}_q^\times \stackrel\simeq\longrightarrow \mathrm{G}(K_{\mathfrak{p}^{n+1}}/ H_\mathfrak{e}^*), \quad \bar a \mathbb{F}_q^\times \mapsto \sigma_a. \] (3) We have a commutative diagram \begin{equation*} \xymatrix{ 0 \ar[r] & \left( A / \mathfrak{f}\mathfrak{p}^{n+1} \right)^\times \ar[r] \ar[d] & \mathrm{Pic}_A^+(\mathfrak{f}\mathfrak{p}^{n+1}) \ar[r] \ar[d] & \mathrm{Pic}_A^+ \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \left( A / \mathfrak{f} \right)^\times \ar[r] & \mathrm{Pic}_A^+(\mathfrak{f}) \ar[r] & \mathrm{Pic}_A^+ \ar[r] & 0 } \end{equation*} It follows that \[ \left( A / \mathfrak{p}^{n+1} \right)^\times \stackrel\simeq\longrightarrow \mathrm{G}(K_{\mathfrak{f}\mathfrak{p}^{n+1}}/ K_{\mathfrak{f}}), \quad \bar a \mapsto \sigma_x, \] where $v_\mathfrak{p}(x - a) \ge n+1$, $x \equiv 1 (\mathrm{mod}^\times{\mathfrak{f}})$ and $\mathrm{sgn}(x) = 1$. (4) We have a commutative diagram \begin{equation*} \xymatrix{ 0 \ar[r] & \mathbb{F}_q^\times \ar[r] \ar[d] & \left( A / \mathfrak{f}\mathfrak{p}^{n+1} \right)^\times \ar[r] \ar[d] & \mathrm{Pic}_A(\mathfrak{f}\mathfrak{p}^{n+1}) \ar[r] \ar[d] & \mathrm{Pic}_A \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \mathbb{F}_q^\times \ar[r] & \left( A / \mathfrak{f} \right)^\times \ar[r] & \mathrm{Pic}_A(\mathfrak{f}) \ar[r] & \mathrm{Pic}_A \ar[r] & 0 } \end{equation*} In addition, we have the exaxt sequence \[ 0 \longrightarrow \left(A/\mathfrak{p}^{n+1} \right)^\times \longrightarrow \left(A/\mathfrak{f}\mathfrak{p}^{n+1} \right)^\times / \mathbb{F}_q^\times \longrightarrow \left(A/\mathfrak{f} \right)^\times / \mathbb{F}_q^\times \longrightarrow 0 \] It follows that \[ \left( A / \mathfrak{p}^{n+1} \right)^\times \stackrel\simeq\longrightarrow \mathrm{G}(H_{\mathfrak{f}\mathfrak{p}^{n+1}}/ H_{\mathfrak{f}}), \quad \bar a \mapsto \sigma_x, \] where $v_\mathfrak{p}(x - a) \ge n+1$ and $x \equiv 1 (\mathrm{mod}^\times{\mathfrak{f}})$. \end{proof} The following field diagram illustrates the situation for $\mathfrak{f} \ne \mathfrak{e}$. \textcolor{red}{Again, I think that the diagram is essentially enough. Sections 2.1 and 2.2 should become much shorter.} \begin{equation*} \xymatrix{ & & K_{\mathfrak{f}\mathfrak{p}^{n+1}} \\ & K_{\mathfrak{f}\mathfrak{p}} \ar@{-}[ur]^{N\mathfrak{p}^{n}} & \\ & H_\mathfrak{e}^* H_{\mathfrak{f}\mathfrak{p}} \ar@{-}[u]^{W_k} & H_{\mathfrak{f}\mathfrak{p}^{n+1}} \ar@{-}[uu]_{W_\infty} \\ H_\mathfrak{e}^* \ar@{-}[ur] & H_{\mathfrak{f}\mathfrak{p}} \ar@{-}[ur]_{N\mathfrak{p}^n} \ar@{-}[u] & \\ H_\mathfrak{e} \ar@{-}[ur]_{\frac{\Phi(\mathfrak{f}\mathfrak{p})}{W_k}} \ar@{-}[u]^{\frac{W_\infty}{W_k}} & & \\ k \ar@{-}[u]^{hd_\infty} && } \end{equation*} For a fixed ideal $\mathfrak{f}$ (we allow now also $\mathfrak{f} = \mathfrak{e}$) and $n \ge 0$ we consider the fields \[ H_n := H_{\mathfrak{f}\mathfrak{p}^{n+1}}, \quad K_n := K_{\mathfrak{f}\mathfrak{p}^{n+1}} \] and set \[ H_\infty := \bigcup_{n \ge 0} H_n, \quad K_\infty := \bigcup_{n \ge 0} K_n. 
\] We obtain \[ \Gamma := \mathrm{G}(K_\infty/K_0) \simeq \mathrm{G}(H_\infty/H_0) \simeq \lim_n \frac{1 + \mathfrak{p}}{1+\mathfrak{p}^{n+1}} = U_{k_\mathfrak{p}}^{(1)} \simeq \mathbb{Z}_p^\aleph. \] \fi \iffalse As in Subsection \ref{Iwasawa iso} we write $\log_\mathrm{Iw} \colon U_{k_\mathfrak{p}}^{(1)} \longrightarrow A_\mathrm{Iw} = \prod_{p \nmid m}A_m$ for Iwasawa's isomorphism and also recall the definition of the subgroups $A_\mathrm{Iw}^{(k)} \subseteq A_\mathrm{Iw}$ of Definiton \ref{def Ak}. \begin{prop} \[ \mathrm{G}(K_\infty/K_n) \simeq \mathrm{G}(H_\infty/H_n) \simeq U_{k_\mathfrak{p}}^{(n+1)} \simeq A_\mathrm{Iw}^{(n+1)}. \] \end{prop} \begin{proof} This is immediate from \begin{equation*} \xymatrix{ 0 \ar[r] & \mathrm{G}(H_\infty/H_n) \ar[r] \ar[d] & \mathrm{G}(H_\infty/H_0) \ar[r] \ar[d] & \mathrm{G}(H_n/H_0) \ar[r] \ar[d] & 0 \\ 0 \ar[r] & U_{k_\mathfrak{p}}^{(n+1)} \ar[r] & U_{k_\mathfrak{p}}^{(1)} \ar[r] & U_{k_\mathfrak{p}}^{(1)} / U_{k_\mathfrak{p}}^{(n+1)} \ar[r] & 0 \\ } \end{equation*} and Proposition \ref{level n iso}. \end{proof} \begin{remark} In this remark we describe an explicit character of finite order for $\Gamma$. Let $\kappa \colon \Gamma \longrightarrow U_{k_\mathfrak{p}}^{(1)}$ be the elliptic character, i.e., $\gamma(\epsilon) = \rho_{\kappa(\gamma)}(\epsilon)$ for all $\gamma \in \Gamma$ and $\epsilon \in \rho [\mathfrak{f}\mathfrak{p}^\infty]$. Then we define for a fixed $m_0$ with $p \nmid m_0$ and $n\ge 0$ a character $\chi = \chi_{m_0,n} \in \mathrm{Hom}(\Gamma, {\mathbb{Q}_p}/\mathbb{Z}_p)$ of finite order by \begin{eqnarray*} && \Gamma \stackrel\kappa\longrightarrow U_{k_\mathfrak{p}}^{(1)} \longrightarrow U_{k_\mathfrak{p}}^{(1)} / U_{k_\mathfrak{p}}^{(n+1)} \\ &\stackrel\log_\mathrm{Iw}\longrightarrow& A_\mathrm{Iw}^{(1)} / A_\mathrm{Iw}^{(n+1)} \simeq \prod_{p \nmid m} A_m / p^{s_{n+1}(m)}A_m \longrightarrow A_{m_0}/ p^{s_{n+1}(m_0)}A_{m_0} \\ &=& \mathbb{Z}_p^d / p^{s_{n+1}(m_0)}\mathbb{Z}_p^d \stackrel{\omega_i}\longrightarrow \mathbb{Z}_p / p^{s_{n+1}(m_0)}\mathbb{Z}_p \\ \end{eqnarray*} We define \[ \widehat{A_\mathrm{Iw}(m_0)} := \prod_{p \nmid m, m \ne m_0} A_m. \] The following diagram illustrates the origin of $\chi = \chi_{m_0, n}$ \begin{equation*} \xymatrix{ & K_\infty \ar@{-}[dd] \ar@{-}[ddl]_{\widehat{A_\mathrm{Iw}(m_0)}}\\ & \bullet \ar@{-}[ld] \\ \bullet \ar@{-}[d]^{p^{s_{n+1}(m_0)}A_{m_0}} \ar@{-}[dd]_{A_{m_0} \simeq \mathbb{Z}_p^d} & K_n \ar@{-}[ld]\\ \bullet \ar@{-}[d]^{\mathbb{Z}_p^d / p^{s_{n+1}(m_0)}\mathbb{Z}_p^d} & \\ K_0 & \\ } \end{equation*} \end{remark} \fi \subsection{The basic example: The Carlitz module} \label{Carlitz subsection} We briefly describe the special situation which arises in the case of the Carlitz cyclotomic extension of a rational function field (see e.g. \cite[Sec.~2]{Angles}).\\ Let $k = \mathbb{F}_q(\theta)$ be the rational function field over $\Bbb F_q$ and let $v_\infty$ correspond to the valuation on $k$ of uniformizer $1/\theta$. Then $A = \mathbb{F}_q[\theta]$. Furthermore, $h_k = 1$, $d_\infty = 1$, and $H_\mathfrak{e}^* = H_\mathfrak{e} = k$. 
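For orientation, we note that the equalities $H_\mathfrak{e}^* = H_\mathfrak{e} = k$ are consistent with the general degree formulas $[H_\mathfrak{e}:k]=|\mathrm{Pic}(A)|=h_k\, d_\infty$ and $[H_\mathfrak{e}^\ast:H_\mathfrak{e}]=(q^{d_\infty}-1)/(q-1)$ from the theory of sgn--normalized rank one Drinfeld modules (see, e.g., \cite{Hay92}): in the present situation \[ [H_\mathfrak{e}:k]=h_k\, d_\infty=1, \qquad [H_\mathfrak{e}^\ast:H_\mathfrak{e}]=\frac{q^{d_\infty}-1}{q-1}=1. \]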
We consider the Carlitz module \[ \mathcal C \colon A \longrightarrow k\{\tau\}, \quad \theta \mapsto \mathcal C(\theta) = \theta\tau^0 + \tau^1, \] which is sgn--normalized with respect to the unique sign function satisfying ${\rm sgn}(1/\theta)=1.$ All data in the following refer to $\rho = \mathcal C$.\\ For each $\mathfrak{m} \ne \mathfrak{e}$ we have \[ \mathrm{G}(H^\ast_\mathfrak{m}/k) \simeq \left( A/\mathfrak{m} \right)^\times, \quad \mathrm{G}(H_\mathfrak{m}/k) \simeq \left( A/\mathfrak{m} \right)^\times / \mathbb{F}_q^\times, \quad \mathrm{G}(H_\mathfrak{m}^\ast/H_\mathfrak{m})\simeq \Bbb F_q^\times=\Bbb F_\infty^\times, \] and the subgroup $\mathbb{F}_q^\times \hookrightarrow \left( A/\mathfrak{m} \right)^\times$ identifies with the decomposition subgroup at $v_\infty$, which also equals the ramification subgroup at $v_\infty$, for the extension $H_\mathfrak{m}^\ast/k$. \\ We fix a prime $\mathfrak{p}$ of $A$ of degree $d = d_\mathfrak{p}$ and consider the fields $L^\ast_n := H^\ast_{\mathfrak{p}^{n+1}}$ for $n \ge 0$. Then \begin{equation}\label{decomp of group algebra} G_n^\ast := \mathrm{G}(L_n^\ast/k) = \Delta^\ast \times \Gamma_n \simeq \left( A/\mathfrak{p}^{n+1} \right)^\times \simeq \left( A/\mathfrak{p} \right)^\times \times \frac{U_{k_\mathfrak{p}}^{(1)}}{U_{k_\mathfrak{p}}^{(n+1)}}, \end{equation} where $\Delta^\ast \simeq \mathrm{G}(L_0^\ast/k) \simeq \left( A/\mathfrak{p} \right)^\times$ is cyclic of order $q^d-1$ and $\Gamma_n \simeq \mathrm{G}(L^\ast_n/L^\ast_0) \simeq U_{k_\mathfrak{p}}^{(1)}/U_{k_\mathfrak{p}}^{(n+1)}$ is the $p$-Sylow subgroup of $G^\ast_n$.\\ The extension $L^\ast_n/k$ is unramified outside $\{v_\infty, \mathfrak{p}\}$, totally ramified at $\mathfrak{p}$ and tamely ramified of ramification degree $(q-1)$ at $v_\infty$. More precisely, the decomposition field at $v_\infty$ is $L_n := H_{\mathfrak{p}^{n+1}}$ and $L^\ast_n/L_n$ is totally ramified of degree $(q-1)$.\\ Hence, the extension $L^\ast_\infty/k$ is also unramified outside $\{v_\infty, \mathfrak{p}\}$, totally ramified at $\mathfrak{p}$ and tamely ramified of degree $(q-1)$ at $v_\infty$. More precisely, the decomposition field at $v_\infty$ is $L_\infty := \cup_n H_{\mathfrak{p}^n}$ and $L^\ast_\infty/L_\infty$ is totally ramified at $v_\infty$ of degree $(q-1)$. We have \[ G_\infty^\ast := \mathrm{G}(L_\infty^\ast/k) = \Delta^\ast \times \Gamma_\infty, \quad \Gamma_\infty\simeq U_{k_\mathfrak{p}}^{(1)}. \] Here the isomorphism $\Gamma_\infty \simeq U_{k_\mathfrak{p}}^{(1)}$ is induced by the $\mathfrak{p}$-cyclotomic character \[ \kappa \colon G_\infty^\ast \longrightarrow U_{k_\mathfrak{p}}, \] which is defined as follows. We write $A_\mathfrak{p}$ for the completion of $A$ at $\mathfrak{p}$, so that $A_\mathfrak{p}$ identifies with the valuation ring of $k_\mathfrak{p}$, in particular, $A_\mathfrak{p}^\times = U_{k_\mathfrak{p}}$. Then, $\mathcal C$ can be uniquely extended to a formal Drinfeld module (see \cite{Rosen03}) \[ \widehat{\mathcal C} \colon A_\mathfrak{p} \longrightarrow A_\mathfrak{p} \{\{\tau \}\}. \] For any $\sigma \in G_\infty^\ast$, the value $\kappa(\sigma)$ is determined by the equality \[ \sigma(\epsilon) = \widehat{\mathcal C}_{\kappa(\sigma)}(\epsilon) \text{ for all } \epsilon \in \mathcal C[\mathfrak{p}^\infty]. \] Finally, we note that \[ G_\infty := \mathrm{G}(L_\infty/k) = \Delta \times \Gamma_\infty, \] where $\Delta := \Delta^\ast / \mathbb{F}_q^\times\simeq (A/\mathfrak{p})^\times/\Bbb F_q^\times$.
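For instance, if $\mathfrak p=(\theta)$ (so $d=1$), then the non-zero $\mathfrak p$--torsion points of $\mathcal C$ are the non-zero roots of $\mathcal C_\theta(z)=\theta z+z^q$, i.e., the roots of $z^{q-1}+\theta$, so that \[ L_0^\ast=H^\ast_{\mathfrak p}=k(\lambda), \qquad \lambda^{q-1}=-\theta, \] an extension of degree $q-1=|(A/\mathfrak p)^\times|$ which is totally (and tamely) ramified at both $\mathfrak p$ and $v_\infty$. In this case $\Delta^\ast\simeq\Bbb F_q^\times$, the group $\Delta$ is trivial, and $L_0=H_{\mathfrak p}=k$. This simple example, which plays no role in what follows, already exhibits the ramification pattern just described.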
\section{Equivariant main conjectures in positive characteristic} \subsection{Review of the work of Greither and Popescu} \label{GP review} In what follows, if $G$ is a finite, abelian group and $F$ is a field of characteristic $0$, we denote by $\widehat G(F)$ the set of equivalence classes of the $\overline F$--valued characters $\chi$ of $G$, with respect to the equivalence relation $\chi\sim\chi'$ if there exists $\sigma\in G(\overline F/F)$ such that $\chi'=\sigma\circ\chi$. If $R$ is a commutative ring and $M$ is a finitely presented $R$--module, we let ${\rm Fitt}_R(M)$ denote the $0$--th Fitting ideal of $M$. For the definitions and relevant properties of Fitting ideals needed in this context, the reader may consult \cite{GP12}. We let $K/k$ denote an abelian extension of characteristic $p$ global fields, of Galois group $G$. We assume that $\mathbb{F}_q$ is the exact field of constants of $k$ (but not necessarily of $K$). Let $X \longrightarrow Y$ be the corresponding $G$-Galois cover of smooth projective curves defined over $\mathbb{F}_q$. Let $S$ and $\Sigma$ be two finite, non-empty, disjoint sets of closed points of $Y$, such that $S$ contains the set $S_{\mathrm{ram}}$ of points which ramify in $X$. We let $\overline{\Bbb F_q}$ denote the algebraic closure of $\mathbb{F}_q$ and set $\bar{X} := X \times_{\mathbb{F}_q} \overline{\Bbb F_q}$, $\bar{Y} := Y \times_{\mathbb{F}_q} \overline{\Bbb F_q}$. Also, $\bar{S}$ and $\bar{\Sigma}$ denote the sets of points on $\bar{X}$ sitting above points of $S$ and $\Sigma$, respectively. For every unramified closed point $v$ on $Y$ we denote by $G_v$ and $\sigma_v$ the decomposition group and the Frobenius automorphism associated to $v$. As before, we write $d_v$ for the residual degree over $\mathbb{F}_q$ and we let $Nv := q^{d_v} = | \mathbb{F}_{q^{d_v}} |$ denote the cardinality of the residue field associated to $v$. To the set of data $(K/k, \mathbb{F}_q, S, \Sigma)$, one can associate a polynomial equivariant $L$-function \begin{equation} \label{eq L poly} \Theta_{S, \Sigma}(u) := \prod_{v \in \Sigma}\left( 1 - \sigma_v^{-1} \cdot (qu)^{d_v} \right) \cdot \prod_{v \not\in S}\left( 1 - \sigma_v^{-1} \cdot u^{d_v} \right)^{-1}. \end{equation} The infinite product on the right is taken over all closed points in $Y$ which are not in $S$. This product converges in $\Ze[G][[u]]$ and in fact it converges to an element in the polynomial ring $\Ze[G][u]$. We recall the link between $\Theta_{S, \Sigma}(u)$ and classical Artin $L$-functions. For every complex-valued irreducible character $\chi$ of $G$, we let $L_{S, \Sigma}(\chi, s)$ denote the $(S, \Sigma)$-modified Artin $L$-function associated to $\chi$. This is the unique holomorphic function of the complex variable $s$ satisfying the equality \begin{equation}\label{poly versus L fct} L_{S, \Sigma}(\chi, s) = \prod_{v \in \Sigma}\left( 1 - \chi(\sigma_v)Nv^{1-s} \right) \cdot \prod_{v \not\in S}\left( 1 - \chi(\sigma_v) (Nv)^{-s} \right)^{-1} \end{equation} for all $s \in {\mathbb{C}}$ with $\mathrm{Re}(s) > 1$. Then, for all $s \in {\mathbb{C}}$, \begin{equation}\label{StickelArtin} \Theta_{S, \Sigma}(q^{-s}) = \sum_{\chi \in \widehat{G}(\Bbb C)} L_{S, \Sigma}(\chi, s) e_{\chi^{-1}}, \end{equation} where $e_\chi := \frac{1}{|G|} \sum_{g \in G}\chi(g) g^{-1} \in {\mathbb{C}}[G]$ denotes the idempotent corresponding to $\chi$.
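To illustrate the polynomiality statement above in the simplest (degenerate) case, which we include only for orientation, take $K=k=\Bbb F_q(\theta)$, so that $G$ is trivial, and let $S=\{v_\infty\}$, $\Sigma=\{v_0\}$ for some closed point $v_0$ of degree $1$. Removing the Euler factor at $v_\infty$ from the zeta function $\prod_{v}\left(1-u^{d_v}\right)^{-1}=\frac{1}{(1-u)(1-qu)}$ of $Y=\Bbb P^1_{\Bbb F_q}$ and inserting the $\Sigma$--factor at $v_0$ gives \[ \Theta_{S, \Sigma}(u)=\left(1-qu\right)\cdot\left(1-u\right)\cdot\frac{1}{(1-u)(1-qu)}=1, \] which is indeed a polynomial in $\Ze[G][u]=\Ze[u]$.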
We denote by $M_{\bar{S}, \bar{\Sigma}}$ the Picard $1$-motive associated to the set of data $(\bar{X}, \overline\mathbb{F}_q, \bar{S}, \bar{\Sigma})$, see \cite[Def.~2.3]{GP12} for the definition. For a prime number $\ell$ we consider the $\ell$-adic Tate module (or $\ell$-adic realization) $T_\ell(M_{\bar{S}, \bar{\Sigma}})$, see \cite[Def.~2.6]{GP12}, endowed with the usual ${\mathbb{Z}_\ell}[[G\times\Gamma]]$-module structure, where $\Gamma := G(\overline\mathbb{F}_q / \mathbb{F}_q)$. Recall that $\Gamma$ is isomorphic to the profinite completion $\widehat{\Bbb Z}$ of $\Bbb Z$ and has a natural topological generator $\gamma$ given by the $q$-power arithmetic Frobenius automorphism. The main result of Section 4 of \cite{GP12} is the following. \begin{theorem}[Greither--Popescu] \label{GP Th.4.3} The following hold for all prime numbers $\ell$. \begin{itemize} \item [(1)] The $\Zl[G]$-module $T_\ell(M_{\bar{S}, \bar{\Sigma}})$ is projective. \item [(2)] We have an equality of ${\mathbb{Z}_\ell}[[G\times\Gamma]]$-ideals \[ \left( \Theta_{S, \Sigma}(\gamma^{-1}) \right) = \mathrm{Fitt}_{{\mathbb{Z}_\ell}[[G\times \Gamma]]}(T_\ell(M_{\bar{S}, \bar{\Sigma}})). \] \end{itemize} \end{theorem} \begin{remark}\label{p-no Sigma-remark} By \cite[Rem.~2.7]{GP12} we have $T_p(M_{\bar{S}, \bar{\Sigma}}) = T_p(M_{\bar{S}, \emptyset})$. This is in accordance with the fact that the product of Euler factors $$ \prod_{v \in \Sigma}\left( 1 - \sigma_v^{-1} \cdot (q\gamma^{-1})^{d_v} \right) $$ is a unit in ${\mathbb{Z}_p}[[G \times \Gamma]]$. Indeed, from \[ \sum_{n=0}^{N-1} \left( \sigma_v^{-1} (qu)^{d_v} \right)^n = \frac{1 - \left( \sigma_v^{-1} (qu)^{d_v} \right)^N} {1 - \left( \sigma_v^{-1} (qu)^{d_v} \right)} \] and $\lim\limits_{N\rightarrow\infty}\left(1 - \left( \sigma_v^{-1} (qu)^{d_v} \right)^N \right) = 1$ in $\Bbb Z_p[G][[u]]$, we see that \[ \frac{1}{1 - \left( \sigma_v^{-1} (qu)^{d_v} \right)} = \sum_{n=0}^{\infty} \left( \sigma_v^{-1} (qu)^{d_v} \right)^n \in \mathbb{Z}_p[G][[u]]. \] Since $p\mid q$, the series above still converges $p$--adically after the substitution $u=\gamma^{-1}$, so it provides an inverse for the Euler factor in $\mathbb{Z}_p[[G\times\Gamma]]$. \end{remark} \subsection{The main results (Geometric Equivariant Main Conjectures)} We fix a prime ideal $\mathfrak{p}$ and an integral ideal $\mathfrak{f}$ of $A$ such that $\mathfrak{p} \nmid \mathfrak{f}$. We will consider the tower of fields $H_{\mathfrak{f}\mathfrak{p}^{n+1}}/k$, for $n \ge 0$. (See \S2.2 and field diagram \eqref{field-diagram-2}.) The definition of the real ray--class fields $H_{\mathfrak{f}\mathfrak{p}^{n}}$ implies that we have $$H_{\mathfrak{f}\mathfrak{p}^{n}} \cap \bar\mathbb{F}_q = \mathbb{F}_{q^{d_\infty}}, \qquad \bar\mathbb{F}_q H_\mathfrak{f} \cap H_{\mathfrak{f}\mathfrak{p}^{n}} = H_\mathfrak{f},$$ for all $n\geq 0$. Consequently, we have the following field diagram.
\begin{equation}\label{field-diagram-3} \xymatrix{ &&& \mathcal{L}_n := \bar\mathbb{F}_q H_{\mathfrak{f}\mathfrak{p}^{n+1}} \ar@{-}[dd] \ar@{-}[ddll]\\ &&&\\ & L_n := H_{\mathfrak{f}\mathfrak{p}^{n+1}} \ar@{-}[dd] \ar@{-}@{.}@/_2pc/[ddddl]_{G_n} & & \mathcal{L}_0 := \bar\mathbb{F}_q H_{\mathfrak{f}\mathfrak{p}} \ar@{-}[d] \ar@{-}[ddll] \\ &&& \mathcal{K} := \bar\mathbb{F}_q k \ar@{-}[d] \ar@{-}[ddll] \\ & L_0 := H_{\mathfrak{f}\mathfrak{p}} \ar@{-}[d] \ar@{-}@{.}@/_1pc/[ddl]_{G_0} & & \kappa := \bar\mathbb{F}_q \ar@{-}[ddll] \ar@{-}@{.}@/^1pc/[dddlll]^{\Gamma}\\ & E := L_0 \cap \mathcal{K} \ar@{-}[d] \ar@{-}[dl] & & \\ k \ar@{-}[d] & \mathbb{F}_{q^{d_\infty}} \ar@{-}[dl] && \\ \mathbb{F}_q &&& } \end{equation} As in Subsection \ref{GP review} we let $S$ and $\Sigma$ be two finite, non-empty, disjoint sets of closed points of the smooth, projective curve $Y$ corresponding to $k$, such that $S$ contains the set $S_{\mathrm{ram}}$ of points which ramify in $H_{\mathfrak{f}\mathfrak{p}^{n+1}}$. Note that this condition does not depend on $n$. We write $\Theta_{S, \Sigma}^{(n)}(u) \in \mathbb{Z}[G_n][u]$ for the equivariant $L$-function attached to $(L_n/k, \mathbb{F}_q, S, \Sigma)$ in (\ref{eq L poly}). We let $L_\infty:=\cup_n L_n$ and $G_\infty:={\rm Gal}(L_\infty/k)$. The next lemma shows that the following is well defined. \[ \Theta_{S, \Sigma}^{(\infty)}(u) := \varprojlim_{n} \Theta_{S, \Sigma}^{(n)}(u) \in \mathbb{Z}_p[[G_\infty]][[u]]. \] \begin{lemma}\label{Theta functoriality} Let $L/k$ be a finite abelian extension with Galois group $G := \mathrm{G}(L/k)$. Let $K/k$ be a subextension with $H := \mathrm{G}(L/K)$. We write $$\Theta_{S, \Sigma, L/k}(u) \in \mathbb{Z}_p[G][u], \quad \Theta_{S, \Sigma, K/k}(u) \in \mathbb{Z}_p[G/H][u]$$ for the equivariant $L$-functions attached to the data $(L/k, \mathbb{F}_q, S, \Sigma)$ and $(K/k, \mathbb{F}_q, S, \Sigma)$, respectively. Then the canonical map $\mathbb{Z}_p[G][u] \longrightarrow \mathbb{Z}_p[G/H][u]$ sends $\Theta_{S, \Sigma, L/k}(u)$ to $\Theta_{S, \Sigma, K/k}(u)$. \end{lemma} \begin{proof} We write $\pi$ for the canonical map $G \longrightarrow G/H$ and also for any map which is naturally induced by $\pi$. It is straightforward to verify that for any character $\chi \in \hat G$ one has \[ \pi(e_\chi) = \begin{cases} e_\psi, & \text{if } \chi|_H = 1 \text{ and } \chi = \inf_{G/H}^G(\psi), \\ 0, & \text{if } \chi|_H \ne 1. \end{cases} \] Hence, by the inflation invariance of $(S, \Sigma)$-modified Artin $L$-functions, we obtain \begin{eqnarray*} \pi\left( \sum_{\chi \in \hat{G}}L_{S, \Sigma}(\chi, s) e_{\chi^{-1}}\right) &=& \sum_{\psi \in \widehat{G/H}}L_{S, \Sigma}(\psi, s) e_{\psi^{-1}}. \end{eqnarray*} It follows that $(\pi(\Theta_{S, \Sigma, L/k}))(q^{-s}) = \Theta_{S, \Sigma, K/k}(q^{-s}) $ for all $s \in {\mathbb{C}}$, and hence we also have $\pi(\Theta_{S, \Sigma, L/k}(u)) = \Theta_{S, \Sigma, K/k}(u)$ by \eqref{StickelArtin}. \end{proof} We let $X_n \longrightarrow Y$ denote the $G_n$-Galois cover of smooth, projective curves defined over $\mathbb{F}_q$ corresponding to $L_n / k$. We write $\overline{X}_n := X_n \times_{\mathbb{F}_q} \overline\mathbb{F}_q$, $\bar{Y} := Y \times_{\mathbb{F}_q} \overline\mathbb{F}_q$, and also $\bar{S}_n$ for the set of points of $\bar{X}_n$ above points of $S$. We let $M_S^{(n)}$ be the Picard $1$-motive associated with the set of data $(\bar{X}_n, \overline\mathbb{F}_q, \bar{S}_n, \emptyset)$ and write $T_p^{(n)} := T_p(M_S^{(n)})$ for the $p$-adic Tate module of $M_S^{(n)}$. 
Then, $T_p^{(n)}$ is endowed with a natural structure of $\Bbb Z_p[G_n][[\Gamma]]$--module. (See \cite[Sec.~3]{GP12}.) Now, since $\Bbb F_{q^{d_\infty}}$ is the exact field of constants of $L_n$, $\overline X_n$ has $d_\infty$ connected components, all isomorphic to $\overline X_n^c:=X_n\times_{\Bbb F_{q^{d_\infty}}}\overline\mathbb{F}_q$. We let $T_{p,c}^{(n)}:=T_{p, c}(M_S^{(n)}):=T_p(M_S^{(n), c})$ denote the $p$-adic Tate module of the $1$--motive $M_S^{(n), c}$ associated to the data $(\bar{X}_n^c, \overline{\mathbb{F}_q}, \bar{S}_n\cap \overline{X}_n^c,\, \emptyset)$.\\ For $m \ge n \ge 0$ we write $N_{m/n} \colon L_m^\times \longrightarrow L_n^\times$ for the field theoretic norm map and also view $N_{m/n} = \sum_{g \in \mathrm{G}(L_m/L_n)}g$ as an element of $\mathbb{Z}_p[G_m]$. Galois restriction gives isomorphisms \[ \mathrm{G}(\mathcal{L}_m / \mathcal{L}_n) \simeq \mathrm{G}(L_m / L_n). \] By the results of \cite[Sec.~3]{GP12} we have a natural $\Bbb Z_p[G_{n+1}]$--equivariant, injective morphism $T_p^{(n)}\hookrightarrow T_p^{(n+1)}.$ We may also consider the norm map \[ N_{n+1/n}:\, T_p^{(n+1)} \longrightarrow \left( T_p^{(n+1)} \right)^{\mathrm{G}(L_{n+1}/L_n)}. \] By \cite[Th.~3.10]{GP12} this norm map is surjective and, moreover, it is an almost formal consequence of \cite[Th.~3.1]{GP12} that, under the canonical injective morphism $T_p^{(n)}\hookrightarrow T_p^{(n+1)}$, we can identify the following $\Bbb Z_p[G_n]$--modules \begin{equation}\label{Tp invariants} \left( T_p^{(n+1)} \right)^{\mathrm{G}(L_{n+1}/L_n)} = T_p^{(n)}. \end{equation} Indeed, we recall the diagram of fields above and set for $n \ge 0$ \[ H_n := \mathrm{G}(L_n / E). \] Then we have natural isomorphisms of $\Bbb Z_p[G_n]$--modules (see \cite[proof of Th.~3.10]{GP12}) \[ T_p^{(n)} \simeq T_{p,c}^{(n)} \otimes_{\mathbb{Z}_p[H_n]} \mathbb{Z}_p[G_n]. \] Therefore, we have the following: \begin{eqnarray*} \left( T_p^{(n+1)} \right)^{\mathrm{G}(L_{n+1} / L_n)} &\simeq& \left( T_{p,c}^{(n+1)} \otimes_{\mathbb{Z}_p[H_{n+1}]} \mathbb{Z}_p[G_{n+1}] \right)^{\mathrm{G}(L_{n+1} / L_n)} \\ &\stackrel{(*)}=& \left( T_{p,c}^{(n+1)} \right)^{\mathrm{G}(L_{n+1} / L_n)} \otimes_{\mathbb{Z}_p[H_{n}]} \mathbb{Z}_p[G_{n}] \\ &\stackrel{(**)}=& \left( T_{p,c}^{(n)} \right) \otimes_{\mathbb{Z}_p[H_{n}]} \mathbb{Z}_p[G_{n}] \\ &\simeq& T_p^{(n)}, \end{eqnarray*} where $(*)$ is easy to verify using the fact that $T_{p,c}^{(n+1)}$ is $\mathbb{Z}_p[H_{n+1}]$-projective by Theorem \ref{GP Th.4.3} and $(**)$ is the result of \cite[Th.~3.1]{GP12}. \begin{definition} We define the Iwasawa--type algebras \[ \Lambda_n:=\Bbb Z_p[[G_n\times \Gamma]], \qquad \Lambda := \varprojlim_n \Lambda_n= \mathbb{Z}_p[[G_\infty \times \Gamma]], \] and consider the $\Lambda$--module defined by \[ T_p(M_S^{(\infty)}) := \varprojlim_n T_p(M_S^{(n)}), \] where the projective limit is taken with respect to norm maps. \end{definition} The rest of this section is devoted to the proof of Theorem \ref{EMC-I-intro}, which we recall for the reader's convenience. \begin{theorem}[EMC-I]\label{limit theorem 1} Let $S$ and $\Sigma$ be as above. Then the $\Lambda$--module $T_p(M_S^{(\infty)})$ is finitely generated and torsion, and the following hold. \begin{enumerate} \item ${\rm pd}_\Lambda \left(T_p(M_S^{(\infty)})\right)=1$. \item $\mathrm{Fitt}_\Lambda\left( T_p(M_S^{(\infty)}) \right) = \Theta_{S, \Sigma}^{(\infty)}(\gamma^{-1}) \cdot \Lambda.$ \end{enumerate} \end{theorem} The strategy of proof is as follows.
First, we obtain a finitely generated, $\Lambda$--projective resolution of length $1$ for $T_p(M_S^{(\infty)})$, as a projective limit of certain $\Lambda_n$--projective resolutions of length $1$ for $T_p(M_S^{(n)})$, for $n\geq 0$, essentially constructed in \cite{GP12}. This implies that $T_p(M_S^{(\infty)})$ is finitely generated, with ${\rm pd}_{\Lambda}\left(T_p(M_S^{(\infty)})\right)\leq 1$. Then, we use this construction further to show that \begin{equation}\label{lim fitt commute} \mathrm{Fitt}_\Lambda\left(T_p(M_S^{(\infty)}) \right) = \varprojlim_n\, \mathrm{Fitt}_{\Lambda_n} \left( T_p(M_S^{(n)}) \right). \end{equation} Next, we show that $\Theta_{S, \Sigma}^{(n)}(\gamma^{-1})$ is a non--zero divisor in $\Lambda_n$, for all $n\geq 0$, and that $\Theta_{S, \Sigma}^{(\infty)}(\gamma^{-1})$ is a non--zero divisor in $\Lambda$. (See Corollary \ref{non-zero divisors} below.) When combined with Theorem \ref{GP Th.4.3}(2), equality \eqref{lim fitt commute} and Lemma \ref{lim of nzd} below, this leads to the equalities \[ \mathrm{Fitt}_\Lambda\left( T_p(M_S^{(\infty)}) \right) = \varprojlim_n \left( \Theta_{S, \Sigma}^{(n)}(\gamma^{-1}) \cdot \Lambda_n\right) = \Theta_{S, \Sigma}^{(\infty)} (\gamma^{-1}) \cdot \Lambda. \] Now, the fact that $T_p(M_S^{(\infty)})$ is $\Lambda$--torsion and of projective dimension exactly equal to $1$ follows from the following elementary result. \begin{lemma}\label{torsion lemma} Let $R$ be a commutative ring and $X$ a finitely generated $R$--module. Assume that ${\rm Fitt}_R(X)$ contains a non--zero divisor $f\in R$. Then $X$ is a torsion $R$--module. Consequently, if $X$ is non--zero, then $X$ cannot be a submodule of a free $R$--module and therefore it cannot be $R$--projective. \end{lemma} \begin{proof} From the well--known inclusion ${\rm Fitt}_R(X)\subseteq {\rm Ann}_R(X)$, we conclude that $f\cdot X=0$, so $X$ is a torsion $R$--module. Moreover, since $f$ is a non--zero divisor, no non--zero element of a free $R$--module is annihilated by $f$; therefore, if $X$ is non--zero, it cannot be a submodule of a free $R$--module and, in particular, it cannot be $R$--projective. \end{proof} \iffalse \begin{lemma} Under our hypothesis $p \nmid |G(H_\mathfrak{e}/k)|$, we have a direct product decomposition \[ G_n = \mathrm{G}(L_n/k) = \mathrm{G}(L_0/k) \times \mathrm{G}(L_n/L_0) = \Delta \times P \times \mathrm{G}(L_n/L_0), \] where $P$ denotes the $p$-Sylow subgroup of $ \mathrm{G}(L_0/k)$ and $\Delta$ its complement. \end{lemma} \begin{proof} For all $n\geq 0$, let $P_n$ denote the $p$--Sylow subgroup of $G_n$ and let $\Delta_n$ be its complement. Since $G(L_n/L_0)\simeq U_{k_\mathfrak{p}}^{(1)}/U_{k_\mathfrak{p}}^{(n)}$ is a $p$--group, we have $\Delta_n=\Delta_0=:\Delta$, for all $n$. Consequently, we have the following group isomorphisms \[ G(L_n/k)\simeq P_n\times\Delta\simeq G(L_n/L_n^{P_n})\times \Delta\simeq G(L_n/L_0^{P_0})\times \Delta. \] Our hypothesis is equivalent to $p\nmid|G(H_\mathfrak{p}/k)|=|G(H_\mathfrak{e}/k)|\times|(A/\mathfrak{p})^\times/\Bbb F_q^\times|.$ Therefore, we have $H_\mathfrak{p}\subseteq L_0^{P_0}$. On the other hand, $H_{\mathfrak{p}^{n+1}}/H_\mathfrak{p}$ is totally ramified at the primes above $\mathfrak{p}$ and $L_0/H_\mathfrak{p}$ is unramified at those primes. Therefore, $L_0\cap H_{\mathfrak{p}^{n+1}}=H_\mathfrak{p}$. Since we obviously have $L_n=L_0\cdot H_{\mathfrak{p}^{n+1}}$ (compare degrees over $H_\mathfrak{p}$), this gives a group isomorphism $$G(L_n/L_0^{P_0})\simeq G(L_n/L_0)\times G(L_0/L_0^{P_0})=G(L_n/L_0)\times P_0.$$ When combined with the last displayed group isomorphisms, this concludes the proof.
\end{proof} \fi \iffalse As a consequence of the Lemma, for all $n\geq 0$, we have an isomorphism of $\Bbb Z_p$--algebras \[ \mathbb{Z}_p[G_n] \simeq \bigoplus_{\chi \in \hat\Delta / \sim} \mathbb{Z}_p(\chi)[P] [\mathrm{G}(L_n/L_0)], \] given by the usual direct sum of $\chi$--evaluation maps, for $\chi\in\widehat\Delta/\sim$. \fi From now on, we let $P_n$ denote the Sylow $p$--subgroup of $G_n$ and $\Delta_n$ its complement, so that $G_n=P_n\times\Delta_n$, for all $n\geq 0$. Note that since $\Gamma_n=G(L_n/L_0)$ is a $p$--group, $\Delta:=\Delta_n$ does not depend on $n$. Consequently, for all $n\geq 0$, we have \begin{equation}\label{Pl Delta decomposition} G_n\simeq P_n\times\Delta, \qquad P_{n}/\Gamma_n\simeq P_0. \end{equation} Therefore, for all $n\geq 0$, we have an isomorphism of $\Bbb Z_p$--algebras \[ \mathbb{Z}_p[G_n] \simeq \bigoplus_{\chi \in \hat\Delta({\mathbb{Q}_p})} \mathbb{Z}_p(\chi)[P_n], \] given by the usual direct sum of $\chi$--evaluation maps, for $\chi\in\widehat\Delta({\mathbb{Q}_p})$. Consequently, any $\Bbb Z_p[G_n]$--module $X$ splits naturally into a direct sum \[ X= \bigoplus_{\chi \in \hat\Delta({\mathbb{Q}_p})} X^\chi, \qquad\text{where }X^\chi\simeq X\otimes_{\Bbb Z_p[G_n]}\Bbb Z_p(\chi)[P_n]. \] Since $P_n$ is an abelian $p$--group, the rings $\Bbb Z_p(\chi)[P_n]$ are local rings, for all $n$ and $\chi$ as above. Further, since projective modules over local rings are free, Theorem \ref{GP Th.4.3} and \cite[Rem.~2.7]{GP12} imply that we have isomorphisms of $\mathbb{Z}_p(\chi)[P_n]$--modules \begin{equation}\label{def mchil} T_p\left(M_S^{(n)}\right)^\chi\simeq \left( \mathbb{Z}_p(\chi)[P_n] \right) ^{m_\chi^{(n)}}, \end{equation} with integers $m_\chi^{(n)} \ge 0$, for all $\chi$ and $n$ as above. \begin{lemma}\label{indep of l} The non-negative integers ${m_\chi^{(n)}}$ do not depend on $n$. \end{lemma} \begin{proof} Note that since $T_p(M_S^{(n)})$ is $\mathbb{Z}_p[G_n]$--projective, taking $\mathrm{G}(L_n/L_0)$ fixed points commutes with taking $\chi$-parts. Hence (\ref{def mchil}) combined with \eqref{Tp invariants} implies $m_\chi^{(n)} = m_\chi^{(0)}$, for all characters $\chi$ and all $n\geq 0$. \end{proof} \begin{definition} We let $m_\chi:=m_\chi^{(n)} = {\rm rank}_{\Bbb Z_p(\chi)[P_n]}T_p(M_S^{(n)})^\chi$, for all $\chi$ and $n$ as above. \end{definition} In order to simplify notation, for every $\chi\in\widehat\Delta$ and all $n\geq 0$, we write \[ R_n:=\Bbb Z_p[G_n], \quad R_n^\chi := \mathbb{Z}_p(\chi)[P_n] , \quad T_n := T_p\left(M_S^{(n)}\right), \quad T_{n}^\chi := T_p\left(M_S^{(n)}\right)^\chi. \] We let $P_\infty:=\varprojlim_n P_n$, observe that $G_\infty=P_\infty\times\Delta$, and set \[ R_\infty:=\Bbb Z_p[[G_\infty]], \quad R_\infty^\chi := \mathbb{Z}_p(\chi)[[P_\infty]] , \quad T_\infty := T_p(M_S^{(\infty)}), \quad T_\infty^\chi := T_p(M_S^{(\infty)})^\chi. \] Further, we let $\Lambda_n^\chi:=R_n^\chi[[\Gamma]]=\Bbb Z_p(\chi)[[P_n\times \Gamma]]$ and $\Lambda^\chi:=R_\infty^\chi[[\Gamma]]=\Bbb Z_p(\chi)[[P_\infty\times\Gamma]]$. Since \[ \Lambda_n\simeq\bigoplus_{\chi\in\widehat{\Delta}({\mathbb{Q}_p})}\Lambda_n^\chi, \quad \Lambda\simeq \bigoplus_{\chi\in\widehat{\Delta}({\mathbb{Q}_p})}\Lambda^\chi, \] via the usual character--evaluation maps, the $\Lambda_n$--modules $\Lambda_n^\chi$ and the $\Lambda$--modules $\Lambda^\chi$ are projective and cyclic, for all characters $\chi$ as above.\\ Now, we fix a character $\chi$ as above and, for a given $n\geq 0$, we fix an $R_n^\chi$-basis of $T_n^\chi$: \[ x_1^{(n)}, \ldots, x_{m_\chi}^{(n)}.
\] We let $A_{\gamma}^{(n),\chi} \in \mathrm{GL}_{m_\chi}(R_n^\chi)$ be the matrix associated to the action of $\gamma$ on $T_n^\chi$ with respect to the fixed basis. Let $\Phi_\gamma^{(n), \chi}$ be the $R_n^\chi[[\Gamma]]$-linear endomorphism of $R_n^\chi[[\Gamma]]^{m_\chi}$ of matrix \[ 1 - \gamma^{-1} A_\gamma^{(n), \chi} \in M_{m_\chi}(R_n^\chi[[\Gamma]]) \] with respect to the canonical $R_n^\chi[[\Gamma]]$-basis $e_1^{(n)}, \ldots, e_{m_\chi}^{(n)}$ of $R_n^\chi[[\Gamma]]^{m_\chi}$. By the proof of \cite[Prop.~4.1]{GP12}, in particular (6) of loc.cit., combined with Corollary \ref{non-zero divisors}(2) below, we have an exact sequence of $R_n^\chi[[\Gamma]]$--modules \begin{equation}\label{Lambda-ell-chi-ses} 0\longrightarrow R_n^\chi[[\Gamma]]^{m_\chi} \stackrel{\Phi_\gamma^{(n), \chi}}\longrightarrow R_n^\chi[[\Gamma]]^{m_\chi}\stackrel{\pi_n^\chi}\longrightarrow T_n^\chi \longrightarrow 0, \end{equation} where $\pi_n^\chi$ is defined by $\pi_n^\chi(e_i^{(n)}) = x_i^{(n)}$. \\ Next, we show that, given an $R_n^\chi$--basis $x_1^{(n)}, \ldots, x_{m_\chi}^{(n)}$ for $T_n^\chi$, we can choose an $R_{n+1}^\chi$-basis $ x_1^{(n+1)}, \ldots, x_{m_\chi}^{(n+1)} $ of $T_{n+1}^\chi$, such that we have a commutative diagram \begin{equation}\label{comm GP diagram} \xymatrix{ 0\ar[r]&R_n^\chi[[\Gamma]]^{m_\chi} \ar[r]^{\Phi_\gamma^{(n), \chi}} & R_n^\chi[[\Gamma]]^{m_\chi} \ar[r]^{\pi_n^\chi} & T_n^\chi \ar[r] & 0 \\ 0\ar[r] &R_{n+1}^\chi[[\Gamma]]^{m_\chi} \ar[r]^{\Phi_\gamma^{(n+1), \chi}} \ar@{>>}[u]^{\varphi_{n+1/n}} & R_{n+1}^\chi[[\Gamma]]^{m_\chi} \ar[r]^{\pi_{n+1}^\chi} \ar@{>>}[u]^{\varphi_{n+1/n}} & T_{n+1}^\chi \ar[r] \ar@{>>}[u]^{N_{n+1/n}} & 0. } \end{equation} Here the vertical maps on the left and in middle are defined by $e_i^{(n+1)} \mapsto e_i^{(n)}$ and the canonical (componentwise) projections $R_{n+1}^\chi\to R_n^\chi$. To that end, we start with an arbitrary $R_{n+1}^\chi$-basis \[ y_1^{(n+1)}, \ldots, y_{m_\chi}^{(n+1)} \] of $T_{n+1}^\chi$ and show how to modify it so that diagram (\ref{comm GP diagram}) commutes. Since the module $T_{n+1}$ is $G_{n+1}$--cohomologically trivial (see \cite[Th.~3.10]{GP12}), by (\ref{Tp invariants}) above we have \[ N_{n+1/n}\left(T_{n+1}^\chi \right) = \left(T_{n+1}^\chi \right)^{\mathrm{G}(L_{n+1}/L_n)} = T_n^\chi \] and therefore \[ \left\{ N_{n+1/n}\left( y_1^{(n+1)} \right), \ldots, N_{n+1/n}\left( y_{m_\chi}^{(n+1)} \right) \right\} \] is an $R_n^\chi$-basis of $T_ n^\chi$. Let $U_n \in \mathrm{GL}_{m_\chi}(R_n^\chi)$ denote the matrix such that \[ \left( \begin{array}{c} x_1^{(n)} \\ \vdots \\ x_{m_\chi}^{(n)} \end{array} \right) = U_n \left( \begin{array}{c} N_{n+1/n}\left( y_1^{(n+1)} \right) \\ \vdots \\ N_{n+1/n}\left( y_{m_\chi}^{(n+1)} \right) \end{array} \right). \] Let $U_{n+1} \in M_{m_\chi}(R_{n+1}^\chi)$ be such that $ \varphi_{n+1/n}(U_{n+1}) = U_n$. Actually, by the next lemma, $U_{n+1}$ is an invertible matrix. \begin{lemma} Let $\varphi \colon S \longrightarrow R$ be a morphism of commutative local rings, i.e. $\varphi(\mathfrak{m}_S) \subseteq \mathfrak{m}_R$, where $\mathfrak{m}_S$ and $\mathfrak{m}_R$ are the corresponding maximal ideals. Let $U \in M_m(R)$, $V \in M_m(S)$ be matrices such that $\varphi(V) = U$. Then: \[ U \in \mathrm{GL}_m(R) \iff V \in \mathrm{GL}_m(S). \] \end{lemma} \begin{proof} If $VW = 1$ with $W \in M_m(S)$, then $1 = \varphi(VW) = \varphi(V) \cdot \varphi(W) = U \cdot \varphi(W)$, i.e., $U^{-1} = \varphi(W)$. Conversely suppose that $V \not\in \mathrm{GL}_m(S)$. 
Then $\det_S(V) \in \mathfrak{m}_S$, and hence \[ \det\nolimits_R(U) = \det\nolimits_R(\varphi(V)) = \varphi( \det\nolimits_S(V) ) \in \mathfrak{m}_R, \] contradicting $U \in \mathrm{GL}_m(R)$. \end{proof} Now, since $U_{n+1}$ is invertible, the set $\{x_1^{(n+1)}, \dots, x_{m_\chi}^{(n+1)}\}$ defined by \[ \left( \begin{array}{c} x_1^{(n+1)} \\ \vdots \\ x_{m_\chi}^{(n+1)} \end{array} \right) = U_{n+1} \left( \begin{array}{c} y_1^{(n+1)} \\ \vdots \\ y_{m_\chi}^{(n+1)} \end{array} \right) \] is an $R_{n+1}^\chi$--basis of $T_{n+1}^\chi$. Then the right-hand square of (\ref{comm GP diagram}) commutes because \begin{equation}\label{norm comp basis} \left( \begin{array}{c} N_{n+1/n}x_1^{(n+1)} \\ \vdots \\ N_{n+1/n}x_{m_\chi}^{(n+1)} \end{array} \right) = U_{n+1} \left( \begin{array}{c} N_{n+1/n}y_1^{(n+1)} \\ \vdots \\ N_{n+1/n}y_{m_\chi}^{(n+1)} \end{array} \right) = U_{n} \left( \begin{array}{c} N_{n+1/n}y_1^{(n+1)} \\ \vdots \\ N_{n+1/n}y_{m_\chi}^{(n+1)} \end{array} \right) = \left( \begin{array}{c} x_1^{(n)} \\ \vdots \\ x_{m_\chi}^{(n)} \end{array} \right). \end{equation} Let $\mu_\gamma$ denote multiplication by $\gamma$ in $T_n^\chi$. By the definition of $A_{\gamma}^{(n), \chi}$, one has \begin{equation}\label{matrix coeff} \mu_\gamma\left( x_i^{(n)} \right) = \sum_{j=1}^{m_\chi}A_{\gamma,ij}^{(n), \chi} x_j^{(n)}, \quad\text{for all } n\ge 0\text{ and } 1\leq i\leq m_\chi. \end{equation} To prove commutativity of the left-hand square of (\ref{comm GP diagram}) one has to show \begin{equation}\label{coeffientwise} \varphi_{n+1/n}\left( A_{\gamma,ij}^{(n+1), \chi} \right) = A_{\gamma,ij}^{(n), \chi}. \end{equation} Since $\mu_\gamma$ is an $R_n^\chi$--linear map, it follows from (\ref{norm comp basis}) and (\ref{matrix coeff}) that \begin{eqnarray*} \mu_\gamma\left( x_i^{(n)} \right) &=& \mu_\gamma\left( N_{n+1/n}x_i^{(n+1)} \right) \\ &=& N_{n+1/n} \left( \sum_{j=1}^{m_\chi}A_{\gamma,ij}^{(n+1), \chi} x_j^{(n+1)} \right) \\ &=& \sum_{j=1}^{m_\chi} \varphi_{n+1/n} \left( A_{\gamma,ij}^{(n+1), \chi}\right)N_{n+1/n} \left( x_j^{(n+1)} \right) \\ &=& \sum_{j=1}^{m_\chi} \varphi_{n+1/n} \left( A_{\gamma,ij}^{(n+1), \chi}\right)x_j^{(n)}, \end{eqnarray*} and this, in turn, immediately implies (\ref{coeffientwise}).\\ Now, we start with an $R_0^\chi$--basis $x_1^{(0)}, \dots, x_{m_\chi}^{(0)}$ for $T_0^\chi$ and use the procedure above inductively to construct $R_n^\chi$--bases $x_1^{(n)}, \dots, x_{m_\chi}^{(n)}$ for $T_n^\chi$ so that \eqref{comm GP diagram} commutes, for all $n\geq 0$. Therefore, we can take a projective limit as $n\to\infty$ in \eqref{comm GP diagram}. The Mittag-Leffler property (see \cite[Prop.~9.1]{Hartshorne}) implies that we obtain an exact sequence of $\Lambda^\chi$--modules \begin{equation}\label{Lambda-chi-ses} \xymatrix{ 0 \ar[r] & (\Lambda^\chi)^{m_\chi} \ar[rr]^{\Phi_\gamma^{(\infty), \chi}} && (\Lambda^\chi)^{m_\chi} \ar[rr]^{\qquad \pi_\infty^\chi} && T_\infty^\chi\ar[r] & 0, } \end{equation} where $\Phi_\gamma^{(\infty), \chi}:=\varprojlim_n \Phi_\gamma^{(n), \chi}$ and $\pi_\infty^\chi:=\varprojlim_n\pi_n^\chi$. By \eqref{coeffientwise}, we may define the following matrix \[ A_\gamma^{(\infty), \chi} := \{ A_\gamma^{(n), \chi} \}_{n \ge 0} \in \varprojlim_n \mathrm{GL}_{m_\chi}(R_n^\chi)=\mathrm{GL}_{m_\chi}(R_\infty^\chi).
\] Consequently, the map $\Phi_\gamma^{(\infty), \chi}$ has matrix $(1-\gamma^{-1}A_\gamma^{(\infty), \chi})$ in the standard basis of $(\Lambda^\chi)^{m_\chi}$, for all characters $\chi$ as above.\\ If we take the direct sum of \eqref{Lambda-chi-ses} over all $\chi$ we obtain the exact sequence of $\Lambda$--modules \begin{equation}\label{Lambda-ses} \xymatrix{ 0 \ar[r] & \bigoplus_{\chi}(\Lambda^\chi)^{m_\chi} \ar[rr]^{\Phi_\gamma^{(\infty)}} && \bigoplus_\chi(\Lambda^\chi)^{m_\chi} \ar[rr]^{\pi_\infty} && T_\infty \ar[r] & 0, } \end{equation} where $\Phi_\gamma^{(\infty)}:=(\Phi_\gamma^{(\infty), \chi})_\chi$ and $\pi_\infty:=(\pi_\infty^\chi)_\chi$. The exact sequence above shows that the $\Lambda$--module $T_\infty = T_p(M_S^{(\infty)})$ is finitely generated, of projective dimension at most $1$. \\ Moreover, for all $\chi$ we have the following equalities \begin{eqnarray} \mathrm{Fitt}_{\Lambda^\chi}\left( T_\infty^\chi \right) &\stackrel{(*)}=& \det\nolimits_{\Lambda^\chi} \left( 1 - \gamma^{-1}A_\gamma^{(\infty), \chi} \right) \cdot \Lambda^\chi \nonumber \\ &\stackrel{(**)}=& \varprojlim_n \left( \det\nolimits_{\Lambda_n^\chi} \left( 1 - \gamma^{-1}A_\gamma^{(n), \chi} \right) \cdot \Lambda_n^\chi\right) \label{Fitt eqs I} \\ &\stackrel{(***)}=& \varprojlim_n\, \mathrm{Fitt}_{\Lambda_n^\chi}\left(T_n^\chi\right). \nonumber \end{eqnarray} Above, equality (*) follows from \eqref{Lambda-chi-ses}, equality (**) follows from Lemma \ref{lim of nzd} and Corollary \ref{non-zero divisors}(1) below, and (***) follows from \eqref{Lambda-ell-chi-ses}. Now, we take the direct sum over all $\chi$ of equality \eqref{Fitt eqs I} to obtain \begin{eqnarray*}\label{eqn:one} \mathrm{Fitt}_\Lambda\left( T_\infty \right) &=& \varprojlim_n\, \mathrm{Fitt}_{\Lambda_n} \left( T_n\right ) \\ &\stackrel{(*)}=& \varprojlim_n \left(\Theta_{S, \Sigma}^{(n)}(\gamma^{-1}) \cdot \Lambda_n\right) \\ &\stackrel{(**)}=& \Theta_{S, \Sigma}^{(\infty)}(\gamma^{-1}) \cdot \Lambda, \end{eqnarray*} where $(*)$ is one of the main results of Greither and Popescu (see Theorem \ref{GP Th.4.3}(2) above) and $(**)$ is Corollary \ref{non-zero divisors}(4) below. Now, the fact that $T_\infty$ is torsion and has projective dimension exactly equal to $1$ over $\Lambda$ follows from equality \eqref{Fitt eqs I} above, Corollary \ref{non-zero divisors}(5), and Lemma \ref{torsion lemma}.\\ This concludes the proof of Theorem \ref{limit theorem 1}, save for the technical results which imply the injectivity of the maps $\Phi_\gamma^{(n), \chi}$ and therefore the exactness of \eqref{Lambda-ell-chi-ses}, as well as both equalities (**) above. We state and prove these technical results below.\\ \begin{lemma}\label{lim of nzd} Let $(R_m, \pi_{m,n})$ be a projective system of commutative rings and set $R_\infty := \varprojlim_m R_m$. Let $\alpha_\infty := \{ \alpha_m \}_m \in R_\infty $ be a coherent sequence of non-zero divisors. Then \[ \varprojlim_m\, (\alpha_m R_m) = \alpha_\infty R_\infty. \] \end{lemma} \begin{proof} Let $\{ \alpha_m r_m \}_m \in \varprojlim_m \alpha_m R_m$. Then, for $m > n$, \[ \alpha_n r_n = \pi_{m,n}(\alpha_{m} r_{m}) = \alpha_n \pi_{m,n}(r_{m}). \] Since $\alpha_n$ is a non-zero divisor by assumption, this implies $r_n = \pi_{m,n}(r_{m})$, and hence $\{ \alpha_m r_m \}_m = \alpha_\infty r_\infty \in \alpha_\infty R_\infty$ with $r_\infty := \{ r_m \}_m $. The opposite containment is obvious (and true without the assumption that the $a_m$ are non-zero divisors). 
\end{proof} \begin{lemma}\label{nzd polynomials} Let $R$ be $R_n^\chi$ or $R_n$. Let $f \in R[\gamma] \subseteq R[[\Gamma]]$ be a polynomial in $\gamma$ such that the leading coefficient is a unit in $R$. Then $f$ is a non-zero divisor in $R[[\Gamma]]$. \end{lemma} \begin{proof} We write \[ f = \lambda_d \gamma^d + \lambda_{d-1} \gamma^{d-1} + \ldots + \lambda_{1} \gamma + \lambda_0 \] with $\lambda_j \in R$ and $\lambda_d \in R^\times$. We argue by contradiction and suppose that $f$ is a zero divisor in $R[[\Gamma]]$. Then there exists $g \in R[[\Gamma]]$ such that $fg = 0$ and $g \ne 0$. Let $f = \{f_m\}_{m \in \mathbb{N}}$ and $g = \{g_m\}_{m \in \mathbb{N}}$ with $f_m, g_m \in R[\Gamma / \Gamma^m]$. Then there exists $N \in \mathbb{N}$ such that for all $m \in \mathbb{N}$ \[ f_{Nm} g_{Nm} = 0, \quad f_{Nm} \ne 0 \ne g_{Nm} \] in $R[\Gamma / \Gamma^{Nm}]$. In particular, for all $k \ge 0$, we have \begin{equation}\label{zero divisor eq 1} f_{Np^k} g_{Np^k} = 0, \quad f_{Np^k} \ne 0 \ne g_{Np^k}. \end{equation} Write $N = Mp^a$ with $p \nmid M$. Since $\Gamma\simeq\widehat {\Bbb Z}$ (the profinite completion of $\Bbb Z$), we have the following group isomorphisms \[ \Gamma/\Gamma^{Np^k} = \Gamma/\Gamma^{Mp^{a+k}} \simeq \Gamma/\Gamma^M \times \Gamma/\Gamma^{p^{a+k}} \simeq \mathbb{Z}/M\mathbb{Z} \times \mathbb{Z}/p^{a+k}\mathbb{Z}. \] Further, we have a surjective topological group morphism $$\Gamma\twoheadrightarrow \Gamma/\Gamma^M \times \varprojlim_k\Gamma/\Gamma^{p^{a+k}}\simeq \Bbb Z/M\Bbb Z\times\Bbb Z_p, \quad \gamma\to (\gamma_M, \gamma_p).$$ It follows that $\varprojlim_k R[\Gamma/\Gamma^{Np^k} ] \simeq R[\Gamma/\Gamma^M][[t]]$, where $\gamma_p$ maps to $1+t$. Thus, we obtain a $\Bbb Z_p$--algebra homomorphism $\varphi$ defined by the following composition of maps \[ \varphi:\,R[[\Gamma]]\overset{\pi_{M,p}}\longrightarrow\varprojlim_k R[\Gamma/\Gamma^{Np^k} ] \simeq R[\Gamma/\Gamma^M][[t]] \hookrightarrow \bigoplus_\psi \mathbb{Z}_p[\psi][[t]] \] where $\psi$ runs through the $\overline{\Bbb Q_p}$-valued characters of $(G_n \times \Gamma/\Gamma^M)$ modulo the action of $\mathrm{G}(\overline{\Bbb Q_p}/{\mathbb{Q}_p})$ (and such that $\psi|_\Delta = \chi$ if $R = R^\chi_{n}$). Clearly, $\varphi$ maps $f$ to $(\psi(\lambda_d\gamma_M^d) (1+t)^d + \ldots)_\psi$. As $\lambda_d \in R^\times$ by assumption, we have $\psi(\lambda_d\gamma_M^d)\in\Bbb Z_p[\psi]^\times$, so the $\psi$-component of $\varphi(f)$ is non-zero, for all $\psi$. Since $\Bbb Z_p[\psi][[t]]$ is an integral domain, for all $\psi$, this implies that $\pi_{M,p}(f)$ is not a zero--divisor in $\varprojlim_k R[\Gamma/\Gamma^{Np^k} ]$. However, this contradicts equalities \eqref{zero divisor eq 1}, which concludes the proof of the Lemma. \end{proof} \begin{coro}\label{non-zero divisors} The following hold for all $n \ge 0$ and all $\chi\in\widehat{\Delta}$. \begin{enumerate} \item The element $\det\nolimits_{R_n^\chi[[\Gamma]]}\left( 1 - \gamma^{-1}A_\gamma^{(n), \chi} \right)$ is a non-zero divisor in $R_n^\chi[[\Gamma]]$. \item The map $\Phi_\gamma^{(n), \chi} \colon R_n^\chi[[\Gamma]]^{m_\chi} \longrightarrow R_n^\chi[[\Gamma]]^{m_\chi}$ is injective. \item The element $ \Theta_{S, \Sigma}^{(n)}(\gamma^{-1})$ is a non-zero divisor in $\Lambda_n = R_n[[\Gamma]] $. \item We have an equality $\Theta_{S, \Sigma}^{(\infty)}(\gamma^{-1})\cdot\Lambda=\varprojlim_n\left( \Theta_{S, \Sigma}^{(n)}(\gamma^{-1})\cdot\Lambda_n\right).$ \item The element $\Theta_{S, \Sigma}^{(\infty)}(\gamma^{-1})$ is a non--zero divisor in $\Lambda$. 
\end{enumerate} \end{coro} \begin{proof} (1) Observe that $\gamma^{m_\chi} \det_{R_n^\chi[[\Gamma]]}\left(1 - \gamma^{-1} A_\gamma^{(n), \chi} \right)$ is a polynomial in $R_n^\chi[\gamma]$ of degree $m_\chi$ and leading coefficient $1$. Hence part (1) follows immediately from Lemma \ref{nzd polynomials} above. (2) is a consequence of (1) and the following fact: let $R$ be a commutative ring, $A \in M_n(R)$ and suppose that $\det_R(A)$ is not a zero divisor. Then $A \colon R^n \longrightarrow R^n$ (defined with respect to the standard basis) is injective. Indeed, let $A^*$ be the adjoint matrix. Then $$AA^* = A^*A = {\rm det}_R(A)\cdot {\rm id},$$ and therefore $A^\ast\circ A={\rm det}_R(A)\cdot {\rm id}$ is injective (as ${\rm det}_R(A)$ is a non-zero divisor acting componentwise) and, as a consequence, $A$ is injective. (3) We apply results of \cite{GP12}, in particular Propositions 4.8 and 4.10. We can express the image $\Theta_{S, \Sigma}^{(n), \chi}(u)$ of $\Theta_{S, \Sigma}^{(n)}(u)$ in $R_n^\chi[u]$ as a product of two polynomials $P^\chi(u), Q^\chi(u) \in R_n^\chi[u]$, such that $Q^\chi(\gamma^{-1}) \in R_n^\chi[[\Gamma]]^\times$. It thus suffices to show that $P^\chi(\gamma^{-1})$ is a non-zero divisor. By \cite[Prop.~4.8 a)]{GP12} we have \[ P^\chi(u) = \det\nolimits_{R_n^\chi}\left(1-\gamma u \mid T_p(M_{\bar{S}, \bar{\Sigma}}^{(n)})^\chi\right), \] so the element $\gamma^{m_\chi} P^\chi(\gamma^{-1})$ is a polynomial with leading coefficient $1$. Lemma \ref{nzd polynomials}, applied to $\gamma^{m_\chi} P^\chi(\gamma^{-1})$ and combined with the fact that $\gamma$ and $Q^\chi(\gamma^{-1})$ are units, implies that $\Theta_{S, \Sigma}^{(n), \chi}(\gamma^{-1})$ is a non-zero divisor in $\Lambda_n^\chi$, for all $\chi$. Therefore, $\Theta_{S, \Sigma}^{(n)}(\gamma^{-1})=(\Theta_{S, \Sigma}^{(n), \chi}(\gamma^{-1}))_\chi$ is a non--zero divisor in $\Lambda_n=\oplus_\chi\Lambda_n^\chi$. (4) Apply part (3) combined with Lemmas \ref{Theta functoriality} and \ref{lim of nzd}. (5) This follows immediately from (3), as $\Theta_{S, \Sigma}^{(\infty)}(\gamma^{-1})=\varprojlim_n\Theta_{S, \Sigma}^{(n)}(\gamma^{-1})$ in $\Lambda=\varprojlim_{n}\Lambda_n$. \end{proof} \subsection{Co-descent to $L_\infty/k$} In what follows, for every $n\geq 0$, we denote by $D^0(L_n)$ and $D_S^0(L_n)$ the $\Bbb Z [G_n]$--modules of divisors of degree $0$ in $L_n$ and divisors of degree $0$ supported at primes above $S$ in $L_n$, respectively. By $D_S(L_n)$ we denote the $S$--supported divisors of $L_n$ of arbitrary degree. Note that the degree is computed relative to $\Bbb F_q$. Also, $U_S^{(n)}$ denotes the $\Bbb Z[G_n]$--module of $S$--units in $L_n$ (i.e. elements $f\in L_n^\times$ whose divisor ${\rm div}(f)$ is in $D_S^0(L_n)$). Finally, $X_S^{(n)}$ denotes the $\Bbb Z[G_n]$--module of divisors of $L_{n}$ supported at primes above $S$ and of {\it formal degree} $0$, i.e., formal sums $\sum_{v\in S(L_n)}n_v\cdot v$, with $n_v\in \Bbb Z$ and $\sum_v n_v=0$.\\ By slightly generalizing the results in \cite{GPff}, our Proposition \ref{smallS-proposition} and Remark \ref{RW-Tate-remark} in the Appendix applied for $L=L_n$, for every $n\geq 0$, provide us with canonical exact sequences of $\Bbb Z_p[G_n]$--modules \begin{equation}\label{Tate seq} 0 \longrightarrow U_S^{(n)} \otimes_\mathbb{Z} \mathbb{Z}_p \longrightarrow T_p(M_S^{(n)}) \stackrel{1- \gamma}\longrightarrow T_p(M_S^{(n)}) \longrightarrow \bigtriangledown_S^{(n)} \longrightarrow 0.
\end{equation} Here $\bigtriangledown_S^{(n)}:=T_p(M_S^{(n)})_\Gamma$ sits in a short exact sequence of $\Bbb Z_p[G_n]$--modules \begin{equation}\label{RW-ses-ell} 0 \longrightarrow \mathrm{Pic}^0_S(L_n) \otimes_\mathbb{Z} \mathbb{Z}_p \longrightarrow \bigtriangledown_S^{(n)} \longrightarrow\widetilde X_S^{(n)} \longrightarrow 0, \end{equation} and $\widetilde X_S^{(n)}$ (defined precisely in the appendix) sits itself in a short exact sequence \begin{equation}\label{tilde-x-ses-ell}0\to \Bbb Z_p/d_S^{(n)}\mathbb{Z}_p \to \widetilde X_S^{(n)}\to X_S^{(n)}\otimes\Bbb Z_p\to 0, \end{equation} where $d_S^{(n)}\Bbb Z:={\rm deg}(D_S(L_n))$ and $G_n$ acts trivially on $\Bbb Z_p/d_S^{(n)}\mathbb{Z}_p$. In particular, note that if $S(L_n)$ (the set of places of $L_n$ sitting above places in $S$) contains a prime of degree coprime to $p$, then we have $\widetilde X_S^{(n)}= X_S^{(n)}\otimes\Bbb Z_p$. \\ Exact sequence \eqref{Tate seq}, combined with Theorem \ref{GP Th.4.3} above, gives the following. \begin{coro}\label{Fitt-Nabla-ell} For all finite non-empty sets $\Sigma$ with $S \cap \Sigma = \emptyset$ and all $n \in \mathbb{N}$ we have \[ {\rm Fitt}_{\Bbb Z_p[G_n]}\left(\bigtriangledown_S^{(n)}\right) = \Theta_{S, \Sigma}^{(n)}(1) \cdot \mathbb{Z}_p[G_n]. \] \end{coro} \begin{proof} Exact sequence \eqref{Tate seq} gives an isomorphism of $\Bbb Z_p[G_n]$--modules $$\bigtriangledown_S^{(n)}\simeq T_p(M_S^{(n)})\otimes_{\Bbb Z_p[G_n][[\Gamma]]}\Bbb Z_p[G_n],$$ where $\mathbb{Z}_p[G_n]$ is viewed as a $\Bbb Z_p[G_n][[\Gamma]]$--algebra via the unique $\Bbb Z_p[G_n]$--algebra morphism $\pi:\Bbb Z_p[G_n][[\Gamma]]\twoheadrightarrow\Bbb Z_p[G_n]$ which takes $\gamma\to 1$. Since Fitting ideals commute with extension of scalars, this gives an equality of $\Bbb Z_p[G_n]$--ideals $${\rm Fitt}_{\Bbb Z_p[G_n]}\left(\bigtriangledown_S^{(n)}\right)=\pi\left({\rm Fitt}_{\Bbb Z_p[G_n][[\Gamma]]}(T_p(M_S^{(n)}))\right).$$ Now, the Corollary follows from Theorem \ref{GP Th.4.3}. \end{proof} In what follows, we set \[ U_S^{(\infty)} := \varprojlim_n (U_S^{(n)} \otimes_\mathbb{Z} \mathbb{Z}_p), \quad \bigtriangledown_S^{(\infty)} := \varprojlim_n \bigtriangledown_S^{(n)}, \] where both limits are taken with respect to norm maps. \begin{lemma}\label{infty Tate seq lemma} The sequence (\ref{Tate seq}) stays exact when we pass to the limit, i.e. \begin{equation}\label{infty Tate seq} 0 \longrightarrow U_S^{(\infty)} \longrightarrow T_p(M_S^{(\infty)}) \stackrel{1- \gamma}\longrightarrow T_p(M_S^{(\infty)}) \longrightarrow \bigtriangledown_S^{(\infty)} \longrightarrow 0 \end{equation} is an exact sequence of $\Lambda$-modules. \end{lemma} \begin{proof} We set $W^{(n)} := (1-\gamma) T_p(M_S^{(n)}) $. Then the functor $\varprojlim_n$ is exact on \begin{equation*} \xymatrix{ 0 \ar[r] & U_S^{(n+1)} \otimes_\mathbb{Z} \mathbb{Z}_p \ar[r] \ar[d]^{N_{n+1/n}} & T_p(M_S^{(n+1)}) \ar[r] \ar[d]^{N_{n+1/n}} & W^{(n+1)} \ar[r] \ar[d]^{N_{n+1/n}} & 0 \\ 0 \ar[r] & U_S^{(n)} \otimes_\mathbb{Z} \mathbb{Z}_p \ar[r] & T_p(M_S^{(n)}) \ar[r] & W^{(n)} \ar[r] & 0 \\ } \end{equation*} because $ U_S^{(n)} \otimes_\mathbb{Z} \mathbb{Z}_p $ is a finitely generated, hence compact, $\mathbb{Z}_p$-module, so the projective system $\{U_S^{(n)} \otimes_\mathbb{Z} \mathbb{Z}_p\}_n$ has vanishing ${\varprojlim_n}^1$. Since $N_{n+1/n} \colon T_p(M_S^{(n+1)}) \longrightarrow T_p(M_S^{(n)})$ is surjective, the map $N_{n+1/n} \colon W^{(n+1)} \longrightarrow W^{(n)}$ is also surjective.
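For the reader's convenience, we recall the two standard vanishing criteria invoked in this argument (a sketch of well--known facts, stated only in the generality needed here): for a projective system $\{A^{(n)}\}_n$ of abelian groups indexed by $\mathbb{N}$, the derived functor ${\varprojlim_n}^1 A^{(n)}$ vanishes whenever the system satisfies the Mittag--Leffler condition (for instance, whenever the transition maps are surjective), and also whenever the $A^{(n)}$ are compact Hausdorff groups (for instance, finitely generated $\mathbb{Z}_p$--modules) and the transition maps are continuous (for instance, $\mathbb{Z}_p$--linear). In either case, for any projective system of short exact sequences
\[
0 \longrightarrow A^{(n)} \longrightarrow B^{(n)} \longrightarrow C^{(n)} \longrightarrow 0,
\]
the limit sequence $0 \to \varprojlim_n A^{(n)} \to \varprojlim_n B^{(n)} \to \varprojlim_n C^{(n)} \to 0$ is exact.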
Hence, as above, $\varprojlim_n$ is exact on \begin{equation*} \xymatrix{ 0 \ar[r] & W^{(n+1)} \ar[r] \ar[d]^{N_{n+1/n}} & T_p(M_S^{(n+1)}) \ar[r] \ar[d]^{N_{n+1/n}} & \bigtriangledown_S^{(n+1)} \ar[r] \ar[d]^{N_{n+1/n}} & 0 \\ 0 \ar[r] & W^{(n)} \ar[r] & T_p(M_S^{(n)}) \ar[r] & \bigtriangledown_S^{(n)} \ar[r] & 0 \\ } \end{equation*} We can now glue the two short exact sequences at the $\infty$-level. \end{proof} The following is an equivariant Iwasawa main conjecture--type result along the Drinfeld module (geometric) tower $L_\infty/k$, for the $\Bbb Z_p[[G_\infty]]$--module $\bigtriangledown_S^{(\infty)}$, see Theorem \ref{EMC-II-intro} of the introduction. \begin{theorem}[EMC-II]\label{limit theorem 2} For any finite, non-empty set $\Sigma$ of primes in $k$, disjoint from $S$, the following hold. \begin{enumerate} \item $\bigtriangledown_S^{(\infty)}$ is a finitely generated, torsion $\Bbb Z_p[[G_\infty]]$--module of projective dimension $1$. \item $\mathrm{Fitt}_{\mathbb{Z}_p[[G_\infty]]}\left(\bigtriangledown_S^{(\infty)}\right) = \Theta_{S, \Sigma}^{(\infty)}(1) \cdot \mathbb{Z}_p[[G_\infty]].$ \end{enumerate} \end{theorem} \begin{proof} Part (2) is an immediate consequence of exact sequence \eqref{infty Tate seq} and Theorem \ref{limit theorem 1} above. (Repeat the arguments in the proof of Corollary \ref{Fitt-Nabla-ell}.) Part (1) is Proposition \ref{nabla torsion prop} in \S3.5 below, which itself is a consequence of the fact that $\Theta_{S, \Sigma}^{(\infty)}(1)$ is a non--zero divisor in $\Bbb Z_p[[G_\infty]]$, as proved in Proposition \ref{not a zero divisor prop} below. \end{proof} \subsection{Results on ideal class groups} We conclude this section by deriving an Iwasawa main conjecture in the spirit of \cite[Th.~1.4]{Angles} for the classical $\Bbb Z_p[[G_\infty]]$--module {$$\frak X_{p}^{(\infty)}:=\varprojlim_n\left( \mathrm{Pic}^0(L_n) \otimes_\mathbb{Z} \mathbb{Z}_p\right),$$} where the projective limit is taken with respect to the usual norm maps. As in \eqref{Pl Delta decomposition} we let $\Delta$ denote the maximal subgroup of $G_0$ whose order is not divisible by $p$. Since $G(L_\infty/L_0)$ is a pro--$p$--group, we have $G_\infty \simeq \Delta\times G_\infty^{(p)}$, where $G^{(p)}_\infty$ is the maximal pro--$p$ subgroup of $G_\infty$. As a consequence, we can view the element $e_\Delta:=\frac{1}{|\Delta|}\sum_{\delta\in\Delta}\delta$ as an idempotent in $\Bbb Z_p[[G_\infty]]$. This allows us to define the functor $$M\mapsto M^\sharp:=(1-e_{\Delta})\cdot M$$ from the category of $\Bbb Z_p[[G_\infty]]$--modules to the category of modules over the quotient ring $$\Bbb Z_p[[G_\infty]]^\sharp = (1-e_{\Delta})\Bbb Z_p[[G_\infty]].$$ Note that since $M=e_{\Delta}M\oplus M^\sharp$, for every $M$ as above, the functor $M\mapsto M^\sharp$ is exact.\\ The study of $\frak X_{p}^{(\infty)}$ requires some additional hypotheses. We list them below. \begin{itemize} \item [(a)] $\mathfrak{f} = \mathfrak{e}$. \item [(b)] $\mathfrak{p}$ does not split in $H_\mathfrak{e} / k$. \item[(c)] $p\nmid [H_\mathfrak{e}:k]=h_k d_\infty$. \item[(d)] $p\nmid{\rm deg}(\mathfrak{p})$. \end{itemize} Note that (a), (b) and (c) are satisfied in the basic example of the Carlitz module, see \S\ref{Carlitz subsection}. \begin{theorem}[EMC-III]\label{limit theorem 3} Under hypotheses (a)--(d), the following hold for $S=\{\frak p\}$ and all nonempty sets $\Sigma$, disjoint from $S$. \begin{enumerate} \item $\frak X_{p}^{(\infty)}$ is a torsion $\Bbb Z_p[[G_\infty]]$--module of projective dimension $1$.
\item $\mathrm{Fitt}_{\mathbb{Z}_p[[G_\infty]]}\left(\frak X_{p}^{(\infty)}\right) = \Theta_{S, \Sigma}^{(\infty)}(1) \cdot \mathbb{Z}_p[[G_\infty]].$ \end{enumerate} \end{theorem} \begin{proof} Since $L_n = H_{\mathfrak{p}^{n+1}}$ is unramified outside $\mathfrak{p}$, the set $S = \{ \mathfrak{p} \}$ satisfies all the desired requirements. In general, each divisor of $\mathfrak{p}$ in $H_\mathfrak{e}$ is totally ramified in $H_{\mathfrak{p}^{n+1}}/ H_\mathfrak{e}$. Hence, hypotheses (a)--(b) imply that $D_S^0(L_n) = 0$ and $X_S^{(n)} = 0$. Further, hypotheses (c)--(d) imply that if $\frak p_n$ is the unique prime in $L_n$ sitting above $\frak p$, then $d_S^{(n)}={\rm deg}(\frak p_n)=[H_{\frak e}:k]\cdot{\rm deg}(\frak p)$ is not divisible by $p$. Consequently, \eqref{RW-ses-ell} and \eqref{tilde-x-ses-ell} give us \[ \bigtriangledown_S^{(n)} = \mathrm{Pic}^0(L_n)\otimes\Bbb Z_p, \qquad \bigtriangledown_S^{(\infty)}=\frak X_{p}^{(\infty)}, \] and the result follows from Theorem \ref{limit theorem 2} above. \end{proof} Under the milder hypotheses (a)--(b), both satisfied in the basic case of the Carlitz module, a similar result holds, away from the trivial character of $\Delta$. \begin{theorem}[${\rm EMC-III}^\sharp$]\label{limit theorem 3 sharp} Under hypotheses (a)--(b), the following hold for $S=\{\frak p\}$ and all nonempty sets $\Sigma$, disjoint from $S$. \begin{enumerate} \item $\frak X_{p}^{(\infty), \sharp}$ is a torsion $\Bbb Z_p[[G_\infty]]^\sharp$--module of projective dimension $1$. \item $\mathrm{Fitt}_{\mathbb{Z}_p[[G_\infty]]^\sharp}\left(\frak X_{p}^{(\infty), \sharp}\right) = \Theta_{S, \Sigma}^{(\infty)}(1) \cdot \mathbb{Z}_p[[G_\infty]]^\sharp.$ \end{enumerate} \end{theorem} \begin{proof} Once again, we have $X_S^{(n)}=0$. Since $(\Bbb Z_p/d_S^{(n)}\mathbb{Z}_p)^\sharp=0$ (as $\Delta$ acts trivially on $\Bbb Z_p/d_S^{(n)}\mathbb{Z}_p$), when applying the exact functor $^\sharp$ to exact sequences \eqref{RW-ses-ell} and \eqref{tilde-x-ses-ell}, we obtain \[ \bigtriangledown_S^{(n), \sharp} = (\mathrm{Pic}^0(L_n)\otimes\Bbb Z_p)^\sharp, \qquad \bigtriangledown_S^{(\infty), \sharp}=\frak X_{p}^{(\infty),\sharp}, \] and the result follows by projecting the equality in Theorem \ref{limit theorem 2} onto $\Bbb Z_p[[G_\infty]]^\sharp$. \end{proof} \begin{remark}\label{remark-Bandini-Coscelli} In this remark we use the assumptions of \cite{Bandini} and \cite{Coscelli}. More specifically, we assume $\mathfrak{f} = \mathfrak{e}$, $d_\infty = 1$ and $p \nmid [H_\mathfrak{e}:k]$. Note that, for example, these assumptions hold in the case of the Carlitz module which is studied in \cite{Angles}. We let $G_\mathfrak{p}(L_0/k)$ and $I_\mathfrak{p}(L_0/k)$ denote the decomposition, respectively inertia group associated to $\mathfrak{p}$ in $L_0/k$. We observe that $\Delta = \mathrm{G}(L_0/k)$ has order prime to $p$ and let $\chi \in \hat\Delta$ be a character, such that $\chi|_{G_\mathfrak{p}(L_0/k)}$ is non-trivial. Since $I_\mathfrak{p}(L_0/k) = \mathrm{G}(L_0/H_\mathfrak{e}) \subseteq G_\mathfrak{p}(L_0/k)$, the set of such characters $\chi$ includes the characters of type 2 as defined in \cite[Def.~3.1]{Bandini} or \cite[Def.~2.3.6]{Coscelli}. Since we only consider real ray--class fields, we do not see characters $\chi$ of type 1, as defined in loc.cit. 
Then, for all $n \in \mathbb{N} \cup \{\infty\}$ and $S = \{ \mathfrak{p} \}$, we have \begin{eqnarray*} \left( X_S^{(n)} \otimes_\mathbb{Z} \mathbb{Z}_p \right)^\chi &=& \left( D_S(L_n) \otimes_\mathbb{Z} \mathbb{Z}_p \right)^\chi \\ &\simeq& \mathbb{Z}_p[\mathrm{G}(L_n/k) / G_\mathfrak{p}(L_n/k)]^\chi \\ &\simeq& \mathbb{Z}_p[\Delta / G_\mathfrak{p}(L_0/k)]^\chi = 0. \end{eqnarray*} As a consequence, we have $\bigtriangledown_S^{(\infty),\chi}=\frak X_{p}^{(\infty), \chi}$ and our Theorem \ref{limit theorem 2}(2) implies that \[ \mathrm{Fitt}_{\mathbb{Z}_p(\chi)[[\Gamma_\infty]]}\left(\frak X_{p}^{(\infty), \chi}\right) = \Theta_{S, \Sigma}^{(\infty)}(1, \chi) \cdot \mathbb{Z}_p(\chi)[[\Gamma_\infty]], \] for all characters $\chi$ as above. Thus we recover the central results of \cite[Thm.~1.1]{Angles}, \cite{Bandini} and \cite[Thm.~2.4.8]{Coscelli} restricted to the real Iwasawa towers considered in loc.cit. \end{remark} \subsection{$\Theta_{S, \Sigma}^{(\infty)}(1)$ is a non-zero divisor} The goal of this section is a proof of part (1) of Theorem \ref{limit theorem 2} above. We will first establish a structure theorem for the Iwasawa algebra $\Bbb Z_p[[G_\infty]]$ whose proof will be based on the following result on pro-$p$ groups. \begin{theorem}[Theorem 3.1 of \cite{Kiehlmann}] \label{pro-p-theorem} Let $\mathcal G$ be a pro-$p$ group with countable (topological) basis, whose torsion subgroup $t(\mathcal G)$ has bounded exponent. Then $t(\mathcal G)$ is a closed subgroup of $\mathcal G$ and we have an isomorphism of topological groups $$\mathcal G\simeq t(\mathcal G)\times\Bbb Z_p^X,$$ where $X$ is a cardinal in the set $\Bbb N\cup\{\aleph_0\}$. \end{theorem} Here is the promised structure theorem for the Iwasawa algebra $\Bbb Z_p[[G_\infty]]$. \begin{prop}\label{G-infty-algebra-prop} The following hold. \begin{enumerate} \item There are closed subgroups $\widetilde{G_0}$ and $\widetilde{\Gamma_\infty}$ of $G_\infty$, such that $$G_\infty=\widetilde{G_0}\times\widetilde{\Gamma_\infty},$$ with $\widetilde{\Gamma_\infty}\simeq \Bbb Z_p^{\aleph_0}$ (topologically), $[\Gamma_\infty:\Gamma_\infty\cap\widetilde{\Gamma_\infty}]<\infty$, $[\widetilde{\Gamma_\infty}:\Gamma_\infty\cap\widetilde{\Gamma_\infty}]<\infty$, and $\widetilde{G_0}$ is isomorphic to a subgroup of $G_0$.\\ \item There is an injective morphism of topological $\Bbb Z_p$--algebras $$\Bbb Z_p[[G_\infty]]\simeq \Bbb Z_p[\widetilde{G_0}][[\widetilde{\Gamma_\infty}]]\hookrightarrow \bigoplus_{\rho\in\widehat{\widetilde G_0}(\Bbb Q_p)}\Bbb Z_p(\rho)[[\widetilde{\Gamma_\infty}]]\simeq \bigoplus_{\rho\in\widehat{\widetilde G_0}(\Bbb Q_p)}\Bbb Z_p(\rho)[[X_1, X_2, \dots]],$$ where the injective map in the middle is given by the usual $\rho$--evaluation maps. \end{enumerate} \end{prop} \begin{proof} Part (2) is a clear consequence of part (1) and Proposition \ref{big-Iwasawa-algebra-prop}. For the proof of (1), note that, with notations as in \S3.2 above, if we let $P_\infty:=\varprojlim_n P_n$, then we have \begin{equation}\label{iso-P-infty}G_\infty\simeq P_\infty\times \Delta, \qquad P_\infty/\Gamma_\infty\simeq P_0.\end{equation} Recall that $\Delta$ is the complement of the $p$--Sylow subgroup $P_n$ of $G_n$, for all $n\geq 0$. Since $\Gamma_\infty$ is torsion free, the second isomorphism above implies that $t(P_\infty)$ is isomorphic to a subgroup of $P_0$, therefore it is finite and, obviously, of bounded exponent.
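Indeed, to spell out the last assertion: since $\Gamma_\infty$ is torsion free, we have $t(P_\infty)\cap\Gamma_\infty=\{1\}$, so the canonical projection
\[
P_\infty \twoheadrightarrow P_\infty/\Gamma_\infty \simeq P_0
\]
restricts to an injection of $t(P_\infty)$ into the finite group $P_0$.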
Since $\Gamma_\infty$ has countable basis, the second isomorphism above implies that $P_\infty$ has countable basis as well. Consequently, Theorem \ref{pro-p-theorem} applied to $\mathcal G:=P_\infty$ gives a topological isomorphism \begin{equation}\label{P-infty-structure}P_\infty\simeq t(P_\infty)\times\widetilde{\Gamma_\infty},\end{equation} where $\widetilde{\Gamma_\infty}\simeq \Bbb Z_p^{\aleph_0}$. Consequently, we have $$G_\infty=\widetilde{G_0}\times\widetilde{\Gamma_\infty}, \qquad \text{for }\widetilde{G_0}:=t(G_\infty)\simeq t(P_\infty)\times\Delta.$$ Obviously, $\widetilde{G_0}$ is isomorphic to a subgroup of $G_0\simeq P_0\times\Delta$. The fact that $[\Gamma_\infty:\Gamma_\infty\cap\widetilde{\Gamma_\infty}]<\infty$ and $[\widetilde{\Gamma_\infty}:\Gamma_\infty\cap\widetilde{\Gamma_\infty}]<\infty$ follows immediately from \eqref{iso-P-infty} and \eqref{P-infty-structure}. \end{proof} \begin{remark}\label{phi-remark} From now on, we let \[ \varphi=(\varphi_\rho)_\rho:\Bbb Z_p[[G_\infty]]\simeq\Bbb Z_p[\widetilde{G_0}][[\widetilde{\Gamma_\infty}]] \hookrightarrow \bigoplus_{{\rho\in\widehat{\widetilde G_0}({\mathbb{Q}_p})}}\Bbb Z_p(\rho)[[\widetilde{\Gamma_\infty}]]=:\overline{\Bbb Z_p[[G_\infty]]} \] denote the character evaluation map described above. Proposition \ref{big-Iwasawa-algebra-prop}(2)--(3) implies that the direct summands of $\overline{\Bbb Z_p[[G_\infty]]}$ are integral, normal domains. (Notice that $\overline{\Bbb Z_p[[G_\infty]]}$ is the integral closure of $\Bbb Z_p[[G_\infty]]$ in its total ring of fractions, which justifies the notation.) \end{remark} \begin{prop}\label{not a zero divisor prop} $\Theta_{S, \Sigma}^{(\infty)}(1)$ is a non-zero divisor in $\overline{\mathbb{Z}_p[[G_\infty]]}$ and therefore in $\Bbb Z_p[[G_\infty]]$. \end{prop} \begin{proof} Proposition \ref{G-infty-algebra-prop}(2) implies that the statement to be proved is equivalent to $$\varphi_\rho(\Theta_{S, \Sigma}^{(\infty)}(1))\ne 0 \text{ in }\Bbb Z_p(\rho)[[\widetilde{\Gamma_\infty}]],$$ for all characters $\rho\in\widehat{\widetilde{G_0}}$. Of course, this is equivalent to proving that for every $\rho$ as above, there exists a character $\psi$ of $\widetilde{\Gamma_\infty}$, of open kernel (so, of finite order), such that $$\psi(\varphi_\rho(\Theta_{S, \Sigma}^{(\infty)}(1)))\ne 0 \text{ in }\Bbb Z_p(\rho\psi).$$ However, from the definition of $\Theta_{S, \Sigma}^{(\infty)}$, for all $\rho$ and $\psi$ as above, we have an equality $$\psi(\varphi_\rho(\Theta_{S, \Sigma}^{(\infty)}(1)))=L_{S,\Sigma}({(\rho\psi)^{-1}}, 0),$$ where $\rho\psi$ is viewed as a complex--valued character of the finite quotient $$\widetilde{G_0}\times(\widetilde{\Gamma_\infty}/\ker\psi)=G_\infty/\ker\psi$$ of $G_\infty$ (under a fixed field isomorphism $\Bbb C_p\simeq\Bbb C$) and $L_{S,\Sigma}(\rho\psi, s)$ is the complex--valued Artin $L$--function ($S$--imprimitive and $\Sigma$--completed) associated to $\rho\psi$. (See equalities \eqref{poly versus L fct} above.) Now, the following Lemma is a well--known description of the order of vanishing at $s=0$ of the Artin $L$--functions in question. For the number field case of this result, see \cite[Ch.~I, Prop.~3.4]{Ta84} and for the function field case, relevant in our context, see \cite[Sec.~2.2]{Po11}.
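The lemma below will only be used through the following immediate consequence: if $\chi$ is a non--trivial character of $G_\infty$ with open kernel, such that $\chi(G_v(L_\infty/k))\ne\{1\}$ for all $v\in S$, then ${\rm ord}_{s=0}\, L_{S, \Sigma}(\chi, s)=0$, i.e. $L_{S, \Sigma}(\chi, 0)\ne 0$.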
\begin{lemma}\label{order of vanishing} If $\chi$ is a non--trivial character of $G_\infty$ with open kernel, then $${\rm ord}_{s=0}\, L_{S, \Sigma}(\chi, s)={\rm card}\,\{v\in S\mid \chi(G_v(L_\infty/k))=1\},$$ where $G_v(L_\infty/k)$ denotes the decomposition group of $v$ inside $G_\infty=G(L_\infty/k)$. \end{lemma} Consequently, we claim that it suffices to find a finite subextension $L/K$ of $L_\infty/L_\infty^{\widetilde{\Gamma_\infty}}$ and a character $\chi$ of $G(L/K)$, such that the following conditions are simultaneously satisfied. \begin{enumerate} \item[(A1)] $\widetilde{L_0}:=L_\infty^{\widetilde{\Gamma_\infty}}\subseteq K\subseteq L\subseteq L_\infty$ and $L/\widetilde{L_0}$ finite. \item[(A2)] $\chi(G_v(L/K))\ne \{1\}$, for all $v\in S$. \end{enumerate} Indeed, if we construct an $L/K$ and $\chi$ as above, then for any given character $\rho$ of $\widetilde G_0\simeq G(\widetilde{L_0}/k)$, we take any character $\psi$ of $G(L/\widetilde{L_0})$, such that $\psi\mid_{G(L/K)}=\chi$. Now, $\rho\psi$ is a character of $G(L/k)$ which satisfies the property $$\rho\psi (G_v(L/K))=\psi(G_v(L/K))=\chi(G_v(L/K))\ne \{1\}, \qquad \text{ for all } v\in S.$$ Since $G_v(L/K)\subseteq G_v(L/k)$, for all $v\in S$, Lemma \ref{order of vanishing} gives us the desired nonvanishing $$\psi(\varphi_\rho(\Theta_{S, \Sigma}^{(\infty)}(1)))=L_{S,\Sigma}({(\rho\psi)^{-1}}, 0)\ne 0.$$ Now, since $[\widetilde{\Gamma_\infty}:\widetilde{\Gamma_\infty}\cap\Gamma_\infty]<\infty$, the existence of $L/K$ satisfying conditions (A1)-(A2) above is ensured if we can find two integers $m>n$ and a character $\chi$ of $G(L_m/L_n)$, such that \begin{enumerate} \item[(B1)] $n$ is large enough, so that $L_\infty^{\Gamma_\infty\cap\widetilde{\Gamma_\infty}}\subseteq L_n$. (Note that $\widetilde{L_0}\subseteq L_\infty^{\Gamma_\infty\cap\widetilde{\Gamma_\infty}}$.) \item[(B2)] $\chi(G_v(L_m/L_n))\ne \{1\}$, for all primes $v\in S$. \end{enumerate} We now proceed to construct $m$ and $n$ as above. First, we fix an $n\geq 0$, large enough so that (B1) is satisfied. Now, we apply Proposition \ref{decomposition groups prop} to get topological group isomorphisms \begin{equation}\label{decomposition groups infty}G_{\frak f}(L_\infty/L_n)=\prod_{v\vert\frak f}G_v(L_\infty/L_n)\simeq\prod_{v\vert\frak f}\Bbb Z_p,\end{equation} where the notations are as in loc.cit. Since the $p$--adic and the profinite topologies on $G_{\frak f}(L_\infty/L_n)$ coincide, there exists an $m>n$, such that \begin{equation}\label{topologies} G_{\frak f}(L_\infty/L_n)\cap G(L_\infty/L_m)\subseteq p\cdot G_{\frak f}(L_\infty/L_n).\end{equation} We let $G_{\frak f}(L_m/L_n)$ denote the subgroup of $G(L_m/L_n)$ generated by the decomposition groups $G_v(L_m/L_n)$, for all $v\vert\frak f$. From the definitions, we have a group morphism $$G_{\frak f}(L_m/L_n)\simeq \frac{G_{\frak f}(L_\infty/L_n)}{G_{\frak f}(L_\infty/L_n)\cap G(L_\infty/L_m)}\twoheadrightarrow \frac{G_{\frak f}(L_\infty/L_n)}{p\cdot G_{\frak f}(L_\infty/L_n)}=\prod_{v\vert\frak f}\frac{G_v(L_\infty/L_n)}{p\cdot G_v(L_\infty/L_n)}, $$ where the isomorphism to the left is induced by Galois restriction, the surjection is induced by the inclusion \eqref{topologies} and the equality is a consequence of \eqref{decomposition groups infty}. It is easily seen that for all $v\vert\frak f$ the above morphism maps $G_v(L_m/L_n)$ onto the quotient $G_v(L_\infty/L_n)/pG_v(L_\infty/L_n)$ which by Proposition \ref{decomposition groups prop} is isomorphic to $\Bbb Z/p\Bbb Z$.
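Note also that, under the direct product decomposition \eqref{decomposition groups infty} reduced modulo $p$, the image of each $G_v(L_m/L_n)$ under the above morphism lies in (and, as just noted, surjects onto) the factor indexed by $v$; therefore any character of $\prod_{v\vert\frak f}G_v(L_\infty/L_n)/pG_v(L_\infty/L_n)$ which is non--trivial on every factor pulls back to a character of $G_{\frak f}(L_m/L_n)$ which is non--trivial on every $G_v(L_m/L_n)$.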
Consequently, there is a character $\psi$ of $G_{\frak f}(L_m/L_n)$, such that $\psi(G_v(L_m/L_n))=\{\zeta\in\Bbb C_p\mid \zeta^p=1\}$ for all $v\vert\frak f$. Now, take any character $\chi$ of $G(L_m/L_n)$ which equals $\psi$ when restricted to $G_{\frak f}(L_m/L_n)$. This character obviously satisfies (B2) for all $v\vert\frak f$. Since it is non--trivial on $G(L_m/L_n)$ and $G_{\frak p}(L_m/L_n)=G(L_m/L_n)$ (recall that $L_m/L_n$ is totally ramified at the $\frak p$--adic primes), the character $\chi$ also satisfies (B2) for $v=\frak p$. This concludes the proof of Proposition \ref{not a zero divisor prop}. \end{proof} We conclude this section with a corollary to Proposition \ref{not a zero divisor prop}. \begin{prop}\label{nabla torsion prop} The $\Bbb Z_p[[G_\infty]]$--module $\bigtriangledown_S^{(\infty)}$ is finitely generated, torsion, and of projective dimension $1$. \end{prop} \begin{proof} We will use the notations in the proof of Theorem \ref{limit theorem 1}. In particular, note that $$\Lambda^\chi/(1-\gamma)\simeq\Bbb Z_p(\chi)[[P_\infty]], $$ for all $\overline{\Bbb Q_p}$--valued characters $\chi$ of $\Delta$. Consequently, the exact sequence \eqref{Lambda-ses} leads to the following commutative diagram of $\Lambda$--modules. \begin{equation*} \xymatrix{ & & & U_S^{(\infty)} \ar@{>->}[d] & \\ 0 \ar[r] &\bigoplus_\chi \left(\Lambda^\chi\right)^{m_\chi} \ar[r]^{\Phi_\gamma^{(\infty)}} \ar[d]^{\gamma-1} & \bigoplus_\chi\left(\Lambda^\chi\right)^{m_\chi} \ar[r]^{\pi_\infty} \ar[d]^{\gamma-1} & T_p(M_S^{(\infty)}) \ar[r] \ar[d]^{\gamma-1} & 0 \\ 0 \ar[r] & \bigoplus_\chi\left(\Lambda^\chi\right)^{m_\chi} \ar[r]^{\Phi_\gamma^{(\infty)}} \ar@{>>}[d] & \bigoplus_\chi\left(\Lambda^\chi\right)^{m_\chi} \ar[r]^{\pi_\infty} \ar@{>>}[d] & T_p(M_S^{(\infty)}) \ar[r] \ar@{>>}[d] & 0 \\ & \bigoplus_\chi\mathbb{Z}_p(\chi)[[P_\infty]]^{m_\chi} & \bigoplus_\chi \mathbb{Z}_p(\chi)[[P_\infty]]^{m_\chi} & \bigtriangledown_S^{(\infty)} & \\ } \end{equation*} where the right vertical exact sequence is given by Lemma \ref{infty Tate seq lemma}. The snake lemma applied to the diagram above gives the exact sequence of $\Bbb Z_p[[G_\infty]]$--modules \begin{equation*} \xymatrix{ \bigoplus_\chi\mathbb{Z}_p(\chi)[[P_\infty]]^{m_\chi} \ar[r]^{\overline{\Phi_\gamma^{(\infty)}}} & \bigoplus_\chi\mathbb{Z}_p(\chi)[[P_\infty]]^{m_\chi} \ar[r] & \bigtriangledown_S^{(\infty)} \ar[r] & 0, \\ } \end{equation*} where $\overline{\Phi_\gamma^{(\infty)}}:= {\Phi_\gamma^{(\infty)}} \mod (\gamma-1)$. It follows that \[ \mathrm{Fitt}_{\mathbb{Z}_p[[G_\infty]]} \left( \bigtriangledown_S^{(\infty)} \right) = \det\nolimits_{\mathbb{Z}_p[[G_\infty]]} \left( {\overline{\Phi_\gamma^{(\infty)}}} \right) \cdot \mathbb{Z}_p[[G_\infty]]. \] Combined with Theorem \ref{limit theorem 2}(2), the equality above shows that the elements $\det\nolimits_{\mathbb{Z}_p[[G_\infty]]} \left( {\overline{\Phi_\gamma^{(\infty)}}} \right)$ and $\Theta_{S, \Sigma}^{(\infty)}(1)$ generate the same ideal in $\Bbb Z_p[[G_\infty]]$. By Proposition \ref{not a zero divisor prop}, the element $\Theta_{S,\Sigma}^{(\infty)}(1)$ is a non-zero divisor in $\Bbb Z_p[[G_\infty]]$. Therefore, $\det\nolimits_{\mathbb{Z}_p[[G_\infty]]} \left( {\overline{\Phi_\gamma^{(\infty)}}} \right)$ differs from it by a unit and, in particular, is a non-zero divisor in $\Bbb Z_p[[G_\infty]]$ as well. By a standard argument (using the adjoint matrix $\chi$--componentwise, see proof of Cor.
\ref{non-zero divisors}(2)) we see that ${\overline{\Phi_\gamma^{(\infty)}}}$ is injective, hence the sequence of $\Bbb Z_p[[G_\infty]]$--modules \begin{equation*} \xymatrix{ 0 \ar[r] & \bigoplus_\chi\mathbb{Z}_p(\chi)[[P_\infty]]^{m_\chi} \ar[r]^{\overline{\Phi_\gamma^{(\infty)}}} & \bigoplus_\chi\mathbb{Z}_p(\chi)[[P_\infty]]^{m_\chi} \ar[r] & \bigtriangledown_S^{(\infty)} \ar[r] & 0 \\ } \end{equation*} is exact. Consequently, the $\Bbb Z_p[[G_\infty]]$--module $ \bigtriangledown_S^{(\infty)}$ is finitely generated, of projective dimension at most one. Further, the fact that $ \bigtriangledown_S^{(\infty)}$ is torsion and of projective dimension exactly $1$ as a $\Bbb Z_p[[G_\infty]]$--module follows immediately from Lemma \ref{torsion lemma}. \end{proof} \section{Appendix ($p$--adic Ritter--Weiss modules and Tate sequences for small $S$)} Let $L$ be a finite, separable extension of $\Bbb F_q(t)$. Denote by $Z$ a smooth, projective curve defined over $\Bbb F_q$, whose field of rational functions is isomorphic to $L$. We let $\overline Z:=Z\times_{\Bbb F_q}\overline{\Bbb F_q}$, $\Gamma:=G(\overline{\Bbb F_q}/\Bbb F_q)$ and let $\gamma$ be the $q$--power arithmetic Frobenius automorphism, viewed as a canonical topological generator of $\Gamma$. Note that $\overline Z$ may not be connected. Consequently, $\overline L:=L\otimes_{\Bbb F_q}\overline{\Bbb F_q}$ (the $\overline{\Bbb F_q}$--algebra of rational functions on $\overline Z$) could be a finite direct sum of isomorphic fields (the fields of rational functions of the connected components of $\overline Z$).\\ Next, we consider a finite, non--empty set $S$ of closed points on $Z$ and let $\overline S$ be the set of closed points on $\overline Z$ sitting above points in $S$. We let ${\rm Div}^0_{\overline S}(\overline L)$ (respectively ${\rm Div}_{\overline S}(\overline L)$) and ${\rm Div}^0_S(L)$ (respectively ${\rm Div}_S(L)$) denote the divisors of degree $0$ (respectively arbitrary degree) on $\overline Z$ and $Z$, supported at $\overline S$ and $S$, respectively. Note that the degree of a divisor on $Z$, denoted by ${\rm deg}$, is computed relative to the field of definition $\Bbb F_q$. Also, the degree of a divisor on $\overline Z$ is in fact a multidegree, computed on each connected component of $\overline Z$ separately. Further, the formal degree of an $S$--supported divisor on $Z$ (i.e. the sum of its coefficients) is denoted below by ${\rm fdeg}$, and $X_S(L)$ denotes the group of $S$--supported divisors on $Z$ of formal degree $0$. Also, $U_S(L)$ denotes the group of $S$--units inside $L^\times$ and \[ {\rm Pic}^0_S(L):=\frac{{\rm Pic}^0(L)}{\widehat{{\rm Div}^0_S(L)}} = \frac{{\rm Div^0}(L)}{{\rm Div^0_S}(L) + {\rm div}(L^\times)}, \] is the $S$--Picard group associated to $L$, obtained by taking the quotient of the usual Picard group ${\rm Pic}^0(L)$ by the subgroup ${\widehat{{\rm Div}^0_S(L)}}$ of classes of all $S$--supported divisors of degree $0$.\\ Finally, we let $M_S$ denote the Picard $1$--motive associated as in \cite{GP12} to the data $(\overline Z, \overline{\Bbb F_q}, \overline S, \emptyset)$. As usual, $T_p(M_S)^\Gamma$ and $T_p(M_S)_{\Gamma}$ denote the $\Gamma$--invariants, respectively $\Gamma$--coinvariants of the $p$--adic Tate module of $M_S$. In what follows, if $N$ is a $\Bbb Z$--module, we let $N_p:=N\otimes_{\Bbb Z}\Bbb Z_p$. \begin{definition}\label{p-large-definition} The set $S$ is called $p$--large if the following are satisfied. \begin{enumerate} \item ${\rm Pic}^0_S(L)_p=0$. \item $S$ contains at least one place of degree (relative to $\Bbb F_q$) coprime to $p$.
\end{enumerate} \end{definition} \begin{remark} It is easily seen that $S$ is $p$--large if and only if ${\rm Pic}_S(L)_p=0$, where $${\rm Pic}_S(L):=\frac{{\rm Div}(L)}{{\rm Div}_S(L) + {\rm div}(L^\times)}$$ is the quotient of the full Picard group ${\rm Pic}(L)$ of $L$ by its subgroup of $S$--supported divisor classes. This is perhaps a more natural definition, but we prefer to use the definition above because ${\rm Pic}^0(L)$ (as opposed to ${\rm Pic}(L)$) is much more naturally related to the $1$--motive $M_S$. \end{remark} The following result was obtained in \cite{GPff}. (See Proposition 1.1 in loc.cit.) \begin{prop}[Greither--Popescu, \cite{GPff}]\label{largeS-proposition} If $S$ is $p$--large, then the following hold. \begin{enumerate} \item There is a canonical isomorphism $T_p(M_S)^{\Gamma}\simeq U_S(L)_p$. \item There is a canonical isomorphism $T_p(M_S)_{\Gamma}\simeq X_S(L)_p$. \end{enumerate} \end{prop} \begin{remark} In fact, in \cite{GPff}, the authors describe the modules $T_p(M_{S, T})^\Gamma$ and $T_p(M_{S, T})_\Gamma$, where $M_{S, T}$ is the Picard $1$--motive associated to $(\overline Z, \overline{\Bbb F_q}, \overline S, \overline T)$, where $T$ is a finite, non--empty set of closed points on $Z$, disjoint from $S$. However, by \cite[Rem.~2.7]{GP12} and the proof of \cite[Lemma 3.2]{GPff}, we have for any such $T$ equalities \[ T_p(M_{S, T})=T_p(M_S), \qquad U_{S, T}(L)_p=U_S(L)_p, \] where $U_{S,T}(L)$ is the group of $S$--units in $L^\times$, congruent to $1$ modulo all primes in $T$. \end{remark} The goal of this Appendix is to remove the hypothesis ``$S$ is $p$--large'' in the Proposition above. More precisely, we sketch the proof of the following. \begin{prop}\label{smallS-proposition} With notations as above, the following hold for all sets $S$. \begin{enumerate} \item There is a canonical isomorphism $T_p(M_S)^{\Gamma}\simeq U_S(L)_p$. \item There are canonical exact sequences of $\Bbb Z_p$--modules: \begin{eqnarray*} && 0\to {\rm Pic}^0_S(L)_p\to T_p(M_S)_{\Gamma}\to \widetilde{X_S}(L)\to 0, \\ && 0\to \Bbb Z_p/d_S\mathbb{Z}_p \to \widetilde{X_S}(L)\to X_S(L)_p\to 0, \end{eqnarray*} where $d_S\Bbb Z={\rm deg}({\rm Div}_S(L))$ and $\widetilde X_S(L):= ({\rm Div}^0_{\overline S}(\overline L)_p)_\Gamma$. In particular, if $S$ contains a prime of degree not divisible by $p$, then $\widetilde{X_S}(L)=X_S(L)_p$. \end{enumerate} \end{prop} \begin{proof} (Sketch) We will give only a brief sketch of the proof, as the techniques and main ideas are borrowed from \cite{GPff}. First, we consider the exact sequence of $\Bbb Z_p[[\Gamma]]$--modules $$0\to {\rm Div}^0_{\overline S}(\overline L)_p\to {\rm Div}_{\overline S}(\overline L)_p\overset{\rm deg}\longrightarrow \Bbb Z_p\to 0$$ and take $\Gamma$--invariants and $\Gamma$--coinvariants to obtain a long exact sequence $$0\to {\rm Div}^0_{S}(L)_p\to {\rm Div}_{S}(L)_p\overset{\rm deg}\longrightarrow \Bbb Z_p\to ({\rm Div}^0_{\overline S}(\overline L)_p)_\Gamma\to {\rm Div}_{S}(L)_p\overset{\rm fdeg}\longrightarrow \Bbb Z_p\to 0.$$ The fact that the $\Gamma$--invariant of the complex $[{\rm Div}_{\overline S}(\overline L)_p\overset{\rm deg}\longrightarrow \Bbb Z_p]$ is $[{\rm Div}_{S}(L)_p\overset{\rm deg}\longrightarrow \Bbb Z_p]$ and its $\Gamma$--coinvariant is $[{\rm Div}_{S}(L)_p\overset{\rm fdeg}\longrightarrow \Bbb Z_p]$ follows immediately from the definitions and is explained in \S2 of \cite{GPff}. 
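Here and in what follows we use the following elementary mechanism (a sketch, stated for the continuous $\Gamma$--actions at hand, where $\Gamma$--invariants and $\Gamma$--coinvariants are computed as the kernel and cokernel of $(1-\gamma)$): if $0\to A\to B\to C\to 0$ is an exact sequence of $\Bbb Z_p[[\Gamma]]$--modules, the snake lemma applied to multiplication by $(1-\gamma)$ on this sequence yields the six--term exact sequence
\[
0\to A^\Gamma\to B^\Gamma\to C^\Gamma\longrightarrow A_\Gamma\to B_\Gamma\to C_\Gamma\to 0.
\]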
Now, in the long exact sequence above, we have $${\rm ker}({\rm fdeg})=X_S(L)_p, \qquad {\rm coker}({\rm deg})=\Bbb Z_p/d_S\mathbb{Z}_p.$$ Therefore, if we let $\widetilde X_S(L):= ({\rm Div}^0_{\overline S}(\overline L)_p)_\Gamma$, we have a canonical exact sequence \begin{equation}\label{tilde-x-ses} 0\to \Bbb Z_p/d_S\mathbb{Z}_p \to \widetilde{X_S}(L)\to X_S(L)_p\to 0. \end{equation} If $J$ denotes the Jacobian of $\overline Z$, there is a canonical exact sequence of $\Bbb Z_p$--modules $$0\to T_p(J)\to T_p(M_S)\to {\rm Div}^0_{\overline S}(\overline L)_p\to 0.$$ (See \S2 of \cite{GP12} for the exact sequence above.) Since we have a canonical isomorphism $$T_p(J)_\Gamma\simeq {\rm Pic}^0(L)_p$$ (see Corollary 5.7 in \cite{GP12}) and $T_p(J)$ is $\Bbb Z_p$--free of finite rank, we also have $$T_p(J)^{\Gamma}=0.$$ Therefore, when taking $\Gamma$--invariants and $\Gamma$--coinvariants in the above exact sequence, we obtain a canonical long exact sequence of $\Bbb Z_p$--modules $$0\to T_p(M_S)^\Gamma\to {\rm Div}^0_{S}(L)_p\overset{\delta}\longrightarrow {\rm Pic}^0(L)_p\to T_p(M_S)_\Gamma\to \widetilde X_S(L)\to 0,$$ where the connecting morphism $\delta$ is the usual divisor--class map. (See \cite[\S1]{GPff} for this fact.) Since there is a canonical isomorphism $U_S(L)_p\simeq {\rm ker}(\delta)$, where $U_S(L)_p$ injects into ${\rm Div}^0_{S}(L)_p$ via the divisor map, we obtain a canonical isomorphism of $\Bbb Z_p$--modules $$T_p(M_S)^\Gamma\simeq U_S(L)_p,$$ which concludes the proof of part (1) of the Proposition. To conclude the proof of part (2), observe that, by definition, we have ${\rm coker}(\delta)={\rm Pic}^0_S(L)_p$. Therefore, the last four non--zero terms of the long exact sequence above lead to a canonical short exact sequence of $\Bbb Z_p$--modules \begin{equation}\label{coinvariant-ses} 0\to {\rm Pic}^0_S(L)_p\to T_p(M_S)_{\Gamma}\to \widetilde{X_S}(L)\to 0.\end{equation} In combination with \eqref{tilde-x-ses}, this concludes the proof of part (2). \end{proof} \begin{remark}\label{RW-Tate-remark} (Ritter-Weiss modules and Tate sequences.) Assume that $L$ is the top field in a finite, Galois extension $L/K$, of Galois group $G$ and that $\Bbb F_q(t)\subseteq K\subseteq L$. Further, assume that the set $S$ is $G$--equivariant. Then, all the $\Bbb Z_p$--modules involved in the proof of the above Proposition carry natural $\Bbb Z_p[G]$--module structures. Most importantly, due to their canonical constructions, all the exact sequences above are exact in the category of $\Bbb Z_p[G]$--modules. Exact sequence \eqref{coinvariant-ses} is the $p$--adic, function field analogue of the Ritter--Weiss exact sequence (see \cite{Ritter-Weiss}), defining a certain extension class $\bigtriangledown_S$ of a module of $S$--divisors by an $S$--ideal class group, in the number field setting. This is what prompts the notation $\bigtriangledown_S(L)_p:=T_p(M_S)_{\Gamma}$. Further, since $T_p(M_S)$ is $\Bbb Z_p[G]$--projective, the exact sequence of $\Bbb Z_p[G]$--modules \begin{equation}\label{Tate-seq}0\to U_S(L)_p\to T_p(M_S)\overset{1-\gamma}\longrightarrow T_p(M_S)\to \bigtriangledown_S(L)_p\to 0,\end{equation} is the $p$--adic, function field analogue of a Tate exact sequence (see \cite{GPff} and also \cite{Ritter-Weiss} for more details), in the case where $S$ is not necessarily $p$--large. 
Of course, in order to cement these analogies, one would have to compute the extension classes of \eqref{coinvariant-ses} and \eqref{Tate-seq} in ${\rm Ext}^1_{\Bbb Z_p[G]}(\widetilde X_S(L), {\rm Pic}^0_S(L)_p)$ and ${\rm Ext}^2_{\Bbb Z_p[G]}(\bigtriangledown_S(L)_p, U_S(L)_p)$ and show that they coincide with the class--field theoretically meaningful Ritter--Weiss and Tate classes, respectively. In \cite{GPff}, this was done $\ell$--adically, for $\ell\ne p$, for the exact sequence \eqref{Tate-seq}, in the case where $S$ is $\ell$--large. (See Theorem 2.2 in loc.cit.) A proof of the $p$--adic analogue of that theorem (even in the case where $S$ is $p$--large) is still missing in the literature, unless $|G|$ is not divisible by $p$, in which case this was proved in \cite{GPff}. (See Theorem 2.2 in loc.cit.) \end{remark}
\footnotesize Werner Bley, \textsc{Department of Mathematics, Ludwig-Maximilians-Universit\"at M\"unchen}\par\nopagebreak \textit{E-mail address: } \texttt{[email protected]} Cristian D. Popescu, \textsc{Department of Mathematics, University of California San Diego}\par\nopagebreak \textit{E-mail address: }\texttt{[email protected]} \end{document}
\begin{document} \newtheorem{lem}{Lemma}[section] \newtheorem{pro}[lem]{Proposition} \newtheorem{thm}[lem]{Theorem} \newtheorem{rem}[lem]{Remark} \newtheorem{cor}[lem]{Corollary} \newtheorem{df}[lem]{Definition} \title[The Toda System on Compact Surfaces] {A variational Analysis of the Toda System on Compact Surfaces} \author{Andrea Malchiodi and David Ruiz} \address{SISSA, via Bonomea 265, 34136 Trieste (Italy) and Departamento de An\'alisis Matem\'atico, University of Granada, 18071 Granada (Spain).} \thanks{A. M. has been partially supported by GENIL (SPR) for a stay in Granada in 2011, and is supported by the FIRB project {\em Analysis and Beyond} from MIUR. D.R has been supported by the Spanish Ministry of Science and Innovation under Grant MTM2008-00988 and by J. Andalucia (FQM 116).} \email{[email protected], [email protected]} \keywords{Geometric PDEs, Variational Methods, Min-max Schemes.} \subjclass[2000]{35J50, 35J61, 35R01.} \begin{abstract} In this paper we consider the following {\em Toda system} of equations on a compact surface: $$ \left\{ \begin{array}{ll} - \D u_1 = 2 \rho_1 \left( \frac{h_1 e^{u_1}}{\int_\Sg h_1 e^{u_1} dV_g} - 1 \right) - \rho_2 \left( \frac{h_2 e^{u_2}}{\int_\Sg h_2 e^{u_2} dV_g} - 1 \right), \\ - \D u_2 = 2 \rho_2 \left( \frac{h_2 e^{u_2}}{\int_\Sg h_2 e^{u_2} dV_g} - 1 \right) - \rho_1 \left( \frac{h_1 e^{u_1}}{\int_\Sg h_1 e^{u_1} dV_g} - 1 \right). & \end{array} \right.$$ We will give existence results by using variational methods in a non coercive case. A key tool in our analysis is a new Moser-Trudinger type inequality under suitable conditions on the center of mass and the scale of concentration of the two components $u_1, u_2$. \end{abstract} \maketitle \section{Introduction} Let $\Sg$ be a compact orientable surface without boundary, and $g$ a Riemannian metric on $\Sg$. Consider the following system of equations: \begin{equation}\label{eq:toda} - \frac 1 2 \D u_i(x) = \sum_{j=1}^{N} a_{ij} e^{u_j(x)}, \qquad x \in \Sg, \ i = 1, \dots, N, \end{equation} where $\D=\D_g$ stands for the Laplace-Beltrami operator and $A = (a_{ij})_{ij}$ is the {\em Cartan matrix} of $SU(N+1)$, $$ A = \left( \begin{array}{cccccc} 2 & -1 & 0 & \dots & \dots & 0 \\ -1 & 2 & -1 & 0 & \dots & 0 \\ 0 & -1 & 2 & -1 & \dots & 0 \\ \dots & \dots & \dots & \dots & \dots & \dots \\ 0 & \dots & \dots & -1 & 2 & -1 \\ 0 & \dots & \dots & 0 & -1 & 2 \\ \end{array} \right). $$ Equation \eqref{eq:toda} is known as the {\em Toda system}, and has been extensively studied in the literature. This problem has a close relationship with geometry, since it can be seen as the Frenet frame of holomorphic curves in $\mathbb{CP}^N$ (see \cite{guest}). Moreover, it arises in the study of the non-abelian Chern-Simons theory in the self-dual case, when a scalar Higgs field is coupled to a gauge potential, see \cite{dunne, tar, yys}. Let us assume, for the sake of simplicity, that $\Sg$ has total area equal to $1$, i.e. $\int_{\Sg} 1 \, dV_g=1$. In this paper we study the following version of the Toda system for $N=2$: \begin{equation}\label{eq:gtodaaux} \left\{ \begin{array}{l} - \D u_1 = 2 \rho_1 \left( h_1 e^{u_1} - 1 \right) - \rho_2 \left( h_2 e^{u_2} - 1 \right), \\ - \D u_2 = 2 \rho_2 \left(h_2 e^{u_2} - 1 \right) - \rho_1 \left( h_1 e^{u_1} - 1 \right), \end{array} \right. \end{equation} where $h_i$ are smooth and strictly positive functions defined on $\Sg$. 
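In terms of the Cartan matrix $A$ above, and merely as a compact way of rewriting it (this reformulation will not be used in the sequel), system \eqref{eq:gtodaaux} reads
$$ - \D u_i = \sum_{j=1}^{2} a_{ij}\, \rho_j \left( h_j e^{u_j} - 1 \right), \qquad i=1,\ 2. $$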
By integrating both equations on $\Sg$, we obtain that any solution $(u_1,u_2)$ of \eqref{eq:gtodaaux} satisfies: $$ \int_{\Sg} h_i e^{u_i} \, dV_g =1, \qquad i=1,\ 2. $$ Hence, problem \eqref{eq:gtodaaux} is equivalent to: \begin{equation}\label{eq:gtoda} \left\{ \begin{array}{ll} - \D u_1 = 2 \rho_1 \left( \frac{h_1 e^{u_1}}{\int_\Sg h_1 e^{u_1} dV_g} - 1 \right) - \rho_2 \left( \frac{h_2 e^{u_2}}{\int_\Sg h_2 e^{u_2} dV_g} - 1 \right), \\ - \D u_2 = 2 \rho_2 \left( \frac{h_2 e^{u_2}}{\int_\Sg h_2 e^{u_2} dV_g} - 1 \right) - \rho_1 \left( \frac{h_1 e^{u_1}}{\int_\Sg h_1 e^{u_1} dV_g} - 1 \right). & \end{array} \right. \end{equation} Problem \eqref{eq:gtoda} is variational, and solutions can be found as critical points of a functional $J_\rho : H^1(\Sg) \times H^1(\Sg) \to \mathbb{R}$ ($\rho=(\rho_1,\rho_2)$) given by \begin{equation} \label{funzionale} J_\rho(u_1, u_2) = \int_\Sg Q(u_1,u_2)\, dV_g + \sum_{i=1}^2 \rho_i \left ( \int_\Sg u_i dV_g - \log \int_\Sg h_i e^{u_i} dV_g \right ), \end{equation} where $Q(u_1,u_2)$ is defined as: \begin{equation}\label{eq:QQ} Q(u_1,u_2) = \frac{1}{3} \left ( |\n u_1|^2 + |\n u_2|^2 + \n u_1 \cdot \n u_2\right ). \end{equation} Here and throughout the paper $\n u= \n_g u$ stands for the gradient of $u$ with respect to the metric $g$, whereas $\cdot$ denotes the Riemannian scalar product. Observe that both \eqref{eq:gtoda} and \eqref{funzionale} are invariant under addition of constants to $u_1$, $u_2$. The structure of the functional $J_{\rho}$ strongly depends on the parameters $\rho_1$, $\rho_2$. To start with, the following analogue of the Moser-Trudinger inequality has been given in \cite{jw}: \begin{equation} \label{mtjw} 4\pi \sum_{i=1}^2 \left (\log \int_\Sg h_i e^{u_i} dV_g - \int_\Sg u_i dV_g \right ) \leq \int_\Sg Q(u_1,u_2)\, dV_g +C, \end{equation} for some $C=C(\Sg)$. As a consequence, $J_{\rho}$ is bounded from below for $\rho_i \leq 4 \pi$ (see also \cite{sw, sw2, wang} for related inequalities). In particular, if $\rho_i < 4 \pi$ ($i=1,2$), $J_{\rho}$ is coercive and a solution for \eqref{eq:gtoda} can be easily found as a minimizer. If $\rho_i>4\pi$ for some $i=1,\ 2$, then $J_{\rho}$ is unbounded from below and a minimization technique is no longer possible. Let us point out that the Leray-Schauder degree associated to \eqref{eq:gtoda} is not known yet. For the scalar case, the Leray-Schauder degree has been computed in \cite{clin}. The only result on the topological degree for Liouville systems is \cite{lin-zhang}, but our case is not covered there. In this paper we use variational methods to obtain existence of critical points (generally of saddle type) for $J_{\rho}$. Before stating our results, let us comment briefly on some aspects of the problem under consideration. When one of the parameters $\rho_i$ equals $4 \pi$, the situation becomes more subtle. For instance, if we fix $\rho_1 < 4 \pi$ and let $\rho_2 \nearrow 4 \pi$, then $u_2$ could exhibit a blow-up behavior (see the proof of Theorem 1.1 in \cite{jlw}). In this case, $u_2$ would become close to a function $U_{\lambda, x}$ defined as: $$ U_{\lambda, x}(y)= \log \left( \frac{4 \l}{\left(1 + \lambda \, d(x,y)^2\right)^2} \right), $$ where $y \in \Sg$, $d(x,y)$ stands for the geodesic distance and $\lambda$ is a large parameter.
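For orientation, we note that, replacing in this computation only the geodesic distance $d(x,y)$ by the Euclidean distance on $\mathbb{R}^2$, one has
$$ \int_{\mathbb{R}^2} \frac{4 \l}{\left(1 + \lambda\, |y-x|^2\right)^2}\, dy = 4\pi \qquad \mbox{ for every } \lambda>0, $$
while, as $\l \to +\infty$, this mass concentrates in shrinking neighborhoods of $x$; this concentration phenomenon is what lies behind the threshold value $4 \pi$.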
Those functions $U_{\l, x}$ are the unique entire solutions of the Liouville equation (see \cite{cl2}): $$ - \D U= 2 e^{U}, \qquad \int_{\mathbb{R}^2} e^{U} \, dx < +\infty.$$ In \cite{jlw} and \cite{ll} some conditions for existence are given when some of the $\rho_i$'s equals $4 \pi$. The proofs involve a delicate analysis of the limit behavior of the solutions when $\rho_i$ converge to $4\pi$ from below, in order to avoid bubbling of solutions. For that, some conditions on the functions $h_i$ are needed. The scalar counterpart of \eqref{eq:gtoda} is a Liouville-type problem in the form: \begin{equation} \label{scalar} - \Delta u = 2 \rho \left( \frac{h(x) e^{u}}{\int_\Sigma h(x) e^{u} d V_g} - 1 \right),\end{equation} with $\rho \in \mathbb{R}$. This equation has been very much studied in the literature; there are by now many results regarding existence, compactness of solutions, bubbling behavior, etc. We refer the interested reader to the reviews \cite{mreview, tar3}. Solutions of \eqref{scalar} correspond to critical points of the functional $I_{\rho}:H^1(\Sg) \to \mathbb{R}$, \begin{equation}\label{scalar2} I_\rho(u) = \frac 1 2 \int_{\Sigma} |\n_g u|^2 dV_g + 2 \rho \left ( \int_\Sigma u dV_g - \log \int_\Sigma h(x) e^{u} dV_g \right ), \qquad u \in H^{1}(\Sigma). \end{equation} The classical Moser-Trudinger inequality implies that $I_{\rho}$ is bounded from below for $\rho \leq 4 \pi$. For larger values of $\rho$, variational methods were applied to \eqref{scalar} for the first time in \cite{djlw}, \cite{st}. In \cite{dm} the $Q$-curvature prescription problem is addressed in a 4-dimensional compact manifold: however, the arguments of the proof can be easily translated to the Liouville problem \eqref{scalar}, see \cite{dja}. Let us briefly describe the proof of \cite{dm} in the case $\rho \in (4 \pi , 8 \pi)$, for simplicity. In \cite{dm} it is shown that, whenever $I_{\rho}(u_n) \to -\infty$, then (up to a subsequence) $$ \frac{e^{u_n}}{\int_{\Sg} e^{u_n}\, dV_g} \rightharpoonup \delta_{x}, \ x \in \Sg, $$ in the sense of measures. Moreover, for $L>0$ sufficiently large, one can define a homotopy equivalence (see also \cite{mal}): $$I_{\rho}^{-L}= \{u\in H^1(\Sg):\ I_{\rho}(u)< -L \} \simeq \{ \delta_x:\ x \in \Sg\} \simeq \Sg.$$ Therefore the sublevel $I_{\rho}^{-L}$ is not contractible, and this allows us to use a min-max argument to find a solution. We point out that \cite{dm} also deals with the case of higher values of $\rho$, whenever $\rho \notin 4 \pi \mathbb{N}$. Coming back to system \eqref{eq:gtoda}, there are very few results when $\rho_i>4\pi$ for some $i=1$, $2$. One of them is given in \cite{cheikh} and concerns the case $\rho_1<4\pi$ and $\rho_2 \in (4\pi m, 4 \pi (m+1))$, $m \in \mathbb{N}$. There, the situation is similar to \cite{dm}; in a certain sense, one can describe the set $J_{\rho}^{-L}$ from the behavior of the second component $u_2$ as in \cite{dm}. In Theorem 1.4 of \cite{jlw}, an existence result is stated for $\rho_i \in (0,4 \pi) \cup (4 \pi, 8 \pi)$ for a compact surface $\Sg$ with positive genus: however, the min-max argument used in the proof seems not to be correct. The main problem is that a one-dimensional linking argument is used to obtain conditions on both the components of the system. In any case, the core of \cite{jlw} is the blow-up analysis for the Toda system (see Remark \ref{puffff} for more details). 
In particular, it is shown that if the $\rho_i$'s are bounded away from $4 \pi \mathbb{N}$, the set of solutions of \eqref{eq:gtoda} is compact (up to addition of constants). This is an essential tool for our analysis. In this paper we deal with the case $\rho_i \in (4 \pi, 8 \pi)$, $i=1,2$. Our main result is the following: \begin{thm}\label{t:main} Assume that $\rho_i \in (4 \pi, 8 \pi)$ and that $h_1, h_2$ are two positive $C^1$ functions on $\Sg$. Then there exists a solution $(u_1, u_2)$ of \eqref{eq:gtoda}. \end{thm} Let us point out that we find existence of solutions also if $\Sg$ is a sphere. Moreover, our existence result is based on a detailed study of the topological properties of the low sublevels of $J_{\rho}$. This study is interesting in itself; in the scalar case an analogous one has been used to deduce multiplicity results (see \cite{demarchis}) and degree computation formulas (see \cite{mal}). We shall see that the low sublevels of $J_{\rho}$ contain couples in which at least one component is very concentrated around some point of $\Sg$. Moreover, both components can concentrate at two points that could possibly coincide. However, we shall see that, in a certain sense, \begin{equation} \label{frase} \mbox{\em if } u_1,\ u_2 \ \mbox{\em concentrate around the same point at the same rate, then } J_{\rho} \, \mbox{\em is bounded from below.} \end{equation} To make this statement rigorous, we need several tools. The first is a definition of a rate of concentration of a positive function $f$ on $\Sg$, normalized in $L^1$, which is a refinement of the one given in \cite{mr}; this will be measured by a positive parameter called $\s=\s(f)$. In a sense, the smaller $\s$ is, the higher the rate of concentration of $f$. Compared to the classical concentration compactness arguments, our function $\s$ has the property of being continuous with respect to the $L^1$ topology (see Remark \ref{sigmacont}). Second, we also need to define a continuous center of mass when $\s\leq \delta $ for some fixed $\delta >0$: we will denote it by $\beta=\beta(f) \in \Sg$. When $\sigma \geq \delta $, the function is not concentrated and the center of mass cannot be defined. Hence, we have a map: $$\psi: H^1(\Sg) \to \overline{\Sg}_{\delta }, \ \psi(u_i)= (\beta(f_i), \s(f_i)),\ \mbox{ where } f_i=\frac{e^{u_i}}{\int_{\Sg} e^{u_i}\, dV_g}.$$ Here $\overline{\Sg}_{\delta }$ is the topological cone with base $\Sg$, so that we make the identification to a point when $\sigma \geq \delta $ for some $\delta>0$ fixed. Third, we need an improvement of the Moser-Trudinger inequality in the following form: if $ \psi(u_1) = \psi(u_2)$, then $J_{\rho}(u_1,u_2)$ is bounded from below by a constant independent of $(u_1,u_2)$. In this sense, \eqref{frase} is made precise. The proof uses local versions of the Moser-Trudinger inequality and applications of it to small balls (via a convenient dilation) and to annuli with small internal radius (via a Kelvin transform). Roughly speaking, on low sublevels one of the following alternatives holds: \begin{enumerate} \item one component concentrates at a point whereas the other does not concentrate ($\s_i< \delta \leq \s_j$), or \item the two components concentrate at different points ($\s_i < \delta,\ \beta_1 \neq \beta_2$), or \item the two components concentrate at the same point with different rates of concentration ($\s_i< \s_j<\delta$, $\beta_1=\beta_2$).
\end{enumerate}
With this at hand, for $L > 0$ large we are able to define a continuous map:
$$ J_{\rho}^{-L} \quad \stackrel{\psi \oplus \psi}{\longrightarrow} \quad X:=(\overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta }) \setminus \overline{D}, $$
where $\overline{D}$ is the diagonal of $\overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta }$. We can also proceed in the opposite direction: in Section \ref{s:4} we construct a family of test functions modeled on $X$ on which $J_\rho$ attains arbitrarily low values; see Lemma \ref{l:dsmallIlow} for the precise result. Calling $\phi : X \to J_{\rho}^{-L}$ the corresponding map, we will prove that the composition
\begin{equation} \label{compo} X \quad \stackrel{\phi}{\longrightarrow} \quad J_{\rho}^{-L} \quad \stackrel{\psi \oplus \psi}{\longrightarrow} \quad X \end{equation}
is homotopically equivalent to the identity map. In this situation it is said that $J_{\rho}^{-L}$ {\em dominates} $X$ (see \cite{hat}, page 528). In a certain sense, these maps are natural, since they properly describe the topological properties of $J_{\rho}^{-L}$. We will see that for any compact orientable surface $\Sg$, $X$ is non-contractible; this is proved by estimating its cohomology groups. As a consequence, $\phi(X)$ is not contractible in $J_{\rho}^{-L}$. This allows us to use a min-max argument to find a critical point of $J_{\rho}$. Here, the compactness of solutions proved in \cite{jlw} is an essential tool, since the Palais-Smale property for $J_{\rho}$ is an open problem (as it is in the scalar case).
The rest of the paper is organized as follows. In Section \ref{s:pr} we present the notation that will be used in the paper, as well as some preliminary results. The definition of the map $\psi$, its properties, and the improvement of the Moser-Trudinger inequality are presented in Section \ref{s:3}. In Section \ref{s:4} we define the map $\phi$ and prove that the composition \eqref{compo} is homotopic to the identity; there we also develop the min-max scheme that gives a critical point of $J_{\rho}$. The fact that $X$ is not contractible is proved in a final Appendix.
\section{Notations and preliminaries}\label{s:pr}
In this section we collect some useful notation and preliminary facts. Throughout the paper, $\Sg$ is a compact orientable surface without boundary; for simplicity, we assume $|\Sg|= \int_{\Sg} 1 \, dV_g =1$. Given $\delta>0$, we define the topological cone:
\begin{equation} \label{cono} \overline{\Sg}_{\delta} = \left(\Sigma\times (0, +\infty) \right) \Big/ \left( \Sigma\times [\delta, + \infty) \right),\end{equation}
that is, $\Sigma\times (0, +\infty)$ with $\Sigma\times [\delta, + \infty)$ collapsed to a single point. For $x, y \in \Sg$ we denote by $d(x,y)$ the metric distance between $x$ and $y$ on $\Sg$. In the same way, for any $p \in \Sigma$, $\Omega, \Omega' \subseteq \Sg$, we denote:
$$ d(p, \Omega) = \inf \left\{ d(p,x) \; : \; x \in \Omega \right\}, \qquad d(\Omega,\Omega') = \inf \left\{ d(x,y) \; : \; x \in \Omega,\ y \in \Omega' \right\}. $$
Moreover, the symbol $B_p(r)$ stands for the open metric ball of radius $r$ centered at $p$, and $A_p(r,R)$ for the open annulus of radii $r$ and $R$, $r<R$. The complement of a set $\Omega$ in $\Sg$ will be denoted by $\Omega^c$. Given a function $u \in L^1(\Sg)$ and $\Omega \subset \Sg$, we consider the average of $u$ on $\Omega$:
$$ \fint_{\Omega} u \, dV_g = \frac{1}{|\Omega|} \int_{\Omega} u \, dV_g.$$
We denote by $\overline{u}$ the average of $u$ on $\Sg$: since we are assuming $|\Sg| = 1$, we have
$$ \overline{u}= \int_\Sg u \, dV_g = \fint_\Sg u\, dV_g. $$
Throughout the paper we will denote by $C$ large constants which are allowed to vary among different formulas or even within the same line. When we want to stress the dependence of the constants on some parameter (or parameters), we add subscripts to $C$, as in $C_\delta$, etc. Constants with subscripts are also allowed to vary. Moreover, we will sometimes write $o_{\alpha}(1)$ to denote quantities that tend to $0$ as $\alpha \to 0$ or $\alpha \to +\infty$, depending on the case. We will similarly use the symbol $O_\a(1)$ for bounded quantities. \ \noindent We begin by recalling the following compactness result from \cite{jlw}.
\begin{thm}\label{th:jlw} (\cite{jlw}) Let $m_1, m_2$ be two non-negative integers, and suppose $\L_1, \L_2$ are two compact subsets of the intervals $(4 \pi m_1, 4 \pi (m_1 + 1))$ and $(4 \pi m_2, 4 \pi (m_2 + 1))$ respectively. Then, if $\rho_1 \in \L_1$ and $\rho_2 \in \L_2$ and if we impose $\int_{\Sg} u_i dV_g = 0$, $i = 1, 2$, the solutions of \eqref{eq:gtoda} stay uniformly bounded in $L^\infty(\Sg)$ (actually in every $C^l(\Sg)$ with $l \in \mathbb{N}$). \end{thm}
\noindent Next, we also recall some Moser-Trudinger type inequalities. As commented in the introduction, problem \eqref{eq:gtoda} is the Euler-Lagrange equation of the energy functional $J_{\rho}$ given in \eqref{funzionale}. This functional is bounded from below only for certain values of $\rho_1, \rho_2$, as has been proved by Jost and Wang (see also \eqref{mtjw}):
\begin{thm}\label{th:jw} (\cite{jw}) The functional $J_\rho$ is bounded from below if and only if $\rho_i \leq 4 \pi$, $i=1,\ 2$. \end{thm}
\noindent The next proposition can be thought of as a local version of Theorem \ref{th:jw}, and will be of use in Section \ref{s:3}. Let us recall the definition of the quadratic form $Q$ in \eqref{eq:QQ}.
\begin{pro}\label{p:MTbd} Fix $\delta > 0$, and let $\Omega_1 \subset \Omega_2 \subset \Sg$ be such that $d(\Omega_1, \partial \Omega_2) \geq \delta $. Then, for any $\varepsilon > 0$ there exists a constant $C = C(\e, \delta)$ such that for all $u_1, u_2 \in H^1(\Sg)$
\begin{equation}\label{eq:ineqSg} 4 \pi \left ( \log \int_{\Omega_1} e^{u_1} dV_g + \log \int_{\Omega_1} e^{u_2} dV_g -\fint_{\Omega_2} u_1 dV_g - \fint_{\Omega_2} u_2 dV_g \right )\leq (1+\e) \int_{\Omega_2} Q(u_1, u_2) dV_g + C. \end{equation}
\end{pro}
\begin{pf} We can assume without loss of generality that $\fint_{\Omega_2} u_i dV_g = 0$ for $i = 1, 2$. Let us write
$$ u_i = v_i + w_i, \quad \int_{\Omega_2} v_i \, dV_g = \int_{\Omega_2} w_i \, dV_g =0, $$
where $v_i \in L^\infty(\Omega_2)$ and $w_i \in H^1(\Omega_2)$ will be fixed later. We have
\begin{equation}\label{eq:ddmm2} \log \int_{\Omega_1} e^{u_1} dV_g + \log \int_{\Omega_1} e^{u_2} dV_g \leq \| v_1\|_{L^{\infty}(\Omega_1)} + \| v_2\|_{L^{\infty}(\Omega_1)} + \log \int_{\Omega_1} e^{w_1} dV_g + \log \int_{\Omega_1} e^{w_2} dV_g. \end{equation}
We next consider a smooth cutoff function $\chi$ with values in $[0,1]$ satisfying
$$ \left\{ \begin{array}{ll} \chi(x) = 1 & \hbox{ for } x \in \Omega_1,\\ \chi(x) = 0 & \hbox{ if } d(x, \Omega_1) > \delta/2, \end{array} \right. $$
and then define
$$ \tilde{w}_i(x) = \chi(x) w_i(x); \qquad \quad i = 1, 2. $$
Clearly $\tilde{w}_i$ belongs to $H^1(\Sg)$ and is supported in a compact subset of the interior of $\Omega_2$: indeed, $\chi$ vanishes where $d(x, \Omega_1) > \delta/2$, while $d(\Omega_1, \partial \Omega_2) \geq \delta$, so the support of $\tilde{w}_i$ stays at distance at least $\delta/2$ from $\partial \Omega_2$.
Hence we can apply Theorem \ref{th:jw} to $\tilde{w}_i$ on $\Sg$, finding
$$ \log \int_{\Omega_1} e^{w_1} dV_g + \log \int_{\Omega_1} e^{w_2} dV_g \leq \log \int_{{\Sg}} e^{\tilde{w}_1} dV_g + \log \int_{{\Sg}} e^{\tilde{w}_2} dV_g \leq $$$$\frac{1}{4\pi} \int_{{\Sg}} Q(\tilde{w}_1,\tilde{w}_2) dV_g + \fint_{\Sg} (\tilde{w}_1 + \tilde{w}_2)\, dV_g + C. $$
Using the Leibniz rule and H\"older's inequality we obtain
$$ \int_{{\Sg}} Q(\tilde{w}_1,\tilde{w}_2) dV_g \leq (1+\e) \int_{\Omega_2} Q(w_1, w_2) dV_g + C_{\e} \int_{\Omega_2} (w_{1}^2 + w_{2}^2) dV_g. $$
Moreover, we can estimate the mean value of $\tilde{w}_i$ in the following way:
$$\fint_{\Sg} \tilde{w}_i \, dV_g \leq C \left ( \int_{\Sg} |\nabla \tilde{w}_i|^2\, dV_g \right )^{1/2} \leq C_{\e} + \varepsilon \int_{\Omega_2} |\nabla \tilde{w}_i|^2\, dV_g \leq $$$$C_{\e} + C \varepsilon \left ( \int_{\Omega_2} |\n w_i|^2\, dV_g + C \int_{\Omega_2} w_{i}^2\, dV_g \right ).$$
From \eqref{eq:ddmm2} and the last formulas we find
\begin{eqnarray}\label{eq:last} \nonumber \log \int_{\Omega_1} e^{u_1} dV_g + \log \int_{\Omega_1} e^{u_2} dV_g & \leq & \| v_1\|_{L^{\infty}(\Omega_1)} + \| v_2\|_{L^{\infty}(\Omega_1)} + \frac{1+\e}{4\pi} \int_{\Omega_2} Q(w_1, w_2) dV_g \\ & + & C_{\e} \int_{\Omega_2} (w_1^2 + w_2^2) dV_g + C. \end{eqnarray}
To control the latter terms we use truncations in Fourier modes. Define $V_{\e}$ to be the direct sum of the eigenspaces of the Laplacian on $\Omega_2$ (with Neumann boundary conditions) with eigenvalues less than or equal to $C_{\e}\e^{-1}$. Take now $v_i$ to be the orthogonal projection of $u_i$ onto $V_{\e}$. In $V_\e$ the $L^\infty$ norm is equivalent to the $L^2$ norm: by using Poincar{\'e}'s inequality we get
$$ C_{\e} \int_{\Omega_2} (w_{1}^2 + w_{2}^2) dV_g \leq \varepsilon \int_{\Omega_2} Q(u_1,u_2) dV_g, $$$$ \|v_i\|_{L^\infty(\Omega_1)} \leq C_{\e} \|v_i\|_{L^2(\Omega_2)} \leq C_\varepsilon \left( \int_{\Omega_2} |\n u_i|^2 dV_g \right)^{\frac 12} \leq \varepsilon \int_{\Omega_2} Q(u_1,u_2) dV_g +C_{\e}. $$
Hence, from \eqref{eq:last} and the above inequalities we derive \eqref{eq:ineqSg} by renaming $\e$ properly. \end{pf}
\begin{rem}\label{r:regeigenv} While the Fourier decomposition used in the above proof depends on $\Omega_2$, the constants only depend on $\Sg$, $\delta $ and $\e$. In fact, one can replace $\O_2$ by a domain $\check{\O}_2$, $\O_2 \subseteq \check{\O}_2 \subseteq B_{\O_2}(\delta /2)$, with boundary curvature depending only on $\delta $ and satisfying a uniform interior sphere condition with spheres of radius $\delta ^3$. For example, one can obtain such a domain $\check{\O}_2$ by triangulating $\Sg$ with simplexes of diameter of order $\delta ^2$, taking a suitable union of triangles and smoothing the corners. For these domains, which are finitely many, the eigenvalue estimates will only depend on $\delta $. \end{rem}
\ \noindent We next prove a criterion which gives us a first insight into the properties of the low sublevels of $J_{\rho}$. This result is in the spirit of an improved inequality in \cite{cl}, and we use an extra covering argument to track the concentration properties of both components of the system. We first need an auxiliary lemma.
\begin{lem}\label{l:step1} Let $\delta _0 > 0$, $\g_0 > 0$, and let $\Omega_{i,j} \subseteq \Sg$, $i,j = 1, 2$, satisfy $d(\Omega_{i,j},\Omega_{i,k}) \geq \delta _0$ for $j \neq k$.
Suppose that $u_1, u_2 \in H^1(\Sg)$ are two functions verifying
\begin{equation}\label{eq:ddmm} \frac{\int_{\Omega_{i,j}} e^{u_i} dV_g}{\int_\Sg e^{u_i} dV_g} \geq \g_0, \qquad \qquad i,j = 1, 2. \end{equation}
Then there exist positive constants $\tilde{\g}_0$, $\tilde{\delta }_0$, depending only on $\g_0$, $\delta _0$, and two sets $\tilde{\O}_1, \tilde{\O}_2 \subseteq \Sg$, depending also on $u_1, u_2$, such that
\begin{equation}\label{eqLddmm2} d(\tilde{\O}_1, \tilde{\O}_2) \geq \tilde{\delta }_0; \qquad \quad \frac{\int_{\tilde{\Omega}_{i}} e^{u_1} dV_g}{\int_\Sg e^{u_1} dV_g} \geq \tilde{\g}_0, \quad \frac{\int_{\tilde{\Omega}_{i}} e^{u_2} dV_g}{\int_\Sg e^{u_2} dV_g} \geq \tilde{\g}_0; \quad i = 1, 2. \end{equation}
\end{lem}
\begin{pf} First, we fix a number $r_0 < \frac{\delta _0}{80}$. Then we cover $\Sg$ with a finite union of metric balls $(B_{x_l}(r_0))_l$, whose number can be bounded by an integer $N_{r_0}$ which depends only on $r_0$ (and $\Sg$). Next we cover $\overline{\Omega}_{i,j}$ by a finite number of these balls, and we choose $y_{i,j} \in \{x_l\}_l$ such that
$$ \int_{B_{y_{i,j}}(r_0)} e^{u_i} dV_g = \max \left\{ \int_{B_{x_l}(r_0)} e^{u_i} dV_g \; : \; B_{x_l}(r_0) \cap \overline{\Omega}_{i,j} \neq \emptyset \right\}. $$
Since the total number of balls is bounded by $N_{r_0}$ and since by our assumption the (normalized) integral of $e^{u_i}$ over $\Omega_{i,j}$ is greater than or equal to $\g_0$, it follows that
\begin{equation}\label{eq:bdtg0} \frac{\int_{B_{y_{i,j}}(r_0)} e^{u_i} dV_g}{\int_{\Sg} e^{u_i} dV_g} \geq \frac{\g_0}{N_{r_0}}. \end{equation}
By the properties of the sets $\Omega_{i,j}$, we have that:
$$ B_{y_{i,j}}(2r_0) \cap B_{y_{i,k}}(r_0)=\emptyset \qquad \hbox{ for } j \neq k. $$
Now, one of the following two possibilities occurs:
\begin{description}
\item[(a)] $B_{y_{1,1}}(5 r_0) \cap \left( B_{y_{2,1}}(5 r_0) \cup B_{y_{2,2}}(5 r_0) \right) \neq \emptyset$ or $B_{y_{1,2}}(5 r_0) \cap \left( B_{y_{2,1}}(5 r_0) \cup B_{y_{2,2}}(5 r_0) \right) \neq \emptyset$;
\item[(b)] $B_{y_{1,1}}(5 r_0) \cap \left( B_{y_{2,1}}(5 r_0) \cup B_{y_{2,2}}(5 r_0) \right) = \emptyset$ and $B_{y_{1,2}}(5 r_0) \cap \left( B_{y_{2,1}}(5 r_0) \cup B_{y_{2,2}}(5 r_0) \right) = \emptyset$.
\end{description}
In case {\bf (a)} we define the sets $\tilde{\Omega}_i$ as
$$ \tilde{\Omega}_1 = B_{y_{1,1}}(30 r_0), \qquad \quad \tilde{\Omega}_2 = B_{y_{1,1}}(40 r_0)^c, $$
while in case {\bf (b)} we define
$$ \tilde{\Omega}_1 = B_{y_{1,1}}(r_0) \cup B_{y_{2,1}}(r_0); \qquad \quad \tilde{\Omega}_2 = B_{y_{1,2}}(r_0) \cup B_{y_{2,2}}(r_0). $$
We also set $\tilde{\g}_0 = \frac{\g_0}{N_{r_0}}$ and $\tilde{\delta }_0 = r_0$. We notice that $\tilde{\g}_0$ and $\tilde{\delta }_0$ depend only on $\g_0$ and $\delta _0$, as claimed, and that the sets $\tilde{\Omega}_i$ satisfy the required conditions. This concludes the proof of the lemma. \end{pf}
\ \noindent We next derive the improvement of the constants in Theorem \ref{th:jw}, in the spirit of \cite{cl}.
\begin{pro}\label{p:imprc} Let $u_1, u_2 \in H^1(\Sg)$ be a couple of functions satisfying the assumptions of Lemma \ref{l:step1} for some positive constants $\delta _0, \g_0$. Then for any $\varepsilon > 0$ there exists $C=C(\e, \delta _0, \g_0) > 0$ such that
$$ 8 \pi \left( \log \int_{\Sg} e^{u_1-\overline{u}_1} dV_g + \log \int_{\Sg} e^{u_2-\overline{u}_2} dV_g \right) \leq (1+\e) \int_\Sg Q(u_1, u_2) dV_g + C. $$
\end{pro}
\begin{pf} Let $\tilde{\delta }_0, \tilde{\g}_0$ and $\tilde{\Omega}_1, \tilde{\Omega}_2$ be as in Lemma \ref{l:step1}, and assume without loss of generality that $\overline{u}_1 = \overline{u}_2 = 0$. Let us define $U_i=\{x \in \Sg:\ d(x, \tilde{\Omega}_i) < \tilde{\delta }_0/2\}$. By applying Proposition \ref{p:MTbd}, we get:
\begin{equation} \label{hola} 4 \pi \left (\log \int_{\tilde{\Omega}_i} e^{u_1} dV_g + \log \int_{\tilde{\Omega}_i} e^{u_2} dV_g - \fint_{U_i} (u_1+u_2) dV_g \right)\leq (1+\e) \int_{U_i} Q(u_1, u_2) dV_g + C.\end{equation}
Observe that:
$$ \log \int_{\tilde{\Omega}_i} e^{u_j} dV_g\geq \log \left ( \int_{\Sg} e^{u_j} dV_g\right ) + \log \tilde{\g}_0.$$
Since $U_1 \cap U_2 = \emptyset$, we can sum \eqref{hola} for $i=1,2$, to obtain
$$8 \pi \left ( \log \int_{\Sg} e^{u_1} dV_g + \log \int_{\Sg} e^{u_2} dV_g -\sum_{i=1}^2 \fint_{U_i} (u_1+u_2) dV_g \right )\leq (1+\e) \int_{\Sg} Q(u_1, u_2) dV_g + C.$$
It suffices now to estimate the term $\fint_{U_i} (u_1+u_2) dV_g$. By using Poincar{\'e}'s inequality and the estimate $|U_i| \geq \tilde{\delta }_0^2$, we have:
$$ \fint_{U_i} u_j \, dV_g \leq \tilde{\delta }_0^{-2} \int_{U_i} u_j \, dV_g \leq C \left ( \int_{\Sg} |\n u_j|^2\, dV_g \right )^{1/2} \leq C + \varepsilon \int_{\Sg} |\n u_j|^2\, dV_g.$$
To finish the proof it suffices to properly rename $\e$. \end{pf}
Proposition \ref{p:imprc} implies that on low sublevels at least one of the components must be very concentrated around a certain point. A more precise description of the topological properties of $J_{\rho}^{-L}$ will be given later on.
\section{Volume concentration and improved inequality} \label{s:3}
In this section we give the main tools for the description of the sublevels of the energy functional $J_{\rho}$. These will be contained in Propositions \ref{covering} and \ref{mt}, whose proofs will be given in the subsequent subsections.
First, we give continuous definitions of the center of mass and of the scale of concentration of positive functions normalized in $L^1$, which are adequate for our purposes. These are a refinement of the ones given in \cite{mr}. Consider the set
$$ A = \left\{ f \in L^1(\Sg) \; : \; f > 0 \ \hbox{ a.e. and } \int_\Sg f dV_g = 1 \right\}, $$
endowed with the topology inherited from $L^1(\Sg)$. Moreover, let us recall the definition \eqref{cono} of the cone $\overline{\Sg}_{\delta}$.
\begin{pro} \label{covering} Let us fix a constant $R>1$. Then there exist $\delta= \delta(R) >0$ and a continuous map:
$$ \psi : A \to \overline{\Sg}_{\delta}, \qquad \quad \psi(f)= (\beta, \sigma),$$
satisfying the following property: for any $f \in A$ there exists $p \in \Sg$ such that
\begin{enumerate}
\item[{\emph a)}] $ d(p, \beta) \leq C' \sigma$ for $C' = \max\{3 R+1, \delta^{-1} \mathrm{diam}(\Sg) \}.$
\item[{\emph b)}] There holds:
$$ \int_{B_p(\sigma)} f \, dV_g > \tau, \qquad \quad \int_{B_p(R \sigma)^c} f \, dV_g > \tau, $$
where $\tau>0$ depends only on $R$ and $\Sg$.
\end{enumerate}
\end{pro}
Roughly speaking, the map $\psi(f)= (\beta, \sigma)$ gives us a center of mass of $f$ and its scale of concentration around that point. Indeed, the smaller $\sigma$ is, the higher the rate of concentration. Moreover, if $\sigma$ exceeds a certain positive constant, $\beta$ cannot be defined; so it is natural to make the identification in $\overline{\Sg}_\delta $.
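As an illustration only (this will not be used in the sequel), consider a family of functions concentrating at a point $x_0 \in \Sg$ at scale $\lambda^{-1}$, for instance
$$ f_\lambda(y) = \frac{e^{\varphi_\lambda(y)}}{\int_\Sg e^{\varphi_\lambda}\, dV_g}, \qquad \varphi_\lambda(y) = -2\log\left(1 + \lambda^2 d(x_0,y)^2\right), \qquad \lambda \gg 1. $$
For such a family one expects $\sigma(f_\lambda)$ to be of order $\lambda^{-1}$ (up to constants depending on $R$) and $\beta(f_\lambda)$ to be close to $x_0$; test functions of a similar form will indeed appear in Section \ref{s:4}.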
Next, we state an improved Moser-Trudinger inequality for couples $(u_1,u_2)$ such that $e^{u_1}$ and $e^{u_2}$ are centered at the same point with the same rate of concentration. Being more specific, we have the following:
\begin{pro} \label{mt} Given any $\e>0$, there exist $R=R(\e)>1$ and $\psi$ as given in Proposition \ref{covering}, such that for any $(u_1, u_2) \in H^1(\Sg)\times H^1(\Sg)$ with:
$$\psi \left( \frac{e^{u_1}}{\int_{\Sigma} e^{u_1} dV_g} \right )= \psi \left( \frac{e^{u_2}}{\int_{\Sigma} e^{u_2} dV_g} \right ), $$
the following inequality holds:
$$ (1+\e) \int_\Sg Q(u_1,u_2)\, dV_g \geq 8 \pi \left (\log \int_\Sg e^{u_1-\overline{u}_1} \, dV_g +\log \int_\Sg e^{u_2-\overline{u}_2} \, dV_g \right )+ C, $$
for some $C=C(\e)$. \end{pro}
The rest of the section is devoted to the proof of these propositions.
\subsection{Proof of Proposition \ref{covering}}
Take $R_0=3R$, and define $ \sigma: A \times \Sigma\to (0,+\infty)$ such that:
\begin{equation} \label{sigmax} \int_{B_x(\sigma(x,f))} f \, dV_g = \int_{B_x(R_0 \sigma(x,f))^c} f \, dV_g. \end{equation}
It is easy to check that $\sigma(x,f)$ is uniquely determined and continuous. Moreover, $\sigma$ satisfies:
\begin{equation} \label{dett} d(x,y) \leq R_0 \max \{ \sigma(x,f), \sigma(y,f)\} +\min \{ \sigma(x,f), \sigma(y,f)\}. \end{equation}
Otherwise, $ B_x(R_0 \sigma(x,f)) \cap B_y(\sigma(y,f)+\e) = \emptyset $ for some $\e>0$. Moreover, since $B_y(R_0 \sigma(y,f))$ does not cover the whole of $\Sg$, $A_y(\sigma(y,f), \sigma(y,f)+\e)$ is a nonempty open set. Then:
$$ \int_{B_x(\sigma(x,f))} f \, dV_g = \int_{B_x(R_0 \sigma(x,f))^c} f \, dV_g \geq \int_{B_y(\sigma(y,f)+\e)} f \, dV_g > \int_{B_y(\sigma(y,f))} f \, dV_g. $$
By interchanging the roles of $x$ and $y$, we would also obtain the reverse inequality. This contradiction proves \eqref{dett}. We now define:
$$ T: A \times \Sigma\to \mathbb{R}, \qquad T(x,f) = \int_{B_x(\s(x,f))} f dV_g. $$
\begin{lem}\label{sigma} If $x_0 \in \Sg$ is such that $T(x_0,f) = \max_{y \in\Sg} T(y,f)$, then $\s(x_0,f) < 3\s(x, f)$ for any other $x \in \Sg$. \end{lem}
\begin{pf} Choose any $x \in \Sg$ and $\e>0$. First, observe that $B_x(R_0 \s(x,f)+\e) $ must intersect $B_{x_0}(\s(x_0,f))$. Otherwise, as above, we know that $A_x(R_0 \s(x,f), R_0 \s(x,f)+\e)$ is a nonempty open set. Then
$$ T(x_0,f)= \int_{B_{x_0}(\s(x_0,f))} f \, dV_g < \int_{B_{x}(R_0\s(x,f))^c} f \, dV_g =T(x,f), $$
a contradiction. Arguing in the same way, we can also conclude that $B_x(R_0 \s(x,f)+\e) $ cannot be contained in $B_{x_0}(R_0\s(x_0,f))$. By the triangle inequality, we obtain that:
$$ 2 (R_0 \sigma(x,f)+\e) > (R_0-1) \sigma(x_0,f).$$
Since $\e>0$ is arbitrary, it follows that:
$$ \sigma(x,f) \geq \frac{R_0-1}{2 R_0} \sigma(x_0,f).$$
Recalling that $R_0>3$, we are done. \end{pf}
As a consequence of the previous lemma, we obtain the following:
\begin{lem} \label{tau} There exists a fixed $\tau > 0$ such that
$$ \max_{x \in \Sg} T(x,f) > \tau > 0 \qquad \quad \hbox{ for all } f \in A. $$
\end{lem}
\begin{pf} Let us fix $x_0 \in \Sigma$ such that $T(x_0,f)=\max_{x\in \Sigma} T(x,f)$. For any $x \in A_{x_0}(\sigma(x_0,f ), R\sigma(x_0,f))$, by Lemma \ref{sigma}, we have that:
$$ \int_{B_x(\sigma(x_0,f)/3)} f \, dV_g \leq \int_{B_x(\sigma(x,f))} f \, dV_g \leq T(x_0,f). $$
Let us take a finite covering:
$$A_{x_0}(\sigma(x_0,f ), R\sigma(x_0,f)) \subset \cup_{i=1}^k B_{x_i}(\sigma(x_0,f)/3).$$
Observe that $k$ is independent of $f$ and $\sigma(x_0,f)$, and depends only on $\Sigma$ and $R$.
Therefore: $$ 1 = \int_{\Sg} f \, dV_g \leq \int_{B_{x_0}(\sigma(x_0,f))} f \, dV_g + \int_{B_{x_0}(R\sigma(x_0,f))^c} f \, dV_g+ \sum_{i=1}^k \int_{B_{x_i}(\sigma(x_0,f)/3)} f \, dV_g \leq (k+2) T(x_0,f).$$ \end{pf} Let us define: $$ \sigma : A \to \mathbb{R}, \qquad \quad \sigma(f)= 3 \min\{ \sigma(x,f): \ x \in \Sigma\},$$ which is obviously a continuous function. \begin{rem} \label{sigmacont} In \cite{mr} (see Section 3 there) a sort of concentration parameter is defined, but it does not depend continuously on $f$. Moreover, the definition of barycenter given below has been modified compared to \cite{mr}. Finally, the application $\psi$ is mapped to a cone; this interpretation, which is crucial in our framework, was missing in \cite{mr}. \end{rem} Given $\tau$ as in Lemma \ref{tau}, consider the set: \begin{equation} \label{defS} S(f) = \left\{ x \in \Sigma\; : \; T(x,f) > \t,\ \s(x,f) < \s(f) \right\}. \end{equation} If $x_0 \in \Sg$ is such that $T(x_0,f)= \max_{x\in \Sg} T(x,f)$, then Lemmas \ref{sigma} and \ref{tau} imply that $x_0 \in S(f)$. Therefore, $S(f)$ is a nonempty open set for any $f \in A$. Moreover, from \eqref{dett}, we have that: \begin{equation}\label{eq:diamS} diam(S(f)) \leq (R_0+1)\s(f).\end{equation} By the Nash embedding theorem, we can assume that $\Sigma\subset \mathbb{R}^N$ isometrically, $N \in \mathbb{N}$. Take an open tubular neighborhood $\Sigma\subset U \subset \mathbb{R}^N$ of $\Sg$, and $\delta>0$ small enough so that: \begin{equation} \label{co} co \left [ B_x((R_0+1)\delta)\cap \Sigma\right ] \subset U \ \forall \, x \in \Sg, \end{equation} where $co$ denotes the convex hull in $\mathbb{R}^N$. We define now $$ \eta(f) = \frac{\displaystyle \int_\Sigma(T(x,f) - \t)^+ \left( \s(f) - \s(x,f) \right)^+ x \ dV_g}{\displaystyle \int_\Sigma(T(x,f) - \t)^+ \left( \s(f) - \s(x,f) \right)^+ dV_g}\in \mathbb{R}^N. $$ The map $\eta$ yields a sort of center of mass in $\mathbb{R}^N$. Observe that the integrands become nonzero only on the set $S(f)$. However, whenever $\sigma(f) \leq \delta$, \eqref{eq:diamS} and \eqref{co} imply that $\eta(f) \in U$, and so we can define: $$ \beta: \{f \in A:\ \s(f)\leq \delta \} \to \Sg, \ \ \beta(f)= P \circ \eta (f),$$ where $P: U \to \Sg$ is the orthogonal projection. Now, let us check that $\psi(f)=(\beta(f), \sigma(f))$ satisfies the conditions given by Proposition \ref{covering}. If $\sigma(f) \leq \delta$, then $\beta(f) \in co [ S(f)] \cap \Sg$. Therefore, $d(\beta(f), S(f)) < (R_0+1)\sigma(f)$. Take any $p \in S(f)$. Recall that $R_0 = 3R$ and that $\s(f) \leq 3\s(x,f)<3 \s(f)$ for any $x \in S(f)$: it is easy to conclude then {\it a)} and {\it b)}. If $\sigma(f) \geq \delta$, $\beta$ is not defined. Observe that {\it a)} is then satisfied for any $\beta \in \Sg$. \subsection{Proof of Proposition \ref{mt}} First of all, we will need the following technical lemma: \begin{lem} \label{technical} There exists $C>0$ such that for any $x \in \Sg$, $d>0$ small, $$ \left | \fint_{B_x(d)} u\, dV_g - \fint_{\partial B_x(d)} u\, dS_g\right | \leq C \left ( \int_{B_{x}(d)} |\nabla u|^2 \, dV_g \right )^{1/2}.$$ Moreover, given $r\in (0,1)$, there exists $C=C(r, \Sg)>0$ such that for any $x_1$, $x_2 \in \Sg$, $d>0$ with $B_1 = B_{x_1}(r d) \subset B_2 = B_{x_2}(d)$, then: $$ \left | \fint_{B_1} u\, dV_g - \fint_{B_2} u\, dV_g\right | \leq C \left ( \int_{B_2} |\nabla u|^2 \, dV_g \right )^{1/2}.$$ \end{lem} \begin{pf} The existence of such a constant $C$ is given just by the $L^1$ embedding of $H^1$ and trace inequalities. 
Moreover, $C$ is independent of $d$ since both inequalities above are dilation invariant. \end{pf}
In view of the statement of Proposition \ref{covering}, we now deduce a Moser-Trudinger type inequality for small balls, and also for annuli with small internal radius. These inequalities are at the core of the proof of Proposition \ref{mt}, and are contained in the following two lemmas. The first one uses a dilation argument:
\begin{lem} \label{ball} For any $\e>0$ there exists $C=C(\e)>0$ such that
\begin{eqnarray*} (1+\e)\int_{B_p(s)}Q(u_1, u_2) \ dV_g + C & \geq & 4 \pi \left (\log \int_{B_p(s/2)} e^{u_1} dV_g +\log \int_{B_p(s/2)} e^{u_2} dV_g \right) \\ & - & 4 \pi (\bar{u}_1(s) + \bar{u}_2(s) + 4 \log s ), \end{eqnarray*}
for any $u_1, u_2 \in H^1(\Sg)$, $p \in \Sg$, $s>0$ small, where $\bar{u}_i(s)= \displaystyle \fint_{B_p(s)} u_i\, dV_g$. \end{lem}
\begin{pf} For $s > 0$ smaller than the injectivity radius, the result follows easily from Proposition \ref{p:MTbd}, for some $C=C(s, \e)$. Then, we need to prove that the constant $C$ can be taken independent of $s$ as $s \to 0$. Notice that, as $s \to 0$, we consider quantities defined on smaller and smaller geodesic balls $B_q(\varsigma)$ on $\Sg$. Working in normal geodesic coordinates at $q$, gradients, averages and the volume element will resemble the Euclidean ones. If we assume that the metric of $\Sg$ is flat near $q$, we only introduce negligible error terms, which we omit for brevity. To prove the lemma, we simply make a dilation of the pair $(u_1, u_2)$ of the form:
$$ v_i(x)= u_i(s x+ p).$$
From easy computations it follows that:
$$ \int_{B_p(s)} Q(u_1,u_2) \, dV_g= \int_{B(0,1)} Q(v_1, v_2) \, dV_g, $$
$$ \bar{u}_i(s)= \fint_{B(0,1)} v_i \, dV_g, $$
$$ \int_{B_p(s/2)} e^{u_i} \, dV_g = s^{2} \int_{B(0,1/2)} e^{v_i}\, dV_g.$$
Applying Proposition \ref{p:MTbd} to the pair $(v_1, v_2)$, we conclude the proof of the lemma. \end{pf}
The next lemma gives us an estimate of the quadratic form $Q$ on annuli by means of the Kelvin transform. This transformation is indeed very natural in this framework; see Remark \ref{r:kelvin} for a more detailed discussion.
\begin{lem} \label{annulus} Given $\e>0$, there exists a fixed $r_0>0$ (depending only on $\Sg$ and $\e$) satisfying the following property: for any $r \in (0, r_0)$ fixed, there exists $C=C(r, \e)>0$ such that, for any $(u_1,u_2) \in H^1(\Sg) \times H^1(\Sg)$ with $u_i=0$ on $\partial B_p(2 r)$,
$$ \int_{A_p(s/2,2 r)}Q(u_1, u_2) \ dV_g + \varepsilon \int_{B_p(2 r)}Q(u_1, u_2) \ dV_g + C \geq $$$$ 4 \pi \left (\log \int_{A_p(s, r)} e^{u_1} dV_g +\log \int_{A_p(s,r)} e^{u_2} dV_g + (\bar{u}_1(s) + \bar{u}_2(s) + 4 \log s )(1+\e)\right ), $$
for any $p \in \Sg$, $s\in(0,r)$, where $\bar{u}_i(s)= \fint_{B_p(s)} u_i\, dV_g$. \end{lem}
\begin{pf} As in the proof of Lemma \ref{ball}, we need to show that $C$ is independent of $s$ as $s \to 0$. By taking $r_0$ small enough, also here the metric becomes close to the Euclidean one. Reasoning as in the proof of Lemma \ref{ball}, we can then assume that the metric is flat around $p$. We can define the Kelvin transform:
$$ K : A_p(s/2, 2r) \to A_p(s/2, 2r), \qquad K(x)= p+ r s \frac{x-p}{|x-p|^2}.$$
Observe that $K$ maps the interior boundary of $A_p(s/2, 2r)$ onto the exterior one and vice versa, and fixes the circle $\partial B_p(\sqrt{s\, r})$. Let us define the functions $ \hat{u}_i \in H^1(B_p(2r))$ as:
$$\hat{u}_i(x)= \left \{ \begin{array}{ll} u_i(K(x)) - 4 \log |x-p| & \mbox{ if } |x-p| \geq s/2, \\ -4 \log (s/2) & \mbox{ if } |x-p| \leq s/2. \end{array} \right.$$
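Let us note in passing that the two expressions above match on the circle $|x-p| = s/2$: there $|K(x)-p| = 2r$, so $u_i(K(x)) = 0$ by the assumption that $u_i$ vanishes on $\partial B_p(2 r)$, and both formulas give the value $-4 \log (s/2)$. Hence $\hat{u}_i$ is indeed a well-defined $H^1$ function on $B_p(2r)$.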
Our goal is to apply the Moser-Trudinger inequality given by Proposition \ref{p:MTbd} to $(\hat{u}_1, \hat{u}_2)$. In order to do so, let us compute:
\begin{equation} \label{exp} \int_{A_p(s, r)} e^{\hat{u}_i} \, dV_g = \int_{A_p(s, r)} e^{u_i(K(x))} |x-p|^{-4} \, dV_g =\frac{1}{s^2 r^2} \int_{A_p(s, r)} e^{u_i(x)} \, dV_g,\end{equation}
since the Jacobian of $K$ is $J(K(x)) = - r^2 s^2 |x-p|^{-4}$. Moreover, by Lemma \ref{technical}, we have that:
$$ \left | \fint_{B_p(2r)} \hat{u}_i \, dV_g - \fint_{\partial B_p(r)} \hat{u}_i \, dS_x \right | \leq C \left (\int_{B_p(2r)} |\nabla \hat{u}_i| ^2 \, dV_g \right )^{1/2} \leq C + \varepsilon \int_{B_p(2r)} |\nabla \hat{u}_i| ^2 \, dV_g .$$
By using again a change of variables,
$$ \fint_{\partial B_p(r)} \hat{u}_i \, dS_x = \fint_{\partial B_p(s)} u_i \, dS_x - 4 \log r.$$
Therefore,
\begin{equation} \label{media} \left | \fint_{B_p(2r)} \hat{u}_i \, dV_g- \bar{u}_i(s) \right | \leq C + \varepsilon \int_{B_p(2r)} |\n \hat{u}_i| ^2 \, dV_g + \varepsilon \int_{B_p(s)} |\n u_i| ^2 \, dV_g. \end{equation}
Let us now estimate the gradient terms. For $|x-p|\geq s/2$,
$$|\nabla \hat{u}_i(x)|^2 = |\n u_i(K(x))|^2 \frac{s^2 r^2}{|x-p|^4} + \frac{16}{|x-p|^2} + 8 \nabla u_i (K(x)) \cdot \frac{x-p}{|x-p|^4}\, s r.$$
Therefore,
$$\int_{B_p(2r)} |\nabla \hat{u}_i(x)|^2 \, dV_g= \int_{A_p(s/2, 2 r)} |\nabla \hat{u}_i(x)|^2 \, dV_g = \int_{A_p(s/2, 2 r)} |\n u_i(K(x))|^2 \frac{s^2 r^2}{|x-p|^4} \, dV_g +$$$$ 16 \int_{A_p(s/2, 2 r)} \frac{dV_g}{|x-p|^2} + 8 \int_{A_p(s/2, 2 r)} \nabla u_i (K(x)) \cdot \frac{x-p}{|x-p|^4}\ s \, r \, dV_g = $$$$ \int_{A_p(s/2, 2 r)} |\n u_i(x)|^2 \, dV_g + 32 \pi (\log (2r) - \log (s/2)) + 8 \int_{A_p(s/2, 2 r)} \nabla u_i (K(x)) \cdot \frac{K(x)-p}{|K(x)-p|^2} \ \frac{s^2 r^2}{|x-p|^{4}}\, dV_g = $$ $$ \int_{A_p(s/2, 2 r)} |\n u_i(x)|^2 \, dV_g + 32 \pi (\log (2r) - \log (s/2)) + 8 \int_{A_p(s/2, 2 r)} \nabla u_i (x) \cdot \frac{x-p}{|x-p|^{2}} \, dV_g = $$ $$ \int_{A_p(s/2, 2 r)} |\n u_i(x)|^2 \, dV_g + 32 \pi (\log (2r) - \log (s/2)) - 16 \pi \fint_{\partial B_p(s/2)} u_i \, dS_x. $$
In the last equality we have used integration by parts. By using again Lemma \ref{technical},
\begin{equation} \label{dirich} \left | \int_{B_p(2 r)} |\nabla \hat{u}_i(x)|^2 \, dV_g - \int_{A_p(s/2, 2 r)} |\n u_i(x)|^2 \, dV_g + 32 \pi \log s + 16 \pi \bar{u}_i(s) \right | \leq C + \varepsilon \int_{B_p(s)} |\n {u}_i| ^2 \, dV_g.\end{equation}
Regarding the mixed term $\nabla \hat{u}_1 \cdot \nabla \hat{u}_2$, we have that for $|x-p|\geq s/2$,
$$\nabla \hat{u}_1(x) \cdot \nabla \hat{u}_2(x) = \n u_1(K(x)) \cdot \n u_2(K(x)) \frac{s^2 r^2}{|x-p|^4} + \frac{16}{|x-p|^2} + \frac{4sr}{|x-p|^4} \left(\n u_1(K(x)) + \n u_2(K(x))\right)\cdot (x-p).$$
Reasoning as above, we obtain the estimate:
\begin{equation} \label{dirich2} \begin{array}{c} \left | \displaystyle \int_{B_p(2 r)} \nabla \hat{u}_1(x) \cdot \nabla \hat{u}_2(x) \, dV_g - \int_{A_p(s/2, 2 r)} \n u_1(x) \cdot \n u_2(x) \, dV_g + 32 \pi \log s + 8 \pi \bar{u}_1(s) + 8 \pi \bar{u}_2(s) \right | \leq \\ \\ C + \varepsilon \displaystyle \int_{B_p(s)} \left( |\nabla {u}_1| ^2 + |\n {u}_2|^2 \right )\, dV_g. \end{array}\end{equation}
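For the reader's convenience, we record the elementary identities behind the changes of variables used above (here we keep working with the flat metric identification around $p$): since $K(x)-p = s\, r\, \frac{x-p}{|x-p|^2}$, one has
$$ \frac{K(x)-p}{|K(x)-p|^2} = \frac{x-p}{s\, r}, \qquad \quad |J(K(x))| = \frac{s^2 r^2}{|x-p|^4}, $$
and $K$ is an involution of $A_p(s/2, 2r)$ mapping $A_p(s,r)$ onto itself; the substitution $y = K(x)$ then transforms $\int_{A_p(s, r)} e^{u_i(K(x))} |x-p|^{-4} \, dV_g$ into $\frac{1}{s^2 r^2} \int_{A_p(s, r)} e^{u_i}\, dV_g$, as in \eqref{exp}.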
We now apply Proposition \ref{p:MTbd} to $(\hat{u}_1, \hat{u}_2)$ and use the estimates \eqref{exp}, \eqref{media}, \eqref{dirich} and \eqref{dirich2}, to obtain:
$$ 4 \pi \left [ \log \int_{A_p(s,r)} e^{u_1} \, dV_g+ \log \int_{A_p(s,r)} e^{u_2} \, dV_g -(4 \log s + \bar{u}_1(s) + \bar{u}_2(s)) \right ] \leq $$$$ 4 \pi \left [ \log \int_{A_p(s,r)} e^{\hat{u}_1} \, dV_g + \log \int_{A_p(s,r)} e^{\hat{u}_2}\, dV_g - \fint_{B_p(2r)} (\hat{u}_1 + \hat{u}_2)\, dV_g \right ] +$$$$ \varepsilon \int_{B_p(2r)} \left(|\nabla \hat{u}_1|^2 + |\nabla \hat{u}_2|^2 \right) \, dV_g + \varepsilon \int_{B_p(s)} \left ( |\n u_1|^2 + |\n u_2|^2 \right )\, dV_g + C \leq $$ $$ (1+C\e) \int_{B_p(2r)} Q(\hat{u}_1,\hat{u}_2)\, dV_g + \varepsilon \int_{B_p(s)} \left ( |\n u_1|^2 + |\n u_2|^2 \right )\, dV_g +C \leq $$ $$ (1+C\e) \left [ \int_{A_p(s/2,2r)} Q(u_1,u_2)\, dV_g - 8 \pi ( 4 \log s + \bar{u}_1(s) + \bar{u}_2(s) ) \right ] + \varepsilon \int_{B_p(s)} \left( |\nabla u_1|^2 + |\nabla u_2|^2 \right) \, dV_g+C.$$
By renaming $\e$ suitably, we conclude the proof. \end{pf}
\begin{rem} The term $\bar{u}(s) + 2 \log s$ has an easy interpretation: by Jensen's inequality we have the estimate
$$ \log \int_{B_p(s)} e^u \, dV_g = \log \left (|B_p(s)| \fint_{B_p(s)} e^u \, dV_g \right) \geq \bar{u}(s) + 2 \log s - C.$$
\end{rem}
\begin{rem}\label{r:kelvin} The transformation $K$ is used to exploit the geometric properties of the problem, in order to gain as much control as possible on the exponential terms. From the formulas in \cite{jw2} one has that both components of the entire solutions of the Toda system in $\mathbb{R}^2$ decay at infinity at the rate $- 4 \log |x|$. In this way, the Kelvin transform brings these functions to (nearly) constants at the origin, giving a sort of optimization in the Dirichlet part. The minimal value of the Dirichlet energy needed to obtain concentration of volume at a scale $s$ (as in the statement of Lemma \ref{annulus}) is then transformed into a boundary integral which cancels exactly the extra terms in Lemma \ref{ball} due to the $s$-dilation. \end{rem}
\begin{rem} Lemmas \ref{ball} and \ref{annulus}, together with Proposition \ref{covering}, give a precise idea of the proof. Indeed, assume that for some $p\in \Sg$, $\sigma >0$:
\begin{equation} \label{1} \int_{B_{p}(\s)} e^{u_i} dV_g \geq \tau \int_\Sg e^{u_i} dV_g, \ i=1,2;\end{equation}
\begin{equation} \label{2} \int_{B_{p}(R \s)^c} e^{u_i} dV_g \geq \tau \int_\Sg e^{u_i} dV_g, \ i=1,2. \end{equation}
If we sum the inequalities given by Lemmas \ref{ball} and \ref{annulus}, the term $\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \s$ cancels and we deduce the estimate of Proposition \ref{mt}. The problem is that when $\psi \left( \frac{e^{u_1}}{\int_{\Sigma} e^{u_1} dV_g} \right )= \psi \left( \frac{e^{u_2}}{\int_{\Sigma} e^{u_2} dV_g} \right )$ we do not really have \eqref{1}, \eqref{2} around the same point $p$. Moreover, $u_i$ need not vanish on the boundary of a ball, as required in Lemma \ref{annulus}. Some technical work is needed to deal with these difficulties. \end{rem}
We now prove Proposition \ref{mt}. Fix $\e>0$, take $R>1$ (depending only on $\e$) and let $\psi$ be the continuous map given by Proposition \ref{covering}. Fix also $\delta>0$ small (which will depend only on $\e$, too).
Let $u_1$ and $u_2$ be two functions in $H^1(\Sg)$ with $\int_{\Sg} u_{i} \, dV_g =0$, such that: $$\psi \left( \frac{e^{u_1}}{\int_{\Sigma} e^{u_1} dV_g} \right )= \psi \left( \frac{e^{u_2}}{\int_{\Sigma} e^{u_2} dV_g} \right )= (\beta, \sigma) \in \overline{\Sg}_{\delta }. $$ If $\sigma \geq \frac{\delta}{R^2}$, then Proposition \ref{p:imprc} yields the result. Therefore, assume $\sigma < \frac{\delta}{R^2}$; Proposition \ref{covering} implies the existence of $\tau>0$, $p_1,\ p_2 \in \Sg$ satisfying: \begin{equation} \label{dentro} \int_{B_{p_i}(\s)} e^{u_i} dV_g \geq \tau \int_\Sg e^{u_i} dV_g, \ i=1,2;\end{equation} \begin{equation} \int_{B_{p_i}(R \s)^c} e^{u_i} dV_g \geq \tau \int_\Sg e^{u_i} dV_g \ i=1,2; \end{equation} $$d(p_1, p_2) \leq (6R+2) \s; $$ The proof will be divided into two cases: \noindent {\bf CASE 1:} Assume that: \begin{equation} \label{fuera} \int_{A_{p_i}(R\s, \delta)} e^{u_i} dV_g \geq \tau/2 \int_{\Sg} e^{u_i} dV_g.\end{equation} In order to be able to apply Lemma \ref{annulus}, we need to modify our functions outside a certain ball. Choose $k \in \mathbb{N}$, $k \leq 2 \e^{-1}$, such that: $$ \int_{A_{p_1}(2^{k-1} \delta, 2^{k+1} \delta)} \left ( |\n u_1|^2 + |\n u_2|^2 \right )\, dV_g \leq \varepsilon \int_{\Sg} \left (|\n u_1|^2 + |\n u_2|^2 \right )\, dV_g. $$ We define $\tilde{u}_i \in H^1(\Sg)$ by: $$ \left \{ \begin{array}{ll} \tilde{u}_i(x) = u_i(x) & x \in B_{p_1}(2^k \delta), \\ \Delta \tilde{u}_i(x) =0 & x \in A_{p_1}(2^k \delta, 2^{k+1} \delta), \\ \tilde{u}_i(x) = 0 & x \notin B_{p_1}(2^{k+1} \delta). \end{array} \right. $$ Since we plan to apply Lemma \ref{annulus} to $(\tilde{u}_1, \tilde{u}_2)$, we need to choose $\delta$ small enough so that $2^{3\e^{-1}}\delta < r_0$, where $r_0$ is given by that Lemma. It is easy to check, by using Lemma \ref{technical}, that $$ \int_{A_{p_1}(2^{k} \delta, 2^{k+1} \delta)} \left (|\nabla \tilde{u}_1|^2 + |\nabla \tilde{u}_2|^2 \right ) \, dV_g \leq $$$$ C \int_{A_{p_1}(2^{k-1} \delta, 2^{k} \delta)} \left (|\n u_1|^2 + |\n u_2|^2 \right ) \, dV_g \leq C \varepsilon \int_{\Sg} \left (|\n u_1|^2 + |\n u_2|^2 \right )\, dV_g, $$ where $C$ is a universal constant. \noindent {\bf Case 1.1:} $d(p_1, p_2) \leq R^{\frac 12} \s$. By applying Lemma \ref{ball} to $u_i$ for $p=p_1$ and $s= 2 (R^{1/2}+1)\sigma$, and taking into account \eqref{dentro}, we obtain: \begin{equation} \label{dentro11} \begin{array}{c}(1+\e) \displaystyle \int_{B_p(s)}Q(u_1, u_2) \ dV_g + C \geq \\ \\4 \pi \left (\log \displaystyle \int_{B_p(s/2)} e^{u_1} dV_g +\log \int_{B_p(s/2)} e^{u_2} dV_g - (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma ) \right ) \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{\Sg} e^{u_1} dV_g +\log \int_{\Sg} e^{u_2} dV_g - (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )-C \right ),\end{array} \end{equation} where $\bar{u}_i(\s)= \fint_{B_p(\s)} u_i\, dV_g$. We now apply Lemma \ref{annulus} to $\tilde{u}_i$ for $p=p_1$, $s'=4(R^{1/2}+1)\s$ and $r = 2^{k+1}\delta$: \begin{equation} \label{fuera11} \begin{array}{c} \displaystyle \int_{A_p(s'/2,2 r)}Q(\tilde{u}_1, \tilde{u}_2) \ dV_g + \e \int_{\Sg} \left( |\n u_1|^2 + |\n u_2|^2 \right )\, dV_g+ C \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{A_p(s', r)} e^{\tilde{u}_1} dV_g +\log \int_{A_p(s',r)} e^{\tilde{u}_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )(1+\e)\right ). 
\end{array} \end{equation} Taking into account \eqref{fuera}, we conclude: \begin{equation} \label{fuera11-bis} \begin{array}{c} \displaystyle \int_{A_p(s'/2,2 r)}Q(\tilde{u}_1, \tilde{u}_2) \ dV_g + \e \int_{\Sg} \left( |\n u_1|^2 + |\n u_2|^2 \right ) \, dV_g+ C \geq \\ \\4 \pi \left (\log \displaystyle \int_{\Sg} e^{u_1} dV_g +\log \int_{\Sg} e^{{u}_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )(1+\e)\right ).\end{array} \end{equation} Combining \eqref{dentro11} and \eqref{fuera11-bis} we obtain our result (after properly renaming $\e$). \noindent {\bf Case 1.2:} $d(p_1, p_2) \geq R^{\frac 12} \s$ and $\displaystyle \int_{B_{p_1}(R^{\frac 13} \s)} e^{u_2} dV_g \geq \tau/4 \int_\Sg e^{u_2} dV_g$. Here we argue as in Case 1.1: as a first step, we apply Lemma \ref{ball} to $(u_1, u_2)$ for $p=p_1$ and $s = 2 (R^{1/3}+1)\s$. Then, we use Lemma \ref{annulus} with $(\tilde{u}_1, \tilde{u}_2)$ for $p=p_1$, $s'= 4 (R^{1/3}+1)\s$ and $r=2^{k+1} \delta$. \noindent {\bf Case 1.3:} $d(p_1, p_2) \geq R^{\frac 12} \s$ and $\displaystyle \int_{B_{p_2}(R^{\frac 13} \s)} e^{u_1} dV_g \geq \tau/4 \int_\Sg e^{u_1} dV_g$. This case can be treated as in Case 1.2, by just interchanging the indices $1$ and $2$. \noindent {\bf Case 1.4:} $d(p_1, p_2) \geq R^{\frac 12} \s$, $\displaystyle \int_{B_{p_2}(R^{\frac 13} \s)} e^{u_1} dV_g \leq \tau/4 \int_\Sg e^{u_1} dV_g$ and $\displaystyle \int_{B_{p_1}(R^{\frac 13} \s)} e^{u_2} dV_g \leq \tau/4 \int_\Sg e^{u_2} dV_g$. Here we need to use again some harmonic lifting of our functions. Take $n \in \mathbb{N}$, $n \leq 2 \e^{-1}$ so that $$ \sum_{i=1}^2 \int_{A_{p_i}(2^{n-1} \s, 2^{n+1} \sigma )} \left( |\n u_1|^2 + |\n u_2|^2 \right ) \, dV_g \leq \varepsilon \int_{\Sg} \left (|\n u_1|^2 + |\n u_2|^2 \right )\, dV_g, $$ where we have chosen $R$ so that $2^{3\e^{-1}} <R^{1/3}$. We define the function $v_i$ of class $H^1$ by: $$ \left \{ \begin{array}{ll} v_i(x) = u_i(x) & x \in B_{p_1}(2^{n} \s) \cup B_{p_2}(2^{n} \s), \\ \Delta v_i(x) =0 & x \in A_{p_1}(2^{n} \s, 2^{n+1} \s) \cup A_{p_2}(2^{n}\s, 2^{n+1} \s), \\ v_i(x) = \bar{u}_i(\s) & x \notin B_{p_1}(2^{n+1} \s) \cup B_{p_2}(2^{n+1} \s). \end{array} \right. $$ Again, $$ \sum_{i=1}^2 \int_{A_{p_i}(2^{n} \s, 2^{n+1} \s)} \left( |\n v_1|^2 + |\n v_2|^2 \right) \, dV_g \leq $$$$ C \sum_{i=1}^2 \int_{A_{p_i}(2^{n-1} \s, 2^{n+1} \s)} \left (|\n u_1|^2 + |\n u_2|^2 \right ) \, dV_g \leq C \varepsilon \int_{\Sg} \left (|\n u_1|^2 + |\n u_2|^2 \right )\, dV_g, $$ where $C$ is a universal constant. We now apply Lemma \ref{ball} to $(v_1,v_2)$ with $p=p_1$ and $s=2(6R+2)\s$, and take into account \eqref{dentro}: \begin{equation} \label{dentro14} \begin{array}{c} \displaystyle \int_{ B_{p_1}(2^{n} \s) \cup B_{p_2}(2^{n} \s)} Q(u_1,u_2) \, dV_g + C \varepsilon \int_{\Sg} \left ( |\n u_1|^2 + |\n u_2|^2 \right) \, dV_g +C \geq \\ \\ (1+\e)\displaystyle \int_{B_p(s)}Q(v_1, v_2) \ dV_g +C \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{B_p(s/2)} e^{v_1} dV_g +\log \int_{B_p(s/2)} e^{v_2} dV_g - (\bar{u}_1(s) + \bar{u}_2(s) + 4 \log s ) \right ) \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{\Sg} e^{u_1} dV_g +\log \int_{\Sg} e^{u_2} dV_g - (\bar{u}_1(s) + \bar{u}_2(s) + 4 \log s ) \right )-C. 
\end{array} \end{equation} Now, we define $w_i \in H^1(\Sg)$ as: $$ \left \{ \begin{array}{ll} w_i(x) = \bar{u}_i(\s) & x \in B_{p_1}(2^{n} \s) \cup B_{p_2}(2^{n} \s), \\ \Delta w_i(x) =0 & x \in A_{p_1}(2^{n} \s, 2^{n+1} \s) \cup A_{p_2}(2^{n}\s, 2^{n+1} \s), \\ w_i(x) = \tilde{u}_i & x \notin B_{p_1}(2^{n+1} \s) \cup B_{p_2}(2^{n+1} \s). \end{array} \right. $$ As before, $$ \sum_{i=1}^2 \int_{A_{p_i}(2^{n} \s, 2^{n+1} \s)} \left (|\n w_1|^2 + |\n w_2|^2 \right ) \, dV_g \leq $$$$ C \sum_{i=1}^2 \int_{A_{p_i}(2^{n-1} \s, 2^{n+1} \s)} \left (|\n u_1|^2 + |\n u_2|^2 \right ) \, dV_g \leq C \varepsilon \int_{\Sg} \left (|\n u_1|^2 + |\n u_2|^2 \right ) \, dV_g, $$ where also here $C$ is a universal constant. We apply Lemma \ref{annulus} to $(w_1,w_2)$ for any point $p'$ such that $d(p', p_1) = \frac 1 2 R^{1/3}\s$, $s'= \s$ and $r = 2^{k+1} \delta$: \begin{equation} \label{fuera14} \begin{array}{c}\displaystyle \int_{ ( B_{p_1}(2^{n+1} \s) \cup B_{p_2}(2^{n+1} \s))^c} Q(u_1,u_2) \, dV_g + C \varepsilon \int_{\Sg} \left (|\n u_1|^2 + |\n u_2|^2 \right )\, dV_g +C \geq \\ \\ (1+\e)\displaystyle \int_{A_{p'}(s'/2,2 r)}Q(w_1, w_2) \ dV_g +C \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{A_{p'}(s', r)} e^{w_1} dV_g +\log \int_{A_{p'}(s',r)} e^{w_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )(1+\e)\right ). \end{array} \end{equation} Taking into account \eqref{fuera} and the hypothesis of Case 1.4, \begin{equation} \label{fuera14-bis} \begin{array}{c}\displaystyle \int_{ ( B_{p_1}(2^n \s) \cup B_{p_2}(2^n \s))^c} Q(u_1,u_2) \, dV_g + C \varepsilon \int_{\Sg} \left ( |\n u_1|^2 + |\n u_2|^2 \right )\, dV_g +C \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{\Sg} e^{u_1} dV_g +\log \int_{\Sg} e^{u_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )(1+\e)\right ). \end{array} \end{equation} Combining inequalities \eqref{dentro14} and \eqref{fuera14-bis}, we obtain our result. \noindent {\bf CASE 2:} Assume that for some $i=1,2$: $$ \int_{B_{p_i}(\delta)^c} e^{u_i} dV_g \geq \tau/2 \int_{\Sg} e^{u_i} dV_g.$$ Without loss of generality, let us consider $i=1$. Take $\delta'= \frac{\delta}{2^{3/\e}}$. If moreover: $$ \int_{B_{p_2}(\delta')^c} e^{u_2} dV_g \geq \tau/2,$$ then Proposition \ref{p:imprc} implies the desired inequality. So, we can assume that: \begin{equation} \label{hia} \int_{A_{p_2}(R\s, \delta')} e^{u_2} dV_g \geq \tau/2.\end{equation} We now apply the whole procedure of Case 1 to $u_1$, $u_2$, replacing $\delta$ with $\delta'$. For instance, as in Case 1.1, we would get \eqref{dentro11} and \eqref{fuera11}. However, here \eqref{fuera11-bis} does not follow immediately since now we do not know whether: $$ \int_{A_p(s',r)} e^{u_1} dV_g \geq \alpha \int_{\Sg} e^{u_1} dV_g,$$ for some fixed $\alpha>0$. This is needed to estimate: $$ \log \int_{A_p(s', r)} e^{\tilde{u}_1} dV_g \geq \log \int_{\Sg} e^{u_1} dV_g- C,$$ which allows us to obtain \eqref{fuera11-bis}. By applying the Jensen inequality and Lemma \ref{technical}, we get: $$ \log \int_{A_p(s', r)} e^{\tilde{u}_1} dV_g \geq \log \int_{A_p(r/8, r/4)} e^{u_1} dV_g\geq $$ $$ \log \fint_{A_p(r/8, r/4)} e^{u_1} dV_g-C \geq \fint_{A_p(r/8, r/4)} u_1 dV_g-C \geq -\varepsilon \int_{\Sg} |\n u_1|^2\, dV_g - C. 
$$ Therefore, from \eqref{hia} and \eqref{fuera11} we get:
\begin{equation} \label{fuera2} \begin{array}{c} \displaystyle \int_{A_p(s'/2,2 r)}Q(\tilde{u}_1, \tilde{u}_2) \ dV_g + C\varepsilon \int_{\Sg} \left (|\n u_1|^2 + |\n u_2|^2 \right )\, dV_g+ C \geq \\ \\4 \pi \left (\log \displaystyle \int_{\Sg} e^{{u}_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )(1+\e)\right ).\end{array} \end{equation}
Now we apply Proposition \ref{p:MTbd} to find:
$$ (1+C\e) \int_{B_{p_1}(\delta/2)^c} Q(u_1, u_2) dV_g +C \geq 4 \pi \left ( \log \int_{B_{p_1}(\delta)^c} e^{u_1} dV_g + \log \int_{B_{p_1}(\delta)^c} e^{u_2} dV_g \right ). $$
Here again we can use Jensen's inequality and the hypothesis of Case 2 to deduce:
\begin{equation} \label{fuera2-bis} \begin{array}{c} \displaystyle \int_{B_{p_1}(\delta/2)^c} Q(u_1, u_2) dV_g + C\varepsilon \int_{\Sg} \left (|\n u_1|^2 + |\n u_2|^2 \right )\, dV_g+ C\geq \\ \\4 \pi \left ( \log \displaystyle \int_{\Sg} e^{u_1} dV_g \right ). \end{array}\end{equation}
We now conclude by combining \eqref{fuera2-bis}, \eqref{fuera2} and \eqref{dentro11}. The same argument applies under the conditions of Cases 1.2, 1.3 or 1.4.
\begin{rem}\label{puffff} The improved inequality in Proposition \ref{mt} is consistent with the asymptotic analysis in \cite{jlw}. There the authors prove that when both $u_1$ and $u_2$ blow up at the same rate at the same point, then the corresponding quantization of conformal volume is $(8\pi, 8\pi)$. On the other hand, when the blow-up rates are different but the blow-up occurs at the same point, then the quantization values are $(4\pi, 8\pi)$ or $(8\pi,4\pi)$. \end{rem}
\section{Min-max scheme}\label{s:4}
\noindent Let $\overline{\Sg}_{\delta }$ be as in \eqref{cono}, and let us set
\begin{equation}\label{eq:DX} \overline{D}_{\delta } = diag(\overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta }) := \left\{ (\vartheta_1, \vartheta_2) \in \overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta } \; : \; \vartheta_1 = \vartheta_2 \right\}; \qquad \quad X = \left(\overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta }\right) \setminus \overline{D}_{\delta }. \end{equation}
Let $\varepsilon > 0$ be such that $\rho_i + \varepsilon < 8 \pi$ for $i = 1, 2$, and let $R, \delta , \psi$ be as in Proposition \ref{covering}. Consider then the map $\Psi : H^1(\Sg) \times H^1(\Sg) \to \overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta }$ defined in the following way:
\begin{equation}\label{eq:Psi} \Psi(u_1, u_2) = \left( \psi \left( \frac{e^{u_1}}{\int_{\Sg} e^{u_1} dV_g} \right), \psi \left( \frac{e^{u_2}}{\int_{\Sg} e^{u_2} dV_g} \right) \right). \end{equation}
By Proposition \ref{mt}, and since $C \geq h_i(x) \geq \frac{1}{C}>0$ for any $x \in \Sg$, we have that $J_\rho(u_1, u_2)$ is bounded from below on the set of couples $(u_1, u_2)$ such that $\Psi(u_1, u_2) \in \overline{D}_{\delta }$. Therefore, there exists a large $L > 0$ such that
\begin{equation}\label{eq:psilow} J_\rho(u_1, u_2) \leq - L \quad \Rightarrow \quad \Psi(u_1, u_2) \in X. \end{equation}
\ \noindent By our definition of $\overline{\Sg}_{\delta }$, the set $X$ is not compact: however, it retracts onto a compact subset $\mathcal{X}_\nu$, as is shown in the next result.
\begin{lem}\label{l:retr} For $\nu \ll \delta $, define $$ \mathcal{X}_{\nu,1} = \left\{ \left( (x_1, t_1), (x_2, t_2) \right) \in X \; : \; \left| t_1 - t_2 \right|^2 + d(x_1, x_2)^2 \geq \delta ^4, \max\{t_1, t_2\} < \delta , \min\{t_1, t_2\} \in \left[ \nu^2, \nu \right] \right\}; $$ $$ \mathcal{X}_{\nu,2} = \left\{ \left( (x_1, t_1), (x_2, t_2) \right) \in X \; : \; \max\{t_1, t_2\} = \delta , \min\{t_1, t_2\} \in \left[ \nu^2, \nu \right] \right\}, $$ and set \begin{equation}\label{eq:retrx} \mathcal{X}_{\nu} = \left( \mathcal{X}_{\nu,1} \cup \mathcal{X}_{\nu,2} \right) \subseteq X. \end{equation} Then there is a retraction $R_{\nu}$ of $X$ onto $\mathcal{X}_{\nu}$. \end{lem} \begin{pf} We proceed in two steps. First, we define a deformation of $X$ in itself satisfying that: \begin{enumerate} \label{caracola} \item[a)] either $\max\{t_1, t_2\} < \delta $ and $\left| t_1 - t_2 \right|^2 + d(x_1, x_2)^2 \geq \delta ^4$, \item[b)] or $\max\{t_1, t_2\} = \delta $. \end{enumerate} Then another deformation will provide us with the condition $\min\{t_1, t_2\} \in \left[ \nu^2, \nu \right]$. Let us consider the following ODE in $(\Sigma\times (0,\delta])^2$: $$ \frac{d}{ds} \left( \begin{array}{c} x_1(s) \\ t_1(s) \\ x_2(s) \\ t_2(s) \\ \end{array} \right) = \left( \begin{array}{c} (\delta - \max_i \{t_i(s)\}) \n_{x_1} d(x_1(s), x_2(s))^2 \\ (t_1(s) - t_2(s)) t_1(s) (\delta - t_1(s)) \\ (\delta - \max_i \{t_i(s)\}) \n_{x_2} d(x_1(s), x_2(s))^2 \\ (t_2(s) - t_1(s)) t_2(s) (\delta - t_2(s)) \\ \end{array} \right). $$ Notice that if $\left| t_1 - t_2 \right|^2 + d(x_1, x_2) < \delta ^4$ (and $\max\{t_1, t_2\} < \delta $) then $d(x_1, x_2)$ is small so $d(x_1, x_2)^2$ is a smooth function on $(\Sigma\times \mathbb{R})^2$, and the above vector field is well defined. For each initial datum $(\vartheta_1, \vartheta_2) \in X$ we define $s_{\vartheta_1, \vartheta_2}\geq 0$ as the smallest value of $s$ for which the above flow satisfies either a) or b). To define the first homotopy $H_1(s,\cdot)$ then one can use the above flow, rescaling in the evolution variable (depending on the initial datum) as $s \mapsto \tilde{s} = s_{\vartheta_1, \vartheta_2}s$. To define the second homotopy, we introduce two cutoff functions $\chi_1, \chi_2$: $$ \begin{array}{ll} \left\{ \begin{array}{ll} \chi_1(t) = 1 & \hbox{ for } t \leq \nu^2. \\ \chi \mbox{ is non increasing, } & \\ \chi_1(t) = -1& \hbox{ for } t \geq \nu, \end{array} \right. & \left\{ \begin{array}{ll} \chi_2(t) = 1 & \hbox{ for } t \leq \delta/2, \\ \chi_2(t)= 2 \left( 1-\frac{t}{\delta}\right) & t \in (\delta/2, \delta), \\ \chi_2(t) = 0& \hbox{ for } t \geq \delta, \end{array} \right. \end{array} $$ and consider the following ODE $$ \frac{d}{ds} \left( \begin{array}{c} t_1(s) \\ t_2(s) \\ \end{array} \right) = \left( \begin{array}{c} \chi_1(\min_i\{t_i(s)\}) \chi_2(t_1(s)) \\ \chi_1(\min_i\{t_i(s)\}) \chi_2(t_2(s)) \end{array} \right). $$ As in the previous case, there exists $\hat{s}_{\vartheta_1, \vartheta_2}$ such that the condition $\min_i t_i \in [\nu^2, \nu]$ is reached for $s = \hat{s}_{\vartheta_1, \vartheta_2}$, and one can define the homotopy $H_2$ rescaling in $s$ correspondingly. Observe that along the homotopy $H_2$ the distance $| t_1 - t_2 |$ is non decreasing if $|t_1-t_2| \leq \delta /4$. The concatenation of the homotopies $H_1$ and $H_2$ gives the desired conclusion. Note that both $H_1$ and $H_2$, by the way they are constructed, preserve the quotient relations in the definition of $X$. 
\end{pf} \ \noindent We next construct a family of test functions parameterized by $\mathcal{X}_{\nu}$ on which $J_{\rho}$ attains large negative values. For $(\vartheta_1, \vartheta_2) = \left( (x_1, t_1), (x_2, t_2) \right) \in \mathcal{X}_\nu$ define \begin{equation}\label{eq:test} \var_{(\vartheta_1, \vartheta_2)}(y) = \left( \var_1(y), \var_2(y) \right), \end{equation} where we have set \begin{equation}\label{eq:varvar12} \var_1(y) = \log \frac{1 + \tilde{t}_2^2 d(x_2,y)^2}{\left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right)^2}, \qquad \qquad \var_2(y) = \log \frac{1 + \tilde{t}_1^2 d(x_1,y)^2}{\left( 1 + \tilde{t}_2^2 d(x_2,y)^2 \right)^2}, \end{equation} with \begin{equation}\label{eq:tti} \tilde{t}_1 = \tilde{t}_1(t_1) = \left\{ \begin{array}{ll} \frac{1}{t_1}, & \hbox{ for } t_1 \leq \frac{\delta }{2}, \\ - \frac{4}{\delta ^2} (t_1-\delta ) & \hbox{ for } t_1 \geq \frac{\delta }{2}; \end{array} \right. \qquad \tilde{t}_2 = \tilde{t}_2(t_2) = \left\{ \begin{array}{ll} \frac{1}{t_2}, & \hbox{ for } t_2 \leq \frac{\delta }{2}, \\ - \frac{4}{\delta ^2} (t_2-\delta ) & \hbox{ for } t_2 \geq \frac{\delta }{2}. \end{array} \right. \end{equation} Notice that, by our choices of $\tilde{t}_1, \tilde{t}_2$, this map is well defined on $\mathcal{X}_\nu$ (especially for what concerns the identifications in $\overline{\Sg}_\delta $). We have then the following result. \begin{lem}\label{l:integrals} For $\nu$ sufficiently small, and for $(\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}$, there exists a constant $C=C(\delta ,\Sg) > 0$, depending only on $\Sg$ and $\delta $, such that \begin{equation}\label{eq:inttot} \frac{1}{C} \frac{t_i^2}{t_j^2} \leq \int_\Sg e^{\var_i(y)} dV_g(y) \leq C \frac{t_i^2}{t_j^2}, \qquad \quad i \neq j; \end{equation} \end{lem} \begin{pf} First, we notice that by an elementary change of variables \begin{equation}\label{eq:intfalt} \int_{\mathbb{R}^2} \frac{1}{\left( 1 + \l^2 |x|^2 \right)^2} dx = \frac{C_0}{\l^2}; \qquad \quad \lambda > 0 \end{equation} for some fixed positive constant $C_0$. We distinguish next the two cases \begin{equation}\label{eq:alt1} |t_1 - t_2| \geq \delta ^3 \qquad \quad \hbox{ and } \qquad \quad |t_1 - t_2| < \delta ^3. \end{equation} In the first alternative, by the definition of $\mathcal{X}_{\nu}$ and by the fact that $\nu \ll \delta $, one of the $t_i$'s belongs to $[\nu^2, \nu]$, while the other is greater or equal to $\frac{\delta ^3}{2}$. If $t_1 \in [\nu^2, \nu]$ and if $t_2 \geq \frac{\delta ^3}{2}$ then the function $1 + \tilde{t}_2^2 d(x_2,y)^2$ is bounded above and below by two positive constants depending only on $\Sg$ and $\delta $. Therefore, working in geodesic normal coordinates centered at $x_1$ and using \eqref{eq:intfalt} we obtain $$ \frac{t_1^2}{C} \leq \frac{1}{C \tilde{t}_1^2} \leq \int_\Sg e^{\var_1(y)} dV_g(y) \leq \frac{C}{\tilde{t}_1^2} \leq C t_1^2. $$ If instead $t_2 \in [\nu^2, \nu]$ and if $t_1 \geq \frac{\delta ^3}{2}$ then the function $1 + \tilde{t}_1^2 d(x_1,y)^2$ is bounded above and below by two positive constants depending only on $\Sg$ and $\delta $, hence one finds $$ \int_\Sg e^{\var_1(y)} dV_g(y) \geq \frac{1}{C} \int_\Sigma(1 + \tilde{t}_2^2 d(x_2,y)^2) dV_g(y) \geq \frac{\tilde{t}_2^2}{C} = \frac{1}{C t_2^2}, $$ and similarly $$ \int_\Sg e^{\var_1(y)} dV_g(y) \leq C \int_\Sigma(1 + \tilde{t}_2^2 d(x_2,y)^2) dV_g(y) \leq C \tilde{t}_2^2 = \frac{C}{t_2^2}. $$ In both the last two cases we then obtain the conclusion. 
Suppose now that $|t_1 - t_2| < \delta ^3$: then by the definition of $\mathcal{X}_{\nu}$ we have that $d(x_1, x_2) \geq \frac{\delta ^2}{2}$ and that $t_1, t_2 \leq \nu + \delta ^3$. Then, from \eqref{eq:intfalt} and some elementary estimates we derive $$ \int_\Sg e^{\var_1(y)} dV_g(y) \geq \int_{B_{x_1}(\delta^3)} e^{\var_1(y)} dV_g(y) \geq \frac{1}{C} \frac{1 + \tilde{t}_2^2 d(x_1,x_2)^2}{\tilde{t}_1^2} \geq \frac{1}{C} \frac{t_1^2}{t_2^2}. $$ By the same argument we obtain $$ \int_{B_{x_1}(\delta^3)} e^{\var_1(y)} dV_g(y) \leq C \frac{1 + \tilde{t}_2^2 d(x_1,x_2)^2}{\tilde{t}_1^2} \leq C \frac{t_1^2}{t_2^2}. $$ Moreover, we have $$ \int_{(B_{x_1}(\delta^3))^c} e^{\var_1(y)} dV_g(y) \leq \frac{C}{\tilde{t}_1^4} \int_{(B_{x_1}(\delta^3))^c} (1 + \tilde{t}_2^2 d(x_2,y)^2) dV_g(y) \leq C \frac{t_1^4}{t_2^2}. $$ This concludes the proof. \end{pf} \begin{lem}\label{l:dsmallIlow} For $(\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}$, let $\var_{(\vartheta_1, \vartheta_2)}$ be defined as in the above formula. Then $$ J_{\rho}(\var_{(\vartheta_1, \vartheta_2)}) \to - \infty \quad \hbox{ as } \nu \to 0 \qquad \quad \hbox{ uniformly for } (\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}. $$ \end{lem} \begin{pf} The statement follows from Lemma \ref{l:integrals} once the following three estimates are shown \begin{equation}\label{eq:estQ} \int_{\Sg} Q\left( \var_{(\vartheta_1, \vartheta_2)} \right) dV_g \leq 8 \pi (1 + o_{\delta }(1)) \log \frac{1}{t_1} + 8 \pi (1 + o_{\delta }(1)) \log \frac{1}{t_2}; \end{equation} \begin{equation}\label{eq:estlin1} \fint_{\Sg} \var_1 dV_g = 4 (1 + o_{{\delta }}(1)) \log t_1 - 2 (1 + o_{{\delta }}(1)) \log t_2; \end{equation} \begin{equation}\label{eq:estlin2} \fint_{\Sg} \var_2 dV_g = 4 (1 + o_{{\delta }}(1)) \log t_2 - 2 (1 + o_{{\delta }}(1)) \log t_1. \end{equation} In fact, these yield the inequality $$ J_{\rho}(\var_{(\vartheta_1, \vartheta_2)}) \leq (2 \rho_1 - 8 \pi + o_\delta (1)) \log t_1 + (2 \rho_2 - 8 \pi + o_\delta (1)) \log t_2 \to - \infty \qquad \quad \hbox{ as } \nu \to 0 $$ uniformly for $(\vartheta_1, \vartheta_2) \in \mathcal{X}_\nu$, since $\rho_1, \rho_2 > 4 \pi$. Here again we are using that $C \geq h_i(x) \geq \frac{1}{C}>0$ for any $x \in \Sg$. We begin by showing \eqref{eq:estlin1}, whose proof clearly also yields \eqref{eq:estlin2}. It is convenient to write $$ \var_1 = \log \left(1 + \tilde{t}_2^2 d(x_2,y)^2 \right) - 2 \log \left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right), $$ and to divide $\Sg$ into the two subsets $$ \mathcal{A}_1 = B_{x_1}(\delta ) \cup B_{x_2}(\delta ); \qquad \qquad \mathcal{A}_2 = \Sigma\setminus \mathcal{A}_1. $$ For $y \in \mathcal{A}_2$ we have that $$ \frac{1}{C_{\delta ,\Sg} t_1^2} \leq 1 + \tilde{t}_1^2 d(x_1,y)^2 \leq \frac{C_{\delta ,\Sg}}{t_1^2}; \qquad \qquad \frac{1}{C_{\delta ,\Sg} t_2^2} \leq 1 + \tilde{t}_2^2 d(x_2,y)^2 \leq \frac{C_{\delta ,\Sg}}{t_2^2}, $$ which implies \begin{equation}\label{eq:int1111} \frac{1}{|\Sg|} \int_{\mathcal{A}_2} \var_1 dV_g = 4 (1 + o_{{\delta }}(1)) \log t_1 - 2 (1 + o_{{\delta }}(1)) \log t_2. \end{equation} On the other hand, working in normal geodesic coordinates at $x_i$ one also finds $$ \int_{B_{\delta }(x_i)} \log \left( 1 + \tilde{t}_i^2 d(x_i,y)^2 \right) dV_g = o_\delta (1) \log t_i. $$ Using \eqref{eq:int1111} and the last formula we then obtain \eqref{eq:estlin1}. Let us now show \eqref{eq:estQ}. 
We clearly have that \begin{eqnarray*} \nabla \var_1 & = & \nabla \log \left( 1 + \tilde{t}_2^2 d(x_2,y)^2 \right) - 2 \nabla \log \left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right) \\ & = & \frac{2 \tilde{t}_2^2 d(x_2,y) \n_y d(x_2,y)}{1 + \tilde{t}_2^2 d(x_2,y)^2} - \frac{4 \tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2}, \end{eqnarray*} and similarly \begin{eqnarray*} \nabla \var_2 & = & \nabla \log \left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right) - 2 \nabla \log \left( 1 + \tilde{t}_2^2 d(x_2,y)^2 \right) \\ & = & \frac{2 \tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} - \frac{4 \tilde{t}_2^2 d(x_2,y) \n_y d(x_2,y)}{1 + \tilde{t}_2^2 d(x_2,y)^2}. \end{eqnarray*} From now on we will assume, without loss of generality, that $t_1 \leq t_2$. We distinguish between the case $t_2 \geq \delta ^3$ and $t_2 \leq \delta ^3$. In the first case the function $1 + \tilde{t}_2^2 d(x_2,y)^2$ is uniformly Lipschitz with bounds depending only on $\delta $, and therefore we can write that $$ \nabla \var_1 = - \frac{4 \tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} + O_\delta (1); \qquad \quad \nabla \var_2 = \frac{2 \tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} + O_\delta (1). $$ Given a large but fixed constant $C_1 > 0$, we divide the surface $\Sg$ into the three regions \begin{equation}\label{eq:3regions} \mathcal{B}_1 = B_{x_1}(C_1 t_1); \qquad \quad \mathcal{B}_2 = B_{x_2}(C_1 t_2); \qquad \quad \mathcal{B}_3 = \Sigma\setminus (\mathcal{B}_1 \cup \mathcal{B}_2). \end{equation} In $\mathcal{B}_1$ we have that $|\nabla \var_i| \leq {C}{\tilde{t}_1}$, while \begin{equation}\label{eq:sim1} \frac{\tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} = (1 + o_{C_1}(1)) \frac{ \n_y d(x_1,y)}{d(x_1,y)} \qquad \quad \hbox{ in } \Sigma\setminus \mathcal{B}_1. \end{equation} The last gradient estimates imply that \begin{eqnarray}\label{eq:estQt2large} \nonumber \int_{\Sg} Q(\var_{(\vartheta_1, \vartheta_2)}) dV_g & = & \int_{\Sigma\setminus \mathcal{B}_1} Q(\var_{(\vartheta_1, \vartheta_2)}) dV_g + o_{\delta }(1) \log \frac{1}{t_1} + O_\delta (1) \\ & = & 8 \pi \int_{C_1 t_1}^1 \frac{dt}{t} + o_{\delta }(1) \log \frac{1}{t_1} + O_\delta (1) \\ & = & 8 \pi (1 + o_{\delta }(1)) \log \frac{1}{t_1} + 8 \pi (1 + o_{\delta }(1)) \log \frac{1}{t_2} + O_\delta (1); \qquad \quad t_2 \geq \delta ^3. \nonumber \end{eqnarray} Assume now that $t_2 \leq \delta _3$. Then by the definition of $\mathcal{X}_{\nu}$ we have that $d(x_1, x_2) \geq \frac{\delta ^2}{2}$, and therefore $\mathcal{B}_1 \cap \mathcal{B}_2 = \emptyset$. Similarly to \eqref{eq:sim1} we find $$ \left\{ \begin{array}{ll} \frac{\tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} = (1 + o_{C_1}(1)) \frac{ \n_y d(x_1,y)}{d(x_1,y)}; & \\ \frac{\tilde{t}_2^2 d(x_2,y) \n_y d(x_2,y)}{1 + \tilde{t}_2^2 d(x_2,y)^2} = (1 + o_{C_1}(1)) \frac{ \n_y d(x_2,y)}{d(x_2,y)} & \end{array} \right. \qquad \quad \hbox{ in } \mathcal{B}_3. $$ Moreover we have the estimates $$ \left | \nabla \var_i \right |\leq {C}{\tilde{t}_i} \quad \hbox{ in } \mathcal{B}_i, \ i=1,\ 2; \qquad \qquad \left |\nabla \var_i \right | \leq C \quad \hbox{ in } \mathcal{B}_j, \ i \neq j. 
$$ Then, there follows: \begin{eqnarray}\label{eq:estQt2small} \nonumber \int_{\Sg} Q(\var_{(\vartheta_1, \vartheta_2)}) dV_g & = & \int_{\mathcal{B}_3} Q(\var_{(\vartheta_1, \vartheta_2)}) dV_g + o_{\delta }(1) \log \frac{1}{t_1} + o_{\delta }(1) \log \frac{1}{t_2} + O_\delta (1) \\ & = & 8 \pi (1 + o_{\delta }(1)) \log \frac{1}{t_1} + 8 \pi (1 + o_{\delta }(1)) \log \frac{1}{t_2} + O_\delta (1); \qquad \quad t_2 \leq \delta ^3. \end{eqnarray} With formulas \eqref{eq:estQt2large} and \eqref{eq:estQt2small}, we conclude the proof of \eqref{eq:estQ} and hence that of the lemma. \end{pf} \ \noindent Since the functional $J_\rho$ attains large negative values on the above test functions $\var_{(\vartheta_1, \vartheta_2)}$, these are mapped to $X$ by $\Psi$. We next evaluate the image of $\Psi$ with more precision, beginning with the following technical lemma. \begin{lem}\label{l:concscale} Let $\var_1, \var_2$ be as in \eqref{eq:varvar12}: then, for some $C=C(\delta ,\Sg)>0$, the following estimates hold uniformly in $(\vartheta_1, \vartheta_2) \in \mathcal{X}_\nu$: \begin{equation}\label{eq:noconct1d} \sup_{x \in \Sg} \int_{B_x(r t_i)} e^{\var_i} dV_g \leq C r^2 \frac{t_i^2}{t_j^2} \qquad \quad \forall r >0,\ i \neq j. \end{equation} Moreover, given any $\varepsilon > 0$ there exists $C=C(\e, \delta , \Sg)$, depending only on $\e$, $\delta $ and $\Sg$ (but not on $\nu$), such that \begin{equation}\label{eq:noconct1d3} \int_{B_{x_i}(C t_i)} e^{\var_i} dV_g \geq (1 - \e) \int_{\Sg} e^{\var_i(y)} dV_g, \ i=1,\ 2. \end{equation} uniformly in $(\vartheta_1, \vartheta_2) \in \mathcal{X}_\nu$. \end{lem} \begin{pf} We prove the case $i=1$. Observe that $1 + \tilde{t}_2^2 d(x_2,y)^2 \leq \frac{C}{t_2^2}$ and that $1 + \tilde{t}_1^2 d(x_1,y)^2 \geq 1$. Therefore we immediately find $$ \int_{B_x(t_1 r)} e^{\var_1} dV_g \leq \frac{C}{t_2^2} \int_{B_x(t_1 r)} \frac{1}{\left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right)^2} dV_g(y) \leq C r^2 \frac{t_1^2}{t_2^2} \qquad \quad \hbox{ for all } x \in \Sg, $$ which gives the first inequality in \eqref{eq:noconct1d}. We now show \eqref{eq:noconct1d3}, by evaluating the integral in the complement of $B_{x_1}(R t_1)$ for some large $R$. Using again the fact that $1 + \tilde{t}_2^2 d(x_2,y)^2 \leq \frac{C}{t_2^2}$ we clearly have that \begin{equation}\label{eq:miao} \int_{\Sigma\setminus B_{x_1}(R t_1)} e^{\var_1(y)} dV_g(y) \leq \frac{C}{t_2^2} \int_{\Sigma\setminus B_{x_1}(R t_1)} \frac{1}{\left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right)^2} dV_g(y). \end{equation} To evaluate the last integral one can use normal geodesic coordinates centered at $x_1$ and \eqref{eq:intfalt} with a change of variable to find that $$ \lim_{t_1 \to 0^+} t_1^{-2} \int_{\Sigma\setminus B_{x_1}(R t_1)} \frac{1}{\left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right)^2} dV_g = o_R(1) \qquad \quad \hbox{ as } R \to + \infty. $$ This and \eqref{eq:miao}, jointly with the second inequality in \eqref{eq:inttot}, conclude the proof of the \eqref{eq:noconct1d3}, by choosing $R$ sufficiently large, depending on $\e, \delta $ and $\Sg$. \end{pf} \ \noindent We next show that, parameterizing the test functions on $\mathcal{X}_{\nu}$ and composing with $R_{\nu} \circ \Psi$, we obtain a map homotopic to the identity on $\mathcal{X}_{\nu}$. This step will be fundamental for us in order to run the variational scheme later in this section. 
\begin{lem}\label{l:homid} Let $L > 0$ be so large that $\Psi(\{ J_\rho \leq - L \}) \in X$, and let $\nu$ be so small that $J_\rho(\var_{(\vartheta_1, \vartheta_2)}) < - L$ for $(\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}$ (see Lemma \ref{l:dsmallIlow}). Let $R_{\nu}$ be the retraction given in Lemma \ref{l:retr}. Then the map from $T_\nu : \mathcal{X}_{\nu} \to \mathcal{X}_{\nu}$ defined as $$ T_\nu((\vartheta_1, \vartheta_2)) = R_{\nu} (\Psi(\var_{(\vartheta_1, \vartheta_2)})) $$ is homotopic to the identity on $\mathcal{X}_{\nu}$. \end{lem} \begin{pf} Let us denote $\vartheta_i= (x_i, t_i)$, $$f_i= \frac{e^{\var_i}}{\int_{\Sg} e^{\var_i} dV_g}, \quad \psi (f_i)=(\beta_i, \s_i),$$ where $\psi$ is given in Proposition \ref{covering}. First, we claim that there is a constant $C=C(\delta ,\Sg)>0$, depending only on $\Sg$ and $\delta $, such that: \begin{equation}\label{eq:betatest} \frac{1}{C} \leq \frac{\s_i}{t_i} \leq C, \qquad \qquad d \left( \b_i , x_i \right) \leq C t_i. \end{equation} By \eqref{eq:noconct1d3}, we have that $$ \s\left(x_i, f_i \right) \leq C t_i,$$ where $\s(x,f)$ is the continuous map defined in \eqref{sigmax}. From that, we get that $\s_i \leq C t_i$. Using now \eqref{eq:noconct1d}, we get the relation $t_i \leq C \s_i$. Taking into account that $\s(x_i, f) \leq C t_i$ and \eqref{dett}, we obtain that $$d \left(x_i, S\left(f_i \right) \right )\leq C t_i,$$ where $S(f)$ is the set defined in \eqref{defS}. But since the inequality $$d\left(\b_i, S\left(f_i \right) \right) \leq C \s_i$$ is always satisfied, we conclude the proof of \eqref{eq:betatest}. We are now ready to prove the lemma. Let us define a first deformation $H_1$ in the following form: $$ \left( \left( \begin{array}{c} (\b_1, \s_1) \\ (\b_2, \s_2) \\ \end{array} \right), s \right) \;\; \stackrel{\small H_1}{\longmapsto} \;\; \left( \begin{array}{c} \left( \b_1,\ (1-s) \s_1 + s \kappa_1 \right) \\ \\ \left( \b_2,\ (1-s) \s_2 + s \kappa_2 \right) \end{array} \right), $$ where $\kappa_i= \min \{ \delta, \frac{\s_i}{\sqrt{\nu}} \}$. A second deformation $H_2$ is defined in the following way: $$ \left( \left( \begin{array}{c} (\b_1, \kappa_1) \\ (\b_2, \kappa_2) \\ \end{array} \right), s \right) \;\; \stackrel{\small H_2}{\longmapsto} \;\; \left( \begin{array}{c} \left( (1-s)\b_1 + s x_1, \ \kappa_1 \right) \\ \\ \left( (1-s)\b_2 + s x_2,\ \kappa_2 \right) \end{array} \right), $$ where $(1-s)\b_i + s x_i$ stands for the geodesic joining $\beta_i$ and $x_i$ in unit time. A comment is needed here. If $\kappa_i < \delta$, then we have that $\s_i < \sqrt{\nu} \delta$. By choosing $\nu$ small enough, this implies that $\beta_i$ and $x_i$ are close to each other (recall \eqref{eq:betatest}). Instead, if $\kappa_i= \delta$, the identification in $\overline{\Sg}_\delta $ makes the above deformation trivial. We also use a third deformation $H_3$: $$ \left( \left( \begin{array}{c} (x_1, \kappa_1) \\ (x_2, \kappa_2) \\ \end{array} \right), s \right) \;\; \stackrel{\small H_3}{\longmapsto} \;\; \left( \begin{array}{c} \left( x_1,\ (1-s) \kappa_1 + s t_1 \right) \\ \\ \left( x_2,\ (1-s) \kappa_2 + s t_2 \right) \end{array} \right). $$ We define $H$ as the concatenation of those three homotopies. Then, $$ ((\vartheta_1, \vartheta_2), s) \mapsto R_{\nu} \circ H(\Psi(\var_{(\vartheta_1, \vartheta_2)}),s)$$ gives us the desired homotopy to the identity. Observe that, since $\nu \ll \delta $, $H(\Psi(\var_{(\vartheta_1, \vartheta_2)}),s)$ always stays in $X$, so that $R_{\nu}$ can be applied. 
\end{pf} \ \noindent We now introduce the variational scheme which yields existence of solutions: this remaining part follows the ideas of \cite{djlw} (see also \cite{mal}). Let $\overline{\mathcal{X}}_{\nu}$ denote the (contractible) cone over $\mathcal{X}_{\nu}$, which can be represented as $$ \overline{\mathcal{X}}_{\nu} = \left( \mathcal{X}_{\nu} \times [0,1] \right)|_{\sim}, $$ where the equivalence relation $\sim$ identifies $\mathcal{X}_{\nu} \times \{1\}$ to a single point. We choose $L > 0$ so large that \eqref{eq:psilow} holds, and then $\nu$ so small that $$ J_{\rho}(\var_{(\vartheta_1, \vartheta_2)}) \leq - 4L \qquad \quad \hbox{ uniformly for } (\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}, $$ the last claim being possible by Lemma \ref{l:dsmallIlow}. Fixing this value of $\nu$, consider the following class of functions \begin{equation}\label{eq:PiPi} \Gamma = \left\{ \eta : \overline{\mathcal{X}}_{\nu} \to H^1(\Sg) \; : \; \eta \hbox{ is continuous and } \eta(\cdot \times \{0\}) = \var_{(\vartheta_1, \vartheta_2)} \hbox{ on } \mathcal{X}_{\nu} \right\}. \end{equation} Then we have the following properties. \begin{lem}\label{l:min-max} The set $\Gamma$ is non-empty and moreover, letting $$ \alpha = \inf_{\eta \in \Gamma} \; \sup_{m \in \overline{\mathcal{X}}_{\nu}} J_\rho(\eta(m)), \qquad \hbox{ one has } \qquad \alpha > - 2 L. $$ \end{lem} \begin{pf} To prove that $\Gamma \neq \emptyset$, we just notice that the map \begin{equation}\label{eq:ovPi} \tilde{\eta}(\vartheta,s) = s \var_{(\vartheta_1, \vartheta_2)}, \qquad \qquad (\vartheta,s) \in \overline{\mathcal{X}}_{\nu}, \end{equation} belongs to $\Gamma$. Suppose by contradiction that $\alpha \leq - 2L$: then there would exist a map $\eta \in \Gamma$ satisfying the condition $\sup_{m \in \overline{\mathcal{X}}_{\nu}} J_\rho(\eta(m)) \leq - L$. Then, since Lemma \ref{l:homid} applies, writing $m = (\vartheta, s)$, with $\vartheta \in \mathcal{X}_{\nu}$, the map $$ s \mapsto R_{\nu} \circ \Psi \circ \eta(\cdot,s) $$ would be a homotopy in $\mathcal{X}_{\nu}$ between $R_{\nu} \circ \Psi \circ \var_{(\vartheta_1, \vartheta_2)}$ and a constant map. But this is impossible since $\mathcal{X}_{\nu}$ is non-contractible (by the results in Section \ref{s:app} and by the fact that $\mathcal{X}_{\nu}$ is a retract of $X$) and since $R_{\nu} \circ \Psi \circ \var_{(\vartheta_1, \vartheta_2)}$ is homotopic to the identity on $\mathcal{X}_{\nu}$. Therefore we deduce $\alpha > - 2 L$, which is the desired conclusion. \end{pf} \ From the above Lemma, the functional $J_{\rho}$ satisfies suitable structural properties for min-max theory. However, we cannot directly conclude the existence of a critical point, since it is not known whether the Palais-Smale condition holds or not. The conclusion needs a different argument, which has been used intensively (see for instance \cite{djlw, dm}), so we will be sketchy. \ \noindent We take $\mu > 0$ such that $\mathcal{J}_i := [\rho_i-\mu, \rho_i+\mu]$ is contained in $(4 \pi, 8 \pi)$ for both $i = 1, 2$. We then consider $\tilde{\rho}_i \in \mathcal{J}_i$ and the functional $J_{\tilde{\rho}}$ corresponding to these values of the parameters. Following the estimates of the previous sections, one easily checks that the above min-max scheme applies uniformly for $\tilde{\rho}_i \in \mathcal{J}_i$ for $\nu$ sufficiently small. 
More precisely, given any large number $L > 0$, there exists $\nu$ so small that for $\tilde{\rho}_i \in \mathcal{J}_i$ \begin{equation}\label{eq:min-maxrho} \sup_{m \in \partial \overline{\mathcal{X}}_{\nu}} J_{\tilde{\rho}}(m) < - 4 L; \qquad \qquad \alpha_{\tilde{\rho}} := \inf_{\eta \in \Gamma} \; \sup_{m \in \overline{\mathcal{X}}_{\nu}} J_{\tilde{\rho}}(\eta(m)) > - 2L, \ \ (\tilde{\rho}=(\tilde{\rho}_1, \tilde{\rho}_2)). \end{equation} where $\Gamma$ is defined in \eqref{eq:PiPi}. Moreover, using for example the test map \eqref{eq:ovPi}, one shows that for $\mu$ sufficiently small there exists a large constant $\overline{L}$ such that \begin{equation}\label{eq:ovlovl} \alpha_{\tilde{\rho}} \leq \overline{L} \qquad \qquad \hbox{ for } \tilde{\rho}_i \in \mathcal{J}_i. \end{equation} \ \noindent Under these conditions, the following Lemma is well-known, usually taking the name "monotonicity trick". This technique was first introduced by Struwe in \cite{struwe}, and made general in \cite{jeanjean} (see also \cite{djlw, lucia}). \begin{lem}\label{l:arho} Let $\nu$ be so small that \eqref{eq:min-maxrho} holds. Then the functional $J_{t \rho}$ possesses a bounded Palais-Smale sequence $(u_l)_l$ at level $\tilde{\alpha}_{t \rho}$ for almost every $t \in \left[ 1 - \frac{\mu}{16 \pi}, 1 + \frac{\mu}{16 \pi} \right]$. \end{lem} \ \begin{pfn} {\sc of Theorem \ref{t:main}.} The existence of a bounded Palais-Smale sequence for $J_{t \rho}$ implies by standard arguments that this functional possesses a critical point. Let now $t_j \to 1$, $t_j \in \L$ and let $(u_{1,j}, u_{2,j})$ denote the corresponding solutions. It is then sufficient to apply the compactness result in Theorem \ref{th:jlw}, which yields convergence of $(u_{1,j}, u_{2,j})_j$ by the fact that $\rho_1, \rho_2$ are not multiples of $4 \pi$. \end{pfn} \section{Appendix: the set $ X=\overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta } \setminus \overline{D}_\delta $ is not contractible.}\label{s:app} Without loss of generality, we consider the case $\delta=1$ (see \eqref{cono}). Let us denote $\overline{\Sg} = \overline{\Sg}_{1}$. If $\Sg= \mathbb{S}^2$, we have a complete description of $X$. Indeed, in this case $\overline{\Sg}$ can be identified with $B(0,1) \subset \mathbb{R}^3$. Therefore, we have: $$ X = (B(0,1) \times B(0,1)) \setminus E, $$ where $E=\{x \in \mathbb{R}^6:\ x_i = x_{i+3},\ i=1,\ 2,\ 3\}$. By taking the orthogonal projection onto $E^{\bot}$, we have that $X \simeq U \setminus \{0\}$ ($\simeq$ stands for homotopical equivalence), where $U \subset E^{\bot}$ is a convex neighborhood of $0$. And, clearly, $U\setminus \{0\} \simeq \mathbb{S}^2$. The case of positive genus is not so easy and we have a less complete description of $X$. However, we will prove that it is non-contractible by studying its cohomology groups $H^*(X)$, where coefficients will be taken in $\mathbb{R}$. Indeed, we will show that: \begin{pro} \label{pro51} If the genus of $\Sg$ is positive, then $H^4(X)$ is nontrivial. \end{pro} \begin{pf} In what follows, the elements of $\overline{\Sg}$ will be written as $(x,t)$, where $x \in \Sg$, $t \in (0,1]$. 
Clearly, $X= Y \cup Z$, where $Y$, $Z$ are open sets defined as: $$Y= \{((x_1,t_1), (x_2,t_2)) \in \overline{\Sg} \times \overline{\Sg}:\ t_1 \neq t_2 \},$$ $$Z= \{((x_1,t_1), (x_2,t_2)) \in \overline{\Sg} \times \overline{\Sg}:\ t_1<1,\ t_2<1,\ x_1 \neq x_2 \}.$$ Then, the Mayer-Vietoris Theorem gives the exactness of the sequence: $$ \cdots \rightarrow H^3(X) \rightarrow H^3(Y) \oplus H^3(Z) \rightarrow H^3(Y \cap Z) \rightarrow H^4(X) \rightarrow \cdots $$ Since our coefficients are real, the above cohomology groups are indeed real vector spaces. The exactness of the sequence then gives: \begin{equation} \label{mv} dim(H^3(Y \cap Z)) \leq dim(H^4(X)) + dim(H^3(Y) \oplus H^3(Z)).\end{equation} Let us describe the sets involved above. First of all, observe that $Y=Y_1 \cup Y_2$ has two connected components: $$Y_i= \{((x_1,t_1), (x_2,t_2)) \in \overline{\Sg} \times \overline{\Sg}:\ t_i > t_j, \ j \neq i \}.$$ To study $Y_1$, we define the following deformation retraction: $$ r_1 : Y_1 \to Y_1, \qquad r_1((x_1,t_1), (x_2,t_2))= ((x_1,1), (x_2,1/2)).$$ Clearly, $r_1(Y_1)=0 \times \left(\Sigma\times \{1/2\}\right)$, which is homeomorphic to $\Sg$. Analogously, $Y_2 \simeq \Sg$, and so $Y \simeq \Sigma\ \ensuremath{\mathaccent\cdot\cup}\, \Sg$ (here $\ \ensuremath{\mathaccent\cdot\cup}\,$ stands for the disjoint union). For what concerns $Z$, we can define a deformation retraction: $$r : Z \to Z, \qquad r((x_1,t_1), (x_2,t_2))= ((x_1,1/2), (x_2,1/2)).$$ Observe that $r(Z)= \left( \Sigma\times \{ 1/2\} \times \Sigma\times \{ 1/2\} \right)\setminus \overline{D}$ which is homeomorphic to $\Sigma\times \Sigma\setminus D$, where $D$ is the diagonal of $\Sigma\times \Sg$. Let us set $$ A=\Sigma\times \Sigma\setminus D, $$ since it will appear many times in what follows. Moreover, $Y \cap Z = (Y_1 \cap Z) \cup (Y_2 \cap Z)$, and so this has two connected components. Also here we have a deformation retraction: $$ r'_1: Y_1 \cap Z \to Y_1 \cap Z ,\qquad r'_1((x_1,t_1), (x_2,t_2))= ((x_1,1/2), (x_2,1/3)).$$ It is clear that $r'_1(Y_1 \cap Z)$ is homeomorphic to $A=\Sigma\times \Sigma\setminus D$. Analogously we can argue for $Y_2 \cap Z$; therefore, $Y \cap Z \simeq A \ \ensuremath{\mathaccent\cdot\cup}\, A$. Hence, from \eqref{mv} we obtain: \begin{equation} \label{H30} dim(H^4(X)) \geq dim (H^3(A)).\end{equation} Let us now compute the cohomology of $A=\Sigma\times \Sigma\setminus D$. Given $\e>0$, let us define: $$B = \{(x,y) \in \Sigma\times \Sg:\ d(x,y) < \varepsilon \},$$ which is an open neighborhood of $D$. Clearly, we can use the local contractibility of $\Sg$ to retract $B$ onto $D$. Moreover, $A \cup B = \Sigma\times \Sg$. The Mayer-Vietoris Theorem yields the exact sequence: \begin{equation} \label{mv2}\cdots \rightarrow H^2(A \cap B) \rightarrow H^3(\Sigma\times \Sg) \rightarrow H^3(A) \oplus H^3(B) \rightarrow H^3(A\cap B) \rightarrow \cdots \end{equation} Therefore, in order to study $H^3(A)$ we need some information about $H^*(A\cap B)$. By using the exponential map, we can define a homeomorphism: $$ h : A \cap B = \{(x,y) \in \Sigma\times \Sg:\ 0< d(x,y) < \varepsilon \} \to \{(x,v) \in T\Sg:\ 0<\|v\|<\e\},$$ $$ h(x,y)= (x,v) \in T\Sigma\mbox{ such that } \exp_x(v)=y,$$ where $T \Sg$ is the tangent bundle of $\Sg$. Therefore, $A \cap B$ is homotopically equivalent to the unit tangent bundle $UT \Sg$. The cohomology of $UT\Sg$ must be well known, but we have not been able to find a precise reference. 
We state and prove the following lemma: \begin{lem} Let us denote by $g=g(\Sg)$ the genus of $\Sg$. Then: \begin{enumerate} \item if $g= 1$, $H^0(UT \Sg)\cong H^3(UT \Sg)\cong \mathbb{R}$ and $H^1(UT \Sg)\cong H^2(UT \Sg)\cong \mathbb{R}^3$. \item if $g \neq 1$, $H^0(UT \Sg) \cong H^3(UT \Sg)\cong \mathbb{R}$ and $H^1(UT \Sg)\cong H^2(UT \Sg)\cong \mathbb{R}^{2g}$. \end{enumerate} \end{lem} \begin{pf} We only need to compute $H^1(UT \Sg)$ and $H^2(UT \Sg)$. If $g=1$, that is, $\Sigma\simeq \mathbb{T}^2$, then $T\Sg$ is trivial and hence $UT \Sigma\simeq \mathbb{T}^2 \times \mathbb{S}^1 \simeq \mathbb{T}^3$. The K{\"u}nneth formula gives us the result. If $g \neq 1$, we use the Gysin exact sequence (see Proposition 14.33 of \cite{bott-tu}): $$0 \rightarrow H^1(\Sg) \rightarrow H^1(UT\Sg) \rightarrow H^0(\Sg) \stackrel{\wedge e}{\rightarrow} H^2(\Sg) \rightarrow H^2(UT\Sg) \rightarrow H^1(\Sg) \rightarrow H^3(\Sg)=0.$$ In the above sequence, $\wedge e$ is the wedge product with the Euler class $e$. Since we are working with real coefficients and the Euler characteristic of $\Sg$ is different from zero, then $\wedge e$ is an isomorphism. Therefore, we conclude: $$ H^1(UT\Sg) \cong H^1(\Sg) \cong \mathbb{R}^{2g}, \qquad H^2(UT\Sg) \cong H^1(\Sg) \cong \mathbb{R}^{2g}.$$ \end{pf} \begin{rem} We have chosen real coefficients since they simplify our arguments and are enough for our purposes. As a counterpart, the above computations do not take into account the torsion part. For instance, it is known that $UT \mathbb{S}^2 = \mathbb{R} \mathbb{P}^3$ (see \cite{montesinos}). \end{rem} We now come back to the proof of Proposition \ref{pro51}. With our information, \eqref{mv2} becomes: $$ \cdots \rightarrow H^2(A \cap B) \rightarrow \mathbb{R}^{4g} \rightarrow H^3(A) \rightarrow \mathbb{R} \rightarrow \cdots $$ In the above sequence we computed $H^3(\Sigma\times \Sg)$ using the K{\"u}nneth formula. Then, $4g \leq dim(H^2(UT \Sg)) + dim(H^3(A))$. Therefore, $dim(H^3(A)) \geq 2g$, if $g>1$, or $dim(H^3(A)) \geq 1$, if $g=1$. In any case we conclude by \eqref{H30}. \end{pf} \end{document}
\begin{document} \title[The Erd\H os conjecture for primitive sets]{The Erd\H os conjecture for primitive sets} \author{Jared Duker Lichtman} \address{Department of Mathematics, Dartmouth College, Hanover, NH 03755} \email{[email protected]} \email{[email protected]} \author{Carl Pomerance} \address{Department of Mathematics, Dartmouth College, Hanover, NH 03755} \email{[email protected]} \subjclass[2010]{Primary 11B83; Secondary 11A05, 11N05} \date{June 30, 2018.} \keywords{primitive set, primitive sequence, Mertens' product formula} \begin{abstract} A subset of the integers larger than 1 is {\it primitive} if no member divides another. Erd\H os proved in 1935 that the sum of $1/(a\log a)$ for $a$ running over a primitive set $A$ is universally bounded over all choices for $A$. In 1988 he asked if this universal bound is attained for the set of prime numbers. In this paper we make some progress on several fronts, and show a connection to certain prime number ``races" such as the race between $\pi(x)$ and $\textnormal{li}(x)$. \end{abstract} \maketitle \section{Introduction} A set of positive integers $>1$ is called {\bf primitive} if no element divides any other (for convenience, we exclude the singleton set $\{1\}$). There are a number of interesting and sometimes unexpected theorems about primitive sets. After Besicovitch \cite{besicovitch}, we know that the upper asymptotic density of a primitive set can be arbitrarily close to $1/2$, whereas the lower asymptotic density is always $0$. Using the fact that if a primitive set has a finite reciprocal sum, then the set of multiples of members of the set has an asymptotic density, Erd\H os gave an elementary proof that the set of nondeficient numbers (i.e., $\sigma(n)/n\ge2$, where $\sigma$ is the sum-of-divisors function) has an asymptotic density. Though the reciprocal sum of a primitive set can possibly diverge, Erd\H os \cite{erdos35} showed that for a primitive set $A$, $$ \sum_{a\in A}\frac1{a\log a}<\infty. $$ In fact, the proof shows that these sums are uniformly bounded as $A$ varies over primitive sets. Some years later in a 1988 seminar in Limoges, Erd\H os suggested that in fact we always have \begin{equation} \label{eq:conj} f(A):=\sum_{a\in A}\frac1{a\log a}\le\sum_{p\in\mathcal{P}}\frac1{p\log p}, \end{equation} where $\mathcal{P}$ is the set of prime numbers. The assertion \eqref{eq:conj} is now known as the Erd\H os conjecture for primitive sets. In 1991, Zhang \cite{zhang1} proved the Erd\H os conjecture for primitive sets $A$ with no member having more than 4 prime factors (counted with multiplicity). After Cohen \cite{cohen}, we have \begin{equation} \label{eq:cohen} C: = \sum_{p\in \mathcal{P}}\frac1{p\log p} = 1.63661632336\ldots\,, \end{equation} the sum over primes in \eqref{eq:conj}. Using the original Erd\H os argument in \cite{erdos35}, Erd\H os and Zhang showed that $f(A)<2.886$ for a primitive set $A$, which was later improved by Robin to $2.77$. These unpublished estimates are reported in Erd\H os--Zhang \cite{ez} who used another method to show that $f(A)<1.84$. Shortly after, Clark \cite{clark} claimed that $f(A)\le e^\gamma=1.781072\dots$\,. However, his brief argument appears to be incomplete. Our principal results are the following. \begin{theorem}\label{thm:egamma} For any primitive set $A$ we have $f(A) < e^\gamma$. \end{theorem} \begin{theorem} \label{thm:no8s} For any primitive set $A$ with no element divisible by $8$, we have $f(A)<C+2.37\times10^{-7}$. 
\end{theorem} Say a prime $p$ is {\bf Erd\H os strong} if for any primitive set $A$ with the property that each element of $A$ has least prime factor $p$, we have $f(A)\le 1/(p\log p)$. We conjecture that every prime is Erd\H os strong. Note that the Erd\H os conjecture \eqref{eq:conj} would immediately follow, though it is not clear that the Erd\H os conjecture implies our conjecture. Just proving our conjecture for the case of $p=2$ would give the inequality in Theorem \ref{thm:no8s} for all primitive sets $A$. Currently the best we can do for a primitive set $A$ of even numbers is that $f(A)<e^\gamma/2$, see Proposition \ref{lem:erdos} below. For part of the next result, we assume the Riemann hypothesis (RH) and the Linear Independence hypothesis (LI), which asserts that the sequence of numbers $\gamma_n>0$ such that $\zeta(\tfrac{1}{2}+i\gamma_n)=0$ is linearly independent over ${\mathbb Q}$. \begin{theorem} \label{thm:race} Unconditionally, all of the odd primes among the first $10^8$ primes are Erd\H os strong. Assuming RH and LI, the Erd\H os strong primes have relative lower logarithmic density $>0.995$. \end{theorem} The proof depends strongly on a recent result of Lamzouri \cite{lamz} who was interested in the ``Mertens race'' between $\prod_{p\le x}(1-1/p)$ and $1/(e^\gamma \log x)$. For a primitive set $A$, let $\mathcal{P}(A)$ denote the support of $A$, i.e., the set of prime numbers that divide some member of $A$. It is clear that the Erd\H os conjecture \eqref{eq:conj} is equivalent to the same assertion where the prime sum is over $\mathcal{P}(A)$. \begin{theorem} \label{thm:support} If $A$ is a primitive set with $\mathcal{P}(A)\subset[3,\exp(10^6)]$, then $$ f(A)\le\sum_{p\in\mathcal{P}(A)}\frac{1}{p\log p}. $$ \end{theorem} If some primitive set $A$ of odd numbers exists with $f(A)>\sum_{p\in\mathcal{P}(A)}1/(p\log p)$, Theorem \ref{thm:support} suggests that it will be very difficult indeed to give a concrete example! For a positive integer $n$, let $\Omega(n)$ denote the number of prime factors of $n$ counted with multiplicity. Let ${\mathbb N}_k$ denote the set of integers $n$ with $\Omega(n)=k$. Zhang \cite{zhang2} proved a result that implies $f({\mathbb N}_k)< f({\mathbb N}_1)$ for each $k\ge2$, so that the Erd\H os conjecture holds for the primitive sets ${\mathbb N}_k$. More recently, Banks and Martin \cite{bm} conjectured that $f({\mathbb N}_1)>f({\mathbb N}_2)>f({\mathbb N}_3)>\cdots$\,. The inequality $f({\mathbb N}_2)>f({\mathbb N}_3)$ was just established by Bayless, Kinlaw, and Klyve \cite{BKK}. We prove the following result. \begin{theorem} \label{thm:Nk} There is a positive constant $c$ such that $f({\mathbb N}_k)\ge c$ for all $k$. \end{theorem} We let the letters $p,q,r$ represent primes. In addition, we let $p_n$ represent the $n$th prime. For an integer $a>1$, we let $P(a)$ and $p(a)$ denote the largest and smallest prime factors of $a$. Modifying the notation introduced in \cite{ez}, for a primitive set $A$ let \begin{align*} A_p & = \{a\in A: p(a)\ge p\},\\ A'_p & = \{a\in A: p(a) = p\},\\ A''_p & = \{a/p : a\in A'_p\}. \end{align*} We let $f(a)=1/(a\log a)$ and so $f(A)=\sum_{a\in A}f(a)$. In this language, Zhang's full result \cite{zhang2} states that $f(({\mathbb N}_k)'_p)\le f(p)$ for all primes $p$, $k\ge1$. We also let $$ g(a)=\frac1a\prod_{p<P(a)}\left(1-\frac1p\right),\quad h(a)=\frac1{a\log P(a)}, $$ with $ g(A)=\sum_{a\in A}g(a)$ and $h(A)=\sum_{a\in A}h(a)$. \section{The Erd\H os approach} In this section we will prove Theorem \ref{thm:egamma}.
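Before doing so, we illustrate the notation just introduced with a toy example, which plays no role in the arguments. For the primitive set $A=\{6,10,15\}$ one has $A'_2=\{6,10\}$, $A''_2=\{3,5\}$ and $A_3=A'_3=\{15\}$, while for the single integer $a=12=2^2\cdot3$, with $p(12)=2$ and $P(12)=3$,
$$
f(12)=\frac{1}{12\log 12},\qquad g(12)=\frac1{12}\left(1-\frac12\right)=\frac1{24},\qquad h(12)=\frac{1}{12\log 3},
$$
the product defining $g$ running over the single prime $p=2<P(12)$.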
We begin with an argument inspired by the original 1935 paper of Erd\H os \cite{erdos35}. \begin{proposition}\label{lem:erdos} For any primitive set $A$, if $q\notin A$ then $$f(A'_q) < e^\gamma g(q) = \frac{e^\gamma}{q}\prod_{p<q}\bigg(1-\frac{1}{p}\bigg).$$ \end{proposition} \begin{proof} For each $a\in A'_q$, let $S_a = \{ba : p(b) \ge P(a)\}$. Note that $S_a$ has asymptotic density $g(a)$. Since $A'_q$ is primitive, we see that the sets $S_a$ are pairwise disjoint. Further, the union of the sets $S_a$ is contained in the set of all natural numbers $m$ with $p(m) = q$, which has asymptotic density $g(q)$. Thus, the sum of densities for each $S_a$ is dominated by $g(q)$, that is, \begin{align}\label{eq:g(A)} g(A'_q)=\sum_{a\in A'_q}g(a) \le g(q). \end{align} By Theorem 7 in \cite{RS1}, we have for $x\ge285$, \begin{equation} \label{eq:mert} \prod_{p\le x}\left(1-\frac1p\right)>\frac{1}{e^\gamma\log(2x)}, \end{equation} which may be extended to all $x\ge 1$ by a calculation. Thus, since each $a \in A'_q$ is composite, $$g(a)=\frac{1}{a}\prod_{p<P(a)}\bigg(1-\frac{1}{p}\bigg) > \frac{e^{-\gamma}}{ a\log\big(2P(a)\big)} > \frac{e^{-\gamma}}{ a\log a} = e^{-\gamma}f(a).$$ Hence by \eqref{eq:g(A)}, \begin{equation*} f(A'_q)/e^\gamma < g(A'_q) \le g(q). \end{equation*} \end{proof} \begin{remark} Let $\sigma$ denote the sum-of-divisors function and let $A$ be the set of $n$ with $\sigma(n)/n\ge2$ and $\sigma(d)/d<2$ for all proper divisors $d$ of $n$, the set of primitive nondeficient numbers. Then an appropriate analog of $g(A)$ gives the density of nondeficient numbers, recently shown in \cite{mits1} to lie in the tight interval $(0.2476171,\,0.2476475)$. In \cite{JDLpnd}, an analog of Proposition \ref{lem:erdos} is a key ingredient for sharp bounds on the reciprocal sum of the primitive nondeficient numbers. \end{remark} \begin{remark} We have $g(\mathcal P)=1$. It is easy to see by induction over primes $r$ that \begin{equation*} \sum_{p\le r}g(p)=\sum_{p\le r}\frac1p\prod_{q<p}\left(1-\frac1q\right)=1-\prod_{p\le r}\left(1-\frac1p\right). \end{equation*} Letting $r\to\infty$ we get that $g(\mathcal P)=1$. There is also a holistic way of seeing this. Since $g(p)$ is the density of the set of integers with least prime factor $p$, it would make sense that $g(\mathcal P)$ is the density of the set of integers which have a least prime factor, which is 1. To make this rigorous, one notes that the density of the set of integers whose least prime factor is $>y$ tends to 0 as $y\to\infty$. As a consequence of $g(\mathcal P)=1$, we have \begin{equation} \label{eq:ident} \sum_{p>2}g(p)=\frac12, \end{equation} an identity we will find to be useful. \end{remark} For a primitive set $A$, let $$A^k = \{a: 2^k\| a\in A\}, \qquad B^k = \{a/2^k: a\in A^k\}.$$ The next result will help us prove Theorem \ref{thm:egamma}. \begin{lemma}\label{lem:egamma} For a primitive set $A$, let $k\ge1$ be such that $2^k\notin A$. Then we have $$f(A^{k}) < \frac{e^\gamma}{2^k}\sum_{p\notin A\atop p>2}g(p).$$ \end{lemma} \begin{proof} If $2^kp\notin A$ for a prime $p>2$, then $(B^k)'_p$ is a primitive set of odd composite numbers, so by Proposition \ref{lem:erdos}, $f((B^k)'_p) < e^\gamma g(p)$. Now if $2^kp\in A$ for some odd prime $p$, then $(B^{k})'_p=\{p\}$ and note $p\notin A$ by primitivity. We have $f(2^kp) < 2^{-k}e^\gamma g(p)$ since $$ \frac1{2^kp\log(2^kp)}\le\frac1{2^kp\log(2p)}<\frac{e^\gamma}{2^k}g(p), $$ which follows from \eqref{eq:mert}. 
Hence combining the two cases, \begin{align*} f(A^k)=\sum_{p\notin A\atop p>2}f(2^k{\cdot}(B^k)'_p)& \le \sum_{p\in B^k,p\notin A\atop p>2}f(2^kp) + 2^{-k}\sum_{p\notin B^k,p\notin A\atop p>2}f((B^k)'_p)\\ & < \frac{e^\gamma}{2^k}\sum_{p\notin A\atop p>2}g(p). \end{align*} \end{proof} With Lemma \ref{lem:egamma} in hand, we prove $f(A)<e^\gamma$. \begin{proof}[Proof of Theorem \ref{thm:egamma}] From Erd\H os--Zhang \cite{ez}, we have that $f(A_3)<0.92$. If $2\in A$, then $A'_2=\{2\}$, so that $f(A)=f(A_3)+f(A'_2)<0.92+1/(2\log2)<e^\gamma$. Hence we may assume that $2\notin A$. If $A$ contains every odd prime, then $f(A'_2)$ consists of at most one power of 2, and the calculation just concluded shows we may assume this is not the case. Hence there is at least one odd prime $p_0\notin A$. By Proposition \ref{lem:erdos}, we have \begin{align}\label{eq:A} f(A) &= \sum_pf(A'_p)= \sum_{p\in A}f(p) + \sum_{p\notin A} f(A'_p) < \sum_{p\in A}f(p) + e^\gamma\sum_{p\notin A\atop p>2} g(p) + f(A'_2). \end{align} First suppose $A$ contains no powers of $2$. Then by Lemma \ref{lem:egamma}, \begin{align*} f(A'_2) = \sum_{k\ge1}f(A^{k}) < \sum_{k\ge1}\frac{e^\gamma}{2^k}\sum_{p\notin A\atop p>2}g(p) = e^\gamma\sum_{p\notin A\atop p>2}g(p). \end{align*} Substituting into \eqref{eq:A}, we conclude, using \eqref{eq:ident}, \begin{align} \label{eq:fA} f(A) & < \sum_{p\in A}f(p) + 2e^\gamma\sum_{p\notin A\atop p>2}g(p) \le 2e^\gamma\sum_{p>2}g(p) = e^\gamma. \end{align} For the last inequality we used that for every prime $p$, \begin{equation} \label{eq:fgineq2} \frac{f(p)}{e^\gamma g(p)}<1.082, \end{equation} which follows after a short calculation using \cite[Theorem 7]{RS1}. Now if $2^K\in A$ for some positive integer $K$, then $K$ is unique and $K\ge 2$. Also $A^K=\{2^K\}$ and $A^k=\emptyset$ for all $k>K$, so again by Lemma \ref{lem:egamma}, \begin{align*} f(A'_2) = \sum_{k=1}^Kf(A^{k}) = f(2^K) + \sum_{k=1}^{K-1}\frac{e^\gamma}{2^k}\sum_{p\notin A\atop p>2}g(p) = f(2^K) + (1-2^{1-K})e^\gamma\sum_{p\notin A\atop p>2}g(p). \end{align*} Substituting into \eqref{eq:A} gives \begin{align} \label{eq:fAK} f(A) < \sum_{p\in A}f(p) + f(2^K) + (2-2^{1-K})e^\gamma\sum_{p\notin A\atop p>2}g(p) & \le f(2^K) + (2-2^{-1})e^\gamma\sum_{p>2}g(p)\nonumber\\ & \le f(2^2) + (1-2^{-2})e^\gamma < e^\gamma, \end{align} using $K\ge2$, the identity \eqref{eq:ident}, inequality \eqref{eq:fgineq2}, and $f(2^2)< 2^{-2} e^\gamma$. This completes the proof. \end{proof} \section{Mertens primes} In this section we will prove Theorems \ref{thm:race} and Theorem \ref{thm:support}. Note that by Mertens' theorem, $$ \prod_{p<x}\left(1-\frac1p\right)\sim\frac1{e^\gamma\log x},\quad x\to\infty, $$ where $\gamma$ is Euler's constant. We say a prime $q$ is {\bf Mertens} if \begin{equation} \label{eq:help} e^\gamma\prod_{p<q}\Big(1-\frac{1}{p}\Big) \le \frac{1}{\log q}, \end{equation} and let $\mathcal P^{\textrm{Mert}}$ denote the set of Mertens primes. We are interested in Mertens primes because of the following consequence of Proposition \ref{lem:erdos}, which shows that every Mertens prime is Erd\H os strong. \begin{corollary}\label{cor:mert} Let $A$ be a primitive set. If $q\in \mathcal{P}^{\rm Mert}$, then $f(A'_q)\le f(q)$. Hence if $A'_q \subset \{q\}$ for all $q\notin \mathcal{P}^{\rm Mert}$, then $A$ satisfies the Erd\H os conjecture. \end{corollary} \begin{proof} By Proposition \ref{lem:erdos} we have $f(A'_q)\le\max\{e^\gamma g(q),f(q)\}$. 
If $q\in\mathcal{P}^{\textrm{Mert}}$, then $$e^\gamma g(q) = \frac{e^\gamma}{q}\prod_{p<q}\bigg(1-\frac{1}{p}\bigg)\le \frac{1}{q\log q} =f(q),$$ so $f(A'_q)\le f(q)$. \end{proof} Now, one would hope that the Mertens inequality \eqref{eq:help} holds for all primes $q$. However, \eqref{eq:help} fails for $q=2$ since $e^\gamma > 1/\log 2$. We have computed that $q$ is indeed a Mertens prime for all $2<q\le p_{10^8} = 2{,}038{,}074{,}743$, thus proving the unconditional part of Theorem \ref{thm:race}. \subsection{Proof of Theorem \ref{thm:race}} To complete the proof, we use a result of Lamzouri \cite{lamz} relating the Mertens inequality to the race between $\pi(x)$ and $\textnormal{li}(x)$, studied by Rubinstein and Sarnak \cite{RubSarn}. Under the assumption of RH and LI, he proved that the set $\mathcal N$ of real numbers $x$ satisfying \begin{align*} e^\gamma \prod_{p\le x}\bigg(1-\frac{1}{p}\bigg) > \frac{1}{\log x}, \end{align*} has logarithmic density $\delta(\mathcal{N})$ equal to the logarithmic density of numbers $x$ with $\pi(x)>\textnormal{li}(x)$, and in particular \begin{align} \delta(\mathcal N) = \lim_{x\to\infty}\frac{1}{\log x}\int_{t\in \mathcal N\cap[2,x]}\frac{dt}{t} = 0.00000026\ldots\,. \end{align} We note that if a prime $p=p_n\in \mathcal N$, then for $p'=p_{n+1}$ we have $[p,p')\subset\mathcal N$ because the prime product on the left-hand side is constant on $[p,p')$, while $1/\log x$ is decreasing for $x\in [p,p')$. The set of primes ${\mathcal Q}$ in $\mathcal N$ is precisely the set of non-Mertens primes, so ${\mathcal Q}=\mathcal P\setminus\mathcal P^{\textrm{Mert}}$. From the above observation, we may leverage knowledge of the continuous logarithmic density $\delta(\mathcal N)$ to obtain an upper bound on the relative (upper) logarithmic density of non-Mertens primes \begin{align} \label{eq:dens} \bar\delta({\mathcal Q}) := \limsup_{x\to \infty}\frac{1}{\log x}\sum_{p\le x\atop p\in {\mathcal Q}}\frac{\log p}{p}. \end{align} From the above observation, we have \begin{align*} \delta(\mathcal N) \ge \limsup_{x\to\infty}\frac{1}{\log x}\sum_{p\le x\atop p\in \mathcal Q}\int_{p}^{p'}\frac{dt}{t} & = \limsup_{x\to\infty}\frac{1}{\log x}\sum_{p\le x\atop p\in \mathcal Q}\log(p'/p). \end{align*} Then letting $d_p=p'-p$ be the gap between consecutive primes, we have \begin{align*} \delta(\mathcal N) \ge \limsup_{x\to\infty}\frac{1}{\log x}\sum_{p\le x\atop p\in \mathcal Q}\frac{d_p}{p}, \end{align*} since $\sum\log(p'/p) = \sum d_p/p + O(1)$. The average gap is roughly $\log p$, so we may consider the primes for which $d_p < \epsilon \log p$, for a small positive constant $\epsilon$ to be determined. We claim \begin{align}\label{eq:RVclaim} \limsup_{x\to\infty}\frac1{\log x}\sum_{\substack{p\le x\\ d_p < \epsilon \log p}}\frac{\log p}{p} \ \le \ 16\epsilon, \end{align} from which it follows \begin{align*} \bar\delta({\mathcal Q}) & = \limsup_{x\to\infty}\frac{1}{\log x}\sum_{p\le x\atop p\in {\mathcal Q}}\frac{\log p}{p} \le \limsup_{x\to\infty}\frac{1}{\log x}\Big(\sum_{\substack{p\le x\\ p\in {\mathcal Q}\\d_p \ge \epsilon \log p}}\frac{d_p/\epsilon}{p} + \sum_{\substack{p\le x\\ d_p < \epsilon \log p}}\frac{\log p}{p}\Big)\\ & \le \delta(\mathcal N)/\epsilon + 16\epsilon. \end{align*} Hence to prove Theorem \ref{thm:race} it suffices to prove \eqref{eq:RVclaim}, since taking $\epsilon = \sqrt{\delta(\mathcal N)}/4$ gives \begin{align} \bar\delta({\mathcal Q}) < 8\sqrt{\delta(\mathcal N)} < 4.2\times10^{-3}. 
\end{align} By Riesel-Vaughan \cite[Lemma 5]{RV}, the number of primes $p$ up to $x$ with $p+d$ also prime is at most \begin{align*} \sum_{p\le x\atop p+d\textrm{ prime}}1 \le \frac{8c_2x}{\log^2 x}\prod_{p\mid d\atop p>2}\frac{p-1}{p-2}, \end{align*} where $c_2$ is for the twin-prime constant $2\prod_{p>2}p(p-2)/(p-1)^2=1.3203\ldots$. Denote the prime product by $F(d) = \prod_{p\mid d\atop p>2}\frac{p-1}{p-2}$, and consider the multiplicative function $H(d) = \sum_{u\mid d}\mu(u)F(d/u)$. We have $H(2^k)=0$ for all $k\ge1$, and for $p>2$ we have $H(p)=F(p)-1$, and $H(p^k)=0$ if $k\ge2$. Thus, \begin{align*} \sum_{d\le y} F(d) & = \sum_{d\le y}\sum_{u\mid d}H(u) = \sum_{u\le y}H(u)\sum_{d\le y/u}1\le y\sum_{u\le y}\frac{H(u)}{u} \le y\prod_{p>2}\Big(1 + \frac{H(p)}{p}\Big)\\ & = y\prod_{p>2}\Big(1 + \frac{(p-1)/(p-2)-1}{p}\Big) = y\prod_{p>2}\Big(1 + \frac{1}{p(p-2)}\Big). \end{align*} Noting that $c_2':=\prod_{p>2}(1 + 1/[p(p-2)])=2/c_2$, we have \begin{align*} \sum_{\substack{p\le x\\ d_p < \epsilon \log p}}1\le \sum_{d\le \epsilon\log x}\sum_{p\le x\atop p+d\textrm{ prime}}1 \le \frac{8c_2x}{\log^2 x}\sum_{d\le \epsilon\log x}F(d) \le \epsilon\frac{8c_2c_2'x}{\log x} = \epsilon\frac{16x}{\log x}. \end{align*} Thus, \eqref{eq:RVclaim} now follows by partial summation, and the proof is complete. \begin{remark} The concept of relative upper logarithmic density of the set of non-Mertens primes in \eqref{eq:dens} can be replaced in the theorem with $$ \bar\delta_0({\mathcal Q}):=\limsup_{x\to\infty}\frac1{\log\log x}\sum_{\substack{p\le x\\p\in {\mathcal Q}}}\frac1p. $$ Indeed, $\bar\delta_0({\mathcal Q})\le\bar\delta({\mathcal Q})$ follows from the identity $$ \sum_{\substack{p\le x\\p\in {\mathcal Q}}}\frac1p=\frac1{\log x}\sum_{\substack{p\le x\\p\in {\mathcal Q}}}\frac{\log p}p +\int_2^x\frac1{t(\log t)^2}\sum_{\substack{p\le t\\p\in {\mathcal Q}}}\frac{\log p}p\,dt. $$ \end{remark} \begin{remark} \label{rmk:martin} Greg Martin has indicated to us that one should be able to prove (under RH and LI) that the relative logarithmic density of ${\mathcal Q}$ exists and is equal to the logarithmic density of $\mathcal N$. The idea is as follows. Partition the positive reals into intervals of the form $[y,y+y^{1/3})$. Let $E_1$ be the union of those intervals $[y,y+y^{1/3})$ where the sign of $e^\gamma\prod_{p\le x}(1-1/p)-1/\log x$ is not constant and let $E_2$ be the union of those intervals $[y,y+y^{1/3})$ which do not have $\sim y^{1/3}/\log y$ primes as $y\to\infty$. The the logarithmic density of $E_1\cup E_2$ can be shown to be 0, from which the assertion follows. \end{remark} \subsection{Proof of Theorem \ref{thm:support}} We now use some numerical estimates of Dusart \cite{dusart} to prove Theorem \ref{thm:support}. We say a pair of primes $p\le q$ is a {\bf Mertens pair} if $$ \prod_{p\le r<q}\left(1-\frac1r\right)>\frac{\log p}{\log pq}. $$ We claim that every pair of primes $p,q$ with $2<p\le q<e^{10^6}$ is a Mertens pair. Assume this and let $A$ be a primitive set supported on the odd primes to $e^{10^6}$. By \eqref{eq:g(A)}, if $p\notin A$, we have \begin{align*} \frac1{p}&\ge\sum_{a\in A'_p}\frac1a\prod_{p\le r <P(a)}\left(1-\frac1{r}\right) >\sum_{a\in A'_p}\frac{\log p}{a\log(p\,P(a))}\\ &\ge\sum_{a\in A'_p}\frac{\log p}{a\log a}=f(A'_p)\log p. \end{align*} Dividing by $\log p$ we obtain $f(A'_p)\le f(p)$, which also holds if $p\in A$. Thus, the claim about Mertens pairs implies the theorem. 
To prove the claim, first note that if $p$ is a Mertens prime, then $p,q$ is a Mertens pair for all primes $q\ge p$. Indeed, we have $$ \prod_{p\le r<q}\left(1-\frac1r\right)=\prod_{r<p}\left(1-\frac1r\right)^{-1}\prod_{r<q}\left(1-\frac1r\right) >e^\gamma \log p\prod_{r<q}\left(1-\frac1r\right). $$ By \eqref{eq:mert}, this last product exceeds $e^{-\gamma}/\log(2q)>e^{-\gamma}/\log(pq)$, and using this in the above display shows that $p,q$ is indeed a Mertens pair. Since all of the odd primes up to $p_{10^8}$ are Mertens, to complete the proof of our assertion, it suffices to consider the case when $p>p_{10^8}$. Define $E_p$ via the equation $$ \prod_{r<p}\left(1-\frac1r\right)=\frac{1+E_p}{e^\gamma \log p}. $$ Using \cite[Theorem 5.9]{dusart}, we have for $p>2{,}278{,}382$, \begin{equation} \label{eq:D} |E_p|\le .2/(\log p)^3. \end{equation} A routine calculation shows that if $p\le q<e^{4.999(\log p)^4}$, then $$ \prod_{p\le r<q}\left(1-\frac1r\right)=\frac{\log p}{\log q}\cdot\frac{1+E_q}{1+E_p} >\frac{\log p}{\log pq}. $$ It remains to note that $4.999(\log p_{10^8})^4 >1{,}055{,}356$. It seems interesting to record the principle that we used in the proof. \begin{corollary} \label{cor:mertens} If $A$ is a primitive set such that $p(a),P(a)$ is a Mertens pair for each $a\in A$, then $f(A)\le f(\mathcal{P}(A))$. \end{corollary} \begin{remark} \label{rmk:ford} Kevin Ford has noted to us the remarkable similarity between the concept of Mertens primes in this paper and the numbers $$ \gamma_n=\left(\gamma+\sum_{k\le n}\frac{\log p_k}{p_k-1}\right)\prod_{k\le n}\left(1-\frac1{p_k}\right) $$ discussed in Diamond--Ford \cite{DF}. In particular, while it may not be obvious from the definition, the analysis in \cite{DF} on whether the sequence $\gamma_1,\gamma_2,\dots$ is monotone is quite similar to the analysis in \cite{lamz} on the Mertens inequality. Though the numerical evidence seems to indicate we always have $\gamma_{n+1}<\gamma_n$, this is disproved in \cite{DF}, and it is indicated there that the first time this fails may be near $1.9\cdot10^{215}$. This may also be near where the first odd non-Mertens prime exists. If this is the case, and under assumption of RH, it may be that every pair of primes $p\le q$ is a Mertens pair when $p>2$ and $q < \exp(10^{100})$. \end{remark} \section{Odd primitive sets} In this section we prove Theorem \ref{thm:no8s} and establish a curious result on parity for primitive sets. Let $$ \epsilon_0=\sum_{\substack{p>2\\p\notin\mathcal{P}^{\rm Mert}}}\left(e^\gamma g(p)-f(p)\right). $$ \begin{lemma} \label{lem:eps} We have $0\le\epsilon_0<2.37\times10^{-7}$. \end{lemma} \begin{proof} By the definition of $\mathcal{P}^{\rm Mert}$, the summands in the definition of $\epsilon_0$ are nonnegative, so that $\epsilon_0\ge0$. If $p>2$ is not Mertens, then $p>p_{10^8}>2\times10^9$, so that \eqref{eq:D} shows that \begin{equation} \label{eq:nonM} e^\gamma g(p)-f(p)<\frac1{5p(\log p)^4}. \end{equation} By \cite[Proposition 5.16]{dusart}, we have $$ p_n >n(\log n+\log\log n-1 +(\log\log n-2.1)/\log n),\quad n\ge 2. $$ Using this we find that $$ \sum_{n>10^8}\frac1{5p_n(\log p_n)^4}<2.37\times10^{-7}, $$ which with \eqref{eq:nonM} completes the proof. \end{proof} \begin{remark} Clearly, a smaller bound for $\epsilon_0$ would follow by raising the search limit for Mertens primes. Another small improvement could be made using the estimate in \cite{axler} for $p_n$. It follows from the ideas in Remark \ref{rmk:martin} that $\epsilon_0>0$.
Further, it may be provable from the ideas in Remark \ref{rmk:ford} that $\epsilon_0<10^{-100}$ if the Riemann Hypothesis holds. \end{remark} We have the following result. \begin{theorem} \label{thm:odd} For any odd primitive set $A$, we have \begin{align} \label{eq:odd} f(A) \le f(\mathcal{P}(A))+\epsilon_0. \end{align} \end{theorem} \begin{proof} Assume that $A$ is an odd primitive set. We have $$ f(A)=\sum_{p\in\mathcal{P}(A)}f(A'_p)\le\sum_{p\in\mathcal{P}(A)\cap\mathcal{P}^{\rm Mert}}f(p)+\sum_{p\in\mathcal{P}(A)\setminus\mathcal{P}^{\rm Mert}}e^\gamma g(p) \le\epsilon_0+\sum_{p\in\mathcal{P}(A)}f(p) $$ by the definition of $\epsilon_0$. This completes the proof. \end{proof} This theorem yields the following corollary. \begin{corollary}\label{cor:8} If $A$ is a primitive set containing no multiple of $8$, then \eqref{eq:odd} holds. \end{corollary} \begin{proof} We have seen the corollary in the case that $A$ is odd. Next, suppose that $A$ contains an even number, but no multiple of 4. If $2\in A$, the result follows by applying Theorem \ref{thm:odd} to $A\setminus\{2\}$, so assume $2\notin A$. Then $A''_2$ is an odd primitive set and $f(A'_2)\le f(A''_2)/2$. We have by the odd case that \begin{equation} \label{eq:2mod4} f(A)=f(A_3)+f(A'_2)< f(\mathcal{P}(A_3))+\epsilon_0+\frac12\left(f(\mathcal{P}(A''_2))+\epsilon_0\right). \end{equation} Since $$ \frac12f(\mathcal{P}(A''_2))\le\frac12f(\mathcal{P}\setminus\{2\})<0.4577 $$ and $f(2)=0.7213\dots$, \eqref{eq:2mod4} and Lemma \ref{lem:eps} imply that $f(A)<f(\mathcal{P}(A))$, which is stronger than required. The case when $A$ contains a multiple of 4 but no multiple of 8 follows in a similar fashion. \end{proof} Since a cube-free number cannot be divisible by 8, \eqref{eq:odd} holds for all primitive sets $A$ of cube-free numbers. Also, the proof of Corollary \ref{cor:8} can be adapted to show that \eqref{eq:odd} holds for all primitive sets $A$ containing no number that is 4~(mod~8). We close out this section with a curious result about those primitive sets $A$ where \eqref{eq:odd} does not hold. Namely, the Erd\H os conjecture must then hold for the set of odd members of $A$. Put another way, \eqref{eq:odd} holds for any primitive set $A$ for which the Erd\H os conjecture for the odd members of $A$ {\it fails}. \begin{theorem} \label{thm:curious} If $A$ is a primitive set with $f(A)> f(\mathcal{P}(A))+\epsilon_0$, then $f(A_3)<f(\mathcal{P}(A_3))$. \end{theorem} \begin{proof}[Proof (Sketch)] Without loss of generality, we may include in $A$ all primes not in $\mathcal{P}(A)$, and so assume that $\mathcal{P}(A)=\mathcal{P}$ and $f(A)>C+\epsilon_0$. By Theorem \ref{thm:odd} we may assume that $A$ is not odd, and by Corollary \ref{cor:8} we may assume that $2\notin A$. By the proof of Theorem \ref{thm:egamma} (see \eqref{eq:fA} and \eqref{eq:fAK}), if $3\in A$, we have $$ f(A)<f(3)+\frac23e^\gamma<C, $$ a contradiction, so we may assume that $3\notin A$. We now apply the method of proof of Theorem \ref{thm:egamma} to $A_3$, where powers of 3 replace powers of 2. This leads to $$ f(A_3)<\frac12e^\gamma<C-f(2)=f(\mathcal{P}(A_3)). $$ This completes the argument. \end{proof} \section{Zhang primes and the Banks--Martin conjecture} Note that $$ \sum_{p\ge x}\frac1{p\log p}\sim\frac1{\log x},\quad x\to\infty. $$ In Erd\H os--Zhang \cite{ez} and in Zhang \cite{zhang2}, numerical approximations to this asymptotic relation are exploited. 
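This relation follows, for example, by partial summation from the prime number theorem, since
$$
\sum_{p\ge x}\frac1{p\log p}=\int_{x}^\infty\frac{d\pi(t)}{t\log t}\sim\int_x^\infty\frac{dt}{t\log^2 t}=\frac1{\log x},\qquad x\to\infty.
$$
To get a concrete feeling for it, and for the kind of finite verification described below, one can compare the tail $C-\sum_{p<x}1/(p\log p)$, with $C$ as in \eqref{eq:cohen}, against $1/\log x$ for a few values of $x$. A minimal sketch of such a check (assuming an off-the-shelf prime generator, here the one from {\tt sympy}) might read:
\begin{verbatim}
from math import log
from sympy import primerange

C = 1.63661632336   # Cohen's value of C = sum_p 1/(p log p)

for x in (10**3, 10**4, 10**5, 10**6):
    partial = sum(1.0 / (p * log(p)) for p in primerange(2, x))
    tail = C - partial          # approximates sum_{p >= x} 1/(p log p)
    print(f"x = {x:>7}:  tail = {tail:.6f},  1/log(x) = {1.0/log(x):.6f}")
\end{verbatim}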
Say a prime $q$ is {\bf Zhang} if $$\sum_{p\ge q}\frac{1}{p\log p} \le \frac{1}{\log q}.$$ Let $\mathcal P^{\textrm{Zh}}$ denote the set of Zhang primes. We are interested in Zhang primes because of the following result. \begin{theorem}\label{thm:zhang} If $\mathcal P(A'_p)\subset \mathcal P^{\textrm{Zh}}$, then $f(A'_p) \le f(p)$. Hence the Erd\H os conjecture holds for all primitive sets $A$ supported on $\mathcal P^{\textrm{Zh}}$. \end{theorem} \begin{proof} As in \cite{ez} it suffices to prove the theorem in the case that $A$ is a finite set. By $d^\circ(A)$ we mean the maximal value of $\Omega(a)$ for $a\in A$. We proceed by induction on $d^\circ(A_p')$. If $d^\circ(A'_p)\le 1$, then $f(A_p') \le f(p)$. If $d^\circ(A'_p)> 1$, then $f(A_p')\le f(A_p'')/p$. The primitive set $B:=A_p''$ satisfies $f(B)=f(B_p)=\sum_{q\ge p}f(B_q')$. Since $d^\circ(B_q') \le d^\circ(B) < d^\circ(A'_p)$, by induction we have $f(B_q') \le f(q)$. Thus, since $p$ is Zhang, $$f(A_p'') = f(B)=\sum_{q\ge p}f(B_q')\le \sum_{q\ge p}\frac{1}{q\log q} \le \frac{1}{\log p},$$ from which we obtain $f(A_p')\le f(A_p'')/p \le 1/(p\log p)$. This completes the proof. \end{proof} From this one might hope that all primes are Zhang. However, the prime 2 is not Zhang since $C> 1/\log 2$, and the prime 3 is not Zhang since $C-1/(2\log2)>1/\log3$. Nevertheless, as with Mertens primes, it is true that the remaining primes up to $p_{10^8}$ are Zhang. Indeed, starting from \eqref{eq:cohen}, we computed that \begin{align} \sum_{p\ge q}\frac1{p\log p} = C - \sum_{p < q}\frac{1}{p\log p} \le \frac{1}{\log q}\qquad\textrm{for all } 3< q \le p_{10^8}. \end{align} The computation stopped at $10^8$ for convenience, and one could likely extend this further with some patience. It seems likely that there is also a ``race'' between $\sum_{p\ge q}1/(p\log p)$ and $1/\log q$, as with Mertens primes, and that a large logarithmic density of primes $q$ are Zhang, with a small logarithmic density of primes failing to be Zhang. A related conjecture due to Banks and Martin \cite{bm} is the chain of inequalities, \begin{align*} \sum_{p}\frac{1}{p\log p} > \sum_{p\le q}\frac{1}{pq\log pq} > \sum_{p\le q\le r}\frac{1}{pqr\log pqr} > \cdots, \end{align*} succinctly written as $f({\mathbb N}_k) > f({\mathbb N}_{k+1})$ for all $k\ge1$, where ${\mathbb N}_k = \{n: \Omega(n) = k\}$. As mentioned in the introduction, we know only that $f({\mathbb N}_1)> f({\mathbb N}_k)$ for all $k\ge2$ and $f({\mathbb N}_2)>f({\mathbb N}_3)$. More generally, for a subset $Q$ of primes, let ${\mathbb N}_k(Q)$ denote the subset of ${\mathbb N}_k$ supported on $Q$. A result of Zhang \cite{zhang2} implies that $f({\mathbb N}_1(Q)) > f({\mathbb N}_{k}(Q))$ for all $k>1$, while Banks and Martin showed that $f({\mathbb N}_k(Q)) > f({\mathbb N}_{k+1}(Q))$ if $\sum_{p\in Q}1/p$ is not too large. We prove a similar result in the case where $Q$ is a subset of the Zhang primes and we replace $f({\mathbb N}_k(Q))$ with $h({\mathbb N}_k(Q))$. Recall $h(A) = \sum_{a\in A}1/(a\log P(a))$. \begin{proposition} For all $k\ge1$ and $Q\subset \mathcal P^{\textrm{Zh}}$, we have $h({\mathbb N}_k(Q)) \ge h({\mathbb N}_{k+1}(Q))$.
\end{proposition} \begin{proof} Since every prime in $Q$ is a Zhang prime, we have \begin{align*} h({\mathbb N}_{k+1}(Q)) & = \sum_{\substack{q_1\le \cdots\le q_{k+1}\\ q_i\in Q}}\frac{1}{q_1\cdots q_k q_{k+1}\log q_{k+1}} \\ & = \sum_{\substack{q_1\le \cdots\le q_{k}\\ q_i\in Q}}\frac{1}{q_1\cdots q_{k}}\sum_{q_{k+1}\ge q_{k}}\frac{1}{q_{k+1}\log q_{k+1}}\\ & \le \sum_{\substack{q_1\le \cdots\le q_{k}\\ q_i\in Q}}\frac{1}{q_1\cdots q_{k} \log q_k} = h({\mathbb N}_{k}(Q)). \end{align*} This completes the proof. \end{proof} It is interesting that if we do not in some way restrict the primes used, the analogue of the Banks--Martin conjecture for the function $h$ fails. In particular, we have $$ h({\mathbb N}_2)>\sum_{m\le 10^4}\frac1{p_m}\sum_{n\ge m}\frac1{p_n\log p_n} =\sum_{m\le 10^4}\frac1{p_m}\left(C-\sum_{k<m}\frac1{p_k\log p_k}\right) >1.638, $$ while $h({\mathbb N}_1)=C<1.637$. It is also interesting that the analogue of the Banks--Martin conjecture for the function $g$ is false since $$ 1=g({\mathbb N}_1)=g({\mathbb N}_2)=g({\mathbb N}_3)=\cdots\,. $$ We have already shown in \eqref{eq:g(A)} that $g(A'_q)\le g(q)$ for any primitive set $A$ and prime $q$, so the analogue for $g$ of the strong Erd\H os conjecture holds. \subsection{Proof of Theorem \ref{thm:Nk}.} We now return to the function $f$ and prove Theorem \ref{thm:Nk}. We may assume that $k$ is large. Let $m=\lfloor\sqrt{k}\rfloor$ and let $B(n)=e^{e^n}$. We have \begin{align*} f({\mathbb N}_k)&=\sum_{\Omega(a)=k}\frac1{a\log a} >\sum_{\substack{\Omega(a)=k\\ e^{e^{k}}<a\le e^{e^{k+m}}}}\frac1{a\log a}\\ &=\sum_{j\le m}\sum_{\substack{\Omega(a)=k\\B({k+j-1})<a\le B(k+j)}}\frac1{a\log a} > \sum_{j\le m}\frac1{\log B({k+j})}\sum_{\substack{\Omega(a)=k\\B(k+j-1)<a\le B({k+j})}} \frac1a. \end{align*} Thus it suffices to show that there is a positive constant $c$ such that for $j\le m$ we have \begin{equation} \label{eq:ss} \sum_{\substack{\Omega(a)=k\\B({k+j-1})<a\le B({k+j})}}\frac1a \ge c\frac{\log B({k+j})}m=c\frac{e^{k+j}}m, \end{equation} so that the theorem will follow. Let $N_k(x)$ denote the number of members of ${\mathbb N}_k$ in $[1,x]$. We use the Sathe--Selberg theorem, see \cite[Theorem 7.19]{MV}, from which we have that uniformly for $B({k})< x\le B({k+m})$, as $k\to\infty$, $$ N_k(x)\sim \frac x{k!}\frac{(\log\log x)^k}{\log x}. $$ This result also follows from Erd\H os \cite{erdos48}. We have \begin{align*} \sum_{\substack{\Omega(a)=k\\B({k+j-1})<a\le B({k+j})}}\frac1a &>\int_{B({k+j-1})}^{B({k+j})}\frac{N_k(x)-N_k(B({k+j-1}))}{x^2}\,dx\\ &\gg \int_{2B({k+j-1})}^{B({k+j})}\frac{N_k(x)}{x^2}\,dx. \end{align*} Thus, \begin{align*} \sum_{\substack{\Omega(a)=k\\B({k+j-1})<a\le B({k+j} )}}\frac1a &\gg \frac{(\log\log B({k+j-1}))^k}{k!}\int_{2B({k+j-1})}^{B({k+j})}\frac{dx}{x\log x}\\ &=\frac{(k+j-1)^k}{k!}(\log\log B({k+j})-\log\log(2B({k+j-1})))\\ &\gg\frac{(k+j-1)^k}{k!}\gg\frac{e^{k+j}}{\sqrt{k}}, \end{align*} the last estimate following from Stirling's formula. This proves \eqref{eq:ss}, and so the theorem. The sets ${\mathbb N}_k$ and Theorem \ref{thm:Nk} give us the following result. \begin{corollary} \label{cor:largex} We have that $$ \limsup_{x\to\infty}\{f(A):A\subset[x,\infty),\,A \textnormal{ primitive}\}>0. $$ \end{corollary} \section*{Acknowledgments} We thank Greg Martin for the content of Remark \ref{rmk:martin} and Kevin Ford for the content of Remark \ref{rmk:ford}. We thank Paul Kinlaw and Zhenxiang Zhang for some helpful comments. \end{document}
\begin{document} \renewcommand{\thefootnote}{$\star$} \FirstPageHeading \ShortArticleName{On Tanaka's Prolongation Procedure for Filtered Structures of Constant Type} \ArticleName{On Tanaka's Prolongation Procedure\\ for Filtered Structures of Constant Type\footnote{This paper is a contribution to the Special Issue ``\'Elie Cartan and Dif\/ferential Geometry''. The full collection is available at \href{http://www.emis.de/journals/SIGMA/Cartan.html}{http://www.emis.de/journals/SIGMA/Cartan.html}}} \Author{Igor ZELENKO} \AuthorNameForHeading{I.~Zelenko} \Address{Department of Mathematics, Texas A$\&$M University, College Station, TX 77843-3368, USA} \Email{\href{mailto:[email protected]}{[email protected]}} \ArticleDates{Received June 02, 2009, in f\/inal form September 29, 2009; Published online October 06, 2009} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \Abstract{We present Tanaka's prolongation procedure for f\/iltered structures on manifolds discovered in [Tanaka N., {\it J.~Math.\ Kyoto.\ Univ.\/} \textbf{10} (1970), 1--82] in a spirit of Singer--Sternberg's description of the prolongation of usual $G$-structures [Singer I.M., Sternberg S., {\it J.~Analyse Math.\/} {\bf 15} (1965), 1--114; Sternberg S., Prentice-Hall, Inc., Englewood Clif\/fs, N.J., 1964]. This approach gives a transparent point of view on the Tanaka constructions avoiding many technicalities of the original Tanaka paper.} \Keywords{$G$-structures; f\/iltered structures; generalized Spencer operator; prolongations} \Classification{58A30; 58A17} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \section{Introduction} This note is based on a series of lectures given by the author in the Working Geometry Seminar at the Department of Mathematics at Texas A$\&$M University in Spring 2009. The topic is the prolongation procedure for f\/iltered structures on manifolds discovered by Noboru Tanaka in the paper \cite{tan} published in 1970. The Tanaka prolongation procedure is an ingenious ref\/inement of Cartan's method of equivalence. It provides an ef\/fective algorithm for the construction of canonical frames for f\/iltered structures, and for the calculation of the sharp upper bound of the dimension of their algebras of inf\/initesimal symmetries. This note is by no means a complete survey of the Tanaka theory. For such a survey we refer the reader to \cite{mor}. Our goal here is to describe geometric aspects of Tanaka's prolongation procedure using a language similar to the one used by Singer and Sternberg in \cite{sinstern} and \cite{stern} for the description of the prolongation of the usual $G$-structures. We found that it gives a quite natural and transparent point of view on Tanaka's constructions, avoiding many formal def\/initions and technicalities of the original Tanaka paper. We believe this point of view will be useful to anyone who is interested in studying both the main ideas and the details of this fundamental Tanaka construction. We hope that the material of Sections~\ref{section3} and~\ref{section4} will be of interest to experts as well.
Our language also allows to generalize the Tanaka procedure in several directions, including f\/iltered structures with non-constant and non-fundamental symbols. These generalizations, with applications to the local geometry of distributions, will be given in a separate paper. \subsection{Statement of the problem}\label{section1.1} Let $D$ be a rank $l$ distribution on a manifold $M$; that is, a rank $l$ subbundle of the tangent bundle~$TM$. Two vector distributions $D_1$ and $D_2$ are called equivalent if there exists a~dif\/feomorphism $F:M\rightarrow M$ such that $F_*D_1(x)=D_2(F(x))$ for any $x\in M$. Two germs of vector distributions~$D_1$ and~$D_2$ at the point $x_0\in M$ are called equivalent, if there exist neighborhoods~$U$ and~$\tilde U$ of~$x_0$ and a dif\/feomorphism $F:U\rightarrow \tilde U$ such that \begin{gather*} F_*D_1= D_2 , \qquad F(x_0)=x_0. \end{gather*} The general question is: When are two germs of distributions equivalent? \subsection{Weak derived f\/lags and symbols of distributions}\label{section1.2} Taking Lie brackets of vector f\/ields tangent to a distribution $D$ (i.e.\ sections of $D$) one can def\/ine a f\/iltration $D^{-1}\subset D^{-2}\subset\cdots$ of the tangent bundle, called a \emph{weak derived flag} or a \emph{small flag $($of $D)$}. More precisely, set $D=D^{-1}$ and def\/ine recursively $D^{-j}=D^{-j+1}+[D,D^{-j+1}]$, $j>1$. Let $X_1,\ldots X_l$ be $l$ vector f\/ields constituting a local basis of a distribution~$D$, i.e.\ $D= {\rm span}\{X_1, \ldots, X_l\}$ in some open set in $M$. Then $D^{-j}(x)$ is the linear span of all iterated Lie brackets of these vector f\/ields, of length not greater than $j$, evaluated at a point~$x$. A~distribution~$D$ is called \emph{bracket-generating} (or \emph{completely nonholonomic}) if for any $x$ there exists $\mu(x)\in\mathbb N$ such that $D^{-\mu(x)}(x)=T_x M$. The number $\mu(x)$ is called the \emph{degree of nonholonomy} of $D$ at a point~$x$. A distribution $D$ is called \emph{regular} if for all $j<0$, the dimensions of subspaces $D^j(x)$ are independent of the point $x$. From now on we assume that $D$ is regular bracket-generating distribution with degree of nonholonomy $\mu$. Let $\mathfrak g^{-1}(x)\stackrel{\text{def}}{=}D^{-1}(x)$ and $\mathfrak g^{j}(x)\stackrel{\text{def}}{=}D^{j}(x)/D^{j+1}(x)$ for $j<-1$. Consider the graded space \begin{gather*} \mathfrak{m}(x)=\bigoplus_{j=-\mu}^{-1}\mathfrak g^j(x), \end{gather*} corresponding to the f\/iltration \begin{gather*} D(x)=D^{-1}(x)\subset D^{-2}(x)\subset\cdots\subset D^{-\mu+1}(x) \subset D^{-\mu}(x)=T_xM. \end{gather*} This space is endowed naturally with the structure of a graded nilpotent Lie algebra, generated by $\mathfrak g^{-1}(x)$. Indeed, let $\mathfrak p_j:D^j(x)\mapsto \mathfrak g^j(x)$ be the canonical projection to a factor space. Take $Y_1\in\mathfrak g^i(x)$ and $Y_2\in \mathfrak g^j(x)$. To def\/ine the Lie bracket $[Y_1,Y_2]$ take a local section $\widetilde Y_1$ of the distribution $D^i$ and a local section $\widetilde Y_2$ of the distribution $D^j$ such that $\mathfrak p_i\bigl(\widetilde Y_1(x)\bigr) =Y_1$ and $\mathfrak p_j\bigl(\widetilde Y_2(x)\bigr)=Y_2$. It is clear that $[Y_1,Y_2]\in\mathfrak g^{i+j}(x)$. Put \begin{gather} \label{Liebrackets} [Y_1,Y_2]\stackrel{\text{def}}{=}\mathfrak p_{i+j}\bigl([\widetilde Y_1,\widetilde Y_2](x)\bigr). \end{gather} It is easy to see that the right-hand side of \eqref{Liebrackets} does not depend on the choice of sections $\widetilde Y_1$ and $\widetilde Y_2$. 
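To illustrate this construction on a simple example, consider the rank $2$ distribution $D={\rm span}\{X_1,X_2\}$ on $\mathbb R^3$ with coordinates $(x_1,x_2,x_3)$, where
\begin{gather*}
X_1=\frac{\partial}{\partial x_1},\qquad X_2=\frac{\partial}{\partial x_2}+x_1\frac{\partial}{\partial x_3},\qquad [X_1,X_2]=\frac{\partial}{\partial x_3}.
\end{gather*}
Then $D^{-2}=T\mathbb R^3$, so that $\mu=2$ and $\dim\mathfrak g^{-1}(q)=2$, $\dim\mathfrak g^{-2}(q)=1$ at every point $q$. The only nontrivial bracket in $\mathfrak m(q)$ is $\bigl[\mathfrak p_{-1}\bigl(X_1(q)\bigr),\mathfrak p_{-1}\bigl(X_2(q)\bigr)\bigr]=\mathfrak p_{-2}\bigl(\tfrac{\partial}{\partial x_3}(q)\bigr)$, so $\mathfrak m(q)$ is isomorphic to the $3$-dimensional Heisenberg algebra for every $q$ (this is the case $n=1$ of Example~2 below).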
Besides, $\mathfrak g^{-1}(x)$ generates the whole algebra $\mathfrak{m}(x)$. A graded Lie algebra satisfying the last property is called \emph{fundamental}. The graded nilpotent Lie algebra $\mathfrak{m}(x)$ is called the \emph {symbol of the distribution $D$ at the point $x$}. Fix a fundamental graded nilpotent Lie algebra $\mathfrak{m}=\displaystyle{\bigoplus_{i=-\mu}^{-1} \mathfrak g^i}$. A distribution $D$ is said to be of \emph{constant symbol $\mathfrak{m}$} or of \emph{constant type $\mathfrak{m}$} if for any $x$ the symbol $\mathfrak{m}(x)$ is isomorphic to $\mathfrak{m}$ as a nilpotent graded Lie algebra. In general this assumption is quite restrictive. For example, in the case of rank two distributions on manifolds with $\dim\,M\geq 9$, symbol algebras depend on continuous parameters, which implies that generic rank 2 distributions in these dimensions do not have a constant symbol. For rank 3 distributions with $\dim D^{-2}=6$ the same holds in the case $\dim M=7$ as was shown in \cite{kuz}. Following Tanaka, and for simplicity of presentation, we consider here distributions of constant type $\mathfrak{m}$ only. One can construct the \emph{flat distribution~$D_{\mathfrak{m}}$ of constant type $\mathfrak{m}$}. For this let $M(\mathfrak{m})$ be the simply connected Lie group with the Lie algebra~$\mathfrak{m}$ and let $e$ be its identity. Then $D_\mathfrak{m}$ is the left invariant distribution on $M(\mathfrak{m})$ such that $D_{\mathfrak{m}}(e)=\mathfrak g^{-1}$. \subsection[The bundle $P^0(\mathfrak{m})$ and its reductions]{The bundle $\boldsymbol{P^0(\mathfrak{m})}$ and its reductions}\label{section1.3} \looseness=-1 To a distribution of type $\mathfrak{m}$ one can assign a principal bundle in the following way. Let $G^0(\mathfrak{m})$ be the group of automorphisms of the graded Lie algebra $\mathfrak{m}$; that is, the group of all automorphisms~$A$ of the linear space $\mathfrak{m}$ preserving both the Lie brackets ($A([v,w])=[A(v),A(w)]$ for any $v,w\in \mathfrak{m}$) and the grading ($A (\mathfrak g^i)=\mathfrak g^i$ for any $i<0$). Let $P^0(\mathfrak m)$ be the set of all pairs $(x,\varphi)$, where $x\in M$ and $\varphi:\mathfrak{m}\to\mathfrak m(x)$ is an isomorphism of the graded Lie algebras~$\mathfrak {m}$ and~$\mathfrak m(x)$. Then $P^0(\mathfrak m)$ is a principal $G^0(\mathfrak m)$-bundle over $M$. The right action $R_A$ of an automorphism $A\in G^0(\mathfrak{m})$ is as follows: $R_A$ sends $(x,\varphi)\in P^0(\mathfrak m)$ to $(x,\varphi\circ A)$, or shortly $(x,\varphi)\cdot R_A=(x,\varphi\circ A)$. Note that since $\mathfrak g^{-1}$ generates $\mathfrak m$, the group $G^0(\mathfrak{m})$ can be identif\/ied with a subgroup of $\text{GL}(\mathfrak g^{-1})$. By the same reason a point $(x,\varphi)\in P^0(\mathfrak m)$ of a f\/iber of $P^0(\mathfrak m)$ is uniquely def\/ined by $\varphi|_{\mathfrak g^{-1}}$. So one can identify $P^0(\mathfrak m)$ with the set of pairs $(x,\psi)$, where $x\in M $ and $\psi:\mathfrak g^{-1}\to D(x)$ can be extended to an automorphism of the graded Lie algebras $\mathfrak {m}$ and $\mathfrak m(x)$. Speaking informally, $P^0(\mathfrak m)$ can be seen as a $G^0(\mathfrak{m})-$reduction of the bundle of all frames of the distribution $D$. Besides, the Lie algebra $\mathfrak g^0(\mathfrak m)$ is the algebra of all derivations $a$ of $\mathfrak m$, preserving the grading (i.e.\ $a \mathfrak g^i\subset \mathfrak g^i$ for all $i<0$). Additional structures on distributions can be encoded by reductions of the bundle $P^0(\mathfrak m)$. 
More precisely, let $G^0$ be a Lie subgroup of $G^0(\mathfrak{m})$ and let $P^0$ be a principal $G^0$-bundle, which is a reduction of the bundle $P^0(\mathfrak{m})$. Since $\mathfrak g^0$ is a subalgebra of the algebra of derivations of $\mathfrak m$ preserving the grading, the subspace $\mathfrak m\oplus \mathfrak g^0$ is endowed with the natural structure of a graded Lie algebra. For this we only need to def\/ine brackets $[f,v]$ for $f\in \mathfrak g^0$ and $v\in \mathfrak m$, because $\mathfrak m$ and $\mathfrak g^0$ are already Lie algebras. Set $[f,v]\stackrel{\text{def}}{=} f(v)$. The bundle $P^0$ is called a \emph{structure of constant type $(\mathfrak m, \mathfrak g^0)$}. Let, as before, $D_{\mathfrak m}$ be the left invariant distribution on $M(\mathfrak{m})$ such that $D_{\mathfrak m}(e)=\mathfrak g^{-1}$. Denote by $L_x$ the left translation on $M(\mathfrak m)$ by an element $x$. Finally, let $P^0(\mathfrak m, \mathfrak g^0)$ be the set of all pairs $(x,\varphi)$, where $x\in M(\mathfrak m)$ and $\varphi:\mathfrak{m}\to\mathfrak m(x)$ is an isomorphism of the graded Lie algebras $\mathfrak {m}$ and $\mathfrak m(x)$ such that $(L_{x^{-1}})_*\varphi\in G^0$. The bundle $P^0(\mathfrak m, \mathfrak g^0)$ is called \emph{the flat structure of constant type $(\mathfrak m, \mathfrak g^0)$}. Let us give some examples. {\bf Example 1. ${\boldsymbol G}$-structures.} Assume that $D=TM$. So $\mathfrak m=\mathfrak g^{-1}$ is abelian, $G^0(\mathfrak m)={\rm GL}(\mathfrak m)$, and $P^0(\mathfrak m)$ coincides with the bundle $\mathcal F(M)$ of all frames on $M$. In this case $P^0$ is nothing but a usual $G^0$-structure. {\bf Example 2. Contact distributions.} Let $D$ be the contact distribution in $\mathbb R^{2n+1}$. Its symbol $\mathfrak m_{\text{cont},n}$ is isomorphic to the Heisenberg algebra $\eta_{2n+1}$ with grading $\mathfrak g^{-1}\oplus\mathfrak g^{-2}$, where $\mathfrak g^{-2}$ is the center of $\eta_{2n+1}$. Obviously, a skew-symmetric form $\Omega$ is well def\/ined on $\mathfrak g^{-1}$, up to a multiplication by a nonzero constant. The group $G^0(\mathfrak m_{\text{cont},n})$ of automorphisms of $\mathfrak m_{\text{cont},n}$ is isomorphic to the group $\text{CSP}(\mathfrak g^{-1})$ of conformal symplectic transformations of $\mathfrak g^{-1}$, i.e. transformations preserving the form $\Omega$, up to a multiplication by a nonzero constant. {\bf Example 3. Maximally nonholonomic rank 2 distributions in $\boldsymbol{\mathbb R^5}$.} Let $D$ be a rank~2 distribution in $\mathbb R^5$ with degree of nonholonomy equal to $3$ at every point. Such distributions were treated by \'E.~Cartan in his famous work \cite{cartan}. In this case $\dim D^{-2}\equiv 3$ and $\dim D^{-3}\equiv 5$. The symbol at any point is isomorphic to the Lie algebra $\mathfrak m_{(2,5)}$ generated by $X_1$, $X_2$, $X_3$, $X_4,$ and $X_5$ with the following nonzero products: $[X_1,X_2]=X_3$, $[X_1, X_3]=X_4$, and $[X_2, X_3]=X_5$. The grading is given as follows: \[ \mathfrak g^{-1}=\langle X_1, X_2\rangle,\qquad \mathfrak g^{-2}=\langle X_3 \rangle,\qquad \mathfrak g^{-3}=\langle X_4, X_5\rangle, \] where $\langle Y_1,\ldots, Y_k\rangle$ denotes the linear span of vectors $Y_1,\ldots,Y_k$. Since $\mathfrak m_{(2,5)}$ is a free nilpotent Lie algebra with two generators $X_1$ and $X_2$, its group of automorphism is equivalent to $\text{GL}(\mathfrak g^{-1})$. \looseness=-1 {\bf Example 4. Sub-Riemannian structures of constant type} (see also \cite{morsR}). 
Assume that each space $D(x)$ is endowed with a Euclidean structure $Q_x$ depending smoothly on $x$. In this situation the pair $(D,Q)$ def\/ines a sub-Riemannian structure on a manifold $M$. Recall that $\mathfrak g^{-1}(x)=D(x)$. This motivates the following def\/inition: A pair $(\mathfrak m, \mathfrak Q)$, where $\mathfrak m=\displaystyle{\bigoplus_{j=-\mu}^{-1} \mathfrak g^{j}}$ is a fundamental graded Lie algebra and $\mathfrak Q$ is a Euclidean structure on $\mathfrak g^{-1}$, is called a \emph{sub-Riemannian symbol}. Two sub-Riemannian symbols $(\mathfrak m, \mathfrak Q)$ and $(\tilde{\mathfrak m}, \widetilde {\mathfrak Q})$ are isomorphic if there exists a map $\varphi: \mathfrak m\to \tilde{\mathfrak m}$, which is an isomorphism of the graded Lie algebras $\mathfrak m$ and $\tilde{\mathfrak m}$, preserving the Euclidean structures $\mathfrak Q$ and $\widetilde{\mathfrak Q}$ (i.e.\ such that $\widetilde{\mathfrak Q}\bigl(\varphi (v_1), \varphi (v_2)\bigr)=\mathfrak Q(v_1, v_2)$ for any $v_1$ and $v_2$ in $\mathfrak g^{-1}$). Fix a sub-Riemannian symbol $(\mathfrak m,\mathfrak Q)$. A sub-Riemannian structure $(D,Q)$ is said to be of \emph{constant type $(\mathfrak{m},\mathfrak Q)$} if for every $x$ the sub-Riemannian symbol $(\mathfrak{m}(x), Q_x)$ is isomorphic to $(\mathfrak{m}, \mathfrak Q)$. It may happen that a sub-Riemannian structure does not have a constant symbol even if the distribution does. Such a situation occurs already in the case of the contact distribution on $\mathbb R^{2n+1}$ for $n>1$ (see Example 2 above). As was mentioned above, in this case a skew-symmetric form~$\Omega$ is well def\/ined on $\mathfrak g^{-1}$, up to a multiplication by a nonzero constant. If in addition a Euclidean structure $Q$ is given on $\mathfrak g^{-1}$, then a skew-symmetric endomorphism $J$ of~$\mathfrak g^{-1}$ is well def\/ined, up to a multiplication by a nonzero constant, by $\Omega(v_1,v_2)=Q(J v_1,v_2)$. Take $0<\beta_1\leq \cdots\leq\beta_n$ so that $\{\pm \beta_1 i, \ldots,\pm\beta_n i\}$ is the set of the eigenvalues of $J$. Then a sub-Riemannian symbol with $\mathfrak m= \mathfrak m_{\text{cont},n}$ is determined uniquely (up to an isomorphism) by a point $[\beta_1:\beta_2:\ldots:\beta_n]$ of the projective space $\mathbb {R P}^{n-1}$. Let $(D,Q)$ be a sub-Riemannian structure of constant type $(\mathfrak{m},\mathfrak Q)$ and $G^0(\mathfrak{m},\mathfrak Q)\subset G^0(\mathfrak m)$ be the group of automorphisms of a sub-Riemannian symbol $(\mathfrak{m}, \mathfrak Q)$. Let $P^0(\mathfrak{m},\mathfrak Q)$ be the set of all pairs $(x,\varphi)$, where $x\in M$ and $\varphi:\mathfrak{m}\to\mathfrak m(x)$ is an isomorphism of sub-Riemannian symbols $\bigl(\mathfrak {m},\mathfrak Q\bigr)$ and $\bigl(\mathfrak m(x), Q_x\bigr)$. Obviously, the bundle $P^0(\mathfrak{m},\mathfrak Q)$ is a reduction of $P^0(\mathfrak{m})$ with the structure group $G^0(\mathfrak{m},\mathfrak Q)$. \looseness=1
In the standard coordinates $(t,y,p)$ on~$J^1(\mathbb R,\mathbb R)$ this distribution is given by the Pfaf\/f\/ian equation $dy-pdt=0$. The natural lifts to~$J^1$ of solutions of the dif\/ferential equation form the $1$-foliation tangent to~$D$. The tangent lines to this foliation def\/ine the sub-distribution~$L_1$. In the coordinates $(t,y,p)$ the sub-distribution~$L_1$ is generated by the vector f\/ield $\frac{\partial}{\partial t}+p\frac{\partial}{\partial y}+F(t,y,p)\frac{\partial}{\partial p}$. Finally, consider the natural bundle $J^1(\mathbb R,\mathbb R)\rightarrow J^0(\mathbb R,\mathbb R)$ and let $L_2$ be the distribution of the tangent lines to the f\/ibers. The sub-distribution $L_2$ is generated by the vector f\/ield $\frac{\partial}{\partial p}$. The triple $(D,L_1, L_2)$ is called the \emph{pseudo-product structure associated with the second order ordinary differential equation}. Two second order dif\/ferential equations are equivalent with respect to the group of point transformations if and only if there is a dif\/feomorphism of $J^{1}(\mathbb R,\mathbb R)$ sending the pseudo-product structure associated with one of them to the pseudo-product structure associated with the other one. This equivalence problem was treated by \'E.~Cartan in~\cite{cartproj} and earlier by A.~Tresse in~\cite{tresse1} and~\cite{tresse2}. The symbol of the distribution is $\mathfrak m_{\text{cont},1}\sim \eta_3$ (see Example~2 above) and the plane $\mathfrak g^{-1}$ is endowed with two distinguished transversal lines. This additional structure is encoded by the subgroup~$G^0$ of the group $G^0(\mathfrak m_{\text{cont},1})$ preserving each of these lines. Another important class of geometric structures that can be encoded in this way are $CR$-structures (see \S~10 of~\cite{tan} for more details). \subsection{Algebraic and geometric Tanaka prolongations}\label{section1.4} In \cite{tan} Tanaka solves the equivalence problem for structures of constant type $(\mathfrak m, \mathfrak g^0)$. Two of Tanaka's main constructions are the algebraic prolongation of the algebra $\mathfrak m+\mathfrak g^0$, and the geometric prolongation of structures of type $(\mathfrak m, \mathfrak g^0)$, imitated by the algebraic prolongation. First he def\/ines a graded Lie algebra, which is in essence the maximal (nondegenerated) graded Lie algebra, containing the graded Lie algebra $\displaystyle{\bigoplus_{i\leq 0}\mathfrak g^i}$ as its non-positive part. More precisely, Tanaka constructs a graded Lie algebra $\mathfrak g(\mathfrak m, \mathfrak g^0)= \displaystyle{\bigoplus_{i\in\mathbb Z}\mathfrak g^i(\mathfrak m,\mathfrak g^0)}$, satisfying the following three conditions: \begin{enumerate}\itemsep=0pt \item $\mathfrak g^i(\mathfrak m,\mathfrak g^0)=\mathfrak g^i$ for all $i\leq 0$; \item if $X\in \mathfrak g^i(\mathfrak m,\mathfrak g^0)$ with $i>0$ satisf\/ies $[X, \mathfrak g^{-1}]=0$, then $X=0$; \item $\mathfrak g(\mathfrak m, \mathfrak g^0)$ is the maximal graded Lie algebra, satisfying Properties 1 and 2. \end{enumerate} This graded Lie algebra $\mathfrak g(\mathfrak m, \mathfrak g^0)$ is called the \emph{algebraic universal prolongation} of the graded Lie algebra $\mathfrak m\oplus \mathfrak g^0$. An explicit realization of the algebra $\mathfrak g(\mathfrak m, \mathfrak g^0)$ will be described later in Section~\ref{section4}. 
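Let us mention two classical illustrations of this notion, both related to the examples above: for the contact case of Example~2 with $\mathfrak g^0=\mathfrak g^0(\mathfrak m_{\text{cont},n})$ the algebraic universal prolongation $\mathfrak g(\mathfrak m_{\text{cont},n},\mathfrak g^0)$ is isomorphic to $\mathfrak{sp}(2n+2,\mathbb R)$ with its contact grading, while for the rank $2$ distributions of Example~3 with $\mathfrak g^0=\mathfrak g^0(\mathfrak m_{(2,5)})$ it is isomorphic to the split real form of the exceptional Lie algebra $G_2$ and has dimension $14$ (see \cite{cartan} and \cite{yam}). In particular, in both cases the algebraic universal prolongation is f\/inite dimensional.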
It turns out (\cite[\S~6]{tan}, \cite[\S~2]{yam}) that the Lie algebra of inf\/initesimal symmetries of the f\/lat structure of type $(\mathfrak m, \mathfrak g^0)$ can be described in terms of $\mathfrak g(\mathfrak m, \mathfrak g^0)$. If $\dim \mathfrak g(\mathfrak m, \mathfrak g^0)$ is f\/inite (which is equivalent to the existence of $l>0$ such that $\mathfrak g^l(\mathfrak m,\mathfrak g^0)=0$), then the algebra of inf\/initesimal symmetries is isomorphic to $\mathfrak g(\mathfrak m, \mathfrak g^0)$. The analogous formulation in the case when $\mathfrak g(\mathfrak m, \mathfrak g^0)$ is inf\/inite dimensional may be found in \cite[\S~6]{tan}. Furthermore for a structure $P^0$ of type $(\mathfrak m, \mathfrak g^0)$, Tanaka constructs a sequence of bundles $\{P^i\}_{i\in\mathbb N}$, where $P^i$ is a principal bundle over $P^{i-1}$ with an abelian structure group of dimension equal to $\dim \mathfrak g^i(\mathfrak m,\mathfrak g^0)$. In general $P^i$ is not a frame bundle. This is the case only for $\mathfrak m=\mathfrak g^{-1}$; that is, for $G$-structures. But if $\dim \mathfrak g(\mathfrak m, \mathfrak g^0)$ is f\/inite or, equivalently, if there exists $l\geq 0$ such that $\mathfrak g^{l+1}(\mathfrak m,\mathfrak g^0)=0$, then the bundle $P^{l+\mu}$ is an $e$-structure over $P^{l+\mu-1}$, i.e. $P^{l+\mu-1}$ is endowed with a canonical frame (a structure of absolute parallelism). Note that all $P^i$ with $i\geq l$ are identif\/ied one with each other by the canonical projections (which are dif\/feomorphisms in that case). Hence, \emph{$P^{l}$ is endowed with a canonical frame}. Once a canonical frame is constructed the equivalence problem for structures of type $(\mathfrak m, \mathfrak g^0)$ is in essence solved. Moreover, $\dim \mathfrak g(\mathfrak m, \mathfrak g^0)$ gives the sharp upper bound for the dimension of the algebra of inf\/initesimal symmetries of such structures. By Tanaka's geometric prolongation we mean his construction of the sequence of bundles $\{P^i\}_{i\in\mathbb N}$. In this note we mainly concentrate on a description of this geometric prolongation using a language dif\/ferent from Tanaka's original one. In Section~\ref{section2} we review the prolongation of usual $G$-structures in the language of Singer and Sternberg. We do this in order to prepare the reader for the next section, where the f\/irst Tanaka geometric prolongation is given in a~completely analogous way. We believe that after reading Section~\ref{section3} the reader will already have an idea how to proceed with the higher order Tanaka prolongations so that technicalities of Section~\ref{section4} can be easily overcome. \section[Review of prolongation of $G$-structures]{Review of prolongation of $\boldsymbol{G}$-structures}\label{section2} Before treating the general case we review the prolongation procedure for structures with \mbox{$\mathfrak m=\mathfrak g^{-1}$}, i.e.\ for usual $G$-structures. We follow \cite{sinstern} and \cite{stern}. Let $\Pi_0:P^0\to M$ be the canonical projection and $V(\lambda)\subset T_\lambda P^0$ the tangent space at $\lambda$ to the f\/iber of $P^0$ over the point $\Pi_0(\lambda)$. The subspace $V(\lambda)$ is also called the \emph{ vertical subspace of $T_\lambda P^0$}. Actually, \begin{gather} \label{vert} V(\lambda)=\ker (\Pi_0)_*(\lambda). \end{gather} Recall that the space $V(\lambda)$ can be identif\/ied with the Lie algebra $\mathfrak g^0$ of $G^0$. 
The identif\/ication $I_\lambda:\mathfrak g^0\rightarrow V(\lambda)$ sends $X\in \mathfrak g^0$ to $\frac{d}{dt}\bigl(\lambda\cdot R_{e^{tX}} \bigr)|_{t=0}$, where $e^{tX}$ is the one-parametric subgroup generated by $X$. Recall also that an Ehresmann connection on the bundle $P^0$ is a distribution~$H$ on~$P^0$ such that \begin{gather} \label{Ehres} T_\lambda P^0=V(\lambda)\oplus H(\lambda)\qquad \forall\, \lambda\in P^0. \end{gather} A subspace $H(\lambda)$, satisfying \eqref{Ehres}, is a \emph{horizontal subspace of $T_\lambda P^0$}. Once an Ehresmann connection $H$ and a basis in the space $\mathfrak g^{-1}\oplus\mathfrak g^0$ are f\/ixed, the bund\-le~$P^0$ is endowed with a frame in a canonical way. Indeed, let $\lambda=(x,\varphi)\in P^0$. Then $\varphi\in {\rm Hom} (\mathfrak g^{-1}, T_xM)$. By \eqref{vert} and \eqref{Ehres} the restriction $(\Pi_0)_*|_{H(\lambda)}$ of the map $(\Pi_0)_*$ to the subspace~$H(\lambda)$ is an isomorphism between $H(\lambda)$ and $T_{\Pi_0(\lambda)}M$. Def\/ine the map $\varphi^{H(\lambda)}:\mathfrak g^{-1}\oplus\mathfrak g^0\rightarrow T_\lambda P^0$ as follows: \begin{gather} \varphi^{H(\lambda)}|_{\mathfrak g^{-1}}=\bigl((\Pi_0)_*|_{H(\lambda)}\bigr)^{-1}\circ\varphi,\nonumber\\ \varphi^{H(\lambda)}|_{\mathfrak g^0}=I_\lambda.\label{vfH0} \end{gather} If one f\/ixes a basis in $\mathfrak g^{-1}\oplus\mathfrak g^0$, then the images of this basis under the maps $\varphi^{H(\lambda)}$ def\/ine the frame (the structure of the absolute parallelism) on~$P^0$. The question is whether an Ehresmann connection can be chosen canonically. To answer this question, f\/irst one introduces a special $\mathfrak g^{-1}$-valued $1$-form $\omega$ on $P^0$ as follows: $\omega(Y)=\varphi^{-1}\circ (\Pi_0)_*(Y)$ for any $\lambda=(x,\varphi)\in P^0$ and $Y\in T_\lambda P^0$. This $1$-form is called the \emph{soldering $($tautological, fundamental$)$ form} of the $G^0$-structure $P^0$. Further, f\/ixing again a point $\lambda=(x,\varphi)\in P^0$, one def\/ines a \emph{structure function $($a torsion$)$} $C_H\in \text {Hom}(\mathfrak g^{-1}\wedge\mathfrak g^{-1},\mathfrak g^{-1})$ of a horizontal subspace $H$ of $T_\lambda P^0$, as follows: \begin{gather*} \forall\, v_1,v_2\in \mathfrak g^{-1}\qquad C_H(v_1,v_2)=-d\omega\bigl(\varphi^H(v_1),\varphi^H(v_2)\bigr), \end{gather*} where $\varphi^H$ is def\/ined by \eqref{vfH0}. Equivalently, \begin{gather*} C_H(v_1,v_2)=\omega\bigl([Y_1,Y_2](\lambda)\bigl) \end{gather*} for any vector f\/ields $Y_1$ and $Y_2$ such that $\omega(Y_i)\equiv v_i$ and $\varphi^H(v_i)=Y_i(\lambda)$, $i=1,2$. Speaking informally, the structure function $C_H$ encodes all information about horizontal parts at $\lambda$ of Lie brackets of vector f\/ields which are horizontal at $\lambda$ w.r.t.\ the splitting \eqref{Ehres} (with $H(\lambda)$ replaced by $H$). We now take another horizontal subspace $\widetilde H$ of $T_\lambda P^0$ and compare the structure functions~$C_{H}$ and $C_{\widetilde{H}}$. By construction, for any vector $v\in\mathfrak g ^{-1}$ the vector $\varphi^{\widetilde{H}}(v)-\varphi^{H}(v)$ belongs to $V(\lambda)$ ($\sim \mathfrak g^0$). Let \begin{gather*} f_{H\widetilde H}(v)\stackrel{\text{def}}{=} I_\lambda^{-1}\big(\varphi^{\widetilde{H}}(v)-\varphi^{H}(v)\big). \end{gather*} Then $f_{H\widetilde H}\in {\rm Hom}(\mathfrak g^{-1},\mathfrak g^0)$. In the opposite direction, it is clear that for any $f\in {\rm Hom}(\mathfrak g^{-1},\mathfrak g^{0})$ there exists a horizontal subspace $\widetilde H$ such that $f=f_{H\widetilde H}$. 
The map \begin{gather*} \partial: \ {\rm Hom}(\mathfrak g^{-1},\mathfrak g^{0})\rightarrow {\rm Hom}\big(\mathfrak g^{-1}\wedge\mathfrak g^{-1},\mathfrak g^{-1}\big), \end{gather*} def\/ined by \begin{gather} \label{Spencer} \partial f(v_1,v_2)=f(v_1)v_2- f(v_2)v_1 =[f(v_1),v_2]+[v_1,f(v_2)] \end{gather} is called the \emph{Spencer operator}\footnote{In \cite{stern} this operator is called the antisymmetrization operator, but we prefer to call it the Spencer operator, because, after a certain interpretation of the spaces ${\rm Hom}(\mathfrak g^{-1},\mathfrak g^{0})$ and ${\rm Hom}(\mathfrak g^{-1}\wedge\mathfrak g^{-1},\mathfrak g^{-1})$, this operator can be identif\/ied with an appropriate $\delta$-operator introduced by Spencer in \cite{spenc} for the study of overdetermined systems of partial dif\/ferential equations. Indeed, since $\mathfrak g^0$ is a~subspace of $\mathfrak{gl}(\mathfrak g^{-1})$, the space ${\rm Hom}(\mathfrak g^{-1},\mathfrak g^{0})$ can be seen as a~subspace of the space of $\mathfrak g^{-1}$-valued one-forms on $\mathfrak g^{-1}$ with linear coef\/f\/icients, while ${\rm Hom}(\mathfrak g^{-1}\wedge\mathfrak g^{-1},\mathfrak g^{-1})$ can be seen as the space of $\mathfrak g^{-1}$-valued two-forms on $\mathfrak g^{-1}$ with constant coef\/f\/icients. Then the operator $\partial$ def\/ined by \eqref{Spencer} coincides with the restriction to ${\rm Hom}(\mathfrak g^{-1},\mathfrak g^{0})$ of the exterior dif\/ferential acting between the above-mentioned spaces of one-forms and two-forms, i.e.\ with the corresponding Spencer $\delta$-operator.}. By direct computations (\cite[p.~42]{sinstern}, \cite[p.~317]{stern}, or the proof of a more general statement in Proposition \ref{prop1} below) one obtains the following identity: \begin{gather*} C_{\widetilde {H}}=C_{H}+\partial f_{H \widetilde{H}}. \end{gather*} Now f\/ix a subspace \begin{gather*} \mathcal N\subset {\rm Hom}\big(\mathfrak g^{-1}\wedge\mathfrak g^{-1},\mathfrak g^{-1}\big) \end{gather*} complementary to $\text{Im}\, \partial$, so that \begin{gather*} {\rm Hom}\big(\mathfrak g^{-1}\wedge\mathfrak g^{-1},\mathfrak g^{-1}\big)= \text{Im} \,\partial\oplus\mathcal N. \end{gather*} Speaking informally, the subspace $\mathcal N$ def\/ines the normalization conditions for the f\/irst prolongation. The \emph{first prolongation of $P^0$} is the following bundle $(P^0)^{(1)}$ over $P^0$: \begin{gather*} \big(P^0\big)^{(1)} =\big\{(\lambda, H):\lambda\in P^0, H \text{ is a horizontal subspace of } T_\lambda P^0 \text{ with }C_H\in \mathcal N \big\}. \end{gather*} Alternatively, \begin{gather*} \big(P^0\big)^{(1)}=\big\{(\lambda,\varphi^H):\lambda\in P^0, H \text{ is a horizontal subspace of } T_\lambda P^0 \text { with } C_H\in \mathcal N\big\}. \end{gather*} In other words, the f\/iber of $(P^0)^{(1)}$ over a point $\lambda\in P^0$ is the set of all horizontal subspaces~$H$ of $T_\lambda P^0$ such that their structure functions satisfy the chosen normalization condition $\mathcal{N}$. Obviously, the f\/ibers of $(P^0)^{(1)}$ are not empty, and if two horizontal subspaces $H$, $\widetilde H$ belong to the f\/iber, then $f_{H\widetilde H}\in \ker\, \partial$. The subspace $\mathfrak g^1$ of ${\rm Hom}(\mathfrak g^{-1},\mathfrak g^{0})$ def\/ined by \begin{gather*} \mathfrak g^1\stackrel{\text{def}}{=} \ker \partial \end{gather*} is called \emph{the first algebraic prolongation of $\mathfrak g^0\subset\mathfrak {gl}(\mathfrak g^{-1})$}.
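For instance, if $\mathfrak g^0=\mathfrak{gl}(\mathfrak g^{-1})$, then by \eqref{Spencer} one has $f\in\ker\partial$ if and only if the bilinear map $(v_1,v_2)\mapsto f(v_1)v_2$ is symmetric, so that in this case
\begin{gather*}
\mathfrak g^1\cong S^2\big(\mathfrak g^{-1}\big)^*\otimes\mathfrak g^{-1},
\end{gather*}
which is the classical f\/irst prolongation of the full linear Lie algebra.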
Note that it is absolutely not important that $\mathfrak g^0$ be a subalgebra of $\mathfrak {gl}(\mathfrak g^{-1})$: the f\/irst algebraic prolongation can be def\/ined for any subspace of $\mathfrak {gl}(\mathfrak g^{-1})$ (see the further generalization below). If $\mathfrak g^1 =0$ then the choice of the ``normalization conditions'' $\mathcal N$ determines an Ehresmann connection on $P^0$ and $P^0$ is endowed with a canonical frame. As an example, consider a~Riemannian structure. In this case $\mathfrak g^0=\mathfrak{so}(n)$, where $n=\dim \mathfrak g^{-1}$, and it is easy to show that $\mathfrak g^1=0$: if $f\in\ker\partial$, then the trilinear form $(u,v,w)\mapsto\langle f(u)v,w\rangle$ is symmetric in $u$, $v$ and skew-symmetric in $v$, $w$, and therefore vanishes identically. Moreover, $\dim {\rm Hom}(\mathfrak g^{-1}\wedge\mathfrak g^{-1},\mathfrak g^{-1})= \dim {\rm Hom}(\mathfrak g^{-1},\mathfrak g^{0})=\frac{n^2(n-1)}{2}$. Hence, $\text{Im}\,\partial={\rm Hom}(\mathfrak g^{-1}\wedge\mathfrak g^{-1},\mathfrak g^{-1})$ and the complementary subspace $\mathcal N$ must be equal to $0$. So, in this case one gets the canonical Ehresmann connection with zero structure function (torsion), which is nothing but the Levi-Civita connection. If $\mathfrak g^1\neq 0$, we continue the prolongation procedure by induction. Given a linear space $W$, denote by ${\rm Id}_W$ the identity map on $W$. The bundle $(P^0)^{(1)}$ is a frame bundle with the abelian structure group $G^1$ of all maps $A\in {\rm GL}(\mathfrak g^{-1}\oplus\mathfrak g^{0})$ such that \begin{gather} A|_{\mathfrak g^{-1}}={\rm Id}_{\mathfrak g^{-1}}+T, \nonumber\\ A|_{\mathfrak g^0}={\rm Id}_{\mathfrak g^0},\label{str} \end{gather} where $T\in \mathfrak g^1$. The right action $R_A$ of $A\in G^1$ on a f\/iber of $(P^0)^{(1)}$ is def\/ined by the following rule: $R_A (\varphi) =\varphi\circ A$. Observe that $\mathfrak g^1$ is isomorphic to the Lie algebra of $G^1$. Set $P^1=(P^0)^{(1)}$. The second prolongation $P^2$ of $P^0$ is by def\/inition the f\/irst prolongation of the frame bundle $P^1$, $P^2\stackrel{\text{def}}{=}(P^1)^{(1)}$ and so on by induction: the $i$-th prolongation $P^{i}$ is the f\/irst prolongation of the frame bundle $P^{i-1}$. Let us describe the structure group $G^i$ of the frame bundle $P^i$ over $P^{i-1}$ in more detail. For this one can def\/ine the Spencer operator and the f\/irst algebraic prolongation also for a~subspace~$W$ of ${\rm Hom} (\mathfrak g^{-1},V)$, where $V$ is a linear space, which does not necessarily coincide with $\mathfrak g^{-1}$ as before. In this case the Spencer operator is the operator from ${\rm Hom} (\mathfrak g^{-1}, W)$ to ${\rm Hom}(\mathfrak g^{-1}\wedge \mathfrak g^{-1}, V)$, def\/ined by the same formula as in~\eqref{Spencer}. The f\/irst prolongation~$W^{(1)}$ of~$W$ is the kernel of the Spencer operator. Note that by def\/inition $\mathfrak g^{1}=(\mathfrak g^0)^{(1)}$. Then the $i$-th prolongation~$\mathfrak g^i$ of~$\mathfrak g^0$ is def\/ined by the following recursive formula: $\mathfrak g^i=(\mathfrak g^{i-1})^{(1)}$. Note that $\mathfrak g^i\subset {\rm Hom} (\mathfrak g^{-1},\mathfrak g^{i-1})$. By \eqref{str} and the def\/inition of the Spencer operator the bundle $P^i$ is a~frame bundle with the abelian structure group $G^i$ of all maps $\displaystyle{A\in {\rm GL}\bigl(\bigoplus_{p=-1}^{i-1}\mathfrak g^{p}\bigr)}$ such that \begin{gather*} A|_{\mathfrak g^{-1}}={\rm Id}_{\mathfrak g^{-1}}+T,\nonumber \\ A|_{\bigoplus_{p=0}^{i-1}\mathfrak g^{p}}={\rm Id}_{\bigoplus_{p=0}^{i-1}\mathfrak g^{p}}, \end{gather*} where $T\in \mathfrak g^i$.
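Another classical illustration is provided by conformal structures, i.e.\ by $\mathfrak g^0=\mathfrak{co}(n)=\mathbb R\,{\rm Id}\oplus\mathfrak{so}(n)$ with $n=\dim\mathfrak g^{-1}\geq 3$. In this case
\begin{gather*}
\mathfrak g^1\cong\big(\mathfrak g^{-1}\big)^*,\qquad \mathfrak g^2=0,
\end{gather*}
so the procedure stops after two prolongations, and $\mathfrak g^{-1}\oplus\mathfrak g^0\oplus\mathfrak g^1$ is the standard grading of $\mathfrak{so}(n+1,1)$.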
In particular, if $\mathfrak g^{l+1}=0$ for some $l\geq 0$, then the bundle $P^{l}$ is endowed with the canonical frame and we are done. \section{Tanaka's f\/irst prolongation}\label{section3} Now consider the general case. As before $P^0$ is a structure of constant type $(\mathfrak m, \mathfrak g^0)$. Let $\Pi_0:P^0\to M$ be the canonical projection. The f\/iltration $\{D^i\}_{i<0}$ of $TM$ induces a f\/iltration $\{D^i_0\}_{i\leq 0}$ of $T P^0$ as follows: \begin{gather*} D^0_0=\ker (\Pi_0)_*,\nonumber\\ D^i_0(\lambda)=\bigl\{v\in T_\lambda P^0: (\Pi_0)_*v\in D^i\bigl(\Pi_0(\lambda)\bigr)\bigr\}\qquad \forall\, i<0. \end{gather*} We also set $D^i_0=0$ for all $i>0$. Note that $D^0_0(\lambda)$ is the tangent space at $\lambda$ to the f\/iber of $P^0$ and therefore can be identif\/ied with $\mathfrak g^0$. Denote by $I_\lambda:\mathfrak g^0\to D^0_0(\lambda)$ the identifying isomorphism. Fix a point $\lambda\in P^0$ and let $\pi_0^i:D_0^i(\lambda)/ D_0^{i+2}(\lambda)\to D_0^i(\lambda)/ D_0^{i+1}(\lambda)$ be the canonical projection to the factor space. Note that $\Pi_{0_*}$ induces an isomorphism between the space $D_0^i(\lambda)/ D_0^{i+1}(\lambda)$ and the space $D^i(\Pi_0(\lambda))/ D^{i+1}(\Pi_0(\lambda))$ for any $i<0$. We denote this isomorphism by $\Pi_0^i$. The f\/iber of the bundle $P^0$ over a point $x\in M$ is a subset of the set of all maps \begin{gather*}\varphi\in \bigoplus_{i<0} \text {Hom}\bigl(\mathfrak g^i, D^i(x)/ D^{i+1}(x)\bigr), \end{gather*} which are isomorphisms of the graded Lie algebras $\mathfrak{m}=\displaystyle{\bigoplus_{i<0} \mathfrak g^i}$ and $\displaystyle{\bigoplus_{i<0} D^i(x)/D^{i+1}(x)}$. We are going to construct a new bundle $P^1$ over the bundle $P^0$ such that the f\/iber of $P^1$ over a point $\lambda=(x,\varphi)\in P^0$ will be a certain subset of the set of all maps \begin{gather*}\hat\varphi\in \bigoplus_{i\leq 0} \text{Hom}\bigl(\mathfrak g^i, D_0^i(\lambda)/ D_0^{i+2}(\lambda)\bigr) \end{gather*} such that \begin{gather} \varphi|_{\mathfrak g^i}=\Pi_0^i\circ\pi_0^i\circ\hat\varphi|_{\mathfrak g^i} \qquad \forall \, i<0,\nonumber\\ \hat\varphi|_{\mathfrak g^0}=I_\lambda.\label{hat} \end{gather} For this f\/ix again a point $\lambda=(x,\varphi)\in P^0$. For any $i<0$ choose a subspace $H^i\subset D_0^i(\lambda)/ D_0^{i+2}(\lambda)$, which is a complement of $D_0^{i+1}(\lambda)/D_0^{i+2}(\lambda)$ to $D_0^i(\lambda)/ D_0^{i+2}(\lambda)$: \begin{gather} \label{H1} D_0^i(\lambda)/ D_0^{i+2}(\lambda)=D_0^{i+1}(\lambda)/ D_0^{i+2}(\lambda)\oplus H^i. \end{gather} Then the map $\Pi^i_0\circ\pi^i_0|_{H^i}$ def\/ines an isomorphism between $H^i$ and $D^i\bigl(\Pi_0(\lambda)\bigr)/ D^{i+1}\bigl(\Pi_0(\lambda)\bigr)$. So, once a tuple of subspaces $\mathcal H=\{H^i\}_{i<0}$ is chosen, one can def\/ine a map \begin{gather*} \varphi^{\mathcal H}\in \displaystyle{\bigoplus_{i\leq 0} \text{Hom}\bigl(\mathfrak g^i, D_0^i(\lambda)/ D_0^{i+2}(\lambda)\bigr)} \end{gather*} as follows \begin{gather*} \varphi^{\mathcal H}|_{\mathfrak g^i}= \begin {cases}\big(\Pi^i_0\circ\pi_0^i|_{H^i}\big)^{-1}\circ\varphi|_{\mathfrak g^i} & \text{ if } i<0,\\ I_\lambda & \text { if } i=0. \end{cases} \end{gather*} Clearly $\hat\varphi=\varphi^{\mathcal H}$ satisf\/ies \eqref{hat}. Tuples of subspaces $\mathcal H=\{H^i\}_{i<0}$ satisfying \eqref{H1} play here the same role as horizontal subspaces in the prolongation of the usual $G$-structures. Can we choose a tuple $\{H^i\}_{i<0}$ in a canonical way? 
For this, by analogy with the prolongation of $G$-structures, we introduce a ``partial soldering form'' of the bundle $P^0$ and the structure function of a~tuple~$\mathcal H$. The \emph{soldering form} of $P^0$ is a tuple $\Omega_0=\{\omega_0^i\}_{i<0}$, where $\omega^i_0$ is a $\mathfrak g^i$-valued linear form on $D_0^i(\lambda)$ def\/ined by \begin{gather*} \omega_0^i(Y)=\varphi^{-1}\bigl(\bigl((\Pi_0)_*(Y)\bigr)_i\bigr), \end{gather*} where $\bigl((\Pi_0)_*(Y)\bigr)_i$ is the equivalence class of $(\Pi_0)_*(Y)$ in $D^i(x)/D^{i+1}(x)$. Observe that $D_0^{i+1}(\lambda)=\ker \omega_0^i$. Thus the form $\omega_0^i$ induces the $\mathfrak g^i$-valued form $\bar \omega_0^i$ on $D_0^i(\lambda) /D_0^{i+1}(\lambda)$. The \emph{structure function $C_{\mathcal H}^0$ of the tuple $\mathcal H=\{H^i\}_{i<0}$} is the element of the space \begin{gather} \label{A0} \mathcal A_0= \left(\bigoplus_{i=-\mu}^{-2} {\rm Hom}(\mathfrak g^{-1}\otimes\mathfrak g^i,\mathfrak g^{i})\right) \oplus {\rm Hom}\big(\mathfrak g^{-1}\wedge\mathfrak g^{-1},\mathfrak g^{-1}\big) \end{gather} def\/ined as follows. Let $\text{pr}_i^{\mathcal H}$ be the projection of $D_0^i(\lambda)/ D_0^{i+2}(\lambda)$ to $D_0^{i+1}(\lambda)/ D_0^{i+2}(\lambda)$ parallel to $H^i$ (or corresponding to the splitting \eqref{H1}). Given vectors $v_1\in \mathfrak g^{-1}$ and $v_2\in\mathfrak g^{i}$, take two vector f\/ields $Y_1$ and $Y_2$ in a neighborhood of $\lambda$ in $P^0$ such that $Y_1$ is a section of $D_0^{-1}$, $Y_2$ is a~section of $D_0^i$, and \begin{gather} \omega_0^{-1}(Y_1)\equiv v_1,\qquad \omega_0^i(Y_2)\equiv v_2,\nonumber \\ Y_1(\lambda)=\varphi^{\mathcal H}(v_1),\qquad Y_2(\lambda)\equiv \varphi^{\mathcal H}(v_2)\,\,{\rm mod}\, D_0^{i+2}(\lambda).\label{Y1Y2} \end{gather} Then set \begin{gather} \label{structT1} C_{\mathcal H}^0(v_1,v_2)\stackrel{\text{def}}{=}\bar\omega_0^i\bigl({\rm pr}_{i-1}^{\mathcal H}\bigl([Y_1,Y_2](\lambda)\bigr)\bigr). \end{gather} In the above formula we take the equivalence class of the vector $[Y_1, Y_2](\lambda)$ in $D_0^{i-1}(\lambda)/D_0^{i+1}(\lambda)$ and then apply ${\rm pr}_{i-1}^{\mathcal H}$. One must show that $C_{\mathcal H}^0(v_1,v_2)$ does not depend on the choice of vector f\/ields~$Y_1$ and~$Y_2$, satisfying~\eqref{Y1Y2}. Indeed, assume that $\widetilde Y_1$ and $\widetilde Y_2$ are another pair of vector f\/ields in a neighborhood of $\lambda$ in~$P^0$ such that $\widetilde Y_1$ is a section of~$D_0^{-1}$, $\widetilde Y_2$ is a section of $D_0^i$, and they satisfy~\eqref{Y1Y2} with~$Y_1$,~$Y_2$ replaced by $\widetilde Y_1$, $\widetilde Y_2$. Then \begin{gather} \label{zz} \widetilde Y_1=Y_1+Z_1, \qquad \widetilde Y_2=Y_2+Z_2, \end{gather} where $Z_1$ is a section of the distribution $D_0^0$ such that $Z_1(\lambda)=0$ and $Z_2$ is a section of the distribution $D_0^{i+1}$ such that $Z_2(\lambda)\in D_0^{i+2}(\lambda)$. It follows that $[Y_1, Z_2](\lambda) \in D_0^{i+1}(\lambda)$ and $[Y_2, Z_1](\lambda)\in D_0^{i+1}(\lambda)$. This together with the fact that $[Z_1,Z_2]$ is a section of $D_0^{i+1}$ implies that \begin{gather*} [\widetilde Y_1,\widetilde Y_2](\lambda)\equiv [Y_1,Y_2](\lambda)\ \ \text{mod}\ D_0^{i+1}(\lambda). \end{gather*} From \eqref{structT1} we see that the structure function is independent of the choice of vector f\/ields~$Y_1$ and~$Y_2$.
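Note that for $\mu=1$, i.e.\ in the case of usual $G$-structures, the f\/irst summand in \eqref{A0} is empty, a tuple $\mathcal H$ reduces to the choice of a single horizontal subspace $H^{-1}\subset T_\lambda P^0$, and the structure function $C_{\mathcal H}^0$ coincides with the structure function of Section~\ref{section2}.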
We now take another tuple $\widetilde {\mathcal H}=\{\widetilde H^i\}_{i<0}$ such that \begin{gather} \label{tildeH1} D_0^i(\lambda)/ D_0^{i+2}(\lambda)=D_0^{i+1}(\lambda)/ D_0^{i+2}(\lambda)\oplus \widetilde H^i \end{gather} and consider how the structure functions $C_{\mathcal H}^0$ and $C_{\widetilde{\mathcal H}}^0$ are related. By construction, for any vector $v\in\mathfrak g^i$ the vector $\varphi^{\widetilde{\mathcal H}}(v)-\varphi^{\mathcal H}(v)$ belongs to $D_0^{i+1}(\lambda)/ D_0^{i+2}(\lambda)$. Let \begin{gather*} f_{\mathcal H\widetilde {\mathcal H}}(v)\stackrel{\text{def}}{=} \begin{cases} \bar\omega_0^{i+1}\big(\varphi^{\widetilde{\mathcal H}}(v)-\varphi^{\mathcal H}(v)\big) & \text{ if } v\in \mathfrak g^i \text{ with } i<-1,\\ I_\lambda^{-1}\big(\varphi^{\widetilde{\mathcal H}}(v)-\varphi^{\mathcal H}(v)\big) & \text{ if } v\in \mathfrak g^{-1}. \end{cases} \end{gather*} Then $f_{\mathcal H\widetilde {\mathcal H}}\in \displaystyle{\bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+1})}$. Conversely, it is clear that for any $f\in\displaystyle{\bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+1})}$ there exists a tuple $\widetilde{\mathcal H}=\{\widetilde H^i\}_{i<0}$, satisfying \eqref{tildeH1}, such that $f=f_{\mathcal H \widetilde {\mathcal H}}$. Further, let $\mathcal A_0$ be as in \eqref{A0} and def\/ine a map \begin{gather*} \partial_0:\displaystyle{\bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+1})}\rightarrow \mathcal A_0 \end{gather*} by \begin{gather*} \partial_0 f(v_1,v_2)=[f(v_1),v_2]+[v_1,f(v_2)]-f([v_1,v_2]), \end{gather*} where the brackets $[\, \,, \,]$ are as in the Lie algebra $\mathfrak m\oplus\mathfrak g^0$. The map $\partial_0$ coincides with the Spencer operator \eqref{Spencer} in the case of $G$-structures. Therefore it is called the \emph{generalized Spencer operator for the first prolongation}. \begin{proposition} \label{prop1} The following identity holds: \begin{gather} \label{structrans} C_{\widetilde {\mathcal H}}^0=C_{\mathcal H}^0+\partial_0f_{\mathcal H \widetilde{\mathcal H}}. \end{gather} \end{proposition} \begin{proof} Fix vectors $v_1\in \mathfrak g^{-1}$ and $v_2\in\mathfrak g^{i}$ and let $Y_1$ and $Y_2$ be two vector f\/ields in a neighborhood of $\lambda$ satisfying~\eqref{Y1Y2}. Take two vector f\/ields $\widetilde Y_1$ and $\widetilde Y_2$ in a neighborhood of $\lambda$ in~$P^0$ such that~$\widetilde Y_1$ is a section of $D_0^{-1}$, $\widetilde Y_2$ a section of $D_0^i$, and \begin{gather*} \omega_0^{-1}(\widetilde Y_1)\equiv v_1,\qquad \omega_0^i(\widetilde Y_2)\equiv v_2, \nonumber\\ \widetilde Y_1(\lambda)=\varphi^{\widetilde{\mathcal H}}(v_1),\qquad \widetilde Y_2(\lambda)\equiv \varphi^{\widetilde{\mathcal H}}(v_2)\,\,{\rm mod}\,\, D_0^{i+2}(\lambda). \end{gather*} Further, assume that the vector f\/ields $Z_1$ and $Z_2$ are def\/ined as in \eqref{zz}. Then $Z_1$ is a section of $D_0^0$ and $Z_2$ is a section of $D_0^{i+1}$ such that \begin{gather} Z_1(\lambda)=I_\lambda\bigl(f_{\mathcal H\widetilde {\mathcal H}}(v_1)\bigr), \label{zz11}\\ f_{\mathcal H\widetilde {\mathcal H}}(v_2)=\begin{cases} \bar\omega_0^{i+1}\left(Z_2(\lambda)\right) & \text{ if } v_2\in \mathfrak g^i, \ i<-1,\\ I_\lambda^{-1}\left(Z_2(\lambda)\right) & \text{ if } v_2\in \mathfrak g^{-1}. \end{cases} \label{zz12} \end{gather} Hence $[Z_1, Y_2]$ and $[Y_1, Z_2]$ are sections of $D_0^i$, while $[Z_1, Z_2]$ is a section of $D_0^{i+1}$.
This implies that \begin{gather} \label{inter1} \bar\omega_0^i\Bigl({\rm pr}_{i-1}^{\widetilde {\mathcal H}}\bigl([\widetilde Y_1,\widetilde Y_2](\lambda)\bigr)\Bigr)= \bar\omega_0^i\Bigl({\rm pr}_{i-1}^{\widetilde{\mathcal H}}\bigl([Y_1, Y_2](\lambda)\bigr)\Bigr)+\bar\omega_0^i\bigl([Z_1, Y_2]\bigr) + \bar\omega_0^i\bigl([Y_1, Z_2]\bigr). \end{gather} Further, directly from the def\/initions of $f_{\mathcal H\widetilde {\mathcal H}}$, ${\rm pr}_{i-1}^{\mathcal H}$, and ${\rm pr}_{i-1}^{\widetilde {\mathcal H}}$ it follows that \begin{gather} \label{inter2} \bar\omega_0^i\bigl({\rm pr}_{i-1}^{\widetilde {\mathcal H}}(w)\bigr)=\bar\omega_0^i\bigl({\rm pr}_{i-1}^{\mathcal H}(w)\bigr)-f_{\mathcal H\widetilde {\mathcal H}}\bigl(\bar\omega_0^{i-1}(w)\bigr) \qquad \forall \, w\in D_0^{i-1}(\lambda)/D_0^{i+1}(\lambda). \end{gather} Besides, from the def\/inition of the soldering form, the fact that $\varphi$ is an isomorphism of the Lie algebras $\mathfrak m$ and $\mathfrak m(x)= \displaystyle{\bigoplus_{i<0}D^i(x)/D^{i+1}(x)}$ , and relations \eqref{zz12} for $i<-1$ it follows that \begin{gather} \bar\omega_0^i\bigl([Y_1, Z_2]\bigr)=[v_1, f_{\mathcal H\widetilde {\mathcal H}}(v_2)]\qquad \forall\, i<-1. \label{inter3} \end{gather} Taking into account \eqref{Y1Y2} we get \begin{gather} \bar\omega_0^{i-1}([Y_1, Y_2])=[v_1,v_2]. \label{inter4} \end{gather} Finally, from \eqref{zz11} and \eqref{zz12} for $i=-1$, and the def\/inition of the action of $G^0$ on $P^0$ it follows that identity \eqref{inter3} holds also for $i=-1$, and that \begin{gather} \bar\omega_0^i\bigl([Z_1, Y_2]\bigr)=[f_{\mathcal H\widetilde {\mathcal H}}(v_1), v_2]. \label{inter5} \end{gather} Substituting \eqref{inter2}--\eqref{inter5} into \eqref{inter1} we get \eqref{structrans}. \end{proof} Now we proceed as in the case of $G$-structures. Fix a subspace \begin{gather*} \mathcal N_0\subset \mathcal A_0 \end{gather*} which is complementary to $\text{Im}\, \partial_0$, \begin{gather} \label{normsplit} \mathcal A_0= \text{Im} \,\partial_0\oplus\mathcal N_0. \end{gather} As for $G$-structures, the subspace $\mathcal N_0$ def\/ines the normalization conditions for the f\/irst prolongation. Then from the splitting \eqref{normsplit} it follows trivially that there exists a tuple $\mathcal H=\{H^i\}_{i<0}$ such that \begin{gather} \label{norm0} C_{\mathcal H}^0\in \mathcal N_0. \end{gather} A tuple $\widetilde {\mathcal H}=\{\widetilde H^i\}_{i<0}$ satisf\/ies $C_{\widetilde{\mathcal H}}^0\in \mathcal N_0$ if and only if $f_{\mathcal H\widetilde{\mathcal H}}\in \ker\, \partial_0$. In particular if $\ker \,\partial_0 =0$ then the tuple $\mathcal H$ is f\/ixed uniquely by condition \eqref{norm0}. Let \begin{gather*} \mathfrak g^1\stackrel{\text{def}}{=} \ker \partial_0. \end{gather*} The space $\mathfrak g^1$ is called the \emph{first algebraic prolongation} of the algebra $\mathfrak m\oplus \mathfrak g^0$. Here we consider~$\mathfrak g^1$ as an abelian Lie algebra. Note that the fact that the symbol $\mathfrak m$ is fundamental (that is, $\mathfrak g^{-1}$~generates the whole $\mathfrak m$) implies that \begin{gather*} \mathfrak g^1=\left\{f\in \bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+1}): f ([v_1,v_2])=[f (v_1),v_2]+[v_1, f( v_2)]\,\,\forall\, v_1, v_2 \in \mathfrak m\right\}. \end{gather*} The \emph{first $($geometric$)$ prolongation} of the bundle $P^0$ is the bundle $P^1$ over $P^0$ def\/ined by \begin{gather*} P^1=\big\{(\lambda, \mathcal H):\lambda\in P^0, C_{\mathcal H}^0\in \mathcal N_0\big\}. 
\end{gather*} Equivalently, \begin{gather*} P^1=\big\{(\lambda,\varphi^{\mathcal H}):\lambda\in P^0, C_{\mathcal H}^0\in \mathcal N_0\big\}. \end{gather*} It is a principal bundle with the abelian structure group $G^1$ of all maps $A\in\! \displaystyle{\bigoplus_{i<1}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i}\oplus\mathfrak g^{i+1})}$ such that \begin{gather*} A|_{\mathfrak g^i}={\rm Id}_{\mathfrak g^i}+T_i, \qquad i<0,\nonumber\\ A|_{\mathfrak g^0}={\rm Id}_{\mathfrak g^0}, \end{gather*} where $T_i\in {\rm Hom}(\mathfrak g^i,\mathfrak g^{i+1})$ and $(T_{-\mu},\ldots,T_{-1})\in \mathfrak g^1$. The right action $R_A^1$ of $A\in G^1$ on a f\/iber of $P^1$ is def\/ined by $R_A^1 (\varphi^{\mathcal H}) =\varphi^{\mathcal H}\circ A$. Note that $G^1$ is an abelian group of dimension equal to $\dim \mathfrak g^1$. \section{Higher order Tanaka's prolongations}\label{section4} More generally, def\/ine the $k$-th algebraic prolongation $\mathfrak g^k$ of the algebra $\mathfrak m\oplus \mathfrak g^0$ by induction for any $k\in\mathbb N$. Assume that spaces $\mathfrak g^l\subset \displaystyle{\bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+l})}$ are def\/ined for all $0<l<k$. Set \begin{gather} \label{br2} [f,v]=-[v, f]=f(v) \qquad \forall \, f\in \mathfrak g^l, \ 0\leq l<k, \ \text{and} \ v\in\mathfrak m. \end{gather} Then let \begin{gather} \label{mgk} \mathfrak g^k\stackrel{\text{def}}{=}\left\{f\in \bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+k}): f ([v_1,v_2])=[f (v_1),v_2]+[v_1, f( v_2)]\,\,\forall\, v_1, v_2 \in \mathfrak m\right\}. \end{gather} Directly from this def\/inition and the fact that $\mathfrak m$ is fundamental (that is, it is generated by $\mathfrak g^{-1}$) it follows that if $f\in \mathfrak g^k$ satisf\/ies $f|_{\mathfrak g^{-1}}=0$, then $f=0$. The space $\bigoplus_{i\in Z} \mathfrak g^i$ can be naturally endowed with the structure of a graded Lie algebra. The brackets of two elements from $\mathfrak m$ are as in $\mathfrak m$. The brackets of an element with non-negative weight and an element from $\mathfrak m$ are already def\/ined by \eqref{br2}. It only remains to def\/ine the brackets $[f_1,f_2]$ for $f_1\in\mathfrak g^k$, $f_2\in \mathfrak g^l$ with $k,l\geq 0$. The def\/inition is inductive with respect to $k$ and $l$: if $k=l=0$ then the bracket $[f_1,f_2]$ is as in~$\mathfrak g^0$. Assume that $[f_1,f_2]$ is def\/ined for all $f_1\in\mathfrak g^k$, $f_2\in \mathfrak g^l$ such that a pair $(k,l)$ belongs to the set \begin{gather*} \{(k,l):0 \leq k\leq \bar k, 0\leq l\leq \bar l\}\backslash \{(\bar k,\bar l)\}. \end{gather*} Then def\/ine $[f_1, f_2]$ for $f_1\in\mathfrak g^{\bar k}$, $f_2\in \mathfrak g^{\bar l}$ to be the element of $\displaystyle{\bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+\bar k+\bar l})}$ given by \begin{gather} \label{posbr} [f_1,f_2]v\stackrel{\text{def}}{=} [f_1(v), f_2]+[f_1,f_2(v)]\qquad \forall \, v\in\mathfrak m. \end{gather} It is easy to see that $[f_1,f_2]\in \mathfrak g^{k+l}$ and that $\bigoplus_{i\in Z} \mathfrak g^i$ with bracket product def\/ined as above is a~graded Lie algebra. As a matter of fact \cite[\S~5]{tan} this graded Lie algebra satisf\/ies Properties~1--3 from Subsection~\ref{section1.4}. That is it is a realization of the algebraic universal prolongation $\mathfrak g(\mathfrak m, \mathfrak g^0)$ of the algebra $\mathfrak m\oplus\mathfrak g^0$. Now we are ready to construct the higher order geometric prolongations of the bundle $P^0$ by induction. 
Assume that all $l$-th order prolongations $P^l$ are constructed for $0\leq l\leq k$. We also set $P^{-1}=M$. We will not specify what the bundles $P^l$ are exactly. As in the case of the f\/irst prolongation $P^1$, their construction depends on the choice of normalization conditions on each step. But we will point out those properties of these bundles that we need in order to construct the $(k+1)$-st order prolongation $P^{k+1}$. Here are these properties: \begin{enumerate}\itemsep=0pt \item $P^l$ is a principal bundle over $P^{l-1}$ with an abelian structure group $G^l$ of dimension equal to $\dim \mathfrak g^l$ and with the canonical projection $\Pi_l$. \item The tangent bundle $T P^l$ is endowed with the f\/iltration $\{D^i_l\}$ as follows: For $l=-1$ it coincides with the initial f\/iltration $\{D^{i}\}_{i<0}$ and for $l\geq 0$ we get by induction \begin{gather*} D^l_l=\ker (\Pi_l)_*,\nonumber\\ D^i_l(\lambda_l)=\bigl\{v\in T_\lambda P^l: (\Pi_l)_*v\in D_{l-1}^i\bigl(\Pi_l(\lambda_l)\bigr)\bigr\} \qquad \forall\, i<l. \end{gather*} The subspaces $D^l_l(\lambda_l)$, as the tangent spaces to the f\/ibers of $P^l$ , are canonically identif\/ied with $\mathfrak g^l$. Denote by $I_{\lambda_l}:\mathfrak g^l\rightarrow D^l_l(\lambda_l)$ the identifying isomorphism. \item The f\/iber of $P^l$, $0\leq l\leq k$, over a point $\lambda_{l-1}\in P^{l-1}$ will be a certain subset of the set of all maps from \begin{gather*} \bigoplus_{i< l} \text{Hom}\bigl(\mathfrak g^i, D_{l-1}^i(\lambda_{l-1})/ D_{l-1}^{i+l+1}(\lambda_{l-1})\bigr). \end{gather*} If $l>0$ and $\lambda_l=(\lambda_{l-1}, \varphi_l) \in P^l$, then $\varphi_l|_{\mathfrak g^{l-1}}$ coincides with the identif\/ication of $\mathfrak g^{l-1}$ with $D^{l-1}_{l-1}(\lambda_{l-1})$ and the restrictions $\varphi_l|_{\mathfrak g^i}$ with $i\geq 0$ are the same for all $\lambda_l$ from the same f\/iber. \item Assume that $0<l\leq k$, $\lambda_{l-1}=(\lambda_{l-2}, \varphi_{l-1})\in P^{l-1}$ and $\lambda_l=(\lambda_{l-1},\varphi_l)\in P^l$. The maps $\varphi_{l-1}$ and $\varphi_l$ are related as follows: if \begin{gather} \label{pli} \pi_l^i: \ D_l^i(\lambda_{l})/ D_l^{i+l+2}(\lambda_{l})\to D_l^i(\lambda_{l})/ D_l^{i+l+1}(\lambda_{l}) \end{gather} are the canonical projections to a factor space and \begin{gather} \label{Pli} \Pi_l^i: \ D^i_l(\lambda_l)/D^{i+l+1}_l(\lambda_l)\rightarrow D^i_{l-1}\bigl(\Pi_l(\lambda_l)\bigr)/D^{i+l+1}_{l-1}(\Pi_l(\lambda_l)) \end{gather} are the canonical maps induced by $(\Pi_l)_*$, then \begin{gather*} \forall \, i<l \qquad \varphi_{l-1}|_{\mathfrak g^i}=\Pi_{l-1}^i\circ \pi_{l-1}^i\circ \varphi_{l}|_{\mathfrak g^i}. \end{gather*} Note that the maps $\Pi_l^i$ are isomorphisms for $i<0$ and the maps $\pi_l^i$ are identities for $i\geq 0$ (we set $D_l^i=0$ for $i>l$). \end{enumerate} Now we are ready to construct the $(k+1)$-st order Tanaka geometric prolongation. Fix a point $\lambda_k \in P^k$ and assume that $\lambda_k=(\lambda_{k-1},\varphi_k)$, where \begin{gather*} \varphi_k\in \displaystyle{\bigoplus_{i< k} \text{Hom}\bigl(\mathfrak g^i, D_{k-1}^i(\lambda_{k-1})/ D_{k-1}^{i+k+1}(\lambda_{k-1})\bigr)}. \end{gather*} Let $\mathcal H_k=\{H_k^i\}_{i<k}$ be the tuple of spaces such that $H_k^i=\varphi_k(\mathfrak g^i)$. 
Take a tuple $\mathcal H_{k+1}=\{H_{k+1}^i\}_{i<k}$ of linear spaces such that \begin{enumerate}\itemsep=0pt \item for $i<0$ the space $H_{k+1}^i$ is a complement of $D_k^{i+k+1}(\lambda_k)/D_k^{i+k+2}(\lambda_k)$ in $(\Pi_k^i\circ\pi_k^i)^{-1} (H_k^i)\subset D_{k}^i(\lambda_{k})/ D_{k}^{i+k+2}(\lambda_{k})$, \begin{gather} \label{Hk-} (\Pi_k^i\circ\pi_k^i)^{-1} (H_k^i)=D_k^{i+k+1}(\lambda_k)/D_k^{i+k+2}(\lambda_k)\oplus H_{k+1}^i; \end{gather} \item for $0\leq i<k$ the space $H_{k+1}^i$ is a complement of $D_k^k(\lambda_k)$ in $(\Pi_k^i)^{-1} (H_k^i)$, \begin{gather} \label{Hk+} (\Pi_k^i)^{-1} (H_k^i)=D_k^{k}(\lambda_k)\oplus H_{k+1}^i. \end{gather} \end{enumerate} Here the maps $\pi_k^i$ and $\Pi_k^i$ are def\/ined as in \eqref{pli} and \eqref{Pli} with $l=k$. Since $D_k^{i+k+1}(\lambda_k)/D_k^{i+k+2}(\lambda_k)=\ker \pi_k^i$ and $\Pi_k^i$ is an isomorphism for $i<0$, the map $\Pi_k^i\circ\pi^i_k|_{H_{k+1}^i}$ def\/ines an isomorphism between $H_{k+1}^i$ and $H_k^i$ for $i<0$. Additionally, by \eqref{Hk+} the map $(\Pi_k)_*|_{H_{k+1}^i}$ def\/ines an isomorphism between $H_{k+1}^i$ and $H_k^i$ for $0\leq i<k$. So, once a tuple of subspaces $\mathcal H_{k+1}=\{H_{k+1}^i\}_{i<k}$, satisfying \eqref{Hk-} and \eqref{Hk+}, is chosen, one can def\/ine a map \begin{gather*} \varphi^{\mathcal H_{k+1}}\in \bigoplus_{i\leq k} \text{Hom}\bigl(\mathfrak g^i, D_k^i(\lambda_k)/ D_k^{i+k+2}(\lambda_k)\bigr) \end{gather*} as follows: \begin{gather*} \varphi^{\mathcal H_{k+1}}|_{\mathfrak g^i}= \begin{cases} \big(\Pi_k^i\circ\pi^i_k|_{H_{k+1}^i}\big)^{-1}\circ\varphi_k|_{\mathfrak g^i} &\text { if } i<0,\\ \bigl((\Pi_k)_*|_{H_{k+1}^i}\bigr)^{-1}\circ\varphi_k|_{\mathfrak g^i} &\text{ if }0\leq i<k,\\ I_{\lambda_k}& \text{ if } i=k. \end{cases} \end{gather*} Can we choose a tuple or a subset of tuples $\mathcal H_{k+1}$ in a canonical way? To answer this question, by analogy with Sections~\ref{section2} and~\ref{section3}, we introduce a ``partial soldering form'' of the bundle $P^k$ and the structure function of a tuple $\mathcal H_{k+1}$. The \emph{soldering form} of $P^k$ is a tuple $\Omega_k=\{\omega_k^i\}_{i<k}$, where~$\omega_k^i$ is a $\mathfrak g^i$-valued linear form on $D_k^i(\lambda_k)$ def\/ined by \begin{gather*} \omega_k^i(Y)=\varphi_k^{-1}\bigl(\bigl((\Pi_k)_*(Y)\bigr)_i\bigr). \end{gather*} Here $\bigl((\Pi_k)_*(Y)\bigr)_i$ is the equivalence class of $(\Pi_k)_*(Y)$ in $D_{k-1}^i(\lambda_{k-1})/D_{k-1}^{i+k+1}(\lambda_{k-1})$. By construction it follows immediately that $D_k^{i+1}(\lambda_k)=\ker \omega_k^i$. So, the form $\omega_k^i$ induces the $\mathfrak g^i$-valued form $\bar \omega_k^i$ on $D_{k}^i(\lambda_k) /D_k^{i+1}(\lambda_k)$. The \emph{structure function $C_{\mathcal H_{k+1}}^k$ of a tuple $\mathcal H_{k+1}$} is the element of the space \begin{gather} {\mathcal A}_k=\left(\bigoplus_{i=-\mu}^{-2} {\rm Hom}\big(\mathfrak g^{-1}\otimes\mathfrak g^i,\mathfrak g^{i+k}\big)\right) \oplus {\rm Hom}\big(\mathfrak g^{-1}\wedge\,\mathfrak g^{-1},\mathfrak g^{k-1}\big)\nonumber\\ \phantom{{\mathcal A}_k=}{} \oplus\left( \bigoplus_{i=0}^{k-1} {\rm Hom}\big(\mathfrak g^{-1}\otimes\mathfrak g^i,\mathfrak g^{k-1}\big)\right)\label{Ak} \end{gather} def\/ined as follows: Let $\pi_l^{i,s}: D_l^i(\lambda_l)/D_l^{i+l+2}(\lambda_l)\rightarrow D_l^i(\lambda_l)/ D_l^{i+l+2-s}(\lambda_l)$ be the canonical projection to a factor space, where $-1\leq l\leq k$, $i\leq l$. Here, as before, we assume that $D_l^i=0$ for $i>l$. Note that the previously def\/ined $\pi_l^i$ coincides with $\pi_l^{i,1}$.
By construction, one has the following two relations \begin{gather} D_{k}^i(\lambda_{k})/D_{k}^{i+k+2}(\lambda_{k})=\left(\bigoplus_{s=0}^k\pi_k^{i+s,s} (H_{k+1}^{i+s})\right)\oplus D_{k}^{i+k+1}(\lambda_{k})/D_{k}^{i+k+2}(\lambda_{k}) \quad \text{ if} \ \ i<0, \label{longsplit1} \\ D_{k}^i(\lambda_{k})=\left(\bigoplus_{s=i}^{k-1} H_{k+1}^{s}\right)\oplus D_{k}^{k}(\lambda_{k})\quad \text{if} \ \ 0\leq i<k. \label{longsplit2} \end{gather} Let $\text{pr}_i^{\mathcal H_{k+1}}$ be the projection of $D_k^i(\lambda_k)/ D_k^{i+k+2}(\lambda_k)$ to $D_k^{i+k+1}(\lambda_k)/ D_k^{i+k+2}(\lambda_k)$ correspon\-ding to the splitting \eqref{longsplit1} if $i<0$ or the projection of $D_k^i(\lambda_k)$ to $H_{k+1}^{k-1}$ corresponding to the splitting \eqref{longsplit2} if $0\leq i<k$. Given vectors $v_1\in \mathfrak g^{-1}$ and $v_2\in\mathfrak g^{i}$, take two vector f\/ields $Y_1$ and $Y_2$ in a neighborhood $U_k$ of $\lambda_k$ in $P^k$ such that for any $\tilde \lambda_k=(\tilde \lambda_{k-1},\tilde\varphi_k)\in U_k$, where $\tilde \varphi_k\in \displaystyle{\bigoplus_{i< k} \text{Hom}\bigl(\mathfrak g^i, D_{k-1}^i(\lambda_{k-1})/ D_{k-1}^{i+k+1}(\lambda_{k-1})\bigr)}$, one has \begin{gather} (\Pi_k)_*Y_1(\tilde \lambda_k)= \tilde \varphi_{k}(v_1), \qquad (\Pi_k)_*Y_2(\tilde\lambda_k)\equiv \tilde\varphi_{k}(v_2) \quad {\rm mod}\,\, D_{k-1}^{i+k+1}(\lambda_{k-1}),\nonumber \\ Y_1(\lambda_k)=\varphi^{\mathcal H_{k+1}}(v_1),\qquad Y_2(\lambda_k)\equiv\varphi^{\mathcal H_{k+1}}(v_2)\quad {\rm mod}\,\, D_k^{i+k+2}(\lambda_k).\label{Y1Y2k} \end{gather} Then set \begin{gather} \label{structTk} C_{\mathcal H_{k+1}}^k(v_1,v_2)\stackrel{\text{def}}{=} \begin{cases} \bar\omega_k^{i+k}\bigl({\rm pr}_{i-1}^{\mathcal H_{k+1}}\bigl([Y_1,Y_2]\bigr)\bigr) & \text{ if } i<0,\\ \omega_k^{k-1}\bigl({\rm pr}_{i-1}^{\mathcal H_{k+1}}\bigl([Y_1,Y_2]\bigr)\bigr)& \text{ if } 0\leq i<k. \end{cases} \end{gather} As in the case of the f\/irst prolongation, $C_{\mathcal H_{k+1}}^k(v_1,v_2)$ does not depend on the choice of vector f\/ields $Y_1$ and $Y_2$, satisfying \eqref{Y1Y2k}. Indeed, assume that $\widetilde Y_1$ and $\widetilde Y_2$ are another pair of vector f\/ields in a neighborhood of $\lambda_k$ in $P^k$ such that $\widetilde Y_1$ is a section of $D_k^{-1}$, $\widetilde Y_2$ is a section of $D_k^i$, and they satisfy \eqref{Y1Y2k} with $Y_1$, $Y_2$ replaced by $\widetilde Y_1$, $\widetilde Y_2$. Then \begin{gather*} \widetilde Y_1=Y_1+Z_1, \qquad \widetilde Y_2=Y_2+Z_2, \end{gather*} where $Z_1$ is a section of the distribution $D_k^k$ such that $Z_1(\lambda_k)=0$ and $Z_2$ is a section of the distribution $D_k^{\min\{i+k+1,k\}}$ such that $Z_2(\lambda_k)\in D_k^{\min\{i+k+1,k\}+1}(\lambda_k)$. Then $[Y_1, Z_2](\lambda_k) \in D_k^{\min\{i+k+1,k\}}(\lambda_k)$ and $[Y_2, Z_1](\lambda_k)\in D_k^{\min\{i+k+1,k\}}(\lambda_k)$. This together with the fact that $[Z_1,Z_2]$ is a section of $D_k^{\min\{i+k+1,k\}+1}$ implies that \begin{gather*} [\widetilde Y_1,\widetilde Y_2](\lambda_k)\equiv [Y_1,Y_2](\lambda_k)\quad \text{mod}\,\,D_k^{\min\{i+k+1,k\}}(\lambda_k). \end{gather*} From \eqref{structTk} it follows that the structure function is independent of the choice of vector f\/ields~$Y_1$ and~$Y_2$.
Now take another tuple $\widetilde {\mathcal H}_{k+1}=\{\widetilde H_{k+1}^i\}_{i<k}$ such that \begin{enumerate} \item for $i<0$ the space $\widetilde H_{k+1}^i$ is a complement of $D_k^{i+k+1}(\lambda_k)/D_k^{i+k+2}(\lambda_k)$ in $(\Pi_k^i\circ\pi_k^i)^{-1} (H_k^i)\subset D_{k}^i(\lambda_{k})/ D_{k}^{i+k+2}(\lambda_{k})$, \begin{gather} \label{Hk-t} \big(\Pi_k^i\circ\pi_k^i\big)^{-1} (H_k^i)=D_k^{i+k+1}(\lambda_k)/D_k^{i+k+2}(\lambda_k)\oplus \widetilde H_{k+1}^i; \end{gather} \item for $0\leq i<k$ the space $\widetilde H_{k+1}^i$ is a complement of $D_k^k(\lambda_k)$ in $(\Pi_k^i)^{-1} (H_k^i)$, \begin{gather} \label{Hk+t} \big(\Pi_k^i\big)^{-1} (H_k^i)=D_k^{k}(\lambda_k)\oplus \widetilde H_{k+1}^i. \end{gather} \end{enumerate} How are the structure functions $C_{{\mathcal H}_{k+1}}^k$ and $C_{\widetilde{\mathcal H}_{k+1}}^k$ related? By construction, for any vector $v\in\mathfrak g ^i$ the vector $\varphi^{\widetilde{\mathcal H}_{k+1}}(v)-\varphi^{\mathcal H_{k+1}}(v)$ belongs to $D_k^{i+k+1}(\lambda_k)/ D_k^{i+k+2}(\lambda_k)$, for $i<0$, and to~$D_k^{k}(\lambda_k)$, for $0\leq i<k$. Let \begin{gather*} f_{\mathcal H_{k+1}\widetilde {\mathcal H}_{k+1}}(v)\stackrel{\text{def}}{=} \begin{cases} \bar\omega_k^{i+k+1}\big(\varphi^{\widetilde{\mathcal H}_{k+1}}(v)-\varphi^{\mathcal H_{k+1}}(v)\big) & \text{ if } v\in \mathfrak g^i \text{ with } i<-1,\\ I_{\lambda_k}^{-1}\big(\varphi^{\widetilde{\mathcal H}_{k+1}}(v)-\varphi^{\mathcal H_{k+1}}(v)\big) & \text{ if } v\in \mathfrak g^{i} \text{ with } -1\leq i<k. \end{cases} \end{gather*} Then \begin{gather*} f_{\mathcal H_{k+1}\widetilde {\mathcal H}_{k+1}}\in \displaystyle{\bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+k+1})}\oplus \displaystyle{\bigoplus_{i=0}^{k-1}{\rm Hom}(\mathfrak g^i,\mathfrak g^k)}. \end{gather*} In the opposite direction, it is clear that for any $f\in\displaystyle {\bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+k+1})}\oplus \displaystyle{\bigoplus_{i=0}^{k-1}{\rm Hom}(\mathfrak g^i,\mathfrak g^k)}$, there exists a tuple $\widetilde{\mathcal H}_{k+1}=\{\widetilde H_{k+1}^i\}_{i<k}$ satisfying \eqref{Hk-t} and \eqref{Hk+t} and such that $f=f_{\mathcal H_{k+1} \widetilde {\mathcal H}_{k+1}}$. Further, let $\mathcal A_k$ be as in \eqref{Ak} and def\/ine a map \begin{gather*} \partial_k:\displaystyle{\bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+k+1})}\oplus \displaystyle{\bigoplus_{i=0}^{k-1}{\rm Hom}(\mathfrak g^i,\mathfrak g^k)}\rightarrow \mathcal A_k \end{gather*} by \begin{gather} \label{Spk} \partial_k f(v_1,v_2)\\ \qquad{} =\begin{cases}[f(v_1),v_2]+[v_1,f(v_2)]-f([v_1,v_2])&\text{ if } v_1\in \mathfrak g^ {-1}, \ v_2\in \mathfrak g^i, \ i<0, \\ [v_1,f(v_2)] &\text{ if } v_1\in \mathfrak g^ {-1}, \ v_2\in \mathfrak g^i, \ 0\leq i \leq k-1, \end{cases}\nonumber \end{gather} where the brackets $[\, \,, \,]$ are as in the algebraic universal prolongation $\mathfrak g(\mathfrak m,\mathfrak g^0)$. For $k=0$ this def\/inition coincides with the def\/inition of the generalized Spencer operator for the f\/irst prolongation given in the previous section. The reason for introducing the operator $\partial_k$ is that the following generalization of identity~\eqref{structrans} holds: \begin{gather*} C_{\widetilde {\mathcal H}_{k+1}}^k=C_{\mathcal H_{k+1}}^k+\partial_kf_{\mathcal H_{k+1} \widetilde{\mathcal H}_{k+1}}.
\end{gather*} A verif\/ication of this identity for pairs $(v_1,v_2)$, where $v_1\in \mathfrak g^{-1}$ and $v_2\in \mathfrak g^i$ with $i<0$, is completely analogous to the proof of Proposition~\ref{prop1}. For $i\geq 0$ one has to use the inductive assumption that the restrictions $\varphi_l|_{\mathfrak g^i}$ are the same for all $\lambda_l$ from the same f\/iber (see item~3 from the list of properties satisf\/ied by $P^l$ in the beginning of this section) and the splitting \eqref{longsplit2}. Now we proceed as in Sections~\ref{section2} and~\ref{section3}. Fix a subspace \begin{gather*} \mathcal N_k\subset\mathcal A_k \end{gather*} which is complementary to $\text{Im}\, \partial_k$, \begin{gather} \label{normsplitk} \mathcal A_k= \text{Im} \,\partial_k\oplus\mathcal N_k. \end{gather} As above, the subspace $\mathcal N_k$ def\/ines the normalization conditions for the $(k+1)$-st prolongation. Then from the splitting~\eqref{normsplitk} it follows trivially that there exists a tuple $\mathcal H_{k+1}=\{H_{k+1}^i\}_{i<k}$, satisfying~\eqref{Hk-} and~\eqref{Hk+}, such that \begin{gather*} C_{\mathcal H_{k+1}}^k\in \mathcal N_k \end{gather*} and $C_{\widetilde{\mathcal H}_{k+1}}^k\in \mathcal N_k$ for a tuple $\widetilde{\mathcal H}_{k+1}=\{\widetilde H_{k+1}^i\}_{i<k}$, satisfying \eqref{Hk-t} and \eqref{Hk+t}, if and only if $f_{\mathcal H_{k+1}\widetilde {\mathcal H}_{k+1}}\in \ker\, \partial_k$. Note also that \begin{gather} \label{kerk} f\in \ker\, \partial_k \,\Rightarrow\, f|_{\mathfrak g^i}=0 \qquad \forall \, 0\leq i\leq k-1. \end{gather} In other words, \begin{gather} \label{kerk1} \ker\, \partial_k\subset \displaystyle{\bigoplus_{i<0}{\rm Hom}\big(\mathfrak g^i,\mathfrak g^{i+k+1}\big)}. \end{gather} Indeed, if $f\in \ker\, \partial_k$, then by \eqref{Spk} for any $v_1\in\mathfrak g^{-1}$ and $v_2\in \mathfrak g^{i}$ with $0\leq i\leq k-1$ one has \begin{gather*} [v_1,f(v_2)]=-f(v_2)(v_1)=0. \end{gather*} In other words, $f(v_2)|_{\mathfrak g^{-1}}=0$ (recall that $f(v_2)\in \mathfrak g^k\subset \displaystyle{\bigoplus_{i<0}{\rm Hom}(\mathfrak g^i,\mathfrak g^{i+k})}$). Since $\mathfrak g^{-1}$ generates the whole symbol $\mathfrak m$, we see that $f(v_2)=0$ holds for any $v_2\in \mathfrak g^{i}$ with $0\leq i\leq k-1$. This proves \eqref{kerk}. Further, comparing \eqref{Spk} and \eqref{kerk1} with \eqref{mgk} and using again the fact that $\mathfrak g^{-1}$ generates the whole symbol $\mathfrak m$, we obtain \begin{gather*} \ker \partial_k=\mathfrak g^{k+1}. \end{gather*} The \emph {$(k+1)$-st $($geometric$)$ prolongation} of the bundle $P^0$ is the bundle $P^{k+1}$ over $P^k$ def\/ined by \begin{gather*} P^{k+1}=\big\{(\lambda_k, \mathcal H_{k+1}):\lambda_k\in P^k, C_{\mathcal H_{k+1}}^k\in \mathcal N_k\big\}. \end{gather*} Equivalently, \begin{gather*} P^{k+1}=\big\{(\lambda_k,\varphi^{\mathcal H_{k+1}}):\lambda_k\in P^k, C_{\mathcal H_{k+1}}^k\in \mathcal N_k\big\}. \end{gather*} It is a principal bundle with the abelian structure group $G^{k+1}$ of all maps $A\in \displaystyle{\bigoplus_{i\leq k}}{\rm Hom}(\mathfrak g^i$, $\mathfrak g^{i}\oplus\mathfrak g^{i+k+1}) $ such that \begin{gather*} A|_{\mathfrak g^i}=\begin{cases}{\rm Id}_{\mathfrak g^i}+T_i &\text{ if } i<0,\\ {\rm Id}_{\mathfrak g^i} &\text{ if } 0\leq i\leq k, \end{cases} \end{gather*} where $T_i\in {\rm Hom}(\mathfrak g^i,\mathfrak g^{i+k+1})$ and $(T_{-\mu},\ldots,T_{-1})\in \mathfrak g^{k+1}$.
The right action $R_A^{k+1}$ of $A\in G^{k+1}$ on a f\/iber of $P^{k+1}$ is def\/ined by $R_A^{k+1} (\varphi^{\mathcal H_{k+1}}) =\varphi^{\mathcal H_{k+1}}\circ A$. Obviously, $G^{k+1}$ is an abelian group of dimension equal to $\dim \mathfrak g^{k+1}$. It is easy to see that the bundle $P^{k+1}$ is constructed so that the Properties $1$--$4$, formulated in the beginning of the present section, hold for $l=k+1$ as well. Finally, assume that there exists $\bar l\geq 0$ such that $\mathfrak g^{\bar l}\neq 0$ but $\mathfrak g^{\bar l+1}= 0$. Since the symbol $\mathfrak m$ is fundamental, it follows that $\mathfrak g ^l=0$ for all $l>\bar l$. Hence, for all $l>\bar l$ the f\/iber of $P^l$ over a~point $\lambda_{l-1}\in P^{l-1}$ is a single point belonging to $\displaystyle{\bigoplus_{i=-\mu}^{l-1} \text{Hom}\bigl(\mathfrak g^i, D_{l-1}^i(\lambda_{l-1})/ D_{l-1}^{i+l+1}(\lambda_{l-1})\bigr)},$ where, as before, $\mu$ is the degree of nonholonomy of the distribution $D$. Moreover, by our assumption, $D_l^i=0$ if $l\geq\bar l$ and $i\geq \bar l$. Therefore, if $l=\bar l+\mu$, then $i+l+1>\bar l$ for $i\geq -\mu$ and the f\/iber of $P^l$ over a point $\lambda_{l-1}\in P^{l-1}$ is an element of ${\rm Hom} \Big(\displaystyle{\bigoplus_{i=-\mu}^{l-1}}\mathfrak g^i, T_{\lambda_{l-1}}P^{l-1}\Big)$. In other words, $P^{\bar l+\mu}$ def\/ines a~canonical frame on $P^{\bar l+\mu-1}$. But all bundles $P^l$ with $l\geq \bar l$ are identif\/ied with each other by the canonical projections (which are dif\/feomorphisms in that case). As a conclusion we get an alternative proof of the main result of the Tanaka paper \cite{tan}: \begin{theorem} \label{main} If the $(\bar l+1)$-st algebraic prolongation of the graded Lie algebra $\mathfrak m\oplus\mathfrak g^0$ is equal to zero, then for any structure $P^0$ of constant type $(\mathfrak m,\mathfrak g^0)$ there exists a canonical frame on the $\bar l$-th geometric prolongation $P^{\bar l}$ of $P^0$. \end{theorem} The power of Theorem \ref{main} is that it reduces the question of existence of a canonical frame for a~structure of constant type $(\mathfrak m,\mathfrak g^0)$ to the calculation of the universal algebraic prolongation of the algebra $\mathfrak m\oplus \mathfrak g^0$. But the latter is pure Linear Algebra: each consecutive algebraic prolongation is determined by solving the system of linear equations given by~\eqref{mgk}. Let us demonstrate this algebraic prolongation procedure in the case of the equivalence of second order ordinary dif\/ferential equations with respect to the group of point transformations (see Example~5 in Subsection~\ref{section1.3}). The result of this prolongation is well known from the structure theory of simple Lie algebras (see the discussion below), but this is one of the few nontrivial examples where the explicit calculations of the algebraic prolongation can be written down in detail within one and a half pages. {\bf Continuation of Example 5.} Recall that our geometric structure here is a contact distribution $D$ on a $3$-dimensional manifold endowed with two distinguished transversal line sub-distributions. The symbol of $D$ is isomorphic to the $3$-dimensional Heisenberg algebra $\eta_3$ with grading $\mathfrak g^{-1}\oplus\mathfrak g^{-2}$, where $\mathfrak g^{-2}$ is the center of $\eta_3$. Besides, the plane $\mathfrak g^{-1}$ is endowed with two distinguished transversal lines $\ell_1$ and $\ell_2$. Let $X_1$ and $X_2$ be vectors spanning $\ell_1$ and $\ell_2$, respectively, and let $X_3=[X_1,X_2]$.
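
For the computations below it is convenient to record the commutation relations of $\eta_3$ in this basis explicitly (this is simply the definition of the Heisenberg algebra together with its grading):
\begin{gather*}
[X_1,X_2]=X_3,\qquad [X_1,X_3]=[X_2,X_3]=0,
\end{gather*}
so that $\mathfrak g^{-1}={\rm span}\{X_1,X_2\}$ and $\mathfrak g^{-2}={\rm span}\{X_3\}$ is the center of $\eta_3$. The two vanishing brackets are the source of all linear constraints on the maps $\delta^1$, $\delta^2$, $\delta^3$ appearing in the calculations a)--c) below.
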
Let $g^0$ be the algebra of all derivations on $\eta_3$ preserving the grading and the lines $\ell_1$ and $\ell_2$. Then \begin{gather*} \mathfrak g^0=\text{span}\big\{\Lambda_1^0,\Lambda_2^0\big\}, \end{gather*} where \begin{gather} \label{2ordlam} \Lambda_1^0(X_1)=X_1,\qquad \Lambda_1^0(X_2)=X_2,\qquad \Lambda_2^0(X_1)=X_1,\qquad \Lambda_2^0(X_2)=-X_2. \end{gather} Using the fact that $\Lambda_i^0$ is a derivation, we also have \begin{gather} \label{2ordlamx3} \Lambda_1^0(X_3)=2X_3,\qquad \Lambda_2^0(X_3)=0. \end{gather} {\bf a) Calculation of $\boldsymbol{\mathfrak g^{1}}$.} Given $\delta^1\in {\rm Hom}(\mathfrak g^{-1},\mathfrak g^0)\oplus {\rm Hom}(\mathfrak g^{-2},\mathfrak g^{-1})$ we have \begin{gather} \label{2ordg1} \delta^1(X_1)=\alpha_{11}\Lambda_1^0+\alpha_{12}\Lambda_2^0, \qquad \delta^1(X_2)=\alpha_{21}\Lambda_1^0+\alpha_{22}\Lambda_2^0 \end{gather} for some $\alpha_{ij}$, $1\leq i,j\leq 2$. If $\delta^1\in \mathfrak g^1$, then \eqref{2ordlam} yields \begin{gather} \label{2orddelx3} \delta^1(X_3)=[\delta^1(X_1), X_2]+[X_1,\delta^1(X_2)]\\ \phantom{\delta^1(X_3)}{} =\bigl(\alpha_{11}\Lambda_1^0\!+\alpha_{12}\Lambda_2^0\bigr)(X_2)\!-\! \bigl(\alpha_{21}\Lambda_1^0\!+\alpha_{22}\Lambda_2^0\bigr)(X_1)= -(\alpha_{21}\!+\alpha_{22})X_1\!+(\alpha_{11}\!-\alpha_{12})X_2.\! \nonumber \end{gather} Using \eqref{2ordlamx3} and \eqref{2orddelx3}, we have \begin{gather*} 0=\delta^1([X_1,X_3])=[\delta^1(X_1), X_3]+[X_1,\delta^1(X_3)]\\ \phantom{0}{} = (\alpha_{11}\Lambda_1^0+\alpha_{12}\Lambda_2^0\bigr)(X_3)+(\alpha_{11}-\alpha_{12})X_3= (3\alpha_{11}-\alpha_{12})X_3, \end{gather*} which implies that $\alpha_{12}= 3\alpha_{11}$. In the same way, from the identities $0=\delta^1[X_2,X_3]=[\delta^1(X_2), X_3]+[X_2,\delta^1(X_3)]$ one obtains easily that $\alpha_{22}=-3\alpha_{21}$. This completes the verif\/ication of conditions for $\delta^1$ to be in $\mathfrak g^{1}$. Hence \begin{gather*} \mathfrak g^1=\text{span}\big\{\Lambda_1^1,\Lambda_2^1\big\}, \end{gather*} where $\Lambda_1^1,\Lambda_2^1 \in {\rm Hom}(\mathfrak g^{-1},\mathfrak g^0)\oplus {\rm Hom}(\mathfrak g^{-2},\mathfrak g^{-1})$ such that \begin{alignat}{4} &\Lambda_1^1(X_1)=\Lambda_1^0+3\Lambda_2^0,\qquad & & \Lambda_1^1(X_2)=0,\qquad & &\Lambda_1^1(X_3)=-2X_2,& \nonumber\\ &\Lambda_2^1(X_1)=0,\qquad & & \Lambda_2^1(X_2)=\Lambda_1^0-3\Lambda_2^0,\qquad && \Lambda_2^1(X_3)=2X_1.& \label{2ordlam1} \end{alignat} ($\Lambda_1^1$ corresponds to $\delta^1$ as in \eqref{2ordg1} and \eqref{2orddelx3} with $\alpha_{11}=1$ and $\alpha_{21}=0$, while $\Lambda_2^1$ corresponds to $\delta^1$ with $\alpha_{11}=0$ and $\alpha_{21}=1$ in the same formulas). {\bf b) Calculation of $\boldsymbol{\mathfrak g^{2}}$.} Given $\delta^2\in {\rm Hom}(\mathfrak g^{-1},\mathfrak g^1)\oplus {\rm Hom}(\mathfrak g^{-2},\mathfrak g^{0})$ we have \begin{gather*} \delta^2(X_1)=\beta_{11}\Lambda_1^1+\beta_{12}\Lambda_2^1, \qquad \delta^2(X_2)=\beta_{21}\Lambda_1^1+\beta_{22}\Lambda_2^1 \end{gather*} for some $\beta_{ij}$, $1\leq i,j\leq 2$. 
If $\delta^2\in \mathfrak g^2$, then \eqref{2ordlam1} implies \begin{gather}\label{2orddelx32} \delta^2(X_3)=[\delta^2(X_1), X_2]+[X_1,\delta^2(X_2)] \\ \phantom{\delta^2(X_3)}{} =\bigl(\beta_{11}\Lambda_1^1+\beta_{12}\Lambda_2^1\bigr)(X_2)- \bigl(\beta_{21}\Lambda_1^1+\beta_{22}\Lambda_2^1\bigr)(X_1)= (\beta_{12}\!-\beta_{21})\Lambda_1^0\!-3(\beta_{12}\!+\beta_{21})\Lambda_2^0.\nonumber \end{gather} Using \eqref{2ordlam1} and \eqref{2orddelx32}, we have \begin{gather*} 0=\delta^2([X_1,X_3])=[\delta^2(X_1), X_3]+[X_1,\delta^2(X_3)] \\ \phantom{0}{}= \bigl(\beta_{11}\Lambda_1^1+\beta_{12}\Lambda_2^1\bigr)(X_3)- (\beta_{12}-\beta_{21})\Lambda_1^0(X_1)+3(\beta_{12}+\beta_{21})\Lambda_2^0(X_1)\\ \phantom{0}{} =4(\beta_{12}+\beta_{21})X_1-2\beta_{11}X_2, \end{gather*} which implies that \begin{gather} \label{2ordcond11} \beta_{11}=0, \qquad \beta_{21}=-\beta_{12}. \end{gather} Similarly, the identities $0=\delta^2[X_2,X_3]=[\delta^2(X_2), X_3]+[X_2,\delta^2(X_3)]$ imply $\beta_{22}=0$ in addition to \eqref{2ordcond11}. This completes the verif\/ication of the conditions for $\delta^2$ to be in $\mathfrak g^{2}$. Hence \begin{gather*} \mathfrak g^2=\text{span}\{\Lambda\}, \end{gather*} where $\Lambda \in {\rm Hom}(\mathfrak g^{-1},\mathfrak g^1)\oplus {\rm Hom}(\mathfrak g^{-2},\mathfrak g^{0})$ is def\/ined by \begin{gather} \label{2ordlam2} \Lambda(X_1)=\Lambda_2^1, \qquad \Lambda(X_2)=-\Lambda_1^1,\qquad \Lambda(X_3)=2\Lambda_1^0. \end{gather} {\bf c) Calculation of $\boldsymbol{\mathfrak g^{3}}$.} Given $\delta^3\in {\rm Hom}(\mathfrak g^{-1},\mathfrak g^2)\oplus {\rm Hom}(\mathfrak g^{-2},\mathfrak g^{1})$ we have \begin{gather*} \delta^3(X_1)=\gamma_1\Lambda, \qquad \delta^3(X_2)=\gamma_2\Lambda \end{gather*} for some $\gamma_{i}$, $i=1,2$. If $\delta^3\in \mathfrak g^3$, then \eqref{2ordlam2} implies $\delta^3(X_3)=-\gamma_1\Lambda_1^1-\gamma_2\Lambda_2^1$. From the identities $0=\delta^3[X_1,X_3]=[\delta^3(X_1), X_3]+[X_1,\delta^3(X_3)]$ it follows easily that $\gamma_1=0$. From the identities $0=\delta^3[X_2,X_3]=[\delta^3(X_2), X_3]+[X_2,\delta^3(X_3)]$ it follows easily that $\gamma_2=0$. Hence, \[ \mathfrak g^3=0. \] Thus the algebraic universal prolongation $\mathfrak g(\eta_3,\mathfrak g^0)=\mathfrak g^{-2}\oplus\mathfrak g^{-1}\oplus\mathfrak g^{0}\oplus\mathfrak g^{1}\oplus\mathfrak g^{2}$ is $8$-dimensional. Therefore, f\/ixing the normalization conditions at each step, one can construct the f\/irst and the second geometric prolongations $P^1$ and $P^2$, and \emph{for any contact distribution $D$ on a $3$-dimensional manifold endowed with two distinguished transversal line sub-distributions there is a canonical frame on the $8$-dimensional bundle $P^2$}. Let us look at the algebra $\mathfrak g(\eta_3,\mathfrak g^0)$ in more detail. Applying \eqref{posbr} inductively, we see that all nonzero brackets of the elements $\Lambda_1^0$, $\Lambda_2^0$, $\Lambda_1^1$, $\Lambda_2^1$, $\Lambda$ (spanning the subalgebra of elements with nonnegative weights of $\mathfrak g(\eta_3,\mathfrak g^0)$) are as follows: \begin{gather*} [\Lambda_1^1,\Lambda_1^0]=\Lambda_1^1,\qquad [\Lambda_1^1,\Lambda_2^0]=\Lambda_1^1,\qquad [\Lambda_2^1,\Lambda_1^0]=\Lambda_2^1,\qquad [\Lambda_2^1,\Lambda_2^0]=-\Lambda_2^1,\\ {}[\Lambda_1^1,\Lambda_2^1]=2\Lambda,\qquad [\Lambda,\Lambda_1^0]=2\Lambda. \end{gather*} Considering all products in $\mathfrak g(\eta_3,\mathfrak g^0)$, it is not hard to see that $\mathfrak g(\eta_3,\mathfrak g^0)$ is isomorphic to $\mathfrak{sl}(3,\mathbb R)$.
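
For instance, granting the f\/irst four relations in this list, the relation $[\Lambda_1^1,\Lambda_2^1]=2\Lambda$ is obtained by evaluating on $\mathfrak g^{-1}$ (the remaining products are computed in exactly the same way): by the Jacobi identity in $\mathfrak g(\mathfrak m,\mathfrak g^0)$ one has $[\Lambda_1^1,\Lambda_2^1](v)=[\Lambda_1^1(v),\Lambda_2^1]+[\Lambda_1^1,\Lambda_2^1(v)]$ for $v\in\mathfrak g^{-1}$, so that, by \eqref{2ordlam1},
\begin{gather*}
[\Lambda_1^1,\Lambda_2^1](X_1)=[\Lambda_1^1(X_1),\Lambda_2^1]+[\Lambda_1^1,\Lambda_2^1(X_1)]=[\Lambda_1^0+3\Lambda_2^0,\Lambda_2^1]=2\Lambda_2^1=2\Lambda(X_1),\\
{}[\Lambda_1^1,\Lambda_2^1](X_2)=[\Lambda_1^1(X_2),\Lambda_2^1]+[\Lambda_1^1,\Lambda_2^1(X_2)]=[\Lambda_1^1,\Lambda_1^0-3\Lambda_2^0]=-2\Lambda_1^1=2\Lambda(X_2).
\end{gather*}
Since an element of $\mathfrak g^2$ is determined by its restriction to $\mathfrak g^{-1}$ (as $\mathfrak g^{-1}$ generates $\mathfrak m$), this gives $[\Lambda_1^1,\Lambda_2^1]=2\Lambda$, in accordance with \eqref{2ordlam2}.
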
Indeed, if we denote by $E_{ij}$ the $3\times 3$-matrix such that its $(i,j)$ entry is equal to $1$ and all other entries vanish, then the following mapping is an isomorphism between the algebras $\mathfrak g(\eta_3,\mathfrak g^0)$ and $\mathfrak{sl}(3,\mathbb R)$: \begin{gather*} X_1\mapsto E_{12},\qquad X_2\mapsto E_{23},\qquad X_3\mapsto E_{13},\\ \Lambda_1^1\mapsto -2E_{21},\qquad \Lambda_2^1\mapsto-2E_{32},\qquad \Lambda\mapsto -2E_{31},\\ \Lambda_1^0+3\Lambda_2^0\mapsto 2(E_{11}-E_{22}),\qquad\Lambda_1^0-3\Lambda_2^0\mapsto 2(E_{22}-E_{33}). \end{gather*} As a matter of fact, here we are in the situation when $\mathfrak m\oplus\mathfrak g^0$ is a subalgebra of elements of nonnegative weights (a parabolic subalgebra) of a graded simple Lie algebra (in the considered case $\eta_3\oplus\mathfrak g^0$ is a Borel subalgebra of $\mathfrak{sl}(3, \mathbb R)$). It was shown in \cite{yam} that, except for a few cases, the algebraic universal prolongation of a parabolic subalgebra of a simple Lie algebra is isomorphic to this simple Lie algebra. This result can be applied also to the algebra $\mathfrak m_{(2,5)}\oplus\mathfrak g^0(\mathfrak m_{(2,5)})$, corresponding to maximally nonholonomic rank $2$ distributions in $\mathbb R^5$ (see Example 2 above). In this case $\mathfrak m_{(2,5)}\oplus\mathfrak g^0(\mathfrak m_{(2,5)})$ is the subalgebra of elements of non-negative degree in the exceptional Lie algebra $G_2$, graded according to the coef\/f\/icient of the short simple root. So, according to~\cite{yam}, the algebraic universal prolongation of $\mathfrak m_{(2,5)}\oplus\mathfrak g^0(\mathfrak m_{(2,5)})$ is isomorphic to $G_2=\displaystyle{\bigoplus_{i=-3}^3}\mathfrak g^i$. This together with Theorem~\ref{main} implies that \emph{to any maximally nonholonomic rank $2$ distribution in $\mathbb R^5$ one can assign a canonical frame on the bundle $P^3$ of dimension equal to $\dim G_2=14$}. Note that this statement is still weaker than what Cartan proved in \cite{cartan}. Indeed, Cartan provides explicit expressions for the coframe and f\/inds the complete system of invariants, while Theorem~\ref{main} is only an existence statement. Finally note that the construction of the bundles $P^k$ (and therefore of the canonical frame) depends on the choice of the normalization conditions given by spaces $\mathcal N_k$, as in \eqref{normsplitk}. Under additional assumptions on the algebra $\mathfrak g(\mathfrak m,\mathfrak g^0)$ (for example, semisimplicity or existence of a~special bilinear form) the spaces $\mathcal N_k$ themselves can be taken in a canonical way at each step of the prolongation procedure. This allows one to construct canonical frames satisfying additional nice properties. In particular, in another fundamental paper of Tanaka~\cite{tan2}, it was shown that if the algebraic universal prolongation $\mathfrak g(\mathfrak m,\mathfrak g^0)$ is a semisimple Lie algebra, then the so-called $\mathfrak g(\mathfrak m,\mathfrak g^0)$-valued \emph{normal} Cartan connection can be associated with a structure of type $(\mathfrak m,\mathfrak g^0)$. Roughly \mbox{speaking}, a Cartan connection gives a canonical frame which is compatible in a natural way with the whole algebra $\mathfrak g(\mathfrak m,\mathfrak g^0)$. This is a generalization of Cartan's results \cite{cartan} on maximally nonholonomic rank 2 distributions in $\mathbb R^5$.
Further, T.~Morimoto \cite{mori} gave a general criterion (in terms of the algebra $\mathfrak g(\mathfrak m,\mathfrak g^0)$) for the existence of the normal Cartan connection for structures of type~$(\mathfrak m,\mathfrak g^0)$. All these developments are far beyond the scope of the present note, so we do not address them in more detail here, referring the reader instead to the original papers. \LastPageEnding \end{document}
\begin{document} \markboth{David Loeffler and Sarah Livia Zerbes} {Iwasawa theory and p-adic L-functions over $\ZZ_p^2$-extensions} \title{IWASAWA THEORY AND P-ADIC L-FUNCTIONS OVER $\ZZ_p^2$-EXTENSIONS} \author{DAVID LOEFFLER} \thanks{Supported by a Royal Society University Research Fellowship.} \address{Mathematics Institute\\ Zeeman Building\\ University of Warwick\\ Coventry CV4 7AL, United Kingdom} \email{[email protected]} \author{SARAH LIVIA ZERBES} \thanks{Supported by EPSRC First Grant EP/J018716/1.} \address{Mathematics Department\\ University College London\\ Gower Street\\ London WC1E 6BT, United Kingdom} \email{[email protected]} \begin{abstract} We construct a two-variable analogue of Perrin-Riou's $p$-adic regulator map for the Iwasawa cohomology of a crystalline representation of the absolute Galois group of $\QQ_p$, over a Galois extension whose Galois group is an abelian $p$-adic Lie group of dimension 2. We use this regulator map to study $p$-adic representations of global Galois groups over certain abelian extensions of number fields whose localisation at the primes above $p$ is an extension of the above type. In the example of the restriction to an imaginary quadratic field of the representation attached to a modular form, we formulate a conjecture on the existence of a ``zeta element'', whose image under the regulator map is a $p$-adic $L$-function. We show that this conjecture implies the known properties of the 2-variable $p$-adic $L$-functions constructed by Perrin-Riou and Kim. \end{abstract} \keywords{Iwasawa theory, $p$-adic regulator, $p$-adic $L$-function} \subjclass[2010]{Mathematics Subject Classification 2010: Primary 11R23, 11G40. Secondary: 11S40, 11F80 } \maketitle \section{Introduction} \label{sect:intro} In the first part of this paper (Sections \ref{sect:yager} and \ref{sect:regulator}), we develop a ``two-variable'' analogue of Perrin-Riou's theory of $p$-adic regulator maps for crystalline representations of $p$-adic Galois groups. Let us briefly recall Perrin-Riou's cyclotomic theory as developed in \cite{perrinriou95}. Let $p$ be an odd prime, $F$ a finite unramified extension of $\QQ_p$, and $V$ a continuous $p$-adic representation of the absolute Galois group $\mathcal{G}_F$ of $F$, which is crystalline with Hodge--Tate weights $\ge 0$ and with no quotient isomorphic to the trivial representation. Then there is a ``regulator'' or ``big logarithm'' map \[ \mathcal{L}^\Gamma_{F, V} : H^1_{\Iw}(F(\mu_{p^\infty}), V) \rTo \mathcal{H}_{\QQ_p}(\Gamma) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\] which interpolates the values of the Bloch--Kato dual exponential and logarithm maps for the twists $V(j)$, $j \in \mathbb{Z}$, over each finite subextension $F(\mu_{p^n})$. Here $\mathcal{H}_{\QQ_p}(\Gamma)$ is the algebra of $\QQ_p$-valued distributions on the group $\Gamma = \Gal(F(\mu_{p^\infty}) / F) \cong \ZZ_p^\times$, and the Iwasawa cohomology $H^1_{\Iw}(F(\mu_{p^\infty}), V)$ is defined as $\QQ_p \otimes_{\ZZ_p} \varprojlim_n H^1(F(\mu_{p^n}), T)$ where $T$ is any $\mathcal{G}_F$-stable $\ZZ_p$-lattice in $V$. This map plays a crucial role in cyclotomic Iwasawa theory for $p$-adic representations of the Galois groups of number fields, as a bridge between cohomological objects and $p$-adic $L$-functions. It is natural to ask whether or not the construction of the maps $\mathcal{L}^\Gamma_{F, V}$ may be extended to consider twists of $V$ by more general characters of $\mathcal{G}_{F}$. 
In this paper, we give a complete answer to this question for characters factoring through an extension $K_\infty / F$ which is abelian over $\QQ_p$ (thus for all characters if $F = \QQ_p$). Any such character factors through the Galois group $G$ of an extension of the form $K_\infty = F_\infty(\mu_{p^\infty})$, where $F_\infty$ is an unramified extension of $F$ which is a finite extension of the unique unramified $\ZZ_p$-extension of $F$. Denote by $\widehat{F}_\infty$ the $p$-adic completion of $F_\infty$, and $\mathcal{H}_{\widehat{F}_\infty}(G)$ the algebra of $\widehat{F}_\infty$-valued distributions on $G$. \begin{theorem} For any crystalline representation $V$ of $\mathcal{G}_{F}$ with non-negative Hodge--Tate weights, there exists a regulator map \[ \mathcal{L}^{G}_{V} : H^1_{\Iw}(K_\infty, V) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\] interpolating the maps $\mathcal{L}^\Gamma_{K, V}$ for all unramified extensions $K/F$ contained in $F_\infty$. \end{theorem} See Theorem \ref{thm:localregulator} for a precise statement of the result. Unlike the cyclotomic case, this result holds whether or not $V$ has trivial quotients. In Sections \ref{sect:semilocalregulator} and \ref{sect:defmoduleofLfunctions}, we use the $2$-variable $p$-adic regulator to study global Galois representations. Let $K$ be a finite extension of $\mathbb{Q}$, $\mathfrak{p}$ a prime of $K$ above $p$ which is unramified, and $K_\infty$ be a $p$-adic Lie extension of $K$ such that for any prime $\mathfrak{P}$ of $K_\infty$ above $\mathfrak{p}$, the local extension $K_{\infty, \mathfrak{P}} / K_{\mathfrak{p}}$ is of the type considered above. Let $G=\Gal(K_\infty\slash K)$. In Section \ref{sect:semilocalregulator}, we extend the regulator map to a map \[ \mathcal{L}^G_{\mathfrak{p}, V} : Z^1_{\Iw, \mathfrak{p}}(K_\infty, V) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(K_{\mathfrak{p}}, V) \] where $Z^1_{\Iw, \mathfrak{p}}(K_\infty, V)$ is the direct sum of the Iwasawa cohomology groups at each of the primes $\mathfrak{q} \mid \mathfrak{p}$, and $\mathbb{D}_{\mathrm{cris}}(K_{\mathfrak{p}}, V)$ is the Fontaine $\mathbb{D}_{\mathrm{cris}}$ functor for $V$ regarded as a representation of a decomposition group at $\mathfrak{p}$. There is a natural localisation map \[ H^1_{\Iw, S}(K_\infty, V) \to \bigoplus_{\mathfrak{p} \mid p} Z^1_{\Iw, \mathfrak{p}}(K_\infty, V)\] where $H^1_{\Iw, S}(K_\infty,V)$ denotes the inverse limit of global cohomology groups unramified outside a fixed set of primes $S$. As in the case of Perrin-Riou's cyclotomic regulator map, our map $\mathcal{L}^G_V$ allows elements of Iwasawa cohomology (or, more generally, of its exterior powers) to be interpreted as $\mathbb{D}_{\mathrm{cris}}$-valued distributions on $G$ (after extending scalars). Assuming a plausible conjecture analogous to Leopoldt's conjecture, we use the map $\mathcal{L}^G_V$ to define a certain submodule $\mathbb{I}_{\arith}(V)$ of the distributions on $G$ with values in an exterior power of $\mathbb{D}_{\mathrm{cris}}$. Following Perrin-Riou \cite{perrinriou95}, we call $\mathbb{I}_{\arith}(V)$ the \emph{module of $2$-variable $L$-functions}. We conjecture that there exist special elements of the top exterior power of $H^1_{\Iw,S}(K_\infty, V)$ (``zeta elements'') whose images under the regulator map are $p$-adic $L$-functions, and that these should generate $\mathbb{I}_{\arith}(V)$ as a module over the Iwasawa algebra $\Lambda_{\QQ_p}(G)$. 
In Section \ref{sect:imquad}, we investigate in detail two instances of this conjecture that occur when the field $K$ is imaginary quadratic. We first show that for the representation $\ZZ_p(1)$, our regulator map coincides with the map constructed in \cite{yager82}. In this paper, Yager shows that his map sends the Euler system of elliptic units to Katz's $p$-adic $L$-function. As the second example, we study the representation attached to a weight 2 cusp form for $\GL_2 / K$: here we predict the existence of multiple distributions, depending on a choice of Frobenius eigenvalue at each prime above $p$ (Conjecture \ref{conj:twovarmf}), and we show that our conjectures imply the known properties of the 2-variable $p$-adic $L$-functions constructed by Perrin-Riou \cite{perrinriou88} (for $f$ ordinary) and by B.D.~Kim \cite{kim-preprint} (for $f$ non-ordinary). However, our conjectures also predict the existence of some new $p$-adic $L$-functions. (The existence of these $p$-adic $L$-functions is verified in a forthcoming paper \cite{loeffler13} of the first author.) In our paper \cite{leiloefflerzerbes12} (joint with Antonio Lei), we use the $2$-variable $p$-adic regulator to study the critical slope $p$-adic $L$-functions of an ordinary CM modular form. In this case, there are two candidates for the $p$-adic $L$-function, one arising from Kato's Euler system and a second from $p$-adic modular symbols. The latter has been studied by Bella\"\i che \cite{bellaiche11}, who has proved a formula (Theorem 2 of \emph{op.cit.}) relating it to the Katz $L$-function for the CM field. We use the methods of the present paper to prove a corresponding formula for the $L$-function arising from Kato's construction, implying that the two $p$-adic $L$-functions in fact coincide. \section{Setup and notation} \subsection{Fields and their extensions} \label{sect:fieldsandtheirextensions} Let $p$ be an odd prime, and denote by $\mu_{p^\infty}$ the set of $p$-power roots of unity. Let $K$ be a finite extension of either $\mathbb{Q}$ or $\QQ_p$. Define the Galois groups $\mathcal{G}_K=\Gal(\overline{K}\slash K)$ and $H_K=\Gal(\overline{K}\slash K(\mu_{p^\infty}))$. A \emph{$p$-adic Lie extension} of $K$ is a Galois extension $K_\infty/K$ such that $\Gal(K_\infty / K)$ is a compact $p$-adic Lie group of finite dimension. We write $\Gamma$ for the Galois group $\Gal(\mathbb{Q}(\mu_{p^\infty}) / \mathbb{Q})\cong \Gal(\QQ_p(\mu_{p^\infty})\slash \QQ_p)$, which we identify with $\ZZ_p^\times$ via the cyclotomic character $\chi$. Then $\Gamma \cong \Delta \times \Gamma_1$, where $\Delta$ is cyclic of order $p-1$ and $\Gamma_1 = \Gal(\QQ_p(\mu_{p^\infty}) / \QQ_p(\mu_p)) \cong \ZZ_p$, so in particular $\mathbb{Q}_\infty$ (resp. $\mathbb{Q}_{p,\infty}$) is a $p$-adic Lie extension of $\mathbb{Q}$ (resp. $\QQ_p$) of dimension $1$. \subsection{Iwasawa algebras and power series} \label{sect:iwasawaalgs} Let $G$ be a compact $p$-adic Lie group, and $L$ a complete discretely valued extension of $\QQ_p$ with ring of integers $\mathcal{O}_L$. We let $\Lambda_{\mathcal{O}_L}(G)$ be the Iwasawa algebra $\varprojlim_U \mathcal{O}_L[G /U]$, where the limit is taken over open subgroups $U \subseteq G$. We shall always equip this with the inverse limit topology (sometimes called the ``weak topology'') for which it is a Noetherian topological $\mathcal{O}$-algebra (cf.~\cite[Theorem 6.2.8]{emerton04}). If $L / \QQ_p$ is a finite extension then $\Lambda_{\mathcal{O}_L}(G)$ is compact (but not otherwise). 
We let $\Lambda_L(G) = L \otimes_{\mathcal{O}_L} \Lambda_{\mathcal{O}}(G)$, which is also Noetherian; it is isomorphic to the continuous dual of the space $C(G, L)$ of continuous $L$-valued functions on $G$. (See \cite[Corollary 2.2]{schneiderteitelbaum02} for a proof of the last statement when $L / \QQ_p$ is a finite extension; this extends immediately to general discretely-valued $L$, since $\Lambda_{L}(G) = L \mathbin{\hat\otimes}_{\QQ_p} \Lambda_{\QQ_p}(G)$ and similarly for $C(G, L)$.) Let $\mathcal{H}_L(G)$ be the space of $L$-valued locally analytic distributions on $G$ (the continuous dual of the space $C^{\mathrm{la}}(G, L)$ of $L$-valued locally analytic functions on $G$). There is an injective algebra homomorphism $\Lambda_L(G) \hookrightarrow \mathcal{H}_L(G)$ (see \cite[Proposition 2.2.7]{emerton04}), dual to the inclusion of $C^{\mathrm{la}}(G, L)$ as a dense subspace of $C(G, L)$. We endow $\mathcal{H}_L(G)$ with its natural topology as an inverse limit of Banach spaces, with respect to which the map $\Lambda_L(G) \hookrightarrow \mathcal{H}_L(G)$ is continuous. We shall mostly be concerned with the case when $G$ is abelian, in which case $G$ has the form $H \times \ZZ_p^d$ for $H$ a finite abelian group. In this case $\Lambda_{\mathcal{O}_L}(G)$ is isomorphic to the power series ring $\mathcal{O}_L[H][[X_1, \dots, X_d]]$, where $X_i = \gamma_i - 1$ for generators $\gamma_1, \dots, \gamma_d$ of the $\ZZ_p^d$ factor (see \cite[\S 8.4.1]{nekovar06}). The weak topology on $\Lambda_{\mathcal{O}_L}(G)$ is the $I$-adic topology, where $I$ is the ideal $(p, X_1, \dots, X_d)$. Meanwhile, $\mathcal{H}_L(G)$ identifies with the algebra of $L[H]$-valued power series in $X_1, \dots, X_d$ converging on the rigid-analytic unit ball $|X_i| < 1$, with the topology given by uniform convergence on the closed balls $|X_i| \le r$ for all $r < 1$. In particular, for the group $\Gamma \cong \ZZ_p^*$ as in Section \ref{sect:fieldsandtheirextensions}, we may identify $\mathcal{H}_L(\Gamma)$ with the space of formal power series \[ \{f \in L[\Delta][[X]]:\text{$f$ converges everywhere on the open unit $p$-adic disc}\},\] where $X$ corresponds to $\gamma - 1$ for $\gamma$ a topological generator of $\Gamma_1$; and $\Lambda_L(\Gamma)$ corresponds to the subring of $\mathcal{H}_L(\Gamma)$ consisting of power series with bounded coefficients. Similarly, we define $\mathcal{H}_L(\Gamma_1)$ as the subring of $\mathcal{H}_L(\Gamma)$ defined by power series over $\QQ_p$, rather than $\QQ_p[\Delta]$. For each $i \in \mathbb{Z}$, we define an element $\ell_i \in \mathcal{H}_{\QQ_p}(\Gamma_1)$ by \[ \ell_i = \frac{\log \gamma}{\log \chi(\gamma)} - i\] for any non-identity element $\gamma \in \Gamma_1$ (cf.~\cite[\S II.1]{berger03}); note that this differs by a sign from the element denoted by the same symbol in \cite{perrinriou94}. \subsection{Fontaine rings} \label{sect:Fontainerings} We review the definitions of some of Fontaine's rings that we use in this paper. Details can be found in \cite{berger04} or \cite{leiloefflerzerbes11}. Let $K$ be a finite extension of $\mathbb{Q}_p$; the rings we shall require are those denoted by $\mathbb{A}_{K}$, $\mathbb{A}^+_K$, $\mathbb{B}_K$, $\mathbb{B}^+_K$, and $\mathbb{B}^+_{\rig, K}$. These rings have intrinsic definitions independent of any choices and valid for any $K$; but we shall be interested in the case when $K$ is unramified over $\mathbb{Q}_p$. In this case, they have concrete (but slightly noncanonical) descriptions as follows. 
A choice of compatible system $(\zeta_n)_{n \ge 0}$ of $p$-power roots of unity defines an element $\pi \in \mathbb{A}_K^+$, and allows us to identify $\mathbb{A}_K^+$ with the formal power series ring $\mathcal{O}_K[[\pi]]$. The ring $\mathbb{A}_K$ is simply $\widehat{\mathbb{A}^+_K[1/\pi]}$. The ring $\mathbb{B}^+_K$ is defined as $\mathbb{A}^+_K[1/p]$, and similarly $\mathbb{B}_K = \mathbb{A}_K[1/p]$. Finally, we let $\mathbb{B}^+_{\rig, K}$ be the ring of power series $f \in K[[\pi]]$ which converge on the open unit disc $|\pi| < 1$. All these rings are endowed with an $\mathcal{O}_K$-linear action of $\Gamma$ by $\gamma(\pi)=(\pi+1)^{\chi(\gamma)}-1$, and with a Frobenius $\varphi$ which acts as the usual arithmetic Frobenius on $\mathcal{O}_K$ and on $\pi$ by $\varphi(\pi)=(\pi+1)^p-1$. There is also a left inverse $\psi$ of $\varphi$ on all of the above rings, satisfying \[ \varphi\circ\psi(f(\pi))=\frac{1}{p}\sum_{\zeta^p=1}f(\zeta(1+\pi)-1). \] Write $t=\log(1+\pi) \in \BB^+_{\rig,\Qp}$, and $q=\varphi(\pi)/\pi\in\mathbb{A}_{\QQ_p}^+$. A formal power series calculation shows that $g(t) = \chi(g) t$ for $g \in \Gamma$, and $\varphi(t) = pt$. The action of $\Gamma$ on $\mathbb{A}^+_{K}$ gives an isomorphism of $\Lambda_{\mathcal{O}_K}(\Gamma)$ with the submodule $(\mathbb{A}^+_{K})^{\psi=0}$, the so-called ``Mellin transform'' \begin{align*} \mathfrak{M}: \Lambda_{\mathcal{O}_K}(\Gamma) & \rightarrow (\mathbb{A}^+_{K})^{\psi=0} \\ f(\gamma-1) & \mapsto f(\gamma-1) \cdot (\pi+1). \end{align*} This extends to bijections $\Lambda_K(\Gamma) \cong (\mathbb{B}^+_{K})^{\psi=0}$ and $\mathcal{H}_K(\Gamma) \cong (\mathbb{B}^+_{\rig, K})^{\psi=0}$. (See \cite[\S 1.3]{perrinriou90}, \cite[Proposition 1.2.7]{perrinriou94}, or \cite[\S 1.C.2]{leiloefflerzerbes11} for more details.) \subsection{Crystalline and de Rham representations} \label{sect:crystallinereps} Let $K$ be a finite extension of $\QQ_p$, and $V$ a continuous representation of $\mathcal{G}_K$ on a $\QQ_p$-vector space of dimension $d$. Recall that $\mathbb{D}_{\dR}(V)$ denotes the space $(V \otimes_{\QQ_p} \mathbb{B}_{\dR})^{\mathcal{G}_K}$, where $\mathbb{B}_{\dR}$ is Fontaine's ring of periods. This space $\mathbb{D}_{\dR}(V)$ is a filtered $K$-vector space of dimension $\le d$, and we say $V$ is \emph{de Rham} if equality holds. If $j \in \mathbb{Z}$, $\Fil^j \mathbb{D}_{\dR}(V)$ denotes the $j$-th step in the Hodge filtration of $\mathbb{D}_{\dR}(V)$. If $L$ is a finite extension of $K$, we shall sometimes write $\mathbb{D}_{\dR}(L, V)$ for $\mathbb{D}_{\dR}(V|_{\mathcal{G}_L})$, which can be canonically identified with $L \otimes_K \mathbb{D}_{\dR}(V)$. We also consider the crystalline period ring $\BB_{\cris} \subset \mathbb{B}_{\dR}$, and define similarly $\mathbb{D}_{\mathrm{cris}}(V) = (V \otimes_{\QQ_p} \BB_{\cris})^{\mathcal{G}_K}$. This is a $K_0$-vector space of dimension $\le d$, where $K_0$ is the maximal unramified subspace of $K$, endowed with a semilinear Frobenius (acting as the usual arithmetic Frobenius on $K_0$). We say $V$ is \emph{crystalline} if $\dim_{K_0} \mathbb{D}_{\mathrm{cris}}(V) = d$, in which case $V$ is automatically de Rham, and there is a canonical isomorphism of $K$-vector spaces $\mathbb{D}_{\dR}(V) \cong K \otimes_{K_0} \mathbb{D}_{\mathrm{cris}}(V)$. 
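
For orientation, we record a standard example (a routine computation with the above definitions): for $K=\QQ_p$ and $V=\QQ_p(j)$ with $j\in\mathbb{Z}$ (the $j$-th Tate twist of the trivial representation, defined below), the representation $V$ is crystalline, $\mathbb{D}_{\mathrm{cris}}(V)$ is one-dimensional with $\varphi$ acting as multiplication by $p^{-j}$, and the Hodge filtration on $\mathbb{D}_{\dR}(V)$ is concentrated in degree $-j$, i.e.
\[ \Fil^{i} \mathbb{D}_{\dR}(\QQ_p(j)) = \begin{cases} \mathbb{D}_{\dR}(\QQ_p(j)) & \text{if } i \le -j, \\ 0 & \text{if } i > -j. \end{cases}\]
In particular $\mathbb{D}_{\mathrm{cris}}(\QQ_p(j))^{\varphi=1}=0$ for $j\neq 0$, and $\Fil^0 \mathbb{D}_{\dR}(\QQ_p(j))=0$ if and only if $j\ge 1$.
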
As above, we will write $\mathbb{D}_{\mathrm{cris}}(L, V)$ for $\mathbb{D}_{\mathrm{cris}}(V|_{\mathcal{G}_L})$, where $L$ is a finite extension of $K$; if $V$ is crystalline over $K$ this is isomorphic to $L_0 \otimes_{K_0} \mathbb{D}_{\mathrm{cris}}(V)$. For an integer $j$, $V(j)$ denotes the $j$-th Tate twist of $V$, i.e. $V(j)=V\otimes_{\ZZ_p} (\varprojlim_n \mu_{p^n})^{\otimes j}$. If $\zeta = (\zeta_n)_{n \ge 0}$ is a choice of a compatible system of $p$-power roots of unity, this defines a basis vector $e_j$ of $\QQ_p(j)$ and an element $t^{-j} \in \mathbb{B}_{\dR}$; these each depend on $\zeta$, but the element $t^{-j} e_j \in \mathbb{D}_{\dR}(\QQ_p(j))$ does not, and tensoring with $t^{-j}e_j$ thus gives a canonical isomorphism $\mathbb{D}_{\dR}(\QQ_p(j)) \cong \QQ_p$ for each $j$. We write \[ \exp_{K,V} : \frac{\mathbb{D}_{\dR}(V)}{\Fil^0 \mathbb{D}_{\dR}(V) + \mathbb{D}_{\mathrm{cris}}(V)^{\varphi=1}} \rInto H^1(K,V)\] for the \emph{Bloch--Kato exponential} of $V$ over $K$ (cf.~\cite{blochkato90}), which is the boundary map in the cohomology of the ``fundamental exact sequence'' \[ 0 \rTo V \rTo V \otimes_{\QQ_p} \BB_{\cris}^{\varphi = 1} \rTo V \otimes_{\QQ_p} \left( \frac{\mathbb{B}_{\dR}}{\mathbb{B}_{\dR}^+}\right) \rTo 0. \] The image of this map is denoted $H^1_e(K,V)$, and we denote its inverse by \[ \log_{K, V} : H^1_e(K, V) \rTo^\cong \frac{\mathbb{D}_{\dR}(V)}{\Fil^0 \mathbb{D}_{\dR}(V) + \mathbb{D}_{\mathrm{cris}}(V)^{\varphi=1}}.\] We also denote by \[ \exp_{K,V}^*: H^1(K,V^*(1)) \rightarrow \Fil^0\mathbb{D}_{\dR}(V^*(1))\] the \emph{dual exponential} map, which is the dual of $\exp_{K, V}$ with respect to the Tate duality pairing (cf.~\cite[\S II.1.4]{kato93}); it satisfies the identity \[ \langle \exp_{K, V}(a), b \rangle_{\mathrm{Tate}} = \langle a, \exp^*_{K, V}(b)\rangle_{\dR, K}\] for all $a \in \mathbb{D}_{\dR}(V)$ and $b \in H^1(K, V^*(1))$, where $\langle -, - \rangle_{\mathrm{Tate}}$ is the Tate pairing and $\langle-, -\rangle_{\dR, K}$ is the pairing \[ \mathbb{D}_{\dR}(V) \otimes \mathbb{D}_{\dR}(V^*(1)) \rTo \mathbb{D}_{\dR}(\QQ_p(1)) \cong K \rTo^{\operatorname{trace}} \QQ_p.\] Finally, if $L$ is a number field, $V$ is a $p$-adic representation of $\mathcal{G}_L$ and $\mathfrak{p}$ is a prime of $L$ above $p$, we write $\mathbb{D}_{\dR}(L_\mathfrak{p}, V)$ and $\mathbb{D}_{\mathrm{cris}}(L_\mathfrak{p}, V)$ for the Fontaine spaces attached to $V$ regarded as a representation of $\Gal(\overline{L}_{\mathfrak{P}} / L_\mathfrak{p})$ for any choice of prime $\mathfrak{P} \mid \mathfrak{p}$ of $\overline{L}$; up to a canonical isomorphism these spaces are independent of the choice of $\mathfrak{P}$. \subsection{$(\varphi,\Gamma)$-modules and Wach modules} \label{sect:phigammawach} Let $K$ be a finite extension of $\QQ_p$, and let $T$ be a $\ZZ_p$-representation of $\mathcal{G}_K$ (that is, a finite-rank free module over $\ZZ_p$ with a continuous action of $\mathcal{G}_K$). Denote the $(\varphi,\Gamma)$-module of $T$ by $\mathbb{D}_K(T)$. This is a module over Fontaine's ring $\mathbb{A}_K$.
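
Concretely (with the conventions of the references cited in Section \ref{sect:Fontainerings}), $\mathbb{D}_K(T)=(\mathbb{A}\otimes_{\ZZ_p}T)^{H_K}$, where $\mathbb{A}$ denotes the usual large coefficient ring containing $\mathbb{A}_K$, whose definition we do not recall here; it is a free $\mathbb{A}_K$-module of rank equal to the $\ZZ_p$-rank of $T$, equipped with commuting semilinear actions of $\varphi$ and of $\Gal(K(\mu_{p^\infty})/K)$.
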
If $K$ is unramified over $\QQ_p$ and $T$ is a $\ZZ_p$-representation of $\mathcal{G}_K$ which is crystalline (i.e.~such that $V = T[1/p]$ is crystalline), Wach and Berger have shown that there exists a canonical $\mathbb{A}^+_K$-submodule $\mathbb{N}_K(T) \subset \mathbb{D}_K(T)$, the \emph{Wach module} (see \cite{wach96}, \cite{berger04}); this is the unique submodule such that \begin{itemize} \item $\mathbb{N}_K(T)$ is free of rank $d$ over $\mathbb{A}^+_K$, \item the action of $\Gamma$ preserves $\mathbb{N}_K(T)$ and is trivial on $\mathbb{N}_K(T) / \pi \mathbb{N}_K(T)$, \item there exists $b \in \mathbb{Z}$ such that $\varphi(\pi^b \mathbb{N}_K(T)) \subseteq \pi^b \mathbb{N}_K(T)$ and the quotient $\pi^b \mathbb{N}_K(T) / \varphi^*(\pi^b \mathbb{N}_K(T))$ is killed by a power of $q = \varphi(\pi)/\pi$. \end{itemize} Here $\varphi^*(\pi^b \mathbb{N}_K(T))$ denotes the $\mathbb{A}^+_K$-submodule of $\mathbb{D}_K(T)$ generated by $\varphi(\pi^b \mathbb{N}_K(T))$. The following lemma is immediate from the definition of the functors $\mathbb{D}_K(-)$ and $\mathbb{N}_K(-)$: \begin{lemma} \label{lemma:unramifiedphiGammabasechange} Assume that $T$ is a $\ZZ_p$-representation of $\mathcal{G}_{K}$, and $L$ a finite extension of $K$, with $L$ and $K$ both unramified over $\QQ_p$. There is a canonical isomorphism of $(\varphi, \Gamma)$-modules \[ \mathbb{D}_L(T) \cong \mathbb{D}_K(T)\otimes_{\mathcal{O}_K}\mathcal{O}_L,\] where $\varphi$ acts on $\mathcal{O}_L$ via the arithmetic Frobenius $\sigma_p \in \Gal(L/\QQ_p)$. If $V = T[1/p]$ is crystalline, then this isomorphism restricts to an isomorphism \[ \mathbb{N}_L(T)\cong \mathbb{N}_K(T)\otimes_{\mathcal{O}_K}\mathcal{O}_L.\] \end{lemma} \subsection{Iwasawa cohomology and the Perrin-Riou pairing} \label{sect:iwasawacoho} Let $K$ be a finite extension of $\mathbb{Q}_\ell$ for some prime $\ell$ (which may or may not equal $p$) and let $T$ be a $\ZZ_p$-representation of $\mathcal{G}_K$. Let $K_\infty$ be a $p$-adic Lie extension of $K$. \begin{definition} \label{def:Iwasawacohomology} We define \[ H^i_{\Iw}(K_\infty, T) := \varprojlim H^i(L, T),\] where $L$ varies over the finite extensions of $K$ contained in $K_\infty$, and the inverse limit is taken with respect to the corestriction maps. If $V = \QQ_p \otimes_{\ZZ_p} T$, we write \[ H^i_{\Iw}(K_\infty, V) := \QQ_p \otimes_{\ZZ_p} H^i_{\Iw}(K_\infty,T)\] (which is independent of the choice of $\ZZ_p$-lattice $T \subset V$). \end{definition} It is clear that the groups $H^i_{\Iw}(K_\infty, T)$ are $\Lambda_{\ZZ_p}(G)$-modules; we show in \S \ref{appendix:iwacoho} below that they are finitely generated. There is a natural extension of the Tate pairing to this setting. We may clearly choose an increasing sequence $\{K_n\}$ of finite extensions of $K$ with $\bigcup_n K_n = K_\infty$ and each $K_n$ Galois over $K$. If $\langle -, - \rangle_{K_n}$ denotes the Tate pairing $H^1(K_n, T) \times H^1(K_n, T^*(1)) \to \ZZ_p$, and $x = (x_n)$ and $y = (y_n)$ are sequences in $H^1_{\Iw}(K_\infty, T)$ and $H^1_{\Iw}(K_\infty, T^*(1))$, then the sequence whose $n$-th term is \begin{equation} \label{eq:PRpairing} \sum_{\sigma \in \Gal(K_n / K)} \langle x_n, \sigma(y_n) \rangle_{K_n} [\sigma] \in \ZZ_p[\Gal(K_n / K)] \end{equation} is compatible under the natural projection maps, and hence defines an element of $\Lambda_{\ZZ_p}(G)$. 
\begin{definition} We define the \emph{Perrin-Riou pairing} to be the pairing \[ \langle - , - \rangle_{K_\infty, T} : H^1_{\Iw}(K_\infty, T) \times H^1_{\Iw}(K_\infty, T^*(1)) \to \Lambda_{\ZZ_p}(G)\] defined by the inverse limit of the pairings \eqref{eq:PRpairing}. \end{definition} It is easy to see that for $\alpha, \beta \in G$ we have \[ \langle \alpha x , \beta y \rangle_{K_\infty, T} = \alpha \cdot \langle x , y \rangle_{K_\infty, T} \cdot \beta^{-1}.\] (The above construction is valid for any $p$-adic Lie extension $K_\infty / K$, but in this paper we shall only use the above construction when $G$ is abelian, in which case the distinction between left and right multiplication is not significant.) \begin{lemma} \label{lemma:PRpairingtwist} If $\eta$ is any continuous $\ZZ_p$-valued character of $G$, and we identify $H^1_{\Iw}(K_\infty, T(\eta))$ with $H^1_{\Iw}(K_\infty, T)(\eta)$, then we have \[ \langle x, y \rangle_{K_\infty, T(\eta)} = \Tw_{\eta^{-1}} \langle x, y \rangle_{K_\infty, T},\] where $\Tw_{\eta}$ is the map $\Lambda_{\ZZ_p}(G) \to \Lambda_{\ZZ_p}(G)$ mapping $g \in G$ to $\eta(g) g$. \end{lemma} \begin{proof} This is immediate if $\eta$ has finite order, and follows for all $\eta$ by reduction modulo powers of $p$; cf.~\cite[\S 3.6.1]{perrinriou94}. \end{proof} If $V = T[1/p]$, we obtain by extending scalars a pairing \[ H^1_{\Iw}(K_\infty, V) \times H^1_{\Iw}(K_\infty, V^*(1)) \to \Lambda_{\QQ_p}(G)\] which we denote by $\langle -, - \rangle_{K_\infty, V}$. This pairing is independent of the choice of lattice $T \subseteq V$. It is clear that if $T$ is an $\mathcal{O}_E$-module for some finite extension $E / \QQ_p$, then we may similarly define an $\mathcal{O}_E$-linear analogue of the Perrin-Riou pairing, and in this case Lemma \ref{lemma:PRpairingtwist} applies to any $\mathcal{O}_E$-valued character $\eta$. \subsection{The Fontaine isomorphism} In the case when $K_\infty=K(\mu_{p^\infty})$, we can describe $H^1_{\Iw}(K_\infty,T)$ in terms of the $(\varphi, \Gamma)$-module $\mathbb{D}_K(T)$. Let $\Gamma_K = \Gal(K(\mu_{p^\infty}) / K)$, which we identify with a subgroup of $\Gamma$. The following result is originally due to Fontaine (unpublished); for a reference see \cite[Section II]{cherbonniercolmez99}. \begin{theorem} We have a canonical isomorphism of $\Lambda_{\ZZ_p}(\Gamma_K)$-modules \begin{equation} \label{eq:fontaineisom1} h^1_{\Iw,T}: \mathbb{D}_K(T)^{\psi=1} \rTo^\cong H^1_{\Iw}(K(\mu_{p^\infty}),T). \end{equation} \end{theorem} If $T$ is a representation of $\mathcal{G}_{\QQ_p}$, then the action of $\Gamma$ extends to an action of $\Gal(K(\mu_{p^\infty}) / \QQ_p)$ on both sides of equation \eqref{eq:fontaineisom1}, and the map $h^1_{\Iw, T}$ commutes with the action of this larger group. We shall apply this below in the case when $K$ is an unramified extension of $\QQ_p$, so $\Gamma_K = \Gamma$ and $\Gal(K(\mu_{p^\infty}) / \QQ_p) = \Gamma \times U_F$, where $U_F = \Gal(F / \QQ_p)$. Now let $K$ be a finite unramified extension of $\QQ_p$, and assume that $V$ is a crystalline representation of $\mathcal{G}_K$ whose Hodge-Tate weights\footnote{In this paper we adopt the convention that the Hodge--Tate weight of the cyclotomic character is $+1$.} lie in the interval $[a,b]$. The following result is due to Berger \cite[Theorem A.2]{berger03}. \begin{theorem} \label{thm:crystallinepsiinvariants} We have $\mathbb{D}_K(T)^{\psi=1}\subset \pi^{a-1}\mathbb{N}_K(T)$. 
Moreover, if $V$ has no quotient isomorphic to $\QQ_p(a)$, then $\mathbb{D}_K(T)^{\psi=1}\subset \pi^a\mathbb{N}_K(T)$. \end{theorem} In particular, if $V$ has non-negative Hodge--Tate weights and no quotient isomorphic to $\QQ_p$, we have $\mathbb{N}_K(T)^{\psi=1} = \mathbb{D}_K(T)^{\psi=1}$. Then \eqref{eq:fontaineisom1} becomes \begin{equation} \label{eq:fontaineisom2} h^1_{\Iw,T}: \mathbb{N}_K(T)^{\psi=1} \rTo^\cong H^1_{\Iw}(K(\mu_{p^\infty}),T). \end{equation} \subsection{Gauss sums, L- and epsilon-factors} \label{sect:epsfactors} In many of our formulae, epsilon-factors attached to characters of the Galois group (or rather the Weil group) of $\QQ_p$ will make an appearance, so we shall fix normalizations for these. We follow the conventions of \cite{deligne73}. Let $E$ be an algebraically closed field of characteristic 0, and let $\zeta = (\zeta_n)_{n \ge 0}$ be a choice of a compatible system of $p$-power roots of unity in $E$. The data of such a choice is equivalent to the data of an additive character $\lambda: \QQ_p \to E^\times$ with kernel $\ZZ_p$, defined by $\lambda(1/p^n) = \zeta_n$. We first define the Gauss sum of a finitely ramified character $\omega$ of the Weil group $W_{\mathbb{Q}_p}$, which will in fact depend only on the restriction of $\omega$ to the inertia subgroup $\Gal(\Qb_p / \QQ_{p}^{\mathrm{nr}})$. If $\omega$ has conductor $n$, then we define \[ \tau(\omega, \zeta) = \sum_{\sigma \in \Gal(\QQ_{p}^{\mathrm{nr}}(\mu_{p^n}) / \QQ_{p}^{\mathrm{nr}})} \omega(\sigma)^{-1} \zeta_{n}^\sigma.\] Now let us recall the definition of epsilon-factors given in \cite{deligne73} for locally constant characters of $\QQ_p^\times$. These depend on the character $\omega$, the auxiliary additive character $\lambda$, and a choice of Haar measure $\mathrm{d}x$; we choose $\mathrm{d}x$ so that $\ZZ_p$ has volume 1. The definition is given as \[ \varepsilon(\omega, \lambda, \mathrm{d}x) = \begin{cases} 1 & \text{if $\omega$ is unramified,} \\ \int_{\QQ_p^\times} \omega(x^{-1}) \lambda(x)\, \mathrm{d}x & \text{if $\omega$ is ramified.} \end{cases} \] As shown in \textit{op.cit.}, if the conductor of $\omega$ is $n$, then it suffices to take the integral over $p^{-n} \ZZ_p^\times$. For consistency with \cite{cfksv}, we will work with the additive character $\lambda(-x)$ rather than $\lambda(x)$; then we find that \[ \varepsilon(\omega, \lambda(-x), \mathrm{d}x) = \omega(p)^n \sum_{x \in (\mathbb{Z} / p^n \mathbb{Z})^\times} \omega(x)^{-1} \zeta_n^{-x}.\] We now recall that the local reciprocity map $\operatorname{rec}_{\QQ_p}$ of class field theory identifies $W_{\QQ_p}^{\mathrm{ab}}$ with $\mathbb{Q}_p^\times$. Following \cite{deligne73}, we normalize $\operatorname{rec}_{\QQ_p}$ such that \emph{geometric} Frobenius elements of $W_{\QQ_p}^\mathrm{ab}$ are sent to uniformizers. Then the restriction of $\operatorname{rec}_{\QQ_p}$ to $\Gal(\mathbb{Q}_p^\mathrm{ab} / \QQ_{p}^{\mathrm{nr}})$ gives an isomorphism \[ \Gal(\mathbb{Q}_p^\mathrm{ab} / \QQ_{p}^{\mathrm{nr}}) \rTo \ZZ_p^\times.\] Our choice of normalization for the local reciprocity map implies that this coincides with the cyclotomic character. On the other hand, $p \in \mathbb{Q}_p^\times$ corresponds to $\tilde\sigma_p^{-1}$, where $\tilde\sigma_p$ is the unique element of $\Gal(\QQ_p^\mathrm{ab} / \QQ_p)$ which acts as the arithmetic Frobenius $\sigma_p$ on $\QQ_{p}^{\mathrm{nr}}$ and acts trivially on all $p$-power roots of unity.
Hence \[ \varepsilon(\omega^{-1}, \lambda(-x), \mathrm{d}x) = \omega(\tilde\sigma_p)^{n} \sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma) \zeta_n^{-\sigma} = \frac{p^n \omega(\tilde\sigma_p)^n}{\tau(\omega, \zeta)}.\] This quantity $\varepsilon(\omega^{-1}, \lambda(-x), \mathrm{d}x)$, which we shall abbreviate to $\varepsilon(\omega^{-1})$, will appear in our formulae for the two-variable regulator. We shall also need to consider the case when $E$ is a $p$-adic field and $\omega$ is a continuous character of $\mathcal{G}_{\QQ_p}^{\mathrm{ab}}$ which is Hodge--Tate, but not necessarily finitely ramified. Any such character is potentially crystalline, and a well-known construction of Fontaine \cite{fontaine94c} allows us to regard $\mathbb{D}_{\mathrm{pst}}(\omega)$ as a one-dimensional representation of the Weil group; concretely, if $\omega = \chi^j \omega'$ where $\omega'$ is finitely ramified, then $\sigma \in W_{\QQ_p}$ acts on $\mathbb{D}_{\mathrm{pst}}(\omega)$ as $p^{j n(\sigma)} \omega'(\sigma)$, where $n(\sigma)$ is the power of the arithmetic Frobenius by which $\sigma$ acts on $\QQ_{p}^{\mathrm{nr}}$. We define $\varepsilon(\omega) = \varepsilon(\omega, \lambda(-x), \mathrm{d}x)$ to be the epsilon-factor attached to $\mathbb{D}_{\mathrm{pst}}(\omega)$, so \[ \varepsilon(\omega^{-1}) = \frac{p^{n(1+j)} \omega(\tilde\sigma_p)^n}{\tau(\omega, \zeta)}.\] We write $P(\omega, X)$ for the $L$-factor of the Weil--Deligne representation $\mathbb{D}_{\mathrm{pst}}(\omega)$. This is a polynomial $P(\omega, X)$ in $X$, which is identically 1 if $\omega$ is not crystalline; otherwise, it is given by $P(\omega, X) = 1 - u X$, where $u$ is the scalar by which crystalline Frobenius acts on $\mathbb{D}_{\mathrm{cris}}(\omega)$, so $u = p^{-j} \omega'(\sigma_p)^{-1}$ if $\omega = \chi^j \omega'$ with $\omega'$ unramified. \section{Local theory: Yager modules and Wach modules} \label{sect:yager} \subsection{Some cohomological preliminaries} Let $F$ be a finite unramified extension of $\QQ_p$, and let $F_\infty / F$ be an unramified $p$-adic Lie extension with Galois group $U$. (Thus $U$ is either a finite cyclic group, or the product of such a group with $\ZZ_p$.) Let $\widehat{\cO}_{F_\infty}$ be the completion of the ring of integers of $F_\infty$. \begin{lemma} \label{lemma:ranktwistedmodule} Let $M$ be a free $\ZZ_p$-module of rank $d < \infty$, with a continuous action of $U$. Then the module \[ H^0(U, \widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} M)\] is free of rank $d$ over $\mathcal{O}_F$, and \[ H^1(U, \widehat{\cO}_{F_\infty} \otimes M) = 0.\] \end{lemma} \begin{proof} This is a form of Hilbert's Theorem 90; for the form of the statement given here see e.g.~\cite[Proposition 1.2.4]{fontaine90}. \end{proof} We will need the following result on trace maps for unramified extensions. \begin{proposition} \label{prop:iwasawastructure} The module \[ \varprojlim_{K} \mathcal{O}_{K}, \] where $K$ varies over finite extensions of $F$ contained in $F_\infty$ and the inverse limit is with respect to the trace maps, is free of rank 1 over $\Lambda_{\mathcal{O}_F}(U)$. \end{proposition} \begin{proof} We first note that if $L / K$ is any finite unramified extension of local fields, then the trace map $\mathcal{O}_{L} \to \mathcal{O}_K$ is surjective, since the residue extension $k_{L} / k_K$ is separable and hence its trace map is surjective. 
Moreover, $\mathcal{O}_L$ is free of rank 1 over $\mathcal{O}_K[\Gal(L/K)]$; elements of $\mathcal{O}_{L}$ that generate it as an $\mathcal{O}_K[\Gal(L/K)]$-module are called \emph{integral normal basis generators} of $L/K$. We must show that there exists a trace-compatible sequence $x = (x_K) \in \varprojlim_{K} \mathcal{O}_{K}$ such that $x_K$ is an integral normal basis generator of $K/F$ for all $K$. Let $F_0$ be the largest subfield of $F_\infty$ such that $[F_0 : F]$ is prime to $p$; this is a finite extension of $F$, by our hypotheses on $F_\infty$. Choose an integral normal basis generator $x_0$ of $F_0 / F$. We claim that if $K$ is any finite extension of $F_0$ contained in $F_\infty$, and $x$ is any element of $\mathcal{O}_K$ with $\operatorname{Tr}_{K/F_0}(x) = x_0$, then $x$ is an integral normal basis generator of $K/F$. To prove this, consider the group ring $R = \mathcal{O}_F[\Gal(K/F)]$. As noted above, $\mathcal{O}_K$ is a free $R$-module of rank 1. Let $I$ be the ideal of $R$ given by the kernel of the natural map $\mathcal{O}_F[\Gal(K/F)] \to \mathcal{O}_F[\Gal(F_0/F)]$. Then $I$ is contained in the Jacobson radical $J$ of $R$ (indeed $J$ is generated by $I$ and $p$). So, by Nakayama's lemma, an element $x \in \mathcal{O}_K$ generates $\mathcal{O}_K$ as an $R$-module if and only if its image in $\mathcal{O}_K / I \mathcal{O}_K$ generates this quotient; but the trace map $\Tr_{K/F_0}: \mathcal{O}_K \to \mathcal{O}_{F_0}$ is surjective and factors through $\mathcal{O}_K / I \mathcal{O}_K$, and $\mathcal{O}_{F_0}$ and $\mathcal{O}_K / I \mathcal{O}_K$ are free $\ZZ_p$-modules of the same rank, so $\Tr_{K/F_0}$ must give an isomorphism $\mathcal{O}_K / I \mathcal{O}_K \to \mathcal{O}_{F_0}$. This proves the claim. So it suffices to take any element of $\varprojlim_{K} \mathcal{O}_{K}$ lifting $x_0$. \end{proof} \begin{remark} As noted in \cite{pickett10}, one can also deduce the above claim from the work of Semaev \cite[Lemma 4.1]{semaev88} on normal bases of extensions of \emph{finite} fields, which does not explicitly use Nakayama's lemma. \end{remark} \subsection{The Yager module} In this section we develop a variant of the construction in~\cite[\S 2]{yager82} in order to obtain a certain module which, in a sense we shall make precise below, encodes the periods for the unramified characters of $\mathcal{G}_{F}$. \begin{definition} Let $K / F$ be a finite unramified extension. For $x \in \mathcal{O}_K$, we define \[ y_{K/F}(x) = \sum_{\sigma \in \Gal(K / F)} x^\sigma\, [\sigma^{-1}] \in \mathcal{O}_{K}[\Gal(K / F)].\] \end{definition} It is clear that $y_{K/F}$ is $\mathcal{O}_F$-linear and injective, and we have $y_{K/F}(x^g) = [g] y_{K/F}(x)$ for all $g \in \Gal(K / F)$, where $[g]$ denotes the image of $g$ in the group ring. Moreover, the image of $y_{K/F}$ is precisely the submodule $S_{K/F}$ of $\mathcal{O}_{K}[\Gal(K / F)]$ consisting of elements satisfying $y^g = [g] y$ for all $g \in \Gal(K / F)$, where $y^g$ denotes the action of $\Gal(K / F)$ on the coefficients $\mathcal{O}_K$. \begin{proposition} \label{prop:reduction} If $L \supset K \supset F$ are finite unramified extensions and $x \in \mathcal{O}_{L}$, the image of $y_{L/F}(x)$ under the reduction map \[ \mathcal{O}_{L}[\Gal(L / F)] \to \mathcal{O}_{L}[\Gal(K / F)] \] induced by the surjection $\Gal(L / F) \to \Gal(K / F)$ is equal to $y_{K/F}(\operatorname{Tr}_{L/K} x)$. In particular, the reduction has coefficients in $\mathcal{O}_{K}$.
\end{proposition} \begin{proof} Clear from the formula defining the maps $y_{K/F}$ and $y_{L/F}$. \end{proof} Now let $F_\infty / F$ be any unramified $p$-adic Lie extension with Galois group $U$, as in the previous section. Passing to inverse limits with respect to the trace maps, we deduce that there is an isomorphism of $\Lambda_{\mathcal{O}_F}(U)$-modules \begin{equation} \label{def:y} y_{F_\infty / F} : \varprojlim_{F \subseteq K \subseteq F_\infty} \mathcal{O}_{K} \rTo^\cong S_{F_\infty / F} := \varprojlim_{F \subseteq K \subseteq F_\infty} S_{K/F}. \end{equation} \begin{proposition} We have \[ S_{F_\infty / F} = \{ f \in \Lambda_{\widehat{\cO}_{F_\infty}}(U) : f^u = [u] f\}\] for any topological generator $u$ of $U$. \end{proposition} \begin{proof} Let us set $X = \{ f \in \Lambda_{\widehat{\cO}_{F_\infty}}(U) : f^u = [u] f\}$. Let $(F_n)$ be a family of finite extensions of $F$ whose union is $F_\infty$, and let $U_n = \Gal(F_\infty / F_n)$. Firstly, since $S_{F_n / F} \subseteq \mathcal{O}_{F_n}[U / U_n] \subseteq \widehat{\cO}_{F_\infty}[U / U_n]$, we clearly have an embedding $S_{F_\infty / F} \hookrightarrow \Lambda_{\widehat{\cO}_{F_\infty}}(U)$, which must land in $X$, because of the Galois-equivariance property of the elements of $S_{F_n / F}$. However, it is clear that for any $x \in X$, the image $x_n$ of $x$ in $\widehat{\cO}_{F_\infty}[U / U_n]$ has coefficients in $\mathcal{O}_{F_n}$ (since $(\widehat{\cO}_{F_\infty})^{U_n} = \mathcal{O}_{F_n}$ by Lemma \ref{lemma:ranktwistedmodule}) and satisfies $(x_n)^u = [u] x_n$, thus lies in $S_{F_n / F}$. So the map $S_{F_\infty / F} \hookrightarrow X$ is a bijection. \end{proof} We shall always equip $S_{F_\infty / F}$ with the inverse limit topology (arising from the $p$-adic topology of the finitely generated $\ZZ_p$-modules $S_{F_n / F}$). This topology is compact and Hausdorff, and coincides with the subspace topology from $\Lambda_{\widehat{\cO}_{F_\infty}}(U)$. \begin{definition} We refer to $S_{F_\infty / F}$ as the \emph{Yager module}, since it is closely related to the objects appearing in \cite[\S 2]{yager82}. \end{definition} We now explain the relation between $S_{F_\infty / F}$ and the periods for characters of $U$. Let $M$ be a finite-rank free $\ZZ_p$-module with an action of $U$, given by a continuous map $\rho: U \to \operatorname{Aut}_{\ZZ_p}(M)$. Then $\rho$ induces a ring homomorphism $\Lambda_{\widehat{\cO}_{F_\infty}}(U) \to \widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} \End_{\ZZ_p} M$, which we also denote by $\rho$. \begin{proposition} \label{prop:twist} Let $\omega \in S_{F_\infty / F}$. Then $\rho(\omega) \in \widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} \End_{\ZZ_p}(M)$ is a period for $\rho$, in the sense that \[ \rho(\omega)^u = \rho(u) \cdot \rho(\omega) \] for all $u \in U$. \end{proposition} \begin{proof} Since $\omega \in S_{F_\infty / F}$, we have $\omega^u = [u] \omega$ for any $u \in U$. However, the map $\Lambda_{\widehat{\cO}_{F_\infty}}(U) \to \widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} \End_{\ZZ_p} M$ commutes with the action of $U$ on the coefficient ring $\widehat{\cO}_{F_\infty}$; so we have \[ \rho(\omega)^u = \rho(\omega^u) = \rho([u] \cdot \omega) = \rho(u) \rho(\omega).\] \end{proof} \begin{remark} After the results in this section had been proven, we discovered that similar results had been obtained by Pasol in his unpublished PhD thesis \cite[\S 2.5]{pasol05}. Our module $S_{F_\infty / F}$ is the same as his module $\mathbb{D}_0$.
He uses the module $\mathbb{D}_0$ to relate Katz's $2$-variable $p$-adic $L$-functions attached to a CM elliptic curve to the modular symbols construction of Greenberg and Stevens \cite{greenbergstevens93}. \end{remark} \subsection{$p$-adic representations} Let $T$ be a crystalline $\ZZ_p$-representation of $\mathcal{G}_{F}$. If $K / F$ is any unramified extension, we have isomorphisms $\mathbb{N}_K(T) \cong \mathbb{N}_F(T) \otimes_{\mathcal{O}_F} \mathcal{O}_K$, so we have trace maps $\mathbb{N}_L(T) \to \mathbb{N}_K(T)$ for $L / K$ any two finite unramified extensions of $F$. \begin{definition} \label{def:Dinfty} Let $\mathbb{N}_{F_\infty}(T) = \varprojlim_{F \subseteq K \subseteq F_\infty} \mathbb{N}_K(T)$, where the inverse limit is taken with respect to the trace maps. \end{definition} By construction, $\mathbb{N}_{F_\infty}(T)$ has actions of $\Gamma$ and $U$, since these act on the modules $\mathbb{N}_K(T)$ compatibly with the trace maps. \begin{proposition} We have an isomorphism of topological modules \[ \mathbb{N}_{F_\infty}(T) \cong \mathbb{N}_F(T) \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty / F}.\] \end{proposition} \begin{proof} Clear by construction. \end{proof} These actions of $\Gamma$ and $U$ on $\mathbb{N}_{F_\infty}(T)$ are $\mathcal{O}_F$-linear, and they extend to a continuous action of $\Lambda_{\mathcal{O}_F}(\Gamma \times U)$. Define $\varphi^* \mathbb{N}_{F_\infty}(T)$ as the $\mathbb{A}^+_{F}$-submodule of $\mathbb{N}_{F_\infty}(T)[q^{-1}]$ generated by $\varphi(\mathbb{N}_{F_\infty}(T))$; this is in fact an $\mathbb{A}^+_{F} \mathbin{\hat\otimes}_{\mathcal{O}_F} \Lambda_{\mathcal{O}_F}(U)$-submodule, since $\varphi$ acts bijectively on $\Lambda_{\mathcal{O}_F}(U)$. If $T$ has non-negative Hodge--Tate weights, then we have an inclusion \[ \mathbb{N}_{F_\infty}(T) \hookrightarrow \varphi^* \mathbb{N}_{F_\infty}(T),\] with quotient annihilated by $q^h$, for any $h$ such that the Hodge--Tate weights of $T$ lie in $[0, h]$. Note that the map $\varphi : \mathbb{N}_{F_\infty}(T) \to \varphi^* \mathbb{N}_{F_\infty}(T)$ commutes with the action of $G = U \times \Gamma$. Similarly, the maps $\psi$ on $\mathbb{N}_K(T)[q^{-1}]$ for each $K$ assemble to a map \[ \psi : \varphi^* \mathbb{N}_{F_\infty}(T) \to \mathbb{N}_{F_\infty}(T),\] which is a left inverse of $\varphi$. The following proposition will be important for constructing the regulator map: \begin{proposition} \label{prop:kernelofpsi} We have \[ \left( \varphi^* \mathbb{N}_{F_\infty}(T) \right)^{\psi = 0} = \left(\varphi^* \mathbb{N}_F(T)\right)^{\psi = 0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty / F}.\] \end{proposition} \begin{proof} Choose a basis $n_1, \dots, n_d$ of $\mathbb{N}_F(T)$ as an $\mathbb{A}^+_F$-module, and a basis $\Omega$ of $S_{F_\infty / F}$ as a $\Lambda_{\mathcal{O}_F}(U)$-module. Then any vector $v \in \varphi^* \mathbb{N}_{F_\infty}(T)$ can be uniquely written as \[ v = \sum_{i = 0}^{p-1} \sum_{j = 1}^d (1 + \pi)^i \varphi(x_{ij}) \cdot (\varphi(n_j) \otimes \Omega),\] for some $x_{ij} \in \mathbb{A}^+_{F} \mathbin{\hat\otimes}_{\mathcal{O}_F} \Lambda_{\mathcal{O}_F}(U)$, since $\{ (1 + \pi)^{i}: 0 \le i \le p-1\}$ is a basis of $\mathbb{A}^+_{F}$ over $\varphi(\mathbb{A}^+_{F})$. Applying $\psi$, we have \[ \psi(v) = \sum_{i = 0}^{p-1} \sum_{j = 1}^d \psi\left((1 + \pi)^i\right) x_{ij} \cdot (n_j \otimes \sigma_p^{-1}\Omega),\] where $\sigma_p$ is the arithmetic Frobenius element of $\Gal(F_\infty / \QQ_p)$.
The element $\sigma_p^{-1} \Omega$ is also a $\Lambda_{\mathcal{O}_F}(U)$-generator of $S_{F_\infty / F}$. Moreover, it is well known that $\psi\left((1 + \pi)^i\right)$ is $1$ if $i = 0$ and $0$ if $1 \le i \le p-1$. So we have $\psi(v) = 0$ if and only if $v$ is in the submodule \[ \bigoplus_{i = 1}^{p-1} (1 + \pi)^i\varphi(\mathbb{N}_F(T)) \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty / F} = \varphi^*\mathbb{N}_F(T)^{\psi = 0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty/F}.\] \end{proof} \subsection{Recovering unramified twists} Let us pick a finite-rank free $\ZZ_p$-module $M$ equipped with a continuous action of $U$, via a homomorphism $\rho: \Lambda_{\ZZ_p}(U) \to \End_{\ZZ_p}(M)$ as above. There is a ``twisting'' map from $M \otimes_{\ZZ_p} \Lambda_{\ZZ_p}(U)$ to itself, defined by $m \otimes [u] \mapsto \rho(u)^{-1} m \otimes [u]$ for $u \in U$. This map intertwines two different actions of $U$: on the left-hand side the action given by \[ u \cdot (m \otimes [v]) = m \otimes [u^{-1}v]\] and on the right the action given by \[ u \cdot (m \otimes [v]) = \rho(u) m \otimes [u^{-1} v].\] Taking the completed tensor product with $\widehat{\cO}_{F_\infty}$ (endowed with its natural $U$-action) and passing to $U$-invariants, we obtain a bijection \[ i_M: M \otimes_{\ZZ_p} S_{F_\infty/F} \rTo^\cong S_{F_\infty/F} \cdot \left(M \otimes_{\ZZ_p} \widehat{\cO}_{F_\infty} \right)^{U}.\] \begin{proposition} There is a canonical isomorphism \begin{equation} \label{eq:unramifiedisom} \mathbb{N}_F(T) \otimes_{\mathcal{O}_F} \left(\widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} M \right)^{U} \rTo \mathbb{N}_F(T \otimes_{\ZZ_p} M), \end{equation} commuting with the actions of $\mathbb{A}^+_{F}$, $\Gamma$, $\varphi$ and $\psi$ (where the latter two elements act on $\widehat{\cO}_{F_\infty}$ as the arithmetic Frobenius and its inverse). \end{proposition} \begin{proof} Wach modules are known to commute with tensor products \cite{berger04}, so it suffices to check that \[ \mathbb{N}_F(M) = \mathbb{A}^+_F \otimes_{\mathcal{O}_F} \left(\widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} M \right)^{U}.\] This follows from the fact that there is a canonical embedding of $\widehat{\cO}_{F_\infty}$ into Fontaine's ring $\mathbb{A}$, hence there is a canonical inclusion \[ \left(\widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} M \right)^{U} \subseteq \left(M \otimes_{\ZZ_p} \mathbb{A}\right)^{H_F} = \mathbb{D}_F(M).\] Since the left-hand side is free of rank $d$ over $\mathcal{O}_F$, extending scalars to $\mathbb{A}^+_F$ gives a submodule of $\mathbb{D}_F(M)$ which is free of rank $d$ over $\mathbb{A}^+_F$ and clearly satisfies the conditions defining the Wach module $\mathbb{N}_F(M) \subset \mathbb{D}_F(M)$. \end{proof} \begin{remark} Suppose (for simplicity) that $F = \QQ_p$ and $M \cong \ZZ_p$ with $U$ acting via a character $\tau: U \to \ZZ_p^\times$. Since $\left(M \otimes \widehat{\cO}_{F_\infty} \right)^{U}$ is free of rank 1 over $\ZZ_p$, any choice of basis of this space gives a non-canonical isomorphism between $\mathbb{N}_{\QQ_p}(T(\tau))$ and $\mathbb{N}_{\QQ_p}(T)$ with its $\varphi$-action twisted by $\tau(\sigma_p)^{-1}$. However, the isomorphism \eqref{eq:unramifiedisom} \emph{is} canonical and does not depend on any such choice. 
\end{remark} \begin{theorem} \label{thm:unramifiedtwist} There is a canonical isomorphism \[ i_{M}: M \otimes_{\ZZ_p} \mathbb{N}_{F_\infty}(T) \rTo^\cong \mathbb{N}_{F_\infty}(M \otimes_{\ZZ_p} T) \] which commutes with the actions of $\varphi$, $\Gamma$, $\mathbb{A}^+_{F}$ and $\End_{\mathcal{G}_F}(M)$, and satisfies \[ i_{M}(u \cdot x) = \rho(u)^{-1} u \cdot i_M(x)\] for $u \in U$ and $x \in M \otimes_{\ZZ_p} \mathbb{N}_{F_\infty}(T)$. \end{theorem} \begin{proof} This follows immediately by tensoring the map \[ i_M : M \otimes_{\ZZ_p} S_{F_\infty / F} \rTo^\cong S_{F_\infty/F} \cdot \left(M \otimes_{\ZZ_p} \widehat{\cO}_{F_\infty}\right)^{U}\] with $\mathbb{N}_F(T)$, and using the isomorphism \eqref{eq:unramifiedisom}. \end{proof} \section{The 2-variable p-adic regulator} \label{sect:regulator} \subsection{A lemma on universal norms} Let $F$ be a finite unramified extension of $\QQ_p$, and let $T$ be a $\ZZ_p$-rep\-re\-sen\-ta\-tion of $\mathcal{G}_{F}$. \begin{definition} \label{def:goodcrystalline} The representation $T$ is \emph{good crystalline} if $V = T[1/p]$ is crystalline and has non-negative Hodge--Tate weights. \end{definition} By \cite[Theorem A.3]{berger03}, for any good crystalline $T$ there is a canonical isomorphism \[ H^1_{\Iw}(F(\mu_{p^\infty}), T) \rTo^\cong \left( \pi^{-1} \mathbb{N}_F(T) \right)^{\psi = 1}.\] We define a ``residue'' map \[ r_{F, V} : H^1_{\Iw}(F(\mu_{p^\infty}), T) \to \mathbb{D}_{\mathrm{cris}}(F, V)\] by composing the above isomorphism with the natural map \[ \pi^{-1} \mathbb{N}_F(T) \to \frac{\pi^{-1} \mathbb{N}_F(V)}{\mathbb{N}_F(V)} \cong \frac{\mathbb{N}_F(V)}{\pi\mathbb{N}_F(V)} \cong \mathbb{D}_{\mathrm{cris}}(F, V).\] As is shown in the proof of \cite[Theorem A.3]{berger03}, the image of the map $r_{F, V}$ is contained in $\mathbb{D}_{\mathrm{cris}}(F, V)^{\varphi = 1}$; in particular, if the latter space is zero, then $H^1_{\Iw}(F(\mu_{p^\infty}), T) \cong \mathbb{N}_F(T)^{\psi = 1}$. We now consider the behaviour of these maps in unramified towers. Let $F_\infty$ be an infinite unramified $p$-adic Lie extension of $F$, so we may write $F_\infty = \bigcup_n F_n$ where $F_0 / F$ is a finite extension and $F_n$ is the unramified extension of $F_0$ of degree $p^n$. As we have seen above, $\mathbb{D}_{\mathrm{cris}}(F_n, V) \cong \mathbb{D}_{\mathrm{cris}}(V) \otimes_{F} F_n$. Let us formally write $\mathbb{D}_{\mathrm{cris}}(F_\infty, V) = F_\infty \otimes_{F} \mathbb{D}_{\mathrm{cris}}(F, V)$. \begin{proposition} There is an $n_0$ (depending on $V$) such that \[ \mathbb{D}_{\mathrm{cris}}(F_\infty, V)^{\varphi = 1} = \mathbb{D}_{\mathrm{cris}}(F_n, V)^{\varphi = 1}\] for all $n \ge n_0$. \end{proposition} \begin{proof} Since the spaces $\mathbb{D}_{\mathrm{cris}}(F_n, V)^{\varphi = 1}$ are an increasing sequence of finite-dimensional $\QQ_p$-vector spaces, it suffices to show that their union $\mathbb{D}_{\mathrm{cris}}(F_\infty, V)^{\varphi = 1}$ is finite-dimensional over $\QQ_p$. This follows from the fact that $F_\infty$ is a field, and $\varphi$ acts on $F_\infty$ as the arithmetic Frobenius $\sigma_p$, so $(F_\infty)^{\varphi = 1} = \QQ_p$. Thus \[\dim_{\QQ_p} \left(F_\infty \otimes_F \mathbb{D}_{\mathrm{cris}}(V)\right)^{\sigma_p \otimes \varphi = 1} \le \dim_{\QQ_p} V,\] by Propositions 1.4.2(i) and 1.6.1 of \cite{fontaine94b}. \end{proof} \begin{proposition} Let $\mathbb{D}_{\mathrm{cris}}(T)$ be the $\ZZ_p$-lattice in $\mathbb{D}_{\mathrm{cris}}(V)$ which is the image of $\mathbb{N}_F(T)$.
If $m \ge n \ge n_0$, $x \in H^1_{\Iw}(F_{m}(\mu_{p^\infty}), T)$, and $y = \operatorname{cores}_{F_m/F_{n}}(x) \in H^1_{\Iw}(F_{n}(\mu_{p^\infty}), T)$, then we have \[ r_{F_{n}, V}(y) \in p^{m-n} \mathcal{O}_{F_n} \otimes_{\mathcal{O}_F} \mathbb{D}_{\mathrm{cris}}(T).\] \end{proposition} \begin{proof} This follows from the fact that for any $n \ge 0$, we have a commutative diagram \[ \begin{diagram} H^1_{\Iw}(F_{n+1}(\mu_{p^\infty}), T) & \rTo^{r_{F_{n+1}, V}} & \mathbb{D}_{\mathrm{cris}}(F_{n+1}, V)^{\varphi = 1}\\ \dTo^{\operatorname{cores}_{F_{n+1}/F_n}} & & \dTo^{\operatorname{Tr}_{F_{n+1}/F_n}} \\ H^1_{\Iw}(F_n(\mu_{p^\infty}), T) & \rTo^{r_{F_{n}, V}} & \mathbb{D}_{\mathrm{cris}}(F_n, V)^{\varphi = 1}. \end{diagram} \] If $n \ge n_0$, then the trace map on the right-hand side is simply multiplication by $[F_{n+1} : F_n] = p$. \end{proof} \begin{theorem} \label{thm:unramunivnorms} Let $F_\infty$ be an infinite unramified $p$-adic Lie extension of $F$, and let $x \in H^1_{\Iw}(F_\infty(\mu_{p^\infty}), T)$. Then for any $n \ge 0$, the image $y$ of $x$ in $H^1_{\Iw}(F_n(\mu_{p^\infty}), T) \cong \left(\pi^{-1}\mathbb{N}_{F_n}(T)\right)^{\psi = 1}$ is contained in $\mathbb{N}_{F_n}(T)^{\psi = 1}$. \end{theorem} \begin{proof} This follows immediately from the preceding proposition, since $r_{F_{n}, V}(y)$ must be divisible by arbitrarily large powers of $p$ and hence is zero. \end{proof} \subsection{The regulator map} \label{sect:2variableregulator} For the rest of Section \ref{sect:regulator}, we assume that $T$ is a good crystalline representation of $\mathcal{G}_{F}$, for $F$ a finite unramified extension of $\QQ_p$, and we let $F_\infty$ be any unramified $p$-adic Lie extension of $F$ with Galois group $U$ as before. We define $K_\infty = F_\infty(\mu_{p^\infty})$, and $G = \Gal(K_\infty / F) \cong U \times \Gamma$. \begin{proposition} \label{prop:yagercohomology2} We have a canonical isomorphism \[ H^1_{\Iw}(K_\infty ,T) \cong \left(\pi^{-1}\mathbb{N}_{F_\infty}(T)\right)^{\psi = 1}.\] If either $F_\infty / F$ is infinite, or $T$ has no quotient isomorphic to the trivial representation, then we have \[ H^1_{\Iw}(K_\infty, T) \cong \mathbb{N}_{F_\infty}(T)^{\psi = 1}.\] \end{proposition} \begin{proof} If $F_\infty / F$ is a finite extension, we may assume $F_\infty = F$, and this is \cite[Theorem A.1]{berger03}. If $F_\infty / F$ is an infinite extension, then we note that for each finite subextension $K / F$ contained in $F_\infty$ we have an isomorphism \[ H^1_{\Iw}(K(\mu_{p^\infty}),T) \cong \left(\pi^{-1}\mathbb{N}_{K}(T)\right)^{\psi=1},\] and if $L / K$ are two such fields, then the corestriction map \[ H^1_{\Iw}(L(\mu_{p^\infty}),T)\rTo H^1_{\Iw}(K(\mu_{p^\infty}),T)\] corresponds to the map \[ \pi^{-1}\mathbb{N}_{L}(T)\rTo \pi^{-1}\mathbb{N}_{K}(T)\] induced from the trace map $\mathcal{O}_L \to \mathcal{O}_K$. By Theorem \ref{thm:unramunivnorms}, we have an isomorphism \[ H^1_{\Iw}(K_\infty, T) = \varprojlim_K \left( \pi^{-1} \mathbb{N}_{K}(T)\right)^{\psi=1} \cong \varprojlim_K \mathbb{N}_{K}(T)^{\psi=1}= \mathbb{N}_{F_\infty}(T)^{\psi=1}, \] which finishes the proof. \end{proof} As shown in \cite[Proposition 2.11]{leiloefflerzerbes11}, we have a $\Lambda_{\mathcal{O}_F}(\Gamma)$-equivariant embedding \[ \big(\varphi^*\mathbb{N}_F(T)\big)^{\psi=0}\subset \mathcal{H}_{F}(\Gamma) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V),\] which is continuous with respect to the weak topology on $\varphi^* \mathbb{N}_F(T)^{\psi=0}$ and the usual Fr\'echet topology on $\mathcal{H}_{F}(\Gamma)$.
Moreover, we have a continuous injection \[ S_{F_\infty / F} \hookrightarrow \Lambda_{\widehat{\cO}_{F_\infty}}(U) \hookrightarrow \mathcal{H}_{\widehat{F}_\infty}(U).\] Tensoring these together we obtain a continuous, $\Lambda_{\mathcal{O}_F}(G)$-linear map \begin{multline*} \big(\varphi^*\mathbb{N}_F(T)\big)^{\psi=0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty / F} \hookrightarrow \mathcal{H}_{\widehat{F}_\infty}(U) \mathbin{\hat\otimes}_{\mathcal{O}_F} \mathcal{H}_F(\Gamma) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \\ = \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V). \end{multline*} \begin{definition} \label{def:2variableregulator} We define the $p$-adic regulator \[ \mathcal{L}^{G}_V : H^1_{\Iw}(K_\infty, T) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V)\] to be the composite map \begin{align*} H^1_{\Iw}(K_\infty, T) & \rTo^\cong \mathbb{N}_{F_\infty}(T)^{\psi=1} \cong \left(\mathbb{N}_F(T) \mathbin{\hat\otimes}_{\mathcal{O}_F} S_\infty\right)^{\psi = 1}\\ & \rTo^{1- \varphi} \big(\varphi^*\mathbb{N}_F(T)\big)^{\psi=0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_\infty \\ & \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V). \end{align*} Here, we use that $\varphi^* \mathbb{N}_{F_\infty}(T)^{\psi = 0} \cong \varphi^*\mathbb{N}_F(T)^{\psi=0} \mathbin{\hat\otimes}_{\ZZ_p} S_\infty$ by Proposition \ref{prop:kernelofpsi}. \end{definition} By construction, $\mathcal{L}^{G}_V$ is a morphism of $\Lambda_{\mathcal{O}_F}(G)$-modules. As suggested by the notation, we will usually invert $p$ and regard $\mathcal{L}^{G}_V$ as a map on $H^1_{\Iw}(K_\infty, V)$, associating to each compatible system of cohomology classes in $H^1_{\Iw}(K_\infty, V)$ a distribution on $G$ with values in $\widehat{F}_\infty \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V)$. We can summarise the properties of the map we have constructed by the following theorem: \begin{theorem} \label{thm:localregulator} Let $F$ be a finite unramified extension of $\QQ_p$, and $K_\infty$ a $p$-adic Lie extension of $F$ with Galois group $G$ such that \[ F(\mu_{p^\infty}) \subseteq K_\infty \subset F \cdot \QQ_p^{\mathrm{ab}}.\] Let $T$ be a crystalline representation of $\mathcal{G}_{F}$ with non-negative Hodge--Tate weights, and assume that either $K_\infty / F(\mu_{p^\infty})$ is infinite, or $T$ has no quotient isomorphic to the trivial representation. Then there exists a morphism of $\Lambda_{\mathcal{O}_F}(G)$-modules \[ \mathcal{L}_V^G : H^1_{\Iw}(K_\infty, T) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V),\] where $F_\infty$ is the maximal unramified subfield of $K_\infty$, such that: \begin{enumerate} \item for any finite unramified extension $K / F$ contained in $K_\infty$, we have a commutative diagram \[ \begin{diagram} H^1_{\Iw}(K_\infty / \QQ_p, V) &&\rTo^{\mathcal{L}_V^G} &&\mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V)\\ \dTo & & & &\dOnto \\ H^1_{\Iw}(K(\mu_{p^\infty}), V )& \rTo^{\mathcal{L}_V^{G'}} & \mathcal{H}_{K}(G') \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) & \rInto & \mathcal{H}_{\widehat{F}_\infty}(G') \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V). 
\end{diagram} \] Here $G' = \Gal(K(\mu_{p^\infty}) / \QQ_p)$, the right-hand vertical arrow is the map on distributions corresponding to the projection $G \twoheadrightarrow G'$, and the map $\mathcal{L}_V^{G'}$ is defined by \[ \mathcal{L}_V^{G'} = \sum_{\sigma \in \Gal(K / F)} [\sigma] \cdot \mathcal{L}^{\Gamma}_{K, V}(\sigma^{-1} \circ x),\] where $\mathcal{L}^\Gamma_{K, V}$ is the Perrin-Riou regulator map for $K(\mu_{p^\infty}) / K$. \item For any $x \in H^1_{\Iw}(F_\infty(\mu_{p^\infty}) / \QQ_p, V)$ and any character $\eta$ of $\Gamma$, the distribution $\pr^{\eta}(\mathcal{L}_{G, V}(x))$ on $U$, which is defined by twisting by $\eta$ and pushing forward along the projection to $U$, is bounded. \end{enumerate} Moreover, the conditions (1) and (2) above uniquely determine the morphism $\mathcal{L}_V^G$. \end{theorem} \begin{proof} Let us show first that the map $\mathcal{L}_{V}^G$ defined above satisfies (1) and (2). Let $T$ be a choice of lattice in $V$. Let $K$ be any finite unramified extension of $F$ contained in $F_\infty$. Then the diagram \[ \begin{diagram} \mathbb{N}_{F_\infty}(T)^{\psi = 1} & \rTo^{1-\varphi} & \varphi^*\mathbb{N}_{F_\infty}(T)^{\psi = 0}\\ \dTo & & \dTo\\ \mathbb{N}_K(T)^{\psi = 1} & \rTo^{1-\varphi} & \varphi^*\mathbb{N}_{K}(T)^{\psi = 0} \end{diagram} \] evidently commutes; and we also have a commutative diagram \[ \begin{diagram} \varphi^* \mathbb{N}_F(T)^{\psi = 0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty/F} & \rTo^{i_{F_\infty / F}} & \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \\ \dTo & & \dTo\\ \varphi^* \mathbb{N}_K(T)^{\psi = 0} \otimes_{\ZZ_p} S_{K/F} & \rTo^{i_{K/F}} & \mathcal{H}_{\widehat{F}_\infty}(G') \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V), \end{diagram} \] where the arrows $i_{F_\infty / F}$ and $i_{K/F}$ are induced by the inclusions $S_{F_\infty/F} \hookrightarrow \mathcal{H}_{\widehat{F}_\infty}(U)$ and $S_{K/F} \hookrightarrow \mathcal{O}_K[U'] \hookrightarrow \widehat{F}_\infty[U']$, where $U' = \Gal(K/F)$, and the right vertical arrow is the one arising from the projection $G \to G'$. If we combine the two diagrams using the identification $\mathbb{N}_K(T) \cong \mathbb{N}_F(T) \otimes_F S_{K/F}$ and similarly for $F_\infty$, the composite of the maps on the top row is the definition of $\mathcal{L}_{V}^G$, and the composite of the arrows on the bottom row is the map $\mathcal{L}_V^{G'}$. The commutativity of these diagrams therefore proves (1). Property (2) is clear, since the image of $\Lambda_{\widehat{F}_\infty}(U)$ in $\mathcal{H}_{\widehat{F}_\infty}(U)$ is exactly the bounded distributions. We now show that these properties characterise $\mathcal{L}^G_{V}$ uniquely. It suffices to show that (1) and (2) determine the value of $\mathcal{L}^G_{V}(x)$ at any character of $G$. Such a character has the form $\eta \varpi$ where $\eta$ is a character of $\Gamma$ and $\varpi$ is a character of $U$. Property (1) uniquely determines the value at $\eta \times \varpi$ if $\varpi$ has finite order, and property (2) implies that for each fixed $\eta$, the function $\varpi \mapsto \mathcal{L}^G_{V}(x)(\eta \times \varpi)$ is a bounded analytic function on the rigid space parametrising characters of $U$, and hence is determined uniquely by its values on finite-order $\varpi$'s. \end{proof} We now record some properties of the map $\mathcal{L}^G_{V}$. 
\begin{proposition} \label{prop:orderofdistributions} Let $W \subseteq \mathbb{D}_{\mathrm{cris}}(V)$ be a $\varphi$-invariant $F$-subspace such that all eigenvalues of $\varphi$ on the quotient $Q = \mathbb{D}_{\mathrm{cris}}(V) / W$ have $p$-adic valuation $\ge -h$ (where we normalise the $p$-adic valuation on $\Qb_p$ such that $v_p(p) = 1$). Then for any $x \in H^1_{\Iw}(K_\infty, V)$, the image of $x$ under \[ H^1_{\Iw}(K_\infty, V) \rTo^{\mathcal{L}^G_V} \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \to \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} Q\] lies in $D^{(0,h)}(G, \widehat{F}_\infty) \otimes Q$, where $D^{(0,h)}(G, \widehat{F}_\infty)$ is the space of $\widehat{F}_\infty$-valued distributions of order $(0, h)$ with respect to the subgroups $(U, \Gamma)$. \end{proposition} \begin{proof} This is immediate from the definition of the 2-variable regulator map and the corresponding statement for the 1-variable regulator, which is well known. \end{proof} \begin{proposition} \label{prop:galoisequivariance} If $u \in U$ and $\tilde u$ is the unique lifting of $u$ to $G$ acting trivially on $F(\mu_{p^\infty})$, then for any $x \in H^1_{\Iw}(K_\infty, V)$ we have \[\mathcal{L}^G_{V}(x)^u = [\tilde u] \cdot \mathcal{L}^G_{V}(x).\] \end{proposition} \begin{proposition} If $m_1, \dots, m_d$ are a $\Lambda_{\mathcal{O}_F}(\Gamma)$-basis of $\varphi^* \mathbb{N}_F(T)^{\psi = 0}$, and $\omega$ a $\Lambda_{\mathcal{O}_F}(U)$-basis of $S_\infty$, then the image of the $p$-adic regulator is contained in the $\Lambda_{\mathcal{O}_F}(G)$-span of the vectors \[ \left(i_\infty(m_j \mathbin{\hat\otimes} \omega)\right)_{j = 1, \dots, d}.\] \end{proposition} \begin{proposition} \label{prop:injectivereg} If $F_\infty / F$ is infinite, the regulator map $\mathcal{L}^{G}_V$ is injective. \end{proposition} \begin{proof} As before, let us identify $F_\infty$ with the unramified $\ZZ_p$-extension of a finite extension $F_0 / F$. Let $\pr_n:\mathbb{N}_{F_\infty}(T) \rightarrow \mathbb{N}_{F_n}(T)$ be the projection map; for $x\in\mathbb{N}_{F_\infty}(T)$, we have $\varphi(x) = x$ if and only if $\pr_n(x) \in \mathbb{N}_{F_n}(T)^{\varphi = 1}$ for all $n$. However, \[ \mathbb{N}_{F_n}(T)^{\varphi = 1} \subset \mathbb{D}_{F_n}(T)^{\varphi=1} = T^{H_{F_n}}.\] As $T$ is a finitely generated $\ZZ_p$-module, there must be some $m$ such that $T^{H_{F_n}} = T^{H_{F_m}}$ for all $n \ge m$. However, for $n \ge m$ the projection map $T^{H_{F_{n+1}}} \to T^{H_{F_{n}}}$ is multiplication by $p$; so $\pr_m(x)$ is divisible by arbitrarily high powers of $p$ and is thus zero. Hence $x = 0$. \end{proof} The next statement requires some extra notation. Let $\varpi$ be a continuous character $U \to \mathcal{O}_E^\times$, where $E$ is some finite extension of $\QQ_p$. Then there is an obvious isomorphism \begin{equation} \label{eq:iwacohotwist} H^1_{\Iw}(K_\infty, T(\varpi)) \cong H^1_{\Iw}(K_\infty, T)(\varpi). \end{equation} Moreover, via the isomorphism $V \otimes_{\QQ_p} \BB_{\cris} \cong \mathbb{D}_{\mathrm{cris}}(V) \otimes_{F} \BB_{\cris}$, we can regard the space \[ \mathbb{D}_{\mathrm{cris}}(V(\varpi)) = \left( E \otimes_{\QQ_p} V \otimes_{\QQ_p} \BB_{\cris}\right)^{\mathcal{G}_{\QQ_p} = \varpi^{-1}}\] as a subspace of $E \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V) \otimes_{F} \BB_{\cris}$. 
Since the natural inclusion $\widehat{F}_\infty \hookrightarrow \BB_{\cris}$ induces an injection \[ (E \otimes_{\QQ_p} K_\infty)^{U = \varpi^{-1}} \hookrightarrow (E \otimes_{\QQ_p} \BB_{\cris})^{\mathcal{G}_{\QQ_p} = \varpi^{-1}}\] which must be an isomorphism (as the right-hand side must have $E$-dimension $\le 1$), we have a canonical isomorphism \[ \mathbb{D}_{\mathrm{cris}}(V(\varpi)) = \mathbb{D}_{\mathrm{cris}}(V) \otimes_{F} \left(E \otimes_{\QQ_p} K_\infty\right)^{U = \varpi^{-1}}.\] In particular, there is a canonical isomorphism \[ \widehat{F}_\infty \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V(\varpi)) \cong E \otimes_{\QQ_p} \widehat{F}_\infty \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V).\] We also have a canonical map \[ \Tw_{\varpi^{-1}} : E \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G) \to E \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G)\] which on group elements corresponds to the map $g \mapsto \varpi(g)^{-1} g$. Tensoring with the canonical isomorphism above, we obtain a map (which we also denote by $\Tw_{\varpi^{-1}}$) \[ E \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V(\varpi)).\] \begin{proposition} \label{prop:twisting} With the identifications described above, the regulator $\mathcal{L}^G_V$ is invariant under unramified twists: there is a commutative diagram \begin{diagram} \mathcal{O}_E \otimes H^1_{\Iw}(K_\infty, T) & \rTo^{\mathcal{L}_V^G} & E \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \\ \dTo^\cong & & \dTo^{\Tw_{\varpi^{-1}}}\\ H^1_{\Iw}(K_\infty, T(\varpi)) & \rTo^{\mathcal{L}_{V(\varpi)}^{G}} & \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V(\varpi)) \end{diagram} \end{proposition} \begin{proof} By \eqref{eq:iwacohotwist}, we have canonical isomorphisms $H^1_{\Iw}(K_\infty, T) \otimes_{\ZZ_p} \mathcal{O}_E \cong \mathbb{N}_{F_\infty}(T)^{\psi = 1} \otimes_{\ZZ_p} \mathcal{O}_E$, and $H^1_{\Iw}(K_\infty, T(\varpi)) \cong \mathbb{N}_{F_\infty}(T(\varpi))^{\psi = 1}$. We can therefore rewrite the above diagram to obtain the following: \begin{diagram} \mathbb{N}_{F_\infty}(T)^{\psi = 1} \otimes_{\ZZ_p} \mathcal{O}_E & \rTo^{1-\varphi} & \mathbb{N}_{F_\infty}(T)^{\psi = 0} \otimes_{\ZZ_p} \mathcal{O}_E & \rTo & \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V) \otimes_{\QQ_p} E \\ \dTo & & \dTo & & \dTo^{\Tw_{\varpi^{-1}}}\\ \mathbb{N}_{F_\infty}(T(\varpi))^{\psi = 1} & \rTo^{1-\varphi} & \mathbb{N}_{F_\infty}(T(\varpi))^{\psi = 0} & \rTo & \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V(\varpi)). \end{diagram} Here the left and middle vertical maps are obtained by restriction from that of Theorem \ref{thm:unramifiedtwist}, taking $\tau = \varpi^{-1}$; as noted above, this isomorphism commutes with $\varphi$ and $\psi$. The commutativity of the left square is clear. 
Moreover, the isomorphisms \[ \mathbb{N}_F(T(\varpi)) \cong \mathbb{N}_F(T) \otimes_{\ZZ_p} \left( \mathcal{O}_E \otimes_{\ZZ_p} \widehat{\cO}_{F_\infty}\right)^{U = \varpi^{-1}}\] and \[ \mathbb{D}_{\mathrm{cris}}(V(\varpi)) \cong \mathbb{D}_{\mathrm{cris}}(V) \otimes_{\ZZ_p} \left( \mathcal{O}_E \otimes_{\ZZ_p} \widehat{\cO}_{F_\infty} \right)^{U = \varpi^{-1}}\] are compatible (since the first is given by multiplication in $\mathbb{A}$, the second in $\BB_{\cris}$, and the inclusion of $\widehat{\cO}_{F_\infty}$ in $\BB_{\cris}$ factors through the natural maps $\mathbb{A}^+ \hookrightarrow \tilde{\mathbb{A}}^+ \hookrightarrow \mathbb{A}_{\cris}$). Hence the commutativity of the right square follows, as the twisting maps $\Lambda_{\widehat{\cO}_{F_\infty}}(U) \to \Lambda_{\widehat{\cO}_{F_\infty}}(U)$ and $\mathcal{H}_{\widehat{F}_\infty}(U) \to \mathcal{H}_{\widehat{F}_\infty}(U)$ are evidently compatible. \end{proof} \subsection{An explicit formula for the values of the regulator} \label{sect:explicitformula} In this section, we use the results from the previous section to give a direct interpretation of the value of the regulator map $\mathcal{L}^G_V$ at any de Rham character of $G$, relating these to the values of the Bloch-Kato exponential maps for $V$ and its twists. In this section we assume (for simplicity) that $F = \QQ_p$. As above, let $\varpi$ be a continuous character of $U$ with values in $\mathcal{O}_E$, for some finite extension $E / \QQ_p$. Combining Proposition \ref{prop:twisting} with the defining property of $\mathcal{L}^G_{V(\varpi)}$ in Theorem \ref{thm:localregulator}, we have: \begin{theorem} The following diagram commutes: \[ \begin{diagram} H^1_{\Iw}(K_\infty, V) & \rTo^{\mathcal{L}_{V}^G} & \mathcal{H}_{\widehat{F}_\infty}(G)^{\circ} \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V) \\ \dTo_{\pr_{\Iw}^\varpi} & & \dTo_{\pr^\varpi_{\cris}}\\ H^1_{\Iw}(\QQ_p(\mu_{p^\infty}), V(\varpi)) & \rTo^{\mathcal{L}^\Gamma_{V(\varpi)}} & \mathcal{H}_{\QQ_p}(\Gamma) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V(\varpi)). \end{diagram} \] \end{theorem} Here $\mathcal{H}_{\widehat{F}_\infty}(G)^\circ$ denotes the subspace of $\mathcal{H}_{\widehat{F}_\infty}(G)$ satisfying the Galois-equivariance property of Proposition \ref{prop:galoisequivariance}. The map $\pr_{\Iw}^\varpi$ is the composite of the isomorphism \eqref{eq:iwacohotwist} with the corestriction map; the right-hand vertical map is the composite of $\Tw_{\varpi^{-1}}$ with push-forward to $\Gamma$. (Hence both vertical maps are $U$-equivariant, if we let $U$ act on the bottom row by $\varpi^{-1}$.) We now apply the results of \S \ref{appendix:cyclo} to each unramified twist $V(\varpi)$ of $V$ to determine exactly the values of $\mathcal{L}_V^G$ at any character of $G$ which is Hodge--Tate, in terms of the dual exponential and logarithm maps (cf.~\S\ref{sect:crystallinereps} above). \begin{definition} Let $\omega$ be any continuous character of $G$ with values in some finite extension $E / \QQ_p$. For $x\in H^1_{\Iw}(K_\infty,V)$, we write $x_{\omega, 0}$ for the image of $x$ in $H^1(\QQ_p, V(\omega^{-1}))$. \end{definition} We can now apply Theorem \ref{thm:explicitformulacyclo} to obtain the following formulae for the values of $\mathcal{L}^G_V$: \begin{theorem} \label{thm:explicitformula} Let $x\in H^1_{\Iw}(K_\infty,V)$. Let $j$ be the Hodge--Tate weight of $\omega$, and $n$ its conductor. If $n = 0$, suppose that $\mathbb{D}_{\mathrm{cris}}(V(\omega^{-1}))^{\varphi = p^{-1}} = 0$. 
Then we have \begin{multline*} \mathcal{L}^G_{V}(x)(\omega) = \Gamma^*(1+j) \cdot \varepsilon(\omega^{-1}) \cdot \frac{\Phi^n P(\omega^{-1}, \Phi)}{P(\omega, p^{-1} \Phi^{-1})} \\ \times \begin{cases} \exp^*_{V(\omega^{-1})^*(1)}(x_{\omega,0}) \otimes t^{-j} e_j & \text{if $j \ge 0$,}\\ \log_{\QQ_p, V(\omega^{-1})}(x_{\omega,0}) \otimes t^{-j} e_j & \text{if $j \le -1$,} \end{cases} \end{multline*} where the notation is as follows: \begin{itemize} \item $\Gamma^*(1 + j)$ is the leading term of the Taylor expansion of the Gamma function at $1 + j$, \[ \Gamma^*(1 + j) = \begin{cases} j! & \text{if $j \ge 0$,}\\ \frac{(-1)^{-j-1}}{(-j-1)!} & \text{if $j \le -1$.} \end{cases} \] \item $P(\omega, X)$ and $\varepsilon(\omega)$ are the $L$- and $\varepsilon$-factors of the Weil--Deligne representation $\mathbb{D}_{\mathrm{pst}}(\omega)$ (see \S \ref{sect:epsfactors} above). \item $\Phi$ denotes the operator on $\mathbb{D}_{\mathrm{cris}}(V) \otimes_{\QQ_p} \widehat{F}_\infty$ which is obtained by extending the Frobenius of $\mathbb{D}_{\mathrm{cris}}(V)$ to act trivially on $\widehat{F}_\infty$ (rather than as the usual Frobenius on $\widehat{F}_\infty$). \end{itemize} \end{theorem} \begin{remark} To define $\mathcal{L}^G_{V}$ we made a choice of compatible system of $p$-power roots of unity $\zeta$; but the dependence of $\mathcal{L}^G_V$ on $\zeta$ is clear from the formula of Theorem \ref{thm:explicitformula}. If we temporarily write $\mathcal{L}^{G}_{V}(x, \zeta)$ for the regulator using the roots of unity $\zeta$, then for any $\gamma \in \Gamma$ we have \[ \mathcal{L}^G_V(x, \gamma \zeta)(\omega) = \omega(\tilde \gamma)^{-1} \mathcal{L}^G_V(x, \zeta)(\omega),\] where $\tilde \gamma$ is the unique lifting of $\gamma$ to the inertia subgroup of $G$. \end{remark} \subsection{A local reciprocity formula} \label{sect:localrecip} Our final local result will be an analogue of Perrin-Riou's local reciprocity formula, relating the maps $\mathcal{L}^G_V$ and $\mathcal{L}^G_{V^*(1)}$. The cyclotomic version of this formula, conjecture $\operatorname{Rec}(V)$ in \cite{perrinriou94}, was originally formulated in terms of Perrin-Riou's exponential map $\Omega_{V, h}$, and proved independently by Colmez \cite{colmez98} and Benois \cite{benois00}. In Appendix \ref{appendix:cyclo} below we formulate and prove a version using the map $\mathcal{L}^\Gamma_V$ instead. Here, as in Appendix \ref{appendix:cyclo}, it will be convenient for us to extend the definition of the regulator map to representations which are crystalline, but which may have some negative Hodge--Tate weights. To do this, we note that if $V$ is good crystalline, then we have \[ \Tw_{\chi}\left( \mathcal{L}^G_{V(1)}(x \otimes e_{1})\right) = \ell_{-1}\left(\mathcal{L}^G_V(x)\right) \otimes t^{-1} e_1.\] So for arbitrary crystalline $V$, and any $j \gg 0$ such that $V(j)$ is good crystalline, we may \emph{define} $\mathcal{L}^G_{V}$ by the formula \[ \mathcal{L}^G_V(x) = (\ell_{-1} \circ \dots \circ \ell_{-j})^{-1}\left( \Tw_{\chi^j}\left(\mathcal{L}^G_{V(j)}(x \otimes e_{j})\right)\right) \otimes t^j e_{-j}\] and this does not depend on the choice of $j$; this then takes values in the fraction field of $\mathcal{H}_{\widehat{F}_\infty}(G)$.
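For concreteness, we record the simplest instance of this convention (this is nothing more than the case $j = 1$ of the formula above, under the same hypotheses): if the Hodge--Tate weights of $V$ are all $\ge -1$, so that $V(1)$ is good crystalline, then the definition reads \[ \mathcal{L}^G_V(x) = \ell_{-1}^{-1}\left( \Tw_{\chi}\left(\mathcal{L}^G_{V(1)}(x \otimes e_{1})\right)\right) \otimes t\, e_{-1},\] and when $V$ is itself good crystalline the displayed compatibility shows that this agrees with the regulator of Definition \ref{def:2variableregulator}.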
\begin{theorem} For any crystalline representation $V$ and any classes $x \in H^1_{\Iw}(K_\infty, V)$ and $y \in H^1_{\Iw}(K_\infty, V^*(1))$, we have \[ \langle \mathcal{L}_V^G(x), \mathcal{L}_{V^*(1)}^G(y) \rangle_{\mathrm{cris}, V} = -\sigma_{-1} \cdot \ell_0 \cdot \langle x, y \rangle_{K_\infty, V},\] where $\sigma_{-1}$ denotes the unique element of the inertia subgroup of $G$ such that $\chi(\sigma_{-1}) = -1$. \end{theorem} \begin{proof} Using Lemma \ref{lemma:PRpairingtwist} and Proposition \ref{prop:twisting} for each unramified character $\tau$ of $G$ reduces this immediately to the corresponding statement for the cyclotomic regulator maps $\mathcal{L}^\Gamma_{V(\tau)}$, which is Theorem \ref{thm:cycloreciprocity}. \end{proof} \section{Regulators for extensions of number fields} \label{sect:globalregulator} \label{sect:semilocalregulator} \newcommand{\overline{K}_{\fp}}{\overline{K}_{\mathfrak{p}}} In this section, we show how to define an extension of the regulator map in the context of certain $p$-adic Lie extensions of number fields. This section draws heavily on the cyclotomic case studied by Perrin-Riou in \cite{perrinriou94}; see also \cite{iovitapollack06} for the case of more general $\ZZ_p$-extensions of number fields. Let $K$ be a number field, $p$ a (rational) prime, and $\mathfrak{p}$ a prime of $K$ above $p$. We choose a prime $\mathfrak{P}$ of $\overline{K}$ above $\mathfrak{p}$. \subsection{Semilocal cohomology} Let $T$ be a finitely generated $\ZZ_p$-module with a continuous action of $\mathcal{G}_K$. For each finite extension $L$ of $K$, the set of primes $\mathfrak{q}$ of $L$ above $\mathfrak{p}$ is finite, and for each $i$ we may define the semilocal cohomology group \[ Z^i_{\mathfrak{p}}(L, T) = \bigoplus_{\mathfrak{q} \mid \mathfrak{p}} H^i(L_\mathfrak{q}, T).\] If $L / K$ is Galois, with Galois group $G$, then we have a canonical isomorphism \begin{equation} \label{eq:semilocalcoho} Z^i_{\mathfrak{p}}(L, T) \cong \ZZ_p[G] \otimes_{\ZZ_p[G_\mathfrak{P}]} H^i(L_\mathfrak{P}, T), \end{equation} where $G_\mathfrak{P}$ is the decomposition group of $\mathfrak{P}$ in $G$. In particular, it has an action of $\ZZ_p[G]$, and it is easy to see that the localization map \[ \loc_{\mathfrak{p}} = \bigoplus_{\mathfrak{q} \mid \mathfrak{p}} \loc_\mathfrak{q} : H^i(L, T) \to Z^i_\mathfrak{p}(L, T)\] is $G$-equivariant. If now $K_\infty / K$ is a $p$-adic Lie extension of number fields with Galois group $G$, we may define semilocal Iwasawa cohomology groups \[ Z^i_{\Iw, \mathfrak{p}}(K_\infty, T) = \varprojlim_{K'} Z^i_{\mathfrak{p}}(K', T),\] where the inverse limit is over finite Galois extensions $K' / K$ contained in $K_\infty$. The isomorphisms \eqref{eq:semilocalcoho} for each finite subextension imply that \begin{equation} \label{eq:semilocaliwasawacoho} Z^i_{\Iw, \mathfrak{p}}(K_\infty, T) = \Lambda_{\ZZ_p}(G) \otimes_{\Lambda_{\ZZ_p}(G_\mathfrak{P})} H^i_{\Iw}(K_{\infty, \mathfrak{P}}, T). \end{equation} \begin{theorem} \label{thm:semilocalreg} Let $K_\infty / K$ be any $p$-adic Lie extension of number fields with Galois group $G$, $\mathfrak{p}$ a prime of $K$ above $p$, and $\mathfrak{P}$ a prime of $\overline{K}$ above $\mathfrak{p}$, such that \begin{itemize} \item $K_\mathfrak{p}$ is unramified over $\QQ_p$, \item the completion $K_{\infty, \mathfrak{P}}$ is of the form $F_\infty(\mu_{p^\infty})$, for $F_\infty$ an infinite unramified extension of $K_{\mathfrak{p}}$. 
\end{itemize} Then there is a unique homomorphism of $\Lambda_{\ZZ_p}(G)$-modules \[ \mathcal{L}^G_{V} : Z^1_{\Iw, \mathfrak{p}}(K_\infty, V) \to \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V), \] where $\widehat{F}_\infty$ is the $p$-adic completion of the maximal unramified subfield of $K_{\infty, \mathfrak{P}}$, whose restriction to $H^1_{\Iw}(K_{\infty, \mathfrak{P}}, V)$ is the local regulator map $\mathcal{L}^{G_\mathfrak{P}}_V$. \end{theorem} \begin{proof} Immediate by tensoring the local regulator $\mathcal{L}^{G_\mathfrak{P}}_V$ with $\Lambda_{\ZZ_p}(G)$, using equation \eqref{eq:semilocaliwasawacoho}. \end{proof} \begin{remark} Note that if $\mathfrak{p}$ is finitely decomposed in $K_\infty$, so $[G : G_{\mathfrak{P}}]$ is finite, one can describe $\mathcal{L}^G_{V}$ as a direct sum of local regulators: \[ \mathcal{L}^G_{V}(x) = \bigoplus_{\sigma \in G / G_{\mathfrak{P}}} [\sigma] \cdot \mathcal{L}^{G_{\mathfrak{P}}}_{V}(\loc_{\mathfrak{p}} \sigma^{-1}(x)).\] However, the construction also applies when $\mathfrak{p}$ is infinitely decomposed. Thus, for instance, if $d > 1$ and $K$ is a CM field of degree $2d$ in which $p$ splits completely, then one can take $K_\infty$ to be the $(d + 1)$-dimensional abelian $p$-adic Lie extension given by the ray class field $K(p^\infty)$. \end{remark} \begin{remark} One can use the regulator maps to construct Coleman maps and restricted Selmer groups of $V$ over $K_\infty$, in the spirit of the constructions in \cite{leiloefflerzerbes10} for the cyclotomic extension. \end{remark} \subsection{The module of p-adic L-functions} \label{sect:defmoduleofLfunctions} We now assume that the number field $K$ is totally complex and Galois over $\mathbb{Q}$, and that $p$ splits completely in $K$, $(p)=\mathfrak{p}_1\dots \mathfrak{p}_e$. For each of these primes, fix an embedding of $\overline{\QQ}$ into $\overline{K_{\mathfrak{p}_i}}$. Let $T$ be a $\ZZ_p$-representation of $\mathcal{G}_K$, and let $V=T[p^{-1}]$. \begin{assumption} \label{assumption:crystalline} For all $1\leq i\leq e$, the restriction of $V$ to $\mathcal{G}_{K_{\mathfrak{p}_i}}$ is good crystalline. \end{assumption} Let $S$ be the finite set of primes of $K$ containing all the primes above $p$, all the archimedean places and all the places whose inertia group acts non-trivially on $T$. Denote by $K^S$ the maximal extension of $K$ unramified outside $S$. Let $K_\infty$ be a $p$-adic Lie extension of $K$ contained in $K^S$ which is Galois over $\mathbb{Q}$ and satisfies the conditions of Theorem \ref{thm:semilocalreg} for each of the primes $\mathfrak{p}_1, \dots, \mathfrak{p}_e$. \begin{definition} Define $H^1_{\Iw,S}(K_\infty,T)=\varprojlim H^1(\Gal(K^S\slash K_n),T)$, where $\{K_n\}$ is a sequence of finite extensions of $K$ such that $K_\infty=\bigcup K_n$. We also let \[ H^1_{\Iw,S}(K_\infty,V)=H^1_{\Iw,S}(K_\infty,T)\otimes_{\ZZ_p}\QQ_p.\] \end{definition} \begin{assumption} \label{assumption:eltsorderp} The Galois group $G = \Gal(K_\infty / K)$ has no element of order $p$. \end{assumption} \begin{remark} Examples of $p$-adic Lie extensions satisfying the above hypotheses occur naturally in the context of class field theory; for instance, if $K$ is a CM field in which $p$ splits, and $K_\infty$ the ray class field $K(p^\infty)$, all the conditions are automatic except possibly \ref{assumption:eltsorderp}, and this may be dealt with by replacing $K_\infty$ by a finite subextension. 
We shall study extensions of this type in more detail in \S \ref{sect:imquad} below, where we take $K$ to be an imaginary quadratic field. \end{remark} As $K_\infty$ is a Galois extension of $\mathbb{Q}$, the Galois groups $\Gal(K_{\infty, \mathfrak{p}_i}\slash K_{\mathfrak{p}_i})$, $1\leq i\leq e$, are conjugate to each other in $\Gal(K_\infty\slash \mathbb{Q})$, as are their inertia subgroups. If $L_{\infty,i}$ denotes the maximal unramified extension of $K_{\mathfrak{p}_i}$ in $K_{\infty,\mathfrak{p}_i}$, we get canonical identifications of $L_{\infty,i}$ with $L_{\infty,j}$ for all $1\leq i,j\leq e$. We can therefore drop the index and denote this unramified extension of $\QQ_p$ by $F_\infty$. As explained in Section \ref{sect:semilocalregulator}, for $1\leq i\leq e$, we have a regulator map \[\mathcal{L}^G_{V,\mathfrak{p}_i} : Z^1_{\Iw, \mathfrak{p}_i}(K_\infty, T) \rightarrow \mathbb{D}_{\cris,\mathfrak{p}_i}(V) \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G).\] Via the localisation map $\loc_{\mathfrak{p}_i}:H^1_{\Iw,S}(K_\infty, T) \rightarrow Z^1_{\Iw, \mathfrak{p}_i}(K_\infty, T)$, it induces a map \[ H^1_{\Iw,S}(K_\infty, T) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\cris,\mathfrak{p}_i}(V)\] which we also denote by $\mathcal{L}^G_{V,\mathfrak{p}_i}$. Let \[ \mathbb{D}_p(V) =\mathbb{D}_{\mathrm{cris}}\Big(\big(\Ind_{K\slash\mathbb{Q}}V\big)|_{\mathcal{G}_{\QQ_p}}\Big) \cong \bigoplus_{i=1}^e \mathbb{D}_{\cris,\mathfrak{p}_i}(V). \] Define \[ \mathcal{L}^G_V = \bigoplus_{i=1}^e \mathcal{L}^G_{V,\mathfrak{p}_i} : H^1_{\Iw,S}(K_\infty,V) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_p(V).\] Denote by $\mathcal{K}_{\widehat{F}_\infty}(G)$ the fraction field of $\mathcal{H}_{\widehat{F}_\infty}(G)$. Assume that Conjecture $\Leop(K_\infty,V)$ (as formulated in \S \ref{sect:globalranks} below) holds, so $H^2_{\Iw}(K_\infty\slash K,V)$ is $\Lambda_{\QQ_p}(G)$-torsion. Let $d = \frac{1}{2}[K : \mathbb{Q}] \dim_{\QQ_p}(V)$. As $\rank_{\Lambda_{\ZZ_p}(G)} H^1_{\Iw,S}(K_\infty,T)=d$ by Theorem \ref{thm:globalrank}, the regulator $\mathcal{L}^G_V$ induces a map\footnote{For the definition of the determinant of a finitely generated $\Lambda_{\ZZ_p}(G)$-module, see \cite{knudsenmumford76}; cf.\ also \cite[\S 3.1.5]{perrinriou94}.} \[ \det \mathcal{L}^G_V : \det_{\Lambda_{\QQ_p}(G)} H^1_{\Iw,S}(K_\infty,V) \rTo \mathcal{K}_{\widehat{F}_\infty}(G)\otimes_{\QQ_p}\bigwedge^d \mathbb{D}_p(V).\] \begin{definition} \label{def:Iarith} Define $\mathbb{I}_{\arith,p}(V)$ to be the $\Lambda_{\QQ_p}(G)$-submodule of $\mathcal{K}_{\widehat{F}_\infty}(G)\otimes_{\QQ_p}\bigwedge^d\mathbb{D}_p(V)$ given by \[ \mathbb{I}_{\arith,p}(V)= \det \mathcal{L}^G_V\big(H^1_{\Iw}(K_\infty,T)\big) \otimes \left(\det H^2_{\Iw}(K_\infty,T)\right)^{-1}.\] \end{definition} In the spirit of Perrin-Riou (cf.~\cite[\S 3.1]{perrinriou03}), we can give an explicit description of $\mathbb{I}_{\arith,p}(V)$ as follows. Let $f_2 \in \Lambda_{\QQ_p}(G)$ be a generator of the characteristic ideal of $H^2_{\Iw}(K_\infty,T)$, so $\det H^2_{\Iw}(K_\infty,T) = f_2^{-1} \Lambda_{\QQ_p}(G)$. \begin{proposition} \label{prop:explicitdescription} Let $\mathfrak{c}=\{c_1,\dots,c_d\}\subset H^1_{\Iw,S}(K_\infty,V)$ be elements such that if $\mathcal{C}$ denotes the $\Lambda_{\QQ_p}(G)$-submodule of $H^1_{\Iw,S}(K_\infty,V)$ spanned by the elements of $\mathfrak{c}$, then the quotient $H^1_{\Iw,S}(K_\infty,V)\slash \mathcal{C}$ is $\Lambda_{\QQ_p}(G)$-torsion.
Denote by $f_{\mathfrak{c}}\in\Lambda_{\QQ_p}(G)$ the corresponding characteristic element. Then \[ \mathbb{I}_{\arith,p}(V)=\Lambda_{\QQ_p}(G) \, f_2 f_{\mathfrak{c}}^{-1} \, \mathcal{L}^G_V(c_1)\wedge\dots\wedge \mathcal{L}^G_V(c_d).\] \end{proposition} \begin{proof} Clear from the construction. \end{proof} \begin{remark} If $H^1_{\Iw,S}(K_\infty,V)$ is free as a $\Lambda_{\QQ_p}(G)$-module, then $\mathbb{I}_{\arith,p}(V)$ must be contained in $\mathcal{H}_{\widehat{F}_\infty}(G)$. \end{remark} \begin{remark} Via the isomorphism \[ \mathcal{K}_{\widehat{F}_\infty}(G)\otimes\bigwedge^d\mathbb{D}_p(V)\cong \Hom_{\QQ_p}\Big(\bigwedge^d\mathbb{D}_p(V^*(1)),\mathcal{K}_{\widehat{F}_\infty}(G)\Big),\] we can consider the $\Lambda_{\QQ_p}(G)$-module $\mathbb{I}_{\arith,p}(V)$ as a submodule of \[ \Hom_{\QQ_p}\big(\bigwedge^d\mathbb{D}_p(V^*(1)),\mathcal{K}_{\widehat{F}_\infty}(G)\big).\] \end{remark} The following proposition implies that $\mathbb{I}_{\arith,p}(V)\neq 0$: \begin{proposition} \label{prop:smallkernel} Assume that conjecture $\Leop(K_\infty,V^*(1))$ holds. Then the kernel of the homomorphism \[ \loc_p:H^1_{\Iw,S}(K_\infty, T) \rTo \bigoplus_{i=1}^e Z^1_{\Iw,\mathfrak{p}_i}(K_\infty,T)\] is $\Lambda_{\ZZ_p}(G)$-torsion. \end{proposition} \begin{proof} We adapt the arguments in \cite[\S A.2]{perrinriou95}. For $0\leq j\leq 2$, define the $\Lambda_{\ZZ_p}(G)$-modules \[ Z^j_S(K_\infty,T)=\bigoplus_{v\in S} H^j_{\Iw}(K_{\infty,v},T)\hspace{3ex} \text{and}\hspace{3ex} Z^j_p(K_\infty,T)= \bigoplus_{i=1}^e Z^j_{\Iw,\mathfrak{p}_i}(K_\infty,T).\] Also, define \[ X^i_{\infty,S}(K_\infty,T)=H^i\big(G_S(K_\infty),V^*(1)\slash T^*(1)\big). \] Taking the limit over the $K_{m,n}$ of the Poitou-Tate exact sequence gives an exact sequence of $\Lambda_{\ZZ_p}(G)$-modules \begin{align*} 0 &\rTo X^2_{\infty,S}(K_\infty,T)^\vee \rTo H^1_{\Iw,S}(K_\infty,T) \rTo^{\loc_S} Z^1_S(K_\infty,T) \\ &\rTo X^1_{\infty,S}(K_\infty,T)^\vee \rTo H^2_{\Iw,S}(K_\infty,T) \rTo Z^2_{S}(K_\infty,T) \\ &\rTo X^0_{\infty,S}(K_\infty,T)^\vee\rTo 0. \end{align*} By Theorem \ref{thm:globalrank}, since we are assuming $\Leop(K_\infty,V^*(1))$, the module $X^2_{\infty,S}(K_\infty,T)$ is $\Lambda_{\ZZ_p}(G)$-cotorsion. Thus $\ker(\loc_S)$ is $\Lambda_{\ZZ_p}(G)$-torsion. As \[ \rank_{\Lambda_{\ZZ_p}(G)}Z^1_S(K_\infty,T)=\rank_{\Lambda_{\ZZ_p}(G)}Z^1_p(K_\infty,T)\] by Proposition \ref{thm:localrank}, this implies the result. \end{proof} \begin{corollary} If conjecture $\Leop(K_\infty,V)$ holds, the $\Lambda_{\ZZ_p}(G)$-mod\-ule $\mathbb{I}_{\arith,p}(V)$ is non-zero. \end{corollary} \begin{proof} Consequence of Propositions \ref{prop:smallkernel} and \ref{prop:injectivereg}. \end{proof} As in the cyclotomic case, we conjecture that $\mathbb{I}_{\arith,p}(V)$ should have a canonical basis vector -- a $p$-adic $L$-function for $V$ -- whose image under evaluation at de Rham characters of $G$ is related to the critical $L$-values of $V$ and its twists. In the above generality this is a somewhat vain exercise as even the analytic continuation and algebraicity of the values of the complex $L$-function is conjectural. In the next section, we shall make this philosophy precise in some special cases; we shall show that it is consistent with known results regarding $p$-adic $L$-functions, but that it also implies some new conjectures regarding $p$-adic $L$-functions of modular forms. 
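\begin{remark} As a simple illustration of Proposition \ref{prop:explicitdescription} (under hypotheses which are certainly not automatic), suppose that $H^1_{\Iw,S}(K_\infty,V)$ is free of rank $d$ over $\Lambda_{\QQ_p}(G)$, with basis $c_1,\dots,c_d$. Taking $\mathcal{C}$ to be the whole of $H^1_{\Iw,S}(K_\infty,V)$, we may take $f_{\mathfrak{c}} = 1$, so that \[ \mathbb{I}_{\arith,p}(V)=\Lambda_{\QQ_p}(G)\, f_2\, \mathcal{L}^G_V(c_1)\wedge\dots\wedge \mathcal{L}^G_V(c_d);\] in this situation the conjectural $p$-adic $L$-function just described would have to agree with $f_2\, \mathcal{L}^G_V(c_1)\wedge\dots\wedge \mathcal{L}^G_V(c_d)$ up to a unit in $\Lambda_{\QQ_p}(G)$. \end{remark}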
\section{Imaginary quadratic fields} \label{sect:imquad} \subsection{Setup} \label{sect:setupimquad} Throughout this section, let $K$ be an imaginary quadratic field in which $p$ splits; write $(p)=\mathfrak{p} {\bar{\fp}}$. We now introduce a specific class of extensions $K_\infty / K$ for which the hypotheses in Section \ref{sect:defmoduleofLfunctions} are satisfied. Let $\mathfrak{f}$ be an integral ideal of $K$ prime to $\mathfrak{p}$ and ${\bar{\fp}}$, and let $K_\infty$ be the ray class field $K(\mathfrak{f} p^\infty)$. We assume that $\mathfrak{f}$ is stable under $\Gal(K / \mathbb{Q})$, which is equivalent to the assumption that $K_\infty$ is Galois over $\mathbb{Q}$. It is well known that $K_\infty \supseteq K(\mu_{p^\infty})$, and that the primes $\mathfrak{p}$ and ${\bar{\fp}}$ are finitely decomposed in $K_\infty$; so $G = \Gal(K_\infty / K)$ is an abelian $p$-adic Lie group of dimension 2, and the decomposition groups $G_{\mathfrak{p}}$ and $G_{{\bar{\fp}}}$ are open subgroups. \begin{lemma} If $p$ is coprime to the order of the ray class group $\operatorname{Cl}_{\mathfrak{f}}(K)$, then $G$ has no elements of order $p$. \end{lemma} \begin{proof} By class field theory, we have an exact sequence \[ 0 \rTo \overline{U_\mathfrak{f}} \rTo (\mathcal{O}_K \otimes \ZZ_p)^\times \rTo \operatorname{Cl}_{\mathfrak{f} p^\infty}(K) \rTo \operatorname{Cl}_{\mathfrak{f}}(K) \rTo 0,\] where $U_\mathfrak{f}$ is the group of units of $\mathcal{O}_K$ that are congruent to 1 modulo $\mathfrak{f}$ and $\overline{U_\mathfrak{f}}$ is the closure of $U_\mathfrak{f}$ in $(\mathcal{O}_K \otimes \ZZ_p)^\times$. So it suffices to show that the quotient \[ \frac{(\mathcal{O}_K \otimes \ZZ_p)^\times}{\overline{U_\mathfrak{f}}}\] is $p$-torsion-free. However, since $K$ is imaginary quadratic, $\overline{U_\mathfrak{f}} = U_\mathfrak{f}$ is a finite group, and as $p$ is odd and split in $K$, we have $p \nmid |\overline{U_\mathfrak{f}}|$. Since $(\mathcal{O}_K \otimes \ZZ_p)^\times$ is $p$-torsion-free, the result follows. \end{proof} Let us write $\Gamma_\mathfrak{p} = \Gal(K_\infty / K(\mathfrak{f} {\bar{\fp}}^\infty))$ and $\Gamma_{{\bar{\fp}}} = \Gal(K_\infty / K(\mathfrak{f} \mathfrak{p}^\infty))$. Note that $\Gamma_\mathfrak{p}$ and $\Gamma_{{\bar{\fp}}}$ are $p$-adic Lie groups of rank 1 whose intersection is trivial, and the open subgroup $\Gal(K_\infty / K(\mathfrak{f}))$ is isomorphic to $\Gamma_\mathfrak{p} \times \Gamma_{{\bar{\fp}}}$. Let $T$ be a $\ZZ_p$-representation of $\mathcal{G}_K$ of rank $d$, and let $V=T[p^{-1}]$. As in the previous section, assume that for $\star\in \{\mathfrak{p}, {\bar{\fp}} \}$, the restriction of $V$ to $\mathcal{G}_{K_\star}$ is good crystalline. \begin{lemma} We have a canonical decomposition \[ \bigwedge^d \mathbb{D}_p(V) \cong \bigoplus_{m+n=d} \left( \bigwedge^m \mathbb{D}_{\cris,\mathfrak{p}}(V)\otimes_{\QQ_p} \bigwedge^n \mathbb{D}_{\cris,{\bar{\fp}}}(V) \right). \] \end{lemma} \begin{proof} Clear from the definition.
\end{proof} To simplify the notation, let us write \[\mathbb{D}_p(V)^{(m,n)}=\bigwedge^m \mathbb{D}_{\cris,\mathfrak{p}}(V)\otimes_{\QQ_p} \bigwedge^n \mathbb{D}_{\cris,{\bar{\fp}}}(V).\] \subsection{Galois descent of the module of L-functions} \label{sect:Galoisdescent} \begin{definition} For $m,n\in\mathbb{N}$ such that $m+n=d$, define $\mathbb{I}^{(m,n)}_{\arith,p}(V)$ to be the image of $\mathbb{I}_{\arith,p}(V)$ in $\mathcal{K}_{\widehat{F}_\infty}(G)\otimes_{\QQ_p} \mathbb{D}_p(V)^{(m,n)}$ induced from the projection map $\bigwedge^d \mathbb{D}_p(V)\rTo \mathbb{D}_p(V)^{(m,n)}$. \end{definition} The following theorem is the main result of this section: \begin{theorem} \label{thm:coefficients} If $d = 2n$, then $\mathbb{I}^{(n,n)}_{\arith,p}(V) \subset \mathcal{K}_{L}(G) \otimes_{\QQ_p} \mathbb{D}_p(V)^{(n,n)}$, where $L$ is a finite unramified extension of $\QQ_p$. \end{theorem} The rest of this section is devoted to proving this theorem. As in Proposition \ref{prop:galoisequivariance}, let $\tilde \sigma_{\mathfrak{p}} \in G_{\mathfrak{p}}$ be the unique element of $G_{\mathfrak{p}}$ which lifts the Frobenius automorphism at $\mathfrak{p}$ of $K(\mathfrak{f} {\bar{\fp}}^\infty)$ and which is trivial on $K(\mu_{p^\infty})$, and similarly for ${\bar{\fp}}$. \begin{lemma} The element $\tilde \sigma_{\mathfrak{p}} \tilde \sigma_{{\bar{\fp}}} \in G$ has finite order. \end{lemma} \begin{proof} Let us consider the ``semilocal Artin map'' \[ \theta = (\theta_\mathfrak{p}, \theta_{{\bar{\fp}}}) : K_{\mathfrak{p}}^\times \times K_{{\bar{\fp}}}^\times \to G.\] Here $\theta_{\mathfrak{p}}$ is the Artin map for $K_\mathfrak{p}$, normalised so that uniformisers map to geometric Frobenius elements. The kernel of $\theta$ is the image in $K_{\mathfrak{p}}^\times \times K_{{\bar{\fp}}}^\times$ of the elements of $K^\times$ which are units outside $p$ and congruent to $1 \bmod \mathfrak{f}$. By the functoriality of the global Artin map (cf.~\cite[VI.5.2]{neukirch99}), there is a commutative diagram \[ \begin{diagram} K_{\mathfrak{p}}^\times \times K_{{\bar{\fp}}}^\times & \rTo^\theta & G \\ \dTo^{N_{K/\mathbb{Q}}} & & \dTo \\ \QQ_p^\times & \rTo & \Gamma, \end{diagram} \] The bottom horizontal map is the local Artin map for $\mathbb{Q}(\mu_{p^\infty}) / \mathbb{Q}$; if we identify $\Gamma$ with $\ZZ_p^\times$, this map is the identity on $\ZZ_p^\times$ and sends $p$ to 1. Consider the element $(p, 1)$ of $K_{\mathfrak{p}}^\times \times K_{{\bar{\fp}}}^\times$. The image of this in the group $\Gal(K(\mathfrak{f} {\bar{\fp}}^\infty) / K)$ is the Frobenius $\sigma_p$. Its image in $\QQ_p^\times$ is $p$, which is mapped to the identity in $\Gamma$. Hence the image of $(p, 1)$ in $G$ is $\tilde \sigma_\mathfrak{p}$. Similarly, $(1, p)$ is a lifting of $\tilde \sigma_{{\bar{\fp}}}$. Hence $\tilde \sigma_{\mathfrak{p}} \tilde \sigma_{{\bar{\fp}}}$ is the image of the element $(p, p)$ of $K_{\mathfrak{p}}^\times \times K_{{\bar{\fp}}}^\times$. Thus if $m$ is such that $p^m = 1 \bmod \mathfrak{f}$, $(p^m, p^m) \in K_{\mathfrak{p}}^\times \times K_{{\bar{\fp}}}^\times$ is in the kernel of $\theta$, and hence $(\tilde \sigma_{\mathfrak{p}} \tilde \sigma_{{\bar{\fp}}})^m = 1$ in $G$. \end{proof} \begin{corollary} \label{cor:coefficients} Let $x_1,\dots,x_n$ be any elements of $Z^1_{\mathfrak{p}}(K_\infty, V)$, and similarly let $y_1,\dots,y_n \in Z^1_{{\bar{\fp}}}(K_{\infty}, V)$. 
Then the element \[ \left(\mathcal{L}^G_{V, \mathfrak{p}}(x_1) \wedge \dots \wedge \mathcal{L}^G_{V, \mathfrak{p}}(x_n) \right) \otimes \left(\mathcal{L}^G_{V, {\bar{\fp}}}(y_1) \wedge \dots \wedge \mathcal{L}^G_{V, {\bar{\fp}}}(y_n) \right) \] of $ \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_p(V)^{(n, n)}$ in fact lies in \( \mathcal{H}_{L}(G) \otimes_{\QQ_p} \mathbb{D}_p(V)^{(n, n)},\) where $L$ is a finite unramified extension of $\QQ_p$. \end{corollary} \begin{proof} Clear, since the Frobenius automorphism of $\widehat{F}_\infty$ acts on this element as multiplication by $[\tilde{\sigma}_{\mathfrak{p}} \tilde{\sigma}_{{\bar{\fp}}}]^n$, which we have seen has finite order. \end{proof} \begin{remark} As is clear from the proof of the lemma and its corollary, the degree of $L / \QQ_p$ is bounded by the exponent of the ray class group of $K$ modulo $\mathfrak{f}$, and in particular is independent of $V$. \end{remark} We deduce Theorem \ref{thm:coefficients} by combining Corollary \ref{cor:coefficients} with Proposition \ref{prop:explicitdescription}. \subsection{Orders of distributions} Let us choose subspaces $W_\mathfrak{p} \subseteq \bigwedge^m \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$ and $W_{{\bar{\fp}}} \subseteq \bigwedge^n \mathbb{D}_{\mathrm{cris}}(K_{\bar{\fp}}, V)$. Then the space \[ Q = \left( \bigwedge^m \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V) / W_\mathfrak{p} \right) \otimes_{\QQ_p} \left( \bigwedge^n \mathbb{D}_{\mathrm{cris}}(K_{\bar{\fp}}, V) / W_{\bar{\fp}} \right)\] is a quotient of $\mathbb{D}_p(V)^{(m, n)}$ and hence of $\mathbb{D}_p(V)$. So, for any $c_1, \dots, c_d \in H^1_{\Iw, S}(K_\infty, V)$, we may consider the projection of \[ \mathcal{L}^G_V(c_1, \dots, c_d) = \mathcal{L}^G_V(c_1) \wedge \dots \wedge \mathcal{L}^G_V(c_d)\] to $Q$. \begin{theorem} \label{thm:orderofproduct} The distribution $\pr_Q\left(\mathcal{L}^G_V(c_1) \wedge \dots \wedge \mathcal{L}^G_V(c_d)\right)$ is a distribution on $G$ of order $(m h_\mathfrak{p}, n h_{{\bar{\fp}}})$ with respect to the subgroups $(\Gamma_{\mathfrak{p}}, \Gamma_{{\bar{\fp}}})$, where $h_\mathfrak{p}$ (resp. $h_{{\bar{\fp}}}$) is the largest valuation of any eigenvalue of $\varphi$ on $\wedge^m \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V) / W_\mathfrak{p}$ (resp. on $\wedge^n \mathbb{D}_{\mathrm{cris}}(K_{\bar{\fp}}, V) / W_{{\bar{\fp}}}$). \end{theorem} \begin{proof} Let us write $c_{j, \mathfrak{p}}$ for the localisation of $c_j$ at $\mathfrak{p}$, and similarly for ${\bar{\fp}}$. By Proposition \ref{prop:orderofdistributions}, for each subset $\{j_1, \dots, j_m\} \subseteq \{1, \dots, d\}$ of order $m$, the projection of the element \[ \mathcal{L}_{V, \mathfrak{p}}^G(c_{j_1,\mathfrak{p}}) \wedge \dots \wedge \mathcal{L}_{V, \mathfrak{p}}^G(c_{j_m, \mathfrak{p}})\] to $\bigwedge^m \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V) / W_{\mathfrak{p}}$ is a distribution of order $(h_{\mathfrak{p}}, 0)$ with respect to the subgroups $(\Gamma_\mathfrak{p}, U)$, where $U = \Gal(K_\infty / K(\mu_{p^\infty}))$. By the change-of-variable result of Proposition \ref{prop:changevar} in the appendix, it is also a distribution of order $(h_{\mathfrak{p}},0)$ with respect to the subgroups $(\Gamma_{\mathfrak{p}}, \Gamma_{{\bar{\fp}}})$. 
We also have a corresponding result for the projection to $\bigwedge^n \mathbb{D}_{\mathrm{cris}}(K_{\bar{\fp}}, V) / W_{{\bar{\fp}}}$ of the distribution obtained from any $n$-element subset of $\{1, \dots, d\}$: this gives a distribution of order $(0, h_{{\bar{\fp}}})$ with respect to $(\Gamma_\mathfrak{p}, \Gamma_{\bar{\fp}})$. Since the product of distributions of order $(a,0)$ and $(0, b)$ is a distribution of order $(a, b)$ by Proposition \ref{prop:orderofproduct}, the product of the distributions arising from an $m$-element subset and an $n$-element subset gives a distribution with values in $Q$ of order $(h_\mathfrak{p}, h_{{\bar{\fp}}})$. Since $\pr_Q\left(\mathcal{L}^G_V(c_1) \wedge \dots \wedge \mathcal{L}^G_V(c_d)\right)$ is a finite linear combination of products of this form, the theorem follows. \end{proof} \subsection{Example 1: Gr\"ossencharacters and Katz's L-function} \label{sect:KatzLfunction} \subsubsection{Kummer maps} We recall the well-known local theory of exponential maps for the representation $\ZZ_p(1)$. For any finite extension $L / \QQ_p$, there is a Kummer map $\kappa_L: \mathcal{O}_L^\times \to H^1(L, \ZZ_p(1))$, whose kernel is the Teichm\"uller lifting of $k_L^\times$. In particular, the restriction of $\kappa_L$ to $U^1(L)$, the kernel of reduction modulo the maximal ideal, is injective. Moreover, after inverting $p$, we have a commutative diagram relating the Kummer map to the exponential map of Bloch--Kato (see \cite{blochkato90}): \[ \begin{diagram} \QQ_p \otimes_{\ZZ_p} U^1(L) & \rTo^{\kappa_L} & H^1(L, \QQ_p(1)) \\ \dTo & & \dEq\\ \mathbb{D}_{\dR, L}(\QQ_p(1)) &\rTo^{\exp_{L, \QQ_p(1)}} & H^1(L, \QQ_p(1)) \end{diagram} \] where the vertical map sends $u$ to $t^{-1} \log(u) \otimes e_1$, where $e_1$ is the basis vector of $\QQ_p(1)$ corresponding to our compatible system of roots of unity. The maps $\kappa_L$ are compatible with the norm and corestriction maps for finite extensions $L' / L$, so for an infinite algebraic extension $K_\infty / \QQ_p$ we can take the inverse limit over the finite extensions of $\QQ_p$ contained in $K_\infty$ to define \[ \kappa_{K_\infty} : U^1(K_\infty) \rTo H^1_{\Iw}(K_\infty, \ZZ_p(1)),\] where $U^1(K_\infty) := \varprojlim_{K' \subset K_\infty} U^1(K')$. \subsubsection{Coleman series} We recall the following basic result, due to Coleman. Let $\mathcal{F}$ be any height 1 Lubin--Tate group over $\QQ_p$, and $F$ an unramified extension of $\QQ_p$. Fix a generator $v = (v_n)$ of the Tate module of $\mathcal{F}$ (that is, a norm-compatible sequence of $p^n$-torsion points of $\mathcal{F}$). \begin{theorem}[{\cite{coleman79}}] \label{thm:colemanstheorem} Let $F$ be a finite unramified extension of $\QQ_p$. Then for each $\beta = (\beta_n) \in U^1(F(\mathcal{F}_{p^\infty}))$, there is a unique power series \[ g_{F,\mathcal{F}}(\beta) \in \mathcal{O}_F[[X]]^{\times,N_\mathcal{F} = 1}\] where $N_\mathcal{F}$ is Coleman's norm operator, such that for all $n \ge 1$ we have \[ \beta_n = [g_{F,\mathcal{F}}(\beta)]^{\sigma^{-n}}(v_n).\] \end{theorem} Here $\sigma$ is the arithmetic Frobenius automorphism of $F / \QQ_p$, which we extend to an automorphism of $\mathcal{O}_F[[X]]$ acting trivially on the variable $X$. If $\mathcal{F}$ is the formal multiplicative group $\hat{\mathbb{G}}_m$, then we shall drop the suffix $\mathcal{F}$; and we take $v_n = \zeta_n - 1$, where $(\zeta_n)$ is our chosen compatible sequence of $p$-power roots of unity.
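To illustrate Theorem \ref{thm:colemanstheorem} in this cyclotomic case (the example is included purely for orientation and plays no role in what follows), take $F = \QQ_p$ and $\mathcal{F} = \hat{\mathbb{G}}_m$, and let $a$ be an integer prime to $p$ with $a \equiv 1 \pmod p$, so that the classical norm-compatible system of cyclotomic units $\beta_n = (\zeta_n^a - 1)/(\zeta_n - 1)$ lies in $U^1(\QQ_p(\mu_{p^\infty}))$. Then the power series \[ g(X) = \frac{(1+X)^a - 1}{X} \in \ZZ_p[[X]]^\times \] satisfies $g(v_n) = \beta_n$ for all $n \ge 1$ and is fixed by the norm operator, so the uniqueness statement of the theorem gives $g_{\QQ_p}(\beta) = g$.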
In this case, if we identify $X$ with the variable $\pi$ in Fontaine's rings, the relation between the map $g_F$ and the Perrin-Riou regulator map is given by the following diagram: \[ \begin{diagram} U^1(F(\mu_{p^\infty})) & \rTo^{\kappa_{F(\mu_{p^\infty})}} & H^1_{\Iw}(F(\mu_{p^\infty}), \ZZ_p(1))\\ \dTo^{g_F} & & \\ \mathcal{O}_F[[ \pi ]]^{\times,N = 1} & & \dTo^{\mathcal{L}_{F, \QQ_p(1)}^\Gamma}\\ \dTo^{(1 - \tfrac{\varphi}{p})\log} & & \\ \mathcal{O}_F[[ \pi ]]^{\psi = 0} & \rTo & \mathcal{H}(\Gamma) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(F, \QQ_p(1)) \end{diagram} \] If we identify $\mathbb{D}_{\mathrm{cris}}(F, \QQ_p(1))$ with $F$ via the basis vector $t^{-1} \otimes e_1$, then the bottom map sends $f \in \mathcal{O}_F[[ \pi ]]^{\psi = 0}$ to $\ell_0 \cdot \mathfrak{M}^{-1}(f)$, where $\ell_0 = \frac{\log \gamma}{\log \chi(\gamma)}$ for any non-identity element $\gamma \in \Gamma_1$ and $\mathfrak{M}$ is the Mellin transform as defined in Section \ref{sect:Fontainerings}. (See e.g.~the proof of Proposition 1.5 of \cite{leiloefflerzerbes11}.) Thus the image of the bottom map is precisely $\ell_0 \cdot \Lambda_{\mathcal{O}_F}(\Gamma) \subseteq \mathcal{H}_F(\Gamma)$; and if we define \[ h_F(\beta) = \ell_0^{-1}\cdot \mathcal{L}_{F, \QQ_p(1)}^\Gamma(\kappa_{F_\infty}(\beta)) \in \Lambda_{\mathcal{O}_F}(\Gamma),\] then we have \[ \mathfrak{M}(h_F(\beta)) = (1 - \tfrac{\varphi}{p})\log g_F(\beta).\] \subsubsection{Two-variable Coleman series} Now let $K_\infty / \QQ_p$ be an abelian $p$-adic Lie extension containing $\QQ_p(\mu_{p^\infty})$ such that $G=\Gal(K_\infty\slash\QQ_p)$ is a $p$-adic Lie group of dimension $2$. Let $\widehat{F}_\infty$ be the completion of the maximal unramified subextension of $K_\infty$. We define \[ h_{\infty} : U^1(K_\infty) \to \Lambda_{\widehat{\cO}_{F_\infty}}(G)\] to be the unique map such that the composite \[ U^1(K_\infty) \rTo^{\kappa_{K_\infty}} H^1_{\Iw}(K_\infty/\QQ_p, \ZZ_p(1)) \rTo^{\mathcal{L}_{\QQ_p(1)}^G} \mathcal{H}_{\widehat{F}_\infty}(G)\] is equal to $\ell_0 \cdot h_\infty$. \begin{proposition} \label{prop:colemanmaps} The element $h_\infty(\beta)$ is uniquely determined by the relation \[ h_\infty(\beta) = \sum_{\sigma \in U_F} h_{F}(\beta_F)^\sigma [\sigma] \pmod{I_F}\] for all unramified subextensions $F \subset K_\infty$, where $U_F = \Gal(F / \QQ_p)$, $I_F$ is the kernel of the natural map $\Lambda_{\widehat{\cO}_{F_\infty}}(G) \to \Lambda_{\widehat{\cO}_{F_\infty}}(U_F \times \Gamma)$, and $\beta_F$ denotes the image of $\beta$ in $U^1(F(\mu_{p^\infty}))$. \end{proposition} \begin{proof} This follows from the compatibility of the maps $\mathcal{L}^\Gamma$ and $\mathcal{L}^G$ (Theorem \ref{thm:localregulator}(i)). \end{proof} We would like to compare this result to Theorem 5 of \cite{yager82}. Our method differs from that of Yager, as we build measures on $G$ out of measures on the Galois groups of extensions $F(\mu_{p^\infty}) / F$ for unramified extensions $F \subset K_\infty$, while Yager considers instead the extensions $F(\mathcal{F}_{p^\infty})/F$ where $\mathcal{F}$ is the Lubin--Tate group corresponding to an elliptic curve with CM by $\mathcal{O}_K$. Let $\mathcal{F}$ be any Lubin--Tate formal group over $\QQ_p$ which becomes isomorphic to $\hat{\mathbb{G}}_m$ over $\widehat{F}_\infty$. If $F$ is any finite unramified extension of $\QQ_p$ contained in $K_\infty$, then $F(\mathcal{F}_{p^\infty}) \subseteq K_\infty$.
For any $\beta \in U^1(K_\infty)$, let $\beta_{F, \mathcal{F}}$ be its image in $U^1(F(\mathcal{F}_{p^\infty}))$. Then Coleman's theorem (Theorem \ref{thm:colemanstheorem}) gives us an element \[ g_{F, \mathcal{F}}(\beta_{F, \mathcal{F}}) \in \mathcal{O}_F[[X]]^{\times,N_\mathcal{F} = 1}.\] We write \[ h_{F, \mathcal{F}}(\beta_{F, \mathcal{F}}) = \mathfrak{M}^{-1} \left[ \left(1 - \tfrac{\varphi}{p}\right) \log\left( g_{F, \mathcal{F}}(\beta_{F, \mathcal{F}}) \circ \theta\right)\right] \] where $\theta$ is the unique power series in $\widehat{\cO}_{F_\infty}[[X]]$ giving an isomorphism $\mathcal{F} \rTo^\cong \hat{\mathbb{G}}_m$ such that $v_n$ maps to $\zeta_n - 1$. \begin{theorem}[de Shalit] We have \[ h_\infty(\beta) = \sum_{\sigma \in U_F} h_{F, \mathcal{F}}(\beta_{F,\mathcal{F}})^\sigma [\sigma] \pmod{I_{F, \mathcal{F}}},\] where $I_{F, \mathcal{F}}$ is the kernel of the natural map \[ \Lambda_{\widehat{\cO}_{F_\infty}}(G) \rTo \Lambda_{\widehat{\cO}_{F_\infty}}( \Gal(F(\mathcal{F}_{p^\infty}) / F)).\] \end{theorem} \begin{proof} See \cite[\S I.3.8]{deshalit87}. (Note that the theorem is stated there for $K_\infty = \mathbb{Q}_p^{\mathrm{ab}}$, the maximal abelian extension of $\QQ_p$; but the theorem, and the proof given, are true with $K_\infty$ replaced by any smaller extension over which the formal groups concerned become isomorphic.) \end{proof} It follows that the map $h_{\infty}$ defined above coincides with the map constructed (under more restrictive hypotheses) by Yager in \cite{yager82}. In particular, if $c$ denotes the element of the global $H^1_{\Iw}$ obtained by applying the Kummer map to the elliptic units, then $\mathcal{L}^G_{\mathfrak{p}, V} (c)$ is equal to $\ell_0 \mu$ where $\mu$ is Katz's $p$-adic $L$-function. \subsection{Example 2: Two-variable L-functions of modular forms} We now consider the restriction to $\mathcal{G}_{K}$ of the representation $V$ of $\mathcal{G}_\mathbb{Q}$ attached to a modular form $f$ of weight 2, level $N$ prime to $p \Delta_{K/\mathbb{Q}}$, and character $\delta$. Let $E \subseteq \Qb_p$ be the completion of $\mathbb{Q}(f) \subseteq \overline{\QQ}$ at our chosen prime of $\overline{\QQ}$. We take $V = V_f^*$, so $V$ has Hodge--Tate weights $\{0, 1\}$ at each of $\mathfrak{p}$ and ${\bar{\fp}}$. Let $\{\alpha, \beta\}$ be the roots of $X^2 - a_p X + p \delta(p)$, so the eigenvalues of $\varphi$ on either $\mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$ or $\mathbb{D}_{\mathrm{cris}}(K_{\bar{\fp}}, V)$ are $\alpha^{-1}$ and $\beta^{-1}$. \begin{definition} A $p$-refinement of $f$ is a pair $u = (u_\mathfrak{p}, u_{{\bar{\fp}}}) \in \{\alpha, \beta\}^{\times 2}$. We say that $u$ is \emph{non-critical} if $v_p(u_\mathfrak{p}), v_p(u_{{\bar{\fp}}}) < 1$; otherwise $u$ is \emph{critical}. \end{definition} Let $K_\infty$ be an extension of $K$ with Galois group $G$, satisfying the hypotheses specified in Section \ref{sect:setupimquad}. For a finite-order character $\omega$ of $G$, let $L_{\{\mathfrak{p}, {\bar{\fp}}\}}(f/K, \omega^{-1}, s)$ denote the twisted $L$-function of $f$ with the Euler factors at $\mathfrak{p}$ and ${\bar{\fp}}$ removed. Let $\Omega^+_f$ and $\Omega^-_f$ be the real and complex periods of $f$ (which are defined up to multiplication by an element of $\mathbb{Q}(f)^\times$). \begin{conjecture}[Existence of $L$-functions] \label{conj:lfunc} Let $(u_\mathfrak{p}, u_{\bar{\fp}})$ be a $p$-refinement which is non-critical.
Then there exists a distribution $\mu_f(u_\mathfrak{p}, u_{\bar{\fp}})$ on $G$, of order $(v_p(u_\mathfrak{p}), v_p(u_{{\bar{\fp}}}))$ with respect to the subgroups $(\Gamma_\mathfrak{p}, \Gamma_{{\bar{\fp}}})$, such that for all finite-order characters $\omega$ we have \begin{align} & \int_{G} \omega\, \mathrm{d}\mu_f(u_\mathfrak{p}, u_{\bar{\fp}}) = \notag \\ & \left( \prod_{q \in \{\mathfrak{p}, {\bar{\fp}}\}} u_q^{-c_q(\omega)} e_q(\omega^{-1}) \frac{P_q(\omega^{-1}, u_q^{-1})}{P_q(\omega, p^{-1} u_q)} \right) \frac{L_{\{\mathfrak{p}, {\bar{\fp}}\}} (f / K, \omega^{-1}, 1)}{\Omega^+_f \Omega^-_f}. \label{eq:interpolating} \end{align} \end{conjecture} \begin{remark} The definition of the order of a distribution on $\ZZ_p^2$ is given in Section \ref{sect:order}. The hypothesis that the $p$-refinement be non-critical implies that the distribution $\mu_f(u_\mathfrak{p}, u_{\bar{\fp}})$ is unique if it exists, since a distribution of order $(r,s)$ with $r,s < 1$ is uniquely determined by its values at finite-order characters. \end{remark} Two approaches are known to the construction of such $L$-functions: either via $p$-adic interpolation of Rankin--Selberg convolutions, as in \cite{hida88,perrinriou88,kim-preprint}, or via the combinatorics of modular symbols on symmetric spaces attached to $\GL(2, \mathbb{A}_K)$, as in \cite{haran87}. The details have not been written down in the full generality described above (although M.~Emerton and B.~Zhang have announced results of this kind in a paper which is currently in preparation). The literature to date contains constructions of $\mu_f(u_\mathfrak{p}, u_{\bar{\fp}})$ in the following cases: \begin{itemize} \item if $f$ is ordinary, $\delta = 1$, and $u$ is the ``ordinary refinement'' $(\alpha, \alpha)$ where $\alpha$ is the unit root \cite{haran87} \item if $f$ is ordinary, $u$ is the ordinary refinement, and $G$ decomposes as a direct product of eigenspaces for complex conjugation \cite{perrinriou88} \item if $f$ is non-ordinary, $u_{\mathfrak{p}} = u_{{\bar{\fp}}}$, $[K(\mathfrak{f} p) : K]$ is prime to $p$, $\delta^2 = 1$, and we consider only the restriction of the distribution to the set of characters whose restriction to $\Gal(K(\mathfrak{f} p) / K)$ does not factor through a Dirichlet character via the norm map \cite{kim-preprint}. \end{itemize} \begin{remark}\mbox{~} (i) We have chosen to write the interpolating formula \eqref{eq:interpolating} in a way that emphasises the similarity with that of \cite{cfksv}. The cited references use a range of different formulations, and the distributions they construct differ from ours by various correction factors; but in each case the \emph{existence} of a measure satisfying their conditions is equivalent to the conjecture above. (ii) If $f$ is ordinary and $u$ is the ordinary refinement, the condition that $\mu_f(u)$ has order $(0, 0)$ is simply that it be a measure. In the non-ordinary case considered by Kim, the condition that $\mu_f(u)$ has order $(v_p(u_\mathfrak{p}), v_p(u_{{\bar{\fp}}}))$ is more delicate, and depends crucially on the decomposition of $\Gal(K_\infty / K(\mathfrak{f}))$ as the direct product of the distinguished subgroups $\Gamma_{\mathfrak{p}}$ and $\Gamma_{{\bar{\fp}}}$ corresponding to the two primes above $p$. \end{remark} We now give a conjectural interpretation of these $p$-adic $L$-functions in terms of our regulator map $\mathcal{L}^G_V$. 
Let us write \[ Z^1_{\Iw, p}(V) = Z^1_{\Iw, \mathfrak{p}}(V) \oplus Z^1_{\Iw, {\bar{\fp}}}(V).\] We write $\exp^*_{V}$ for the map $\exp^*_{K_\mathfrak{p}, V} \oplus \exp^*_{K_{\bar{\fp}}, V} : Z^1_{\Iw,p}(V) \to \mathbb{D}_p(V)$, and similarly $\mathcal{L}^G_V$ for the map $\mathcal{L}^G_{\mathfrak{p}, V} \oplus \mathcal{L}^G_{{\bar{\fp}}, V} : Z^1_{\Iw, p}(V) \to \mathcal{H}_{\widehat{F}_\infty}(G) \otimes \mathbb{D}_p(V)$. Both of these induce maps on the wedge square, which we denote by the same symbols. The following conjecture can be seen as a special case of the very general ``$\zeta$-isomorphism conjecture'' of Fukaya and Kato (Conjecture 2.3.2 of \cite{fukayakato06}), applied to the module $\Lambda_{\ZZ_p}(G) \otimes T$ for $T$ a $\ZZ_p$-lattice in $V$. \begin{conjecture} \label{conj:twovarmf} Choose a basis $v$ of $\Fil^0 \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V) \otimes_{\QQ_p} \Fil^0 \mathbb{D}_{\mathrm{cris}}(K_{\bar{\fp}}, V) \subseteq \mathbb{D}_p(V)^{(1, 1)}$. Then there is a distinguished element $\mathfrak{c} \in \bigwedge^2 H^1_{\Iw, S}(K_\infty, V_f)$ such that for all finite-order characters $\omega$, we have \[ \exp^*_{V(\omega^{-1})^*(1)}(\mathfrak{c}_\omega) = \frac{L(f / K, \omega^{-1}, 1)}{ \Omega^+_f \Omega^-_f }\, v.\] Moreover, $\mathfrak{c}$ is a $\Lambda_{\ZZ_p}(G)$-basis of $\mathbb{I}_{\arith,p}(V)$. \end{conjecture} We choose a basis $v_{\mathfrak{p}, \alpha}, v_{\mathfrak{p}, \beta}$ of $\varphi$-eigenvectors in $\mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$, and similarly for $\mathbb{D}_{\mathrm{cris}}(K_{\bar{\fp}}, V)$; and for a $p$-refinement $u = (u_\mathfrak{p}, u_{{\bar{\fp}}})$, we let $v_u = v_{\mathfrak{p}, u_{\mathfrak{p}}} \otimes v_{{\bar{\fp}}, u_{\bar{\fp}}} \in \mathbb{D}_p(V)^{(1, 1)}$. We may normalise such that $v_{\mathfrak{p}} = v_{\mathfrak{p}, \alpha} + v_{\mathfrak{p}, \beta}$ is a basis of $\Fil^0 \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$ (and respectively for ${\bar{\fp}}$); then $v = v_{\mathfrak{p}} \otimes v_{{\bar{\fp}}}$ is a basis of $\Fil^0 \mathbb{D}_p(V)$. \begin{proposition} Let $\mathfrak{c} \in \bigwedge^2 H^1_{\Iw, S}(K_\infty, V)$. Then for each $p$-refinement $u$ (critical or otherwise), the projection of $\mathcal{L}^G_V(\mathfrak{c})$ to the subspace $E \cdot v_u \subseteq \mathbb{D}_p(V)^{(1, 1)}$ is a distribution of order $(v_p(u_\mathfrak{p}), v_p(u_{{\bar{\fp}}}))$. If $\mathfrak{c}$ satisfies the condition of Conjecture \ref{conj:twovarmf}, then the projection of $\mathcal{L}^G_V(\mathfrak{c})$ satisfies the interpolating property \eqref{eq:interpolating}. \end{proposition} \begin{proof} The values of $\mathcal{L}^G_{V}(\mathfrak{c})$ at $\omega$ can be expressed in terms of those of the dual exponential map using Proposition \ref{thm:explicitformula}, which clearly gives the formula of \eqref{eq:interpolating}. The statement regarding the orders of the projections is an instance of Theorem \ref{thm:orderofproduct}. Concretely, suppose we choose elements $\mathfrak{c}_1, \mathfrak{c}_2$ such that $\mathfrak{c}_1 \wedge \mathfrak{c}_2 = \mathfrak{c}$. 
Then we have \begin{multline*} \mathcal{L}^G_{V}(\mathfrak{c}) = (v_{\mathfrak{p}, \alpha}\mathcal{L}_{V, \mathfrak{p}}(\mathfrak{c}_1)_{\alpha} + v_{\mathfrak{p}, \beta}\mathcal{L}_{V, \mathfrak{p}}(\mathfrak{c}_1)_{\beta} + v_{{\bar{\fp}}, \alpha}\mathcal{L}_{V, {\bar{\fp}}}(\mathfrak{c}_1)_{\alpha} + v_{{\bar{\fp}}, \beta}\mathcal{L}_{V, {\bar{\fp}}}(\mathfrak{c}_1)_{\beta}) \\ \wedge (v_{\mathfrak{p}, \alpha}\mathcal{L}_{V, \mathfrak{p}}(\mathfrak{c}_2)_{\alpha} + v_{\mathfrak{p}, \beta}\mathcal{L}_{V, \mathfrak{p}}(\mathfrak{c}_2)_{\beta} + v_{{\bar{\fp}}, \alpha}\mathcal{L}_{V, {\bar{\fp}}}(\mathfrak{c}_2)_{\alpha} + v_{{\bar{\fp}}, \beta}\mathcal{L}_{V, {\bar{\fp}}}(\mathfrak{c}_2)_{\beta}), \end{multline*} so the projection of $\mathcal{L}^G_{V}(\mathfrak{c})$ to the line spanned by $v_u$ is \[ v_u \cdot \begin{vmatrix} \mathcal{L}^G_{V, \mathfrak{p}}(\mathfrak{c}_1)_{u_\mathfrak{p}} & \mathcal{L}^G_{V, \mathfrak{p}}(\mathfrak{c}_2)_{u_\mathfrak{p}}\\ \mathcal{L}^G_{V, {\bar{\fp}}}(\mathfrak{c}_1)_{u_{\bar{\fp}}} & \mathcal{L}^G_{V, {\bar{\fp}}}(\mathfrak{c}_2)_{u_{\bar{\fp}}} \end{vmatrix}. \] Since $\mathcal{L}^G_{V, \mathfrak{p}}(\mathfrak{c}_?)_{u_\mathfrak{p}}$ (for $? \in \{1, 2\}$) is a distribution of order $(v_p(u_\mathfrak{p}), 0)$, and $\mathcal{L}^G_{V, {\bar{\fp}}}(\mathfrak{c}_?)_{u_{\bar{\fp}}}$ is a distribution of order $(0, v_p(u_{\bar{\fp}}))$, the determinant gives a distribution of order $(v_p(u_\mathfrak{p}), v_p(u_{{\bar{\fp}}}))$, as claimed. \end{proof} In particular, when the refinement $u$ is non-critical, we conclude that Conjecture \ref{conj:twovarmf} implies Conjecture \ref{conj:lfunc}, and that the projection of $\mathcal{L}^G_{V}(\mathfrak{c})$ to $v_u$ must be equal to the uniquely determined distribution $\mu_f(u)$. \begin{remark} If Conjecture \ref{conj:twovarmf} holds, then one can also project the element $\mathcal{L}^G_V(\mathfrak{c})$ into $\mathbb{D}_p(V)^{(2, 0)}$ (or into $\mathbb{D}_p(V)^{(0, 2)}$). The resulting distributions are of a rather simpler type: if $\mathfrak{c} = \mathfrak{c}_1 \wedge \mathfrak{c}_2$ as before, then \[ \pr_{2, 0} \mathcal{L}^G_{V}(\mathfrak{c}) = \mathcal{L}^G_{\mathfrak{p}, V}(\mathfrak{c}_1) \wedge \mathcal{L}^G_{\mathfrak{p}, V}(\mathfrak{c}_2). \] This is a distribution on $G$ with values in the 1-dimensional space $\mathbb{D}_p(V)^{(2, 0)} = \det_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$ of order $(1, 0)$, divisible by the image in $\mathcal{H}_{\widehat{F}_\infty}(G)$ of the distribution $\ell_0 \in \mathcal{H}_{\QQ_p}(\Gamma_\mathfrak{p})$, so dividing by this factor gives a bounded measure on $G$ with values in $\widehat{F}_\infty$. Note that acting by the arithmetic Frobenius of $\widehat{F}_\infty$ on this measure corresponds to multiplication by $[\sigma_\mathfrak{p}]^2$, so it \emph{never} descends to a finite extension of $\QQ_p$. It is natural to conjecture (and would follow from Conjecture 2.3.2 of \cite{fukayakato06}) that if $\tau$ is a character of $G$ whose Hodge--Tate weights at $\mathfrak{p}$ and ${\bar{\fp}}$ are $(r,s)$ with $r \ge 1$ and $s \le -1$, so $\Fil^0 \bigwedge^2 \mathbb{D}_{p}(V(\tau^{-1})) = \mathbb{D}_p(V)^{(2,0)}$, then the value of $\pr_{2, 0} \mathcal{L}^G_V(\mathfrak{c})$ at $\tau$ should (after dividing by an appropriate period) correspond to the value at $1$ of the $L$-function of the automorphic representation $\operatorname{BC}(\pi_f) \otimes \tau$ of $\operatorname{GL}(2,\mathbb{A}_K)$.
Up to a shift by the cyclotomic character, this corresponds to the set of characters denoted by $\Sigma^{(2)}(\mathfrak{f})$ in \cite{BDP13}, while the finite-order characters covered by the interpolating property in Conjecture \ref{conj:twovarmf} correspond to the set denoted there by $\Sigma^{(1)}(\mathfrak{f})$. If this conjecture holds, the image of $\pr_{2, 0} \mathcal{L}^G_{V}(\mathfrak{c}) / \ell_0$ under the natural projection to the Galois group of the anticyclotomic $\ZZ_p$-extension of $K$ should be related to the $L$-functions of \cite[Proposition 6.10]{BDP13} and \cite{brakocevic}, which interpolate the $L$-values of twists of $f$ by anticyclotomic characters in $\Sigma^{(2)}(\mathfrak{f})$. We intend to study this question further in a future paper. \end{remark} \appendix \section{Local and global Iwasawa cohomology} \label{appendix:iwacoho} In this section, we shall recall some results on the structure of Iwasawa cohomology groups of $p$-adic Galois representations over towers of local and global fields. These are generalizations of well-known results for cyclotomic towers due to Perrin-Riou (cf.~\cite[\S 2]{perrinriou92}); much more general results have since been obtained by Nekovar \cite{nekovar06}, and we briefly indicate how to derive the results we need from those of \emph{op.cit.}. \subsection{Conventions} We shall work with extensions of (local or global) fields $F_\infty/ F$ whose Galois group is of the form $G = \Delta \times \ZZ_p^e$, where $e \ge 1$ and $\Delta$ is a finite abelian group of order prime to $p$. The Iwasawa algebra $\Lambda_{\ZZ_p}(G)$ is a reduced ring, but it is not in general an integral domain; rather, it is isomorphic to the direct product of the subrings $e_\eta \Lambda_{\ZZ_p}(G)$, where $\eta$ ranges over the $\Qb_p / \QQ_p$-conjugacy classes of characters of $\Delta$. For each such $\eta$, $e_\eta \Lambda_{\ZZ_p}(G)$ is a local integral domain. In order to greatly simplify the presentation of our results, we shall adopt a minor abuse of notation, following the conventions of \cite{perrinriou95}. \begin{definition} We shall say that a $\Lambda_{\ZZ_p}(G)$-module $M$ has rank $r$ if $M_\eta := e_\eta M$ has rank $r$ over $e_\eta \Lambda_{\ZZ_p}(G)$ for all $\eta$. \end{definition} When using this notation it is important to bear in mind that when $\Delta$ is not trivial, most finitely generated $\Lambda_{\ZZ_p}(G)$-modules will not have a rank. \subsection{The local case} \label{sect:localranks} Let $F$ be a finite extension of $\mathbb{Q}_\ell$, for some prime $\ell$. Let $V$ be a $\QQ_p$-representation of $\mathcal{G}_{F}$ of dimension $d$, and choose a Galois-invariant $\ZZ_p$-lattice $T$. For $F_\infty / F$ an abelian extension satisfying the conditions above, we define \[ H^i_{\Iw}(F_\infty, T) = \varprojlim_{K} H^i(K, T)\] where the limit is over all finite extensions $K/F$ contained in $F_\infty$, with respect to the corestriction maps; and $H^i_{\Iw}(F_\infty, V) = \QQ_p \otimes_{\ZZ_p} H^i_{\Iw}(F_\infty, T)$. \begin{theorem} \label{thm:localrank} The groups $H^i_{\Iw}(F_\infty, T)$ are finitely-generated $\Lambda_{\ZZ_p}(G)$-modules, zero if $i \notin \{1, 2\}$. We have an isomorphism \[ H^2_{\Iw}(F_\infty, T) \cong H^0(F_\infty, T^\vee(1))^\vee,\] where $(-)^\vee$ denotes the Pontryagin dual; in particular $H^2_{\Iw}(F_\infty, T)$ is $\Lambda_{\ZZ_p}(G)$-torsion.
The group $H^1_{\Iw}(F_\infty, T)$ has well-defined rank given by \[ \operatorname{rk}_{\Lambda_{\ZZ_p}(G)} H^1_{\Iw}(F_\infty, T) = \begin{cases} 0& \text{if $\ell \ne p$,} \\ [F : \QQ_p] d & \text{if $\ell = p$}.\end{cases}\] \end{theorem} \begin{proof} We have assumed that $G$ has a subgroup isomorphic to $\ZZ_p^e$ with $e \ge 1$; thus the profinite degree of $F_\infty / F$ is divisible by $p^\infty$, so $H^0_{\Iw}(F_\infty, T) = 0$ by \cite[8.3.5 Proposition]{nekovar06}. For the finiteness statements for $i > 0$, we note that \[ H^i_{\Iw}(F_\infty, T) \cong H^i(F, \Lambda_{\ZZ_p}(G) \otimes_{\ZZ_p} T)\] by \cite[8.4.4.2 Proposition]{nekovar06}, where the action of $\mathcal{G}_F$ on $\Lambda_{\ZZ_p}(G)$ is via the inverse of the canonical character $\mathcal{G}_F \to G \to \Lambda_{\ZZ_p}(G)^\times$. This implies the finite generation of the groups $H^i_{\Iw}(F_\infty, T)$, and their vanishing for $i \ge 3$, by Proposition 4.2.2 of \emph{op.cit.}. The isomorphism $H^2_{\Iw}(F_\infty, T)^\vee \cong H^0(F_\infty, T^\vee(1))$ follows by applying local Tate duality to each finite extension $K / F$ contained in $F_\infty$. Finally, the formula for the rank of $H^1_{\Iw}(F_\infty, T)$ follows from Tate's local Euler characteristic formula for finite modules and Corollary 4.6.10 of \emph{op.cit.}. \end{proof} \subsection{The global case} \label{sect:globalranks} We now let $K$ be a number field. Let $V$ be a $\QQ_p$-representation of $\mathcal{G}_K$ of dimension $d$, and choose a $G_K$-invariant $\ZZ_p$-lattice $T$. Let $S$ be a finite set of places of $K$ containing all the primes above $p$, all infinite places and all the places whose inertia group acts non-trivially on $V$, and let $K^S$ be the maximal extension of $K$ unramified outside $S$. \begin{theorem}[Tate's global Euler characteristic formula] If $M$ is a $\ZZ_p$-module of finite length with a continuous action of $\Gal(K^S / K)$, then the modules $H^i(K^S / K, M)$ are finite groups, zero for $i \ge 3$. If $K$ is totally complex, then we have \[ \prod_{i =0}^2 \left( \# H^i(K^S / K, M)\right)^{(-1)^i} = (\# M)^{-\tfrac{1}{2}[K : \mathbb{Q}]}.\] \end{theorem} \begin{proof} See \cite[8.3.17, 8.6.14]{nsw}. \end{proof} We now consider a Galois extension $K_\infty / K$, contained in $K^S$, whose Galois group $G$ is of the form $\Delta \times \ZZ_p^e$, where $e\geq 1$ and $\Delta$ is abelian of order prime to $p$, as above. For $i \ge 0$, we define \[ H^i_{\Iw, S}(K_\infty, T) = \varprojlim_{L} H^i(K^S / L, T)\] where the limit is taken over number fields $L$ satisfying $K \subseteq L \subset K_\infty$, with respect to the corestriction maps. \begin{theorem} \label{thm:globalrank} The groups $H^i_{\Iw, S}(K_\infty, T)$ are finitely-generated $\Lambda_{\ZZ_p}(G)$-modules, zero if $i = 0$ or $i \ge 3$. If $K$ is totally complex, then for each character $\eta$ of $\Delta$ we have \[ \rank_{e_\eta \Lambda_{\ZZ_p}(G)} e_\eta H^1_{\Iw, S}(K_\infty, T) = \tfrac 1 2 [K : \mathbb{Q}] d + \rank_{e_\eta \Lambda_{\ZZ_p}(G)} e_\eta H^2_{\Iw, S}(K_\infty, T).\] \end{theorem} \begin{proof} This follows exactly as in Theorem \ref{thm:localrank}, using Tate's global Euler characteristic formula in place of the local one. (There are no issues with real embeddings, thanks to our running assumption that $p$ be odd.) \end{proof} \begin{proposition} \label{prop:leopoldt} The following statements are equivalent: \begin{enumerate}[(i)] \item $H^2_{\Iw, S}(K_\infty, T)$ is $\Lambda_{\ZZ_p}(G)$-torsion. 
\item For each character $\eta$ of $\Delta$, there is a character $\tau$ of $G$ such that $\tau|_\Delta = \eta$ and $H^2(K^S / K, V(\tau)) = 0$. \item $H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p)$ is a cotorsion $\Lambda_{\ZZ_p}(G)$-module. \item $H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p) = 0$. \end{enumerate} \end{proposition} \begin{proof} Since $\Delta$ has order prime to $p$, we may assume $\Delta = 1$, so $G \cong \ZZ_p^e$ and $\Lambda = \Lambda_{\ZZ_p}(G)$ is a local integral domain. We first show (i) $\Leftrightarrow$ (ii). By \cite[8.4.8.2 Corollary, (ii)]{nekovar06} we have an isomorphism \[ H^2_{\Iw, S}(K_\infty, T) \otimes_{\Lambda} \ZZ_p(\tau) \cong H^2(K^S / K, T(\tau^{-1})).\] If $H^2_{\Iw, S}(K_\infty, T)$ is torsion, then it is annihilated by some non-zero $f \in \Lambda$. Since $f \ne 0$, there exists a character $\tau$ such that $f(\tau) \ne 0$; but by the above formula $f(\tau)$ annihilates $H^2(K^S / K, T(\tau^{-1}))$, so $H^2(K^S / K, V(\tau^{-1})) = 0$. Conversely, if $H^2(K^S/K, V(\tau^{-1})) = 0$ for some $\tau$, then $H^2(K^S / K, T(\tau^{-1}))$ is $\ZZ_p$-torsion, so by a form of Nakayama's lemma -- see \cite[Theorem 2]{balisterhowson} -- we can conclude that $H^2_{\Iw, S}(K_\infty, T)$ is a torsion $\Lambda$-module. We now show (ii) $\Leftrightarrow$ (iii). We know that $H^2(K^S / K, T(\tau) \otimes \QQ_p/\ZZ_p)$ is finite if and only if $H^2(K^S / K, V(\tau)) = 0$. From the Hochschild--Serre spectral sequence and Poincar\'e duality for $G$-cohomology we have an isomorphism \[ H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p)^\vee \otimes_{\Lambda} \ZZ_p(\tau) \cong H^2(K^S / K, T(\tau) \otimes \QQ_p/\ZZ_p)^\vee\] and we conclude by the same argument as before. To finish the proof, it suffices to show that (iii) $\Rightarrow$ (iv). We claim that the module $H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p)$ is co-free over $\Lambda$, i.e.~its Pontryagin dual $X = H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p)^\vee$ is a free $\Lambda$-module; thus if it is cotorsion, it must be zero. For $e = 1$ this is a theorem of Greenberg, cf.~\cite[Proposition 1.3.2]{perrinriou95}, so we shall reduce to this case by induction on $e$. Let us choose topological generators $\gamma_1, \dots, \gamma_e$ of $\Gal(K_\infty / K) \cong \ZZ_p^e$, and set $u_i = [\gamma_i] - 1 \in \Lambda$. Then $\Lambda \cong \ZZ_p[[u_1, \dots, u_e]]$ and in particular $(p, u_1, \dots, u_e)$ is a regular sequence for $\Lambda$; so in order to show that $X$ is free, it suffices to show that $X[u_e] = 0$ and $X/u_e X$ is free as a module over $\Lambda / u_e \Lambda$. If we let $U$ be the subgroup of $G$ generated by $\gamma_e$, then \[ X[u_e] = H^1(U, H^2(K_\infty, T \otimes \QQ_p/\ZZ_p))^\vee,\] and by the Hochschild--Serre exact sequence, $H^1(U, H^2(K_\infty, T \otimes \QQ_p/\ZZ_p))$ injects into $H^3(K_\infty^{U}, T \otimes \QQ_p/\ZZ_p)$, which is 0 (since $p$ is odd); and we have \[ X / u_e X = H^2((K_\infty)^U, T \otimes \QQ_p/\ZZ_p)^\vee,\] which (by the induction hypothesis) is free over $\ZZ_p[[u_1, \dots, u_{e-1}]]$, so we are done. \end{proof} To define our module of $p$-adic $L$-functions we will need to assume the following conjecture, which corresponds to the ``conjecture de Leopoldt faible'' of \cite[\S 1.3]{perrinriou95}: \begin{conjecture}[Conjecture $\operatorname{Leop}(K_\infty, V)$] \label{conj:leopoldt} The equivalent conditions of Proposition \ref{prop:leopoldt} hold, for some (and hence every) $\ZZ_p$-lattice $T$ in $V$.
\end{conjecture} Note that if $K_\infty, L_\infty$ are two extensions of $K$ satisfying our conditions, with $K_\infty \subseteq L_\infty$, and $\Gal(L_\infty/K_\infty)$ is torsion-free (hence isomorphic to a product of copies of $\ZZ_p$), then conjecture $\Leop(K_\infty, V)$ implies conjecture $\Leop(L_\infty, V)$, since $\Gal(K_\infty / K)$ and $\Gal(L_\infty / K)$ have the same torsion subgroup and thus condition (ii) of Proposition \ref{prop:leopoldt} for $K_\infty$ implies the corresponding condition for $L_\infty$. It is conjectured that $\Leop(K(\mu_{p^\infty}), V)$ should hold for any $V$, and this is known in many cases; see \cite[Appendix B]{perrinriou95}. \begin{example} Let $V$ be the 2-dimensional $p$-adic representation of $\mathcal{G}_\mathbb{Q}$ associated to a modular form, $K / \mathbb{Q}$ an imaginary quadratic field, and $K_\infty$ the unique $\ZZ_p^2$-extension of $K$. Then $\operatorname{Leop}(K_\infty, V)$ holds. To see this, we use the fact that $\operatorname{Leop}(K_\infty, V)$ is implied by $\operatorname{Leop}(K^{\mathrm{cyc}}, V)$, where $K^{\mathrm{cyc}}$ is the cyclotomic $\ZZ_p$-extension of $K$. However, by Shapiro's lemma the conjecture $\operatorname{Leop}(K^{\mathrm{cyc}}, V)$ is equivalent to $\operatorname{Leop}(\mathbb{Q}^{\operatorname{cyc}}, V \oplus V(\varepsilon_K))$, where $\mathbb{Q}^{\operatorname{cyc}}$ is the cyclotomic $\ZZ_p$-extension of $\mathbb{Q}$ and $\varepsilon_K$ is the quadratic Dirichlet character associated to $K$. The conjectures $\operatorname{Leop}(\mathbb{Q}^{\operatorname{cyc}}, V)$ and $\operatorname{Leop}(\mathbb{Q}^{\operatorname{cyc}}, V(\varepsilon_K))$ follow from \cite[Theorem 12.4]{kato04} applied to $f$ and its twist by $\varepsilon_K$. \end{example} \begin{corollary} If $K$ is totally complex and Conjecture $\operatorname{Leop}(K_\infty, V)$ holds, then the module $H^1_{\Iw, S}(K_\infty, T)$ has well-defined $\Lambda_{\ZZ_p}(G)$-rank, equal to $\tfrac{1}{2}[K : \mathbb{Q}]d$, where $d=\rank_{\ZZ_p}T$. \end{corollary} \section{Explicit formulae for Perrin-Riou's p-adic regulator} \label{appendix:cyclo} In this section, we give the proof of the formulae for the cyclotomic regulator used in the proof of Proposition \ref{thm:explicitformula}. As we work only over $\QQ_p$ here, we shall write $\mathbb{D}(-)$ and $\mathbb{N}(-)$ for $\mathbb{D}_{\QQ_p}(-)$ and $\mathbb{N}_{\QQ_p}(-)$ respectively. Let $V$ be a good crystalline representation of $\mathcal{G}_{\QQ_p}$, and $x \in \mathcal{H}(\Gamma) \otimes_{\Lambda_{\ZZ_p}(\Gamma)} H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V)$. We write $x_j$ for the image of $x$ in $H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V(-j))$, and $x_{j, n}$ for the image of $x_j$ in $H^1(\mathbb{Q}_{p, n}, V(-j))$. If we identify $x$ with its image in $\mathbb{D}(V)^{\psi = 1}$, then $x_j$ corresponds to the element $x \otimes e_{-j} \in \mathbb{D}(V)^{\psi = 1} \otimes e_{-j} = \mathbb{D}(V(-j))^{\psi = 1}$. Since $V$ has non-negative Hodge--Tate weights, we may interpret $x$ as an element of the module $\left( \mathbb{B}^+_{\rig, \QQ_p}\left[\tfrac1t\right] \otimes \mathbb{D}_{\mathrm{cris}}(V)\right)^{\psi = 1}$.
We shall assume: \[ \tag{$\dag$} x \in \left(\mathbb{B}^+_{\rig, \QQ_p} \otimes_{\mathbb{B}^+_{\QQ_p}} \mathbb{N}(V)\right)^{\psi = 1} \subseteq \left( \mathbb{B}^+_{\rig, \QQ_p}\left[\tfrac1t\right] \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\right)^{\psi = 1}.\] This condition is satisfied in the following two situations: \begin{itemize} \item if $V$ has no quotient isomorphic to $\QQ_p$, by \cite[Theorem A.3]{berger03}; \item or if $x$ is in the image of the Iwasawa cohomology over $F_\infty(\mu_{p^\infty})$, by Theorem \ref{thm:unramunivnorms} above. \end{itemize} We will base our proofs on the work of Berger \cite{berger03}, so we recall the notation of that reference. Let $\partial$ denote the differential operator $(1 + \pi) \tfrac{\mathrm{d}}{\mathrm{d}\pi}$ on $\BB^+_{\rig,\Qp}$. We also use Berger's notation $\partial_V \circ \varphi^{-n}$ for the map \[ \BB^+_{\rig,\Qp}\left[\tfrac1t\right] \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V) \rTo \mathbb{Q}_{p, n}\otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\] which sends $\pi^k\otimes v$ to the constant coefficient of $(\zeta_n \exp(t/p^n) - 1)^k \otimes \varphi^{-n}(v)\in \mathbb{Q}_{p,n}((t))\otimes_{\QQ_p}\mathbb{D}_{\mathrm{cris}}(V)$. For $m \in \mathbb{Z}$, define $\Gamma^*(m)$ to be the leading term of the Taylor series expansion of $\Gamma(x)$ at $x = m$ (cf.~\cite[\S 3.3.6]{fukayakato06}); thus \[ \Gamma^*(1 + j) = \begin{cases} j! & \text{if $j \ge 0$,} \\ \frac{(-1)^{-j-1}}{(-j-1)!} & \text{if $j \le -1$.}\end{cases}\] \begin{proposition} For $x$ satisfying $(\dag)$, let us define \[ R_{j, n}(x) = \frac{1}{\Gamma^*(1 + j)} \times \begin{cases} p^{-n} \partial_{V(-j)}(\varphi^{-n}(\partial^{j} x \otimes t^j e_{-j})) & \text{if $n \ge 1$,}\\ (1 - p^{-1} \varphi^{-1}) \partial_{V(-j)}(\partial^{j} x \otimes t^j e_{-j}) & \text{if $n = 0$.} \end{cases} \] Then we have \[ R_{j, n}(x) = \begin{cases} \exp^*_{\mathbb{Q}_{p, n}, V(-j)^*(1)}(x_{j, n}) & \text{for $j \ge 0$,}\\ \log_{\mathbb{Q}_{p, n}, V(-j)}(x_{j, n}) & \text{for $j \le -1$.} \end{cases} \] \end{proposition} \begin{proof} This result is essentially a minor variation on \cite[Theorem II.10]{berger03}. The case $j \ge 0$ is immediate from Theorem II.6 of \emph{op.cit.} applied with $V$ replaced by $V(-j)$ and $x$ by $x \otimes e_{-j}$, using the formula \[ \partial_{V(-j)}(\varphi^{-n}(x \otimes e_{-j})) = \frac{1}{j!} \partial_{V(-j)}(\varphi^{-n}(\partial^j x \otimes t^{j} e_{-j})).\] For the formula when $j \le -1$, we choose an auxiliary integer $h \ge 1$ such that $\Fil^{-h} \mathbb{D}_{\mathrm{cris}}(V) = \mathbb{D}_{\mathrm{cris}}(V)$. The element $\partial^{j} x \otimes t^{j} e_{-j}$ lies in $\left(\BB^+_{\rig,\Qp} \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V(-j))\right)^{\psi = 1}$, by (\dag). Applying Theorem II.3 of \emph{op.cit.} with $V$, $h$ and $x$ replaced by $V(-j)$, $h-j$, and $\partial^{j} x \otimes t^{j} e_{-j}$, we see that \[ \Gamma^*(j+1) R_{j, n}(x) = \Gamma^*(j - h + 1) \log_{\mathbb{Q}_{p, n}, V(-j)} \left[ \left(\ell_0 \dots \ell_{h-1} x \right)_{j, n}\right]. \] For $x \in \mathcal{H}(\Gamma) \otimes_{\Lambda_{\ZZ_p}(\Gamma)} H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V)$, we have \[ \left( \ell_r x\right)_{j, n} = (j - r) x_{j, n},\] so (since $j \le -1$) we have \[ \left( \ell_0 \dots \ell_{h-1} x \right)_{j, n} = (j)(j-1) \dots (j - h + 1) x_{j, n} = \frac{\Gamma^*(j+1)}{\Gamma^*(j - h + 1)} x_{j, n}\] as required.
\end{proof} \begin{proposition} If $x$ is as above, and $\mathcal{L}^\Gamma_V(x)$ is the unique element of $\mathcal{H}(\Gamma) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)$ such that $\mathcal{L}^\Gamma_V(x) \cdot (1 + \pi) = (1 - \varphi) x$, then for any $j \in \mathbb{Z}$ we have \[ (1 - \varphi) \cdot \partial_{V(-j)}(\varphi^{-n}(\partial^{j} x \otimes t^j e_{-j})) = \mathcal{L}^\Gamma(x)(\chi^j) \otimes t^j e_{-j},\] while for any finite-order character $\omega$ of $\Gamma$ of conductor $n \ge 1$, we have \begin{multline*} \left(\sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma)^{-1} \sigma\right) \cdot \partial_{V(-j)}(\varphi^{-n}(\partial^{j} x \otimes t^j e_{-j})) \\= \tau(\omega) \varphi^{-n} \left(\mathcal{L}^\Gamma(x)(\chi^j \omega) \otimes t^j e_{-j}\right). \end{multline*} \end{proposition} \begin{proof} We note that \[ \mathcal{L}^\Gamma_{V(-j)}(\partial^{j} x \otimes t^j e_{-j}) = \Tw_j(\mathcal{L}^\Gamma_V(x)) \otimes t^{j} e_{-j},\] so it suffices to prove the result for $j = 0$. Suppose we have \[ x = \sum_{k \ge 0} v_k \pi^k,\quad v_k \in \mathbb{D}_{\mathrm{cris}}(V).\] Then \[ \partial_V(\varphi^{-n}(x)) = \sum_{k \ge 0} \varphi^{-n}(v_k) \left(\zeta_{p^n} - 1 \right)^k.\] On the other hand \[ \partial_V(\varphi^{-n}((1-\varphi) x)) = \sum_{k \ge 0} \varphi^{-n}(v_k) \left(\zeta_{p^n} - 1 \right)^k - \sum_{k \ge 0} \varphi^{1-n}(v_k) \left(\zeta_{p^{n-1}} - 1 \right)^k.\] Applying the operator $e_\omega = \sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma)^{-1} \sigma$, we have for $n \ge 1$ \[ e_\omega \cdot \partial_V(\varphi^{-n}(x)) = e_\omega \cdot \partial_V(\varphi^{-n}((1-\varphi) x)),\] since $e_\omega$ is zero on $\mathbb{Q}_{p, n-1}((t))$. However, since the map $\partial_V \circ \varphi^{-n}$ is a homomorphism of $\Gamma$-modules, we have \begin{align*} e_\omega \cdot \partial_V(\varphi^{-n}((1-\varphi) x)) &= e_\omega \cdot \partial_V( \mathcal{L}^\Gamma(x) \cdot (1 + \pi))\\ &= \varphi^{-n}(\mathcal{L}^\Gamma(x)) \cdot e_\omega \partial_{\QQ_p}(\varphi^{-n}(1 + \pi))\\ &= \tau(\omega) \varphi^{-n}\left( \mathcal{L}^\Gamma(x)(\omega)\right). \end{align*} This completes the proof of the proposition for $j = 0$. \end{proof} \begin{definition} Let $x\in H^1_{\Iw}(\mathbb{Q}_{p,\infty},V)$. If $\eta$ is any continuous character of $\Gamma$, denote by $x_\eta$ the image of $x$ in $H_{\Iw}^1(\mathbb{Q}_{p,\infty}, V(\eta^{-1}))$. If $n\geq 0$, denote by $x_{\eta,n}$ the image of $x_\eta$ in $H^1(\mathbb{Q}_{p,n}, V(\eta^{-1}))$. \end{definition} Thus $x_{\chi^j, n} = x_{j, n}$ in the previous notation. The next lemma is valid for arbitrary de Rham representations of $\mathcal{G}_{\QQ_p}$ (with no restriction on the Hodge--Tate weights): \begin{lemma} For any finite-order character $\omega$ factoring through $\Gamma / \Gamma_n$, with values in a finite extension $E / \QQ_p$, we have \[ \sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma)^{-1} \exp^*_{\mathbb{Q}_{p,n}, V^*(1)}(x_{0, n})^\sigma = \exp^*_{\QQ_p, V(\omega^{-1})^*(1)}(x_{\omega,0})\] and \[ \sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma)^{-1} \log_{\mathbb{Q}_{p, n}, V}(x_{0,n})^\sigma = \log_{\QQ_p, V(\omega^{-1})}(x_{\omega,0})\] where we make the identification \[ \mathbb{D}_{\dR}(V(\omega^{-1})) \cong \left(E \otimes_{\QQ_p} \mathbb{Q}_{p, n} \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\right)^{\Gamma = \omega}.\] \end{lemma} \begin{proof} This follows from the compatibility of the maps $\exp^*$ and $\log$ with the corestriction maps (cf.~\cite[\S\S II.2 \& II.3]{berger03}).
\end{proof} Combining the three results above, we obtain: \begin{theorem} \label{thm:explicitformulacyclo} Let $j \in \mathbb{Z}$ and let $x$ satisfy $(\dag)$. Let $\eta$ be a continuous character of $\Gamma$ of the form $\chi^j \omega$, where $\omega$ is a finite-order character of conductor $n$. \begin{enumerate}[(a)] \item If $j \ge 0$, we have \begin{multline*} \mathcal{L}^\Gamma_{V}(x)(\eta) = j! \times\\ \begin{cases} (1 - p^j \varphi)(1 - p^{-1-j} \varphi^{-1})^{-1} \left( \exp^*_{\QQ_p, V(\eta^{-1})^*(1)}(x_{\eta,0}) \otimes t^{-j} e_j\right) & \text{if $n = 0$,}\\ \tau(\omega)^{-1} p^{n(1+j)} \varphi^n \left(\exp^*_{\QQ_p, V(\eta^{-1})^*(1)}(x_{\eta,0}) \otimes t^{-j} e_j\right) & \text{if $n \ge 1$.} \end{cases} \end{multline*} \item If $j \le -1$, we have \begin{multline*} \mathcal{L}^\Gamma_{V}(x)(\eta) = \frac{(-1)^{-j-1}}{(-j-1)!} \times\\ \begin{cases} (1 - p^j \varphi)(1 - p^{-1-j} \varphi^{-1})^{-1} \left( \log_{\QQ_p, V(\eta^{-1})}(x_{\eta,0}) \otimes t^{-j} e_j\right) & \text{if $n = 0$,}\\ \tau(\omega)^{-1} p^{n(1+j)} \varphi^n \left(\log_{\QQ_p, V(\eta^{-1})}(x_{\eta,0}) \otimes t^{-j} e_j\right) & \text{if $n \ge 1$.} \end{cases} \end{multline*} \end{enumerate} (In both cases, we assume that $(1 - p^{-1-j} \varphi^{-1})$ is invertible on $\mathbb{D}_{\mathrm{cris}}(V)$ when $\eta = \chi^j$.) \end{theorem} From this theorem it is straightforward to deduce a version of Perrin-Riou's explicit reciprocity formula, relating the regulator for $V$ and for $V^*(1)$. We recall from Section \ref{sect:iwasawacoho} the definition of the Perrin-Riou pairing \[ \langle -, - \rangle_{\mathbb{Q}_{p, \infty}} : H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V) \times H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V^*(1)) \to \Lambda_{\QQ_p}(\Gamma).\] Let $h$ be sufficiently large that $V^*(1 + h)$ has Hodge--Tate weights $\ge 0$. Recall that we write $y_{-h}$ for the image of $y$ in $H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V^*(1 + h))$. Define $\mathcal{L}^\Gamma_{V^*(1)}$ by \begin{multline} \label{eq:twist1} \mathcal{L}^\Gamma_{V^*(1)}(y) = (\ell_{-1} \ell_{-2} \cdots \ell_{-h})^{-1} \Tw_{-h} \left(\mathcal{L}_{V^*(1 + h)}(y_{-h})\right) \otimes t^h e_{-h} \\ \in \operatorname{Frac} \mathcal{H}_{\QQ_p}(\Gamma) \otimes \mathbb{D}_{\mathrm{cris}}(V^*(1)); \end{multline} note that this definition is independent of the choice of $h \gg 0$. Write $\langle \cdot, \cdot \rangle_{\cris, V}$ for the natural pairing $\mathbb{D}_{\mathrm{cris}}(V) \times \mathbb{D}_{\mathrm{cris}}(V^*(1)) \to \mathbb{D}_{\mathrm{cris}}(\QQ_p(1)) \cong \QQ_p$. We extend the crystalline pairing $\Lambda_{\ZZ_p}(\Gamma)$-linearly in the first argument and antilinearly in the second argument. \begin{theorem} \label{thm:cycloreciprocity} For all $x\in H^1_{\Iw}(\mathbb{Q}_{p,\infty},V)$ and $y\in H^1_{\Iw}(\mathbb{Q}_{p,\infty},V^*(1))$, we have \[ \left\langle \mathcal{L}_V(x), \mathcal{L}_{V^*(1)}(y)\right\rangle_{\cris, V} = -\sigma_{-1} \cdot \ell_0 \cdot \langle x, y \rangle_{\mathbb{Q}_{p, \infty}, V},\] where $\sigma_{-1}$ is the unique element of $\Gamma$ such that $\chi(\sigma_{-1}) = -1$. \end{theorem} \begin{proof} By Theorem \ref{thm:explicitformulacyclo} (a), for $j \ge 1+h$ we have \[ \mathcal{L}_V(x)(\chi^j) = j!
(1 - p^j \varphi)(1 - p^{-1-j} \varphi^{-1})^{-1} \left( \exp^*_{0, V^*(1+j)}(x_{j,0}) \otimes t^{-j} e_j\right)\] and \begin{multline*} \mathcal{L}_{V^*(1 + h)}(y_{-h})(\chi^{h-j}) \otimes t^h e_{-h} =\frac{(-1)^{j-h-1}}{(j-h-1)!} \times\\ (1 - p^{-j} \varphi)(1 - p^{j-1} \varphi^{-1})^{-1} \left( \log_{\QQ_p, V^*(1 + j)} (y_{-j,0}) \otimes t^{j} e_{-j}\right). \end{multline*} Hence we have \begin{align*} \big\langle \mathcal{L}_V(x)(\chi^j), & \mathcal{L}_{V^*(1 + h)}(y_{-h})(\chi^{h-j}) \otimes t^h e_{-h}\big\rangle_{\cris, V} \\ & = \frac{(-1)^{h-j-1} j!}{(j-h-1)!} \langle \exp^*_{0, V^*(1+j)}(x_{j,0}), \log_{\QQ_p, V^*(1 + j)} (y_{-j,0}) \rangle_{\cris, V(-j)}\\ & = \frac{(-1)^{h-j-1} j!}{(j-h-1)!} \langle x_{j,0}, y_{-j,0}\rangle_{\QQ_p, V(-j)}\\ & = (-1)^{h+1} \left[ \sigma_{-1} \cdot (\ell_0 \dots \ell_{h}) \cdot \langle x, y \rangle_{\mathbb{Q}_{p, \infty}, V} \right] (\chi^j). \end{align*} Using the definition of $\mathcal{L}^\Gamma_{V^*(1)}$ as in \eqref{eq:twist1}, this relation takes the more pleasing form \[ \left\langle \mathcal{L}_V(x), \mathcal{L}_{V^*(1)}(y)\right\rangle_{\cris, V} = -\sigma_{-1} \cdot \ell_0 \cdot \langle x, y \rangle_{\mathbb{Q}_{p, \infty}, V}.\] \end{proof} \section{Functions of two p-adic variables} Let $p$ be a prime. We let $L$ be a complete discretely valued subfield of $\CC_p$, and let $v_p$ denote the $p$-adic valuation on $L$, normalised in the usual fashion, so $v_p(p) = 1$. \subsection{Functions and distributions of one variable} We recall the theory in the one-variable case, as presented in \cite{colmez10}. Let $h \in \mathbb{R}$, $h \ge 0$. Let $f$ be a function $\ZZ_p \to L$. We say $f$ has order $h$ if, informally, it may be approximated by a Taylor series of degree $[h]$ at every point with an error term of order $h$. More precisely, $f$ has order $h$ if there exist functions $f^{(j)}, 0 \le j \le [h]$, such that the quantity \[ \varepsilon_f(x,y) = f(x + y) - \sum_{j=0}^{[h]} \frac{f^{(j)}(x) y^j}{j!}\] satisfies \[ \inf_{x \in \ZZ_p, y \in p^n \ZZ_p} v_p \left( \varepsilon_f(x, y)\right) - hn \to \infty\] as $n \to \infty$. (It is clear that this determines the functions $f^{(0)}, \dots, f^{([h])}$ uniquely.) We write $C^h(\ZZ_p, L)$ for the space of such functions, with a Banach space structure given by the valuation \[ v_{C^h}(f) = \inf\left( \inf_{0 \le j \le [h], x \in \ZZ_p} v_p(f^{(j)}(x)), \inf_{x,y \in \ZZ_p} v_p(\varepsilon_f(x,y)) - h v_p(y)\right).\] We define the space $D^h(\ZZ_p, L)$ of \emph{distributions of order $h$} to be the continuous dual of $C^h(\ZZ_p, L)$. Then we have the following celebrated theorem, due to Mahler \cite{mahler58} for $h = 0$ and to Amice \cite{amice64} for $h > 0$: \begin{enumerate} \item[(1)] The space $C^h(\ZZ_p, L)$ has a Banach space basis given by the functions \[ x \mapsto p^{[h \ell(n)]} \binom{x}{n}\] for $n \ge 0$, where $\ell(n)$ is, as in \S 1.3.1 of \cite{colmez10}, the smallest integer $m$ such that $p^m > n$. \item[(2)] The space $LP^{N}(\ZZ_p, L)$ of $L$-valued locally polynomial functions of degree $\le N$ is dense in $C^h(\ZZ_p, L)$ for any $N \ge [h]$, and a linear functional \[ \mu: LP^{N}(\ZZ_p, L) \to L\] extends continuously to a distribution of order $h$ if and only if there is a constant $C$ such that we have \[ v_p\left(\int_{x \in a + p^n \ZZ_p} \left( \frac{x - a}{p^n} \right)^k \, \mathrm{d}\mu\right) \ge C - hn\] for all $a \in \ZZ_p$, $n \in \mathbb{N}$ and $0 \le k \le N$. \end{enumerate} A modern account of this theorem is given in \cite[\S\S 1.3, 1.5]{colmez10}.
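By way of illustration (this example will not be needed in the sequel), suppose $0 \le h < 1$, let $\alpha \in L^\times$ with $v_p(\alpha) = h$, and suppose we are given elements $\lambda_{a, n} \in \mathcal{O}_L$, indexed by $n \ge 0$ and $a \in \ZZ_p / p^n \ZZ_p$, satisfying \[ \alpha \, \lambda_{a, n} = \sum_{\substack{b \bmod p^{n+1} \\ b \equiv a \bmod p^n}} \lambda_{b, n+1}.\] Then $\mu(a + p^n \ZZ_p) := \alpha^{-n} \lambda_{a, n}$ is finitely additive and satisfies $v_p\left(\mu(a + p^n \ZZ_p)\right) \ge -hn$, so by (2) above (applied with $N = 0$) it extends to a distribution of order $h$ on $\ZZ_p$. Families $(\lambda_{a,n})$ of this kind arise, for instance, from modular symbols attached to a $U_p$-eigenform with eigenvalue $\alpha$.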
\subsection{The two-variable case} \label{sect:order} We now consider functions of two variables. For $a, b \ge 0$, we define the space \[ C^{(a,b)}(\ZZ_p^2, L) := C^a(\ZZ_p, L) \mathbin{\hat\otimes}_L C^b(\ZZ_p, L),\] with its natural completed tensor product topology. We regard this as a space of functions on $\ZZ_p^2$ in the obvious way, and refer to these as the $L$-valued functions on $\ZZ_p^2$ of order $(a, b)$. It is clear that $C^{(0, 0)}(\ZZ_p^2, L)$ is simply the space of continuous $L$-valued functions on $\ZZ_p^2$, and that if $a' \ge a$ and $b' \ge b$, then $C^{(a',b')}(\ZZ_p^2, L)$ is dense in $C^{(a,b)}(\ZZ_p^2, L)$. Moreover, for any $(a,b)$ the space $LA(\ZZ_p^2, L)$ of locally analytic functions on $\ZZ_p^2$ is a dense subspace of $C^{(a,b)}(\ZZ_p^2, L)$. Note that any choice of Banach space bases for the two factors in the tensor product gives a Banach space basis for $C^{(a,b)}(\ZZ_p^2, L)$. In particular, from (1) above we have a Banach basis given by the functions \[ c_{n_1,n_2} : (x_1,x_2) \mapsto p^{[a \ell(n_1)] + [b \ell(n_2)]} \binom{x_1}{n_1}\binom{x_2}{n_2}.\] The following technical proposition will be useful to us in the main text: \begin{proposition} \label{prop:changevar} For any $h \ge 0$, the space $C^{(0, h)}(\ZZ_p^2, L)$ is invariant under pullback via the map $\Phi: (x, y) \mapsto (x, ax + y)$, for any $a \in \ZZ_p$. \end{proposition} \begin{proof} It suffices to show that $\Phi^*(c_{n_1,n_2})$ can be written as a convergent series in terms of the functions $c_{m_1, m_2}$ with uniformly bounded coefficients. We find that \begin{align*} \Phi^*(c_{n_1,n_2})(x_1, x_2) &= p^{[h \ell(n_2)]} \binom{x_1}{n_1} \binom{ax_1 + x_2}{n_2}\\ &= \sum_{i=0}^{n_2} p^{[h \ell(n_2)]} \binom{x_1}{n_1} \binom{ax_1}{n_2 - i} \binom{x_2}{i}. \end{align*} The functions $x_1 \mapsto \binom{x_1}{n_1} \binom{ax_1}{n_2 - i}$ are continuous $\ZZ_p$-valued functions on $\ZZ_p$, and hence the coefficients of their Mahler expansions are integral; and since the function $\ell(n)$ is increasing, we see that the coefficients of $\Phi^*(c_{n_1,n_2})$ in this basis are in fact bounded by 1. \end{proof} Dually, we define a distribution of order $(a,b)$ to be an element of the dual of $C^{(a,b)}(\ZZ_p^2, L)$; the space $D^{(a,b)}(\ZZ_p^2, L)$ of such distributions is canonically isomorphic to the completed tensor product $D^a(\ZZ_p, L) \mathbin{\hat\otimes}_L D^b(\ZZ_p, L)$. An analogue of (2) above is also true for these spaces. Let us write $LP^{(N_1, N_2)}(\ZZ_p^2, L)$ for the space of functions on $\ZZ_p^2$ which are locally polynomial of degree $\le N_1$ in $x_1$ and of degree $\le N_2$ in $x_2$; that is, the algebraic tensor product $LP^{N_1}(\ZZ_p, L) \otimes_L LP^{N_2}(\ZZ_p, L)$. \begin{proposition} Suppose $N_1 \ge [a]$ and $N_2 \ge [b]$. Then the subspace $LP^{(N_1, N_2)}(\ZZ_p^2, L)$ is dense in $C^{(a,b)}(\ZZ_p^2, L)$, and a linear functional on $LP^{(N_1, N_2)}(\ZZ_p^2, L)$ extends to an element of $D^{(a, b)}(\ZZ_p^2, L)$ if and only if there is a constant $C$ such that \begin{multline} \label{eq:ordercondition} v_p \left(\int_{(x_1, x_2) \in (a_1 + p^{n_1}\ZZ_p) \times (a_2 + p^{n_2}\ZZ_p)} \left( \frac{x_1 - a_1}{p^{n_1}}\right)^{k_1} \left( \frac{x_2 - a_2}{p^{n_2}}\right)^{k_2}\,\mathrm{d}\mu\right) \\ \ge C - a n_1 - b n_2 \end{multline} for all $(a_1, a_2) \in \ZZ_p^2$, $(n_1, n_2) \in \mathbb{N}^2$, $0 \le k_1 \le N_1$ and $0 \le k_2 \le N_2$. \end{proposition} The proof of this result is virtually identical to the 1-variable case, so we shall not give the full details here.
In particular, if $a,b < 1$, we may take $N_1 = N_2 = 0$, and a distribution of order $(a, b)$ is uniquely determined by its values on locally constant functions, or equivalently, by its values on the indicator functions of open subsets of $\ZZ_p^2$. Conversely, a finitely-additive function $\mu$ on open subsets of $\ZZ_p^2$ defines a distribution of order $(a,b)$ if and only if there is $C$ such that \[ v_p\left( \mu\left( (a_1 + p^{n_1}\ZZ_p) \times (a_2 + p^{n_2}\ZZ_p) \right)\right) \ge C - a n_1 - b n_2\] for all $a_1, a_2 \in \ZZ_p$ and $n_1, n_2 \in \mathbb{N}$. The following is easily verified: \begin{proposition} \label{prop:orderofproduct} The convolution of distributions of order $(a,b)$ and $(a', b')$ has order $(a + a', b + b')$. \end{proposition} It is important to note that the spaces of functions and of distributions of order $(a,b)$ depend on a choice of coordinates; they are not invariant under automorphisms of $\ZZ_p^2$, \emph{even if} $a = b$. However, dualising Proposition \ref{prop:changevar} above, the space of distributions of order $(0, h)$ is invariant under automorphisms preserving the subgroup $\{0\} \times \ZZ_p$. \begin{remark} One can also define a function $f:\ZZ_p^2 \to L$ to be of order $h$, for a single non-negative real $h$, if $f$ has a Taylor expansion of degree $[h]$ at every point, with the error term $\varepsilon(x,y)$ (defined as above) satisfying \[ \inf_{x \in \ZZ_p^2, y \in p^n \ZZ_p^2} v_p \varepsilon(x,y) - hn \to \infty.\] This definition \emph{is} invariant under automorphisms of $\ZZ_p^2$ (and indeed under arbitrary morphisms of locally $\QQ_p$-analytic manifolds). However it is not so convenient for us, since locally constant functions are only dense for $h < 1$, and a finitely-additive function on open subsets extends to a linear functional on this space if we can find a $C$ such that \begin{equation} \label{eq:badordercondition} v_p\left(\mu\left( a + p^n \ZZ_p^2 \right)\right) \ge C - nh. \end{equation} The requirement that this be satisfied, for some $h < 1$, is much stronger than the requirement that \eqref{eq:ordercondition} is satisfied for some $a, b < 1$. \end{remark} We shall also use the concept of distributions of order $(a, b)$ on a slightly larger class of groups: if we have an abelian $p$-adic Lie group $G$, and an open subgroup $H$ with distinguished subgroups $H_1,H_2$ such that $H = H_1 \times H_2$ and $H_1 \cong H_2 \cong \ZZ_p$, then we may define a distribution on $G$ to have order $(a, b)$ if its restriction to every coset of $H$ has order $(a, b)$ in the above sense. Note that this does not depend on a choice of generators of the groups $H_i$, but it does depend on the choice of the subgroups $H_1, H_2$; so when there is a possibility of ambiguity we shall write ``order $(a,b)$ with respect to the subgroups $H_1, H_2$''. Note that an application of Proposition \ref{prop:changevar} shows that a distribution has order $(0, h)$ with respect to the subgroups $(H_1, H_2)$ if and only if it has order $(0, h)$ with respect to $(H_1', H_2)$ for any other subgroup $H_1'$ complementary to $H_2$; that is, in this special case the definition of ``order $(0, h)$'' depends only on the choice of $H_2$. \end{document}
\begin{document} \title{Finding direct product decompositions in polynomial time} \author{James B. Wilson} \address{ Department of Mathematics\\ The Ohio State University\\ Columbus, OH 43210 } \email{[email protected]} \date{\today} \thanks{This research was supported in part by NSF Grant DMS 0242983.} \keywords{direct product, polynomial time, group variety, p-group, bilinear map} \begin{abstract} A polynomial-time algorithm is produced which, given generators for a group of permutations on a finite set, returns a direct product decomposition of the group into directly indecomposable subgroups. The process uses bilinear maps and commutative rings to characterize direct products of $p$-groups of class $2$ and reduces general groups to $p$-groups using group varieties. The methods apply to quotients of permutation groups and operator groups as well. \end{abstract} \maketitle \tableofcontents \section{Introduction} Forming direct products of groups is an old and elementary way to construct new groups from old ones. This paper concerns reversing that process by efficiently decomposing a group into a direct product of nontrivial subgroups in a maximal way, i.e. constructing a \emph{Remak decomposition} of the group. We measure efficiency by describing the time (number of operations) used by an algorithm, as a function of the input size. Notice that a small set of generating permutations or matrices can specify a group of exponentially larger size; hence, there is some work just to find the order of a group in polynomial time. In the last 40 years, problems of this sort have been attacked with ever increasing dependence on properties of simple groups, and primitive and irreducible actions, cf. \cite{Seress:book}. A polynomial-time algorithm to construct a Remak decomposition is an obvious addition to those algorithms and, as might be expected, our solution depends on many of those earlier works. Surprisingly, the main steps involve tools (bilinear maps, commutative rings, and group varieties) that are not standard in Computational Group Theory. We solve the Remak decomposition problem for permutation groups and describe the method in a framework suitable for other computational settings, such as matrix groups. We prove: \begin{thm}\label{thm:FindRemak} There is a deterministic polynomial-time algorithm which, given a permutation group, returns a Remak decomposition of the group. \end{thm} It seems natural to solve the Remak decomposition problem by first locating a direct factor of the group, constructing a direct complement, and then recursing on the two factors. Indeed, Luks \cite{Luks:comp} and Wright \cite{Wright:comp} (cf. \thmref{thm:FindComp}) gave polynomial-time algorithms to test if a subgroup is a direct factor and if so to construct a direct complement. But how do we find a proper nontrivial direct factor to start with? A critical case for that problem is $p$-groups. A $p$-group generally has an exponential number of normal subgroups so that searching for direct factors of a $p$-group appears impossible. The algorithm for \thmref{thm:FindRemak} does not proceed in the natural fashion just described, and it is more of a construction than a search. In fact, the algorithm does not produce a single direct factor of the original group until the final step, at which point it has produced an entire Remak decomposition. It was the study of central products of $p$-groups which inspired the approach we use for \thmref{thm:FindRemak}. 
In \cite{Wilson:unique-cent,Wilson:algo-cent}, central products of a $p$-group $P$ of class $2$ were linked, via a bilinear map $\mathsf{Bi} (P)$, to idempotents in a Jordan algebra in a way that explained their size, their $(\Aut P)$-orbits, and demonstrated how to use the polynomial-time algorithms for rings (Ronyai \cite{Ronyai}) to construct fully refined central decompositions all at once (rather than incrementally refining a decomposition). This approach is repeated here, only we replace Jordan algebras with a canonical commutative ring $C(P):=C(\mathsf{Bi} (P))$ (cf. \eqref{eq:Bi} and \defref{def:centroid}). Thus, we characterize directly indecomposable $p$-groups of class $2$ as follows: \begin{thm}\label{thm:indecomp-class2} If $P$ and $Q$ are finite $p$-groups of nilpotence class $2$ then $C(P\times Q)\cong C(P)\oplus C(Q)$. Hence, if $C(P)$ is a local ring and $\zeta_1(P)\leq \Phi(P)$, then $P$ is directly indecomposable. Furthermore, if $P^p=1$ then the converse also holds. \end{thm} The algorithm applies the implications of \thmref{thm:indecomp-class2} and begins with the \emph{unique} Remak decomposition of a commutative ring. This process is repeated across several sections of the group. Using group varieties we organize the various sections. Group varieties behave well regarding direct products and come with natural and computable normal subgroups used to create the sections. To work within these sections of a permutation group we have had to prove \thmref{thm:FindRemak} in the generality of quotients of permutation groups and thus we have used the Kantor-Luks polynomial-time quotient group algorithms \cite{KL:quotient}. Those methods depend on the Classification of Finite Simple Groups and, in this way, so does \thmref{thm:FindRemak}. A final generalization of the main result is the need to allow groups with operators $\Omega$ and consider Remak $\Omega$-decompositions. The most general version of our main result is summarized in \thmref{thm:FindRemak-Q} followed by a variant for matrix groups in \corref{coro:FindRemak-matrix}. \thmref{thm:FindRemak} was proved in 2008 \cite{Wilson:thesis}. That same year, with entirely different methods, Kayal-Nezhmetdinov \cite{KN:direct} proved there is a deterministic polynomial-time algorithm which, given a group $G$ specified by its multiplication table (i.e. the size of input is $|G|^2$), returns a Remak decomposition of $G$. The same result follows as a corollary to \thmref{thm:FindRemak} by means of the regular permutation representation of $G$. \thmref{thm:nearly-linear} states that in that special situation there is a nearly-linear-time algorithm for the task. \subsection{Outline} We organize the paper as follows. In Section \ref{sec:background} we introduce the notation and definitions we use throughout. This includes the relevant group theory background, discussion of group varieties, rings and modules, and a complete listing of the prerequisite tools for \thmref{thm:FindRemak}. In Section \ref{sec:lift-ext} we show when and how a direct decomposition of a subgroup or quotient group can be extended or lifted to a direct decomposition of the whole group (Sections \ref{sec:induced}--\ref{sec:chains}). That task centers around the selection of good classes of groups as well as appropriate normal subgroups. The results in that section are largely non-algorithmic though they lay foundations for the correctness proofs and suggest how the data will be processed by the algorithm for \thmref{thm:FindRemak}. 
Section \ref{sec:lift-ext-algo} applies the results of the earlier section to produce a polynomial-time algorithm which can effect the lifting/extending of direct decompositions of subgroups and quotient groups. First we show how to construct direct $\Omega$-complements of a direct $\Omega$-factor of a group (Section \ref{sec:complements}) by modifying some earlier unpublished work of Luks \cite{Luks:comp} and Wright \cite{Wright:comp}. Those algorithms answer Problem 2, and (subject to some constraints) also Problem 4 of \cite[p. 13]{KN:direct}. The rest of the work concerns the algorithm \textalgo{Merge} described in Section \ref{sec:merge} which does the `glueing' together of direct factors from a normal subgroup and its quotient. In Section \ref{sec:bi} we characterize direct decompositions of $p$-groups of class $2$ by means of an associated commutative ring and prove \thmref{thm:indecomp-class2}. We close that section with some likely well-known results on groups with trivial centers. In Section \ref{sec:Remak} we prove \thmref{thm:FindRemak} and its generalization \thmref{thm:FindRemak-Q}. This is a specific application which demonstrates the general framework setup in Sections \ref{sec:lift-ext} and \ref{sec:lift-ext-algo}. \thmref{thm:FindRemak-Q} answers Problem 3 of \cite[p. 13]{KN:direct} and \corref{coro:FindRemak-matrix} essentially answers Problem 5 of \cite[p. 13]{KN:direct}. Section \ref{sec:ex} is an example of how the algorithm's main components operate on a specific group. The execution is explained with an effort to indicate where some of the subtle points in the process arise. Section \ref{sec:closing} wraps up loose ends and poses some questions. \section{Background}\label{sec:background} We begin with a survey of the notation, definitions, and algorithms we use throughout the paper. Much of the preliminaries can be found in standard texts on Group Theory, consider \cite[Vol. I \S\S 15--18; Vol. II \S\S 45--47]{Kurosh:groups}. Typewriter fonts $\mathtt{X}, \mathtt{R}$, etc. denote sets without implied properties; Roman fonts $G$, $H$, etc., denote groups; Calligraphic fonts $\mathcal{H}, \mathcal{X}$, etc. denote sets and multisets of groups; and the Fraktur fonts $\mathfrak{X}$, $\mathfrak{N}$, etc. denote classes of groups. With few exceptions we consider only finite groups. Functions are evaluated on the right and group actions are denoted exponentially. We write $\End G$ for the set of endomorphisms of $G$ and $\Aut G$ for the group of automorphisms. The \emph{centralizer} of a subgroup $H\leq G$ is $C_G(H)=\{g\in G: H^g=H\}$. The \emph{upper central series} is $\{\zeta_i(G): i\in\mathbb{N}\}$ where $\zeta_0(G)=1$, $\zeta_{i}(G)\normal \zeta_{i+1}(G)$ and $\zeta_{i+1}(G)/\zeta_i(G)=C_{G/\zeta_i(G)}(G/\zeta_i(G))$, for all $i\in\mathbb{N}$. The commutator of subgroups $H$ and $K$ of $G$ is $[H,K]=\langle [h,k]:h\in H, k\in K\rangle$. The \emph{lower central series} is $\{\gamma_i(G):i\in\mathbb{Z}^+\}$ where $\gamma_1(G)=G$ and $\gamma_{i+1}(G)=[G,\gamma_i(G)]$ for all $i\in\mathbb{Z}^+$. The \emph{Frattini} subgroup $\Phi(G)$ is the intersection of all maximal subgroups. \subsection{Operator groups}\label{sec:op-groups} An $\Omega$-group $G$ is a group, a possibly empty set $\Omega$, and a function $\theta:\Omega\to \End G$. Throughout the paper we write $g^{\omega}$ for $g(\omega\theta)$, for all $g\in G$ and all $\omega\in \Omega$. With the exception of Section \ref{sec:gen-ops}, we insist that $\Omega\theta\subseteq \Aut G$. 
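As a concrete illustration of the subgroups just recalled (and of the fact that, being characteristic, they are automatically $\Omega$-subgroups whenever $\Omega\theta\subseteq \Aut G$), the following self-contained Python sketch computes the derived subgroup $\gamma_2(G)$ and the upper central series of the dihedral group of order $8$ acting on $4$ points. It is a toy brute-force computation of our own, not one of the polynomial-time routines used later, and the encoding of permutations as tuples is an ad hoc convention.
\begin{verbatim}
def mul(p, q):          # right action: apply p first, then q
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, x in enumerate(p):
        r[x] = i
    return tuple(r)

def comm(a, b):         # [a,b] = a^-1 b^-1 a b
    return mul(mul(inv(a), inv(b)), mul(a, b))

def closure(gens):      # naive closure; fine for tiny groups
    G, frontier = set(gens), set(gens)
    while frontier:
        new = {mul(a, b) for a in G for b in frontier} | \
              {mul(b, a) for b in frontier for a in G}
        frontier = new - G
        G |= new
    return G

r, s, e = (1, 2, 3, 0), (1, 0, 3, 2), (0, 1, 2, 3)
G = closure([r, s])                                   # dihedral of order 8

gamma2 = closure([comm(a, b) for a in G for b in G] + [e])
zeta = [{e}]                                          # upper central series
while True:
    nxt = {g for g in G if all(comm(g, x) in zeta[-1] for x in G)}
    if nxt == zeta[-1]:
        break
    zeta.append(nxt)

print(len(G), len(gamma2), [len(z) for z in zeta])    # 8 2 [1, 2, 8]
\end{verbatim}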
In a natural way, $\Omega$-groups have all the usual definitions of $\Omega$-subgroups, quotient $\Omega$-groups, and $\Omega$-homomorphisms. Call $H$ is \emph{fully invariant}, resp. \emph{characteristic} if it is an $(\End G)-$, resp. $(\Aut G)-$, subgroup. As we insist that $\Omega\theta\subseteq \Aut G$, in this work every characteristic subgroup of $G$ is automatically an $\Omega$-subgroup. Let $\Aut_{\Omega} G$ denote the $\Omega$-automorphisms of $G$. We describe normal $\Omega$-subgroups $M$ of $G$ simply as $(\Omega\union G)$-subgroup of $G$. The following characterization is critical to our proofs. \begin{align} \label{eq:central} \Aut_{\Omega\cup G} G & = \{\varphi\in \Aut_{\Omega} G: \forall g\in G, g\varphi \equiv g \pmod{\zeta_1(G)}\}. \end{align} It is also evident that $\Aut_{\Omega\cup G} G$ acts as the identity on $\gamma_2(G)$. Such automorphisms are called \emph{central} but for uniformity we described them as $(\Omega\cup G)$-automorphisms. We repeatedly use the following property of the $(\Omega\cup G)$-subgroup lattice. \begin{lemma}[Modular law]\cite[Vol. II \S 44: pp. 91-92]{Kurosh:groups}\label{lem:modular} If $M$, $H$, and $R$ are $(\Omega\cup G)$-subgroups of an $\Omega$-group $G$ and $M\leq H$, then $H\cap RM=(H\cap R)M$. \end{lemma} \subsection{Decompositions, factors, and refinement}\label{sec:decomps} Let $G$ be an $\Omega$-group. An \emph{$\Omega$-decomposition} of $G$ is a set $\mathcal{H}$ of $(\Omega\union G)$-subgroups of $G$ which generates $G$ but no proper subset of $\mathcal{H}$ does. A \emph{direct $\Omega$-decomposition} is an $\Omega$-decomposition $\mathcal{H}$ where $H\intersect \langle\mathcal{H}-\{H\}\rangle=1$, for all $H\in\mathcal{H}$. In that case, elements $H$ of $\mathcal{H}$ are direct $\Omega$-factors of $G$ and $\langle\mathcal{H}-\{H\}\rangle$ is a \emph{direct $\Omega$-complement} to $H$. Call $G$ \emph{directly $\Omega$-indecomposable} if $\{G\}$ is the only direct $\Omega$-decomposition of $G$. Finally, a \emph{Remak $\Omega$-decomposition} means a direct $\Omega$-decomposition consisting of directly $\Omega$-indecomposable groups. Our definitions imply that the trivial subgroup $1$ is not a direct $\Omega$-factor. Furthermore, the only direct decomposition of $1$ is $\emptyset$ and so $1$ is not directly $\Omega$-indecomposable. We repeatedly use for the following notation. Fix an $\Omega$-decomposition $\mathcal{H}$ of an $\Omega$-group $G$, and an $(\Omega\union G)$-subgroup $M$ of $G$. Define the sets \begin{align} \mathcal{H}\intersect M & = \{ H\intersect M : H\in\mathcal{H} \} -\{1\},\\ \mathcal{H}M & = \{ HM : H\in\mathcal{H}\} - \{M\},\textnormal{ and }\\ \mathcal{H}M/M & = \{ HM/M : H\in\mathcal{H} \} -\{M/M\}. \end{align} If $f:G\to H$ is an $\Omega$-homomorphism then define \begin{align} \mathcal{H}f = \{ Hf: H\in\mathcal{H}\}-\{1\}. \end{align} Each of these sets consists of $\Omega$-subgroups of $G\intersect M$, $M$, $G/M$, and $\im f$ respectively. It is not generally true that these sets are $\Omega$-decompositions. In particular, for arbitrary $M$, we should not expect a relationship between the direct $\Omega$-decompositions of $G/M$ and those of $G$. If $\mathfrak{X}$ is a class of groups then set \begin{align} \mathcal{H}\cap \mathfrak{X} & = \{ H\in\mathcal{H}: H\in\mathfrak{X}\},\textnormal{ and}\\ \mathcal{H}-\mathfrak{X} & = \mathcal{H}-(\mathcal{H}\cap\mathfrak{X}). 
\end{align} An $\Omega$-decomposition $\mathcal{H}$ of $G$ \emph{refines} an $\Omega$-decomposition $\mathcal{K}$ of $G$ if for each $H\in\mathcal{H}$, there a unique $K\in\mathcal{K}$ such that $H\leq K$ and furthermore, \begin{equation}\label{eq:refine} \forall K\in\mathcal{K},\quad K =\langle H\in\mathcal{H} : H\leq K\rangle. \end{equation} When $\mathcal{K}$ is a direct $\Omega$-decomposition, \eqref{eq:refine} implies the uniqueness preceding the equation. If $\mathcal{H}$ is a direct $\Omega$-decomposition then $\mathcal{K}$ is a direct $\Omega$-decomposition. An essential tool for us is the so called ``Krull-Schmidt'' theorem for finite groups. \begin{thm}[``Krull-Schmidt'']\label{thm:KRS}\cite[Vol. II, p. 120]{Kurosh:groups} If $G$ is an $\Omega$-group and $\mathcal{R}$ and $\mathcal{T}$ are Remak $\Omega$-decompositions of $G$, then for every $\mathcal{X}\subseteq \mathcal{R}$, there is a $\varphi\in \Aut_{\Omega\cup G} G$ such that $\mathcal{X}\varphi\subseteq \mathcal{T}$ and $\varphi$ is the identity on $\mathcal{R}-\mathcal{X}$. In particular, $\mathcal{R}\varphi=\mathcal{X}\varphi\sqcup (\mathcal{R}-\mathcal{X})$ is a Remak $\Omega$-decomposition of $G$. \end{thm} \begin{remark} The ``Krull-Schmidt'' theorem combines two distinct properties. First, it is a theorem about exchange (as compared to a basis exchange). That property was proved by Wedderburn \cite{Wedderburn:direct} in 1909. Secondly, it is a theorem about the transitivity of a group action. That property was the contribution of Remak \cite{Remak:direct} in 1911. Remak was made aware of Wedderburn's work in the course of publishing his paper and added to his closing remarks \cite[p. 308]{Remak:direct} that Wedderburn's proof contained an unsupported leap (specifically at \cite[p.175, l.-4]{Wedderburn:direct}). This leap is not so great by contemporary standards, for example it occurs in \cite[p.81, l.-12]{Rotman:grp}. Few references seem to be made to Wedderburn's work following Remak's publication. In 1913, Schmidt \cite{Schmidt:direct} simplified and extended the work of Remak and in 1925 Krull \cite{Krull:direct} considered direct products of finite and infinite abelian $\Omega$-groups. Fitting \cite{Fitting:direct} invented the standard proof using idempotents, Ore \cite{Ore:lattice1} grounded the concepts in Lattice theory, and in several works Kurosh \cite[\S 17, \S\S 42--47]{Kurosh:groups} and others unified and expanded these results. By the 1930's direct decompositions of maximum length appear as ``Remak decompositions'' while at the same time the theorem is referenced as ``Krull-Schmidt''. \end{remark} \subsection{Free groups, presentations, and constructive presentations} \label{sec:free} In various places we use free groups. Fix a set $\mathtt{X}\neq \emptyset$ and a group $G$. Let $G^{\mathtt{X}}$ denote the set of functions from $\mathtt{X}$ to $G$, equivalently, the set of all $\mathtt{X}$-tuples of $G$. Every $f\in G^{\mathtt{X}}$ is the restriction of a unique homomorphism $\hat{f}$ from the free group $F(\mathtt{X})$ into $G$, that is: \begin{equation} \forall x\in\mathtt{X}, \quad x\hat{f} = xf. \end{equation} We use $\hat{f}$ exclusively in that manner. As usual we call $\langle\mathtt{X} | \mathtt{R}\rangle$ a \emph{presentation} for a group $G$ with respect to $f:\mathtt{X}\to G$ if $\mathtt{X} f$ generates $G$ and $\ker \hat{f}$ is the smallest normal subgroup of $F(\mathtt{X})$ containing $\mathtt{R}$. 
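To make the definitions of Section \ref{sec:decomps} concrete (taking $\Omega=\emptyset$), the following small Python sketch, which is our own illustration and not an algorithm of this paper, tests the defining conditions of a direct decomposition inside the cyclic group $\mathbb{Z}_{12}$, written additively. The decomposition into the Sylow subgroups $\langle 3\rangle\cong\mathbb{Z}_4$ and $\langle 4\rangle\cong\mathbb{Z}_3$ is a Remak decomposition, whereas $\{\langle 2\rangle,\langle 3\rangle\}$ is not a direct decomposition because the two parts meet in $\langle 6\rangle$.
\begin{verbatim}
def subgroup(n, gens):
    # subgroup of Z/n generated by gens (additive closure)
    H, frontier = {0}, {0}
    while frontier:
        new = {(h + g) % n for h in H for g in gens} - H
        H |= new
        frontier = new
    return H

def is_direct_decomposition(n, parts):
    # the parts generate Z/n and each part meets the
    # subgroup generated by the others trivially
    if subgroup(n, set().union(*parts)) != set(range(n)):
        return False
    for i, H in enumerate(parts):
        rest = subgroup(n, set().union(*(parts[:i] + parts[i + 1:])))
        if H & rest != {0}:
            return False
    return True

n = 12
H4 = subgroup(n, {3})     # {0, 3, 6, 9}          ~ Z/4
H3 = subgroup(n, {4})     # {0, 4, 8}             ~ Z/3
H6 = subgroup(n, {2})     # {0, 2, 4, 6, 8, 10}   ~ Z/6

print(is_direct_decomposition(n, [H4, H3]))   # True:  a Remak decomposition
print(is_direct_decomposition(n, [H6, H4]))   # False: the parts meet in <6>
\end{verbatim}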
Following \cite[Section 3.1]{KLM:Sylow}, $\{\langle\mathtt{X}|\mathtt{R}\rangle, f:\mathtt{X}\to G,\ell:G\to F(\mathtt{X})\}$ is a \emph{constructive presentation} for $G$, if $\langle\mathtt{X} | \mathtt{R}\rangle$ is a presentation for $G$ with respect to $f$ and $\ell \hat{f}$ is the identity on $G$. More generally, if $M$ is a normal subgroup of $G$ then call $\{\langle\mathtt{X}| \mathtt{R}\rangle,f:\mathtt{X}\to G,\ell:G\to F(\mathtt{X})\}$ a \emph{constructive presentation for $G$ mod $M$} if $\langle \mathtt{X}|\mathtt{R}\rangle$ is a presentation of $G/M$ with respect to the induced function $\mathtt{X}\overset{f}{\to} G\to G/M$, also $\ell\hat{f}$ is the identity on $G$, and $M\ell\leq \langle \mathtt{R}^{F(\mathtt{X})}\rangle$. \subsection{Group classes, varieties, and verbal and marginal subgroups} \label{sec:varieties} In this section we continue the notation given in Section \ref{sec:free} and introduce the vocabulary and elementary properties of group varieties studies at length in \cite{HNeumann:variety}. By a \emph{class of $\Omega$-groups} we shall mean a class which contains the trivial group and is closed to $\Omega$-isomorphic images. If $\mathfrak{X}$ is a class of ordinary groups, then $\mathfrak{X}^{\Omega}$ denotes the subclass of $\Omega$-groups in $\mathfrak{X}$. A \emph{variety} $\mathfrak{V}=\mathfrak{V}(\mathtt{W})$ is a class of groups defined by a set $\mathtt{W}$ of words, known as \emph{laws}. Explicitly, $G\in\mathfrak{X}$ if, and only if, every $f\in G^{\mathtt{X}}$ has $\mathtt{W}\subseteq \ker \hat{f}$. We say that $w\in F(\mathtt{X})$ is a \emph{consequence} of the laws $\mathtt{W}$ if for every $G\in\mathfrak{V}$ and every $f\in G^{\mathtt{X}}$, $w\in \ker \hat{f}$. The relevance of these classes to direct products is captured in the following: \begin{thm}[Birkhoff-Kogalovski]\cite[15.53]{HNeumann:variety}\label{thm:BK} A class of groups is a variety if, and only if, it is nonempty and is closed to homomorphic images, subgroups, and direct products (including infinite products). \end{thm} Fix a word $w\in F(\mathtt{X})$. We regard $w$ as a function $G^{\mathtt{X}}\to G$, denoted $w$, where \begin{equation}\label{eq:w-map} \forall f\in G^{\mathtt{X}},\quad w(f) = w\hat{f}. \end{equation} On occasion we write $w(f)$ as $w(g_1,g_2,\dots)$, where $f\in G^{\mathtt{X}}$ is understood as the tuple $(g_1,g_2,\dots)$. For example, if $w=[x_1,x_2]$, then $w:G^2\to G$ can be defined as $w(g_1,g_2)=[g_1,g_2]$, for all $g_1,g_2\in G$. Levi and Hall separately introduced two natural subgroups to associate with the function $w:G^{\mathtt{X}}\to G$. First, to approximate the image of $w$ with a group, we have the \emph{verbal} subgroup \begin{equation}\label{eq:def-verbal} w(G) = \langle w(f): f\in G^{\mathtt{X}}\rangle. \end{equation} Secondly, to mimic the radical of a multilinear map, we use the \emph{marginal} subgroup \begin{equation}\label{eq:marginal} w^*(G) = \{ g \in G~:~\forall f'\in \langle g\rangle^{\mathtt{X}}, \forall f\in G^{\mathtt{X}},~w(ff')=w(f)\}. \end{equation} (To be clear, $ff'\in G^{\mathtt{X}}$ is the pointwise product: $x(ff')=(xf)(xf')$ for all $x\in \mathtt{X}$.) Thus, $w:G^{\mathtt{X}}\to G$ factors through $w:(G/w^*(G))^{\mathtt{X}}\to w(G)$. For a set $\mathtt{W}$ of words, the $\mathtt{W}$-verbal subgroup is $\langle w(G): w\in \mathtt{W}\rangle$ and the $\mathtt{W}$-marginal subgroup is $\bigcap \{w^*(G): w\in \mathtt{W}\}$. Observe that for finite sets $\mathtt{W}$ a single word may be used instead, e.g. 
replace $\mathtt{W}=\{[x_1,x_2], x_1^2\} \subseteq F(\{x_1,x_2\})$ with $w=[x_1,x_2]x_3^2\in F(\{x_1,x_2,x_3\})$. If we have a variety $\mathfrak{V}$ defined by two sets $\mathtt{W}$ and ${\tt U}$ of laws, then every $u\in {\tt U}$ is a consequence of the laws $\mathtt{W}$. From the definitions above it follows that $u(G)\leq \mathtt{W}(G)$ and $\mathtt{W}^*(G)\leq u^*(G)$. Reversing the roles of $\mathtt{W}$ and ${\tt U}$, it follows that $\mathtt{W}(G)={\tt U}(G)$ and $\mathtt{W}^*(G)={\tt U}^*(G)$. This justifies the notation \begin{align*} \mathfrak{V}(G) & = \mathfrak{V}(\mathtt{W})(G)=\mathtt{W}(G),\\ \mathfrak{V}^*(G) & = \mathfrak{V}(\mathtt{W})^*(G) = \mathtt{W}^*(G). \end{align*} The verbal and marginal groups are dual in the following sense \cite{Hall:margin}: for a group $G$, \begin{equation} \mathfrak{V}(G)=1\quad \Leftrightarrow\quad G\in\mathfrak{V} \quad \Leftrightarrow \quad \mathfrak{V}^*(G)=G. \end{equation} Also, verbal subgroups are radical, $\mathfrak{V}(G/\mathfrak{V}(G))=1$, and marginal subgroups are idempotent, $\mathfrak{V}^*(\mathfrak{V}^*(G))=\mathfrak{V}^*(G)$, but verbal subgroups are not generally idempotent and marginal subgroups are not generally radical. \begin{ex}\label{ex:varieties} \begin{enumerate}[(i)] \item The class $\mathfrak{A}$ of abelian groups is a group variety defined by $[x_1, x_2]$. The $\mathfrak{A}$-verbal subgroup of a group is the commutator subgroup and the $\mathfrak{A}$-marginal subgroup is the center. \item The class $\mathfrak{N}_c$ of nilpotent groups of class at most $c$ is a group variety defined by $[x_1,\dots,x_{c+1}]$ (i.e. $[x_1]=x_1$ and $[x_1,\dots,x_{i+1}]=[[x_1,\dots,x_i],x_{i+1}]$, for all $i\in \mathbb{N}$). Also, $\mathfrak{N}_c(G)=\gamma_{c+1}(G)$ and $\mathfrak{N}_c^*(G)=\zeta_c(G)$ \cite[2.3]{Robinson}. \item The class $\mathfrak{S}_d$ of solvable groups of derived length at most $d$ is a group variety defined by $\delta_d(x_1,\dots,x_{2^d})$ where $\delta_1(x_1)=x_1$ and for all $i\in\mathbb{N}$, $$\delta_{i+1}(x_1,\dots,x_{2^{i+1}}) =[\delta_i(x_1,\dots,x_{2^i}),\delta_i(x_{2^i+1},\dots,x_{2^{i+1}})].$$ Predictably, $\mathfrak{S}_d(G)=G^{(d)}$ is the $d$-th derived group of $G$. It appears that $\mathfrak{S}_d^*(G)$ is not often used and has no name. (This may be good precedent for $\mathfrak{S}_d^*(G)$ can be trivial while $G$ is solvable; thus, the series $\mathfrak{S}^*_1(G)\leq \mathfrak{S}^*_2(G)\leq \cdots $ need not be strictly increasing.) \end{enumerate} \end{ex} Verbal and marginal subgroups are characteristic in $G$ and verbal subgroups are also fully invariant \cite{Hall:margin}. So if $G$ is an $\Omega$-group then so is $\mathfrak{V}(G)$. Moreover, \begin{equation}\label{eq:verbal-closure} G\in\mathfrak{V}^{\Omega} \textnormal{ if, and only if, $G$ is an $\Omega$-group and } \mathfrak{V}(G)=1. \end{equation} Unfortunately, marginal subgroups need not be fully invariant (e.g. the center of a group). In their place, we use the $\Omega$-invariant marginal subgroup $(\mathfrak{V}^{\Omega})^{*}(G)$, i.e. the largest normal $\Omega$-subgroup of $\mathfrak{V}^*(G)$. Since $\mathfrak{V}$ is closed to subgroups it follows that $(\mathfrak{V}^{\Omega})^{*}(G)\in\mathfrak{V}$. Furthermore, if $G$ is an $\Omega$-group and $G\in\mathfrak{V}$ then $\mathfrak{V}^*(G)=G$ and so the $\Omega$-invariant marginal subgroup is $G$. Thus, \begin{equation}\label{eq:marginal-closure} G\in\mathfrak{V}^{\Omega} \textnormal{ if, and only if, $G$ is an $\Omega$-group and } \mathfrak{V}^{*}(G)=G. 
\end{equation} In our special setting all operators act as automorphisms and so the invariant marginal subgroup is indeed the marginal subgroup. Nevertheless, to avoid confusion insist that the marginal subgroup of a variety of $\Omega$-groups refers to the $\Omega$-invariant marginal subgroup. \subsection{Rings, frames, and modules}\label{sec:rings} We involve some standard theorems for associative unital finite rings and modules. Standard references for our uses include \cite[Chapters 1--3]{Herstein:rings} and \cite[Chapters I--II, V.3]{Jacobson:Lie}. Throughout this section $R$ denotes a finite associative unital ring. A $e\in R-\{0\}$ is \emph{idempotent} if $e^2=e$. An idempotent is \emph{proper} if it is not $1$ (as we have excluded $0$ as an idempotent). Two idempotents $e,f\in R$ are \emph{orthogonal} if $ef=0=fe$. An idempotent is \emph{primitive} if it is not the sum of two orthogonal idempotents. Finally, a \emph{frame} $\mathcal{E}\subseteq R$ is a set of pairwise orthogonal primitive idempotents of $R$ which sum to $1$. We use the following properties. \begin{lem}[Lifting idempotents]\label{lem:lift-idemp} Let $R$ be a finite ring. \begin{enumerate}[(i)] \item If $e\in R$ such that $e^2-e\in J(R)$ (the Jacobson radical) then for some $n\leq \log_2 |J(R)|$, $(e^2-e)^n=0$ and \begin{equation*} \hat{e}= \sum_{i=0}^{n-1}\binom{2n-1}{i} e^{2n-1-i}(1-e)^i \end{equation*} is an idempotent in $R$. Furthermore, $\widehat{1-e}=1-\hat{e}$. \item $\mathcal{E}$ is a frame of $R/J(R)$ then $\hat{\mathcal{E}}=\{\hat{e}:e\in\mathcal{E}\}$ is a frame of $R$. \item Frames in $R$ are conjugate by a unit in $R$; in particular, if $R$ is commutative then $R$ has a unique frame. \end{enumerate} \end{lem} \begin{proof} Part (i) is verified directly, compare \cite[(6.7)]{Curtis-Reiner}. Part (ii) follows from induction on (i). For (iii) see \cite[p. 141]{Curtis-Reiner}. \end{proof} If $M$ is an $R$-module and $e$ is an idempotent of $\End_R M$ then $M=Me\oplus M(1-e)$. Furthermore, if $M=E\oplus F$ as an $R$-module, then the projection $e_E:M\to M$ with kernel $F$ and image $E$ is an idempotent endomorphism of $M$. Thus, every direct $R$-decomposition $\mathcal{M}$ of $M$ is parameterized by a set $\mathcal{E}(\mathcal{M})=\{e_E : E\in\mathcal{M}\}$ of pairwise orthogonal idempotents of $\End_R M$ which sum to $1$. Remak $R$-decompositions of $M$ correspond to frames of $\End_R M$. \subsection{Polynomial-time toolkit} \label{sec:tools} We use this section to specify how we intend to compute with groups of permutations. We operate in the context of quotients of permutation groups and borrow from the large library of polynomial-time algorithms for this class of groups. We detail the problems we use in our proof of \thmref{thm:FindRemak} so that in principle any computational domain with polynomial-time algorithms for these problems will admit a theorem similar to \thmref{thm:FindRemak}. The majority of algorithms which we cite do not provide specific estimates on the polynomial timing. Therefore, our own main theorems will not have specific estimates. The group $S_n$ denotes the permutations on $\{1,\dots,n\}$. Given $\mathtt{X}\subseteq S_n$, a \emph{straight-line program} over $\mathtt{X}$ is a recursively defined function on $\mathtt{X}$ which evaluates to a word over $\mathtt{X}$, but can be stored and evaluated in an efficient manner; see \cite[p. 10]{Seress:book}. To simplify notation we treat these as elements in $S_n$. 
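The following minimal Python sketch, our own illustration with an ad hoc instruction format rather than the one of \cite{Seress:book}, shows why straight-line programs are an efficient encoding: a program with $m+1$ instructions can reach the word $x^{2^m}$, whose length as a plain word over $\mathtt{X}$ is exponential in $m$.
\begin{verbatim}
def mul(p, q):                       # compose permutations: apply p, then q
    return tuple(q[p[i]] for i in range(len(p)))

def evaluate_slp(program, generators):
    # each instruction is ('gen', i) or ('mul', j, k), where j and k
    # refer to the results of earlier instructions
    results = []
    for step in program:
        if step[0] == 'gen':
            results.append(generators[step[1]])
        else:
            results.append(mul(results[step[1]], results[step[2]]))
    return results[-1]

x = (1, 2, 3, 4, 5, 6, 7, 0)         # an 8-cycle in S_8
# four instructions computing x^(2^3) = x^8 by repeated squaring
program = [('gen', 0)] + [('mul', i, i) for i in range(3)]
print(evaluate_slp(program, [x]))    # the identity (0, 1, ..., 7)
\end{verbatim}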
Write $\mathbb{G}_n$ for the class of groups $G$ encoded by $(\mathtt{X}:\mathtt{R})$ where $\mathtt{X}\subseteq S_n$ and $\mathtt{R}$ is a set of straight-line programs such that \begin{equation}\label{eq:def-G} G=\langle\mathtt{X}\rangle/N,\qquad N:=\left\langle \mathtt{R}^{\langle \mathtt{X}\rangle}\right\rangle\leq \langle\mathtt{X}\rangle\leq S_n. \end{equation} The notation $\mathbb{G}_n$ intentionally avoids reference to the permutation domain as the algorithms we consider can be adapted to other computational domains. Also, observe that a group $G\in\mathbb{G}_n$ may have no small degree permutation representation. For example, the extraspecial group $2^{1+2n}_+$ is a quotient of $D_8^{n}\leq S_{4n}$; yet, the smallest faithful permutation representation of $2^{1+2n}_+$ has degree $2^n$ \cite[Introduction]{Neumann:perm-grp}. It is misleading to think of $\mathtt{R}$ in \eqref{eq:def-G} as relations for the generators $\mathtt{X}$; indeed, elements in $\mathtt{X}$ are also permutations and so there are relations implied on $\mathtt{X}$ which may not be implied by $\mathtt{R}$. We write $\ell(\mathtt{R})$ for the sum of the lengths of straight-line programs in $\mathtt{R}$. A homomorphism $f:G\to H$ of groups $G=(\mathtt{X} :\mathtt{R}),H=({\tt Y}:{\tt S}) \in\mathbb{G}_n$ is encoded by storing $\mathtt{X} f$ as straight-line programs in ${\tt Y}$. An $\Omega$-group $G$ is encoded by $G=(\mathtt{X}:\mathtt{R})\in\mathbb{G}_n$ along with a function $\theta:\Omega\to \End G$. We write $\mathbb{G}_n^{\Omega}$ for the set of $\Omega$-groups encoded in that fashion. A \emph{polynomial-time} algorithm with input $G=(\mathtt{X}:\mathtt{R})\in \mathbb{G}_n^{\Omega}$ returns an output using a polynomial in $|\mathtt{X}|n+\ell(\mathtt{R})+\ell(\Omega)$ number of steps. In some cases $|\mathtt{X}|n+\ell(\mathtt{R})\in O(\log |G|)$; so, $|G|$ can be exponentially larger than the input size. When we say ``given an $\Omega$-group $G$'' we shall mean $G\in\mathbb{G}_n^{\Omega}$. Our objective in this paper is to solve the following problem. \begin{prob}{\sc Remak-$\Omega$-Decomposition}\label{prob:FindRemak} \begin{description} \item[Given] an $\Omega$-group $G$, \item[Return] a Remak $\Omega$-decomposition for $G$. \end{description} \end{prob} The problems \probref{prob:Order}--\probref{prob:MinSNorm} have polynomial-time solutions for groups in $\mathbb{G}_n^{\Omega}$. \begin{prob}{\sc Order}\label{prob:Order}\cite[P1]{KL:quotient} \begin{description} \item[Given] a group $G$, \item[Return] $|G|$. \end{description} \end{prob} \begin{prob}{\sc Member}\label{prob:Member}\cite[3.1]{KL:quotient} \begin{description} \item[Given] a group $G$, a subgroup $H=(\mathtt{X}':\mathtt{R}')$ of $G$, and $g\in G$, \item[Return] false if $g\notin H$; else, a straight-line program in $\mathtt{X}'$ reaching $g\in H$. \end{description} \end{prob} We require the means to solve systems of linear equations, or determine that no solution exists, in the following generalized setting. \begin{prob}{\sc Solve}\label{prob:Solve}\cite[Proposition 3.7]{KLM:Sylow} \begin{description} \item[Given] a group $G$, an abelian normal subgroup $M$, a function $f\in G^{\mathtt{X}}$ of constants in $G$, and a set $\mathtt{W}\subseteq F(\mathtt{X})$ of words encoded via straight-line programs; \item[Return] false if $w(f\mu)\neq 1$ for all $\mu\in M^{\mathtt{X}}$; else, generators for the solution space $\{\mu\in M^{\mathtt{X}} : w(f\mu)=1\}$. 
\end{description} \end{prob} \begin{prob}{\sc Presentation}\label{prob:pres}\cite[P2]{KL:quotient} \begin{description} \item[Given] given a group $G$ and a normal subgroup $M$, \item[Return] a constructive presentation $\{\langle\mathtt{X}|\mathtt{R}\rangle, f,\ell\}$ for $G$ mod $M$. \end{description} \end{prob} \begin{prob}{\sc Minimal-Normal}\label{prob:MinNorm}\cite[P11]{KL:quotient} \begin{description} \item[Given] a group $G$, \item[Return] a minimal normal subgroup of $G$. \end{description} \end{prob} \begin{prob}{\sc Normal-Centralizer}\label{prob:CentNorm}\cite[P6]{KL:quotient} \begin{description} \item[Given] a group $G$ and a normal subgroup $H$, \item[Return] $C_G(H)$. \end{description} \end{prob} \begin{prob}{\sc Primary-Decomposition}\label{prob:Primary} \begin{description} \item[Given] an abelian group $A\in \mathbb{G}_n$, \item[Return] a primary decomposition for $A=\bigoplus_{v\in{\tt B}} \mathbb{Z}_{p^e} v$, where for each $v\in {\tt B}$, $|v|=p^e$ for some prime $p=p(v)$. \end{description} \end{prob} We call $\mathcal{X}$, as in {\sc Primary-Decomposition}, a \emph{basis} for $A$. The polynomial-time solution of {\sc Primary-Decomposition} is routine. Let $A=(\mathtt{X} : \mathtt{R})\in \mathbb{G}_n$. Use {\sc Order} to compute $|A|$. As $A$ is a quotient of a permutation group, the primes dividing $|A|$ are less than $n$. Thus, pick a prime $p\mid |A|$ and write $|A|=p^e m$ where $(p,m)=1$. Set $A_p=A^{m}$. Using {\sc Member} build a basis ${\tt B}_p$ for $A_p$ by unimodular linear algebra. (Compare \cite[Section 2.3]{Wilson:algo-cent}.) The return is $\bigsqcup_{p\mid |A|} {\tt B}_p$. We involve some problems for associative rings. For ease we assume that all rings $R$ are finite of characteristic $p^e$ and specified with a basis ${\tt B}$ over $\mathbb{Z}_{p^e}$. To encode the multiplication in $R$ we store structure constants $\{\lambda_{xy}^z \in \mathbb{Z}_{p^e} : x,y,z\in {\tt B}\}$ which are defined so that: \begin{equation*} \left(\sum_{x\in\mathcal{X}} r_x x\right)\left(\sum_{y\in\mathcal{X}} s_y y\right) =\sum_{z\in{\tt B}} \left(\sum_{x,y\in\mathcal{X}} r_x \lambda_{xy}^{z} s_y\right)z \end{equation*} where, for all $x$ and all $y$ in ${\tt B}$, $r_x,s_y\in\mathbb{Z}_{p^e}$. \begin{prob}{\sc Frame}\label{prob:Frame} \begin{description} \item[Given] an associative unital ring $R$, \item[Return] a frame of $R$. \end{description} \end{prob} {\sc Frame} has various nondeterministic solutions \cite{EG:fast-alge,Ivanyos:fast-alge} with astonishing speed. However, we need a deterministic solution such as in the work of Ronyai. \begin{thm}[Ronyai \cite{Ronyai}]\label{thm:Frame} For rings $R$ specified as an additive group in $\mathbb{G}_n$ with a basis and with structure constants with respect to the basis, {\sc Frame} is solvable in polynomial-time in $p+n$ where $|R|=p^n$. \end{thm} \begin{proof} First pass to ${\bf R}=R/pR$ and so create an algebra over the field $\mathbb{Z}_p$. Now \cite[Theorem 2.7]{Ronyai} gives a deterministic polynomial-time algorithm which finds a basis for the Jacobson radical of ${\bf R}$. This allows us to pass to ${\bf S}={\bf R}/J({\bf R})$, which is isomorphic to a direct product of matrix rings over finite fields. Finding the frame for ${\bf S}$ can be done by finding the minimal ideals $\mathcal{M}$ of ${\bf S}$ \cite[Corollary 3.2]{Ronyai}. 
Next, for each $M\in\mathcal{M}$, build an isomorphism $M\to M_n(\mathbb{F}_q)$ \cite[Corollary 5.3]{Ronyai} and choose a frame of idempotents from $M_n(\mathbb{F}_q)$ and let $\mathcal{E}_M$ be the pullback to $M$. Set $\mathcal{E} =\bigsqcup_{M\in\mathcal{M}} \mathcal{E}_M$, noting that $\mathcal{E}$ is a frame for ${\bf S}$. Hence, use the power series of \lemref{lem:lift-idemp} to lift the frame $\mathcal{E}$ to a frame $\hat{\mathcal{E}}$ for $R$. \end{proof} With \thmref{thm:Frame} we set up and solve a special instance of \thmref{thm:FindRemak}. \begin{prob}{\sc Abelian.Remak-$\Omega$-Decomposition}\label{prob:FindRemak-abelian} \begin{description} \item[Given] an abelian $\Omega$-group $A$, \item[Return] a Remak $\Omega$-decomposition for $A$. \end{description} \end{prob} \begin{coro}\label{coro:FindRemak-abelian} {\sc Abelian.Remak-$\Omega$-Decomposition} has a polynomial-time solution. \end{coro} \begin{proof} Let $A\in\mathbb{G}_n^{\Omega}$ be abelian. \emph{Algorithm.} Use {\sc Primary-Decomposition} to write $A$ in a primary decomposition. For each prime $p$ dividing $|A|$, let $A_p$ be the $p$-primary component. Write a basis for $\End A_p$ (noting that $\End A_p$ is a checkered matrix ring determined completely by the Remak decomposition of $A_p$ as a $\mathbb{Z}$-module \cite[p. 196]{McDonald:fin-ring}) and use {\sc Solve} to find a basis for $\End_{\Omega} A_p$. Finally, use {\sc Frame} to find a frame $\mathcal{E}_p$ for $\End_{\Omega} A_p$. Set $\mathcal{A}_p=\{A_p e: e\in\mathcal{E}_p\}$. Return $\bigsqcup_{p\mid |A|} \mathcal{A}_p$. \emph{Correctness.} Every direct $\Omega$-decomposition of $A$ corresponds to a set of pairwise orthogonal idempotents in $\End_{\Omega} A$ which sum to $1$. Furthermore, Remak $\Omega$-decompositions correspond to frames. \emph{Timing.} The polynomial timing follows from \thmref{thm:Frame} together with the observation that $p\leq n$ whenever $A\in \mathbb{G}_n$. \end{proof} \begin{remark}\label{rem:matrix} In the context of groups of matrices our solution to {\sc Abelian.Remak-$\Omega$-Decomposition} is impossible as it invokes integer factorization, and {\sc Member} is a version of a discrete logarithm problem in that case. The primes involved in the orders of matrix groups can be exponential in the input length and so these two routines are infeasible. For solvable matrix groups whose primes are bounded and for so-called $\Gamma_d$-matrix groups the required problems in this section have polynomial-time solutions, cf. \cite{Luks:mat,Taku}. \end{remark} \begin{prob}{\sc Irreducible}\label{prob:Irreducible}\cite[Corollary 5.4]{Ronyai} \begin{description} \item[Given] an associative unital ring $R$, an abelian group $V$, and a homomorphism $\varphi:R\to \End V$, \item[Return] an irreducible $R$-submodule of $V$. \end{description} \end{prob} As with the algorithm {\sc Frame}, there are nearly optimal nondeterministic methods for {\sc Irreducible}, for example, the MeatAxe \cite{Meataxe1,Meataxe2}; however, here we are concerned solely with deterministic methods. \begin{prob}{\sc Minimal-$\Omega$-Normal}\label{prob:MinSNorm} \begin{description} \item[Given] an $\Omega$-group $G$ where $\Omega$ acts on $G$ as automorphisms, \item[Return] a minimal $(\Omega\cup G)$-subgroup of $G$. \end{description} \end{prob} \begin{prop} {\sc Minimal-$\Omega$-Normal} has a polynomial-time solution. \end{prop} \begin{proof} Let $G=(\mathtt{X}: \mathtt{R})\in\mathbb{G}_n^{\Omega}$. \emph{Algorithm.} Use {\sc Minimal-Normal} to compute a minimal normal subgroup $N$ of $G$.
Using {\sc Member}, run the following transitive closure: set $M:=N$, then while there exists $w\in \Omega\cup \mathtt{X}$ such that $M^w\neq M$, set $M=\langle M,M^w\rangle$. Now $M=\langle N^{\Omega\cup G}\rangle$. If $N$ is non-abelian then return $M$; otherwise, treat $M$ as an $(\Omega\cup G)$-module and use {\sc Irreducible} to find an irreducible $(\Omega\cup G)$-submodule $K$ of $M$. Return $K$. \emph{Correctness.} Note that $M=\langle N^{\Omega\cup G}\rangle=N N^{w_1} N^{w_2}\cdots N^{w_t}$ for some $w_1,\dots,w_t\in \langle \Omega\theta\rangle\ltimes G\leq \Aut G\ltimes G$. As $N$ is minimal normal, so is each $N^{w_i}$ and therefore $M$ is a direct product of isomorphic simple groups. If $N$ is non-abelian then the normal subgroups of $M$ are its direct factors and furthermore, every direct factor $F$ of $M$ satisfies $M=\langle F^{\Omega\cup G}\rangle$. If $N$ is abelian then $N\cong\mathbb{Z}_p^d$ for some prime $p$. A minimal $(\Omega\cup G)$-subgroup of $G$ contained in $M$ is therefore an irreducible $(\Omega\cup G)$-submodule of $M$. \emph{Timing.} First the algorithm executes a normal closure using the polynomial-time algorithm {\sc Member}. We test if $N$ is abelian by computing the commutators of the generators. The final step is the polynomial-time algorithm {\sc Irreducible}. \end{proof} \section{Lifting, extending, and matching direct decompositions}\label{sec:lift-ext} We dedicate this section to understanding when a direct decomposition of a quotient or subgroup lifts or extends to a direct decomposition of the whole group. Ultimately we plan to use these ideas in the algorithm for \thmref{thm:FindRemak}, but the questions are of independent interest. The highlights of this section are Theorems \ref{thm:Lift-Extend} and \ref{thm:chain} and Corollaries \ref{coro:canonical-graders} and \ref{coro:canonical-grader-II}. Fix a short exact sequence of $\Omega$-groups: \begin{equation}\label{eq:SES} \xymatrix{ 1\ar[r] & K \ar[r]^{i} & G\ar[r]^{q} & Q\ar[r] & 1. } \end{equation} With respect to \eqref{eq:SES} we study instances of the following problems. \begin{description} \item[Extend] for which direct $(\Omega\cup G)$-decompositions $\mathcal{K}$ of $K$ is there a Remak $\Omega$-decomposition $\mathcal{R}$ of $G$ such that $\mathcal{K}i = \mathcal{R}\cap (Ki)$? \item[Lift] for which direct $(\Omega\cup G)$-decompositions $\mathcal{Q}$ of $Q$ is there a Remak $\Omega$-decomposition $\mathcal{R}$ of $G$ such that $\mathcal{Q} = \mathcal{R}q$? \item[Match] for which pairs $(\mathcal{K},\mathcal{Q})$ of direct $(\Omega\cup G)$-decompositions of $K$ and $Q$ respectively is there a Remak $\Omega$-decomposition $\mathcal{R}$ of $G$ which is an extension of $\mathcal{K}$ and a lift of $\mathcal{Q}$, i.e. $\mathcal{K}i=\mathcal{R}\cap (Ki)$ and $\mathcal{Q}=\mathcal{R}q$? \end{description} Finding direct decompositions which extend or lift is surprisingly easy (\thmref{thm:Lift-Extend}), but we have had only narrow success in finding matches. Crucial exceptions are $p$-groups of class $2$ (\thmref{thm:Match-class2}) where the problem reduces to commutative ring theory. \subsection{Graded extensions}\label{sec:induced} In this section we place some reasonable parameters on the short exact sequences which we consider in the role of \eqref{eq:SES}. This section depends mostly on the material of Sections \ref{sec:op-groups}--\ref{sec:decomps}. \begin{lemma}\label{lem:induced} Let $G$ be a group with a direct $\Omega$-decomposition $\mathcal{H}$.
If $X$ is an $(\Omega\cup G)$-subgroup of $G$ and $X=\langle \mathcal{H}\intersect X\rangle$, then \begin{enumerate}[(i)] \item $\mathcal{H}\intersect X$ is a direct $\Omega$-decomposition of $X$, \item $\mathcal{H}X/X$ is a direct $\Omega$-decomposition of $G/X$, \item $\mathcal{H}-\{H\in\mathcal{H} : H\leq X\}$, $\mathcal{H}X$, and $\mathcal{H}X/X$ are in a natural bijection, and \item if $Y$ is an $(\Omega\cup G)$-subgroup of $G$ with $Y=\langle\mathcal{H}\cap Y\rangle$, then $X\cap Y=\langle\mathcal{H}\cap (X\cap Y)\rangle$ and $XY=\langle \mathcal{H}\cap XY\rangle$. \end{enumerate} \begin{proof} For (i), $(H\cap X)\cap \langle \mathcal{H}\cap X- \{H\cap X\}\rangle=1$ for all $H\cap X\in\mathcal{H}\cap X$. For (ii), let $|\mathcal{H}|>1$, take $H\in\mathcal{H}$, and set $J=\langle\mathcal{H}-\{H\}\rangle$. From (i): $HX\intersect JX=(H\times (J\intersect X))\intersect ((H\intersect X)\times J)=(H\intersect X)\times (J\intersect X)=X$. For (iii), the functions $H\mapsto HX\mapsto HX/X$, for each $H\in\mathcal{H}-\{H\in\mathcal{H}: H\leq X\}$, suffice. Finally for (iv), fix $H\in\mathcal{H}$ and let $g\in X\cap Y$. There are unique $h\in H$ and $k\in\langle\mathcal{H}-\{H\}\rangle$ with $g=hk$. By (i) and the uniqueness, we get that $h\in (H\cap X)\cap (H\cap Y)$ and $k\in \langle \mathcal{H}-\{H\}\rangle \cap (X\cap Y)$. So $g\in \langle \{H\cap (X\cap Y),\langle \mathcal{H}-\{H\}\rangle \cap (X\cap Y)\}\rangle$. By induction on $|\mathcal{H}|$, $X\cap Y\leq \langle \mathcal{H}\cap (X\cap Y)\rangle \leq X\cap Y$. The argument for $XY$ is similar. \end{proof} We now specify which short exact sequences we consider. \begin{defn}\label{def:graded} A short exact sequence $1\to K\overset{i}{\to} G\overset{q}{\to} Q\to 1$ of $\Omega$-groups is \emph{$\Omega$-graded} if for all (finite) direct $\Omega$-decompositions $\mathcal{H}$ of $G$, it follows that $Ki = \langle \mathcal{H}\cap (Ki) \rangle$. Also, if $M$ is an $(\Omega\cup G)$-subgroup of $G$ such that the canonical short exact sequence $1\to M\to G\to G/M\to 1$ is $\Omega$-graded then we say that $M$ is $\Omega$-graded. \end{defn} \lemref{lem:induced} parts (i) and (ii) imply that every direct $\Omega$-decomposition of $G$ induces direct $\Omega$-decompositions of $K$ and $Q$ whenever $1\to K\overset{i}{\to} G\overset{q}{\to} Q\to 1$ is $\Omega$-graded. The universal quantifier in the definition of graded exact sequences may seem difficult to satisfy; nevertheless, in Section \ref{sec:direct-ext} we show that many well-known subgroups are graded, for example the commutator subgroup. \begin{prop}\label{prop:graded-lat} \begin{enumerate}[(i)] \item If $M$ is an $\Omega$-graded subgroup of $G$ and $N$ an $(\Omega\cup G)$-graded subgroup of $M$, then $N$ is an $\Omega$-graded subgroup of $G$. \item The set of $\Omega$-graded subgroups of $G$ is a modular sublattice of the lattice of $(\Omega\cup G)$-subgroups of $G$. \end{enumerate} \end{prop} \begin{proof} For (i), if $\mathcal{H}$ is a direct $\Omega$-decomposition of $G$ then by \lemref{lem:induced}(i), $\mathcal{H}\cap M$ is a direct $\Omega$-decomposition of $M$ and so $\mathcal{H}\cap N=(\mathcal{H}\cap M)\cap N$ is a direct $\Omega$-decomposition of $N$. Also (ii) follows from \lemref{lem:induced}(iv).
\end{proof} \begin{lem}\label{lem:KRS} For every Remak $\Omega$-decomposition $\mathcal{H}$ and every direct $\Omega$-decomposition $\mathcal{K}$ of $G$, \begin{enumerate}[(i)] \item $\mathcal{H}M$ refines $\mathcal{K}M$ for all $(\Omega\cup G)$-subgroups $M\geq \zeta_1(G)$, \item $\mathcal{H}\intersect M$ refines $\mathcal{K}\intersect M$ for all $(\Omega\cup G)$-subgroups $M \leq \gamma_2(G)$. \end{enumerate} \end{lem} \begin{proof} Let $\mathcal{T}$ be a Remak $\Omega$-decomposition of $G$ which refines $\mathcal{K}$. By \thmref{thm:KRS}, there is a $\varphi\in \Aut_{\Omega\cup G} G$ such that $\mathcal{H}\varphi=\mathcal{T}$. From \eqref{eq:central} and the remark following it, $\varphi$ induces the identity on $G/\zeta_1(G)$ and fixes $\gamma_2(G)$ pointwise; hence $\mathcal{H}M=\mathcal{H}M\varphi=\mathcal{T}M$ for every $(\Omega\cup G)$-subgroup $M\geq \zeta_1(G)$, and $\mathcal{H}\cap M=(\mathcal{H}\cap M)\varphi=\mathcal{T}\cap M$ for every $(\Omega\cup G)$-subgroup $M\leq \gamma_2(G)$. As $\mathcal{T}$ refines $\mathcal{K}$, $\mathcal{T}M$ refines $\mathcal{K}M$ and $\mathcal{T}\cap M$ refines $\mathcal{K}\cap M$, which proves (i) and (ii). \end{proof} \begin{thm}\label{thm:Lift-Extend} Given the commutative diagram in \figref{fig:LIFT-EXT} which is exact and $\Omega$-graded in all rows and all columns, the following hold. \begin{figure} \caption{A commutative diagram of $\Omega$-groups which is exact and $\Omega$-graded in all rows and all columns.} \label{fig:LIFT-EXT} \end{figure} \begin{enumerate}[(i)] \item If $\zeta_1(\hat{Q})r=1$ then for every Remak $\Omega$-decomposition $\hat{\mathcal{Q}}$ of $\hat{Q}$ and every Remak $\Omega$-decomposition $\mathcal{H}$ of $G$, $\mathcal{Q}:=\hat{\mathcal{Q}}r$ refines $\mathcal{H}q$. In particular, $\mathcal{H}$ lifts a partition of $\mathcal{Q}$ which is unique to $(G, i,q)$. \item If $\gamma_2(K)\leq \hat{K}j$ then for every Remak $(\Omega\cup G)$-decomposition $\mathcal{K}$ of $K$ and every Remak $\Omega$-decomposition $\mathcal{H}$ of $G$, $\mathcal{K}i\cap \hat{K}\hat{i}$ refines $\mathcal{H}\cap \left(\hat{K}\hat{i}\right)$. In particular, $\mathcal{H}$ extends a partition of $\hat{\mathcal{K}}:= (\mathcal{K}\cap \hat{K}j)j^{-1}$ which is unique to $(G,\hat{i},\hat{q})$. \end{enumerate} \end{thm} \begin{proof} Fix a Remak $\Omega$-decomposition $\mathcal{H}$ of $G$. As $\hat{K}$ and $K$ are $\Omega$-graded, it follows that $\mathcal{H}\hat{q}$ is a direct $\Omega$-decomposition of $\hat{Q}$ (\lemref{lem:induced}(ii)). Let $\mathcal{T}$ be a Remak $\Omega$-decomposition of $\hat{Q}$ which refines $\mathcal{H}\hat{q}$. By \lemref{lem:KRS}(i), $\hat{\mathcal{Q}}\zeta_1(\hat{Q}) =\mathcal{T}\zeta_1(\hat{Q})$ and so $\hat{\mathcal{Q}}r = \mathcal{T}r$. Therefore, $\mathcal{Q}:=\hat{\mathcal{Q}}r$ refines $\mathcal{H}\hat{q}r=\mathcal{H}q$. That proves (i). To prove (ii), by \lemref{lem:induced}(i) we have that $\mathcal{H}\cap (Ki)$ is a direct $(\Omega\cup G)$-decomposition of $Ki$. Let $\mathcal{T}$ be a Remak $(\Omega\cup G)$-decomposition of $Ki$ which refines $\mathcal{H}\cap (Ki)$. By \lemref{lem:KRS}(ii), $\hat{\mathcal{K}}=\mathcal{K}i\cap (\hat{K}\hat{i})=\mathcal{T}\cap \left(\hat{K}\hat{i}\right)$. Therefore, $\mathcal{K}i \cap \left(\hat{K}\hat{i}\right)$ refines $\mathcal{H}\cap \left(\hat{K}\hat{i}\right)$. \end{proof} \thmref{thm:Lift-Extend} implies the following special setting where the match problem can be answered. This is the only instance we know where the matching problem can be solved without considering the cohomology of the extension.
\begin{coro}\label{coro:Match-perfect-centerless} If $1\to K\to G\to Q\to 1$ is a $\Omega$-graded short exact sequence where $K=\gamma_2(K)$ and $\zeta_1(Q)=1$; then for every Remak $(\Omega\cup G)$-decomposition $\mathcal{K}$ of $K$ and $\mathcal{Q}$ of $Q$, there are partitions $[\mathcal{K}]$ and $[\mathcal{Q}]$ unique to the short exact sequence such that every Remak $\Omega$-decomposition $\mathcal{H}$ of $G$ matches $([\mathcal{K}],[\mathcal{Q}])$. \end{coro} \subsection{Direct classes, and separated and refined decompositions}\label{sec:direct-class} In this section we begin our work to consider the extension, lifting, and matching problems in a constructive fashion. We introduce classes of groups which are closed to direct products and direct decompositions and show how to use these classes to control the exchange of direct factors. \begin{defn} A class $\mathfrak{X}$ (or $\mathfrak{X}^{\Omega}$ if context demands) of $\Omega$-groups is \emph{direct} if $1\in\mathfrak{X}$, and $\mathfrak{X}$ is closed to $\Omega$-isomorphisms, as well as the following: \begin{enumerate}[(i)] \item if $G\in\mathfrak{X}$ and $H$ is a direct $\Omega$-factor of $G$, then $H\in\mathfrak{X}$, and \item if $H,K\in\mathfrak{X}$ then $H\times K\in\mathfrak{X}$. \end{enumerate} \end{defn} Every variety of $\Omega$-groups is a direct class by \thmref{thm:BK} and to specify the finite groups in a direct class it is sufficient to specify the directly $\Omega$-indecomposable group it contains. However, in practical terms there are few settings where the directly $\Omega$-indecomposable groups are known. \begin{defn} A direct $\Omega$-decomposition $\mathcal{H}$ is \emph{$\mathfrak{X}$-separated} if for each $H\in\mathcal{H}-\mathfrak{X}$, if $H$ has a direct $\Omega$-factor $K$, then $K\notin\mathfrak{X}$. If additionally every member of $\mathcal{H}\cap \mathfrak{X}$ is directly $\Omega$-indecomposable, then $\mathcal{H}$ is \emph{$\mathfrak{X}$-refined}. \end{defn} \begin{prop}\label{prop:direct-class} Suppose that $\mathfrak{X}$ is a direct class of $\Omega$-groups, $G$ an $\Omega$-group, and $\mathcal{H}$ a direct $\Omega$-decomposition of $G$. The following hold. \begin{enumerate}[(i)] \item $\langle\mathcal{H}\cap\mathfrak{X}\rangle\in\mathfrak{X}$. \item If $\mathcal{H}$ is $\mathfrak{X}$-separated and $\mathcal{K}$ is a direct $\Omega$-decomposition of $G$ which refines $\mathcal{H}$, then $\mathcal{K}$ is $\mathfrak{X}$-separated. \item $\mathcal{H}$ is a $\mathfrak{X}$-separated if, and only if, $\{\langle\mathcal{H}-\mathfrak{X}\rangle, \langle\mathcal{H}\cap\mathfrak{X}\rangle\}$ is $\mathfrak{X}$-separated. \item Every Remak $\Omega$-decomposition is $\mathfrak{X}$-refined. \item If $\mathcal{H}$ and $\mathcal{K}$ are $\mathfrak{X}$-separated direct $\Omega$-decompositions of $G$ then $(\mathcal{H}-\mathfrak{X})\sqcup (\mathcal{K}\cap\mathfrak{X})$ is an $\mathfrak{X}$-separated direct $\Omega$-decomposition of $G$. \end{enumerate} \end{prop} \begin{proof} First, (i) follows as $\mathfrak{X}$ is closed to direct $\Omega$-products. For (ii), notice that a direct $\Omega$-factor of a $K\in \mathcal{K}$ is also a direct $\Omega$-factor of the unique $H\in\mathcal{H}$ where $K\leq H$. For (iii), the reverse direction follows from (ii). For the forward direction, let $K$ be a direct $\Omega$-factor of $\langle \mathcal{H}-\mathfrak{X}\rangle$. 
Because $\mathfrak{X}$ is closed to direct $\Omega$-factors, if $K\in\mathfrak{X}$ then so is every directly $\Omega$-indecomposable direct $\Omega$-factor of $K$, and so we insist that $K$ is directly $\Omega$-indecomposable. Therefore $K$ lies in a Remak $\Omega$-decomposition of $\langle \mathcal{H}-\mathfrak{X}\rangle$. Let $\mathcal{R}$ be a Remak $\Omega$-decomposition of $\langle \mathcal{H}-\mathfrak{X}\rangle$ which refines $\mathcal{H}-\mathfrak{X}$. By \thmref{thm:KRS} there is a $\varphi\in \Aut_{\Omega\cup G} \langle \mathcal{H}-\mathfrak{X}\rangle$ such that $K\varphi\in \mathcal{R}$ and so $K\varphi$ is a direct $\Omega$-factor of the unique $H\in\mathcal{H}$ where $K\varphi\leq H$. As $\mathcal{H}$ is $\mathfrak{X}$-separated and $K\varphi$ is a direct $\Omega$-factor of $H\in\mathcal{H}$, it follows that $K\varphi \notin\mathfrak{X}$. Thus, $K\notin\mathfrak{X}$ and $\{\langle\mathcal{H}-\mathfrak{X}\rangle, \langle\mathcal{H}\cap\mathfrak{X}\rangle\}$ is $\mathfrak{X}$-separated. For (iv), note that elements of a Remak $\Omega$-decomposition have no proper direct $\Omega$-factors. Finally for (v), let $\mathcal{R}$ and $\mathcal{T}$ be a Remak $\Omega$-decompositions of $G$ which refine $\mathcal{H}$ and $\mathcal{K}$ respectively. Set $\mathcal{U}=\{R\in\mathcal{R}: R\leq \langle \mathcal{H}\cap \mathfrak{X}\rangle\}$. By \thmref{thm:KRS} there is a $\varphi\in\Aut_{\Omega\cup G} G$ such that $\mathcal{U}\varphi\subseteq \mathcal{T}$ and $\mathcal{R}\varphi =(\mathcal{R}-\mathcal{U})\sqcup \mathcal{U}\varphi$. As $\mathfrak{X}$ is closed to isomorphisms, it follows that $\mathcal{U}\varphi\subseteq\mathcal{T}\cap\mathfrak{X}$. As $\mathcal{H}$ is $\mathfrak{X}$-separated, $\mathcal{U}=\mathcal{R}\cap\mathfrak{X}$. As $\Aut_{\Omega\cup G} G$ is transitive on the set of all Remak $\Omega$-decompositions of $G$ (\thmref{thm:KRS}), we have that $|\mathcal{T}\cap\mathfrak{X}|=|\mathcal{R}\cap\mathfrak{X}|=|\mathcal{U}\varphi|$. In particular, $\mathcal{U}\varphi=\mathcal{T}\cap\mathfrak{X}= \{T\in\mathcal{T}: T\leq \langle\mathcal{K}\cap\mathfrak{X}\rangle\}$. Hence, $\mathcal{R}\varphi$ refines $(\mathcal{H}-\mathfrak{X})\sqcup (\mathcal{K}\cap \mathfrak{X})$ and so the latter is a direct $\Omega$-decomposition. \end{proof} \subsection{Up grades and down grades}\label{sec:direct-ext} Here we introduce a companion subgroup to a direct class $\mathfrak{X}$ of $\Omega$-groups. These groups specify the kernels we consider in the problems of extending and lifting in concrete settings. \begin{defn}\label{def:grader} An \emph{up $\Omega$-grader} (resp. \emph{down $\Omega$-grader}) for a direct class $\mathfrak{X}$ of $\Omega$-groups is a function $G\mapsto \mathfrak{X}(G)$ of finite $\Omega$-groups $G$ where $\mathfrak{X}(G)\in\mathfrak{X}$ (resp. $G/\mathfrak{X}(G)\in\mathfrak{X}$) and such that the following hold. \begin{enumerate}[(i)] \item If $G\in\mathfrak{X}$ then $\mathfrak{X}(G)=G$ (resp. $\mathfrak{X}(G)=1$). \item $\mathfrak{X}(G)$ is an $\Omega$-graded subgroup of $G$. \item For direct $\Omega$-factor $H$ of $G$, $\mathfrak{X}(H)=H\cap \mathfrak{X}(G)$. \end{enumerate} The pair $(\mathfrak{X},G\mapsto \mathfrak{X}(G))$ is an up/down \emph{$\Omega$-grading pair}. \end{defn} If $(\mathfrak{X},G\mapsto\mathfrak{X}(G))$ is an $\Omega$-grading pair then we have $\mathfrak{X}(H\times K)=\mathfrak{X}(H)\times \mathfrak{X}(K)$. First we concentrate on general and useful instances of grading pairs. 
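Before turning to the general statements, here is a toy numerical check, our own and with $\Omega=\emptyset$, of the identity $\mathfrak{X}(H\times K)=\mathfrak{X}(H)\times\mathfrak{X}(K)$ for two of the graders appearing in Corollary \ref{coro:canonical-graders}: the centre $\zeta_1$ and the derived subgroup $\gamma_2$, evaluated by brute force on the direct product of a dihedral group of order $8$ and the symmetric group $S_3$ acting on disjoint sets of points.
\begin{verbatim}
def mul(p, q):
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, x in enumerate(p):
        r[x] = i
    return tuple(r)

def comm(a, b):
    return mul(mul(inv(a), inv(b)), mul(a, b))

def closure(gens):
    G, frontier = set(gens), set(gens)
    while frontier:
        new = {mul(a, b) for a in G for b in frontier} | \
              {mul(b, a) for b in frontier for a in G}
        frontier = new - G
        G |= new
    return G

def centre(G):
    return {g for g in G if all(mul(g, h) == mul(h, g) for h in G)}

def derived(G, e):
    return closure([comm(a, b) for a in G for b in G] + [e])

def embed(p, offset, deg):           # act on a block of points, fix the rest
    img = list(range(deg))
    for i, x in enumerate(p):
        img[offset + i] = offset + x
    return tuple(img)

def product_set(A, B):
    return {mul(a, b) for a in A for b in B}

deg = 7
H = closure([embed(p, 0, deg) for p in [(1, 2, 3, 0), (1, 0, 3, 2)]])  # D_8
K = closure([embed(p, 4, deg) for p in [(1, 2, 0), (1, 0, 2)]])        # S_3
G = closure(list(H | K))                                               # H x K
e = tuple(range(deg))

print(centre(G) == product_set(centre(H), centre(K)))                  # True
print(derived(G, e) == product_set(derived(H, e), derived(K, e)))      # True
\end{verbatim}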
\begin{prop}\label{prop:V-inter-1} The marginal subgroup of a variety of $\Omega$-groups is an up $\Omega$-grader and the verbal subgroup is a down $\Omega$-grader for the variety. \end{prop} \begin{proof} Let $\mathfrak{V}=\mathfrak{V}^{\Omega}$ be a variety of $\Omega$-groups with defining laws $\mathtt{W}$ and fix an $\Omega$-group $G$. As the marginal function is idempotent, \eqref{eq:marginal-closure} implies that $\mathfrak{V}^*(G)\in\mathfrak{V}$ and that if $G\in\mathfrak{V}$ then $G=\mathfrak{V}^*(G)$. Similarly, verbal subgroups are radical so that by \eqref{eq:verbal-closure} we have $G/\mathfrak{V}(G)\in\mathfrak{V}$ and when $G\in\mathfrak{V}$ then $\mathfrak{V}(G)=1$. It remains to show properties (ii) and (iii) of \defref{def:grader}. Fix a direct $\Omega$-decomposition $\mathcal{H}$ of $G$, fix an $H\in\mathcal{H}$, and set $K=\langle\mathcal{H}-\{H\}\rangle$. For each $f\in G^{\mathtt{X}}=(H\times K)^{\mathtt{X}}$ there are unique $f_H\in H^{\mathtt{X}}$ and $f_K\in K^{\mathtt{X}}$ such that $f=f_H f_K$. Thus, for all $w\in \mathtt{W}$, $w(f)=w(f_H)w(f_K)$ and so $w(H\times K)=w(H)\times w(K)$. Hence, $\mathfrak{V}(H\times K)=\mathfrak{V}(H)\times \mathfrak{V}(K)$. By induction on $|\mathcal{H}|$, $\mathcal{H}\cap\mathfrak{V}(G)=\{\mathfrak{V}(H):H\in\mathcal{H}\}$ is a direct $\Omega$-decomposition of $\mathfrak{V}(G)$. So $\mathfrak{V}(G)$ is a down $\Omega$-grader. For the marginal case, for all $f'\in \langle (h,k)\rangle^{\mathtt{X}}\leq (H\times K)^{\mathtt{X}}=G^{\mathtt{X}}$ and all $f\in G^{\mathtt{X}}$, again there exist unique $f_H,f'_H\in H^{\mathtt{X}}$ and $f_K,f'_K\in K^{\mathtt{X}}$ such that $f=f_H f_K$ and $f'=f'_H f'_K$. Also, $w(f f')=w(f)$ if, and only if, $w(f_H f'_H)=w(f_H)$ and $w(f_K f'_K)=w(f_K)$. Thus, $w^*(H\times K)=w^*(H)\times w^*(K)$. Hence, $\mathfrak{V}^*(H\times K) =\mathfrak{V}^*(H)\times \mathfrak{V}^*(K)$ and by induction $\mathcal{H}\cap\mathfrak{V}^*(G)$ is a direct $\Omega$-decomposition of $\mathfrak{V}^*(G)$. Thus, $\mathfrak{V}^*(G)$ is an up $\Omega$-grader. \end{proof} \begin{remark} There are examples of infinite direct decompositions $\mathcal{H}$ of infinite groups $G$ and varieties $\mathfrak{V}$, where $\mathfrak{V}(G)\neq \langle \mathcal{H}\cap \mathfrak{V}(G)\rangle$ \cite{Asmanov}. However, our definition of grading purposefully avoids infinite direct decompositions. \end{remark} With \propref{prop:V-inter-1} we get a simultaneous proof of some individually evident examples of direct ascenders and descenders. \begin{coro}\label{coro:canonical-graders} Following the notation of \exref{ex:varieties} we have the following. \begin{enumerate}[(i)] \item The class $\mathfrak{N}_c$ of nilpotent groups of class at most $c$ is a direct class with up grader $G\mapsto \zeta_c(G)$ and down grader $G\mapsto \gamma_c(G)$. \item The class $\mathfrak{S}_d$ of solvable groups of derived length at most $d$ is a direct class with up grader $G\mapsto (\delta_d)^*(G)$ and down grader $G\mapsto G^{(d)}$. 
\item For each prime $p$ the class $\mathfrak{V}([x,y]z^p)$ of elementary abelian $p$-groups is a direct class with up grader $G\mapsto \Omega_1(\zeta_1(G))$ and down grader $G\mapsto [G,G]\mho_1(G)$.\footnote{Here $\Omega_1(X)=\langle x\in X: x^p=1\rangle$ and $\mho_1(X)=\langle x^p : x\in X\rangle$, which are traditional notations having nothing to do with our use of $\Omega$ for operators elsewhere.} \end{enumerate} \end{coro} We also wish to include direct classes $\mathfrak{N}:=\bigcup_{c\in\mathbb{N}} \mathfrak{N}_c$ and $\mathfrak{S}:=\bigcup_{d\in\mathbb{N}} \mathfrak{S}_d$. These classes are not varieties (they are not closed to infinite direct products as required by \thmref{thm:BK}). Therefore, we must consider alternatives to verbal and marginal subgroups for appropriate graders. Our approach mimics the definitions $G\mapsto O_p(G)$ and $G\mapsto O^p(G)$. We explain only the up grader case. \begin{defn} For a class $\mathfrak{X}$, the $\mathfrak{X}$-core, $O_{\mathfrak{X}}(G)$, of a finite group $G$ is the intersection of all $(\Omega\cup G)$-subgroups of $G$ that are maximal subject to being contained in $\mathfrak{X}$. \end{defn} If $\mathfrak{V}$ is a union of a chain $\mathfrak{V}_0\subseteq \mathfrak{V}_1\subseteq\cdots $ of varieties then $1\in\mathfrak{V}$, and so the set of maximal $(\Omega\cup G)$-subgroups of a group $G$ contained in $\mathfrak{V}$ is nonempty. Also $\mathfrak{V}$ is closed to subgroups so that $O_{\mathfrak{V}}(G)\in \mathfrak{V}$. \begin{ex}\label{ex:cores} \begin{enumerate}[(i)] \item $O_{\mathfrak{A}}(G)$ is the intersection of all maximal normal abelian subgroups of $G$. Generally there can be any number of maximal normal abelian subgroups of $G$, so $O_{\mathfrak{A}}(G)$ is not a trivial intersection. \item $O_{\mathfrak{N}_c}(G)$ is the intersection of all maximal normal nilpotent subgroups of $G$ with class at most $c$. As in (i), this need not be a trivial intersection. However, if $c>\log |G|$ then all nilpotent subgroups of $G$ have class at most $c$ and therefore $O_{\mathfrak{N}}(G)=O_{\mathfrak{N}_c}(G)$ is the Fitting subgroup of $G$: the unique maximal normal nilpotent subgroup of $G$. \item $O_{\mathfrak{S}_d}(G)$, $d>\log |G|$, is the unique maximal normal solvable subgroup of $G$, i.e.: the solvable radical $O_{\mathfrak{S}}(G)$ of $G$. \end{enumerate} \end{ex} \begin{lemma}\label{lem:margin-join} Let $\mathfrak{V}$ be a group variety of $\Omega$-groups and $G$ an $\Omega$-group. If $H$ is a $\mathfrak{V}$-subgroup of $G$ then so is $\mathfrak{V}^*(G)H$, that is: $\mathfrak{V}^*(G)H\in\mathfrak{V}$. \end{lemma} \begin{proof} Let $\mathtt{W}$ be a set of defining laws for $\mathfrak{V}$. Let $f\in G^{\mathtt{X}}$ with $\im f\subseteq \mathfrak{V}^*(G) H$. There is a decomposition $f=f' f''$ where $\im f'\subseteq \mathfrak{V}^*(G)$ and $\im f''\subseteq H$. For each $w\in \mathtt{W}$, as $\mathfrak{V}^*(G)\leq w^*(G)$ is marginal in $G$, we get $w(f)=w(f'')$. As $H\in\mathfrak{V}$, $w(f'')=1$. Thus, $w(f)=1$ and so $w(\mathfrak{V}^*(G)H)=1$. It follows that $\mathfrak{V}^*(G)H\in\mathfrak{V}$. \end{proof} \begin{prop}\label{prop:margin-core} If $\mathfrak{V}$ is a group variety of $\Omega$-groups and $G$ an $\Omega$-group, then \begin{enumerate}[(i)] \item $\mathfrak{V}^*(G)\leq O_{\mathfrak{V}}(G)$, and \item if $M$ is an $(\Omega\cup G)$-subgroup then $O_{\mathfrak{V}}(G)O_{\mathfrak{V}}(M)$ is an $(\Omega\cup G)$-subgroup contained in $\mathfrak{V}$. \end{enumerate} \end{prop} \begin{proof} $(i)$.
By \lemref{lem:margin-join}, every maximal normal $\mathfrak{V}$-subgroup of $G$ contains $\mathfrak{V}^*(G)$. $(ii)$. As $M\normaleq G$ and $O_{\mathfrak{V}}(M)$ is characteristic in $M$, it follows that $O_{\mathfrak{V}}(M)$ is a normal $\mathfrak{V}$-subgroup of $G$. Thus, $O_{\mathfrak{V}}(M)$ lies in a maximal normal $\mathfrak{V}$-subgroup $N$ of $G$. As $O_{\mathfrak{V}}(G)\leq N$ we have $O_{\mathfrak{V}}(G)O_{\mathfrak{V}}(M) \leq N\in\mathfrak{V}$. As $\mathfrak{V}$ is closed to subgroups, it follows that $O_{\mathfrak{V}}(G)O_{\mathfrak{V}}(M)$ is in $\mathfrak{V}$. \end{proof} \begin{remark} It is possible to have $\mathfrak{V}^*(G)<O_{\mathfrak{V}}(G)$. For instance, with $G=S_3\times C_2$ and the class $\mathfrak{A}$ of abelian groups, the $\mathfrak{A}$-marginal subgroup is the center $1\times C_2$, whereas the $\mathfrak{A}$-core is $C_3\times C_2$. \end{remark} \begin{prop}\label{prop:V-inter-core} Let $G$ be a finite group with a direct decomposition $\mathcal{H}$. If $\mathfrak{V}$ is a group variety then \begin{equation*} \mathcal{H}\intersect O_{\mathfrak{V}}(G) =\{O_{\mathfrak{V}}(H): H\in\mathcal{H}\} \end{equation*} and this is a direct decomposition of $O_{\mathfrak{V}}(G)$. In particular, $G\mapsto O_{\mathfrak{V}}(G)$ is an up $\Omega$-grader. Furthermore, if $\mathfrak{V}$ is a union of a chain $\mathfrak{V}_0\subseteq \mathfrak{V}_1\subseteq\cdots $ of group varieties then $O_{\mathfrak{V}}(G)$ is an up $\Omega$-grader. \end{prop} \begin{proof} Let $H\in \mathcal{H}$ and $K:=\langle \mathcal{H}-\{H\}\rangle$. Let $M$ be a maximal normal $\mathfrak{V}$-subgroup of $G=H\times K$. Let $M_H$ be the projection of $M$ to the $H$-component. As $\mathfrak{V}$ is closed to homomorphic images, $M_H\in\mathfrak{V}$. Furthermore, $M_H\normaleq H$ so there is a maximal normal $\mathfrak{V}$-subgroup $N$ of $H$ such that $M_H\leq N$. We claim that $MN\in\mathfrak{V}$. As $G=H\times K$, every $g\in M$ has the unique form $g=hk$, $h\in H$, $k\in K$. As $M_H$ is the projection of $M$ to $H$, $h\in M_H\leq N$. Thus, $g,h\in MN$ so $k\in MN$. Thus, $MN=N\times M_K$, where $M_K$ is the projection of $M$ to $K$. Now let $\mathfrak{V}=\mathfrak{V}(w)$. For each $f:X\to MN$, write $f=f_N \times f_K$ where $f_N:X\to N$ and $f_K:X\to M_K$. Hence, $w(f)=w(f_N \times f_K)=w(f_N)\times w(f_K)$. However, $w(N)=1$ and $w(M_K)=1$ as $N,M_K\in\mathfrak{V}$. Thus, $w(f)=1$, which proves that $w(MN)=1$. So $MN\in\mathfrak{V}$ as claimed. As $M$ is a maximal normal $\mathfrak{V}$-subgroup of $G$, $M=MN$ and $N=M_H$. Hence, $H\intersect M=N$ is a maximal normal $\mathfrak{V}$-subgroup of $H$. So we have characterized the maximal normal $\mathfrak{V}$-subgroups of $G$ as the direct products of maximal normal $\mathfrak{V}$-subgroups of members $H\in\mathcal{H}$. Thus, $\mathcal{H}\intersect O_{\mathfrak{V}}(G) =\{O_{\mathfrak{V}}(H) : H\in\mathcal{H}\}$ and this generates $O_{\mathfrak{V}}(G)$. By \lemref{lem:induced}, $\mathcal{H}\intersect O_{\mathfrak{V}}(G)$ is a direct decomposition of $O_{\mathfrak{V}}(G)$. \end{proof} \begin{coro}\label{coro:canonical-grader-II} \begin{enumerate}[(i)] \item The class $\mathfrak{N}$ of nilpotent groups is a direct class and $G\mapsto O_{\mathfrak{N}}(G)$ (the Fitting subgroup) is up grader. \item The class $\mathfrak{S}$ of solvable groups is a direct class and $G\mapsto O_{\mathfrak{S}}(G)$ (the solvable radical) is an up grader. \end{enumerate} \end{coro} \begin{proof} For a finite group $G$, the Fitting subgroup is the $\mathfrak{N}_c$-core where $c>|G|$. 
Likewise, the solvable radical is the $\mathfrak{S}_d$-core for $d>|G|$. The rest follows from \propref{prop:V-inter-core}. \end{proof} We now turn our attention away from examples of grading pairs and focus on their uses. In particular, we are interested in the following ``local-global'' property, which clarifies, in the up grader case, when a direct factor of a subgroup is also a direct factor of the whole group. \begin{prop}\label{prop:extendable} Let $G\mapsto \mathfrak{X}(G)$ be an up $\Omega$-grader for a direct class $\mathfrak{X}$ of $\Omega$-groups and let $G$ be an $\Omega$-group. If $H$ is an $(\Omega\cup G)$-subgroup of $G$ and the following hold: \begin{enumerate}[(a)] \item for some direct $\Omega$-factor $R$ of $G$, $H\mathfrak{X}(G)=R\mathfrak{X}(G)>\mathfrak{X}(G)$, and \item $H$ lies in an $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition of $H\mathfrak{X}(G)$; \end{enumerate} then $H$ is a direct $\Omega$-factor of $G$. \end{prop} \begin{proof} By (a) there is a direct $(\Omega\cup G)$-complement $C$ in $G$ to $R$. Also $\mathfrak{X}(G)=\mathfrak{X}(R)\times \mathfrak{X}(C)$, as $\mathfrak{X}(G)$ is $\Omega$-graded. Hence, $R\mathfrak{X}(G)=R\times \mathfrak{X}(C)$. By (b), there is an $\mathfrak{X}$-separated direct $\Omega$-decomposition $\mathcal{H}$ of $H\mathfrak{X}(G)$ such that $H\in\mathcal{H}$. As $H\mathfrak{X}(G)>\mathfrak{X}(G)$ it follows that $H\notin\mathfrak{X}$ and so by \lemref{lem:induced}(iii), $\mathcal{H}-\mathfrak{X}=\{H\}$ and $X=\langle\mathcal{H}\cap\mathfrak{X}\rangle\in \mathfrak{X}$. So $$R\times \mathfrak{X}(C)=R\mathfrak{X}(G)=H\mathfrak{X}(G)=H\times X.$$ Let $\mathcal{A}$ be a Remak $(\Omega\cup G)$-decomposition of $R$. Since $\mathfrak{X}(C)\in\mathfrak{X}$, $\mathcal{A}\sqcup\{\mathfrak{X}(C)\}$ is an $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition of $R\mathfrak{X}(G)$. By \propref{prop:direct-class}(v), $$\mathcal{C}=\{H\}\sqcup \{\mathfrak{X}(C)\}\sqcup (\mathcal{A}\cap\mathfrak{X})$$ is an $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition of $R\mathfrak{X}(G)$, and we note that $\{H\}=\mathcal{C}-\mathfrak{X}$. We claim that $\{H,C\}\sqcup(\mathcal{A}\cap \mathfrak{X})$ is a direct $\Omega$-decomposition of $G$. Indeed, $H\cap \langle C,\mathcal{A}\cap \mathfrak{X}\rangle\leq R\mathfrak{X}(G)\cap C\mathfrak{X}(G) =\mathfrak{X}(G)$ and so $H\cap \langle C,\mathcal{A}\cap \mathfrak{X}\rangle =H\cap \langle \mathfrak{X}(C),\mathcal{A}\cap \mathfrak{X}\rangle=1$. Also, $\mathfrak{X}(C)\leq \langle H,C,\mathcal{A}\cap\mathfrak{X}\rangle$ thus $\langle H,C,\mathcal{A}\cap\mathfrak{X}\rangle=G$. As the members of $\{H,C\}\sqcup(\mathcal{A}\cap\mathfrak{X})$ are $(\Omega\cup G)$-subgroups we have proved the claim. In particular, $H$ is a direct $\Omega$-factor of $G$. \end{proof} \subsection{Direct chains}\label{sec:chains} In \thmref{thm:Lift-Extend} we specified conditions under which any direct decomposition of an appropriate subgroup, resp. quotient, led to a solution of the extension (resp. lifting) problem. However, within that theorem we see that it is not the direct decomposition of the subgroup (resp. quotient group) which can be extended (resp. lifted). Instead it is a certain unique partition of that direct decomposition. Finding the correct partition by trial and error is an exponentially sized problem: a $k$-element set has $B_k\geq 2^{k-1}$ partitions, where $B_k$ denotes the $k$-th Bell number (see the sketch below). To avoid this we outline a data structure which enables a greedy algorithm to find this unique partition. The algorithm itself is given in Section \ref{sec:merge}.
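The following small sketch (an illustration of ours, not used by any algorithm in this paper) computes the first few Bell numbers from the standard recurrence $B_{k+1}=\sum_{j}\binom{k}{j}B_j$ and makes the growth explicit.
\begin{verbatim}
# Illustration only: Bell numbers B_k count the partitions of a k-element set,
# so exhaustive search over partitions of a direct decomposition is infeasible.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def bell(k):
    # B_0 = 1 and B_k = sum_{j=0}^{k-1} C(k-1, j) * B_j for k >= 1.
    if k == 0:
        return 1
    return sum(comb(k - 1, j) * bell(j) for j in range(k))

print([bell(k) for k in range(1, 11)])
# [1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975]
\end{verbatim}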
The key result of this section is \thmref{thm:chain}. Throughout this section we suppose that $G\to \mathfrak{X}(G)$ is an (up) $\Omega$-grader for a direct class $\mathfrak{X}$. \begin{defn}\label{def:chain} A \emph{direct chain} is a proper chain $\mathcal{L}$ of $(\Omega\cup G)$-subgroups starting at $\mathfrak{X}(G)$ and ending at $G$, and where there is a direct $\Omega$-decomposition $\mathcal{R}$ of $G$ with: \begin{enumerate}[(i)] \item for all $L\in\mathcal{L}$, $L=\langle\mathcal{R}\cap L\rangle$, and \item for each $L\in\mathcal{L}-\{G\}$, there is a unique $R\in\mathcal{R}$ such that the successor $M\in\mathcal{L}$ to $L$ satisfies: $R\mathfrak{X}(G)\cap L\neq R\mathfrak{X}(G)\cap M$. We call $R$ the \emph{direction of $L$}. \end{enumerate} We call $\mathcal{R}$ a set of directions for $\mathcal{L}$. \end{defn} If $\mathcal{L}$ is a direct chain with directions $\mathcal{R}$, then for all $L\in\mathcal{L}$, $\mathcal{R}\cap L$ is a direct $\Omega$-decomposition of $L$ (\lemref{lem:induced}(i)). When working with direct chains it helps to remember that for all $(\Omega\cup G)$-subgroups $L$ and $R$ of $G$, if $\mathfrak{X}(G)\leq L$, then $(R\cap L)\mathfrak{X}(G)=R\mathfrak{X}(G)\cap L$. Also, if $\mathfrak{X}(G)\leq L< M\leq G$, $L=\langle\mathcal{R}\cap L\rangle$ and $M=\langle\mathcal{R}\cap M\rangle$, and \begin{equation}\label{eq:unique-direction} \forall R\in\mathcal{R}-\mathfrak{X},\qquad R\mathfrak{X}(G)\cap L=R\mathfrak{X}(G)\cap M \end{equation} then $L=\langle\mathcal{R}\cap L\rangle=\langle\mathcal{R}\cap L,\mathfrak{X}(G)\rangle =\langle\mathcal{R}\cap M,\mathfrak{X}(G)\rangle=\langle\mathcal{R}\cap M\rangle=M$. Therefore, it suffices to show there is at most one $R\in\mathcal{R}-\mathfrak{X}$ such that $R\mathfrak{X}(G)\cap L\neq R\mathfrak{X}(G)\cap M$. \begin{lemma}\label{lem:cap} Suppose that $\mathcal{H}=\mathcal{H}\mathfrak{X}(G)$ is an $(\Omega\cup G)$-decomposition of $G$ such that $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$, for a direct $\Omega$-decomposition $\mathcal{R}$. It follows that, if $L=\langle\mathcal{J},\mathfrak{X}(G)\rangle$, for some $\mathcal{J}\subseteq \mathcal{H}$, then $L=\langle \mathcal{R}\cap L\rangle$. \end{lemma} \begin{proof} As $\mathfrak{X}(G)\leq L$, for each $R\in\mathcal{R}$, $R\cap \mathfrak{X}(G)\leq R\cap L$. As $\mathfrak{X}(G)$ is $(\Omega\cup G)$-graded, $\mathfrak{X}(G)=\langle\mathcal{R}\cap \mathfrak{X}(G)\rangle$. Thus, $\mathfrak{X}(G)\leq \langle \mathcal{R}\cap L\rangle$. Also, $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$. Thus, for each $J\in\mathcal{J}\subseteq \mathcal{H}$ there is a unique $R\in\mathcal{R}-\{R\in\mathcal{R}: R\leq \mathfrak{X}(G)\}$ such that $J\leq R\mathfrak{X}(G)$. As $L=\langle\mathcal{J},\mathfrak{X}(G)\rangle$, $J\leq L$ and so $J\leq R\mathfrak{X}(G)\cap L=(R\cap L)\mathfrak{X}(G)$. Now $R\cap L,\mathfrak{X}(G)\leq \langle \mathcal{R}\cap L\rangle$ thus $J\leq \langle\mathcal{R}\cap L\rangle$. Hence $L=\langle\mathcal{J},\mathfrak{X}(G)\rangle\leq \langle\mathcal{R}\cap L\rangle\leq L$. 
\end{proof} \begin{lemma}\label{lem:drop-H} If $\mathcal{H}$ is an $(\Omega\cup G)$-decomposition of $G$ and $\mathcal{R}$ a direct $(\Omega\cup G)$-decomposition of $G$ such that $\mathcal{H}=\mathcal{H}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)$, then for all $\mathcal{J}\subset\mathcal{H}$ and all $H\in\mathcal{H}-\mathcal{J}$, there is a unique $R\in\mathcal{R}$ such that $H\leq R\mathfrak{X}(G)$ and $$\langle\mathcal{R}-\{R\}\rangle \mathfrak{X}(G)\cap \langle H,\mathcal{J},\mathfrak{X}(G)\rangle =\langle\mathcal{R}-\{R\}\rangle\mathfrak{X}(G)\cap \langle\mathcal{J},\mathfrak{X}(G)\rangle.$$ \end{lemma} \begin{proof} Fix $\mathcal{J}\subseteq\mathcal{H}$ and $H\in\mathcal{H}-\mathcal{J}$. By the definition of refinement there is a unique $R\in\mathcal{R}$ such that $H\leq R\mathfrak{X}(G)$. Set $J=\langle\mathcal{J},\mathfrak{X}(G)\rangle$ and $C=\langle \mathcal{R}-\{R\}\rangle$. By \lemref{lem:cap}, $\mathcal{R}\cap HJ$ and $\mathcal{R}\cap J$ are direct $(\Omega\cup G)$-decompositions of $HJ$ and $J$ respectively. As $J=(R\cap J)\times (C\cap J)$ and $\mathfrak{X}(G)\leq J$, we get that $J=(R\mathfrak{X}(G)\cap J)(C\mathfrak{X}(G)\cap J)$. Also, $\mathfrak{X}(G)$ is $(\Omega\cup G)$-graded; hence, by \lemref{lem:induced}(ii), $G/\mathfrak{X}(G)=R\mathfrak{X}(G)/\mathfrak{X}(G) \times C\mathfrak{X}(G)/\mathfrak{X}(G)$ and $C\mathfrak{X}(G)\cap R\mathfrak{X}(G)= \mathfrak{X}(G)$. Combining the modular law with $\mathfrak{X}(G)\leq H\leq R\mathfrak{X}(G)$ and $R\mathfrak{X}(G)\cap C\mathfrak{X}(G)=\mathfrak{X}(G)$ we have that \begin{align*} C\mathfrak{X}(G)\cap HJ & = C\mathfrak{X}(G) \cap \Big (H(R\mathfrak{X}(G)\cap J)\cdot (C\mathfrak{X}(G)\cap J)\Big)\\ & = \Big(C\mathfrak{X}(G)\cap H(R\mathfrak{X}(G)\cap J)\Big) ( C\mathfrak{X}(G)\cap J)\\ & = (C\mathfrak{X}(G)\cap R\mathfrak{X}(G)\cap HJ)(C\mathfrak{X}(G)\cap J) \\ & = \mathfrak{X}(G)(C\mathfrak{X}(G)\cap J)=C\mathfrak{X}(G)\cap J. \end{align*} Thus, $C\mathfrak{X}(G)\cap HJ=C\mathfrak{X}(G)\cap J$. \end{proof} \begin{prop}\label{prop:chain-chain} If $\mathcal{H}=\mathcal{H}\mathfrak{X}(G)$ is an $(\Omega\cup G)$-decomposition of $G$ and $\mathcal{R}$ is a direct $\Omega$-decomposition of $G$ such that $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$, then every maximal proper chain $\mathscr{C}$ of subsets of $\mathcal{H}$ induces a direct chain $\{\langle \mathcal{C},\mathfrak{X}(G)\rangle : \mathcal{C}\in\mathscr{C}\}$. \end{prop} \begin{proof} For each $\mathcal{C}\subseteq\mathcal{H}$, by \lemref{lem:cap}, $\langle\mathcal{C}\rangle=\big\langle\mathcal{R}\cap \langle\mathcal{C}\rangle\big\rangle$. The rest follows from \lemref{lem:drop-H}. \end{proof} The following \thmref{thm:chain} is a critical component of the proof of the algorithm for \thmref{thm:FindRemak}, specifically in proving \thmref{thm:merge}. What it says is that we can proceed through any direct chain as the $\mathfrak{X}$-separated direct decompositions of lower terms in the chain induce direct factors of the next term in the chain, and in a predictable manner. 
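Before stating the theorem, we record the simplest instance of \propref{prop:chain-chain} as an orientation (this is only an illustration and is not used later). Suppose $\mathcal{R}=\{R_1,R_2\}$ is a direct $\Omega$-decomposition of $G$ with $R_1,R_2\notin\mathfrak{X}$, and take $\mathcal{H}=\mathcal{R}\mathfrak{X}(G)=\{R_1\mathfrak{X}(G),R_2\mathfrak{X}(G)\}$. The maximal proper chain $\emptyset\subset\{R_1\mathfrak{X}(G)\}\subset\mathcal{H}$ of subsets of $\mathcal{H}$ induces the direct chain
$$\mathfrak{X}(G) < R_1\mathfrak{X}(G) < G$$
with directions $\mathcal{R}$: the direction of $\mathfrak{X}(G)$ is $R_1$ and the direction of $R_1\mathfrak{X}(G)$ is $R_2$.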
\begin{thm}\label{thm:chain} If $\mathcal{L}$ is a direct chain with directions $\mathcal{R}$, $L\in\mathcal{L}-\{G\}$, and $R\in\mathcal{R}$ is the direction of $L$, then for every $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition $\mathcal{K}$ of $L$ such that $\mathcal{K}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)\cap L$, it follows that $$\big\{K\in\mathcal{K}-\mathfrak{X}: K\leq \langle \mathcal{R}-\{R\}\rangle \mathfrak{X}(G)\big\}$$ lies in an $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition of the successor to $L$. \end{thm} \begin{proof} Let $M$ be the successor to $L$ in $\mathcal{L}$ and set $C=\langle\mathcal{R}-\{R\}\rangle$. As $\mathcal{K}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)\cap L$, it also refines $\{R\mathfrak{X}(G)\cap L,C\mathfrak{X}(G)\cap L\}$ and so \begin{align*} C\mathfrak{X}(G)\cap L & = \langle K\in\mathcal{K}, K\leq C\mathfrak{X}(G)\rangle = \langle K\in\mathcal{K}-\mathfrak{X}, K\leq C\mathfrak{X}(G)\rangle\mathfrak{X}(G). \end{align*} Since $\mathcal{K}$ is $\mathfrak{X}$-separated $F=\langle K\in\mathcal{K}-\mathfrak{X}, K\leq C\mathfrak{X}(G)\rangle$ has no direct $(\Omega\cup G)$-factor in $\mathfrak{X}$. Also, as the direction of $L$ is $R$, $C\mathfrak{X}(G)\cap M=C\mathfrak{X}(G)\cap L$ and so \begin{align*} (C\cap M)\mathfrak{X}(G) & = C\mathfrak{X}(G)\cap M\\ & = C\mathfrak{X}(G)\cap L\\ & = \langle K\in\mathcal{K}-\mathfrak{X}, K\leq C\mathfrak{X}(G)\rangle \mathfrak{X}(G)\\ & =F \times \langle \mathcal{K}\cap \mathfrak{X}\rangle. \end{align*} Using $(M,F,C\cap M)$ in the role of $(G,H,R)$ in \propref{prop:extendable}, it follows that $F$ is a direct $(\Omega\cup G)$-factor of $M$. In particular, $\{K\in\mathcal{K}-\mathfrak{X}, K\leq C\mathfrak{X}(G)\}$ lies in a direct $(\Omega\cup G)$-decomposition of $M$. \end{proof} \section{Algorithms to lift, extend, and match direct decompositions}\label{sec:lift-ext-algo} Here we transition into algorithms beginning with a small modification of a technique introduced by Luks and Wright to find a direct complement to a direct factor (\thmref{thm:FindComp-invariant}). We then produce an algorithm \textalgo{Merge} (\thmref{thm:merge}) to lift direct decompositions for appropriate quotients. That algorithm is the work-horse which glues together the unique constituents predicted by \thmref{thm:Lift-Extend}. That task asks us to locate a unique partition of a certain set, but in a manner that does not test each of the exponentially many partitions. The proof relies heavily on results such as \thmref{thm:chain} to prove that an essentially greedy algorithm will suffice. For brevity we have opted to describe the algorithms only for the case of lifting decompositions. The natural duality of up and down graders makes it possible to modify the methods to prove similar results for extending decompositions. This section assumes familiarity with Sections \ref{sec:tools} and \ref{sec:lift-ext}. \subsection{Constructing direct complements}\label{sec:complements} In this section we solve the following problem in polynomial-time. \begin{prob}{\sc Direct-$\Omega$-Complement} \begin{description} \item[Given] a $\Omega$-group $G$ and an $\Omega$-subgroup $H$, \item[Return] an $\Omega$-subgroup $K$ of $G$ such that $G=H\times K$, or certify that no such $K$ exists. \end{description} \end{prob} Luks and Wright gave independent solutions to {\sc Direct-$\emptyset$-Complement} in back-to-back lectures at the University of Oregon \cite{Luks:comp,Wright:comp}. 
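To fix ideas, the following toy sketch (ours, with $\Omega=\emptyset$; it plays no role in the algorithms below) checks the defining conditions of an internal direct product on explicitly listed permutation groups and searches for a complement by brute force over generated subgroups. The point of the results that follow is precisely to avoid this kind of exhaustive search.
\begin{verbatim}
# Elements are permutations of {0,...,n-1} stored as tuples; a "group" is a set of them.
from itertools import combinations

def compose(p, q):                 # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def invert(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def generate(gens, n):             # subgroup generated by gens
    elems, queue = {tuple(range(n))}, [tuple(range(n))]
    while queue:
        a = queue.pop()
        for g in gens:
            c = compose(a, g)
            if c not in elems:
                elems.add(c)
                queue.append(c)
    return elems

def is_normal(G, N):
    return all(compose(compose(g, x), invert(g)) in N for g in G for x in N)

def is_internal_direct_product(G, H, K, n):
    e = tuple(range(n))
    return (H <= G and K <= G and is_normal(G, H) and is_normal(G, K)
            and H & K == {e} and len(H) * len(K) == len(G))

def brute_force_complement(G, H, n):
    # Exponential search over subgroups generated by at most two elements of G.
    for k in (1, 2):
        for gens in combinations(G, k):
            K = generate(gens, n)
            if is_internal_direct_product(G, H, K, n):
                return K
    return None

# Example: G = S_3 x C_2 acting on {0,1,2} and {3,4}; H is the S_3 factor.
n = 5
a, b, z = (1, 2, 0, 3, 4), (1, 0, 2, 3, 4), (0, 1, 2, 4, 3)
G = generate([a, b, z], n)
H = generate([a, b], n)
print(len(G), len(H))                                 # 12 6
print(brute_force_complement(G, H, n) is not None)    # True
\end{verbatim}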
\begin{thm}[Luks \cite{Luks:comp},Wright \cite{Wright:comp}]\label{thm:FindComp} For groups of permutations, {\sc Direct-$\emptyset$-Complement} has a polynomial-time solution \end{thm} Both \cite{Luks:comp} and \cite{Wright:comp} reduce \textalgo{Direct-$\emptyset$-Complement} to the following problem (here generalized to $\Omega$-groups): \begin{prob}{\sc $\Omega$-Complement-Abelian} \begin{description} \item[Given] an $\Omega$-group $G$ and an abelian $(\Omega\cup G)$-subgroup $M$, \item[Return] an $\Omega$-subgroup $K$ of $G$ such that $G=M\rtimes K$, or certify that no such $K$ exists. \end{description} \end{prob} To deal with operator groups we use some modifications to the problems above. Many of the steps are conceived within the group $\langle \Omega \theta\rangle\ltimes G\leq \Aut G\ltimes G$. However, to execute these algorithms we cannot assume that $\langle \Omega\theta\rangle\ltimes G$ is a permutation group as it is possible that these groups have no small degree permutation representations (e.g. $G=\mathbb{Z}_p^d$ and $\langle \Omega\theta\rangle=\GL(d,p)$). Instead we operate within $G$ and account for the action of $\Omega$ along the way. \begin{lemma}\label{lem:pres} Let $G$ be an $\Omega$-group where $\theta:\Omega\to \Aut G$. If $\{\langle \mathtt{X} |\mathtt{R}\rangle,f,\ell\}$ is a constructive presentation for $G$ and $\langle \Omega|\mathtt{R}'\rangle$ a presentation for $A:=\langle \Omega\theta\rangle\leq \Aut G$ with respect to $\theta$, then $\langle \Omega\sqcup \mathtt{X}| \mathtt{R}'\ltimes\mathtt{R}\rangle$ is a presentation for $A\ltimes G$ with respect to $\theta\sqcup f$, where \begin{equation*} \mathtt{R}'\ltimes\mathtt{R}= \mathtt{R}'\sqcup\mathtt{R}\sqcup \{(xf)^{s}\ell\cdot (x^{s})^{-1} : x\in\mathtt{X},s\in\Omega\},\textnormal{ and} \end{equation*} \begin{equation*} \forall z\in \Omega\sqcup \mathtt{X},\quad z(\theta\sqcup f)=\left\{\begin{array}{cc} z\theta & z\in\Omega,\\ zf & z\in \mathtt{X}. \end{array}\right. \end{equation*} \end{lemma} \begin{proof} Without loss of generality we assume $F(\Omega),F(\mathtt{X})\leq F(\Omega\sqcup\mathtt{X})$. Let $K$ be the normal closure of $\mathtt{R}'\ltimes \mathtt{R} $ in $F(\Omega\sqcup\mathtt{X})$. For each $s\in\Omega$ and each $x\in\mathtt{X}$ it follows that $Kx^{s}=K(xf)^{s}\ell\leq N=\langle K,F(\mathtt{X})\rangle$. In particular, $N$ is normal in $F(\Omega\sqcup\mathtt{X})$. Set $C=\langle K,F(\Omega)\rangle$. It follows that $F(\Omega\sqcup\mathtt{X}) =\langle C,N\rangle=CN$. Thus, $H=F(\Omega\sqcup\mathtt{X})/K=CN/K=(C/K)(N/K)$ and $N/K$ is normal in $H$. Since $C/K$ and $N/K$ satisfy the presentations for $A$ and $G$ respectively, it follows that $H$ is a quotient of $A\ltimes G$. To show that $H\cong A\ltimes G$ it suffices to notice that $A\ltimes G$ satisfies the relations in $\mathtt{R}'\ltimes\mathtt{R} $, with respect to $\Omega\sqcup\mathtt{X}$ and $\theta\sqcup\ell$. Indeed, for all $s\in \Omega$ and all $x\in\mathtt{X}$ we see that \begin{align*} x^{s}(\widehat{\theta\sqcup f}) & = (s\theta^{-1},1)(1,xf)(s\theta,1) = (1,(xf)^{s}) = (1, (xf)^{s} \ell \hat{f}) = (xf)^{s} \ell(\widehat{\theta\sqcup f}), \end{align*} which implies that $(xf)^{s}\ell (x^{s})^{-1}\in \ker \widehat{\theta\sqcup f}$; so, $K\leq \ker \widehat{\theta\sqcup f}$. Hence, $\langle \Omega\sqcup \mathtt{X}| \mathtt{R}'\ltimes\mathtt{R} \rangle$ is a presentation for $A\ltimes G$. \end{proof} \begin{prop}\label{prop:InvComp} {\sc $\Omega$-Complement-Abelian} has a polynomial-time solution. 
\end{prop} \begin{proof} Let $M, G\in \mathbb{G}_n$, and $\theta:\Omega\to \Aut G$ a function, where $M$ is an abelian $(\Omega\cup G)$-subgroup of $G$. \emph{Algorithm.} Use {\sc Presentation} to produce a constructive presentation $\{\langle \mathtt{X}|\mathtt{R}\rangle,f,\ell\}$ for $G$ mod $M$. For each $s\in \Omega$ and each $x\in\mathtt{X}$, define $$r_{s,x}=(xf^{s})\ell\cdot(x^{s})^{-1}\in F(\Omega\sqcup\mathtt{X}).$$ Use {\sc Solve} to decide if there is a $\mu\in M^{\mathtt{X}}$ where \begin{align}\label{eq:comp-rel-1} \forall r\in \mathtt{R} ,&\quad r(f\mu) = 1,\textnormal{ and }\\ \label{eq:comp-rel-2} \forall s\in\Omega,\forall x\in\mathtt{X} & \quad r_{s,x}(f\mu)=1. \end{align} If no such $\mu$ exists, then assert that $M$ has no $\Omega$-complement in $G$; otherwise, return $K=\langle x(f\mu)=(xf)(x\mu) : x\in\mathtt{X}\rangle$. \emph{Correctness.} Let $A=\langle\Omega\theta\rangle\leq \Aut G$ and let $\langle \Omega|\mathtt{R} '\rangle$ be a presentation of $A$ with respect to $\theta$. The algorithm creates a constructive presentation $\{\langle\mathtt{X}|\mathtt{R}\rangle ,f,\ell\}$ for $G$ mod $M$ and so by \lemref{lem:pres}, $\langle\Omega\sqcup \mathtt{X}|\mathtt{R} '\ltimes\mathtt{R} \rangle$ is a presentation for $A\ltimes G$ mod $M$ with respect to $\theta\sqcup f$. First suppose that the algorithm returns $K=\langle x(f\mu):x\in\mathtt{X}\rangle$. As $\mathtt{X} f\subseteq KM$ we get that $G=\langle \mathtt{X} f\rangle\leq KM\leq G$. By \eqref{eq:comp-rel-1}, $r(f\mu)=1$ for all $r\in \mathtt{R} $. Therefore $K$ satisfies the defining relations of $G/M\cong K/(K\cap M)$, which forces $K\cap M=1$ and so $G=K\ltimes M$. By \eqref{eq:comp-rel-1} and \eqref{eq:comp-rel-2}, the generator set $\Omega\theta\sqcup\{x\bar{\mu}:x\in\mathtt{X} \}f$ of $\langle A,K\rangle$ satisfies the defining relations $\mathtt{R} '\ltimes \mathtt{R} $ of $(A\ltimes G)/M$ and so $\langle A,K\rangle$ is isomorphic to a quotient of $(A\ltimes G)/M$ where $K$ is the image of $G/M$. This shows $K$ is normal in $\langle A,K\rangle$. In particular, $\langle K^{\Omega}\rangle\leq K$. Therefore if the algorithm returns a subgroup then the return is correct. Now suppose that there is a $K\leq G$ such that $\langle K^{\Omega}\rangle\leq K$ and $G=K\ltimes M$. We must show that in this case the algorithm returns a subgroup. We have that $G=\langle\mathtt{X} f\rangle$ and the generators $\mathtt{X} f$ satisfy (mod $M$) the relations $\mathtt{R} $. Let $\varphi:G/M\to K$ be the isomorphism $kM\varphi=k$, for all $km\in KM=G$, where $k\in K$ and $m\in M$. Define $\tau:\mathtt{X} \to M$ by $x\tau=(xf)^{-1} (xfM)\varphi$, for all $x\in\mathtt{X} $. Notice $\langle x(x\tau): x\in\mathtt{X} \rangle=K$. Furthermore, $\Phi:(a,hM)\mapsto (a,hM\varphi)$ is an isomorphism $A\ltimes(G/M)\to A\ltimes K$. As $\mathtt{R} \subseteq F(\mathtt{X} )$ it follows that $r((\theta\sqcup f)\Phi)=r(f)\Phi=1$, for all $r\in \mathtt{R}$. Also, \begin{align*} \forall z\in \Omega\sqcup \mathtt{X} ,& \quad z(\theta\sqcup f)\Phi = \left\{ \begin{array}{cc} (z\theta, 1), & z\in \Omega;\\ (1,(xfM)\varphi) = (1, x\bar{\tau}), & z\in \mathtt{X} . \end{array}\right. \end{align*} Therefore, $r(f\tau)=r((\theta\sqcup f)\Phi)=1$ for all $r\in\mathtt{R} $. Thus, an appropriate $\tau\in M^{\mathtt{X} }$ exists and the algorithm is guaranteed to find such an element and return an $\Omega$-subgroup of $G$ complementing $M$. \emph{Timing.} The algorithm applies two polynomial-time algorithms. 
\end{proof} \begin{thm}\label{thm:FindComp-invariant} \textalgo{Direct-$\Omega$-Complement} has a polynomial-time solution. \end{thm} \begin{proof} Let $H, G\in\mathbb{G}_n$ and $\theta:\Omega\to \Aut G$, where $\langle H^{\Omega}\rangle\leq H\leq G$. \emph{Algorithm.} Use {\sc Member} to determine if $H$ is an $(\Omega\cup G)$-subgroup of $G$. If not, then this certifies that $H$ is not a direct factor of $G$. Otherwise, use {\sc Normal-Centralizer} to compute $C_G(H)$ and $\zeta_1(H)$. Using {\sc Member}, test if $G=HC_G(H)$ and if $\langle C_G(H)^{\Omega}\rangle= C_G(H)$. If either fails, then certify that $H$ is not a direct $\Omega$-factor of $G$. Next, use \propref{prop:InvComp} to find an $\Omega$-subgroup $K\leq C_G(H)$ such that $C_G(H)=\zeta_1(H)\rtimes K$, or determine that no such $K$ exists. If $K$ exists, return $K$; otherwise, $H$ is not a direct $\Omega$-factor of $G$. \emph{Correctness.} Note that if $G=H\times J$ is a direct $\Omega$-decomposition then $H$ and $J$ are $(\Omega\cup G)$-subgroups of $G$, $G=HC_G(H)$, and $C_G(H)=\zeta_1(H)\times J$. As $\Omega\theta\subseteq \Aut G$, $\zeta_1(H)$ is an $\Omega$-subgroup and therefore $C_G(H)$ is an $\Omega$-subgroup. Therefore the tests within the algorithm properly identify cases where $H$ is not a direct $\Omega$-factor of $G$. Finally, if the algorithm returns an $\Omega$-subgroup $K$ such that $C_G(H)=\zeta_1(H)\rtimes K=\zeta_1(H)\times K$ (the product is direct as $K\leq C_G(H)$ centralizes $\zeta_1(H)$), then $G=H\times K$ is a direct $\Omega$-decomposition. \emph{Timing.} The algorithm makes a bounded number of calls to polynomial-time algorithms. \end{proof} \subsection{Merge}\label{sec:merge} In this section we provide an algorithm which, given an appropriate direct decomposition of a quotient group, produces a direct decomposition of the original group. Throughout this section we assume that $(\mathfrak{X},G\mapsto \mathfrak{X}(G))$ is an up $\Omega$-grading pair in which $\zeta_1(G)\leq \mathfrak{X}(G)$. The constraints of exchange by $\Aut_{\Omega\cup G} G$ given in \lemref{lem:KRS} can be sharpened to individual direct factors as follows. (Note that \propref{prop:back-forth} is false when considering the action of $\Aut G$ on direct factors.) \begin{prop}\label{prop:back-forth} Let $X$ and $Y$ be direct $\Omega$-factors of $G$, neither having an abelian direct $\Omega$-factor. The following are equivalent. \begin{enumerate}[(i)] \item $X\varphi=Y$ for some $\varphi\in \Aut_{\Omega\cup G} G$. \item $X\zeta_1(G)=Y\zeta_1(G)$. \end{enumerate} \end{prop} \begin{proof} By \eqref{eq:central}, $\Aut_{\Omega\cup G} G$ is the identity on $G/\zeta_1(G)$; therefore (i) implies (ii). Next we show (ii) implies (i). Recall that $\mathfrak{A}$ is the class of abelian groups. Let $\{X,A\}$ and $\{Y,B\}$ be direct $\Omega$-decompositions of $G$. Choose Remak $(\Omega\cup G)$-decompositions $\mathcal{R}$ and $\mathcal{C}$ which refine $\{X,A\}$ and $\{Y,B\}$ respectively. Let $\mathcal{X}=\{R\in\mathcal{R}: R\leq X\}$. By \thmref{thm:KRS} there is a $\varphi\in\Aut_{\Omega\cup G} G$ such that $\mathcal{X}\varphi\subseteq \mathcal{C}$. However, $\varphi$ is the identity on $G/\zeta_1(G)$. Hence, $\langle\mathcal{X}\varphi\rangle\zeta_1(G)=X\zeta_1(G)=Y\zeta_1(G)$. Thus, $\mathcal{X}\varphi\subseteq \{C\in \mathcal{C}: C\leq Y\zeta_1(G)\}-\mathfrak{A}$. Yet, $\mathcal{C}$ refines $\{Y,B\}$ and $Y$ has no direct $\Omega$-factor in $\mathfrak{A}$. Thus, $$\{C\in \mathcal{C}: C\leq Y\zeta_1(G)=Y\times \zeta_1(B)\}-\mathfrak{A} =\{C\in\mathcal{C}:C\leq Y\}.$$ Thus, $\mathcal{X}\varphi\subseteq\mathcal{Y}:=\{C\in\mathcal{C}:C\leq Y\}$.
By reversing the roles of $X$ and $Y$ we see that $\mathcal{Y}\varphi'\subseteq\mathcal{X}$ for some $\varphi'$. Thus, $|\mathcal{X}|=|\mathcal{Y}|$. So we conclude that $\mathcal{X}\varphi=\mathcal{Y}$ and $X\varphi=Y$. \end{proof} \begin{thm}\label{thm:Extend} There is a polynomial-time algorithm which, given an $\Omega$-group $G$ and a set $\mathcal{K}$ of $(\Omega\cup G)$-subgroups such that \begin{enumerate}[(a)] \item $\mathfrak{X}(\langle\mathcal{K}\rangle)=\mathfrak{X}(G)$ and \item $\mathcal{K}$ is a direct $(\Omega\cup G)$-decomposition of $\langle \mathcal{K}\rangle$, \end{enumerate} returns a direct $\Omega$-decomposition $\mathcal{H}$ of $G$ such that \begin{enumerate}[(i)] \item $|\mathcal{H}-\mathcal{K}|\leq 1$, \item if $K\in\mathcal{K}$ such that $\langle \mathcal{H}\cap\mathcal{K}, K\rangle$ has a direct $\Omega$-complement in $G$, then $K\in\mathcal{H}$; and \item if $K\in\mathcal{K}-\mathfrak{X}$ such that $K$ is a direct $(\Omega\cup G)$-factor of $G$, then $K\in\mathcal{H}$. \end{enumerate} \end{thm} \begin{proof} \emph{Algorithm.} \begin{code}{Extend$(~G,~\mathcal{K}~)$} $\mathcal{L}=\emptyset$; $\lfloor G\rfloor = G$;\\ \textit{/* Using the algorithm for \thmref{thm:FindComp-invariant} to determine the existence of $H$, execute the following. */}\\ \cwhile{$\exists K\in\mathcal{K}, \exists H, \mathcal{L}\sqcup\{K,H\}$ is a direct $\Omega$-decomposition of $G$} \begin{block*} $\lfloor G\rfloor = H$;\\ $\mathcal{L} = \mathcal{L}\sqcup\{K\}$;\\ $\mathcal{K} = \mathcal{K}-\{K\}$;\\ \end{block*} \creturn{$\mathcal{H}=\mathcal{L}\sqcup\{\lfloor G\rfloor\}$} \end{code} \emph{Correctness.} We maintain the following loop invariant (true at the start and end of each iteration of the loop): $\mathcal{L}\sqcup\{\lfloor G\rfloor\}$ is a direct $(\Omega\cup G)$-decomposition of $G$ and $\mathcal{L}\subseteq \mathcal{K}$. The loop exits once $\mathcal{L}\sqcup\{\lfloor G\rfloor\}$ satisfies (ii). Hence, $\mathcal{H}=\mathcal{L}\sqcup\{\lfloor G\rfloor\}$ satisfies (i) and (ii). For (iii), suppose that $\mathcal{K}$ is $\mathfrak{X}$-separated and that $K\in(\mathcal{K}-\mathfrak{X})-\mathcal{H}$ such that $K$ is a direct $(\Omega\cup G)$-factor of $G$. Let $\langle F^{\Omega}\rangle\leq F\leq G$ such that $\{F,K\}$ is a direct $(\Omega\cup G)$-decomposition of $G$ and $\mathcal{R}$ a Remak $(\Omega\cup G)$-decomposition of $G$ which refines $\{F,K\}$. Also let $\mathcal{T}$ be a Remak $(\Omega\cup G)$-decomposition of $G$ which refines $\mathcal{H}$. Set $\mathcal{X}=\{R\in\mathcal{R}: R\leq K\}$, and note that $\mathcal{X}\subseteq \mathcal{R}-\mathfrak{X}$ as $K$ has no direct $\Omega$-factor in $\mathfrak{X}$. By \thmref{thm:KRS} we can exchange $\mathcal{X}$ with a $\mathcal{Y}\subseteq \mathcal{T}-\mathfrak{X}$ to create a Remak $(\Omega\cup G)$-decomposition $(\mathcal{T}-\mathcal{Y})\sqcup \mathcal{X}$ of $G$. As $\zeta_1(G)\leq\mathfrak{X}(G)$ we get $\mathcal{R}\mathfrak{X}(G)=\mathcal{T}\mathfrak{X}(G)$ and $\mathcal{X}\mathfrak{X}(G)=\mathcal{Y}\mathfrak{X}(G)$ (\lemref{lem:KRS}, \propref{prop:back-forth}). 
Thus, by (a) and then (b), \begin{align*} \langle\mathcal{Y}\rangle \cap \langle\mathcal{H}\cap \mathcal{K}\rangle & \equiv \langle\mathcal{X}\rangle \cap \langle\mathcal{H}\cap \mathcal{K}\rangle & \pmod{\mathfrak{X}(G)}\\ & \equiv K \cap \langle\mathcal{H}\cap \mathcal{K}\rangle & \pmod{\mathfrak{X}(\langle \mathcal{K}\rangle)}\\ & \leq K\cap \langle \mathcal{K}-\{K\}\rangle\\ & = 1. \end{align*} Therefore $\langle\mathcal{Y}\rangle \leq \langle(\mathcal{T}-\mathfrak{X})-\{T\in\mathcal{T}: T\leq \langle\mathcal{H}\cap\mathcal{K}\rangle\}\rangle$. Thus, $$\mathcal{J}=(\mathcal{H}\cap\mathcal{K})\sqcup \{K\}\sqcup \big\{\big\langle (\mathcal{T} -\mathcal{Y})-\{T\in\mathcal{T}:T\leq \langle\mathcal{H}\cap\mathcal{K}\rangle\}\big\rangle\big\}$$ is a direct $\Omega$-decomposition of $G$ and $(\mathcal{H}\cap\mathcal{K})\sqcup\{K\}\subseteq \mathcal{J}\cap\mathcal{K}$, which shows that $\mathcal{L}=\mathcal{H}\cap\mathcal{K}$ was not maximal when the loop exited. By the contrapositive we have (iii). \emph{Timing.} This loop makes $|\mathcal{K}|\leq \log_2 |G|$ calls to a polynomial-time algorithm for \textalgo{Direct-$\Omega$-Complement}. \end{proof} Under the hypothesis of \thmref{thm:Extend} it is not possible to extend (iii) to say that if $K\in\mathcal{K}$ and $K$ is a direct $\Omega$-factor of $G$ then $K\in \mathcal{H}$. Consider the following example (where $\Omega=\emptyset$). \begin{ex} Let $G=D_8\times \mathbb{Z}_2$, $D_8=\langle a,b|a^4,b^2,(ab)^2\rangle$. Use $\mathfrak{A}$ (the class of abelian groups) for $\mathfrak{X}$ and $\mathcal{K}=\{\langle (1,1)\rangle,\langle (a^2,1)\rangle\}$, where elements of $G$ are written as pairs with second coordinate in $\mathbb{Z}_2=\{0,1\}$. Each member of $\mathcal{K}$ is a direct factor of $G$, but $\mathcal{K}$ is not contained in any direct decomposition of $G$. \end{ex} \begin{lem}\label{lem:count} If $\mathcal{K}$ is an $\mathfrak{X}$-refined direct $(\Omega\cup G)$-decomposition of $G$ such that $\mathcal{K}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)$ for some Remak $(\Omega\cup G)$-decomposition $\mathcal{R}$ of $G$, then $\mathcal{K}$ is a Remak $(\Omega\cup G)$-decomposition of $G$. \end{lem} \begin{proof} As $\mathcal{R}$ is a Remak $(\Omega\cup G)$-decomposition of $G$, by \lemref{lem:KRS}, $\mathcal{R}\mathfrak{X}(G)$ refines $\mathcal{K}\mathfrak{X}(G)$ and so $\mathcal{K}\mathfrak{X}(G)=\mathcal{R}\mathfrak{X}(G)$. Hence, $|\mathcal{K}-\mathfrak{X}|=|\mathcal{R}-\mathfrak{X}|$ and because $\mathcal{K}$ is $\mathfrak{X}$-refined we also have: $|\mathcal{K}\cap\mathfrak{X}|=|\mathcal{R}\cap\mathfrak{X}|$. Therefore, $|\mathcal{K}|=|\mathcal{K}-\mathfrak{X}|+|\mathcal{K}\cap\mathfrak{X}| =|\mathcal{R}-\mathfrak{X}|+|\mathcal{R}\cap\mathfrak{X}|=|\mathcal{R}|$. As every Remak $(\Omega\cup G)$-decomposition of $G$ has the same size, it follows that $\mathcal{K}$ cannot be refined by a larger direct $(\Omega\cup G)$-decomposition of $G$. Hence $\mathcal{K}$ is a Remak $(\Omega\cup G)$-decomposition of $G$. \end{proof} \begin{thm}\label{thm:merge} There is a polynomial-time algorithm which, given $G\in\mathbb{G}_n$, sets $\mathcal{A},\mathcal{H}\subseteq\mathbb{G}_n$, and a function $\theta:\Omega\to\Aut G$, such that \begin{enumerate}[(a)] \item $\mathcal{A}$ is a Remak $(\Omega\cup G)$-decomposition of $\mathfrak{X}(G)$, \item $\forall H\in\mathcal{H}$, $\mathfrak{X}(H)=\mathfrak{X}(G)$, \item $\mathcal{H}/\mathfrak{X}(G)$ is a direct $\Omega$-decomposition of $G/\mathfrak{X}(G)$; \end{enumerate} returns an $\mathfrak{X}$-refined direct $\Omega$-decomposition $\mathcal{K}$ of $G$ with the following property.
If $\mathcal{R}$ is a direct $\Omega$-decomposition of $G$ where $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$ then $\mathcal{K}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)$; in particular, if $\mathcal{R}$ is Remak then $\mathcal{K}$ is Remak. \end{thm} \begin{proof} \emph{Algorithm.} \begin{code}{Merge$(~\mathcal{A},~\mathcal{H}~)$} $\mathcal{K} = \mathcal{A}$;\\ $\forall H\in\mathcal{H}$\\ \begin{block*} $\mathcal{K}=${\tt Extend}$(~\langle H,\mathcal{K}\rangle,~\mathcal{K}~)$; \end{block*} \creturn{$\mathcal{K}$} \end{code} \emph{Correctness.} Fix a direct $\Omega$-decomposition $\mathcal{R}$ of $G$ where $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$. We can assume $\mathcal{R}$ is $\mathfrak{X}$-refined. The loop runs through a maximal chain $\mathscr{C}$ of subsets of $\mathcal{H}$ and so we track the iterations by considering the members of $\mathscr{C}$. By \propref{prop:chain-chain}, $\mathcal{L}=\{L=L_{\mathcal{C}}=\langle\mathcal{C},\mathfrak{X}(G)\rangle: \mathcal{C}\in\mathscr{C}\}$ is a direct chain. We claim the following properties as loop invariants. At the iteration $\mathcal{C}\in\mathscr{C}$, we claim that $(\mathcal{C},L,\mathcal{K})$ satisfies: \begin{enumerate}[(P.1)] \item\label{P:1} $\mathfrak{X}(L)=\mathfrak{X}(G)$, \item\label{P:3} $\mathcal{K}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)\cap L$, and \item\label{P:4} $\mathcal{K}$ is an $\mathfrak{X}$-refined direct $(\Omega\cup G)$-decomposition of $L$. \end{enumerate} Thus, when the loop completes, $L=\langle\mathcal{H}\rangle=G$. By (P.\ref{P:3}) $\mathcal{K}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)$. By (P.\ref{P:4}), $\mathcal{K}$ is an $\mathfrak{X}$-refined direct $\Omega$-decomposition of $G$. Following \lemref{lem:count}, if $\mathcal{R}$ is a Remak $(\Omega\cup G)$-decomposition of $G$ then $\mathcal{K}$ is a Remak $(\Omega\cup G)$-decomposition. We prove (P.\ref{P:1})--(P.\ref{P:4}) by induction. As we begin with $\mathcal{K}=\mathcal{A}$, in the base case $\mathcal{C}=\emptyset$, $L=\mathfrak{X}(G)$, and so (P.\ref{P:1}) holds. As $\mathcal{K}\mathfrak{X}(G)=\emptyset$ and $\mathcal{R}\mathfrak{X}(G)\cap\mathfrak{X}(G)=\emptyset$ we have (P.\ref{P:3}). Also (P.\ref{P:4}) holds because of (a). Now suppose for induction that for some $\mathcal{C}\in\mathscr{C}$, $(\mathcal{C},L,\mathcal{K})$ satisfies (P.\ref{P:1})--(P.\ref{P:4}). Let $\mathcal{D}=\mathcal{C}\sqcup\{H\}\in\mathscr{C}$ be the successor to $\mathcal{C}$, for the appropriate $H\in\mathcal{H}-\mathcal{C}$. Set $M=\langle H,L\rangle$, and $\mathcal{M}={\tt Extend}(M,\mathcal{K})$. Since $H\leq M$ it follows from (b) that $\mathfrak{X}(G)\leq \mathfrak{X}(M) \leq \mathfrak{X}(H)=\mathfrak{X}(G)$ so that $\mathfrak{X}(M)=\mathfrak{X}(G)$; hence, (P.\ref{P:1}) holds for $(\mathcal{D}, M,\mathcal{M})$. Next we prove (P.\ref{P:3}) holds for $(\mathcal{D}, M,\mathcal{M})$. As $L,M\in\mathcal{L}$ and $\mathcal{L}$ is a direct chain with directions $\mathcal{R}$, $\mathcal{R}\cap L$ and $\mathcal{R}\cap M$ are direct $(\Omega\cup G)$-decomposition of $L$ and $M$, respectively. Following \thmref{thm:Extend}(i), $|\mathcal{M}-\mathcal{K}|\leq 1$. As $H\nleq L$, $\mathcal{M}\neq \mathcal{K}$, and there is a group $\lfloor H\rfloor$ in $\mathcal{M}-\mathcal{K}$ with $H\leq \lfloor H\rfloor\mathfrak{X}(G)$. By assumption, $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$. Hence, there is a unique $R\in \mathcal{R}-\mathfrak{X}$ such that $\mathfrak{X}(G)<H\leq R\mathfrak{X}(G)$. Indeed, $R$ is the direction of $L$. 
Let $C=\langle (\mathcal{R}-\{R\})-\mathfrak{X}\rangle$ and define $$\mathcal{J}=\{K\in\mathcal{K}-\mathfrak{X}: K\leq C\mathfrak{X}(G)\}.$$ As the direction of $L$ is $R$, $C\mathfrak{X}(G)\cap M=C\mathfrak{X}(G)\cap L=\langle\mathcal{J}\rangle\mathfrak{X}(G)$ and by \thmref{thm:chain}, $\mathcal{J}$ lies in a $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition of $M$. Thus, by \thmref{thm:Extend}(ii), $\mathcal{J}\subseteq \mathcal{M}\cap \mathcal{K}$. Also, $M = \langle\mathcal{M}-\mathcal{J}\rangle\times \langle \mathcal{J}\rangle$ and $\mathfrak{X}(M)=\mathfrak{X}(G)$, so \begin{align*} M/\mathfrak{X}(G) & = \langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)/\mathfrak{X}(G)\times \langle\mathcal{J}\rangle\mathfrak{X}(G)/\mathfrak{X}(G)\\ & = \langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)/\mathfrak{X}(G)\times (C\mathfrak{X}(G) \cap M)/\mathfrak{X}(G). \end{align*} Thus, $\langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)\cap C\mathfrak{X}(G)=\mathfrak{X}(G)$. Suppose that $X$ is a directly $(\Omega\cup G)$-indecomposable direct $(\Omega\cup G)$-factor of $\langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)$ which does not lie in $\mathfrak{X}$. As $\mathcal{R}\cap M$ is a direct $(\Omega\cup M)$-decomposition of $M$ and $X$ lies in a Remak $(\Omega\cup G)$-decomposition of $M$, then by \lemref{lem:KRS}, $X\leq R\mathfrak{X}(M)=R\mathfrak{X}(G)$ or $X\leq C\mathfrak{X}(M)=C\mathfrak{X}(G)$. Yet, $X\not\in\mathfrak{X}$ so that $X\nleq \mathfrak{X}(G)$ and $$X\cap C\mathfrak{X}(G)\leq \langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)\cap C\mathfrak{X}(G) =\mathfrak{X}(G);$$ hence, $X\nleq C\mathfrak{X}(G)$. Thus, $X\leq R\mathfrak{X}(G)$ and as $X$ is arbitrary, we get $$\langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)\leq R\mathfrak{X}(G).$$ As $M/\mathfrak{X}(G)=(R\mathfrak{X}(G)\cap M)/\mathfrak{X}(G)\times (C\mathfrak{X}(G)\cap M)/\mathfrak{X}(G)$ we indeed have $$\langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)= R\mathfrak{X}(G)\cap M.$$ In particular, $\mathcal{M}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)\cap M$ and so (P.\ref{P:3}) holds. Finally to prove (P.\ref{P:4}) it suffices to show that $\lfloor H\rfloor$ has no direct $(\Omega\cup G)$-factor in $\mathfrak{X}$. Suppose otherwise: so $\lfloor H\rfloor$ has a direct $(\Omega\cup G)$-decomposition $\{H_0,A\}$ where $A\in\mathfrak{X}$ and $A$ is directly $(\Omega\cup G)$-indecomposable. Swap out $\lfloor H\rfloor$ in $\mathcal{M}$ for $\{H_0,A\}$ creating $$\mathcal{M}'=(\mathcal{M}-\{\lfloor H\rfloor \})\sqcup\{H_0,A\} =(\mathcal{M}\cap\mathcal{K})\sqcup\{H_0,A\}.$$ As $A\in \mathfrak{X}$ it follows that $A\leq \mathfrak{X}(M)=\mathfrak{X}(G)=\mathfrak{X}(L)$. In particular, $A\leq L\leq M$. As $A$ is a direct $(\Omega\cup G)$-factor of $M$, $A$ is also a direct $(\Omega\cup G)$-factor of $L$. Since $\langle A,\mathcal{M}\cap \mathcal{K}\rangle\leq L$ it follows that $$\mathcal{M}'\cap L=\{H_0 \cap L, A\}\sqcup (\mathcal{M}\cap \mathcal{K})$$ is a direct $(\Omega\cup G)$-decomposition of $L$. Furthermore, $A$ is directly $(\Omega\cup G)$-in\-de\-comp\-o\-sa\-ble, $A\in\mathfrak{X}$, and $A$ lies in a Remak $(\Omega\cup G)$-decomposition of $L$. 
Also $\mathcal{K}\cap\mathfrak{X}$ lies in a Remak $(\Omega\cup G)$-decomposition $\mathcal{T}$ of $L$ in which $\mathcal{K}\cap\mathfrak{X}=\mathcal{T}\cap\mathfrak{X}$ (\propref{prop:direct-class}(iv) and (v)); thus, by \thmref{thm:KRS} there is a $B\in\mathcal{K}\cap \mathfrak{X}$ such that $$\big((\mathcal{M}'\cap L)-\{A\}\big)\sqcup \{B\}$$ is a direct $(\Omega\cup G)$-decomposition of $L$. Hence, $\mathcal{M}''=(\mathcal{M}'-\{A\})\sqcup\{B\}$ is a direct $(\Omega\cup G)$-decomposition of $M$. However, $\mathcal{M}''\cap \mathcal{K}=(\mathcal{M}\cap \mathcal{K})\cup\{B\}$. By \thmref{thm:Extend}(i), $\mathcal{M}\cap \mathcal{K}$ is maximal with respect to inclusion in $\mathcal{K}$, such that $\mathcal{M}\cap \mathcal{K}$ is contained in a direct $(\Omega\cup G)$-decomposition of $M$. Thus, $B\in\mathcal{M}\cap\mathcal{K}$. That is impossible since it would imply that $\mathcal{M}'\cap L$ and $(\mathcal{M}'-\{A\})\cap L$ are both direct $(\Omega\cup G)$-decompositions of $L$, i.e. that $A\cap L=1$. But $1<A\leq L$. This contradiction demonstrates that $\lfloor H\rfloor$ has no direct $(\Omega\cup G)$-factor in $\mathfrak{X}$. Therefore, $\mathcal{M}$ is $\mathfrak{X}$-refined. Having shown that $M$ and $\mathcal{M}$ satisfy (P.\ref{P:1})--(P.\ref{P:4}), at the end of the loop $\mathcal{K}$ and $L$ are reassigned to $\mathcal{M}$ and $M$ respectively and so maintain the loop invariants. \emph{Timing.} The algorithm loops over every element of $\mathcal{H}$ applying the polynomial-time algorithm of \thmref{thm:Extend} once in each loop. Thus, {\tt Merge} is a polynomial-time algorithm. \end{proof} \section{Bilinear maps and $p$-groups of class $2$}\label{sec:bi} In this section we introduce bilinear maps and a certain commutative ring as a means to access direct decompositions of a $p$-group of class $2$. In our minds, those groups represent the most difficult case of the direct product problem. This is because $p$-groups of class $2$ have so many normal subgroups, and many of those pairwise intersect trivially, making them appear to be direct factors when they are not. Thus, a greedy search is almost certain to fail. Instead, we have had to consider a certain commutative ring that can be derived from a $p$-group. As commutative rings have a unique Remak decomposition, and a decomposable $p$-group will have many Remak decompositions, we might expect such a method to have lost vital information. However, in view of results such as \thmref{thm:Lift-Extend} we recognize that in fact what we will have constructed leads us to a matching for the extension $1\to \zeta_1(G)\to G\to G/\zeta_1(G)\to 1$. Unless specified otherwise, in this section $G$ is a $p$-group of class $2$. \subsection{Bilinear maps} Here we introduce $\Omega$-bilinear maps and direct $\Omega$-decompositions of $\Omega$-bilinear maps. This allows us to solve the match problem for $p$-groups of class $2$. Let $V$ and $W$ denote abelian $\Omega$-groups. A map $b:V\times V\to W$ is $\Omega$-\emph{bilinear} if \begin{align} b(u+u',v+v') & = b(u,v)+b(u',v)+b(u,v')+b(u',v'), \textnormal{ and }\\ b(ur,v) & = b(u,v)r = b(u,vr), \end{align} for all $u,u',v,v'\in V$ and all $r\in \Omega$. Every $\Omega$-bilinear map is also $\mathbb{Z}$-bilinear. Define \begin{equation} b(X,Y) = \langle b(x,y) : x\in X, y\in Y\rangle \end{equation} for $X,Y\subseteq V$. If $X\leq V$ then define the \emph{submap} \begin{equation} b_X:X\times X\to b(X,X) \end{equation} as the restriction of $b$ to inputs from $X$.
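For computations it can be convenient to record a bilinear map by structure constants relative to bases of $V$ and $W$. The following sketch (an illustration of ours, with $\Omega=\emptyset$ and coefficients mod $p$; it is not used by the algorithms in this paper) evaluates such a map and restricts it to a subspace, i.e. forms a submap.
\begin{verbatim}
import numpy as np

# B[i, j, k] = coefficient of the k-th basis vector of W in b(v_i, v_j), mod p.
p = 3
B = np.zeros((3, 3, 1), dtype=int)
B[0, 1, 0], B[1, 0, 0] = 1, p - 1   # b(u, v) = u_0*v_1 - u_1*v_0 on the first two
                                    # coordinates; the third pairs trivially with V.

def evaluate(B, u, v, p):
    """Coordinate vector of b(u, v) in W."""
    return np.einsum('i,j,ijk->k', u, v, B) % p

def submap(B, X, p):
    """Structure constants of the submap b_X, where the rows of X span the subspace."""
    return np.einsum('ai,bj,ijk->abk', X, X, B) % p

u, v = np.array([1, 0, 2]), np.array([0, 1, 1])
print(evaluate(B, u, v, p))              # [1]
X = np.array([[1, 0, 0], [0, 1, 0]])     # restrict to the first two coordinates
print(submap(B, X, p).shape)             # (2, 2, 1)
\end{verbatim}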
The \emph{radical} of $b$ is \begin{equation} \rad b = \{ v\in V : b(v,V)=0=b(V,v) \}. \end{equation} We say that $b$ is \emph{nondegenerate} if $\rad b=0$. Finally, call $b$ \emph{faithful} $\Omega$-bilinear when $(0:_{\Omega} V)\cap (0:_{\Omega} W)=0$, where $(0:_{\Omega} X)=\{r\in \Omega: Xr=0\}$, $X\in \{V,W\}$. \begin{defn} Let $\mathcal{B}$ be a family of $\Omega$-bilinear maps $b:V_b\times V_b\to W_b$, $b\in\mathcal{B}$. Define $\oplus\mathcal{B}=\bigoplus_{b\in\mathcal{B}} b$ as the $\Omega$-bilinear map $\bigoplus_{b\in\mathcal{B}} V_b\times\bigoplus_{b\in\mathcal{B}} V_b \to \bigoplus_{b\in\mathcal{B}} W_b$ where: \begin{equation} \left(\oplus\mathcal{B}\right)\left( (u_b)_{b\in\mathcal{B}},(v_b)_{b\in\mathcal{B}}\right) = (b(u_b,v_b))_{b\in\mathcal{B}},\qquad \forall (u_b)_{b\in\mathcal{B}},(v_b)_{b\in\mathcal{B}} \in \bigoplus_{b\in\mathcal{B}}V_b. \end{equation} \end{defn} \begin{lem}\label{lem:internal-direct-bi} If $b:V\times V\to W$ is an $\Omega$-bilinear map, $\mathcal{C}$ a finite set of submaps of $b$ such that \begin{enumerate}[(i)] \item $\{X_c: c:X_c\times X_c\to Z_c\in\mathcal{C}\}$ is a direct $\Omega$-decomposition of $V$, \item $\{Z_c: c:X_c\times X_c\to Z_c\in\mathcal{C}\}$ is a direct $\Omega$-decomposition of $W$, and \item $b(X_c,X_{d})=0$ for distinct $c,d\in\mathcal{C}$; \end{enumerate} then $b=\bigoplus\mathcal{C}$. \end{lem} \begin{proof} By $(i)$, we may write each $u\in V$ as $u=(u_c)_{c\in\mathcal{C}}$ with $u_c\in X_c$, for all $c:X_c\times X_c\to Z_c\in \mathcal{C}$. By bilinearity, $(iii)$, and then $(ii)$ we have that $b(u,v)=\sum_{c,d\in\mathcal{C}} b(u_c,v_d) =\sum_{c\in\mathcal{C}} c(u_c,v_c) =\left(\oplus\mathcal{C}\right)(u,v).$ \end{proof} \begin{defn}\label{def:ddecomp-bi} A \emph{direct $\Omega$-decomposition} of an $\Omega$-bilinear map $b:V\times V\to W$ is a set $\mathcal{B}$ of submaps of $b$ satisfying the hypothesis of \lemref{lem:internal-direct-bi}. Call $b$ directly $\Omega$-indecomposable if its only direct $\Omega$-decomposition is $\{b\}$. A Remak $\Omega$-decomposition of $b$ is a direct $\Omega$-decomposition whose members are directly $\Omega$-indecomposable. \end{defn} The bilinear maps we consider were created by Baer \cite{Baer:class-2} and are the foundation for the many Lie methods that have been associated to $p$-groups. Further details of our account can be found in \cite[Section 5]{Warfield:nil}. The principal example of such maps is commutation in an $\Omega$-group $G$ where $\gamma_2(G)\leq \zeta_1(G)$. There we define $V=G/\zeta_1(G)$, $W=\gamma_2(G)$, and $b=\mathsf{Bi} (G):V\times V\to W$ where \begin{equation}\label{eq:Bi} b(\zeta_1(G) x,\zeta_1(G) y )=[x,y],\qquad \forall x,y\in G. \end{equation} It is directly verified that $b$ is $\mathbb{Z}_{p^e}[\Omega]$-bilinear where $G^{p^e}=1$, and furthermore, nondegenerate. When working in $V$ and $W$ we use additive notation. Given $H\leq G$ we define $U=H\zeta_1(G)/\zeta_1(G)\leq V$, $Z=H\cap \gamma_2(G)\leq W$, and $c:=\mathsf{Bi} (H;G):U\times U\to Z$ where \begin{equation} c(u,v) = b(u,v),\qquad \forall u,v\in U. \end{equation} \begin{prop}\label{prop:p-group-bi} If $G$ is an $\Omega$-group and $\gamma_2(G)\leq \zeta_1(G)$, then every direct $\Omega$-decomposition $\mathcal{H}$ of $G$ induces a direct $\Omega$-decomposition \begin{equation} \mathsf{Bi} (\mathcal{H}) = \{ \mathsf{Bi} (H; G) : H\in\mathcal{H}\}.
\end{equation} If $\mathsf{Bi} (G)$ is directly $\Omega$-indecomposable and $\zeta_1(G)\leq \Phi(G)$, then $G$ is directly $\Omega$-indecomposable. \end{prop} \begin{proof} Set $b:=\mathsf{Bi} (G)$. By \lemref{lem:induced} and \propref{prop:V-inter-1}, $\mathcal{H}\zeta_1(G)/\zeta_1(G)$ is a direct $\Omega$-decomposition of $V=G/\zeta_1(G)$ and $\mathcal{H}\cap \gamma_2(G)$ is a direct $\Omega$-decomposition of $W=\gamma_2(G)$. Furthermore, for each $H\in\mathcal{H}$, $$b(H\zeta_1(G)/\zeta_1(G), \langle\mathcal{H}-\{H\}\rangle\zeta_1(G)/\zeta_1(G)) =[H,\langle\mathcal{H}-\{H\}\rangle]=0\in W.$$ In particular, $\mathsf{Bi} (\mathcal{H})$ is a direct $\Omega$-decomposition of $b$. Finally, if $\mathsf{Bi} (G)$ is directly indecomposable then $|\mathsf{Bi} (\mathcal{H})|=1$. Thus, $\mathcal{H}\zeta_1(G)=\{G\}$. Therefore $\mathcal{H}$ has exactly one non-abelian member. Take $Z\in\mathcal{H}\cap\mathfrak{A}$. As $Z$ is abelian, $Z\leq \zeta_1(G)$. If $\zeta_1(G)\leq \Phi(G)$ then the elements of $\zeta_1(G)$ are non-generators. In particular, $G=\langle\mathcal{H}\rangle=\langle \mathcal{H}-\{Z\}\rangle$. But by definition no proper subset of a direct $\Omega$-decomposition generates the group. So $\mathcal{H}\cap\mathfrak{A}=\emptyset$. Thus, $\mathcal{H}=\{G\}$ and $G$ is directly $\Omega$-indecomposable. \end{proof} Baer and later others observed a partial reversal of the map $G\mapsto \mathsf{Bi} (G)$. Our account follows \cite{Warfield:nil}. In particular, if $b:V\times V\to W$ is a $\mathbb{Z}_{p^e}$-bilinear map then we may define a group $\mathsf{Grp} (b)$ on the set $V\times W$ where the product is given by: \begin{equation} (u,w)*(u',w') = (u+u', w+b(u,u')+w'), \end{equation} for all $(u,w)$ and all $(u',w')$ in $V\times W$. The following are immediate from the definition. \begin{enumerate}[(i)] \item $(0,0)$ is the identity and for all $(u,w)\in V\times W$, $(u,w)^{-1}=(-u,-w + b(u,u))$. \item For all $(u,w)$ and all $(v,w')$ in $V\times W$, $[(u,w), (v,w')] = (0, b(u,v)-b(v,u))$. \end{enumerate} If $b$ is $\Omega$-bilinear then $\mathsf{Grp} (b)$ is an $\Omega$-group where $$\forall s\in \Omega, \forall (u,w)\in V\times W,\qquad (u,w)^{s}=(u^s, w^s).$$ In light of (ii), if $p>2$ and $b$ is alternating, i.e. for all $u$ and all $v$ in $V$, $b(u,v)=-b(v,u)$, then $[(u,w),(v,w')]=(0,2b(u,v))$. For that reason it is typical to consider $\mathsf{Grp} (\frac{1}{2}b)$ in those settings so that $[(u,w),(v,w')]=(0,b(u,v))$. We shall not require this approach. If $G^p=1$ then $G\cong \mathsf{Grp} (\mathsf{Bi} (G))$ \cite[Proposition 3.10(ii)]{Wilson:unique-cent}. \begin{coro}\label{coro:exp-p} If $G$ is a $p$-group with $G^p=1$ and $\gamma_2(G)\leq \zeta_1(G)$ then $G$ is directly $\Omega$-indecomposable if, and only if, $\mathsf{Bi} (G)$ is directly $\Omega$-indecomposable and $\zeta_1(G)\leq \Phi(G)$. \end{coro} \begin{proof} The reverse direction is \propref{prop:p-group-bi}. We focus on the forward direction. As $G^p=1$ it follows that $G\cong \mathsf{Grp} (\mathsf{Bi} (G))=:\hat{G}$. Set $b:=\mathsf{Bi} (G)$. Let $\mathcal{C}$ be a direct $\Omega$-decomposition of $b$. For each $c:X_c\times X_c\to Z_c\in\mathcal{C}$, define $\mathsf{Grp} (c;b)=X_c\times Z_c\leq V\times W$. We claim that $\mathsf{Grp} (c;b)$ is an $\Omega$-subgroup of $\mathsf{Grp} (b)$. Indeed, $(0,0)\in \mathsf{Grp} (c;b)$ and for all $(x,w),(y,w')\in \mathsf{Grp} (c;b)$, $(x,w)*(-y,-w'+b(y,y))= (x-y, w-b(x,y)-w'+b(y,y))\in X_c\times Z_c=\mathsf{Grp} (c;b)$.
Furthermore, \begin{align*} \left[\mathsf{Grp} (c;b), \mathsf{Grp} \left( \sum_{d\in\mathcal{C}-\{c\}} d; b\right) \right] & = \left( 0, 2b\left( X_c, \sum_{d\in\mathcal{C}-\{c\}} X_d\right) \right)=(0,0). \end{align*} Combined with $\mathsf{Grp} (b)=\langle \mathsf{Grp} (c;b): c\in\mathcal{C}\rangle$ it follows that $\mathsf{Grp} (c;b)$ is normal in $\mathsf{Grp} (b)$. Finally, \begin{align*} \mathsf{Grp} (c;b) \cap \mathsf{Grp} \left(\sum_{d\in\mathcal{C}-\{c\}} d; b\right) & = (X_c\times Z_c) \cap \sum_{d\in\mathcal{C}-\{c\}}(X_d\times Z_d) = 0\times 0. \end{align*} Thus, $\mathcal{H}=\{\mathsf{Grp} (c;b): c\in\mathcal{C}\}$ is a direct $\Omega$-decomposition of $\mathsf{Grp} (b)$. As $G$ is directly $\Omega$-indecomposable it follows that $\mathcal{H}=\{G\}$ and so $\mathcal{C}=\{b\}$. Thus, $b$ is directly $\Omega$-indecomposable. \end{proof} \subsection{Centroids of bilinear maps} \label{sec:enrich} In this section we replicate the classic interplay of idempotents of a ring and direct decompositions of an algebraic object, but now in the context of bilinear maps. The relevant ring is the centroid, defined similarly to the centroid of a nonassociative ring \cite[Section X.1]{Jacobson:Lie}. As with nonassociative rings, the idempotents of the centroid of a bilinear map correspond to direct decompositions. Myasnikov \cite{Myasnikov} may have been the first to generalize such methods to bilinear maps. \begin{defn}\label{def:centroid} The \emph{centroid} of an $\Omega$-bilinear map $b:V\times V\to W$ is \begin{equation*} C_{\Omega}(b) = \{ (f,g)\in \End_{\Omega} V\oplus \End_{\Omega} W: b(uf,v)=b(u,v)g=b(u,vf),\forall u,v\in V\}. \end{equation*} If $\Omega=\emptyset$ then write $C(b)$. \end{defn} \begin{lem}\label{lem:centroid} Let $b:V\times V\to W$ be an $\Omega$-bilinear map. Then the following hold. \begin{enumerate}[(i)] \item $C_{\Omega}(b)$ is a subring of $\End_{\Omega} V\oplus \End_{\Omega} W$, and $V$ and $W$ are $C_{\Omega}(b)$-modules. \item If $b$ is $K$-bilinear for a ring $K$, then $K/\big((0:_K V)\cap (0:_K W)\big)$ embeds in $C(b)$. In particular, $C(b)$ is the largest ring over which $b$ is faithful bilinear. \item If $b$ is nondegenerate and $W=b(V,V)$ then $C_{\Omega}(b)=C(b)$ and $C(b)$ is commutative. \end{enumerate} \end{lem} \begin{proof} Parts (i) and (ii) are immediate from the definitions. For part (iii), if $s\in \Omega$ and $(f,g)\in C(b)$, then $b((su)f,v)=b(su,vf)=sb(u,vf)=b(s(uf),v)$ for all $u$ and all $v\in V$. As $b$ is nondegenerate and $b((su)f-s(uf),V)=0$, it follows that $(su)f=s(uf)$. In a similar fashion, $g\in\End_{\Omega}W$ so that $(f,g)\in C_{\Omega}(b)$. For the commutativity claim in (iii) we repeat the same shuffling game: if $(f,g),(f',g')\in C(b)$ then $b(u(ff'),v)=b(u,vf)f'=b(u(f'f),v)$. By the nondegenerate assumption we get that $ff'=f'f$ and also $gg'=g'g$. \end{proof} \begin{remark} If $\rad b=0$ and $(f,g),(f',g)\in C(b)$ then $f=f'$. If $W=b(V,V)$ and $(f,g),(f,g')\in C(b)$ then $g=g'$. So if $\rad b=0$ and $W=b(V,V)$ then the first variable determines the second and vice-versa. \end{remark} \subsection{Idempotents, frames, and direct $\Omega$-decompositions}\label{sec:bi-direct} In this section we extend the usual interplay of idempotents and direct decompositions to the context of bilinear maps and then to $p$-groups of class $2$. This allows us to prove \thmref{thm:indecomp-class2}. This section follows the notation described in Subsection \ref{sec:rings}. \begin{lem}\label{lem:idemp} Let $b:V\times V\to W$ be an $\Omega$-bilinear map.
\begin{enumerate}[(i)] \item A set $\mathcal{B}$ of $\Omega$-submaps of $b$ is a direct $\Omega$-decomposition of $b$ if, and only if, \begin{equation} \mathcal{E}(\mathcal{B}) =\{ (e(V_c),e(W_c)) : c:V_c\times V_c\to W_c\in \mathcal{B}\} \end{equation} is a set of pairwise orthogonal idempotents of $C_{\Omega}(b)$ which sum to $1$. \item $\mathcal{B}$ is a Remak $\Omega$-decomposition of $b$ if, and only if, $\mathcal{E}(\mathcal{B})$ is a frame. \item If $b$ is nondegenerate and $W=b(V,V)$, then $b$ has a unique Remak $\Omega$-decomposition. \end{enumerate} \end{lem} \begin{proof} For $(i)$, by \defref{def:ddecomp-bi}, $\{V_c: c\in\mathcal{B}\}$ is a direct decomposition of $V$ and $\{W_c:c\in\mathcal{B}\}$ is a direct decomposition of $W$. Thus, $\mathcal{E}(\mathcal{B})$ is a set of pairwise orthogonal idempotents which sum to $1$. Let $(e,f)\in\mathcal{E}(\mathcal{B})$. As $1-e=\sum_{(e',f')\in\mathcal{E}(\mathcal{B})-\{(e,f)\}} e'$ it follows that for all $u,v\in V$ we have $b(ue,v(1-e))\in b(Ve,V(1-e))=0$ by the assumptions on $\mathcal{B}$. Also, $b(ue,v e)\in Wf$. Together we have: \begin{eqnarray*} b(ue,v) & = & b(ue,v e) + b(ue,v(1-e)) = b(ue,v e),\\ b(u,ve) & = & b(u(1-e),v e) + b(ue,ve) = b(ue,v e),\textnormal{ and }\\ b(u,v)f & = & \left(\sum_{(e',f')\in \mathcal{E}(\mathcal{B})} b(ue',ve')f'\right)f = b(ue,v e)f=b(ue,ve). \end{eqnarray*} Thus $b(ue,v)=b(u,v)f=b(u,ve)$ which proves $(e,f)\in C_{\Omega}(b)$; hence, $\mathcal{E}(\mathcal{B})\subseteq C_{\Omega}(b)$. Now suppose that $\mathcal{E}$ is a set of pairwise orthogonal idempotents of $C_{\Omega}(b)$ which sum to $1$. It follows that $\{Ve: (e,f)\in\mathcal{E}\}$ is a direct $\Omega$-decomposition of $V$ and $\{Wf: (e,f)\in\mathcal{E}\}$ is a direct $\Omega$-decomposition of $W$. Finally, $b(ue,ve')=b(uee',v)=0$ for distinct $(e,f),(e',f')\in\mathcal{E}$. Thus, $\{b|_{(e,f)}:V_e\times V_e\to W_f: (e,f)\in\mathcal{E}\}$ is a direct $\Omega$-decomposition of $b$. Now $(ii)$ follows. For $(iii)$, we know by \lemref{lem:centroid}(iii) that $C(b)=C_{\Omega}(b)$ is commutative Artinian. The rest follows from \lemref{lem:lift-idemp}(iv). \end{proof} \begin{thm}\label{thm:Match-class2} If $G$ is a $p$-group and $\gamma_2(G)\leq \zeta_1(G)$, then there is a unique frame $\mathcal{E}$ in $C(\mathsf{Bi} (G))$. Furthermore, if $\gamma_2(G)=\zeta_1(G)$ then every Remak $\Omega$-decomposition $\mathcal{H}$ of $G$ matches a unique partition of $(\mathcal{K},\mathcal{Q})$ where \begin{align*} \mathcal{K} := \{ W\hat{e}: (e,\hat{e})\in \mathcal{E}\},\\ \mathcal{Q} := \{ Ve : (e,\hat{e})\in\mathcal{E}\}. \end{align*} If $G^p=1$ then every Remak $\Omega$-decomposition of $G$ matches $(\mathcal{K},\mathcal{Q})$. \end{thm} \begin{proof} This follows from \propref{prop:p-group-bi}, \lemref{lem:idemp}, and \corref{coro:exp-p}. \end{proof} \subsection{Proof of \thmref{thm:indecomp-class2}} This follows from \thmref{thm:Match-class2}. $\Box$ \subsection{Centerless groups} We close this section with a brief consideration of groups with a trivial center. \begin{lemma}\label{lem:centerless} Let $G$ be an $\Omega$-group with $\zeta_1(G)=1$ and $N$ a minimal $(\Omega\union G)$-subgroup of $G$. Then the following hold. \begin{enumerate}[(i)] \item $G$ has a unique Remak $\Omega$-decomposition $\mathcal{R}$. \item There is a unique $R\in\mathcal{R}$ such that $N\leq R$. \item $\{C_R(N),\langle\mathcal{R}-\{R\}\rangle\}$ is a direct $(\Omega\union G)$-decomposition of $C_G(N)$.
\item Every Remak $(\Omega\union G)$-decomposition $\mathcal{H}$ of $C_G(N)$ refines $\{C_R(N),\langle\mathcal{R}-\{R\}\rangle\}$. \end{enumerate} \end{lemma} \begin{proof} Given Remak $\Omega$-decompositions $\mathcal{R}$ and $\mathcal{S}$ of $G$, by \lemref{lem:KRS} and the assumption that $\zeta_1(G)=1$, it follows that $\mathcal{R}=\mathcal{R}\zeta_1(G)=\mathcal{S}\zeta_1(G)=\mathcal{S}$. This proves (i). For (ii), if $N$ is a minimal $(\Omega\union G)$-subgroup of $G$ then $[R,N]\leq R\intersect N\in \{1,N\}$, for all $R\in\mathcal{R}$. If $[R,N]=1$ for all $R\in\mathcal{R}$ then $N\leq \zeta_1(G)=1$ which contradicts the assumption that $N$ is minimal. Thus, for some $R\in\mathcal{R}$, $N\leq R$. The uniqueness follows as $R\intersect\langle\mathcal{R}-\{R\}\rangle=1$. By (ii), $[N,\langle \mathcal{R}-\{R\}\rangle]\leq[R,\langle \mathcal{R}-\{R\}\rangle]=1$ which shows $\langle \mathcal{R}-\{R\}\rangle\leq C_G(N)$. Hence, $C_G(N)=C_R(N)\times \langle\mathcal{R}-\{R\}\rangle$. This proves (iii). Finally we prove (iv). Let $\mathcal{K}$ be a Remak $(\Omega\cup G)$-decomposition of $C_G(N)$. Let $\mathcal{S}$ be a Remak $(\Omega\cup G)$-decomposition of $C_G(N)$ which refines the direct $(\Omega\cup G)$-decomposition $C_G(N)=C_R(N)\times \langle\mathcal{R}-\{R\}\rangle$ given by (iii). Note that $\mathcal{R}-\{R\}\subseteq \mathcal{S}$ as members of $\mathcal{R}$ cannot be refined further. By \thmref{thm:KRS}, there is a $\mathcal{J}\subseteq\mathcal{K}$ such that we may exchange $\mathcal{R}-\{R\}\subseteq \mathcal{S}$ with $\mathcal{J}$; hence, $\{C_R(N)\}\sqcup \mathcal{J}$ is a direct $(\Omega\union G)$-decomposition of $C_G(N)$. Now $R\intersect \langle\mathcal{J}\rangle \leq C_R(N)\intersect \langle\mathcal{J}\rangle=1$. Also \begin{equation} \langle R,\mathcal{J}\rangle=\langle R, C_R(N),\mathcal{J}\rangle =\langle R,\mathcal{R}-\{R\}\rangle=G. \end{equation} As every member of $\mathcal{J}$ is an $(\Omega\union G)$-subgroup of $G$, it follows that they are normal in $G$ and so $\{R\}\sqcup\mathcal{J}$ is a direct $\Omega$-decomposition of $G$. As the members of $\mathcal{J}$ are $\Omega$-indecomposable it follows that $\{R\}\sqcup\mathcal{J}$ is a Remak $\Omega$-decomposition of $G$. However, $G$ has a unique Remak $\Omega$-decomposition so $\mathcal{J}=\mathcal{R}-\{R\}$. As $\mathcal{J}$ was a subset of an arbitrary Remak $(\Omega\union G)$-decomposition of $C_G(N)$ it follows that every Remak $(\Omega\union G)$-decomposition of $C_G(N)$ contains $\mathcal{R}-\{R\}$. \end{proof} \begin{prop}\label{prop:centerless-Extend} For groups $G$ with $\zeta_1(G)=1$, the set $\mathcal{M}$ of minimal $(\Omega\cup G)$-subgroups is a direct $(\Omega\cup G)$-decomposition of the socle of $G$ and furthermore there is a unique partition of $\mathcal{M}$ which extends to the Remak $\Omega$-decomposition of $G$. \end{prop} The following consequence shows how the global Remak decomposition of a group with trivial solvable radical is determined precisely from a unique partition of the Remak decomposition of its socle. \begin{coro} If $G$ has trivial solvable radical and $\mathcal{R}$ is its Remak decomposition then $\mathcal{R}=\{C_G(C_G(\soc(R))): R\in\mathcal{R}\}.$ \end{coro} \section{The Remak Decomposition Algorithm}\label{sec:Remak} In this section we prove \thmref{thm:FindRemak}. The approach is to break up a given group into sections for which a Remak $(\Omega\cup G)$-decomposition can be computed directly.
The base cases include $\Omega$-modules (\corref{coro:FindRemak-abelian}), $p$-groups of class $2$ (which follows from \thmref{thm:Match-class2}), and groups with a trivial center. We use \thmref{thm:Lift-Extend} as justification that we can interlace these base cases to sequentially lift direct decompositions via the algorithm {\tt Merge} of \thmref{thm:merge}. \subsection{Finding Remak $\Omega$-decompositions for nilpotent groups of class $2$} \label{sec:FindRemak-class2} In this section we prove \thmref{thm:FindRemak} for the case of nilpotent groups $G$ of class $2$. The algorithm depends on \thmref{thm:Match-class2} and \thmref{thm:merge}. To specify a $\mathbb{Z}$-bilinear map $b:V\times V\to W$ for computation we need only provide the \emph{structure constants} with respect to fixed bases of $V$ and $W$. Specifically let $\mathcal{X}$ be a basis of $V$ and $\mathcal{Y}$ a basis of $W$. Define $B_{xy}^{(z)}\in\mathbb{Z}$ so that the following equation is satisfied: \begin{align*} b\left(\sum_{x\in\mathcal{X}} \alpha_x x,\sum_{y\in\mathcal{X}} \beta_y y \right) & = \sum_{z\in\mathcal{Y}} \left(\sum_{x,y\in\mathcal{X}} \alpha_x B_{xy}^{(z)} \beta_y\right)z & (\forall \alpha_x,\beta_y\in\mathbb{Z}). \end{align*} \begin{lem}\label{lem:Remak-bilinear} There is a deterministic polynomial-time algorithm, which given $\Omega$-modules $V$ and $W$ and a nondegenerate $\Omega$-bilinear map $b:V\times V\to W$ with $W=b(V,V)$, returns a Remak $\Omega$-decomposition of $b$. \end{lem} \begin{proof} \emph{Algorithm}. Solve a system of linear equations in the (additive) abelian group $\End_{\Omega} V\times \End_{\Omega} W$ to find generators for $C_{\Omega}(b)$. Use {\sc Frame} to find a frame $\mathcal{E}$ of $C_{\Omega}(b)$. Return $\{b|_{(e,f)}:Ve\times Ve\to Wf : (e,f)\in\mathcal{E}\}$. \emph{Correctness}. This is supported by \lemref{lem:idemp} and \thmref{thm:Frame}. \emph{Timing}. This follows from the timing of {\sc Solve} and {\sc Frame}. \end{proof} \begin{thm}\label{thm:FindRemak-class2} There is a polynomial-time algorithm which, given a nilpotent $\Omega$-group of class $2$, returns a Remak $\Omega$-decomposition of the group. \end{thm} \begin{proof} Let $G\in\mathbb{G}_n^{\Omega}$ with $\gamma_2(G)\leq \zeta_1(G)$. \emph{Algorithm}. Use {\sc Order} to compute $|G|$. For each prime $p$ dividing $|G|$, write $|G|=p^e m$ where $(p,m)=1$ and set $P:=G^{m}$. Set $b_p:=\mathsf{Bi} (P)$. Use the algorithm of \lemref{lem:Remak-bilinear} to find a direct $\Omega$-decomposition $\mathcal{B}$ of $b_p$. Define each of the following: \begin{align*} \mathcal{X}(\mathcal{B}) & = \{ X_c : c:X_c\times X_c\to Z_c\in \mathcal{B}\}\\ \mathcal{H} & = \{H\leq P: \zeta_1(P)\leq H, H/\zeta_1(P)\in \mathcal{X}(\mathcal{B})\}. \end{align*} Use \corref{coro:FindRemak-abelian} to build a Remak $\Omega$-decomposition $\mathcal{Z}$ of $\zeta_1(P)$. Set $\mathcal{R}_p:={\tt Merge}(\mathcal{Z},\mathcal{H})$. Return $\bigcup_{p\mid |G|} \mathcal{R}_p$. \emph{Correctness}. By \lemref{lem:Remak-bilinear} the set $\mathcal{B}$ is the unique Remak $\Omega$-decomposition of $b_p=\mathsf{Bi} (P)$. By \thmref{thm:Match-class2} and \thmref{thm:merge} the return is a Remak $\Omega$-decomposition of $G$. \emph{Timing}. For each of the at most $\log |G|$ primes dividing $|G|$, the algorithm uses a constant number of polynomial-time subroutines. \end{proof} We have need of one final observation which allows us to modify certain decompositions into ones that match the hypothesis of \thmref{thm:merge}(b) when the up grading pair is $(\mathfrak{N}_c,G\mapsto \zeta_c(G))$.
\begin{lemma}\label{lem:centralize} There is a polynomial-time algorithm which, given an $\Omega$-decomposition $\mathcal{H}= \mathcal{H}\zeta_c(G)$ of a group $G$, returns the finest $\Omega$-decomposition $\mathcal{K}$ refined by $\mathcal{H}$ and such that for all $K\in\mathcal{K}$, $\zeta_c(K)=\zeta_c(G)$. (The proof also shows there is a unique such $\mathcal{K}$.) \end{lemma} \begin{proof} Observe that $\mathcal{K}=\{\langle H\in\mathcal{H}: [K,H,\dots,H]\neq 1\rangle: K\in\mathcal{K}\}$. We can create $\mathcal{K}$ by a transitive closure algorithm. \end{proof} \begin{thm}\label{thm:FindRemak-Q} {\sc Find-$\Omega$-Remak} has polynomial-time solution. \end{thm} \begin{proof} Let $G\in \mathbb{G}_n^{\Omega}$. \emph{Algorithm.} If $G=1$ then return $\emptyset$. Otherwise, compute $\zeta_1(G)$. If $G=\zeta_1(G)$ then use {\sc Abelian.Remak-$\Omega$-Decomposition} and return the result. Else, if $\zeta_1(G)=1$ then use {\sc Minimal-$\Omega$-Normal} to find a minimal $(\Omega\cup G)$-subgroup $N$ of $G$. Use {\sc Normal-Centralizer} to compute $C_G(N)$. If $C_G(N)=1$ then return $\{G\}$. Otherwise, recurse with $C_G(N)$ in the role of $G$ to find a Remak $\Omega$-decomposition $\mathcal{K}$ of $C_G(N)$. Call $\mathcal{H}:=\textalgo{Extend}(G,\mathcal{K})$ to create a direct $\Omega$-decomposition $\mathcal{H}$ extending $\mathcal{K}$ maximally. Return $\mathcal{H}$. Now $G>\zeta_1(G)>1$. Compute $\zeta_2(G)$ and use \thmref{thm:FindRemak-class2} to construct a Remak $(\Omega\cup G)$-decomposition $\mathcal{A}$ of $\zeta_2(G)$. If $G=\zeta_2(G)$ then return $\mathcal{A}$; otherwise, $G>\zeta_2(G)$ (consider \figref{fig:rel-ext}). Use a recursive call on $G/\zeta_1(G)$ to find $\mathcal{H}=\mathcal{H}\zeta_1(G)$ such that $\mathcal{H}/\zeta_1(G)$ is a Remak $\Omega$-decomposition of $G/\zeta_1(G)$. Apply \lemref{lem:centralize} to $\mathcal{H}$ and then set $\mathcal{J}:=\textalgo{Merge}(\mathcal{A},\mathcal{H})$, and return $\mathcal{J}$. \emph{Correctness.} The case $G=\zeta_1(G)$ is proved by \corref{coro:FindRemak-abelian} and the case $G=\zeta_2(G)$ is proved in \thmref{thm:FindRemak-class2}. Now suppose that $G>\zeta_1(G)=1$. Following \lemref{lem:centerless}, $G$ has a unique Remak $\Omega$-decomposition $\mathcal{R}$ and there is a unique $R\in\mathcal{R}$ such that $N\leq R$ and $\langle\mathcal{R}-\{R\}\rangle\leq C_G(N)$. So if $C_G(N)=1$ then $G$ is directly indecomposable and the return of the algorithm is correct. Otherwise the algorithm makes a recursive call to find a Remak $(\Omega\cup G)$-decomposition $\mathcal{K}$ of $C_G(N)$. By \lemref{lem:centerless}(iv), $\mathcal{K}$ contains $\mathcal{R}-\{R\}$ and so there is a unique maximal extension of $\mathcal{K}$, namely $\mathcal{R}$, and so by \thmref{thm:Extend}, the algorithm $\textalgo{Extend}$ creates the Remak $\Omega$-decomposition of $G$ so the return in this case is correct. Finally suppose that $G>\zeta_2(G)\geq\zeta_1(G)>1$. There we have the commutative diagram \figref{fig:rel-ext} which is exact in rows and columns. \begin{figure} \caption{The relative extension $1<\zeta_1(G)\leq\zeta_2(G)<G$. The rows and columns are exact.} \label{fig:rel-ext} \end{figure} By \thmref{thm:Lift-Extend}, $\mathcal{H}\zeta_2(G)$ refines $\mathcal{R}\zeta_2(G)$ and so the algorithm \textalgo{Merge} is guaranteed by \thmref{thm:merge} to return a Remak $\Omega$-decomposition of $G$ (consider \figref{fig:recurse}). 
\begin{figure} \caption{The recursive step parameters feed into {\tt Merge} to produce a Remak $\Omega$-decomposition of $G$.} \label{fig:recurse} \end{figure} \emph{Timing.} The algorithm enters a recursive call only if $\zeta_1(G)=1$ or $G>\zeta_2(G)\geq \zeta_1(G)>1$. As these two cases are mutually exclusive, at most one recursive call is made by the algorithm. The remainder of the algorithm uses polynomial-time methods as indicated. \end{proof} \subsection{Proof of \thmref{thm:FindRemak}} This is a corollary of \thmref{thm:FindRemak-Q}. $\Box$ \begin{coro}\label{coro:FindRemak-matrix} $\textalgo{FindRemak}$ has a deterministic polynomial-time solution for matrix $\Gamma_d$-groups. \end{coro} \begin{proof} This follows from Section \ref{sec:tools}, \remref{rem:matrix}, and \thmref{thm:FindRemak-Q}. \end{proof} \subsection{General operator groups}\label{sec:gen-ops} Now we suppose that $G\in \mathbb{G}_n$ is an $\Omega$-group for a general set $\Omega$ of operators. That is, $\Omega\theta\subseteq \End G$. To solve \textalgo{Remak-$\Omega$-Decomposition} in full generality it suffices to reduce to the case where $\Omega$ acts as automorphisms on $G$, where we invoke \thmref{thm:FindRemak-Q}. For that suppose we have $\omega\theta\in \End G-\Aut G$. By Fitting's lemma we have: \begin{equation} G=\ker \omega^{\ell(G)} \times \im \omega^{\ell(G)}. \end{equation} To compute such a decomposition we compute $\im\omega^{\ell(G)}$ and then apply {\sc Direct-$\Omega$-Complement} to compute $\ker \omega^{\ell(G)}$. As $\Omega$ is part of the input, we may test each $\omega\in\Omega$ to find those $\omega$ where $\omega\theta\notin\Aut G$, and with each produce a direct $\Omega$-decomposition. The restriction of $\omega$ to the constituents induces either the zero map or an automorphism. Thus the remaining cases are handled by \thmref{thm:FindRemak-Q}. $\Box$ \section{An example}\label{sec:ex} Here we give an example of the execution of the algorithm for \thmref{thm:FindRemak-Q} which covers several of the interesting components (but of course fails to address all situations). We will operate without a specific representation in mind, since we are interested in demonstrating the high-level techniques of the algorithm for \thmref{thm:FindRemak-Q}. We trace through how the algorithm might process the group $$G=D_8 \times Q_8\times \SL(2,5)\times \big(\SL(2,5)\circ \SL(2,5)\big).$$ First the algorithm recurses until it reaches the group $$\hat{G}=G/\zeta_2(G)\cong \PSL(2,5)^3.$$ At this point it finds a minimal normal subgroup $N$ of $\hat{G}$, of which there are three, so we pick $N=\PSL(2,5)\times 1\times 1$. Next the algorithm computes a Remak decomposition of $C_{\hat{G}}(N)=1\times \PSL(2,5)\times \PSL(2,5)$. At this point the algorithm returns the unique Remak decomposition $$\mathcal{Q}:=\{\PSL(2,5)\times 1\times 1,1\times \PSL(2,5)\times 1,1\times 1\times \PSL(2,5)\}.$$ These are pulled back to the set $\{H_1, H_2, H_3\}$ of subgroups in $G$. Next the algorithm constructs a Remak $G$-decomposition of $\zeta_2(G)$. For that the algorithm constructs the bilinear map of commutation from $\zeta_2(G)/\zeta_1(G)\cong \mathbb{Z}_2^4$ into $\gamma_2(\zeta_2(G))=\langle z_1,z_2\rangle\cong \mathbb{Z}_2^2$, i.e., $$b:=\mathsf{Bi} (\zeta_2(G)):\mathbb{Z}_2^4\times \mathbb{Z}_2^4\to \mathbb{Z}_2^2.$$ Below we have described the structure constants for $b$ in a nice basis but remark that unless we already know the direct factors of $\zeta_2(G)$ it is unlikely to have such a natural form.
\begin{equation} b(u,v) = u \begin{bmatrix} 0 & z_1 & & \\ -z_1 & 0 & & \\ & & 0 & z_2 \\ & & -z_2 & 0 \end{bmatrix} v^t,\qquad \forall u,v\in \mathbb{Z}_2^4. \end{equation} A basis for the centroid of $b$ is computed: \begin{equation} C(b) = \left\{\left(\begin{bmatrix} a & 0 & & \\ 0 & a & & \\ & & b & 0 \\ & & 0 & b \end{bmatrix}, \begin{bmatrix} a & 0 \\ 0 & b\end{bmatrix}\right) : a,b\in\mathbb{Z}_2\right\}\cong \mathbb{Z}_2\oplus\mathbb{Z}_2. \end{equation} Next, the unique frame $\mathcal{E}=\{ (I_2\oplus 0_2, 1\oplus 0), (0_2\oplus I_2,0\oplus 1)\}$ of $C(b)$ is built and used to create the subgroups $\mathcal{K}:=\{D_8\times Z(Q_8), Z(D_8)\times Q_8\}$ in $\zeta_2(G)$. Here, using an arbitrary basis $\mathcal{X}$ for $\zeta_1(G)=\mathbb{Z}_2^2\times \mathbb{Z}_4^2$, the algorithm {\tt Merge}$(\mathcal{X},\mathcal{K})$ constructs a Remak decomposition $\mathcal{A}:=\{H,K,C_1, C_2\}$ of $\zeta_2(G)$ where $H\cong D_8$, $K\cong Q_8$, and $C_1\cong C_2\cong \mathbb{Z}_4$. Finally, the algorithm {\tt Merge}$(\mathcal{A},\mathcal{H})$ returns a Remak decomposition of $G$. To explain the merging process we trace that algorithm through as well. Let $R=\SL(2,5)\times 1\times 1$ and $S=1\times \big(\SL(2,5)\circ\SL(2,5)\big)$. These groups are directly indecomposable direct factors of $G$ and serve as the hypothesized directions for the direct chain used by {\tt Merge}. Without loss of generality we index the $H$'s so that $H_2=R\zeta_2(G)$ and $H_1H_3=S\zeta_2(G)$ and $$G/\zeta_2(G)=\PSL(2,5)\times \PSL(2,5)\times \PSL(2,5) =H_2/\zeta_2(G)\times H_1/\zeta_2(G)\times H_3/\zeta_2(G).$$ Furthermore, $\zeta_2(H_i)=\zeta_2(G)$ for all $i\in\{1,2,3\}$. Therefore, $(\mathcal{A},\mathcal{H})$ satisfies the hypothesis of \thmref{thm:merge}. The loop in {\tt Merge} begins with $\mathcal{K}_0=\mathcal{A}$ and seeks to extend $\mathcal{A}$ to $H_1$ by selecting an appropriate subset $\mathcal{A}_1\subseteq \mathcal{K}_0=\mathcal{A}$ and finding a complement $\lfloor H_1\rfloor\leq H_1$ such that $\mathcal{K}_1=\mathcal{A}_1\sqcup\{\lfloor H_1\rfloor\}$ is a direct decomposition of $H_1$. The configuration at this stage is seen in \figref{fig:merge-1}. By \thmref{thm:Extend}, we have $H,K\in \mathcal{A}_1$ (as those lie outside the center) and one of the $C_i$'s (though no unique choice exists there). In the second loop iteration we extend $\mathcal{K}_1$ to an $\mathfrak{N}_2$-refined direct decomposition of $H_1 H_2$. This selects a subset $\mathcal{A}_2\subseteq \mathcal{K}_1\cap \zeta_2(G)$. Also $H_1$ and $H_2$ are in different directions, specifically $H_2=R\zeta_2(G)$ and $H_1\leq S\zeta_2(G)$, so the algorithm is forced to include $\lfloor H_1\rfloor\in\mathcal{K}_2$ (cf. \thmref{thm:Extend}(iii)) and then creates a complement $\lfloor H_2\rfloor\cong\SL(2,5)$ to $\langle \mathcal{A}_2,\lfloor H_1\rfloor\rangle$. The configuration is illustrated in \figref{fig:merge-2}. As before, we have $H,K\in\mathcal{K}_2$ as well, but the cyclic groups are now gone as the centers of $\lfloor H_i\rfloor$, $i\in\{1,2\}$, fill out a direct decomposition of $\zeta_2(G)$. Finally, in the third loop iteration, the direction is back towards $S$ and so the extension $\mathcal{K}_3$ of $\mathcal{K}_2$ to $H_1 H_2 H_3$ contains $\lfloor H_2\rfloor$ and is $\mathfrak{N}_2$-refined. However, the group $\lfloor H_1\rfloor$ is not a direct factor of $G$ as it is one term in a nontrivial central product. Therefore that group is replaced by a subgroup $\lfloor H_1 H_3\rfloor\cong \SL(2,5)\circ\SL(2,5)$.
The final configuration is illustrated in \figref{fig:merge-3}. $\mathcal{K}_3$ is a Remak decomposition of $G$. \begin{figure} \caption{The lattice encountered during the first iteration of the loop in the algorithm ${\tt Merge}(\mathcal{A},\{H_1,H_2,H_3\})$.} \label{fig:merge-1} \end{figure} \begin{figure} \caption{The lattice encountered during the second iteration of the loop in the algorithm ${\tt Merge}(\mathcal{A},\{H_1,H_2,H_3\})$.} \label{fig:merge-2} \end{figure} \begin{figure} \caption{The lattice encountered during the third iteration of the loop in the algorithm ${\tt Merge}(\mathcal{A},\{H_1,H_2,H_3\})$.} \label{fig:merge-3} \end{figure} \section{Closing remarks}\label{sec:closing} Historically the problem of finding a Remak decomposition focused on groups given by their multiplication table since even there, there did not seem to be a polynomial-time solution. It was known that a Remak decomposition could be found by checking all partitions of all minimal generating sets of a group $G$ and so the problem had a sub-exponential complexity of $|G|^{\log |G|+O(1)}$. That placed it in the company of other interesting problems including testing for an isomorphism between two groups \cite{Miller:nlogn}. Producing an algorithm that is polynomial-time in the size of the group's multiplication table (i.e. polynomial in $|G|^2$) was progress, achieved independently in \cite{KN:direct} and \cite{Wilson:thesis}. Evidently, \thmref{thm:FindRemak} provides a polynomial-time solution for groups input in this way (e.g. use a regular representation). With a few observations we sharpen \thmref{thm:FindRemak} in that specific context to the following: \begin{thm}\label{thm:nearly-linear} There is a deterministic nearly-linear-time algorithm which, given a group's multiplication table, returns a Remak decomposition of the group. \end{thm} \begin{proof} The algorithm for \thmref{thm:FindRemak-Q} is polynomial in $\log |G|$. As the input length here is $|G|^2$, it suffices to show that the problems listed in Section \ref{sec:tools} have $O(|G|^2\log^c |G|)$-time or better solutions. Evidently, \textalgo{Order}, \textalgo{Member}, \textalgo{Solve} each have brute-force linear-time solutions. \textalgo{Presentation} can be solved in linear time by selecting a minimal generating set $\{g_1,\dots,g_{\ell}\}$ (which has size at most $\log |G|$) and acting on the cosets of $\{\langle g_i,\dots, g_{\ell}\rangle : 1\leq i\leq \ell\}$ to produce defining relations for the generators in a fashion similar to \cite[Exercise 5.2]{Seress:book}. For \textalgo{Minimal-Normal}, begin with an element and take its normal closure. If this is a proper subgroup, recurse; otherwise, try an element which is not conjugate to the first and repeat until either a proper normal subgroup is discovered or it is proved that the group is simple. That takes $O(|G|^2)$ time. The remaining algorithms \textalgo{Primary-Decomposition}, \textalgo{Irreducible}, and \textalgo{Frame} have brute-force linear-time solutions. Thus, the algorithm can be modified to run in time $O(|G|^2 \log^c |G|)$. \end{proof} Section \ref{sec:lift-ext} lays out a framework which permits a local view of the direct products of a group. We have some lingering questions in this area. \begin{enumerate} \item What is the best series of subgroups to use for the algorithm of \thmref{thm:FindRemak-Q}? Corollaries \ref{coro:canonical-graders} and \ref{coro:canonical-grader-II} offer alternative series to use in the algorithm.
There is an option for a top-down algorithm based on down graders. That may allow for a black-box algorithm since verbal subgroups can be constructed in black-box groups; see \cite[Section 2.3.4]{Seress:book}. \item Is there a parallel NC solution for \textalgo{Remak-$\Omega$-Decomposition}? We can speculate how this may proceed. First, select an appropriate series $1\leq G_1\leq\cdots \leq G_n=G$ for $G$ and, distributing the work, use parallel linear algebra methods to find Remak decompositions $\mathcal{A}_{i0}$ of each $G_{i+1}/G_{i}$, for $1\leq i<n$. Then for $0\leq j\leq \log n$, for each $1\leq i\leq n/2^j$ in parallel compute $\mathcal{A}_{i(j+1)}:=\textalgo{Merge}(\mathcal{A}_{ij},\mathcal{A}_{(i+1)j})$. When $j=\lfloor \log n\rfloor$ we have a direct decomposition $\mathcal{A}_{1\log n}$ of $G$ and have used poly-logarithmic time. Unfortunately, \thmref{thm:merge}(a) is not satisfied in these recursions, so we cannot be certain that the result is a Remak decomposition. \end{enumerate} \section*{Acknowledgments} I am indebted to W. M. Kantor for taking a great interest in this work and offering guidance. Thanks to E. M. Luks, C.R.B. Wright, and \'{A}. Seress for encouragement and many helpful remarks. \end{document}
\begin{document} \title [Evolutionary behavior in a two-locus system] {Evolutionary behavior in a two-locus system} \author{A. M. Diyorov, U. A. Rozikov} \address{A. \ M. \ Diyorov \\ The Samarkand branch of TUIT, Samarkand, Uzbekistan.} \email {[email protected]} \address{ U.A. Rozikov$^{a,b,c}$\begin{itemize} \item[$^a$] V.I.Romanovskiy Institute of Mathematics, 9, Universitet str., 100174, Tashkent, Uzbekistan; \item[$^b$] AKFA University, 264, Milliy Bog street, Yangiobod QFY, Barkamol MFY, Kibray district, 111221, Tashkent region, Uzbekistan; \item[$^c$] National University of Uzbekistan, 4, Universitet str., 100174, Tashkent, Uzbekistan. \end{itemize}} \email{[email protected]} \begin{abstract} In this short note we study a dynamical system generated by a two-parametric quadratic operator mapping 3-dimensional simplex to itself. This is an evolution operator of the frequencies of gametes in a two-locus system. We find the set of all (a continuum set) fixed points and show that each fixed point is non-hyperbolic. We completely describe the set of all limit points of the dynamical system. Namely, for any initial point (taken from the 3-dimensional simplex) we find an invariant set containing the initial point and a unique fixed point of the operator, such that the trajectory of the initial point converges to this fixed point. \end{abstract} \maketitle {\bf Mathematics Subject Classifications (2010).} 37N25, 92D10. {\bf{Key words.}} loci, gamete, dynamical system, fixed point, trajectory, limit point. \section{Introduction} In this paper following \cite[page 68]{E} we define an evolution operator of a population assuming viability selection, random mating and discrete non-overlapping generations. Consider two loci $A$ (with alleles $A_1$, $A_2$) and $B$ (with alleles $B_1$, $B_2$). Then we have four gametes: $A_1B_1$, $A_1B_2$, $A_2B_1$, and $A_2B_2$. Denote the frequencies of these gametes by $x$, $y$, $u$, and $v$ respectively. Thus the vector $(x, y, u, v)$ can be considered as a state of the system, and therefore, one takes it as a probability distribution on the set of gametes, i.e. as an element of 3-dimensional simplex, $S^3$. Recall that $(m-1)$-dimensional simplex is defined as $$ S^{m-1}=\{x=(x_1,...,x_m)\in \mathbb R^m: x_i\geq 0, \sum^m_{i=1}x_i=1 \}.$$ Following \cite[Section 2.10]{E} we define the frequencies $(x', y', u', v')$ in the next generation as \begin{equation}\label{yq} W: \begin{array}{llll} x'=x+a\cdot (yu-xv)\\[2mm] y'=y-a\cdot (yu-xv)\\[2mm] u'=u-b\cdot (yu-xv)\\[2mm] v'=v+b\cdot (yu-xv), \end{array}\end{equation} where $a,b\in [0,1]$. It is easy to see that this quadratic operator, $W$, maps $S^3$ to itself. Indeed, we have $x'+y'+u'+v'=1$ and each coordinate is non-negative, for example, we check it for $y'$: $$ y'=y-a\cdot (yu-xv)=y(1-au)+axv\geq y(1-au)\geq 0,$$ these inequalities follow from the conditions that $x,y,u,v,a\in [0,1]$, and therefore, we have $0\leq au\leq 1$. The operator (\ref{yq}), for any initial point (state) $t_0=(x_0, y_0, u_0, v_0)\in S^3$, defines its trajectory: $\{t_n=(x_n, y_n, u_n, v_n)\}_{n=0}^\infty$ as $$t_{n}=(x_{n}, y_{n}, u_{n}, v_{n})=W^{n}(t_0), n=0,1,2,...$$ Here $W^n$ is the $n$-fold composition of $W$ with itself: $$W^n(\cdot)=\underbrace{W(W(W\dots (W}_{n \,{\rm times}}(\cdot)))\dots).$$ {\bf The main problem} in theory of dynamical system (see \cite{De}) is to study the sequence $\{t_n\}_{n=0}^\infty$ for each initial point $t_0\in S^3$. 
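For readers who would like to experiment with the dynamics, the following short Python sketch (added purely as an illustration; the parameter values and variable names are our own choices and are not taken from the text) iterates the operator $W$ from (\ref{yq}) for an initial state in $S^3$ and prints the resulting approximate limit.
\begin{verbatim}
# Minimal numerical sketch of the evolution operator W defined above.
# The values of a, b and the initial state are illustrative choices only.

def W(state, a, b):
    """One generation of the two-locus dynamics on the simplex S^3."""
    x, y, u, v = state
    d = y * u - x * v                       # linkage disequilibrium term
    return (x + a * d, y - a * d, u - b * d, v + b * d)

a, b = 0.7, 0.4                             # any a, b in [0, 1]
state = (0.1, 0.2, 0.3, 0.4)                # initial frequencies, summing to 1

for n in range(100):
    state = W(state, a, b)

x, y, u, v = state
print("approximate limit:", state)
print("sum of coordinates:", x + y + u + v)      # remains equal to 1
print("y*u - x*v at the limit:", y * u - x * v)  # numerically close to 0
\end{verbatim}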
In general, if a dynamical system is generated by a nonlinear operator, then a complete solution of the main problem may be very difficult. But in this short note we completely solve this main problem for the nonlinear operator (\ref{yq}). \begin{rk} Using $1=x+y+u+v$ (on $S^3$) one can rewrite the operator (\ref{yq}) as \begin{equation}\label{yqs} W: \begin{array}{llll} x'=x^2+xy+xu+(1-a)xv+ayu\\[2mm] y'=xy+y^2+(1-a)yu+axv+yv\\[2mm] u'=xu+(1-b)yu+u^2+bxv+uv\\[2mm] v'=(1-b)xv+yv+byu+uv+v^2. \end{array}\end{equation} Note that the operator (\ref{yqs}) is in the form of a quadratic stochastic operator (QSO), i.e., $V: S^{m-1}\to S^{m-1}$ defined by $$V: x_k'=\sum_{i,j=1}^mP_{ij,k}x_ix_j,$$ where $P_{ij,k}\geq 0$, $\sum_kP_{ij,k}=1$. QSOs have not been studied in full generality, but several large classes of QSOs have been studied (see, for example, \cite{GMR}, \cite{L}, \cite{Rpd}, \cite{RS}, \cite{RZ}, \cite{RZh} and the references therein). However, the operator (\ref{yq}) has not been studied yet. \end{rk} \section{The set of limit points} \begin{rk} The case $a=b=0$ is trivial, so we do not consider it.\end{rk} Recall that a point $t\in S^3$ is called a fixed point for $W: S^3\to S^3$ if $W(t)=t$. Denote the set of all fixed points by Fix$(W)$. It is easy to see that for any $a, b\in [0,1]$ with $a+b\ne 0$, the set of all fixed points of (\ref{yq}) is $${\rm Fix}(W)=\{t=(x,y,u,v)\in S^3: yu-xv=0\}.$$ This is a continuum set of fixed points. The main problem is completely solved in the following result: \begin{thm}\label{tm} For any initial point $(x_0, y_0, u_0, v_0)\in S^3$ the following assertions hold: \begin{itemize} \item[1.] If $(x_0+y_0)(u_0+v_0)=0$ then $(x_0, y_0, u_0, v_0)$ is a fixed point. \item[2.] If $(x_0+y_0)(u_0+v_0)\ne 0$ then the trajectory has the following limit: $$ \lim_{n\to\infty}(x_{n}, y_n, u_{n}, v_n)=$$ $$\left(A(x_0, u_0)(x_0+y_0), A(y_0, v_0)(x_0+y_0), A(x_0, u_0)(u_0+v_0), A(y_0, v_0)(u_0+v_0)\right)\in {\rm Fix}(W), $$ where $$A(x,u)={bx+au\over (u_0+v_0)a+(x_0+y_0) b}.$$ \end{itemize} \end{thm} \begin{proof} We note that for each $\alpha\in [0,1]$ the following set is invariant: $$X_{\alpha}=\{t=(x,y,u,v)\in S^3: x+y=\alpha, \ \ u+v=1-\alpha\},$$ i.e., $W(X_\alpha)\subset X_\alpha$. Note also that $$S^3=\bigcup_{\alpha\in [0,1]} X_\alpha.$$ Part 1 of the theorem follows from the cases $\alpha=0$ and $\alpha=1$. Indeed, for $\alpha=0$ we have $$ X_{0}=\{t=(0,0,u,v)\in S^3: u+v=1\},$$ and in the case of $\alpha=1$ we get $$ X_{1}=\{t=(x,y,0,0)\in S^3: x+y=1\}.$$ Note that in both cases the restriction of the operator to the corresponding set is the identity operator, i.e., all points of the set are fixed points. Now to prove part 2 we consider the case $\alpha\in (0,1)$. Since each $X_\alpha$ is invariant, it suffices to study the limit points of the operator on each set $X_\alpha$, $\alpha\in (0,1)$, separately. To do this, we restrict the operator $W$ to the invariant set $X_\alpha$ (i.e., substitute $y=\alpha-x$ and $v=1-\alpha-u$): \begin{equation}\label{ya} W_\alpha: \begin{array}{ll} x'=(1-a+a\alpha)x+a\alpha u\\[2mm] u'=(1-\alpha)bx+(1-b\alpha)u, \end{array}\end{equation} where $a,b \in [0,1]$, $\alpha\in (0,1)$, $x\in [0,\alpha]$, $u\in [0, 1-\alpha]$. It is easy to find the set of all fixed points: $${\rm Fix}(W_{\alpha})=\{(x,u)\in [0,\alpha]\times [0,1-\alpha]: (1-\alpha)x-\alpha u=0\}.$$ The operator $W_\alpha$ is a linear operator given by the matrix \begin{equation}\label{m} M_\alpha=\left( \begin{array}{cc} 1-a+a\alpha &a\alpha \\[2mm] (1-\alpha)b &1-b\alpha \end{array}\right).
\end{equation} The eigenvalues of this linear operator are \begin{equation}\label{ev} \lambda_1=1, \ \ \lambda_2=1-(1-\alpha)a-\alpha b. \end{equation} For any $a,b \in [0,1]$ with $a+b\ne 0$ and any $\alpha\in (0,1)$ we have $0< (1-\alpha)a+\alpha b< 1$, therefore, $0<\lambda_2 < 1.$ By (\ref{ya}), the trajectory of an initial point $(x_0, u_0)$ is defined by $$(x_{n+1}, u_{n+1})=M_\alpha (x_n, u_n)^T, \ \ n\geq 0.$$ Thus \begin{equation}\label{sh} (x_{n}, u_{n})=M_\alpha^n\, (x_0, u_0)^T, \ \ n\geq 1. \end{equation} Therefore we need to find $M_\alpha^n$. To find it we use the Cayley--Hamilton theorem\footnote{https://www.freemathhelp.com/forum/threads/formula-for-matrix-raised-to-power-n.55028/} to obtain the following formula: $$ M_\alpha^n={\lambda_2 \, \lambda^n_1- \lambda_1 \, \lambda^n_2\over \lambda_2-\lambda_1}\cdot I_2+ {\lambda^n_2- \lambda^n_1\over \lambda_2-\lambda_1}\cdot M_\alpha, $$ where $I_2$ is the $2\times 2$ identity matrix and $\lambda_1, \lambda_2$ are the eigenvalues (defined in (\ref{ev})). By the explicit formula (\ref{ev}), since $0<\lambda_2<1$, we get the following limit: $$ \lim_{n\to \infty}M_\alpha^n={\lambda_2 \over \lambda_2-\lambda_1}\cdot I_2- {1\over \lambda_2-\lambda_1}\cdot M_\alpha={1\over (1-\alpha)a+\alpha b}\cdot \left( \begin{array}{cc} \alpha b &\alpha a \\[2mm] (1-\alpha)b &(1-\alpha)a \end{array}\right). $$ Using this limit, for any initial point $(x_0, u_0)\in [0,\alpha]\times [0,1-\alpha]$ we get \begin{equation}\label{s} \lim_{n\to\infty}(x_{n}, u_{n})=\lim_{n\to\infty}M_\alpha^n\, (x_0, u_0)^T={bx_0+au_0\over (1-\alpha)a+\alpha b}\cdot(\alpha, 1-\alpha)\in {\rm Fix}(W_\alpha). \end{equation} By (\ref{s}) we obtain \begin{lemma} For any initial point $(x_0, y_0, u_0, v_0)\in S^3\setminus (X_0\cup X_1)$ there exists $\alpha\in (0,1)$ such that $(x_0, y_0, u_0, v_0)\in X_\alpha$ and the trajectory of this initial point (under the operator $W$ defined in (\ref{yq})) has the following limit $$ \lim_{n\to\infty}(x_{n}, y_n, u_{n}, v_n)=$$ $$\left(A(x_0, u_0)\alpha, A(y_0, v_0)\alpha, A(x_0, u_0)(1-\alpha), A(y_0, v_0)(1-\alpha)\right)\in {\rm Fix}(W), $$ where $$A(x,u)={bx+au\over (1-\alpha)a+\alpha b}.$$ \end{lemma} In this lemma we note that $\alpha=x_0+y_0$ and $1-\alpha=u_0+v_0$; therefore part 2 of the theorem follows, with the limit point of the trajectory of each initial point given as a function of the initial point only. The theorem is proved. \end{proof} \section{Biological interpretations} The results of Theorem \ref{tm} have the following biological interpretations: Let $t=(x_0, y_0, u_0, v_0)\in S^{3}$ be an initial state (the probability distribution on the set $\{A_1B_1, A_1B_2, A_2B_1, A_2B_2\}$ of gametes). Theorem \ref{tm} says that, as a rule, the population tends to an equilibrium state with the passage of time. Part 1 of Theorem \ref{tm} means that if at an initial time we had only two gametes then the (initial) state remains unchanged. Part 2 means that, depending on the initial state, the future of the population is stable: the gametes survive with probabilities $$ A(x_0, u_0)(x_0+y_0), A(y_0, v_0)(x_0+y_0), A(x_0, u_0)(u_0+v_0), A(y_0, v_0)(u_0+v_0)$$ respectively. From the existence of the limit point of any trajectory and from the explicit form of ${\rm Fix}(W)$ it follows that $$\lim_{n\to \infty}(y_nu_n-x_nv_n)=0.$$ Biologically, this property means (see \cite[page 69]{E}) that the population asymptotically approaches a state of linkage equilibrium with respect to the two loci.
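As a quick numerical sanity check of the limit formula in Theorem \ref{tm} (a sketch added only for illustration; the helper names below are ours), one can compare direct iteration of $W$ with the closed-form limit point of part 2:
\begin{verbatim}
# Compare direct iteration of W with the closed-form limit of part 2
# of the theorem (illustrative sketch; parameters chosen arbitrarily).

def W(state, a, b):
    x, y, u, v = state
    d = y * u - x * v
    return (x + a * d, y - a * d, u - b * d, v + b * d)

def predicted_limit(state0, a, b):
    """Closed-form limit with alpha = x0 + y0 and 1 - alpha = u0 + v0."""
    x0, y0, u0, v0 = state0
    alpha = x0 + y0
    denom = (1 - alpha) * a + alpha * b
    A_xu = (b * x0 + a * u0) / denom
    A_yv = (b * y0 + a * v0) / denom
    return (A_xu * alpha, A_yv * alpha,
            A_xu * (1 - alpha), A_yv * (1 - alpha))

a, b = 0.5, 0.8
state0 = (0.25, 0.15, 0.35, 0.25)

state = state0
for _ in range(200):
    state = W(state, a, b)

print("iterated:  ", state)
print("predicted: ", predicted_limit(state0, a, b))
\end{verbatim}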
\section*{ Acknowledgements} Rozikov thanks Institut des Hautes \'Etudes Scientifiques (IHES), Bures-sur-Yvette, France for support of his visit to IHES. The work was partially supported by a grant from the IMU-CDC. \end{document}
\begin{document} \allowdisplaybreaks \def\mathbb R} \def\ff{\frac} \def\ss{\sqrt{\mathbb R} \def\ff{\frac} \def\ss{\sqrt} \def\B{\mathbf B} \def\mathbb W{\mathbb W} \def\mathbb N} \def\kk{\kappa} \def\m{{\bf m}{\mathbb N} \def\kk{\kappa} \def\m{{\bf m}} \def\varepsilon}\def\ddd{D^*{\varepsilon}\def\ddd{D^*} \def\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho} \def\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma{\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma} \def\nabla} \def\pp{\partial} \def\E{\mathbb E{\nabla} \def\pp{\partial} \def\E{\mathbb E} \def\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D} \def\sigma} \def\ess{\text{\rm{ess}}{\sigma} \def\ess{\text{\rm{ess}}} \def\begin} \def\beq{\begin{equation}} \def\F{\scr F{\begin} \def\beq{\begin{equation}} \def\F{\scr F} \def\text{\rm{Ric}}} \def\Hess{\text{\rm{Hess}}{\text{\rm{Ric}}} \def\Hess{\text{\rm{Hess}}} \def\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega{\text{\rm{e}}} \def\uparrow{\underline a} \def\OO{\Omega} \def\oo{\omega} \def\tilde} \def\Ric{\text{\rm{Ric}}{\tilde} \def\text{\rm{Ric}}} \def\Hess{\text{\rm{Hess}}{\text{\rm{Ric}}} \def\text{\rm{cut}}} \def\P{\mathbb P} \def\ifn{I_n(f^{\bigotimes n}){\text{\rm{cut}}} \def\P{\mathbb P} \def\ifn{I_n(f^{\bigotimes n})} \def\scr C} \def\aaa{\mathbf{r}} \def\r{r{\scr C} \def\aaa{\mathbf{r}} \def\r{r} \def\text{\rm{gap}}} \def\prr{\pi_{{\bf m},\varrho}} \def\r{\mathbf r{\text{\rm{gap}}} \def\prr{\pi_{{\bf m},\varrho}} \def\r{\mathbf r} \def\mathbb Z} \def\vrr{\varrho} \def\ll{\lambda{\mathbb Z} \def\vrr{\varrho} \def\ll{\lambda} \def\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I{\scr L}\def\Tt{\tilde} \def\Ric{\text{\rm{Ric}}} \def\TT{\tilde} \def\Ric{\text{\rm{Ric}}}\def\II{\mathbb I} \def{\rm in}}\def\Sect{{\rm Sect}} \def\H{\mathbb H{{\rm in}}\def\Sect{{\rm Sect}} \def\H{\mathbb H} \def\scr M}\def\Q{\mathbb Q} \def\texto{\text{o}} \def\LL{\Lambda{\scr M}\def\Q{\mathbb Q} \def\texto{\text{o}} \def\LL{\Lambda} \def{\rm Rank}} \def\B{\scr B} \def\i{{\rm i}} \def\HR{\hat{\R}^d{{\rm Rank}} \def\B{\scr B} \def{\rm in}}\def\Sect{{\rm Sect}} \def\H{\mathbb H{{\rm i}} \def\HR{\hat{\mathbb R} \def\ff{\frac} \def\ss{\sqrt}^d} \def\rightarrow}\def\l{\ell{\rightarrow}\def\l{\ell}\def\iint{\int} \def\scr E}\def\Cut{{\rm Cut}{\scr E}\def\Cut{{\rm Cut}} \def\scr A} \def\Lip{{\rm Lip}{\scr A} \def\Lip{{\rm Lip}} \def\scr B}\def\Ent{{\rm Ent}}\def\L{\scr L{\scr B}\def\Ent{{\rm Ent}}\def\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I{\scr L} \def\mathbb R} \def\ff{\frac} \def\ss{\sqrt{\mathbb R} \def\ff{\frac} \def\ss{\sqrt} \def\B{\mathbf B} \def\mathbb N} \def\kk{\kappa} \def\m{{\bf m}{\mathbb N} \def\kk{\kappa} \def\m{{\bf m}} \def\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho} \def\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma{\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma} \def\nabla} \def\pp{\partial} \def\E{\mathbb E{\nabla} \def\pp{\partial} \def\E{\mathbb E} \def\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D} \def\sigma} \def\ess{\text{\rm{ess}}{\sigma} \def\ess{\text{\rm{ess}}} \def\begin} \def\beq{\begin{equation}} \def\F{\scr F{\begin} \def\beq{\begin{equation}} \def\F{\scr F} \def\text{\rm{Ric}}} \def\Hess{\text{\rm{Hess}}{\text{\rm{Ric}}} 
\def\Hess{\text{\rm{Hess}}} \def\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega{\text{\rm{e}}} \def\uparrow{\underline a} \def\OO{\Omega} \def\oo{\omega} \def\tilde} \def\Ric{\text{\rm{Ric}}{\tilde} \def\text{\rm{Ric}}} \def\Hess{\text{\rm{Hess}}{\text{\rm{Ric}}} \def\text{\rm{cut}}} \def\P{\mathbb P} \def\ifn{I_n(f^{\bigotimes n}){\text{\rm{cut}}} \def\P{\mathbb P} \def\ifn{I_n(f^{\bigotimes n})} \def\scr C} \def\aaa{\mathbf{r}} \def\r{r{\scr C} \def\aaa{\mathbf{r}} \def\r{r} \def\text{\rm{gap}}} \def\prr{\pi_{{\bf m},\varrho}} \def\r{\mathbf r{\text{\rm{gap}}} \def\prr{\pi_{{\bf m},\varrho}} \def\r{\mathbf r} \def\mathbb Z} \def\vrr{\varrho} \def\ll{\lambda{\mathbb Z} \def\vrr{\varrho} \def\ll{\lambda} \def\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I{\scr L}\def\Tt{\tilde} \def\Ric{\text{\rm{Ric}}} \def\TT{\tilde} \def\Ric{\text{\rm{Ric}}}\def\II{\mathbb I} \def{\rm in}}\def\Sect{{\rm Sect}} \def\H{\mathbb H{{\rm in}}\def\Sect{{\rm Sect}} \def\H{\mathbb H} \def\scr M}\def\Q{\mathbb Q} \def\texto{\text{o}} \def\LL{\Lambda{\scr M}\def\Q{\mathbb Q} \def\texto{\text{o}} \def\LL{\Lambda} \def{\rm Rank}} \def\B{\scr B} \def\i{{\rm i}} \def\HR{\hat{\R}^d{{\rm Rank}} \def\B{\scr B} \def{\rm in}}\def\Sect{{\rm Sect}} \def\H{\mathbb H{{\rm i}} \def\HR{\hat{\mathbb R} \def\ff{\frac} \def\ss{\sqrt}^d} \def\rightarrow}\def\l{\ell{\rightarrow}\def\l{\ell} \def\infty}\def\I{1}\def\U{\scr U{\infty}\def\I{1}\def\U{\scr U} \title{{f Distribution Dependent SDEs with Singular Coefficients} \begin{abstract} Under integrability conditions on distribution dependent coefficients, existence and uniqueness are proved for McKean-Vlasov type SDEs with non-degenerate noise. When the coefficients are Dini continuous in the space variable, gradient estimates and Harnack type inequalities are derived. These generalize the corresponding results derived for classical SDEs, and are new in the distribution dependent setting. \end{abstract} \noindent AMS subject Classification:\ 60H1075, 60G44. \\ \noindent Keywords: Distribution dependent SDEs, Krylov's estimate, Zvonkin's transform, log-Harnack inequality. \vskip 2cm \section{Introduction} In order to characterize nonlinear Fokker-Planck equations using SDEs, distribution dependent SDEs have been intensively investigated, see \cite{SZ, MV} and references within for McKean-Vlasov type SDEs, and \cite{DV1,DV2, CA} and references within for Landau type equations. To ensure the existence and uniqueness of these type SDEs, growth/regularity conditions are used. On the other hand, however, due to Krylov's estimate and Zvonkin's transform, the well-posedness of classical SDEs is proved under an integrability condition, which allows the drift unbounded on compact sets. The purpose of this paper is to extend this result to the distribution dependent situation, and to establish gradient estimates and Harnack type inequalities for the distributions under Dini continuity of the drift, which is much weaker than the Lipschitz condition used in \cite{FYW1, HRW}. Let $\scr P$ be the set of all probability measures on $\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$. 
Consider the following distribution-dependent SDE on $\mathbb R^d$: \beq\label{E1} \text{\rm{d}} X_t= b_t(X_t, \scr L_{X_t})\text{\rm{d}} t +\sigma_t(X_t, \scr L_{X_t})\text{\rm{d}} W_t,\end{equation} where $W_t$ is the $d$-dimensional Brownian motion on a complete filtration probability space $(\OO,\{\F_t\}_{t\ge 0},\P)$, $\scr L_{X_t}$ is the law of $X_t$, and $$b: \mathbb R_+\times\mathbb R^d\times \scr P\rightarrow \mathbb R^d,\ \ \sigma: \mathbb R_+\times\mathbb R^d\times \scr P\rightarrow \mathbb R^d\otimes\mathbb R^d$$ are measurable. When a different probability measure $\tilde\P$ is concerned, we use $\scr L_\xi|\tilde\P$ to denote the law of a random variable $\xi$ under the probability $\tilde\P$. By using an a priori Krylov estimate, a weak solution can be constructed for \eqref{E1} by using an approximation argument as in the classical setting, see \cite{GM} and references within. To prove the existence of a strong solution, we use a fixed distribution $\mu_t$ to replace the law of the solution $\scr L_{X_t}$, so that the distribution dependent SDE \eqref{E1} reduces to the classical one. We prove that when the reduced SDE has strong uniqueness, the weak solution of \eqref{E1} also provides a strong solution. We will then use Zvonkin's transform to investigate the uniqueness, for which we first identify the distributions of two given solutions, so that these solutions solve the common reduced SDE, and thus the pathwise uniqueness follows from existing arguments developed for classical SDEs. However, there is an essential difficulty in identifying the distributions of two solutions of \eqref{E1}. Once we have constructed the desired Zvonkin's transform for \eqref{E1} with singular coefficients, gradient estimates and Harnack type inequalities can be proved as in the regular situation considered in \cite{FYW1}. The remainder of the paper is organized as follows. In Section 2 we summarize the main results of the paper. To prove these results, some preparations are addressed in Section 3, including a new Krylov estimate, two lemmas on weak convergence of stochastic processes, and a result on the existence of strong solutions for distribution dependent SDEs. Finally, the main results are proved in Sections 4 and 5. \section{Main results} We first recall Krylov's estimate in the study of SDEs. We will fix a constant $T>0$, and only consider solutions of \eqref{E1} up to time $T$. For a measurable function $f$ defined on $[0,T]\times\mathbb{R}^d$, let $$\|f\|_{L^q_p(s,t)}=\left(\int_s^t\left(\int_{\mathbb{R}^d}|f_r(x)|^p\text{\rm{d}} x\right)^{\frac{q}{p}}\text{\rm{d}} r\right)^{\frac{1}{q}}, \ \ p,q\ge 1, 0\le s\le t\le T.
$$ When $s=0$, we simply denote $\|f\|_{L^q_p(0,t)}=\|f\|_{L^q_p(t)}$. A key step in the study of singular SDEs is to establish Krylov type estimate (see for instance \cite{KR}). For later use we introduce the following notion of $K$-estimate. We consider the following class of number pairs $(p,q)$: $$\scr K:=\Big\{(p,q)\in (1,\infty)\times(1,\infty):\ \ff d p +\ff 2 q<2\Big\}.$$ \begin} \def\beq{\begin{equation}} \def\F{\scr F{defn}[Krylov's Estimate] \emph{An $\F_t$-adapted process $\{X_{s}\}_{0\le s\le T}$ is said to satisfy $K$-estimate, if for any $(p,q)\in \scr K$, there exist constants $\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho\in (0,1)$ and $C>0$ such that for any nonnegative measurable function $f$ on $[0,T]\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$, \beq\label{KR1} \E\bigg(\int_s^t f_r(X_r) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\Big| \F_s\bigg) \le C (t-s)^\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho \|f\|_{L_p^q(T)},\ \ \ 0\le s\le t\le T. \end{equation}} \end{defn} We note that \eqref{KR1} implies the following Khasminskii type estimate, see for instance \cite[Lemma 3.5]{XZ} and it's proof: there exists a constant $c>0$ such that \beq\label{APP3} \E\bigg(\bigg(\int_s^t f_r(X_r) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\bigg)^n\Big| \F_s\bigg) \le c n! (t-s)^{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho n}\|f\|_{L_p^q(T)}^n,\ \ \ 0\le s\le t\le T, \end{equation} and for any $\ll>0$ there exists a constant $\LL=\LL(\ll,\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho,c)>0$ such that \beq\label{KR2} \E\big(\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\ll\int_0^T f_r(X_r) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r}\big| \F_s\big) \le \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\LL \left(1+\|f\|_{L_p^q(T)}\right)},\ \ s\in[0,T]. \end{equation} Let $\theta\in [1,\infty)$, we will consider the SDE \eqref{E1} with initial distributions in the class $$\scr P_\theta := \big\{\mu\in \scr P: \mu(|\cdot|^\theta)<\infty\big\}.$$ It is well known that $\scr P_\theta$ is a Polish space under the Warsserstein distance $$\mathbb W_\theta(\mu,\nu):= \inf_{\pi\in \scr C} \def\aaa{\mathbf{r}} \def\r{r(\mu,\nu)} \bigg(\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} |x-y|^\theta \pi(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y)\bigg)^{\ff 1 {\theta}},\ \ \mu,\nu\in \scr P_{\theta},$$ where $\scr C} \def\aaa{\mathbf{r}} \def\r{r(\mu,\nu)$ is the set of all couplings of $\mu$ and $\nu$. Moreover, the topology induced by $\mathbb W_\theta$ on $\scr P_\theta$ coincides with the weak topology. In the following three subsections, we state our main results on the existence, uniqueness and Harnack type inequalities respectively for the distribution dependent SDE \eqref{E1}. \subsection{Existence and uniqueness} Let $$\scr P_\theta^a=\big\{\mu\in \scr P_\theta: \mu \text{\ is\ absolutely\ continuous\ with\ respect\ to the Lebesgue measure\ }\big\}.$$ To construct a weak solution of \eqref{E1} by using approximation argument as in \cite{GM, MV}, we need the following assumptions for some $\theta\ge 1$. 
\begin} \def\beq{\begin{equation}} \def\F{\scr F{enumerate} \item[$(H^\theta)$] There exists a sequence $(b^n,\sigma^n)_{n\ge 1}$, where $$b^n: [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times \scr P_\theta \rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,\ \ \sigma^n: [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times \scr P_\theta \rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\otimes \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d $$ are measurable, such that the following conditions hold: \item[$\ (1)$] For $ \mu\in \scr P_\theta^a$ and $\mu^n\rightarrow}\def\l{\ell \mu$ in $\scr P_\theta$, $$ \lim_{n\rightarrow}\def\l{\ell\infty} \big\{ |b_t^n(x,\mu^n)-b_t(x,\mu)|+ \|\sigma^n_t(x,\mu^n)- \sigma_t(x,\mu)\| \big\} =0,\ \ \text{a.e.}\ \ (t,x)\in [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d. $$ \item[$\ (2)$] There exist $K>1$, $(p,q)\in \scr K$ and nonnegative $G\in L_p^q(T)$ such that for any $n\ge 1$, $$ |b_t^n(x,\mu)|^2\le G(t,x)+K,\ \ K^{-1} I \le (\sigma^n_t(\sigma^n_t)^\ast)(x,\mu)\le K I$$ for all $(t,x,\mu)\in [0,T]\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times \scr P_\theta.$ \item[$\ (3)$] For each $n\ge 1$, there exists a constant $K_n>0$ such that $\|b^n\|_{\infty}\le K_n$ and \begin{equation}\label{con}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &|b_t^n(x,\mu)-b_t^n(y,\nu)|+ \|\sigma^n_t(x,\mu)- \sigma_t^n(y,\nu)\|\\ &\le K_n\big\{|x-y|+ \mathbb W_\theta(\mu,\nu)\big\},\ \ (t,x,y)\in [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,\ \mu,\nu\in \scr P_\theta.\end{split} \end{equation} \end{enumerate} The main result in this part is the following. \begin{thm}\label{T1.1} Assume $(H^\theta)$ for some constant $\theta\ge 1$. Let $X_0$ be an $\F_0$-measurable random variable on $\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ with $\mu_0:=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}\in \scr P_\theta$. Then the following assertions hold. \begin} \def\beq{\begin{equation}} \def\F{\scr F{enumerate} \item[$(1)$] The SDE $\eqref{E1}$ has a weak solution with initial distribution $\mu_0$ satisfying $ \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_\cdot} \in C([0,T];\scr P_\theta)$ and the $K$-estimate. \item[$(2)$] If $\sigma$ is uniformly continuous in $x\in\mathbb{R}^d$ uniformly with respect to $(t,\mu)\in[0,T]\times\scr P_{\theta},$ and for any $\mu_\cdot\in C([0,T]; \scr P_\theta)$, $b^\mu_t(x):= b_t(x, \mu_t)$ and $\sigma} \def\ess{\text{\rm{ess}}^\mu_t(x):= \sigma} \def\ess{\text{\rm{ess}}_t(x,\mu_t)$ satisfy $| b^\mu|^2+\|\nabla} \def\pp{\partial} \def\E{\mathbb E \sigma} \def\ess{\text{\rm{ess}}^\mu\|^2 \in L_p^q(T)$ for some $(p,q)\in \scr K$, where $\nabla} \def\pp{\partial} \def\E{\mathbb E$ is the weak gradient in the space variable $x\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$, then the SDE \eqref{E1} has a strong solution satisfying $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_\cdot}\in C([0,T];\scr P_\theta)$ and the $K$-estimate. \item[$(3)$] If, in addition to the condition in $(2)$, there exists a constant $L\,>0$ such that \beq\label{LIP} \|\sigma} \def\ess{\text{\rm{ess}}_t(x,\mu)-\sigma} \def\ess{\text{\rm{ess}}_t(x,\nu)\|+ |b_t(x,\mu)-b_t(x,\nu)|\le L\, \mathbb W_\theta(\mu,\nu)\end{equation} holds for all $ \mu,\nu\in \scr P_\theta$ and $ (t,x)\in [0,T]\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,$ then the strong solution is unique. 
\end{enumerate} \end{thm} When $b$ and $\sigma} \def\ess{\text{\rm{ess}}$ do not depend on the distribution, Theorem \ref{T1.1} reduces back to the corresponding results derived for classical SDEs with singular coefficients, see for instance \cite{Z2} and references within. To compare Theorem \ref{T1.1} with recent results on the existence and uniqueness of McKean-Vlasov type SDEs derived in \cite{CR, MV}, we consider a specific class of coefficients where the dependence on distributions is of integral type. For $\mu\in \scr P$ and a (possibly multidimensional valued) real function $f\in L^1(\mu)$, let $\mu(f)= \int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} f \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\mu$. Let \beq\label{EX1} b_t(x,\mu):= B_t(x,\mu(\psi_b(t,x,\cdot)),\ \ \sigma} \def\ess{\text{\rm{ess}}_t(x,\mu):= \Sigma_t(x,\mu(\psi_\sigma} \def\ess{\text{\rm{ess}}(t,x,\cdot))\end{equation} for $(t,x,\mu)\in [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times \scr P_\theta,$ where for some $k \in \mathbb N,$ $$\psi_b, \psi_\sigma} \def\ess{\text{\rm{ess}}: [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d \rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^k$$ are measurable and bounded such that for some constant $\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho>0$, \beq\label{EX2} |\psi_b(t,x,y)-\psi_b(t,x,y')|+ |\psi_\sigma} \def\ess{\text{\rm{ess}}(t,x,y)-\psi_\sigma} \def\ess{\text{\rm{ess}}(t,x,y')| \le \delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho |y-y'| \end{equation} holds for all $ (t,x)\in [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ and $y,y'\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,$ and $$B: [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^k \rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,\ \ \Sigma: [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^k \rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\otimes\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$$ are measurable and continuous in the third variable in $\mathbb R} \def\ff{\frac} \def\ss{\sqrt^k$. We make the following assumption. \begin{enumerate} \item[\bf (A)] Let $(b,\sigma} \def\ess{\text{\rm{ess}})$ in $\eqref{EX1}$ for $(B,\Sigma)$ such that \eqref{EX2} holds, $B_t(x,\cdot)$ and $\Sigma_t(x,\cdot)$ are continuous for any $(t,x)\in [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$. Moreover, there exist constant $K>1$, $(p,q)\in \scr K$ and nonnegative $F\in L_p^q(T)$ such that \beq\label{EX3} |b_t(x,\mu)|^2\le F(t,x)+K,\ \ K^{-1} I\le \sigma} \def\ess{\text{\rm{ess}}_t(x,\mu)\sigma} \def\ess{\text{\rm{ess}}_t(x,\mu)^* \le K I\end{equation} for all $(t,x,\mu)\in [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times \scr P_\theta.$ \end{enumerate} \begin} \def\beq{\begin{equation}} \def\F{\scr F{cor}\label{C1.2} Assume {\bf (A)}. Then the following assertions hold. \begin{enumerate} \item[(1)] Assertion $(1)$ in Theorem $\ref{T1.1}$ holds. 
\item[(2)] If moreover, $\sigma$ is uniformly continuous in $x\in\mathbb{R}^d$ uniformly with respect to $(t,\mu)\in[0,T]\times\scr P_{\theta},$ and for any $\mu_\cdot\in C([0,T]; \scr P_\theta)$, $b^\mu_t(x):= b_t(x, \mu_t)$ and $\sigma} \def\ess{\text{\rm{ess}}^\mu_t(x):= \sigma} \def\ess{\text{\rm{ess}}_t(x,\mu_t)$ satisfy $| b^\mu|^2+\|\nabla} \def\pp{\partial} \def\E{\mathbb E \sigma} \def\ess{\text{\rm{ess}}^\mu\|^2 \in L_p^q(T)$ for some $(p,q)\in \scr K$, where $\nabla} \def\pp{\partial} \def\E{\mathbb E$ is the weak gradient in the space variable $x\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$, then assertion $(2)$ in Theorem $\ref{T1.1}$ hold. \item[(3)] \ Besides the conditions in (2), if there exists a constant $c>0$ such that $$|B_t(x,y)-B_t(x,y')| +\|\Sigma_t(x,y)-\Sigma_t(x,y')\|\le c |y-y'|,\ \ (t,x)\in [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d, y,y'\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt^k,$$ then for any $\F_0$-measurable random variable $X_0$ on $\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ with $\mu_0:=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}\in \scr P_\theta$ for some $\theta\ge 1$, the SDE $\eqref{E1}$ has a unique strong solution with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_\cdot}$ continuous in $\scr P_\theta.$ \end{enumerate} \end{cor} In the next corollary on the existence of weak solution we do not assume \eqref{EX1}. This result will be used in Section 5. \begin} \def\beq{\begin{equation}} \def\F{\scr F{cor}\label{C1.3}Assume that \eqref{LIP}, \eqref{EX3} hold. Then the SDE $\eqref{E1}$ has a weak solution with initial distribution $\mu_0$ satisfying $ \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_\cdot} \in C([0,T];\scr P_\theta)$ and the $K$-estimate. \end{cor} We now explain that results in Corollary \ref{C1.2} and Corollary \ref{C1.3} are new comparing with existing results on McKean-Vlasov SDEs. We first consider the model in \cite{CR} where $\psi_b$ and $\psi_\sigma} \def\ess{\text{\rm{ess}}$ are $\mathbb R} \def\ff{\frac} \def\ss{\sqrt$-valued functions such that $$\|B\|_\infty +\sup_{(t,x,r)\in[0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt}| \pp_r B_t(x, r)|<\infty,$$ $\psi_b$ is H\"older continuous, $\psi_\sigma} \def\ess{\text{\rm{ess}}$ is Lipschitz continuous, and for some constants $C>1$, $\theta\in (0,1]$, \begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} & C^{-1} I\le \Sigma \Sigma^*\le CI, \\ & \|\Sigma_t(x,r)- \Sigma_t(x',r')\| \le C(|x-x'| +|r-r'|), \\ &\|\pp_r \Sigma_t(x,r)- \pp_r \Sigma_t(x',r)\| \le C|x-x'|^\theta.\end{align*} Then \cite[Theorem 1]{CR} says that when $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}\in \scr P_2$ the SDE \eqref{E1} has a unique strong solution. Obviously, the above conditions imply $\|b\|_\infty+\|\nabla} \def\pp{\partial} \def\E{\mathbb E\sigma} \def\ess{\text{\rm{ess}}\|_\infty<\infty$, but this is not necessary for conditions in Corollary \ref{C1.2} and Corollary \ref{C1.3}. 
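As an aside on numerics (not used anywhere in the proofs of this paper), a standard way to simulate \eqref{E1} with integral-type coefficients as in \eqref{EX1} is the mean-field particle approximation: the law $\scr L_{X_t}$ is replaced by the empirical measure of $N$ interacting particles and an Euler--Maruyama step is applied. The following Python sketch is only an illustration with smooth, bounded model functions of our own choosing; the names $B$, $\psi$ and $\sigma$ below are illustrative stand-ins for the coefficients, not the ones treated by our results.
\begin{verbatim}
# Mean-field (interacting-particle) Euler-Maruyama sketch for a
# distribution dependent SDE with integral-type coefficients,
#   dX_t = B(X_t, mu_t(psi(X_t, .))) dt + sigma(X_t) dW_t,
# in dimension d = 1.  All model functions are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def psi(x, y):
    # bounded interaction kernel, Lipschitz in y
    return np.tanh(x - y)

def B(x, m):
    # drift as a function of the state and of mu(psi(x, .))
    return -x + m

def sigma(x):
    # nondegenerate, bounded diffusion coefficient
    return 1.0 + 0.1 * np.cos(x)

N, T, n_steps = 2000, 1.0, 500          # particles, horizon, time steps
dt = T / n_steps
X = rng.normal(0.0, 1.0, size=N)        # i.i.d. samples from the initial law

for _ in range(n_steps):
    # empirical approximation of mu_t(psi(x, .)) for every particle x
    m = psi(X[:, None], X[None, :]).mean(axis=1)
    dW = rng.normal(0.0, np.sqrt(dt), size=N)
    X = X + B(X, m) * dt + sigma(X) * dW

print("empirical mean and std of X_T:", X.mean(), X.std())
\end{verbatim}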
Next, \cite{MV} considers \eqref{E1} with $$b_t(x,\mu):= \int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \tilde} \def\Ric{\text{\rm{Ric}} b_t(x,y)\mu(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y),\ \ \sigma} \def\ess{\text{\rm{ess}}_t(x,\mu):= \int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \tilde} \def\Ric{\text{\rm{Ric}}\sigma} \def\ess{\text{\rm{ess}}_t(x,y) \mu(\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y)$$ for measurable functions $$\tilde} \def\Ric{\text{\rm{Ric}} b: [0,T]\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d \rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,\ \ \tilde} \def\Ric{\text{\rm{Ric}} \sigma} \def\ess{\text{\rm{ess}}: [0,T]\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d \rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\otimes\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$$ satisfying $$\|\tilde} \def\Ric{\text{\rm{Ric}} \sigma} \def\ess{\text{\rm{ess}}_t(x,y)\|+|\tilde} \def\Ric{\text{\rm{Ric}} b_t(x,y)|\le C(1+|x|),\ \ \tilde} \def\Ric{\text{\rm{Ric}}\sigma} \def\ess{\text{\rm{ess}}\tilde} \def\Ric{\text{\rm{Ric}}\sigma} \def\ess{\text{\rm{ess}}^*\ge C^{-1}I$$ for some constant $C>1.$ Then \cite[Theorem 1]{MV} says that when $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}\in \scr P_4$, \eqref{E1} has a weak solution. If moreover $\sigma} \def\ess{\text{\rm{ess}}$ does not depend on the distribution and $\|\nabla} \def\pp{\partial} \def\E{\mathbb E \sigma} \def\ess{\text{\rm{ess}}\|_\infty<\infty$, then \cite[Theorem 2]{MV} shows that when $\E \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{r|X_0|^2}<\infty$ for some $r>0$, the SDE \eqref{E1} has a unique strong solution. Obviously, to apply these results it is necessary that $b$ and $\nabla} \def\pp{\partial} \def\E{\mathbb E \sigma} \def\ess{\text{\rm{ess}}$ are (locally) bounded, which is however not necessary for the condition in Corollary \ref{C1.2} and Corollary \ref{C1.3}. \subsection{Harnack inequality} In this subsection, we investigate the dimension-free log-Harnack inequality introduced in \cite{RW10} for \eqref{E1}, see \cite{Wbook} and references within for general results on these type Harnack inequalities and applications. We establish Harnack inequalities for $P_tf$ using coupling by change of measures (see for instance \cite[\S 1.1]{Wbook}). 
To this end, we need to assume that the noise part is distribution-free; that is, we consider the following special version of \eqref{E1}: \beq\label{E11} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= b_t(X_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t +\sigma} \def\ess{\text{\rm{ess}}_t(X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\ \ t\in [0,T]\end{equation} As in \cite{FYW1}, we define $P_tf(\mu_0)$ and $P_t^*\mu_0$ as follows: $$(P_tf)(\mu_0)= \int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} f \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D(P_t^*\mu_0)= \E f(X_t(\mu_0)),\ \ f\in \B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d), t\in [0,T], \mu_0\in \scr P_2,$$ where $X_t(\mu_0)$ solves \eqref{E11} with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_0.$ Let $$\D=\bigg\{\phi: [0,\infty)\rightarrow}\def\l{\ell [0,\infty) \text{\ is\ increasing}, \phi^2 \text{\ is\ concave,} \int_0^1\ff{\phi(s)}s\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s<\infty\bigg\}.$$ We will need the following assumption. \begin} \def\beq{\begin{equation}} \def\F{\scr F{enumerate} \item[$\bf{(H)}$] $\|b\|_{\infty}<\infty$ and there exist a constant $K>1$ and $\phi\in \D$ such that for any $t\in[0,T],\ x,y\in \mathbb{R}^d,$ and $\mu,\nu\in \scr P_2$, \beq\label{H1} K^{-1}I\leq(\sigma_t \sigma_t^{\ast})(x) \leq KI,\ \|\sigma_t(x)-\sigma_t(y)\|^2_{\mathrm{HS}}\leq K|x-y|^2, \end{equation} \beq\label{b-phi} |b_t(x,\mu)-b_t(y,\nu)|\leq \phi(|x-y|)+K\mathbb{W}_2(\mu,\nu). \end{equation} \end{enumerate} \begin} \def\beq{\begin{equation}} \def\F{\scr F{thm}\label{T3.1} Assume {\bf (H)}. There exists a constant $C>0$ such that \beq\label{LH2}(P_{t}\log f)(\nu_0)\le \log (P_{t}f)(\mu_0)+ \ff{C}{t\land 1}\mathbb W_2(\mu_0,\nu_0)^2\end{equation} for any $t\in (0,T],\mu_0,\nu_0\in\scr P_2, f\in \B_b^+(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$ with $f\geq 1.$ Moreover, there exists a constant $p_0>1$ such that for any $p>p_0$, \beq\label{H2'}(P_{t}f)^p(\nu_0)\le (P_{t}f^p)(\mu_0)\exp\left\{\ff{c}{t\land 1}\mathbb W_2(\mu_0,\nu_0)^2\right\}\end{equation} for any $t\in (0,T],\mu_0,\nu_0\in\scr P_2, f\in \B_b^+(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$ and some constant $c=c(p,K)>0$. \end{thm} \subsection{Shift Harnack inequality} In this section we establish the shift Harnack inequality for $P_t$ introduced in \cite{W14a}. To this end, we assume that $\sigma_t(x,\mu)$ does not depend on $x$. 
So SDE \eqref{E1} becomes \beq\label{E5} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= b_t(X_t, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t +\sigma} \def\ess{\text{\rm{ess}}_t(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\ \ t\in [0,T].\end{equation} \begin} \def\beq{\begin{equation}} \def\F{\scr F{thm}\label{T5.1} Let $\sigma} \def\ess{\text{\rm{ess}}: [0,T]\times \scr P_2\rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\otimes \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ and $b: [0,\infty)\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times\scr P_2\rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ be measurable such that $\sigma} \def\ess{\text{\rm{ess}}$ is invertible, $\|\sigma} \def\ess{\text{\rm{ess}}_t\|_{\infty}+\|\sigma} \def\ess{\text{\rm{ess}}_t^{-1}\|_{\infty}$ is bounded in $t\in [0,T]$, and $b$ satisfies the corresponding conditions in {\bf (H)}. \begin} \def\beq{\begin{equation}} \def\F{\scr F{enumerate} \item[$(1)$] For any $p>1, t\in [0,T], \mu_0\in \scr P_2, v\in\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ and $f\in \B_b^+(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$, \begin} \def\beq{\begin{equation}} \def\F{\scr F{align*}(P_{t}f)^p(\mu_0)\le &(P_{t}f^p(v+\cdot))(\mu_0)\\ &\times \exp\bigg[\ff{p\, \int_0^t \|\sigma} \def\ess{\text{\rm{ess}}_s^{-1}\|_{\infty}^2 \big\{|v|/t+\phi(s|v|/t)\big\}^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s}{2(p-1)}\bigg].\end{align*} Moreover, for any $f\in \B_b^+(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$ with $f\geq 1$, $$(P_{t}\log f)(\mu_0)\le \log (P_{t} f(v+\cdot))(\mu_0)+\frac{1}{2}\int_0^t \|\sigma} \def\ess{\text{\rm{ess}}_s^{-1}\|_{\infty}^2 \big\{|v|/t+\phi(s|v|/t)\big\}^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s.$$ \end{enumerate} \end{thm} \section{Preparations} We first present a new result on Krylov's estimate, then recall two lemmas from \cite{GM} for the construction of a weak solution, and finally introduce two lemmas on the existence and uniqueness of strong solutions. \subsection{Krylov's estimate} Consider the following SDE on $\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$: \beq\label{EN} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= b_t(X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \sigma} \def\ess{\text{\rm{ess}}_t(X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\ \ t\in [0,T].\end{equation} \begin} \def\beq{\begin{equation}} \def\F{\scr F{lem}\label{KK} Let $T>0$, and let $p,q\in (1,\infty)$ with $\ff d p +\ff 2 q<1$.
Assume that $\sigma_t(x)$ is uniformly continuous in $x\in\mathbb{R}^d$ uniformly with respect to $t\in[0,T]$, and that there exist a constant $K>1$ and a nonnegative function $F\in L_p^q(T)$ such that \beq\label{APP1} K^{-1}I\le \sigma} \def\ess{\text{\rm{ess}}_t(x)\sigma} \def\ess{\text{\rm{ess}}_t(x)^*\le K I,\ \ (t,x)\in [0,T]\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,\end{equation} \beq\label{APP2} |b_t(x)| \le K+ F(t,x),\ \ (t,x)\in [0,T]\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d.\end{equation} Then for any $(\aa,\bb)\in \scr K$, there exist constants $C=C(\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho,K, \aa,\bb, \|F\|_{L_p^q(T)})>0$ and $\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho=\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho(\aa,\bb)>0$, such that for any $s\in [0,T)$ and any solution $(X_{s,t})_{t\in [s,T]}$ of $\eqref{EN}$ from time $s$, \beq\label{APP'}\E\bigg[\int_s^t |f|(r, X_{s,r}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\Big| \F_s\bigg]\le C (t-s)^{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho}\|f\|_{L_{\aa}^{\bb}(T)},\ t\in [s,T], f\in L_{\aa}^{\bb}(T).\end{equation} \end{lem} \begin} \def\beq{\begin{equation}} \def\F{\scr F{proof} When $b$ is bounded, the assertion is due to \cite[Theorem 2.1]{Z2}. If $|b|\leq K+F$ for some constant $K>0$ and $0\leq F\in L_p^q(T)$, then we have a decomposition $b=b^{(1)}+b^{(2)}$ with $\|b^{(1)}\|_{\infty}\leq K$ and $|b^{(2)}|\leq F$, for instance, $b^{(1)}=\frac{b}{1\vee(|b|/K)}$. Let the diffeomorphisms $\{\theta_t\}_{t\in[0,T]}$ on $\mathbb{R}^d$ be constructed as in \cite[Lemma 4.3]{Z2} with $b^{(2)}$ replacing $b$; then $Y_{s,t}=\theta_t(X_{s,t})$ solves \beq\label{EN'} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D Y_t= \bar{b}_t(Y_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \bar{\sigma} \def\ess{\text{\rm{ess}}}_t(Y_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\ \ t\in [s,T],\end{equation} where $\bar{b}$ is bounded, and $\bar{\sigma} \def\ess{\text{\rm{ess}}}$ is uniformly continuous in $x\in\mathbb{R}^d$ uniformly with respect to $t\in[0,T]$.
Moreover, there exists a constant $\bar{K}>1$ depending on $K$ and $\|F\|_{L_p^q(T)}$ such that \beq\label{APP1'} \bar{K}^{-1}I\le \bar{\sigma} \def\ess{\text{\rm{ess}}}_t(x)\bar{\sigma} \def\ess{\text{\rm{ess}}}_t(x)^*\le \bar{K} I,\ \ (t,x)\in [0,T]\times \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,\end{equation} and $$\|\bar{b}\|_{\infty}+\|\nabla \theta\|_{\infty}+\|\nabla \theta^{-1}\|_{\infty}\leq \bar{ K}.$$ Again by \cite[Theorem 2.1]{Z2}, there exists a constant $C=C(\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho,\bar{K}, \aa,\bb)>0$ and $\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho=\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho(\aa,\bb)>0$ such that \beq\label{APP3'}\E\bigg[\int_s^t |f|(r, Y_{s,r}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\Big| \F_s\bigg]\le C (t-s)^{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho}\|f\|_{L_{\aa}^{\bb}(T)},\ t\in [s,T], f\in L_{\aa}^{\bb}(T).\end{equation} This together with $\|\nabla \theta\|_{\infty}<\bar{K}$ implies that \begin{align*}&\E\bigg[\int_s^t |f|(r, X_{s,r}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\Big| \F_s\bigg] =\E\bigg[\int_s^t |f|(r, \theta_r^{-1}(Y_{s,r})) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\Big| \F_s\bigg]\\ &\leq C (t-s)^{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho}\left(\int_0^T\left(\int_{\mathbb{R}^d}|f(r,\theta^{-1}_r(x))|^\alpha\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\right)^{\frac{\beta}{\alpha}}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{\frac{1}{\beta}}\\ &=C (t-s)^{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho}\left(\int_0^T\left(\int_{\mathbb{R}^d}|f(r,y)|^\alpha|\mathrm{det} \nabla\theta_r|\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y\right)^{\frac{\beta}{\alpha}}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^{\frac{1}{\beta}}\\ &\le C (t-s)^{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho}\|f\|_{L_{\aa}^{\bb}(T)},\ t\in [s,T], f\in L_{\aa}^{\bb}(T).\end{align*} Then the proof is finished. \end{proof} \subsection{Convergence of stochastic processes} To prove Theorem \ref{T1.1}(1), we will use the following two lemmas due to \cite[Lemma 5.1, 5.2]{GM}. \begin{lem}\label{PC} Let $\{\psi^n\}_{n\geq 1}$ be a sequence of $d$-dimensional processes defined on some probability space. Assume that \begin{align}\label{Ub}\lim_{R\rightarrow}\def\l{\ell\infty}\sup_{n\geq 1}\sup_{t\in[0,T]}\P(|\psi^n_t|>R)=0, \end{align} and for any $\vv>0$, \begin{align}\label{ETC} \lim_{\theta\to0}\sup_{n\geq 1}\sup_{s,t\in[0,T]}\{\P(|\psi^n_t-\psi^n_s|>\varepsilon): |t-s|\leq \theta\}=0. \end{align} Then there exist a sequence $\{n_k\}_{k\geq 1}$, a probability space $(\tilde{\Omega},\tilde{\F}, \tilde{\P})$ and stochastic processes $\{X_t, X^k_t\}_{t\in [0,T]} (k \geq 1)$, such that for every $t\in [0,T]$, $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\psi^{n_k} _t}|\P=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X^k_t}|\tilde{\P}$, and $X^k_t$ converges to $X_t$ in probability $\tilde} \def\Ric{\text{\rm{Ric}}\P$ as $k\rightarrow}\def\l{\ell\infty$. 
\end{lem} \begin{lem}\label{SL} Let $\{\eta^n\}_{n\geq 1}$ and $\eta$ be uniformly bounded $\mathbb{R}^{d}\otimes\mathbb{R}^{k}$-valued stochastic processes, and let $W^n_t$ and $W_t$ for $t\in [0,T]$ be Wiener processes such that the stochastic It\^{o} integrals $$I^n_t:=\int^t _0 \eta^n_s \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W^n_s,\ \ I_t := \int ^t _0 \eta_s \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_s,\ \ t\in [0,T]$$ are well-defined. Assume that $\eta^n_t \rightarrow}\def\l{\ell \eta_t$ and $W^n_t \rightarrow}\def\l{\ell W_t$ in probability for every $t\in [0,T]$. Then $$\lim_{n\rightarrow}\def\l{\ell\infty}\P\left(\sup_{t\in[0,T]}|I^n_t-I_t|\geq\varepsilon\right)=0,\ \ \vv>0.$$ \end{lem} \subsection{Existence and uniqueness of strong solutions} We first present a result on the existence of strong solutions deduced from weak solutions, then introduce a result on the existence and uniqueness of strong solutions under a Lipschitz type condition. \begin{lem}\label{SS} Let $(\bar\Omega, \bar\F_t,\bar W_t,\bar\P)$ and $\bar{X}_t$ be a weak solution to \eqref{E1} with $\mu_t:=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\bar X_t}|\bar\P$. If the SDE \begin{align}\label{class} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= b_t(X_t,\mu_t)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+ \sigma_t(X_t,\mu_t)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\ \ 0\le t\le T \end{align} has a unique strong solution $X_t$ up to life time with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_0$, then \eqref{E1} has a strong solution. \end{lem} \begin{proof} Since $\mu_t= \scr L_{\bar X_t}|\bar \P$, $\bar{X}_t$ is a weak solution to \eqref{class}. By the Yamada-Watanabe principle, the strong uniqueness of \eqref{class} implies the weak uniqueness, so that $X_t$ is nonexplosive with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t}=\mu_t, t\ge 0$. Therefore, $X_t$ is a strong solution to \eqref{E1}. \end{proof} \begin{lem}\label{SS2} Let $\theta\ge 1$ and let $\delta_0$ be the Dirac measure at $0$. If $b_t(0,\delta_0)$ is bounded in $t\in[0,T]$, and there exists a constant $L>0$ such that \beq\label{LIPS} \begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &\|\sigma} \def\ess{\text{\rm{ess}}_t(x,\mu)-\sigma} \def\ess{\text{\rm{ess}}_t(y,\nu)\|+|b_t(x,\mu)-b_t(y,\nu)|\\ &\le L\big\{|x-y|+\mathbb W_\theta(\mu,\nu)\big\},\ \ x,y\in\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d, \mu,\nu\in \scr P_\theta, t\in [0,T],\end{split}\end{equation} then for any $X_0$ with $\E |X_0|^\theta<\infty$, \eqref{E1} has a unique strong solution $(X_t)_{t\in [0,T]}$. \end{lem} \begin} \def\beq{\begin{equation}} \def\F{\scr F{proof} When $\theta\ge 2$ the assertion follows from \cite[Theorem 2.1]{FYW1}. So we only consider $\theta<2$. As explained in \cite{FYW1}, it suffices to find a constant $t_0\in (0,T)$ independent of $X_0$ such that \eqref{E1} has a unique strong solution up to time $t_0$ and $\sup_{t\in [0,t_0]} \E |X_t|^\theta<\infty$.
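Before carrying this out, we note that the fixed point construction used below (freeze the measure flow, solve the resulting classical SDE driven by the same Brownian motion, then update the measure flow) can be illustrated by the following numerical sketch. The particle approximation, the Euler discretisation and the concrete Lipschitz coefficients are hypothetical and serve only as an illustration of the scheme; they play no role in the proof.
\begin{verbatim}
import numpy as np

def solve_frozen_sde(mu_path, X0, b, sigma, dt, rng):
    """Euler scheme for the classical SDE obtained by freezing the measure flow
    mu^(n-1), represented here by one particle cloud per time step."""
    X = X0.copy()
    path = [X.copy()]
    for i, mu in enumerate(mu_path[:-1]):
        t = i * dt
        X = X + b(t, X, mu) * dt + sigma(t, X, mu) * np.sqrt(dt) * rng.normal(size=X.shape)
        path.append(X.copy())
    return path

def picard_iteration(b, sigma, X0, T=1.0, n_steps=100, n_iter=8, seed=0):
    """Sketch of the fixed point scheme: start from the constant flow
    mu^(0)_t = mu_0 and iterate
        X^(n)    = solution of the SDE with frozen flow mu^(n-1),
        mu^(n)_t = law of X^(n)_t (approximated by the particle cloud),
    reusing the same Brownian increments in every iteration."""
    dt = T / n_steps
    mu_path = [X0.copy() for _ in range(n_steps + 1)]        # mu^(0)
    for n in range(n_iter):
        rng = np.random.default_rng(seed)                    # same noise each time
        path = solve_frozen_sde(mu_path, X0, b, sigma, dt, rng)
        # rough proxy for sup_t W_1(mu^(n)_t, mu^(n-1)_t) via sorted samples
        gap = max(np.abs(np.sort(p) - np.sort(m)).mean()
                  for p, m in zip(path, mu_path))
        mu_path = path
        print(f"iteration {n + 1}: gap ~ {gap:.2e}")
    return mu_path

# Hypothetical coefficients of Lipschitz type (LIPS), d = 1:
b = lambda t, x, mu: -x + np.tanh(mu.mean())
sigma = lambda t, x, mu: 1.0 + 0.1 * np.sin(mu.mean())

if __name__ == "__main__":
    X0 = np.random.default_rng(1).normal(size=5000)
    picard_iteration(b, sigma, X0)
\end{verbatim}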
Let $X_t^{(0)}=X_0 $ and $\mu_t^{(0)}=\mu_0$ for $t\in [0,T].$ For any $n\ge 1$, consider the SDE $$\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t^{(n)}= b_t(X_t^{(n)}, \mu_t^{(n-1)})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+ \sigma} \def\ess{\text{\rm{ess}}_t(X_t^{(n)}, \mu_t^{(n-1)})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\ \ X_0^{(n)}=X_0,$$ where $\mu_t^{(n-1)}=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t^{(n-1)}}, 0\le t\le T.$ By \cite[Lemma 2.3(1)]{FYW1}, for any $n\ge 1$ this SDE has a unique solution and \beq\label{SS3} \sup_{s\in [0,T]} \E |X_s^{(n)}|^\theta<\infty,\ \ n\ge 1.\end{equation} Moreover, letting $$\xi_t^{(n)}:= X_t^{(n+1)}- X_t^{(n)},\ \ \LL_t^{(n)}:= \sigma} \def\ess{\text{\rm{ess}}_t(X_t^{(n+1)},\mu_t^{(n)})- \sigma} \def\ess{\text{\rm{ess}}_t(X_t^{(n)},\mu_t^{(n-1)}),$$ \cite[(2.11)]{FYW1} implies $$\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D |\xi_t^{(n)}|^2 \le 2 \langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\LL_t^{(n)}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t, \xi_t^{(n)}\> + K_0 \big\{|\xi_t^{(n)}|^2 + \mathbb W_\theta(\mu_t^{(n)}, \mu_t^{(n-1)})^2\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t,\ \ n\ge 1, t\in [0,T] $$ for some constant $K_0>0$. Since $\xi_0^{(n)}=0$, it follows that \begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} \E|\xi_t^{(n)}|^2 &\le \int_0^t K_0 \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{K_0(t-s)} \mathbb W_\theta(\mu_s^{(n)},\mu_s^{(n-1)})^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\\ &\le t K_0\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{K_0T} \sup_{s\in [0,t]}\big(\E |\xi_s^{(n-1)}|^\theta\big)^{\ff 2 \theta}, \ \ t\in [0,T], n\ge 1.\end{align*} Since $\theta<2$, by Jensen's inequality we may find a constant $K_1>0$ such that $$\sup_{s\in [0,t]}\E |\xi_s^{(n)}|^\theta\le K_1 t^{\ff \theta 2} \sup_{s\in [0,t]}\E |\xi_s^{(n-1)}|^\theta,\ \ n\ge 1, t\in [0,T].$$ So, taking $t_0\in (0, T\land K_1^{-\ff 2 \theta})$, we may find a constant $\vv\in (0,1)$ such that $$\sup_{s\in [0,t]}\E |\xi_s^{(n)}|^\theta\le \vv^{n} \sup_{s\in [0,t_0]} \E |X_s^{(1)}-X_0|^\theta<\infty,\ \ n\ge 1,\ t\in[0,t_0].$$ Therefore, for any $t\in [0,t_0]$ there exists an $\F_t$-measurable random variable $X_t$ on $\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ such that $$\lim_{n\rightarrow}\def\l{\ell\infty} \sup_{t\in [0,t_0]}\mathbb W_\theta(\mu_t^{(n)},\mu_t)^\theta\le \lim_{n\rightarrow}\def\l{\ell\infty} \sup_{t\in [0,t_0]} \E |X_t^{(n)}- X_t|^\theta =0,$$ where $\mu_t:=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t}$. Combining this with \eqref{LIPS} and letting $n\rightarrow}\def\l{\ell\infty$ in the equation $$X_t^{(n)}= X_0+\int_0^t b_s(X_s^{(n)}, \mu_s^{(n-1)}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s +\int_0^t \sigma} \def\ess{\text{\rm{ess}}_s(X_s^{(n)}, \mu_s^{(n-1)}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_s,\ \ n\ge 1, t\in [0,t_0],$$ we derive for every $t\in [0,t_0]$, $$X_t= X_0+\int_0^t b_s(X_s, \mu_s) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s +\int_0^t \sigma} \def\ess{\text{\rm{ess}}_s(X_s, \mu_s) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_s.$$ Thus, $(X_s)_{s\in [0,t_0]}$ has a continuous version which is a strong solution of \eqref{E1} up to time $t_0$. The uniqueness follows easily from condition \eqref{LIPS} and It\^o's formula.
\end{proof} \section{Proofs of Theorem \ref{T1.1} and Corollary \ref{C1.2}} \subsection{Proof of Theorem \ref{T1.1}(1)-(2)} According to \cite{Z2}, the condition in Theorem \ref{T1.1}(2) implies that the SDE \eqref{class} has a unique strong solution. So, by Lemma \ref{SS}, Theorem \ref{T1.1}(2) follows from Theorem \ref{T1.1}(1). Below we only prove the existence of weak solution. By Lemma \ref{SS2}, condition (3) in $(H^\theta)$ implies that the SDE \beq\label{X^n} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X^n_t= b_t^n(X^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t^n})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \sigma} \def\ess{\text{\rm{ess}}_t^n(X^n_t, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t^n}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\ \ X_0^n= X_0\end{equation} has a unique strong solution $(X^n_t)_{t\in [0,T]}$. So, Lemma \ref{KK}, \eqref{con} and condition (2) in $(H^\theta)$ imply that for any $(p,q)\in \scr K$, \beq\label{KRE} \E \int_s^t f(r,X_r^n)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r \le C(t-s)^\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho \|f\|_{L_p^q(T)},\ \ 0\le f\in L_p^q(T), n\ge 1\end{equation} holds for some constants $C>0$ and $ \delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho\in (0,1).$ We first show that Lemma \ref{PC} applies to $\psi_n:=(X^n,W)$, for which it suffices to verify conditions \eqref{Ub} and \eqref{ETC} for $\psi_n:=X^n$. By condition (2) in $(H^\theta)$ and \eqref{APP3} implied by \eqref{APP'}, there exist constants $c_1,c_2>0$ such that \begin{equation}\label{AP3}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split} \E |X ^n_t|^\theta &\leq c_1\bigg\{\E |X_0|^\theta+\E\bigg(\int_0^T|b^n_t(X^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X^n_t})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\bigg)^\theta\\ &\ \ \ \ \ \ \ \ \ + \E\left(\int_0^T\|\sigma^n_t(X^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X^n_t})\|^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\right)^\frac{\theta}{2}\bigg\}\\ &\leq c_2\Big(\E |X_0|^\theta + T^\theta+ \|G\|_{L^{q}_{p}(T)}^\theta + T^{\ff \theta 2}\Big) <\infty, \ \ n\ge 1, t\in [0,T]. \end{split}\end{equation} Thus, \eqref{Ub} holds for $\psi_n:=X^n.$ Next, by the same reason, there exists a constant $c_3>0$ such that for any $0 \leq s \leq t \leq T$, \begin{align*} \E |X^n_t-X^n_s|&\leq \E\int_s^t|b^n_r(X^n_r,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X^n_r})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r+ \E\left(\int_s^t\|\sigma^n_r(X^n_r,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X^n_r})\|^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\right)^\frac{1}{2}\\ &\leq c_3\big(t-s + (t-s)^\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho\|G\|_{L^q_p(T)}+ (t-s)^{\frac{1}{2}}\big). \end{align*}Hence, \eqref{ETC} holds for $\psi_n:=X^n$. 
According to Lemma \ref{PC}, there exists a subsequence of $(X^n,W)_{n\ge 1}$, denoted again by $(X^n,W)_{n\ge 1}$, stochastic processes $(\tilde{X}^n,\tilde{W}^n)_{n\ge 1}$ and $(\tilde{X}, \tilde{W})$ on a complete probability space $(\tilde{\OO}, \tilde{\F}, \tilde{\P})$ such that $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(X^n, W)}|\P=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{(\tilde{X}^n, \tilde{W}^n)}|\tilde{\P}$ for any $n\geq 1$, and for any $t\in [0,T]$, $\lim_{n\rightarrow}\def\l{\ell\infty}(\tilde{X}^n_t, \tilde} \def\Ric{\text{\rm{Ric}} W_t^n)=(\tilde{X}_t, \tilde} \def\Ric{\text{\rm{Ric}} W_t)$ in the probability $\tilde} \def\Ric{\text{\rm{Ric}}\P$. As in \cite{GM}, let $\tilde{\F}^n_{t}$ be the completion of the $\sigma} \def\ess{\text{\rm{ess}}$-algebra generated by the $\{\tilde{X}^n_s, \tilde{W}^n_s: s\leq t\}$. Then as shown in \cite{GM}, $\tilde{X}^n_t$ is $\tilde{\F}^n_{t}$-adapted and continuous (since $X^n$ is continuous and $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X^n}|\P=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde} \def\Ric{\text{\rm{Ric}} X^n}|\tilde} \def\Ric{\text{\rm{Ric}}\P$), $\tilde{W}^n$ is a $d$-dimensional Brownian motion on $(\tilde} \def\Ric{\text{\rm{Ric}} \OO, \{\tilde} \def\Ric{\text{\rm{Ric}} \F_t^n\}_{t\in [0,T]},\tilde} \def\Ric{\text{\rm{Ric}}\P)$, and $(\tilde{X}^n_t,\tilde{W}^n_t)_{t\in [0,T]}$ solves the SDE \beq\label{titlde-X^n} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{X}^n_t= b^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t}|\tilde{\P})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+ \sigma^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t}|\tilde{\P})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{W}^n_t,\ \ \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_0}|\tilde{\P}=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}|\P. \end{equation} Simply denote $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t}|\tilde{\P}=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t}$ and $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t}|\tilde{\P}=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t}$. Then $(\tilde{X_t},\tilde{W_t})_{t\in [0,T]}$ is a weak solution to \eqref{E1} provided for any $\varepsilon>0$, \beq\label{(A1)} \lim_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}\left(\sup_{s\in[0,T]}\int_{0}^s| b^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t})-b_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\geq\varepsilon\right)=0,\end{equation} and \beq\label{(A2)} \lim_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}\left(\sup_{s\in[0,T]}\left| \int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\tilde{W}^n_t-\int_{0}^s\sigma} \def\ess{\text{\rm{ess}}_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{W}_t\right|\geq\varepsilon\right)=0.\end{equation} In the following we prove these two limits respectively. 
\begin} \def\beq{\begin{equation}} \def\F{\scr F{proof}[Proof of \eqref{(A1)}] For any $n\ge m\ge 1$, we have $$\int_{0}^s | b^n_t({\tilde X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{\tilde X}^n_t})-b_t({\tilde X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{\tilde X}_t})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\le I_1(s)+ I_2(s)+I_3(s),$$ where \begin{align*} &I_1(s):= \int_{0}^s| b^n_t({\tilde X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{\tilde X}^n_t})-b^{m}_t({\tilde X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{\tilde X}_t})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t,\\ &I_2(s):=\int_{0}^s|b^{m}_t({\tilde X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{\tilde X}_t})-b^{m}_t({\tilde X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{\tilde X}_t})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t,\\ &I_3(s):= \int_{0}^s| b^{m}_t({\tilde X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{\tilde X}_t})-b_t({\tilde X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{{\tilde X}_t})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t. \end{align*} Below we estimate these $I_i(s)$ respectively. Firstly, by Chebyshev's inequality and \eqref{KRE}, we arrive at \begin{align*} \tilde{\P}(\sup_{s\in[0,T]}I_1(s)\geq\frac{\varepsilon}{3})&\leq \frac{9}{\varepsilon^2}\E\int_{0}^T1_{\{|\tilde{X}^n_t|\leq R\}}| b^n_t(\tilde{X}^n_t,\tilde{\mu}^n_t)-b^{m}_t(\tilde{X}^n_t,\tilde{\mu}_t)|^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &+\frac{9}{\varepsilon^2}\E\int_{0}^T1_{\{|\tilde{X}^n_t|> R\}}| b^n_t(\tilde{X}^n_t,\tilde{\mu}^n_t)-b^{m}_t(\tilde{X}^n_t,\tilde{\mu}_t)|^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &\leq \frac{9C}{\varepsilon^2}\left(\int_{0}^T\left(\int_{|x|\leq R}|b^n_t(x,\tilde{\mu}^n_t)-b^{m}_t(x,\tilde{\mu}_t)|^{2p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\right)^{q/p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\right)^{\frac{1}{q}}\\ &+\frac{36K}{\varepsilon^2}\int_{0}^T\tilde{\P}(|\tilde{X}^n_t|> R)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+\frac{36C}{\varepsilon^2}\|G1_{\{|\cdot|>R\}}\|_{L_p^q(T)}. \end{align*} Since $\tilde{X}^n_t$ converges to $\tilde{X}_t$ in probability, \eqref{AP3} implies $$\lim_{n\rightarrow}\def\l{\ell\infty} \mathbb W_\theta(\tilde} \def\Ric{\text{\rm{Ric}}\mu_t^n,\mu_t) =0,$$ and $$\lim_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}(|\tilde{X}^n_t|> R)\leq\tilde{\P}(|\tilde{X}_t|\geq R).$$ Then it follows from $(H^\theta)$ (1) and (3) that \begin{align*} &\lim_{n\rightarrow}\def\l{\ell\infty}|b^n_t(x,\tilde{\mu}^n_t)-b_t(x,\tilde{\mu}_t)|=0, \ \ a.e. \ \ t\in[0,T],x\in\mathbb{R}^d. \end{align*} So, by condition (2) in $(H^\theta)$, we may apply the dominated convergence theorem to derive \begin{equation}\begin{split}\label{I1J} &\limsup_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}(\sup_{s\in[0,T]}I_1(s)\geq\frac{\varepsilon}{4})\\ &\leq \frac{9C}{\varepsilon^2}\left(\int_{0}^T\left(\int_{|x|\leq R}|b_t(x,\tilde{\mu}_t)-b^{m}_t(x,\tilde{\mu}_t)|^{2p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\right)^{q/p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\right)^{\frac{1}{q}}\\ &+\frac{36K}{\varepsilon^2}\int_{0}^T\tilde{\P}(|\tilde{X}_t|\geq R)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+\frac{36C}{\varepsilon^2}\|G1_{\{|\cdot|>R\}}\|_{L_p^q(T)}. 
\end{split}\end{equation} Since $b^{m}$ is bounded and continuous, it follows that \begin{align*} &\limsup_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}\Big(\sup_{s\in[0,T]}I_2(s)\geq\frac{\varepsilon}{3}\Big)\leq\limsup_{n\rightarrow}\def\l{\ell\infty}\frac{3}{\varepsilon} \E\int_{0}^T|b^{m}_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})-b^{m}_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t=0. \end{align*} Finally, since $\tilde} \def\Ric{\text{\rm{Ric}} X^n_t\rightarrow}\def\l{\ell \tilde} \def\Ric{\text{\rm{Ric}} X_t$ in probability, estimate \eqref{KRE} also holds for $\tilde} \def\Ric{\text{\rm{Ric}} X$ replacing $\tilde} \def\Ric{\text{\rm{Ric}} X^n$. Therefore, inequality \eqref{I1J} holds for $I_3$ replacing $I_1$. In conclusion, we arrive at \begin{align*} &\limsup_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}\Big(\sup_{s\in[0,T]}\int_{0}^s| b^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t})-b_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\geq\varepsilon\Big)\\ &\leq \limsup_{n\rightarrow}\def\l{\ell\infty}\sum_{i=1}^3\tilde{\P}\Big(\sup_{s\in[0,T]}I_i(s)\geq\frac{\varepsilon}{3}\Big)\\ &\leq \frac{18C}{\varepsilon^2}\left(\int_{0}^T\left(\int_{|x|\leq R}|b_t(x,\tilde{\mu}_t)-b^{m}_t(x,\tilde{\mu}_t)|^{2p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\right)^{q/p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\right)^{\frac{1}{q}}\\ &+\frac{72K}{\varepsilon^2}\int_{0}^T\tilde{\P}(|\tilde{X}_t|\geq R)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+\frac{72C}{\varepsilon^2}\|G1_{\{|\cdot|>R\}}\|_{L_p^q(T)}. \end{align*} for any $m>0$ and $R>0$. Then letting first $m\rightarrow}\def\l{\ell\infty$ and then $R\rightarrow}\def\l{\ell\infty$, due to (1) and (2) in $(H^\theta)$, we obtain from the dominated convergence theorem that \begin{align*} &\limsup_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}\Big(\sup_{s\in[0,T]}\int_{0}^s| b^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t})-b_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})|\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\geq\varepsilon\Big)=0. 
\end{align*}\end{proof} \begin} \def\beq{\begin{equation}} \def\F{\scr F{proof}[Proof of \eqref{(A2)}] For any $n\ge m\ge 1$ we have \begin{align*} &\left| \int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\tilde{W}^n_t-\int_{0}^s\sigma} \def\ess{\text{\rm{ess}}_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{W}_t\right|\\ &\leq\left| \int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\tilde{W}^n_t-\int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^{m}_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^{m}_t})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{W}^n_t\right|\\ &+\left| \int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^{m}_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^{m}_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\tilde{W}^n_t-\int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^{m}_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^{m}_t})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{W}_t\right|\\ &+\left| \int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^{m}_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^{m}_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\tilde{W}_t-\int_{0}^s\sigma} \def\ess{\text{\rm{ess}}_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{W}_t\right|\\ &=:J_1(s)+J_2(s)+J_3(s). \end{align*} By Chebyshev's inequality, BDG inequality and \eqref{KRE}, we have \begin{align*} \tilde{\P}\Big(\sup_{s\in[0,T]}J_1(s)\geq\frac{\varepsilon}{3}\Big)&\leq \frac{9}{\varepsilon^2}\E\int_{0}^T1_{\{|\tilde{X}^n_t|\leq R\}}\| \sigma^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t})-\sigma^{m}_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^{m}_t})\|_{HS}^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &+ \frac{9}{\varepsilon^2}\E\int_{0}^T1_{\{|\tilde{X}^n_t|>R\}}\| \sigma^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t})-\sigma^{m}_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^{m}_t})\|_{HS}^2\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &\leq \frac{9C}{\varepsilon^2}\left(\int_{0}^T\left(\int_{|x|\leq R}\|\sigma} \def\ess{\text{\rm{ess}}^n_t(x,\tilde{\mu}^n_t)-\sigma} \def\ess{\text{\rm{ess}}^{m}_t(x,\tilde{\mu}^{m}_t)\|_{HS}^{2p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\right)^{\frac{q}{p}}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\right)^{\frac{1}{q}}\\ &+\frac{18dK}{\varepsilon^2}\int_{0}^T\tilde{\P}(|\tilde{X}^n_t|> R)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t. 
\end{align*} By condition (1) in $(H^\theta)$, and $\tilde} \def\Ric{\text{\rm{Ric}}\mu^n_t\rightarrow}\def\l{\ell\tilde} \def\Ric{\text{\rm{Ric}}\mu_t$ in $\scr P_\theta$ as observed above, we have \begin{align*} &\lim_{n\rightarrow}\def\l{\ell\infty}\|\sigma} \def\ess{\text{\rm{ess}}^n_t(x,\tilde{\mu}^n_t)-\sigma} \def\ess{\text{\rm{ess}}_t(x,\tilde{\mu}_t)\|=0, \end{align*} and $$\lim_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}(|\tilde{X}^n_t|> R)\leq\tilde{\P}(|\tilde{X}_t|\geq R).$$ So, the dominated convergence theorem gives \begin{equation}\begin{split}\label{I_1} &\limsup_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}\Big(\sup_{s\in[0,T]}J_1(s)\geq\frac{\varepsilon}{3}\Big)\\ &\leq \frac{9C}{\varepsilon^2}\left(\int_{0}^T\left(\int_{|x|\leq R}\|\sigma} \def\ess{\text{\rm{ess}}_t(x,\tilde{\mu}_t)-\sigma} \def\ess{\text{\rm{ess}}^{m}_t(x,\tilde{\mu}^{m}_t)\|_{HS}^{2p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\right)^{\frac{q}{p}}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\right)^{\frac{1}{q}}\\ &+\frac{18dK}{\varepsilon^2}\int_{0}^T\tilde{\P}(|\tilde{X}_t|> R)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t. \end{split}\end{equation} Similarly, \begin{align*} & \tilde{\P}\Big(\sup_{s\in[0,T]}J_3(s)\geq\frac{\varepsilon}{3}\Big)\\ &\leq \frac{9C}{\varepsilon^2}\left(\int_{0}^T\left(\int_{|x|\leq R}\|\sigma} \def\ess{\text{\rm{ess}}_t(x,\tilde{\mu}_t)-\sigma} \def\ess{\text{\rm{ess}}^{m}_t(x,\tilde{\mu}^{m}_t)\|_{HS}^{2p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\right)^{\frac{q}{p}}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\right)^{\frac{1}{q}}\\ &+\frac{18dK}{\varepsilon^2}\int_{0}^T\tilde{\P}(|\tilde{X}_t|> R)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t. \end{align*} So, applying Lemma \ref{SL} to \begin{align*} \eta_n(t):=\sigma} \def\ess{\text{\rm{ess}}^{m}_t(\tilde{X}^n_t,\tilde{\mu}^{m}_t), \ \ \eta(t):=\sigma} \def\ess{\text{\rm{ess}}^{m}_t(\tilde{X}_t,\tilde{\mu}^{m}_t), \end{align*} we conclude that when $n\rightarrow}\def\l{\ell\infty$, $$ \int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^{m}_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^{m}_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\tilde{W}^n_t\rightarrow}\def\l{\ell\int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^{m}_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^{m}_t})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{W}_t$$ in probability $\tilde} \def\Ric{\text{\rm{Ric}} \P$, uniformly in $s\in[0, T ]$. 
Hence, \begin{align*} &\lim_{n\rightarrow}\def\l{\ell\infty}\tilde{\P}\left(\sup_{s\in[0,T]}\left| \int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^n_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^n_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\tilde{W}^n_t-\int_{0}^s\sigma} \def\ess{\text{\rm{ess}}_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{W}_t\right|\geq\varepsilon\right)\\ &\leq \frac{18C}{\varepsilon^2}\left(\int_{0}^T\left(\int_{|x|\leq R}\|\sigma} \def\ess{\text{\rm{ess}}_t(x,\tilde{\mu}_t)-\sigma} \def\ess{\text{\rm{ess}}^{m}_t(x,\tilde{\mu}^{m}_t)\|_{HS}^{2p}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x\right)^{\frac{q}{p}}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\right)^{\frac{1}{q}}\\ &+\frac{36dK}{\varepsilon^2}\int_{0}^T\tilde{\P}(|\tilde{X}_t|> R)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t. \end{align*} Letting first $m\rightarrow}\def\l{\ell\infty$ and then $R\rightarrow}\def\l{\ell\infty$, we prove that when $n\rightarrow}\def\l{\ell\infty$, $$ \int_{0}^s\sigma} \def\ess{\text{\rm{ess}}^{n}_t(\tilde{X}^n_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}^{n}_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D\tilde{W}^n_t\rightarrow}\def\l{\ell\int_{0}^s\sigma} \def\ess{\text{\rm{ess}}_t(\tilde{X}_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde{X}_t})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde{W}_t$$ in probability $\tilde} \def\Ric{\text{\rm{Ric}}\P$, uniformly in $s\in[0, T ]$.\end{proof} \subsection{Proof of Theorem \ref{T1.1}(3)} We will use the following result for the maximal operator: \begin{align}\label{max} \scr M}\def\Q{\mathbb Q} \def\texto{\text{o}} \def\LL{\Lambda h(x):=\sup_{r>0}\frac{1}{|B(x,r)|}\int_{B(x,r)}h(y)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D y,\ \ h\in L^1_{loc}(\mathbb{R}^d), x\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d, \end{align} where $B(x,r):=\{y: |x-y|<r\},$ see \cite[Appendix A]{CD}. \begin{lem} \label{Hardy} There exists a constant $C>0$ such that for any continuous and weak differentiable function $f$, \beq\label{HH1} |f(x)-f(y)|\leq C|x-y|(\scr M}\def\Q{\mathbb Q} \def\texto{\text{o}} \def\LL{\Lambda |\nabla f|(x)+\scr M}\def\Q{\mathbb Q} \def\texto{\text{o}} \def\LL{\Lambda |\nabla f|(y)),\ \ {\rm a.e.}\ x,y\in\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d.\end{equation} Moreover, for any $p>1$, there exists a constant $C_{p}>0$ such that \beq\label{HH2} \|\scr M}\def\Q{\mathbb Q} \def\texto{\text{o}} \def\LL{\Lambda f\|_{L^p}\leq C_{p}\|f\|_{L^p},\ \ f\in L^p(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d). \end{equation} \end{lem} Let $X$ and $Y$ be two solutions to \eqref{E1} with $X_0=Y_0$, and let $\mu_t=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t}, \nu_t=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_t}, t\in [0,T].$ Then $\mu_0=\nu_0$. Let $$b_t^\mu(x)= b_t(x, \mu_t),\ \ \ \sigma} \def\ess{\text{\rm{ess}}_t^\mu(x)= \sigma} \def\ess{\text{\rm{ess}}_t(x,\mu_t),\ \ (t,x)\in [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d,$$ and define $b_t^\nu, \sigma} \def\ess{\text{\rm{ess}}_t^\nu$ in the same way using $\nu_t$ replacing $\mu_t$. 
Then \beq\label{E1'}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= b^\mu_t(X_t)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+ \sigma^\mu_t(X_t)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\\ &\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D Y_t= b_t^\nu(Y_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t +\sigma} \def\ess{\text{\rm{ess}}_t^\nu(Y_t) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t.\end{split} \end{equation} For any $\lambda>0$, consider the following PDE for $u: [0,T]\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\rightarrow}\def\l{\ell \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$: \beq\label{PDE} \frac{\partial u_t}{\partial t}+\frac{1}{2}\mathrm{Tr} (\sigma^\mu_t(\sigma_t^\mu)^\ast\nabla^2u_t)+\nabla_{b_t^\mu}u_t+b_t^\mu=\lambda u_t,\ \ u_T=0. \end{equation} By Lemma \ref{KK} and \cite[Theorem 3.1]{Z3}, when $\ll$ is large enough \eqref{PDE} has a unique solution $\mathbf{u}^{\lambda,\mu}$ satisfying \begin{align}\label{u0} \|\nabla \mathbf{u}^{\lambda,\mu}\|_{\infty}\leq \frac{1}{5}, \end{align} and \beq\label{u01} \|\nabla^2 \mathbf{u}^{\lambda,\mu}\|_{L^{2q}_{2p}(T)}<\infty.\end{equation} Let $\theta^{\lambda,\mu}_t(x)=x+\mathbf{u}^{\lambda,\mu}_t(x)$. By \eqref{E1'}, \eqref{PDE} and It\^o's formula (see \cite{Z2} for more details), we have \beq\label{E-X} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \theta^{\lambda,\mu}_t(X_t)= \lambda \mathbf{u}^{\lambda,\mu}_t(X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+ (\nabla\theta_t^{\lambda,\mu}\sigma^\mu_t)(X_t)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t, \end{equation} and \beq\begin{split}\label{E-Y} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \theta^{\lambda,\mu}_t(Y_t)&=\lambda \mathbf{u}^{\lambda,\mu}_t(Y_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+(\nabla\theta_t^{\lambda,\mu}\sigma^\nu_t)(Y_t)\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t +[\nabla\theta_t^{\lambda,\mu}(b^\nu_t-b^\mu_t)](Y_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &+\frac{1}{2}\mathrm{Tr} [(\sigma^\nu_t(\sigma^\nu_t)^\ast-\sigma^\mu_t(\sigma^\mu_t)^\ast)\nabla^2\mathbf{u}^{\lambda,\mu}_t](Y_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t. \end{split}\end{equation} Let $\xi_t=\theta^{\lambda,\mu}_t(X_t)-\theta^{\lambda,\mu}_t(Y_t)$. 
By \eqref{E-X}, \eqref{E-Y} and It\^o's formula, we obtain \begin{equation*}\begin{split} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D|\xi_t|^2 =&2\lambda\left<\xi_t,\mathbf{u}^{\lambda,\mu}_t(X_t)-\mathbf{u}^{\lambda,\mu}_t(Y_t)\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &+2\left\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\xi_t,[(\nabla\theta_t^{\lambda,\mu}\sigma^\mu_t)(X_t)-(\nabla\theta_t^{\lambda,\mu}\sigma^\nu_t)(Y_t)]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t\right\>\\ &+\left\|(\nabla\theta_t^{\lambda,\mu}\sigma^\mu_t)(X_t)-(\nabla\theta_t^{\lambda,\mu}\sigma^\nu_t)(Y_t)\right\|^2_{HS}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &-2\left\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\xi_t, [\nabla\theta_t^{\lambda,\mu}(b^\nu_t-b^\mu_t)](Y_t)\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &-\left\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\xi_t,\mathrm{Tr} [(\sigma^\nu_t(\sigma^\nu_t)^\ast-\sigma^\mu_t(\sigma^\mu_t)^\ast)\nabla^2\mathbf{u}^{\lambda,\mu}_t](Y_t)\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t. \end{split}\end{equation*}So, for any $m\ge 1$, \beq\label{NN1}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D|\xi_t|^{2m} =\, &2m\lambda|\xi_t|^{2(m-1)}\left<\xi_t,\mathbf{u}^{\lambda,\mu}_t(X_t)-\mathbf{u}^{\lambda,\mu}_t(Y_t)\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &+2m|\xi_t|^{2(m-1)}\left\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\xi_t,[(\nabla\theta_t^{\lambda,\mu}\sigma^\mu_t)(X_t)-(\nabla\theta_t^{\lambda,\mu}\sigma^\nu_t)(Y_t)]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t\right\>\\ &+m|\xi_t|^{2(m-1)}\left\|(\nabla\theta_t^{\lambda,\mu}\sigma^\mu_t)(X_t)-(\nabla\theta_t^{\lambda,\mu}\sigma^\nu_t)(Y_t)\right\|^2_{HS}\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &+2m(m-1) |\xi_t|^{2(m-2)}\left|[(\nabla\theta_t^{\lambda,\mu}\sigma^\mu_t)(X_t)-(\nabla\theta_t^{\lambda,\mu}\sigma^\nu_t)(Y_t)]^\ast\xi_t \right|^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &-2m|\xi_t|^{2(m-1)}\left\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\xi_t, [\nabla\theta_t^{\lambda,\mu}(b^\nu_t-b^\mu_t)](Y_t)\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\\ &-m|\xi_t|^{2(m-1)}\left\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\xi_t,\mathrm{Tr} [(\sigma^\nu_t(\sigma^\nu_t)^\ast-\sigma^\mu_t(\sigma^\mu_t)^\ast)\nabla^2\mathbf{u}^{\lambda,\mu}_t](Y_t)\right\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.\end{split}\end{equation} By \eqref{u0}, \eqref{LIP}, Lemma \ref{Hardy}, and noting that the distributions of $X_t$ and $Y_t$ are absolutely continuous with respect to the Lebesgue measure, we may find out a constant $c_1>0$ such that \beq\label{XPP1}|\xi_t|^{2(m-1)} |\xi_t|\cdot|\mathbf{u}^{\lambda,\mu}_t(X_t)-\mathbf{u}^{\lambda,\mu}_t(Y_t)|\le c_1 |\xi_t|^{2m},\end{equation} \beq\label{XPP2}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &|\xi_t|^{2(m-2)} \left|[(\nabla\theta_t^{\lambda,\mu}\sigma^\mu_t)(X_t)-(\nabla\theta_t^{\lambda,\mu}\sigma^\nu_t)(Y_t)]^\ast\xi_t \right|^2\\ &\le |\xi_t|^{2(m-1)}\left\|(\nabla\theta_t^{\lambda,\mu}\sigma^\mu_t)(X_t)-(\nabla\theta_t^{\lambda,\mu}\sigma^\nu_t)(Y_t)\right\|^2_{HS}\\ &\le |\xi_t|^{2(m-1)} \Big\{C |\xi_t| \scr M\big(\|\nabla} \def\pp{\partial} \def\E{\mathbb 
E^2\theta_t^{\ll,\mu}\|+\|\nabla\sigma} \def\ess{\text{\rm{ess}}_t^\mu\|\big)(X_t)\\ &\qquad\qquad\qquad + C |\xi_t| \scr M\big(\|\nabla} \def\pp{\partial} \def\E{\mathbb E^2\theta_t^{\ll,\mu}\|+\|\nabla\sigma} \def\ess{\text{\rm{ess}}_t^\mu\|\big)(Y_t)+ \mathbb W_\theta(\mu_t,\nu_t)\Big\}^2\\ &\le c_1|\xi_t|^{2m} \big\{\scr M\big(\|\nabla} \def\pp{\partial} \def\E{\mathbb E^2\theta_t^{\ll,\mu}\|+\|\nabla\sigma} \def\ess{\text{\rm{ess}}_t^\mu\|\big)(X_t) + \scr M\big(\|\nabla} \def\pp{\partial} \def\E{\mathbb E^2\theta_t^{\ll,\mu}\|+\|\nabla\sigma} \def\ess{\text{\rm{ess}}_t^\mu\|\big)(Y_t)\big\}^2\\ &\quad + c_1 |\xi_t|^{2m}+c_1\mathbb W_\theta(\mu_t,\nu_t)^{2m},\end{split}\end{equation} \beq\label{XPP3} \begin} \def\beq{\begin{equation}} \def\F{\scr F{split}&|\xi_t|^{2(m-1)} |\xi_t| \cdot |\{\nabla} \def\pp{\partial} \def\E{\mathbb E\theta_t^{\ll,\mu}(b_t^\nu-b_t^\mu)\}(Y_t)|\\ &\le L \|\nabla} \def\pp{\partial} \def\E{\mathbb E\theta^{\ll,\mu}\|_{T,\infty }|\xi_t|^{2(m-1)}|\xi_t| \mathbb W_\theta(\mu_t,\nu_t)\le c_1\big(|\xi_t|^{2m} +\mathbb W_\theta(\mu_t,\nu_t)^{2m}\big), \end{split}\end{equation} and for some constants $c_0,c_1>0$ \beq\label{XPP4}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &|\xi_t|^{2(m-1)}|\xi_t|\cdot \big|\mathrm{Tr} [(\sigma^\nu_t(\sigma^\nu_t)^\ast-\sigma^\mu_t(\sigma^\mu_t)^\ast)\nabla^2\mathbf{u}^{\lambda,\mu}_t](Y_t)\big|\\ &\le c_0 |\xi_t|^{2m-1} \mathbb W_\theta(\mu_t,\nu_t) \|\nabla} \def\pp{\partial} \def\E{\mathbb E^2\mathbf{u}^{\lambda,\mu}_t\|(Y_t)\\ &\le c_1 |\xi_t|^{2m}\|\nabla} \def\pp{\partial} \def\E{\mathbb E^2\mathbf{u}^{\lambda,\mu}_t\|^{\ff{2m}{2m-1}}(Y_t) + c_1\mathbb W_\theta(\mu_t,\nu_t)^{2m}.\end{split}\end{equation} Combining \eqref{XPP1}-\eqref{XPP4} with \eqref{NN1}, and noting that $\ff{2m}{2m-1}\le 2$, we arrive at \beq\label{NNP}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D |\xi_t|^{2m} \le c_2 |\xi_t|^{2m} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D A_t + c_2 \mathbb W_\theta(\mu_t,\nu_t)^{2m}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D M_t\end{equation} for some constant $c_2>0$, a local martingale $M_t$, and \begin{align*} A_t:=\int_0^t\Big\{&1+ |\nabla} \def\pp{\partial} \def\E{\mathbb E^2{\mathbf u}_s^{\ll,\mu}(Y_s)|^2 +\big(\scr M\big(\|\nabla} \def\pp{\partial} \def\E{\mathbb E^2\theta_s^{\ll,\mu}\|+\|\nabla} \def\pp{\partial} \def\E{\mathbb E\sigma} \def\ess{\text{\rm{ess}}_s^\mu\|\big)(X_s) \\ &+ \scr M\big(\|\nabla} \def\pp{\partial} \def\E{\mathbb E^2\theta_s^{\ll,\mu}\|+\|\nabla} \def\pp{\partial} \def\E{\mathbb E\sigma} \def\ess{\text{\rm{ess}}_s^\mu\|\big)(Y_s)\big)^2\Big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s.
\end{align*} By the stochastic Gronwall lemma due to \cite[Lemma 3.8]{XZ}, when $2m>\theta$ this implies \beq\label{NN2}\mathbb W_\theta(\mu_t,\nu_t)^{2m}\le (\E |\xi_t|^\theta)^{\ff{2m}\theta} \le c_2\big(\E\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\ff{c_2\theta}{2m-\theta}A_t}\big)^{\ff{2m-\theta}{\theta}} \int_0^t\mathbb W_\theta(\mu_s,\nu_s)^{2m}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s,\ \ t\in [0,T].\end{equation} By Lemma \ref{KK}, \eqref{HH2}, \eqref{u01} and the Khasminskii type estimate (see for instance \cite[Lemma 3.5]{XZ}), we have $$\E\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\ff{c_2\theta}{2m-\theta}A_T}<\infty,$$ so that by Gronwall's lemma we obtain $\mathbb W_\theta(\mu_t,\nu_t)=0$ for all $t\in [0,T].$ Then by \eqref{E1'} both $X_t$ and $Y_t$ solve the same SDE with coefficients $b_t^\mu$ and $\sigma} \def\ess{\text{\rm{ess}}_t^\mu$, and due to \cite{Z2}, the condition $1_D(|b_t^\mu|^2+|\nabla} \def\pp{\partial} \def\E{\mathbb E \sigma} \def\ess{\text{\rm{ess}}_t^\mu|^2)\in L_p^q(T)$ for compact $D\subset \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ implies the pathwise uniqueness of this SDE, so we conclude that $X_t=Y_t$ for all $t\in [0,T].$ \subsection{Proof of Corollary \ref{C1.2} and Corollary \ref{C1.3}} \begin{proof}[Proof of Corollary \ref{C1.2}] We set $a_t(x,\mu) :=(\sigma} \def\ess{\text{\rm{ess}}\sigma} \def\ess{\text{\rm{ess}}^\ast)_t(x,\mu)$ for $t \in [0,T]$, and $b_t( x,\mu) := 0$, $a_t( x,\mu) := I$ for $t \in \mathbb R} \def\ff{\frac} \def\ss{\sqrt\setminus [0,T]$. Let $0\le \rr\in C_0^\infty(\mathbb R} \def\ff{\frac} \def\ss{\sqrt\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d)$ with support contained in $\{(r,x): |(r,x)|\le 1\}$ such that $\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \rr(r,x)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x=1.$ For any $n\ge 1$, let $\rr_n(r,x)= n^{d+1} \rr(nr, nx)$ and define \begin{equation}\begin{split}\label{approx} &a^n_t(x,\mu)=\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} \sigma_s\sigma^\ast_s(x',\mu) \rr_n (t-s, x-x')\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x',\\ &b^n_t(x,\mu)=\int_{\mathbb R} \def\ff{\frac} \def\ss{\sqrt\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d} b_s(x',\mu) \rr_n (t-s, x-x')\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D x',\ \ (t,x,\mu)\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt\times\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d\times\scr P. \end{split}\end{equation} Let $\hat{\sigma}_t^n=\ss{a^n_t}$ and $\hat{\sigma}_t=\ss{a_t}$. Consider the following SDE: \beq\label{E1''} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= b_t(X_t, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t +\hat{\sigma} \def\ess{\text{\rm{ess}}}_t(X_t, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t.\end{equation} We first show that $(b,\hat{\sigma} \def\ess{\text{\rm{ess}}})$ satisfies assumption $(H^\theta)$.
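Before doing so, we record a one-dimensional numerical sketch of the mollification \eqref{approx} (in the space variable only, with a hypothetical discontinuous drift and a compactly supported bump function); it merely illustrates that the mollified coefficient is smooth and inherits the original bound, as used below, and it is not part of the proof.
\begin{verbatim}
import numpy as np

def bump(z):
    """Smooth bump supported in |z| <= 1 (normalisation is done numerically)."""
    return np.where(np.abs(z) < 1.0,
                    np.exp(-1.0 / np.maximum(1.0 - z ** 2, 1e-12)), 0.0)

def mollify(f_vals, grid, n):
    """Discrete analogue of f * rho_n in the space variable (d = 1), where
    rho_n(x) = n * rho(n x) is supported in |x| <= 1/n."""
    dx = grid[1] - grid[0]
    z = np.arange(-1.0, 1.0 + dx, dx)
    kernel = n * bump(n * z)
    kernel = kernel / (kernel.sum() * dx)        # make the integral equal to one
    return np.convolve(f_vals, kernel, mode="same") * dx

if __name__ == "__main__":
    grid = np.linspace(-3.0, 3.0, 1201)
    b_vals = np.sign(grid)                       # hypothetical discontinuous drift
    for n in (2, 8, 32):
        bn = mollify(b_vals, grid, n)
        i0 = np.searchsorted(grid, 0.5)
        print(f"n = {n:2d}:  sup|b^n| = {np.abs(bn).max():.3f},  b^n(0.5) = {bn[i0]:.3f}")
\end{verbatim}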
Firstly, \eqref{EX1}-\eqref{EX2} and the continuity in the third variable of $B$ and $\Sigma$ imply that $b$ and $\sigma$ are continuous in the third variable $\mu\in \scr P_\theta$. Thus, (1) in $(H^\theta)$ holds. As for condition (2) in $(H^\theta)$, since by \cite{Z2} it holds that $$\lim_{n\rightarrow}\def\l{\ell\infty}\|F-F\ast\rho_n\|_{L^q_p(T)}=0,$$ there exists a subsequence $n_k$ such that $$\|F-F\ast\rho_{n_k}\|_{L^q_p(T)}<2^{-k}.$$ Letting $$G=\sum_{k=1}^{\infty}|F-F\ast\rho_{n_k}|+F,$$ we have $\|G\|_{L^q_p(T)}\leq 1+\|F\|_{L^q_p(T)}$; noting that $|b^{n_k}|^2\leq K+F\ast\rho_{n_k}$, we obtain $|b^{n_k}|^2\leq K+G$. So, using the subsequence $b^{n_k}$ in place of $b^n$, we verify condition (2) in $(H^\theta)$. Finally, by \eqref{EX1}, for any $n\ge 1$ there exists a constant $c_n>0$ such that \begin{align*} |b_t^n(x,\mu)-b_s^n(x',\nu)|+ \|\hat{\sigma}_t^n(x,\mu)-\hat{\sigma}_s^n(x',\nu)\| \le c_n \big(|t-s|+|x-x'| + \mathbb W_1(\mu,\nu)\big) \end{align*} holds for all $s,t\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt, x,x'\in \mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ and $\mu,\nu\in \scr P_1$. So, for any $\theta\ge 1,$ condition (3) in $(H^\theta)$ holds. By Theorem \ref{T1.1} (1), SDE \eqref{E1''} has a weak solution. Noting that $\sigma\sigma^\ast=\hat{\sigma}\hat{\sigma}^\ast$, the SDE \eqref{E1} also has a weak solution. Finally, the strong existence and uniqueness follow from Theorem \ref{T1.1} (2) and (3). \end{proof} \begin{proof}[Proof of Corollary \ref{C1.3}] Let $b_t^n$ and $a_t^n$ be as in \eqref{approx}, and let $\hat{\sigma}_t^n=\ss{a^n_t}$ and $\hat{\sigma}_t=\ss{a_t}$. Then \eqref{LIP} and \eqref{approx} imply that $(b,\hat{\sigma})$ satisfies $(H^\theta)$. Then we may complete the proof as in the proof of Corollary \ref{C1.2} (1). \end{proof} \section{Proofs of Theorems \ref{T3.1} and \ref{T5.1}} \subsection{Proof of Theorem \ref{T3.1}} According to \cite[Theorem 1.2 (2)]{WZ'} for $d_1=0$, Corollary \ref{C1.3}, and Lemma \ref{SS}, {\bf(H)} implies the existence and uniqueness of the solution to \eqref{E1}. For any $\mu\in \scr P_2$ we let $\mu_t=P_t^*\mu$ be the distribution of $X_t$ which solves \eqref{E11} with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu.$ We first outline the proof, which uses coupling by change of measures as in \cite{W11,Wbook}. From now on, we fix $t_0\in (0,T]$ and $\mu_0,\nu_0\in \scr P_2$, and take $\F_0$-measurable random variables $X_0$ and $Y_0$ in $\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ such that $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_0, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_0}=\nu_0$ and \beq\label{I1} \E |X_0-Y_0|^2 = \mathbb W_2(\mu_0,\nu_0)^2.\end{equation} Let $X_t$ solve \eqref{E11} with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_0$; then \beq\label{CP1} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= b_t(X_t,\mu_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+\sigma} \def\ess{\text{\rm{ess}}_t(X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t.\end{equation} To establish the log-Harnack inequality, we construct a process $Y_t$ such that for a weighted probability measure $\Q:=R\P$ \beq\label{CP2} X_{t_0}=Y_{t_0}\ \Q\text{-a.s., \ \ and}\ \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_{t_0}}|\Q=P_{t_0}^*\nu_0=:\nu_{t_0}.
\end{equation} Then $$ (P_{t_0} f)(\nu_0)= \E_\Q[f(Y_{t_0})]=\E[R_{t_0}f(X_{t_0})],\ \ f\in \B_b(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d).$$ So, by Young's inequality we obtain the log-Harnack inequality: \beq\label{LHI}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split} (P_{t_0}\log f)(\nu_0)&\le \E[R_{t_0}\log R_{t_0}]+ \log\E[f(X_{t_0})]\\ &=\log (P_{t_0}f)(\mu_0)+ \E[R_{t_0}\log R_{t_0}],\ \ f\in \B_b^+(\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d), f\geq 1.\end{split}\end{equation} To construct the desired $Y_t$, we follow the line of \cite{WZ'} using Zvonkin's transform. As shown in \cite[Theorem 3.10]{WZ'} for $d_1=0$, Assumption {\bf(H)} implies that for large enough $\ll>0$, the PDE \eqref{PDE} has a unique solution $\mathbf{u}^{\lambda,\mu}$ satisfying \begin{align}\label{u} \|\mathbf{u}^{\lambda,\mu}\|_{\infty}+\|\nabla \mathbf{u}^{\lambda,\mu}\|_{\infty}+\|\nabla^2 \mathbf{u}^{\lambda,\mu}\|_{\infty}\leq \frac{1}{5}. \end{align} The bound $\|\nabla} \def\pp{\partial} \def\E{\mathbb E^2\mathbf{u}^{\lambda,\mu}\|_{\infty}<\infty$ together with the Lipschitz continuity of $\sigma$ implies that the increasing process $A_t$ in \eqref{NNP} satisfies $$\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D A_t\le c \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t$$ for some constant $c>0$. Moreover, $\E |\xi_t|^2\ge c'\mathbb W_2(\mu_t,\nu_t)^2$ holds for some constant $c'>0$. So, with $m=1, \theta=2, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_0$ and $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_0}=\nu_0$, the inequality \eqref{NNP} gives \beq\label{*D*} \mathbb W_2(\mu_t,\nu_t)\le \kk \mathbb W_2(\mu_0,\nu_0),\ \ t\in [0,T]\end{equation} for some constant $\kk>0$. As in \cite[\S2]{W11}, let $\gamma=\frac{72}{25}K+\frac{2d}{25\delta}+\frac{12\lambda}{25}$ and take \beq\label{Xi} \zeta_t= \ff {12} {25\gamma} \Big(1-\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\frac{25\gamma}{16}(t-t_0)}\Big),\ \ t\in [0,t_0],\end{equation} and let $Y_t$ solve the modified SDE \beq\label{CY} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D Y_t = \Big\{b_t(Y_t,\nu_t)+ \ff 1 {\zeta_t} \sigma} \def\ess{\text{\rm{ess}}_t(Y_t)\sigma} \def\ess{\text{\rm{ess}}_t(X_t)^{-1}(X_t-Y_t)\Big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t+ \sigma} \def\ess{\text{\rm{ess}}_t(Y_t) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t,\ \ t\in [0,t_0).\end{equation} Since $\sup_{t\in [0,T]}\nu_t(|\cdot|^2)<\infty$, this SDE has a unique solution $(Y_t)_{t\in [0,t_0)}$. Let $$\tau_n:=t_0\land \inf\{t\in [0,t_0): |X_t|+|Y_t|\ge n\},\ \ n\ge 1,$$ where $\inf\emptyset :=\infty$ by convention. We have $\tau_n\uparrow t_0$ as $n\uparrow\infty$. To see that the process $Y$ meets the above requirement, we first prove that \beq\label{R} R_s:= \exp\bigg[\int_0^s \ff 1 {\zeta_t} \big\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\sigma} \def\ess{\text{\rm{ess}}_t(X_t)^{-1}(Y_t-X_t),\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W_t\big\>-\ff 1 2 \int_0^s \ff {|\sigma} \def\ess{\text{\rm{ess}}_t(X_t)^{-1}(Y_t-X_t)|^2} {\zeta_t^2}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\bigg]\end{equation} for $s\in [0,t_0)$ is a uniformly integrable martingale, and hence extends also to time $t_0$.
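Before proving this, we record a one-dimensional numerical sketch of the coupling \eqref{CP1}, \eqref{CY} and of the density \eqref{R} (with hypothetical, distribution-free Lipschitz coefficients and a hypothetical value of $\gamma$); it only illustrates how the additional drift in \eqref{CY} forces $Y_t$ onto $X_t$ before time $t_0$ under the weighted measure, and it is not used in the arguments below.
\begin{verbatim}
import numpy as np

def coupling_by_change_of_measure(t0=1.0, n_steps=400, n_paths=20000, seed=0):
    """One-dimensional sketch of the coupling (CP1)/(CY): X and Y are driven by
    the same Brownian motion, Y carries the additional drift
    sigma(Y) sigma(X)^{-1} (X - Y) / zeta_t, and R accumulates the density (R).
    The coefficients b, sigma and the constant gamma below are hypothetical."""
    rng = np.random.default_rng(seed)
    dt = t0 / n_steps
    b = lambda x: -np.tanh(x)                    # hypothetical bounded Lipschitz drift
    sigma = lambda x: 1.0 + 0.2 * np.sin(x)      # hypothetical elliptic diffusion
    gamma = 4.0                                  # plays the role of 72K/25 + 2d/(25 delta) + 12 lambda/25
    zeta = lambda t: (12.0 / (25.0 * gamma)) * (1.0 - np.exp(25.0 * gamma * (t - t0) / 16.0))

    X = rng.normal(size=n_paths)                 # L(X_0) = mu_0
    Y = X + 1.0                                  # L(Y_0) = nu_0, a shifted copy
    logR = np.zeros(n_paths)
    for i in range(n_steps):
        t = i * dt
        dW = np.sqrt(dt) * rng.normal(size=n_paths)
        u = (Y - X) / (zeta(t) * sigma(X))       # integrand appearing in (R)
        logR += u * dW - 0.5 * u ** 2 * dt
        X_new = X + b(X) * dt + sigma(X) * dW
        Y_new = Y + (b(Y) + sigma(Y) / sigma(X) * (X - Y) / zeta(t)) * dt + sigma(Y) * dW
        X, Y = X_new, Y_new
    R = np.exp(logR)
    print("E[R]        ~", R.mean())                     # approximately 1 (martingale property)
    print("E_Q|X - Y|  ~", (R * np.abs(X - Y)).mean())   # small: the coupling succeeds under Q
    print("E[R log R]  ~", (R * logR).mean())            # entropy cost entering (ES)

if __name__ == "__main__":
    coupling_by_change_of_measure()
\end{verbatim}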
\begin{lem}\label{L3.1} Assume {\bf (A1)}-{\bf (A2)}, and let $X_0,Y_0$ be two $\scr F_0$-measurable random variables such that $\scr L_{X_0}=\mu_0$, $\scr L_{Y_0}=\nu_0$, and
\begin{equation}\label{I1'} \mathbb E|X_0-Y_0|^2= \mathbb W_2(\mu_0,\nu_0)^2.\end{equation}
Then there exists a constant $c>0$, independent of $t_0\in (0,T]$, such that
\begin{equation}\label{ES} \sup_{t\in [0,t_0)}\mathbb E[R_t\log R_t]\le \frac c {t_0}\mathbb W_2(\mu_0,\nu_0)^2.\end{equation}
Consequently, $R_t$ extends to $t=t_0$, and $\mathbb Q:= R_{t_0}\mathbb P$ is a probability measure under which \eqref{CY} has a unique solution $(Y_t)_{t\in [0,t_0]}$ satisfying
\begin{equation}\label{ES'0} \mathbb Q(X_{t_0}=Y_{t_0})=1. \end{equation}
\end{lem}

\begin{proof} By {\bf (A1)}, for any $n\ge 1$ and $t\in (0,t_0)$, the process $(R_{s\land \tau_n})_{s\in [0,t]}$ is a uniformly integrable continuous martingale. So, for the first assertion it suffices to find a constant $c>0$, independent of $t_0$, such that
\begin{equation}\label{ES'} \sup_{n\ge 1}\mathbb E[R_{t\land\tau_n}\log R_{t\land\tau_n}]\le \frac c {t_0} \mathbb W_2(\mu_0,\nu_0)^2,\ \ t\in [0,t_0).\end{equation}
To this end, for fixed $t\in (0,t_0)$ and $n\ge 1$, we consider the weighted probability measure $\mathbb Q_{t,n}:= R_{t\land \tau_n}\mathbb P$. By Girsanov's theorem, $(\tilde W_s)_{s\in [0, t\land\tau_n]}$ is a $d$-dimensional Brownian motion under $\mathbb Q_{t,n}$, where
$$\tilde W_s:=W_s+\int_{0}^{s}\frac 1 {\zeta_r} \sigma_r(X_r)^{-1}(X_r-Y_r)\,\mathrm{d} r,\ \ s\in [0, t\land\tau_n].$$
Accordingly, \eqref{CP1} and \eqref{CY} can be reformulated as
\begin{equation*}\begin{split}
&\mathrm{d} X_s= \Big\{b_s(X_s,\mu_s) - \frac{X_s-Y_s}{\zeta_s}\Big\}\,\mathrm{d} s +\sigma_s(X_s)\,\mathrm{d} \tilde W_s,\\
&\mathrm{d} Y_s= b_s(Y_s,\nu_s)\,\mathrm{d} s+ \sigma_s(Y_s)\,\mathrm{d} \tilde W_s,\ \ s\in [0, t\land \tau_n].\end{split}\end{equation*}
Next, we fix $\lambda=\lambda_0$.
Letting $\theta^{\lambda,\mu}_t(x)=x+\mathbf{u}^{\lambda,\mu}_t(x)$ and combining \eqref{PDE} with It\^{o}'s formula, we arrive at
\begin{equation}\label{E-X'} \mathrm{d} \theta^{\lambda,\mu}_t(X_t)= \lambda \mathbf{u}^{\lambda,\mu}_t(X_t)\,\mathrm{d} t+ (\nabla\theta_t^{\lambda,\mu}\sigma_t)(X_t)\,\mathrm{d} \tilde{W}_t-\nabla\theta_t^{\lambda,\mu}(X_t)\frac{X_t-Y_t}{\zeta_t}\,\mathrm{d} t
\end{equation}
and
\begin{equation}\label{E-Y'}
\mathrm{d} \theta^{\lambda,\mu}_t(Y_t)=\lambda \mathbf{u}^{\lambda,\mu}_t(Y_t)\,\mathrm{d} t+(\nabla\theta_t^{\lambda,\mu}\sigma_t)(Y_t)\,\mathrm{d} \tilde{W}_t+[\nabla\theta_t^{\lambda,\mu}(b^\nu_t-b^\mu_t)](Y_t)\,\mathrm{d} t.
\end{equation}
By It\^o's formula under the probability $\mathbb Q_{t,n}$, we obtain
\begin{equation}\label{EX-Y'}\begin{split}
&\mathrm{d} |\theta^{\lambda,\mu}_t(Y_t)-\theta^{\lambda,\mu}_t(X_t)|^2\\
&=2\big\langle\theta^{\lambda,\mu}_t(X_t)-\theta^{\lambda,\mu}_t(Y_t),\lambda \mathbf{u}^{\lambda,\mu}_t(X_t)-\lambda \mathbf{u}^{\lambda,\mu}_t(Y_t)\big\rangle\,\mathrm{d} t\\
&\quad+2\big\langle\theta^{\lambda,\mu}_t(X_t)-\theta^{\lambda,\mu}_t(Y_t),(\nabla\theta_t^{\lambda,\mu}\sigma_t)(X_t)\,\mathrm{d} \tilde{W}_t-(\nabla\theta_t^{\lambda,\mu}\sigma_t)(Y_t)\,\mathrm{d} \tilde{W}_t\big\rangle\\
&\quad+\big\|(\nabla\theta_t^{\lambda,\mu}\sigma_t)(X_t)-(\nabla\theta_t^{\lambda,\mu}\sigma_t)(Y_t)\big\|^2_{HS}\,\mathrm{d} t\\
&\quad-2\big\langle\theta^{\lambda,\mu}_t(X_t)-\theta^{\lambda,\mu}_t(Y_t),[\nabla\theta_t^{\lambda,\mu}(b^\nu_t-b^\mu_t)](Y_t)\big\rangle\,\mathrm{d} t\\
&\quad-2\Big\langle\theta^{\lambda,\mu}_t(X_t)-\theta^{\lambda,\mu}_t(Y_t),\nabla\theta_t^{\lambda,\mu}(X_t)\frac{X_t-Y_t}{\zeta_t}\Big\rangle\,\mathrm{d} t.
\end{split}\end{equation}
By \eqref{u} we have
\begin{align*}
&-\Big\langle\theta^{\lambda,\mu}_t(X_t)-\theta^{\lambda,\mu}_t(Y_t),\nabla\theta_t^{\lambda,\mu}(X_t)\frac{X_t-Y_t}{\zeta_t}\Big\rangle\\
&=-\Big\langle X_t-Y_t +\mathbf{u}^{\lambda,\mu}_t(X_t)-\mathbf{u}^{\lambda,\mu}_t(Y_t),\frac{X_t-Y_t}{\zeta_t}+\nabla \mathbf{u}_t^{\lambda,\mu}(X_t)\frac{X_t-Y_t}{\zeta_t}\Big\rangle\\
&=-\Big\langle X_t-Y_t,\frac{X_t-Y_t}{\zeta_t}\Big\rangle-\Big\langle \mathbf{u}^{\lambda,\mu}_t(X_t)-\mathbf{u}^{\lambda,\mu}_t(Y_t),\frac{X_t-Y_t}{\zeta_t}\Big\rangle\\
&\quad-\Big\langle X_t-Y_t,\nabla \mathbf{u}_t^{\lambda,\mu}(X_t)\frac{X_t-Y_t}{\zeta_t}\Big\rangle-\Big\langle \mathbf{u}^{\lambda,\mu}_t(X_t)-\mathbf{u}^{\lambda,\mu}_t(Y_t),\nabla \mathbf{u}_t^{\lambda,\mu}(X_t)\frac{X_t-Y_t}{\zeta_t}\Big\rangle\\
&\leq -\frac{14}{25}\cdot\frac{|X_t-Y_t|^2}{\zeta_t}.
\end{align*}
So,
\begin{align*}
\mathrm{d} |\theta^{\lambda,\mu}_s(Y_s)-\theta^{\lambda,\mu}_s(X_s)|^2&\le \Big\{\gamma|X_s-Y_s|^2 +\frac{72}{25}\kappa_2(T) |X_s-Y_s|\,\mathbb W_2(\mu_s,\nu_s) -\frac{4}{5}\cdot\frac{|X_s-Y_s|^2}{\zeta_s}\Big\}\,\mathrm{d} s \\
&\quad+\mathrm{d} M_s,\ \ s\in [0, t\land \tau_n],
\end{align*}
for some $\mathbb Q_{t,n}$-martingale $M_s$. By \eqref{Xi}, since $\gamma\zeta_s=\frac{12}{25}\big(1-\mathrm{e}^{\frac{25\gamma}{16}(s-t_0)}\big)$ and $\zeta_s'=-\frac{3}{4}\,\mathrm{e}^{\frac{25\gamma}{16}(s-t_0)}$, we have
$$\frac{4}{5}-\gamma\zeta_s+ \frac{16}{25}\zeta_s' =\frac{4}{5}-\frac{12}{25}=\frac {8} {25},\ \ s\in[0,t_0].$$
Hence, by It\^o's formula and Young's inequality, there exists a constant $c_2>0$ such that
\begin{equation}\label{V1}\begin{split}&\mathrm{d}\, \frac{|\theta^{\lambda,\mu}_s(Y_s)-\theta^{\lambda,\mu}_s(X_s)|^2}{\zeta_s}\\
&\le \frac{\mathrm{d} M_s}{\zeta_s}+ c_2\mathbb W_2(\mu_s,\nu_s)^2 \,\mathrm{d} s -\frac{|X_s-Y_s|^2}{\zeta_s^2}\Big\{\frac{4}{5}-\gamma\zeta_s+ \frac{16}{25}\zeta_s'- \frac {1} {25}\Big\} \,\mathrm{d} s\\
&\le \frac{\mathrm{d} M_s}{\zeta_s}+ c_2\mathbb W_2(\mu_s,\nu_s)^2 \,\mathrm{d} s -\frac{7|X_s-Y_s|^2}{25\zeta_s^2}\,\mathrm{d} s,\ \ s\in [0, t\land \tau_n].\end{split}\end{equation}
Combining this with \eqref{*D*} and \eqref{I1}, we arrive at
\begin{equation}\label{V2} \mathbb E_{\mathbb Q_{t,n}} \int_0^{t\land\tau_n} \frac{|X_s-Y_s|^2}{\zeta_s^2}\,\mathrm{d} s\le \frac {c_1} {t_0}\mathbb W_2(\mu_0,\nu_0)^2,\ \ t\in[0,t_0),
\end{equation}
for some constant $c_1>0$. Therefore, there exists a constant $C>0$ such that
\begin{align*} \mathbb E[R_{t\land \tau_n}\log R_{t\land \tau_n}]&= \frac 1 2\, \mathbb E_{\mathbb Q_{t,n}}\int_0^{t\land\tau_n} \frac {|\sigma_s(X_s)^{-1}(Y_s-X_s)|^2} {\zeta_s^2}\,\mathrm{d} s \\
&\le \frac{C}{t_0}\mathbb W_2(\mu_0,\nu_0)^2,\ \ t\in (0,t_0).\end{align*}
Thus, \eqref{ES} holds. By \eqref{ES} and the martingale convergence theorem, $(R_{t})_{t\in [0,t_0]}$ is a uniformly integrable martingale, so $\mathbb Q:= R_{t_0}\mathbb P$ is a probability measure. By Girsanov's theorem, we can reformulate \eqref{CY} as
\begin{equation}\label{ET}\mathrm{d} Y_t= b_t(Y_t,\nu_t)\,\mathrm{d} t +\sigma_t(Y_t)\,\mathrm{d} \tilde W_t,\end{equation}
which has a unique solution $(Y_t)_{t\in [0,t_0]}$.
By \eqref{ES},
$$\mathbb E_{\mathbb Q} \int_0^{t_0} \frac{|X_t-Y_t|^2}{\zeta_t^2}\,\mathrm{d} t<\infty.$$
Since $X_t-Y_t$ is continuous and $\int_0^{t_0}\frac 1 {\zeta_t}\,\mathrm{d} t=\infty$, this implies $\mathbb Q(X_{t_0}=Y_{t_0})=1.$ \end{proof}

\begin{proof}[Proof of Theorem \ref{T3.1}] Consider the distribution dependent SDE
$$\mathrm{d}\tilde X_t= b_t(\tilde X_t, \scr L_{\tilde X_t}|\tilde{\mathbb P})\,\mathrm{d} t+ \sigma_t(\tilde X_t)\,\mathrm{d} \tilde W_t,\ \ \tilde X_0= Y_0.$$
By weak uniqueness, we have $\scr L_{\tilde X_t}|\tilde{\mathbb P}= P_t^*\nu_0= \nu_t$ for $t\in [0,t_0]$. Combining this with \eqref{ET} and strong uniqueness, we conclude that $\tilde X_t=Y_t$ for $t\in [0,t_0]$. Therefore, \eqref{LHI} and Lemma \ref{L3.1} lead to
\begin{align*} (P_{t_0} \log f)(\nu_0)\le \log (P_{t_0}f)(\mu_0) +\frac C {t_0}\mathbb W_2(\mu_0,\nu_0)^2,\ \ t_0\in (0,T].\end{align*}
Finally, the Harnack inequality with power \eqref{H2'} follows from \cite[Section 3.4]{Wbook}. \end{proof}

\subsection{Proof of Theorem \ref{T5.1}}

\begin{proof} Fix $t_0>0$, and denote $\mu_t=P_t^*\mu_0=\scr L_{X_t}$, $t\in [0,t_0]$. Then \eqref{E5} becomes
\begin{equation}\label{E5'} \mathrm{d} X_t= b_t(X_t, \mu_t)\,\mathrm{d} t +\sigma_t(\mu_t)\, \mathrm{d} W_t,\ \ \scr L_{X_0}=\mu_0.\end{equation}
Let $Y_t=X_t+\frac {tv}{t_0}$, $t\in [0,t_0]$.
Then
$$ \mathrm{d} Y_t= b_t(Y_t, \mu_t)\,\mathrm{d} t +\sigma_t(\mu_t)\, \mathrm{d} \tilde W_t,\ \ \scr L_{Y_0}=\mu_0,\ t\in [0,t_0], $$
where
\begin{align*}
&\tilde W_t:= W_t +\int_0^t \eta_s\,\mathrm{d} s,\\
&\eta_t:= \sigma_t^{-1}\Big\{\frac v {t_0} + b_t(X_t,\mu_t)-b_t\Big(X_t+\frac {tv}{t_0}, \mu_t\Big)\Big\}.\end{align*}
Let $R_{t_0}= \exp\big[-\int_0^{t_0} \langle\eta_t, \mathrm{d} W_t\rangle-\frac 1 2\int_0^{t_0} |\eta_t|^2\,\mathrm{d} t\big].$ By Girsanov's theorem, we obtain
$$(P_{t_0} f)(\mu_0)=\mathbb E [R_{t_0} f(Y_{t_0})]= \mathbb E[R_{t_0} f(X_{t_0}+v)]\le \big(P_{t_0} f^p(v+\cdot)\big)^{\frac 1 p}(\mu_0) \big(\mathbb E R_{t_0}^{\frac p{p-1}}\big)^{\frac {p-1}p},$$
and by Young's inequality,
\begin{align*}(P_{t_0} \log f)(\mu_0)&=\mathbb E [R_{t_0} \log f(Y_{t_0})]\\
&= \mathbb E[R_{t_0} \log f(X_{t_0}+v)]\le \log \big(P_{t_0} f(v+\cdot)\big)(\mu_0)+ \mathbb E[ R_{t_0}\log R_{t_0}].\end{align*}
Moreover,
\begin{align*} \mathbb E R_{t_0}^{\frac p {p-1}} &\le \sup_{\Omega}\,\mathrm{e}^{\frac{p}{2(p-1)^2} \int_0^{t_0} |\eta_s|^2\,\mathrm{d} s} \le \exp\bigg[\frac{p \int_0^{t_0} \|\sigma_t^{-1}\|_{\infty}^2 \big\{|v|/{t_0}+\phi(t|v|/{t_0})\big\}^2\,\mathrm{d} t}{2(p-1)^2}\bigg]\end{align*}
and
\begin{align*} \mathbb E[ R_{t_0}\log R_{t_0}]&= \mathbb E_{\mathbb Q} [\log R_{t_0}] \le \frac{1}{2}\,\mathbb E_{\mathbb Q}\int_0^{t_0} |\eta_s|^2\,\mathrm{d} s\\
&\le \frac{1}{2}\int_0^{t_0} \|\sigma_t^{-1}\|_{\infty}^2 \big\{|v|/{t_0}+\phi(t|v|/{t_0})\big\}^2\,\mathrm{d} t.\end{align*}
Substituting these two estimates into the preceding Harnack and log-Harnack bounds completes the proof. \end{proof}

\end{document}
\begin{document} \begin{abstract} We give a bijective proof of the MacMahon-type equidistribution over the group of signed even permutations $C_2 \wr A_n$ that was stated in~[Bernstein. Electron. J. Combin. 11 (2004) 83]. This is done by generalizing the bijection that was introduced in the bijective proof of the equidistribution over the alternating group $A_n$ in~[Bernstein and Regev. S{\'e}m. Lothar. Combin. 53 (2005) B53b]. \end{abstract} \title{A $\maj$-$\inv$ bijection for $C_2\wr A_n$} \section{Introduction} In~\cite{macmahon:indices} MacMahon proved that two \emph{permutation statistics\/}, namely the \emph{length\/} (or \emph{inversion number\/}) and the \emph{major index\/}, are equidistributed over the symmetric group $S_n$ for every $n>0$ (see also~\cite{macmahon:combinatory}). The question of finding a bijective proof of this remarkable fact arose naturally. That open problem was finally solved by Foata~\cite{foata:netto}, who gave a canonical bijection on $S_n$, for each $n$, that maps one statistic to the other. In~\cite{foSch:major}, Foata and Sch\"utzenberger proved a refinement by \emph{inverse descent classes\/} of MacMahon's theorem. The theorem has received many additional refinements and generalizations, including~\cite{carlitz:qBernoulli, carlitz:qEulerian, garsia:permutation, reiner:signed, krattenthaler:major, adin:flag, regev:wreath, regev:qstatistics, stanley:sign}. In~\cite{adin:hyperoctahedral}, Adin, Brenti and Roichman gave an analogue of MacMahon's theorem for the group of signed permutations $B_n = C_2 \wr S_n$. A refinement of that result by inverse descent classes appeared in~\cite{adin:equiHyperoct}, and a bijective proof was given in~\cite{foata:signedI}. These results are the ``signed'' analogues of MacMahon's theorem, its refinement by Foata and Sch\"utzenberger and Foata's bijection, respectively. The MacMahon equidistribution does not hold when the $S_n$ statistics are restricted to the alternating subgroups $A_n \subset S_n$. However, in~\cite{regev:alternating}, Regev and Roichman defined the $\ell_A$ (\emph{$A$-length\/}), $\rmaj_{A_n}$ (\emph{alternating reverse major index}\/) and $\del_A$ (\emph{$A$-delent number\/}) statistics on $A_n$, and proved the following refined analogue of MacMahon's theorem: \begin{thm}[{see~\cite[Theorem~6.1(2)]{regev:alternating}}] For every $n>0$, \begin{multline*} \sum_{w \in A_{n+1}} q^{\ell_A(w)} t^{\del_A(w)} = \sum_{w \in A_{n+1}} q^{\rmaj_{A_{n+1}}(w)} t^{\del_A(w)} \\ = (1+2qt)(1+q+2q^2 t)\cdots(1+q+\dots+q^{n-2}+2q^{n-1}t) . \end{multline*} \end{thm} A bijective proof was later given in~\cite{bernstein:foataForAn} in the form of a mapping $\Psi:A_{n+1}\to A_{n+1}$ with the following properties. \begin{thm}[{see~\cite[Theorem~5.8]{bernstein:foataForAn}}]\label{PR:Psi} \begin{enumerate} \item The mapping $\Psi$ is a bijection of $A_{n+1}$ onto itself. \item For every $v\in A_{n+1}$, $\rmaj_{A_{n+1}}(v) = \ell_A(\Psi(v))$. \item For every $v\in A_{n+1}$, $\del_A(v)=\del_A(\Psi(v))$. \end{enumerate} \end{thm} A ``signed'' analogue of the equidistribution over $A_n$ was given in~\cite{bernstein:macmahon} by defining the $\ell_L$ (\emph{$L$-length\/}) and $\nrmaj_{L_n}$ (\emph{negative alternating reverse major index\/}) statistics on the group of signed even permutations $L_n = C_2 \wr A_n \subset B_n$ and proving the following. 
\begin{prop}[see~{\cite[Proposition~4.1]{bernstein:macmahon}}]\label{PR:1} For every $B \subseteq [n+1]$ \begin{eqnarray*} \sum_{\{\, \pi \in L_{n+1} \mid \Neg(\pi^{-1})\subseteq B\,\} }q^{\nrmaj_{L_{n+1}}(\pi)} = \sum_{\{\, \pi \in L_{n+1} \mid \Neg(\pi^{-1})\subseteq B\,\} }q^{\ell_L(\pi)} \\ = \prod_{i \in B}(1+q^i)\prod_{i=1}^{n-1}(1+q+\dots+q^{i-1}+2q^i) , \end{eqnarray*} where $\Neg(\pi^{-1}) = \{\, -\pi(i) \mid 1 \le i \le n+1,\;\pi(i)<0 \,\}$. \end{prop} The main result in this note is a bijective proof of Proposition~\ref{PR:1}. It is accomplished by defining a mapping $\Theta : L_{n+1} \to L_{n+1}$ for every $n>0$ and proving the following theorem. \begin{thm}[see Theorem~\ref{TH:main}] The mapping $\Theta$ is a bijection of $L_{n+1}$ onto itself, and for every $\pi \in L_{n+1}$, $\nrmaj_{L_{n+1}}(\pi) = \ell_L(\Theta(\pi))$ and $\Neg(\pi^{-1}) = \Neg(\Theta(\pi)^{-1})$. \end{thm} The rest of this note is organized as follows: in Section~\ref{SEC:bg} we introduce some definitions and notations and give necessary background. In Section~\ref{SEC:decomp} we review the definition of the bijection $\Psi$ and the Main Lemma of~\cite{bernstein:macmahon}, which gives a unique decomposition of elements of $L_n$. In Section~\ref{SEC:main} we define the bijection $\Theta$ and prove the main result. \section{Background and notation}\label{SEC:bg} \subsection{Notation} For an integer $a\ge 0$, let $[a]=\{1,2,\dots,a\}$ (where $[0]=\emptyset$). Let $C_k$ be the cyclic group of order $k$, let $S_n$ be the symmetric group acting on $1,\dots,n$, and let $A_n \subset S_n$ denote the alternating group. \subsection{The symmetric group} Recall that $S_n$ is a Coxeter group of type $A$, its Coxeter generators being the adjacent transpositions $\{\,s_i\,\}_{i=1}^{n-1}$ where $s_i:=(i,i+1)$. The defining relations are the Moore-Coxeter relations: \[ \begin{split} s_i^2 = 1 &\quad (1\le i \le n-1),\\ (s_i s_{i+1})^3 = 1 &\quad (1 \le i < n-1),\\ (s_i s_j)^2 = 1 &\quad (|i-j|>1). \end{split} \] For every $j>0$, let \[ R^S_j = \{ 1,\,s_j,\, s_j s_{j-1},\,\dots,\,s_j s_{j-1} \cdots s_1 \} \subseteq S_{j+1} . \] Recall the following fact. \begin{thm}[{see~\cite[pp.~61--62]{goldschmidt:characters}}]\label{THM:SCanRep} Let $w \in S_n$. Then there exist unique elements $w_j \in R_j^S$, $1 \le j \le n-1$, such that $w = w_1 \cdots w_{n-1}$. Thus, the presentation $w=w_1\cdots w_{n-1}$ is unique. Call that presentation {\em the $S$-canonical presentation of $w$}. \end{thm} \subsection{The hyperoctahedral group} \emph{The hyperoctahedral group\/} $B_n := C_2 \wr S_n$ is the group of all bijections $\sigma$ of $\{\pm 1,\pm 2,\dots,\pm n\}$ to itself satisfying $\sigma(-i)=-\sigma(i)$, with function composition as the group operation. It is also known as the group of \emph{signed permutations\/}. For $\sigma\in B_n$, we shall use \emph{window notation\/}, writing $\sigma=[\sigma_1,\dots,\sigma_n]$ to mean that $\sigma(i)=\sigma_i$ for $i\in [n]$, and let $\Neg(\sigma) := \{\,i\in[n] \mid \sigma(i)<0 \,\}$. $B_n$ is a Coxeter group of type $B$, generated by $s_1,\dots,s_{n-1}$ together with an exceptional generator $s_0:=[-1,2,3,\dots,n]$ (see~\cite[Section~8.1]{bjorner:combinatorics}). In addition to the above relations between $s_1,\dots,s_{n-1}$, we have: $s_0^2 = 1$, $(s_0 s_1)^4 = 1$, and $s_0 s_i = s_i s_0$ for all $1<i<n$. \subsection{The alternating group} Let $a_i := s_1 s_{i+1}$, $1 \le i \le n-1$. Then the set $A = \{\,a_i\,\}_{i=1}^{n-1}$ generates the alternating group $A_{n+1}$. 
This generating set comes from~\cite{mitsuhashi:alternating}, where it is shown that the generators satisfy the relations \[ \begin{split} a_1^3 = 1,&\\ a_i^2 = 1 &\quad (1 < i \le n-1),\\ (a_i a_{i+1})^3 = 1 &\quad (1 \le i < n-1),\\ (a_i a_j)^2 = 1 &\quad (|i-j|>1) \end{split} \] (see~\cite[Proposition~2.5]{mitsuhashi:alternating}). For every $j>0$, let \[ R_j^A = \{1,\,a_j,\,a_j a_{j-1},\,\dots,\,a_j \cdots a_2,\,a_j \cdots a_2 a_1,\,a_j \cdots a_2 a_1^{-1}\} \subseteq A_{j+2} \] (for example, $R_3^A = \{1, a_3, a_3 a_2, a_3 a_2 a_1, a_3 a_2 a_1^{-1}\}$). One has the following \begin{thm}[{see~\cite[Theorem~3.4]{regev:alternating}}]\label{THM:ACanRep} Let $v \in A_{n+1}$. Then there exist unique elements $v_j \in R_j^A$, $1 \le j \le n-1$, such that $v = v_1 \cdots v_{n-1}$, and this presentation is unique. Call that presentation {\em the $A$-canonical presentation of $v$}. \end{thm} \subsection{The group of signed even permutations}\label{SEC:L} Our main result concerns the group $L_n:=C_2 \wr A_n$. It is the subgroup of $B_n$ of index 2 containing the \emph{signed even permutations\/}. For a more detailed discussion of $L_n$, see~\cite[Section~3]{bernstein:macmahon} \subsection{$B_n$, $A_{n+1}$ and $L_{n+1}$ statistics} Let $r=x_1 x_2\dots x_m$ be an $m$-letter word on a linearly-ordered alphabet $X$. The \emph{inversion number\/} of $r$ is defined as \[\inv(r):=\#\{\,1\le i<j \le m \mid x_i>x_j\,\} ,\] its \emph{descent set\/} is defined as \[ \Des(r) := \{\,1 \le i < m \mid x_i>x_{i+1}\,\} , \] and its \emph{descent number\/} as \[ \des(r) := \abs{\Des(r)} . \] For example, with $X=\mathbb{Z}$ with the usual order on the integers, if $r = 3,-4,2,1,5,-6$, then $\inv(r) = 8$, $\Des(r) = \{1,\,3,\,5\}$ and $\des(r) = 3$. It is well known that if $w\in S_n$ then $\inv(w) = \ell_S(w)$, where $\ell_S(w)$ is the \emph{length\/} of $w$ with respect to the Coxeter generators of $S_n$, and that $\Des(w) = \Des_S(w):= \{\,1 \le i < n \mid \ell_S(w s_i) < \ell_S(w)\,\}$, which is the descent set of $w$ in the Coxeter sense. Define the \emph{$B$-length\/} of $\sigma \in B_n$ in the usual way, i.e., $\ell_B(\sigma)$ is the length of $\sigma$ with respect to the Coxeter generators of $B_n$. The $B$-length can be computed in a combinatorial way as \[ \ell_B(\sigma) = \inv(\sigma) + \sum_{i \in \Neg(\sigma^{-1})} i \] (see, for example,~{\cite[Section~8.1]{bjorner:combinatorics}}). Given $\sigma\in B_n$, the \emph{$B$-delent number\/} of $\sigma$, $\del_B(\sigma)$, is defined as the number of left-to-right minima in $\sigma$, namely \[ \del_B(\sigma) := \#\{\,2\le j \le n \mid \text{$\sigma(i)>\sigma(j)$ for all $1 \le i < j$} \,\} . \] For example, the left-to-right minima of $\sigma=[5,\,-1,\,2,\,-3,\,4]$ are $\{2,\,4\}$, so $\del_B(\sigma)=2$. The \emph{$A$-length} statistic on $A_{n+1}$ was defined in~\cite{regev:alternating} as the length of the $A$-canonical presentation. Given $v \in A_{n+1}$, $\ell_A(v)$ can be computed directly as \begin{equation}\label{EQ:Alen} \ell_A(v) = \ell_S(v)-\del_S(v) = \inv(v)-\del_B(v) \end{equation} (see~\cite[Proposition~4.4]{regev:alternating}). \begin{defn}[{see~\cite[Definition~3.15]{bernstein:macmahon}}]\label{DEF:Llen} Let $\sigma \in B_n$. Define the \emph{$L$-length of $\sigma$\/} by \[ \ell_L(\sigma) = \ell_B(\sigma)-\del_B(\sigma) = \inv(\sigma)-\del_B(\sigma)+\sum_{i\in\Neg(\sigma^{-1})}i . 
\] \end{defn} Given $\pi \in L_{n+1}$, let \[ \Des_A(\pi) := \{\, 1 \le i \le n-1 \mid \ell_L(\pi a_i) \le \ell_L(\pi) \,\} , \] \[ \rmaj_{L_{n+1}}(\pi) := \sum_{i \in \Des_A(\pi)} (n-i) , \] and \[ \nrmaj_{L_{n+1}}(\pi) := \rmaj_{L_{n+1}}(\pi) + \sum_{i\in \Neg(\pi^{-1})}i . \] For example, if $\pi=[5,-1,2,-3,4]$ then $\Des_A(\pi) = \{1,2\}$, $\rmaj_{L_5}(\pi)=5$, and $\nrmaj_{L_5}(\pi)=5+1+3=9$. \begin{rem}\label{RE:coincide} Restricted to $A_{n+1}$, the $\rmaj_{L_{n+1}}$ statistic coincides with the $\rmaj_{A_{n+1}}$ statistic as defined in~\cite{regev:alternating} and used in Theorem~\ref{PR:Psi}. \end{rem} \section{The bijection $\Psi$ and the decomposition lemma}\label{SEC:decomp} \subsection{The {F}oata bijection} The {\em second fundamental transformation on words\/} $\Phi$ was introduced in~\cite{foata:netto} (for a full description, see~\cite[Section~10.6]{lothaire:words}). It is defined on any finite word $r=x_1 x_2 \dots x_m$ whose letters $x_1,\dots,x_m$ belong to a totally ordered alphabet. Instead of the original recursive definition, we give the algorithmic description of $\Phi$ from~\cite{foSch:major}. \begin{algo}[$\Phi$]\label{ALGO:Phi} Let $r=x_1 x_2 \dots x_m$ ; 1. Let $i:=1$, $r'_i := x_1$ ; 2. If $i=m$, let $\Phi(r):=r'_i$ and stop; else continue; 3. If the last letter of $r'_i$ is less than or equal to (respectively greater than) $x_{i+1}$, cut $r'_i$ after every letter less than or equal to (respectively greater than) $x_{i+1}$ ; 4. In each compartment of $r'_i$ determined by the previous cuts, move the last letter in the compartment to the beginning of it; let $t'_i$ be the word obtained after all those moves; put $r'_{i+1} := t'_i \, x_{i+1}$ ; replace $i$ by $i+1$ and go to step 2. \end{algo} \subsection{The covering map $f$ and its local inverses $g_u$} Recall the $S$- and $A$-canonical presentations from Theorems~\ref{THM:SCanRep} and~\ref{THM:ACanRep}. The following {\em covering map\/} $f$, which plays an important role in the construction of the bijection $\Psi$, relates between $S_n$ and $A_{n+1}$ by canonical presentations. \begin{defn}[{see~\cite[Definition~5.1]{regev:alternating}}] Define $f:R_j^A \to R_j^S$ by \begin{enumerate} \item $f(a_j a_{j-1}\cdots a_\ell) = s_j s_{j-1}\cdots s_\ell$ if $\ell\ge 2$, and \item $f(a_j\cdots a_1) = f(a_j\cdots a_1^{-1}) = s_j\cdots s_1$. \end{enumerate} Now extend $f:A_{n+1} \to S_n$ as follows: let $v\in A_{n+1}$, $v=v_1 \cdots v_{n-1}$ its $A$-canonical presentation, then \[ f(v) := f(v_1)\cdots f(v_{n-1}), \] which is clearly the $S$-canonical presentation of $f(v)$. \end{defn} In other words, given $v\in A_{n+1}$ in canonical presentation $v=a_{i_1}^{\epsilon_1} a_{i_2}^{\epsilon_2} \cdots a_{i_r}^{\epsilon_r}$, we obtain $f(v)$ simply by replacing each $a$ by an $s$ (and deleting the exponents): $f(v) = s_{i_1} s_{i_2} \cdots s_{i_r}$. The following maps serve as ``local inverses'' of $f$. \begin{defn} For $u \in A_{n+1}$ with $A$-canonical presentation $u= u_1 u_2 \cdots u_{n-1}$, define $g_u:R_j^S \to R_j^A$ by \[ g_u(s_j s_{j-1}\cdots s_\ell) = a_j a_{j-1} \cdots a_\ell \quad \text{if \;$\ell\ge 2$,\; and} \quad g_u(s_j s_{j-1}\cdots s_1) = u_j. \] Now extend $g_u:S_n \to A_{n+1}$ as follows: let $w\in S_n$, $w=w_1 \cdots w_{n-1}$ its $S$-canonical presentation, then \[ g_u(w) := g_u(w_1)\cdots g_u(w_{n-1}), \] which is clearly the $A$-canonical presentation of $g_u(w)$. \end{defn} \subsection{The bijection $\Psi$} Let $w= x_1 x_2 \dots x_m$ be an $m$-letter word on some alphabet $X$. 
Denote the {\em reverse\/} of $w$ by $\mathbf{r}(w):=x_m x_{m-1} \dots x_1$, and let $\overleftarrow\Phi := \mathbf{r} \Phi \mathbf{r}$, the {\em right-to-left Foata transformation}. \begin{defn}\label{DEF:Psi} Define $\Psi:A_{n+1} \to A_{n+1}$ by $\Psi(v) = g_v(\overleftarrow\Phi(f(v)))$ . \end{defn} That is, the image of $v$ under $\Psi$ is obtained by applying $\overleftarrow\Phi$ to $f(v)$ in $S_n$, then using $g_v$ as an ``inverse'' of $f$ in order to ``lift'' the result back to $A_{n+1}$. Some of the key properties of $\Psi$ are given in Theorem~\ref{PR:Psi}. \subsection{The decomposition lemma} \begin{defn} Let $r=x_1\dots x_m$ be an $m$-letter word on a linearly-ordered alphabet $X$. Define $\sort(r)$ to be the non-decreasing word with the letters of $r$. \end{defn} For example, with $X=\mathbb{Z}$ with the usual order on the integers,\\ $\sort(-4,\,2,\,3,\,-5,\,1,\,2) = -5,\,-4,\,1,\,2,\,2,\,3$. \begin{defn} For $\pi \in L_{n+1}$, define $s(\pi) \in L_{n+1}$ by \[ s(\pi) = \begin{cases} \sort(\pi), &\text{if $\sum_{i \in \Neg(\pi^{-1})} i$ is even};\\ \sort(\pi) s_1, &\text{otherwise}. \end{cases} \] \end{defn} The following lemma gives a unique decomposition of every element in $L_n$ into a descent-free factor and a signless even factor. \begin{lem}\label{LE:main} For every $\pi \in L_{n+1}$, the only $\sigma \in L_{n+1}$ such that $\sigma^{-1}\pi \in A_{n+1}$ and $\des_A(\sigma)=0$ is $\sigma = s(\pi)$. Moreover, $\sigma = s(\pi)$ and $u = \sigma^{-1}\pi$ satisfy $\Des_A(u)=\Des_A(\pi)$, $\inv(u)-\del_B(u) = \inv(\pi)-\del_B(\pi)$, and $\Neg(\pi^{-1}) = \Neg(\sigma^{-1})$. \end{lem} See~\cite[Lemma~4.6]{bernstein:macmahon} for the proof. \begin{cor}\label{COR:uniq} If $\sigma \in L_{n+1}$ and $\des_A(\sigma) = 0$, then for every $u \in A_{n+1}$, $s(\sigma u) = \sigma$. \end{cor} \section{The main result}\label{SEC:main} \begin{defn} Define $\Theta:L_{n+1}\to L_{n+1}$ for each $n>0$ by \[ \Theta(\pi) = s(\pi) \Psi( s(\pi)^{-1} \pi ) . \] \end{defn} \begin{thm}\label{TH:main} The mapping $\Theta$ is a bijection of $L_{n+1}$ onto itself, and for every $\pi \in L_{n+1}$, $\nrmaj_{L_{n+1}}(\pi) = \ell_L(\Theta(\pi))$ and $\Neg(\pi^{-1}) = \Neg(\Theta(\pi)^{-1})$. \end{thm} \begin{exmp} As an example, let $\pi = [3,\,-6,\,-4,\,5,\,2,\,-1] \in L_6$. We have $\Des_A(\pi) = \{1,\,3,\,4\}$ and therefore $\nrmaj_{L_6}(\pi) = 4+2+1+6+4+1 = 18$. Since $\sum_{i\in\Neg(\pi^{-1})}i = 11$ is odd, we have $\sigma:=s(\pi)=\sort(\pi)s_1=[-4,\,-6,\,-1,\,2,\,3,\,5]$ and $u:=\sigma^{-1}\pi = [5,\,2,\,1,\,6,\,4,\,3]$. One can verify that the $A$-canonical presentation of $u$ is $u = (1)(a_2)(a_3 a_2 a_1^{-1})(a_4 a_3)$, so $f(u)=(1)(s_2)(s_3 s_2 s_1)(s_4 s_3) = [4,\,1,\,5,\,3,\,2]$. Next we compute $\overleftarrow\Phi(f(u))$ as follows: $r:=\mathbf{r}(f(u)) = [2,\,3,\,5,\,1,\,4]$. Applying Algorithm~\ref{ALGO:Phi} to $r$ we get \[ \begin{aligned} r'_1 &= 2 \mid\\ r'_2 &= 2 \mid 3 \mid\\ r'_3 &= 2 \mid 3 \mid 5 \mid\\ r'_4 &= 2 \mid 3 \mid 5\;\;\;1 \mid\\ \Phi(r) = r'_5 &= 2\;\;\;3\;\;\;1\;\;\;5\;\;\;4 \quad, \end{aligned} \] so $v:=\overleftarrow\Phi(f(u)) = [4,\,5,\,1,\,3,\,2]$, whose $S$-canonical presentation is\\ $v=(1)(s_2)(s_3 s_2 s_1)(s_4 s_3 s_2)$. Therefore $\Psi(u)=g_u(v) = (1)(a_2)(a_3 a_2 a_1^{-1})(a_4 a_3 a_2) = [2,\,5,\,6,\,1,\,4,\,3]$. Finally, $\Theta(\pi) = \sigma\Psi(u) = [-6,\,3,\,5,\,-4,\,2,\,-1]$, and indeed $\ell_L(\Theta(\pi)) = 7-0+11 = 18 = \nrmaj_{L_6}(\pi)$. 
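The numbers above can also be checked mechanically. The following short Python sketch is included purely as an illustration (it is not part of the construction); it assumes the composition convention $(\pi\tau)(i)=\pi(\tau(i))$ for signed permutations in window notation, and recomputes $\Des_A(\pi)$, $\nrmaj_{L_6}(\pi)$ and $\ell_L(\Theta(\pi))$ directly from the definitions in Section~\ref{SEC:bg}.
\begin{verbatim}
# Sketch for checking the statistics in this example; we assume the
# composition convention (pi*tau)(i) = pi(tau(i)) in window notation.
def inv(w):
    return sum(1 for i in range(len(w))
                 for j in range(i+1, len(w)) if w[i] > w[j])

def del_B(w):                 # number of left-to-right minima (positions >= 2)
    return sum(1 for j in range(1, len(w)) if w[j] < min(w[:j]))

def neg_sum(w):               # sum over Neg(w^{-1}) = {-w(i) : w(i) < 0}
    return sum(-x for x in w if x < 0)

def ell_L(w):                 # L-length: inv - del_B + sum over Neg(w^{-1})
    return inv(w) - del_B(w) + neg_sum(w)

def compose(u, v):            # (u v)(i) = u(v(i)), signed window notation
    def act(w, k):
        return w[k-1] if k > 0 else -w[-k-1]
    return [act(u, act(v, i)) for i in range(1, len(u)+1)]

def a(i, m):                  # a_i = s_1 s_{i+1}, as a window of length m
    s = lambda j: list(range(1, j)) + [j+1, j] + list(range(j+2, m+1))
    return compose(s(1), s(i+1))

def Des_A(pi):
    m = len(pi)               # m = n+1
    return {i for i in range(1, m-1)
              if ell_L(compose(pi, a(i, m))) <= ell_L(pi)}

pi    = [3, -6, -4, 5, 2, -1]
theta = [-6, 3, 5, -4, 2, -1]        # Theta(pi) as computed above
n = len(pi) - 1
nrmaj = sum(n - i for i in Des_A(pi)) + neg_sum(pi)
print(Des_A(pi), nrmaj, ell_L(theta))
\end{verbatim}
It prints $\{1, 3, 4\}$, $18$ and $18$, in agreement with the computation above.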
\end{exmp} \begin{proof}[Proof of Theorem~\ref{TH:main}] The bijectivity of $\Theta$ follows from the bijectivity of $\Psi$ together with Corollary~\ref{COR:uniq}. Let $\pi \in L_{n+1}$, $\sigma = s(\pi)$ and $u = \sigma^{-1}\pi$. By Definition~\ref{DEF:Llen}, \[ \ell_L(\Theta(\pi)) = \ell_L(\sigma \Psi(u)) = \inv(\sigma\Psi(u))-\del_B(\sigma\Psi(u))+\sum_{i\in\Neg((\sigma\Psi(u))^{-1})} i . \] By Corollary~\ref{COR:uniq} and Lemma~\ref{LE:main}, \[ \inv(\sigma\Psi(u))-\del_B(\sigma\Psi(u)) = \inv(\Psi(u))-\del_B(\Psi(u)) \] and \[ \Neg((\sigma\Psi(u))^{-1}) = \Neg(\sigma^{-1}) = \Neg(\pi^{-1}) , \] so \[ \ell_L(\Theta(\pi)) = \inv(\Psi(u))-\del_B(\Psi(u)) + \sum_{i \in \Neg(\pi^{-1})} i . \] By identity~\eqref{EQ:Alen} and Theorem~\ref{PR:Psi}, \[ \inv(\Psi(u))-\del_B(\Psi(u)) = \ell_A(\Psi(u)) = \rmaj_{A_{n+1}}(u) = \sum_{i \in \Des_A(u)} i . \] Again by Lemma~\ref{LE:main}, $\Des_A(u) = \Des_A(\pi)$, whence by Remark~\ref{RE:coincide}, $\rmaj_{A_{n+1}}(u) = \rmaj_{L_{n+1}}(\pi)$. Thus \[ \ell_L(\Theta(\pi)) = \rmaj_{L_{n+1}}(\pi)+\sum_{i \in \Neg(\pi^{-1})} i = \nrmaj_{L_{n+1}}(\pi) . \qedhere \] \end{proof} \end{document}
\begin{document} \title{Quantum-coherent coupling of a mechanical oscillator\\to an optical cavity mode} \author{E.\ Verhagen$^{1,\dagger}$, S.\ Del{\'e}glise$^{1,\dagger}$, S.\ Weis$^{1,2,\dagger}$, A. Schliesser$^{1,2,\dagger}$, T.\ J.\ Kippenberg$^{1,2}$} \email{[email protected]} \affiliation{$^{1}$Ecole Polytechnique F$\acute{e}$d$\acute{e}$rale de Lausanne (EPFL), 1015 Lausanne, Switzerland} \affiliation{$^{2}$Max Planck Institute of Quantum Optics, 85748 Garching, Germany} \affiliation{$^{\dagger}$These authors contributed equally to this work.} \begin{abstract} Quantum control of engineered mechanical oscillators can be achieved by coupling the oscillator to an auxiliary degree of freedom, provided that the coherent rate of energy exchange exceeds the decoherence rate of each of the two sub-systems. We achieve such quantum-coherent coupling between the mechanical and optical modes of a micro-optomechanical system. Simultaneously, the mechanical oscillator is cooled to an average occupancy of n=1.7$\pm$0.1 motional quanta. Pulsed optical excitation reveals the exchange of energy between the optical light field and the micromechanical oscillator in the time domain at the level of less than one quantum on average. These results provide a route towards the realization of efficient quantum interfaces between mechanical oscillators and optical fields. \end{abstract} \maketitle Mechanical oscillators are at the heart of many precision experiments, such as single spin detection \cite{Rugar2004} or atomic force microscopy and can exhibit exceptionally low dissipation. The possibility to control the quantum states of such engineered micro- or nanomechanical oscillators, similar to the control achieved over the motion of trapped ions \cite{Leibfried2003}, has been a subject of longstanding interest \cite{Braginsky1992, Schwab2005}, with prospects of quantum state transfer \cite{Zhang2003,Tian2010,Akram2010,Stannigel2010}, entanglement of mechanical oscillators \cite{Vitali2007} and testing of quantum theory in macroscopic systems \cite{Marshall2003,Paternostro2007}. However, such experiments require coupling the mechanical oscillator to an auxiliary system\textemdash whose quantum state can be controlled and measured\textemdash with a coherent coupling rate that exceeds the decoherence rate of each of the subsystems. Equivalent control of atoms has been achieved in the context of cavity Quantum Electrodynamics (cQED \cite{Kimble1998}) and has over the past decades been extended to various other systems such as superconducting circuits \cite{Chiorescu2004}, solid state emitters\cite{Hennessy2007} or the light field itself \cite{Deleglise2008}. Recently, elementary quantum control at the single-phonon level has been demonstrated for the first time, by coupling a piezo-electrical dilatation oscillator to a superconducting qubit \cite{OConnell2010}. An alternative and highly versatile route is to use the radiation-pressure-induced coupling of optical and mechanical degrees of freedom, inherent to optical microresonators \cite{Kippenberg2005}, which can be engineered in numerous forms at the micro- or nanoscale \cite{Kippenberg2008,Clerk2010,Favero2009a}. This coupling can be described by the interaction Hamiltonian $H=\hbar \ensuremath{g_\mathrm{0}} \hat{a}^\dagger \hat{a} (\hat{b}^\dagger + \hat{b})$, where $\hat{a}$ ($\hat{b}$) is the photon (phonon) annihilation operator, $\hbar$ is the reduced Planck constant and $\ensuremath{g_\mathrm{0}}$ is the vacuum optomechanical coupling rate. 
In the resolved sideband regime (where the mechanical resonance frequency $\ensuremath{\Omega_\mathrm{m}}$ exceeds the cavity energy decay rate $\kappa$), with an intense laser tuned close to the lower optomechanical sideband, one obtains in the rotating wave approximation the effective Hamiltonian \begin{equation} H=\hbar g \left(\hat{a}\hat{b}^\dagger+\hat{a}^\dagger\hat{b}\right) \label{effH} \end{equation} for the operators $\hat{a}$ and $\hat{b}$ now displaced by their steady state values. We have introduced here the field-enhanced coupling rate \cite{Dobrindt2008,Marquardt2007} $g=\sqrt{\bar{n}_\mathrm{c}}\ensuremath{g_\mathrm{0}}$, where $\bar{n}_\mathrm{c}$ denotes the average number of photons in the cavity. In the absence of decoherence, the unitary evolution (\ref{effH}) corresponds to swapping of the (displaced) optical and mechanical quantum states with a period of $2 \pi/\ensuremath{\Omega_\mathrm{c}}$, where $\ensuremath{\Omega_\mathrm{c}}=2g$ is the coherent energy exchange rate. This state swapping is at the heart of most quantum control protocols \cite{Zhang2003,Tian2010,Akram2010,Stannigel2010}. In practice, however, this unitary evolution is compromised by the coupling of both degrees of freedom to their respective environments. Hence, it is important for $\ensuremath{\Omega_\mathrm{c}}$ to exceed both the optical decoherence rate $\kappa$ and the mechanical decoherence rate $\gamma$, defined here as the inverse time needed for a single excitation to be lost into the environment. Reaching this regime of quantum-coherent coupling has proven challenging. While the onset of normal mode splitting \cite{Dobrindt2008, Marquardt2007} (which occurs for $\ensuremath{\Omega_\mathrm{c}}>\kappa/2$, when $\kappa$ largely exceeds the mechanical dissipation rate $\ensuremath{\Gamma_\mathrm{m}}$) could be observed in a room temperature experiment \cite{Groblacher2009a}, the mechanical decoherence rate $\gamma=\ensuremath{\Gamma_\mathrm{m}} (\ensuremath{\bar{n}_\mathrm{m}}+1)\approx\ensuremath{\Gamma_\mathrm{m}} \ensuremath{\bar{n}_\mathrm{m}}$ (where $\ensuremath{\bar{n}_\mathrm{m}}=k_\mathrm{B} T / \hbar \ensuremath{\Omega_\mathrm{m}}$) is strongly enhanced by the coupling to the environment at equilibrium temperature $T$. The condition $\ensuremath{\Omega_\mathrm{c}}>(\kappa,\gamma)$ is analogous to the strong coupling regime encountered in atomic cavity QED \cite{Kimble1998} with the atomic decay rate being replaced by $\gamma$, and the Rabi frequency by the optomechanical coupling rate $\ensuremath{\Omega_\mathrm{c}}$. Recently, a superconducting microwave cavity electromechanical system has accessed this regime \cite{Teufel2011, Teufel2011b}. If implemented at optical frequencies, the availability of well-developed quantum optical techniques, and the possibility to transport quantum information through room-temperature fiber links could give access to a range of new applications \cite{Zhang2003,Tian2010,Akram2010,Stannigel2010,Groblacher2009a}. \begin{figure}\label{fig1} \end{figure} Here we demonstrate for the first time the quantum-coherent coupling of an optical cavity field to a micromechanical oscillator. The experimental setting is a micro-optomechanical system in the form of a spoke-supported toroidal optical microcavity \cite{Anetsberger2008}. Such devices exhibit high quality factor whispering gallery mode resonances (with a typical cavity decay rate $\kappa/2\pi<10\,\mathrm{MHz}$) coupled to mechanical radial breathing modes via radiation pressure. 
The vacuum optomechanical coupling rate $\ensuremath{g_\mathrm{0}}=\frac{\omega}{R}x_{\mathrm{ZPM}}$ can be increased by reducing the radius $R$ of the cavity. However, the larger per photon force $\hbar \omega / R$ is then usually partially compensated by the increase in the mechanical resonance frequency $\ensuremath{\Omega_\mathrm{m}}$\textemdash and correspondingly smaller zero point motion $x_\mathrm{ZPM}=\sqrt{\hbar/(2m_\mathrm{eff}\ensuremath{\Omega_\mathrm{m}})}$. Moreover, small structures also generally feature larger dissipation through clamping losses. To compensate these opposing effects we use an optimized spoke anchor design (cf. Fig. \ref{fig1} and appendix) that maintains low clamping losses and a moderate mechanical resonance frequency while reducing the dimensions of the structure. Devices fabricated in this manner (with $R=15\,\mu\mathrm{m}$) exhibited coupling rates as high as $\ensuremath{g_\mathrm{0}}=2\pi \times 3.4\,\mathrm{kHz}$ for a resonance frequency of 78 MHz, as determined independently at room temperature \cite{Gorodetsky2010}. {\begin{figure}\label{fig2} \end{figure}} {\begin{figure*}\label{setup} \label{fig3} \end{figure*}} \begin{figure} \caption{ \textbf{Coherent exchange between the optical field and the micromechanical oscillator} probed in the time domain as measured (``Data'') and calculated numerically (``Model''). A modulation pulse (blue traces) applied to the phase modulator creates a light pulse probing the dynamics of the optomechanical system. The response of the system is encoded into the optical pulse at the output of the coupling fiber as recorded by the homodyne receiver (red traces, 250,000 averages). Using the full model of the system, the mechanical displacement can be simulated in addition (green traces). In the regime of weak coupling (top panel), the optical output pulse exhibits only a weak signature of the nearly unperturbed mechanical ringdown excited by the short burst of radiation pressure. In the case of strong coupling (middle and bottom panel) the modulated envelopes of the time-domain response indicate several cycles of oscillation between (very-low energy $\bar{n}=0.9$, bottom) coherent optical and mechanical excitations of the system.} \label{fig4} \end{figure} To reduce the mechanical decoherence rate $\gamma=\ensuremath{\Gamma_\mathrm{m}}\ensuremath{\bar{n}_\mathrm{m}}$, the microcavity, coupled to a tapered optical fiber, is embedded in a helium-3 cryostat \cite{Riviere2011}. For low optical power, the $^\mathrm{3}\mathrm{He}$ buffer gas enables thermalization of the resonator over the entire cryogenic temperature range ($T_\mathrm{min}=650\,\mathrm{mK}$) in spite of its weak thermal anchoring to the substrate. A quantum-limited continuous-wave Ti:Sapphire laser provides both the coupling field and homodyne local oscillator used to measure the weak phase fluctuations imprinted on the field emerging from the cryostat by mechanical displacement fluctuations. While the coherent coupling rate $\Omega_c$ can be determined unambiguously by probing the coherent response of the system \cite{Weis2010}, the mechanical decoherence rate is affected in a non-trivial way by the light-absorption-dependent sample temperature and the mechanical mode's coupling to its environment \cite{Riviere2011}, which is dominated by two-level fluctuators at cryogenic temperatures \cite{Arcizet2009a}. 
In order to systematically assess the decoherence rate, the coupling laser's frequency $\omega_\mathrm{l}=\omega_\mathrm{c}+\Delta$ is swept in the vicinity of the lower mechanical sideband. This allows the displaced cavity mode $\hat{a}$ (of frequency $\left|\Delta\right|$) to be brought in and out of resonance with the mechanical mode $\hat{b}$ (of frequency $\ensuremath{\Omega_\mathrm{m}}$). For each detuning point, we first acquire the coherent response of the system to an optical excitation using a balanced homodyne detection scheme. These spectra (Fig.~\ref{fig2}a) allow us to determine all parameters of the model characterizing the optomechanical interaction (appendix). For large detunings $\left|\Delta\right|>\ensuremath{\Omega_\mathrm{m}}$, they essentially feature a Lorentzian response of width $\kappa$ and center frequency $\left|\Delta\right|$. The sharp dip at $\Omega\approx\ensuremath{\Omega_\mathrm{m}}$ originates from optomechanically induced transparency \cite{Weis2010}, and for $\ensuremath{\Omega_\mathrm{m}}=-\Delta$, its width is approximately $\ensuremath{\Omega_\mathrm{c}}^2/\kappa$. The coupling rate, as derived from a fit of the coherent response for a laser power of 0.56 mW, is $\ensuremath{\Omega_\mathrm{c}}=2\sqrt{\bar{n}_\mathrm{c}}\ensuremath{g_\mathrm{0}}=2\pi \times(3.7\pm0.05)\,\mathrm{MHz}$ (corresponding to an intracavity photon number of $\bar{n}_\mathrm{c}=3\cdot10^5$). Additionally, for each value of the detuning, the noise spectrum of the homodyne signal is recorded in the absence of any external excitation (Fig.~\ref{fig2}b). The observed peak represents the phase fluctuations imprinted on the transmitted light by the mechanical mode's thermal motion. The constant noise background of these spectra is the shot-noise level for the (constant) laser power used throughout the laser sweep (see appendix for details). The amplitude of the peak is determined by the coupling to, and the temperature of, the environment, and therefore allows us to extract the mechanical decoherence rate. With all parameters now fixed, it is moreover possible to retrieve the mechanical displacement spectrum (Fig.~\ref{fig2}c). As can be seen, for detunings close to the sideband, when the (displaced) optical and mechanical modes are degenerate, the fluctuations are strongly reduced. This effect of optomechanical cooling can be understood in a simple picture: in the regime $\ensuremath{\Omega_\mathrm{c}}\ll\kappa$, the optical decay is faster than the swapping between the vacuum in the displaced optical field and the thermal state in the mechanical oscillator. In this case, the mechanical oscillator is coupled to an effective optical bath at near-zero thermal occupancy $\ensuremath{\bar{n}_\mathrm{min}}$ with the rate $\Gamma_\mathrm{cool}=\ensuremath{\Omega_\mathrm{c}}^2/\kappa$. Ideally, $\ensuremath{\bar{n}_\mathrm{min}}=\kappa^2/(16\ensuremath{\Omega_\mathrm{m}}^2)\ll1$ is governed by the non-resonant Stokes terms $\hat{a}^\dagger\hat{b}^\dagger+\hat{a}\hat{b}$ neglected in the Hamiltonian (\ref{effH}). Any excess noise in the coupling beam will, however, cause an effective increase of $\ensuremath{\bar{n}_\mathrm{min}}$ and preclude any quantum state manipulation due to the impurity of the field's quantum state. In practice it has proven crucial to eliminate phase noise originating from guided acoustic wave Brillouin scattering \cite{Shelby1985a} in the optical fibers by engineering their acoustic modes using HF etching (appendix).
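As a simple consistency check of the quoted figures (added here only for illustration; the cavity decay rate used below is an assumed value of order $2\pi\times7\,\mathrm{MHz}$, cf. the value quoted for the pulsed measurements below), the coupling rate follows from $\ensuremath{g_\mathrm{0}}$ and $\bar{n}_\mathrm{c}$, and the thermal occupancy entering $\gamma=\ensuremath{\Gamma_\mathrm{m}}\ensuremath{\bar{n}_\mathrm{m}}$ follows from $\ensuremath{\bar{n}_\mathrm{m}}=k_\mathrm{B}T/\hbar\ensuremath{\Omega_\mathrm{m}}$:
\begin{verbatim}
# Back-of-the-envelope check of the quoted numbers (illustrative only).
import math

g0      = 2*math.pi*3.4e3        # vacuum coupling rate (rad/s)
n_cav   = 3e5                    # intracavity photon number
Omega_c = 2*math.sqrt(n_cav)*g0
print(Omega_c/(2*math.pi*1e6))   # ~3.7 MHz, as quoted

kB, hbar   = 1.380649e-23, 1.054571817e-34
T, Omega_m = 0.65, 2*math.pi*78e6
n_m = kB*T/(hbar*Omega_m)        # thermal occupancy of the 78 MHz mode at 650 mK
print(n_m)                       # ~1.7e2 quanta

kappa = 2*math.pi*7e6            # assumed cavity decay rate of order 2*pi*7 MHz
print(Omega_c**2/kappa/(2*math.pi*1e6))  # cooling rate Gamma_cool (in MHz)
print((kappa/(4*Omega_m))**2)    # backaction limit n_min = kappa^2/(16 Omega_m^2)
\end{verbatim}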
Evaluating the mechanical decoherence rate for $\Delta=-\ensuremath{\Omega_\mathrm{m}}$ at a cryostat setpoint of 0.65~K, we find $\gamma=2 \pi \times (2.2\pm0.2)\,\mathrm{MHz}$\textemdash significantly smaller than $\ensuremath{\Omega_\mathrm{c}}$. Simultaneously, the average occupancy of the mechanical mode is reduced to $\bar{n}=1.7\pm0.1$, which is limited by the onset of normal mode splitting. Indeed, as $\ensuremath{\Omega_\mathrm{c}}$ approaches $\kappa$, the thermal fluctuations are only partially dissipated into the optical bath, and partially written back onto the mechanics after one Rabi cycle. Note that this occupancy, corresponding to 37\% ground state occupation, is associated with a sideband asymmetry of $1-\bar{n}/(\bar{n}+1)\approx 40\%$. A measurement of this asymmetry, similar to those performed on trapped ions \cite{Leibfried2003}, would yield direct evidence of the quantum nature of macroscopic mechanical oscillators. We subsequently increase the strength of the coupling field to reach $\ensuremath{\Omega_\mathrm{c}}\approx\kappa$. The signature of normal mode splitting \cite{Dobrindt2008,Marquardt2007,Groblacher2009a,Teufel2011} can be seen from both the coherent response and the fluctuation spectra in Fig.~\ref{fig3}. Both detuning series exhibit a clear anti-crossing, the splitting frequency being $5.7\,\mathrm{MHz}$. The decoherence rate (see Fig.~\ref{fig3}d) is slightly raised compared to Fig.~\ref{fig2} due to laser heating and a higher buffer gas temperature of 0.8K, amounting to $\gamma=2\pi\times (5.6\pm0.9)\,\mathrm{MHz}$ at the lower mechanical sideband. We hence demonstrate $\ensuremath{\Omega_\mathrm{c}}/\gamma=1.0$, which constitutes a four-orders of magnitude improvement over previous work in the optical domain \cite{Groblacher2009a}, and brings the system into the regime of quantum-coherent coupling. As a proof of principle, we finally demonstrate the dynamical exchange of weak coherent excitations between the optical and mechanical degrees of freedom in the time domain (Fig.~\ref{fig4}). To this end, we modulate the coupling laser's phase at the mechanical resonance frequency, with a Gaussian envelope of 54 nano-second duration. This modulation creates a pair of sidebands\textemdash one of which is resonant with the optical cavity\textemdash that contain on average ten quanta per pulse (cf. appendix). By measuring the homodyne signal it is possible to directly observe the coherent exchange of energy. In the regime of weak coupling (Fig.~\ref{fig4}a) the optical pulse excites the mechanical mode to a finite oscillation amplitude, which decays slowly with the nearly unperturbed mechanical dissipation rate. This in turn only imprints a weak signature on the homodyne signal. Increasing the laser power, we reach $\ensuremath{\Omega_\mathrm{c}} / 2 \pi = 11.4 \mathrm{MHz}>\kappa / 2 \pi=7.1\mathrm{MHz}$, and the envelope of the homodyne signal\textemdash corresponding to the amplitude of the measured sideband field\textemdash undergoes several cycles of energy exchange with the mechanical oscillator, before it decays with a modified rate of $\left(\kappa+\ensuremath{\Gamma_\mathrm{m}}\right)/2\approx\kappa/2$, corresponding to the decay rate of the optomechanical polariton excited. 
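The pulsed dynamics described above can be reproduced at a qualitative level from the linearized coupled-mode equations $\dot a=-(\kappa/2)a-\mathrm{i}gb+s(t)$ and $\dot b=-(\ensuremath{\Gamma_\mathrm{m}}/2)b-\mathrm{i}ga$ implied by Hamiltonian (\ref{effH}) with damping added. The following sketch is an illustration only (the drive amplitude, the mechanical damping rate and the integration step are assumptions); it shows the excitation swapping between the optical and mechanical amplitudes at the rate $\ensuremath{\Omega_\mathrm{c}}=2g$ once the pulse has passed.
\begin{verbatim}
# Qualitative sketch of the pulsed swap dynamics from the linearized
# coupled-mode equations (rotating frame, classical amplitudes; the Gaussian
# drive and its amplitude are assumptions made for illustration).
import numpy as np

kappa   = 2*np.pi*7.1e6          # cavity energy decay rate (rad/s), as quoted
Gamma_m = 2*np.pi*6e3            # assumed mechanical damping rate, << kappa
g       = 0.5*2*np.pi*11.4e6     # Omega_c = 2g = 2*pi*11.4 MHz, as quoted

dt, T = 1e-10, 1.2e-6
ts = np.arange(0.0, T, dt)
drive = np.exp(-0.5*((ts - 0.15e-6)/54e-9)**2)   # 54 ns Gaussian pulse envelope

a = b = 0.0 + 0.0j               # intracavity sideband and mechanical amplitudes
na, nb = [], []
for t, s in zip(ts, drive):
    da = (-0.5*kappa*a - 1j*g*b + s)*dt
    db = (-0.5*Gamma_m*b - 1j*g*a)*dt
    a, b = a + da, b + db
    na.append(abs(a)**2); nb.append(abs(b)**2)

# |a|^2 and |b|^2 oscillate out of phase at the exchange rate Omega_c = 2g,
# while the total excitation decays once the drive pulse has passed.
print(max(na), max(nb))
\end{verbatim}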
Using our model matched to previously taken coherent response measurements, we can derive not only the homodyne signature\textemdash which reproduces our data very well\textemdash but also the expected mechanical oscillations resulting from this pulsed excitation (see Fig.~\ref{fig4}). These reveal very clearly how the excitation cycles continuously between the optical and mechanical modes. Although the employed detection is far from optimized for time-domain experiments, we have achieved a signal-to-noise ratio of 40 by averaging 250,000 traces within a total acquisition time of approximately two minutes. Reducing the excitation further so that one pulse contains less than one photon on average, a clear signature of several swapping cycles can still be observed (Fig.~\ref{fig4}c). Replacing the weak coherent state by a single photon \cite{Akram2010}, we expect Rabi-like oscillation of the Fock state from optics to mechanics and back. A repeated quadrature measurement yielding a bimodal distribution would be an unambiguous signature of the quantum nature of the state after the full swap \cite{Lvovsky2001}. Obtaining quantum-coherent coupling $\ensuremath{\Omega_\mathrm{c}}\gtrsim(\gamma,\kappa)$ has several interesting consequences. First, it allows the mapping of in principle any quantum state of the optical field onto the mechanical mode via the use of a time-dependent coupling field. As a simple example, initialization of the mode in the ground state can be efficiently achieved in this regime using a \mbox{$\pi$-pulse} which swaps the thermal state of the oscillator and the vacuum in the optical field \cite{Jacobs2010}. Note that the manipulation of large quantum states becomes increasingly challenging since the lifetime of the number state $|n\rangle$ scales with $1/n$. In this context, it will be beneficial to further reduce the decoherence rate by limiting spurious laser heating and employing materials with low intrinsic loss, as well as to increase the optomechanical coupling rate by further miniaturization. The regime of quantum-coherent coupling demonstrated here has been proposed as a general quantum link between electromagnetic fields of vastly different frequencies, e.g., different wavelengths in the optical spectrum or microwave and optical photons \cite{Tian2010,Regal2011}. In that respect, the efficient coupling of the demonstrated system to a low-loss single mode optical fiber is beneficial. Moreover, quantum-coherent coupling enables the use of the mechanical oscillator as a transducer to link otherwise incompatible elements in hybrid quantum systems, such as solid-state spin, charge, or superconducting qubits and propagating optical fields \cite{Stannigel2010}. In this context, we note that the mechanical modes of silica microresonators have already been shown to be intrinsically coupled to effective two-level systems in the form of structural defect states in the glass \cite{Arcizet2009a}. In conclusion, the reported experiments\textemdash demonstrating quantum coherent coupling between a micromechanical oscillator and an optical mode\textemdash represent a first step into the experimental investigation and quantum optical control of the most tangible harmonic oscillator: a mechanical vibration. {\small $\hphantom{xxx}$\newline\noindent\textbf{Acknowledgements} This work was supported by the ERC Starting Grant SiMP, The Defense Advanced Research Agency (DARPA), The NCCR of Quantum Engineering and the SNF. E.V. 
acknowledges support from a Rubicon Grant by NWO, cofinanced by a Marie Curie Cofund Action. S.D. is funded by a Marie Curie Individual Fellowship.\newline} { \newcommand{\nocontentsline}[3]{} \renewcommand{\addcontentsline}[2][]{\nocontentsline#1{#2}} \begin{thebibliography}{10} \bibitem{Rugar2004} Rugar, D., Budakian, R., Mamin, H.~J., and Chui, B.~W. \newblock {\em Nature}{ \bf 430}, 329--332 (2004). \bibitem{Leibfried2003} Leibfried, D., Blatt, R., Monroe, C., and Wineland, D. \newblock {\em Rev. Mod. Phys.}{ \bf 75}(1), 281--324 Mar (2003). \bibitem{Braginsky1992} Braginsky, V.~B. and Khalili, F.~Y. \newblock {\em Quantum {M}easurement}. \newblock Cambridge University Press, (1992). \bibitem{Schwab2005} Schwab, K.~C. and Roukes, M.~L. \newblock {\em Phys. Today}{ \bf 58}(7), 36--42 (2005). \bibitem{Zhang2003} Zhang, J., Peng, K., and Braunstein, S.~L. \newblock {\em Phys. Rev. A}{ \bf 68}, 013808 (2003). \bibitem{Tian2010} Tian, L. and Wang, H. \newblock {\em Phys. Rev. A}{ \bf 82}(5), 053806 Nov (2010). \bibitem{Akram2010} Akram, U., Kiesel, N., Aspelmeyer, M., and Milburn, G.~J. \newblock {\em New J. Phys.}{ \bf 12}, 083030 (2010). \bibitem{Stannigel2010} Stannigel, K., Rabl, P., Sorensen, A.~S., Zoller, P., and Lukin, M.~D. \newblock {\em Phys. Rev. Lett.}{ \bf 105}, 220501 (2010). \bibitem{Vitali2007} Vitali, D., Gigan, S., Ferreira, A., Bohm, H.~R., Tombesi, P., Guerreiro, A., Vedral, V., Zeilinger, A., and Aspelmeyer, M. \newblock {\em Phys. Rev. Lett.}{ \bf 98}, 030405 (2007). \bibitem{Marshall2003} Marshall, W., Simon, C., Penrose, R., and Bouwmeester, D. \newblock {\em Phys. Rev. Lett.}{ \bf 91}, 130401 (2003). \bibitem{Paternostro2007} Paternostro, M., Vitali, D., Gigan, S., Kim, M.~S., Brukner, C., Eisert, J., and Aspelmeyer, M. \newblock {\em Phys. Rev. Lett.}{ \bf 99}(25), 250401 Dec (2007). \bibitem{Kimble1998} Kimble, H.~J. \newblock {\em Phys. Scripta}{ \bf 1998}(T76), 127 (1998). \bibitem{Chiorescu2004} Chiorescu, I., Bertet, P., Semba, K., Nakamura, Y., Harmans, C.~J.~P.~M., and Mooij, J.~E. \newblock {\em Nature}{ \bf 431}, 159 (2004). \bibitem{Hennessy2007} Hennessy, K., Badolato, A., Winger, M., Gerace, D., Atatüre, M., Gulde, S., Fält, S., Hu, E.~L., and Imamoglu, A. \newblock {\em Nature}{ \bf 445}, 896--899 (2007). \bibitem{Deleglise2008} Del{\'e}glise, S., Dotsenko, I., Sayrin, C., Bernu, J., Brune, M., Raimond, J.-M., and Haroche, S. \newblock {\em Nature}{ \bf 455}(7212), 510--514 Sep (2008). \bibitem{OConnell2010} O'Connell, A.~D., Hofheinz, M., Ansmann, M., Bialczak, R.~C., Lenander, M., Lucero, E., Neeley, M., Sank, D., Wang, H., Weides, M., Wenner, J., Martinis, J.~M., and Cleland, A.~N. \newblock {\em Nature}{ \bf 464}(7289), 697--703 Apr (2010). \bibitem{Kippenberg2005} Kippenberg, T.~J., Rokhsari, H., Carmon, T., Scherer, A., and Vahala, K.~J. \newblock {\em Phys. Rev. Lett.}{ \bf 95}, 033901 (2005). \bibitem{Kippenberg2008} Kippenberg, T.~J. and Vahala, K.~J. \newblock {\em Science}{ \bf 321}, 1172--1176 (2008). \bibitem{Clerk2010} Clerk, A.~A., Devoret, M.~H., Girvin, S.~M., Marquardt, F., and Schoelkopf, R.~J. \newblock {\em Rev. Mod. Phys.}{ \bf 82}(2), 1155--1208 Apr (2010). \bibitem{Favero2009a} Favero, I. and Karrai, K. \newblock {\em Nature Photon.}{ \bf 3}, 201--205 (2009). \bibitem{Dobrindt2008} Dobrindt, J.~M., Wilson-Rae, I., and Kippenberg, T.~J. \newblock {\em Phys. Rev. Lett.}{ \bf 101}, 263602 (2008). \bibitem{Marquardt2007} Marquardt, F., Chen, J.~P., Clerk, A.~A., and Girvin, S.~M. \newblock {\em Phys. Rev. 
Lett.}{ \bf 99}, 093902 (2007). \bibitem{Groblacher2009a} Gr{\"o}blacher, S., Hammerer, K., Vanner, M.~R., and Aspelmeyer, M. \newblock {\em Nature}{ \bf 460}, 724--727 (2009). \bibitem{Teufel2011} Teufel, J.~D., Li, D., Allman, M.~S., Cicak, K., Sirois, A. J .and~Whittaker, J.~D., and Simmonds, R.~W. \newblock {\em Nature}{ \bf 471}, 204--–208 (2011). \bibitem{Teufel2011b} Teufel, J.~D., Donner, T., Li, D., Harlow, J.~W., Allman, M.~S., Cicak, I.~K., Sirois, A.~J., Whittaker, J.~D., Lehnert, K.~W., and Simmonds, R.~W. \newblock {\em arXiv:1103.2144}{ \bf } (2011). \bibitem{Anetsberger2008} Anetsberger, G., Rivi{\`e}re, R., Schliesser, A., Arcizet, O., and Kippenberg, T.~J. \newblock {\em Nature Photon.}{ \bf 2}, 627--633 (2008). \bibitem{Gorodetsky2010} Gorodetsky, M., Schliesser, A., Anetsberger, G., Deleglise, S., and Kippenberg, T.~J. \newblock {\em Opt. Express}{ \bf 18}, 23236--23246 (2010). \bibitem{Riviere2011} Rivi{\`e}re, R., Del{\'e}glise, S., Weis, S., Gavartin, E., Arciezt, O., Schliesser, A., and Kippenberg, T.~J. \newblock {\em Phys. Rev. A}{ \bf 83}, 063835 (2011). \bibitem{Weis2010} Weis, S., Rivi{\`e}re, R., Del{\'e}glise, S., Gavartin, E., Arcizet, O., Schliesser, A., and Kippenberg, T.~J. \newblock {\em Science}{ \bf 330}, 1520--1523 (2010). \bibitem{Arcizet2009a} Arcizet, O., Rivi{\`e}re, R., Schliesser, A., Anetsberger, G., and Kippenberg, T.~J. \newblock {\em Phys. Rev. A}{ \bf 80}, 021803(R) (2009). \bibitem{Shelby1985a} Shelby, R.~M., Levenson, M.~D., and Bayer, P.~W. \newblock {\em Phys. Rev. B}{ \bf 31}(8), 5244--5252 Apr (1985). \bibitem{Lvovsky2001} Lvovsky, A.~I., Hansen, H., Aichele, T., Benson, O., Mlynek, J., and Schiller, S. \newblock {\em Phys. Rev. Lett.}{ \bf 87}(5), 050402 Jul (2001). \bibitem{Jacobs2010} {Jacobs}, K., {Nurdin}, H.~I., {Strauch}, F.~W., and {James}, M. \newblock {\em arXiv:1003.2653}{ \bf } March (2010). \bibitem{Regal2011} Regal, C.~A. and Lehnert, K.~W. \newblock {\em J. Phys.: Conference Series}{ \bf 264}(1), 012025 (2011). \end{thebibliography} } \onecolumngrid \setcounter{figure}{0} \setcounter{equation}{0} \setcounter{section}{0} \renewcommand \theequation {A\arabic{equation}} \renewcommand \thefigure {A\arabic{figure}} \renewcommand \thesection {A\arabic{section}} \begin{center} \Large{ \textbf{Appendix}} \end{center} \setcounter{tocdepth}{3} \tableofcontents \section{Experimental details} \subsection{Experimental setup} \label{ss:setup} Figure \ref{fig:Setup} shows a schematic of the employed experimental setup. At the heart of the optical setup is a Ti:Sapphire laser (Sirah Matisse TX) operating at a wavelength around $780\,\mathrm{nm}$. The laser exhibits quantum limited amplitude and phase noise at Fourier frequencies relevant for this experiment. During the experiments the laser is locked to an external reference cavity such that drifts of the laser detuning $\Delta$ can be neglected during the acquisition time. \begin{figure} \caption{\textbf{Setup.} See text for details.} \label{fig:Setup} \end{figure} The sample itself resides in a helium-3 exchange gas cryostat (Oxford Instruments Heliox TL) that is used for cryogenic pre-cooling of the mechanical mode to low temperatures. Since the toroids are situated directly above the surface of the liquified helium-3, the achievable temperature is directly linked to the vapor pressure curve of helium-3. As the toroidal microstructures are thermally very well isolated from the substrate, one relies on cooling via the helium-3 exchange gas. 
As a consequence, cryostat temperature setpoints of at least 650 mK (corresponding to pressures larger than $\approx0.15\,\mathrm{mbar}$) are favorable. Coupling of light into the toroid is achieved via a tapered optical fiber that is approached to the resonator using piezo positioners compatible with low-temperature operation (Attocube GmbH). The fiber ends are guided through and out of the cryostat and constitute one arm of the interferometer that is part of the balanced homodyne detection scheme. The length of one of the ca. 8 m long arms is servo-locked using a movable mirror to cancel the DC-component of the interferometer's signal. This setting allows a shot-noise-limited read-out of the phase noise imprinted onto the transmitted laser field. The laser passes an electro-optical modulator that is used to create sidebands around the laser frequency. Finally, the absolute laser power at the input of the experiment is actively stabilized to ensure operation at a constant light intensity. The three colored building blocks highlighted in Figure \ref{fig:Setup} depict the three different measurements that are routinely performed one after the other. \begin{itemize} \item \emph{Coherent response.} A network analyzer sweeps the upper modulation sideband over the optical resonance and demodulates the corresponding (coherent) signal (cf. section \ref{CoherentDynamics}).\\ \item \emph{Noise spectrum.} Connecting only an electronic spectrum analyzer gives access to the incoherent noise spectrum (cf. section \ref{NoiseCovariances}).\\ \item \emph{Time domain response.} Sending a pulsed stimulus from an arbitrary waveform generator to the EOM which modulates the coupling laser gives access to the dynamic time domain response of the optomechanical system (cf. section \ref{timeDomain}).\\ \end{itemize} \subsection{Influence of guided acoustic wave Brillouin scattering \label{GAWBS}} A crucial prerequisite for optomechanical measurements in the quantum regime is the use of a quantum-limited laser source. From the point of view of quantum manipulations, added noise in the coupling beam corresponds to an improper state preparation, the optical beam being in a statistical mixture of pure quantum states. In the weak coupling limit $\Omega_c \ll \kappa$, where the optical field acts as an effective bath, these extra fluctuations correspond to an increased temperature of the bath and prevent cooling close to the quantum ground state \cite{Schliesser2008B, Rabl2009B, Diosi2008B}. In addition, classical laser noise driving the optomechanical system can lead to ambiguous signatures such as squashing in the noise spectra, as reported previously \cite{Rocheleau2010B}. We have verified in our previous work \cite{Riviere2011B} that the employed laser source is quantum limited. However, as is well known from fiber-based quantum optics experiments \cite{Shelby1985aB}, optical fibers can give rise to classical phase noise in the form of guided acoustic wave Brillouin scattering (GAWBS). This process involves thermally driven radial mechanical modes of the fiber, which also modulate the optical path length. To investigate the presence of GAWBS we have recorded the noise spectrum from the homodyne detector when the fiber is retracted away from the cavity in an imbalanced Mach-Zehnder interferometer. Several classical peaks are observed on top of the shot-noise background (not corrected for the detector response) (Fig.~\ref{fig:GAWBS}).
\begin{figure} \caption{\textbf{Engineering of the fiber GAWBS noise spectrum.} The figure shows a broadband background spectrum of the imbalanced homodyne signal where the GAWBS modes are visible. The blue, green and red traces are taken with an unmodified fiber, a fiber with partly stripped buffer, and an (almost entirely) etched fiber, respectively. As expected for guided dilatational acoustic waves of the optical fiber, the frequencies are increased by a factor of about 1.3 for a thinned fiber of around $95\,\mu\mathrm{m}$ diameter (as compared to $125\,\mu\mathrm{m}$ before). Doublets in the red trace are due to slightly different final etching radii (the difference is about $3\,\mu\mathrm{m}$) of the different fibers in our setup (i.e., the local oscillator fiber and the signal fiber). The difference in the relative heights of the peaks is attributed to varying readout conditions, and as such only the peaks' frequencies are of interest. The inset shows a zoom of the background for the final setup (i.e., fibers with etch-reduced diameter and balanced homodyne arm lengths) and illustrates the achieved improvements, i.e., the reduced contribution of GAWBS to the background at the mechanical resonance frequency.} \label{fig:GAWBS} \end{figure} The width (i.e.\ damping) of the noise peaks was observed to narrow dramatically when the buffer was partly stripped off the fiber, clearly demonstrating the mechanical nature of the peaks. One of the peaks coincides with the mechanical resonance frequency of $78\,\mathrm{MHz}$. However, the frequency of the dilatational fiber modes is proportional to the inverse fiber radius and can therefore be shifted by etching the fiber cladding in an HF solution. Immersing the fibers (without removing the acrylate buffer, which is permeable to HF) in a $40\,\%$ HF solution for 50 minutes reduced the cladding diameter from $125\,\mu\mathrm{m}$ to $95\,\mu\mathrm{m}$. This increased the GAWBS mode frequencies of all fibers in the setup by $\approx30\,\%$, shifting them away from the mechanical resonance frequency of the toroid (Fig. \ref{fig:GAWBS}a). The data shown in Fig. \ref{fig:GAWBS}b have been taken under the same conditions as the lowest occupation run shown in Fig. 2 of the main manuscript. The residual noise at $78\,\mathrm{MHz}$ (due to small portions of fibers that have not been etched) is approximately a factor of seven smaller than the initial peaks, corresponding to a noise level of approximately $2\,\%$ of the shot noise. This noise is generated along the fibers, both before and after the cavity. Special care was taken to minimize the length of unetched fiber before the cavity. The fact that no influence of the cavity detuning and coupling parameters on the transduction of these classical noise peaks into a measured signal is discernible indicates that most of the remaining noise indeed originates from the fiber after the cavity. Under this assumption, the independent noise of the GAWBS can be subtracted from the signal in order to estimate the decoherence rate and occupation. Figure \ref{fig:GAWBS}c in the main text shows that the shape of the spectra, as predicted from independently measured parameters, is in excellent agreement with the data after subtraction, in which no signs of squashing are observed. Nonetheless, we have performed an additional analysis for the lowest-occupancy data under the assumption that half of the noise is generated before the cavity, which leads to deviations of the decoherence rate and occupation of $7\,\%$ and $5\,\%$, respectively.
This upper bound of the influence of GAWBS, corresponding to an uncertainty of 0.08 phonons, is included in the quoted errors (cf. section \ref{ErrorAnalysis}). \subsection{Time-domain response \label{timeDomain}} In order to probe the coherent dynamics of the optomechanical system in the time domain, the strong pump beam is tuned to the red sideband, and an RF pulse, resonant with the mechanical oscillator, is sent to the EOM. The upper modulation sideband excites the strongly coupled system. The subsequent evolution of the transmitted signal is recorded using the homodyne detector and an oscilloscope. An arbitrary signal generator (Agilent 33250A) is used to generate the RF pulses. The time dependent voltage $U(t)$ is a sine wave modulated by a Gaussian envelope: \begin{align} U(t)&=E(t) \sin(\Omega_\mathrm{mod} t+\phi_\mathrm{0})\\ E(t)&=U_0 e^{-\left(\frac{t-t_0}{\tau}\right)^2} \end{align} with a carrier frequency $\Omega_\mathrm{mod} = 2 \pi \times 77\,\mathrm{MHz}$ and an envelope duration $\tau = 32\,\mathrm{ns}$ ($\mathrm{FWHM} = 54\,\mathrm{ns}$). A digital oscilloscope, synchronously triggered with the signal generator is used to record and average the homodyne response. The very small signal originating from the balanced detectors is amplified and filtered, around a frequency of $75\,\mathrm{MHz}$, with a bandwidth of $100\,\mathrm{MHz}$. For low excitation amplitude, averaging is necessary to extract the coherent response out of the incoherent thermal and quantum noises from the optomechanical system. The modulation depth $\beta(t)$ corresponding to the instantaneous value of the slowly varying envelope is given by: \begin{align} \beta(t)=\pi \frac{E(t)}{V_\pi}, \end{align} where $V_\pi = 154$ V is the voltage corresponding to a $\pi$ phase shift of the beam in the EOM (NewFocus 4002). For a weak modulation depth ($\beta \ll 1$), a fraction $\left(\beta/2\right)^2$ of the optical carrier power $P_\mathrm{c}$ is scattered into each of the two first modulation sidebands. The total optical power in the upper sideband can hence be simply approximated by \begin{align} P(t)\approx P_c \left(\pi \frac{E(t)}{2 V_\pi}\right)^2 \end{align} The total energy in the pulse can then be obtained by integrating the instantaneous power over the duration of the pulse. The average number of photons in one pulse is hence given by: \begin{align} n \approx \frac{1}{\hbar \omega}\int_{-\infty}^{+\infty}{P_c \left(\frac{\pi E(t)}{2 V_\pi}\right)^2 \mathrm{d}t} = \frac{ \pi^{5/2} }{4 \sqrt{2} } \frac{\tau P_c}{\hbar \omega} \left(\frac{U_0}{V_\pi}\right)^2 \end{align} \section{Optimized spoke-anchored toroidal resonator} \subsection{Sample design \label{SampleDesign}} The optomechanical microresonators investigated in this work are specially designed toroidal microcavities, optimized to achieve large optomechanical coupling rates and small dissipation. Toroidal silica whispering gallery mode microresonators exhibit mechanical modes coupled to the optical modes through radiation pressure \cite{Kippenberg2005B}. Of particular interest is the lowest order radial breathing mode (RBM), whose motion maximally modulates the optical cavity length. In the context of quantum-coherent coupling, it is important to simultaneously achieve large values of $\Omega_\mathrm{c}/\gamma$ and $\Omega_\mathrm{c}/\kappa$, where $\Omega_\mathrm{c}=2 g_\mathrm{0} \left|\bar{a}\right|$. 
The vacuum optomechanical coupling rate $g_0$ is given by $\frac{\omega}{R}\sqrt{\hbar/\left(2m_\mathrm{eff}\Omega_\mathrm{m}\right)}$, where $R$ is the toroid radius, $m_\mathrm{eff}$ is the effective mass, and $\omega$ and $\Omega_\mathrm{m}$ are the optical and mechanical resonance frequencies, respectively. For a given incident power, optical frequency, and environment temperature, and assuming the resolved sideband regime and the coupling laser being tuned to the lower mechanical sideband, one obtains $\Omega_\mathrm{c}/\gamma \propto \left( R \,\Gamma_\mathrm{m} \sqrt{\Omega_\mathrm{m} m_\mathrm{eff} / \kappa}\right)^{-1}$ and $\Omega_\mathrm{c}/\kappa \propto \left( R \,\Omega_\mathrm{m}^{3/2} \sqrt{m_\mathrm{eff} \kappa}\right)^{-1}$. It is therefore obviously beneficial to reduce the sample dimensions to decrease $R$ and $m_\mathrm{eff}$. However, such miniaturization is generally accompanied by an increase of the mechanical frequency $\Omega_\mathrm{m}$, as well as an increase of $\Gamma_\mathrm{m}$ due to enhanced clamping losses. In microtoroids, both of these adverse effects can be countered in a design in which the toroid is suspended by spokes from the central pillar, as shown in Fig. \ref{fig:SampleDesign}a. The introduction of spokes serves three purposes. First, they isolate the mechanical motion of the toroidal RBM from the pillar support, strongly reducing clamping losses \cite{Anetsberger2008B}. Second, they reduce the mechanical mode volume and thereby the effective mass. Third, the effective spring constant is reduced, which lowers the mechanical resonance frequency. In practice, one needs to carefully consider the precise dimensions and positioning of the spokes, as these strongly affect both clamping losses $\Gamma_\mathrm{clamp}$ and $g_0$. Figure \ref{fig:SampleDesign}b shows the displacement profile of the RBM of a spoke-supported toroid of radius $R=15$ $\mu$m for various combinations of spoke length and position, as simulated with a finite element method. \begin{figure} \caption{\textbf{Sample optimization by finite element modeling.} See text for details.} \label{fig:SampleDesign} \end{figure} The SiO$_2$ thickness is 1 $\mu$m, the minor toroid radius is 2 $\mu$m, the spoke width is 500 nm, the pillar diameter is 1 $\mu$m, and the toroid is vertically offset from the middle SiO$_2$ disk by 400 nm. Since we are interested in the RBM only, it suffices to simulate 1/8 portion of the microresonator while assuming symmetric boundary conditions on both of the two `cut' planes. As can be seen from these examples, the mechanical mode profiles can change drastically depending on the spoke dimensions. Of the examples in Fig. \ref{fig:SampleDesign}b, only `D' depicts a mode that is purely localized to the outer toroid, with purely radial displacement, as illustrated in the cross-section in Fig. \ref{fig:SampleDesign}c. The origin of this wildly varying nature of the RBM is revealed in Fig. \ref{fig:SampleDesign}d, where the radius $r_\mathrm{i}$ of the inner disk (defining the spoke placement) and the spoke length $l_\mathrm{s}$ are varied systematically. The colorscale depicts the parameter $F$, defined as \begin{equation} F = \frac{2\pi E_\mathrm{mech}}{c \rho \Omega_\mathrm{m}^2 \int_{A_\mathrm{p}} \left| \Delta z \left( \mathbf{x} \right) \right|^2 dA}. 
\end{equation} Here, $E_\mathrm{mech}$ is the total mechanical energy in the mode, $c$ the speed of sound in silica, $\rho$ the density of silica, and $\Delta z\left(\mathbf{x}\right)$ the out-of-plane displacement amplitude, with the integration extending over the area $A_\mathrm{p}$ of the interface between the pillar and the silica disk. $F$ is proportional to the expected value of $\Gamma_\mathrm{clamp}^{-1}$, when the clamping area $A_\mathrm{p}$ is considered as a membrane radiating energy with a power $P=c \rho \Omega_\mathrm{m}^2 \int_{A_\mathrm{p}} \left| \Delta z \left( \mathbf{x} \right) \right|^2 dA$ \cite{Anetsberger2008B}. A previous study has experimentally found a correspondence of $F\approx\left(3\Gamma_\mathrm{clamp}/\left(2\pi\right)\right)^{-1}$ for larger toroids. As can be seen from the figure, the expected clamping losses vary strongly with spoke dimensions, ranging from $10^1$ to $10^6$ Hz. Most notably, several lines can be identified in this parameter space where clamping losses are large (indicated by the dashed lines). For parameter combinations along each of these lines, the RBM frequency approaches that of another mechanical mode of the structure. As a result, the two modes exhibit an anticrossing, with the hybridized modes showing a character of both uncoupled modes. This is the case for examples `A', `B', and `C' in Fig. \ref{fig:SampleDesign}b, which show the RBM hybridized with a flexural mode of the inner SiO$_2$ disk, the outermost SiO$_2$ membrane, and the spoke itself, respectively. In the vicinity of these anticrossings, the vertical displacement at the pillar, and as such the radiation into the substrate $F^{-1}$, are strongly enhanced. To achieve a design that exhibits small clamping losses, it is therefore crucial to avoid these parameter regions, as is the case for mode `D' in Fig. \ref{fig:SampleDesign}b. The aforementioned anticrossings affect the coupling rate $g_0$ as well, albeit to a lesser degree. Fig. \ref{fig:SampleDesign}e shows $g_\mathrm{0}$, calculated as in \cite{Schliesser2010B}, assuming the optical mode is localized at the edge of the toroid with negligible transverse size. At the anticrossings, $g_\mathrm{0}$ is reduced (i.e., the effective mass is increased), as a significant part of the mode's energy is in that case associated with displacements that do not modulate the cavity length. Away from the anticrossings, however, the RBM mode is localized exclusively in the toroid and outermost part of the membrane, well isolated from the inner disk and pillar support. As a result, $m_\mathrm{eff}$ is nearly identical to the physical mass of this volume. It is therefore important to minimize the volume of the outermost membrane, i.e., the distance between the spokes and the toroid. As can be seen in Fig. \ref{fig:SampleDesign}f, this simultaneously allows to reach the smallest possible resonance frequency. In practice, the laser reflow process used to form the toroid poses a lower limit on the remaining distance between spokes and toroid. \subsection{Sample fabrication} To fabricate the spoke-anchored microresonators, we use a combination of optical lithography and dry etching techniques outlined in Fig. \ref{fig:SampleFabrication}. In a first step (b), a disk including the spokes is transferred in a 1~$\mu$m thick film of thermal oxide on a Si wafer (a), through optical lithography followed by reactive ion etching of the SiO$_2$. 
In a second photolithography step (c), smaller disks of photoresist are defined that cover the center of the SiO$_2$ disks, including the spokes. These serve to protect the exposed Si surface between the spokes during the subsequent isotropic XeF$_2$ etch (d) of the Si substrate. Care is taken to stop the etch shortly before it reaches the apertures in the SiO$_2$ disk. After removing the protective photoresist disks, a laser reflow of the underetched disk is performed (e), forming the silica toroid. Finally (f), a second XeF$_2$ etch releases the toroid and reduces the pillar diameter, typically to a value smaller than 1 $\mu$m. \begin{figure} \caption{\textbf{Sample fabrication.} See text for details.} \label{fig:SampleFabrication} \end{figure} \subsection{Sample characterization} The vacuum optomechanical coupling rate $\ensuremath{g_0}$ is measured at room temperature in a vacuum chamber. To this end, the mechanical motion is read out using an external-cavity tunable diode laser at 1550 nm that is locked to a cavity resonance. In order to avoid any radiation-pressure effects, we perform these measurements at very low laser power (typically around 100 nW). The transmitted light is amplified by a low-noise erbium-doped fiber amplifier and sent onto a photodetector. For absolute calibration of the mechanical spectrum registered by an electronic spectrum analyzer, we use a phase-modulation technique \cite{Gorodetsky2010B}. We extract a vacuum optomechanical coupling rate of $\ensuremath{g_0}=1700\,\mathrm{Hz}$ for a wavelength of 1550 nm (i.e. $\ensuremath{g_0}=3400\,\mathrm{Hz}$ at 780 nm). \begin{figure} \caption{\textbf{Sample characterization.} a) Calibrated mechanical noise spectrum for the resonator used throughout the manuscript (except the last panel of Fig. 4 of the main manuscript), measured at room temperature in vacuum. The fit (red line) was used to extract a vacuum optomechanical coupling rate of 3400 Hz at 780 nm. b) Mechanical damping of this toroid vs. cryostat temperature. The red line is a fit according to the TLS model presented in \cite{Enss2005B,Riviere2011B}; the grey lines represent the contributions from resonant (dotted) and relaxational (dashed) processes. The fit yields a negligible contribution of clamping losses.} \label{fig:SampleCharacterization} \end{figure} The mechanical linewidth measured at room temperature (8.1 kHz) was found to be higher than expected from the calculated F-parameter (cf. section \ref{SampleDesign}). However, performing the same measurements in the cryostat (cf. Fig. \ref{fig:SampleCharacterization}b), a linewidth as low as 3.6 kHz was found on the same microresonator, indicating that a loss mechanism other than clamping losses must dominate at room temperature. Since losses due to two-level fluctuators (TLS) \cite{Enss2005B,Arcizet2009aB} have been found to be significantly lower (linewidths below 4 kHz have been measured at room temperature for conventional toroids of similar frequency), we believe that the dominant loss mechanism is thermo-elastic damping (TED) \cite{Nowacki1975B}. At low temperatures, where TED is strongly reduced, the main loss mechanism is coupling to TLS. Figure \ref{fig:SampleCharacterization}b shows the measured temperature dependence of the mechanical linewidth at low temperature, obtained with the laser (with 100 nW power) resonant with the (in this case strongly overcoupled) optical resonance in order to avoid dynamical backaction.
The variation of $\Gamma_\mathrm{m}$ with temperature can be fitted using a model for the TLS losses \cite{Enss2005B,Riviere2011B}. It is found that this mechanism dominates the total losses for all reachable cryogenic temperatures. This means that it is not possible to retrieve an accurate estimate of the temperature-independent contribution $\Gamma_\mathrm{clamp}$. We can however conclude that it must be smaller than 2 kHz for this sample. This shows that in our optimized spoke-supported design, we have successfully mitigated the clamping losses to a level where they are insignificant compared to intrinsic dissipation. \section{Modeling of optomechanical interaction} This section summarizes the theoretical model which was used to extract all quantities of interest from our data. Figure \ref{f:model} shows the parameters and variables of the model, and their mutual connections. \subsection{Conservative dynamics} The conservative dynamics of an optomechanical system are described by the Hamiltonian \cite{Law1995B} \begin{align} \label{e:cons} H&=\frac{1}{4}\hbar \Omega_\mathrm{m}\left(\ensuremath{\mhat q}^2+\ensuremath{\mhat p}^2\right)+\hbar \ensuremath{\omega_\mathrm{c}} \left(\ensuremath{\mhat a^\dagger} \ensuremath{\mhat a}+\frac{1}{2}\right)+\hbar \ensuremath{g_0} \ensuremath{\mhat q} \,\ensuremath{\mhat a^\dagger} \ensuremath{\mhat a}, \end{align} where the mechanical quadrature operators $q$ and $p$ are related to the corresponding mechanical ladder operators $b$ and $b^\dagger$ via \begin{align} \ensuremath{\mhat q}&=b+b^\dagger\\ \ensuremath{\mhat p} &=(b-b^\dagger)/i. \end{align} With these definitions $[\ensuremath{\mhat q},\ensuremath{\mhat p}]=2i$, and the actual mechanical displacement and momentum are given by $x'=\ensuremath{x_{\mathrm{ZPF}}} q$ and $p'=\hbar p/(2\ensuremath{x_{\mathrm{ZPF}}})$ with the amplitude of the zero-point motion \begin{align} \ensuremath{x_{\mathrm{ZPF}}}&=\sqrt{\frac{\hbar}{2 m_\mathrm{eff} \ensuremath{\Omega_\mathrm{m}}}}. \end{align} The vacuum optomechanical coupling rate $\ensuremath{g_0}$ quantifies the strength of the optomechanical interaction and is given by $\ensuremath{g_0}=(\partial \omega_\mathrm{c}/\partial x)\, \ensuremath{x_{\mathrm{ZPF}}}$. {\small \begin{figure} \caption{\textbf{Theoretical model used.} See text for details.} \label{f:model} \end{figure}} \subsection{Quantum Langevin equations} The Hamiltonian (\ref{e:cons}) determines the conservative evolution of the optomechanical degrees of freedom. Optical and mechanical dissipation, and the corresponding fluctuations, can be taken into account by introducing the mechanical dissipation rate $\ensuremath{\Gamma_\mathrm{m}}$ and the optical dissipation rate $\kappa=\ensuremath{\kappa_0}+\ensuremath{\kappa_\mathrm{ex}}$ (where $\ensuremath{\kappa_\mathrm{ex}}$ represents losses to the coupling waveguide and $\ensuremath{\kappa_0}$ all other optical losses) as well as the optical noise terms $\ensuremath{\delta \mhat s_\mathrm{in}}$, $\ensuremath{\delta \mhat s_\mathrm{cav}}$ and the thermal Langevin force, which we express as a rate $\ensuremath{\delta\! \mhat f_\mathrm{th}}$ by writing the physical force in momentum units of $\hbar/(2\ensuremath{x_{\mathrm{ZPF}}})$.
This leads to the well-known Langevin equations of cavity optomechanics \cite{Fabre1994B, Mancini1994B, Gardiner2004B} \begin{align} \dot\ensuremath{\mhat a}(t)&=\left(i \Delta-\frac{\kappa}{2}\right) \ensuremath{\mhat a}(t)- i \ensuremath{g_0} \ensuremath{\mhat q}(t) \ensuremath{\mhat a}(t) + \sqrt{\ensuremath{\kappa_\mathrm{ex}}} (\ensuremath{\bar s_\mathrm{in}}+\ensuremath{\delta \mhat s_\mathrm{in}}(t))+{\sqrt{\ensuremath{\kappa_0}}}{\ensuremath{\delta \mhat s_\mathrm{cav}}(t)}\\ \dot \ensuremath{\mhat q}(t)&=\ensuremath{\Omega_\mathrm{m}} \ensuremath{\mhat p}(t)\label{e:pQLE} \\ \dot\ensuremath{\mhat p}(t)&=- \ensuremath{\Omega_\mathrm{m}} \ensuremath{\mhat q}(t)- 2\ensuremath{g_0} \ensuremath{\mhat a^\dagger}(t) \ensuremath{\mhat a}(t)-{\ensuremath{\Gamma_\mathrm{m}}} \ensuremath{\mhat p}(t) + \ensuremath{\delta\! \mhat f_\mathrm{th}}(t) \label{e:xQLE}, \end{align} where the convention $\Delta=\ensuremath{\omega_\mathrm{l}}-\ensuremath{\omega_\mathrm{c}}$ was used to denote the detuning of the laser (angular) frequency $\ensuremath{\omega_\mathrm{l}}$ from the bare cavity resonance frequency $\ensuremath{\omega_\mathrm{c}}$, and $\ensuremath{\mhat a}$ is expressed in a frame rotating at $\ensuremath{\omega_\mathrm{l}}$. In order to accurately model the response of the optomechanical system over a wide range of parameters (detuning, Fourier frequency, optical and mechanical excitation) for a single set of parameters, we have refined this generic model by including other effects which are known to be inherent to most optical microcavities, and are discussed in the following. \emph{Photothermoelastic backaction.} Thermoelastic forces driven by temperature gradients induced by light absorption can induce mechanical displacements. The starting point to model these displacements are the coupled equations of motion known from the standard theory of thermoelasticity \cite{Nowacki1975B} \begin{align} \mu\, {\vec \nabla}^2 \vec u + (\lambda+\mu) \vec \nabla ( \vec \nabla \cdot {\vec u})+\vec f &= (3\lambda+2\mu)\alpha \vec \nabla \theta +\rho {\ddot {\vec u}}\\ \label{e:thermaldiffusion} k_\mathrm{t} {\vec \nabla}^2 \theta - c_\mathrm{t} \rho \dot \theta&= (3 \lambda+2\mu)\alpha T_0 (\vec \nabla \cdot \dot{ \vec u}) - v\, \kappa_\mathrm{abs} \ensuremath{\mhat a^\dagger} \ensuremath{\mhat a}. \end{align} These equations connect the displacement field $\vec u (\vec r,t)$ and the temperature elevation $\theta (\vec r,t)$ above the mean temperature $T_0$. Here, $\lambda$ and $\mu$ are the Lam\'e parameters, $\alpha$ the thermal expansion coefficient, $\rho$ the mass density, $\vec f$ a body force (e.g.\ due to radiation pressure), $k_\mathrm{t}$ the thermal conductivity, $\kappa_\mathrm{abs}$ the photon absorption rate, $c_\mathrm{t}$ the heat capacity, and the function $v(\vec r)$ describes the spatial distribution of the light absorption. Evidently, a thermoelastic body force \begin{align} \vec f_\mathrm{te}(\vec r,t)=-(3\lambda+2\mu)\alpha \vec \nabla \theta (\vec r,t) \end{align} acts on the mechanical modes of the structure when a temperature gradient $\vec\nabla \theta (\vec r,t)$ is present. In the scalar representation of the mechanical dynamics, we therefore have to add a thermoelastic force $f_\mathrm{te}(t)$ proportional to the material parameters $\lambda$, $\mu$ and $\alpha$, as well as an overlap integral of the mechanical mode's displacement pattern and the temperature gradient $\vec \nabla \theta (\vec r,t)$. 
Assuming that the temperature gradients are predominantly driven by the absorption of laser light in the resonator, one can express the scalar photothermoelastic force as \begin{align} f_\mathrm{pte}(t)= \chi_\mathrm{pte}(t)\ast\kappa_\mathrm{abs}\, a^\dagger(t) a(t), \end{align} where we have absorbed the spatial overlap integrals between the mechanical mode and (the gradient of) the thermal modes, as well as between the thermal modes and the spatial pattern of light absorption, into the magnitude of the function $\chi_\mathrm{pte}(t)$. The temporal dynamics of the adjustment of the relevant temperature gradients to a changing amount of light absorption is represented by the time-dependence of $\chi_\mathrm{pte}(t)$ (``$\ast$'' denotes a convolution). Note that while this formulation accounts for the quantum fluctuations of the intracavity field $a(t)$, the statistical nature of photon absorption events is neglected. This is justified considering that the quantum fluctuations of optical heat deposition (``photothermoelastic shot noise'') have a much smaller effect on the mechanical mode than the direct fluctuations of the radiation pressure term $2\ensuremath{g_0}\, a^\dagger a$. \emph{Dynamic photothermorefractive frequency shift.} A temperature elevation $\theta(\vec r,t)$ within the optical mode volume furthermore changes the refractive index, and therefore the optical resonance frequency. In analogy to the description in the previous section, we are using a simple scalar description of the form \begin{align} \Delta \omega_\mathrm{ptr}(t)= \chi_\mathrm{ptr}(t)\ast\kappa_\mathrm{abs} a^\dagger(t) a(t) \end{align} for this frequency shift, where the response function $\chi_\mathrm{ptr}(t)$ accommodates spatial overlap integrals of the light absorption pattern $v(\vec r)$ and the thermal modes as well as the temporal dynamics of the latter, and, in addition, the spatial sampling of the induced refractive index changes \begin{align} \Delta n(\vec r,t)=\frac{\mathrm{d}n}{\mathrm{d}T} \theta(\vec r,t) \end{align} by the optical mode. \emph{Thermorefractive noise.} The local temperature elevation $\theta(\vec r, t)$ also undergoes {thermal} fluctuations\textemdash independent of the presence of light. Within a volume $V$, they amount to squared fluctuations of \cite{Landau1980B} \begin{align} \left\langle \theta(\vec r, t)^2 \right\rangle_{V}=\frac{k_\mathrm{B} T_0^2}{c_p \rho V}. \end{align} The spatial distribution of these fluctuations can be calculated using a Langevin ansatz \cite{Braginsky1999B}, by adding a fluctuational source term to the heat diffusion equation (\ref{e:thermaldiffusion}). Predominantly via the thermorefractive effect ($\mathrm{d}n/\mathrm{d}T\neq 0$), the resulting temperature fluctuations again induce resonance frequency fluctuations $\delta \omega_\mathrm{tr}(t)$. Their temporal correlation function (or, equivalently, power spectral density) has been estimated for simple whispering-gallery mode resonator geometries \cite{ Gorodetsky2004B, Schliesser2008bB, Anetsberger2010B}.
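For orientation only (this explicit relation is not used in the fits, and all spatial overlap factors are suppressed), the induced frequency fluctuations can be pictured as
\begin{align}
\delta \omega_\mathrm{tr}(t)\approx -\frac{\omega_\mathrm{c}}{n}\,\frac{\mathrm{d}n}{\mathrm{d}T}\,\bar\theta(t),
\end{align}
where $\bar\theta(t)$ denotes the fluctuating temperature averaged over the optical mode volume and $n$ the refractive index.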
Taking these additional three effects into account, the equations of motion can be written as \begin{align} \dot\ensuremath{\mhat a}(t)&=\left(i (\Delta-\Delta\omega_\mathrm{ptr}(t)-\delta\omega_\mathrm{tr}(t))-\frac{\kappa}{2}\right) \ensuremath{\mhat a}(t)- i \ensuremath{g_0} \ensuremath{\mhat q}(t) \ensuremath{\mhat a}(t) + \nonumber\\ \qquad &+ \sqrt{\ensuremath{\kappa_\mathrm{ex}}} (\ensuremath{\bar s_\mathrm{in}}+\ensuremath{\delta \mhat s_\mathrm{in}}(t))+{\sqrt{\ensuremath{\kappa_0}}}{\ensuremath{\delta \mhat s_\mathrm{cav}}(t)}\\ \dot\ensuremath{\mhat q}(t)&=\ensuremath{\Omega_\mathrm{m}} \ensuremath{\mhat p}(t) \\ \dot\ensuremath{\mhat p}(t)&=- \ensuremath{\Omega_\mathrm{m}} \ensuremath{\mhat q}(t)- 2 \ensuremath{g_0} \ensuremath{\mhat a^\dagger}(t) \ensuremath{\mhat a}(t)-{\ensuremath{\Gamma_\mathrm{m}}} \ensuremath{\mhat p}(t) +\ensuremath{\delta\! \mhat f_\mathrm{th}}(t)+ f_\mathrm{pte}(t) . \end{align} \subsection{Linearized model} A large coherent field sent to the optomechanical system induces a relatively large classical intracavity field $\ensuremath{\bar a}$, and induces a displacement of the mechanical mode by $\bar q$. If the system is stable around this steady-state, the dynamics of the small fluctuations around this equilibrium are described by a set of equations obtained via the substitution $\ensuremath{\mhat a}(t)=\ensuremath{\bar a}+\ensuremath{\delta \mhat a}(t)$ and $\mhat q(t)=\bar q+\ensuremath{\delta \mhat q}(t)$, and retaining only first-order terms in the fluctuations. This yields \begin{align} \dot\ensuremath{\delta \mhat a}(t)&=\left(+i \bar\Delta-\frac{\kappa}{2}\right) \ensuremath{\delta \mhat a}(t) -i \kappa_\mathrm{abs} \ensuremath{\bar a} \chi_\mathrm{ptr}(t) \ast (\ensuremath{\bar a}^* \ensuremath{\delta \mhat a}(t)+ \ensuremath{\bar a} \ensuremath{\delta \mhat a^{\mdagger}}(t)) - i \ensuremath{g_0} \ensuremath{\bar a} \ensuremath{\delta \mhat q}(t) - \nonumber\\ &\qquad - i \ensuremath{\bar a} \delta \omega_\mathrm{tr}(t) +\sqrt{\ensuremath{\kappa_\mathrm{ex}}} \ensuremath{\delta \mhat s_\mathrm{in}}(t)+{\sqrt{\ensuremath{\kappa_0}}}{\ensuremath{\delta \mhat s_\mathrm{cav}}(t)} \end{align} \begin{align} \ensuremath{\Omega_\mathrm{m}}^{-1}\left[ \delta \ddot{ \ensuremath{\mhat q}}(t)+{\ensuremath{\Gamma_\mathrm{m}}} \, \delta \dot{\ensuremath{\mhat q}}(t)+ \ensuremath{\Omega_\mathrm{m}}^2\, \ensuremath{\delta \mhat q}(t)\right]&=- 2 \ensuremath{g_0} (\ensuremath{\bar a}\ensuremath{\delta \mhat a^{\mdagger}}(t)+\ensuremath{\bar a}^* \ensuremath{\delta \mhat a}(t)) + \nonumber\\ &\qquad +\ensuremath{\delta\! \mhat f_\mathrm{th}}(t)+\chi_\mathrm{pte}(t)\ast(\ensuremath{\bar a}\ensuremath{\delta \mhat a^{\mdagger}}(t)+\ensuremath{\bar a}^* \ensuremath{\delta \mhat a}(t)) \end{align} with $\bar \Delta=\ensuremath{\omega_\mathrm{l}}-\left(\ensuremath{\omega_\mathrm{c}}+\ensuremath{g_0}\bar q+ \kappa_\mathrm{abs} |\ensuremath{\bar a}|^2 (\chi_\mathrm{ptr}(t)\ast 1)\right)$. 
This set of equations is best solved in the Fourier domain, yielding \begin{align} \left(-i(\bar\Delta+\Omega)+\kappa/2\right)\ensuremath{\delta \mhat a}(\ensuremath{\Omega})&= -i\kappa_\mathrm{abs} \ensuremath{\bar a} \chi_\mathrm{ptr}(\ensuremath{\Omega}) (\ensuremath{\bar a}^* \ensuremath{\delta \mhat a}(\ensuremath{\Omega})+ \ensuremath{\bar a} \ensuremath{\delta \mhat a^{\mdagger}}(\ensuremath{\Omega})) - i \ensuremath{g_0} \ensuremath{\bar a} \ensuremath{\delta \mhat q}(\ensuremath{\Omega}) + \nonumber\\ &\qquad -i \ensuremath{\bar a} \delta \omega_\mathrm{tr}(\ensuremath{\Omega}) + \sqrt{\ensuremath{\kappa_\mathrm{ex}}} \ensuremath{\delta \mhat s_\mathrm{in}}(\ensuremath{\Omega})+{\sqrt{\ensuremath{\kappa_0}}}{\ensuremath{\delta \mhat s_\mathrm{cav}}(\ensuremath{\Omega})} \\ \frac{- \ensuremath{\Omega}^2- i \ensuremath{\Omega} \ensuremath{\Gamma_\mathrm{m}} + \ensuremath{\Omega_\mathrm{m}}^2}{\ensuremath{\Omega_\mathrm{m}}} \ensuremath{\delta \mhat q}(\ensuremath{\Omega})&=\left(-2 \ensuremath{g_0} +\chi_\mathrm{pte}(\ensuremath{\Omega}) \right)(\ensuremath{\bar a}\ensuremath{\delta \mhat a^{\mdagger}}(\ensuremath{\Omega})+\ensuremath{\bar a}^* \ensuremath{\delta \mhat a}(\ensuremath{\Omega})) + \ensuremath{\delta\! \mhat f_\mathrm{th}}(\ensuremath{\Omega}). \end{align} For simplicity, we refer to the Fourier transform of the respective functions by simply writing them with a frequency ($\Omega$) argument. Note that $\ensuremath{\delta \mhat a^{\mdagger}}(\ensuremath{\Omega})$ denotes the Fourier transform of $\ensuremath{\delta \mhat a^{\mdagger}}(t)$, equal to $[\ensuremath{\delta \mhat a}(-\ensuremath{\Omega})]^\dagger$; and that $[\ensuremath{\delta \mhat q}(-\ensuremath{\Omega})]^\dagger=\ensuremath{\delta \mhat q}(\ensuremath{\Omega})$ for the Hermitian operator $\ensuremath{\delta \mhat q}(t)$. To further simplify the problem, we approximate the response functions of the photothermal effects by a single-pole, low-pass response, assuming implicitly that the relevant temperature (gradient) distributions adjust themselves only with a certain delay to a change in the absorbed optical power. 
Assuming that this delay is larger than the relevant oscillation periods considered here, one can approximate \begin{align} \chi_\mathrm{ptr}(\Omega)&\approx \frac{g_\mathrm{ptr}}{\kappa_\mathrm{abs}} \frac{\ensuremath{\Omega_\mathrm{m}}}{-i\ensuremath{\Omega}}\\ \chi_\mathrm{pte}(\Omega)&\approx 2 g_\mathrm{pte}\, \frac{\ensuremath{\Omega_\mathrm{m}}}{-i\ensuremath{\Omega}} \end{align} and one finally obtains \begin{align} \left(-i(\bar\Delta+\Omega)+\kappa/2\right)\ensuremath{\delta \mhat a}(\ensuremath{\Omega})&= \ensuremath{\bar a} {g_\mathrm{ptr}} \frac{\ensuremath{\Omega_\mathrm{m}}}{\ensuremath{\Omega}} (\ensuremath{\bar a}^* \ensuremath{\delta \mhat a}(\ensuremath{\Omega})+ \ensuremath{\bar a} \ensuremath{\delta \mhat a^{\mdagger}}(\ensuremath{\Omega})) - i \ensuremath{g_0} \ensuremath{\bar a} \ensuremath{\delta \mhat q}(\ensuremath{\Omega}) + \nonumber\\ &\qquad -i \ensuremath{\bar a} \delta \omega_\mathrm{tr}(\ensuremath{\Omega}) + \sqrt{\ensuremath{\kappa_\mathrm{ex}}} \ensuremath{\delta \mhat s_\mathrm{in}}(\ensuremath{\Omega})+{\sqrt{\ensuremath{\kappa_0}}}{\ensuremath{\delta \mhat s_\mathrm{cav}}(\ensuremath{\Omega})} \label{e:omsimple} \\ \frac{- \ensuremath{\Omega}^2- i \ensuremath{\Omega} \ensuremath{\Gamma_\mathrm{m}} + \ensuremath{\Omega_\mathrm{m}}^2}{\ensuremath{\Omega_\mathrm{m}}} \ensuremath{\delta \mhat q}(\ensuremath{\Omega})&=-2\left(\ensuremath{g_0} + i g_\mathrm{pte}\frac{\ensuremath{\Omega_\mathrm{m}}}{\ensuremath{\Omega}}\right)(\ensuremath{\bar a}\ensuremath{\delta \mhat a^{\mdagger}}(\ensuremath{\Omega})+\ensuremath{\bar a}^* \ensuremath{\delta \mhat a}(\ensuremath{\Omega})) + \ensuremath{\delta\! \mhat f_\mathrm{th}}(\ensuremath{\Omega}). \end{align} These equations are used to calculate the coherent response and fluctuation spectra (cf. sections \ref{NoiseCovariances}, \ref{CoherentDynamics}). \subsection{Homodyne detection \label{HomodyneDetection}} The optomechanical experiment is embedded into one arm of a balanced homodyne interferometer. At the initial beamsplitter, the laser field (and fluctuations in the fiber mode) are split up into a `local oscillator' arm and the arm that serves as input to the cavity: \begin{align} s_\mathrm{in}&=\sqrt{1-r}s_\mathrm{las}-\sqrt{r}s_\mathrm{bs}\\ s_\mathrm{lo}&=\sqrt{r}s_\mathrm{las}+\sqrt{1-r}s_\mathrm{bs}, \end{align} evidently valid both in the time and in the frequency domain. Here, we also take into account the vacuum fluctuations $\delta s_\mathrm{bs}$ entering the beamsplitter at the unoccupied port, \begin{align} s_\mathrm{las}&= \bar s_\mathrm{las}+\delta s_\mathrm{las}\\ s_\mathrm{bs}&=\delta s_\mathrm{bs}. \end{align} The field $s_\mathrm{in}$ drives both the mean field $\ensuremath{\bar a}$ and the field fluctuations within the cavity, as described in the previous section. The intracavity field $a$, in turn, couples back into the single-mode fiber taper, and the usual input-output formalism gives the field $s_\mathrm{out}$ at the output of the cavity via the relation \begin{align} s_\mathrm{in}-s_\mathrm{out}=\sqrt{\ensuremath{\kappa_\mathrm{ex}}} a. \end{align} We furthermore take into account that only a fraction $\eta_\mathrm{cryo}$ of the light power at the output of the cavity is measured as `signal' in the homodyne detector due to optical losses, e.g.\ in the cryostat.
For $\eta_\mathrm{cryo}< 1$, we again have to account for quantum vacuum $\delta s_\mathrm{cryo}$ that enters the optical mode, \begin{align} s_\mathrm{sig}&=\sqrt{ \eta_\mathrm{cryo}}s_\mathrm{out}+\sqrt{1- \eta_\mathrm{cryo}}s_\mathrm{cryo}\\ s_\mathrm{cryo}&=\delta s_\mathrm{cryo}. \end{align} Finally, in the homodyne receiver, the differential signal \begin{align} \delta h&= \bar s_\mathrm{lo} e^{+i \phi_\mathrm{lo}} \delta s_\mathrm{sig}^\dagger + \bar s_\mathrm{lo}^* e^{-i \phi_\mathrm{lo}} \delta s_\mathrm{sig} +\bar s_\mathrm{sig} e^{-i \phi_\mathrm{lo}} \delta s_\mathrm{lo}^\dagger+\bar s_\mathrm{sig}^* e^{+i \phi_\mathrm{lo}} \delta s_\mathrm{lo} \label{e:homo} \end{align} is measured. The fluctuational terms $\delta h$ and $\ensuremath{\delta \mhat q}$ of interest can then be expressed as a linear function of the fluctuations driving the system, \begin{align} \begin{pmatrix} \delta h \\ \ensuremath{\delta \mhat q} \end{pmatrix} &=M\cdot \begin{pmatrix} \delta s_\mathrm{las}& \delta s_\mathrm{las}^\dagger& \delta s_\mathrm{bs}& \delta s_\mathrm{bs}^\dagger& \ensuremath{\delta \mhat s_\mathrm{cav}}& \ensuremath{\delta \mhat s_\mathrm{cav}^{\mdagger}}& \delta s_\mathrm{cryo}& \delta s_\mathrm{cryo}^\dagger& \delta \omega_\mathrm{tr}& \delta\! f_\mathrm{th} \end{pmatrix}^T. \label{e:aux1} \end{align} Here, the coefficients of the matrix $M$ follow directly from the relations (\ref{e:omsimple})-(\ref{e:homo}). \subsection{Calculation of noise covariances \label{NoiseCovariances}} \label{ss:covariances} We assume that all input noise terms of eq.\ (\ref{e:aux1}) can be described by zero-mean Gaussian noise operators whose variances are known. Representing the covariances between two noise operators $x$ and $y$ as a symmetrized spectrum $\bar S_{xy}(\Omega)$ defined according to \begin{align} \frac{1}{2} \left \langle \left \{ x(\ensuremath{\Omega}),y(\ensuremath{\Omega}') \right\} \right\rangle=2\pi \bar S_{x y}(\ensuremath{\Omega})\,\delta(\ensuremath{\Omega}+\ensuremath{\Omega}'), \end{align} the only non-zero covariances are characterized by the spectra \begin{align} \bar S_{ \delta s_\mathrm{las}^\dagger \delta s_\mathrm{las}}(\ensuremath{\Omega})&= \bar S_{\ensuremath{\delta \mhat s_\mathrm{cav}^{\mdagger}} \ensuremath{\delta \mhat s_\mathrm{cav}}}(\ensuremath{\Omega})= \bar S_{ \delta s_\mathrm{bs}^\dagger \delta s_\mathrm{bs}}(\ensuremath{\Omega})= \bar S_{ \delta s_\mathrm{cryo}^\dagger \delta s_\mathrm{cryo}}(\ensuremath{\Omega})=\frac{1}{2} \label{e:qn} \end{align} for the optical quantum noise entering the system, \begin{align} \bar S_{ \ensuremath{\delta\! \mhat f_\mathrm{th}} \ensuremath{\delta\! \mhat f_\mathrm{th}}}(\ensuremath{\Omega})&\approx 4 \bar n_\mathrm{m} \ensuremath{\Gamma_\mathrm{m}} \label{e:tn} \end{align} {for the thermal Langevin force, where we have assumed $ \bar n_\mathrm{m}\approx k_\mathrm{B} T/\hbar \ensuremath{\Omega_\mathrm{m}}\gg 1$, and} \begin{align} \bar S_{ \delta \omega_ \mathrm{tr} \delta \omega_\mathrm{tr}}(\ensuremath{\Omega})&=\bar S_\mathrm{trn}(\ensuremath{\Omega}), \label{e:trn} \end{align} for the thermorefractive noise \cite{Gorodetsky2004B}, whose contribution we found to be negligible in the data presented in this manuscript. 
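For orientation, the following short Python sketch (illustrative only, and not part of the data analysis) evaluates the mean thermal occupation $\bar n_\mathrm{m}\approx k_\mathrm{B} T/\hbar\Omega_\mathrm{m}$ entering eq.~(\ref{e:tn}), using the mechanical frequency and cryostat base temperature quoted in this appendix; it confirms that the assumption $\bar n_\mathrm{m}\gg 1$ is well satisfied.
\begin{verbatim}
# Minimal numerical sketch: thermal occupation and thermal-force spectrum.
# Parameter values are taken from this appendix; the damping rate is the
# low-temperature linewidth quoted in the sample-characterization section.
import math

hbar = 1.054571817e-34            # J s
kB   = 1.380649e-23               # J / K

Omega_m = 2 * math.pi * 78.2e6    # mechanical resonance frequency (rad/s)
Gamma_m = 2 * math.pi * 3.6e3     # mechanical damping rate (rad/s)
T       = 0.65                    # cryostat base temperature (K)

n_m  = kB * T / (hbar * Omega_m)  # mean thermal occupation, ~ 1.7e2 >> 1
S_ff = 4 * n_m * Gamma_m          # thermal-force spectrum 4*n_m*Gamma_m (rate units of the text)

print(f"n_m  ~ {n_m:.0f}")
print(f"S_ff ~ {S_ff:.2e} s^-1")
\end{verbatim}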
By the linearity of equation (\ref{e:aux1}), it follows that the covariance matrix $N_\mathrm{out}$ of the output noise operators is then related to the input covariance matrix $N_\mathrm{in}$ by the simple expression \begin{align} N_\mathrm{out}&=M(+\ensuremath{\Omega})\cdot N_\mathrm{in} \cdot M(-\ensuremath{\Omega})^T. \label{e:noise} \end{align} \subsection{Coherent dynamics of the system \label{CoherentDynamics}} In order to calculate the coherent response of the system to the probing by a phase-modulated input, eq.\ (\ref{e:aux1}) can be used. By assuming a sufficiently narrow detection bandwidth and/or sufficiently large phase modulation of depth $\delta \varphi$, one can set \begin{align} \ensuremath{\delta \mhat s_\mathrm{cav}^{\mdagger}}&\approx\ensuremath{\delta \mhat s_\mathrm{cav}}\approx \delta s_\mathrm{bs}^\dagger \approx \delta s_\mathrm{bs}\approx\delta s_\mathrm{cryo}^\dagger \approx\delta s_\mathrm{cryo}\approx \ensuremath{\delta\! \mhat f_\mathrm{th}}\approx \delta \omega_{tr}\approx 0 \label{e:coh1}\\ \delta s_\mathrm{las}&= i \bar s_\mathrm{las} \delta \varphi, \label{e:coh2} \end{align} and calculate the frequency-dependent transfer function from a phase modulation $\delta \varphi$ to the homodyne signal $\delta h$. This coherent response is obviously directly measured in the sideband sweeps that we routinely perform (cf.\ section \ref{ss:setup}). Moreover, by multiplication of the (complex) spectrum of the excitation pulse with this transfer function, the response of the homodyne signal in the time domain can be numerically determined via the inverse Fourier transform. \subsection{Analysis of the coherent response} The coherent response spectra are important to accurately extract the different parameters of the optomechanical interaction as well as to calibrate the mechanical noise spectra. A typical coherent response is shown in Fig. \ref{f:CoherentResponse}a. The Lorentzian peak centered around $\Omega_\mathrm{mod}=140$~MHz results from the absorption of the upper modulation sideband by the cavity and reflects the optical response of the system. The maximum of the homodyne signal is obtained when the modulation sideband is resonant with the cavity. Hence, the center frequency and width of this peak correspond to the detuning $|\Delta|$ and the linewidth $\kappa$ of the cavity, respectively. The sharp feature at $\Omega_\mathrm{mod} = \Omega_\mathrm{m}$ is the manifestation of Optomechanically Induced Transparency \cite{Weis2010B}; an interference effect due to the resonant excitation of the mechanical mode. For weak coupling power and/or large detuning, the dynamics of the mechanical mode is hardly affected by the optomechanical interaction and the width of the dispersive feature is given by the mechanical linewidth $\ensuremath{\Gamma_\mathrm{m}}$. For larger laser power, the width of the OMIT window increases, reflecting the width of the damped mode $\Gamma_\mathrm{m} + \Omega_c^2 \kappa / \left( \kappa^2 + 4 (\Delta + \Omega_\mathrm{m})^2 \right)$. Hence, the fit of the coherent response allows to extract the coupling rate $\Omega_c$ and the corresponding intracavity field $\bar a$. We introduce $\ensuremath{\bar a}_0\equiv\ensuremath{\bar a}/\frac{\kappa/2}{-i \ensuremath{\bar \Delta}+\kappa/2}$ to obtain a parameter independent of detuning. The model of eq.\ (\ref{e:aux1}), assuming pure radiation pressure backaction, fits the measurements well (cf.\ Fig.~\ref{f:CoherentResponse}a). 
However, as can be seen in Fig.~\ref{f:CoherentResponse}b, a small systematic deviation appears for high coupling power. This systematic effect is very well reproduced by the model including the photothermoelastic effect. Finally, Figure \ref{f:CoherentResponse}c shows a series of coherent response spectra taken for decreasing laser detunings, and a laser power of 0.6 mW. The observed increase of the amplitudes for small detuning can be fitted accurately by introducing the photothermorefractive effect in the model (red lines). The parameter $g_\mathrm{ptr}$ introduced here is dependent on detuning since the thermorefractive coefficient $\frac{\mathrm{d}n}{\mathrm{d}T}$ depends on temperature. For example, we have extracted from the fits to the full detuning series of Fig.\ 2 of the main manuscript $\ensuremath{\Omega_\mathrm{m}}/2\pi=78.2\unit{MHz}$, $\kappa/2\pi=6.0\unit{MHz}$, $\ensuremath{\bar a}_0=14.2\cdot10^3$ (with $\ensuremath{g_0}/2\pi=3.4\unit{kHz}$), $g_\mathrm{pte}/2\pi=-122\unit{Hz}$ and $g_\mathrm{ptr}/2\pi=0.32\unit{Hz}$ (at the lower mechanical sideband). {\small \begin{figure} \caption{\textbf{Fitting the model to the coherent response.} (a) A coherent response spectrum taken with a power of 0.56~mW, at T=0.65~K. (b) Spectrum for 1.4~mW, at T=0.8~K with fits including the photothermoelastic effect (red line) and without (yellow dashed). (c) Spectra for 0.6~mW at T=0.75~K, for various detunings. The photothermorefractive effect is included in the fitted model and accounts for the increased amplitude for small detuning. } \label{f:CoherentResponse} \end{figure}} \subsection{Extraction of the decoherence rate} The fits of the coherent response spectra determine all parameters characterizing the optomechanical interaction, and therefore the transduction of mechanical displacement fluctuations to optical fluctuations. The spectral shape of the noise originating from the Langevin force is thus fixed, so that the amplitude of this contribution can be fitted using the model of eq.\ (\ref{e:noise}). As we fit the spectral density of the actually measured \emph{voltage} signal, these extracted amplitudes depend on the gain of the subsequent detection chain, which is not precisely known. This ambiguity is removed by a calibration technique \cite{Riviere2011B} based on a reference phase modulation, which allows to relate noise spectra taken under arbitrarily different acquisition conditions. In this manner, we link the low-temperature noise spectra to a measurement at a higher cryostat temperature (4 K), in which a high helium gas pressure, and low optical power ($\sim100\,\mathrm{nW}$) ensure the thermalization of the sample, so that the Langevin force is known to an estimated accuracy of 3\%. In this high-temperature measurement, a known phase modulation is applied, whose amplitude can be compared with the coherent response spectra acquired with every low-temperature measurement. Assuming that no drift occurs in the phase modulation chain, this method allows to absolutely calibrate the Langevin force\textemdash and therefore the mechanical decoherence rate\textemdash in the low-temperature measurements. Importantly, this derivation reveals possible changes of the decoherence rate both due to a changed temperature (bath occupation $\bar n_\mathrm{m}$) and mechanical dissipation rate $\ensuremath{\Gamma_\mathrm{m}}$. 
\subsection{Error analysis \label{ErrorAnalysis}} We use the large number of traces acquired during a detuning sweep to estimate an error on each of the four parameters assumed to be independent of the detuning ($\Omega_\mathrm{m}$, $\kappa$, $\bar{a}_\mathrm{0}$, $g_\mathrm{pte}$). This is achieved by successively letting each of these parameters vary with the detuning, while the three others are still fitted globally. The error on each parameter $X$ is obtained by calculating the standard deviation $ \Delta X = \sqrt{\langle(X-X_\mathrm{0})^2\rangle}, $ where $X_\mathrm{0}$ is the value obtained when all four parameters are kept constant over the whole detuning range. Advantageously, this procedure also reflects systematic errors due to drifts of the experimental settings over the detuning series, as well as physical effects that are not captured by the model. The following uncertainties were obtained with this method for the run presented in Fig. 2 of the main manuscript: $ \Omega_\mathrm{m}/2\pi=(78.2260\pm0.0007)\unit{MHz}$, $\kappa/2\pi =(6.04\pm0.08)\unit{MHz}$, $\ensuremath{\bar a}_{0} = (14.2\pm0.2)\times 10^3$, $g_\mathrm{pte}/2\pi=(122\pm52)\unit{Hz}$. These errors, by affecting the shape of the expected noise spectra, also translate into an error on the fitted decoherence rate and occupation. A Monte Carlo approach is used to assess the final error on $\gamma$ and $\bar{n}$. The fit of the noise spectrum is repeated with a set of randomly drawn parameters, assuming an independent normal distribution for each of the previous parameters. Importantly, the resulting uncertainty depends on the particular detuning point. On the lower optomechanical sideband, the standard deviation of the results is given by $\left(\Delta \gamma/\gamma\right)_\mathrm{model} = 6 \,\%$ and $\left(\Delta \bar n/\bar n\right)_\mathrm{model} =4\,\%$. Another source of uncertainty for these two parameters is the independent calibration of the optomechanical transduction, which we estimate to be on the order of $\Delta_\mathrm{calib} = 3 \%$ from the scatter between calibration measurements taken at different probing powers. Finally, as discussed in section \ref{GAWBS}, an uncertainty $\Delta_\mathrm{GAWBS}$ is quadratically added to account for the possible presence of GAWBS in the optical fibers before the cavity. The total error for this example is given by \begin{align*} \frac{\Delta \gamma}{\gamma} &= \sqrt{\left(\frac{\Delta \gamma}{\gamma}\right)^2_\mathrm{model} + \Delta_\mathrm{calib}^2 + \Delta_\mathrm{GAWBS}^2} = 10\,\%\\ \frac{\Delta \bar n}{\bar n} &= \sqrt{\left(\frac{\Delta \bar n}{\bar n}\right)^2_\mathrm{model} + \Delta_\mathrm{calib}^2 + \Delta_\mathrm{GAWBS}^2} = 7\,\%. \end{align*}
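As a consistency check, the quadrature sums quoted above can be reproduced directly from the individual contributions; the following short Python sketch is purely illustrative (it is not part of the analysis pipeline) and uses only the percentages given in this section together with the GAWBS contributions of $7\,\%$ and $5\,\%$ estimated in section \ref{GAWBS}.
\begin{verbatim}
# Illustrative cross-check of the quoted error budget (lowest-occupancy run):
# quadrature sum of the model, calibration and GAWBS uncertainties.
import math

def quad_sum(*contributions):
    return math.sqrt(sum(c**2 for c in contributions))

rel_err_gamma = quad_sum(0.06, 0.03, 0.07)  # model, calibration, GAWBS
rel_err_nbar  = quad_sum(0.04, 0.03, 0.05)  # model, calibration, GAWBS

print(f"Delta gamma / gamma ~ {rel_err_gamma:.0%}")  # ~10 %
print(f"Delta nbar / nbar   ~ {rel_err_nbar:.0%}")   # ~7 %
\end{verbatim}
\end{document}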
\begin{document} \title[Bohr's phenomenon on a regular condensator in the complex plane]{Bohr's phenomenon on a regular condensator in the complex plane} \date{\today} \author[P. Lassère]{Lassère Patrice} \email{[email protected]} \address{Lassère Patrice: Institut de Mathématiques, UMR CNRS 5580, Universit\'e Paul Sabatier, 118 route de Narbonne, 31062 TOULOUSE, FRANCE} \author[E. Mazzilli]{Mazzilli Emmanuel} \email{[email protected]} \address{Université Lille 1, 59655 Cedex, VILLENEUVE D'ASCQ, FRANCE.} \keywords{Functions of a complex variable, Inequalities, Schauder basis.} \subjclass{Primary 30B10, 30A10.} \begin{abstract} We prove the following generalisation of Bohr's theorem: let $K\subset\mathbb C$ be a continuum, $(F_{K,n})_{n\geq 0}$ its Faber polynomials, and $\Omega_R$ the level sets of the Green function of $\bar{\mathbb C}\setminus K$ with singularity at infinity; then there exists $R_0$ such that for any $f=\sum_n a_n F_{K,n}\in\mathscr O(\Omega_{R_0})$, $f( \Omega_{R_0})\subset D(0,1)$ implies $\sum_n\left\vert a_n \right\vert\cdot\Vert F_{K,n}\Vert_K<1$. \end{abstract} \maketitle \section{Introduction} The well-known theorem of Bohr \cite{bohr} states that for any function $f(z)=\sum_{n\geq 0}\,a_n z^n$ holomorphic on the unit disc $\mathbb D$: $$\left(\ \left \vert\sum_{n\geq 0}\, a_n z^n\right\vert<1,\ \forall\,z\in\mathbb D\ \right)\ \implies\ \left( \sum_{n\geq 0}\, \left\vert a_n z^n\right\vert<1,\ \forall\,z\in D(0,1/3)\ \right)$$ and the constant $1/3$ is optimal. Our goal in this work is to study Bohr's theorem in the following context. Let $K\subset\mathbb C$ be a compact set in the complex plane. What are the open sets $\Omega$ containing $K$ such that the space $\mathscr O(\Omega)$ admits a topological basis\footnote{For all $f\in\mathscr O(\Omega)$ there exists a unique sequence $(a_n)_n$ of complex numbers such that $f=\sum_{n\geq 0} a_n\varphi_n$ in the usual topology of compact convergence of $\mathscr O(\Omega)$.} $(\varphi_n)_n$ which verifies, for every holomorphic function $f=\sum_{n\geq 0} a_n\varphi_n\in\mathscr O(\Omega)$: $$\left(\ \left \vert\sum_{n\geq 0}\, a_n \varphi_n(z)\right\vert<1,\ \forall\,z\in\Omega\ \right)\ \implies\ \left( \sum_{n\geq 0}\, \left\vert a_n \right\vert\cdot\Vert\varphi_n\Vert_K<1\ \right)\ ?$$ In this case we say that the family $(K, \Omega, (\varphi_n)_{n\geq 0})$ satisfies \textbf{Bohr's property} or that \textbf{Bohr's phenomenon} is observed. \noindent \textbf{Some examples: } $\bullet$ The family $(\overline{D(0,1/3)}, D(0,1), (z^n)_{n\geq 0})$ satisfies Bohr's phenomenon (this is Bohr's classic theorem). \noindent $\bullet$ Note that the family $(\overline{D(0,1/3)}, D(0,1), ((3z)^n)_{n\geq 0})$ also satisfies Bohr's phenomenon. This example will play a special role in what follows, since $((3z)^n)_{n\geq 0}$ is the Faber polynomial basis associated with the compact set $\overline{D(0,1/3)}$. \noindent $\bullet$ On the other hand, the family $(\overline{D(0,2/3)}, D(0,1), (z^n)_{n\geq 0})$ does not satisfy Bohr's phenomenon (due to the optimality of the constant $1/3$ in Bohr's theorem). As a starting point, for a given compact set $K$ we must choose a ``good'' open neighborhood $\Omega$ that admits a ``nice'' basis $(\varphi_n)_n$ for $\mathscr O(\Omega)$. ``Nice'' here means more than good local estimates for $\varphi_n$ on $\Omega$, since, unlike other well-known theorems for power series on the disc \cite{lasserenguyen}, Bohr's theorem cannot be extended to every basis.
For example, as pointed out by Aizenberg \cite{AAD}, it is necessary that one of the elements of the basis be a constant function. We want to focus on the following situation: \begin{defn} Let $K$ be a compact set in $\mathbb C$ containing at least two points; $K$ is a continuum if $\overline{\mathbb C}\setminus K$ is simply connected. \end{defn} When $K$ is a continuum, it can be associated with the sequence $(F_{K,n})_n$ of its Faber polynomials. In more detail, let $\Phi\ :\ \overline{\mathbb C}\setminus K\to \overline{\mathbb C}\setminus{\overline {\mathbb D}}$ be the unique conformal mapping that verifies $$\Phi(\infty)=\infty,\quad \Phi'(\infty)=\gamma>0.$$ Thus $\Phi$ admits a Laurent expansion at infinity of the form $$\Phi(z)=\gamma z+\gamma_0+\dfrac{\gamma_1}{z}+\dots+\dfrac{\gamma_k}{z^k}+\dots$$ and then, for $n\in\mathbb N$: $$\begin{aligned}\Phi^n(z)&=\left( \gamma z+\gamma_0+\dfrac{\gamma_1}{z}+\dots+\dfrac{\gamma_k}{z^k}+\dots \right)^n\\ &=\underbrace{\gamma^n z^n+a_{n-1}^{(n)}z^{n-1}+\dots+a_{1}^{(n)}z+a_{0}^{(n)}}_{ F_{K,n}(z)}+\underbrace{ \dfrac{b_{1}^{(n)}}{z}+\dfrac{b_{2}^{(n)}}{z^2}+\dots+\dfrac{b_{k}^{(n)}}{z^k}+\dots}_{ E_{K,n}(z)} \end{aligned}$$ $F_{K,n}$ is the polynomial part of the Laurent expansion at infinity of $\Phi^n$. These polynomials form a common basis for the spaces $\mathscr O(K),\ \mathscr O(\Omega_R)$, $(R>1)$, where\footnote{\samepage $\Omega_R$ is also the level set of the Siciak-Zaharjuta extremal function $\Phi_K(z):=\sup\{\vert p(z)\vert^{1/{\text{deg}}(p)}\}$ where the supremum is taken over all complex polynomials $p$ such that $\Vert p\Vert_K\leq 1$. $\Phi_K$ is also related to the classical Green function for $\bar{\mathbb C}\setminus K$ with pole at infinity $g_K\ :\ \mathbb C\setminus K\to ]0,+\infty[$ by the equality $\log\Phi_K=g_K$ on $\mathbb C\setminus K$. Recall that $g_K$ is the unique positive harmonic function on $\mathbb C\setminus K$ such that $\lim_{z\to\infty} \left( g_K(z)-\log\vert z\vert\right) $ exists and is finite and $\lim_{z\to w}g_K(z)=0,\ \forall\,w\in\partial(\mathbb C\setminus K)$. } $\Omega_R:=\{ z\in\mathbb C\ :\ \vert \Phi(z)\vert<R\}\cup K$. This polynomial basis exhibits remarkable properties (the relevant reference is the work of P.~K.~Suetin \cite{suetin}) similar to those of the Taylor basis $(z^n)_n$ on discs $D(0,R)$. In particular, the level sets $\Omega_R$ are the convergence domains of the series $\sum_{n\geq 0} a_n F_{K,n}$ and for any compact $L\subset \overline{\mathbb C}\setminus K$ we have $$\lim_{n\to\infty} \Vert F_{K,n}\Vert_L^{1/n}= \Vert \Phi\Vert_L. $$ This formula is the one-variable version of a more general formula (see \cite{nguyen}). In this work, we show (Theorem 3.1) that for every continuum $K$ there exists an $R_0>1$ such that for any $R\geq R_0$ the family $(K,\Omega_R, (F_{K,n})_{n\geq 0})$ verifies Bohr's property. We start by studying the case of an elliptic condensator (i.e.\ $K=[-1,1]$), which was considered in a different form by Kaptanoglu and Sadik in an interesting study \cite{kap} that motivated this article (see Remark 2.4). \noindent \textbf{Acknowledgement.} Finally, we thank the Anonymous Referee for useful suggestions that significantly improved the paper. \section{An example: the ``elliptic'' condensator $K=[-1,1]$} Let us examine in this section the particular case where $K:=[-1,1]$.
This is a ``fundamental'' example because this is one of the very few case (see \cite{suetin}, \cite{he} for circular lunes) where the explicit form of the conformal map $\Phi \ :\ \Omega:=\overline{\mathbb{C}}\setminus K\to \{\vert w\vert >1\}$ allows us to obtain a more precise estimation of the Faber polynomials of $K$ (see \cite{suetin}). \noindent Here, $\Phi^{-1}(w)={1\over 2}(w+{w}^{-1})$ is the Zhukovskii function, the Faber polynomials $(F_{K,n})_n$ form a common basis for the spaces $\mathscr O(\Omega_R)$, $(R>1)$ where the boundary $\partial\Omega_R=\Phi^ {-1}(\{\vert w \vert=R\}$ of the level set $\Omega_R$ is given by the equation : $$2z=R e^{i\theta}+R^{-1} e^{-i\theta}.$$ Theses are ellipses with foci $1$ et $-1$ and eccentricity $\varepsilon= \frac{2R}{1+R^2}$. We observe that the polynomials $F_{K,n}$ enjoy in the target coordinates ``$w$'' a much more convenient form for computation than in the source coordinates ``$z$''. Indeed $\Phi$ presents a simple pole at infinity which implies that $\Phi^n+{1/ \Phi^n}$ et $\Phi^n$ have the same principal part. We observe also that $\Phi(z)=z+\sqrt{z^2-1},$ which implies ${1/\Phi(z)}=z-\sqrt{z^2-1}$. From these last identities we can deduce\footnote{We can also deduce (see \cite{suetin}, pp. 36-37) that if $K=[-1,1]$, then the Faber polynomials are the Tchebyshev polynomials of the first kind (up to a constant $2$ if $n\geq 1$) : $F_{K,0}(z)=T_0(z),\ F_{K,n}(z)=2T_n(z),\ (n\geq 1)$ where $T_n(x)=\cos(n{ \text{arccos}} x)$.} that ${1/\Phi^n}+\Phi^n$ extends as a polynomial on $\Bbb{C}$. This is $F_{K,n}$ and if we write $F_{K,n}$ in the target coordinates ``$w$'', we get : $$F_{K,n}(w)=w^n+ w^{-n}.$$ This important equality will allow us to write any function $f(z)=\sum_n a_n F_{K,n}(z), z\in \Omega_R$, holomorphic on $ \Omega_R$ under the form $$f(z)=f(\Phi^{-1}(w))=\sum_n a_n F_{K,n}((\Phi^{-1}(w))=\sum_n a_n \left( w^n+w^{-n}\right),\quad 1<\vert w\vert <R,$$ and we shall often use this device from now on. Now let us look at Bohr's phenomenon for the elliptic condensator $(K:=[-1,1], \Omega_R, (F_{K,n})_{n\geq 0})$ given that $R>R_0$ is large enough. Then next proposition is, in our particular case, the equivalent version of Caratheodory's inequality. \begin{prop} Let $f(w)=a_0+\sum_{1}^{\infty}a_n(w^n+ w^{-n})\in \mathscr O(\{1<\vert w\vert <R\})$. Suppose that $\texttt{re}({f})>0$, then : $$\vert a_n\vert\leq {2\texttt{re}(a_0)\over R^n- R^{-n}},\quad\forall\,n>0. $$ \end{prop} \noindent\textbf{Proof : } Let $1<r<R$, then for all $n>0$ we have $$\begin{aligned}&a_n r^{-n}&=&{1\over 2\pi}\int_{0}^{2\pi}e^{in\theta}f(r e^{i\theta})d\theta,\\ &\overline a_nr^n&=&{1\over 2\pi}\int_{0}^{2\pi}e^{in\theta}\bar f(r e^{i\theta})d\theta. \end{aligned}$$ which easily gives (remember that $\texttt{re}(f)>0$) : $$\vert a_n\vert \cdot\left(r^n-r^{-n}\right)\leq\left\vert {a_nr^{-n}}+\bar a_nr^n\right\vert\leq {1\over \pi}\int_{0}^{2\pi}\texttt{re}(f(r e^{i\theta}))d\theta=2\texttt{re}({a_0}),$$ to get the expected result, (just let $r$ tend to $R$). $\blacksquare$ \begin{lem} Let $f=a_0+\sum_{n=1}^{\infty}a_n(w^n+w^{-n}) \in \mathscr O(\{1<\vert w\vert <R\})$. Suppose that $\vert f\vert <1$ and $a_0>0$, then\footnote{ Note that $\vert f\vert<1$ implies $a_0<1$.} we have : $$\vert a_n\vert\leq {2(1-a_0)\over R^n-R^{-n}}.$$ \end{lem} \noindent\textbf{Proof : } This is classical : let $g=1-f$, then $\texttt{re}({g})>0$ on $\{1<\vert w\vert <R\}$ and by prop. 
2.1 : $$\vert a_n\vert\leq {2(1-a_0)\over R^n-R^{-n}}.$$ $\blacksquare$ \begin{prop} For all $R\geq R_0=5.1284...$ the family $(K:=[-1,1], \Omega_R, (F_{K,n})_{n\geq 0})$ satisfies Bohr's phenomenon ($ \Omega_{R_0}$ is the ellipse with eccentricity $\varepsilon_0=0.3757...$). \end{prop} \noindent\textbf{Proof : } Let $f=a_0+\sum_{1}^{\infty}a_nF_{K,n}\in\mathscr O(\Omega_R)$ and suppose that $\vert f\vert <1$ on $\Omega_{R}$. In the variables ``$w$'' : $f(w)=a_0+\sum_{1}^{\infty} a_n (w^n+ w^{-n})$ on $\{1<\vert w\vert <R\}$ and up to a rotation (changing nothing by symmetry), we can suppose that $a_0\geq 0$. Then by lemma 2.2 : $$\begin{aligned}a_0+\sum\vert a_n\vert\cdot\Vert F_{K,n}\Vert_{K}&\leq a_0+2(1-a_0)\sum_{n=1}^{\infty}{r^n+{r^{-n}}\over R^n-{R^{-n}}},\quad (1<r<R)\\ &\leq a_0+(1-a_0)\sum_{n=1}^{\infty}{4R^n\over R^{2n}-1},\quad (\text{letting } r\to 1^{+}). \end{aligned}$$ This gives $$a_0+\sum_{n=1}^{\infty}\vert a_n\vert\cdot\Vert F_{K,n}\Vert_{K}<1$$ if $$\varphi(R):=\sum_{1}^{\infty}{4R^n\over R^{2n}-1}<1.$$ But $\varphi$ strictly decreases on $]1,\infty[$, $\lim_{1_+}\varphi(R)=+\infty$, $\lim_{+\infty}\varphi(R)=0$ ; therefore there exists a unique $R_0>1$ such that $\varphi(R_0)=1$ ; Mathematica gives $R_0=5.1284...$ corresponding to an eccentricity of $\varepsilon_0=0.3757...$ ; $(K:=[-1,1], \Omega_R, (F_{K,n})_{n\geq 0})$ satisfies Bohr's phenomenon for all $R\geq R_0$. $\blacksquare$ \begin{rem} Using theorem 7 in \cite{kap}, we can deduce a weaker version of proposition 2.3 with $R_0=5.1573...$ and $\varepsilon_0=0.3738...$, so proposition 2.3 is a slightly stronger version of theorem 7 in \cite{kap}. In another work \cite{lasseremanu} we compute exactly the infimum of the $R_0$ satisfying proposition 2.3, i.e. what we call the Bohr radius of $K=[-1,1]$ in Theorem 3.1. \end{rem} \section{ Bohr's phenomenon on an arbitrary Green condensator} \subsection{Estimates of Faber polynomials on a Green condensator} In this paragraph, we recall classical inequalities (see \cite{suetin}) on the Faber polynomials of $K$ that we will use in paragraph 3.2. Let $K\subset\mathbb C$ be a continuum, $(F_{K,n})_{n\geq 0}$ its Faber polynomials. Recall that $\Phi^n(z)=F_{K,n}(z)+E_{K,n}(z)$ where $E_{K,n}$ is the meromorphic part in the Laurent expansion of $\Phi^n$ in a neighborhood of infinity. If $\Omega_r$, $(r>1)$ is the level set $\{ z\in\mathbb C\ :\ \vert\Phi(z)\vert<r\}$ then we have the following integral formulas for Faber polynomials (see Suetin \cite{suetin}, p. 42) : $$\forall\,z\in\Omega_r\ :\quad F_{K,n}(z)=\int_{\partial\Omega_r}{\Phi^n(t)\over t-z}dt,\ \ \leqno{(1)}$$ $$\forall\,z\in\mathbb C\setminus \overline{\Omega_r}\ :\quad E_{K,n}(z)=\int_{\partial\Omega_r}{\Phi^n(t)\over t-z}dt,\ \ \leqno{(2)}$$ Formula (2) leads to the following estimates for all $1<r<R$ : $$\forall\,z\in\mathbb C\setminus {\Omega_R}\ :\quad\vert E_{K,n}(z)\vert\leq \int_{\partial\Omega_r}\left\vert{\Phi^n(t)\over t-z}\right\vert\cdot \vert dt\vert\leq {r^n\texttt{lg}(\partial\Omega_r)\over \texttt{dist}(z,\partial\Omega_r)},\ \leqno{(3)}$$ ($\texttt{lg}(\partial\Omega_r)$ is the euclidean length of $\partial\Omega_r$, $\texttt{dist}(z,\partial\Omega_r)$ is the euclidean distance from $z$ to $\partial\Omega_r$) and $$\forall\,z\in\partial\Omega_R\ :\ \quad \vert F_{K,n}(z)\vert\leq R^n\left(1+{r^n\over R^n}\cdot {\texttt{lg}(\partial\Omega_r)\over \texttt{dist}(z,\partial\Omega_r)}\right) ,\ \ \leqno{(4)}$$ for all $1<r<R$.
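For the reader's convenience, let us note how (4) follows from (3) and the decomposition $\Phi^n=F_{K,n}+E_{K,n}$ : for every $z\in\partial\Omega_R$ (so that $\vert\Phi(z)\vert=R$ and $z\in\mathbb C\setminus\Omega_R$) and every $1<r<R$, $$\vert F_{K,n}(z)\vert\leq \vert\Phi(z)\vert^n+\vert E_{K,n}(z)\vert\leq R^n+r^n{\texttt{lg}(\partial\Omega_r)\over \texttt{dist}(z,\partial\Omega_r)}=R^n\left(1+{r^n\over R^n}\cdot{\texttt{lg}(\partial\Omega_r)\over \texttt{dist}(z,\partial\Omega_r)}\right).$$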
Then if $R$ is large enough, precisely if $${r^n\over R^n}\cdot{\texttt{lg}(\partial\Omega_r)\over \texttt{dist}(z,\partial\Omega_r)}<1$$ then for all $n>0$, we have : $$\forall\,z\in\partial\Omega_R\ :\ \quad\vert F_{K,n}(z)\vert\geq R^n\left(1-{r^n\over R^n}\cdot{\texttt{lg}(\partial\Omega_r)\over \texttt{dist}(z,\partial\Omega_r)}\right)>0.$$ With formula (1), we deduce the estimation, for all $ r>1$ and $z\in K$ : $$\vert F_{K,n}(z)\vert\leq \int_{\partial\Omega_{r}}\left\vert{\Phi^n(t) \over t-z}\right\vert\cdot\vert dt\vert\leq r^n{\texttt{lg}(\partial\Omega_{r})\over \texttt{dist}(z,\partial\Omega_{r})}.\ \leqno{(5)}$$ If moreover the compact $K$ is a domain defined by a real analytic Jordan curve, then Caratheodory's theorem ensures that $\Phi$ extends as a biholomorphism on a neighborhood of $\partial K$, say up to $\partial\Omega_{r_0}$, where $r_0<1$. From this, we get for all $r_0<R$ : $$\forall\,z\in\mathbb C\setminus\Omega_R\ :\ \quad\vert E_{K,n}(z)\vert\leq \int_{\partial\Omega_{r_0}}\left\vert{\Phi^n(t)\over t-z}\right\vert\cdot\vert dt\vert\leq {r_{0}^n\texttt{lg}(\partial\Omega_{r_0})\over \texttt{dist}(z,\partial\Omega_{r_0})},$$ and so the estimations $$\begin{aligned} R^n\left(1-{r_{0}^n\over R^n}\cdot{\texttt{lg}(\partial\Omega_{r_0})\over \texttt{dist}(z,\partial\Omega_{r_0})}\right)&\leq\vert F_{K,n}(z)\vert\\ &\leq R^n\left(1+{r_{0}^n\over R^n}\cdot {\texttt{lg}(\partial\Omega_{r_0})\over \texttt{dist}(z,\partial\Omega_{r_0})}\right), \end{aligned}$$ for all $z\in\mathbb C\setminus\Omega_R,\ r_0<R$. \subsection{Bohr's phenomenon on a Green condensator. } In this paragraph we extend proposition 2.3 for all continuum $K$ in the complex plane, precisely : \begin{theo} For all continuum $K\subset\mathbb C$, there exists a constant $R_K>1$ such that for all $R>R_K$ the family $(K,\Omega_R,(F_{K, n})_{n\geq 0},)$ satisfies Bohr's phenomenon and the infimum $R_0$ of such $R$ will be called the \textbf{Bohr's radius} of $K$. \end{theo} For example the Bohr radius for a disc $K=D(a,r)$ is $3$ due to Bohr's classic theorem, and in \cite{lasseremanu} we compute the exact value of $R_0$ when $K=[-1,1]$. Before proving theorem 3.1, some intermediate results are necessary. Let $K$ be a continuum, $(F_{K,n})_{n\geq 0}$ its sequence of Faber polynomials and $z_0\in \partial K$. Consider the family $(\varphi_{n\geq 0})_{n\geq 0}$ where $\varphi_0\equiv 1$ and $\varphi_n=F_{K,n}-F_{K,n}(z_0)\ (n\geq 1)$. It is clear that $(\varphi_n)_{n\geq 0}$ is again a basis of the spaces $\mathscr O(\Omega_R)$ for all $R>1$ and we have \begin{theo} The family $( K, \Omega_R, (\varphi_n)_{n\geq 0})$ enjoys Bohr's property for $R$ large enough. That is to say, there exists $R>1$ such that all holomorphic function $f=\sum_n a_n\varphi_n\in\mathscr O(\Omega_R) $ with values in $\mathbb D$ satisfy $$\sum_{n\geq 0} \vert a_n\vert\cdot\Vert \varphi_n\Vert_K=\vert f(z_0)\vert+\sum_{n\geq 1} \vert a_n\vert\cdot\Vert \varphi_n\Vert_K <1.$$ \end{theo} \noindent \textbf{Proof : } Let $R_0>1$. We can suppose without loss of generality that $z_0=0$. Because $\varphi_n(0)=0$ for all $n>0$ we can apply theorem 3.3 in \cite{AAD} on the open set $\Omega_{R_0}$. This implies that there exists $D(0,\rho_0)$ where $\rho_0$ is small enough and a compact $K_1\subset\Omega_{R_0}$ such that : $$\vert f(0)\vert +\sum_{n\geq 1}\vert a_n\vert\cdot \Vert \varphi_n\Vert_{D(0,\rho_0)}\leq \Vert f\Vert_{K_1},$$ for any function $f=\sum_n\, a_n\varphi_n\in\mathscr O(\Omega_{R_0})$. 
Now choose $\rho_1>0$ such that $K_1\subset D(0,\rho_1)$. We have : $$\vert f(0)\vert +\sum_{n\geq 1}\vert a_n\vert\cdot \Vert \varphi_n\Vert_{D(0,\rho_0)}\leq \Vert f\Vert_{D(0,\rho_1)},\leqno{(6)}$$ for all $f=\sum_n a_n\varphi_n\in\mathscr O(\Omega_R)$ where $R$ is choosen large enough so that $D(0,\rho_1)\subset \Omega_R$. Let $f\in \mathscr O(\Omega_R)$ such that $\Vert f\Vert_{\Omega_R}\leq 1$; the invariant form of Schwarz's lemma (\cite{goluzin}, chapter 8) gives the following estimation on any disc $D(0,\rho)\subset \Omega_R$ ($\rho\geq \rho_1$) : $$\Vert f\Vert_{D(0,\rho_1)}\leq {\rho_1 \rho^{-1}+\vert f(0)\vert\over 1+\vert f(0)\vert \rho_1 \rho^{-1}}.\leqno{(7)}$$ We want for $f=f(0)+\sum_{n\geq 1}\, a_n\varphi_n\in\mathscr O(\Omega_R)$ to dominate the quantity : $\vert f(0)\vert+\sum_{n\geq 1}\, \vert a_n\vert\cdot\Vert\varphi_n\Vert_{K}$; write $$\sum_{n\geq 1}\vert a_n\vert\cdot\Vert\varphi_n\Vert_{K}=\sum_{n\geq 1}\vert a_n\vert\cdot\Vert\varphi_n\Vert_{D(0,\rho_0)}\times \dfrac{\Vert \varphi_n \Vert_{K}}{ \Vert \varphi_n \Vert_{D(0,\rho_0)}}.\ \ \leqno{(8)}$$ Let $L$ be a disc contained in $D(0,\rho_0)\setminus K$ then $$\lim_{n\to\infty} \Vert\varphi_n\Vert_L^{1/n} =R^{\alpha_L}$$ where\footnote{$\omega$ is the extremal function associated for the pair $(K, \Omega_R)$.} $\alpha_L:=\max_{z\in L} \omega(z, K,\Omega_R)$, this is in fact true for all compact $L\subset \Omega_R\setminus K$ and this is an immediate corollary of a Nguyen Thanh Van's result (\cite{nguyen}, page 228, see also \cite{nguyenzeriahi}, \cite{zeriahi} for ``pluricomplex versions''). At this point, it's not difficult to deduce $$\forall\,\varepsilon>0,\ \exists\, C_\varepsilon>0\ :\ \Vert\varphi_n\Vert_K\leq C_\varepsilon R^{n\varepsilon},\ \forall\,n\in\mathbb N,$$ and $$\exists\, C>0\ :\ \Vert\varphi_n\Vert_L\geq C \cdot R^{n\frac{\alpha_L}{2}},\ \forall\,n\in\mathbb N.$$ It remains to choose $\varepsilon>0$ small enough so that $R^\varepsilon< R^{\frac{\alpha_L}{2}}$. Such a choice assures $$0\leq \lim_{n\to+\infty}\dfrac{\Vert \varphi_n \Vert_{K}}{ \Vert \varphi_n \Vert_{D(0,\rho_0)}}\leq \lim_{n\to+\infty} \left( R^{\varepsilon-\frac{\alpha_L}{2}}\right)^n=0.$$ So the sequence $\left(\frac{\Vert \varphi_n \Vert_{K}}{ \Vert \varphi_n \Vert_{D(0,\rho_0)}}\right)_n$ is bounded : by (8) there exists $C>0$ such that $$\sum_{n\geq 1} \vert a_n\vert \cdot \Vert\varphi_n\Vert_{K}\leq C\sum_{n\geq 1}\vert a_n\vert\cdot\Vert\varphi_n\Vert_{D(0,\rho_0)},$$ which give us with (6) the estimation : $$\sum_{n\geq 1} \vert a_n\vert \cdot \Vert\varphi_n\Vert_{K}\leq C\big(\Vert f \Vert_{D(0,\rho_1)}-\vert f(0)\vert\big).$$ Finally, with the invariant Schwarz's lemma $(7)$ $$\sum_{n\geq 1} \vert a_n\vert \cdot \Vert\varphi_n\Vert_{K}\leq C\left( {\rho_1 \rho^{-1}+\vert f(0)\vert\over 1+\vert f(0)\vert \rho_1 \rho^{-1}}-\vert f(0)\vert\right)= C\rho_1 \rho^{-1}\left({1-\vert f(0)\vert^2\over 1+\vert f(0)\vert \rho_1 \rho^{-1}} \right)$$ which lead us to the main estimation $$\sum_{n\geq 1} \vert a_n\vert \cdot \Vert\varphi_n\Vert_{K}\leq 2C\rho_1 \rho^{-1}(1-\vert f(0)\vert).$$ To conclude, let us choose $\rho$ large enough so that $2C\rho_1 \rho^{-1}\leq 1$, therefore, for any $R>1$ such that $D(0,\rho)\subset \Omega_R$ and $f=f(0)+\sum_{n\geq 1}a_n\varphi_n\in \mathscr O(\Omega_R)$, $f(\Omega_R)\subset \mathbb D$, we have : $$\vert f(0)\vert + \sum_{n\geq 1} \vert a_n\vert\cdot\Vert\varphi_n\Vert_{K}\leq 1.$$ Q.E.D. 
$\blacksquare$ Of course we must now come back to the basis $(F_{K,n})_n$ : \begin{lem} Suppose that the family $(K,\Omega,(\varphi_n)_{n\geq 0})$ (with $\varphi_0\equiv 1$) satisfies Bohr's property, let $\widetilde K\subset K$ be another compact set, $(\varepsilon_n)_{n\geq 1}$ a complex sequence and suppose that there exists a constant $0<C<1$ such that $$\begin{cases} \sup_{z\in \widetilde K}\vert \varphi_n(z)-\varepsilon_n\vert \leq C\cdot \Vert\varphi_n\Vert_K,\quad\forall\,n\geq 1,&\qquad (9) \\ \vert\varepsilon_n\vert\leq (1-C)\cdot \Vert\varphi_n\Vert_K,\quad\forall\,n\geq 1.&\qquad (10) \end{cases}$$ Then the family $( \widetilde K,\Omega, (\widetilde\varphi_n)_{n\geq 0})$ satisfies Bohr's property with $\widetilde\varphi_0\equiv 1,\ \widetilde\varphi_n:=\varphi_n-\varepsilon_n$. \end{lem} \noindent\textbf{Proof : } Let $f=a_0+\sum_{n\geq 1}a_n\varphi_n=a_0+\sum_{n\geq 1}a_n\varepsilon_n+\sum_{n\geq 1}a_n(\varphi_n-\varepsilon_n)\in\mathscr O(\Omega)$ and suppose that $\vert f\vert\leq 1$ on $\Omega$. We have to prove that $$\left\vert a_0+\sum_{n\geq 1}a_n\varepsilon_n\right\vert+\sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n-\varepsilon_n\Vert_{\widetilde K}\leq 1.$$ But : $$\begin{aligned} \left\vert a_0+\sum_{n\geq 1}a_n\varepsilon_n\right\vert&+\sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n-\varepsilon_n\Vert_{\widetilde K} \\ &\leq \left\vert a_0+\sum_{n\geq 1}a_n\varepsilon_n\right\vert + C\cdot \sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K \\ &\leq \vert a_0\vert +\sum_{n\geq 1}\vert a_n\vert\cdot\vert \varepsilon_n\vert+ C\cdot \sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K\\ &\leq \vert a_0\vert +(1-C)\sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K+ C\cdot \sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K\\ &\leq \vert a_0\vert +\sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K\leq 1, \end{aligned}$$ where the last inequality uses Bohr's property of the family $(K,\Omega,(\varphi_n)_{n\geq 0})$. $\blacksquare$ \noindent\textbf{Proof of Theorem 3.1 : } Now let $K$ be a continuum, $\Omega_R,\ (R>1)$, a level set of the Green function of $K$ and fix $\widetilde K=\overline{\Omega_R }$. If $a\in\partial\Omega_R$ there exists (by theorem 3.2) $R'>R$ such that the family $\left(\overline{\Omega_R }, \Omega_{R'}, (1, F_{\widetilde K, n}-F_{\widetilde K, n}(a))_{n\geq 0}\right)$ satisfies Bohr's property. Then for any function $$f=a_0+\sum_{n\geq 1} a_n ( F_{\widetilde K, n}-F_{\widetilde K, n}(a)) \in \mathscr O(\Omega_{R'}),$$ such that $\vert f\vert \leq 1$ on $\Omega_{R'}$, we have $$\vert a_0\vert+ \sum_{n\geq 1} \vert a_n\vert\cdot \Vert F_{\widetilde K, n}-F_{\widetilde K, n}(a)\Vert_{\overline{\Omega_R }}\leq 1.$$ \noindent But (\cite{suetin}, page 35) : $F_{\widetilde K, n}(z)=R^{-n}F_{K, n}(z)$ so $$\begin{aligned} f(z)&=a_0+\sum_{n\geq 1} a_n \left( F_{\widetilde K, n}(z)-F_{\widetilde K, n}(a)\right)\\ &=a_0+\sum_{n\geq 1} a_n R^{-n}\left( F_{ K, n}(z)-F_{ K, n}(a)\right). \end{aligned}$$ Because $R>1$, this immediately implies that the basis $(1, F_{K, n}-F_{K, n}(a))_{n\geq 0}$ satisfies Bohr's property on $(\overline{\Omega_R }, \Omega_{R'})$. If we apply lemma 3.3 with $\varphi_n=F_{K, n}-F_{K, n}(a),\ a\in\partial \Omega_R$ and $-\varepsilon_n=F_{K, n}(a)$, the inequalities (9) and (10) become : $$\begin{aligned} &(9')\qquad& \sup_{z\in K}\vert F_{K, n}(z)\vert \leq C\cdot \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert, \\ &(10')\qquad& \vert F_{K, n}(a)\vert \leq (1-C)\cdot \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert. \end{aligned}$$ (where $C\in]0,1[$ is a constant).
For all $n\geq 1$ choose $a_n\in\partial\Omega_R$ such that $\Phi(a_n)=\theta_n\Phi(a)$ where $\theta_n$ is an $n$-th root of $-1$ (remember that $\Phi(\partial\Omega_R)=C(0,R)$). So $$F_{K, n}(a_n)=\Phi^n(a_n)-E_{K, n}(a_n)=-\Phi(a)^n-E_{K, n}(a_n)$$ and $$F_{K, n}(a_n)-F_{K, n}(a)=-2\Phi(a)^n+\left[ E_{K, n}(a)-E_{K, n}(a_n)\right ].$$ But because of inequality (3) in paragraph 3.1, taking $r=1+\varepsilon_0$ : $$\left\vert E_{K, n}(a)-E_{K, n}(a_n)\right\vert \leq 2(1+\varepsilon_0)^n\dfrac{\texttt{lg}(\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R})},$$ for all $n\geq 1$ and $R>1+\varepsilon_0$. Consequently : $$\begin{aligned} \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert &\geq \vert F_{K, n}(a_n)-F_{K, n}(a)\vert \\ &\geq 2R^n\left[1-\left(\dfrac{1+\varepsilon_0}{R}\right)^n\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R})} \right] \end{aligned}$$ for all $n\geq 1$ and $R>1+\varepsilon_0$. So, as long as we choose $R$ large enough, say $R>R_0$, we can suppose that $$ \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert \geq \dfrac{3}{2}R^n,\quad \forall\,n\geq 1,\ R>R_0.\leqno{(11)}$$ Because of (4) : $$\vert F_{K,n}(a)\vert \leq R^n\left[ 1+\left(\dfrac{1+\varepsilon_0}{R}\right)^n\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R})} \right]$$ for all $n\geq 1$, $R>1+\varepsilon_0$. Because the term between the brackets satisfies : $$\begin{aligned}1&\leq 1+\left(\dfrac{1+\varepsilon_0}{R}\right)^n\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R})}\\ &\leq 1+\left(\dfrac{1+\varepsilon_0}{R_1}\right)\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R_1})}\underset{R_1\to\infty}{\longrightarrow} 1 \end{aligned}$$ for all $R>R_1>1+\varepsilon_0$ ; it is less than $5/4$ for all $n\geq 1$ and $R>R_1$ where $R_1$ is chosen large enough ; i.e. $$\vert F_{K,n}(a)\vert \leq \dfrac{5}{4}\cdot R^n,\quad \forall\,n\geq 1,\ R>R_1.$$ It follows from (11) that $$\vert F_{K,n}(a)\vert \leq \dfrac{5}{6}\cdot \dfrac{3}{2}\cdot R^n \leq \dfrac{5}{6}\sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert,\ \forall\,n\geq 1,\ R>R_2:=\max\{ R_0, R_1\}.$$ So we have proved inequality (10') with $C=1/6$. Finally, still because of (4) : $$\begin{aligned} \qquad \sup_{z\in K}\vert F_{K, n}(z)\vert &\leq \sup_{z\in\overline{\Omega_{1+2\varepsilon_0}}}\vert F_{K, n}(z)\vert \\ &\leq (1+2\varepsilon_0)^n\cdot \left[ 1+\left(\dfrac{1+\varepsilon_0}{1+2\varepsilon_0}\right)^n\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+2\varepsilon_0},\partial\Omega_{1+\varepsilon_0})} \right]\\ &\leq A (1+2\varepsilon_0)^n,\quad\forall\,n\geq 1 \end{aligned}$$ where $A$ is a constant strictly larger than $1$. With $A>1$ fixed, it is easy to deduce that there exists $R_3>1+2\varepsilon_0$ such that for any $R>R_3$ : $$\sup_{z\in K}\vert F_{K, n}(z)\vert \leq A (1+2\varepsilon_0)^n \leq \dfrac{R^n}{4},\quad \forall\, n\geq 1.$$ So because of (11) $$\sup_{z\in K}\vert F_{K, n}(z)\vert \leq \dfrac{1}{6}\cdot\dfrac{3}{2}R^n \leq \dfrac{1}{6} \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert$$ for all $n\geq 1$ and $R > \max\{ R_3, R_2\}$.
This is formula (9') with $C=1/6$, so we can apply lemma 3.3 and deduce that the family $(K,\Omega_R, (F_{K, n})_{n\geq 0})$ satisfies Bohr's phenomenon for all $R$ large enough : theorem 3.1 is proved. $\blacksquare$ \end{document}
\begin{document} \title[Complete graphs in $N^2\PLH\R$]{ The boundary behavior of domains with complete translating, minimal and CMC graphs in $N^2{\mkern-1mu\times\mkern-1mu} \mathbb{R}$} \author{Hengyu Zhou} \address{Department of Mathematics, Sun Yat-sen University, No. 135, Xingang Xi Road, Guangzhou, 510275, People's Republic of China} \email{[email protected]} \date{\today} \subjclass[2010]{Primary 53A35; Secondary 53A10 35J93 49Q05 } \begin{abstract} In this note we discuss graphs over a domain $\Omega\subset N^2$ in the product manifold $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$. Here $N^2$ is a complete Riemannian surface and $\Omega$ has piecewise smooth boundary. Let $\gamma \subset \partial\Omega$ be a smooth connected arc and $\Sigma$ be a complete graph in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$ over $\Omega$. We show that if $\Sigma$ is a minimal or translating graph, then $\gamma$ is a geodesic in $N^2$. Moreover if $\Sigma$ is a CMC graph, then $\gamma$ has constant principal curvature in $N^2$. This explains the infinite boundary value condition on domains in the Jenkins-Serrin theorems for minimal and CMC graphs in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$. \end{abstract} \maketitle \section{Introduction} In this paper we are interested in the asymptotic behavior of translating, minimal and constant mean curvature (CMC) graphs over a domain in a Riemannian surface. The purpose is to establish the connection between the completeness of those graphs over a domain and the properties of its boundary. We are motivated by recent progress on complete translating graphs in $\mathbb{R}^3$ and the Jenkins-Serrin theory on minimal graphs and CMC graphs.\\ \indent Before giving more details let us introduce the concept of translating graphs. We use the following notation throughout this paper: $N^2$ is a complete Riemannian surface with a metric $\sigma$, $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$ is the product manifold $\{(x,r): x\in N^2, r\in \mathbb{R}\}$ equipped with the metric $\sigma+dr^2$ and $\Omega$ is a domain in $N^2$ with piecewise smooth boundary.\\ \indent A surface in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$ is a \textit{translating graph} if it is the graph of $u(x)$, where $u(x):\Omega\rightarrow \mathbb{R}$ is a solution of the following mean curvature type equation: \begin{equation} \label{def:tsgraph} div(\F{Du}{\sqrt{1+|Du|^2}})=\F{1}{\sqrt{1+|Du|^2}} \end{equation} where $Du$ is the gradient of $u$ and $div$ is the divergence operator of $N^2$. Translating surfaces characterize the type II singularities of the mean curvature flow in Euclidean space (see \cite{AngV97}, \cite{Ang95} and \cite{HS08}). Some of their geometric properties were investigated in \cite{AW94, WXJ11, GJJ10, CSS07, Sj16}, among others.\\ \indent Recently Shahriyari \cite{Sha15} showed that if $\Sigma$ is a complete translating graph over a smooth domain $\Omega\subset \mathbb{R}^2$ in $\mathbb{R}^3$, then $\partial\Omega$ has to be a geodesic. This raises the following question.\\ \indent \emph{ Is there a connection between the completeness of graphs with certain properties over a domain and the boundary behavior of this domain? }\\ \indent An answer to this question in the case of CMC graphs was already found by Spruck (section 8 in \cite{spr72}).
One of his results says that if a function $u(x)$ on a domain $\Omega$ in $\mathbb{R}^n$ goes to $+\infty$ uniformly as $x$ approaches a connected open portion $\Gamma\subset\partial\Omega$, and the graph of $u(x)$ is a (necessarily complete) CMC graph in $\mathbb{R}^{n+1}$, then $\Gamma$ has constant mean curvature in $\mathbb{R}^n$. \\ \indent Our question is also related to the Jenkins-Serrin theory on minimal graphs and CMC graphs in product manifolds (see Jenkins-Serrin \cite{JS68}). For an excellent summary of this topic we refer to Eichmair-Metzger \cite{EM16}. Its basic setting is given as follows. Let $\Omega$ be a domain in $N^2$ whose boundary $\partial\Omega$ is composed of $\partial_{+}\Omega$, $\partial_{-}\Omega$ and $\partial_{0} \Omega$. The Jenkins-Serrin theory seeks a smooth function $u(x)$ on $\Omega$ such that $\Sigma$, the graph of $u(x)$, is minimal or CMC in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$, $u(x)$ approaches $+\infty$ ($-\infty$) when $x$ is close to $\partial_{+}\Omega$ ($\partial_{-}\Omega$) and approaches prescribed continuous data when $x$ is close to $\partial_{0}\Omega$. Generally this theory also requires that $\partial_{+}\Omega$ and $\partial_{-}\Omega$ are geodesics or have constant principal curvature in $N^2$, respectively (see \cite{spr72},\cite{PL09} and \cite{EM16}). One interesting application of Jenkins-Serrin theorems is the construction of a harmonic diffeomorphism from the complex plane $\mathbb{C}$ to the hyperbolic plane $\mathbb{H}^2$ by Collin-Rosenberg \cite{CR10}. \\ \indent Now our main result will answer the question mentioned above. It also explains the conditions on $\partial_{+}\Omega$ and $\partial_{-}\Omega$ in the Jenkins-Serrin theory. We say that a graph $\Sigma$ in $N^2{\mkern-1mu\times\mkern-1mu} \mathbb{R}$ over $\Omega$ is complete approaching to a connected arc $\gamma\subset \partial\Omega$ if $\Sigma$ cannot be extended along $\gamma$ as a complete graph over a neighborhood of $\gamma$. The main result of this paper is stated as follows. \begin{theorem}\label{thm:MT1}(\emph{Theorem \ref{thm:geo}}) Let $N^2$ be a complete Riemannian surface and let $\Omega\subset N^2$ be a domain with piecewise smooth boundary. Let $\gamma \subset \partial\Omega$ denote a smooth connected arc and let $\Sigma$ be the graph of a smooth function $u(x)$ on $\Omega$ in the product manifold $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$. \\ \indent Suppose $\Sigma$ is complete approaching to $\gamma$. Then we have \begin{enumerate} \item if $\Sigma$ is a translating or minimal graph, then $\gamma$ is a geodesic arc; \item if $\Sigma$ is a CMC graph, then $\gamma$ has constant principal curvature. \end{enumerate} Moreover only one of the following holds: (1) $u(x)\rightarrow +\infty$ as $x\rightarrow x_0$ for all $x_0\in \gamma$; (2) $u(x)\rightarrow -\infty$ as $x\rightarrow x_0$ for all $x_0\in \gamma$. \end{theorem} \begin{Rem} The reason that we only work in a surface $N^2$ is that we need curvature estimates for surfaces of stable type in three-manifolds (see Section 4). We are working on a project that deals with the higher dimensional version of Theorem \ref{thm:MT1}. \end{Rem} The essential part of the proof of our main result is the case when $\Sigma$ is a translating graph. The other two cases follow with minor modifications (see Section 5). The basic idea is inspired by Shahriyari \cite{Sha15}.
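\\ \indent A simple model example illustrating Theorem \ref{thm:MT1}, which we record here only for orientation, is the grim reaper cylinder in $\mathbb{R}^3$: over the slab $\Omega=(-\F{\pi}{2},\F{\pi}{2})\times\mathbb{R}\subset\mathbb{R}^2$ the function $u(x,y)=-\log\cos x$ satisfies \eqref{def:tsgraph}, since $u_x=\tan x$, $u_{xx}=1+u_x^2$ and hence $$ div(\F{Du}{\sqrt{1+|Du|^2}})=\F{u_{xx}}{(1+u_x^2)^{3/2}}=\F{1}{\sqrt{1+u_x^2}}. $$ Its graph is a complete translating graph approaching the two boundary lines $x=\pm\F{\pi}{2}$, which are indeed geodesics of $\mathbb{R}^2$.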
\\ \indent When $\Sigma$ is a translating graph, we show that $\Sigma$ is stable and minimal with respect to a weighted product metric (see Theorem \ref{thm:mta}). The curvature estimate of stable minimal surfaces in three dimensional manifolds (Schoen \cite{Soe83} and Minicozzi-Colding \cite{CM02}) gives a family of simply connected disks on $\Sigma$ with fixed diameter $\delta$ (see Lemma \ref{lm:ti}) centered at points $(x_n, u(x_n))$ where $x_n$ goes to a point in $\gamma$. These disks have a vertical limit $F$ according to Theorem \ref{thm:est2}. Moreover $F$ is minimal since the completeness of $\Sigma$ guarantees that the angle function on $F$ has to vanish according to Theorem \ref{thm:convergence} (see Lemma \ref{lm:minimal}). This implies that $\gamma$ is a geodesic. Notice that when $\Sigma$ is a CMC graph, the curvature estimate we need is from Zhang \cite{Zhang05} (see Theorem \ref{thm:cmc:ce}).\\ \indent Our paper is organized as follows. In Section 2 we show that a translating graph is minimal and stable with respect to a weighted metric in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$. In Section 3 we compute the sectional curvature of this weighted metric. In Section 4 all curvature estimates of stable minimal surfaces and CMC surfaces that we need are collected. In Section 5 we prove Theorem \ref{thm:MT1}. In Appendix A we construct translating graphs in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$ where $N^2$ has a certain warped product structure. A particular example is the two dimensional hyperbolic space $\mathbb{H}^2$. \\ \section{Stability} Let $\Sigma$ be a translating graph of $u(x)$ in $N^2\PLH\R$ where $u(x)$ satisfies \eqref{def:tsgraph} on a domain $\Omega$. We follow the notation in \cite{HZ16}. The upward normal vector $\vec{v}$ is $\Theta(\partial_r-Du)$ where $Du$ is the gradient of $u(x)$ with respect to $N^2$. Suppose $\{\partial_1,\partial_2\}$ is a local frame on $N^2$. We denote $\partial_i+u_i\partial_r$ by $X_i$ for $i=1,2$. Then $\{X_1, X_2\}$ is a local frame on $\Sigma$. The second fundamental form of $\Sigma$ is $$ h_{ij}=-\langle \bar{\nabla}_{X_i}X_j,\vec{v}\rangle $$ With this notation one sees that $ H=-div(\F{Du}{\sqrt{1+|Du|^2}}) $. Therefore an equivalent form of \eqref{def:tsgraph} is \begin{equation}\label{eq:angle} H=-\Theta=-\langle\vec{v},\partial_r\rangle \end{equation} where $\Theta$ is referred to as the angle function of $\Sigma$. The corresponding Gauss equation and Codazzi equation take the following form: \begin{gather} R_{ijkl}=\bar{R}_{ijkl}+(h_{ik}h_{jl}-h_{il}h_{jk})\\ h_{ij,k}=h_{ik,j}+\bar{R}_{\vec{v}ijk} \end{gather} where $R$ and $\bar{R}$ are the Riemann curvature tensors of $\Sigma$ and $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$ respectively. \\ \indent Let $\widetilde{N^2\PLH\R}$ denote the manifold $\{(x,r): x\in N^2, r\in \mathbb{R}\}$ equipped with a weighted product metric $e^{r}(\sigma+dr^2)$.\\ \indent First we show that \begin{theorem}\label{thm:mta} Let $u(x)$ be a solution of \eqref{def:tsgraph}. Its graph $\Sigma=(x,u(x))$ is a stable minimal surface in $\widetilde{N^2\PLH\R}$. \end{theorem} \begin{proof} Before the proof, let us recall the area functional of $\widetilde{N^2\PLH\R}$, given by $$ F(\Sigma)=\int_{\Sigma}e^{r}d\mu $$ where $d\mu$ is the volume element of $\Sigma$ in the product manifold $N^2\PLH\R$.
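Indeed, under the conformal change $e^{2f}(\sigma+dr^2)$ with $f=\F{r}{2}$, the area element of a two dimensional surface is multiplied by $e^{2f}$, so that $$ F(\Sigma)=\int_{\Sigma}e^{2f}\,d\mu=\int_{\Sigma}e^{r}\,d\mu $$ is precisely the area of $\Sigma$ computed with the weighted metric of $\widetilde{N^2\PLH\R}$.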
Let $\Sigma_s$ be a family of surfaces satisfying \begin{equation} \F{\partial\Sigma_s}{\partial s}|_{s=0}=\phi\vec{v}\quad \text{with}\quad \Sigma_0=\Sigma \end{equation} where $\phi(x)$ is a smooth function on $\Sigma$ with compact support. We view $\Sigma_s$ as a curvature flow of $\Sigma$ in $N^2\PLH\R$. From the classical computations for curvature flows (see Huisken-Polden \cite{HP99}), we have \begin{equation}\label{eq:pd} \begin{split} \F{\partial\vec{v}}{\partial s}|_{s=0}&=-\nabla\phi\\ \F{\partial H}{\partial s}|_{s=0}&=-\Delta\phi-(|A|^2+\bar{R}ic(\vec{v},\vec{v}))\phi \end{split} \end{equation} where $\nabla$, $\Delta$ are the covariant derivative and Laplacian of $\Sigma$ in $N^2\PLH\R$, and $\bar{R}ic$ is the Ricci curvature tensor of $N^2\PLH\R$. According to \eqref{eq:pd} and \eqref{eq:angle}, a direct computation shows that \begin{equation} \begin{split} \F{\partial F(\Sigma_s)}{\partial s}|_{s=0} &=\int_{\Sigma}\phi(H+\langle \vec{v},\partial_r\rangle)e^{r} d\mu=0\\ \F{\partial^2 F(\Sigma_s)}{\partial s^2}|_{s=0}&=-\int_{\Sigma}\phi(\Delta\phi+(|A|^2+\bar{R}ic(\vec{v},\vec{v}))\phi+\langle \nabla \phi,\partial_r\rangle)e^{r}d\mu \end{split} \end{equation} For convenience of computation, we define an elliptic operator $L$ as follows: \begin{equation}\label{def:L} L\phi=\Delta\phi+(|A|^2+\bar{R}ic(\vec{v},\vec{v}))\phi+\langle \nabla \phi,\partial_r\rangle \end{equation} With this notation, to prove the stability of $\Sigma$ it is sufficient to check that \begin{equation}\label{eq:second:variation} \F{\partial^2 F(\Sigma_s)}{\partial s^2}|_{s=0}=-\int_{\Sigma}\phi L\phi e^{r}d\mu \end{equation} is nonnegative. Since $\Sigma$ is a graph, its angle function $\Theta=\langle \vec{v},\partial_r\rangle$ is positive. Thus we can write $\phi=\eta\Theta$ where $\eta$ is another function over $\Sigma$ with compact support. Therefore, we obtain that \begin{equation} \label{eq:rst} \phi L\phi=\eta\Theta(\eta L\Theta+\Theta\Delta \eta+2\langle \nabla \eta, \nabla\Theta\rangle +\Theta\langle\nabla\eta,\partial_r\rangle) \end{equation} The reason we adopt this form is the following general formula for $\Delta\Theta$. \begin{lem} \label{lm:graphic}On any $C^2$ surface $S$ in $N^2\PLH\R$, it holds that \begin{equation} \label{eq:reta} \Delta \Theta+(|A|^2+\bar{R}ic(\vec{v},\vec{v}))\Theta-\langle \nabla H, \partial_r\rangle =0 \end{equation} where $A$ is the second fundamental form of $S$. \end{lem} \begin{proof} Fix a point $p\in S$. Choose an orthonormal frame $\{e_1, e_2\}$ on $S$ such that $\nabla_{e_i}e_j(p)=0$ and $\langle e_i, e_j\rangle =\delta_{ij}$.\\ \indent Then $\bar{\nabla}_{e_i}e_j(p)=-h_{ij}\vec{v}$ where $\bar{\nabla}$ denotes the covariant derivative of $N^2\PLH\R$ and $\vec{v}$ is the normal vector of $S$. Since $N^2\PLH\R$ is a product manifold, it is well-known that $\bar{\nabla}_X\partial_r=0$ for any smooth vector field $X$. We compute $\Delta \Theta$ as follows. \begin{align} \Delta \Theta(p)&=\nabla_{e_i}\nabla_{e_i}\langle \partial_r,\vec{v}\rangle-\nabla_{\nabla_{e_i}e_i}\Theta(p)\notag\\ &=e_i\langle \partial_r, h_{ik}e_k\rangle (p)\notag\\ &=h_{ik,i}\langle\partial_r,e_k\rangle-|A|^2\Theta\label{eq:basic} \end{align} Recall that the Codazzi equation (Chapter 6 in \cite{doC92}) says that \begin{equation} h_{ik,i}=h_{ii,k}+\bar{R}(\vec{v},e_i,e_k,e_i) \end{equation} where $\bar{R}$ denotes the Riemann curvature tensor of $N^2\PLH\R$.
Thus \begin{equation} h_{ik,i}\langle\partial_r,e_k\rangle=\langle \nabla H,\partial_r\rangle+\bar{R}ic(\vec{v},\langle \partial_r, e_k\rangle e_k) \end{equation} We observe that \begin{equation} \langle \partial_r, e_k\rangle e_k=\partial_r-\Theta \vec{v} \end{equation} and $\bar{R}ic(\vec{v},\partial_r)=0$ because $\bar{\nabla}_{X}\partial_r=0$ for any vector $X$. This implies that $$ h_{ik,i}\langle\partial_r,e_k\rangle=\langle \nabla H,\partial_r\rangle-\bar{R}ic(\vec{v},\vec{v})\Theta $$ Combining this with \eqref{eq:basic}, we obtain the lemma. \end{proof} Now we go back to the proof of Theorem \ref{thm:mta}. By assumption $\Sigma$ is a translating graph in $N^2\PLH\R$, so $H=-\Theta$. Hence \eqref{eq:reta} is written as \begin{equation} L\Theta=0 \end{equation} and therefore \eqref{eq:rst} becomes \begin{equation} \phi L\phi =\eta\Theta(\Theta\Delta \eta+2\langle \nabla \eta, \nabla\Theta\rangle +\Theta\langle\nabla\eta,\partial_r\rangle) \end{equation} On the other hand, the divergence of $\eta \Theta^2\nabla \eta e^{r}$ is computed as follows. \begin{equation} \begin{split} div(\eta\Theta^2\nabla \eta e^{r})&=\eta\Theta e^{r}(\Theta\Delta \eta+2\langle \nabla \eta, \nabla\Theta\rangle +\Theta\langle\nabla\eta,\partial_r\rangle)+\Theta^2|\nabla \eta|^2 e^{r}\\ &=\phi L\phi e^{r} +\Theta^2|\nabla \eta|^2 e^{r} \end{split} \end{equation} Combining this expression with \eqref{eq:second:variation} and applying the divergence theorem we obtain that $$ \F{\partial^2 F(\Sigma_s)}{\partial s^2}|_{s=0}=\int_{\Sigma} \Theta^2|\nabla \eta|^2 e^{r}d\mu\geq 0 $$ Then we conclude that $\Sigma$ is stable and minimal in $\widetilde{N^2\PLH\R}$. \end{proof} Lemma \ref{lm:graphic} leads to a rigidity result of limit surfaces for the $C^2$ convergence of minimal graphs, translating graphs and CMC graphs in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$. It is an important ingredient in the proof of Lemma \ref{lm:minimal} (see Section 5). A similar result appeared in Lemma 2.3 of Eichmair \cite{Ecm10} in the setting of marginally outer trapped surfaces. \begin{theorem} \label{thm:convergence} Let $\{\Sigma_n\}_{n=1}^\infty$ be a sequence of smooth connected graphs in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$ with diameter $\delta$ converging uniformly to a connected surface $\Sigma$ in the $C^2$ sense. If all the $\Sigma_n$ are translating graphs, then in the interior of $\Sigma$ the angle function $\Theta$ satisfies either $\Theta>0$ or $\Theta\equiv 0$. The same conclusion holds when all the $\Sigma_n$ are minimal or CMC graphs. \end{theorem} \begin{proof} Without loss of generality we assume $\Theta>0$ in the interior of all $\Sigma_n$. \\ \indent First we assume the $\Sigma_n$ are minimal or CMC. Then $\nabla H\equiv 0$. By Lemma \ref{lm:graphic}, we have \begin{equation} \Delta \Theta+(|A|^2+\bar{R}ic(\vec{v},\vec{v}))\Theta=0 \end{equation} on all $\Sigma_n$. Notice that $\bar{R}ic(\vec{v},\vec{v})\geq -\beta$, where $\beta$ is a positive constant depending only on $N^2$. Then we have $ \Delta \Theta\leq \beta \Theta $ on all $\Sigma_n$. Since $\Sigma$ is the uniform $C^2$ limit of $\Sigma_n$ as $n\rightarrow \infty$, it satisfies $\Theta\geq 0$ and $\Delta \Theta\leq \beta \Theta$. By the strong maximum principle of elliptic equations, $\Theta\equiv 0$ or $\Theta>0$ on $\Sigma$. \\ \indent Now assume the $\Sigma_n$ are translating graphs. Then $H\equiv -\Theta$ by \eqref{eq:angle}. It also holds that $\Delta \Theta+\langle\nabla \Theta, \partial_r\rangle\leq \beta \Theta$ on $\Sigma_n$ and hence on $\Sigma$.
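Indeed, since $H=-\Theta$ on each $\Sigma_n$, we have $\langle\nabla H,\partial_r\rangle=-\langle\nabla\Theta,\partial_r\rangle$, so \eqref{eq:reta} gives $$ \Delta\Theta+\langle\nabla\Theta,\partial_r\rangle=-(|A|^2+\bar{R}ic(\vec{v},\vec{v}))\Theta\leq\beta\Theta, $$ where we used $\Theta\geq 0$ and $\bar{R}ic(\vec{v},\vec{v})\geq-\beta$ as before.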
Based on the strong maximum principle and $\Theta\geq 0$ on $\Sigma$, we obtain the conclusion by a similar argument to the one above. \end{proof} \section{Sectional Curvature of $\widetilde{N^2\PLH\R}$} In this section we compute the sectional curvature of $\widetilde{N^2\PLH\R}$ (see Lemma \ref{lm:section}). This should be a classical fact in Riemannian geometry; however, we did not find an appropriate reference. For the convenience of the reader we include its proof here.\\ \indent We first work in a general setting. Suppose $M$ is a Riemannian manifold with a smooth metric $g$. In local coordinates $\{x_1,\cdots,x_n\}$, the metric is expressed as $$ g=g_{ij}dx^idx^j $$ The Christoffel symbols in these local coordinates are defined by $\nabla_{\partial_i}\partial_j=\Gamma_{ij}^k\partial_k$ and computed by the following expressions: \begin{equation} \Gamma_{ij}^k=\F{1}{2}g^{kl}\{\partial_ig_{lj}+\partial_jg_{li}-\partial_l g_{ij}\} \end{equation} and the Riemannian curvature tensor is computed by \begin{equation} \label{eq:R} R(\partial_i,\partial_j,\partial_k,\partial_l)=(\partial_j\Gamma_{ik}^r-\partial_i\Gamma_{jk}^r +\Gamma_{ik}^m\Gamma_{mj}^r-\Gamma_{jk}^m\Gamma_{mi}^r)g_{rl} \end{equation} Let $\tilde{M}$ denote the same smooth manifold $M$ equipped with a weighted metric $e^{2f}g$ where $f$ is a smooth function on $M$. According to the definition above, the corresponding Christoffel symbols $\tilde{\Gamma}_{ij}^k$ satisfy \begin{equation} \label{eq:st} \tilde{\Gamma}_{ij}^k=\Gamma_{ij}^k+(\delta_{kj}\partial_i f+\delta_{ki}\partial_j f-g^{kl}\partial_lf g_{ij}) \end{equation} \indent We consider the second fundamental form of hypersurfaces in Riemannian manifolds and their conformal deformations. Notice that all computations below are valid in any dimension. We have the following result. \begin{lem} If $\Sigma$ is a hypersurface in $M$, then it is a hypersurface in $\tilde{M}$ and vice versa. Let $(h_{ij})$ and $(\tilde{h}_{ij})$ be the second fundamental forms of a hypersurface $\Sigma$ in $M$ and $\tilde{M}$ respectively. Then they satisfy \begin{equation}\label{eq:second} \tilde{h}_{ij}=e^f(h_{ij}+df(\vec{v})g_{ij})\quad \tilde{h}_{i}^j=e^{-f}(h_i^j+df(\vec{v})\delta_{ij}) \end{equation} where $\vec{v}$ is the normal vector of $\Sigma$ in $M$. \end{lem} \begin{proof} We add $\sim$ to all geometric quantities related to $\tilde{M}$. Fix a point $p$ on $\Sigma$. Since $\Sigma$ is a hypersurface, we can choose a local chart $\{x_1,\cdots, x_n, x_{n+1}\}$ near $p$ in $M$ such that $\{\partial_1,\cdots,\partial_n\}$ is a local frame on $\Sigma$ and $\vec{v}(p)=\partial_{n+1}$. Notice that $e^{-f}\vec{v}$ is the normal vector of $\Sigma$ in $\tilde{M}$. According to the definition and applying \eqref{eq:st} one observes that \begin{align} \tilde{h}_{ij}(p)&=-\tilde{g}(\tilde{\nabla}_{\partial_i}\partial_{j},e^{-f}\vec{v})=-e^{-f}\,\tilde{g}_{n+1\,n+1}\,\tilde{\Gamma}_{ij}^{n+1}=-e^{f}\tilde{\Gamma}_{ij}^{n+1}\\ &=-e^{f}\Gamma_{ij}^{n+1}+e^{f}\, \partial_{n+1} f\, g_{ij}\\ &=e^f (h_{ij}+df(\vec{v}) g_{ij}) \end{align} because $\tilde{g}_{ij}=e^{2f}g_{ij}$ and, at $p$, $g_{n+1\,n+1}=1$ and $g_{k\,n+1}=0$ for $k\leq n$. Applying $\tilde{g}^{ij}=e^{-2f}g^{ij}$, we obtain \eqref{eq:second} from $\tilde{h}_i^{\,j}=\tilde{g}^{jk}\tilde{h}_{ki}$. \end{proof} Now we obtain the relationship between the sectional curvature of $N^2\PLH\R$ and that of $\widetilde{N^2\PLH\R}$ by applying the Gauss equation. \begin{lem}\label{lm:section} Let $\{\partial_1,\partial_2,\partial_3=\partial_r\}$ be a local orthonormal frame on $N^2\PLH\R$.
Let $K_{ij}$ and $\tilde{K}_{ij}$ denote the sectional curvatures of $N^2\PLH\R$ and $\widetilde{N^2\PLH\R}$ respectively. Then it holds that \begin{equation} \tilde{K}_{ij}(x,r)=e^{-r}(K_{ij}(x)-\F{1}{4})\quad \tilde{K}_{i3}=0 \end{equation} for $i,j\in\{1,2\}$. \end{lem} \begin{proof} Let $f(r)=\F{r}{2}$. Fix any $r$. Notice that the slice $N^2{\mkern-1mu\times\mkern-1mu} \{r\}$ is totally geodesic in $N^2\PLH\R$. Assume $i,j\in\{1,2\}$. According to \eqref{eq:second}, its second fundamental form in $\widetilde{N^2\PLH\R}$ is \begin{equation} \tilde{h}_{ij}=-\F{1}{2}e^{\F{r}{2}}\sigma_{ij} \end{equation} It is easy to see that the Riemannian curvature tensor of $N^2{\mkern-1mu\times\mkern-1mu} \{r\}$ with respect to the induced metric is $e^{r}R$ where $R$ is the Riemannian curvature tensor of $N^2$. By the Gauss equation, $$ \bar{R}_{ijij}(x,r)=e^{r}R_{ijij}(x)-\F{1}{4}e^{r}(\sigma_{ii}\sigma_{jj}-\sigma_{ij}^2)(x) $$ From $\tilde{g}_{ij}=e^{r}\sigma_{ij}=e^{r}g_{ij}$, a direct computation yields the expression of $\tilde{K}_{ij}$. With a straightforward computation, we have $$ \tilde{\Gamma}_{k3}^l=\F{1}{2}\delta_{kl} $$ where $k,l\in \{1,2\}$. According to \eqref{eq:R} we have $\tilde{K}_{i3}=0$. \end{proof} \section{Curvature Estimates} We recall some curvature estimates from Schoen \cite{Soe83} and Minicozzi-Colding \cite{CM02} on stable minimal surfaces $\Sigma$, and from Zhang \cite{Zhang05} on stable CMC surfaces, immersed in a three dimensional Riemannian manifold $M^3$ with sectional curvature $K_M$. For a fixed point $x\in M$, $B_r(x)$ denotes the extrinsic ball in $M$ centered at $x$ with radius $r$. Similarly, for a fixed point $x\in \Sigma$, $B_r^\Sigma(x)$ denotes the intrinsic ball on $\Sigma$ centered at $x$ with radius $r$. \begin{theorem}\label{thm:est}(\emph{\cite{Soe83} and \cite{CM02}}) Suppose $\Sigma\subset M^3$ is a stable minimal surface with trivial normal bundle and $B_{r_0}^\Sigma(p)\subset \Sigma\backslash \partial\Sigma$, where $|K_M|\leq k^2$ and $r_0<\min\{\F{\pi}{k},k\}$. Then for some positive constant $C=C(k)$ and all $0<r <r_0$, \begin{equation} \sup_{B^\Sigma_{r_0-r}(p)}|A|^2\leq Cr^{-2} \end{equation} \end{theorem} Now we derive the curvature estimate for translating graphs. The idea follows from Shahriyari \cite{Sha15}. Let $i_0$ denote the injectivity radius of $N^2$. Without loss of generality, we assume $i_0\leq 1$. \begin{theorem}\label{thm:est2} Let $U$ be an open domain in $N^2$ with sectional curvature satisfying $|K_{N^2}(x)|+\F{1}{4}\leq k^2$ for all $x\in U$. Let $\Sigma=(x,u(x))$ be a translating graph in $N^2\PLH\R$ where $x\in U$. If $B^\Sigma_{r_0e^{-1}}(p)\subset\Sigma\cap B_1(p) \backslash \partial(\Sigma\cap B_1(p))$ and $r_0\leq \min\{\F{\pi}{k\sqrt{e}},k\sqrt{e}, i_0,1\}$ then for some positive constant $C=C(k)$ and all $0<r\leq r_0e^{-1}$, \begin{equation}\label{eq:third} \sup_{B^\Sigma_{r_0e^{-1}-r}(p)}|A|^2\leq Cr^{-2} \end{equation} \end{theorem} \begin{proof} Fix a point $p=(x_0,y_0)\in N^2\PLH\R$ where $x_0\in U$. Let $B_{r_0}(p)$ be the ball in $N^2\PLH\R$ consisting of all points whose distance to $p$ is less than $r_0$. Then for any point $(x,y)\in B_{r_0}(p)$ we have \begin{equation}\label{eq:relation} |y-y_0|\leq r_0\leq 1 \end{equation} \indent Let $\tilde{B}_{r_0}(p)$ denote the ball $B_{r_0}(p)$ equipped with the conformal metric $e^{r-y_0}(\sigma+dr^2)$.
We claim that the sectional curvatures $\tilde{K}$ at all points of $\tilde{B}_{r_0}(p)$ satisfy \begin{equation}\label{eq:sectional} |\tilde{K}|\leq ek^2 \end{equation} By Lemma \ref{lm:section}, the sectional curvature $\tilde{K}_{ij}$ of the ball $B_{r_0}(p)$ equipped with the conformal metric $e^{r}(\sigma+dr^2)$ is $ e^{-r}(K_{ij}-\F{1}{4})$ for $i,j=1,2$ and $\tilde{K}_{i3}=0$. Multiplying the metric by the constant factor $e^{-y_0}$, the sectional curvatures of $\tilde{B}_{r_0}(p)$ satisfy \begin{equation}\label{eq:sec:est} \tilde{K}_{ij}(x,r)=e^{y_0-r}(K_{ij}(x)-\F{1}{4})\quad \tilde{K}_{i3}=0 \end{equation} for $i,j=1,2$. Combining \eqref{eq:sec:est} with \eqref{eq:relation} and $|K_{N^2}(x)|+\F{1}{4}\leq k^2$ yields \eqref{eq:sectional}. We obtain the claim.\\ \indent By Theorem \ref{thm:mta}, $\Sigma$ is a stable minimal graph with respect to the metric $e^{r}(\sigma+dr^2)$. Since $y_0$ is a constant, $\Sigma$ is still a stable minimal graph in $\tilde{B}_{r_0}(p)$ with respect to the metric $e^{r-y_0}(\sigma+dr^2)$. Here multiplying the metric by the positive constant $e^{-y_0}$ does not change the minimality and stability of hypersurfaces. Now applying Theorem \ref{thm:est}, the second fundamental form $\tilde{A}$ of $\Sigma$ in $\tilde{B}_{r_0}(p)$ satisfies \begin{equation} \sup_{\tilde{B}_{r_0-\sigma'}(p)}|\tilde{A}|^2\leq C(k)\sigma'^{-2} \end{equation} where $r_0\leq \min\{\F{\pi}{k\sqrt{e}},k\sqrt{e}\}$, $\sigma'<r_0$ and $C(k)$ is a constant depending on $k$. According to \eqref{eq:second}, we have \begin{equation} |\tilde{A}|^2=e^{-(r-y_0)}|A|^2\geq e^{-1}|A|^2 \end{equation} where $A$ is the second fundamental form of $\Sigma$ with respect to the metric $\sigma+dr^2$. Similarly, the ball $\tilde{B}_{(r_0-\sigma')}(p)$ with respect to the metric $e^{r-y_0}(\sigma+dr^2)$ contains the ball $B_{(r_0 e^{-1}-\sigma'e^{-1})}(p)$ because of \eqref{eq:relation}. This yields that \begin{equation} \sup_{B_{(r_0e^{-1}-\sigma'e^{-1})}(p)}|A|^2\leq C(k)e^3(e\sigma')^{-2} \end{equation} Letting $r=\sigma'e^{-1}$, we obtain \eqref{eq:third}. \end{proof} Following Zhang \cite{Zhang05}, a CMC surface $\Sigma$ is \emph{stable} if for any $f\in C^\infty_0(\Sigma)$ it holds that $-\int_\Sigma f\,Lf\,d\mu\geq 0$, where $L$ is given by \eqref{def:L}. Thus a CMC graph in $N^2{\mkern-1mu\times\mkern-1mu} \mathbb{R}$ is stable by combining \eqref{def:L} with \eqref{eq:reta}. The curvature estimate of stable CMC surfaces is given as follows. \begin{theorem}\label{thm:cmc:ce}(\emph{Theorem 1.1 in \cite{Zhang05}}) Let $\Sigma$ be an immersed stable CMC $H_0$-surface with trivial normal bundle in a complete three dimensional manifold $M$ whose sectional curvature satisfies $|K_M|\leq k^2$. There exists a positive constant $r_0=r_0(H_0, k, M)$ such that for all $\sigma \in (0, r_0)$ and any $x\in \Sigma$ with geodesic ball $B_{r_0}(x)\bigcap \partial\Sigma=\emptyset$ we have $$ \sup_{B^\Sigma_{r_0-\sigma}(x)}|A|^2 \leq C\sigma^{-2} $$ where $C$ is a constant depending only on $H_0,k$ and $M$. \end{theorem} \begin{Rem} In this paper we do not need the precise expressions of those constants. \end{Rem} \section{Proof of the main theorem} In this section we prove the main theorem. \begin{theorem}\label{thm:geo}(\emph{Theorem \ref{thm:MT1}}) Let $N^2$ be a complete Riemannian surface and $\Omega\subset N^2$ be a domain with piecewise smooth boundary.
Let $\gamma \subset \partial\Omega$ denote a smooth connected arc and $\Sigma$ be the graph of a smooth function $u(x)$ on $\Omega$ in the product manifold $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$. \\ \indent Suppose $\Sigma$ is complete approaching to $\gamma$. Then we have \begin{enumerate} \item if $\Sigma$ is a translating or minimal graph, then $\gamma$ is a geodesic arc; \item if $\Sigma$ is a CMC graph, then $\gamma$ has constant principal curvature. \end{enumerate} Moreover only one of the following holds: (1) $u(x)\rightarrow +\infty$ as $x\rightarrow x_0$ for all $x_0\in \gamma$; (2) $u(x)\rightarrow -\infty$ as $x\rightarrow x_0$ for all $x_0\in \gamma$. \end{theorem} \begin{Rem} In the case that $N^2$ is $\mathbb{R}^2$ and $\Sigma$ is a complete translating graph in $\mathbb{R}^3$, the above result was obtained by Shahriyari \cite{Sha15}. \end{Rem} In the sequel we only prove Theorem \ref{thm:geo} in the case of translating graphs. The proof in the cases of minimal graphs and CMC graphs only requires some minor modifications (see Remark \ref{rm:ce} and Remark \ref{rm:last}).\\ \indent From now on, we denote the graph of $u(x)$ by $\Sigma$, which is a translating graph over $\Omega$. Fix a point $x_0\in \gamma$ and let $U_{x_0}$ be an open bounded neighborhood of $x_0$ in $N^2$ such that its intersection with $\gamma$ is a connected arc passing through $x_0$. This arc is written as $\gamma_{x_0}$. \\ \indent To show Theorem \ref{thm:geo}, our objective is to show that $\gamma_{x_0}$ is a geodesic and that $\{u(x_n)\}$ has the property described in the theorem when $\{x_n\}$ approaches points on $\gamma_{x_0}$. \\ \indent We start with the following result based on the curvature estimate in the previous section. Notice that $U_{x_0}$ has a compact closure in $N^2$. \begin{lem}\label{lm:ti} For any point $p=(y,u(y))$ on the translating graph $\Sigma$ with $y\in U_{x_0}$, $\Sigma$ is a graph (in exponential coordinates of $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$) over the disk $D_{\delta}\subset T_p\Sigma$ of radius $\delta$. This graph is denoted by $G(p)$. Moreover $\delta$ and the geometry of $G(p)$ only depend on $U_{x_0}$. The conclusion is also valid when $\Sigma$ is a minimal graph or a CMC graph in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$. \end{lem} \begin{Rem}\label{rm:ce} This is the only place where we apply the curvature estimate of translating graphs in Theorem \ref{thm:est2}. The conclusions in this lemma for minimal graphs and CMC graphs follow from Theorem \ref{thm:est} and Theorem \ref{thm:cmc:ce} respectively with a similar derivation. Notice that a CMC graph is stable by Zhang \cite{Zhang05}. A similar application of Zhang's estimate can be found in Hauswirth-Rosenberg-Spruck \cite{HRS08}. \end{Rem} \begin{proof} Since $U_{x_0}$ has compact closure, the sectional curvature of $N^2$ on $U_{x_0}$ satisfies $|K_{N^2} |\leq k^2_{x_0}-\F{1}{4}$ for some constant $k_{x_0}$ depending only on $U_{x_0}$.\\ \indent For any $p\in \Sigma$, the exponential map $$ \exp_{p}: B_{r_1}(0)\rightarrow N^2\PLH\R $$ is a diffeomorphism on the ball in $T_{p}(N^2\PLH\R)=\mathbb{R}^3$ centered at $0$ with radius $r_1$. Here $r_1$ depends only on $U_{x_0}$ and is independent of $p$. We can equip $B_{r_1}(0)$ with a metric such that the exponential map is a local isometry and $\Sigma$ is a graph near the origin over $T_p\Sigma \cap B_{r_1}(0)$.
\\ \indent On the other hand, according to Theorem \ref{thm:est2}, there is an intrinsic ball in $\Sigma$ of radius $r_0e^{-1}-r$ on which the second fundamental form of $\Sigma$ is uniformly bounded above by $Cr^{-2}$. Let $r=\F{1}{2}r_0e^{-1}$ and $\delta=\F{1}{2}r_0e^{-1}$. Since the exponential map is a local isometry and $r_1$ is uniform in $p$, we obtain the disk $D_\delta$ in $B_{r_1}(0)$. The geometry of $G(p)$ is determined by the second fundamental form, which also only depends on $U_{x_0}$. \end{proof} For each point $p\in \Sigma$, we translate the graph $G(p)$ vertically into the slice $N^2{\mkern-1mu\times\mkern-1mu}\{0\}$ as follows. Let $p=(x^*,u(x^*))$ and let $G(p)$ be given in Lemma \ref{lm:ti} with the representation $(x,u(x))$ where $x$ belongs to some open set $U$. Then its \textbf{vertically translated graph} $F(p)$ is given by $$ F(p)=\left\{(x,u(x)-u(x^*))\ :\ (x,u(x))\in G(p)\right\}\subset N^2\PLH\R $$ This operation does not change any geometric property of $G(p)$.\\ \indent For any sequence $\{x_n\}\subset \Omega$ converging to $x_0$, we conclude that the sequence $\{u(x_n)\}$ is unbounded. Otherwise the completeness of $\Sigma$ approaching to $\gamma$ would imply that $x_0\in \Omega$, a contradiction.\\ \indent Let $\{p_n\}$ be the sequence $\{(x_n,u(x_n))\}$ on $\Sigma$ where $\{x_n\}$ converges to $x_0$ as $n\rightarrow \infty$. Let $F(p_n)$ and $G(p_n)$ be defined as above with the $\delta$ given in Lemma \ref{lm:ti}. \\ \indent The case of translating graphs in Theorem \ref{thm:geo} follows from the next lemma and the connectedness of $\gamma$. \begin{lem} \label{lm:minimal} Let $\Sigma$ be a translating graph as in Lemma \ref{lm:ti}. After passing to a subsequence, the sequence $F(p_n)$ converges uniformly to $\Gamma{\mkern-1mu\times\mkern-1mu} [-\F{\delta}{2},\F{\delta}{2}]\subset N^2\PLH\R$ in the $C^2$ topology, where $\Gamma\subset \gamma_{x_0}$ is a connected geodesic for sufficiently small $\delta$. Moreover only one of the following holds: \begin{enumerate} \item $u(x_n)\rightarrow +\infty$ as $x_n\rightarrow x$ for all $x\in \Gamma$; \item $u(x_n)\rightarrow -\infty$ as $x_n\rightarrow x$ for all $x\in \Gamma$. \end{enumerate} \end{lem} \begin{proof} We may choose $\delta$ smaller if necessary. By Lemma \ref{lm:ti}, the surfaces $F(p_n)$, obtained by translating $G(p_n)$ into the slice $N^2{\mkern-1mu\times\mkern-1mu}\{0\}$, have uniformly bounded geometry. After choosing a subsequence, $F(p_n)$ converges uniformly to a connected surface $F$ passing through $x_{0}$ in the $C^2$ topology by Theorem \ref{thm:est2}.\\ \indent Now we claim that the normal vector of $F$ at $x_0$ is orthogonal to $\partial_{r}$, i.e. $\Theta =0$. Otherwise the angle function $\Theta$ on $F_n:=F(p_n)$ would have a uniform positive lower bound. Notice that the diameter of $F_n$ is $\delta$. When $x_n$ is sufficiently close to $x_0$, $F_n$ has to contain some point of $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$ whose projection to $N^2$ lies outside $\Omega$, because of the two facts mentioned above. This contradicts the fact that $\Sigma$ is complete approaching to $\gamma$. \\ \indent By Theorem \ref{thm:convergence} we have $\Theta\equiv 0$ on $F$. Because $H=-\Theta$, $F$ is also minimal in $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$. We denote the intersection between $F$ and the slice $N^2{\mkern-1mu\times\mkern-1mu}\{0\}$ by $\Gamma$. Thus $\Gamma$ is a geodesic in $N^2$ and $F=\Gamma{\mkern-1mu\times\mkern-1mu} [-\F{\delta}{2},\F{\delta}{2}]$. Moreover $\Gamma$ is connected because $F$ is connected.
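Here we use the standard fact that for a smooth curve $\Gamma\subset N^2$ the mean curvature of a vertical piece $\Gamma{\mkern-1mu\times\mkern-1mu} I$ in the product manifold $N^2{\mkern-1mu\times\mkern-1mu}\mathbb{R}$ satisfies $$ |H_{\Gamma{\mkern-1mu\times\mkern-1mu} I}(x,r)|=|k_{\Gamma}(x)|, $$ where $k_{\Gamma}$ denotes the geodesic curvature of $\Gamma$ in $N^2$; hence $F=\Gamma{\mkern-1mu\times\mkern-1mu}[-\F{\delta}{2},\F{\delta}{2}]$ is minimal exactly when $\Gamma$ is a geodesic.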
\\ \indent Let $\Gamma_n$ be the intersection between $F_n$ and $N^2{\mkern-1mu\times\mkern-1mu}\{0\}$; it belongs to $\bar{\Omega}$. According to the definition of $F_n$, we have $$ \Gamma_n=\{(x,u(x)-u(x_n))\ :\ (x,u(x))\in G_n,\ u(x)=u(x_n) \} $$ where $G_n:=G(p_n)$. Since $F_n$ converges to $F$, $\Gamma_n$ converges to $\Gamma$ as $n\rightarrow \infty$. We conclude that $\Gamma\subset \bar{\Omega}$.\\ \indent Suppose $\Gamma$ is not contained in $\gamma_{x_0}$. Then there is a sequence $\{y_n\}$ with $y_n\in\Gamma_n$ converging to a point $y\in \Omega$ lying on $\Gamma$. Thus the sequence $\{u(y_n)\}$ converges to $u(y)$, which is a finite number. Since $u(y_n)=u(x_n)$ by the definition of $\Gamma_n$, the sequence $\{u(x_n)\}$ is bounded. This contradicts the fact that $\{u(x_n)\}$ is an unbounded sequence. As a result $\Gamma\subset \gamma_{x_0}$. \\ \indent Assume that $u(x_n)\rightarrow +\infty$ as $x_n \rightarrow x_0$. Then for any sequence $\{x_n'\}$ approaching $x_0$ we have $u(x'_n)\rightarrow +\infty$. Otherwise, by the intermediate value theorem for continuous functions, there is a sequence $\{x_n^{''}\}$ such that $u(x_n^{''})$ converges to a finite number as $x_n^{''}\rightarrow x_0$. Again this contradicts the completeness of $\Sigma$ approaching to $\gamma$. The same argument works verbatim for any point $y\in \Gamma$. Thus we conclude (1). The proof of (2) is similar when we assume $u(x_n)\rightarrow-\infty$ as $x_n\rightarrow x_0$. The proof of Lemma \ref{lm:minimal} is complete. \end{proof} \begin{Rem}\label{rm:last} The conclusion in Lemma \ref{lm:minimal} is also valid when $\Sigma$ is minimal or CMC. The only modification is that $\Gamma$ is a geodesic or an arc with constant principal curvature, respectively. \end{Rem} \begin{appendix} \section{Examples of translating graphs} In this section we construct some examples of translating graphs when the surface $N^2$ contains a domain with a special warped product structure. \\ \indent Suppose $N^2$ is a complete Riemannian surface with a metric $\sigma$ containing a domain $N^2_0$ equipped with the following coordinate system: \begin{equation} \label{metric:structure} \{\theta \in S^1, r\in [0, r_0)\}\quad\text{with}\quad \sigma=dr^2+h^2(r)d\theta^2 \end{equation} where $d\theta^2$ is the standard metric on the unit circle $S^1$ and $h(r)$ is a function, positive on $(0,r_0)$, satisfying $h(0)=0$, $h'(0)=1$ and $h'(r)\neq 0$ for all $r\in (0, r_0)$. For more detail on warped product metrics, we refer to Section 2 in \cite{HZ16}.\\ \indent The following result discusses the existence of translating graphs in $N^2\PLH\R$ with the structure in \eqref{metric:structure}. \begin{theorem} Let $N^2$ be a surface as above. Let $u(r):[0,r_0)\rightarrow \mathbb{R}$ be a $C^2$ solution of the following ordinary differential equation \begin{equation}\label{eq:u} \F{u_{rr}}{1+u_r^2}+\F{h'(r)}{h(r)}u_r=1 \end{equation} with $u_r(0)=0$, for $r\in [0,r_0)$. Then $\Sigma=(x,u(r))$ for $r\in [0,r_0)$ is a translating graph in $N^2\PLH\R$, where $x=(r,\theta)\in N^2_0$ is given by \eqref{metric:structure}. If $r_0=\infty$, then $\Sigma$ is complete. \end{theorem} \begin{Rem} Equation \eqref{eq:u} is an ODE; the existence of its solution is standard.\\ \indent Two concrete examples are the unit sphere $S^2$ and the hyperbolic plane $\mathbb{H}^2$. In the former case, $N^2_0$ is the hemisphere with $h(r)=\sin(r)$ for $r\in [0,\F{\pi}{2})$ and the spherical metric is written as $ dr^2+\sin^2(r)d\theta^2 $. In the case of $\mathbb{H}^2$, we have $h(r)=\sinh(r)$ and the hyperbolic metric is written as $ dr^2+\sinh^2(r)d\theta^2 $.
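Another model case, mentioned here only for illustration, is $N^2=\mathbb{R}^2$ with $h(r)=r$ and $r_0=\infty$; then \eqref{eq:u} becomes $$ \F{u_{rr}}{1+u_r^2}+\F{u_r}{r}=1, $$ the equation of the classical rotationally symmetric bowl soliton, which is a complete translating graph in $\mathbb{R}^3$.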
\end{Rem} \begin{proof} Suppose $r_0=\infty$. Then $N^2_0$ is simply connected and must be the whole of $N^2$. Thus $\Sigma$ is complete.\\ \indent Now we show that $\Sigma$ is a translating graph.\\ \indent According to \eqref{eq:angle} it is sufficient to derive the identity \begin{equation} \label{def:TS} H=-\Theta \end{equation} where $H$ is the mean curvature of $\Sigma$ with respect to its upward normal vector $\vec{v}$ and $\Theta=\langle\vec{v},\partial_3\rangle$ is the angle function.\\ \indent Fix a point $(x,u(x))$ on $\Sigma$, where $x\in N_0^2$ and the polar coordinates of $x$ in $N_0^2$ are not $(0,0)$. There is a natural frame $\{\partial_r,\partial_\theta\}$ associated with the polar coordinates on $N^2_0$ by \eqref{metric:structure}. Let $\Sigma$ be the graph of $u(x)=u(r)$ in $N^2\PLH\R$. Let $u_r, u_\theta $ denote the partial derivatives of $u$. Thus we have a natural frame $\{X_1=\partial_r+u_r\partial_3,X_2=\partial_\theta\}$ on $\Sigma$. Here $\partial_3$ denotes the vector field tangent to $\mathbb{R}$. We also use the fact that $u_\theta=0$. Then the metric on $\Sigma$ and the upward normal vector of $\Sigma$ are given by \begin{align*} g_{11}&=\langle X_1, X_1\rangle=1+u_r^2,\quad g_{12}=\langle X_1, X_2\rangle =0 \\ g_{22}&=\langle X_2, X_2\rangle=h^2(r)\\ \vec{v}& =\F{\partial_3-u_r\partial_r}{\sqrt{1+u_r^2} } \end{align*} Let $\bar{\nabla}$ denote the covariant derivative of $N^2\PLH\R$. Then the second fundamental form of $\Sigma$ is $$ h_{11}=-\langle\bar{\nabla}_{X_1}X_1,\vec{v}\rangle =-\F{u_{rr}}{\sqrt{1+u_r^2}},\quad h_{22}=-\langle \bar{\nabla}_{\partial_\theta}\partial_\theta,\vec{v}\rangle =-h'(r)h(r)\F{u_r}{\sqrt{1+u_r^2}}, $$ where we use the fact that $\langle \bar{\nabla}_{\partial_\theta}\partial_\theta,\partial_r\rangle=-h'(r)h(r)$ (for more detail see Section 2 in \cite{HZ16}). Then the mean curvature of $\Sigma$ with respect to $\vec{v}$ is $$ H=g^{11}h_{11}+g^{22}h_{22}=-\F{1}{\sqrt{1+u_r^2}}(\F{u_{rr}}{1+u_r^2}+\F{h'(r)}{h(r)}u_r)=-\F{1}{\sqrt{1+u_r^2}} $$ by \eqref{eq:u}. On the other hand we have $$ \Theta =\langle\vec{v},\partial_3\rangle =\F{1}{\sqrt{1+u_r^2}}=-H. $$ Hence $\Sigma$ is a translating graph. The proof is complete. \end{proof} \end{appendix} \end{document}
\begin{document} \title{Vanishing geodesic distance for right-invariant Sobolev metrics on diffeomorphism groups} \author{Robert L.~Jerrard\footnote{Department of Mathematics, University of Toronto.} \,and Cy Maor\footnotemark[1]} \date{} \maketitle \begin{abstract} We study the geodesic distance induced by right-invariant metrics on the group $\operatorname{Diff}_\text{c}(\mathcal{M})$ of compactly supported diffeomorphisms, for various Sobolev norms $W^{s,p}$. Our main result is that the geodesic distance vanishes identically on every connected component whenever $s<\min\{n/p,1\}$, where $n$ is the dimension of $\mathcal{M}$. We also show that previous results imply that whenever $s > n/p$ or $s \ge 1$, the geodesic distance is always positive. In particular, when $n\ge 2$, the geodesic distance vanishes if and only if $s<1$ in the Riemannian case $p=2$, contrary to a conjecture made in Bauer et al.~\cite{BBHM13}. \end{abstract} \tableofcontents \section{Introduction} In this paper we mostly resolve a question about the geometry of the group $\operatorname{Diff}_\text{c}(\mathcal{M})$ of compactly supported diffeomorphisms of a Riemannian manifold $\mathcal{M}$, endowed with a right-invariant Sobolev metric; see Section \ref{sec_2} below for the precise definition, as well as assumptions on $\mathcal{M}$. Sobolev metrics on $\operatorname{Diff}_\text{c}(\mathcal{M})$ arise in a variety of contexts. In particular, such a metric turns $\operatorname{Diff}_\text{c}(\mathcal{M})$ into an infinite-dimensional Riemannian manifold, and a number of partial differential equations relevant to fluid dynamics can be formulated as geodesic flow in manifolds of this sort. Sobolev metrics on $\operatorname{Diff}_\text{c}(\mathcal{M})$ are also relevant to the study of what are known as {\em shape spaces}, a concept with connections to areas such as computer vision and computational anatomy. We refer to \cite{BBM14} for a discussion of these and other sources of motivation. A metric on $\operatorname{Diff}_\text{c}(\mathcal{M})$ gives rise to a notion of the length of a path, and the induced geodesic distance between a pair of elements is obtained by taking the infimum of the lengths of all paths connecting the two diffeomorphisms. If the metric is induced by the $H^s$ Sobolev inner product for $s$ small enough, the geodesic distance may vanish in the strong sense that any two diffeomorphisms that can be connected by a path can in fact be connected by a path of arbitrarily small length. For large enough $s$, by contrast, the geodesic distance between any two distinct diffeomorphisms is positive. Our aim is to identify the precise threshold that separates these two cases. This question grows out of work of \cite{MM05}, who proved (among other results) that the $H^s$ geodesic distance vanishes when $s=0$ and is positive when $s=1$. These results were extended to certain $s\in (0,1)$ by \cite{BBHM13,BBM13}, who proved that for $\mathcal{M}$ of bounded geometry, the $H^s$ geodesic distance vanishes if $s<1/2$. 
They also proved that for one-dimensional manifolds, the geodesic distance is positive when $s>1/2$, and for $\mathcal{M} = \mathbb S^1$, it vanishes in the borderline case $s=\frac 12$.\footnote{Very shortly after we completed this manuscript, a proof that the $H^{1/2}$ geodesic distance vanishes for all one-dimensional manifolds was posted, see \cite{BHP18}.} Motivated by these facts, they conjectured that for arbitrary manifolds, the induced $H^s$ geodesic distance should vanish if and only if $s\le 1/2$. It turns out to be illuminating to embed this conjecture in a larger family of questions about the vanishing of the geodesic distance induced by right-invariant fractional Sobolev norms $W^{s,p}$, for $1\le p < \infty$; see again Section \ref{sec_2} for details (note that we do not consider the case $p=\infty$ in this paper unless explicitly noted). The arguments used in \cite[Theorem~5.7]{MM05} and \cite[Theorem~4.1]{BBHM13} then imply the following: \begin{theorem}[\cite{MM05,BBHM13}] The induced $W^{s,p}$-distance is positive whenever $sp>n$ or $s\ge 1$. \end{theorem} Our main result shows that these results are essentially sharp: \begin{theorem} The induced $W^{s,p}$-distance vanishes whenever $sp<n$ and $s< 1$. \end{theorem} These results are stated in more detail in Theorem~\ref{main_thm}. In particular, contrary to the conjecture of \cite{BBHM13}, we have the following corollary: \begin{corollary} If $\mathcal{M}$ is a manifold of dimension at least $2$, then the $H^s$ geodesic distance vanishes if and only if $s<1$. \end{corollary} We conclude this informal introduction by describing some ingredients in our analysis. First, we remark that the positivity proof of \cite[Theorem~5.7]{MM05} can be understood to show that for any $s\ge 0$, paths in $\operatorname{Diff}_\text{c}(\mathcal{M})$ of short length must involve compression of (parts of) the support of the diffeomorphism into very small sets, and that this compression can always be detected by $W^{s,p}$-norms when $s\ge 1$. The positivity proof of \cite[Theorem~4.1]{BBHM13} relies on the observation that any motion, no matter how small its support, can always be detected by any $W^{s,p}$-norm that embeds into $L^\infty$. This property holds whenever $sp>n$. If $s<1$, it turns out that one can compress parts of the manifold into arbitrarily small regions, for arbitrarily small cost; and if $sp<n$ one can transport small regions of the manifold for a long distance with small cost. Therefore, if $s<\min \{n/p,1\}$, one might expect the geodesic distance to vanish. Our proof that this is indeed the case has two main points. The first is to devise a strategy for alternating compression and transport of small sets in order to flow the identity mapping, say, onto a fixed target diffeomorphism at low cost. The second point is that the transport step requires some care in order to arrive at (or sufficiently close to) a fixed target, while still remaining small in the relevant norms. We achieve this by first constructing a flow, relying in part on ideas of \cite{BBHM13}, that exactly reaches the desired target; however, in order for this flow to be in the right Sobolev space, we need to regularize it. This regularization, and the error control that follows it, form the majority of the technical part of this paper. Our heuristic arguments, described above, for vanishing geodesic distance apply also in the endpoint case $s=\frac n p <1$, since $W^{n/p,p}$ also fails to embed into $L^\infty$ in this case.
As mentioned above, it is known that the $W^{1/2,2}$-induced geodesic distance vanishes on $\operatorname{Diff}_\text{c}(\mathbb S^1)$, and although we do not present the details, the proof of \cite{BBHM13} can be readily extended to $W^{1/p,p}$ for all $1<p<\infty$. In general, however, although it is natural to conjecture that the $W^{n/p,p}$-induced geodesic distance vanishes on $n$-dimensional manifolds when $p>n$, the critical scaling makes constructions delicate, and this question remains open for $\dim\mathcal{M} > 1$. \section{Preliminaries and main result}\label{sec_2} Let $(\mathcal{M},\mathfrak{g})$ be a Riemannian manifold of \emph{bounded geometry}, that is, $(\mathcal{M},\mathfrak{g})$ has a positive injectivity radius and all the covariant derivatives of the curvature are bounded: $\|\nabla^i R\|_\mathfrak{g} < C_i$ for $i\ge 0$. We denote by $\Gamma_c(T\mathcal{M})$ the Lie algebra of compactly supported vector fields on $\mathcal{M}$, and by $\operatorname{Diff}_\text{c}(\mathcal{M})$ the group of compactly supported diffeomorphisms of $\mathcal{M}$, that is, the diffeomorphisms $\phi$ for which the closure of $\{\phi(x)\ne x\}$ is compact. A smooth path $\{\phi_t \}_{ t\in [0,1]}$ in $\operatorname{Diff}_\text{c}(\mathcal{M})$ can be described in terms of the velocity vector fields $\{u(t,\cdot)\}_{t\in[0,1]}$ such that $\partial_t \phi_t = u(t, \phi_t)$ for $0\le t\le 1$. Given $\{\phi_t\}$, we find $u$ by setting $u(t, \cdot) := \partial_t \phi_t \circ \phi_t^{-1}$, and conversely $\{\phi_t\}_{t\in [0,1]}$ may be recovered from $u$ and $\phi_0$ by standard ODE theory. Given a norm $\|\cdot\|_{A}$ on $\Gamma_c(T\mathcal{M})$ we can then define the \textbf{geodesic distance} between $\phi_0,\phi_1\in \operatorname{Diff}_\text{c}(\mathcal{M})$ by \[ \operatorname{dist}_A(\phi_0,\phi_1) := \inf\BRK{\int_0^1 \|u(t) \|_{A} \,dt \,\,:\,\, \mbox{ $\partial_t \phi_t = u(t, \phi_t)$ for $0\le t\le 1$ }}. \] Note that $\operatorname{dist}_A$ forms a semi-metric on $\operatorname{Diff}_\text{c}(\mathcal{M})$, that is, it satisfies the triangle inequality but may fail to be positive. This is the geodesic distance of the \textbf{right-invariant Finsler metric on $\operatorname{Diff}_\text{c}(\mathcal{M})$} induced by $\|\cdot\|_{A}$, which is defined as \[ \|X\|_{\phi,A} := \|X\circ \phi^{-1}\|_A \] for every $\phi\in \operatorname{Diff}_\text{c}(\mathcal{M})$ and $X\in T_\phi \operatorname{Diff}_\text{c}(\mathcal{M})$. If $\|\cdot\|_A$ comes from an inner product, it defines a Riemannian metric on $\operatorname{Diff}_\text{c}(\mathcal{M})$ in a similar manner. See \cite{BBHM13} for more details. The right-invariance of $\operatorname{dist}_A$ is summarized in the following lemma: \begin{lemma}[Right-invariance] \label{lm:right_invariance} For $\psi,\phi_0,\phi_1\in \operatorname{Diff}_\text{c}(\mathcal{M})$, we have \[ \operatorname{dist}_A(\phi_0 \circ \psi, \phi_1 \circ \psi) = \operatorname{dist}_A(\phi_0,\phi_1). \] In particular, \[ \operatorname{dist}_A(\operatorname{Id},\psi) = \operatorname{dist}_A(\operatorname{Id},\psi^{-1}), \] and \[ \operatorname{dist}_A(\operatorname{Id},\phi_1 \circ \phi_0) \le \operatorname{dist}_A(\operatorname{Id}, \phi_1) + \operatorname{dist}_A(\operatorname{Id}, \phi_0). \] \end{lemma} \begin{proof} Let $t\mapsto \phi_t\in \operatorname{Diff}_\text{c}(\mathcal{M})$ be a curve from $\phi_0$ to $\phi_1$. Denote $u_t = \partial_t \phi_t \circ \phi_t^{-1}$. Define $\Phi_t = \phi_t\circ \psi$.
This is a curve from $\phi_0\circ \psi$ to $\phi_1\circ \psi$. We then have \[ \partial_t \Phi_t = \partial_t \phi_t \circ \psi = \partial_t \phi_t \circ \phi_t^{-1} \circ \Phi_t = u_t \circ \Phi_t, \] from which the first claim follows immediately. The second and third claims follow from the first, since \[ \operatorname{dist}_A(\operatorname{Id},\psi^{-1}) = \operatorname{dist}_A(\psi\circ \psi^{-1},\psi^{-1}) = \operatorname{dist}_A(\psi,\operatorname{Id}), \] and \[ \operatorname{dist}_A(\operatorname{Id},\phi_1 \circ \phi_0) \le \operatorname{dist}_A(\operatorname{Id}, \phi_0) + \operatorname{dist}_A(\phi_0,\phi_1 \circ \phi_0) = \operatorname{dist}_A(\operatorname{Id}, \phi_0) + \operatorname{dist}_A(\operatorname{Id},\phi_1). \] \end{proof} We are interested in fractional Sobolev $W^{s,p}$-norms, and in particular in $H^s := W^{s,2}$, for $s\in(0,1)$. We adopt the following as our basic definition, from among a number of equivalent formulations. \begin{definition} \label{def:fractional_Sobolev} For $0<s<1$ and $1\le p<\infty$, the $W^{s,p}$-norm of a function $f\in L^p(\mathbb{R}^n)$ is given by \[ \|f\|_{s,p}^p = \| f\|_{L^p}^p + \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \frac {|f(x)-f(y)|^p}{|x-y|^{n+sp}}\, dx\,dy . \] \end{definition} Given a Riemannian manifold $(\mathcal{M},\mathfrak{g})$ of bounded geometry, this norm can be extended to $\Gamma_c(T\mathcal{M})$ using trivialization by normal coordinate patches on $\mathcal{M}$ (see \cite[Section~2.2]{BBM13} for details). We will denote the induced geodesic distance on $\operatorname{Diff}_c(\mathcal{M})$ by $\operatorname{dist}_{s,p}$. When $p=2$, we will denote $\operatorname{dist}_{s,2}$ by $\operatorname{dist}_s$ for simplicity. Different choices of charts result in equivalent metrics, and therefore the question of vanishing geodesic distance is independent of these choices. Instead of using Definition~\ref{def:fractional_Sobolev} directly, we will bound the $W^{s,p}$-norm using an interpolation inequality: \begin{proposition}[fractional Gagliardo-Nirenberg interpolation inequality] \label{pn:GN_inequality} Assume that $1<p<\infty$. For every $f\in W^{1,p}(\mathbb{R}^n)$ and $s\in (0,1)$, \[ \| f\|_{s,p} \le C_{s,p} \|f\|_{L^p}^{1-s} \|f\|_{1,p}^s\, , \qquad\mbox{ where }\ \ \|f\|_{1,p}^p := \|f\|_{L^p}^p+ \|df\|_{L^p}^p. \] \end{proposition} For a proof, see for example \cite[Corollary~3.2]{BM01}. In fact this is the only property of the $W^{s,p}$-norm that we will use. We remark that when $p=2$, the above inequality (with $C=1$) follows immediately from H\"older's inequality, if one uses the equivalent norm $\| f\|_{s,2}^2 = \int_{\mathbb{R}^n}(1+|\xi|^2)^{s}|\hat f(\xi)|^2 d\xi$, where $\hat f$ denotes the Fourier transform. The main result of this paper is the following. \begin{theorem}\label{main_thm} Let $(\mathcal{M},\mathfrak{g})$ be an $n$-dimensional Riemannian manifold of bounded geometry. \begin{enumerate} \item If $p\in [1,\infty)$ and $s< \min\{ 1, n/p\}$, then $\operatorname{dist}_{s,p}(\phi_0,\phi_1)= 0$ whenever $\phi_0,\phi_1$ belong to the same path-connected component of $\operatorname{Diff}_\text{c}(\mathcal{M})$. \item If $s\ge 1$ or $sp>n$, then $\operatorname{dist}_{s,p}(\phi_0,\phi_1)>0$ for any two distinct $\phi_0,\phi_1\in \operatorname{Diff}_\text{c}(\mathcal{M})$. \end{enumerate} \end{theorem} The second assertion is a direct consequence of known arguments in the case $p=2$. So is the first one for the case $n=1$.
The new point is the vanishing of geodesic distance for all $ s <\min \{1, n/p\}$ whenever $n\ge 2$. Note that Proposition~\ref{pn:GN_inequality}, which is used extensively in the proof of the first part of Theorem~\ref{main_thm}, does not hold for $p=1$. However, Theorem~\ref{main_thm} does hold in this case as well; as explained in more detail in Section~\ref{sec:Wsp}, our proof for vanishing $W^{s,p}$-distance for $p$ close enough to $1$ implies vanishing $W^{s,1}$-distance. In the remainder of this section we quickly verify that known results about the case $p=2$ extend to the more general setting we consider here, and we present the reduction, also well-known in the $H^s$ case, that will allow us to complete the proof of the theorem by showing that $\operatorname{dist}_{s,p}(\operatorname{Id},\Phi)=0$ for a single compactly supported diffeomorphism on $\mathbb{R}^n$. \paragraph{Positive geodesic distance} First, assume that $\phi_0,\phi_1$ are two distinct elements of $\operatorname{Diff}_\text{c}(\mathcal{M})$, and let $u$ be any time-dependent vector field generating a path $\phi:[0,1]\to \operatorname{Diff}_\text{c}(\mathcal{M})$ connecting $\phi_0$ to $\phi_1$, via the ODE $\partial_t\phi_t = u(t,\phi_t), 0<t <1$. The proof of \cite[Theorem~5.7]{MM05} uses a clever integration by parts to show that for any $\rho, \zeta\in C^1_c(\mathcal{M})$, \[ \left| \int_\mathcal{M} \rho (\zeta\circ \psi_1 - \zeta) \mbox{vol}(g) \right| = \left|\int_0^1 \int_\mathcal{M} (\zeta\circ \psi_t) \mbox{div}(\rho u_t) \mbox{vol}(g)\,dt\right|, \qquad \psi_t := \phi_0\circ \phi_t^{-1}. \] By a suitable choice of $\rho, \zeta$, this implies that $0<c\le C\int_0^1 \|u(t)\|_{1,p} dt$ for $p\ge 1$, where the constants depend on $\phi_0,\phi_1,\rho, \zeta, p$. This shows the positivity of the geodesic distance in $W^{1,p}$ for any $p\ge 1$, and hence (since these spaces embed into $W^{1,p}$) in $W^{s,p}$ for $s\ge 1$. On the other hand, if $s>n/p$, then $W^{s,p}$ embeds into some $C^{0,\alpha}$ (see for example \cite[Theorem~8.2]{DPV12}) and hence into $L^\infty$. Thus $\| \partial_t\phi_t\|_{L^\infty} = \| u(t) \|_{L^\infty} \le C \|u(t)\|_{s,p}$, and as noted in \cite[Theorem~4.1]{BBHM13}, the positivity of $\operatorname{dist}_{s,p}$ follows directly: \[ |\phi_1(x)-\phi_0(x)| = \left|\int_0^1 \partial_t\phi_t(x)\, dt\right| \le C\int_0^1 \| u(t)\|_{s,p} dt \qquad\mbox{ for every }x\in \mathcal{M}. \] Note that it also follows that the geodesic distance is positive for $L^\infty=W^{0,\infty}$. For $sp<n=1$, the proof of vanishing geodesic distance in \cite{BBHM13} in the case $p=2$ relies on an explicit construction (incorporated into \eqref{eq:ttheta_def} below) of a transportation scheme from the identity to a single diffeomorphism that has arbitrarily small cost; this arbitrarily small cost follows from the fact that the $W^{s,p}$-norm of the characteristic function of an interval tends to zero with the length of the interval. For general $sp<n=1$, this is well-known and can easily be verified from Definition \ref{def:fractional_Sobolev}. Once this is noted, the proof goes through with no change. \paragraph{Reduction to a single diffeomorphism} The following proposition states an important property of $(\operatorname{Diff}_\text{c}(\mathcal{M}),\operatorname{dist}_{s,p})$ --- it is either a metric space, or it collapses completely, that is, the geodesic distance in any connected component of $\operatorname{Diff}_\text{c}(\mathcal{M})$ vanishes.
In other words, if $(\operatorname{Diff}_\text{c}(\mathcal{M}),\operatorname{dist}_{s,p})$ is not a metric space, then any two diffeomorphisms in the same connected component can be connected by a path of arbitrarily short $W^{s,p}$-length. \begin{proposition} \label{pn:normal_subgroup} Denote by $\operatorname{Diff}_0(\mathcal{M})$ the connected component of the identity (all diffeomorphisms in $\operatorname{Diff}_\text{c}(\mathcal{M})$ for which there exists a curve between them and $\operatorname{Id}$). \begin{enumerate} \item $\operatorname{Diff}_0(\mathcal{M})$ is a simple group. \item $\BRK{\phi : \operatorname{dist}_{s,p}(\operatorname{Id},\phi) = 0}$ is a normal subgroup of $\operatorname{Diff}_0(\mathcal{M})$. Therefore, it is either $\BRK{\operatorname{Id}}$ or the whole $\operatorname{Diff}_0(\mathcal{M})$. \end{enumerate} \end{proposition} This is proved in \cite[p.~15]{BBHM13} (see also \cite[Lemma~7.10]{BBM14}) when $p=2$, and the proof goes through with essentially no change in our setting. We recall the idea. The first conclusion is classical (and is independent of the norm). To establish the second, we consider $\phi, \psi\in \operatorname{Diff}_\text{c}(\mathcal{M})$ such that $\operatorname{dist}_{s,p}(\operatorname{Id}, \phi)= 0$, and we must show that $\operatorname{dist}_{s,p}(\operatorname{Id}, \Phi)=0$ for $\Phi :=\psi^{-1}\circ \phi \circ \psi$. To do this, note that if $\phi_t$, $0\le t\le 1$, is a path connecting $\operatorname{Id}$ to $\phi$, then $\Phi_t := \psi^{-1}\circ \phi_t\circ\psi$ connects $\operatorname{Id}$ to $\Phi$. The conclusion thus follows by verifying that $\int_0^1 \| \partial_t\Phi_t\circ \Phi_t^{-1}\|_{s,p}dt\le C\int_0^1 \| \partial_t\phi_t\circ \phi_t^{-1}\|_{s,p} \, dt $, where $C$ may depend on $\psi, (\mathcal{M},\mathfrak{g}), s,p$ but not $\phi$. In fact, a pointwise inequality of the integrands holds for every $t$. This follows after a computation from the fact that for $h\in C^\infty(M)$ and $\psi\in \operatorname{Diff}_\text{c}(\mathcal{M})$, the operations of pointwise multiplication $u\mapsto h \cdot u$ and composition $u\mapsto u\circ\psi$ are bounded linear operators on $W^{s,p}(\mathcal{M})$; see Theorems~4.2.2 and 4.3.2 in \cite{Tri92}. \paragraph{The strategy for proving vanishing geodesic distance} The proof of part 1 of Theorem~\ref{main_thm} for $n\ge 2$ goes as follows: \begin{enumerate} \item For $sp<n$ and $n\ge 2$, we will show that there exists at least one nontrivial $\Phi\in \operatorname{Diff}_\text{c}(\mathbb{R}^n)$ such that $\operatorname{dist}_{s,p}(\operatorname{Id},\Phi) = 0$. \item For general $(\mathcal{M},\mathfrak{g})$ of bounded geometry, we can push forward this example in $\mathbb{R}^n$ to obtain a diffeomorphism $\widetilde \Phi$, supported in a single coordinate chart used in the definition of the induced $W^{s,p}$ geodesic distance. Then the definitions imply that $\operatorname{dist}_{s,p}(\operatorname{Id}, \widetilde \Phi) = 0$ (see \cite{BBM13} for a similar argument). \item Part 1 of Theorem~\ref{main_thm} then follows from Proposition~\ref{pn:normal_subgroup}. \end{enumerate} In the rest of the paper we treat the first point. For simplicity, we first consider the special case $p=2, \mathcal{M} = \mathbb{R}^2$, and we show that $\operatorname{dist}_{s}(\operatorname{Id}, \Phi) := \operatorname{dist}_{s,2}(\operatorname{Id}, \Phi) = 0$ for a particular $\Phi\in \operatorname{Diff}_\text{c}(\mathbb{R}^2) $.
This construction, carried out in Section \ref{sec:2dc}, contains all the ingredients of more general cases. In Section \ref{sec:HD} we present a much simpler construction that works when $p=2$, $s<1$, and $n\ge 3$. Finally, in Section \ref{sec:Wsp} we show how to modify these arguments to complete the proof of the theorem in the general case. \section{Two-dimensional construction}\label{sec:2dc} In this section we prove the following: \begin{theorem} \label{thm:main_2D} Let $\zeta \in C_c^\infty((0,1)^2)$ satisfy $\zeta \ge 0$, $\partial_1\zeta > -1$. Denote $\phi(x,y) = x + \zeta(x,y)$, and define $\Phi\in \operatorname{Diff}_\text{c}(\mathbb{R}^2)$ by $\Phi(x,y) = (\phi(x,y),y)$. Then $\operatorname{dist}_s(\Phi,\operatorname{Id}) = 0$ for every $s\in[0,1)$. \end{theorem} We start with a general outline and heuristics of the proof. Fix $k\in \mathbb{N}$. In Section~\ref{sec:Step_I_2D} we decompose $\Phi$ as follows: \[ \Phi = \Phi_2\circ \Phi_1, \qquad \Phi_i = (\phi_i(x,y),y) = (x+\zeta_i(x,y),y) \in \operatorname{Diff}_\text{c}(\mathbb{R}^2), \] where $\zeta_i$ is supported on the union of $\approx k$ strips $(0,1)\times I_j$, $|I_j| \approx k^{-1}$. In Sections~\ref{sec:Step_II_2D}--\ref{sec:Step_IV_2D}, we show that $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) = o(1)$ as $k\to \infty$; the proof for $\Phi_2$ is analogous, and since $k$ is arbitrary, the conclusion $\operatorname{dist}_s(\Phi,\operatorname{Id}) = 0$ follows by Lemma~\ref{lm:right_invariance}. In order to prove $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) = o(1)$, we decompose $\Phi_1$ as follows: \[ \Phi_1 = \Gamma^{-1} \circ {\Psi}^{-1} \circ \Theta \circ \Psi, \qquad \Gamma,\Theta,\Psi \in \operatorname{Diff}_\text{c}(\mathbb{R}^2), \] where \begin{enumerate} \item $\Psi(x,y) = (x,\psi(x,y))$ squeezes the intervals $I_j$ into intervals of length $\approx \lambda$ for $\lambda$ of the form $\lambda = e^{-\alpha} k^{-1}$, where $\alpha = \alpha(k)$ is a (moderately large) parameter, to be determined. In Section~\ref{sec:Step_II_2D} we define $\Psi$ and show that $\operatorname{dist}_s(\Psi,\operatorname{Id}) \lesssim \alpha k^{-(1-s)}$. This stage compresses the support of $\Phi_1$ into small sets that can then, in the next stage, be transported large distances at low cost, owing to the subcriticality of $H^s(\mathbb{R}^2)$ for $s<1$. This concentration can be achieved at low cost (for $s<1$) because no point is moved very far. This requires the striped nature of the support of $\Phi_1$, and it is the reason for the decomposition $\Phi=\Phi_2\circ \Phi_1$. \item $\Theta(x,y) = (\theta(x,y),y)$ maps $x$ almost to its right place, that is, $\theta(x,\psi(x,y)) - \phi_1(x,y) \ll 1$. $\Theta$ is defined (as the endpoint of a given flow) via a construction similar to the construction (for $s<1/2$) in \cite{BBHM13,BBM13}; in order for it to work for $s\in [1/2,1)$, we need to regularize the flow (and therefore $\theta(x,\psi(x,y)) \ne \phi_1(x,y)$). We define $\Theta$ in Section~\ref{sec:Step_III_2D} and show that $\operatorname{dist}^2_s(\Theta,\operatorname{Id}) \lesssim k\lambda^{2-s}\delta^{-s}$, where $\delta\ll \lambda$ is a regularization parameter to be determined. The main part of this section consists of proving bounds on $\theta(x,\psi(x,y)) - \phi_1(x,y)$ and on the derivatives of $\theta$.
The key idea in the construction of the flow is that at every given time its support is very small in both $x$ and $y$; the subcriticality of $H^s(\mathbb{R}^2)$ then implies that its $H^s$-norm at any given time is small. For $H^s(\mathbb{R}^n)$, $n>2$ (and more generally, for $W^{s,p}(\mathbb{R}^n)$, $n>sp+1$), the squeezing in the $(n-1)$ $y$-directions done in the previous step is enough to guarantee a small $H^s$-norm of flows in the $x$-direction that do not have small support in the $x$-direction (i.e.,~that the projection of the support on the $x$-axis is not small). This is why in this case there is a much simpler construction in which the subtleties of this stage can be avoided. \item In Section~\ref{sec:Step_IV_2D} we show that the error $\Gamma = {\Psi}^{-1} \circ \Theta \circ \Psi \circ \Phi_1^{-1}$ satisfies $\operatorname{dist}_s(\Gamma,\operatorname{Id}) \lesssim k^s\delta^{1-s}\lambda^{-(1-s)}$, by showing that the affine homotopy between $\operatorname{Id}$ and $\Gamma$ is a path of small $H^s$-length. This uses the bounds on $\theta$ from Section~\ref{sec:Step_III_2D}. \end{enumerate} Finally, we show that $\alpha$ and $\delta$ can be chosen such that, as $k\to \infty$, \[ \operatorname{dist}_s(\Psi,\operatorname{Id}) = o(1), \qquad \operatorname{dist}_s(\Theta,\operatorname{Id}) = o(1), \quad \text{and} \quad \operatorname{dist}_s(\Gamma,\operatorname{Id}) = o(1), \] and then $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) = o(1)$ follows from Lemma~\ref{lm:right_invariance}. A short video presenting the main stages of the construction can be found at the following link: \href{http://www.math.toronto.edu/rjerrard/geo_dist_diffeo/vanishing.html}{www.math.toronto.edu{/}rjerrard{/}geo\_dist\_diffeo{/}vanishing.html}. The flow in the video involves no regularization in the construction of $\Theta$ (as it would not be visible in this resolution), and therefore the error-correction term $\Gamma$ is not needed, and $\Theta = \Psi \circ \Phi_1 \circ \Psi^{-1}$. The video contains the following stages: \begin{enumerate} \item Compression of several disjoint intervals in the vertical direction (a path from $\operatorname{Id}$ to $\Psi$). \item A flow in the horizontal direction, from $\Psi$ to $\Theta \circ \Psi = \Psi\circ \Phi_1$. Note that at any given time the flow is supported on a union of very small rectangles. \item Undoing the squeezing stage, that is, flowing from $\Psi \circ \Phi_1$ to $\Phi_1$. \item Repeating steps 1--3 for $\Phi_2$, resulting in $\Phi_2\circ \Phi_1 = \Phi$. \end{enumerate} \begin{remark} Throughout this paper, we use big $O$ and small $o$ notations with respect to the limit $k\to \infty$. We will also use notations such as $|I_j| \approx k^{-1}$ above, meaning that there exist $c_2\ge c_1>0$ such that $c_1k^{-1} \le |I_j| \le c_2 k^{-1}$. Finally, $a \lesssim b$ means $a \le Cb$ for some constant $C$ (that can depend on the dimension $n$ and the Sobolev exponent $s$). \end{remark} \subsection{Step I: Splitting into strips} \label{sec:Step_I_2D} Fix $k \in \mathbb{N}$. Define the following intervals: \[ S_1^i := \Brk{\frac{8i-3}{k}, \frac{8i + 3}{k}}, \qquad L_1^i := \Brk{\frac{8i-2}{k}, \frac{8i + 2}{k}}, \qquad i\in \mathbb{Z}, \] \[ S_2^i := \Brk{\frac{8i+1}{k}, \frac{8i + 7}{k}}, \qquad L_2^i := \Brk{\frac{8i+2}{k}, \frac{8i + 6}{k}}, \qquad i\in \mathbb{Z}, \] and denote $S_j = (\cup_i S_j^i)\cap [0,1]$, $L_j = (\cup_i L_j^i)\cap[0,1]$.
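For later reference, we record a quick check (nothing more than a restatement of the definitions above): \[ L_j^i\subset S_j^i, \qquad [0,1]\setminus S_1 \subset L_2, \qquad [0,1]\setminus S_2 \subset L_1, \qquad L_1\cup L_2 = [0,1], \] where the last identity holds because $L_2^i$ ends at $\frac{8i+6}{k}$, exactly where $L_1^{i+1}$ begins. The inclusion $[0,1]\setminus S_2\subset L_1$ is the one used below when verifying the properties of $\zeta_2$.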
Let $\chi:[-4,4]\to [0,1]$ be a smooth function satisfying $\operatorname{supp} \chi\subset (-3,3)$ and $\chi|_{[-2,2]} \equiv 1$. Extend $\chi$ periodically, and define $\chi_k(y) = \chi(ky)$ on $(0,1)$. Note that $\operatorname{supp} \chi_k \subset S_1$, $\chi_k|_{L_1} \equiv 1$, and $|\chi_k'| \lesssim k$. See Figure~\ref{fig:Step_I}. \begin{figure} \caption{A sketch of $\chi_k$. The solid grey part of the top strip below the axis denotes $L_1$, where $\chi_k\equiv 1$; the dotted part of this strip denotes $L_2$. The marked part of the middle strip denotes $S_1$, which contains $\operatorname{supp}(\chi_k)$, and hence $\operatorname{supp}(\zeta_1(x,\cdot))$. The marked part of the bottom strip denotes $S_2$, which contains $\operatorname{supp}(\zeta_2(x,\cdot))$. } \label{fig:Step_I} \end{figure} Define $\zeta_1(x,y) = \zeta(x,y) \chi_k(y)$. Note that \begin{equation} \label{eq:zeta_1_bounds_0} \zeta_1|_{(0,1)\times L_1} = \zeta, \end{equation} \begin{equation} \label{eq:zeta_1_bounds_1} \operatorname{supp}(\zeta_1) \subset (0,1)\times S_1, \end{equation} and \begin{equation} \label{eq:zeta_1_bounds_2} 0\le \zeta_1\le C,\quad -1 + C^{-1}< \partial_x \zeta_1 < C, \quad |\partial_y \zeta_1| < Ck, \end{equation} where $C$ is independent of $k$. The bounds \eqref{eq:zeta_1_bounds_2} follow from the bounds $0\le \zeta\le C$, $|d\zeta|<C$, $\partial_x \zeta > -1 + C^{-1}$ and $|\chi_k'| < Ck$. Define \[ \Phi_1 = (\phi_1(x,y),y) = (x+\zeta_1(x,y),y), \qquad \Phi_2 = \Phi\circ \Phi_1^{-1} = (\phi_2(x,y),y). \] From \eqref{eq:zeta_1_bounds_0}--\eqref{eq:zeta_1_bounds_2}, it follows that we can write $\phi_2(x,y) = x+\zeta_2(x,y)$, with $\zeta_2$ satisfying the bounds \eqref{eq:zeta_1_bounds_2}, and property \eqref{eq:zeta_1_bounds_1} with $S_2$ in place of $S_1$. Indeed, if $(x,y)\in (0,1)^2\setminus (0,1)\times S_2$, then $y\in L_1$, and hence, from \eqref{eq:zeta_1_bounds_0} it follows that $\phi_2(x,y) = x$, and therefore \eqref{eq:zeta_1_bounds_1} holds for $\zeta_2$ (with $S_1$ replaced by $S_2$). Since $\zeta_1\le \zeta$ and $\zeta_2(\phi_1(x,y),y) = \zeta(x,y) - \zeta_1(x,y)$, it follows that $0\le \zeta_2 \le C$. Finally, \eqref{eq:zeta_1_bounds_2} implies that $C^{-1} < \partial_x \phi_1 < C$ and $|\partial_y \phi_1| < Ck$; the inverse function theorem then implies the bounds \eqref{eq:zeta_1_bounds_2} for $\zeta_2$. In the rest of this section we are going to prove that $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) =o(1)$. This relies only on properties \eqref{eq:zeta_1_bounds_1}--\eqref{eq:zeta_1_bounds_2}; hence, the result also applies to $\Phi_2$, since $\zeta_2$ satisfies the same assumptions. \subsection{Step II: Squeezing the strips} \label{sec:Step_II_2D} \begin{lemma} \label{lem:squeezing_2D} Fix $\alpha \gg 1$. There exists a diffeomorphism $\Psi\in \operatorname{Diff}_\text{c}(\mathbb{R}^2)$, $\Psi(x,y) = (x,\psi(x,y))$, such that \begin{equation} \label{eq:squeezing_2D} \psi(x,y) = e^{-\alpha}\brk{y-\frac{8i}{k}} + \frac{8i}{k}, \qquad (x,y)\in [0,1]\times (S_1^i\cap [0,1]), \end{equation} and \begin{equation} \label{eq:squeezing_2D_H_s_dist} \operatorname{dist}_s(\Psi,\operatorname{Id}) \lesssim \alpha k^{-(1-s)}. \end{equation} \end{lemma} In other words, $\psi$ squeezes each interval $S_1^i$ linearly around its midpoint by a factor of $e^{-\alpha}$, at small cost. \begin{proof} Let $u_1 \in C_c^\infty((-4,4))$ be such that $u_1(y) = -y$ for $y\in [-3,3]$, and extend it periodically. Let $\chi \in C_c^\infty(\mathbb{R}^2)$ be such that $\chi\equiv 1$ on $[0,1]^2$.
Define $u_k(x,y) := \frac{\alpha}{k} u_1(ky)\chi(x,y)$. Note that \[ \|u_k\|_{L^2} \lesssim \alpha/k, \qquad \|d u_k\|_{L^2} \lesssim \alpha. \] Therefore, by Proposition~\ref{pn:GN_inequality} we have \begin{equation} \label{eq:squeezing_2D_vec_field} \|u_k\|_{H^s} \lesssim \frac{\alpha^{1-s}}{k^{1-s}} \alpha^s = \frac{\alpha}{k^{1-s}}. \end{equation} Let $\psi(t,x,y)$ be the solution of \[ \partial_t \psi = u_k(x,\psi), \qquad \psi(0,x,y) = y. \] Define $\psi(x,y) := \psi(1,x,y)$, and $\Psi(x,y) := (x,\psi(x,y))$. A direct calculation shows that for $(x,y)\in [0,1]\times [-3/k, 3/k]$, $\psi(y) = y e^{-\alpha}$, so by periodicity and the fact that $\chi\equiv 1$ on $[0,1]^2$, $\psi$ satisfies \eqref{eq:squeezing_2D}. The trajectory from $\operatorname{Id}$ to $\Psi$ defined by $\Psi_t(x,y) = (x,\psi(t,x,y))$, together with the bound \eqref{eq:squeezing_2D_vec_field}, implies \eqref{eq:squeezing_2D_H_s_dist}. \end{proof} Note that in $[0,1]^2$, $\psi$ is independent of $x$. Therefore, slightly abusing notation, we write \[ \Psi(x,y) = (x,\psi(y)), \qquad \Psi^{-1}(x,y) = (x,\psi^{-1}(y)). \] We will later have $\alpha$ depend on $k$. Since eventually we want $\operatorname{dist}_s(\Psi,\operatorname{Id}) = o(1)$ when $k\to \infty$, \eqref{eq:squeezing_2D_H_s_dist} implies the bound \begin{equation} \label{eq:bounds_alpha} \alpha \ll k^{1-s}. \end{equation} \subsection{Step III: Flowing along the squeezed strips} \label{sec:Step_III_2D} Denote \[ \lambda(\alpha,k) = \frac{e^{-\alpha}}{k}, \] and consider \[ \Phi_1 \circ \Psi^{-1} (x,y) = (x + \zeta_1(x,\psi^{-1}(y)), \psi^{-1}(y)) =: (x + \tilde{\zeta}_1(x,y), \psi^{-1}(y)). \] Since $\zeta_1$ is supported inside $(0,1)\times S_1$, we have that $\tilde{\zeta}_1 =\zeta_1\circ \Psi^{-1}$ is supported on $(0,1) \times \psi(S_1)$, that is, on $\approx k$ strips of thickness $\approx \lambda$. Furthermore, from \eqref{eq:zeta_1_bounds_2} and \eqref{eq:squeezing_2D} we have \begin{equation} \label{eq:tilde_zeta_1_bounds} \tilde{\zeta}_1\ge 0,\quad -1 + C^{-1}< \partial_x \tilde{\zeta}_1 < C, \quad |\partial_y \tilde{\zeta}_1| < C\lambda^{-1}. \end{equation} We start by defining a path from $\operatorname{Id}$ to \[ \tilde{\Theta} := \Psi \circ \Phi_1 \circ \Psi^{-1} (x,y) = (x + \tilde{\zeta}_1(x,y), y), \] using a slight variation of the construction of \cite[Lemma 3.2]{BBHM13} that proves that the $H^s$ geodesic distance is vanishing for $s<1/2$. Let \begin{equation} \label{eq:tau_g_def} \tau_{y}(x) = x -\lambda \tilde{\zeta}_1(x,y), \qquad g_{y} = \tau_{y}^{-1}. \end{equation} It is clear that $\tau_{y}$ is increasing for all small enough $\lambda$. We will henceforth restrict our attention to such $\lambda$, for which the definition of $g_y$ makes sense. We will also write $\tau(x,y)$ and $g(t,y)$ instead of $\tau_{y}(x)$ and $g_{y} (t)$. Define \[ \tilde{\Theta}(t,x,y) = (\tilde{\theta}(t,x,y), y) \] by \begin{equation} \label{eq:ttheta_def} \tilde{\theta}(t,x,y) := \begin{cases} x &\mbox{ if }t\le \tau(x,y) \\ x+ (1+\lambda)^{-1}(t- \tau(x,y)) &\mbox{ if }\tau(x,y) \le t \le x+\tilde{\zeta}_1(x,y) \\ x+\tilde{\zeta}_1(x,y) &\mbox{ if }x+\tilde{\zeta}_1(x,y)\le t \le 1. \end{cases} \end{equation} Note that $\tilde{\theta}$ solves \[ \frac \partial{\partial t}\tilde{\theta}(t,x,y) = u(t,\tilde{\theta}(t,x,y),y), \qquad \tilde{\theta}(0,x)=x, \] where \begin{equation} \label{eq:def_u} u_t(x,y) = u(t,x,y) := (1+\lambda)^{-1} \mathds{1}_{t < x < g(t,y)} = (1+\lambda)^{-1} \mathds{1}_{\tau(x,y) < t < x} . \end{equation} See Figure~\ref{fig:BBHM}. 
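As a quick sanity check (this is just a restatement of \eqref{eq:tau_g_def} and \eqref{eq:ttheta_def}, recorded for the reader's convenience), note that on the middle time interval the trajectory moves with constant speed $(1+\lambda)^{-1}$ for a total time $x+\tilde{\zeta}_1(x,y)-\tau(x,y)=(1+\lambda)\tilde{\zeta}_1(x,y)$, so that \[ \tilde{\theta}(1,x,y) = x + (1+\lambda)^{-1}\brk{x+\tilde{\zeta}_1(x,y)-\tau(x,y)} = x+\tilde{\zeta}_1(x,y); \] that is, the unregularized flow reaches $\tilde{\Theta}$ exactly at time $1$ (here we use that $x+\tilde{\zeta}_1(x,y)\le 1$, which holds since $\tilde{\zeta}_1\le \zeta\circ\Psi^{-1}$ and $x+\zeta(x,\cdot)<1$ on $(0,1)^2$).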
\begin{figure} \caption{A sketch of the flow $\tilde{\theta}$. The dashed line shows the trajectory starting from a point $x$ over time. Its slope between $t=\tau(x,y)$ and $t=x$ is $(1+\lambda)^{-1}$. The grey domain is the support of the vector field $u$.} \label{fig:BBHM} \end{figure} We will see below, in Lemma~\ref{lem:properties_g}, that $g(t,y) = t + \lambda\tilde{\zeta}_1(t,y) + O(\lambda^2)$. Since for every fixed $x$, $\tilde{\zeta}_1(x,\cdot)$ is supported on $\approx k$ intervals of thickness $\approx \lambda$, it follows from \eqref{eq:def_u} and \eqref{eq:bounds_g} that for every fixed $t$, $u_t$ is supported on $\approx k$ disjoint compact sets, each contained in a square of edge length $\approx \lambda$, see Figure~\ref{fig:supp_u}. \begin{figure} \caption{A sketch of the support of $u_t$ for a fixed $t$. The support consists of $\approx k$ sets, each contained in a square of diameter $\approx \lambda$. Since the derivatives of $g$ are uniformly bounded \eqref{eq:bounds_dg}, the boundary of the support consists of $\approx k$ sets of length $\approx \lambda$.} \label{fig:supp_u} \end{figure} We obtained that $u_t$ has a small support, which is essential for using the subcriticality of $H^s$. However, since $u_t \notin H^s$ for $s\ge 1/2$, we first need to regularize. To do this, fix $\delta \ll \lambda$ (to be determined) and define \begin{equation} \label{eq:u_delta_def} u_{\delta,t}(x,y) = u_\delta(t,x,y) := \int_\mathbb{R} u(t,x-x',y) \eta_\delta(x')dx' = \frac 1{1+\lambda}\int_{x-g(t,y)}^{x-t} \eta_\delta(x')dx' \end{equation} for $\eta_\delta\in C^\infty_c(\mathbb{R})$ such that \[ \eta_\delta\ge 0, \qquad\int_{-\infty}^0\eta_\delta = \int_0^\infty \eta_\delta= \frac{1}{2},\qquad \mbox{supp}(\eta_\delta)\subset [-\delta, \delta], \qquad \|\eta_\delta\|_\infty \le \frac C {\delta}. \] Let $\theta(t,x,y)$ be the solution of \begin{equation} \label{eq:theta_def} \frac \partial{\partial t}\theta(t,x,y) = u_\delta(t,\theta(t,x,y),y), \qquad \theta(0,x,y)=x \end{equation} and define $\theta(x,y) = \theta(1,x,y)$. Define $\Theta\in \operatorname{Diff}_\text{c}(\mathbb{R}^2)$ by \begin{equation} \label{eq:Theta_def} \Theta(x,y) = (\theta(x,y),y). \end{equation} In the rest of this section (which is by far the most technical part of this paper), we prove some estimates on $\Theta$. First, we prove that the path between $\operatorname{Id}$ and $\Theta$ defined by flowing along $u_\delta$ is short, and therefore the distance from $\operatorname{Id}$ to $\Theta$ is small (for an appropriate choice of $\lambda$ and $\delta$): \begin{lemma} \label{lem:Theta_cost} \begin{equation} \label{eq:Theta_cost} \operatorname{dist}_s(\operatorname{Id},\Theta) \lesssim \frac{k^{1/2}\lambda^{(2-s)/2}}{\delta^{s/2}} \end{equation} \end{lemma} The proof of this lemma will follow from Lemma~\ref{lem:bounds_u_delta} below. We then prove that the regularization does not change the endpoint $\Theta$ by much (with respect to $\tilde{\Theta}$), and we prove bounds on the derivatives of $\Theta$. 
These are collected in the following proposition: \begin{proposition} \label{pn:Theta} The diffeomorphism $\Psi^{-1}\circ \Theta \circ \Psi$ is of the form \begin{equation} \label{eq:Psi_Theta_Psi_form} \Psi^{-1}\circ \Theta \circ \Psi = (x + \sigma(x,y), y), \end{equation} where $\sigma(x,y) \ge 0$ is supported on $(0,1)\times S_1$ and satisfies \begin{equation} \label{eq:bounds_sigma} |\sigma(x,y) - \zeta_1(x,y)| \lesssim \frac{\delta}{\lambda}, \qquad -1 + C^{-1} < \partial_x \sigma < C, \qquad |\partial_y \sigma| \lesssim k. \end{equation} \end{proposition} This proposition is proved at the end of this subsection, after some preliminary lemmas. The conclusion of the proof of Theorem~\ref{thm:main_2D} (in Section~\ref{sec:Step_IV_2D} below) only uses \eqref{eq:Theta_cost}--\eqref{eq:bounds_sigma} and not the technical details that appear below in this subsection. We begin the proofs of Lemma~\ref{lem:Theta_cost} and Proposition~\ref{pn:Theta} with some estimates on the unregularized flow $u$: \begin{lemma} \label{lem:properties_g} The following bounds hold: \begin{equation} \label{eq:bounds_g} g(t,y) = t + \lambda\tilde{\zeta}_1(t,y) + O(\lambda^2), \qquad g(t,y) = t \iff \tilde{\zeta}_1(t,y) = 0. \end{equation} \begin{equation} \label{eq:bounds_dg} \partial_1g = 1+ \lambda \partial_1\tilde{\zeta}_1 + O(\lambda^2) = 1 +O(\lambda), \qquad |\partial_2g| < C. \end{equation} \end{lemma} \begin{proof} We fix $y$ and write $g(t) = g(t,y)$ and $\tilde{\zeta}_1(t) = \tilde{\zeta}_1(t,y)$. Let $\tilde g(t) = t+\lambda\tilde{\zeta}_1(t)$, and let $e(t) = g(t)-\tilde g(t)$. Then \[ t = \tau (g(t)) = \tau(t+\lambda \tilde{\zeta}_1(t) + e(t)) = t+\lambda \tilde{\zeta}_1(t) + e(t) -\lambda \tilde{\zeta}_1\brk{t+\lambda \tilde{\zeta}_1(t) + e(t)}. \] Thus $e = e(t)$ solves \[ f(e;t) = e +\lambda\tilde{\zeta}_1(t) - \lambda \tilde{\zeta}_1\brk{t+\lambda \tilde{\zeta}_1(t) + e} = 0. \] Since $|f(0;t)|\le \lambda^2 \|\partial_1\tilde{\zeta}_1\|_\infty\|\tilde{\zeta}_1\|_\infty <C\lambda^2$ for all $t$ and $\partial_e f \ge 1- \lambda \|\partial_1\tilde{\zeta}_1\|_\infty \ge 1-C\lambda$ (here we use \eqref{eq:tilde_zeta_1_bounds}), the Intermediate Value Theorem implies that there is a unique $e(t)$ such that $f(e(t);t)=0$, and $e(t)=O(\lambda^2)$. The second part of \eqref{eq:bounds_g} is immediate from the definition of $g$. To prove \eqref{eq:bounds_dg}, we use \eqref{eq:tilde_zeta_1_bounds} and calculate \[ \partial_1 g = \partial_1 \tau ^{-1} = \frac 1{\partial_1 \tau\circ g} = \frac 1{1-\lambda \partial_1 \tilde{\zeta}_1\circ g} = 1+ \lambda \partial_1\tilde{\zeta}_1 + O(\lambda^2), \] and \[ \Abs{\partial_2 g} = \Abs{\frac{\partial_2 \tau}{\partial_1 \tau} } = \Abs{ \frac{\lambda \partial_2 \tilde{\zeta}_1}{1-\lambda \partial_1 \tilde{\zeta}_1} } < C. \] \end{proof} The following lemma, and in particular \eqref{eq:bounds_H_s_norm_u_delta}, immediately implies Lemma~\ref{lem:Theta_cost}. \begin{lemma} \label{lem:bounds_u_delta} For a fixed $t$, $u_{\delta,t}(x,y)\in W^{1,\infty}(\mathbb{R}^2)$, and \begin{equation} \label{eq:bounds_du_delta} \|du_{\delta,t}\|_\infty \lesssim \frac{1}{\delta}. \end{equation} Moreover, \begin{equation} \label{eq:bounds_H_s_norm_u_delta} \|u_{\delta,t}\|^2_{H^s} \lesssim \frac{k\lambda^{2-s}}{\delta^s}. \end{equation} \end{lemma} \begin{proof} $|\partial_1 u_{\delta,t}| < C/\delta$ follows from the definition of $u_\delta$ and the bounds on $\eta_\delta$. We now show that $u_\delta$ is also Lipschitz with respect to the $y$ variable.
Indeed, note that \[ \Abs{u(t,x,y+h) - u(t,x,y)} = (1+\lambda)^{-1} \mathds{1}_{g(t,y)< x < g(t,y+h)}, \] if $g(t,y+h)>g(t,y)$, and similarly if not. By \eqref{eq:bounds_dg}, \[ \Abs{g(t,y+h) - g(t,y)} \le |h|\,\|\partial_2 g\|_\infty \le C|h| \] and therefore we have \[ \| u(t,\cdot,y+h) - u(t,\cdot,y) \|_1 \le (1+\lambda)^{-1}C|h| \lesssim |h|. \] Finally, \[ \Abs{u_\delta(t,x,y+h) - u_\delta(t,x,y)} \le \|\eta_\delta\|_\infty \,\| u(t,\cdot,y+h) - u(t,\cdot,y) \|_1 \lesssim \frac{|h|}{\delta}, \] which completes the proof of \eqref{eq:bounds_du_delta}. Now, similar to $u_t$, $u_{\delta,t}$ is supported on $\approx k$ disjoint compact sets, each contained in a square of edge length $\approx \lambda$. Since $u_t$ is an indicator function, $du_{\delta,t}$ is supported on a $\delta$-neighborhood of the boundary of $\operatorname{supp} u_t$. Since $|\partial_2 g| \le C$ (see \eqref{eq:bounds_dg}), it follows that $d u_{\delta,t}$ is supported on $\approx k$ sets of area $\approx \delta\lambda$ (see Figure~\ref{fig:supp_u}). Since $\|u_{\delta,t}\|_\infty < 1$, and $u_{\delta,t}$ is supported on a set of measure $\approx k\lambda^2$, we have \[ \|u_{\delta,t}\|_2^2 \lesssim k\lambda^2. \] Since $|d u_{\delta,t}| \le C/\delta$, and $du_{\delta,t}$ is supported on a set of measure $\approx k\lambda\delta$, \[ \|du_{\delta,t}\|_2^2 \lesssim \frac{k\lambda}{\delta}. \] Estimate \eqref{eq:bounds_H_s_norm_u_delta} follows from these bounds and Proposition~\ref{pn:GN_inequality}. \end{proof} Since we eventually want $u_{\delta,t}$ to have a small $H^s$ norm, we will henceforth assume that $\delta$ satisfies \begin{equation} \label{eq:bounds_delta} k\lambda^{2-s} \ll \delta^s \ll k^{-s^2/(1-s)}\lambda^s, \end{equation} where the upper-bound assumption (which is more restrictive than the natural $\delta\ll \lambda$) will be needed later. In particular, note that these assumptions put some restrictions on the possible choices of $\lambda = e^{-\alpha}/k$, in addition to \eqref{eq:bounds_alpha}. We will give concrete choices of $\alpha$ and $\delta$ that satisfy these bounds at the end of the proof in Section~\ref{sec:Step_IV_2D}. The following lemma states that the amount by which $\Theta$ ``misses'' the target $\tilde{\Theta}$ because of the mollification is small: \begin{lemma} \label{lem:Theta_error} $\operatorname{supp}(\theta(x,y) - x)$ is a subset of a $\delta$-thickening in the $x$ direction of $\operatorname{supp}(\tilde{\zeta}_1)$, that is, \begin{equation} \label{eq:theta_supp} \operatorname{supp}(\theta(x,y) - x) \subset \BRK{(x,y)\, :\, \exists (x',y)\in \operatorname{supp}(\tilde{\zeta}_1), \,\, |x-x'| < \delta}. \end{equation} In particular, for small enough $\delta$, $\operatorname{supp}(\theta(x,y) - x)\subset (0,1)^2$. Moreover, \begin{equation} \label{eq:theta_error} |\theta(x,y) - (x+ \tilde{\zeta}_1(x,y))| \le 3\frac{\delta}{\lambda}. \end{equation} \end{lemma} \begin{proof} Throughout this proof $y$ is fixed and does not play a role, and we will omit it for notational brevity. Conclusion \eqref{eq:theta_supp} follows immediately from the definition of $\theta$. We now prove \eqref{eq:theta_error}.
Define \begin{align*} u_\delta^- &= (1+\lambda)^{-1} \mathds{1}_{\BRK{u_\delta = (1+\lambda)^{-1}}} = (1+\lambda)^{-1} \mathds{1}_{t+\delta < x < g(t)-\delta}\ , \\ u_\delta^+ &= (1+\lambda)^{-1} \frac{1}{2}\brk{\mathds{1}_{\operatorname{supp} u} + \mathds{1}_{\operatorname{supp} u_\delta}} = (1+\lambda)^{-1} \brk{\mathds{1}_{t < x < g(t)} + \frac{1}{2} \mathds{1}_{\operatorname{supp} u_\delta \setminus \operatorname{supp} u}} \end{align*} and let $\theta^\pm(t,x)$ solve \[ \frac \partial{\partial t}\theta^\pm(t,x) = u_\delta^\pm(t,\theta^\pm(t,x)), \qquad \theta^\pm(0,x)=x. \] and let $\theta^\pm(x) := \theta^\pm (1, x)$. It is clear that \[ u_\delta^- \le u_\delta \le u_\delta^+ \] pointwise. It follows that $\theta^-(t,x)\le \theta(t,x)\le \theta^+(t,x)$ for all $t\ge 0$ and all $x$, and in particular $\theta^-(x)\le \theta(x)\le \theta^+(x)$. See Figure~\ref{fig:u_delta_plus}. \begin{figure} \caption{A sketch of the flow $\theta^+$ along $u_\delta^+$. The dark grey area is $\operatorname{supp} u$, where $u_\delta^+ = (1+\lambda)^{-1}$. The light grey area is $\operatorname{supp} u_\delta\setminus \operatorname{supp} u$, which is at most of width $\delta$; in this region $u_\delta^+ = \frac{1}{2}(1+\lambda)^{-1}$. } \label{fig:u_delta_plus} \end{figure} First consider $\theta^+(t,x)$.Note that $\theta^+(t,x) = x$ for $t\le t_1$, where $t_1$ is the first time such that $(t_1,x) \in \operatorname{supp} u_\delta$. Since $\operatorname{supp} \eta \subset [-\delta,\delta]$ we have \[ t_1 \ge \tau(x-\delta). \] Since $\partial_1 \tau = 1 + O(\lambda)$ (see \eqref{eq:tilde_zeta_1_bounds}--\eqref{eq:tau_g_def}), it follows that $t_1 \ge \tau(x) - 2\delta$. From $t_1$, until time $t_2$ defined by \[ g(t_2) = \theta^+(t_2,x), \] i.e.~the first time such that $(t_2,\theta^+(t_2,x))\in \operatorname{supp} u$, we have $\theta^+(t,x) < x + \frac{1}{2}(t-t_1)$ (note that for certain values of $x$, $(t,\theta^+(t,x))\notin \operatorname{supp} u$ for any $t$. In this case the analysis is simpler). Using this inequality, \eqref{eq:bounds_dg} and the bound on $t_1$, it follows that $t_2 - t_1 \le 5\delta$. Indeed, \[ x+\frac 12(t_2-t_1)>\theta^+(t_2,x)= g(t_2) > g(\tau(x)) + (1-C\lambda)(t_2-\tau(x)) \] and since $g(\tau(x)) = x$, we see that $\frac 12(t_2-t_1)> (1-C\lambda)(t_2-t_1 - 2\delta)$, from which the claim follows. Therefore $\theta^+(t_2,x) < x + 3\delta$. Until the time $t_3$ when $\theta^+(t,x)$ leaves $\operatorname{supp} u$, $\theta^+$ flows according to the flow of $u$ with initial condition $\theta^+(t_2,x)$. Therefore, \[ \theta^+(t_3,x) = \theta^+(t_2,x) + \tilde{\zeta}_1(\theta^+(t_2,x)) < x + \tilde{\zeta}_1(x)+ C\delta, \] where we used \eqref{eq:tilde_zeta_1_bounds} again. By the same arguments as for the time interval $[t_1,t_2]$, it follows that for $t>t_3$, $\theta^+(t,x)$ increases by less than $\delta$. Therefore we obtain the upper bound \begin{equation} \label{eq:theta_error_upper_bound} \theta(x) \le \theta^+(x) < x + \tilde{\zeta}_1(x)+ C\delta, \end{equation} for an appropriate constant $C$. We now consider $u_\delta^-$ and $\theta^-(t,x)$. Note that \[ u_\delta^-(t,x) = (1+\lambda)^{-1}\mathds{1}_{\tau(x+\delta)<t<x-\delta} > (1+\lambda)^{-1}\mathds{1}_{\tau(x) + 2\delta<t<x-\delta}, \] where we used $\partial_1 \tau = 1 + O(\lambda)$ in the inequality. Defining $t' = t +\delta$, we have \begin{equation} \label{eq:u_delta_v_delta} u_\delta^-(t',x) \ge v_\delta^-(t',x) := (1+\lambda)^{-1}\mathds{1}_{\max\BRK{\tau(x) + 3\delta,x}<t'<x}. 
\end{equation} By definition \eqref{eq:tau_g_def} of $\tau$ \[ \tau(x) + 3\delta = x - \lambda\tilde{\zeta}_1(x) + 3\delta = x - \lambda\brk{\tilde{\zeta}_1(x) - 3\frac{\delta}{\lambda}}. \] It follows that the flow by $v_\delta^-(t,x)$, that is the solution $\bar{\theta}^-$ of \[ \frac{\partial}{\partial t} \bar\theta^-(t,x) = v_\delta^-(t,\bar\theta^-(t,x)) \qquad \bar\theta^-(0,x)=x, \] satisfies \[ \bar\theta^-(1,x) = \max\BRK{x+\tilde{\zeta}_1(x) - 3\frac{\delta}{\lambda}, x}. \] Moreover, for $\delta$ small enough (depending only on $\zeta$), $\bar\theta^-(1-\delta,x) = \bar\theta^-(1,x)$. By \eqref{eq:u_delta_v_delta}, it follows that \begin{equation} \label{eq:theta_error_lower_bound} \theta(x) \ge \theta^-(1,x) \ge \bar\theta^-(1-\delta,x) \ge x+\tilde{\zeta}_1(x) - 3\frac{\delta}{\lambda}. \end{equation} \eqref{eq:theta_error_upper_bound} and \eqref{eq:theta_error_lower_bound} imply \eqref{eq:theta_error}. \end{proof} Next, we prove bounds on the derivatives of $\theta$. \begin{lemma} \label{lem:theta_x_bounds} There exists $C\ge 1$, depending only on $\zeta$, such that \begin{equation} \label{eq:bound_pl_x_theta} C^{-1} \le \partial_x \theta \le C \qquad \text{for all $(x,y)$.} \end{equation} \end{lemma} \begin{proof} As in the proof of Lemma \ref{lem:Theta_error}, we will omit $y$ for notational brevity, and because it does not play any role. Recall that $\partial_t \theta(t,x) = u_\delta(t,\theta)$, and consider the Eulerian version of this flow, that is the equation \begin{equation} \partial_t w(t,x) + u_\delta(t,x) \partial_xw(t,x) = 0 \label{eq:transport}\end{equation} with initial data \begin{equation}\label{eq:wzero} w(0,x)= x. \end{equation} If $w$ is a solution then \[ \frac d{dt}w(t,\theta(t,x)) = \partial_x w(t,\theta)\partial_t\theta +\partial_t w(t,\theta) = 0, \] using the ODE for $\theta$ and the PDE for $w$. The initial data then imply that $w(t, \theta(t,x)) = x$ for all $t$, and hence that \[ w(t,\cdot) = \theta(t,\cdot)^{-1}. \] Next, define \[ q = \partial_t w+\partial_x w. \] Since $u_\delta(t,x) = 0$ when $t$ is close to $0$ or $1$, we have that $\partial_t w=0$ for such values of $t$. In particular, $q(0,\cdot) = 1$ and $q(1,\cdot)=\partial_x w(1,\cdot)=\partial_x \theta(1,\cdot)^{-1}$, which is the quantity we need to estimate. We use $q$ and not $\partial_x w$ directly since it will allow us to exploit the fact, reflected in the smallness of $(\partial_t+\partial_x)u_\delta$, that the coefficients in \eqref{eq:transport} are nearly translation-invariant in the $\partial_t+\partial_x$ direction. We compute \begin{align*} \partial_t q = \partial_t(\partial_t w+\partial_x w) = (\partial_t+\partial_x)\partial_t w &= -(\partial_t+\partial_x)(u_\delta \partial_x w)= -u_\delta \partial_x q - (\partial_tu_\delta+\partial_x u_\delta)\partial_x w. \end{align*} We further deduce from \eqref{eq:transport} that \[ \partial_x w = q+ u_\delta \partial_x w,\qquad\mbox{ and thus }\qquad \partial_x w = \frac q{1-u_\delta}, \] so we can rewrite the above equation as \[ \partial_t q = -u_\delta \partial_x q - \frac{\partial_t u_\delta+\partial_x u_\delta}{1-u_\delta} q. \] It follows that \begin{equation}\label{eq:growth} \frac d{dt} q(t,\theta(t,x)) =- \frac{\partial_t u_\delta+\partial_x u_\delta}{1-u_\delta}\big(t,\theta(t,x)\big) \ q(t,\theta(t,x)). 
\end{equation} Therefore, if we obtain a bound \begin{equation} \label{eq:gronwall_estimate} \int_0^1 \Abs{\frac{\partial_t u_\delta+\partial_x u_\delta}{1-u_\delta}\big(t,\theta(t,x)\big) }\,dt < C, \end{equation} for some $C$ independent of $x$ (and $y$), we obtain \eqref{eq:bound_pl_x_theta} by Gronwall's inequality. From definition \eqref{eq:u_delta_def} of $u_\delta$, we have \begin{align} \label{eq:u_t_and_u_x} \partial_x u_\delta(t,x) &= \frac 1{1+\lambda} \Brk{\eta_\delta(x-t) - \eta_\delta(x-g(t))}, \\ \partial_t u_\delta(t,x) &= \frac 1{1+\lambda} \Brk{ -\eta_\delta(x-t) + g'(t) \eta_\delta(x-g(t))}, \end{align} and therefore, using \eqref{eq:bounds_dg}, we have \begin{equation}\label{eq:u_t_plus_u_x} \begin{split} \Abs{\partial_t u_\delta +\partial_x u_\delta } &= \frac 1{1+\lambda} \eta_\delta(x-g(t))\ \Abs{g'(t)-1}\\ &\le \frac {C\lambda}{1+\lambda} \eta_\delta(x-g(t)). \end{split} \end{equation} Because of \eqref{eq:growth} and \eqref{eq:u_t_plus_u_x}, we want to estimate $\frac{\eta_\delta(x-g(t))}{1-u_\delta(t,x)}$. We have \begin{align*} 1 - u_\delta(t,x) &=1 - \frac 1{1+\lambda}\int_{x-g(t)}^{x-t} \eta_\delta(x')dx' \\ &\ge 1 - \frac 1{1+\lambda}\int_{x-g(t)}^{\infty} \eta_\delta(x')dx'\\ &= 1 - \frac 1{1+\lambda} \mu_\delta (x-g(t)), \qquad \mbox{ for } \ \ \mu_\delta(x) := \int_x^{\infty} \eta_\delta(x') dx' , \end{align*} and therefore \[ \frac{\eta_\delta(x-g(t))}{1-u_\delta(t,x)} \le \frac{(1+\lambda) \eta_\delta(x-g(t))} {1+ \lambda - \mu_\delta(x-g(t))} = -\frac{(1+\lambda) \mu_\delta'(x-g(t))} {1+ \lambda - \mu_\delta(x-g(t))} . \] It follows that \begin{equation}\label{eq:dtud1} \int_0^1 \Abs{\frac{\partial_t u_\delta+\partial_x u_\delta}{1-u_\delta}\big(t,\theta(t,x)\big) }\,dt \le C\lambda \int_0^1 \frac{- \mu_\delta'(\theta(t,x)-g(t))} {1+\lambda - \mu_\delta(\theta(t,x)-g(t))} \, dt. \end{equation} For the following computation, $x$ is fixed. We wish to rewrite the integral in terms of the variable \[ \alpha = \alpha(t) = \mu_\delta(\theta(t,x)-g(t)), \] which increases from $0$ to $1$ as $t$ goes from $0$ to $1$ for $\delta, \lambda$ sufficiently small. To estimate $\alpha'(t)$, note that by the definition of $\theta$, we have \begin{align*} \partial_t \theta (t,x)= u_\delta(t,\theta(t,x)) = \frac 1{1+\lambda}\int_{\theta(t,x)-g(t)}^{\theta(t,x)-t} \eta_\delta(x')dx' &\le \frac{1}{1+\lambda} \int_{\theta(t,x)-g(t)}^\infty \eta_\delta(x')\,dx' \\ &=\frac{\alpha(t)}{1+\lambda}. \end{align*} Since $g' \ge 1-c\lambda$ for some $c<1$, depending only on $\zeta$, it follows that \begin{align*} \partial_t(\theta(t,x) - g(t)) &\le \frac{\alpha(t)}{1+\lambda} - 1 + c\lambda = \frac { \alpha(t) -(1+\lambda)(1-c\lambda) } {1+\lambda} . \end{align*} This is always negative for small enough $\lambda$, as $0\le\alpha\le 1$ and $c<1$. Thus \[ -\mu_\delta'(\theta(t,x)-g(t)) = \frac{\alpha'(t)}{ - \partial_t(\theta(x,t)-g(t))} \le \frac{(1+\lambda)\alpha'(t)}{ (1+\lambda)(1-c\lambda) - \alpha(t)}. \] So we can change variables in \eqref{eq:dtud1} to find that \begin{align*} \int_0^1 \Abs{\frac{\partial_t u_\delta+\partial_x u_\delta}{1-u_\delta}\big(t,\theta(t,x)\big) }\,dt &\le C\lambda \int_0^1 \frac 1{1+ \lambda-\alpha} \ \frac{1 }{(1+\lambda)(1-c\lambda) -\alpha}\ d\alpha. 
\\ \end{align*} For $\lambda < \frac {1-c}{2c}$, the integrand on the right is bounded by $(1+\frac 12(1-c)\lambda -\alpha)^{-2}$, so we integrate to conclude that \[ \int_0^1 \Abs{\frac{\partial_t u_\delta+\partial_x u_\delta}{1-u_\delta}\big(t,\theta(t,x)\big) }\,dt \le C\lambda\brk{\frac 12(1-c)\lambda}^{-1} \le C. \] We thus obtain \eqref{eq:gronwall_estimate}, which completes the proof. \end{proof} \begin{lemma} \label{lem:theta_y_bounds} For every $\lambda>0$ small enough, there exists a choice of mollifier $\eta_\delta$ in definition \eqref{eq:u_delta_def} such that \begin{equation} \label{eq:bound_pl_y_theta} |\partial_y \theta| \le C\lambda^{-1} \qquad \text{for all $(x,y)$}, \end{equation} where $C>0$ depends only on $\zeta$. \end{lemma} \begin{proof} Fix $h\in \mathbb{R}$, $|h|\ll\delta\lambda$, and consider $\theta(t,x,y)$ and $\theta(t,x,y+h)$. By Lemma~\ref{lem:properties_g} we have that \[ \Abs{\tau(t,y+h) - \tau (t,y)} < c|h|, \] for some $c>0$. In particular, \[ \begin{split} u(t-c|h|,x,y+h) &= (1+\lambda)^{-1} \mathds{1}_{\tau(x,y+h) < t - c|h| < x} \\ &\le (1+\lambda)^{-1} \mathds{1}_{\tau(x,y) - c|h| < t -c|h|< x} = (1+\lambda)^{-1} \mathds{1}_{\tau(x,y) < t < x + c|h|} \\ &= (1+\lambda)^{-1} \mathds{1}_{t-c|h|<x<g(t,y) } =: u^h(t,x,y). \end{split} \] Therefore $u_\delta(t,x,y+h) \le u^h_\delta(t + c|h|,x,y)$, where $u^h_\delta$ is the mollification of $u^h$ as in \eqref{eq:u_delta_def}. Define $\theta^h(t,x,y)$ by \[ \frac \partial{\partial t}\theta^h(t,x,y) = u^h_\delta(t,\theta^h(t,x,y),y), \qquad \theta^h(0,x,y)=x. \] It follows that \[ \theta(t - c|h|,x,y+h) \le \theta^h(t,x,y), \] and since for $h$ small enough (independent of $x$ and $y$), $\theta(1,x,y+h) = \theta(1-c|h|,x,y+h)$, we have \[ \theta(1,x,y+h) \le \theta^h(1,x,y). \] We now compare $\theta^h(t,x,y)$ and $\theta(t,x,y)$ and show that \begin{equation} \label{eq:theta_h_minus_theta} \theta(1,x,y+h) - \theta(1,x,y) \le \theta^h(1,x,y) - \theta(1,x,y) \lesssim \frac{|h|}{\lambda}. \end{equation} By symmetry it also follows that \[ \theta(1,x,y) - \theta(1,x,y+h ) \lesssim \frac{|h|}{\lambda}, \] which completes the proof. It remains to prove the right-hand inequality in \eqref{eq:theta_h_minus_theta}. In order to simplify notation, we will henceforth write $\theta(t) = \theta(t,x,y)$, $g(t) = g(t,y)$ and so on. For this, it is convenient to use a smooth mollifier $\eta_\delta$ with support in $[-\delta, \delta]$ such that \[ 0\le \eta_\delta(x) \le \frac {1+\lambda}{2\delta}. \] This is necessarily very close to the normalized characteristic function of the interval $[-\delta, \delta]$ in $L^p$ for every $p<\infty$. By the definition of $\theta^h(t)$, it follows (see Figure~\ref{fig:y_derivatives}) that \[ \theta^h(t) = \theta(t) \] for every $t<t_0$, where $t_0$ is defined by \[ \theta(t_0) = t_0 + \delta. \] \begin{figure} \caption{A sketch of the trajectories of $\theta(t)$ (lower dashed line) and $\theta^h(t)$ (upper dashed line). $\theta(t) = \theta^h(t)$ for $t\le t_0$. $\theta(t)$ is constant after $t_1$ (where $\theta(t_1) = t_1-\delta$).
$\theta^h(t)$ is constant after $t_2 = t_1 + \delta$, see \eqref{eq:theta_h_minus_theta_t_1}.} \label{fig:y_derivatives} \end{figure} When $\theta(t) -t \ge -\delta$, we have \[ \frac d{dt}\theta = u_\delta(t,\theta) = \frac 1{1+\lambda} \int_{\theta - g(t)}^{\theta-t}\eta(x')dx' \le \frac 1{1+\lambda} \int_{-\infty}^{\theta-t}\eta(x')dx' \le \min \left\{ \frac {1}{2\delta}(\theta - t +\delta), \frac 1{1+\lambda} \right\}, \] and when $\theta(t)-t\le -\delta$ we have $\frac{d\theta}{dt}=0$. Let $\alpha(t) = \theta(t)-t$. It follows that \[ \frac {d\alpha}{dt} \le -\frac \lambda{1+\lambda} + \min \left\{ \frac {1}{2\delta}( \alpha - \delta\frac{1-\lambda}{1+\lambda}), 0 \right\} \ \qquad\mbox{ as long as }\alpha(t)\ge -\delta, \] and $\frac{d\alpha}{dt}=-1$ when $\alpha(t)\le -\delta$. If we write $\alpha_0(t)$ to denote the function solving the above ODE (with $\le$ replaced by $=$) with initial data $\alpha_0(t_0)=\delta$, then $\alpha(t)\le \alpha_0(t)$ for $t\ge t_0$. This leads to \[ \alpha(t)\le \begin{cases} \delta - \frac\lambda{1+\lambda}(t-t_0) &\mbox{ if } t_0 \le t \le t_a = t_0+ 2\delta \\ \delta - \delta \frac{2\lambda}{1+\lambda}\exp(\frac{t-t_a}{2\delta}) &\mbox{ if }t_a\le t \end{cases} \] as long as $\alpha(t)\ge -\delta$. We now define $t_1$ to be the unique time such that $\alpha(t_1)=-\delta$, and similarly $t_2$ such that $\alpha(t_2)=-2\delta$ (see Figure~\ref{fig:y_derivatives}). We deduce from the above that \begin{equation}\label{tminust} t_1 - t_0 \le 2\delta \left( 1 + \log(\frac{1+\lambda}{\lambda})\right), \qquad t_2 - t_1 = \delta. \end{equation} Next we estimate $\theta^h(t_2)-\theta(t_2)$. First note that \[ 0 \le u^h_\delta(t,x) - u_\delta(t,x) = \frac 1{1+\lambda} \int_{x-t}^{x-t+c|h|}\eta_\delta(x')dx' \le \frac {c|h|}{2\delta}. \] We can similarly estimate $u_\delta(t,x') - u_\delta(t,x)$, to find that \begin{align*} \frac d{dt}(\theta^h - \theta) &= [u^h_\delta(t, \theta^h) - u_\delta(t,\theta^h)] + [u_\delta(t,\theta^h) - u_\delta(t,\theta)] \\ &\le \frac {c|h|}{2\delta} + \frac {1}{2\delta} (\theta^h-\theta). \end{align*} (We have implicitly used the fact that $\theta^h(t)\ge \theta(t)$ for all $t$). Thus, Gr\"onwall's inequality implies that for $t>t_0$, \[ \theta^h(t) - \theta(t) \le c|h| \left[ \exp\left(\frac{ t-t_0}{2\delta} \right)-1 \right]. \] In particular, it follows from \eqref{tminust} that \[ \theta^h(t_2)-\theta(t_2) \le c|h| \left[ \exp\left( \frac 32 + \log(\frac{1+\lambda}{\lambda}) \right)-1 \right] \lesssim \frac {|h|}\lambda. \] Thus, \begin{equation} \label{eq:theta_h_minus_theta_t_1} \theta^h(t_2) - t_2 = \theta^h(t_2) - \theta(t_2) +\alpha(t_2) \le -2\delta+ c\frac{|h|}{\lambda} < -\delta \end{equation} for $h$ small enough. Since $\theta^h(t) - t$ is a decreasing function, this inequality continues to hold after time $t_2$. It then follows from the definitions that $u^h_\delta(t, \theta^h(t)) = u_\delta(t,\theta(t))= 0$ for $t\ge t_2$, and therefore \[ \theta^h(1) - \theta(1) = \theta^h(t_2) - \theta(t_2) \lesssim \frac{|h|}{\lambda}, \] which proves \eqref{eq:theta_h_minus_theta} and completes the proof. \end{proof} We conclude this section by completing the proof of Proposition~\ref{pn:Theta}: {\flushleft \emph{Proof of Proposition~\ref{pn:Theta}}:} The structure \eqref{eq:Psi_Theta_Psi_form} of $\Psi^{-1}\circ \Theta\circ \Psi$ is immediate from the definitions of $\Psi$ and $\Theta$. 
We see from \eqref{eq:theta_supp} that $\operatorname{supp} \sigma$ is a subset of a $\delta$-thickening in the $x$-direction of $\operatorname{supp} \zeta_1$. Therefore, \eqref{eq:zeta_1_bounds_1} implies $\operatorname{supp} \sigma\subset (0,1)\times S_1$ for small enough $\delta$. The first bound in \eqref{eq:bounds_sigma} follows from Lemma~\ref{lem:Theta_error}, the second from Lemma~\ref{lem:theta_x_bounds}, and the third from Lemma~\ref{lem:theta_y_bounds}, using the fact that $\Psi$ is linearly squeezing strips on which $\theta$ is supported by a factor of $e^{-\alpha} = k\lambda$. { \ding{110}} \subsection{Step IV: Error correction --- affine homotopy} \label{sec:Step_IV_2D} In this subsection we correct the error obtained by the regularization in the previous subsection via affine homotopy, and then complete the proof. The properties of the target of this affine homotopy, which follow from Proposition~\ref{pn:Theta}, are summed up in the following corollary: \begin{corollary} \label{cor:Gamma} The diffeomorphism $\Gamma = \Psi^{-1}\circ \Theta \circ \Psi\circ \Phi_1^{-1}$ is of the form \begin{equation} \label{eq:Psi_Theta_Psi_form.b} \Gamma = (\gamma(x,y), y) = (x + \xi(x,y), y), \end{equation} where $\xi(x,y) \ge 0$ is supported on $(0,1)\times S_1$ and satisfies \begin{equation} \label{eq:bounds_xi} |\xi(x,y)| \lesssim \frac{\delta}{\lambda}, \qquad -1 + C^{-1} < \partial_x \xi < C, \qquad |\partial_y \xi| \lesssim k. \end{equation} \end{corollary} \begin{proof} This is immediate from Proposition~\ref{pn:Theta}, the definition of $\Phi_1$ and the bounds \eqref{eq:zeta_1_bounds_2}. \end{proof} \begin{lemma} \label{lem:dist_s_Gamma_id} \begin{equation} \label{eq:dist_s_Gamma_id} \operatorname{dist}_s(\Gamma,\operatorname{Id}) \lesssim \frac{\delta^{1-s}}{\lambda^{1-s}}k^s. \end{equation} \end{lemma} \begin{proof} Consider an affine homotopy $\Gamma_t$ from $\operatorname{Id}$ to $\Gamma$, that is, \[ \Gamma_t(x,y) = (x + t\xi(x,y),y) =: (\gamma_{t,y}(x),y) \] We then have $\partial_t \Gamma_t = u_t(\Gamma_t)$, where \[ u_t(x,y) = (\xi(\Gamma_t^{-1}(x,y)),0) = (\xi(\gamma_{t,y}^{-1}(x),y),0). \] Note that $u_t$ is supported on a subset of the unit square, because $\xi$ is supported on a subset of the unit square and $\Gamma$ is a diffeomorphism of the unit square. Since $|\xi|\lesssim \delta\lambda^{-1}$, we have \begin{equation} \label{eq:L_2_norm_affine} \|u_t\|_{L^2} \lesssim \frac{\delta}{\lambda}. \end{equation} Next, we have \[ \partial_x u_t(x,y) = \partial_x \xi\, \partial_x \gamma_{t,y}^{-1}(x), \quad \partial_y u_t(x,y) = \partial_x \xi\, \partial_y \gamma_{t,y}^{-1}(x) + \partial_y \xi. \] Since, by \eqref{eq:bounds_xi}, $-1+C^{-1}<\partial_x \xi<C$, we obtain that $|\partial_x \gamma_{t,y}^{-1}| = |1 + t\partial_x\xi|^{-1} <C$ and therefore $|\partial_x u|<C$. Next, using \eqref{eq:bounds_xi} again, we have \[ \Abs{\partial_y \gamma_{t,y}^{-1}(x)} \le \Abs{\frac{\partial_y \gamma_{t,y}}{\partial_x \gamma_{t,y}}} \lesssim k, \] and therefore $|\partial_y u_t| \lesssim k$. We conclude that \begin{equation} \label{eq:H_1_norm_affine} \|u_t\|_{H^1} \lesssim k. \end{equation} Using Proposition~\ref{pn:GN_inequality}, \eqref{eq:L_2_norm_affine}--\eqref{eq:H_1_norm_affine} imply \eqref{eq:dist_s_Gamma_id}. \end{proof} We conclude now the proof of Theorem~\ref{thm:main_2D}. 
We showed that \[ \Phi_1 = \Gamma^{-1} \circ {\Psi}^{-1} \circ \Theta \circ \Psi, \qquad \Gamma,\Theta,\Psi \in \operatorname{Diff}_\text{c}(\mathbb{R}^2), \] where (following Lemma~\ref{lem:squeezing_2D}, \eqref{eq:Theta_cost} and Lemma~\ref{lem:dist_s_Gamma_id}) \[ \operatorname{dist}_s(\Psi,\operatorname{Id}) \lesssim \alpha k^{-(1-s)}, \qquad \operatorname{dist}_s(\Theta,\operatorname{Id}) \lesssim \frac{k^{1/2}\lambda^{(2-s)/2}}{\delta^{s/2}}, \qquad \operatorname{dist}_s(\Gamma,\operatorname{Id}) \lesssim \frac{\delta^{1-s}}{\lambda^{1-s}}k^s, \qquad \lambda = \frac{e^{-\alpha}}{k}. \] If we choose, say \[ \alpha = (\log k)^2, \qquad \lambda = \frac{1}{k^{1+\log k}}, \qquad \delta = \frac{1}{k^{\log k+\sqrt{\log k}}}, \] we have, for any $s<1$, \[ \begin{split} \operatorname{dist}_s(\Psi,\operatorname{Id}) &\lesssim (\log k)^2 k^{-(1-s)} = o(1), \\ \operatorname{dist}_s(\Theta,\operatorname{Id}) &\lesssim k^{-(1-s)\log k + \frac{1}{2}s\sqrt{\log k}-\frac{1-s}{2}} = o(1), \\ \operatorname{dist}_s(\Gamma,\operatorname{Id}) &\lesssim k^{1 -(1-s) \sqrt{\log k}} = o(1), \end{split} \] and therefore $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) = o(1)$, which completes the proof. \begin{remark} Since we choose $\alpha$ and $\delta$ in an $s$-independent way, we constructed a sequence of paths from $\operatorname{Id}$ to $\Phi$ that are of asymptotically vanishing $H^s$-cost for any $s<1$. It follows that by choosing appropriate sequences of exponents $s_n \nearrow 1$ and constants $c_n \searrow 0$, we have \[ \operatorname{dist}_{H^{<1}}(\Phi, \operatorname{Id}) = 0, \] where the $H^{<1}$-norm is defined by \[ \| f\|_{H^{<1}} := \sum_{n=1}^\infty c_n \| f\|_{H^{s_n}}. \] \end{remark} \section{Higher-dimensional construction} \label{sec:HD} In this section we present a simpler construction in $\mathbb{R}^n$ for $n\ge 3$. Since we often want to split $\mathbb{R}^n = \mathbb{R}\times \mathbb{R}^{n-1}$, it is convenient to write $m=n-1$. \begin{theorem} \label{thm:main_HD} Let $n\ge 3$, and denote by $(x,y)$ the coordinates on $\mathbb{R}^{n}$, where $x\in \mathbb{R}$ and $y\in \mathbb{R}^m$. Let $\zeta \in C_c^\infty((0,1)^{n})$ satisfying $\zeta \ge 0$, $\partial_1\zeta > -1$. Denote $\phi(x,y) = x + \zeta(x,y)$. Define $\Phi\in \operatorname{Diff}_\text{c}(\mathbb{R}^{1+m})$ by $\Phi(x,y) = (\phi(x,y),y)$. Then $\operatorname{dist}_s(\Phi,\operatorname{Id}) = 0$ for every $s\in[0,1)$. \end{theorem} While in principle one can adjust the construction from the two-dimensional case to this setting, we can take advantage of the fact of the higher dimensionality to make a simpler construction, as outlined below: First, in Section~\ref{sec:Step_I_HD} we decompose $\Phi$ as follows: \[ \Phi = \Phi_{2^m} \circ \ldots \circ \Phi_2\circ \Phi_1, \qquad \Phi_i = (\phi_i(x,y),y) = (x+\zeta_i(x,y),y) \in \operatorname{Diff}_\text{c}(\mathbb{R}^{1+m}), \] where $\zeta_i$ is supported on the union of $\approx k^m$ "tubes" $(0,1)\times I_j$, where $I_j$ are $m$-dimensional cubes of edge length $\approx k^{-1}$. This is a generalization of the construction in Section~\ref{sec:Step_I_2D}. In the rest of Section~\ref{sec:HD} we show that $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) = o(1)$ as $k\to \infty$, and the same holds for all the other $\Phi_i$s. Since $k$ is arbitrary, the conclusion $\operatorname{dist}_s(\Phi,\operatorname{Id}) = 0$ follows by Lemma~\ref{lm:right_invariance}. 
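For the reader's convenience we record the exponent arithmetic behind the middle estimate in the proof of Theorem~\ref{thm:main_2D} above (the first and third bounds follow similarly from the choices of $\alpha$, $\lambda$ and $\delta$): \[ \frac{k^{1/2}\lambda^{(2-s)/2}}{\delta^{s/2}} = k^{\frac12 - \frac{2-s}{2}(1+\log k) + \frac{s}{2}\left(\log k + \sqrt{\log k}\right)} = k^{-(1-s)\log k + \frac12 s\sqrt{\log k} - \frac{1-s}{2}}, \] which indeed tends to $0$ as $k\to\infty$ for every fixed $s<1$; the same computation, with general $p$ and $n$, yields the corresponding bound in Section~\ref{sec:Wsp}.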
In order to prove $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) = o(1)$, we decompose $\Phi_1$ as \[ \Phi_1 = \Psi^{-1} \circ \Gamma \circ \Psi, \qquad \Psi, \Gamma\in \operatorname{Diff}_\text{c}(\mathbb{R}^{1+m}), \] where \begin{enumerate} \item $\Psi(x,y) = (x,\psi(x,y))$ squeezes the $m$-dimensional cubes $I_j$ on which $\Phi_1$ is supported by a factor of $k^{\log k}$. In Section~\ref{sec:Step_II_HD}, we define $\Psi(x,y)$ and show that $\operatorname{dist}_s(\Psi,\operatorname{Id}) \lesssim (\log k)^2 k^{-(1-s)} = o(1)$. This is analogous to Section~\ref{sec:Step_II_2D}, with $\alpha = (\log k)^2$. \item $\Gamma = \Psi \circ \Phi_1 \circ \Psi^{-1}$. Unlike in the two-dimensional case, we do not have to construct a complicated flow along the strips (as in Section~\ref{sec:Step_III_2D}, which is the main part of the proof). This is because the squeezing in $m$ dimensions is enough to guarantee small norm, as explained in Section~\ref{sec:2dc}. Instead, in Section~\ref{sec:Step_III_HD}, we show that the affine homotopy between $\operatorname{Id}$ and $\Gamma$ is a path of small $H^s$ distance, and therefore $\operatorname{dist}_s(\Gamma,\operatorname{Id}) \lesssim k^{s-(m/2 -s) \log k} = o(1)$. \end{enumerate} It then follows from Lemma~\ref{lm:right_invariance} that $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) = o(1)$. \subsection{Step I: Splitting into strips} \label{sec:Step_I_HD} Fix $k\in \mathbb{N}$, and consider the lattice $\frac{4}{k}\mathbb{Z}^m \subset \mathbb{R}^m$. We partition $\mathbb{Z}^m$ into $2^m$ lattices: \[ 2\mathbb{Z}^m,\, 2\mathbb{Z}^m + e_1,\, \ldots,\, 2\mathbb{Z}^m + \sum_{i=1}^m e_i, \] where $\{e_i\}_{i=1}^m$ is the standard basis of $\mathbb{R}^m$, and similarly for the lattice $\frac{4}{k}\mathbb{Z}^m$. We index the different lattices as $Z_I$, $I\in \mathbb{Z}_2^m$, ordered by \[ (0,\ldots,0), (1,0,\ldots,0), (0,1,0,\ldots,0), \ldots, (0,1,1,\ldots,1), (1,\ldots,1). \] Sometimes we will denote the indices by $1,\ldots, 2^m$ according to this order. For each $I\in \mathbb{Z}^m_2$, denote \[ L_I := \brk{Z_I + \Brk{-2/k,2/k}^m} \cap [0,1]^m, \qquad S_I := \brk{Z_I + \brk{-3/k,3/k}^m} \cap [0,1]^m. \] Note that $\cup L_I = [0,1]^m$ and that $L_I$ may only intersect $L_J$ at its boundary. We now define diffeomorphisms $\Phi_I(x,y) = (x + \zeta_I(x,y),y)$, such that $\Phi = \Phi_{2^m} \circ\ldots \circ \Phi_{1}$, \begin{equation} \label{eq:zeta_1_bounds_0_HD} \Phi_{I} \circ\ldots \circ \Phi_{1} |_{(0,1)\times \cup_{J\le I} L_J} = \Phi, \end{equation} \begin{equation} \label{eq:zeta_1_bounds_1_HD} \operatorname{supp}(\zeta_I) \subset (0,1)\times S_I, \end{equation} and \begin{equation} \label{eq:zeta_1_bounds_2_HD} 0\le \zeta_I\le C,\quad -1 + C^{-1}< \partial_x \zeta_I < C, \quad |\partial_y \zeta_I| < Ck, \end{equation} for some $C$ independent of $k$. Let $\chi_I(y)$ be a bump function such that $\chi_I|_{L_I} \equiv 1$, $\operatorname{supp}\chi_I \subset S_I$ and $|d\chi_I| < Ck$. Define \[ \zeta_1(x,y) = \zeta(x,y) \chi_1(y). \] For $I=2,\ldots, 2^m -1$, define \[ \tilde{\Phi}_I := \Phi\circ \Phi_1^{-1} \circ \ldots \circ \Phi_{I-1}^{-1} = (x+ \tilde{\zeta}_I(x,y),y), \] and then \[ \zeta_I(x,y) = \tilde{\zeta}_I(x,y) \chi_I(y). \] Finally, define \[ \Phi_{2^m} := \Phi\circ \Phi_1^{-1} \circ \ldots \circ \Phi_{2^m-1}^{-1}. \] A direct calculation shows that $\Phi_I$ satisfies \eqref{eq:zeta_1_bounds_0_HD}-\eqref{eq:zeta_1_bounds_2_HD}. In the rest of this section we are going to prove that $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) = o(1)$.
This relies only on properties \eqref{eq:zeta_1_bounds_1_HD}--\eqref{eq:zeta_1_bounds_2_HD}, hence the result also applies to $\Phi_I$, for all $I\in \mathbb{Z}_2^m$, since $\zeta_I$ satisfies the same assumptions. \subsection{Step II: Squeezing the strips} \label{sec:Step_II_HD} \begin{lemma} \label{lem:squeezing_HD} Fix $\alpha \gg 1$. There exists a diffeomorphism $\Psi\in \operatorname{Diff}_\text{c}(\mathbb{R}^{1+m})$, $\Psi(x,y) = (x,\psi(x,y))$, such that \begin{equation} \label{eq:squeezing_HD} \psi(x,y) = e^{-\alpha}\brk{y-z} + z, \end{equation} for every $x\in [0,1]$ and $y\in S_1$ such that $z\in \frac{8}{k}\mathbb{Z}^m$ is the closest element to $y$ in $\frac{8}{k}\mathbb{Z}^m$. Moreover, \begin{equation} \label{eq:squeezing_HD_H_s_dist} \operatorname{dist}_s(\Psi,\operatorname{Id}) \lesssim \alpha k^{-(1-s)}. \end{equation} \end{lemma} \begin{proof} Let $u_1 \in C_c^\infty((-4,4)^m)$, such that $u_1(y) = -y$ for $y\in [-3,3]^m$, and extend periodically to $\mathbb{R}^m$. Let $\chi \in C_c^\infty(\mathbb{R}^{1+m})$ such that $\chi\equiv 1$ on $[0,1]^{1+m}$. Define $u_k(x,y) := \frac{\alpha}{k} u_1(ky)\chi(x,y)$. The proof continues in the same way as the proof of Lemma~\ref{lem:squeezing_2D}. \end{proof} Note that in $[0,1]^{1+m}$, $\psi$ is independent of $x$. Therefore, slightly abusing notation, we write \[ \Psi(x,y) = (x,\psi(y)), \qquad \Psi^{-1}(x,y) = (x,\psi^{-1}(y)). \] We will later have $\alpha$ depend on $k$. \subsection{Step III: Affine homotopy} \label{sec:Step_III_HD} \begin{lemma} \label{lem:dist_s_Gamma_id_HD} \[ \operatorname{dist}_s(\Gamma,\operatorname{Id}) \lesssim k^{m/2}\lambda^{m/2-s} = k^s e^{-(m/2-s)\alpha} \] where $\Gamma = \Psi \circ \Phi_1 \circ \Psi^{-1}$ and $\lambda = e^{-\alpha}/k$. \end{lemma} \begin{proof} Note that \[ \Gamma = (x + \zeta_1(x,\psi^{-1}(y)),y), \] and denote \[ \xi(x,y) := \zeta_1(x,\psi^{-1}(y)), \qquad \gamma(x,y) = x + \zeta_1(x,\psi^{-1}(y)). \] It follows from the definitions of $\zeta_1$ \eqref{eq:zeta_1_bounds_1_HD} and $\psi$ \eqref{eq:squeezing_HD} that $\xi$ is supported inside $(0,1)\times \psi(S_1)$, i.e., inside $\approx k^m$ "tubes" which are translations of $(0,1) \times [-3\lambda,3\lambda]^m$. In particular, \begin{equation} \label{eq:xi_support_HD} \text{Vol}(\operatorname{supp} \xi) \lesssim k^m\lambda^m. \end{equation} Furthermore, as in \eqref{eq:tilde_zeta_1_bounds}, we have from \eqref{eq:zeta_1_bounds_2_HD} that \begin{equation} \label{eq:xi_bounds_HD} 0\le \xi\le C,\quad -1 + C^{-1}< \partial_x \xi < C, \quad |\partial_y \xi| < C\lambda^{-1}. \end{equation} Consider now an affine homotopy $\Gamma_t$ from $\operatorname{Id}$ to $\Gamma$, that is, \[ \Gamma_t(x,y) = (x + t\xi(x,y),y). \] The same calculation as in Lemma~\ref{lem:dist_s_Gamma_id}, using the estimates \eqref{eq:xi_support_HD}--\eqref{eq:xi_bounds_HD}, yields the wanted bound on $\operatorname{dist}_s(\operatorname{Id},\Gamma)$. \end{proof} We conclude now the proof of Theorem~\ref{thm:main_HD}. We showed that \[ \Phi_1 = \Psi^{-1} \circ \Gamma \circ \Psi, \] where (following Lemmas~\ref{lem:squeezing_HD}~\ref{lem:dist_s_Gamma_id_HD}) \[ \operatorname{dist}_s(\Psi,\operatorname{Id}) \lesssim \alpha k^{-(1-s)}, \qquad \operatorname{dist}_s(\Gamma,\operatorname{Id}) \lesssim k^s e^{-(m/2-s)\alpha}. \] Recall that $m = n-1\ge 2$ by hypothesis. 
If we choose, say \[ \alpha = (\log k)^2, \] we have, for any $s<1$, \[ \begin{split} \operatorname{dist}_s(\Psi,\operatorname{Id}) &\lesssim (\log k)^2 k^{-(1-s)} = o(1), \\ \operatorname{dist}_s(\Gamma,\operatorname{Id}) &\lesssim k^{s-(m/2-s)\log k} = o(1), \end{split} \] and therefore $\operatorname{dist}_s(\Phi_1,\operatorname{Id}) = o(1)$, which completes the proof. \section{The construction for $W^{s,p}(\mathbb{R}^n)$, $n \ge 2$.}\label{sec:Wsp} In this section we explain how to modify the arguments presented above in order to extend our earlier construction to the induced $W^{s,p}$ geodesic distance on $\operatorname{Diff}_\text{c}(\mathbb{R}^n)$ for $n\ge 2$. \begin{theorem} \label{thm:main_HD2} Let $n\ge 2$, and denote by $(x,y)$ the coordinates on $\mathbb{R}^{n}$, where $x\in \mathbb{R}$ and $y\in \mathbb{R}^m$ for $m = n-1$. Let $\zeta \in C_c^\infty((0,1)^{n})$ satisfying $\zeta \ge 0$, $\partial_1\zeta > -1$. Denote $\phi(x,y) = x + \zeta(x,y)$. Define $\Phi\in \operatorname{Diff}_\text{c}(\mathbb{R}^{n})$ by $\Phi(x,y) = (\phi(x,y),y)$. Then $\operatorname{dist}_{s,p}(\Phi,\operatorname{Id}) = 0$ for every $s\in[0,1)$ and $p\ge 1$ such that $sp<n$. \end{theorem} As explained at the end of Section \ref{sec_2}, this will complete the proof of Theorem \ref{main_thm}. We will use the interpolation inequality of Proposition \ref{pn:GN_inequality} to estimate $W^{s,p}$-norms. This is not valid for $p=1$, but for functions $u$ with compact support, it follows easily from the definition \eqref{def:fractional_Sobolev} and H\"older's inequality that $\| u\|_{s,1} \le C(q,\operatorname{supp}(u)) \| u\|_{s,q}$ for every $q>1$, so the $p=1$ case follows from estimating $\| u\|_{s,q}$ for $q>1$, for $q$ close enough to $1$ (in the construction below the vector fields are independent of the exponent). \begin{proof} {\bf 1. Splitting into strips and squeezing the strips} Fix $k\in \mathbb{N}$. We start exactly as in Section \ref{sec:Step_I_HD} by writing $\Phi = \Phi_{2^m} \circ\ldots \circ \Phi_{1}$, where $\Phi_I$ satisfies \eqref{eq:zeta_1_bounds_1_HD}, \eqref{eq:zeta_1_bounds_2_HD} for $I = 1,\ldots, 2^m$. It now suffices to show that $\operatorname{dist}_{s,p}(\operatorname{Id}, \Phi_1) = o(1)$ as $k\to \infty$, at a rate that depends only on the constants in \eqref{eq:zeta_1_bounds_1_HD}, \eqref{eq:zeta_1_bounds_2_HD}, and that thus applies to $\Phi_2,\ldots, \Phi_{2^m}$ as well. To do this, we start with the (higher-dimensional) squeezing diffeomorphism $\Psi$ from Lemma \ref{lem:squeezing_HD}. Then the interpolation inequality from Proposition \ref{pn:GN_inequality} yields \begin{equation}\label{Psi.HD} \operatorname{dist}_{s,p}(\Psi,\operatorname{Id}) \lesssim \alpha k^{-(1-s)}\qquad\mbox{ for all $p\in (1,\infty)$}. \end{equation} {\bf 2. Flowing along the squeezed strips}. We will now follow the procedure of Section \ref{sec:2dc} and write \begin{equation}\label{GD} \Phi_1 = \Gamma^{-1} \circ {\Psi}^{-1} \circ \Theta \circ \Psi, \qquad \Gamma,\Theta,\Psi \in \operatorname{Diff}_\text{c}(\mathbb{R}^2), \end{equation} where the construction of $\Theta, \Gamma$ and accompanying estimates closely follow the two-dimensional constructions in Sections \ref{sec:Step_III_2D} and \ref{sec:Step_IV_2D}. In more detail, to define $\Theta$, we first define $\tilde{\theta}(t,x,y)$ and $u(t,x,y)$ as in \eqref{eq:ttheta_def} and \eqref{eq:def_u}, with the only difference that now $y\in \mathbb{R}^{n-1}$. 
We then define $u_\delta$ as in \eqref{eq:u_delta_def}, by convolving $u$ (in the $x$ variable only) with a mollifier $\eta_\delta$. Finally, we let $\theta(t,x,y)$ solve the ODE \eqref{eq:theta_def}, and we define $\Theta(x,y) = (\theta(1,x,y), y)$. Then Lemma~\ref{lem:properties_g} holds as is, and in Lemma~\ref{lem:bounds_u_delta}, \eqref{eq:bounds_du_delta} holds and \eqref{eq:bounds_H_s_norm_u_delta} becomes \[ \|u_{\delta,t}\|_{W^{s,p}}^p \lesssim \frac{k^{n-1} \lambda^{n-s}}{\delta^{(p-1)s}}, \] and hence, \[ \operatorname{dist}_{s,p}(\Theta,\operatorname{Id}) \lesssim \frac{k^{(n-1)/p} \lambda^{(n-s)/p}}{\delta^{(p-1)s/p}}. \] {\bf 3. Error correction --- affine homotopy} We define $\Gamma$ by \eqref{GD}, and we estimate $\operatorname{dist}_{s,p}(\operatorname{Id}, \Gamma)$ by using an affine homotopy. Lemmas~\ref{lem:Theta_error}--\ref{lem:theta_y_bounds} hold as is, hence Proposition~\ref{pn:Theta} and Corollary~\ref{cor:Gamma} as well. Lemma~\ref{lem:dist_s_Gamma_id} holds as well, yielding \[ \operatorname{dist}_{s,p}(\Gamma,\operatorname{Id}) \lesssim \frac{\delta^{1-s}}{\lambda^{1-s}}k^s. \] The estimate is independent of $p\in (1,\infty)$ and $n$ as a consequence of the fact that the velocity field $u_t$, $0\le t\le 1$, associated to the affine homotopy (which in fact does not depend on $t$) satisfies estimates that are uniform in $p$ and $n$. This follows from easy modifications of the proofs of \eqref{eq:L_2_norm_affine}, \eqref{eq:H_1_norm_affine}. The constant in the above inequality does depend on $p$ through the dependence on the constant in the interpolation inequality. {\bf 4. Conclusion of the proof} Again, choosing \[ \alpha = (\log k)^2, \qquad \lambda = \frac{1}{k^{1+\log k}}, \qquad \delta = \frac{1}{k^{\log k+\sqrt{\log k}}}, \] we have, for any $s<\min\BRK{n/p,1}$, \[ \begin{split} \operatorname{dist}_{s,p}(\Psi,\operatorname{Id}) &\lesssim (\log k)^2 k^{-(1-s)} = o(1), \\ \operatorname{dist}_{s,p}(\Theta,\operatorname{Id}) &\lesssim k^{-\brk{\frac{n}{p}-s}\log k + \frac{p-1}{p}s\sqrt{\log k}-\frac{1-s}{p}} = o(1), \\ \operatorname{dist}_{s,p}(\Gamma,\operatorname{Id}) &\lesssim k^{1 -(1-s) \sqrt{\log k}} = o(1), \end{split} \] and therefore $\operatorname{dist}_{s,p}(\Phi_1,\operatorname{Id}) = o(1)$. \end{proof} In the far subcritical regime $s<\min\BRK{(n-1)/p,1}$, one can also give a simpler construction, like that of Section \ref{sec:HD}, in which the flow along the squeezed strips is carried out by an affine homotopy, and no error-correction is needed at the end. Again, this is because the $(n-1)$-dimensional squeezing of the second step is enough to guarantee a small norm for the affine homotopy, since $W^{s,p}(\mathbb{R}^{n-1})$ is subcritical. We do not think this has any deeper meaning besides the obvious observation that the weaker the norm is, the easier it is to construct paths of short length. \paragraph{Acknowledgements} We are grateful to Meital Kuchar for her help with the figures, and to the anonymous referee for their helpful comments. This work was partially supported by the Natural Sciences and Engineering Research Council of Canada under operating grant 261955. {\footnotesize \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} } \end{document}
\begin{document} \title{Disparity and Optical Flow Partitioning \ Using Extended Potts Priors} \begin{abstract} This paper addresses the problems of disparity and optical flow partitioning based on the brightness invariance assumption. We investigate new variational approaches to these problems with Potts priors and possibly box constraints. For the optical flow partitioning, our model includes vector-valued data and an adapted Potts regularizer. Using the notation of asymptotically level stable functions we prove the existence of global minimizers of our functionals. We propose a modified alternating direction method of minimizers. This iterative algorithm requires the computation of global minimizers of classical univariate Potts problems which can be done efficiently by dynamic programming. We prove that the algorithm converges both for the constrained and unconstrained problems. Numerical examples demonstrate the very good performance of our partitioning method. \end{abstract} \section{Introduction}\label{sec:introduction} An important task in computer vision is the reconstruction of three dimensional (3D) scenes from stereo images. Taking a photo, 3D objects are projected onto a 2D image and the depth information gets lost. If a stereo camera is used, two images are obtained. Due to the different perspectives there is a displacement between corresponding points in the images which depends on the distance of the points from the camera. This displacement is called disparity and turns out to be inversely proportional to the distances of the objects, see Fig. \ref {fig:disp_ex} for an illustration. Therefore {\it disparity estimation} has constituted an active research area in recent years. Global combinatorial optimization methods such as graph-cuts \cite{BVZ01,KZ01} which rely on a discrete label space of the disparity map and belief propagation \cite{KSK06,YWYWLN06} were developed as well as variational approaches \cite{CEFPP12,CTKC11,DKA96,HPP12,MPP06,MPP09,WY11,WUPB12}. In particular, in \cite{HPP12} the global energy function was also made convex by quantizing the disparity map and converting it into a set of binary fields. Illumination variations were additionally taken into account, e.g., in \cite{CEFPP12,CRH95}. A stereo matching algorithm based on the curvelet decomposition was developed in \cite{MWW10}. With the aim of reducing the computational redundancy, a histogram based disparity estimation method was proposed in \cite{MLD11}. Further, methods based on non-parametric local transforms followed by normalized cross correlation (NCC) \cite{TLC03} and rank-transforms \cite{ZW94} have been used. In this paper we are interested in the direct {\it disparity partitioning} without a preliminary separate estimation of the disparity. Moreover we want to avoid an initial quantization of the disparity map as necessary in graph-cut methods or in \cite{HPP12}. We focus on a variational approach with a linearized brightness invariance assumption to constitute the data fidelity term. The Potts prior described below will serve as regularizing term which forces the minimizer of our functional to show a good partitioning. \begin{figure} \caption{ Left and middle: Two images taken by a stereo camera. The shift between the images is clearly visible. Right: True disparity encoded by different gray values which shows the depth of the different objects in the scene. 
(http://vision.middlebury.edu/stereo/ image credits notice) } \label{fig:disp_ex} \end{figure} {\it Optical flow estimation} is closely related to disparity estimation where the horizontal displacement direction has to be completed by the vertical one. In other words, we are searching for vector fields now and have to deal with vector-valued data. Variational approaches to optical flow estimation were pioneered by Horn and Schunck \cite{HS81} followed by a vast number of refinements and extensions, including sophisticated data fidelity terms going beyond the brightness \cite{BPS14,BM11,HDW13} and nonsmooth regularizers, e.g., TV-like ones \cite{ADK99,HSSW02} including also higher order derivatives \cite{YSM07,YSS07,YSS09a} and nonlocal regularizers \cite{WPB10}, to mention only few of them. In general multiscale approaches have to be taken into account to correctly determine larger and smaller flow vectors \cite{Ana89,BBPW04,DHHM12}. A good overview is given in \cite{BPS14}. Recent comprehensive empirical evaluations \cite{BSLRBS11,GLSU13} show that variational algorithms yield a very good performance. As for the disparity we deal with variational {\it optical flow partitioning} using the brightness invariance assumption and a vector-valued Potts prior in this paper. The classical (discrete) Potts model, named after R. Potts \cite{Pot52} has the form \begin{equation} \label{potts_classic} \min_u \frac12 \|f - u\|_2 ^2 + \lambda \|\nabla u\|_0, \end{equation} where the discrete gradient consists of directional difference operators and $\| \cdot\|_0$ denotes the $\ell_0$ semi-norm. Computing a global minimizer of the multivariate Potts model appears to be NP hard \cite{BVZ01,DMA97,Tro06}. For univariate data this problem can be solved efficiently using dynamic programming \cite{Cha95,FKLW08,MS85,WSD12}. In the context of Markov random fields such kind of functionals were used by Geman and Geman \cite{GG84} and in \cite{Bes86}. In \cite{Lec89} a deterministic continuation method to restore piecewise constant images was proposed. A stochastic continuation approach was introduced and successfully used for the reconstruction of 3D tomographic images in \cite{RLM07}. The method and the theory were refined in \cite{RM10}. Recently theoretical results relating the probability for global convergence and the computation speed were given in \cite{RR13}. There is also a rich literature on $\ell_0$-regularized methods (without additional difference operator) in particular in the context of sparsity and on various (convex) relaxation methods (also for data fidelity terms with linear operators). Here we refer to the overview in \cite{FR14}. Various approximations of the $\ell_0$ norm were used in order to guarantee that the objective function has global minimizers; see, e.g., \cite{CJPT13}, among others. Note that the local and the global minimizers of least squares regularized with the $\ell_0$ norm were described in \cite{Ni13}. In this paper, we concentrate ourselves on the (non-relaxed) Potts functional. We apply the following model: \begin{equation} \label{potts_gen} \min_{u \in S} \frac12 \|f - A u\|_2 ^2 + \lambda \|\nabla u\|_0, \end{equation} where $S$ is a certain compact set, $A$ a linear operator and $\|\nabla u\|_0$ a 'grouped' or vector-valued prior now. We prove the existence of a global minimizer of the functional using the notion of asymptotically level stable functions \cite{Aus00}. For single-valued data a completely different existence proof was given in \cite{SWD13}. 
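Since univariate Potts problems will reappear in Section \ref{sec:alg} as the computational kernel of our method, we illustrate the dynamic programming solution mentioned above by a minimal sketch for scalar data $f \in \mathbb{R}^N$; it is a plain $\mathcal{O}(N^2)$ implementation for illustration only (the function name \texttt{potts\_1d} is ours) and not the accelerated solvers of \cite{FKLW08,WSD12}.
\begin{verbatim}
import numpy as np

def potts_1d(f, lam):
    # min_u 0.5*||f-u||_2^2 + lam*||grad u||_0 for a scalar signal f,
    # by dynamic programming over the position of the last jump (O(N^2)).
    f = np.asarray(f, dtype=float)
    N = len(f)
    csum, csum2 = np.cumsum(f), np.cumsum(f * f)

    def seg_err(l, r):  # 0.5*sum_{i=l..r}(f_i - mean(f_l..f_r))^2, 0-based, inclusive
        s = csum[r] - (csum[l - 1] if l > 0 else 0.0)
        s2 = csum2[r] - (csum2[l - 1] if l > 0 else 0.0)
        return 0.5 * (s2 - s * s / (r - l + 1))

    B = np.zeros(N)            # B[r]: optimal value for the data f[0..r]
    last = np.zeros(N, int)    # left endpoint of the last constant segment
    for r in range(N):
        best, arg = seg_err(0, r), 0
        for l in range(1, r + 1):
            val = B[l - 1] + lam + seg_err(l, r)
            if val < best:
                best, arg = val, l
        B[r], last[r] = best, arg

    u, r = np.empty(N), N - 1  # backtrack the piecewise constant minimizer
    while r >= 0:
        l = last[r]
        u[l:r + 1] = f[l:r + 1].mean()
        r = l - 1
    return u
\end{verbatim}
A call \texttt{potts\_1d(f, lam)} returns the piecewise constant minimizer; univariate Potts problems of this type (in vector-valued form) are exactly what has to be solved in the second and third step of Algorithm \ref{A1} below.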
We apply an ADMM like algorithm to the general Potts model \eqref{potts_gen}. Such algorithm was proposed for the partitioning of vector-valued images for the Potts model \eqref{potts_classic} in \cite{SW14}. It appears to be faster than current methods based on graph cuts and convex relaxations of the Potts model. In particular the number of values of the sought-after image $u$ is not a priori restricted. Our algorithm is designed for the model \eqref{potts_gen} which includes non invertible linear operators in the data fidelity term as well as constraints. In the context of wavelet frame operators (instead of gradients) another minimization method for single-valued $\ell_0$-regularized, constrained problems was suggested in \cite{Lu13,ZDL13}. It is based on a penalty decomposition and reduces the problem mainly to the iterative solution of $\ell_2-\ell_0$ problems via hard thresholding. Convergence to a local minimizer is shown in case of an invertible operator $A$. However, note that in our applications both linear operators $A_1$ and $A$ have usually a nontrivial kernel. To the best of our knowledge this is the first time that this kind of direct partitioning model was applied for disparity and optical flow estimation. \\ The remaining part of the paper is organized as follows: Our disparity and optical flow partitioning models are presented in Section \ref{sec:models}. Section \ref{sec:min} provides the proof that the (vector-valued) general Potts model has a global minimizer. Then, in Section \ref{sec:alg} an ADMM like algorithm is suggested together with the convergence proofs for the constrained and unconstrained models. Numerical experiments are shown in Section \ref{sec:experiments}. Finally, Section \ref{sec:conclusions} gives conclusions for future work. \section{Disparity and Optical Flow Partitioning Models}\label{sec:models} In this paper we deal with gray-value images $f: {\cal G} \rightarrow \mathbb{R}$ defined on the grid ${\cal G} := \{1,\ldots,M\} \times \{1,\ldots,N\}$ and vector fields $u = (u_1,\ldots,u_d): {\cal G} \rightarrow \mathbb{R}^d$, where $d=1$ in the disparity partitioning problem and $d=2$ in the optical flow partitioning problem. Note that $$u(i,j) = (u_1(i,j),\ldots,u_d(i,j)) \in \mathbb{R}^d, \quad (i,j) \in {\cal G}.$$ By $\nabla_1$, $\nabla_2$ we denote derivative operators in vertical and horizontal directions, respectively. More precisely we will use their discrete counterparts. Among the various possible discretizations of derivative operators we focus on forward differences $$ \nabla_1 u (i,j) := u(i+1,j) - u(i,j), \quad \nabla_2 u (i,j) := u(i,j+1) - u(i,j) $$ and assume mirror boundary conditions. Further we will need the 'grouped' $\ell_0$ semi-norm for vector-valued data defined by \begin{equation} \label{group} \|u\|_0 := \sum_{i,j=1}^n \|u(i,j)\|_0, \quad \|u(i,j)\|_0 := \left\{ \begin{array}{ll} 0 &{\rm if} \; u(i,j) = 0_d,\\ 1 &{\rm otherwise}. \end{array} \right. \end{equation} Here $0_d$ denotes the null vector in $\mathbb{R}^d$. If $d=1$ then $\|u\|_0$ is the usual $\ell_0$ 'componentwise' semi-norm for vectors. For the disparity and optical flow partitioning we will apply the $\ell_0$ semi-norm not directly to the vectors but rather to $\nabla_\nu u_1$ and $\nabla_\nu u$, $\nu=1,2$, respectively, to penalize their spatial differences. In the disparity problem we consider $\|\nabla u_1\|_0 := \|\nabla_1 u_1\|_0 + \|\nabla_2 u_1\|_0$ and the optical flow problem $\|\nabla u\|_0 := \|\nabla_1 u\|_0 + \|\nabla_2 u\|_0$. 
For the later one, $\|\nabla_\nu u\|_0 = \|(\nabla_\nu u_1(i,j),\nabla_\nu u_2(i,j))_{(i,j)}\|_0$ uses indeed the 'grouped' version of the $\ell_0$ semi-norm. \begin{remark} \label{discrete} To have a convenient vector-matrix notation we reorder images $f$ and $u_l$, $l=1,\ldots,d$ columnwise into vectors ${\rm vec} \, f$ and ${\rm vec} \, u_l$ of length $n := NM$. We address the pixels by the index set $\mathbb I_n := \{1,\ldots,n\}$. If the meaning is clear from the context we keep the notation $f$ instead of ${\rm vec} \, f$ . In particular we will have $u_l \in \mathbb{R}^n$ and $u = (u_1^{\mbox{\tiny{T}}},\ldots,u_d^{\mbox{\tiny{T}}})^{\mbox{\tiny{T}}} \in \mathbb R^{nd}$. After columnwise reordering the forward difference operators (with mirror boundary conditions) can be written as matrices \begin{equation} \label{nabla_discr} \nabla_1 := I_d \otimes I_M \otimes D_N, \quad \nabla_2 := I_d \otimes D_M^{\mbox{\tiny{T}}} \otimes I_N, \end{equation} where $I_N$ denotes the $N\times N$ identity matrix, $$ D_N := \left( \begin{array}{cccccc} -1 & 1\\ & -1 & 1\\ & & & \ddots\\ & & & & -1 & 1\\ & & & & & 0 \end{array} \right) \in \mathbb R^{N,N} $$ and $\otimes$ is the tensor (Kronecker) product of matrices. \end{remark} Using the indicator function of a set $S$ defined by \[ \iota_{S} (t) = \begin{cases} 0 &{\rm if} \ t \in S, \\ \infty &{\rm otherwise}, \end{cases} \] we can address box constraints on $u$ by adding the regularizing term $\iota_{S_{Box}}(u)$, where $$S_{Box} := \{u \in \mathbb R^{dn}: u_{min} \le u \le u_{max}\}.$$ Both in the disparity and optical flow partitioning problems we are given a sequence of images. In this paper we focus on two images $f_1$ and $f_2$ coming from (i) the appropriate left and right images taken, e.g., by a stereo camera (disparity problem), and (ii) two image frames at different times arising, e.g., from a video (optical flow problem). Then the models rely on an invariance requirement between these images. Various invariance assumptions were considered in the literature and we refer to \cite{BPS14} for a comprehensive overview. Here we focus on the brightness invariance assumption. In the disparity model we address only horizontal displacements and consider in a continuous setting \begin{equation} \label{bas_disp} f_1(x,y) - f_2(x - u_1(x,y),y) \approx 0. \end{equation} For the optical flow model we assume \begin{equation} \label{bas_flow} f_1(x,y) - f_2\big((x,y) - u(x,y) \big) \approx 0, \quad u := (u_1,u_2). \end{equation} Using first order Taylor expansions around an initial disparity $\bar u_1$, resp., an initial optical flow estimate $\bar u = (\bar u_1,\bar u_2)$, gives \begin{align} {\rm disp.}&: \; f_2(x- u_1,y) \approx f_2(x- \bar u_1,y) - \nabla_1 f_2(x - \bar u_1,y) (u_1 (x,y) - \bar u_1(x,y)), \\ {\rm flow}&: \; f_2\big( (x,y) - u) \approx f_2 \big((x,y) - \bar u)\big) - (\nabla_1 f_2((x,y) - \bar u), \nabla_2 f_2((x,y) - \bar u) \big) (u(x,y) - \bar u(x,y)) . \end{align} To get an initial disparity we will use a simple block-matching approach with NCC as measure for the block similarity, following the ideas in \cite{CEFPP12,TLC03}. Then the linearized invariance requirements \eqref{bas_flow} and \eqref{bas_disp} become \begin{align} {\rm disp.}&: \; 0 \approx f_1(x,y) - f_2(x- \bar u_1,y) + \nabla_1 f_2(x - \bar u_1,y) (u_1(x,y) - \bar u_1(x,y)), \\ {\rm flow}&: \; 0 \approx f_1(x,y) - f_2\big( (x,y) - \bar u \big) + \big(\nabla_1 f_2((x,y) - \bar u), \nabla_2 f_2((x,y) - \bar u) \big) (u(x,y) - \bar u(x,y)) . 
\end{align} Note that $f_2((x,y) - \bar u)$ is only well defined in the discrete setting if $(i,j) - \bar u$ is in $\cal{G}$. Later we will see that our method to compute $\bar u$ really fulfills this condition, thus we can carry over the continuous model to the discrete setting without any modifications. Using a non-negative increasing function $\varphi:\mathbb R_{\ge 0} \rightarrow \mathbb R$, and considering only grid points $(x,y) = (i,j) \in {\cal G}$ the data term for the disparity partitioning model becomes for example $$ \sum_{(i,j) \in {\cal G}} \varphi \big(\nabla_1 f_2(i - \bar u_1,j) u_1(i,j) - (\nabla_1 f_2(i - \bar u_1,j) \bar u_1(i,j) + f_2(i- \bar u_1,j) - f_1(i,j) ) \big). $$ In this paper we will deal with quadratic functions $\varphi(t) := \frac12 t^2$. Using the notation in Remark \ref{discrete} our partitioning models become \begin{align} \label{e_disp} {\rm disp.}: \; E_{\rm disp} (u_1) & := \frac12\| A_1 u_1 - b_1\|_2^2 + \mu \, \iota_{S_{Box}}(u_1) + \lambda \left(\|\nabla_1 u_1 \|_0 + \|\nabla_2 u_1 \|_0 \right), \\ {\rm flow}: \; E_{\rm flow} (u) &:= \frac12\| A u - b\|_2^2 + \mu \, \iota_{S_{Box}}(u) + \lambda \left(\|\nabla_1 u \|_0 + \|\nabla_2 u \|_0 \right), \label{e_flow} \end{align} where $\mu \in \{0,1\}$, $\lambda > 0$, $\|\cdot\|_0$ stands for the 'group' semi-norm in \eqref{group} and \begin{align} \label{a1} A_1 &:= {\rm diag} \left({\rm vec} \big( \nabla_1 f_2(i-\bar u_1,j) \big)\right), \\ A &:= \left({\rm diag} \left({\rm vec} \big( \nabla_1 f_2((i,j) - \bar u) \big)\right), {\rm diag} \left({\rm vec} \big( \nabla_2 f_2((i,j) - \bar u) \big)\right) \right), \label{only_a}\\ b_1 &:= {\rm vec} \big(\nabla_1 f_2(i - \bar u_1,j) \bar u_1(i,j) + f_2(i- \bar u_1,j) - f_1(i,j) \big), \label{b_1}\\ b &:= {\rm vec} \left( \big(\nabla_1 f_2((i,j) - \bar u), \nabla_2 f_2((i,j) - \bar u) \big) \bar u(i,j) + f_2 ( (i,j) - \bar u) - f_1(i,j) \right). \label{only_b} \end{align} We are looking for minimizers of these functionals. \section{Global Minimizers for Potts Regularized Functionals}\label{sec:min} We want to know if the functionals in \eqref{e_disp} and \eqref{e_flow} have global minimizers. Both $E_{\rm disp}$ and $E_{\rm flow}$ are lower semi-continuous (l.s.c.) and proper functionals. When $\mu=1$, the minimization of $E_{\rm disp}$ and $E_{\rm flow}$ is constrained to the {\em compact} set $S_{Box}$ in which case \eqref{e_disp} and \eqref{e_flow} have global minimizers; see, e.g., \cite[Proposition 3.1.1,~p. 82]{AT03}. Next we focus on the case $\mu = 0$. More general, we consider for arbitrary given $A \in \mathbb{R}^{n,dn}$, $b \in \mathbb{R}^n$ and $p\ge 1$ functionals $E: \mathbb R^{dn} \rightarrow \mathbb{R}$ of the form \begin{equation} \label{fd} E(u) := \frac1p\| A u - b\|_p^p + \lambda \left(\|\nabla_1 u \|_0 + \|\nabla_2 u \|_0 \right), \qquad \lambda > 0. \end{equation} The existence of a global minimizer was proved in the case $d=1$ in \cite{SWD13}. Here we give a shorter and more general proof that holds for any $d \ge 1$ using the notion of asymptotically level stable functions. This wide class of functions was introduced by Auslender~\cite{Aus00} in 2000 and since then it appeared that many problems on the existence of optimal solutions are easily solved for these functions. 
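Before analyzing \eqref{fd}, we note that all its ingredients are straightforward to form numerically: the following sketch builds $\nabla_1,\nabla_2$ as the sparse Kronecker products of Remark \ref{discrete} and evaluates the grouped semi-norm \eqref{group}. It is an illustration only (the function names are ours), not the implementation used in Section \ref{sec:experiments}.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def D(N):
    # forward difference matrix D_N with zero last row (mirror boundary)
    B = sp.diags([-np.ones(N), np.ones(N - 1)], offsets=[0, 1], format="lil")
    B[N - 1, N - 1] = 0.0
    return sp.csr_matrix(B)

def gradients(M, N, d):
    # nabla_1 = I_d (x) I_M (x) D_N,  nabla_2 = I_d (x) D_M^T (x) I_N,
    # acting on columnwise reordered data with d channels (see the remark above)
    I = lambda k: sp.identity(k, format="csr")
    nabla1 = sp.kron(I(d), sp.kron(I(M), D(N)), format="csr")
    nabla2 = sp.kron(I(d), sp.kron(D(M).T, I(N)), format="csr")
    return nabla1, nabla2

def grouped_l0(v, d):
    # grouped l0 semi-norm: number of pixels where the d-vector is nonzero
    V = v.reshape(d, -1)       # rows are the components v_1, ..., v_d
    return int(np.count_nonzero(np.any(V != 0, axis=0)))

# example: Potts regularizer of a piecewise constant image u (d = 1)
M_, N_ = 4, 5
u = np.zeros((M_, N_)); u[:, 3:] = 1.0
n1, n2 = gradients(M_, N_, 1)
vec_u = u.flatten(order="F")   # columnwise reordering
reg = grouped_l0(n1 @ vec_u, 1) + grouped_l0(n2 @ vec_u, 1)
\end{verbatim}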
As usual, \[\mathrm{lev}\,(E,\lambda) := \{u\in \mathbb{R}^{dn}: E(u)\leq \lambda \} \quad {\rm for} \quad \lambda >\inf_u E(u)~;\] by $E_\infty$ we denote the {\it asymptotic (or recession) function} of $E$ and $$ \mathrm{ker}(E_\infty) := \{ u\in\mathbb{R}^{dn}: E_\infty(u) = 0\} .$$ The following definition is taken from \cite[p.~94]{AT03}: a l.s.c. and proper function $E:\mathbb{R}^{dn} \rightarrow \mathbb{R} \cup \{+\infty\}$ is said to be {\sl asymptotically level stable (als)} if for each $\rho>0$, each real-valued, bounded sequence $\{\lambda_k\}_k$ and each sequence $\{u_k\}_k\subset\mathbb{R}^{dn}$ satisfying \begin{equation} \label{aal} u_k \in \mathrm{lev}\,(E,\lambda_k), \quad \|u_k\| \rightarrow +\infty, \quad \frac{u_k}{\|u_k\|} \rightarrow \tilde u \in \mathrm{ker} (E_\infty), \end{equation} there exists $k_0$ such that \begin{equation} \label{lsa} u_k- \rho \tilde u \in \mathrm{lev}\,(E,\lambda_k)\quad \forall k\geq k_0. \end{equation} If for each real-valued, bounded sequence $\{\lambda_k\}_k$ there exists no sequence $\{u_k\}_k$ satisfying \eqref{aal}, then $E$ is automatically als. In particular, coercive functions are als. It was originally exhibited in \cite{BBGT98} (without the notion of als functions) that any als function $E$ with $\inf E>-\infty$ has a global minimizer. The proof is also given in \cite[Corollary 3.4.2]{AT03}. We show that the discontinuous non-coercive objective $E$ in \eqref{fd} is als and has thus a global minimizer. \begin{theorem} \label{pal} Let $E:\mathbb{R}^{dn} \rightarrow \mathbb{R}$ be of the form \eqref{fd}. Then the following relations hold true: \begin{itemize} \item[i)] $\mathrm{ker} (E_\infty )=\mathrm{ker}(A)$. \item[ii)] $E$ is als. \item[iii)] $E$ has a global minimizer. \end{itemize} \end{theorem} \begin{proof} i) The asymptotic function $E_\infty$ of $E$ can be calculated according to \cite{Ded77}, see also \cite[Theorem 2.5.1]{AT03}, as \[ E_\infty (u) = \liminf_{\atop{u'\rightarrow u}{t\rightarrow\infty}}\frac{E(tu')}{t}. \] Then \begin{align} E_\infty(u) &=\liminf_{\atop{u'\rightarrow u}{t\rightarrow\infty}} \frac{ \frac1p\|At u'-b\|_p^p + \|\nabla_1 (t u')\|_0 + \|\nabla_2 (t u')\|_0}{t} \\ &=\liminf_{\atop{u'\rightarrow u}{t\rightarrow\infty}} \left( \frac1p t^{p-1} \|A u'- \frac1t b\|_p^p + \frac{\|\nabla_1 (t u')\|_0 + \|\nabla_2 (t u')\|_0}{t} \right) \\ &=\left\{ \begin{array}{lll} 0 & {\rm if} & u\in \mathrm{ker}(A),\\ +\infty & {\rm if} & u \not\in \mathrm{ker}(A) \; {\rm and} \; p > 1,\\ \|A u\|_1 & {\rm if} & u \not\in \mathrm{ker}(A) \; {\rm and} \; p = 1, \end{array} \right. \label{auv} \end{align} and consequently $\mathrm{ker} (E_\infty) = \mathrm{ker}(A)$. ii) Let $\{u_k\}_k$ satisfy \eqref{aal} with $u_k\,\|u_k\|^{-1} \rightarrow \tilde u \in \mathrm{ker}(A)$ and let $\rho>0$ be arbitrarily fixed. Below we compare the numbers $\|\nabla_\nu u_k\|_0$ and $\|\nabla_\nu (u_k-\rho \tilde u)\|_0$, $\nu = 1,2$. There are two options. \\ If $(i,j) \in \mathrm{supp}(\nabla_1 \tilde u) := \{(i,j) \in {\cal G}: \tilde u(i+1,j) - \tilde u(i,j) \not = 0_d\}$, then \[ \tilde u(i,j) - \tilde u(i+1,j) = \lim_{k\rightarrow\infty} \frac{u_k (i,j) - u_k (i+1,j)}{\|u_k\|} \neq 0_d \] and $\| u_k(i,j) - u_k(i+1,j) \|>0$ for all but finitely many $k$. Therefore, there exists $k_1(i,j)$ such that \begin{equation} \label{wb} \| u_k(i,j) - u_k (i+1,j) - \rho (\tilde u(i,j) - \tilde u(i+1,j) ) \|_0 \leq \| u_k(i,j) - u_k(i+1,j) \|_0 \quad \forall k \geq k_1(i,j).
\end{equation} If $(i,j) \in {\cal G} \backslash \mathrm{supp}(\nabla_1 \tilde u)$, i.e., $\tilde u(i,j) - \tilde u(i+1,j) = 0_d$, then clearly \begin{equation} \label{wc} u_k(i,j) - u_k(i+1,j) - \rho (\tilde u(i,j) - \tilde u(i+1,j) ) = u_k(i,j) - u_k(i+1,j) . \end{equation} Combining \eqref{wb} and \eqref{wc} shows that \begin{equation}\label{we} \| u_k(i,j) - u_k(i+1,j) - \rho (\tilde u(i,j) - \tilde u(i+1,j) ) \|_0 \leq \| u_k(i,j) - u_k(i+1,j) \|_0 \quad \forall k\geq k_1(i,j) \end{equation} and hence \begin{equation}\label{we1} \| \nabla_1(u_k - \rho \tilde u) \|_0 \leq \| \nabla_1 \, u_k \|_0 \quad \forall k \geq k_1 := \max \{k_1(i,j): (i,j) \in {\cal G}\}. \end{equation} In the same way, there is $k_2$ so that \begin{equation} \label{wea} \|\nabla_2 (u_k - \rho \tilde u)\|_0 \leq \|\nabla_2 u_k\|_0 \quad \forall k \geq k_2. \end{equation} By part i) of the proof we know that $A \tilde u = 0_n$ which jointly with~\eqref{we1} and \eqref{wea} implies for all $k \ge k_0:=\max\{k_1,k_2\}$ that \begin{align} E(u_k- \rho \tilde u) & = \frac1p \|A(u_k - \rho\tilde u ) - b\|_p^p + \lambda (\|\nabla_1 (u_k - \rho \tilde u)\|_0 + \|\nabla_2 (u_k - \rho \tilde u)\|_0)\\ & = \frac1p \|A u_k-b\|_p^p + \lambda (\|\nabla_1 (u_k - \rho \tilde u) \|_0 + \|\nabla_2 (u_k - \rho \tilde u)\|_0)\\ & \leq \frac1p \|Au_k-b\|_p^p + \lambda (\|\nabla_1 u_k\|_0 + \|\nabla_2 (u_k) \|_0) = E(u_k) . \label{hb} \end{align} Hence it follows by $u_k \in \mathrm{lev}\,(E,\lambda_k)$ that $u_k - \rho \tilde u \in \mathrm{lev}\,(E,\lambda_k)$ for any $k\geq k_0$. Consequently $E$ is als. \\ Finally, iii) follows directly from \cite[Corollary 3.4.2]{AT03}. \end{proof} \section{ADMM-like Algorithm}\label{sec:alg} In this section we follow an idea in \cite{SW14} to approximate minimizers of our more general functionals $E_{\rm disp}$ and $E_{\rm flow}$. Basically the problem is reduced to the iterative computation of minimizers of the univariate classical Potts problem for which there exist efficient solution techniques using dynamic programming \cite{FKLW08}. Here we apply the method proposed in \cite{WSD12,SWD13}. We consider \begin{equation}\label{model_general} \min_{u \in \mathbb R^{nd}} \Big\{ F(u) + \lambda \big( \|\nabla_1 u\|_0 + \|\nabla_2 u\|_0 \big)\Big\} . \end{equation} Clearly, we have \begin{align} \label{data_disp} {\rm disp.} \; (d = 1): \quad F(u) &:= \frac12 \|A_1 u - b_1\|^2_2 + \mu \, \iota_{S_{Box}}(u), \quad u = u_1,\\ {\rm flow} \; (d = 2): \quad F(u) &:= \frac12 \|A u - b\|^2_2 + \mu \, \iota_{S_{Box}}(u), \quad u = (u_1^{\mbox{\tiny{T}}},u_2^{\mbox{\tiny{T}}})^{\mbox{\tiny{T}}}.\label{data_flow} \end{align} For $\mu = 1$ we have a (box) constrained problem; for $\mu = 0$ an unconstrained one. In \cite{SW14} partitioning problems of vector-valued images with $F(u) := \frac12 \|u - b\|_2^2$ were considered. In our setting a linear operator is involved into the data term which is not a diagonal operator in the optical flow problem, see \eqref{a1}, and in both cases \eqref{a1} and \eqref{only_a} it has a non-trivial kernel. Further, we may have box constraints in addition. The minimization problem can be rewritten as \begin{equation} \min_{u,v,w \in \mathbb R^{nd}} \Big\{ F(u) + \lambda \big( \|\nabla_1 v\|_0 + \|\nabla_2 w\|_0 \big) \quad \mbox{subject to} \quad v = u,\; w=u\Big\} . 
\end{equation} To find an approximate (local) minimizer we suggest the following algorithm which resembles the basic structure of an alternating direction method of multipliers (ADMM) \cite{BPCPE10,Gab83} but with inner parameters $\eta^{(k)}$ which has to go to infinity. \begin{algorithm}[H] \caption{ADMM-like Algorithm \label{A1}} \begin{algorithmic} \STATE \textbf{Initialization:} $v^{(0)},w^{(0)},q_1^{(0)},q_2^{(0)},\eta^{(0)}$ and $\sigma > 1$ \STATE \textbf{Iteration:} For $k = 0,1,\ldots$ iterate \begin{align} \label{admm_sol_u_gen} u^{(k+1)} &\in \mathop{\rm argmin}_{u} \Big\{ F(u) + \frac{\eta^{(k)}}{2} \big(\|u-v^{(k)}+q_1^{(k)}\|_2^2 + \|u-w^{(k)}+q_2^{(k)}\|_2^2 \big) \Big\}, \\ \label{admm_sol_x} v^{(k+1)} & \in \mathop{\rm argmin}_v \Big\{ \lambda \|\nabla_1 v\|_0 + \frac{\eta^{(k)}}{2} \|u^{(k+1)}-v+q_1^{(k)}\|_2^2 \Big\}, \\ \label{admm_sol_y} w^{(k+1)} & \in \mathop{\rm argmin}_w \Big\{ \lambda \|\nabla_2 w\|_0 + \frac{\eta^{(k)}}{2} \|u^{(k+1)}-w+q_2^{(k)}\|_2^2 \Big\}, \\ \label{b1} q_1^{(k+1)} &= q_1^{(k)} + u^{(k+1)} - v^{(k+1)}, \\ \label{b2} q_2^{(k+1)} &= q_2^{(k)} + u^{(k+1)} - w^{(k+1)}, \\ \label{eta} \eta^{(k+1)} &= \eta^{(k)} \sigma. \end{align} \end{algorithmic} \end{algorithm} Step 1 of the algorithm in \eqref{admm_sol_u_gen} can be computed for our optical flow term $F$ in \eqref{data_flow} and $\mu = 0$ by setting the gradient of the respective function to zero. Then $u^{(k+1)}$ is the solution of the linear system of equations $$ (A^{\mbox{\tiny{T}}} A + 2 \eta^{(k)} I_{dn}) u = A^{\mbox{\tiny{T}}} b + \eta^{(k)} \left( v^{(k)}-q_1^{(k)} + w^{(k)}-q_2^{(k)}\right). $$ For the disparity problem \eqref{data_disp} we have just to replace $A$ by $A_1$ which is a simple diagonal matrix and $b$ by $b_1$. For $\mu = 1$ and the disparity problem, $u^{(k+1)}$ can be computed componentwise by straightforward computation as $$ u^{(k+1)} = \max\left\{\min\{ u^{(k+\frac12)}, u_{\max} \}, u_{\min} \right\}, $$ where \begin{equation} \label{lin_syst} u^{(k+\frac12)} := (A_1^{\mbox{\tiny{T}}} A_1 + 2 \eta^{(k)} I_{n})^{-1} \left(A_1^{\mbox{\tiny{T}}} b_1 + \eta^{(k)} \left( v^{(k)}-q_1^{(k)} + w^{(k)}-q_2^{(k)} \right) \right). \end{equation} For the optical flow problem and $\mu = 1$ we have to minimize a box constrained quadratic problem for which there exist efficient algorithms, see, e.g., \cite{BNO03}. In our numerical part the optical flow problem is handled without constraints, i.e. for $\mu = 0$. In this case, only the linear system of equations \eqref{lin_syst} has to be solved. The Steps 2 and 3 in \eqref{admm_sol_x} and \eqref{admm_sol_y} are univariate Potts problems which can be solved efficiently using the method proposed in \cite{SW14,WSD12}. As shown in \cite{SW14} the vector-valued univariate Potts problem can be tackled nearly in the same way as in the scalar-valued case. The arithmetic complexity is ${\cal O}(dn^\frac32)$ if $N \sim M$. \\ Next we prove the convergence of Algorithm \ref{A1}. Due to the NP hardness of the problem we can in general not expect that the limit point is in general a (global) minimizer of the cost function. First we deal with a general situation which involves our unconstrained problems ($\mu = 0$). We assume that any vector in the subdifferential $\partial F$ of $F$ fulfills the growth constraint \begin{equation} \label{subdiff_prop} u^* \in \partial F(u) \quad \Rightarrow \quad \|u^*\|_2 \le C(\|u\|_2 + 1). 
\end{equation} It can be easily checked that $F: \mathbb R^{dn} \rightarrow \mathbb R$ with $F(u) := \frac1p\|M u - m\|_p^p$, $p \in [1,2]$ fulfills \eqref{subdiff_prop} for any matrix $M \in \mathbb R^{n,dn}$ and $m \in \mathbb{R}^n$. Note that the variable $C$ stands for any constant in the rest of the paper. \begin{theorem} \label{thm1} Let $F:\mathbb R^{dn} \rightarrow \mathbb R \cup \{ + \infty \}$ be a proper, closed, convex function which fulfills \eqref{subdiff_prop}. Then Algorithm \ref{A1} converges in the sense that $(u^{(k)}, v^{(k)}, w^{(k)}) \rightarrow (\hat u, \hat v, \hat w)$ as $k \rightarrow \infty$ with $\hat u = \hat v = \hat w$ and $(q_1^{(k)}, q_2^{(k)}) \rightarrow (0,0)$ as $k \rightarrow \infty$. \end{theorem} \begin{proof} By \eqref{b1} we have \begin{align}\label{eqn-min-g1} \frac{\eta^{(k)}}{2}\|q_1^{(k+1)} \|_2^2 &= \frac{\eta^{(k)}}{2}\|u^{(k+1)}-v^{(k+1)}+q_1^{(k)}\|_2^2 \\ &\le \lambda \|\nabla_1 v^{(k+1)}\|_0 + \frac{\eta^{(k)}}{2}\|u^{(k+1)}-v^{(k+1)}+q_1^{(k)}\|_2^2 \end{align} and by \eqref{admm_sol_x} further \begin{align} \frac{\eta^{(k)}}{2}\|q_1^{(k+1)} \|_2^2 &\le \lambda \|\nabla_1(u^{(k+1)}+q_1^{(k)})\|_0 + \frac{\eta^{(k)}}{2}\|u^{(k+1)}- (u^{(k+1)}+q_1^{(k)} )+q_1^{(k)}\|_2^2 \\ &\le \lambda \|\nabla_1(u^{(k+1)}+q_1^{(k)})\|_0 \\ &\le \lambda n. \end{align} By \eqref{b2} and \eqref{admm_sol_y} we conclude similarly \begin{equation} \label{eqn-min-h1} \frac{\eta^{(k)}}{2}\|q_2^{(k+1)} \|_2^2 \le \lambda n. \end{equation} Hence it follows \begin{equation}\label{eqn-min-b2-b3} \|q_1^{(k+1)} \|_2^2 \le \frac{2\lambda n}{\eta^{(k)}} \quad {\rm and}\quad \|q_2^{(k+1)} \|_2^2 \le \frac{2\lambda n}{\eta^{(k)}}, \end{equation} which implies $q_1^{(k+1)} \rightarrow 0$ and $q_2^{(k+1)} \rightarrow 0$ as $k\rightarrow \infty$. Further, we obtain by $u^{(k)} - v^{(k)} = q_1^{(k)} - q_1^{(k-1)}$ that \begin{equation} \label{star2} \|v^{(k)} - u^{(k)} \|_2 \le \| q_1^{(k)} \|_2 + \| q_1^{(k-1)} \|_2 \le \sqrt{\frac{2\lambda n}{\eta^{(k-1)}}} + \sqrt{\frac{2\lambda n}{\eta^{(k-2)}}} \le 2\sqrt{\frac{2\lambda n}{\eta^{(k-2)}}} \end{equation} and analogously \begin{equation} \label{star3} \|w^{(k)} - u^{(k)} \|_2 \le 2\sqrt{\frac{2\lambda n}{\eta^{(k-2)}}}. \end{equation} For $\epsilon(k) := v^{(k)}-u^{(k)}-q_1^{(k)} + w^{(k)}-u^{(k)}-q_2^{(k)}$ we get by \eqref{eqn-min-b2-b3} - \eqref{star3} that \begin{align} \nonumber \| \epsilon(k) \|_2 &\le \| q_1^{(k)} \|_2 + \| q_2^{(k)} \|_2 + \| v^{(k)} - u^{(k)} \|_2 + \| w^{(k)} - u^{(k)} \|_2 \\ &\le \sqrt{\frac{2\lambda n}{\eta^{(k-1)}}} + \sqrt{\frac{2\lambda n}{\eta^{(k-1)}}} + 2\sqrt{\frac{2\lambda n}{\eta^{(k-2)}}} + 2\sqrt{\frac{2\lambda n}{\eta^{(k-2)}}} \le 6\sqrt{\frac{2\lambda n}{\eta^{(k-2)}}}, \label{important} \end{align} i.e., $\| \epsilon(k)\|_2$ decreases exponentially. By Fermat's theorem the proximum $u^{(k+1)}$ in \eqref{admm_sol_u_gen} has to fulfill $$0 \in \partial F(u^{(k+1)} ) + \eta^{(k)} (u^{(k+1)} - v^{(k)} + q_1^{(k)} + u^{(k+1)} - w^{(k)} + q_2^{(k)})$$ so that there exists $p^{(k+1)} \in \partial F(u^{(k+1)})$ satisfying \begin{align} 0 &= p^{(k+1)} + \eta^{(k)} (u^{(k+1)} - v^{(k)} + q_1^{(k)} + u^{(k+1)} - w^{(k)} + q_2^{(k)})\\ &= p^{(k+1)} + \eta^{(k)} (u^{(k)} - v^{(k)} + q_1^{(k)} + u^{(k)} - w^{(k)} + q_2^{(k)}) + 2 \eta^{(k)} (u^{(k+1)} - u^{(k)}) \\ &= p^{(k+1)} - \eta^{(k)} \epsilon(k) + 2 \eta^{(k)} (u^{(k+1)} - u^{(k)}).
\end{align} Rearranging terms, taking the norm and applying the triangle inequality leads to \begin{align} \label{eq:proof_eq1} \| u^{(k+1)} - u^{(k)}\|_2 \le \frac{\| p^{(k+1)} \|_2}{2 \eta^{(k)}} + \frac{1}{2} \| \epsilon(k)\|_2. \end{align} Since $\|x-y\| \ge \|x\| - \|y\|$ and by assumption \eqref{subdiff_prop} it follows \begin{align} \label{eq:proof_eq2} \| u^{(k+1)}\|_2 &\le \frac{\| p^{(k+1)} \|_2}{2 \eta^{(k)}} + \frac{1}{2} \|\epsilon(k)\|_2 + \| u^{(k)}\|_2 \\ &\le \frac{C \| u^{(k+1)} \|_2}{2 \eta^{(k)}} + \frac{C}{2 \eta^{(k)}}+ \frac{1}{2} \| \epsilon(k)\|_2 + \| u^{(k)}\|_2 . \end{align} Since $\frac{C}{2 \eta^{(k)}} \rightarrow 0$ as $k \rightarrow \infty$, there exists a $K$ such that $1 < \frac{1}{1 - \frac{C}{2 \eta^{(k)}}} \le \tau := \sqrt{\sigma}$ for all $k >K$. Now \eqref{eq:proof_eq2} implies \begin{align} \| u^{(k+1)}\|_2 \left(1 - \frac{C}{2 \eta^{(k)}}\right) &\le \frac{C}{2 \eta^{(k)}}+ \frac{1}{2} \|\epsilon(k)\|_2 + \| u^{(k)}\|_2 \end{align} which gives for $k > K$ the estimates \begin{align} \| u^{(k+1)}\|_2 &\le \tau \frac{C}{2 \eta^{(k)}}+ \tau\frac{1}{2} \|\epsilon(k)\|_2 + \tau\| u^{(k)}\|_2 \\ &\le \tau \frac{C}{2 \eta^{(k)}}+ \tau\frac{1}{2} \|\epsilon(k)\|_2 + \tau^2 \frac{C}{2 \eta^{(k-1)}}+ \tau^2\frac{1}{2} \|\epsilon(k-1)\|_2 + \tau^2\| u^{(k-1)}\|_2 \\ &\le \tau^{k+1-K} \| u^{(K)}\|_2 + \sum_{j=1}^{k+1-K} \frac{C \tau^{j}}{2 \eta^{(k+1-j)}}+ \sum_{j=1}^{k+1-K} \frac{\tau^j}{2} \|\epsilon(k+1-j)\|_2 \\ &\le \tau^{k+1} \big( \| u^{(K)}\|_2 + \sum_{j=1}^{k+1-K} \frac{C}{2 \eta^{(k+1-j)}}+ \sum_{j=1}^{k+1-K} \frac{1}{2} \|\epsilon(k+1-j)\|_2 \big) \end{align} and by the exponential decay of $\| \epsilon(k) \|_2$ with $\eta^{(k)}$ further \begin{align} \| u^{(k+1)}\|_2 &\le C \tau^{k+1}. \end{align} Using this relation together with \eqref{subdiff_prop} and \eqref{eta} in \eqref{eq:proof_eq1} we conclude \begin{align} \| u^{(k+1)} - u^{(k)}\|_2 &\le \frac{\| p^{(k+1)} \|_2}{2 \eta^{(k)}} + \frac{1}{2} \| \epsilon(k)\|_2 \\ &\le \frac{C \| u^{(k+1)}\|_2 }{2 \eta^{(k)}}+ \frac{C }{2 \eta^{(k)}} + \frac{1}{2} \| \epsilon(k)\|_2 \\ &\le \frac{C^2 \tau^{k+1} }{2 \eta^{(k)}}+ \frac{C }{2 \eta^{(k)}} + \frac{1}{2} \| \epsilon(k)\|_2 \\ &\le \frac{C^2 }{2 \eta^{(0)} \sigma^{\frac{k-1}{2}}}+ \frac{C }{2 \eta^{(k)}} + 3\sqrt{\frac{2\lambda n}{\eta^{(k-2)}}}. \end{align} Thus, $\| u^{(k+1)} - u^{(k)}\|_2$ decreases exponentially. Therefore it is a Cauchy sequence and $\{ u^{(k)} \}_k$ converges to some $\hat u$ as $k \rightarrow \infty$. Since $q_1^{(k)} \rightarrow 0$ and $q_2^{(k)} \rightarrow 0$ as $k \rightarrow \infty$ we obtain by \eqref{b1} and \eqref{b2} that $\{ v^{(k)} \}_k$ and $\{ w^{(k)} \}_k$ also converge to $\hat u$. This finishes the proof. \end{proof} The assumptions in the next theorem fit to our constrained models ($\mu = 1$), but are more general. \begin{theorem} \label{thm2} Let $F:\mathbb R^{dn} \rightarrow \mathbb R \cup \{ + \infty \}$ be any function which is bounded on its domain. Further assume that \eqref{admm_sol_u_gen} has a global minimizer. Then Algorithm \ref{A1} converges in the sense that $(u^{(k)}, v^{(k)}, w^{(k)}) \rightarrow (\hat u, \hat v, \hat w)$ as $k \rightarrow \infty$ with $\hat u = \hat v = \hat w$ and $(q_1^{(k)}, q_2^{(k)}) \rightarrow (0,0)$ as $k \rightarrow \infty$. \end{theorem} \begin{proof} As in the proof of Theorem \ref{thm1} we can show that \eqref{important} holds true for $\epsilon(k) := v^{(k)}-u^{(k)}-q_1^{(k)} + w^{(k)}-u^{(k)}-q_2^{(k)}$. 
The quadratic term in \eqref{admm_sol_u_gen} can be rewritten as \begin{align} \|u-v^{(k)}+q_1^{(k)}\|_2^2 + \|u-w^{(k)}+q_2^{(k)}\|_2^2 &= 2 \langle u,u \rangle + 2 \langle u, q_1^{(k)} - v^{(k)} + q_2^{(k)} - w^{(k)} \rangle + C \\ &= 2 \| u-u^{(k)} \|_2^2 - 2 \langle u, \epsilon(k) \rangle + C. \end{align} Thus, the first step of Algorithm \ref{A1} is equivalent to \begin{align} u^{(k+1)} \in \mathop{\rm argmin}_{u} \Big\{ F(u) + \eta^{(k)} \|u-u^{(k)}\|_2^2 - \eta^{(k)} \langle \epsilon(k) , u \rangle \Big\}. \end{align} This implies \begin{align} F(u^{(k+1)}) + \eta^{(k)} \|u^{(k+1)}-u^{(k)}\|_2^2 - \eta^{(k)} \langle \epsilon(k) , u^{(k+1)} \rangle &\le F(u^{(k)}) - \eta^{(k)} \langle \epsilon(k) , u^{(k)} \rangle \end{align} and further \begin{align} \|u^{(k+1)}-u^{(k)}\|_2^2 &\le \frac{F(u^{(k)}) - F(u^{(k+1)})}{\eta^{(k)}} - \langle \epsilon(k) , u^{(k)} - u^{(k+1)} \rangle. \end{align} Using the boundedness of $f$ and the Cauchy-Schwarz inequality leads to \begin{align} \|u^{(k+1)}-u^{(k)}\|_2^2 &\le \frac{C}{\eta^{(k)}} + \| \epsilon(k) \|_2 \| u^{(k)} - u^{(k+1)}\|_2. \end{align} Since $\epsilon(k) \rightarrow 0$ as $k \rightarrow \infty$, we conclude that $\| u^{(k)} - u^{(k+1)}\|_2$ is bounded so that \begin{align} \|u^{(k+1)}-u^{(k)}\|_2^2 &\le \frac{C}{\eta^{(k)}} + C \| \epsilon(k) \|_2. \end{align} Thus, $\| u^{(k)} - u^{(k+1)}\|_2$ is decreasing exponentially and $\{ u^{(k)} \}_k$ converges to some $\hat u$ as $k \rightarrow \infty$. \end{proof} \section{Numerical Results} \label{sec:experiments} In this section we present numerical results obtained by our partitioning approaches. The test images for the disparity and the optical flow problems were taken from \begin{itemize} \item http://vision.middlebury.edu/stereo/ \cite{SP07,SS02,SS03}, and \item http://vision.middlebury.edu/flow/ \cite{BSLRBS11}, \end{itemize} respectively. All examples were executed on a computer with an Intel Core i7-870 Processor (8M Cache, 2.93 GHz) and 8 GB physical memory, 64 Bit Linux. We compare our direct partitioning methods \eqref{e_disp} and \eqref{e_flow} via Algorithm \ref{A1} with a two-stage approach consisting of i) disparity, resp.\ optical flow estimation, and ii) partitioning of the estimated values. More precisely the two stage algorithm performs as follows: \begin{itemize} \item[i)] In the first step, the disparity is estimated using the TV regularized model \begin{align}\label{ex:disparity_tv} \min_{u_1 \in S_{Box}} & \Big\{ \frac{1}{2}\| A_1 u_1 - b_1\|_2^2 + \iota_{S_{Box}} (u_1) + \alpha_1 \| \, |\nabla u_1| \, \|_{1} \Big\} \end{align} with $A_1$ and $b_1$ defined by \eqref{a1} and \eqref{b1}, respectively. Here $|\nabla u_1|$ stands for the discrete version of $\left( \left( \frac{\partial u_1}{\partial x}(x,y)\right)^2 + \left( \frac{\partial u_1}{\partial y}(x,y)\right)^2\right)^\frac12$, i.e., we use the isotropic (``rotationally invariant'') TV version. Such model was proposed for the disparity estimation in \cite{CEFPP12} and can be found with e.g., shearlet regularized $\ell_1$ norm in \cite{Fit13}. For estimating the optical flow we minimize \begin{equation}\label{ex:flow_tv} \min_{u} \Big\{ \frac{1}{2} \| A u - b\|_2^2 + \alpha_1 \| \sqrt{|\nabla u_1|^2 + |\nabla u_2|^2} \|_1 \Big\}, \end{equation} with $A$ and $b$ defined by \eqref{only_a} and \eqref{only_b}, respectively. The global minimizers of the convex functionals \eqref{ex:disparity_tv} and \eqref{ex:flow_tv} were computed via the primal-dual hybrid gradient method (PDHG) proposed in \cite{CP11,PCCB09}. 
Clearly, one could use other iterative first order (primal-dual) methods, see, e.g., \cite{CP10}. \item [ii)] In the second step the estimated disparity, resp.\ optical flow is partitioned by the method in \cite{SW14} which minimizes, e.g., for the disparity the functional \begin{align} \min_{u_1} \Big\{ \frac{1}{2} \|u_1 - u_{1,est}\|_2^2 + \alpha_2 (\| \nabla_1 u_1 \|_0 + \| \nabla_2 u_1 \|_0) \Big\}, \end{align} where $ u_{1,est}$ is the disparity estimated in the first step. For the approximation of a minimizer we use the software package Pottslab http://pottslab.de with default parameters. Note that by introducing weights $w$ in the Potts prior the functional can be made more isotropic which leads to a better ``rotation invariance'', see \cite{SW14}. \end{itemize} Next we comment on the direct partitioning implementation. Our partitioning models \eqref{e_disp} and \eqref{e_flow} are based on the knowledge of initial values $\bar u_1$ and $\bar u$ for the disparity, resp., the optical flow. Here we use a simple block matching based algorithm, see \cite{CEFPP12}. This method consists basically of a search within a given range. For each pixel in the first image we compare its surrounding block with surrounding blocks of pixels in the search range of the second image. The chosen block size is $7 \times 7$. As a similarity measure we use the normalized cross correlation \cite{TLC03}. Finally we apply a median filter to the initial guess to reduce the influence of outliers. Since $(i - \bar u_1,j)$, resp.\ $(i,j) - \bar u(i,j)$ are the grid coordinates of the pixel in the second image corresponding to pixel $(i,j)$ in the first image, we see that $f_2(i- \bar u_1,j)$, resp.\ $f_2((i,j) - \bar u)$ are really well defined grid functions. As parameters in Algorithm \ref{A1} we choose $\eta^{(0)} = 0.01$ and $\sigma = 1.05$. The algorithm is initialized with $v^{(0)} = w^{(0)}=\bar u_1$ for the disparity partitioning and $v^{(0)} = w^{(0)}=\bar u$ for the flow partitioning; further $q_i^{(0)}$, $i=1,2$ are zero matrices. We show the results after 100 iterations where no differences to subsequently iterated images can be seen. \\ We start with the disparity partitioning results. Figure \ref{fig:venus} shows the results for the image ``Venus''. The true disparity contains horizontal and vertical structures so that our non isotropic direct approach fits fine. It can compete with the more expansive two stage method. The main differences appear due to the more or less isotropy of the models. \begin{figure} \caption{ Results for the test images ``Venus''. Left to right: original left image, ground truth, partitioned disparity using the two stage algorithm ($\alpha_1 = 0.005$, $\alpha_2 = 300$), partitioned disparity using the direct algorithm ($\lambda = 2.5$). } \label{fig:venus} \end{figure} Figs. \ref{fig:cones} and \ref{fig:dolls} show that our direct partitioning algorithm can qualitatively compete with the two stage algorithm. \begin{figure} \caption{ Result for the images ``Cones''. Left to right: original left image, ground truth, partitioned disparity using the two stage algorithm ($\alpha_1 = 0.005$, $\alpha_2 = 50$), partitioned disparity using the direct algorithm ($\lambda = 0.5$).} \label{fig:cones} \end{figure} \begin{figure} \caption{ Result for the ``Dolls'' images. 
Left to right: original left image, ground truth, partitioned disparity using the two stage algorithm ($\alpha_1 = 0.01$, $\alpha_2 = 80$), partitioned disparity using the direct algorithm ($\lambda = 1.5$).} \label{fig:dolls} \end{figure} Next we show our results for the optical flow partitioning. The flow vectors are color coded with color $\simeq$ direction, brightness $\simeq$ magnitude). The ground truth flow field in the first example ``Wooden'' in Fig. \ref{fig:wooden} prefers horizontal and vertical directions. As in the first disparity example our algorithm show good results. In Fig. \ref{fig:rubber} and Fig. \ref{fig:hydra} we see that our direct method can compete with the more involved two stage approach. The main differences appear again due to the more isotropic approach in the two stage model. Especially in Fig. \ref{fig:rubber} one can see that the flow field of the rotating wheel is partitioned into rectangular instead of annular segments by our direct method. In the same figure, we show a result where we have estimated the optical flow in Step 1 by the more sophisticated model in \cite{BM11}, for the program code see http://lmb.informatik.uni-freiburg.de/resources/software.php. Step 2 was the same. The result is only slightly different from those obtained by the previously described two stage algorithm. \begin{figure} \caption{ Result for the ``Wooden'' images, Left to right: first test image, ground truth, partitioned optical flow using the two stage algorithm ($\alpha_1 = 0.01$, $\alpha_2 = 150$), partitioned optical flow by the direct algorithm ($\lambda = 0.5$). } \label{fig:wooden} \end{figure} \begin{figure} \caption{ Result for the test images ``RubberWhale''. Top: first test image, ground truth, partitioned optical flow by the direct algorithm ($\lambda = 0.05$) . Bottom: partitioned optical flow by the two stage algorithm. Left: Two stage algorithm ($\alpha_1 = 0.005$, $\alpha_2 = 7$), Right: Two stage algorithm but with Step 1 computed by the model in \cite{BM11} ($\alpha_2 = 7$). } \label{fig:rubber} \end{figure} \begin{figure} \caption{ Result for the images ``Hydrangea''. Left to right: ground truth, partitioned optical flow by the two stage algorithm ($\alpha_1 = 0.01$, $\alpha_2 = 35$), partitioned optical flow by the direct algorithm ($\lambda = 0.15$).} \label{fig:hydra} \end{figure} \section{Conclusions} \label{sec:conclusions} In this paper we have proposed a new method for disparity and optical flow partitioning based on a Potts regularized variational model together with an ADMM like algorithm. In case of the optical flow it is adapted to vector-valued data. In this paper, we have only shown the {\it basic approach} and further refinements are planned in the future. So we intend to incorporate more sophisticated data fidelity terms. In particular illumination changes should be handled. We will make the model more ``rotationally invariant''. The simple introduction of weights and other differences as in \cite{SW14} and in several graph cut approaches is one possibility. The crucial part for the running time of the proposed direct algorithm is the univariate Potts minimization. However, since the single problems are independent of each other, they could be solved in parallel. Such parallel implementation is another point of future activities. Further we want to incorporate multiple frames instead of just two of them in our model. 
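For the reader's convenience, we recall that the univariate Potts subproblems mentioned above, i.e., minimizers of $\frac12\|x-d\|_2^2 + \gamma\|\nabla_1 x\|_0$ for a data vector $d \in \mathbb R^n$, can be solved exactly by the classical dynamic program over the position of the last jump. The following Python listing is only a sketch of this principle (variable and function names are ours) and not the Pottslab code used in our experiments.
\begin{verbatim}
import numpy as np

def potts_1d(d, gamma):
    """Exact minimizer of 0.5*||x - d||_2^2 + gamma*||grad x||_0
    via the O(n^2) dynamic program over the last jump location."""
    d = np.asarray(d, dtype=float)
    n = len(d)
    s1 = np.concatenate(([0.0], np.cumsum(d)))       # prefix sums of d
    s2 = np.concatenate(([0.0], np.cumsum(d ** 2)))  # prefix sums of d^2

    def dev(l, r):   # squared deviation of d[l-1..r-1] from its mean
        m = r - l + 1
        return s2[r] - s2[l - 1] - (s1[r] - s1[l - 1]) ** 2 / m

    B = np.full(n + 1, np.inf)
    B[0] = -gamma                 # the first segment does not pay a jump
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):
            val = B[l - 1] + gamma + 0.5 * dev(l, r)
            if val < B[r]:
                B[r], jump[r] = val, l
    x = np.empty(n)
    r = n
    while r > 0:                  # backtrack the optimal partition
        l = jump[r]
        x[l - 1:r] = (s1[r] - s1[l - 1]) / (r - l + 1)
        r = l - 1
    return x

print(potts_1d([0, 0.1, 5, 5.1, 4.9, 0.2], gamma=1.0))
\end{verbatim}
Since the single row- and column-wise subproblems are independent of each other, such a routine can be applied to all of them in parallel, which is precisely the parallelization mentioned above.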
From the theoretical point of view, establishing merely the convergence of an algorithm to a local minimizer does not seem very enlightening, since certain constant images are contained in the set of local minimizers and these are clearly not the solutions we are looking for. However, a better understanding of strict (local) minimizers and of the choice of initial values for the algorithm would be interesting. \\ {\bf Acknowledgement:} The work of J. H. Fitschen has been supported by Deutsche Forschungsgemeinschaft (DFG) within the Graduate School 1932. Some parts of the paper were written during a visit of M. Nikolova to this Graduate School. Many thanks to M. El-Gheche (University Paris Est) for fruitful discussions on disparity estimation. \end{document}
\begin{document} \title{Constructing combinatorial operads from monoids} \begin{abstract} \paragraph{Abstract.} We introduce a functorial construction which, from a monoid, produces a set-operad. We obtain new (symmetric or not) operads as suboperads or quotients of the operad obtained from the additive monoid. These involve various familiar combinatorial objects: parking functions, packed words, planar rooted trees, generalized Dyck paths, Schröder trees, Motzkin paths, integer compositions, directed animals,~\emph{etc.} We also retrieve some known operads: the magmatic operad, the commutative associative operad, and the diassociative operad. \paragraph{Résumé.} Nous introduisons une construction fonctorielle qui, à partir d'un monoïde, produit une opérade ensembliste. Nous obtenons de nouvelles opérades (symétriques ou non) comme sous-opérades ou quotients de l'opérade obtenue à partir du monoïde additif. Celles-ci mettent en jeu divers objets combinatoires familiers~: fonctions de parking, mots tassés, arbres plans enracinés, chemins de Dyck généralisés, arbres de Schröder, chemins de Motzkin, compositions d'entiers, animaux dirigés,~\emph{etc.} Nous retrouvons également des opérades déjà connues~: l'opérade magmatique, l'opérade commutative associative et l'opérade diassociative. \end{abstract} \section{Introduction} \label{sec:Introduction} Operads are algebraic structures introduced in the 1970s by Boardman and Vogt~\cite{BV73} and by May~\cite{May72} in the context of algebraic topology. Informally, an operad is a structure containing operators with~$n$ inputs and~$1$ output, for all positive integer~$n$. Two operators~$x$ and~$y$ can be composed at $i$th position by grafting the output of~$y$ on the $i$th input of~$x$. The new operator thus obtained is denoted by~$x \circ_i y$. In an operad, one can also switch the inputs of an operator~$x$ by letting a permutation~$\sigma$ act to obtain a new operator denoted by~$x \cdot \sigma$. One of the main relishes of operads comes from the fact that they offer a general theory to study in an unifying way different types of algebras, such as associative algebras and Lie algebras. In recent years, the importance of operads in combinatorics has continued to increase and several new operads were defined on combinatorial objects (see \emph{e.g.},~\cite{Lod01,CL01,Liv06,Cha08}). The structure thereby added on combinatorial families enables to see these in a new light and offers original ways to solve some combinatorial problems. For example, the dendriform operad~\cite{Lod01} is an operad on binary trees and plays an interesting role for the understanding of the Hopf algebra of Loday-Ronco of binary trees~\cite{LR98,HNT05}. Besides, this operad is a key ingredient for the enumeration of intervals of the Tamari lattice~\cite{Cha06,Cha08}. There is also a very rich link connecting combinatorial Hopf algebra theory and operad theory: various constructions produce combinatorial Hopf algebras from operads~\cite{CL07,LV10}. In this paper, we propose a new generic method to build combinatorial operads. The starting point is to pick a monoid~$M$. We then consider the set of words whose letters are elements of~$M$. The arity of such words are their length, the composition of two words is expressed from the product of~$M$, and permutations act on words by permuting letters. In this way, we associate to any monoid~$M$ an operad denoted by~${\sf T} M$. 
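To give a concrete feeling for this construction before its formal treatment in Section~\ref{sec:Foncteur}, here is a small Python sketch (an illustration of ours, independent of the Sage code used for the computer exploration mentioned in the acknowledgements; the function names and the representation of words as tuples are our own choices).
\begin{verbatim}
# Words over a monoid, with operadic substitution and the symmetric
# group action; the monoid is given by its product function.

def substitution(x, i, y, product):
    """x o_i y (i is 1-based): multiply every letter of y by x_i and
    splice the result into x in place of the i-th letter."""
    return x[:i - 1] + tuple(product(x[i - 1], b) for b in y) + x[i:]

def act(x, sigma):
    """Right action of a permutation sigma in one-line notation."""
    return tuple(x[s - 1] for s in sigma)

add = lambda a, b: a + b                       # the additive monoid N
print(substitution((2, 1, 2, 3), 2, (3, 0, 3, 1, 3), add))
# (2, 4, 1, 4, 2, 4, 2, 3)
print(act((1, 1, 2, 1, 0), (2, 3, 5, 1, 4)))
# (1, 2, 0, 1, 1)
\end{verbatim}
The two printed computations over the additive monoid~$\mathbb{N}$ reappear in Section~\ref{sec:Exemples}.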
This construction is rich from a combinatorial point of view since it allows, by considering suboperads and quotients of~${\sf T} M$, to get new operads on various combinatorial objects. This paper is organized as follows. In Section~\ref{sec:Preliminaires}, we recall briefly the basics about set-operads. Section~\ref{sec:Foncteur} is devoted to the definition of the construction associating an operad to a monoid and to establish its first properties. We show that this construction is a functor from the category of monoids to the category of operads that respects injections and surjections. Finally we apply this construction in Section~\ref{sec:Exemples} on various monoids and obtain several new combinatorial (symmetric or not) operads on the following combinatorial objects: endofunctions, parking functions, packed words, permutations, planar rooted trees, generalized Dyck paths, Schröder trees, Motzkin paths, integer compositions, directed animals, and segmented integer compositions. We conclude by building an operad isomorphic to the diassociative operad~\cite{Lod01}. \acknowledgements The author would like to thank Florent Hivert and Jean-Christophe Novelli for their advice during the preparation of this paper. This work is based on computer exploration and the author used, for this purpose, the open-source mathematical software Sage~\cite{Sage} and one of its extensions, Sage-Combinat~\cite{SageC}. \section{Preliminaries and notations} \label{sec:Preliminaires} \subsection{Permutations} Let us denote by~$[n]$ the set $\{1, \dots, n\}$ and by~$\mathfrak{S}_n$ the set of permutations of~$[n]$. Let $\sigma \in \mathfrak{S}_n$, $\nu \in \mathfrak{S}_m$, and $i \in [n]$. The \emph{substitution} of~$\nu$ into~$\sigma$ is the permutation $B_i(\sigma, \nu) := \sigma'_1 \dots \sigma'_{i - 1} \nu''_1 \dots \nu''_m \sigma'_{i + 1} \dots \sigma'_n$ where $\sigma'_j := \sigma_j$ if $\sigma_j < \sigma_i$ and $\sigma'_j := \sigma_j + m - 1$ otherwise, and $\nu''_j := \nu_j + \sigma_i - 1$. For instance, one has $B_4(\textcolor{Bleu}{741}{\bf 5}\textcolor{Bleu}{623}, \textcolor{Rouge}{231}) = \textcolor{Bleu}{941}\textcolor{Rouge}{675}\textcolor{Bleu}{823}$. \subsection{Operads} Recall that a \emph{set-operad}, or an \emph{operad} for short, is a set $\mathcal{P} := \biguplus_{n \geq 1} \mathcal{P}(n)$ together with \emph{substitution maps} \begin{equation} \circ_i : \mathcal{P}(n) \times \mathcal{P}(m) \to \mathcal{P}(n + m - 1), \qquad n, m \geq 1, i \in [n], \end{equation} a distinguished element ${\bf 1} \in \mathcal{P}(1)$, the \emph{unit} of $\mathcal{P}$, and a \emph{symmetric group action} \begin{equation} \cdot : \mathcal{P}(n) \times \mathfrak{S}_n \to \mathcal{P}(n), \qquad n \geq 1. \end{equation} The above data has to satisfy the following relations: \begin{align} (x \circ_i y) \circ_{i + j - 1} z = x \circ_i (y \circ_j z), \qquad & x \in \mathcal{P}(n), y \in \mathcal{P}(m), z \in \mathcal{P}(k), i \in [n], j \in [m], \label{eq:AssocSerie} \\ (x \circ_i y) \circ_{j + m - 1} z = (x \circ_j z) \circ_i y, \qquad & x \in \mathcal{P}(n), y \in \mathcal{P}(m), z \in \mathcal{P}(k), 1 \leq i < j \leq n, \label{eq:AssocParallele}\\ {\bf 1} \circ_1 x = x = x \circ_i {\bf 1}, \qquad & x \in \mathcal{P}(n), i \in [n], \label{eq:Unite} \\ (x \cdot \sigma) \circ_i (y \cdot \nu) = \left(x \circ_{\sigma_i} y\right) \cdot B_i(\sigma, \nu), \qquad & x \in \mathcal{P}(n), y \in \mathcal{P}(m), \sigma \in \mathfrak{S}_n, \nu \in \mathfrak{S}_m, i \in [n]. 
\label{eq:Equivariance} \end{align} The \emph{arity} of an element~$x$ of~$\mathcal{P}(n)$ is~$n$. Let~$\mathcal{Q}$ be an operad. A map~$\phi : \mathcal{P} \to \mathcal{Q}$ is an \emph{operad morphism} if it commutes with substitution maps and symmetric group action and maps elements of arity~$n$ of~$\mathcal{P}$ to elements of arity~$n$ of~$\mathcal{Q}$. A \emph{non-symmetric operad} is an operad without symmetric group action. The above definitions also work when~$\mathcal{P}$ is a~$\mathbb{N}$-graded vector space. In this case, the substitution maps~$\circ_i$ are linear maps, and the symmetric group action is linear on the left. \section{A combinatorial functor from monoids to operads} \label{sec:Foncteur} \subsection{The construction} \subsubsection{Monoids to operads} Let $(M, \bullet, 1)$ be a monoid. Let us denote by~${\sf T} M$ the set ${\sf T} M := \biguplus_{n \geq 1} {\sf T} M(n)$, where for all~$n \geq 1$, \begin{equation} {\sf T} M(n) := \left\{(x_1, \dots, x_n) : x_i \in M \mbox{ for all $i \in [n]$}\right\}. \end{equation} We endow the set~${\sf T} M$ with maps \begin{equation} \label{eq:TDomaineSubs} \circ_i : {\sf T} M(n) \times {\sf T} M(m) \to {\sf T} M(n + m - 1), \qquad n, m \geq 1, i \in [n], \end{equation} defined as follows: For all $x \in {\sf T} M(n)$, $y \in {\sf T} M(m)$, and $i \in [n]$, we set \begin{equation} \label{eq:TSub} x \circ_i y := (x_1, \; \dots, \; x_{i-1}, \; x_i \bullet y_1, \; \dots, \; x_i \bullet y_m, \; x_{i+1}, \; \dots, \; x_n). \end{equation} Let us also set ${\bf 1} := (1)$ as a distinguished element of~${\sf T} M(1)$. We endow finally each set~${\sf T} M(n)$ with a right action of the symmetric group \begin{equation} \cdot : {\sf T} M(n) \times \mathfrak{S}_n \to {\sf T} M(n), \qquad n \geq 1, \end{equation} defined as follows: For all $x \in {\sf T} M(n)$ and $\sigma \in \mathfrak{S}_n$, we set \begin{equation} x \cdot \sigma := \left(x_{\sigma_1}, \dots, x_{\sigma_n}\right). \end{equation} The elements of~${\sf T} M$ are words over~$M$ regarded as an alphabet. The arity of an element~$x$ of~${\sf T} M(n)$, denoted by~$|x|$, is~$n$. For the sake of readability, we shall denote in some cases an element $(x_1, \dots, x_n)$ of~${\sf T} M(n)$ by its word notation $x_1 \dots x_n$. \begin{Proposition} \label{prop:TOperade} If~$M$ is a monoid, then~${\sf T} M$ is a set-operad. \end{Proposition} \begin{proof} Let us respectively denote by~$\bullet$ and~$1$ the product and the unit of~$M$. First of all, thanks to~(\ref{eq:TDomaineSubs}) and~(\ref{eq:TSub}), the maps~$\circ_i$ are well-defined and are substitution maps of operads. Let us now show that~${\sf T} M$ satisfies~(\ref{eq:AssocSerie}), (\ref{eq:AssocParallele}), (\ref{eq:Unite}), and~(\ref{eq:Equivariance}). Let $x \in {\sf T} M(n)$, $y \in {\sf T} M(m)$, $z \in {\sf T} M(k)$, $i \in [n]$, and $j \in [m]$. 
We have, using associativity of~$\bullet$, \begin{equation} \begin{split} (x \circ_i y) \circ_{i + j - 1} z & = (x_1, \; \dots, \; x_{i-1}, \; x_i \bullet y_1, \; \dots, \; x_i \bullet y_m, \; x_{i+1}, \; \dots, \; x_n) \circ_{i + j - 1} z \\ & = (x_1, \;\dots, \; x_{i-1}, \; x_i \bullet y_1, \; \dots, \; x_i \bullet y_{j-1}, \; (x_i \bullet y_j) \bullet z_1, \; \dots, \; (x_i \bullet y_j) \bullet z_k, \\ & \qquad x_i \bullet y_{j+1}, \; \dots, \; x_i \bullet y_m, \; x_{i+1}, \; \dots, \; x_n) \\ & = (x_1, \; \dots, \; x_{i-1}, \; x_i \bullet y_1, \; \dots, \; x_i \bullet y_{j-1}, \; x_i \bullet (y_j \bullet z_1), \; \dots, \; x_i \bullet (y_j \bullet z_k), \\ & \qquad x_i \bullet y_{j+1}, \; \dots, \; x_i \bullet y_m, \; x_{i+1}, \; \dots, \; x_n) \\ & = x \circ_i (y_1, \; \dots, \; y_{j-1}, \; y_j \bullet z_1, \; \dots, \; y_j \bullet z_k, \; y_{j+1}, \; \dots, \; y_m) \\ & = x \circ_i (y \circ_j z), \end{split} \end{equation} showing that~$\circ_i$ satisfies~(\ref{eq:AssocSerie}). Let $x \in {\sf T} M(n)$, $y \in {\sf T} M(m)$, $z \in {\sf T} M(k)$, and $i < j \in [n]$. We have, \begin{equation} \begin{split} (x \circ_j z) \circ_i y & = (x_1, \; \dots, \; x_{j-1}, \; x_j \bullet z_1, \; \dots, \; x_j \bullet z_k, \; x_{j+1}, \; \dots, \; x_n) \circ_i y, \\ & = (x_1, \; \dots, \; x_{i-1}, \; x_i \bullet y_1, \; \dots, \; x_i \bullet y_m, \; x_{i+1}, \; \dots, \; x_{j-1}, \\ & \qquad x_j \bullet z_1, \; \dots, \; x_j \bullet z_k, \; x_{j+1}, \; \dots, \; x_n) \\ & = (x_1, \; \dots, \; x_{i-1}, \; x_i \bullet y_1, \; \dots, \; x_i \bullet y_m, \; x_{i+1}, \; \dots, \; x_n) \circ_{j + m - 1} z \\ & = (x \circ_i y) \circ_{j + m - 1} z, \end{split} \end{equation} showing that~$\circ_i$ satisfies~(\ref{eq:AssocParallele}). The element~${\bf 1}$ is the unit of~${\sf T} M$. Indeed, we have ${\bf 1} \in {\sf T} M(1)$, and, for all $x \in {\sf T} M(n)$ and~$i \in [n]$, \begin{equation} x \circ_i {\bf 1} \enspace = \enspace (x_1, \; \dots, \; x_{i-1}, \; x_i \bullet 1, \; x_{i+1}, \; \dots, \; x_n) \enspace = \enspace x, \end{equation} since~$1$ is the unit for~$\bullet$, and \begin{equation} {\bf 1} \circ_1 x \enspace = \enspace (1 \bullet x_1, \; \dots, \; 1 \bullet x_n) \enspace = \enspace x, \end{equation} for the same reason. That shows~(\ref{eq:Unite}). Finally, since the symmetric group~$\mathfrak{S}_n$ acts by permuting the letters of a word $(x_1, \dots, x_n)$ of~${\sf T} M(n)$, the maps~$\circ_i$ and the action~$\cdot$ satisfy together~(\ref{eq:Equivariance}). \end{proof} \subsubsection{Monoids morphisms to operads morphisms} Let~$M$ and~$N$ be two monoids and $\theta : M \to N$ be a monoid morphism. Let us denote by~${\sf T} \theta$ the map \begin{equation} {\sf T} \theta : {\sf T} M \to {\sf T} N, \end{equation} defined for all $(x_1, \dots, x_n) \in {\sf T} M(n)$ by \begin{equation} {\sf T} \theta\left(x_1, \dots, x_n\right) := \left(\theta(x_1), \dots, \theta(x_n)\right). \end{equation} \begin{Proposition} \label{prop:TOperadeMorph} If~$M$ and~$N$ are two monoids and $\theta : M \to N$ is a monoid morphism, then the map ${\sf T} \theta : {\sf T} M \to {\sf T} N$ is an operad morphism. \end{Proposition} \begin{proof} Let us respectively denote by~$\bullet_M$ (resp.~$\bullet_N$) and~$1_M$ (resp.~$1_N$) the product and the unit of~$M$ (resp.~$N$). Let $x \in {\sf T} M(n)$, $y \in {\sf T} M(m)$, and $i \in [n]$. 
Since $\theta$ is a monoid morphism, we have \begin{equation} \begin{split} {\sf T} \theta (x \circ_i y) & = {\sf T} \theta\left(x_1, \; \dots, \; x_{i-1}, \; x_i \bullet_M y_1, \; \dots, \; x_i \bullet_M y_m, \; x_{i+1}, \; \dots, \; x_n\right) \\ & = (\theta(x_1), \; \dots, \; \theta(x_{i-1}), \; \theta(x_i \bullet_M y_1), \; \dots, \; \theta(x_i \bullet_M y_m), \; \theta(x_{i+1}), \; \dots, \; \theta(x_n)) \\ & = (\theta(x_1), \; \dots, \theta(x_{i-1}), \; \theta(x_i) \bullet_N \theta(y_1), \; \dots, \; \theta(x_i) \bullet_N \theta(y_m), \; \theta(x_{i+1}), \; \dots, \; \theta(x_n)) \\ & = \left(\theta(x_1), \; \dots, \; \theta(x_n)) \circ_i (\theta(y_1), \; \dots, \; \theta(y_m)\right) \\ & = {\sf T} \theta(x) \circ_i {\sf T} \theta(y). \end{split} \end{equation} Moreover, since~$(1_M)$ is by definition the unit of~${\sf T} M$, we have \begin{equation} {\sf T} \theta\left(1_M\right) \enspace = \enspace \left(\theta\left(1_M\right)\right) \enspace = \enspace \left(1_N\right). \end{equation} Finally, since the symmetric group~$\mathfrak{S}_n$ acts by permuting letters, we have for all $x \in {\sf T} M(n)$ and $\sigma \in \mathfrak{S}_n$, ${\sf T} \theta(x \cdot \sigma) = {\sf T} \theta(x) \cdot \sigma$. The map~${\sf T} \theta$ satisfies the three required properties and hence, since by Proposition~\ref{prop:TOperade}, ${\sf T} M$ and ${\sf T} N$ are operads,~${\sf T} \theta$ is an operad morphism. \end{proof} \subsection{Properties of the construction} \begin{Proposition} \label{prop:TInjSur} Let~$M$ and~$N$ be two monoids and $\theta : M \to N$ be a monoid morphism. If~$\theta$ is injective (resp. surjective), then~${\sf T} \theta$ is injective (resp. surjective). \end{Proposition} \begin{proof} Assume that~$\theta$ is injective and that there are two elements~$x$ and~$y$ of~${\sf T} M$ such that ${\sf T} \theta(x) = {\sf T} \theta(y)$. Then, \begin{equation} {\sf T} \theta(x) \enspace = \enspace (\theta(x_1), \; \dots, \; \theta(x_n)) \enspace = \enspace (\theta(y_1), \; \dots, \; \theta(y_n)) = {\sf T} \theta(y), \end{equation} implying $\theta(x_i) = \theta(y_i)$ for all $i \in [n]$. Since~$\theta$ is injective, we have $x_i = y_i$ for all $i \in [n]$ and thus,~$x = y$. Hence, since~${\sf T} \theta$ is, by Proposition~\ref{prop:TOperadeMorph}, an operad morphism, it also is an injective operad morphism. Assume that~$\theta$ is surjective and let~$y$ be an element of~${\sf T} N(n)$. Since~$\theta$ is surjective, there are some elements~$x_i$ of~$M$ such that $\theta(x_i) = y_i$ for all $i \in [n]$. We have \begin{equation} {\sf T} \theta(x_1, \; \dots, \; x_n) \enspace = \enspace (\theta(x_1), \; \dots, \; \theta(x_n)) \enspace = \enspace (y_1, \; \dots, \; y_n). \end{equation} Hence, since $(x_1, \dots, x_n)$ is by definition an element of~${\sf T} M(n)$, and since~${\sf T} \theta$ is, by Proposition~\ref{prop:TOperadeMorph}, an operad morphism, it also is a surjective operad morphism. \end{proof} \begin{Theoreme} \label{thm:TFonct} The construction~${\sf T}$ is a functor from the category of monoids with monoid morphisms to the category of set-operads with operad morphisms. Moreover, ${\sf T}$ respects injections and surjections. \end{Theoreme} \begin{proof} By Proposition~\ref{prop:TOperade},~${\sf T}$ constructs a set-operad from a monoid, and by Proposition~\ref{prop:TOperadeMorph}, an operad morphism from a monoid morphism. Let~$M$ be a monoid, $\theta : M \to M$ be the identity morphism on~$M$, and~$x$ be an element of~${\sf T} M(n)$. 
We have \begin{equation} {\sf T} \theta(x) \enspace = \enspace (\theta(x_1), \; \dots, \; \theta(x_n)) \enspace = \enspace (x_1, \; \dots, \; x_n) \enspace = \enspace x, \end{equation} showing that~${\sf T} \theta$ is the identity morphism on the operad~${\sf T} M$. Let $(L, \bullet_L)$, $(M, \bullet_M)$, and $(N, \bullet_N)$ be three monoids, $\theta : L \to M$ and $\omega : M \to N$ be two monoid morphisms, and~$x$ be an element of~${\sf T} L(n)$. We have \begin{equation} \begin{split} {\sf T}(\omega \circ \theta)(x) & = \left(\omega\left(\theta(x_1)\right), \; \dots, \; \omega\left(\theta(x_n)\right)\right) \\ & = {\sf T} \omega\left(\theta(x_1), \; \dots, \; \theta(x_n)\right) \\ & = {\sf T} \omega\left( {\sf T} \theta\left(x_1, \; \dots, \; x_n\right)\right) \\ & = ({\sf T} \omega \circ {\sf T} \theta)(x), \end{split} \end{equation} showing that~${\sf T}$ is compatible with map composition. Hence,~${\sf T}$ is a functor, and by Proposition~\ref{prop:TInjSur},~${\sf T}$ also respects injections and surjections. \end{proof} \section{Some operads obtained by the construction} \label{sec:Exemples} Through this Section, we consider examples of applications of the functor~${\sf T}$. We shall mainly consider, given a monoid~$M$, some suboperads of~${\sf T} M$, symmetric or not, and generated by a finite subset of~${\sf T} M$. We shall denote by~$\mathbb{N}$ the additive monoid of integers, and for all~$\ell \geq 1$, by~$\mathbb{N}_\ell$ the quotient of~$\mathbb{N}$ consisting in the set $\{0, 1, \dots, \ell - 1\}$ with the addition modulo $\ell$ as the operation of $\mathbb{N}_\ell$. Note that since~${\sf T}$ is a functor that respects surjective maps (see Theorem~\ref{thm:TFonct}),~${\sf T} \mathbb{N}_\ell$ is a quotient operad of~${\sf T} \mathbb{N}$. The operads constructed in this Section fit into the diagram of non-symmetric operads represented in Figure~\ref{fig:DiagrammeOperades}. Table~\ref{tab:Operades} summarizes some information about these operads. \begin{figure} \caption{The diagram of non-symmetric suboperads and quotients of~${\sf T} \mathbb{N}$. Arrows~$\rightarrowtail$ (resp.~$\twoheadrightarrow$) are injective (resp. 
surjective) non-symmetric operad morphisms.} \label{fig:DiagrammeOperades} \end{figure} \begin{table}[ht] \centering \begin{tabular}{c|c|c|c} Operad & Generators & First dimensions & Combinatorial objects \\ \hline ${\it End}$ & --- & $1, 4, 27, 256, 3125$ & Endofunctions \\ ${\it PF}$ & --- & $1, 3, 16, 125, 1296$ & Parking functions \\ ${\it PW}$ & --- & $1, 3, 13, 75, 541$ & Packed words \\ ${\it Per}$ & --- & $1, 2, 6, 24, 120$ & Permutations \\ ${\it PRT}$ & $01$ & $1, 1, 2, 5, 14, 42$ & Planar rooted trees \\ $\FCat{k}$ & $00$, $01$, \dots, $0k$ & Fuss-Catalan numbers & $k$-Dyck paths \\ ${\it Schr}$ & $00$, $01$, $10$ & $1, 3, 11, 45, 197$ & Schröder trees \\ ${\it Motz}$ & $00$, $010$ & $1, 1, 2, 4, 9, 21, 51$ & Motzkin paths \\ ${\it Comp}$ & $00$, $01$ & $1, 2, 4, 8, 16, 32$ & Integer compositions \\ ${\it DA}$ & $00$, $01$ & $1, 2, 5, 13, 35, 96$ & Directed animals \\ ${\it SComp}$ & $00$, $01$, $02$ & $1, 3, 27, 81, 243$ & Segmented integer compositions \end{tabular} \caption{Generators, first dimensions, and combinatorial objects involved in the non-symmetric suboperads and quotients of~${\sf T} \mathbb{N}$.} \label{tab:Operades} \end{table} \subsection{Endofunctions, parking functions, packed words, and permutations} Neither the set of endofunctions nor the set of parking functions, packed words, and permutations are suboperads of~${\sf T} \mathbb{N}$. Indeed, one has the following counterexample: \begin{equation} \textcolor{Bleu}{1}{\bf 2} \circ_2 \textcolor{Rouge}{12} = \textcolor{Bleu}{1}\textcolor{Rouge}{34}, \end{equation} and, even if~$12$ is a permutation,~$134$ is not an endofunction. Therefore, let us call a word~$u$ a \emph{twisted} endofunction (resp. parking function, packed word, permutation) if the word $(u_1 + 1, u_2 + 1, \dots, u_n + 1)$ is an endofunction (resp. parking function, packed word, permutation). For example, the word~$2300$ is a twisted endofunction since~$3411$ is an endofunction. Let us denote by~${\it End}$ (resp.~${\it PF}$, ${\it PW}$, ${\it Per}$) the set of endofunctions (resp. parking functions, packed words, permutations). Under this reformulation, one has the following result: \begin{Proposition} \label{prop:OpEndFPMT} The sets~${\it End}$, ${\it PF}$, and ${\it PW}$ are suboperads of~${\sf T} \mathbb{N}$. \end{Proposition} For example, we have in ${\it End}$ the following substitution: \begin{equation} \textcolor{Bleu}{2}{\bf 1} \textcolor{Bleu}{23} \circ_2 \textcolor{Rouge}{30313} = \textcolor{Bleu}{2}\textcolor{Rouge}{41424}\textcolor{Bleu}{23}, \end{equation} and the following application of the symmetric group action: \begin{equation} 11210 \cdot 23514 = 12011. \end{equation} Note that~${\it End}$ is not a finitely generated operad. Indeed, the twisted endofunctions $u := u_1 \dots u_n$ satisfying $u_i := n - 1$ for all~$i \in [n]$ cannot be obtained by substitutions involving elements of~${\it End}$ of arity smaller than~$n$. Similarly,~${\it PF}$ is not a finitely generated operad since the twisted parking functions $u := u_1 \dots u_n$ satisfying~$u_i := 0$ for all $i \in [n - 1]$ and $u_n := n - 1$ cannot be obtained by substitutions involving elements of~${\it PF}$ of arity smaller than~$n$. However, the operad~${\it PW}$ is a finitely generated operad: \begin{Proposition} \label{prop:GenerationMT} The operad~${\it PW}$ is the suboperad of~${\sf T} \mathbb{N}$ generated, as a symmetric operad, by the elements~$00$ and~$01$. 
\end{Proposition} Let~$\mathbb{K}$ be a field and let us from now consider that~${\it PW}$ is an operad in the category of $\mathbb{K}$-vector spaces, \emph{i.e.},~${\it PW}$ is the free $\mathbb{K}$-vector space over the set of twisted packed words with substitution maps and the right symmetric group action extended by linearity. Let~$I$ be the free $\mathbb{K}$-vector space over the set of twisted packed words having multiple occurrences of a same letter. \begin{Proposition} \label{prop:IdealDeMT} The vector space $I$ is an operadic ideal of~${\it PW}$. Moreover, the operadic quotient ${\it Per} := {\it PW}/_I$ is the free vector space over the set of twisted permutations. \end{Proposition} One has, for all twisted permutations~$x$ and~$y$, the following expression for the substitution maps in~${\it Per}$: \begin{equation} \label{eq:SubsPartiellePer} x \circ_i y = \begin{cases} x \circ_i y & \mbox{if $x_i = \max x$,} \\ 0_\mathbb{K} & \mbox{otherwise,} \end{cases} \end{equation} where~$0_\mathbb{K}$ is the null vector of~${\it Per}$ and the map~$\circ_i$ in the right member of~(\ref{eq:SubsPartiellePer}) is the substitution map of~${\it PW}$. \subsection{Planar rooted trees} Let~${\it PRT}$ be the non-symmetric suboperad of~${\sf T} \mathbb{N}$ generated by~$01$. One has the following characterization of the elements of~${\it PRT}$: \begin{Proposition} \label{prop:MotsAPE} The elements of~${\it PRT}$ are exactly the words~$x$ on the alphabet~$\mathbb{N}$ that satisfy~$x_1 = 0$ and $1 \leq x_{i + 1} \leq x_i + 1$ for all $i \in [|x| - 1]$. \end{Proposition} Proposition~\ref{prop:MotsAPE} implies that we can regard the elements of arity~$n$ of~${\it PRT}$ as planar rooted trees with~$n$ nodes. Indeed, there is a bijection between words of~${\it PRT}$ and such trees. Given a planar rooted tree~$T$, one computes an element of~${\it PRT}$ by labelling each node~$x$ of~$T$ by its depth and then, by reading its labels following a depth-first traversal of~$T$. Figure~\ref{fig:InterpretationAPE} shows an example of this bijection. \begin{figure} \caption{Interpretation of the elements and substitution of the non-symmetric operad~${\it PRT}$ in terms of planar rooted trees.} \label{fig:InterpretationAPE} \label{fig:SubsAPE} \end{figure} This bijection offers an alternative way to compute the substitution in~${\it PRT}$: \begin{Proposition} \label{prop:SubsAPE} Let~$S$ and~$T$ be two planar rooted trees and~$s$ be the $i$th visited node of~$S$ following its depth-first traversal. The substitution $S \circ_i T$ in~${\it PRT}$ amounts to graft the subtrees of the root of~$T$ as leftmost sons of~$s$. \end{Proposition} Figure~\ref{fig:SubsAPE} shows an example of substitution in~${\it PRT}$. \begin{Proposition} \label{prop:PresentationAPE} The non-symmetric operad~${\it PRT}$ is isomorphic to the free non-symmetric operad generated by one element of arity~$2$. \end{Proposition} Proposition~\ref{prop:PresentationAPE} also says that~${\it PRT}$ is isomorphic to the magmatic operad. Hence,~${\it PRT}$ is a realization of the magmatic operad. Moreover,~${\it PRT}$ can be seen as a planar version of the operad~${\it NAP}$ of Livernet~\cite{Liv06}. \subsection{Generalized Dyck paths} Let~$k \geq 0$ be an integer and~$\FCat{k}$ be the non-symmetric suboperad of~${\sf T} \mathbb{N}$ generated by~$00$,~$01$,~\dots,~$0k$. 
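In practice, the first dimensions reported in Table~\ref{tab:Operades} for such finitely generated suboperads can be checked by closing the generators under the substitution maps, arity by arity. The following Python sketch is a brute-force illustration of ours (the function names and representation choices are not part of the construction itself).
\begin{verbatim}
# Enumerate, arity by arity, the non-symmetric suboperad of T N
# generated by a finite set of words (words are tuples of integers).

def sub(x, i, y):
    """x o_i y in T N (i is 1-based): add x[i-1] to the letters of y."""
    return x[:i - 1] + tuple(x[i - 1] + b for b in y) + x[i:]

def dimensions(generators, max_arity):
    graded = {n: set() for n in range(1, max_arity + 1)}
    graded[1].add((0,))                     # the unit of T N
    for g in generators:
        if len(g) <= max_arity:
            graded[len(g)].add(tuple(g))
    for n in range(3, max_arity + 1):       # arity n from smaller arities
        for a in range(2, n):
            for x in graded[a]:
                for y in graded[n - a + 1]:
                    for i in range(1, a + 1):
                        graded[n].add(sub(x, i, y))
    return [len(graded[n]) for n in range(1, max_arity + 1)]

# PRT is generated by 01; this should print [1, 1, 2, 5, 14, 42],
# matching the first dimensions reported in the table above.
print(dimensions([(0, 1)], 6))
\end{verbatim}
Similarly, \texttt{dimensions([(0, 0), (0, 1)], 5)} should reproduce the Catalan numbers $1, 2, 5, 14, 42$ announced for~$\FCat{1}$.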
One has the following characterization of the elements of~$\FCat{k}$: \begin{Proposition} \label{prop:MotsFCat} The elements of~$\FCat{k}$ are exactly the words~$x$ on the alphabet~$\mathbb{N}$ that satisfy~$x_1 = 0$ and $0 \leq x_{i + 1} \leq x_i + k$ for all $i \in [|x| - 1]$. \end{Proposition} Let us recall that a \emph{$k$-Dyck path} of length~$n$ is a path in~$\mathbb{N}^2$ connecting the points $(0, 0)$ and $((k + 1)n, 0)$ and consisting in~$n$ \emph{up steps} $(1, k)$ and $kn$ \emph{down steps} $(1, -1)$. It is well-known that $k$-Dyck paths are enumerated by Fuss-Catalan numbers~\cite{DM47}. Proposition~\ref{prop:MotsFCat} implies that we can regard the elements of arity~$n$ of~$\FCat{k}$ as $k$-Dyck paths of length~$n$. Indeed, there is a bijection between words of~$\FCat{k}$ and such paths. Given a $k$-Dyck path~$P$, one computes an element~of $\FCat{k}$ by writing, from left to right, the ordinate of the starting point of each up step of~$P$. Figure~\ref{fig:BijFCatKDyck} shows an example of this bijection. \begin{figure} \caption{A $2$-Dyck path and the corresponding element of the non-symmetric operad~$\FCat{2}$.} \label{fig:BijFCatKDyck} \end{figure} Note that the operad~$\FCat{0}$ is the commutative associative operad. Next Theorem elucidates the structure of~$\FCat{1}$: \begin{Theoreme} \label{thm:PresentationFCat1} The non-symmetric operad~$\FCat{1}$ is the free non-symmetric operad generated by two elements~$\FCatOpA$ and~$\FCatOpB$ of arity~$2$, subject to the three relations \begin{minipage}[c]{.4\linewidth} \begin{align} \FCatOpB \circ_1 \FCatOpB & = \FCatOpB \circ_2 \FCatOpB, \\ \FCatOpA \circ_1 \FCatOpB & = \FCatOpB \circ_2 \FCatOpA, \end{align} \end{minipage} \begin{minipage}[c]{.4\linewidth} \begin{align} \FCatOpA \circ_1 \FCatOpA = \FCatOpA \circ_2 \FCatOpB. \end{align} \end{minipage} \end{Theoreme} \subsection{Schröder trees} Let~${\it Schr}$ be the non-symmetric suboperad of~${\sf T} \mathbb{N}$ generated by~$00$, $01$, and~$10$. One has the following characterization of the elements of~${\it Schr}$: \begin{Proposition} \label{prop:MotsSchr} The elements of~${\it Schr}$ are exactly the words~$x$ on the alphabet~$\mathbb{N}$ that have at least one occurrence of~$0$ and, for all letter ${\tt b} \geq 1$ of~$x$, there exists a letter ${\tt a} := {\tt b} - 1$ such that~$x$ has a factor ${\tt a} u {\tt b}$ or ${\tt b} u {\tt a}$ where~$u$ is a word consisting in letters~${\tt c}$ satisfying ${\tt c} \geq {\tt b}$. \end{Proposition} Recall that a \emph{Schröder tree} is a planar rooted tree such that no node has exactly one child. The \emph{leaves} of a Schröder tree are the nodes without children. We call \emph{sector} of a Schröder tree~$T$ a triple $(x, i, j)$ consisting in a node~$x$ and two adjacent edges~$i$ and~$j$, where~$i$ is immediately on the left of~$j$. Proposition~\ref{prop:MotsSchr} implies that we can regard the elements of arity~$n$ of~${\it Schr}$ as Schröder trees with~$n$ leaves. Indeed, there is a bijection between words of~${\it Schr}$ and such trees. Given a Schröder tree~$T$, one computes an element of~${\it Schr}$ by labelling each sector $(x, i, j)$ of~$T$ by the depth of~$x$ and then, by reading the labels from left to right. Figure~\ref{fig:BijSchrMots} shows an example of this bijection. 
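The characterizations given in Propositions~\ref{prop:MotsFCat} and~\ref{prop:MotsSchr} are easy to test mechanically. The following Python sketch is our own transcription of the two statements (in particular of our reading of the factor condition in Proposition~\ref{prop:MotsSchr}) and is meant only as an aid for experimenting with small cases.
\begin{verbatim}
# Membership tests for the word characterizations of FCat^k and Schr.

def in_fcat(x, k):
    """x_1 = 0 and 0 <= x_{i+1} <= x_i + k for all i."""
    return x[0] == 0 and all(0 <= x[i + 1] <= x[i] + k
                             for i in range(len(x) - 1))

def in_schr(x):
    """At least one 0, and every value b >= 1 occurring in x appears in
    a factor (b-1) u b or b u (b-1) whose inner letters are all >= b."""
    if 0 not in x:
        return False
    for b in set(x):
        if b == 0:
            continue
        ok = any({x[i], x[j]} == {b - 1, b}
                 and all(c >= b for c in x[i + 1:j])
                 for i in range(len(x)) for j in range(i + 1, len(x)))
        if not ok:
            return False
    return True

print(in_fcat((0, 1, 0, 2), 2), in_fcat((0, 1, 0, 2), 1))  # True False
print(in_schr((0, 2, 1)), in_schr((0, 2, 2)))              # True False
\end{verbatim}
For instance, $021 = 01 \circ_2 10$ passes the second test, while $022$, which contains the letter~$2$ but no letter~$1$, does not.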
\begin{figure} \caption{A Schröder tree and the corresponding element of the non-symmetric operad~${\it Schr}$.} \label{fig:BijSchrMots} \end{figure} Let us respectively denote by~$\SchrOpA$,~$\SchrOpB$, and~$\SchrOpC$ the generators~$00$, $01$, and~$10$ of~${\it Schr}$. \begin{Proposition} \label{prop:RelationsGenSchr} The generators~$\SchrOpA$, $\SchrOpB$, and~$\SchrOpC$ of~${\it Schr}$ are subject, in degree~$2$, to the seven relations \begin{minipage}[c]{.4\linewidth} \begin{align} \SchrOpA \circ_1 \SchrOpA & = \SchrOpA \circ_2 \SchrOpA, \\ \SchrOpB \circ_1 \SchrOpC & = \SchrOpC \circ_2 \SchrOpB, \\ \SchrOpA \circ_1 \SchrOpB & = \SchrOpA \circ_2 \SchrOpC, \\ \SchrOpB \circ_1 \SchrOpA & = \SchrOpA \circ_2 \SchrOpB, \end{align} \end{minipage} \begin{minipage}[c]{.4\linewidth} \begin{align} \SchrOpA \circ_1 \SchrOpC & = \SchrOpC \circ_2 \SchrOpA, \\ \SchrOpB \circ_1 \SchrOpB & = \SchrOpB \circ_2 \SchrOpA, \\ \SchrOpC \circ_1 \SchrOpA & = \SchrOpC \circ_2 \SchrOpC. \end{align} \end{minipage} \end{Proposition} \subsection{Motzkin paths} Let~${\it Motz}$ be the non-symmetric suboperad of~${\sf T} \mathbb{N}$ generated by~$00$ and~$010$. Since~$00$ and~$01$ generate~$\FCat{1}$ and since $010 = 00 \circ_1 01$, ${\it Motz}$ is a non-symmetric suboperad of~$\FCat{1}$. One has the following characterization of the elements of~${\it Motz}$: \begin{Proposition} \label{prop:ElemMotz} The elements of~${\it Motz}$ are exactly the words~$x$ on the alphabet~$\mathbb{N}$ that begin and end with~$0$ and such that $|x_i - x_{i + 1}| \leq 1$ for all $i \in [|x| - 1]$. \end{Proposition} Recall that a Motzkin path of length~$n$ is a path in~$\mathbb{N}^2$ connecting the points $(0, 0)$ and $(n, 0)$, and consisting of \emph{up steps} $(1, 1)$, \emph{down steps} $(1, -1)$, and \emph{stationary steps} $(1, 0)$. Proposition~\ref{prop:ElemMotz} implies that we can regard the elements of arity~$n$ of~${\it Motz}$ as Motzkin paths of length $n - 1$. Indeed, there is a bijection between words of~${\it Motz}$ and such paths. Given a Motzkin path~$P$, one computes an element of~${\it Motz}$ by writing, from left to right, the ordinate of each point of~$P$. Figure~\ref{fig:BijMotzMots} shows an example of this bijection. \begin{figure} \caption{A Motzkin path and the corresponding element of the non-symmetric operad~${\it Motz}$.} \label{fig:BijMotzMots} \end{figure} Let us respectively denote by~$\MotzOpA$ and~$\MotzOpB$ the generators~$00$ and~$010$ of~${\it Motz}$. \begin{Proposition} \label{prop:RelationsGenMotz} The generators~$\MotzOpA$ and~$\MotzOpB$ of~${\it Motz}$ are subject, in degree~$2$, to the four relations \begin{minipage}[c]{.4\linewidth} \begin{align} \MotzOpA \circ_1 \MotzOpA & = \MotzOpA \circ_2 \MotzOpA, \\ \MotzOpB \circ_1 \MotzOpA & = \MotzOpA \circ_2 \MotzOpB, \end{align} \end{minipage} \begin{minipage}[c]{.4\linewidth} \begin{align} \MotzOpA \circ_1 \MotzOpB & = \MotzOpB \circ_3 \MotzOpA, \\ \MotzOpB \circ_1 \MotzOpB & = \MotzOpB \circ_3 \MotzOpB. \end{align} \end{minipage} \end{Proposition} \subsection{Integer compositions} Let~${\it Comp}$ be the non-symmetric suboperad of~${\sf T} \mathbb{N}_2$ generated by~$00$ and~$01$. Since~$\FCat{1}$ is the non-symmetric suboperad of~${\sf T} \mathbb{N}$ generated by~$00$ and~$01$, and since~${\sf T} \mathbb{N}_2$ is a quotient of~${\sf T} \mathbb{N}$, ${\it Comp}$ is a quotient of~$\FCat{1}$.
One has the following characterization of the elements of~${\it Comp}$: \begin{Proposition} \label{prop:ElemComp} The elements of~${\it Comp}$ are exactly the words on the alphabet~$\{0, 1\}$ that begin by~$0$. \end{Proposition} Proposition~\ref{prop:ElemComp} implies that we can regard the elements of arity~$n$ of~${\it Comp}$ as integer compositions of~$n$. Indeed, there is a bijection between words of~${\it Comp}$ and integer compositions. Given a composition $C := (C_1, C_2, \dots, C_\ell)$, one computes the following element of~${\it Comp}$: \begin{equation} 01^{C_1 - 1} 01^{C_2 - 1} \dots 01^{C_\ell - 1}. \end{equation} Encoding integer compositions by ribbon diagrams offers an alternative way to compute the substitution in~${\it Comp}$: \begin{Proposition} \label{prop:SubsComp} Let~$C$ and~$D$ be two ribbon diagrams, $i$ be an integer, and~$c$ be the $i$th visited box of~$C$ by scanning it from top to bottom and from left to right. Then, the substitution $C \circ_i D$ in~${\it Comp}$ amounts to replacing~$c$ by~$D$ if~$c$ is the upper box of its column, and by the transpose ribbon diagram of~$D$ otherwise. \end{Proposition} Figure~\ref{fig:SubsComp} shows two examples of substitution in~${\it Comp}$. \begin{figure} \caption{Two examples of substitutions in the non-symmetric operad~${\it Comp}$.} \label{fig:SubsComp} \end{figure} \begin{Theoreme} \label{thm:PresentationComp} The non-symmetric operad~${\it Comp}$ is the free non-symmetric operad generated by two elements~$\CompOpA$ and~$\CompOpB$ of arity~$2$, subject to the four relations \begin{minipage}[c]{.4\linewidth} \begin{align} \CompOpA \circ_1 \CompOpA & = \CompOpA \circ_2 \CompOpA, \\ \CompOpB \circ_1 \CompOpA & = \CompOpA \circ_2 \CompOpB, \end{align} \end{minipage} \begin{minipage}[c]{.4\linewidth} \begin{align} \CompOpB \circ_1 \CompOpB & = \CompOpB \circ_2 \CompOpA, \\ \CompOpA \circ_1 \CompOpB & = \CompOpB \circ_2 \CompOpB. \end{align} \end{minipage} \end{Theoreme} \subsection{Directed animals} Let~${\it DA}$ be the non-symmetric suboperad of~${\sf T} \mathbb{N}_3$ generated by~$00$ and~$01$. Since~$\FCat{1}$ is the non-symmetric suboperad of~${\sf T} \mathbb{N}$ generated by~$00$ and~$01$, and since~${\sf T} \mathbb{N}_3$ is a quotient of~${\sf T} \mathbb{N}$, ${\it DA}$ is a quotient of~$\FCat{1}$. From now on, we shall represent by~$-1$ the element~$2$ of~$\mathbb{N}_3$. With this encoding, let \begin{equation} \phi : \{-1, 0, 1\}^n \to \{-1, 0, 1\}^{n - 1}, \end{equation} be the map defined for all ${\tt a}, {\tt b} \in \{-1, 0, 1\}$ by \begin{equation} \phi({\tt a}) := \epsilon \qquad \mbox{ and } \qquad \phi({\tt a} \cdot {\tt b} \cdot u) := ({\tt b} - {\tt a}) \cdot \phi({\tt b} \cdot u), \end{equation} where the difference ${\tt b} - {\tt a}$ is computed in~$\mathbb{N}_3$. For example, the element $x := 011220201$ of~${\it DA}$ is represented by the word $x' := 011-\!\!1-\!\!10-\!\!101$ and we have $\phi(x') = 10101-\!\!111$. \begin{Proposition} \label{prop:BijAnDPrefMotz} By interpreting letters~$-1$ (resp.~$0$, $1$) as down (resp. stationary, up) steps, the map~$\phi$ induces a bijection between the elements of~${\it DA}$ of arity~$n$ and the prefixes of Motzkin paths of length~$n - 1$. \end{Proposition} Recall that a \emph{directed animal} is a subset~$A$ of~$\mathbb{N}^2$ such that $(0, 0) \in A$ and $(i, j) \in A$ with~$i \geq 1$ or~$j \geq 1$ implies $(i - 1, j) \in A$ or $(i, j - 1) \in A$.
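To make the map~$\phi$ concrete, here is a small Python sketch (the function names are ours); it also checks the prefix-of-Motzkin-path property stated in Proposition~\ref{prop:BijAnDPrefMotz}.
\begin{verbatim}
# The map phi: consecutive differences in N_3, with 2 written as -1.

def phi(word):
    out = []
    for a, b in zip(word, word[1:]):
        d = (b - a) % 3
        out.append(d if d != 2 else -1)
    return tuple(out)

def is_motzkin_prefix(steps):
    """Read -1/0/1 as down/stationary/up steps and check that the
    path never goes below height 0."""
    h = 0
    for s in steps:
        h += s
        if h < 0:
            return False
    return True

x = (0, 1, 1, -1, -1, 0, -1, 0, 1)    # the element 011220201 of DA
print(phi(x))                          # (1, 0, 1, 0, 1, -1, 1, 1)
print(is_motzkin_prefix(phi(x)))       # True
\end{verbatim}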
Using a bijection of Gouyou-Beauchamps and Viennot~\cite{GBV88} between directed animals of size~$n$ and prefixes of Motzkin paths of length~$n - 1$, one obtains, by Proposition~\ref{prop:BijAnDPrefMotz}, a bijection between the elements of~${\it DA}$ of arity~$n$ and directed animals of size~$n$. Hence,~${\it DA}$ is a non-symmetric operad on directed animals. \subsection{Segmented integer compositions} Let~${\it SComp}$ be the non-symmetric suboperad of~${\sf T} \mathbb{N}_3$ generated by~$00$, $01$, and~$02$. Since~$\FCat{2}$ is the non-symmetric suboperad of~${\sf T} \mathbb{N}$ generated by~$00$, $01$, and~$02$, and since~${\sf T} \mathbb{N}_3$ is a quotient of~${\sf T} \mathbb{N}$, ${\it SComp}$ is a quotient of~$\FCat{2}$. One has the following characterization of the elements of~${\it SComp}$: \begin{Proposition} \label{prop:ElemSComp} The elements of~${\it SComp}$ are exactly the words on the alphabet~$\{0, 1, 2\}$ that begin by~$0$. \end{Proposition} Recall that a \emph{segmented integer composition} of~$n$ is a sequence $(S_1, \dots, S_\ell)$ of integers compositions such that~$S_1$ is an integer composition of~$n_1$, \dots, $S_\ell$ is an integer composition of~$n_\ell$, and $n_1 + \dots + n_\ell = n$. Proposition~\ref{prop:ElemSComp} implies that we can regard the elements of arity~$n$ of~${\it SComp}$ as segmented integer compositions of~$n$. Indeed, there is a bijection between words of~${\it SComp}$ and segmented compositions since there are~$3^{n - 1}$ segmented compositions of~$n$ and also, by the above Proposition, $3^{n - 1}$ elements of~${\it SComp}$ of arity~$n$. \subsection{The diassociative operad} Let~$M$ be the submonoid of the multiplicative monoid restricted to the set~$\{0, 1\}$. Let~${\it D}$ be the non-symmetric suboperad~of ${\sf T} M$ generated by~$01$ and~$10$. One has the following characterization of the elements of~${\it D}$: \begin{Proposition} \label{prop:ElemD} The elements of~${\it D}$ are exactly the words on the alphabet~$\{0, 1\}$ that contain exactly one~$1$. \end{Proposition} Recall that the \emph{diassociative operad}~\cite{Lod01}~${\it Dias}$ is the non-symmetric operad generated by two elements~$\dashv$ and~$\vdash$ of arity~$2$, subject only to the relations \begin{minipage}[c]{.4\linewidth} \begin{align} \dashv \circ_1 \dashv \enspace = \enspace \dashv \circ_2 \dashv & \enspace = \enspace \dashv \circ_2 \vdash, \\ \vdash \circ_2 \vdash \enspace = \enspace \vdash \circ_1 \vdash & \enspace = \enspace \vdash \circ_1 \dashv, \end{align} \end{minipage} \begin{minipage}[c]{.4\linewidth} \begin{equation} \dashv \circ_1 \vdash \enspace = \enspace \vdash \circ_2 \dashv. \end{equation} \end{minipage} \begin{Proposition} \label{prop:IsoDiasD} The non-symmetric operads~${\it D}$ and~${\it Dias}$ are isomorphic. The map $\phi : {\it Dias} \to {\it D}$ defined by $\phi(\dashv) := 10$ and $\phi(\vdash) := 01$ is an isomorphism. \end{Proposition} Proposition~\ref{prop:IsoDiasD} also says that~${\it D}$ is a realization of the diassociative operad. \end{document}
\begin{document} \title[A Crystal Surface Model]{Existence theorems for a crystal surface model involving the p-Laplace operator} \author{Xiangsheng Xu}\thanks {Department of Mathematics and Statistics, Mississippi State University, Mississippi State, MS 39762. {\it Email}: [email protected]. To appear in SIAM J. Math. Anal..} \keywords{Crystal surface model, existence, exponential function of a p-Laplacian, nonlinear fourth order equations. } \subjclass{ 35D30, 35Q82, 35A01.} \begin{abstract} The manufacturing of crystal films lies at the heart of modern nanotechnology. How to accurately predict the motion of a crystal surface is of fundamental importance. Many continuum models have been developed for this purpose, including a number of PDE models, which are often obtained as the continuum limit of a family of kinetic Monte Carlo models of crystal surface relaxation that includes both the solid-on-solid and discrete Gaussian models. In this paper we offer an analytical perspective into some of these models. To be specific, we study the existence of a weak solution to the boundary value problem for the equation $ - \Delta e^{-\mbox{div}\left(|\nabla u|^{p-2}\nabla u\right)}+au=f$, where $p>1, a>0$ are given numbers and $f$ is a given function. This problem is derived from a crystal surface model proposed by J.L.~Marzuola and J.~Weare (2013 Physical Review, E 88, 032403). The mathematical challenge is due to the fact that the principal term in our equation is an exponential function of a p-Laplacian. Existence of a suitably-defined weak solution is established under the assumptions that $p\in(1,2], \ N\leq 4$, and $f\in W^{1,p}$. Our investigations reveal that the key to our existence assertion is how to control the set where $-\mbox{div}\left(|\nabla u|^{p-2}\nabla u\right)$ is $\pm\infty$. \end{abstract} \maketitle \section{Introduction} Let $\Omega$ be a bounded domain in $\mathbb{R}^N$ with smooth boundary $\partial\Omega $. Given that $p>1$, $a>0$, and a function $f=f(x)$, we consider the boundary value problem \begin{eqnarray} -\Delta e^{-\Delta_pu} +a u&=&f\ \ \mbox{in $\Omega$,}\label{sta1}\\ \nabla u\cdot\nu&=&\nabla e^{-\Delta_pu}\cdot\nu=0\ \ \mbox{on $\partial\Omega $,}\label{sta2} \end{eqnarray} where $\Delta_p$ is the $p$-Laplacian, i.e., $\Delta_pu=\mbox{div}\left(|\nabla u|^{p-2}\nabla u\right)$, and $\nu$ is the unit outward normal to $\partial\Omega $. Our interest in the problem originates in the mathematical description of the evolution of a crystal surface. The surface of a crystal below the roughing temperature consists of steps and terraces. According to the Burton, Cabrera and Frank (BCF) model \cite{BCF}, atoms detach from the steps, diffuse across terraces, and reattach at new locations, inducing an overall evolution of the crystal surface. At the nanoscale, the motion of steps is described by large systems of ordinary differential equations for step positions (\cite{AKW}, \cite{GLL2}). At the macro-scale, this description is often reduced conveniently to nonlinear PDEs for macroscopic variables such as the surface height and slope profiles (see \cite{K, GLL2} and the references therein). To see the connection between our problem \eqref{sta1}-\eqref{sta2} and certain existing continuum models, we first observe from the conservation of mass that the dynamic equation for the surface height profile $u(t, x)$ of a solid film is governed by \begin{equation} \partial_t u + \mbox{div} J = 0, \end{equation} where $J$ is the adatom flux. 
By Fick's law \cite{MK}, $J$ can be written as $$J = -M(\nabla u)\nabla\rho_s.$$ Here $M(\nabla u)$ is the mobility and $\rho_s$ is the local equilibrium density of adatoms. On account of the Gibbs-Thomson relation \cite{KMCS, RW, MK}, which is connected to the theory of molecular capillarity, the corresponding local equilibrium density of adatoms is determined by $$ \rho_s=\rho_0e^{\frac{\mu}{kT}},$$ where $\mu$ is the chemical potential, $\rho_0$ is a constant reference density, $T$ is the temperature, and $k$ is the Boltzmann constant. Denote by $\Omega$ the ``step locations area" of interest. Then we can take the general surface energy $G(u)$ to be \begin{eqnarray} G(u)=\frac{1}{p}\int_\Omega|\nabla u|^pdx,\ \ p\geq 1. \end{eqnarray} The justification for this, as observed in \cite{MW}, is that it can retain many of the interesting features of the microscopic system that are lost in the more standard scaling regime. The chemical potential $\mu$ is defined as the change per atom in the surface energy. That is, \begin{equation} \mu=\frac{\delta G}{\delta u}=-\Delta_pu. \end{equation} After incorporating those physical parameters into the scaling of the time and/or spatial variables \cite{GLL3,LMM}, we can rewrite the evolution equation for $u $ as \begin{equation}\label{exp} \partial_t u=\mbox{div}\left(M(\nabla u)\nabla e^{\frac{\delta G}{\delta u}}\right). \end{equation} In the diffusion-limited (DL) regime, where the dynamics is dominated by the diffusion across the terraces and $M \equiv 1$ the above equations reduces to \begin{equation} \partial_t u=\mbox{div}\left(\nabla e^{\frac{\delta G}{\delta u}}\right)\label{p1} =\Delta e^{-\Delta_pu}. \end{equation} This equation is assumed to hold in a space-time domain $\Omega_T\equiv \Omega\times(0,T)$, $ T>0$, coupled with the following initial boundary conditions \begin{eqnarray} \nabla u\cdot\nu=\nabla e^{-\Delta_pu}\cdot\nu &=& 0 \ \ \ \mbox{ on $\Sigma_T\equiv \partial\Omega\times(0,T)$},\label{p2}\\ u(x,0)&=& u_0(x) \ \ \ \mbox{on $\Omega$.}\label{p3} \end{eqnarray} As we shall see, a priori estimates for this problem are rather weak. As a result, an existence theorem seems to be hopeless. Instead, we focus on the associated stationary problem. That is, we discretize the time derivative in \eqref{p1}, thereby obtaining the following stationary equation \begin{equation} \frac{u-v}{\delta}-\Delta e^{-\Delta_pu}=0\ \ \mbox{in $\Omega$.} \end{equation} Here $v$ is a given function. Initially, $v=u_0(x)$. The positive number $\delta$ is the step size. Set $a=\frac{1}{\delta}$ and $f=\frac{1}{\delta}v$. This leads to the boundary value problem \eqref{sta1}-\eqref{sta2}. The objective of this paper is to establish an existence assertion for the stationary problem problem \eqref{sta1}-\eqref{sta2}, while the time-dependent problem \eqref{p1}-\eqref{p3} is left open. If we linearize the exponential term \begin{equation} e^{-\Delta_pu}\approx 1-\Delta_pu, \end{equation} then \eqref{p1} reduces to \begin{equation} \partial_tu+\Delta\Delta_pu=0. \end{equation} Giga-Kohn \cite{GK} proved that there is a finite time extinction for the equation when $p > 1$. For the difficult case of $p = 1$, Giga-Giga \cite{GG} developed an $H^{-1}$ total variation gradient flow to analyze this equation and they showed that the solution may instantaneously develop jump discontinuity in the explicit example of important crystal facet dynamics. 
This explicit construction of the jump discontinuity solution for facet dynamics was extended to the exponential PDE in \cite{LMM}. The time-dependent problem in the case where $p=2$ has been investigated in \cite{LX}. The mathematical novelty there is that the exponent $-\Delta u$ is only a measure. But the singular part of the measure is such that the composition $e^{-\Delta u}$ is still a well-defined function. A gradient flow approach to the problem can be found in \cite{GLL3}. We also would like to mention two other related articles \cite{GLLX, LX2}. Note that if $p=2$ then the principal term in \eqref{sta1}, i.e., $e^{-\Delta u}$, can be viewed as a monotone operator in a suitable function space. This property is essential to the results in \cite{LX,GLL3}. If $p\ne 2$, this property is no longer true. Moreover, the exponent becomes nonlinear. Subsequently, we lose most of the a priori estimates in \cite{LX}. What remains is collected in the following \begin{lem}\label{lapri} If $u$ is a classical solution of \eqref{p1}-\eqref{p3}, then we have \begin{eqnarray} \frac{1}{p}\int_{\Omega}|\nabla u(x,s)|^p \, dx +4\int_{\Omega_s}\left|\nabla \sqrt{\rho}\right|^2 \, dx\, dt &=& \frac{1}{p}\int_{\Omega}|\nabla u_0(x)|^p\, dx,\label{nm2}\\ \int_{\Omega}\ln\rho \, dx&=&0,\label{nm3}\\ \int_\Omega u(x,t)\, dx&=&\int_\Omega u_0(x)\, dx,\label{nm5} \end{eqnarray} where $s>0$, $\Omega_{s}=\Omega\times(0,s)$, and \begin{equation}\label{nm6} \rho=e^{-\Delta_pu}. \end{equation} \end{lem} \begin{proof} We calculate \begin{eqnarray} \int_\Omega\Delta_pu\,\partial_t u\, dx&=&-\int_\Omega|\nabla u|^{p-2}\nabla u\cdot\nabla \partial_t u\, dx=-\frac{1}{p}\frac{d}{dt}\int_\Omega|\nabla u|^p\, dx,\\ \int_{\Omega}\Delta e^{-\Delta_pu}\cdot\Delta_pu \, dx &=&-\int_{\Omega}\nabla e^{-\Delta_pu}\cdot\nabla\Delta_pu\, dx\nonumber\\ &=&\int_{\Omega}e^{-\Delta_pu}\left|\nabla\Delta_pu\right|^2 \, dx\nonumber\\ &=&4\int_{\Omega}\left|\nabla e^{-\frac{1}{2}\Delta_pu}\right|^2 \, dx. \end{eqnarray} Multiply through \eqref{p1} by $\Delta_pu$ and integrate the resulting equation with respect to the space variables over $\Omega$ to obtain \begin{equation}\label{r11} \frac{1}{p}\frac{d}{dt}\int_{\Omega}|\nabla u(x,t)|^p\, dx +4\int_{\Omega}\left|\nabla e^{-\frac{1}{2}\Delta_pu}\right|^2 \, dx=0. \end{equation} Integrate \eqref{r11} with respect to $t$ to arrive at \eqref{nm2}. By \eqref{nm6}, we have \begin{equation} -\Delta_pu =\ln\rho \ \ \ \mbox{on $\Omega$.} \end{equation} Integrate the above equation over $\Omega$ to obtain \eqref{nm3}. Similarly, we can integrate \eqref{p1} over $\Omega$ to get \eqref{nm5}. \end{proof} Unfortunately, this lemma is not enough for an existence assertion for problem \eqref{p1}-\eqref{p3}. To gain any further results, we are facing two main challenges. First, it does not seem possible to derive any meaningful estimates in the time variable such as estimates (6) and (9) in \cite{LX}. Second, do equations \eqref{nm2} and \eqref{nm3} really imply that $\rho\in L^{q}(\Omega_T)$ for some $q\geq 1$ in the context here? Obviously, the two are interconnected. In the stationary problem \eqref{sta1}-\eqref{sta2}, of course, the first challenge mentioned earlier goes away, but the second one remains. Thus the main mathematical interest of problem \eqref{sta1}-\eqref{sta2} is how to suitably interpolate between $\ln \rho$ and $\nabla\sqrt{\rho}$. We must point out that condition \eqref{nm3} is rather weak.
Indeed, we can easily construct a sequence $\{f_j\}$ such that \begin{eqnarray} f_j&\rightarrow& \infty \ \ \mbox{a.e. on $\Omega$ as $j\rightarrow \infty$, and}\\ \int_\Omega f_j dx &=& 0. \end{eqnarray} For example, take $\Omega=(0,1)$ and define $$f_j(s)=\left\{\begin{array}{ll} j & \mbox{if $0\leq s<\frac{1}{j}$,}\\ j-4j^3(s-\frac{1}{j})&\mbox{if $\frac{1}{j}\leq s<\frac{3}{2j}$,}\\ -2j^2+j+4j^3(s-\frac{3}{2j})&\mbox{if $\frac{3}{2j}\leq s<\frac{2}{j}$,}\\ j&\mbox{if $\frac{2}{j}\leq s\leq 1$.} \end{array}\right. $$ Note that $f_j$ is continuous, piecewise linear, and satisfies the boundary condition \eqref{sta2}. This shows that condition \eqref{nm3} alone cannot prevent a sequence from blowing up almost everywhere. On the other hand, we obviously cannot prevent a sequence satisfying the boundary condition \eqref{sta2} from going to infinity if we only have some control on its partial derivatives. That is to say, neither \eqref{nm2} nor \eqref{nm3} alone is sufficient for our purpose, and we must find the right combination of the two and equation \eqref{sta1}. This constitutes the core of our mathematical analysis. Our investigations reveal that the set where $\Delta_pu$ is negative infinity and the set where it is positive infinity play two significantly different roles, with the former commanding most of our attention, while the latter is similar to the case already considered in \cite{LX,LX2}. In view of Lemma \ref{lapri} and the analysis in \cite{LX}, we can give the following definition of a weak solution. \begin{defn} We say that a pair $(u,\rho)$ is a weak solution to \eqref{sta1}-\eqref{sta2} if the following conditions hold: \begin{enumerate} \item[(D1)] $\rho\in W^{2,p^*}(\Omega)$, $u\in W^{1,p}(\Omega)$, $\Delta_pu\in \mathcal{M}(\overline{\Omega})\cap \left(W^{1,p}(\Omega)\right)^*$, where $p^*=\frac{Np}{N-p}$, $\left(W^{1,p}(\Omega)\right)^*$ is the dual space of $W^{1,p}(\Omega)$, and $\mathcal{M} (\overline{\Omega})$ is the space of bounded Radon measures on $\overline{\Omega}$; \item[(D2)] Let $$-\Delta_pu=g_a+\nu_s$$ be the Lebesgue decomposition of $-\Delta_pu$ (\cite{EG}, p.42). That is, $g_a\in L^1(\Omega)$ and the support of $\nu_s$, denoted by $A_0$, has Lebesgue measure $0$. Then there holds \begin{equation} \rho=e^{\, g_a}\ \ \ \mbox{a.e. on $\Omega$;}\label{ns1} \end{equation} \item[(D3)] We have \begin{eqnarray} - \Delta \rho+au &=&f \ \ \ \mbox{a.e. on $\Omega$,}\label{ow11}\\ \nabla \rho\cdot\nu &=& 0 \ \ \ \mbox{a.e. on $\partial\Omega $}. \end{eqnarray} The boundary condition $\nabla u\cdot\nu=0$ on $\partial\Omega $ is satisfied in the sense $$ \langle -\Delta_pu, \xi \rangle =\int_\Omega|\nabla u|^{p-2}\nabla u \cdot \nabla\xi \, dx \ \ \ \mbox{for all $\xi\in W^{1,p}(\Omega)$,}$$ where $\langle \cdot,\cdot \rangle$ is the duality pairing between $W^{1,p}(\Omega)$ and $\left(W^{1,p}(\Omega)\right)^*$. \end{enumerate} \end{defn} An example in \cite{LX} shows that the singular part in $\Delta_pu$ is an intrinsic property of our solutions. Physically, the singularities represent rupture defects and pinning in the surface evolution due to the asymmetry in the exponential curvature-dependent mobility. The rupture of pinning points in epitaxial growth models was carefully studied numerically in \cite{MW}. An easy way to remove the singular part of $-\Delta_pu$ is to add a lower-order perturbation to equation \eqref{sta1}. 
To be precise, we consider the problem \begin{eqnarray} -\Delta e^{-\Delta_pu} +\varepsilon\Delta_pu +au& =&f \ \ \ \mbox{in $\Omega$}\label{ota20}\\ \nabla u\cdot\nu=\nabla e^{-\Delta_pu}\cdot\nu &=& 0 \ \ \ \mbox{ on $\partial\Omega $},\label{pp2} \end{eqnarray} where $\varepsilon>0$ is a small perturbation parameter. In this case, we will have $\Delta_pu\in L^2(\Omega)$. Indeed, we use $-\Delta_pu$ as a test function in \eqref{ota20} to obtain \begin{equation} \int_{\Omega}|\nabla u|^p\, dx +\int_{\Omega}|\nabla e^{-\frac{1}{2}\Delta_pu}|^2\, dx +\varepsilon\int_{\Omega}(\Delta_pu)^2\, dx\leq c\int_{\Omega}|\nabla f|^p\, dx. \end{equation} Our main result is the following \begin{thm}\label{th1.1} Assume that $\Omega$ is a bounded domain in $\mathbb{R}^N$ with $C^{2,\alpha}$ boundary for some $\alpha\in (0,1)$, $N\leq 4$, $a>0$, and $f\in W^{1,p}(\Omega)$ with $1<p\leq 2$. Then there is a weak solution to \eqref{sta1}-\eqref{sta2}. \end{thm} We have not considered the case where $p=1$. The physical relevance of this case can be found in \cite{KDM}. It is also related to motion by surface curvature. Our key compactness result, Claim \ref{gup}, relies on (ii) in Lemma \ref{plap}, which fails when $p=1$. Thus it would be interesting to know if we can take the limit of our solutions as $p\rightarrow 1$. Theorem \ref{th1.1} should also hold for $p> 2$. In the remark following Claim \ref{posi} below we shall see why we have to require $p\leq 2$. Since we allow $p$ to be arbitrarily close to $1$, the Sobolev embedding theorem forces us to impose the condition $N\leq 4$. The uniqueness assertion for problem \eqref{sta1}-\eqref{sta2} is still open. The difficulty here is due to the fact that the operator $-\Delta e^{-\Delta_pu}$ does not seem to be monotone anymore for $p\ne 2$. A solution to \eqref{sta1}-\eqref{sta2} will be constructed as the limit of a sequence of approximate solutions. The key is to design an approximation scheme that can generate sufficiently regular approximate solutions so that all the preceding formal calculations can be made rigorous. Then we must be able to show that the sequence of approximate solutions does not converge to infinity a.e. on $\Omega$. This is accomplished in Section 3. In Section 2 we state a few preparatory lemmas, while in Section 4 we make some further remarks about the time-dependent problem. Finally, we make some remarks about the notation. The letter $c$ denotes a positive constant whose value can, in principle, be computed from the various given data. In the applications of the Sobolev embedding theorems, whenever the term $N-2$ appears in a denominator, it is understood that $N>2$, because the case where $N=2$ can always be handled separately. \section{Preliminaries} In this section we state a few preparatory lemmas. Relevant interpolation inequalities for Sobolev spaces are listed in the following lemma. \begin{lem}\label{linterp} Let $\Omega$ be a bounded domain in $\mathbb{R}^N$. Denote by $\|\cdot\|_p$ the norm in the space $L^p(\Omega)$. 
Then we have: \begin{enumerate} \item $ \|f\|_q\leq\varepsilon\|f\|_r+\varepsilon^{-\sigma} \|f\|_p$, where $\varepsilon>0, p\leq q\leq r$, and $\sigma=\left(\frac{1}{p}-\frac{1}{q}\right)/\left(\frac{1}{q}-\frac{1}{r}\right)$; \item If $\partial\Omega $ is Lipschitz, then for each $\varepsilon >0$ and each $q\in (1, p^*)$, where $p^*=\frac{pN}{N-p}$ if $N>p\geq 1$ and any number bigger than $p$ if $N=p$, there is a positive number $c=c(\varepsilon, p)$ such that \begin{eqnarray} \|f\|_q&\leq &\varepsilon\|\nabla f\|_p+c\|f\|_1\ \ \mbox{for all $f\in W^{1,p}(\Omega)$}.\label{otn9} \end{eqnarray} \item If $\partial\Omega $ is $C^{2, \alpha}$ for some $\alpha\in (0,1)$ and $q$ is given as in (2), then \begin{eqnarray} \|\nabla g\|_q&\leq &\varepsilon\|\Delta g\|_p+c\|g\|_1\ \ \mbox{for all $g\in W^{2,p}(\Omega)$}. \label{otn10} \end{eqnarray} \end{enumerate} \end{lem} This lemma can be found in \cite{GT,G,PS}. The next lemma collects a few frequently used elementary inequalities. \begin{lem}\label{elmen} For $x,y\in \mathbb{R}^N$ and $\ a, b\in \mathbb{R}^+$, we have: \begin{enumerate} \item[(4)] $|x|^{p-2}x\cdot(x-y)\geq \frac{1}{p}(|x|^p-|y|^p);$ \item[(5)] $ab\leq \varepsilon a^p+\frac{1}{\varepsilon^{q/p}}b^q \ \mbox{if $\varepsilon>0,\, p, \, q>1$ with $\frac{1}{p}+\frac{1}{q}=1$}.$ \end{enumerate} \end{lem} \begin{lem}\label{plap}Let $x,y$ be any two vectors in $\mathbb{R}^N$. Then: \begin{enumerate} \item[\textup{(i)}] For $p\geq 2$, \begin{equation*} \left(\left(|x|^{p-2}x-|y|^{p-2}y\right)\cdot(x-y)\right)\geq \frac{1}{2^{p-1}}|x-y|^{p}; \end{equation*} \item[\textup{(ii)}] For $1<p\leq 2$, \begin{equation*} \left(1+|x|^2+|y|^2\right)^{\frac{2-p}{2}}\left(\left(|x|^{p-2}x-|y|^{p-2}y\right)\cdot(x-y)\right)\geq (p-1)|x-y|^2. \end{equation*} \end{enumerate} \end{lem} The proof of this lemma is contained in (\cite{O}, p. 146-148). \begin{lem}\label{l21} Let $\Omega$ be a bounded domain in $\mathbb{R}^N$ with Lipschitz boundary $\partial\Omega $. Consider the problem \begin{eqnarray} -\Delta_pu +\tau |u|^{p-2}u&=& f\ \ \textup{in $\Omega$,}\label{of1}\\ \nabla u\cdot\nu&=&0\ \ \textup{on $\partial\Omega $,}\label{of2} \end{eqnarray} where $\tau>0, \ p>1, \ f\in L^{\frac{p}{p-1}}(\Omega)$. Without loss of generality, we also assume \begin{equation} p<N. \end{equation} Then there is a unique weak solution $u$ to the above problem in the space $W^{1,p}(\Omega)$. Furthermore, if $f$ also lies in the space $L^{q}(\Omega)$ with \begin{equation}\label{of3} q>\frac{N}{p}, \end{equation} $u$ is bounded and we have the estimate \begin{equation}\label{of4} \|u\|_\infty \leq c\|u\|_1+c\left(\|f\|_q\right)^{\frac{1}{p-1}}. \end{equation} \end{lem} \begin{proof} We do not believe that the estimate \eqref{of4} is new. Since we cannot find a good reference for it, we shall offer a proof here. We employ a technique of iteration of $L^q$ norms originally due to Moser \cite{M}. Without loss of generality, assume \begin{equation} \|u^+\|_\infty=\|u\|_\infty. \end{equation} Set $b=(\|f\|_q)^{\frac{1}{p-1}}$. 
For each $s> \frac{p-1}{p}$ we use $\frac{s^p}{ps-p+1}(u^++b)^{ps-p+1}$ as a test function in \eqref{of1} to obtain \begin{eqnarray} \lefteqn{s^p\int_\Omega(u^++b)^{ps-p}|\nabla u^+|^pdx}\nonumber\\ &&+\frac{\tau s^p}{ps-p+1}\int_\Omega |u|^{p-2}u(u^++b)^{ps-p+1}dx\nonumber\\ &=&\frac{s^p}{ps-p+1}\int_\Omega f(u^++b)^{ps-p+1}dx.\label{os1} \end{eqnarray} Note that \begin{eqnarray} \int_\Omega|u|^{p-2} u^-(u^++b)^{ps-p+1}dx &=&b^{ps-p+1}\int_\Omega|u|^{p-2} u^-dx.\label{os2} \end{eqnarray} Letting $u=u^+-u^-$ in \eqref{of1} and integrating the resulting equation over $\Omega$, which amounts to using $1$ as a test function in the equation, yield \begin{equation} \tau\int_\Omega|u|^{p-2} u^+dx=\tau\int_\Omega|u|^{p-2} u^-dx+\int_\Omega fdx. \end{equation} Substitute this into \eqref{os2} to obtain \begin{eqnarray} \int_\Omega |u|^{p-2}u^-(u^++b)^{ps-p+1}dx&=&b^{ps-p+1}\int_\Omega |u|^{p-2}u^+dx-\frac{b^{ps-p+1}}{\tau}\int_\Omega fdx\nonumber\\ &\leq &\int_\Omega(u^++b)^{ps}dx+\frac{1}{\tau}\int_\Omega |f|(u^++b)^{ps-p+1}dx.\label{os3} \end{eqnarray} Keeping this in mind, we can derive from \eqref{os1} that \begin{eqnarray} \lefteqn{\int_\Omega \left|\nabla(u^++b)^s\right|^pdx}\nonumber\\ &\leq&\frac{\tau s^p}{ps-p+1}\int_\Omega (u^++b)^{ps}dx+\frac{2 s^p}{ps-p+1}\int_\Omega |f|(u^++b)^{ps-p+1}dx\nonumber\\ &\leq&\frac{\tau s^p}{ps-p+1}\int_\Omega (u^++b)^{ps}dx\nonumber\\ &&+\frac{2 s^p}{ps-p+1}\|f(u^++b)^{-p+1}\|_q\left(\int_\Omega (u^++b)^{\frac{psq}{q-1}}dx\right)^{\frac{q-1}{q}}\nonumber\\ &\leq&\frac{\tau s^p}{ps-p+1}\int_\Omega (u^++b)^{ps}dx+\frac{2 s^p}{ps-p+1}\left(\int_\Omega (u^++b)^{\frac{psq}{q-1}}dx\right)^{\frac{q-1}{q}}\nonumber\\ &\leq&\frac{c s^p}{ps-p+1}\left(\int_\Omega (u^++b)^{\frac{psq}{q-1}}dx\right)^{\frac{q-1}{q}}, \ \ c=c(\Omega, \tau, q).\label{os4} \end{eqnarray} Here we have used the fact that \begin{equation} \|f(u^++b)^{-p+1}\|_q=\left(\int_\Omega \frac{|f|^q}{(u^++b)^{(p-1)q}}dx\right)^{\frac{1}{q}}\leq \frac{\|f\|_q}{b^{p-1}}=1. \end{equation} With the aid of the Sobolev inequality, we obtain \begin{eqnarray} \lefteqn{ \left(\int_\Omega (u^++b)^{\frac{sNp}{N-p}}dx\right)^{\frac{N-p}{N}}}\nonumber\\ &\leq&c\int_\Omega \left|\nabla(u^++b)^{s}\right|^pdx+c\int_\Omega (u^++b)^{sp}dx\nonumber\\ &\leq&\frac{c s^p}{ps-p+1}\left(\int_\Omega (u^++b)^{\frac{psq}{q-1}}dx\right)^{\frac{q-1}{q}}+c\int_\Omega (u^++b)^{sp}dx\nonumber\\ &\leq&\frac{c s^p}{ps-p+1}\left(\int_\Omega (u^++b)^{\frac{psq}{q-1}}dx\right)^{\frac{q-1}{q}}. \end{eqnarray} The last step is due to the fact that $$\frac{ s^p}{ps-p+1}\geq 1. $$ Set $\chi=\frac{N}{N-p}/\frac{q}{q-1}$. Our assumption \eqref{of3} implies that $\chi>1$. We can write \eqref{os4} in the form \begin{eqnarray} \|u^++b\|_{\frac{psq\chi}{q-1}}&\leq &\left(\frac{c }{ps-p+1}\right)^{\frac{1}{ps}}s^{\frac{1}{s}}\|u^++b\|_{\frac{psq}{q-1}}\nonumber\\ &\leq &c ^{\frac{1}{s}}s^{\frac{1}{s}}\|u^++b\|_{\frac{psq}{q-1}},\ \ \mbox{provided that $s\geq 1$.} \end{eqnarray} In view of the proof in (\cite{GT}, p. 190), we take $s=\chi^m, m=0,1,2,\cdots$ in the above inequality. Iterating and taking the limit lead to \begin{equation} \|u^++b\|_{\infty}\leq c\|u^++b\|_{\frac{pq}{q-1}}, \end{equation} whence by the interpolation inequality (1) in Lemma \ref{linterp} we have \begin{equation} \|u^++b\|_{\infty}\leq c\|u^++b\|_{1}. \end{equation} This implies the desired result. \end{proof} Further regularity results for solutions to equations of p-laplace type can be found in \cite{AZ, T} and the references therein. 
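For the reader's convenience, we record the elementary bookkeeping behind the limit step in the Moser iteration used in the proof of Lemma \ref{l21}; this is a standard computation and is included here only as a sketch, with $q_m:=\frac{pq}{q-1}\chi^m$ denoting the exponents generated by the iteration. Taking $s=\chi^m$ in the last inequality of the proof gives
\[
\|u^++b\|_{q_{m+1}}\leq \left(c\chi^{m}\right)^{\chi^{-m}}\|u^++b\|_{q_{m}},
\]
and hence, after iterating,
\[
\|u^++b\|_{q_{m+1}}\leq c^{\sum_{i=0}^{m}\chi^{-i}}\,\chi^{\sum_{i=0}^{m}i\chi^{-i}}\,\|u^++b\|_{\frac{pq}{q-1}}.
\]
Since $\chi>1$, both series in the exponents converge, so the constant stays bounded independently of $m$; letting $m\rightarrow\infty$, and recalling that $q_m\rightarrow\infty$ while $\Omega$ has finite measure, we recover the bound $\|u^++b\|_{\infty}\leq c\|u^++b\|_{\frac{pq}{q-1}}$ stated in the proof.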
Our existence theorem is based upon the following fixed point theorem, which is often called the Leray-Schauder Theorem (\cite{GT}, p.280). \begin{lem} Let $B$ be a map from a Banach space $\mathcal{B}$ into itself. Assume: \begin{enumerate} \item[(H1)] $B$ is continuous; \item[(H2)] the images of bounded sets under $B$ are precompact; \item[(H3)] there exists a constant $c$ such that $$\|z\|_{\mathcal{B}}\leq c$$ for all $z\in\mathcal{B}$ and $\sigma\in[0,1]$ satisfying $z=\sigma B(z)$. \end{enumerate} Then $B$ has a fixed point. \end{lem} \begin{lem}\label{poin} Let $\Omega$ be a bounded domain in $\mathbb{R}^N$ with Lipschitz boundary and $1\leq p<N$. Then there is a positive number $c=c(N)$ such that \begin{equation} \|u-u_S\|_{p^*}\leq \frac{cd^{N+1-\frac{p}{N}}}{|S|^{\frac{1}{p}}}\|\nabla u\|_p \ \ \mbox{for each $u\in W^{1,p}(\Omega)$,} \end{equation} where $S$ is any measurable subset of $\Omega$ with $|S|>0$, $u_S=\frac{1}{|S|}\int_S udx$, and $d$ is the diameter of $\Omega$. \end{lem} This lemma can be inferred from Lemma 7.16 in \cite{GT}. Also see \cite{G,PS}. It is a version of the Poincar\'{e} inequality. \section{Proof of Theorem \ref{th1.1}} In this section we first design an approximation scheme for problem \eqref{sta1}-\eqref{sta2}. Then we obtain a weak solution by passing to the limit in our approximate problems. Following \cite{LX}, we introduce a new unknown function \begin{equation} \psi=-\Delta_pu. \end{equation} We then regularize this equation by adding the term $\tau |u|^{p-2}u,\ \tau>0$, to its right-hand side. This is due to the Neumann boundary condition in our problem. For the same reason, we add $\tau\psi$ to \eqref{sta1}. This leads to the study of the system \begin{eqnarray} -\Delta e^\psi +\tau\psi+a u &=&f\ \ \ \mbox{in $\Omega$},\label{ot1}\\ -\Delta_pu +\tau|u|^{p-2} u &=&\psi \ \ \ \mbox{in $\Omega$} \end{eqnarray} coupled with the boundary conditions \begin{equation} \nabla u\cdot\nu=\nabla e^\psi \cdot\nu=0\ \ \ \mbox{on $\partial\Omega$},\label{ot2} \end{equation} where we assume \begin{equation} f\in L^\infty(\Omega). \label{ot3} \end{equation} This is our approximating problem. Basically, we have transformed a fourth-order equation into a system of two second-order equations. A mathematical motivation behind this construction is given in \cite{LX}. \begin{thm}\label{p21} Let $\Omega$ be a bounded domain in $\mathbb{R}^N$ with $C^{2,\alpha}$ boundary for some $\alpha\in(0,1)$, and assume that $1<p<N$ and \eqref{ot3} hold. Then there is a weak solution $(\psi, u)$ to \eqref{ot1}-\eqref{ot2} with \begin{eqnarray} \psi&\in& W^{2,q}(\Omega)\ \ \textup{for each $q>1$},\label{reg1}\\ u&\in& C^{1,\lambda}(\overline{\Omega}),\ \ \lambda\in (0,1).\label{reg2} \end{eqnarray} \end{thm} \begin{proof} The existence assertion will be established via the Leray-Schauder Theorem. 
For this purpose, we define an operator $B$ from $L^\infty(\Omega)$ into itself as follows: for each $g\in L^\infty(\Omega)$ we say $B(g)=\psi$ if $\psi$ is the unique solution of the linear boundary value problem \begin{eqnarray} -\mbox{div}\left(e^g\nabla\psi\right)+\tau\psi &=&f-au\ \ \ \mbox{in $\Omega$},\label{om3}\\ \nabla\psi\cdot\nu&=&0\ \ \ \mbox{on $\partial\Omega$},\label{om4} \end{eqnarray} where $u$ solves the problem \begin{eqnarray} -\Delta_pu +\tau |u|^{p-2}u &=&g\ \ \ \mbox{in $\Omega$},\label{om5}\\ \nabla u\cdot\nu&=& 0\ \ \ \mbox{on $\partial\Omega$}.\label{om6} \end{eqnarray} Concerning the preceding boundary value problem, a theorem in (\cite{O}, p.124) asserts that the problem has a weak solution $u$ in the space $W^{1,p}(\Omega)$. Obviously, the uniqueness of such a solution is a consequence of Lemma \ref{plap}. In fact, we can further conclude from \cite{AZ,L,T} that $u$ satisfies \eqref{reg2}. Observe that since $g\in L^\infty(\Omega)$ the equation \eqref{om3} is uniformly elliptic. According to the classical regularity theory for linear elliptic equations, problem \eqref{om3}-\eqref{om4} has a unique solution $\psi$ in the space $W^{1,2}(\Omega)\cap C^{0,\beta}(\overline{\Omega})$ for some $\beta\in (0,1)$ (\cite{GT}, Chap. 8). Therefore, we can conclude that $B$ is well-defined, continuous, and maps bounded sets into precompact ones. It remains to show that there is a positive number $c$ such that \begin{equation} \|\psi\|_\infty\leq c\label{ot8} \end{equation} for all $\psi\in L^\infty(\Omega)$ and $\sigma\in [0,1]$ satisfying $$\psi=\sigma B(\psi).$$ This equation is equivalent to the boundary value problem \begin{eqnarray} -\Delta e^\psi +\tau\psi &=&\sigma(f-au)\ \ \ \mbox{in $\Omega$},\label{ot9}\\ -\Delta_pu +\tau|u|^{p-2} u &=&\psi \ \ \ \mbox{in $\Omega$},\label{ot10}\\ \nabla u\cdot\nu=\nabla e^\psi \cdot\nu&=& 0\ \ \ \mbox{on $\partial\Omega$}. \end{eqnarray} \begin{clm}We have \begin{eqnarray} \|\psi\|_{q}&\leq&\frac{1}{\tau}\|f-au\|_{q}\ \ \textup{for each $q>2$, and thus}\label{c21} \\ \|\psi\|_\infty&\leq&\frac{1}{\tau}\|f-au\|_\infty.\label{ot12} \end{eqnarray} Furthermore, \begin{equation}\label{c22} \|\psi\|_2\leq\frac{1}{\tau}\|f\|_2. \end{equation} \end{clm} \begin{proof} We just need to slightly modify the proof of Claim 2.1 in \cite{LX}. Let $q>2$ be given. Then the function $|\psi|^{q-2} \psi$ lies in $W^{1,2}(\Omega)$ and $\nabla\left(|\psi|^{q-2} \psi\right)=(q-1)|\psi|^{q-2}\nabla\psi$. Multiply through \eqref{ot9} by this function and integrate the resulting equation over $\Omega$ to obtain \begin{eqnarray*} (q-1)\int_\Omega e^\psi |\psi|^{q-2}|\nabla\psi|^2\, dx +\tau\int_\Omega|\psi|^{q}\, dx&=&\sigma\int_\Omega(f-au)|\psi|^{q-2} \psi \, dx\\ &\leq &\int_\Omega|f-au||\psi|^{q-1}\, dx\\ &\leq &\|f-au\|_{q}\|\psi\|_{q}^{q-1}. \end{eqnarray*} Dropping the first integral in the above inequality yields \eqref{c21}. Multiplying through \eqref{ot9} by $\psi$ and integrating, we obtain \begin{equation} \int_\Omega e^\psi |\nabla\psi|^2\, dx +\tau\int_\Omega\psi^2\, dx =-\sigma\int_\Omega au\psi \, dx +\sigma\int_\Omega f\psi \, dx.\label{ot11} \end{equation} Upon using $u$ as a test function in \eqref{ot10}, we can derive $$\int_\Omega u\psi \, dx =\int_\Omega|\nabla u|^p\, dx +\tau\int_\Omega |u|^p \, dx\geq 0.$$ Keeping this in mind, we deduce from \eqref{ot11} that $$\tau\int_\Omega\psi^2\, dx \leq \sigma\int_\Omega f\psi \, dx \leq \int_\Omega |f||\psi| \, dx \leq\|f\|_2\|\psi\|_2.$$ Then \eqref{c22} follows. 
\end{proof} To continue the proof of Theorem \ref{p21}, multiply through \eqref{ot9} by $e^\psi-1$ and integrate the resulting equation over $\Omega$ to obtain \begin{eqnarray}\label{ot122} \int_\Omega|\nabla e^\psi|^2dx+\tau\int_\Omega\psi(e^\psi-1)dx&=&-\sigma\int_\Omega au(e^\psi-1)dx+\sigma\int_\Omega f(e^\psi-1)dx\nonumber\\ &\leq&\left| \int_\Omega au(e^\psi-1)dx\right|+\|f\|_\infty\int_\Omega|e^\psi-1|dx. \end{eqnarray} In view of \eqref{otn9}, the first integral on the right-hand side in the above equation can be estimated as follows: \begin{eqnarray} \left|\int_\Omega u(e^\psi-1)dx\right|&\leq&\|u\|_2\|e^\psi-1\|_2\nonumber\\ &\leq&c\varepsilon\|\nabla e^\psi\|_2+c\|e^\psi-1\|_1, \ \ \varepsilon>0.\label{ot13} \end{eqnarray} Here we have taken \eqref{c22} into account. For each $M>0$ we have \begin{eqnarray} \int_\Omega\left|e^\psi-1\right|dx&=&\int_{|\psi|>M}\left|e^\psi-1\right|dx+\int_{|\psi|\leq M}\left|e^\psi-1\right|dx\nonumber\\ &\leq&\frac{1}{M}\int_\Omega\psi\left(e^\psi-1\right)dx+c(M).\label{ot14} \end{eqnarray} Plug \eqref{ot13} and then \eqref{ot14} into \eqref{ot122}, choose $\varepsilon$ suitably small and $M$ suitably large in the resulting inequality, and thereby derive \begin{equation} \int_\Omega|\nabla e^\psi|^2dx+\tau\int_\Omega\psi(e^\psi-1)dx\leq c. \end{equation} This combined with \eqref{ot14} implies that $\psi$ is bounded in $L^q(\Omega)$ for each $q\geq 1$. Thus we apply Lemma \ref{l21} to obtain that $u$ is bounded in $L^\infty(\Omega)$. Consequently, \eqref{ot8} follows from \eqref{ot12} and \eqref{ot3}. As we mentioned earlier, we can infer \eqref{reg2} from \cite{L, AZ,T}. This together with the classical Calder\'{o}n-Zygmund estimate implies \eqref{reg1}. The proof is complete. \end{proof} \begin{proof}[Proof of Theorem \ref{th1.1}] Without loss of generality, we may assume that \begin{equation} f\in L^\infty(\Omega)\cap W^{1,p}(\Omega). \end{equation} Otherwise, $f$ can be approximated in $W^{1,p}(\Omega)$ by a sequence from the above space. We shall show that we can take $\tau\rightarrow 0$ in \eqref{ot1}-\eqref{ot2}. For this purpose we need to derive estimates that are uniform in $\tau$. We write \begin{equation} u=u_\tau,\ \ \psi=\psi_\tau. \end{equation} Then problem \eqref{ot1}-\eqref{ot2} becomes \begin{eqnarray} -\Delta \rho_\tau+\tau\psi_\tau+a u_\tau &=&f\ \ \ \mbox{in $\Omega$},\label{ot1t}\\ e^{\psi_\tau}&=&\rho_\tau\ \ \ \mbox{in $\Omega$},\label{ot4t}\\ -\Delta_p u_\tau+\tau|u_\tau|^{p-2} u_\tau &=&\psi_\tau \ \ \ \mbox{in $\Omega$},\label{ot3t}\\ \nabla u_\tau\cdot\nu&=&\nabla \rho_\tau\cdot\nu=0\ \ \ \mbox{on $\partial\Omega$}.\label{ot2t} \end{eqnarray} We also view $\{u_\tau,\rho_\tau,\psi_\tau\}$ as a sequence in the subsequent proof. Take $\tau=\frac{1}{k}$, where $k$ is a positive integer, for example. The rest of the proof is divided into several claims. \end{proof} \begin{clm}We have \begin{eqnarray} \int_\Omega|\nabla\sqrt{\rho_\tau}|^2dx+\tau\int_\Omega\psi_\tau^2dx+\int_\Omega|\nabla u_\tau|^pdx+\tau\int_\Omega |u_\tau|^pdx&\leq &c,\label{rue}\\ \|u_\tau\|_{W^{1,p}(\Omega)}&\leq &c.\label{ue} \end{eqnarray} \end{clm} \begin{proof} Use $\psi_\tau=\ln\rho_\tau$ as a test function in \eqref{ot1t} to obtain \begin{equation}\label{nf1} 4\int_\Omega|\nabla\sqrt{\rho_\tau}|^2dx+\tau\int_\Omega\psi_\tau^2dx+a\int_\Omega u_\tau\psi_\tau dx=\int_\Omega f\psi_\tau dx. 
\end{equation} With the aid of \eqref{ot3t}, we evaluate the last two integrals in the above equation as follows: \begin{eqnarray} \int_\Omega u_\tau\psi_\tau dx&=&\int_\Omega|\nabla u_\tau|^pdx+\tau\int_\Omega |u_\tau|^pdx,\label{nf2}\\ \int_\Omega f\psi_\tau dx&=&\int_\Omega|\nabla u_\tau|^{p-2}\nabla u_\tau\cdot\nabla f\,dx+\tau\int_\Omega |u_\tau|^{p-2} u_\tau f dx\nonumber\\ &\leq &\|\nabla f\|_p\|\nabla u_\tau\|_p^{p-1}+\tau\|f\|_p\|u_\tau\|_p^{p-1}.\label{nf3} \end{eqnarray} Plug \eqref{nf2} and \eqref{nf3} into \eqref{nf1}, apply inequality (5) in Lemma \ref{elmen} in the resulting inequality, and thereby obtain \begin{eqnarray}\label{nf4} \lefteqn{\int_\Omega|\nabla\sqrt{\rho_\tau}|^2dx+\tau\int_\Omega\psi_\tau^2dx+\int_\Omega|\nabla u_\tau|^pdx+\tau\int_\Omega |u_\tau|^pdx}\nonumber\\&\leq &c\int_\Omega|\nabla f|^pdx+c\tau\int_\Omega| f|^pdx\nonumber\\ &\leq &c\int_\Omega|\nabla f|^pdx+c\int_\Omega| f|^pdx. \end{eqnarray} From here on we assume that $\tau\leq 1$. The above estimate gives \eqref{rue}. Integrate \eqref{ot1t} over $\Omega$ to yield \begin{equation} \left|a\int_\Omega u_\tau dx\right|=\left|\int_\Omega fdx-\tau\int_\Omega\psi_\tau dx\right|\leq c. \end{equation} We can then apply the Poincar\'{e} inequality to get \begin{eqnarray} \|u_\tau\|_{p^*}&\leq &\|u_\tau-\frac{1}{|\Omega|}\int_\Omega u_\tau dx\|_{p^*}+\frac{1}{|\Omega|^{1-\frac{1}{p}}}\left|\int_\Omega u_\tau dx\right|\nonumber\\ &\leq &c\|\nabla u_\tau\|_{p}+\frac{1}{|\Omega|^{1-\frac{1}{p}}}\left|\int_\Omega u_\tau dx\right|\leq c.\label{nf5} \end{eqnarray} Thus \eqref{ue} follows. The proof is complete. \end{proof} \begin{clm}\label{aec} There exists a subsequence of $\{\rho_\tau\}$, still denoted by $\{\rho_\tau\}$, such that \begin{equation} \rho_\tau\rightarrow \rho\ \ \mbox{a.e. on $\Omega$ as $\tau\rightarrow 0$.} \end{equation} \end{clm} \begin{proof}We use $\arctan\rho_\tau$ as a test function in \eqref{ot1t} to obtain \begin{eqnarray} \int_\Omega\frac{|\nabla\rho_\tau|^2}{1+\rho_\tau^2}dx &=&\int_\Omega\left(f-\tau\psi_\tau-au_\tau\right)\arctan\rho_\tau dx\nonumber\\ &\leq &\pi\int_\Omega\left|f-\tau\psi_\tau-au_\tau\right|dx\leq c. \end{eqnarray} Thus $\{\arctan\rho_\tau\}$ is bounded in $W^{1,2}(\Omega)$. We can extract a subsequence of $\{\arctan\rho_\tau\}$ which converges a.e. on $\Omega$. It follows that $\rho_\tau=\tan\left(\arctan\rho_\tau\right)$ also converges a.e. along the subsequence. This completes the proof. \end{proof} It should be noted that at this point we cannot rule out the possibility that $\{\arctan\rho_\tau\}$ goes to $\frac{\pi}{2}$ on a large set. Thus the limit $\rho$ may not be finite a.e. on $\Omega$. \begin{clm}\label{fini} If $\rho$ is finite on a set of positive measure, then there is a subsequence of $\{\rho_\tau\}$ which is bounded in $L^q(\Omega)$ for each $1\leq q<\frac{N}{N-2}$. \end{clm} \begin{proof} Our assumption implies that there is a positive number $s_0$ such that the set \begin{equation} \Omega_{s_0}=\{x\in\Omega: \rho(x)\leq s_0\} \end{equation} has positive measure. According to Claim \ref{aec} and Egoroff's theorem, for each $\varepsilon>0$ there is a closed set $K\subset \Omega_{s_0}$ such that $| \Omega_{s_0}\setminus K|< \varepsilon$ and $\rho_\tau\rightarrow\rho$ uniformly on $K$. We take $\varepsilon=\frac{1}{2}|\Omega_{s_0}|$. Then the corresponding $K$ has positive measure. 
We easily conclude from the uniform convergence that there is a positive number $s_1>s_0$ with the property \begin{equation} \rho_\tau\leq s_1\ \ \mbox{on $K$.} \end{equation} For each $s>s_1$ we use $\left(\frac{1}{s}-\frac{1}{\rho_\tau}\right)^+$ as a test function in \eqref{ot1t} to obtain \begin{equation} \int_\Omega\left|\nabla \ln^+\frac{\rho_\tau}{s}\right|^2dx=\int_\Omega(f-\tau\psi_\tau-au_\tau)\left(\frac{1}{s}-\frac{1}{\rho_\tau}\right)^+dx\leq \frac{c}{s}. \end{equation} Denote by $S_{\tau,s}$ the set where the function $\left(\frac{1}{s}-\frac{1}{\rho_\tau}\right)^+$ is $0$. It follows that \begin{equation} K\subset S_{\tau,s}\ \ \mbox{for all $s>s_1$ and all sufficiently small $\tau>0$}. \end{equation} Thus we may apply Lemma \ref{poin} to obtain \begin{eqnarray} (\ln 2)^2|\{x\in\Omega:\rho_\tau(x)>2s\}|^{\frac{N-2}{N}}&\leq&\left[\int_\Omega\left(\ln^+\frac{\rho_\tau}{s}\right)^{\frac{2N}{N-2}}dx\right]^{\frac{N-2}{N}}\nonumber\\ &\leq&\frac{c}{|K|}\int_\Omega\left|\nabla \ln^+\frac{\rho_\tau}{s}\right|^2dx\leq\frac{c}{s}. \end{eqnarray} This implies the desired result. \end{proof} \begin{clm}\label{posi} The set where $\rho$ is finite has positive measure. \end{clm} \begin{proof}We argue by contradiction. Suppose that the claim is false. Then we have \begin{equation} \rho =\infty\ \ \mbox{a.e. on $\Omega$.} \end{equation} For each $L>0$ we define \begin{equation}\label{posi1} \gamma_L(s)=\left\{\begin{array}{ll} L& \mbox{if $s>L$,}\\ s&\mbox{if $-L\leq s\leq L$,}\\ -L&\mbox{if $s<-L$.} \end{array}\right.\end{equation} Fix $L>1$. Multiply through \eqref{ot1t} by $\gamma_L\left((\rho_\tau-1)^+\right)$ and integrate to obtain \begin{equation} \int_\Omega\left|\nabla \gamma_L\left((\rho_\tau-1)^+\right)\right|^2dx=\int_\Omega(f-\tau\psi_\tau-au_\tau)\gamma_L\left((\rho_\tau-1)^+\right)dx\leq cL. \end{equation} Here we have used the fact that \begin{equation} \nabla \gamma_L\left((\rho_\tau-1)^+\right)=0\ \ \mbox{on the set where either $\rho_\tau\leq 1$ or $\rho_\tau>L+1$.} \end{equation} Now consider the sequence $\{\ln\rho_\tau\gamma_L\left((\rho_\tau-1)^+\right)\}$. It is easy to see that \begin{eqnarray} \ln\rho_\tau\gamma_L\left((\rho_\tau-1)^+\right)&\geq&0 \ \ \mbox{a.e. on $\Omega$,}\\ \lim_{\tau\rightarrow 0}\ln\rho_\tau\gamma_L\left((\rho_\tau-1)^+\right)&=&\infty\ \ \mbox{a.e. on $\Omega$.} \end{eqnarray} By \eqref{ot4t} and \eqref{ot3t}, we have \begin{equation}\label{ns11} -\Delta_p u_\tau+\tau|u_\tau|^{p-2} u_\tau =\ln\rho_\tau \ \ \ \mbox{in $\Omega$}. \end{equation} Use $\gamma_L\left((\rho_\tau-1)^+\right)$ as a test function in the above equation, thereby deriving \begin{eqnarray} \lefteqn{\int_\Omega\ln\rho_\tau\gamma_L\left((\rho_\tau-1)^+\right)dx}\nonumber\\ &=&\int_\Omega|\nabla u_\tau|^{p-2}\nabla u_\tau\cdot\nabla\gamma_L\left((\rho_\tau-1)^+\right)dx\nonumber\\ &&+\tau\int_\Omega|u_\tau|^{p-2} u_\tau \gamma_L\left((\rho_\tau-1)^+\right) dx\nonumber\\ &\leq &\|\nabla\gamma_L\left((\rho_\tau-1)^+\right)\|_p\|\nabla u_\tau\|_p^{p-1}+L\tau\int_\Omega|u_\tau|^{p-1}dx\leq c(L).\label{pl2} \end{eqnarray} The last step is due to the assumption $p\leq 2$. It follows from Fatou's lemma that the left-hand side of the above inequality goes to $\infty$ as $\tau\rightarrow 0$. This gives us a contradiction. The proof is complete. \end{proof} We would like to make a remark about the condition $p\leq 2$. Note from \eqref{c21} that \begin{equation}\label{c211} \tau\|\psi_\tau\|_{p^*}\leq\|f-au_\tau\|_{p^*}. 
\end{equation} Thus this condition could be avoided here if we had the estimate \begin{equation}\label{wpe1} \|\nabla\rho_\tau\|_p\leq c\|f-\tau\psi_\tau-au_\tau\|_{\frac{Np}{N+p}}. \end{equation} The above inequality is valid if $\rho_\tau$ satisfies the Dirichlet boundary condition $\rho_\tau|_{\partial\Omega }=0$. In our case, the right-hand side of \eqref{wpe1} seems to also depend on the $L^1$-norm of $\rho_\tau$. \begin{clm}\label{lnr} The sequence $\{\ln\rho_\tau\}$ is bounded in $L^1(\Omega)$. \end{clm} \begin{proof} Use the number $1$ as a test function in \eqref{ns11} to get \begin{equation}\label{nf6} \left|\int_\Omega \ln\rho_\tau dx\right|=\tau\left|\int_\Omega |u_\tau|^{p-2} u_\tau dx \right|\leq c\tau^{\frac{1}{p}}. \end{equation} The last step is due to \eqref{nf4}. By virtue of Claims \ref{fini} and \ref{posi}, \begin{equation}\label{rb1} \mbox{the sequence $\{\rho_\tau\}$ is bounded in $L^q(\Omega) $ for each $1\leq q<\frac{N}{N-2}$. } \end{equation} We will use this for $q=1$. Also, keeping \eqref{nf6} in mind, we estimate \begin{eqnarray} \int_\Omega|\ln\rho_\tau|dx&=&\int_\Omega\ln^+\rho_\tau dx+\int_\Omega\ln^-\rho_\tau dx\nonumber\\ &=&2\int_\Omega\ln^+\rho_\tau dx-\int_\Omega\ln\rho_\tau dx\nonumber\\ &\leq& 2\int_\Omega\rho_\tau dx+c\tau^{\frac{1}{p}}\leq c. \end{eqnarray}\end{proof} \begin{clm}\label{rtc} The sequence $\{\rho_\tau\}$ is precompact in $W^{1,2}(\Omega)$. \end{clm} \begin{proof} Note that our assumptions on $N, p$ imply \begin{equation}\label{nss2} \frac{2N}{N+2}<\frac{Np}{N-p}=p^*. \end{equation} Use $\rho_\tau-1$ as a test function in \eqref{ot1t} to obtain \begin{eqnarray} \int_\Omega|\nabla\rho_\tau|^2dx&\leq &\int_\Omega(f-au_\tau)(\rho_\tau-1)dx\nonumber\\ &\leq&\|f-au_\tau\|_{\frac{2N}{N+2}}\|\rho_\tau-1\|_{\frac{2N}{N-2}}\nonumber\\ &\leq&c\|\nabla\rho_\tau\|_2+c\|\rho_\tau-1\|_2.\label{nss1} \end{eqnarray} Here we have used the fact that $\psi_\tau(\rho_\tau-1)\geq 0$. Use \eqref{rb1} and the interpolation inequality \eqref{otn9} in \eqref{nss1} to obtain \begin{equation} \int_\Omega|\nabla\rho_\tau|^2dx\leq c. \end{equation} This combined with \eqref{rb1} implies that \begin{equation}\label{rtb11} \mbox{$\{\rho_\tau\}$ is bounded in $W^{1,2}(\Omega)$.} \end{equation} Thus $\{\rho_\tau\}$ is precompact in $L^q(\Omega)$ for each $q\in [1, 2^*)$. Set $g_\tau= f-au_\tau-\tau\psi_\tau$. Then by \eqref{c211}, the sequence $\{g_\tau\}$ is bounded in $L^{p^*}(\Omega)$. Let $\tau_1,\tau_2\in (0,1)$. We calculate from \eqref{ot1t} that \begin{eqnarray} \int_\Omega|\nabla\rho_{\tau_1}-\nabla\rho_{\tau_2}|^2dx&\leq& \int_\Omega(g_{\tau_1}-g_{\tau_2})(\rho_{\tau_1}-\rho_{\tau_2})dx\nonumber\\ &\leq& c \|g_{\tau_1}-g_{\tau_2}\|_{p^*} \|\rho_{\tau_1}-\rho_{\tau_2}\|_{\frac{Np}{Np-N+p}}\nonumber\\ &\leq& c\|\rho_{\tau_1}-\rho_{\tau_2}\|_{\frac{Np}{Np-N+p}}. \end{eqnarray} In view of our assumptions on $N, p$, we have that \begin{equation} \frac{Np}{Np-N+p}<2^*. \end{equation} The claim follows from the precompactness of $\{\rho_\tau\}$ in $L^q(\Omega)$ for each $q\in [1, 2^*)$. \end{proof} \begin{clm}\label{gup} At least a subsequence of $\{\nabla u_\tau\}$ converges a.e. on $\Omega$. \end{clm} \begin{proof} We will show that $\{u_\tau\}$ is precompact in $W^{1,q}(\Omega)$ for each $q<p$. The idea behind the proof has appeared elsewhere. See, for example, the proof of Lemma 2.2 in \cite{X4}. 
By \eqref{ue} and Egoroff's theorem, for each $\delta>0$ there is a closed set $E\subset \Omega$ with the properties \begin{eqnarray} |\Omega\setminus E|&\leq&\delta,\\ u_\tau&\rightarrow& u\ \ \ \mbox{uniformly on $E$.} \end{eqnarray} Consequently, we can find a positive number $K$ so that \begin{equation}\label{utb1} |u_\tau|\leq K \ \ \ \mbox{on $E$.} \end{equation} For any $\varepsilon>0$ we have \begin{equation} |u_{\tau_1}-u_{\tau_2}|<\varepsilon\ \ \mbox{on $E$ for sufficiently small $\tau_1,\tau_2$.} \end{equation} We can derive from \eqref{ot3t} and Lemma \ref{plap} that \begin{eqnarray} \lefteqn{ \int_{E}\left(|\nabla u_{\tau_1}|^{p-2}\nabla u_{\tau_1}-|\nabla u_{\tau_2}|^{p-2}\nabla u_{\tau_2}\right)\cdot\nabla(u_{\tau_1}-u_{\tau_2})dx}\nonumber\\ &\leq& \int_\Omega\left(|\nabla u_{\tau_1}|^{p-2}\nabla u_{\tau_1}-|\nabla u_{\tau_2}|^{p-2}\nabla u_{\tau_2}\right)\cdot\nabla\gamma_\varepsilon(u_{\tau_1}-u_{\tau_2})dx\nonumber\\ &=&\int_\Omega\left(\ln\rho_{\tau_1}-\tau_1|u_{\tau_1}|^{p-2}u_{\tau_1}-\ln\rho_{\tau_2}+\tau_2|u_{\tau_2}|^{p-2}u_{\tau_2}\right)\gamma_\varepsilon(u_{\tau_1}-u_{\tau_2})dx\nonumber\\ &\leq & c\varepsilon, \end{eqnarray} where $\gamma_\varepsilon$ is obtained by replacing $L$ with $\varepsilon$ in \eqref{posi1}. Apply (ii) in Lemma \ref{plap} and \eqref{utb1} to deduce \begin{equation} \int_E|\nabla(u_{\tau_1}-u_{\tau_2})|^2dx\leq c\varepsilon. \end{equation} Thus $\{\nabla u_\tau\}$ is precompact in $\left(L^2(E)\right)^N$. Let $q<p$ be given. We estimate \begin{eqnarray} \int_\Omega|\nabla(u_\tau-u)|^q dx&=&\int_E|\nabla(u_\tau-u)|^q dx+\int_{\Omega\setminus E}|\nabla(u_\tau-u)|^q dx\nonumber\\ &\leq &c|\Omega\setminus E|^{1-\frac{q}{p}}+\int_E|\nabla(u_\tau-u)|^q dx\nonumber\\ &\leq &c\delta^{1-\frac{q}{p}}+\int_E|\nabla(u_\tau-u)|^q dx. \end{eqnarray} Therefore, \begin{equation} \limsup_{\tau\rightarrow 0}\int_\Omega|\nabla(u_\tau-u)|^q dx\leq c\delta^{1-\frac{q}{p}}\ \mbox{at least along a subsequence.} \end{equation} Since $\delta$ is arbitrary, this implies the desired result. \end{proof} In view of Lemma \ref{linterp}, \eqref{rb1}, and the classical Calder\'{o}n-Zygmund estimate, $\{\rho_\tau\}$ is bounded in $W^{2,q}(\Omega)$, where $q=\min\{2,\frac{Np}{N-p}\}$. Passing to subsequences if necessary, we may assume \begin{eqnarray} u_\tau&\rightharpoonup & \mbox{$u$ weakly in $W^{1,p}(\Omega)$},\label{otn15}\\ \rho_\tau&\rightharpoonup & \mbox{$\rho$ weakly in $W^{2,q}(\Omega)$ and a.e. on $\Omega$}.\label{otn13} \end{eqnarray} By virtue of Claim \ref{gup}, \begin{equation} |\nabla u_\tau|^{p-2}\nabla u_\tau\rightharpoonup |\nabla u|^{p-2}\nabla u\ \ \mbox{weakly in $\left(L^{\frac{p}{p-1}}(\Omega)\right)^N$.} \end{equation} With the aid of Fatou's Lemma, we deduce from Claim \ref{lnr} that $$\int_{\Omega}|\ln \rho| \, dx \leq\liminf_{\tau\rightarrow 0}\int_{\Omega}|\ln \rho_\tau| \, dx\leq c.$$ Therefore, the set $$A_0=\{x\in\Omega:\rho(x)=0\}$$ has Lebesgue measure $0$. This combined with (\ref{otn13}) implies that \begin{equation} \ln \rho_\tau\rightarrow \ln\rho \ \ \ \mbox{a.e. on $\Omega$}. \end{equation} Obviously, we have from \eqref{ue} that \begin{equation} \tau|u_\tau|^{p-2}u_\tau\rightarrow \mbox{$0$ strongly in $L^{\frac{p}{p-1}}(\Omega)$, and thus a.e. on $\Omega$ } \end{equation} (passing to a subsequence if need be). Recall (\ref{ns11}) to obtain \begin{equation} -\Delta_p u_\tau\rightarrow \ln\rho \ \ \ \mbox{a.e. on $\Omega$}. 
\end{equation} On the other hand, we conclude from Claim \ref{lnr} and \eqref{ue} that the sequence $\{-\Delta_p u_\tau\}$ is bounded in both $L^1(\Omega)$ and $\left(W^{1,p}(\Omega)\right)^*$. Hence we have \begin{equation} -\Delta_p u_\tau\rightharpoonup -\Delta_pu\equiv\mu \ \ \ \mbox{weakly in both $\mathcal{M} (\overline{\Omega})$ and $\left(W^{1,p}(\Omega)\right)^*$}.\label{ow1} \end{equation} The key issue is: do we have $$-\Delta_pu=\mu=\ln\rho?$$ The following claim addresses this issue. \begin{clm}The restriction of $\mu$ to the set $\overline{\Omega}\setminus A_0$ is a function. This function is exactly $\ln\rho$. That is, the Lebesgue decomposition of $\mu$ with respect to the Lebesgue measure is $\ln\rho+\nu_s$, where $\nu_s$ is a measure supported in $A_0$, and we have \begin{equation} \rho=e^\mu \ \ \mbox{on the set $\overline{\Omega}\setminus A_0$.} \end{equation} That is, $\ln\rho$ is the function $g_a$ in the definition of a weak solution. \end{clm} \begin{proof} The proof is almost identical to the proof of Proposition 3.7 in \cite{LX}. For the reader's convenience, we shall reproduce it here. Keep in mind that since $\mu\in\left(W^{1,p}(\Omega)\right)^*$ each function in $W^{1,p}(\Omega)$ is $\mu$-measurable, and thus it is well-defined except on a set of $\mu$ measure $0$. Furthermore, $ \langle \mu, v \rangle \, =\int_{\Omega} v \, d\mu$ for each $v\in W^{1,p}(\Omega) $. For $\varepsilon>0$ let $\theta_\varepsilon $ be a smooth function on $\mathbb{R}$ having the properties $$\theta_\varepsilon (s)=\left\{\begin{array}{ll} 1 &\mbox{if $s\geq 2\varepsilon$,}\\ 0&\mbox{if $s\leq \varepsilon$ \hspace{.5in} and} \end{array}\right.$$ $$ 0\leq\theta_\varepsilon \leq 1\ \ \mbox{on $\mathbb{R}$.}$$ Then it is easy to verify from Claim \ref{rtc} that we still have \begin{equation} \theta_\varepsilon (\rho_\tau)\rightarrow\theta_\varepsilon (\rho)\ \ \mbox{strongly in $W^{1,p}(\Omega)$ for each $p\leq 2$.}\label{otn14} \end{equation} Pick a function $\xi$ from $C^\infty(\overline{\Omega})$. Multiply through (\ref{ns11}) by $\xi\, \theta_\varepsilon (\rho_\tau)$ and integrate the resulting equation over $\Omega$ to obtain \begin{equation} -\int_{\Omega}\Delta_p u_\tau\,\theta_\varepsilon (\rho_\tau)\, \xi \, dx+\tau\int_{\Omega}|u_\tau|^{p-2} u_\tau\, \theta_\varepsilon (\rho_\tau)\, \xi \, dx =\int_{\Omega}\ln\rho_\tau \, \theta_\varepsilon (\rho_\tau)\, \xi \, dx.\label{otn20} \end{equation} For each fixed $\varepsilon$ we can infer from \eqref{rb1} that the sequence $\{\ln\rho_\tau \, \theta_\varepsilon (\rho_\tau)\}$ is bounded in $L^q(\Omega)$ for any $q>1$. This, along with (\ref{otn13}), gives $$\int_{\Omega}\ln\rho_\tau \, \theta_\varepsilon (\rho_\tau)\, \xi \, dx\rightarrow\int_{\Omega}\theta_\varepsilon (\rho)\ln\rho\, \xi \, dx.$$ Observe from (\ref{otn14}) and (\ref{ow1}) that \begin{equation} -\int_{\Omega}\Delta_p u_\tau \, \theta_\varepsilon (\rho_\tau)\, \xi\, dx =\langle -\Delta_p u_\tau,\theta_\varepsilon (\rho_\tau)\, \xi \rangle\, \rightarrow\int_{\Omega}\theta_\varepsilon (\rho)\, \xi \, d\mu. \end{equation} Taking $\tau\rightarrow 0$ in (\ref{otn20}) yields \begin{equation} \int_{\Omega}\theta_\varepsilon (\rho)\,\xi \,d\mu=\int_{\Omega}\theta_\varepsilon (\rho)\ln\rho \, \xi \, dx.\label{otn21} \end{equation} Obviously, $\rho\in W^{1,p}(\Omega)$, and thus it is well-defined except on a set of $\mu$ measure $0$. 
We can easily conclude from the definition of $\theta_\varepsilon $ that, as $\varepsilon\rightarrow 0$, $\{\theta_\varepsilon (\rho)\}$ converges everywhere on the set where $\rho$ is defined. With the aid of the Dominated Convergence Theorem, we can take $\varepsilon\rightarrow 0$ in (\ref{otn21}) to obtain $$\int_{\Omega\setminus A_0}\, \xi \, d\mu=\int_{\Omega\setminus A_0}\ln\rho \, \xi \, dx.$$ This is true for every $\xi\in C^\infty(\overline{\Omega})$, which means \begin{equation} \mu=\ln\rho\ \ \ \mbox{on $\Omega\setminus A_0$.} \end{equation} This proves the claim. \end{proof} With this claim, the proof of Theorem \ref{th1.1} is now complete. \section{Remarks about the time-dependent problem} In this section, we first construct an approximation scheme for the time-dependent problem \eqref{p1}-\eqref{p3}. This is based upon Theorem \ref{p21}. Then we show that estimate \eqref{nm2} is preserved for the approximate problems. Let $T>0$ be given. For each $j\in\{1,2,\cdots\}$ we divide the time interval $[0,T]$ into $j$ equal subintervals. Set $$\delta=\frac{T}{j}.$$ We discretize \eqref{p1}-\eqref{p3} as follows. For $k=1,\cdots, j$, we solve recursively the system \begin{eqnarray} \frac{u_k-u_{k-1}}{\delta}-\Delta e^{\psi_k }+\delta\psi_k &=&0\ \ \ \mbox{in $\Omega$},\label{s31}\\ -\Delta_pu_k+\delta|u_k |^{p-2}u_k &=&\psi_k \ \ \ \mbox{in $\Omega$},\label{s32}\\ \nabla e^{\psi_k }\cdot\nu=\nabla u_k \cdot\nu&=&0\ \ \ \mbox{on $\partial\Omega$}.\label{s33} \end{eqnarray} Introduce the functions \begin{eqnarray} \tilde{u}_j (x,t)&=&\frac{t-t_{k-1}}{\delta}u_k (x)+\left(1-\frac{t-t_{k-1}}{\delta}\right)u_{k-1} (x), \ x\in\Omega, \ t\in(t_{k-1},t_k],\\ \bar{u}_j (x,t)&=&u_k (x), \ \ \ x\in\Omega, \ \ t\in(t_{k-1},t_k],\\ \bar{\psi}_j (x,t)&=&\psi_k (x), \ \ \ x\in\Omega, \ \ t\in(t_{k-1},t_k], \end{eqnarray} where $t_k=k\delta$. We can rewrite \eqref{s31}-\eqref{s33} as \begin{eqnarray} \frac{\partial\tilde{u}_j }{\partial t}-\Delta e^{\bar{\psi}_j}+\delta\bar{\psi}_j &=&0\ \ \ \mbox{in $\Omega_T$},\label{omm1}\\ -\Delta_p\bar{u}_j+\delta|\bar{u}_j |^{p-2}\bar{u}_j &=&\bar{\psi}_j \ \ \ \mbox{in $\Omega_T$}. \end{eqnarray} We proceed to derive a priori estimates for the sequence of approximate solutions $\{\tilde{u}_j ,\bar{u}_j ,\bar{\psi}_j \}$. \begin{prop} There holds \begin{eqnarray} \lefteqn{\frac{1}{p}\max_{0\leq t\leq T}\int_\Omega|\nabla\bar{u}_j |^p dx +4\int_{\Omega_T}|\nabla e^{\frac{1}{2}\bar{\psi}_j }|^2dxdt}\nonumber\\ &&+\frac{\delta}{p}\max_{0\leq t\leq T}\int_\Omega|\bar{u}_j |^pdx +\delta\int_{\Omega_T}\bar{\psi}_j ^2 dxdt\nonumber\\ &\leq &\frac{1}{p}\int_\Omega|\nabla u_0|^p dx+ \frac{\delta}{p}\int_\Omega|u_0|^pdx. \end{eqnarray} \end{prop} Obviously, this proposition is the discretized version of \eqref{nm2}. \begin{proof} Multiply through \eqref{s31} by $\psi_k$ and integrate to obtain \begin{equation} \int_\Omega\frac{ u_k- u_{k-1}}{\delta}\psi_k \, dx +\int_\Omega\nabla\left(e^{\psi_k }\right)\cdot\nabla\psi_k \, dx +\delta\int_\Omega\psi_k ^2 \, dx=0.\label{ota10} \end{equation} The second integral in the preceding equation is computed as follows: \begin{eqnarray} \int_\Omega\nabla\left(e^{\psi_k }\right)\cdot\nabla\psi_k \, dx &=& \int_\Omega e^{\psi_k } \, |\nabla\psi_k |^2 \, dx\nonumber\\ &=&4\int_\Omega|\nabla e^{\frac{1}{2}\psi_k} |^2dx. 
\label{ota8} \end{eqnarray} Using $u_k -u_{k-1} $ as a test function in \eqref{s32} yields \begin{eqnarray} \int_\Omega(u_k- u_{k-1})\psi_k \, dx&=&\int_\Omega|\nabla u_k |^{p-2}\nabla u_k \cdot(\nabla u_k -\nabla u_{k-1} )dx\nonumber\\ &&+\delta\int_\Omega |u_k |^{p-2}u_k (u_k -u_{k-1} )dx\nonumber\\ &\geq&\frac{1}{p}\int_\Omega\left(|\nabla u_k |^p-|\nabla u_{k-1} |^p\right)dx+\frac{\delta}{p}\int_\Omega\left(|u_k |^p-|u_{k-1} |^p\right)dx.\label{ota9} \end{eqnarray} The last step is due to (4) in Lemma \ref{elmen}. Substituting \eqref{ota8} and \eqref{ota9} into \eqref{ota10} yields \begin{eqnarray} \lefteqn{\frac{1}{p\delta}\int_\Omega\left(|\nabla u_k |^p-|\nabla u_{k-1} |^p\right)dx +4\int_\Omega|\nabla e^{\frac{1}{2}\psi_k }|^2dx}\nonumber\\ &&+\frac{1}{p}\int_\Omega\left(|u_k |^p-|u_{k-1} |^p\right)dx +\delta\int_\Omega\psi_k ^2 dx\leq 0. \end{eqnarray} Then the proposition follows from multiplying through the above inequality by $\delta$ and summing the resulting inequality over $k$. \end{proof} Obviously, this proposition is not enough to justify passing to the limit in \eqref{omm1}. It remains an open problem to find the additional estimates needed to accomplish this. \noindent{\bf Acknowledgment.} The author is grateful to Prof. Jian-Guo Liu for originally bringing this problem to his attention. \end{document}
\begin{document} \title[Electrowetting]{A Diffuse Interface Model for Electrowetting with Moving Contact Lines} \author[R.H.~Nochetto]{Ricardo H.~Nochetto$^1$} \address{$^1$Department of Mathematics and Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742, USA.} \email{[email protected]} \author[A.J.~Salgado]{Abner J.~Salgado$^2$} \address{$^2$Department of Mathematics, University of Maryland, College Park, MD 20742, USA.} \email{[email protected]} \author[S.W.~Walker]{Shawn W.~Walker$^3$} \address{$^3$Department of Mathematics and Center for Computation and Technology, Louisiana State University, Baton Rouge, LA 70803, USA.} \email{[email protected]} \thanks{ This material is based on work supported by NSF grants CBET-0754983 and DMS-0807811. AJS is also supported by an AMS-Simons Travel Grant. } \keywords{Electrowetting; Navier Stokes; Cahn Hilliard; Multiphase Flow; Contact Line.} \subjclass[2000]{35M30, 35Q30, 76D27, 76T10, 76D45. } \date{Submitted to M3AS on \today} \begin{abstract} We introduce a diffuse interface model for the phenomenon of electrowetting on dielectric and present an analysis of the arising system of equations. Moreover, we study discretization techniques for the problem. The model takes into account different material parameters on each phase and incorporates the most important physical processes, such as incompressibility, electrostatics and dynamic contact lines, which are necessary to properly reflect the relevant phenomena. The arising nonlinear system couples the variable density incompressible Navier-Stokes equations for velocity and pressure with a Cahn-Hilliard type equation for the phase variable and chemical potential, a convection diffusion equation for the electric charges and a Poisson equation for the electric potential. Numerical experiments are presented, which illustrate the wide range of effects the model is able to capture, such as splitting and coalescence of droplets. \end{abstract} \maketitle \section{Introduction} \label{sec:Intro} The term electrowetting on dielectric refers to the local modification of the surface tension between two immiscible fluids via electric actuation. This allows for a change of shape and wetting behavior of the two-fluid system and, thus, for its manipulation and control. This phenomenon was originally discovered by Lippmann \cite{Lippmann1875} more than a century ago (see also \cite{PhysRevB.76.035437, MugelePhysics, Berge:Comptes_Rendus, Shapiro:JAP}). However, only recently has electrowetting found a wide spectrum of applications, especially in the realm of micro-fluidics \cite{Cho_Moon:ME_Congress_2001, Cho:JMEMS, Gong:MEMSConf}. One can mention, for example, reprogrammable lab-on-chip systems \cite{Lee:Sens_Act_2002, SaekiKim_PMSE2001}, auto-focus cell phone lenses \cite{BergePeseux_EPJ2000}, colored oil pixels and video speed smart paper \cite{HayesFeenstra_Nature2003, Roques-CarmesFeenstra_JAP2004, Roques-CarmesHayes_JAP2004}. In \cite{NatureInverseEwod}, the reverse electrowetting process has been proposed as an approach to energy harvesting. From the examples presented above, it becomes clear that a better understanding of this phenomenon is very important for applications, and that it is necessary to obtain reliable computational tools for the simulation and control of these effects. 
The computational models must be complete enough to reproduce the most important physical effects, yet sufficiently simple that it is possible to extract from them meaningful information in a reasonable amount of computing time. Several works have been concerned with the modeling of electrowetting. The approaches include experimental relations and scaling laws \cite{ewodExperimentJapanese,springerlink:10.1007/s10404-008-0360-y}, empirical models \cite{CambridgeJournals:1380872}, studies concerning the dependence of the contact angle (\cite{MR2551398,MR2483669}) or the shape of the droplet (\cite{MR2487069,MR2683577}) on the applied voltage, lattice Boltzmann methods \cite{MR2745030,MR2557502} and others. Of relevance to our present discussion are the works \cite{walker:102103,MR2595379} and \cite{MR2511642,fontelos:527}. To the best of our knowledge, \cite{walker:102103,MR2595379} are the first papers where contact line pinning was included in an electrowetting model. On the other hand, the models of \cite{MR2511642,fontelos:527} are the only ones that are intrinsically three dimensional and do not assume any special geometric configuration. They have the limitation, however, that they assume the density of the two fluids to be constant and they apply a no-slip boundary condition to the fluid-solid interface, thus limiting the movement of the droplet. The purpose of this work is to propose and analyze an electrowetting model that is intrinsically three-dimensional; that takes into account that all the material parameters are different in each one of the fluids; and that is derived (as far as possible) from physical principles. To do so, we extend the diffuse interface model of \cite{MR2511642}. The main additions are the fact that we allow the fluids to have different densities -- thus leading to a variable density Cahn Hilliard Navier Stokes system -- and that we treat the contact line movement in a thermodynamically consistent way, namely using the so-called generalized Navier boundary condition (see \cite{MR2261865,QianCiCP}). In addition, we propose a (phenomenological) approach to contact line pinning and study stability and convergence of discretization techniques. In this respect, our work also differs from \cite{MR2511642,fontelos:527}, since our approach deals with a practical fully discrete scheme, for which we derive a priori estimates and convergence results. Through private communication we have become aware of the following recent contributions: discretization schemes for the model proposed in \cite{MR2511642} are studied in \cite{klingbeilpaja}; the models of \cite{MR2511642,fontelos:527} have been extended, using the techniques of \cite{AbelsGarckeGrun}, in \cite{Grunmodelpaja,grunklingbeilpaja}, where discretization issues are also discussed. This work is organized as follows. In \S\ref{sub:Notation} we introduce the notation and some preliminary assumptions necessary for our discussion. Section~\ref{sec:Model} describes the model that we shall be concerned with and its physical derivation. A formal energy estimate and a formal weak formulation of our problem are shown in section~\ref{sec:Energy}. The energy estimate shown in this section serves as a basis for the precise definition of our notion of solution and the proof of its existence. The details of this are accounted for in section~\ref{sec:Discrete}. 
In section~\ref{sec:NumExp} we discuss discretization techniques for our problem and present some numerical experiments aimed at showing the capabilities of our model: droplet splitting and coalescence as well as contact line movement. Finally, in section~\ref{sec:semidiscrete}, we briefly discuss convergence of the discrete solutions to solutions of a semi-discrete problem. \subsection{Notation and Preliminaries} \label{sub:Notation} \begin{figure} \caption{The basic configuration of an electrowetting on dielectric device \cite{Cho_Moon:ME_Congress_2001, Cho:JMEMS}. The solid black region depicts the dielectric plates and the white region denotes a droplet of one fluid (say water), which is surrounded by another (air). We denote by $\Omega$ the fluid domain, by $\Gamma$ its boundary, by $\Omega^\star$ the region occupied by the fluids and the plates and by $\partial^\star\Omega^\star := \partial\Omega^\star \setminus \Gamma$.} \label{fig:conf} \end{figure} Figure~\ref{fig:conf} shows the basic configuration for the electrowetting on dielectric problem. We use the symbol $\Omega$ to denote the domain occupied by the fluid and $\Omega^\star$ for the fluid and dielectric plates; thus $\Omega \subset \Omega^\star$. We assume that $\Omega$ and $\Omega^\star$ are convex, bounded, connected domains in $\Real^d$, for $d=2$ or $3$, with $\calC^{0,1}$ boundaries. The boundary of $\Omega$ is denoted by $\Gamma$, we set $\partial^\star \Omega^\star = \partial\Omega^\star \setminus \Gamma$, and $\bn$ stands for the outer unit normal to $\Gamma$. We denote by $[0,T]$ with $0<T<\infty$ the time interval of interest. For any vector valued function $\bw :\Omega \rightarrow \Real^d$ that is smooth enough so as to have a trace on $\Gamma$, we define the tangential component of $\bw$ as \begin{equation} \bw_{\btau}|_\Gamma := \bw|_\Gamma - (\bw|_\Gamma \SCAL \bn ) \bn, \label{eq:defttrace} \end{equation} and, for any scalar function $f$, $\partial_\btau f := (\GRAD f)_\btau$. We will use standard notation for spaces of Lebesgue integrable functions $L^p(\Omega)$, $1\leq p \leq \infty$, and Sobolev spaces $W^m_p(\Omega)$, $1\leq p \leq \infty$, $m \in \polN_0$ \cite{02208228}. Vector valued functions and spaces of vector valued functions will be denoted by boldface characters. For $S\subset \Real^d$, by $\langle \cdot, \cdot\rangle_S$ we denote, interchangeably, the $L^2(S)$- or $\bL^2(S)$-inner product. If no subscript is given, we assume that the domain is $\Omega$. If $S \subset \Real^{d-1}$, then the inner product is denoted by $[\cdot,\cdot]_S$ and, if no subscript is given, the domain is understood to be $\Gamma$. We define the following spaces: \begin{equation} \Hunstar := \left\{ v \in H^1(\Omega^\star): v|_{\partial^\star\Omega^\star}=0 \right\}, \label{eq:defHunstar} \end{equation} normed by \[ \| v \|_{H^1_\star} := \| \GRAD v \|_{\bL^2(\Omega^\star)}, \] and \begin{equation} \bV := \left\{ \bv \in \Hund: \bv\SCAL\bn|_\Gamma =0 \right\}, \label{eq:defbV} \end{equation} which we endow with the norm \[ \| \bv \|_{\bV}^2 := \| \GRAD \bv \|_{\bL^2}^2 + \| \bv_\btau \|_{\bL^2(\Gamma)}^2. \] Endowed with these norms, both spaces are clearly Hilbert spaces. Since our problem is time dependent, we also introduce the following notation. Let $E$ be a normed space with norm $\|\cdot\|_E$. The space of functions $\varphi : [0,T] \rightarrow E$ such that the map $(0,T)\ni t \mapsto \| \varphi(t)\|_E \in \Real$ is $L^p$-integrable is denoted by $L^p(0,T,E)$ or $L^p(E)$. 
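Let us note in passing why the quantities just introduced are indeed norms; this is a standard Poincar\'e--Friedrichs argument, recorded here only as a sketch. Since every $v \in \Hunstar$ vanishes on $\partial^\star\Omega^\star$, which has positive surface measure, the Friedrichs inequality
\[
  \| v \|_{L^2(\Omega^\star)} \leq C \| \GRAD v \|_{\bL^2(\Omega^\star)}
\]
holds for all $v \in \Hunstar$, so that $\|\cdot\|_{H^1_\star}$ is equivalent to the full $H^1(\Omega^\star)$-norm on $\Hunstar$. Similarly, for $\bv \in \bV$ the condition $\bv\SCAL\bn|_\Gamma = 0$ implies $\| \bv \|_{\bL^2(\Gamma)} = \| \bv_\btau \|_{\bL^2(\Gamma)}$, and a generalized Poincar\'e inequality (obtained, for instance, by a compactness argument) gives
\[
  \| \bv \|_{\bL^2} \leq C \left( \| \GRAD \bv \|_{\bL^2} + \| \bv \|_{\bL^2(\Gamma)} \right),
\]
so that $\|\cdot\|_{\bV}$ is equivalent to the full $\Hund$-norm on $\bV$.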
To discuss the time discretization of our problem, we introduce a time-step $\dt>0$ (for simplicity assumed constant) and let $t_n = n \dt$ for $0 \leq n \leq N:=\lceil T/\dt \rceil$. For any time-dependent function $\varphi$, we denote $\varphi^n := \varphi(t_n)$, and the sequence of values $\{ \varphi^n \}_{n=0}^N$ is denoted by $\varphi_\dt$. For any sequence $\varphi_\dt$ we define the time-increment operator $\frakd$ by \begin{equation} \frakd \varphi^n := \varphi^n - \varphi^{n-1}, \label{eq:frakd} \end{equation} and the time average operator $\overline{(\cdot)}$ by \begin{equation} \overline{\varphi^n} := \frac12\left( \varphi^n + \varphi^{n-1} \right). \label{eq:defstar} \end{equation} On sequences $\varphi_\dt \subset E$ we define the norms \[ \| \varphi_\dt \|^2_{\ell^2(E)} := \dt \sum_{n=0}^N \| \varphi^n\|_E^2, \quad \| \varphi_\dt \|_{\ell^\infty(E)} := \max_{0 \leq n \leq N} \left\{ \| \varphi^n \|_E \right\}, \quad \| \varphi_\dt \|^2_{\frakh^{1/2}(E)} := \sum_{n=1}^N \| \frakd \varphi^n \|_E^2, \] which are, respectively, discrete analogues of the $L^2(E)$, $L^\infty(E)$ and $H^{1/2}(E)$ norms. When dealing with energy estimates of time discrete problems, we will make, without explicit mention, repeated use of the following elementary identity \begin{equation} 2a(a-b) = a^2 - b^2 + (a-b)^2. \label{eq:toobasic} \end{equation} \section{Model Derivation} \label{sec:Model} In this section we briefly describe the derivation of our model. The procedure used to obtain it is quite similar to the arguments used in \cite{MR2511642,MR2261865,AbelsGarckeGrun} and it fits into the general framework of so-called phase-field models. In phase-field methods, sharp interfaces are replaced by thin transitional layers where the interfacial forces are smoothly distributed, so that there is no need to explicitly track interfaces. \subsection{Diffuse Interface Model} \label{sub:Model} To develop a phase-field model, we begin by introducing a so-called phase field variable $\phi$ and an interface thickness $\delta$. The phase field variable acts as a marker that will be almost constant (in our case $\pm1$) in the bulk regions, and will smoothly transition between these values in an interfacial region of thickness $\delta$. Having introduced the phase field, all the material properties that depend on the phase become slave variables, defined as \begin{equation} \Psi(\phi) = \frac{ \Psi_1 - \Psi_2 }2 \arctan\left(\frac\phi\delta\right) + \frac{ \Psi_1 + \Psi_2 }2, \label{eq:slave} \end{equation} where the $\Psi_i$ are the values on each one of the phases. \begin{rem}[Material properties] Relation \eqref{eq:slave} is not the only possible definition of the phase dependent quantities. For instance, \cite{Shen_Xiaofeng_2010} proposes to use a linear average between the bulk values. This approach has the advantage that the derivative of a phase-dependent field with respect to the phase (expressions that contain such quantities appear repeatedly) is constant, which greatly simplifies the calculations. However, this definition cannot be guaranteed to stay in the physical range of values, which might lead to, say, a vanishing density or viscosity. On the other hand, \cite{MR1984386} proposes to use a harmonic average, which guarantees that positive quantities stay bounded away from zero. 
In this work, we will assume that, with the exception of the permittivity $\vare$, \eqref{eq:slave} is the way the slave variables are defined, which has the advantage of guaranteeing that the field stays within the physical bounds. Any other definition with this property is equally suitable for our purposes.
\end{rem}
We model the droplet and surrounding medium as an incompressible Newtonian viscous two-phase fluid, so that its behavior is governed by the variable density incompressible Navier Stokes equations. The equation of conservation of momentum can be written in several forms. We choose the one proposed by Guermond and Quartapelle (\cite{MR2002i:76080}, see also \cite{MR2726065,Shen_Xiaofeng_2010}) because its nonlinear term possesses a skew symmetry property similar to that of the constant density Navier Stokes equations,
\begin{subequations}\label{eq:NSE} \begin{align} \label{eq:NSE1} \sigma(\sigma\ue)_t + \left( \rho\ue\SCAL\GRAD\ue \right) + \frac12 \DIV(\rho\ue )\ue - \DIV\left(\eta \bS(\ue) \right) + \GRAD \pe &= \bF, \\ \DIV \ue &= 0, \label{eq:NSE2} \end{align} \end{subequations}
where $\sigma = \sqrt\rho$ and $\rho$ is the density of the fluid and depends on the phase field; $\ue$ is the velocity of the fluid; $\pe$ is the pressure; $\eta$ is the viscosity of the fluid and depends on $\phi$; $\bS(\ue) = \tfrac12( \GRAD \ue + \GRAD \ue^\intercal )$ is the symmetric part of the gradient and $\bF$ are the external forces acting on the fluid. The phase field can be thought of as a scalar that is convected by the flow. Hence its motion is described by
\begin{equation} \phi_t + \DIV( \phi \ue ) = -\DIV\bJ_\phi, \label{eq:phi} \end{equation}
for some flux field $\bJ_\phi$, which will be determined later. To model the interaction between the applied voltage and the fluid, we introduce the charge density $q$. Another possibility, not explored here, is to introduce ion concentrations, thus leading to a Nernst Planck Poisson-like system, see \cite{fontelos:527,MR2535842,MR2666654,MR2471611}. The electric displacement field $\bD$ is defined in $\Omega^\star$. The evolution of these two quantities is governed by Maxwell's equations, \ie
\begin{equation} \DIV\bD = q, \qquad \bD_t + q\ue + \bJ_\bD =0, \label{eq:DMaxwell} \end{equation}
for some flux $\bJ_\bD$. Notice that we assume that the magnitude of the velocity of the fluid is negligible in comparison with the speed of light, and that the frequency of voltage actuation is sufficiently small, so that magnetic effects can be ignored. Taking the time derivative of the first equation and combining it with the divergence of the second, we obtain
\begin{equation} q_t + \DIV(q\ue) = -\DIV \bJ_\bD. \label{eq:qD} \end{equation}
To close the system, we must prescribe boundary conditions, determine the force $\bF$ exerted on the fluid, and find constitutive relations for the fluxes $\bJ_\phi$ and $\bJ_\bD$. We are assuming the solid walls are impermeable; therefore, if $\bn$ is the normal to $\Gamma$, then $\ue\SCAL\bn = 0$ on $\Gamma$ and $\bJ_\flat \SCAL \bn = 0$ for any flux $\bJ_\flat$. To find the rest of the boundary conditions, the force $\bF$, and constitutive relations for the fluxes, we denote the surface tension between the two phases by $\gamma$ and define the Ginzburg-Landau double well potential by
\[ \calW(\xi) = \begin{dcases} (\xi + 1 )^2, & \xi < -1, \\ \frac14\left( 1 - \xi^2 \right)^2, & |\xi| \leq 1, \\ (\xi - 1 )^2, & \xi > 1. \end{dcases} \]
\begin{rem}[The Ginzburg Landau potential]
The original definition of the potential, given by Cahn and Hilliard, is logarithmic.
See, for instance, \cite{Gomez20115310}. This way, the potential becomes infinite if the phase field variable is out of the range $[-1,1]$, thus guaranteeing that $\phi$ stays within that range. This is difficult to treat both in the analysis and the numerics and, hence, practitioners have used the Ginzburg-Landau potential $c(1-\xi^2)^2$, for some $c>0$. We go one step further and restrict the growth of the potential to be quadratic away from the range of interest. With this restriction, Caffarelli and M\"uller \cite{MR1367359} have shown uniform $L^\infty$-bounds on the solutions of the Cahn Hilliard equations (which, as we will see below, the phase field must satisfy). This has also proved useful in the numerical discretization of the Cahn Hilliard and Cahn Hilliard Navier Stokes equations; see \cite{MR2679727,MR2726065,SalgadoMCL}.
\end{rem}
Finally, we introduce the interface energy density function, which describes the energy due to the fluid-solid interaction. Let $\theta_s$ be the contact angle that, at equilibrium, the interface between the two fluids makes with respect to the solid walls (see \cite{MR2261865,MR2498521,SalgadoMCL}) and define
\[ \Theta_{fs}(\phi) = \frac{\cos\theta_s}2 \sin\left( \frac{\pi\phi}2 \right). \]
Then, up to a constant, the interfacial energy density equals $\gamma \Theta_{fs}(\phi)$. Let us write the free energy of the system
\begin{equation} \frakE = \gamma \int_\Omega \left( \frac\delta2 |\GRAD\phi|^2 + \frac1\delta \calW(\phi) \right) + \gamma \int_\Gamma \Theta_{fs}(\phi) + \frac12 \int_\Omega \frac1{\vare(\phi)} |\bD|^2 + \frac12 \int_\Omega \rho(\phi) |\ue|^2 + \frac\lambda2\int_\Omega q^2, \label{eq:energy} \end{equation}
where $\vare$ is the electric permittivity of the medium and $\lambda>0$ is a regularization parameter. Computing the variation of the energy $\frakE$ with respect to $\phi$, while keeping all the other arguments fixed, we obtain that
\[ \langle D_\phi \frakE, \bar\phi \rangle = \int_\Omega \mu \bar\phi + \int_\Gamma L \bar\phi, \]
where $\mu$ is the so-called chemical potential, which, in this situation, is given by
\begin{equation} \mu = \gamma \left( \frac1\delta \calW'(\phi) - \delta \LAP \phi \right) - \frac{ \vare'(\phi) }{2 \vare(\phi)^2 } |\bD|^2 + \frac12 \rho'(\phi) |\ue|^2. \label{eq:mu} \end{equation}
The quantity $L$ is given by
\begin{equation} L = \gamma \left( \Theta_{fs}'(\phi) + \delta \partial_\bn \phi \right), \label{eq:L} \end{equation}
and can be regarded as a ``chemical potential'' on the boundary.
\begin{rem}[Chemical potential]
From the definition of the chemical potential $\mu$ we see that the product $\mu\GRAD\phi$ includes the usual terms that define the surface tension, \ie
\[ \gamma \left( \frac1\delta \calW'(\phi) - \delta \LAP \phi \right)\GRAD \phi. \]
Additionally, it has the term
\[ -\frac{ \vare'(\phi) }{ 2 \vare(\phi)^2 } |\bD|^2 \GRAD \phi, \]
which, in some sense, can be thought of as coming from the Maxwell stress tensor.
\end{rem}
With this notation, let us take the time derivative of the free energy:
\[ \frac{\diff \frakE}{\diff t} = \int_\Omega \mu \phi_t + \int_\Gamma L \phi_t + \int_\Omega \bE \SCAL \bD_t + \int_\Omega \rho(\phi) \ue\SCAL\ue_t + \lambda \int_\Omega q q_t, \]
where $\bE$ is the electric field, defined as $\bE := \vare^{-1}\bD$. Let us rewrite each one of the terms in this expression.
Using \eqref{eq:phi} and the impermeability conditions,
\[ \int_\Omega \mu \phi_t = - \int_\Omega \mu \DIV\left( \phi \ue + \bJ_\phi \right) = \int_\Omega \GRAD \mu \SCAL \left( \phi \ue + \bJ_\phi \right). \]
Using \eqref{eq:DMaxwell},
\[ \int_\Omega \bE \SCAL \bD_t = - \int_\Omega \bE \SCAL \left( q \ue + \bJ_\bD \right). \]
For the boundary term, we introduce the material derivative at the boundary $\dot\phi = \phi_t + \ue_\btau \partial_\btau \phi$ and rewrite
\[ \int_\Gamma L \phi_t = \int_\Gamma L (\dot\phi - \ue_\btau \partial_\btau \phi). \]
Notice that $ \sigma(\sigma\ue)_t = \rho \ue_t + \tfrac12 \rho_t \ue$, so that, using \eqref{eq:NSE} and integrating by parts, we obtain
\[ \int_\Omega \rho(\phi) \ue \SCAL \ue_t = \int_\Omega \bF \SCAL \ue - \frac12 \int_\Omega \rho(\phi)_t |\ue|^2 + \int_\Gamma \eta \left(\bS(\ue) \SCAL \bn \right) \SCAL \ue_\btau - \int_\Omega \eta |\bS(\ue)|^2. \]
Finally, using \eqref{eq:qD} and the impermeability condition $(q\ue + \bJ_\bD)\SCAL \bn|_\Gamma = 0$,
\[ \lambda \int_\Omega q q_t = -\lambda \int_\Omega q \DIV\left( q \ue + \bJ_\bD \right) = \lambda \int_\Omega \GRAD q \SCAL \left( q \ue + \bJ_\bD\right). \]
With the help of these calculations, we find that the time derivative of the free energy can be rewritten as
\begin{equation} \label{eq:dotE} \begin{aligned} \dot \frakE &= - \int_\Omega \mu \GRAD \phi \SCAL \ue + \int_\Omega \bJ_\phi \SCAL \GRAD \mu + \int_\Gamma L (\dot\phi - \ue_\btau \partial_\btau \phi) - \int_\Omega \bE \SCAL \left( q\ue + \bJ_\bD \right) + \int_\Omega \bF \SCAL \ue \\ &- \frac12 \int_\Omega \rho'(\phi)\phi_t |\ue|^2 + \int_\Gamma \eta \left(\bS(\ue) \SCAL \bn \right) \SCAL \ue_\btau - \int_\Omega \eta |\bS(\ue)|^2 + \frac\lambda2 \int_\Omega \ue \SCAL \GRAD\left( q^2 \right) + \lambda \int_\Omega \GRAD q \SCAL \bJ_\bD. \end{aligned} \end{equation}
From \eqref{eq:dotE}, we can identify the power of the system, \ie the time derivative of the work $\frakW$ of internal forces, upon collecting all terms having a scalar product with the velocity $\ue$,
\[ \dot \frakW = \int_\Omega \bF \SCAL \ue - \int_\Omega \mu \GRAD \phi \SCAL \ue - \int_\Omega q \bE \SCAL \ue + \frac\lambda2 \int_\Omega \GRAD(q^2) \SCAL \ue - \frac12 \int_\Omega \rho'(\phi)\phi_t \ue\SCAL\ue. \]
We assume that the system is closed, \ie there are no external forces. This implies that $\dot\frakW \equiv 0$ and we obtain an expression for the forces $\bF$ acting on the fluid,
\[ \bF = \mu \GRAD \phi + q \bE + \frac12 \rho'(\phi)\phi_t \ue - \GRAD\left( \frac\lambda2 q^2 \right). \]
Using the first law of thermodynamics
\[ \frac{\diff \frakE}{\diff t} = \frac{\diff \frakW}{\diff t} - \calT \frac{\diff \frakS}{\diff t}, \]
where the absolute temperature is denoted by $\calT$ and the entropy by $\frakS$, we can conclude that
\[ \calT \dot \frakS = \int_\Omega \eta |\bS(\ue)|^2 - \int_\Omega \bE \SCAL \bJ_\bD + \int_\Omega \bJ_\phi \SCAL \GRAD \mu + \lambda \int_\Omega \GRAD q \SCAL \bJ_\bD + \int_\Gamma \eta \left(\bS(\ue) \SCAL \bn \right) \SCAL \ue_\btau + \int_\Gamma L (\dot\phi - \ue_\btau \partial_\btau \phi). \]
To find an expression for the fluxes we introduce, in the spirit of Onsager \cite{onsager33,MR2261865}, a dissipation function $\Phi$.
Since this must be a positive definite function on the fluxes, the simplest possible expression for a dissipation function is quadratic and diagonal in the fluxes, \eg
\[ \Phi = \frac12 \int_\Omega \frac1M |\bJ_\phi|^2 + \frac\alpha2 \int_\Gamma {\dot\phi}^2 + \frac12 \int_\Omega \frac1K |\bJ_\bD|^2 + \frac12 \int_\Gamma \beta |\ue_\btau|^2, \]
where all the proportionality constants, in principle, can depend on the phase $\phi$. Here, $M$ is known as the mobility, $K$ the conductivity and $\beta$ the slip coefficient. Using Onsager's relation
\[ \left \langle D_\bJ \left( \dot\frakE(\bJ) + \Phi(\bJ) \right) , \bar\bJ\right\rangle = 0, \quad \forall \bar\bJ, \]
and \eqref{eq:dotE}, we find that
\begin{equation} \label{eq:fluxes} \bJ_\phi = -M \GRAD \mu, \ \bJ_\bD = K \left( \bE - \lambda \GRAD q \right), \ \beta \ue_\btau = -\eta \bS(\ue)_{\bn\btau} + L \partial_\btau \phi, \ \alpha\dot\phi = - L, \end{equation}
where $\bS(\ue)_{\bn\btau} := (\bS(\ue)\SCAL\bn)_\btau$.
\begin{rem}[Constitutive relations]
Definitions \eqref{eq:fluxes} can also be obtained by simply requiring that the fluxes depend linearly on the gradients, which is implicitly postulated in the form of the dissipation function $\Phi$.
\end{rem}
Since, in practical settings, there is an externally applied voltage (which is going to act as the control mechanism), we introduce a potential $V$; the electric field is then given by $\bE = -\GRAD V$ with $V = V_0$ on $\partial^\star\Omega^\star$, where $V_0$ is the applied voltage. To summarize, we obtain the following system of equations for the phase variable $\phi$ and the chemical potential $\mu$,
\begin{equation} \begin{dcases} \phi_t + \ue \SCAL \GRAD \phi = \DIV( M(\phi) \GRAD \mu ), & \text{in }\Omega, \\ \mu = \gamma \left( \frac1\delta \calW'(\phi) - \delta \LAP \phi \right) - \frac12\vare'(\phi)|\GRAD V|^2 + \frac12 \rho'(\phi)|\ue|^2, & \text{in } \Omega, \\ \alpha\left( \phi_t + \ue_\btau \partial_\btau \phi \right) + \gamma\left( \Theta_{fs}'(\phi) + \delta \partial_\bn \phi \right) = 0,\ M(\phi) \partial_n \mu = 0, & \text{on }\Gamma, \end{dcases} \label{eq:phase} \end{equation}
and the velocity $\ue$ and pressure $\pe$,
\begin{equation} \begin{dcases} \frac{ D(\rho(\phi)\ue) }{Dt} - \DIV\left( \eta(\phi) \bS(\ue) \right) + \GRAD \pe = \mu \GRAD \phi - q \GRAD \left( V + \lambda q \right) + \frac12 \rho'(\phi)\phi_t \ue, & \text{in }\Omega, \\ \DIV \ue =0, & \text{in } \Omega, \\ \ue\SCAL\bn = 0, & \text{on }\Gamma, \\ \beta(\phi) \ue_\btau + \eta(\phi) \bS(\ue)_{\bn\btau} = \gamma\left( \Theta_{fs}'(\phi) + \delta \partial_\bn \phi \right) \partial_\btau \phi, & \text{on }\Gamma, \end{dcases} \label{eq:vel} \end{equation}
where we have set
\[ \frac{D(\rho(\phi)\ue)}{Dt} := \sigma(\phi)(\sigma(\phi) \ue)_t + \rho(\phi) \ue\ADV \ue + \tfrac12 \DIV(\rho(\phi) \ue)\ue.
\] In addition, we have the equation for the electric charges $q$, \begin{equation} \begin{dcases} q_t + \DIV (q \ue ) = \DIV \left[K(\phi) \GRAD\left( \lambda q + V \right)\right], & \text{in }\Omega, \\ K(\phi) \GRAD\left( \lambda q + V \right) \SCAL \bn = 0, & \text{on }\Gamma, \end{dcases} \label{eq:charge} \end{equation} and voltage $V$, \begin{equation} \begin{dcases} - \DIV\left( \vare^\star(\phi) \GRAD V \right) = q \chi_\Omega, & \text{in }\Omega^\star, \\ V = V_0, & \text{on }\partial^\star\Omega^\star, \\ \partial_\bn V = 0, & \text{on }\partial\Omega^\star \cap \Gamma, \end{dcases} \label{eq:potential} \end{equation} where \[ \vare^\star(\phi) = \begin{dcases} \vare(\phi), & \Omega, \\ \vare_D, & \Omega^\star \setminus \Omega, \end{dcases} \] with $\vare_D$ being the value of the permittivity on the dielectric plates $\Omega^\star \setminus \Omega$, so $\vare_D$ is constant there. \begin{rem}[Generalized Navier Boundary condition] In \eqref{eq:vel}, the boundary condition for the tangential velocity is known as the generalized Navier boundary condition (GNBC), and it is aimed at resolving the so-called contact line paradox of the movement of a two phase fluid on a solid wall. The reader is referred to, for instance, \cite{QianCiCP,MR2261865,MR2498521} for a discussion of its derivation. Although there has been a lot of discussion and controversy around the validity of this boundary condition, see for instance \cite{Buscaglia20113011,MR2455379}, we shall take the GNBC as a given and will not discuss its applicability and/or consequences here. \end{rem} \subsection{Nondimensionalization} \label{sub:non-dim} \begin{table}[h] \caption{Physical Parameters at standard temperature (25$^\circ$ C) and pressure (1 bar), taken from \cite{CRC_Book:2002}. A Farad (F) is $\mathrm{C}^2 / \mathrm{J}$. For drinking water, $K$ is $5\cdot10^{-4}$ to $5\cdot10^{-2}$.} \begin{centering} \begin{tabular}{|c||c|} \hline \textbf{Parameter} & \textbf{Value} \\ \hline\hline Surface Tension $\gamma$ & (air/water) 0.07199 $\mathrm{J} / \mathrm{m}^{2}$ \\ \hline Dynamic Viscosity $\etascale$ & (water) $8.68\cdot10^{-4}$, (air) $1.84\cdot10^{-5}$ $\;\mathrm{Kg} / \mathrm{m\cdot s}$ \\ \hline Density $\rhoscale$ & (water) 996.93, (air) 1.1839 $\;\mathrm{Kg} / \mathrm{m}^{3}$ \\ \hline Length Scale (Channel Height) $\Lscale$ & $50\cdot10^{-6}$ to $100\cdot10^{-6}$ m \\ \hline Velocity Scale $\uscale$ & 0.001 to 0.05 $\mathrm{m} / \mathrm{s}$ \\ \hline Voltage Scale $\Vscale$ & 10 to 50 Volts \\ \hline Permittivity of Vacuum $\epsvac$ & $8.854\cdot10^{-12}$ $~\mathrm{F} / \mathrm{m}$ \\ \hline Permittivity $\epsscale$ & (water) $78.36 \cdot \epsvac$, (air) $1.0 \cdot \epsvac$ \\ \hline Charge (Regularization) Parameter $\lambda$ & 0.5 $~\mathrm{J} \cdot \mathrm{m}^{3} / \mathrm{C}^2$ \\ \hline Mobility $\Mscale$ & 0.01 $\; \mathrm{m}^5 / (\mathrm{J} \cdot \mathrm{s}) $ \\ \hline Phase Field Parameter $\alpha$ & 0.001 $\; \mathrm{J} \cdot \mathrm{s} / \mathrm{m}^2 $ \\ \hline Electrical Conductivity $\Kscale$ & (deionized water) $5.5\cdot10^{-6}$, \\ & (air) $\approx$ 0.0 $\; \mathrm{C}^2 / (\mathrm{J} \cdot \mathrm{m} \cdot \mathrm{s}) \equiv$ Amp$/ (\mathrm{Volt} \cdot \mathrm{m})$ \\ \hline\hline \end{tabular} \end{centering} \label{tbl:Physical_Parameters} \end{table} Here we present appropriate scalings so that we may write equations \eqref{eq:phase}--\eqref{eq:potential} in non-dimensional form. 
Table~\ref{tbl:Physical_Parameters} shows some typical values for the material parameters appearing in the model. Consider the following scalings: \begin{align*} \tilde{\rho} &= \rho / \rhoscale \text{ (choose $\rhoscale$)}, & \tilde{\eta} &= \eta / \etascale \text{ (choose $\etascale$)}, &\tilde{\beta} &= \beta / \betascale, & \betascale &= \etascale / \Lscale, \\ \tilde{\pe} &= \pe / \pscale, &\pscale &= \rhoscale \uscale^2, &\tilde{\ue} &= \ue / \uscale \text{ (choose $\uscale$)}, &\tilde{\bx} &= \bx / \Lscale \text{ (choose $\Lscale$)}, \\ \tilde{t} &= t / \tscale, &\tscale &= \Lscale / \uscale, &\tilde{\mu} &= \mu / \muscale, &\muscale &= \gamma / \Lscale, \\ \tilde{q} &= q / \qscale, &\qscale &= \Vscale / \lambda, &\tilde{V} &= V / \Vscale, \text{ (choose $\Vscale$)}, & \tilde{\vare} &= \vare / \epsscale, \\ \tilde{\delta} &= \delta / \Lscale, &\widetilde{M} &= M / \Mscale, &\widetilde{K} &= K / \Kscale, &\CAP &= \frac{\etascale \uscale}{\gamma}, \\ \REY &= \frac{\rhoscale \uscale \Lscale}{\etascale}, &\WEB &= \frac{\rhoscale \uscale^2 \Lscale}{\gamma}, &\BOEW &= \frac{\epsscale \Vscale^2}{\Lscale \gamma}, &\IE &= \frac{\rhoscale \uscale^2}{\qscale \Vscale}, \\ \STPH &= \frac{\gamma}{\alpha / \tscale}, &\MO &= \frac{\gamma \Mscale}{\Lscale^2 \uscale}, &\KO &= \frac{\Vscale \Kscale}{\Lscale \qscale \uscale}, &\CH &= \frac{\qscale \Lscale^2}{\Vscale \epsscale}, \\ \end{align*} where $\CAP$ is the capillary number, $\REY$ is the Reynolds number, $\WEB$ is the Weber number, $\BOEW$ is the electro-wetting Bond number, $\IE$ is the ratio of fluid forces to electrical forces, $\STPH$ is the ratio of surface tension to ``phase field forces,'' $\MO$ is a (non-dimensional) mobility coefficient, $\KO$ is a conductivity coefficient, and $\CH$ is an electric charge coefficient. Let us now make the change of variables. To simplify notation, we drop the tildes, and consider all variables and differential operators as non-dimensional. The fluid equations read: \begin{equation*} \begin{dcases} \frac{ D(\rho \ue) }{Dt} -\frac{1}{\REY} \DIV\left( \eta \bS(\ue) \right) +\GRAD \pe = \frac{1}{\WEB} \mu \GRAD \phi - \frac{1}{\IE} q \GRAD \left( V + q \right) + \frac12 \rho'(\phi)\phi_t \ue, & \text{in }\Omega, \\ \DIV \ue =0, & \text{in } \Omega, \\ \ue\SCAL\bn = 0, & \text{on }\Gamma, \\ \beta \ue_\btau + \eta \bS(\ue)_{\bn\btau} = \frac1{\CAP} \left( \Theta_{fs}'(\phi) + \delta \partial_\bn \phi \right) \partial_\btau \phi, & \text{on }\Gamma. \end{dcases} \end{equation*} The phase-field equations change to (again dropping the tilde) \begin{equation*} \begin{dcases} \phi_t + \ue \SCAL \GRAD \phi = \MO \DIV( M(\phi)\GRAD \mu), & \text{in }\Omega, \\ \mu = \left( \frac1\delta \calW'(\phi) - \delta \LAP \phi \right) -\BOEW \frac12 \vare'(\phi)|\GRAD V|^2 + \WEB \frac12 \rho'(\phi)|\ue|^2, & \text{in } \Omega, \\ \phi_t + \ue_\btau \partial_\btau \phi + \STPH \left( \Theta_{fs}'(\phi) + \delta \partial_\bn \phi \right) = 0, \ \partial_n \mu = 0, & \text{on }\Gamma. \end{dcases} \end{equation*} Performing the change of variables on the charge transport equation gives \begin{equation*} \begin{dcases} q_t + \DIV (q \ue ) = \KO \DIV \left(K(\phi)\GRAD\left(q + V \right)\right), & \text{in }\Omega, \\ \bn \SCAL \GRAD\left(q + V \right) = 0, & \text{on }\Gamma. 
\end{dcases} \end{equation*}
Lastly, for the electrostatic equation we obtain
\begin{equation*} \begin{dcases} - \DIV\left( \vare^\star(\phi) \GRAD V \right) = \CH q \chi_\Omega, & \text{in }\Omega^\star, \\ V = V_0 / \Vscale, & \text{on }\partial^\star\Omega^\star, \\ \partial_\bn V = 0, & \text{on }\partial\Omega^\star \cap \Gamma, \end{dcases} \end{equation*}
where $\vare^\star(\phi)$ has been normalized by $\epsscale$. To alleviate the notation, for the rest of our discussion we will set all the nondimensional groups ($\CAP$, $\REY$, $\WEB$, $\BOEW$, $\IE$, $\STPH$, $\MO$, $\KO$ and $\CH$) to one. If needed, the dependence of the constants on all these parameters can be traced by following our arguments. Moreover, we must note that if a simplification of this model is desired, then these scalings must serve as a guide to decide which effects are dominant.
\subsection{Tangential Derivatives at the Boundary}
\label{sub:tangentialderiv}
As we can see from \eqref{eq:phase} and \eqref{eq:vel}, our model incorporates tangential derivatives of the phase variable $\phi$ at the boundary $\Gamma$. Unfortunately, in the analysis, we are not able to deal with these terms. Therefore, we propose some simplifications. The first possible simplification is simply to ignore the terms that contain this tangential derivative; see \cite{MR2511642}. However, it is our feeling that their presence is important, especially in dealing with the contact angle in the GNBC. A second possibility would be to add an \emph{ad hoc} term of the form $\LAP_\Gamma \phi$ to the boundary condition for the phase variable, where by $\LAP_\Gamma$ we denote the Laplace-Beltrami operator on $\Gamma$. A similar approach has been followed, in a somewhat different context, for instance, by Pr\"uss \etal \cite{MR2230586} and Cherfils \etal \cite{MR2629535}. However, this condition might destroy the conservation of $\phi$, which is an important feature of phase field models based on the Cahn Hilliard equation. Finally, the approach that we propose is to recall that, in principle, the phase field variable must be constant in the bulk of each one of the phases and so $\partial_\btau \phi \approx 0$ there. Moreover, in the sharp interface limit this tangential derivative must be a Dirac measure supported on the interface. Therefore, we define a function
\begin{equation} \psi(\phi) = \frac{1}{\Lscale} \frac1\delta e^{-\frac{\phi^2}{2\delta}}, \quad \text{where } \delta \text{ is non-dimensional}, \label{eq:defpsi} \end{equation}
and replace all the instances of $\partial_\btau \phi$ by $\psi(\phi)$.
\subsection{Contact Line Pinning}
\label{sub:pinning}
Simply put, the contact line pinning (hysteresis) is a frictional effect that occurs at the three-phase contact line, and is rather controversial. We refer the reader to \cite{walker:102103,MR2595379} for an explanation of its origins and possible dependences. Let us here only mention that, macroscopically, the pinning force has a threshold value and, thus, it should depend on the stress at the contact line. It is important to take into account contact line pinning since, as observed in \cite{walker:102103,MR2595379}, it is crucial for capturing the true time scales of the problem. We propose a phenomenological approach to deal with this effect.
From the GNBC,
\[ \beta \ue_\btau + \eta \bS(\ue)_{\bn\btau} = \gamma\left( \Theta_{fs}'(\phi) + \delta \partial_\bn \phi \right)\psi(\phi), \]
we can see that, to recover no-slip conditions, one must set the slip coefficient $\beta$ sufficiently large. Conversely, when $\beta$ is small, one obtains an approximation of full slip conditions. A simple dimensional argument then shows that $\beta = \eta\ell$, where $\ell$ has the dimensions of inverse length. Therefore, we propose the slip coefficient to have the following form
\[ \beta = \eta(\phi) \ell(\phi,\bS), \]
where
\[ \ell(\phi,\bS) = \frac{1}{\Lscale} \begin{dcases} \frac1\delta, & |\phi|>\frac12, \\ \frac1\delta, & |\phi| \leq \frac12, \text{ and } |\bS(\ue)_{\bn\btau}| \ll T_p, \\ 1, & |\phi| \leq \frac12, \text{ and } |\bS(\ue)_{\bn\btau}| \approx T_p, \\ \delta, & |\phi| \leq \frac12, \text{ and } |\bS(\ue)_{\bn\btau}| \gg T_p, \end{dcases} \]
where $T_p$ denotes the threshold (pinning) value of the stress at the contact line and $\delta$ is the non-dimensional transition length. For the purposes of analysis, we face the same difficulties in this expression as in \S\ref{sub:tangentialderiv}. Therefore, we will use this expression to model pinning in the numerical examples, but leave it out of the analysis.
\section{Formal Weak Formulation and Formal Energy Estimate}
\label{sec:Energy}
In this section we obtain a weak formulation for problem \eqref{eq:phase}--\eqref{eq:potential} and show a formal energy estimate, which serves as an a priori estimate and as the basic relation on which our existence theory is based.
\subsection{Formal Weak Formulation}
\label{sub:weak}
To obtain a weak formulation of the problem, we begin by multiplying the first equation of \eqref{eq:phase} by $\bar\phi$, the second by $\bar\mu$, and integrating in $\Omega$. After integration by parts, taking into account the boundary conditions, we arrive at
\begin{subequations} \label{eq:phaseweak} \begin{equation} \label{eq:phaseweak1} \scl \phi_t , \bar\phi \scr + \scl \ue\SCAL\GRAD\phi, \bar\phi \scr + \scl M(\phi) \GRAD\mu, \GRAD\bar\phi \scr =0, \end{equation} \text{and} \begin{multline} \scl \mu, \bar\mu \scr = \frac\gamma\delta \scl \calW'(\phi), \bar\mu \scr + \gamma \delta \scl \GRAD\phi, \GRAD\bar\mu \scr - \frac12 \scl \vare'(\phi) |\GRAD V|^2, \bar\mu \scr + \frac12 \scl \rho'(\phi) |\ue|^2, \bar \mu \scr \\ + \alpha \sbl \phi_t + \ue_\btau \psi(\phi), \bar\mu \sbr + \gamma \sbl \Theta_{fs}'(\phi), \bar\mu \sbr. \label{eq:phaseweak2} \end{multline} \end{subequations}
Multiply the first equation of \eqref{eq:vel} by $\bw$ such that $\bw\SCAL\bn|_\Gamma = 0$, the second by $\bar p$ and integrate in $\Omega$. Integration by parts on the first equation, in conjunction with the boundary conditions and \eqref{eq:defpsi}, yields
\begin{align*} -\scl \DIV (\eta(\phi) \bS(\ue)), \bw \scr &= \scl \eta(\phi) \bS(\ue),\bS(\bw) \scr - \sbl \eta(\phi) \bS(\ue)_\bn, \bw_\btau \sbr \\ & = \scl \eta(\phi) \bS(\ue), \bS(\bw) \scr + \sbl \beta(\phi) \ue_\btau, \bw_\btau \sbr - \gamma \sbl \Theta_{fs}'(\phi) + \delta\partial_\bn \phi, \bw_\btau \psi(\phi) \sbr \\ &= \scl \eta(\phi) \bS(\ue),\bS(\bw) \scr + \sbl \beta(\phi) \ue_\btau, \bw_\btau \sbr + \alpha \sbl \phi_t + \ue_\btau \psi(\phi), \bw_\btau \psi(\phi) \sbr, \end{align*}
where we used the third equation of \eqref{eq:phase}.
With these manipulations we obtain
\begin{subequations} \label{eq:NSEweak} \begin{multline} \label{eq:velweak} \scl \frac{ D( \rho(\phi) \ue ) }{Dt}, \bw \scr + \scl \eta(\phi) \bS(\ue), \bS(\bw) \scr - \scl \pe, \DIV \bw \scr + \sbl \beta(\phi) \ue_\btau, \bw_\btau \sbr + \alpha \sbl \ue_\btau \psi(\phi), \bw_\btau \psi(\phi) \sbr \\ = \scl \mu \GRAD\phi, \bw \scr - \scl q \GRAD(\lambda q + V), \bw \scr + \frac12 \scl \rho'(\phi)\phi_t \ue, \bw \scr - \alpha \sbl \phi_t \psi(\phi), \bw_\btau \sbr, \end{multline} \text{for all $\bw$, and } \begin{equation} \scl \bar p, \DIV \ue \scr = 0, \label{eq:incweak} \end{equation} \end{subequations}
for all $\bar p$. Multiply \eqref{eq:charge} by $r$ and integrate in $\Omega$ to get
\begin{equation} \label{eq:chargeweak} \scl q_t, r \scr - \scl q\ue, \GRAD r \scr + \scl K(\phi) \GRAD(\lambda q + V ), \GRAD r \scr = 0. \end{equation}
Let $W$ be a function that equals zero on $\partial^\star \Omega^\star$. Multiply the equation for the electric potential \eqref{eq:potential} by $W$ and integrate in $\Omega^\star$ to obtain
\begin{equation} \label{eq:potentialweak} \scl \vare^\star(\phi) \GRAD V, \GRAD W \scr_{\Omega^\star}= \scl q, W \scr. \end{equation}
Given the way the model has been derived, it is clear that an energy estimate must exist. Before we obtain it, let us show a comparison result \emph{\`a la} Gr\"onwall.
\begin{lem}[Gr\"onwall] \label{lem:mod-gronwall} Let $f,g,h,w:[0,T] \rightarrow \Real$ be measurable and positive functions such that
\begin{equation} \label{eq:gR} f(t)^2 + \int_0^t g(s)\diff s \leq h(t) + \int_0^t f(s)w(s)\diff s, \quad \forall t\in[0,T]. \end{equation}
Then
\[ \sup_{s\in[0,T]}f(s)^2 + \frac12 \int_0^T g(s) \diff s \leq 4 \sup_{s\in[0,T]}h(s) + 4T \int_0^T w^2(s) \diff s. \]
\end{lem}
\begin{proof}
Take, in \eqref{eq:gR}, $t=t_0$, where
\[ t_0 = \argmax \left\{ f(s) : s \in [0,T] \right\}, \]
then
\[ f(t_0)^2 + \int_0^{t_0} g(s) \diff s \leq \max_{s\in[0,T]}h(s) + f(t_0)\int_0^{t_0} w(s) \diff s \leq \max_{s\in[0,T]}h(s) + \frac12 f(t_0)^2 + \left( \int_0^T w(s) \diff s \right)^2. \]
Canceling the common factor $\tfrac12 f(t_0)^2$, applying the Cauchy-Schwarz inequality $\big(\int_0^T w(s)\diff s\big)^2 \leq T\int_0^T w^2(s)\diff s$, and recalling that $f(t_0) = \sup_{s\in[0,T]}f(s)$, we obtain a bound for $\sup_{s\in[0,T]}f(s)^2$. To control $\int_0^T g(s)\diff s$, it suffices to take $t=T$ in \eqref{eq:gR} and use the bound just obtained for $f(t_0)$; collecting the constants yields the result.
\end{proof}
\begin{rem}[Exponential in time estimates]
The main advantage of using Lemma~\ref{lem:mod-gronwall} to obtain \emph{a priori} estimates, as opposed to a standard argument invoking Gr\"onwall's inequality, is that we can avoid exponential dependence on the final time $T$.
\end{rem}
The following result provides the formal energy estimate.
\begin{thm}[Stability] \label{thm:energy} If there is a solution to \eqref{eq:phase}--\eqref{eq:potential}, then it must satisfy the following estimate \begin{multline} \sup_{s\in(0,T]}\left\{ \int_\Omega \left[\frac12 \rho(\phi) |\ue|^2 + \frac\lambda4 q^2 + \gamma\left( \frac\delta2 |\GRAD \phi|^2 + \frac1\delta \calW(\phi)\right) \right] + \int_{\Omega^\star} \frac14 \vare^\star(\phi) |\GRAD V|^2 + \gamma \int_\Gamma \Theta_{fs}(\phi) \right\} \\ + \int_0^T\left\{ \int_\Omega \left[ \eta(\phi) |\bS(\ue)|^2 + M(\phi) |\GRAD \mu|^2 + K(\phi) |\GRAD(\lambda q + V )|^2 \right] + \int_\Gamma \left[ \beta(\phi) |\ue_\btau|^2 + \alpha |\phi_t + \ue_\btau \psi(\phi)|^2 \right] \right\} \leq \\ \left\{ \int_\Omega \left[\frac12 \rho(\phi) |\ue|^2 + q^2 + \gamma\left( \frac\delta2 |\GRAD \phi|^2 + \frac1\delta \calW(\phi)\right) + \frac12 |\bar V_0|^2 \right] + \int_{\Omega^\star}\left( \vare^\star(\phi) |\GRAD V|^2 + \vare_M |\GRAD \bar V_0|^2 \right) \right. \\ \left. + \gamma\int_\Gamma \Theta_{fs}(\phi) \right\}\Big|_{t=0} +\sup_{s\in[0,T]}\left\{ \int_{\Omega^\star}\vare_M |\GRAD \bar V_0|^2 +\int_\Omega \frac1\lambda |\bar V_0|^2(t) \right\} \\ +cT\int_0^T \left[ \int_{\Omega^\star} \vare_M |\GRAD \bar V_{0,t}|^2 + \frac4\lambda \int_\Omega |\bar V_{0,t}|^2 \right], \label{eq:Elaw} \end{multline} where $c$ does not depend on $T$. \end{thm} \begin{proof} We first deal with the Navier Stokes and Cahn Hilliard equations in a way very similar to Theorem~3.1 of \cite{SalgadoMCL}. Set $\bw = \ue$ in \eqref{eq:velweak} and notice that \[ \scl \frac{ D(\rho\ue)}{Dt}, \ue \scr = \frac12\frac{\diff}{\diff t}\int_\Omega \rho |\ue|^2, \] because \[ \frac{ D(\rho \ue)}{Dt} = \sigma(\sigma\ue)_t + \rho \ue \ADV \ue + \frac12 \DIV(\rho \ue) \ue. \] We obtain \begin{multline} \frac{\diff}{\diff t} \frac12\int_\Omega \rho |\ue|^2 + \int_\Omega \eta |\bS(\ue)|^2 + \int_\Gamma \beta(\phi) |\ue_\btau|^2 + \alpha \int_\Gamma |\ue_\btau \psi(\phi)|^2 = \scl \mu \GRAD \phi, \ue \scr \\ - \scl q \GRAD (\lambda q + V ), \ue \scr + \frac12 \scl \rho'(\phi)\phi_t, |\ue|^2 \scr - \alpha \sbl \phi_t \ue_\btau, \psi( \phi) \sbr. \label{eq:velE} \end{multline} Set $\bar\phi = \mu$ in \eqref{eq:phaseweak1} to get \begin{equation} \scl \mu, \phi_t \scr + \scl \mu \GRAD \phi, \ue \scr + \int_\Omega M(\phi) |\GRAD \mu|^2 = 0. \label{eq:phaseE} \end{equation} Set $\bar\mu = -\phi_t$ in \eqref{eq:phaseweak2} to write \begin{multline} - \scl \phi_t, \mu \scr = - \gamma \frac{\diff}{\diff t}\left[ \int_\Omega \left (\frac\delta2 |\GRAD \phi|^2 + \frac1\delta \calW(\phi) \right) + \int_\Gamma \Theta_{fs}(\phi) \right] + \frac12 \scl \vare'(\phi)\phi_t, |\GRAD V|^2 \scr \\ - \frac12 \scl \rho'(\phi)\phi_t, |\ue|^2 \scr - \alpha \int_\Gamma (\phi_t)^2 - \alpha \sbl \phi_t, \ue_\btau \psi( \phi) \sbr. \label{eq:chemE} \end{multline} Add \eqref{eq:velE}, \eqref{eq:phaseE} and \eqref{eq:chemE} to arrive at \begin{multline} \frac{\diff}{\diff t}\left[ \int_\Omega \left(\frac12 \rho(\phi) |\ue|^2 + \gamma \left( \frac\delta2 |\GRAD \phi|^2 + \frac1\delta \calW(\phi) \right) \right) + \gamma \int_\Gamma \Theta_{fs}(\phi) \right] + \int_\Omega \eta(\phi) |\bS(\ue)|^2 \\ + \int_\Gamma \beta(\phi) |\ue_\btau|^2 + \int_\Omega M(\phi) |\GRAD \mu|^2 +\alpha \int_\Gamma \left( \phi_t + \ue_\btau \psi\left( \phi \right) \right)^2 = - \scl q \GRAD (\lambda q + V ), \ue \scr + \frac12 \scl \vare'(\phi)\phi_t, |\GRAD V|^2 \scr. \label{eq:combined} \end{multline} We next deal with the electrostatic equations. 
Set $r = \lambda q + V$ in \eqref{eq:chargeweak} to get \begin{equation} \label{eq:chargeE} \frac\lambda2 \frac{\diff}{\diff t}\int_\Omega q^2 + \scl V, q_t \scr - \scl q \GRAD ( \lambda q + V ), \ue \scr + \int_\Omega K(\phi) | \GRAD (\lambda q + V ) |^2 = 0. \end{equation} Take the time derivative of \eqref{eq:potentialweak} and set $W=V-\bar V_0$, where by $\bar V_0$ we mean an extension of $V_0$ to $\Omega^\star$. We obtain \begin{equation} \int_{\Omega^\star} \partial_t \left( \vare^\star(\phi) \right) |\GRAD V|^2 + \frac12 \int_{\Omega^\star}\vare^\star(\phi) \partial_t \left( |\GRAD V|^2 \right) = \scl q_t, V \scr -\scl q_t, \bar V_0 \scr + \scl \partial_t(\vare^\star(\phi) \GRAD V), \GRAD \bar V_0 \scr_{\Omega^\star}. \label{eq:potentialE} \end{equation} Add \eqref{eq:combined}, \eqref{eq:chargeE} and \eqref{eq:potentialE} and recall that $\vare^\star(\phi)$ is constant on $\Omega^\star \setminus \Omega$. We thus obtain \begin{multline*} \frac\diff{\diff t}\left\{ \int_\Omega \left[\frac12 \rho(\phi) |\ue|^2 + \frac\lambda2 q^2 + \gamma\left( \frac\delta2 |\GRAD \phi|^2 + \frac1\delta \calW(\phi)\right) \right] + \int_{\Omega^\star} \frac12 \vare^\star(\phi) |\GRAD V|^2 + \gamma \int_\Gamma \Theta_{fs}(\phi) \right\} \\ + \int_\Omega \left[\eta(\phi) |\bS(\ue)|^2 + M(\phi) |\GRAD \mu|^2 + K(\phi) |\GRAD(\lambda q + V )|^2 \right] + \int_\Gamma \left[\beta(\phi) |\ue_\btau|^2 + \alpha |\phi_t + \ue_\btau \psi(\phi)|^2 \right] \\ = \scl \partial_t (\vare^\star(\phi) \GRAD V), \GRAD \bar V_0 \scr_{\Omega^\star} - \scl q_t, \bar V_0 \scr. \end{multline*} Integrate in time over $[0,t]$, with $0<t<T$ and integrate by parts the right hand side. Repeated applications of the Cauchy-Schwarz inequality give us \begin{multline*} \left\{ \int_\Omega \left[\frac12 \rho(\phi) |\ue|^2 + \frac\lambda4 q^2 + \gamma\left( \frac\delta2 |\GRAD \phi|^2 + \frac1\delta \calW(\phi)\right) \right] + \int_{\Omega^\star} \frac14 \vare^\star(\phi) |\GRAD V|^2 + \gamma \int_\Gamma \Theta_{fs}(\phi) \right\}\Big|_{t} \\ + \int_0^t\int_\Omega \left[ \eta(\phi) |\bS(\ue)|^2 + M(\phi) |\GRAD \mu|^2 + K(\phi) |\GRAD(\lambda q + V )|^2 \right] + \int_0^t\int_\Gamma \left[\beta(\phi) |\ue_\btau|^2 + \alpha |\phi_t + \ue_\btau \psi(\phi)|^2 \right] \\ \leq \left\{ \int_\Omega \left[\frac12 \rho(\phi) |\ue|^2 + q^2 + \gamma\left( \frac\delta2 |\GRAD \phi|^2 + \frac1\delta \calW(\phi)\right) \right] + \int_{\Omega^\star} \vare^\star(\phi) |\GRAD V|^2 + \gamma \int_\Gamma \Theta_{fs}(\phi) \right\}\Big|_{t=0}\\ + \int_{\Omega^\star} \vare_M \left( |\GRAD \bar V_0|^2(t) + |\GRAD \bar V_0|^2(0) \right) +\int_\Omega \left[ \frac1\lambda |\bar V_0|^2(t) + \frac12 |\bar V_0|^2(0) \right] \\ + c\int_0^t \left\{ \int_{\Omega^\star} \vare_M |\GRAD \bar V_{0,t}|^2 + \frac4\lambda \int_\Omega |\bar V_{0,t}|^2 \right\}^{1/2} \left[ \frac\lambda4 \int_\Omega q^2 + \int_{\Omega^\star} \frac14 \vare^\star(\phi) |\GRAD V|^2 \right]^{1/2}, \end{multline*} where $\vare_M$ is the maximal value of the function $\vare^\star(\phi)$. 
Finally, if we set \begin{align*} f(t) &= \left\{ \int_\Omega \left[\frac12 \rho(\phi) |\ue|^2 + \frac\lambda4 q^2 + \gamma\left( \frac\delta2 |\GRAD \phi|^2 + \frac1\delta \calW(\phi)\right) \right] + \int_{\Omega^\star} \frac14 \vare^\star(\phi) |\GRAD V|^2 + \gamma \int_\Gamma \Theta_{fs}(\phi) \right\}(t), \\ g(t) &= \int_\Omega \left[ \eta(\phi) |\bS(\ue)|^2 + M(\phi) |\GRAD \mu|^2 + K(\phi) |\GRAD(\lambda q + V )|^2 \right] + \int_\Gamma \left[ \beta(\phi) |\ue_\btau|^2 + \alpha |\phi_t + \ue_\btau \psi(\phi)|^2 \right](t), \\ h(t) &= \left\{ \int_\Omega \left[\frac12 \rho(\phi) |\ue|^2 + q^2 + \gamma\left( \frac\delta2 |\GRAD \phi|^2 + \frac1\delta \calW(\phi)\right) \right] + \int_{\Omega^\star} \vare^\star(\phi) |\GRAD V|^2 + \gamma \int_\Gamma \Theta_{fs}(\phi) \right\}\Big|_{0}, \\ &+ \int_{\Omega^\star}\vare_M \left( |\GRAD \bar V_0|^2(t) + |\GRAD \bar V_0|^2(0) \right) +\int_\Omega \left[ \frac1\lambda |\bar V_0|^2(t) + \frac12 |\bar V_0|^2(0) \right], \\ w(t) &= \left\{ \int_{\Omega^\star} \vare_M |\GRAD \bar V_{0,t}|^2 + \frac4\lambda \int_\Omega |\bar V_{0,t}|^2 \right\}^{1/2}, \end{align*} then an application of Lemma~\ref{lem:mod-gronwall} gives the desired estimate. \end{proof} \section{The Fully Discrete Problem and Its Analysis} \label{sec:Discrete} In this section we introduce a space-time discrete problem that is used to approximate the electrowetting problem \eqref{eq:phaseweak}--\eqref{eq:potentialweak}. Using this discrete problem, and the result of Theorem~\ref{thm:energy}, we will prove that a time-discrete version of our problem always has a solution. Moreover, in Section~\ref{sec:NumExp}, we will base our numerical experiments on a variant of the problem defined here. \subsection{Definition of the Fully Discrete Problem} \label{sub:defhtau} To discretize in time, as discussed in \S\ref{sub:Notation}, we divide the time interval $[0,T]$ into subintervals of length $\dt>0$. Recall that the time increment operator $\frakd$ was introduced in \eqref{eq:frakd} and the time average operator $\overline{(\cdot)}$ in \eqref{eq:defstar}. To discretize in space, we introduce a parameter $h>0$ and let $\polW_h \subset \Hunstar$, $\polQ_h \subset \Hun$, $\polX_h \subset \bV$ and $\polM_h \subset \tildeLdeux$ be finite dimensional subspaces. We require the following compatibility condition between the spaces $\polW_h$ and $\polQ_h$: \begin{equation} W_h|_{\Omega} \in \polQ_h, \quad \forall W_h \in \polW_h. \label{eq:voltchargecomp} \end{equation} Moreover, we require that the pair of spaces $(\polX_h,\polM_h)$ satisfies the so-called LBB condition (see \cite{GR86,BF91,MR2050138}), that is, there exists a constant $c$ independent of $h$ such that \begin{equation} c \| \bar{p}_h \|_{L^2} \leq \sup_{\bv_h \in \polX_h} \frac{ \int_\Omega \bar{p}_h \DIV \bv_h }{ \| \bv_h \|_{\bH^1}}, \quad \forall \bar{p}_h \in \polM_h. \label{eq:LBB} \end{equation} Finally, we assume that if $\polY$ is any of the continuous spaces and $\polY_h$ the corresponding subspace, then $h_1 < h_2$ implies $\polY_{h_2} \subset \polY_{h_1}$. Moreover, the family of spaces $\{ \polY_h\}_{h>0}$, is ``dense in the limit.'' In other words, for every $h>0$ there is a continuous operator $\calI_h : \polY \rightarrow \polY_h$ such that when $h\rightarrow 0$ \[ \| y - \calI_h y \|_{\polY} \rightarrow 0, \quad \forall y \in \polY. \] The space $\polW_h$ will be used to approximate the voltage; $\polQ_h$ the charge, phase field and chemical potential; and $\polX_h,\ \polM_h$ the velocity and pressure, respectively. 
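For instance, one possible choice, which we mention only for illustration since it is by no means the only one compatible with our assumptions, is to use continuous piecewise quadratic finite elements for $\polW_h$ and $\polQ_h$ and, for the velocity-pressure pair $(\polX_h,\polM_h)$, the Taylor--Hood element (continuous piecewise quadratic velocities with continuous piecewise linear pressures), which is known to satisfy the LBB condition \eqref{eq:LBB}; see \cite{GR86,BF91,MR2050138}.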
Finally, to account for the boundary conditions on the voltage, we denote \[ \polW_h(\bar V_0^{k+1}) = \polW_h + \bar V_0^{k+1}. \] \begin{rem}[Finite elements] The introduced spaces can be easily constructed using, for instance, finite elements, see \cite{GR86,BF91,MR2050138,Ci78} for details. The compatibility condition \eqref{eq:voltchargecomp} can be easily attained. For instance, one can require that the mesh is constructed in such a way that for all cells $\calK$ in the triangulation $\calT_h$, \begin{align*} \calK \cap \bar \Omega \neq \emptyset &\Leftrightarrow \calK \cap \left (\Omega^\star \setminus \bar \Omega \right)= \emptyset, \end{align*} and the polynomial degree of the space $\polQ_h$ is no less than that of $\polW_h$. Finally, we remark that the nestedness assumption is done merely for convenience. \end{rem} The fully discrete problem searches for \[ \left\{V_{h\dt}-\bar{V}_{0,\dt},q_{h\dt},\phi_{h\dt},\mu_{h\dt},\bu_{h\dt},p_{h\dt}\right\} \subset \polW_h \times \polQ_h^3 \times \polX_h \times \polM_h, \] that solve: \begin{description} \item[Initialization] For $n=0$, let $q_h^0$, $\phi_h^0$ and $\bu_h^0$ be suitable approximations of the initial charge, phase field and velocity, respectively. \item[Time Marching] For $0 \leq n \leq N-1$ we compute \[ (V_h^{n+1},q_h^{n+1},\phi_h^{n+1},\mu_h^{n+1},\bu_h^{n+1},p_h^{n+1}) \in \polW_h(\bar V_0^{n+1}) \times \polQ_h^3 \times \polX_h \times \polM_h, \] that solve: \begin{equation} \scl \vare^\star(\phi_h^{n+1}) \GRAD V_h^{n+1}, \GRAD W_h \scr_{\Omega^\star} = \scl q_h^{n+1}, W_h \scr, \quad \forall W_h \in \polW_h, \label{eq:potentialdiscrete} \end{equation} \begin{equation} \scl \frac{ \frakd q_h^{n+1} }\dt, r_h \scr - \scl q_h^n \bu_h^{n+1}, \GRAD r_h \scr + \scl K(\phi_h^n) \GRAD \left( \lambda q_h^{n+1} + V_h^{n+1} \right), \GRAD r_h \scr = 0, \quad \forall r_h \in \polQ_h, \label{eq:chargediscrete} \end{equation} \begin{equation} \scl \frac{\frakd \phi_h^{n+1}}\dt, \bar\phi_h \scr + \scl \bu_h^{n+1} \SCAL \GRAD \phi_h^n, \bar\phi_h \scr + \scl M(\phi_h^n) \GRAD \mu_h^{n+1}, \GRAD \bar\phi_h \scr =0, \quad \forall \bar\phi_h \in \polQ_h \label{eq:phasediscrete} \end{equation} \begin{multline} \scl \mu_h^{n+1}, \bar\mu_h \scr = \frac\gamma\delta \scl \calW'(\phi_h^n) + \calA \frakd\phi_h^{n+1}, \bar\mu_h \scr + \gamma\delta \scl \GRAD \phi_h^{n+1}, \GRAD \bar\mu_h \scr \\ - \frac12 \scl \calE(\phi_h^{n+1}, \phi_h^n) |\GRAD V_h^{n+1}|^2, \bar\mu_h \scr + \frac12 \scl \rho'(\phi_h^n) \bu_h^n \SCAL \bu_h^{n+1}, \bar\mu_h \scr \\ + \alpha \sbl \frac{\frakd \phi_h^{n+1}}\dt + \bu_{h\btau}^{n+1}\psi(\phi_h^n), \bar\mu_h \sbr + \gamma \sbl \Theta_{fs}'(\phi_h^n) + \calB\frakd\phi_h^{n+1}, \bar\mu_h \sbr \quad \forall \bar\mu_h \in \polQ_h, \label{eq:chemdiscrete} \end{multline} where we introduced \begin{equation} \calE(\varphi_1, \varphi_2) = \int_0^1 \vare'\left( s \varphi_1 + (1-s)\varphi_2 \right) \diff s, \label{eq:defofcalE} \end{equation} \begin{subequations} \label{eq:NSEdiscrete} \begin{multline} \label{eq:veldiscrete} \scl \frac{ \overline{\rho(\phi_h^{n+1})} \bu_h^{n+1} - \rho(\phi_h^n) \bu_h^n }\dt, \bw_h \scr + \scl \rho(\phi_h^n) \bu_h^n \ADV \bu_h^{n+1}, \bw_h \scr + \frac12 \scl \DIV(\rho(\phi_h^n)\bu_h^n) \bu_h^{n+1}, \bw_h \scr \\ + \scl \eta(\phi_h^n) \bS(\bu_h^{n+1}), \bS(\bw_h) \scr - \scl p_h^{n+1}, \DIV \bw_h \scr + \sbl \beta(\phi_h^n) \bu_{h\btau}^{n+1}, \bw_{h\btau} \sbr + \alpha \sbl \bu_{h\btau}^{n+1} \psi(\phi_h^n), \bw_{h\btau} \psi(\phi_h^n) \sbr \\ = \scl \mu_h^{n+1}\GRAD \phi_h^n, \bw_h \scr - 
\scl q_h^n\GRAD( \lambda q_h^{n+1} + V_h^{n+1}), \bw_h \scr + \frac12 \scl \rho'(\phi_h^n) \frac{\frakd \phi_h^{n+1}}\dt \bu_h^n, \bw_h \scr \\ - \alpha \sbl \frac{\frakd \phi_h^{n+1}}\dt, \bw_{h\btau} \psi(\phi_h^n) \sbr \quad \forall \bw_h \in \polX_h \end{multline}
\begin{equation} \label{eq:presdiscrete} \scl \bar{p}_h, \DIV \bu_h^{n+1} \scr = 0, \quad \forall \bar{p}_h \in \polM_h. \end{equation} \end{subequations} \end{description}
\begin{rem}[Stabilization parameters]
Notice that, in \eqref{eq:chemdiscrete}, we have introduced two stabilization parameters, namely $\calA$ and $\calB$. Their purpose is two-fold. First, they will allow us to treat the nonlinear terms explicitly while still being able to maintain stability of the scheme; see Proposition~\ref{prop:denergy} below. Second, when studying convergence of this problem, the presence of these terms will allow us to obtain further \emph{a priori} estimates on discrete solutions which, in turn, will help in passing to the limit; see Theorem~\ref{thm:semidiscreteexists}. We must mention that this way of writing the nonlinearities is related to the splitting of the energy into a convex and a concave part proposed in \cite{MR2519603}. See also \cite{MR2679727,MR2726065}.
\end{rem}
\begin{rem}[Derivative of the permittivity] \label{rem:calE}
Notice that the term $\calE$ defined in \eqref{eq:defofcalE} is a highly nonlinear function of its arguments (unless $\vare$ is of a very specific type). As the reader has seen in the derivation of the energy law (Theorem~\ref{thm:energy}), the treatment of the term involving the derivative of the permittivity is subtle. In the fully discrete setting this is additionally complicated by the fact that we need to deal with quantities at different time layers. The reason to write the derivative of the permittivity in this form is that
\[ \calE(\varphi_1, \varphi_2 ) = \begin{dcases} \frac{ \vare(\varphi_1) - \vare(\varphi_2) }{\varphi_1 - \varphi_2 }, & \varphi_1 \neq \varphi_2, \\ \vare'(\varphi_1), & \varphi_1 = \varphi_2, \end{dcases} \]
which will allow us to obtain the desired cancellations.
\end{rem}
The following subsections will be devoted to the analysis of problem \eqref{eq:potentialdiscrete}--\eqref{eq:NSEdiscrete}. For convenience, we define the discrete counterpart of the material derivative at the boundary
\[ \dot\phi_h^n := \frac{ \frakd \phi_h^n}\dt + \bu_{h\btau}^n \psi(\phi_h^{n-1}). \]
\subsection{A Priori Estimates and Existence}
\label{sub:stability}
Let us show that, if problem \eqref{eq:potentialdiscrete}--\eqref{eq:NSEdiscrete} has a solution, it satisfies a discrete energy inequality similar to the one stated in Theorem~\ref{thm:energy}. To do this, we first require the following formula, whose proof is straightforward.
\begin{lem}[Summation by parts] \label{prop:sum_by_parts} Let $\{ f^n \}^{m-1}_{n=0}$ and $\{ g^n \}^{m-1}_{n=0}$ be sequences and assume $f^{-1} = g^{-1} = 0$. Then we have
\begin{equation} \label{eq:sum_by_parts} \sum^{m-1}_{n=0} (\frakd g^n) f^n = f^{m-1} g^{m-1} - \sum^{m-2}_{n=0} g^n (\frakd f^{n+1}). \end{equation}
\end{lem}
\begin{prop}[Discrete stability] \label{prop:denergy} Assume that the stabilization parameters $\calA$ and $\calB$ are chosen so that
\begin{equation} \calA \geq \frac12 \sup_{\xi \in \Real} \calW''(\xi), \quad \calB \geq \frac12 \sup_{\xi \in \Real} \Theta_{fs}''(\xi).
\label{eq:stabparams} \end{equation} The solution to \eqref{eq:potentialdiscrete}--\eqref{eq:NSEdiscrete}, if it exists, satisfies the following a priori estimate \begin{multline} \label{eq:denergy} \| \bu_{h\dt} \|_{\ell^\infty(\bL^2)} + \| \frakd \bu_{h\dt} \|_{\frakh^{1/2}(\bL^2)} + \| \bu_{h\dt} \|_{\ell^2(\bV)} + \| q_{h\dt}\|_{\ell^\infty(L^2)} + \| \frakd q_{h\dt} \|_{\frakh^{1/2}(L^2)} \\ + \| \GRAD \phi_{h\dt}\|_{\ell^\infty(\bL^2)} + \| \GRAD \frakd \phi_{h\dt}\|_{\frakh^{1/2}(\bL^2)} + \| \calW(\phi_{h\dt})\|_{\ell^\infty(L^1)} + \|\GRAD V_{h\dt}\|_{\ell^\infty(\bL^2(\Omega^\star))} + \| \GRAD \frakd V_{h\dt}\|_{\frakh^{1/2}(\bL^2(\Omega^\star))} \\ + \| \dot \phi_{h\dt} \|_{\ell^2(L^2(\Gamma))} + \| \Theta_{fs}(\phi_{h\dt})\|_{\ell^\infty(L^1(\Gamma) )} + \| \GRAD \mu_{h\dt} \|_{\ell^2(\bL^2)} + \| \GRAD( \lambda q + V )_{h\dt} \|_{\ell^2(\bL^2)} \leq c, \end{multline} where we have set $\mu_h^0 \equiv 0$, $V_h^0 \equiv 0$ for convenience in writing \eqref{eq:denergy}. The constant $c$ depends on the constants $\gamma$, $\delta$, $\alpha$, the data of the problem $\bu_h^0$, $\phi_h^0$, $q_h^0$, $\bar V_{0,\dt}$ and $T$, but it does not depend on the discretization parameters $h$ or $\dt$, nor the solution of the problem. \end{prop} \begin{proof} We repeat the steps used to prove Theorem~\ref{thm:energy}, \ie set $\bw_h = 2\dt \bu_h^{n+1}$ in \eqref{eq:veldiscrete}, $\bar p_h = p_h^{n+1}$ in \eqref{eq:presdiscrete}, $\bar\phi_h = 2\dt \mu_h^{n+1}$ in \eqref{eq:phasediscrete}, $\bar\mu_h = -2\frakd \phi_h^{n+1}$ in \eqref{eq:chemdiscrete} and $r_h = 2\dt (\lambda q_h^{n+1}+V_h^{n+1})$ in \eqref{eq:chargediscrete}. To treat the time-derivative terms in the discrete momentum equation, we use the identity \[ 2 \bu_h^{n+1} \SCAL \left( \overline{\rho(\phi_h^{n+1})} \bu_h^{n+1} - \rho(\phi_h^n) \bu_h^n \right) = \rho(\phi_h^{n+1}) |\bu_h^{n+1}|^2 - \rho(\phi_h^n) |\bu_h^n|^2 + \rho(\phi_h^n) |\frakd \bu_h^{n+1}|^2; \] see \cite{Guermond20092834,MR2398778}. To obtain control on the explicit terms involving the derivatives of the Ginzburg-Landau potential $\calW$ and the surface energy density $\Theta_{fs}$, notice that, for instance, \[ \calW(\phi_h^{n+1})- \calW(\phi_h^n) = \calW'(\phi_h^n)\frakd \phi_h^{n+1} + \frac12 \calW''(\xi) (\frakd \phi_h^{n+1})^2, \] for some $\xi$. Choosing the stabilization constant according to \eqref{eq:stabparams} (\cf \cite{MR2679727,MR2726065,Shen_Xiaofeng_2010,SalgadoMCL}), we deduce that \[ \int_\Omega \left( \calW'(\phi_h^n) + \calA \frakd\phi_h^{n+1} \right)\frakd \phi_h^{n+1} \geq \int_\Omega \frakd \calW(\phi_h^{n+1}). \] Adding \eqref{eq:chargediscrete}--\eqref{eq:NSEdiscrete} yields, \begin{multline} \frakd \| \sigma(\phi_h^{n+1})\bu_h^{n+1}\|_{\bL^2}^2 + \| \sigma(\phi_h^n) \frakd \bu_h^{n+1}\|_{\bL^2}^2 + \lambda \left( \frakd \|q_h^{n+1}\|_{L^2}^2 + \| \frakd q_h^{n+1}\|_{L^2}^2 \right) + \gamma \delta \left( \frakd \|\GRAD \phi_h^{n+1}\|_{\bL^2}^2 \right. \\ + \left. \| \GRAD \frakd \phi_h^{n+1}\|_{\bL^2}^2 \right) + \frac{2\gamma}\delta \int_\Omega \frakd \calW(\phi_h^{n+1}) + 2\gamma \int_\Gamma \frakd \Theta_{fs}(\phi_h^{n+1}) + 2\dt \left[ \left\| \sqrt{\eta(\phi_h^n)}\bS(\bu_h^{n+1}) \right\|_{\bL^2}^2 \right. \\ + \left\| \sqrt{\beta(\phi_h^n)} \bu_{h\btau}^{n+1} \right\|_{\bL^2(\Gamma)}^2 + \left\| \sqrt{M(\phi_h^n)}\GRAD \mu_h^{n+1} \right\|_{\bL^2}^2 + \left\| \sqrt{K(\phi_h^n)}\GRAD \left( \lambda q_h^{n+1} + V_h^{n+1} \right) \right\|_{\bL^2}^2 \\ \left. 
+ \alpha \left\| \frac{ \frakd \phi_h^{n+1} }\dt + \bu_{h\btau}^{n+1}\psi(\phi_h^n) \right\|_{L^2(\Gamma)}^2 \right] + 2 \scl \frakd q_h^{n+1}, V_h^{n+1} \scr \leq \scl \calE(\phi_h^{n+1},\phi_h^n) |\GRAD V_h^{n+1}|^2, \frakd \phi_h^{n+1} \scr. \label{eq:dElaw-sansV} \end{multline} Take the difference of \eqref{eq:potentialdiscrete} at time-indices $n+1$ and $n$ to obtain \[ \scl \frakd \left( \vare^\star(\phi_h^{n+1}) \GRAD V_h^{n+1} \right), \GRAD W_h \scr_{\Omega^\star} = \scl \frakd q_h^{n+1}, W_h \scr, \] and set $W_h = 2(V_h^{n+1} - \bar V_0^{n+1})$. In view of \eqref{eq:toobasic} we have \begin{multline*} 2 \frakd \left( \vare^\star(\phi_h^{n+1}) \GRAD V_h^{n+1} \right) \SCAL \GRAD V_h^{n+1} = \frakd \left( \vare^\star(\phi_h^{n+1}) | \GRAD V_h^{n+1} |^2 \right) \\ + \vare^\star(\phi_h^n) | \GRAD \frakd V_h^{n+1} |^2 + \frakd\left( \vare^\star(\phi_h^{n+1}) \right) | \GRAD V_h^{n+1} |^2, \end{multline*} whence \begin{multline} \frakd \left\| \sqrt{\vare^\star(\phi_h^{n+1}) } \GRAD V_h^{n+1} \right\|_{\bL^2(\Omega^\star)}^2 + \left\| \sqrt{\vare^\star(\phi_h^n) } \GRAD \frakd V_h^{n+1} \right\|_{\bL^2(\Omega^\star)}^2 + \int_{\Omega^\star} \frakd \vare^\star(\phi_h^{n+1}) |\GRAD V_h^{n+1}|^2 \\ = 2 \scl \frakd q_h^{n+1}, V_h^{n+1} \scr - 2 \scl \frakd q_h^{n+1}, \bar V_0^{n+1} \scr + 2 \scl \frakd\left( \vare^\star(\phi_h^{n+1}) \GRAD V_h^{n+1} \right), \GRAD \bar V_0^{n+1} \scr_{\Omega^\star}. \label{eq:dEVolt} \end{multline} Add \eqref{eq:dElaw-sansV} and \eqref{eq:dEVolt}. Notice that, since the permittivity is assumed constant on $\Omega^\star \setminus \bar\Omega$, on the left hand side of the resulting inequality we have the following term \[ \int_\Omega \left( \frakd\vare(\phi_h^{n+1}) - \calE(\phi_h^{n+1},\phi_h^n) \frakd\phi_h^{n+1} \right) |\GRAD V_h^{n+1}|^2 = 0, \] where we used the definition of $\calE$, see \eqref{eq:defofcalE} and Remark~\ref{rem:calE}. Therefore, we obtain \begin{multline} \frakd \| \sigma(\phi_h^{n+1})\bu_h^{n+1}\|_{\bL^2}^2 + \| \sigma(\phi_h^n) \frakd \bu_h^{n+1}\|_{\bL^2}^2 + \lambda \left( \frakd \|q_h^{n+1}\|_{L^2}^2 + \frac12 \| \frakd q_h^{n+1}\|_{L^2}^2 \right) \\ + \gamma \delta \left( \frakd \|\GRAD \phi_h^{n+1}\|_{\bL^2}^2 + \| \GRAD \frakd \phi_h^{n+1}\|_{\bL^2}^2 \right) + \frac{2\gamma}\delta \int_\Omega \frakd \calW(\phi_h^{n+1}) \\ + \frakd \left\| \sqrt{\vare^\star(\phi_h^{n+1}) } \GRAD V_h^{n+1} \right\|_{\bL^2(\Omega^\star)}^2 + \left\| \sqrt{\vare^\star(\phi_h^n) } \GRAD \frakd V_h^{n+1} \right\|_{\bL^2(\Omega^\star)}^2 + 2\gamma \int_\Gamma \frakd \Theta_{fs}(\phi_h^{n+1}) \\ + 2\dt \left[ \left\| \sqrt{\eta(\phi_h^n)}\bS(\bu_h^{n+1}) \right\|_{\bL^2}^2 + \| \sqrt{\beta(\phi_h^n)} \bu_{h\btau}^{n+1}\|_{\bL^2(\Gamma)}^2 \right. \\ \left. + \|\sqrt{M(\phi_h^n)}\GRAD \mu_h^{n+1}\|_{\bL^2}^2 + \left\| \sqrt{K(\phi_h^n)}\GRAD \left( \lambda q_h^{n+1} + V_h^{n+1} \right) \right\|_{\bL^2}^2 + \alpha \left\| \frac{ \frakd \phi_h^{n+1} }\dt + \bu_{h\btau}^{n+1}\psi(\phi_h^n) \right\|_{L^2(\Gamma)}^2 \right] \\ \leq -2 \scl \frakd q_h^{n+1}, \bar V_0^{n+1} \scr + 2 \scl \frakd \left( \vare^\star(\phi_h^{n+1}) \GRAD V_h^{n+1} \right), \GRAD \bar V_0^{n+1} \scr_{\Omega^\star}. \label{eq:dElawfinalM1} \end{multline} Summing \eqref{eq:dElawfinalM1} for $n=0,...,m-1$, using summation by parts \eqref{eq:sum_by_parts} (set $\mu_h^0 \equiv 0$, $V_h^0 \equiv 0$), applying the Cauchy-Schwarz and weighted Young's inequality, we obtain the result. 
\end{proof}
\begin{rem}[Compatibility]
Notice that condition \eqref{eq:voltchargecomp} is needed to obtain the stability estimate; otherwise, $2\dt (\lambda q_h^{n+1}+V_h^{n+1})$ would not be an admissible test function for \eqref{eq:chargediscrete}.
\end{rem}
The \emph{a priori} estimate \eqref{eq:denergy} allows us to conclude that, for all $h>0$ and $\dt>0$, problem \eqref{eq:potentialdiscrete}--\eqref{eq:NSEdiscrete} has a solution.
\begin{thm}[Existence] \label{cor:d-existence} Assume that the discrete spaces satisfy assumptions \eqref{eq:voltchargecomp} and \eqref{eq:LBB}, and that the stabilization parameters $\calA,\ \calB$ are chosen as in Proposition~\ref{prop:denergy}. Then, for all $h>0$ and $\dt>0$, problem \eqref{eq:potentialdiscrete}--\eqref{eq:NSEdiscrete} has a solution. Moreover, any solution satisfies estimate \eqref{eq:denergy}.
\end{thm}
\begin{proof}
The idea of the proof is to use the ``\emph{method of a priori estimates}'' at each time step. In other words, for each time step we define a map $\calL^{n+1}$ in such a way that a fixed point of $\calL^{n+1}$, if it exists, is a solution of our problem. Then, with the aid of the previously shown a priori estimates, we show that $\calL^{n+1}$ does indeed have a fixed point. We proceed by induction on the discrete time and assume that we have shown that the problem has a solution up to step $n$. For each $n = 0,...,N-1$, we define
\begin{align*} \calL^{n+1}: \polW_h(\bar V_0^{n+1}) \times \polQ_h^3 \times \polX_h \times \polM_h &\rightarrow \polW_h(\bar V_0^{n+1}) \times \polQ_h^3 \times \polX_h \times \polM_h, \\ (V_h,q_h,\phi_h,\mu_h,\bu_h,p_h) &\overset{\calL^{n+1}}{\longmapsto} ( \hat V_h, \hat q_h,\hat \phi_h,\hat \mu_h,\hat \bu_h, \hat p_h), \end{align*}
where the quantities with hats solve
\begin{equation} \scl \vare^\star(\phi_h) \GRAD \hat V_h, \GRAD W_h \scr_{\Omega^\star} = \scl \hat q_h, W_h \scr, \quad \forall W_h \in \polW_h, \label{eq:fpvoltage} \end{equation}
\begin{equation} \scl \frac{ \hat q_h - q_h^n }\dt, r_h \scr - \scl q_h \bu_h, \GRAD r_h \scr + \scl K(\phi_h^n) \GRAD \left( \lambda \hat q_h + \hat V_h \right), \GRAD r_h \scr = 0, \quad \forall r_h \in \polQ_h, \label{eq:fpcharge} \end{equation}
\begin{equation} \scl \frac{\hat \phi_h - \phi_h^n }\dt, \bar\phi_h \scr + \scl \bu_h \SCAL \GRAD \phi_h^n, \bar\phi_h \scr + \scl M(\phi_h^n) \GRAD \hat \mu_h, \GRAD \bar\phi_h \scr =0, \quad \forall \bar\phi_h \in \polQ_h, \label{eq:fpphase} \end{equation}
\begin{multline} \scl \hat \mu_h, \bar\mu_h \scr = \frac\gamma\delta \scl \calW'(\phi_h^n) + \calA \left(\phi_h - \phi_h^n \right), \bar\mu_h \scr + \gamma\delta \scl \GRAD \hat \phi_h, \GRAD \bar\mu_h \scr + \frac12 \scl \rho'(\phi_h^n) \bu_h^n \SCAL \bu_h, \bar\mu_h \scr \\ - \frac12 \scl \calE(\phi_h,\phi_h^n) \GRAD V_h \SCAL \GRAD \hat V_h, \bar\mu_h \scr + \alpha \sbl \frac{\hat \phi_h - \phi_h^n }\dt + \bu_{h\btau} \psi(\phi_h^n), \bar\mu_h \sbr \\ + \gamma \sbl \Theta_{fs}'(\phi_h^n) + \calB \left( \phi_h - \phi_h^n \right), \bar\mu_h \sbr \quad \forall \bar\mu_h \in \polQ_h, \label{eq:fpchem} \end{multline}
\begin{multline} \scl \frac{ \tfrac12 \left( \rho(\phi_h) + \rho(\phi_h^n) \right) \hat \bu_h - \rho(\phi_h^n) \bu_h^n }\dt, \bw_h \scr + \scl \rho(\phi_h^n) \bu_h^n \ADV \hat \bu_h, \bw_h \scr + \frac12 \scl \DIV(\rho(\phi_h^n)\bu_h^n) \hat \bu_h, \bw_h \scr \\ + \scl \eta(\phi_h^n) \bS(\hat \bu_h ), \bS(\bw_h) \scr - \scl \hat p_h, \DIV \bw_h \scr + \sbl \beta(\phi_h^n) \hat \bu_{h\btau}, \bw_{h\btau} \sbr + \alpha \sbl \bu_{h\btau} \psi(\phi_h^n),
\bw_{h\btau} \psi(\phi_h^n) \sbr \\ = \scl \mu_h \GRAD \phi_h^n, \bw_h \scr - \scl q_h \GRAD( \lambda q_h + V_h), \bw_h \scr + \frac12 \scl \rho'(\phi_h^n) \frac{\phi_h - \phi_h^n }\dt \bu_h^n, \bw_h \scr \\ - \alpha \sbl \frac{ \phi_h - \phi_h^n }\dt, \bw_{h\btau} \psi(\phi_h^n) \sbr \quad \forall \bw_h \in \polX_h, \label{eq:fpmom} \end{multline}
\begin{equation} \scl \bar{p}_h, \DIV \hat \bu_h \scr = 0, \quad \forall \bar{p}_h \in \polM_h. \label{eq:fppres} \end{equation}
Notice that a fixed point of $\calL^{n+1}$ is precisely a solution of the discrete problem \eqref{eq:potentialdiscrete}--\eqref{eq:NSEdiscrete}. To show the existence of a fixed point we must prove that:
\begin{itemize}
\item The operator $\calL^{n+1}$ is well defined.
\item If there is a $\calX=(V_h,q_h,\phi_h,\mu_h,\bu_h,p_h)$ for which $\calX = \omega \calL^{n+1}\calX$, for some $\omega \in [0,1]$, then
\begin{equation} \| \calX \| \leq M, \label{eq:omegafpbdd} \end{equation}
where $M>0$ does not depend on $\calX$ or $\omega$.
\end{itemize}
Then, an application of the Leray-Schauder theorem \cite{MR1625845,MR816732} will allow us to conclude. Moreover, since a fixed point of $\calL^{n+1}$ is precisely a solution of our problem, Proposition~\ref{prop:denergy} gives us the desired stability estimate for this solution. Let us then proceed to show these two points:
\noindent\underline{The operator $\calL^{n+1}$ is well defined:} Clearly, for any given $\phi_h$ and $q_h$, the system \eqref{eq:fpvoltage}--\eqref{eq:fpcharge} is positive definite and, thus, there are unique $\hat V_h$ and $\hat q_h$. Having computed $\hat V_h$ and $\hat q_h$, we then notice that \eqref{eq:fpmom} and \eqref{eq:fppres} are nothing but a discrete version of a generalized Stokes problem. Assumption \eqref{eq:LBB} then shows that there is a unique pair $(\hat\bu_h,\hat p_h)$. To conclude, use $(\hat V_h, \hat q_h, \hat\bu_h, \hat p_h)$ as data in \eqref{eq:fpphase} and \eqref{eq:fpchem}. The fact that this linear system has a unique solution can then be seen, for instance, by noticing that the system matrix is positive definite.
\noindent\underline{Bounds on the operator:} Notice, first of all, that one of the assumptions of the Leray-Schauder theorem is the compactness of the operator for which we are looking for a fixed point. However, this is immediate since the spaces we are working with are finite dimensional. Let us now show the bounds, noticing that, at this stage, we do not need to obtain bounds that are independent of $h$, $\dt$ or the solution at the previous step. This will be a consequence of Proposition~\ref{prop:denergy}. Let us then assume that for some $\calX=(V_h,q_h,\phi_h,\mu_h,\bu_h,p_h)$ we have $\calX = \omega \calL^{n+1} \calX$. Notice that if $\omega=0$ then $\calX=0$ and the bound is trivial. If $\omega \in (0,1]$, the existence of such an element amounts to replacing, in \eqref{eq:fpvoltage}--\eqref{eq:fppres}, $(\hat V_h, \hat q_h,\hat \phi_h, \hat \mu_h,\hat \bu_h,\hat p_h)$ by $\omega^{-1}(V_h,q_h,\phi_h,\mu_h,\bu_h,p_h)$. Having done that, set $\bw_h = 2\dt \bu_h$ in \eqref{eq:fpmom}, $r_h = 2\dt(\lambda q_h + V_h)$ in \eqref{eq:fpcharge}, $\bar \phi_h = 2\dt \mu_h$ in \eqref{eq:fpphase} and $\bar \mu_h = 2(\phi_h - \phi_h^n)$ in \eqref{eq:fpchem}. Next we observe that, by induction, the equation has a solution at the previous time step; therefore, there are functions that satisfy \eqref{eq:potentialdiscrete} for time $n$. Multiply this identity by $\omega$ and subtract it from \eqref{eq:fpvoltage}.
Arguing as in the proof of Proposition~\ref{prop:denergy}, we see that condition \eqref{eq:stabparams} implies that to obtain the desired bound we must prove estimates for the terms \[ \scl \rho'(\phi_h^n) \bu_h^n \bu_h, \phi_h^n \scr, \quad \scl \mu_h, \phi_h^n \scr, \quad \scl q_h^n, V_h \scr, \] which are, in a sense, the price we are paying for not being fully implicit. All these terms are linear in $\calX$ and, thus, can be easily bounded by taking into account that we are in finite dimensions and that the estimates need not be uniform in $h$ and $\dt$. \end{proof} \section{Numerical Experiments} \label{sec:NumExp} In this section we present a series of numerical examples aimed at showing the capabilities of the model we have proposed and analyzed. The implementation of all the numerical experiments has been carried out with the help of the \texttt{deal.II} library \cite{BHK07,BHK} and the details will be presented in \cite{SalgadodealII}. Let us briefly describe the discretization technique. Its starting point is problem \eqref{eq:potentialdiscrete}--\eqref{eq:NSEdiscrete} which, being nonlinear, we linearize by time-lagging the variables. Moreover, for the Cahn Hilliard Navier Stokes part we employ the fractional time-stepping technique developed in \cite{SalgadoMCL}. In other words, at each time step we know \[ (V_h^n,q_h^n,\phi_h^n,\mu_h^n,\bu_h^n,p_h^n,\xi_h^n) \in \polW_h(\bar V_0^n) \times \polQ_h^3 \times \polX_h \times \polM_h^2, \] with $\xi_h^0 := 0$ and, to advance in time, solve the following sequence of discrete and linear problems: \begin{itemize} \item Step 1: \begin{description} \item[Potential] Find $V_h^{n+1} \in \polW_h(\bar V_0^{n+1})$ that solves: \[ \scl \vare^\star(\phi_h^n) \GRAD V_h^{n+1}, \GRAD W_h \scr_{\Omega^\star} = \scl q_h^n, W_h \scr, \quad \forall W_h \in \polW_h, \] \item[Charge] Find $q_h^{n+1} \in \polQ_h$ that solves: \[ \scl \frac{ \frakd q_h^{n+1} }\dt, r_h \scr - \scl q_h^n \bu_h^n, \GRAD r_h \scr + \scl K(\phi_h^n) \GRAD \left( \lambda q_h^{n+1} + V_h^{n+1} \right), \GRAD r_h \scr = 0, \quad \forall r_h \in \polQ_h, \] \end{description} \item Step 2: \begin{description} \item[Phase Field and Chemical Potential] Find $\phi_h^{n+1},\; \mu_h^{n+1} \in \polQ_h$ that solve: \[ \scl \frac{\frakd \phi_h^{n+1}}\dt, \bar\phi_h \scr + \scl \bu_h^n \SCAL \GRAD \phi_h^n, \bar\phi_h \scr + \scl M(\phi_h^n) \GRAD \mu_h^{n+1}, \GRAD \bar\phi_h \scr =0, \quad \forall \bar\phi_h \in \polQ_h, \] \begin{multline*} \scl \mu_h^{n+1}, \bar\mu_h \scr = \frac\gamma\delta \scl \calW'(\phi_h^n) + \calA \frakd\phi_h^{n+1}, \bar\mu_h \scr + \gamma\delta \scl \GRAD \phi_h^{n+1}, \GRAD \bar\mu_h \scr \\ - \frac12 \scl \vare'(\phi_h^n) |\GRAD V_h^{n+1}|^2, \bar\mu_h \scr + \frac12 \scl \rho'(\phi_h^n) |\bu_h^n|^2, \bar\mu_h \scr \\ + \alpha \sbl \frac{\frakd \phi_h^{n+1}}\dt + \bu_{h\btau}^n \psi(\phi_h^n), \bar\mu_h \sbr + \gamma \sbl \Theta_{fs}'(\phi_h^n) + \calB\frakd\phi_h^{n+1}, \bar\mu_h \sbr \quad \forall \bar\mu_h \in \polQ_h, \end{multline*} \end{description} \item Step 3: \begin{description} \item[Velocity] Define $p_h^\sharp = p_h^n + \xi_h^n$, then find $\bu_h^{n+1} \in \polX_h$ such that \begin{multline*} \scl \frac{ \overline{\rho(\phi_h^{n+1})} \bu_h^{n+1} - \rho(\phi_h^n) \bu_h^n }\dt, \bw_h \scr + \scl \rho(\phi_h^n) \bu_h^n \ADV \bu_h^{n+1}, \bw_h \scr + \frac12 \scl \DIV(\rho(\phi_h^n)\bu_h^n) \bu_h^{n+1}, \bw_h \scr \\ + \scl \eta(\phi_h^n) \bS(\bu_h^{n+1}), \bS(\bw_h) \scr - \scl p_h^\sharp, \DIV \bw_h \scr + \sbl \beta(\phi_h^n) \bu_{h\btau}^{n+1}, 
\bw_{h\btau} \sbr + \alpha \sbl \bu_{h\btau}^{n+1} \psi(\phi_h^n), \bw_{h\btau} \psi(\phi_h^n) \sbr \\ = \scl \mu_h^{n+1}\GRAD \phi_h^n, \bw_h \scr - \scl q_h^n\GRAD( \lambda q_h^{n+1} + V_h^{n+1}), \bw_h \scr + \frac12 \scl \rho'(\phi_h^n) \frac{\frakd \phi_h^{n+1}}\dt \bu_h^n, \bw_h \scr \\ - \alpha \sbl \frac{\frakd \phi_h^{n+1}}\dt, \bw_{h\btau} \psi(\phi_h^n) \sbr \quad \forall \bw_h \in \polX_h. \end{multline*} \end{description} \item Step 4: \begin{description} \item[Penalization and Pressure] Finally, $\xi_h^{n+1}$ and $p_h^{n+1}$ are computed via \[ \scl \GRAD \xi_h^{n+1}, \GRAD \bar p_h \scr = - \frac\varrho\dt \scl \DIV \bu_h^{n+1}, \bar p_h\scr, \quad \forall \bar p_h \in \polM_h, \] where $\varrho := \min\{\rho_1, \rho_2 \}$ and \[ p_h^{n+1} = p_h^n + \xi_h^{n+1}. \] \end{description} \end{itemize} \begin{rem}[CFL] A variant of the subscheme used to solve for the Cahn Hilliard Navier Stokes part of our problem was proposed in \cite{SalgadoMCL} and shown to be unconditionally stable. In that reference, however, the equations for the phase field and velocity are coupled via terms of the form $\scl \bu_h^{n+1}\SCAL \GRAD \phi_h^n, \bar \phi_h \scr$. If we adopt this approach, coupling steps 2 and 3, and assume that the permittivity does not depend on the phase, it seems possible to show that this variant of the scheme described above is stable under \[ \dt \leq c \delta h. \] On the other hand, if we work with full time-lagging of the variables, then it is possible to show that the scheme is stable under the quite restrictive assumption that \[ \dt \leq c \delta^2 h^2. \] To assess how severe these conditions are, one must remember that, in practice, it is necessary to set $h = \calO(\delta)$. Nevertheless, computations show that these conditions are suboptimal and that a standard CFL condition suffices to guarantee stability of the scheme. \end{rem} \subsection{Movement of a Droplet} \label{sub:drop_move} The first example aims at showing that, indeed, electric actuation can be used to manipulate a two-fluid system. The fluid occupies the domain $\Omega = (-5,5)\times(0,1)$ and above and below there are dielectric plates of thickness $1/2$, so that $\Omega^\star = (-5,5)\times(-1/2,3/2)$. A droplet of a heavier fluid shaped like half a circle of radius $1/2$ is centered at the origin and initially at rest. To the right half of the lower plate we apply a voltage, so that \[ V_0 = V_{00} \chi_D, \qquad D = \left\{ (x,y)\in \Real^2: x \geq 0,\ y = -\frac12 \right\}. \] The density ratio between the two fluids is $\rho_1/\rho_2 = 100$, the viscosity ratio is $\eta_1/\eta_2 = 10$ and the surface tension coefficient is $\gamma = 50$. The conductivity ratio is $K_1/K_2 = 10$, and the permittivity ratios are $\vare_1/\vare_2 = 5$ and $\vare_D/\vare_2 = 100$. We have set the mobility parameter to be constant, $M = 10^{-2}$, and $\alpha = 10^{-3}$. The slip coefficient is taken constant, $\beta = 10$, and the equilibrium contact angle between the two fluids is $\theta_s = 120^\circ$. The interface thickness is $\delta = 5\cdot10^{-2}$ and the regularization parameter is $\lambda = 0.5$. The applied voltage is $V_{00} = 20$. The time-step is constant, $\dt = 10^{-3}$. The initial mesh consists of $5364$ cells with two different levels of refinement. Away from the two-fluid interface the local mesh size is about $0.125$ and, near the interface, the local mesh size is about $0.03125$. 
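With these values we can revisit, purely for illustration, the conditions of the CFL remark above: taking $c=1$, the milder condition reads \[ \dt \leq \delta h \approx 5\cdot10^{-2} \cdot 3.125\cdot10^{-2} \approx 1.6\cdot10^{-3}, \] which the chosen time-step $\dt = 10^{-3}$ satisfies, whereas the condition $\dt \leq \delta^2 h^2 \approx 2.4\cdot10^{-6}$ would be violated by roughly three orders of magnitude. The fact that the computation reported below is nevertheless stable is consistent with the claim that these sufficient conditions are pessimistic. 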
As required in \texttt{deal.II}, the degree of nonconformity of the mesh is restricted to 1, \ie there is only one hanging node per face. Every $10$ time-steps the mesh is coarsened and refined using, as a heuristic refinement indicator, the $\bL^2$-norm of the gradient of the phase field variable $\phi$. The number of coarsened and refined cells is chosen so as to keep the total number of cells approximately constant. The discrete spaces are constructed with finite elements with equal polynomial degree in each coordinate direction and \[ \deg \polW_h = 1, \quad \deg\polQ_h = 2, \quad \deg\polX_h = 2, \quad \deg\polM_h = 1, \] that is, the lowest order quadrilateral Taylor-Hood element. No stabilization is added to the momentum conservation equation, nor to the convection-diffusion equation used to define the charge density. \begin{figure} \caption{Movement of a droplet under the action of an external voltage. The material parameters are $\rho_1/\rho_2 = 100$, $\eta_1/\eta_2 = 10$, $\gamma = 50$, $K_1/K_2 = 10$, $\vare_1/\vare_2 = 5$, $\vare_D/\vare_2 = 100$, $M = 10^{-2}$, $\alpha = 10^{-3}$, $\beta = 10$, $\theta_s = 120^\circ$, $\delta = 5\cdot10^{-2}$, $\lambda = 0.5$ and $V_{00} = 20$. The interface is shown at times $0$, $0.2$, $0.4$, $0.6$, $0.8$, $1.0$, $1.2$ and $1.4$. Colored lines are used to represent the iso-values of the voltage. The black dotted line is the position of the interface at the beginning of the computations.} \label{fig:drop_move} \end{figure} Figure~\ref{fig:drop_move} shows the evolution of the interface. Notice that, other than adapting the mesh so as to resolve the interfacial layer, no other special techniques are applied to obtain these results. As expected, the applied voltage creates a local variation in the value of the surface tension between the two fluids, which in turn generates a forcing term that drives the droplet. \subsection{Splitting of a Droplet} \label{sub:drop_split} One of the main arguments in favor of diffuse interface models is their ability to handle topological changes automatically. The purpose of this numerical simulation is to illustrate this by showing that, using electrowetting, one can split a droplet and, thus, control fluids. Initially a drop of heavier material occupies \[ S_{\rho_2} = \left\{ (x,y) \in \Real^2:\ \frac{x^2}{2.5^2} + \frac{y^2}{0.5^2} \leq 1 \right\}. \] The material parameters are the same as in \S\ref{sub:drop_move}. To be able to split the droplet, the externally applied voltage is now supported on \[ D = \left\{ (x,y)\in \Real^2: |x| \geq \frac32,\ y = -\frac12 \right\}. \] \begin{figure} \caption{Splitting of a droplet under the action of an external voltage. The material parameters are $\rho_1/\rho_2 = 100$, $\eta_1/\eta_2 = 10$, $\gamma = 50$, $K_1/K_2 = 10$, $\vare_1/\vare_2 = 5$, $\vare_D/\vare_2 = 100$, $M = 10^{-2}$, $\alpha = 10^{-3}$, $\beta = 10$, $\theta_s = 120^\circ$, $\delta = 5\cdot10^{-2}$, $\lambda = 0.5$ and $V_{00} = 20$. The interface is shown at times $0$, $0.25$, $0.5$, $0.75$, $1.0$, $1.25$, $1.5$, $1.75$, $2.0$, $2.25$, $2.5$, $2.75$, $3.0$, $3.02$, $3.05$, $3.10$, $3.25$ and $3.5$. Colored lines are used to represent the iso-values of the voltage. The black dotted line is the position of the interface at the beginning of the computations.} \label{fig:drop_split} \end{figure} Figure~\ref{fig:drop_split} shows the evolution of the system. 
Notice that, other than adapting the mesh so as to resolve the interfacial layer, nothing else is done, and the topological change is handled without the need to detect it or to adapt the time-step. \subsection{Merging of Two Droplets} \label{sub:drop_merge} To conclude, let us show an example illustrating the merging of two droplets of the same material via electric actuation. The geometrical configuration is the same as in \S\ref{sub:drop_split}. In this case, however, there are initially two droplets of heavier material, each one of radius $0.5$ and centered at $(-0.7,0)$ and $(0.7,0)$, respectively. The material parameters are the same as in \S\ref{sub:drop_split}, except for the interfacial thickness, which is set to $\delta=10^{-2}$. We apply an external voltage supported on \[ D = \left\{ (x,y) \in \Real^2: \ |x| \leq \frac12,\ y = -\frac12 \right\}. \] To capture the fine interfacial dynamics involved in merging, we set the initial level of refinement to $4$, with $3$ extra refinements near the interface, so that the number of cells is $48,696$ with a local mesh size away from the interface of about $0.02875$ and near the interface of about $6\cdot10^{-3}$. This amounts to a total of $147,249$ degrees of freedom. The time-step, again, is set to $\dt = 10^{-3}$. \begin{figure} \caption{Merging of two droplets under the action of an externally applied voltage. The material parameters are $\rho_1/\rho_2 = 100$, $\eta_1/\eta_2 = 10$, $\gamma = 50$, $K_1/K_2 = 10$, $\vare_1/\vare_2 = 5$, $\vare_D/\vare_2 = 100$, $M = 10^{-2}$, $\alpha = 10^{-3}$, $\beta = 10$, $\theta_s = 120^\circ$, $\delta = 10^{-2}$, $\lambda = 0.5$ and $V_{00} = 20$. The interface is shown at times $0$, $1$, $2$, $3$, $3.3$, $3.4$, $3.5$, $4$, $5$ and $5.5$. Colored lines are used to represent the iso-values of the voltage. The black dotted line is the position of the interface at the beginning of the computations.} \label{fig:drop_merge} \end{figure} Figure~\ref{fig:drop_merge} shows the evolution of the two droplets under the action of the voltage. Again, other than properly resolving the interfacial layer, we did not need to do anything special to handle the topological change. \section{The Semi-Discrete Problem} \label{sec:semidiscrete} In \S\ref{sub:stability} we showed that the fully discrete problem always has a solution and that, moreover, this solution satisfies certain \emph{a priori} estimates. Our purpose here is to pass to the limit as $h \rightarrow 0$ so as to show that a semi-discrete (that is, continuous in space and discrete in time) version of our electrowetting model always has a solution. Let us begin by defining the semi-discrete problem. Given initial data and an external voltage, we find: \[ \left\{V_{\dt}-\bar{V}_{0,\dt},q_{\dt},\phi_{\dt},\mu_{\dt},\bu_{\dt},p_{\dt}\right\} \subset \Hunstar \times \Hun^3 \times \bV \times \tildeLdeux \] that solve: \begin{description} \item[Initialization] For $n=0$, let $q^0$, $\phi^0$ and $\bu^0$ equal the initial charge, phase field and velocity, respectively. 
\item[Time Marching] For $0 \leq n \leq N-1$ we compute \[ (V^{n+1},q^{n+1},\phi^{n+1},\mu^{n+1},\bu^{n+1},p^{n+1}) \in \Hunstar + \bar V_0^{n+1} \times \Hun^3 \times \bV \times \tildeLdeux, \] that solve: \begin{equation} \scl \vare^\star(\phi^{n+1}) \GRAD V^{n+1}, \GRAD W \scr_{\Omega^\star} = \scl q^{n+1}, W \scr, \quad \forall W \in H^1_0(\Omega^\star), \label{eq:potentialsemidiscrete} \end{equation} \begin{equation} \scl \frac{ \frakd q^{n+1} }\dt, r \scr - \scl q^n \bu^{n+1}, \GRAD r \scr + \scl K(\phi^n) \GRAD \left( \lambda q^{n+1} + V^{n+1} \right), \GRAD r \scr = 0, \quad \forall r \in \Hun \label{eq:chargesemidiscrete} \end{equation} \begin{equation} \scl \frac{\frakd \phi^{n+1}}\dt, \bar\phi \scr + \scl \bu^{n+1} \SCAL \GRAD \phi^n, \bar\phi \scr + \scl M(\phi^n) \GRAD \mu^{n+1}, \GRAD \bar\phi \scr =0, \quad \forall \bar\phi \in \Hun \label{eq:phasesemidiscrete} \end{equation} \begin{multline} \scl \mu^{n+1}, \bar\mu \scr = \frac\gamma\delta \scl \calW'(\phi^n) + \calA \frakd\phi^{n+1}, \bar\mu \scr + \gamma\delta \scl \GRAD \phi^{n+1}, \GRAD \bar\mu \scr + \frac12 \scl \rho'(\phi^n) \bu^n \SCAL \bu^{n+1}, \bar\mu \scr \\ - \frac12 \scl \calE(\phi^{n+1},\phi^n)|\GRAD V^{n+1}|^2, \bar\mu \scr + \alpha \sbl \frac{\frakd \phi^{n+1}}\dt + \bu_\btau^{n+1}\psi(\phi^n), \bar\mu \sbr \\ + \gamma \sbl \Theta_{fs}'(\phi^n) + \calB\frakd\phi^{n+1}, \bar\mu \sbr \quad \forall \bar\mu \in \Hun \cap L^\infty(\Omega), \label{eq:chemsemidiscrete} \end{multline} \begin{subequations} \label{eq:NSEsemidiscrete} \begin{multline} \label{eq:velsemidiscrete} \scl \frac{ \overline{\rho(\phi^{n+1})} \bu^{n+1} - \rho(\phi^n) \bu^n }\dt, \bw \scr + \scl \rho(\phi^n) \bu^n \ADV \bu^{n+1} + \frac12 \DIV(\rho(\phi^n)\bu^n) \bu^{n+1}, \bw \scr \\ + \scl \eta(\phi^n) \bS(\bu^{n+1}), \bS(\bw) \scr - \scl p^{n+1}, \DIV \bw \scr + \sbl \beta(\phi^n) \bu_{\btau}^{n+1}, \bw_{\btau} \sbr + \alpha \sbl \bu_{\btau}^{n+1} \psi(\phi^n), \bw_{\btau} \psi(\phi^n) \sbr \\ = \scl \mu^{n+1}\GRAD \phi^n, \bw \scr - \scl q^n \GRAD( \lambda q^{n+1} + V^{n+1}), \bw \scr + \frac12 \scl \rho'(\phi^n) \frac{\frakd \phi^{n+1}}\dt \bu^n, \bw \scr \\ - \alpha \sbl \frac{\frakd \phi^{n+1}}\dt, \bw_{\btau} \psi(\phi^n) \sbr, \quad \forall \bw \in \bV, \end{multline} \begin{equation} \label{eq:pressemidiscrete} \scl \bar{p}, \DIV \bu^{n+1} \scr = 0, \quad \forall \bar{p}\in \tildeLdeux. \end{equation} \end{subequations} \end{description} \begin{rem}[Permittivity] Notice that, in our definition of solution, the test function for equation \eqref{eq:chemsemidiscrete} needs to be bounded. This is necessary to make sense of the term \[ \scl \calE(\phi^{n+1},\phi^n) |\GRAD V^{n+1}|^2, \bar\mu \scr, \] since $\calE$ is bounded by construction and $V^{n+1} \in \Hun$. The authors of \cite{MR2511642} used a similar choice of test functions and showed, using different techniques, existence of a solution for their model of electrowetting in the case when the permittivity is phase-dependent. \end{rem} Since the solution to the fully discrete problem \eqref{eq:potentialdiscrete}--\eqref{eq:NSEdiscrete} exists for all values of $h>0$ and satisfies uniform bounds, one expects the sequence of discrete solutions to converge, in some topology, and that the limit is a solution of problem \eqref{eq:potentialsemidiscrete}--\eqref{eq:NSEsemidiscrete}. The following result shows that this is indeed the case. 
\begin{thm}[Existence and stability] \label{thm:semidiscreteexists} For all $\dt>0$, problem \eqref{eq:potentialsemidiscrete}--\eqref{eq:NSEsemidiscrete} has a solution. Moreover, this solution satisfies an energy estimate, analogous to \eqref{eq:denergy}, where the constant $c$ might depend on $\dt$ and the data of the problem, but not on the solution. \end{thm} \begin{proof} Theorem~\ref{cor:d-existence} shows the existence, for every $h>0$, of a solution to the fully discrete problem \eqref{eq:potentialdiscrete}--\eqref{eq:NSEdiscrete} which, moreover, satisfies estimate \eqref{eq:denergy}. This estimate implies that, for every $n$, as $h\rightarrow0$: \begin{itemize} \item $\calW(\phi_h^n)$ remains bounded in $L^1(\Omega)$. Since the modified Ginzburg-Landau potential has quadratic growth, this implies that there is a subsequence, labeled again $\phi_h^n$, that converges weakly in $\Ldeux$. \item $\GRAD \phi_h^n$ remains bounded in $\bL^2$. This, together with the previous observation, gives us a subsequence that converges weakly in $\Hun$ and strongly in $\Ldeux$. \item The strong $L^2$-convergence of $\phi_h^n$ implies that, up to a further subsequence, the convergence is almost everywhere and, since all the material functions are assumed continuous, the coefficients also converge almost everywhere. \item There is a subsequence of $\bu_h^{n+1}$ that converges weakly in $\bV$ and strongly in $\Ldeuxd$. \item A subsequence of $V_h^n-\bar V_0^n$ converges weakly in $\Hunstar$ and hence strongly in $L^2(\Omega^\star)$. \item There is a subsequence of $q_h^{n+1}$ that converges weakly in $\Ldeux$. Moreover, we know that $K(\phi_h^n)\GRAD(\lambda q_h^{n+1} + V_h^{n+1})$ converges weakly. By the a.e.~convergence of the coefficients and the $L^2$-weak convergence of $\GRAD V_h^{n+1}$ we conclude that $\GRAD q_h^{n+1}$ must converge weakly and, thus, the convergence is weak in $\Hun$ and strong in $\Ldeux$. \item The quantity $ \dot\phi_h^{n+1} = \tfrac{\frakd \phi_h^{n+1}}\dt + \bu_{h\btau}^{n+1} \psi(\phi_h^n)$ remains bounded in $L^2(\Gamma)$, which implies that there is a subsequence of $\dot \phi_h^{n+1}$ that converges weakly in $L^2(\Gamma)$. \item $\GRAD \mu_h^{n+1}$ remains bounded in $\Ldeuxd$. Moreover, setting $\bar \mu_h = 1 $ in \eqref{eq:chemdiscrete} and using the observations given above, we obtain \begin{multline*} \left| \scl \mu_h^{n+1}, 1 \scr \right| \leq \left| \frac\gamma\delta \scl \calW'(\phi_h^n) + \calA \frakd \phi_h^{n+1},1\scr \right. \\ \left. +\frac12 \scl \rho'(\phi_h^n) \bu_h^n, \bu_h^{n+1} \scr + \alpha \sbl \dot \phi_h^{n+1},1\sbr + \gamma \sbl \Theta_{fs}'(\phi_h^n) + \calB \frakd \phi_h^{n+1}, 1 \sbr \right| \leq c, \end{multline*} which shows that $\int_\Omega \mu_h^{n+1}$ remains bounded and, thus, $\mu_h^{n+1}$ remains bounded in $\Hun$, so that there is a subsequence that converges weakly in $\Hun$ and strongly in $\Ldeux$. 
\item Finally, we use the compatibility condition \eqref{eq:LBB} and the discrete momentum equation \eqref{eq:veldiscrete} to obtain an estimate on the pressure $p_h^{n+1}$, \begin{multline*} c \| p^{n+1}_h \|_{L^2} \leq \frac1\dt \| \rho(\phi_h^n)\|_{L^\infty} \|\frakd \bu_h^{n+1}\|_{\bL^2} + \frac1\dt \| \frakd \rho(\phi_h^{n+1})\|_{L^\infty} \| \bu_h^{n+1}\|_{\bL^2} + \| \rho(\phi_h^n)\|_{L^\infty} \| \bu_h^n \|_{\bH^1} \| \bu_h^{n+1} \|_{\bH^1} \\ + \| \rho'(\phi_h^n)\|_{L^\infty}\|\GRAD\phi_h^n\|_{\bL^2}\|\bu_h^n\|_{\bH^1}\|\bu_h^{n+1}\|_{\bH^1} + \| \eta(\phi_h^n) \|_{L^\infty} \| \bS(\bu_h^{n+1}) \|_{\bL^2} + \| \beta(\phi_h^n) \|_{L^\infty} \| \bu_h^{n+1} \|_{\bV} \\ + \alpha \| \psi(\phi_h^n) \|_{L^\infty(\Gamma)} \| \dot \phi_h^{n+1} \|_{L^2(\Gamma)} + \| \mu_h^{n+1} \|_{H^1} \| \GRAD \phi_h^n \|_{\bL^2} + \| q_h^{n+1} \|_{H^1} \|\GRAD (\lambda q_h^{n+1} + V_h^{n+1}) \|_{\bL^2} \\ + \frac1\dt \| \rho'(\phi_h^n) \|_{L^\infty} \| \frakd \phi_h^{n+1} \|_{L^2} \| \bu_h^n \|_{\bV} \leq \frac{c}\dt, \end{multline*} which, for a fixed and positive $\dt$, implies the existence of an $L^2$-weakly convergent subsequence. \end{itemize} Let us denote the limit by \[ \left\{V_{\dt}-\bar{V}_{0,\dt},q_{\dt},\phi_{\dt},\mu_{\dt},\bu_{\dt},p_{\dt}\right\} \subset \Hunstar \times \Hun^3 \times \bV \times \tildeLdeux. \] It remains to show that this limit is a solution of \eqref{eq:potentialsemidiscrete}--\eqref{eq:NSEsemidiscrete}: \begin{description} \item[Equation \eqref{eq:potentialsemidiscrete}] Notice that if we show that, as $h \rightarrow 0$, the sequence $V_h^{n+1}$ converges to $V^{n+1}$ strongly in $\Hunstar$, then the a.e. convergence of the coefficients implies \begin{equation} \scl \vare^\star(\phi_h^{n+1}) \GRAD V_h^{n+1}, \GRAD W \scr_{\Omega^\star} \rightarrow \scl \vare^\star(\phi^{n+1}) \GRAD V^{n+1}, \GRAD W \scr_{\Omega^\star}. \label{eq:potentialconv} \end{equation} Let us then show the strong convergence by an argument similar to that of \cite[p.~2778]{MR2511642}. For any function $V \in \Hunstar$, we introduce the elliptic projection $\calP_h V \in \polW_h(V)$ as the solution to \[ \scl \GRAD \calP_h V, \GRAD W_h \scr_{\Omega^\star} = \scl \GRAD V, \GRAD W_h \scr_{\Omega^\star}, \quad \forall W_h \in \polW_h(0). \] It is well known that $\calP_h V \rightarrow V$ strongly in $\Hunstar$. Given that $\vare^\star$ is bounded below by a positive constant, \begin{align*} c\| \GRAD(V_h^{n+1} - V^{n+1}) \|_{\bL^2}^2 &\leq \scl \vare^\star(\phi_h^{n+1}) \GRAD (V_h^{n+1}-V^{n+1}), \GRAD (V_h^{n+1}-V^{n+1}) \scr_{\Omega^\star} \\ &= \scl \vare^\star( \phi_h^{n+1}) \GRAD V_h^{n+1}, \GRAD (\calP_h V^{n+1} - V^{n+1}) \scr_{\Omega^\star} \\ &+ \scl \vare^\star( \phi_h^{n+1}) \GRAD V_h^{n+1}, \GRAD ( V_h^{n+1} - \calP_h V^{n+1} ) \scr_{\Omega^\star} \\ &+ \scl \vare^\star( \phi_h^{n+1}) \GRAD V^{n+1}, \GRAD ( V^{n+1} - V_h^{n+1}) \scr_{\Omega^\star} \\ &= I + II +III. \end{align*} Let us estimate each one of the terms separately. Since the coefficients are bounded and the sequence $\GRAD V_h^{n+1}$ is uniformly bounded in $\Ldeuxd$, the strong convergence of $\calP_h V^{n+1}$ shows that $I\rightarrow0$. For $II$ we use the equation, namely \[ II= \scl \vare^\star( \phi_h^{n+1}) \GRAD V_h^{n+1}, \GRAD ( V_h^{n+1} - \calP_h V^{n+1} ) \scr_{\Omega^\star} = \scl q_h^{n+1}, V_h^{n+1} - \calP_h V^{n+1} \scr \rightarrow 0 \] since $q_h^{n+1}$ converges strongly in $\Ldeux$. 
Finally, notice that the last term can be rewritten as \begin{align*} III &= \scl \left( \vare^\star( \phi_h^{n+1}) - \vare^\star(\phi^{n+1}) \right) \GRAD V^{n+1}, \GRAD ( V^{n+1} - V_h^{n+1}) \scr_{\Omega^\star} \\ &+ \scl \vare^\star(\phi^{n+1}) \GRAD V^{n+1}, \GRAD ( V^{n+1} - V_h^{n+1}) \scr_{\Omega^\star}. \end{align*} The uniform boundedness of $\GRAD V_h^{n+1}$ in $\Ldeuxd$ implies that, for the first term, it suffices to show that $\left( \vare^\star( \phi_h^{n+1}) - \vare^\star(\phi^{n+1}) \right) \GRAD V^{n+1} \rightarrow 0$ in $\Ldeuxd$, which follows from the Lebesgue dominated convergence theorem. For the second term, use the weak convergence of $\GRAD V_h^{n+1}$. This, together with the strong $L^2$-convergence of $q_h^{n+1}$, implies that the limit solves \eqref{eq:potentialsemidiscrete}. \item[Equation \eqref{eq:chargesemidiscrete}] The strong $L^2$-convergence of $q_h^{n+1}$ implies that $\tfrac1\dt \frakd q_h^{n+1} \rightarrow \tfrac1\dt \frakd q^{n+1}$ strongly in $\Ldeux$. Using the compact embeddings $\Hun \Subset L^4(\Omega)$ and $\bV \Subset \bL^4(\Omega)$, we see that \[ \scl q_h^n \bu_h^{n+1}, \GRAD r \scr \rightarrow \scl q^n \bu^{n+1}, \GRAD r \scr, \quad \forall r \in \Hun, \] as $h\rightarrow0$. The term $K(\phi_h^n)\GRAD( \lambda q_h^{n+1} + V_h^{n+1})$ can be treated as in \eqref{eq:potentialconv}. These observations imply that the limit solves \eqref{eq:chargesemidiscrete}. \item[Equation \eqref{eq:phasesemidiscrete}] The strong $\bL^2$-convergence of $\bu_h^{n+1}$, the weak $H^1$-convergence of $\phi_h^{n+1}$ and an argument similar to \eqref{eq:potentialconv} imply that the limit solves \eqref{eq:phasesemidiscrete}. \item[Equation \eqref{eq:chemsemidiscrete}] The smoothness of $\calW$ and the fact that its growth is quadratic imply \[ \left| \scl \calW'(\phi_h^n) - \calW'(\phi^n), \bar \mu \scr\right| \leq \max_{\varphi} |\calW''(\varphi)| \| \phi_h^n - \phi^n \|_{L^2} \| \bar \mu \|_{L^2} \rightarrow 0. \] A similar argument and the embedding $\Hun \Subset L^2(\Gamma)$ can be used to show convergence of $\Theta_{fs}'(\phi_h^n)$. Since $\rho$ is a bounded smooth function, \[ \scl \rho'(\phi_h^n) \bu_h^n \SCAL \bu_h^{n+1}, \bar\mu \scr \rightarrow \scl \rho'(\phi^n) \bu^n \SCAL \bu^{n+1}, \bar\mu \scr. \] The strong $\bL^2$-convergence of $\GRAD V_h^{n+1}$ implies that \[ \scl \calE(\phi_h^{n+1},\phi_h^n) |\GRAD V_h^{n+1}|^2, \bar \mu \scr \rightarrow \scl \calE(\phi^{n+1},\phi^n) |\GRAD V^{n+1}|^2, \bar \mu \scr, \] where it is necessary to have $\bar\mu \in L^\infty(\Omega)$. To conclude that \eqref{eq:chemsemidiscrete} is satisfied by the limit, it is left to show that $\dot \phi_h^{n+1}$ converges strongly in $L^2(\Gamma)$. We know that $\dot \phi_h^{n+1}$ converges weakly in $L^2(\Gamma)$. On the other hand, $\tfrac1\dt \frakd \phi_h^{n+1}$ converges strongly in $L^2(\Gamma)$, $\bu_{h\btau}^{n+1}$ converges strongly in $\bL^2(\Gamma)$ and $\psi(\phi_h^n)$ converges a.e. in $\Gamma$; combining these facts, $\dot \phi_h^{n+1}$ indeed converges strongly in $L^2(\Gamma)$. \item[Equations \eqref{eq:NSEsemidiscrete}] Clearly, \eqref{eq:pressemidiscrete} is satisfied. To show that \eqref{eq:velsemidiscrete} holds, notice that \begin{multline*} \scl \overline{\rho(\phi_h^{n+1})} \bu_h^{n+1} - \overline{\rho(\phi^{n+1})} \bu^{n+1}, \bw \scr = \\ \scl \overline{\rho(\phi_h^{n+1})}\left( \bu_h^{n+1} - \bu^{n+1} \right), \bw \scr + \scl \left( \overline{\rho(\phi_h^{n+1})} - \overline{\rho(\phi^{n+1})}\right)\bu^{n+1},\bw \scr \rightarrow 0. 
\end{multline*} Since we assume that $\psi$ is smooth and the slip coefficient $\beta$ is smooth and depends only on the phase field, but not on the stress (as opposed to \S\ref{sub:pinning}), we can obtain convergence of the terms $ \sbl \beta(\phi_h^n) \bu_{h\btau}^{n+1}, \bw_{\btau} \sbr$ and $\sbl \bu_{h\btau}^{n+1} \psi(\phi_h^n), \bw_\btau \psi(\phi_h^n) \sbr$. The advection term can be treated using standard arguments and thus we will not give details here. The terms \[ \scl \mu_h^{n+1} \GRAD \phi_h^n, \bw \scr, \qquad \scl q_h^n \GRAD( \lambda q_h^{n+1} + V_h^{n+1} ), \GRAD \bw \scr, \] can be treated using arguments similar to the ones given before. The term \[ \scl \rho'(\phi_h^n) \frac{ \frakd\phi_h^{n+1} }\dt \bu_h^n, \bw \scr, \] can be easily shown to converge since all the factors involved converge strongly. The convergence of the term \[ \sbl \frac{\frakd \phi_h^{n+1}}\dt, \bw_{\btau}\psi( \phi_h^n )\sbr, \] follows again from the compact embedding $\Hun \Subset L^2(\Gamma)$. Finally, the convergence of the viscous stress term follows the lines of the proof of \eqref{eq:potentialconv}. \end{description} To conclude, we notice that we do not need to reprove estimates similar to \eqref{eq:denergy}. These are valid uniformly in $h$ for all terms in the sequence and, therefore, for the limit. Moreover, if one wanted to obtain an energy estimate by repeating the arguments used to obtain Proposition~\ref{prop:denergy}, it would be necessary first to obtain uniform $L^\infty$ bounds on the sequence $\frakd \phi_{h,\dt}$, since one of the steps in the proof requires setting $\bar \mu_h = 2\frakd \phi_h^{n+1}$. \end{proof} \begin{rem}[Limit $\dt \rightarrow 0$] We are not able to pass to the limit when $\dt \rightarrow 0$ for several reasons. First, the estimates on the pressure depend on the time-step, and getting around this would require finer estimates on the time derivative of the velocity, as is standard for the Navier-Stokes equations. In addition, the terms \[ \scl \rho'(\phi^n) \frac{\frakd \phi^{n+1}}\dt \bu^n, \bw \scr, \qquad \scl \frac{ \frakd \rho(\phi^{n+1}) }\dt \bu^{n+1}, \bw \scr, \] would require finer estimates on the time derivative of the phase field, which we have not been able to obtain. It might be possible, however, to circumvent these two restrictions by defining the weak solution to the continuous problem with an unconstrained formulation for the momentum equation (\ie solution and test functions in $\bV$) and modifying the Cahn-Hilliard equations to their ``viscous version'', in other words, by suitably adding a term of the form $\phi_t$. We will not pursue this direction. \end{rem} \section{Conclusions and Perspectives} \label{sec:concl} Some possible directions for future work would be to extend the analysis by passing to the limit as $\dt \rightarrow 0$, or to investigate the phenomenological pinning model more thoroughly. It would also be interesting to look at the use of open boundary conditions on $\partial^\star \Omega^\star$, which is physically more appropriate for some electrowetting devices. As far as we know, this is an open area of research in the context of phase-field methods. Other extensions of the model could include the transport of surfactant at the liquid-gas interface, though this would make the model more complicated. We want to emphasize that our model gives physically reasonable results when modeling actual electrowetting systems, and so could be used within an optimization framework for improving device design. 
Concerning numerics, an important issue that has not been addressed is how to actually solve the discretized systems. Even in the fully uncoupled case, the presence of the dynamic boundary condition in the Cahn-Hilliard system (Step 2 in the scheme of Section~\ref{sec:NumExp}) makes this problem extremely ill-conditioned and renders standard preconditioning techniques (for instance, the one in \cite{MR2800707}) inapplicable. \end{document}
\begin{document} \begin{abstract} Given a partition ${\mathcal V}=(V_1, \ldots,V_m)$ of the vertex set of a graph $G$, an {\em independent transversal} (IT) is an independent set in $G$ that contains one vertex from each $V_i$. A {\em fractional IT} is a non-negative real-valued function on $V(G)$ that represents each part with total weight at least $1$, and belongs as a vector to the convex hull of the incidence vectors of independent sets in the graph. It is known that if the domination number of the graph induced on the union of every $k$ parts $V_i$ is at least $k$, then there is a fractional IT. We prove a weighted version of this result. This is a special case of a general conjecture on the weighted version of a duality phenomenon between independence and domination in pairs of graphs. \end{abstract} \title{ Independence-Domination Duality in weighted graphs} \author{Ron Aharoni} \address{Department of Mathematics\\ Technion, Haifa\\ Israel 32000} \email{Ron Aharoni: [email protected]} \author{Irina Gorelik} \address{Department of Mathematics\\ Technion, Haifa\\ Israel 32000} \email{Irina Gorelik: [email protected]} \maketitle \begin{section}{Introduction} \subsection{Domination and collective domination} All graphs in this paper are assumed to be simple, namely not containing parallel edges or loops. The (open) neighborhood of a vertex $v$ in a graph $G$, denoted by $\tilde{N}(v)=\tilde{N}_G(v)$, is the set of all vertices connected to $v$. Given a set $D$ of vertices we write $\tilde{N}(D)$ for $\bigcup_{v \in D}\tilde{N}(v)$. Let $N(D)=N_G(D) = \tilde{N}(D) \cup D$. A set $D$ is said to be {\em dominating} if $N(D)=V$ and {\em totally dominating} if $\tilde{N}(D)=V$. The minimal size of a dominating set is denoted by $\gamma(G)$, and the minimal size of a totally dominating set by $\tilde{\gamma}(G)$. There is a {\em collective} version of domination. Given a system of graphs ${\mathcal G}=(G_1,\dots,G_k)$ on the same vertex set $V$, a system ${\mathcal D}=(D_1,\dots,D_k)$ of subsets of $V$ is said to be {\em collectively dominating} if $\bigcup_{i \le k}N_{G_i}(D_i)=V$. Let $\gamma_\cup({\mathcal G})$ be the minimum of $\sum_{i \le k}|D_i|$ over all collectively dominating systems. \subsection{Independence and joint independence} A set of vertices is said to be {\em independent} in $G$ if its elements are pairwise non-adjacent. The complex (closed down hypergraph) of independent sets in $G$ is denoted by ${\mathcal I}(G)$. The {\em independence polytope} of $G$, denoted by $IP(G)$, is the convex hull of the characteristic vectors of the sets in ${\mathcal I}(G)$. For a system of graphs ${\mathcal G}=(G_1,\dots,G_k)$ on $V$ the {\em joint independence number}, $\alpha_\cap({\mathcal G})$, is $\max\{|I| : I \in \cap_{i \le k}{\mathcal I}(G_i)\}$. The {\em fractional joint independence number}, $\alpha_\cap^*({\mathcal G})$, is $\max\{\vec{x}\cdot\vec{1}:~~\vec{x}\in\bigcap_{i\le k} IP(G_i)\}$. We shall mainly deal with the case $k=2$. Let us first observe that it is possible to have $\alpha_\cap^*(G_1,G_2)<\min(\alpha(G_1),\alpha(G_2))$. \begin{example} Let $G_1$ be obtained from the complete bipartite graph with respective sides $\{v_1, \ldots,v_6\}$ and $\{u_1,u_2\}$, by the addition of the edges $v_1v_2$, $v_3v_4$ and $u_1u_2$, and let $G_2=\bar{G_1}$. Then $\alpha(G_1)=\alpha(G_2)=4$, while $\alpha_\cap^*(G_1,G_2)=2$, the optimal vector in $IP(G_1) \cap IP(G_2)$ being the constant $\frac{1}{4}$ vector. 
\end{example} A graph $H$ is called a {\em partition graph} if it is the disjoint union of cliques. In a partition graph $\alpha=\gamma$. The union of two systems of disjoint cliques is the line graph of a bipartite graph: the cliques of one system form one side of the graph, the cliques of the other system form the other side, and an edge connects two vertices (namely, cliques in different systems) if they intersect. Thus, by K\"{o}nig's famous duality theorem \cite{konig}, we have: \begin{theorem}\label{konig2} If $G$ and $H$ are partition graphs on the same vertex set, then $$\alpha_\cap(G,H)=\gamma_\cup(G,H).$$ \end{theorem} There are graphs in which $\alpha >\gamma$, and thus equality does not necessarily hold for general pairs $(G,H)$ of graphs, even when $G=H$. On the other hand, since a maximal independent set is dominating, we have $\gamma(G) \le \alpha(G)$ in every graph $G$. But the corresponding inequality for pairs of graphs is not necessarily true, as the following example shows. \begin{example}\label{noteq} Let $G=P_4$, namely the path with $3$ edges on $4$ vertices, and let $H$ be its complement. Then $\alpha_\cap(G,H)=1$ and $\gamma_\cup(G,H)=2$, so $\alpha_\cap(G,H)<\gamma_\cup(G,H)$. \end{example} However, as was shown in \cite{abhk}, if $\alpha_\cap$ is replaced by its fractional version, then the non-trivial inequality in Theorem \ref{konig2} does hold. \begin{theorem}\label{inddom} For any two graphs $G$ and $H$ on the same set of vertices we have $$\alpha_\cap^*(G,H)\geq\gamma_\cup(G,H).$$ \end{theorem} In Example \ref{noteq} $\vec{\frac{1}{2}}\in IP(G)\cap IP(H)$, and $\alpha_\cap^*(G,H)=2$, so $\alpha_\cap^*(G,H)=\gamma_\cup(G,H)$. \begin{lemma}\label{fracit} Let ${\mathcal V}=(V_1, \ldots ,V_m)$ be a system of disjoint sets, let ${\mathcal I}$ be the set of ranges of partial choice functions from ${\mathcal V}$, and let $V=\bigcup_{i \le m} V_i$. Then $$\{f: V \to \mathbb{R}^+ \mid \sum_{v \in V_j}f(v)\le 1 \text{ for every } j \le m \}=conv(\{\chi_I \mid I \in {\mathcal I}\}).$$ \end{lemma} \begin{proof} Obviously, the right hand side is contained in the left hand side. For the reverse containment, let $f: V \to \mathbb{R}^+$ be such that $\sum_{v \in V_j}f(v)\le 1$ for every $j \le m$, and assume, for contradiction, that it can be separated from all functions $\chi_I$, $ I \in {\mathcal I}$, namely that there exists a vector $\vec{u}$ such that $\sum_{v\in V} u(v)f(v)\ge 1$, and $\sum_{v\in I}u(v)=\sum_{v\in V} u(v)\chi_I(v)<1$ for all $I \in {\mathcal I}$. Since $conv(\{\chi_I \mid I \in {\mathcal I}\})$ is closed down, we may assume that $\vec{u}$ is non-negative. For each $j \le m$ let $v_j$ be such that $u(v_j)$ is maximal over all $v\in V_j$, and let $I=\{v_j \mid j \le m\}$. The fact that $\sum_{v\in I}u(v)< 1$ then implies that $$\sum_{j \le m}\sum_{v\in V_j}u(v)f(v)\le \sum_{j \le m}u(v_j)\sum_{v \in V_j} f(v)\le \sum_{j \le m}u(v_j)< 1, $$ a contradiction. \end{proof} \subsection{Independent transversals} When one graph in the pair $(G,H)$, say $H$, is a partition graph, the parameters $\alpha_\cap(G,H)$ and $\alpha^*_\cap(G,H)$ can be described using the terminology of so-called {\em independent transversals}. Given a graph $G$ and a partition ${\mathcal V}=(V_1, \ldots ,V_m)$ of $V(G)$, an independent transversal (IT) is an independent set in $G$ consisting of the choice of one vertex from each set $V_i$. 
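For instance, if $G$ is the $4$-cycle with vertices $v_1,v_2,v_3,v_4$ and edges $v_1v_2,v_2v_3,v_3v_4,v_4v_1$, then for the partition ${\mathcal V}=(\{v_1,v_2\},\{v_3,v_4\})$ the set $\{v_1,v_3\}$ is an IT, while for the partition $(\{v_1,v_3\},\{v_2,v_4\})$ every choice of one vertex from each part spans an edge, so no IT exists. 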
A {\em partial IT} is an independent set representing some $V_i$'s (so, it is the independent range of a partial choice function from ${\mathcal V}$). A function $f : V \to \mathbb{R}^+$ is called a {\em partial fractional IT} if, when viewed as a vector, it belongs to $IP(G)$, and $\sum_{v\in V_j}f(v)\le 1$ for all $j \le m$. If $\sum_{v\in V_j}f(v)=1$ for all $j \le m$ then $f$ is called a {\em fractional IT}. By Lemma \ref{fracit} this means that $f \in IP(H) \cap IP(G)$, namely, it is a joint fractional independent set, where ${\mathcal V}$ is the set of cliques in $H$. For $I \subseteq [m]$ let $V_I=\bigcup_{i \in I}V_i$. The following was proved in \cite{penny}: \begin{theorem}\label{thm:penny} If $\tilde{\gamma}(G[V_I]) \ge 2|I|-1$ for every $I \subseteq [m]$ then there exists an IT. \end{theorem} Theorem \ref{inddom}, applied to the case in which $H$ is a partition graph, yields: \begin{theorem}\label{fractr} If $\gamma(G[V_I]) \ge |I|$ for every $I \subseteq [m]$ then there exists a fractional IT. \end{theorem} \section{Putting weights on the vertices} In \cite{abz} a weighted version of Theorem \ref{thm:penny} was proved. As is often the case with weighted versions, the motivation came from decompositions: weighted results give, by duality, fractional decomposition results. It is conjectured that if $|V_i| \ge 2\Delta(G)$ then there exists a partition of $V(G)$ into $\max_{i \le m}|V_i|$ IT's. The weighted version of Theorem \ref{thm:penny} yielded the existence of a fractional version of such a decomposition. \begin{notation} Given a real valued function $f$ on a set $S$, and a set $A\subseteq S$, define $f[A]=\sum_{a\in A}f(a)$. We also write $|f|=f[S]$ and we call $|f|$ the {\em size} of $f$. \end{notation} \begin{definition} Let $G=(V,E)$ be a graph, and let $w:V\to\mathbb{N}$ be a weight function on $V$. We say that a function $f:V\to\mathbb{N}$ {\em $w$-dominates} a set $U$ of vertices if $f[N(u)]\geq w(u)$ for every $u\in U$. We say that $f$ is {\em $w$-dominating} if it $w$-dominates $V$. The {\em weighted domination number} $\gamma^w(G)$ is $\min\{|f| \mid f ~\text{is $w$-dominating}\}$. \end{definition} This definition extends to systems of graphs: \begin{definition} Let $\mathcal{G}=(G_1,\dots,G_k)$ be a system of graphs on the same vertex set $V$. Let $w:V\to\mathbb{N}$ be a non-negative weight function on $V$, and let ${\mathcal F}=(f_i:~V\to\mathbb{N}, ~~i \le k)$ be a system of functions. We say that ${\mathcal F}$ $w$-dominates ${\mathcal G}$ if $\sum_{i=1}^k f_i[N_{G_i}(v)]\geq w(v)$ for every $v\in V$. The {\em weighted collective domination number} is $$\gamma_\cup^w(\mathcal{G})=\min\{\sum_{i=1}^k |f_i| : (f_1,\dots,f_k)\ \text{is $w$-dominating}\}.$$ The extension of the independence parameter to the weighted case is also quite natural: $$(\alpha_\cap^w)^*({\mathcal G})=\max\{\sum_{v \in V}x(v)w(v) \mid \vec{x}\in\bigcap_{i=1}^k IP(G_i)\}.$$ \end{definition} The aim of this paper is to study the following possible extension of Theorem \ref{inddom} to the weighted case. \begin{Conjecture}\label{conj:main} If $G$ and $H$ are graphs on the same vertex set $V$ then for any weight function $w:V\to\mathbb{N}$ we have $$(\alpha_\cap^w)^*(G,H)\geq\gamma_\cup^w(G,H).$$ \end{Conjecture} If $H=G$ then the stronger inequality $\alpha_\cap^w(G,G) \geq\gamma_\cup^w(G,G)$ holds, namely: \begin{lemma} $\alpha^w(G) \ge \gamma^w(G)$. \end{lemma} \begin{proof} We have to exhibit a $w$-dominating function $f$ and an independent set $I$ with $|f|\leq w[I]$. Let $V(G)=\{v_1,\dots,v_n\}$. 
We define a $w$-dominating function $f:V\to \mathbb{N}$ inductively. Let $f(v_1)=w(v_1)$. Having defined $f(v_1),\dots,f(v_{i-1})$ let $$f(v_i)= [w(v_i)-\sum_{v_j\in N(v_i),\; j<i}f(v_j)]^+.$$ Clearly, $f$ is $w$-dominating. We next find an independent set $I$ such that $w[I]\geq|f|$. Let $v_{i_1}$ be the vertex that has the maximal index over all the vertices in $V_1=V\cap supp(f)$. Since $f(v_{i_1})>0$, we have $f[N(v_{i_1})]=w(v_{i_1})$. Suppose that we have defined the sets of vertices $V_1,V_2,\dots, V_{k-1}$ and vertices $v_{i_1},\dots,v_{i_{k-1}}$ such that $v_{i_j}$ is the vertex whose index is maximal over all the vertices in $V_j$, where $V_j=V_{j-1}\setminus N(v_{i_{j-1}})$ for every $j=2,\dots,k-1$. Let $V_k=V_{k-1}\setminus N(v_{i_{k-1}})$ and let $v_{i_k}$ be the vertex whose index is maximal over all the vertices in $V_k$. By the definition of $f$ we have $\sum_{v_j\in N(v_{i_k}),\; j\le i_k}f(v_j)=w(v_{i_k})$, so $\sum_{v_j\in V_k\cap N(v_{i_k})}f(v_j)\leq w(v_{i_k})$. We stop the process when $V_t=\emptyset$ for some $t$. In this case $I=\{v_{i_1},\dots,v_{i_{t-1}}\}$ is an independent set; since the sets $V_k\cap N(v_{i_k})$, $k=1,\dots,t-1$, partition $supp(f)$, we obtain $|f|=\sum_{k=1}^{t-1}f[V_k\cap N(v_{i_k})]\leq \sum_{k=1}^{t-1}w(v_{i_k})=w[I]$, as desired. \end{proof} \section{The case of partition graphs} The main result of this paper is: \begin{theorem}\label{fractrconj} Conjecture \ref{conj:main} is true if $H$ is a partition graph. Namely, if $H$ is a partition graph and $G$ is any graph, then $$(\alpha_\cap^w)^*(G,H)\geq\gamma_\cup^w(G,H).$$ \end{theorem} Let us first re-formulate the left hand side of the inequality in terms of partitions. For a partition ${\mathcal V}=(V_1, \ldots, V_m)$ of the vertex set $V$ of a graph $G$, let \begin{equation} \label{ns} {({\nu}^w)}^*(G, {\mathcal V}) =\max\{ \sum_{v \in V}w(v)f(v)\; \mid \; f \ \text{is a partial fractional IT}\}.\end{equation} By Lemma \ref{fracit} we have: \begin{lemma}\label{lem:param} $(\alpha_\cap^w)^*(G,H)={({\nu}^w)}^*(G, {\mathcal V})$. \end{lemma} Let us also re-formulate the right hand side using the terminology of partitions. Given a partition ${\mathcal V}=(V_1, \ldots, V_m)$ of $V(G)$, a pair of non-negative real-valued functions $f$ on $V$ and $g$ on $[m]$ is said to be {\em collectively $w$-dominating} if for every vertex $v \in V_i$ we have $g(i)+f[N(v)] \ge w(v)$. Let $\gamma^w(G, {\mathcal V})$ be the minimum of $|g|+|f|$ over all collectively $w$-dominating pairs of functions. In this terminology, $\gamma_\cup^w(G,H)=\gamma^w(G,{\mathcal V})$. In addition, let $\tau^w(G, {\mathcal V})$ be the minimum of $|g|+\frac{|f|}{2}$ over all collectively $w$-dominating pairs of functions. In \cite{abz} the following weighted version of Theorem \ref{thm:penny} was proved. \begin{theorem}\label{thm:abz} $\nu^w(G,{\mathcal V}) \ge \tau^w(G, {\mathcal V})$. \end{theorem} \begin{remark} Note the factor $\frac{1}{2}$ difference between the definitions of $\tau^w(G, {\mathcal V})$ and $\gamma^w(G,{\mathcal V})$. It mirrors the factor $\frac{1}{2}$ difference (manifest in the factor $2$ in ``$2|I|-1$'') between the statements of Theorems \ref{thm:penny} and \ref{fractr}. The same factor appears in the weighted case: the difference between the integral and fractional versions is the $\frac{1}{2}$ factor hidden in the right hand sides of Theorems \ref{thm:abz} and of \ref{thm:partition} below. 
\end{remark} By Lemma \ref{lem:param} the case of Conjecture \ref{conj:main} in which $H$ is a partition graph is: \begin{theorem}\label{thm:partition} ${(\nu^w)}^*(G,{\mathcal V}) \ge \gamma^w(G, {\mathcal V})$, where ${\mathcal V}$ is the partition of $V$ into cliques of $H$. \end{theorem} \begin{proof} Note that if $f = \sum_{I \in {\mathcal I}(G)}x_I\chi_I$ then $f[V_j]=\sum_{I\in{\mathcal I}(G)}x_I|I\cap V_j|$, and thus the constraints defining the linear program for ${({\nu}^w)}^*(G, {\mathcal V})$ are $\sum_{I\in{\mathcal I}(G)}x_I|I\cap V_j|\leq 1$ and $\sum_{I\in{\mathcal I}(G)}x_I=1$. \begin{assertion} Let \begin{equation}\label{t}{({\nu}^w)}^*(G,{\mathcal V})=\max\{\sum_{I\in {\mathcal I}(G)}x_I w[I] \;:\; \sum_{I\in{\mathcal I}(G)}x_I|I\cap V_j|\leq 1,\; \sum_{I\in{\mathcal I}(G)}x_I\leq 1\}\end{equation} \end{assertion} \begin{proof} Denote the right hand side by $t$. If $f=\sum_{I\in{\mathcal I}(G)}x_I\chi_I$ is an optimal solution of the linear program \eqref{ns} then ${({\nu}^w)}^*(G,{\mathcal V})=\sum_{v\in V}w(v)f(v)=\sum_{I\in {\mathcal I}(G)}x_I w[I]$. Hence ${({\nu}^w)}^*(G,{\mathcal V})\leq t$. On the other hand, suppose, for contradiction, that there exists an optimal solution of the linear program \eqref{t} that satisfies $\sum_{I\in{\mathcal I}(G)}x_I=1-\epsilon$ for some $\epsilon>0$. Clearly $t>0$, and hence there exists an independent set $I_0$ such that $x_{I_0}>0$. Choose a vertex $v\in I_0$, and define a vector $\vec{x'}$ as follows. Let $x'_{I_0}=x_{I_0}-\epsilon$, $x'_{I_0\setminus\{v\}}=x'_{\{v\}}=\epsilon$ and $x'_I=x_I$ otherwise. Note that the vector $x'$ satisfies the constraints of the linear program, but the weight of $\sum_{I\in{\mathcal I}(G)}x'_I\chi_I$ is $\sum_{I\in{\mathcal I}(G)}x_Iw[I]+\epsilon$, contradicting the maximality of the optimal solution. Hence this optimal solution is also a solution for the linear program \eqref{ns}, so $t\leq {({\nu}^w)}^*(G,{\mathcal V})$, proving the desired equality. \end{proof} By LP duality ${({\nu}^w)}^*(G, {\mathcal V})$ is the minimum of $\sum_{j=0}^m y_j$ over all vectors $\vec{y}=(y_0,y_1,\dots,y_m)$ satisfying $y_0+\sum_{j=1}^my_j |I\cap V_j| \geq w[I]$ for all $I\in{\mathcal I}(G)$. Let $\vec{y}=(y_0,y_1,\dots,y_m)$ be a vector at which the minimum is attained, meaning that $\sum_{j=0}^m y_j={({\nu}^w)}^*(G,{\mathcal V})$, and let $g(j)=\lfloor y_j\rfloor$ for all $j \le m$. We define a new weight function $w_g$ by $w_g(v)=[w(v)-\lfloor y_{j(v)}\rfloor]^+$, where $j(v)$ is the index $j$ for which $v \in V_j$. Let $V'=\{v \mid w_g(v)>0\}$ be the support of $w_g$, and let $G'=G[V']$. For a number $s$ let $\{s\}$ be the fractional part of $s$, namely $\{s\}=s- \lfloor s \rfloor$. \begin{assertion} The vector $(y_0,\{y_1\},\dots,\{y_m\})$ is an optimal solution for the program dual to $(\nu^{w_g})^*(G', {\mathcal V})$, namely $$y_0+\sum_{j=1}^m \{y_j\}=(\nu^{w_g})^*(G',{\mathcal V}):=$$ $$\max\{\sum_{I\in{\mathcal I}(G')}x_I w_g[I]\mid \;\sum_{I\in{\mathcal I}(G')} x_I\leq1\; \text{and} \; \forall j\;\sum_{I\in{\mathcal I}(G')} x_I |I\cap V_j|\leq 1\}$$ \end{assertion} \begin{proof} Denote by $y$ the sum $y_0+\sum_{j=1}^m \{y_j\}$. For every $v\in V'$, we have $w_g(v)=w(v)-\lfloor y_{j(v)}\rfloor$, hence $y_0+\sum_{j=1}^m\{y_j\} |I\cap V_j|\geq w_g[I]$ for every $I\in{\mathcal I}(G')\subseteq{\mathcal I}(G)$, proving that $y\geq (\nu^{w_g})^*(G',{\mathcal V})$. For the reverse inequality, assume, for contradiction, that there exists a solution $\vec{x}=(x_0,\dots,x_m)$ such that $\sum_{j=0}^m x_j<y$. 
Then the vector $\vec{x}'=(x_0,x_1+\lfloor y_1\rfloor,\dots,x_m+\lfloor y_m\rfloor)$ is a solution to the original problem that satisfies $x_0+\sum_{j=1}^m (x_j+\lfloor y_j\rfloor)<\sum_{j=0}^m y_j={({\nu}^w)}^*(G,{\mathcal V})$, contradicting the optimality of $\vec{y}$. \end{proof} Since an optimal solution of the primal problem corresponding to the weight function $w_g$ satisfies $\sum_{I\in\mathcal{I}(G')}x_I=1$, there exists a set $I$ such that $x_I>0$. Let $I_0$ be a set of minimal weight in $supp(x)$. Then \begin{equation}\label{I_0} w_g[I_0]=w_g[I_0]\sum_{I\in{\mathcal I}(G')}x_I\leq \sum_{I\in{\mathcal I}(G')}x_I w_g[I]=(\nu^{w_g})^*(G',{\mathcal V}).\end{equation} Let $h:V'\to\mathbb{N}$ be defined by $h(v)=w_g(v)$ if $v\in I_0$ and $h(v)=0$ otherwise. \begin{assertion} The function $h$ is $w_g$-dominating in $G'$. \end{assertion} \begin{proof} Suppose not. Then there exists a vertex $v\in V'$ such that $w_g(v)>h(v)+h[\tilde{N}(v)]=h(v)+w_g[\tilde{N}(v)\cap I_0]$. Clearly, $v\notin I_0$, hence $h(v)=0$. The set $I'=(I_0\setminus \tilde{N}(v))\cup \{v\}$ satisfies $$w_g[I']=w_g[I_0\setminus \tilde{N}(v)]+w_g(v)>w_g[I_0\setminus \tilde{N}(v)]+w_g[I_0\cap \tilde{N}(v)]=w_g[I_0].$$ Since $w$ is an integral function, $w_g$ is also integral, hence $w_g[I']=w_g[I_0]+k_v$ for some $k_v\geq 1$. On the other hand, by the definition of the dual program, $w_g[I']\leq y_0+\sum_{j=1}^m \{y_j\}|I'\cap V_j|$. In addition, since $x_{I_0}>0$, the complementary slackness conditions state that equality holds in the corresponding constraint in the dual problem, i.e., $w_g[I_0]= y_0+\sum_{j=1}^m \{y_j\}|I_0\cap V_j|$. Hence $$k_v=w_g[I']-w_g[I_0]\leq \sum_{j=1}^m \{y_j\}(|I'\cap V_j|-|I_0\cap V_j|)=\{y_{j(v)}\}-\sum_{u\in N(v)\cap I_0}\{y_{j(u)}\}<1, $$ a contradiction. \end{proof} Since $g$ dominates all vertices $v\in V\setminus V'$, the pair $(g,h)$ is $w$-dominating, hence, using \eqref{I_0}, we have $$\gamma^w(G,{\mathcal V})\leq |g|+|h|=|g|+w_g[I_0]\leq |g|+(\nu^{w_g})^*(G',{\mathcal V})$$ $$=\sum_{j=1}^m \lfloor y_j\rfloor+y_0+\sum_{j=1}^m \{y_j\}=\sum_{j=0}^m y_j={({\nu}^w)}^*(G, {\mathcal V}) $$ as desired. \end{proof} \end{section} \end{document}
\begin{document} LA-UR-13-23134 SAND 2013-5836J \title{On the gap of Hamiltonians for the adiabatic simulation of quantum circuits} \author{Anand Ganti} \email{[email protected]} \affiliation{ Sandia National Laboratories \\ Albuquerque, New Mexico 87185, USA \\ } \author{Rolando D. Somma} \email{[email protected]} \affiliation{ Los Alamos National Laboratory \\ Los Alamos, New Mexico 87545, USA } \begin{abstract} The time or cost of simulating a quantum circuit by adiabatic evolution is determined by the spectral gap of the Hamiltonians involved in the simulation. In ``standard'' constructions based on Feynman's Hamiltonian, such a gap decreases polynomially with the number of gates in the circuit, $L$. Because a larger gap implies a smaller cost, we study the limits of spectral gap amplification in this context. We show that, under some assumptions on the ground states and the cost of evolving with the Hamiltonians (which apply to the standard constructions), an upper bound on the gap of order $1/L$ follows. In addition, if the Hamiltonians satisfy a frustration-free property, the upper bound is of order $1/L^2$. Our proofs use recent results on adiabatic state transformations, spectral gap amplification, and the simulation of continuous-time quantum query algorithms. They also consider a reduction from the unstructured search problem, whose lower bound on the oracle cost translates into the upper bounds on the gaps. The impact of our results is that improving the gap beyond that of standard constructions (i.e., $1/L^2$), if possible, is challenging. \end{abstract} \date{\today} \maketitle \section{Introduction} \label{sec:intro} Adiabatic quantum computing (AQC) is an alternative to the standard circuit model of quantum computation. In AQC, the input is a (qubit) Hamiltonian $H(1)$ and the goal is to prepare the ground state of $H(1)$ by means of slow or adiabatic evolutions. One then sets an initial Hamiltonian $H(0)$ and builds a Hamiltonian path $H(g)$, $0 \le g \le 1$, that interpolates between $H(0)$ and $H(1)$. If the ground states of $H(g)$ are continuously related and remain separated from any other eigenstate by a spectral gap of order $\Delta$ during the evolution, the quantum adiabatic approximation implies that, for \begin{align} \label{eq:runningtime} \dot g(t) \le \epsilon \frac{\Delta^q}{h} \; , \end{align} the ground state of $H(1)$ can be adiabatically prepared with fidelity $1-\epsilon$. Here, $0<q \le 3$ and $h$ depends on $\| \partial^n H(g)/\partial g^n \|^{q+1}$, $n=1,2$, for differentiable paths \cite{messiah_1999,jansen_bounds_2007,regev_quantum_2004,lidar_adiabatic_2009,boixo:qc2009b}. A key feature of AQC is that it constitutes a ``natural'' model for problems that efficiently reduce to the computation of ground-state properties. Some of these are problems in combinatorial optimization \cite{finnila:qc1994a,kadowaki:qc1998a,farhi_quantum_2000,farhi:qc2001a, somma_thermod_2007,somma_optimization_2010,somma_quantum_2008} and problems in many-body physics, e.g. the computation of a quantum phase diagram~\cite{sachdev_2001}. Whether or not AQC is robust to decoherence is unclear, and a complete fault-tolerant implementation of AQC remains unknown~\cite{jordan_adiabatic_2006,lidar_FTAQC_2008}. Nevertheless, the spectral gap plays a critical role in a noisy implementation of AQC: a bigger $\Delta$ could imply a smaller running time [Eq.~\eqref{eq:runningtime}] and a reduction of the (unwanted) population of excited states due to thermal effects. 
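To make the dependence on the gap explicit, note that if the schedule saturates Eq.~\eqref{eq:runningtime} (treating $h$ and $q$ as fixed for the path), the total evolution time is \begin{align} \nonumber T = \int_0^1 \frac{dg}{\dot g} = \frac{h}{\epsilon} \int_0^1 \frac{dg}{\Delta(g)^{q}} \le \frac{h}{\epsilon \, [\min_g \Delta(g)]^{q}} \; , \end{align} so that, all else being equal, amplifying the minimum gap along the path directly reduces the cost of the adiabatic simulation. 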
Our goal is then to study the limits and possibilities of amplifying the gap in AQC. Roughly stated, we are addressing the following question: Given $H(g)$ with gap $\Delta(g)$ and ground state $\ket{\psi(g)}$, can we find $\tilde H(g)$ with gap $\tilde \Delta(g) \gg \Delta(g)$ and ground state satisfying $|\tilde \psi(g)\rangle \approx \ket{\psi(g)}$? Our motivation is the same as that of Ref.~\cite{mizel_fixedgap_2010}. We are particularly interested in amplifying the gap of those Hamiltonians that arise in the adiabatic simulation of general quantum circuits-- see below. The power of AQC and the standard quantum circuit model are equivalent. That is, any algorithm in the AQC model with a running time $T$, that prepares a quantum state $\ket {\psi(1)}$, can be simulated with a quantum circuit of $L \in {\rm poly}(T)$ unitary gates that prepares a sufficiently close state to ${\ket {\psi(1)}}$ when acting on some trivial initial state~\cite{berry_efficient_2007, wiebe_product_2010,cleve_query_2009,childs_efficient_2010}. The converse also holds: Any quantum circuit of $L$ unitary gates that prepares a quantum state $\ket {\phi^L}$, when acting on some trivial initial state, can be simulated within the AQC model by evolving adiabatically with suitable Hamiltonians $H(g)$ for time $T \in {\rm poly}{ (L)}$. The ground state of the final Hamiltonian, $\ket{\psi(1)}$, has a large probability of being in $\ket {\phi^L}$ after a simple measurement~\cite{aharonov_adiabatic_2007, mizel_equivalence_2007,aharonov_line_2009}. $H(g)$ depends on the unitaries that specify the quantum circuit. To describe our results in detail, we review the first ``standard'' construction in Ref.~\cite{aharonov_adiabatic_2007}, which is based on Feynman's Hamiltonian~\cite{feynman_simulating_1982}. $\mathcal{U}$ is a quantum circuit acting on $n$ qubits that prepares the ``system'' state $\ket {\phi^L}= U^L \ldots U^1 \ket {\phi^0}$, after the action of $L$ unitary gates $U^1,\ldots,U^L$. There is also an ancillary system, denoted by ``clock'', whose basis states are $\{ \ket 0_{\rm c} , \ket 1_{\rm c}, \ldots , \ket L_{\rm c} \}$. The (final) Hamiltonian $H^\mathcal{U}$ is mainly a sum of two terms. The first term is the so-called Feynman Hamiltonian: \begin{align} \nonumber &H_{\rm Feynman}^\mathcal{U} = \sum_{l=1}^L h^{\mathcal{U},l} \; , \\ \label{eq:Hfeynman} & h^{\mathcal{U},l} = \frac 1 2 ( \one \otimes \ketbra l_{\rm c} +\one \otimes \ketbra{l-1}_{\rm c} - \\ \nonumber & - U^l \otimes \ket l \! \bra{l-1}_{\rm c} - (U^l)^\dagger \otimes \ket {l-1}\! \bra{l} _{\rm c} ) \; . \end{align} $\one$ is the trivial operation on the system's state. $H_{\rm Feynman}^\mathcal{U}$ can be easily diagonalized by using the Fourier transform. The eigenvalues are $1- \cos k$, with $k = 2 \pi m/(L+1)$ and $m \in \mathbb{Z}$. Then, the lowest eigenvalue is zero ($k=0$) and the gap is of order $1/L^2$ [$k=2\pi/(L+1)$ for the smallest nonzero eigenvalue]. Each eigenvalue appears with multiplicity $2^n$, corresponding to each state $\ket \sigma$ of the system. The eigenstates of $H_{\rm Feynman}^\mathcal{U}$ are \begin{align} \label{eq:feynmaneigenstate} \frac 1 {\sqrt{L+1}}\sum_{l=0}^L e^{i kl }U^l \ldots U^0 \ket{\sigma} \otimes \ket l_{\rm c} \; , \end{align} where $U^0 = \one$. If $\ket \sigma= \ket {\phi^0}$, the eigenstate in Eq.~\eqref{eq:feynmaneigenstate} has probability $1/(L+1)$ of being in the state output by the circuit. 
That is, we can prepare $\ket {\phi^L}$ with such a probability by a projective measurement of the clock register on the state of Eq.~\eqref{eq:feynmaneigenstate}. We can remove the multiplicity of the lowest eigenvalue if we add a second term, $ H_{\rm input}$, whose expected value vanishes when $\ket \sigma = \ket {\phi^0}$ and is strictly positive otherwise. For example, if $\ket {\phi^0} = \ket+^{\otimes n}$, where $\ket + = (\ket 0 + \ket 1)/\sqrt 2$, $H_{\rm input}$ in Ref.~\cite{aharonov_adiabatic_2007} corresponds to \begin{align} \nonumber H_{\rm input} = \sum_{j=1}^n \ketbra{-} _j\otimes \ketbra 0 _{\rm c} \; , \end{align} with $\ket - = (\ket 0 - \ket 1)/\sqrt 2$. In this case, $ H_{\rm input}$ sets a ``penalty'' if the system-clock initial state is different from $\ket + ^{\otimes n} \otimes \ket 0 _{\rm c}$. The lowest eigenvalue of $H_{\rm input} $ is zero and the gap is a constant independent of $L$ (i.e., $\Delta_{\rm input}=1$). Then, the Hamiltonian \begin{align} \label{eq:standardH} H^\mathcal{U}= H_{\rm Feynman}^\mathcal{U} + H_{\rm input} \end{align} has \begin{align} \label{eq:historystate} \ket {\psi^\mathcal{U}} = \frac 1 {\sqrt{L+1}}\sum_{l=0}^L\ket {\phi^l}\otimes \ket l_{\rm c} \end{align} as unique ground state [$k=0$ in Eq.~\eqref{eq:feynmaneigenstate}], where $\ket{\phi^l}=U^l \ldots U^0 \ket{\phi^0}$. We will refer to $\ket {\psi^\mathcal{U}}$ as the ``history state''. The lowest eigenvalue of $H^\mathcal{U}$ is also zero and the spectral gap satisfies $\Delta^\mathcal{U} \in \Theta(1/L^2)$ \cite{deift_improved_2007}. It is simple to construct an interpolating path $H^\mathcal{U}(g)$ that has a spectral gap $\Delta^\mathcal{U}(g) =\Delta^\mathcal{U} \in \Theta(1/{\rm poly}L)$ for all $g$ and $H^\mathcal{U}(1)=H^\mathcal{U}$. This is done by, for example, parametrizing the unitaries in the circuit so that $U^l \rightarrow U^l(g)$ in Eq.~\eqref{eq:standardH}, and $U^l(0)=\one$, $U^l(1)=U^l$. Then, the ground state $\ket {\psi^\mathcal{U}(1)}=\ket{\psi^\mathcal{U}}$ can be prepared from $\ket {\psi^\mathcal{U}(0)} = \ket{\phi^0} \otimes \sum_l \ket l_{\rm c}/\sqrt{L+1}$ by evolving adiabatically with $H^\mathcal{U}(g)$ for time $T \in \mathcal{O}[{\rm poly}(L)] $ [see Eq.~\eqref{eq:runningtime}]. $H^\mathcal{U}$ is often regarded as ``unphysical'' as the system-clock interactions may represent non-local interactions of actual quantum subsystems (qubits). Then, a number of steps that include modifications of the gates in the circuit and techniques from perturbation theory (e.g., perturbation gadgets)~\cite{nagaj_2008}, allow us to reduce $H^\mathcal{U}$ to a physical, local Hamiltonian $H^\mathcal{U}_{\rm local}$. Such steps preserve the two main ingredients for showing the equivalence between AQC and the circuit model: i- that the spectral gap of the local Hamiltonian, $\Delta^\mathcal{U}_{\rm local}$, is bounded from below by $1/{\rm poly}(L)$ and ii- that the ground state has sufficiently large probability of being in $\ket{ \phi^L}$ after a simple quantum operation (e.g., a simple projective measurement). It is important to remark that $\Delta^\mathcal{U}_{\rm local}$ is smaller than $\Delta^\mathcal{U}$ in standard constructions~\cite{aharonov_adiabatic_2007,aharonov_line_2009}. 
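Returning to $H^\mathcal{U}$ of Eq.~\eqref{eq:standardH}: as a concrete illustration (a minimal numerical sketch in Python; it is not part of the original analysis and uses an arbitrary toy one-qubit circuit), one can verify that the history state of Eq.~\eqref{eq:historystate} is the unique zero-energy ground state and that the gap closes roughly as $1/L^2$:
\begin{verbatim}
# Toy check (one qubit): H^U = H_Feynman^U + H_input annihilates the history
# state, the ground state is unique, and the gap scales roughly as 1/L^2.
import numpy as np

I2 = np.eye(2)
Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard (arbitrary gate choice)
Xg = np.array([[0, 1], [1, 0]])
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

def build(gates):
    L = len(gates)
    kc = lambda l: np.eye(L + 1)[l]                # clock basis state |l>_c
    H = np.kron(np.outer(minus, minus), np.outer(kc(0), kc(0)))          # H_input
    hist, phi = np.kron(plus, kc(0)), plus.copy()
    for l, U in enumerate(gates, start=1):         # h^{U,l} of the Feynman term
        H = H + 0.5 * (np.kron(I2, np.outer(kc(l), kc(l)) + np.outer(kc(l-1), kc(l-1)))
                       - np.kron(U, np.outer(kc(l), kc(l-1)))
                       - np.kron(U.T.conj(), np.outer(kc(l-1), kc(l))))
        phi = U @ phi
        hist = hist + np.kron(phi, kc(l))          # history state (unnormalized)
    return H, hist / np.sqrt(L + 1)

for L in (4, 8, 16, 32):
    H, hist = build([Hd, Xg] * (L // 2))
    ev = np.linalg.eigvalsh(H)
    # ||H hist|| ~ 0, lowest eigenvalue ~ 0, gap * L^2 roughly constant
    print(L, np.linalg.norm(H @ hist), ev[0], ev[1] * L**2)
\end{verbatim}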
Since $\Delta^\mathcal{U}_{\rm local}$ is only smaller than $\Delta^\mathcal{U}$, some attempts to improve the running time of the adiabatic simulation of a quantum circuit consider first the amplification of $\Delta^\mathcal{U}$ by making simple modifications to $H^\mathcal{U}$ (see Ref.~\cite{lloyd_adiabatic_2008} for an example); our results concern the amplification of $\Delta^\mathcal{U}$. In this report we show that, under some assumptions on the ground states and the time or cost of evolving with the Hamiltonians, an upper bound on the gap of order $1/L$ follows. Furthermore, if the Hamiltonians additionally satisfy a so-called frustration-free property, then the upper bound is $1/L^2$. An implication of our results is that simple modifications to $H^\mathcal{U}$ in Eq.~\eqref{eq:standardH} are not sufficient to amplify its gap. While such modifications could be useful to prepare the desired state via a constant-Hamiltonian evolution~\cite{landahl_PST_2004}, they may not be useful to prepare the state adiabatically. Our proofs are constructive, i.e., we find a reduction from the unstructured search problem~\cite{grover_fast_1996} (Sec.~\ref{sec:grover}), whose lower bound on the oracle cost~\cite{bennett_searchbound_1997} (i.e., the number of queries to the oracle needed) can be transformed into the upper bounds on the gaps (Sec.~\ref{sec:mainresult}). Clearly, the only way to obtain a bigger gap, if gap amplification is indeed possible, is by avoiding one or more assumptions needed for our proofs. This suggests moving away from those constructions that are based on Feynman's Hamiltonian. \section{Search by a generalized measurement-based method} \label{sec:grover} The proof of an upper bound on $\Delta^\mathcal{U}$ uses a reduction from the unstructured search problem or SEARCH. In this section, we show a quantum method that solves SEARCH using measurements. For a system of $n$ qubits, we let $N=2^n$ be the dimension of the associated state (Hilbert) space $\mathcal{H}$. Given an oracle $O_X$, where the input $X$ is an $n$-bit string, the goal of SEARCH is to output $X$. In quantum computing, $O_X$ implements the following unitary operation: \begin{align} \nonumber O_X \ket Y= \left\{ \begin{matrix} & \ \ \ \ket {Y} \ {\rm if} \ X \ne Y \; , \cr &- \ket X \ {\rm if } \ X = Y \; . \end{matrix} \right. \end{align} $O_X$ acts on ${\cal H}$. A quantum algorithm for SEARCH uses $O_X$ and other $X$-independent operations to prepare a state sufficiently close to $\ket X$. Thus, a projective measurement on this state outputs $X$ with large probability. The (oracle) cost of the algorithm is given by the number of times that $O_X$ is used. A lower bound $\Omega(\sqrt N)$ for the cost of SEARCH is known~\cite{bennett_searchbound_1997}, and the famous Grover's algorithm solves SEARCH with $L/2 \in \Theta(\sqrt N)$ oracle uses \cite{grover_fast_1996}. Grover's algorithm, denoted by $\mathcal{U}_X$, is a sequence of two unitary operations, $O_X$ and $R$, where $R$ is a reflection over the equal superposition state $\ket {\phi^0}= \frac 1 {\sqrt{N}} \sum_Y \ket Y=\ket{+}^{\otimes n}$. The initial state is also $\ket {\phi^0}$. The state output by $\mathcal{U}_X$ is $\ket {\phi^L_X}$ and satisfies, in the large $N$ limit, \begin{align} \label{eq:halfgroverstate} \left \| \ket {\phi_X^L} - \ket X \right \| \ll 1 \; . \end{align} There are other quantum methods that solve SEARCH, with optimal cost, using measurements.
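Before turning to those measurement-based methods, the following minimal sketch (Python; not part of the original text, with an arbitrary choice of marked item) illustrates the circuit $\mathcal{U}_X$ and the statement of Eq.~\eqref{eq:halfgroverstate} numerically:
\begin{verbatim}
# Grover iteration: the oracle O_X flips the sign of |X>, and R reflects about
# the uniform superposition |phi^0>.  After ~ (pi/4) sqrt(N) iterations the
# state has overlap close to 1 with |X>.
import numpy as np

n = 8
N = 2**n
X = 37                                    # arbitrary marked item (illustrative)
state = np.full(N, 1 / np.sqrt(N))        # |phi^0>
iters = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iters):
    state[X] *= -1                        # O_X
    state = 2 * state.mean() - state      # R = 2|phi^0><phi^0| - 1
print(iters, abs(state[X])**2)            # oracle uses ~ sqrt(N), overlap ~ 0.99
\end{verbatim}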
One such method, first introduced in Ref.~\cite{childs_quantum_2002}, involves two projective measurements: After preparing $\ket{\phi^0}$, a measurement of $\ket{\psi_X} \approx[\ket X + \ket{\phi^0}]/\sqrt 2$, followed by a measurement of $\ket{X}$, outputs $X$ with probability close to $1/4$. The cost of this measurement-based method is dominated by the simulation of the first measurement. Such a simulation can be done using the phase estimation algorithm~\cite{kitaev_quantum_1995} or by phase randomization~\cite{boixo:qc2009a}. Both methods require evolving with a Hamiltonian that has $\ket {\psi_X}$ as eigenstate. The evolution time is proportional to the inverse gap of the Hamiltonian, which is needed to resolve the desired state from any other eigenstate. Generalizations of the above measurement-based method, that consider simulating measurements in other states, also solve SEARCH. To see this, we let $\ket{\zeta_X} \in \mathcal{H}'$ and $\ket \nu \in \mathcal{H}''$ be two pure quantum states that do and do not depend on $X$, respectively. The corresponding Hilbert spaces satisfy $\mathcal{H}'' \subseteq \mathcal{H}' \subseteq \mathcal{H}$. We also define \begin{align} \nonumber p_{\nu, \zeta_{X}} &= {\rm tr} [ \langle \zeta_X\ketbra{\nu} \zeta_X \rangle ] \; , \\ \nonumber p_{X, \zeta_{X}} &= {\rm tr} [ \langle{\phi^L_X} \ketbra{\zeta_{X}}\phi^L_X \rangle ] \; , \end{align} which are the probabilities of projecting $\ket \nu$ into $\ket{\zeta_X}$, and $\ket{\zeta_X}$ into $\ket X$, respectively, after a measurement (on the corresponding Hilbert spaces). A generalization of the measurement-based method is described in Table I. The probability of success is $p_s \ge p_{\nu, \zeta_{X}} \cdot p_{X, \zeta_{X}}$. \begin{tabular}{m{8cm} c} \label{GMBM} Table I. Generalized measurement-based method\\ \hline \hline \begin{description} \item[i- ] Prepare $\ket \nu$ \item[ii- ] Measure $\ket{\zeta_X}$ \item[iii-] Measure $\ket X$ \end{description} \\ \hline \end{tabular} We now obtain the time $T$ of solving SEARCH (with probability $p_s$) with the generalized measurement-based method. We let $G_{X}$ be the Hamiltonian that has $\ket {\zeta_{X}}$ as unique ground state and the corresponding spectral gap of $G_X$ is $\Delta^{\mathcal{U}_X}$. $G_X$ acts on the Hilbert space ${\cal H}'$ and depends on $O_X$. $T$ is determined by the total time of evolution with $G_X$ needed to simulate the measurement of $\ket{\zeta_X}$ in step ii. Using the phase estimation algorithm~\cite{kitaev_quantum_1995} or evolution randomization~\cite{boixo:qc2009a}, this time is \begin{align} \label{eq:GMBcost} T = c/\Delta^{\mathcal{U}_X} \; , \end{align} for some constant $c\ge \pi$. The lower bound in the oracle cost of SEARCH can then be used to set a lower bound on $T$ or, equivalently, an upper bound in $\Delta^{\mathcal{U}_X}$. This results from noting that the evolution under $G_X$ can be well approximated with a discrete sequence of unitaries that contains $O_X$. Nevertheless, to make a rigorous statement on $\Delta^{\mathcal{U}_X}$, some assumptions on $G_X$ and the ground state are needed. We provide such assumptions and our main results in the next section. \section{Gap bounds} We list three assumptions on $G_X$ and its ground state, $\ket{\zeta_X}$. \label{sec:mainresult} {\bf Assumption 1}: \begin{align} \nonumber p_{\nu, \zeta_{X}} \in \Theta(1) \; \forall X \; . 
\end{align} That is, there exists an $X$-independent state $\ket \nu$ that can be projected into $\ket {\zeta_{X}}$, with high probability, after a measurement. {\bf Assumption 2}: \begin{align} \nonumber p_{X, \zeta_{X}} \in \Theta(1) \; \forall X \; . \end{align} That is, $\ket {\zeta_{X}}$ can be projected into $\ket X$, with high probability, after a measurement. Assumptions 1 and 2 result in a probability of success $p_s \in \Theta(1)$ when solving SEARCH with the generalized measurement-based method of Table I. Assumptions 1 and 2 may be combined into one as described in Appendix~\ref{appendix0}. Also, a generalization of Assumption 2 to any circuit $\mathcal{U}$ is a requirement of having a ground state with large probability of being in the state output by the circuit after measurement. This property is desired for Hamiltonians involved in the adiabatic simulation of quantum circuits. {\bf Assumption 3}: For all $t \in \mathbb{R}$ and fixed $\epsilon$, $0 \le \epsilon<1$, there exists a unitary operation ${ W_X}=(S.\tilde O_X)^r$, where $S$ is also a unitary operation that does not depend on $X$, $ \tilde O_X = O_X \otimes \one$ is the oracle for SEARCH acting on the larger Hilbert space ${\cal H}'$, $r \le |c' t|^{\gamma}$, and \begin{align} \nonumber \| e^{i G_X t} - W_X \| \le \epsilon \; . \end{align} $c'>0$ and $\gamma \ge 0$ are constants. Assumption 3 implies that the evolution operator determined by $G_X$ can be approximated, at precision $\epsilon$, by a sequence of unitary operations that uses the oracle order $|c' t|^\gamma$ times. For some specific $G_X$, such an approximation may follow from the results in Refs.~\cite{cleve_query_2009,berry_efficient_2007,wiebe_product_2010} on Hamiltonian simulation (see Sec.~\ref{sec:discussion}). {\bf Theorem.} If $G_X$ and $\ket{\zeta_X}$ satisfy Assumptions 1, 2, and 3, \begin{align} \nonumber \Delta^{\mathcal{U}_X} \in \mathcal{O}(1/L^{1/\gamma}) \; . \end{align} In addition, if $G_X$ satisfies a frustration-free property~\cite{bravyi_stoquastic_2009,somma_gap_2013}, \begin{align} \nonumber \Delta^{\mathcal{U}_X} \in \mathcal{O}(1/L^{2/\gamma}) \; . \end{align} The definition of a frustration-free Hamiltonian is included in the proof. The second bound applies under an additional requirement on $G_X$. This requirement together with the constants for the upper bounds are also discussed in the proof. We note that the gap in the second upper bound may not be the ``relevant'' gap for the adiabatic simulation. In certain cases, for example, the adiabatic simulation may not allow for transitions from the ground state to the first-excited state due to symmetry reasons. Nevertheless, the first bound still holds for the relevant gap in these cases. {\bf Proof.} Simulating the measurement in step ii of the generalized measurement-based method requires an evolution time $T = c /\Delta^{\mathcal{U}_X}$ [Eq.~\eqref{eq:GMBcost}]. From Assumption 3, the evolution can be approximated by a quantum circuit that uses the oracle $r$ times, with $r \le (c'T)^\gamma$. The lower bound on the cost of solving SEARCH~\cite{bennett_searchbound_1997} implies \begin{align} \nonumber \left(\frac{c' c}{ \Delta^{\mathcal{U}_X}} \right)^\gamma \ge r \ge \alpha \sqrt N \ge \alpha 2L\; , \end{align} where $\alpha>0$ is a constant because $p_s \in \mathcal{O}(1)$. Then, $\Delta^{\mathcal{U}_X} \le c' c/(2 \alpha L)^{1/\gamma}$. 
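For instance (an illustrative special case that is not spelled out in the original argument): take $\gamma=1$ and suppose that $\Delta^{\mathcal{U}_X}$ were of order $1/\sqrt L$. Then the measurement in step ii could be simulated in time $T=c/\Delta^{\mathcal{U}_X}\in\mathcal{O}(\sqrt L)$ and, by Assumption 3, with $\mathcal{O}(\sqrt L)=\mathcal{O}(N^{1/4})$ uses of $O_X$, while Assumptions 1 and 2 guarantee a constant success probability for SEARCH; this contradicts the $\Omega(\sqrt N)$ lower bound, so the gap cannot be that large.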
$G_X$ is a frustration-free Hamiltonian if it is a sum of positive semidefinite terms and the ground state $\ket{\zeta_X}$ is a ground state of every term~\cite{bravyi_stoquastic_2009,somma_gap_2013,feiguin_renorm_2013}. In this case, it is possible to preprocess $G_X$ and build a Hamiltonian $\tilde G_{X}$ that has \begin{align} \label{eq:modHgroundstate} | \tilde \zeta_X \rangle = \ket{\zeta_X} \otimes \ket 0_{\rm a} \end{align} as (unique) eigenstate of eigenvalue zero, where $\ket 0_{\rm a}$ denotes some simple, $X$-independent state of an ancillary system a. The corresponding spectral gap of $\tilde G_X$ for this state is $\tilde \Delta^{\mathcal{U}_X} \ge \sqrt{ \Delta^{\mathcal{U}_X}}$ -- see Ref.~\cite{somma_gap_2013} for details on spectral gap amplification. Then, SEARCH can be solved with probability $p_s \in \mathcal{O}(1)$, using the generalized measurement-based method, by evolving with $\tilde G_X$ for time $T =c /\sqrt{ \Delta^{\mathcal{U}_X}}$ ~\footnote{ To be rigorous, the state $\ket \nu$ has to be redefined as $\ket \nu \otimes \ket 0_{\rm a}$ for this case.}. If Assumption 3 also applies for approximating the evolution operator $e^{-i \tilde G_Xt}$, then $\sqrt{ \Delta^{\mathcal{U}_X}} \le c' c/(2 \alpha L)^{1/\gamma}$. This completes the proof. {\bf Corollary.} If $\gamma=1$, then $\Delta^{\mathcal{U}_X} \in \mathcal{O}(1/L)$. In addition, if $G_X$ is frustration free as explained above, $\Delta^{\mathcal{U}_X} \in \mathcal{O}(1/L^2)$. It is possible to achieve $\gamma \rightarrow 1$ for some $G_X$ (see Sec.~\ref{sec:discussion}). {\bf Corollary.} If the eigenvalues of $H^\mathcal{U}$ do not depend on $\mathcal{U}$, the upper bounds on $\Delta^{\mathcal{U}_X}$ are upper bounds on $\Delta^\mathcal{U}$. \section{Discussion: Validity of the Assumptions and Implications} \label{sec:discussion} We review the validity of the assumptions and implications for some constructions found in the literature. The first is the standard construction in Ref.~\cite{aharonov_adiabatic_2007}, also discussed in Sec.~\ref{sec:intro}. In this case, we consider a modification of Grover's algorithm so that $\mathcal{U}_X = \one^{L/4} (RO_X)^{L/4} \one^{L/4}$, with $L \in \Theta(\sqrt N)$ and $\one$ the trivial (identity) operation. Such a modification is unnecessary but it simplifies the analysis below. The state output by the modified circuit is unchanged; the only change is in the Hamiltonians. As before, we let $G_X =H^{ \mathcal{U}_X}$ be the Hamiltonian associated with $\mathcal{U}_X$ and $\ket{\zeta_X}= \ket{\psi^{\mathcal{U}_X}}$ be its ground state [i.e., the history state of Eq.~\eqref{eq:historystate} with $\mathcal{U}=\mathcal{U}_X$]. For the modified circuit, the ground state has large overlap with the $X$-independent state \begin{align} \nonumber \ket \nu = \ket {\phi^0} \otimes \frac 1 {\sqrt{L/4+1}} \sum_{l=0}^{L/4} \ket l_{\rm c} \; . \end{align} Similarly, $\ket{\zeta_X}$ has large overlap with the state \begin{align} \nonumber \ket { X} \otimes \frac 1 {\sqrt{L/4}} \sum_{l=3L/4}^{L} \ket l_{\rm c} \; , \end{align} because $\ket X \approx \ket{\phi^L_X} $ [see Eq.~\eqref{eq:halfgroverstate}]. These Eqs. imply $p_{\nu,\zeta_X} \approx 1/4$ and $p_{X,\zeta_X} \approx 1/4$, so that Assumptions 1 and 2 are readily satisfied. To study Assumption 3, we write \begin{align} \nonumber G_{X}&= - O_X \otimes \sum_{l : U^l = O_X} [ \ket {l}\! \bra{l-1}_{\rm c} + \ket {l-1} \! \bra{l}_{\rm c}] +\ldots \\ \label{eq:Hrep} &= O_X \otimes P_{\rm c} + H_{\rm s-c} \; . 
\end{align} $P_{\rm c}$ is a Hamiltonian acting on the clock register that is a sum of commuting terms like $\ket {l}\! \bra{l-1}_{\rm c} + \ket {l-1} \! \bra{l}_{\rm c}$: the oracles $O_X$ are interleaved with the operations $R$ in Grover's algorithm. Then, the eigenvalues of $P_{\rm c}$ are $\pm 1$ and $\| P_{\rm c} \| \le 1$, where $\| . \|$ is the operator norm. $H_{\rm s-c}$ is a system-clock Hamiltonian that does not depend on $X$: $H_{\rm s-c}$ is a sum of $H_{\rm input}$ and those terms in $H_{\rm Feynman}^{\mathcal{U}_X}$ that do not depend on $O_X$. Using the results in Ref.~\cite{cleve_query_2009}, the operator $\exp\{iG_Xt\}$ can be well approximated using $ \mathcal{O}(|t| \log|t|)$ oracles $O_X$ (see Appendix~\ref{appendixA}). Thus, Assumption 3 is satisfied for the construction of Ref.~\cite{aharonov_adiabatic_2007} and $\gamma \rightarrow 1$ asymptotically. To prove that $G_X$ is frustration free, we note that \begin{align} \label{eq:modHtransf} G_X = W(\mathcal{U}_X)^{\;} H^\one W(\mathcal{U}_X)^{\dagger} \; , \end{align} where $H^\one$ is the Hamiltonian of Eq.~\eqref{eq:standardH} for the trivial circuit and \begin{align} \label{eq:modHtransf2} W(\mathcal{U}_X)^{\;} = \sum_{l=0}^L U^l \otimes \ketbra l_{\rm c} \end{align} is a unitary operation. For the modified Grover's algorithm, $U^l \in \{ \one, R, O_X \}$. It is simple to verify that $\ket{\psi^\one} \propto \ket{\phi^0}\sum_l \ket l_{\rm c}$, $h^{\one,l} \ket{\psi^\one} =0$, $h^{\one,l} \ge 0$, $H_{\rm input} \ket{\psi^\one}=0$, and $H_{\rm input} \ge 0$ (see the numerical check below). This implies that $H^\one$ is frustration free and so are $G_X$ and $H^\mathcal{U}$ for any $\mathcal{U}$. Then, there exists \begin{align} \nonumber \tilde G_X = W(\mathcal{U}_X)^{\;} \tilde H^\one W(\mathcal{U}_X)^{\dagger} \end{align} whose ground state is $|\tilde\zeta_X \! \rangle=\ket{\zeta_X} \otimes \ket 0_{\rm a}$ and whose gap is $\sqrt{\Delta^{\mathcal{U}_X}}$ \footnote{While the subspace of eigenvalue zero of $\tilde G_X$ is highly degenerate, the degeneracy is irrelevant and can be easily removed by adding other $X$-independent terms~\cite{somma_gap_2013}.}. Here, a is an ancillary system of dimension $L+n$. The operators $h^{\mathcal{U},l}$ have eigenvalues $0,1$ and $\sqrt{h^{\mathcal{U},l}}=h^{\mathcal{U},l}$. Then, from the results in Ref.~\cite{somma_gap_2013}, Sec. IV, we obtain \begin{align} \label{eq:modH1} \tilde G_X = \tilde H^{\mathcal{U}_X}_{\rm Feynman} + \tilde H_{\rm input} \; , \end{align} with \begin{align} \nonumber &\tilde H^{\mathcal{U}_X}_{\rm Feynman} = \sum_{l=1}^L h^{\mathcal{U}_X,l} \otimes [\ket l \! \bra 0_{\rm a} + \ket 0 \! \bra l_{\rm a} ] \; , \\ \nonumber &\tilde H_{\rm input} = \sum_{j=1}^n \ketbra- _j \otimes \ketbra 0_{\rm c} \otimes \\ \nonumber & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \otimes [\ket {L+j} \! \bra 0_{\rm a} + \ket 0 \! \bra {L+j}_{\rm a} ] \; . \end{align} When $U^l=O_X$ in the modified Grover's algorithm, \begin{align} \nonumber h^{\mathcal{U}_X,l} = \frac 1 2 [\one \otimes ( \ketbra l_{\rm c} + \ketbra{l-1}_{\rm c} ) - \\ \nonumber - O_X \otimes (\ket l \!\bra{l-1}_{\rm c} + \ket{l-1}\!\bra l_{\rm c})] \; . \end{align} Thus, another representation for $\tilde G_X$ is \begin{align} \label{eq:modHrep} & \tilde G_X = O_X \otimes \tilde P_{\rm c-a} + \tilde H_{\rm s-c-a} \; , \end{align} with $\| \tilde P_{\rm c-a} \| \le 1$ because $\| \tilde H^\mathcal{U}_{\rm Feynman} \| \le 1$ (see Appendix~\ref{appendix1}). The system-clock-ancilla Hamiltonian $\tilde H_{\rm s-c -a}$ is independent of $X$.
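The frustration-free property of $H^\one$ claimed above can also be made explicit numerically; the following minimal sketch (Python; not part of the original analysis, for one qubit and $L=3$) checks that every $h^{\one,l}$ and $H_{\rm input}$ is positive semidefinite and annihilates $\ket{\psi^\one}$:
\begin{verbatim}
# Frustration-free check for the trivial circuit: each term h^{1,l} and
# H_input is >= 0 and annihilates |psi^1> = |+> (sum_l |l>_c)/sqrt(L+1).
import numpy as np

L, I2 = 3, np.eye(2)
minus = np.array([1, -1]) / np.sqrt(2)
kc = lambda l: np.eye(L + 1)[l]

psi1 = np.kron(np.array([1, 1]) / np.sqrt(2), np.ones(L + 1) / np.sqrt(L + 1))
terms = [0.5 * np.kron(I2, np.outer(kc(l) - kc(l - 1), kc(l) - kc(l - 1)))
         for l in range(1, L + 1)]                                     # h^{1,l}
terms.append(np.kron(np.outer(minus, minus), np.outer(kc(0), kc(0))))  # H_input
for h in terms:
    print(np.linalg.eigvalsh(h).min() > -1e-12, np.linalg.norm(h @ psi1) < 1e-12)
\end{verbatim}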
Then, the evolution operator $e^{i\tilde G_X t}$ can be approximated from the results in Ref.~\cite{cleve_query_2009} using the oracle $\mathcal{O}(|t| \log|t|)$ times and the gadget in Appendix~\ref{appendixA}. It follows that $\gamma \rightarrow 1$ asymptotically for this case as well, and the gap satisfies $\Delta^{\mathcal{U}_X} \in \tilde \mathcal{O}(1/L^2)$. (The $\tilde \mathcal{O}$ notation accounts for the additional logarithmic factor.) This upper bound is also valid for any $\Delta^\mathcal{U}$, because the eigenvalues of $H^\mathcal{U}$ do not depend on $\mathcal{U}$ [Eq.~\eqref{eq:modHtransf}]. Our result is compatible with the lower bound on $\Delta^\mathcal{U}$ obtained in Ref.~\cite{aharonov_adiabatic_2007} (see Sec.~\ref{sec:intro}). It proves that our technique to establish limits in the gap is effective. Nevertheless, as we show below, our technique is powerful when analyzing the gaps of Hamiltonians that are simple modifications to the $G_X$ above, where obtaining the spectrum directly can be challenging. We note again that, since the local Hamiltonian constructed in Ref.~\cite{aharonov_adiabatic_2007} has a smaller gap than that of $H^\mathcal{U}$ or $G_X$, the bound on the gap of $G_X$ translates into a bound on the gap of the local Hamiltonian. We use the previous analysis to show a more general result. Consider a general Hamiltonian $H^\mathcal{U}=W(\mathcal{U}) H^\one W(\mathcal{U})^\dagger$ for the adiabatic simulation of a quantum circuit, which uses a clock register, and whose ground state is of the form \begin{align} \nonumber \ket{\psi^\mathcal{U}}& = W(\mathcal{U}) \ket{\psi^\one} \\ \label{eq:generalhistory} & = \sum_{l=0}^L \alpha^l \ket{\phi^l} \otimes \ket l_{\rm c} \; , \end{align} and $\ket{\psi^\one} = \ket{\phi^0} \otimes \sum_l \ket l_{\rm c}$. With no loss of generality, we can assume that there exists $l_0$ such that \begin{align} \label{eq:amplitudecondition} \sum_{l=l_0}^L |\alpha^l|^2 \in \Theta(1) \; . \end{align} If this condition is not satisfied, we can always apply an operation that permutes the clock states or we can add trivial operations to the circuit so that Eq.~\eqref{eq:amplitudecondition} is satisfied (the spectrum of $H^\mathcal{U}$ is unchanged). We let $l_0$ be the largest $l$ to satisfy Eq.~\eqref{eq:amplitudecondition}. Then, we consider a modification of Grover's algorithm so that \begin{align} \nonumber \mathcal{U}_X = \ketbra 0_{\rm b} \otimes \one + \ketbra 1_{\rm b} \otimes \left(\one^{l_0} .(RO_X)^{L-l_0}\right) \; , \end{align} where b is an ancillary qubit (see Appendix~\ref{appendix0}). $L \in \Theta(\sqrt N)$. Basically, the modified Grover's algorithm acts trivially, if the state of an ancillary qubit is $\ket 0_{\rm b}$, or implements the original Grover's algorithm, if the state of the ancilla is $\ket 1_{\rm b}$. The initial state is $\ket+_{\rm b} \otimes \ket{\phi^0}$, and $\ket{\phi^0}$ is the equal superposition state as required in Grover's algorithm. Assumptions 1 and 2 then follow from Eq.~\eqref{eq:amplitudecondition}, for those ground states that can be described by Eq.~\eqref{eq:generalhistory}. Additionally, if the Hamiltonian associated with $\mathcal{U}_X$ can be represented as in Eq.~\eqref{eq:Hrep}, with $\| P_{\rm c} \| \le 1$, the evolution under $G_X$ can be well approximated using $\mathcal{O}(t \log t)$ oracles and the upper bound on $\Delta^{\mathcal{U}_X}$ is of $\tilde \mathcal{O}(1/L)$. 
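To illustrate the role played by the $\mathcal{U}$-independence of the spectrum, the following minimal numerical sketch (Python; not part of the original analysis, using a toy one-qubit circuit with arbitrary gates) checks that the spectrum of $H^\mathcal{U}$ of Eq.~\eqref{eq:standardH} is the same for two different gate sequences:
\begin{verbatim}
# The spectra of H^U for two different gate sequences coincide, consistent
# with H^U being unitarily equivalent to H^1 (one qubit, L = 3).
import numpy as np

L, I2 = 3, np.eye(2)
Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Xg = np.array([[0, 1], [1, 0]])
Zg = np.diag([1.0, -1.0])
minus = np.array([1, -1]) / np.sqrt(2)
kc = lambda l: np.eye(L + 1)[l]

def HU(gates):
    H = np.kron(np.outer(minus, minus), np.outer(kc(0), kc(0)))        # H_input
    for l, U in enumerate(gates, start=1):                             # h^{U,l}
        H = H + 0.5 * (np.kron(I2, np.outer(kc(l), kc(l)) + np.outer(kc(l-1), kc(l-1)))
                       - np.kron(U, np.outer(kc(l), kc(l-1)))
                       - np.kron(U.T.conj(), np.outer(kc(l-1), kc(l))))
    return H

print(np.allclose(np.linalg.eigvalsh(HU([Hd, Xg, Hd])),
                  np.linalg.eigvalsh(HU([Zg, Hd, Xg]))))               # True
\end{verbatim}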
Hamiltonians admitting such a representation include those $H'^{\mathcal{U}}$ arising from modified Feynman Hamiltonians, where $H'^\mathcal{U}_{\rm Feynman} = \sum_l \beta^l h^{\mathcal{U},l}$, $|\beta^l | \le 1$, and those Hamiltonians that have an additional term \begin{align} \nonumber H_{\rm pointer} = \sum_l E^l \, \one \otimes \ketbra l_{\rm c} \; , \end{align} that acts solely in the clock space. For those $H'^\mathcal{U}$, the spectrum is independent of $\mathcal{U}$ (i.e., $\Delta^{\mathcal{U}_X}=\Delta^\mathcal{U}$) and, in particular, \begin{align} \nonumber G_X = W^{\!}(\mathcal{U}_X) H'^\one W(\mathcal{U}_X)^\dagger \; \end{align} [see Eqs.~\eqref{eq:modHtransf} and~\eqref{eq:modHtransf2}]. The unitaries $U^l$ involved in the definition of $W(\mathcal{U}_X)$ are $U^l \in \{ \one, \ketbra 0_{\rm b} \otimes \one + \ketbra 1_{\rm b} \otimes O_X, \ketbra 0_{\rm b} \otimes \one + \ketbra 1_{\rm b} \otimes R \}$, for the current $\mathcal{U}_X$. $H'^\one$ acts trivially in the system and has tridiagonal form in the basis $\{\ket 0 _{\rm c} , \ldots, \ket L_{\rm c} \}$. Then, with no loss of generality, we can assume that $H'^\one$ is frustration free \footnote{ The frustration-free property can be obtained by adding, for example, a constant to $H'^\one$ so that its lowest eigenvalue is zero.}. It follows that $G_X$ is also frustration free and we can build \begin{align} \nonumber \tilde G_X = W^{\;}(\mathcal{U}_X) \tilde H^\one W^\dagger(\mathcal{U}_X) \; , \end{align} by using the results of Ref.~\cite{somma_gap_2013}. Because $\tilde H^\one$ is also tridiagonal in the basis $\{\ket 0 _{\rm c} , \ldots, \ket L_{\rm c} \}$, $\tilde G_X$ admits a representation of the form of Eq.~\eqref{eq:modHrep} in this case, with $\|\tilde P_{\rm c-a}\| \le 1$. Then, the oracle cost of simulating $\tilde G_X$ for time $t$ is also $\mathcal{O}(t \log t)$. This implies that, for modified Feynman Hamiltonians, the second bound on the gap applies with $\gamma \rightarrow 1$, and $\Delta^\mathcal{U} \in \tilde \mathcal{O}(1/L^2)$. A few remarks are in order. First, we note that the above result contradicts a statement in Ref.~\cite{lloyd_adiabatic_2008} claiming that the gap can be amplified to order $1/L$ by including a term of the form $H_{\rm pointer}$. Second, the fact that an upper bound on $\Delta^\mathcal{U}$ of order $1/L^{2/\gamma}$ is obtained when the Hamiltonian satisfies the frustration-free property does not contradict that, for some Hamiltonians, the ``relevant'' gap in a certain subspace (e.g., the translationally invariant subspace) may be larger. Nevertheless, such a relevant gap should be limited by the bound on $\Delta^\mathcal{U}$ obtained without assuming the frustration-free property (i.e., $1/L^{1/\gamma}$ in this case). A third remark concerns the applicability of our results to those constructions in which the Hamiltonians are associated with one-dimensional quantum systems, such as the one in Ref.~\cite{aharonov_line_2009}. These constructions would require ``breaking'' the oracle $O_X$ into local, two-qubit pieces. While Assumptions 1 and 2 are easy to verify, a new version of Assumption 3 is required for this case. Such a version may be possible even if the oracle is now a composition of two-qubit local operations, because the evolution operator with the one-dimensional Hamiltonian may ``reconstruct'' a full oracle after a certain unit of evolution time. However, we do not have any rigorous result for this case, and finding other suitable versions of Assumption 3 is work in progress.
Finally, Assumptions 1 and 2 do not apply to the construction in A. Mizel, e-print: arXiv:1002.0846 (2010). \section{Acknowledgements} We thank S. Boixo, R. Blume-Kohout, D. Gossett, A. Landahl, and D. Nagaj for insightful discussions. We acknowledge support from the Laboratory Directed Research and Development Program at Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. \hspace{1cm} \begin{appendix} \section{More on Assumptions 1 and 2} \label{appendix0} In general, Assumption 2 is mostly a statement about the ground state of the Hamiltonian $H^\mathcal{U}$ that simulates a quantum circuit, $\ket{\psi^\mathcal{U}}$. Ideally, such a state has a large probability of being in the state output by the circuit, $\ket{\phi^L}$; that is, \begin{align} \nonumber \mathrm{Pr}(\phi^L|\psi^\mathcal{U})= \mathrm{Tr} [\bra{\phi^L}\psi^\mathcal{U} \rangle \langle \psi^\mathcal{U} \ket{\phi^L} ] \in \Theta(1) \; . \end{align} We can then consider a modified quantum circuit that uses an additional ancilla b prepared in $\ket +_{\rm b}$ so that it applies the unitary $\mathcal{U}$ (original circuit) controlled on the state $\ket 1$ of the ancilla, or does nothing otherwise. If we denote the modified circuit by $\bar \mathcal{U}$, the output state is \begin{align} \nonumber \ket{\bar \phi^L}&= \bar \mathcal{U} \left ( \ket + \otimes \ket{\phi^0} \right) \\ \nonumber & =\frac 1 {\sqrt 2} [\ket 0_{\rm b} \otimes \ket{\phi^0} + \ket 1_{\rm b} \otimes \ket { \phi^L}]\; . \end{align} In this way, if the ground state of $H^\mathcal{U}$ is a superposition of system-clock states of the form $\ket {\phi^l} \otimes \ket l_{\rm c}$, the ground state of $ H^{\bar \mathcal{U}}$ will be a superposition of states of the form $\ket {\bar \phi^l} \otimes \ket l_{\rm c}$, with $\ket{\bar \phi^l} = \bar U^l \cdots \bar U^0 (\ket +_{\rm b} \otimes \ket {\phi^0})$. When $\mathcal{U}=\mathcal{U}_X$ corresponds to Grover's algorithm, if $|\psi^{\bar \mathcal{U}}\rangle$ has a large probability of being in $| \bar \phi^L\rangle$ after measurement, then it has a large probability of being in both $\ket{\phi^0}$ and $\ket{\phi^L}$ after the respective measurements. Since $\ket{\phi^0}$ is independent of $X$, $| \bar \phi^L \rangle$ satisfies Assumptions 1 and 2 simultaneously. Thus, in Grover's algorithm, Assumptions 1 and 2 can be combined into a single one for Hamiltonians whose ground states are superpositions of $\ket {\phi^l} \otimes \ket l_{\rm c}$. The gap bounds will apply to $H^{\bar \mathcal{U}_X}$ in this case. \section{Oracle simulation of the Feynman Hamiltonian associated with Grover's algorithm} \label{appendixA} Following Ref.~\cite{cleve_query_2009}, the first step is to use the Trotter-Suzuki approximation, which, for the evolution under $G_X=O_X \otimes P_{\rm c} + H_{\rm s-c}$, yields terms of the form \begin{align} \label{eq:coupledoracle} e^{-i s O_X \otimes P_{\rm c}} \end{align} for some small $s \in \mathbb{R}$. The goal in this section is to present a gadget that implements Eq.~\eqref{eq:coupledoracle} (i.e., a fractional oracle) using $O_X$. Then, the problem is reduced to the one analyzed in Ref.~\cite{cleve_query_2009}, for which the oracle cost is known.
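The gadget presented below rests on diagonalizing the clock operator $P_{\rm c}$ with an $X$-independent unitary; the following minimal numerical check (Python; not part of the original text, with a random stand-in for $P_{\rm c}$) illustrates the identity that is used:
\begin{verbatim}
# Check: if V_c diagonalizes P_c, conjugating exp(-i s O_X x P_c) by (1 x V_c)
# gives exp(-i s O_X x D_c) with D_c diagonal (system x clock ordering).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N, Lc, X, s = 8, 5, 3, 0.3                      # toy dimensions and parameters
OX = np.eye(N); OX[X, X] = -1.0                 # oracle O_X
Pc = rng.normal(size=(Lc, Lc)); Pc = Pc + Pc.T  # stand-in clock operator
Pc = Pc / np.linalg.norm(Pc, 2)                 # enforce ||P_c|| <= 1
lam, Vc = np.linalg.eigh(Pc)                    # Pc = Vc diag(lam) Vc^dag

lhs = np.kron(np.eye(N), Vc.conj().T) @ expm(-1j * s * np.kron(OX, Pc)) \
      @ np.kron(np.eye(N), Vc)
rhs = expm(-1j * s * np.kron(OX, np.diag(lam)))
print(np.allclose(lhs, rhs))                    # True
\end{verbatim}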
First, we note that there exists a unitary operation $V_{\rm c}$ such that \begin{align} \label{eq:coupledoracle2} V_{\rm c}^{\;} e^{-i s O_X \otimes P_{\rm c}} V_{\rm c}^\dagger = e^{-i s O_X \otimes D_{\rm c}} \; , \end{align} where $D_{\rm c}$ is a diagonal operator acting on the clock register, i.e., \begin{align} \nonumber D_{\rm c} = \sum_k \lambda_k \ketbra k_{\rm c} \; , \end{align} and $|\lambda_k| \le 1$ because $\|P_{\rm c}\| \le 1$. $V_{\rm c}$ commutes with $O_X$ and it does not depend on $X$. The ``gadget'' of Fig.~\ref{fig:exact} uses this observation to implement the operation of the {\em rhs} of Eq.~\eqref{eq:coupledoracle2}. Then, the desired operator of Eq.~\eqref{eq:coupledoracle} can be implemented by conjugating the circuit of Fig.~\ref{fig:exact} with $V_{\rm c}$. This has to be compared with Fig. 3 of Ref.~\cite{cleve_query_2009}. \begin{figure}\label{fig:exact} \end{figure} If we use the simulation of Fig.~\ref{fig:exact} in the scheme shown in Fig.4 of Ref.~\cite{cleve_query_2009}, the total number of oracles needed for approximating the evolution operator $e^{-i G_X t}$ is of order $\mathcal{O}(|t| \log|t|)$. This requires implementing other simulation ``tricks'' to reduce the oracle cost, such as reducing the Hamming weight of the state of the ancillas for each simulation of $e^{-i s O_X \otimes D_{\rm c}}$, coming from the Trotter-Suzuki approximation (see Ref.~\cite{cleve_query_2009} for more details). \section{The modified Hamiltonians $\tilde G_X$} \label{appendix1} The first modified Hamiltonian we analyze is the one in Eq.~\eqref{eq:modH1} for Grover's algorithm, and write $\tilde G_X = \tilde H^{\mathcal{U}_X}$. Then, \begin{align} \nonumber & \tilde G_X = \\ \nonumber & =- O_X \otimes \sum_{l : U^l=O_X} [ \ket {l}\! \bra{l-1}_{\rm c} + \ket {l-1} \! \bra{l}_{\rm c}] \otimes \\ \nonumber & \otimes[ \ket{l}\bra 0_{\rm a} + \ket 0 \bra{l}_{\rm a}]+ \ldots \\ \nonumber &= O_X \otimes \tilde P_{\rm c-a} + \tilde H_{\rm s-c-a} \; . \end{align} $H_{\rm s-c-a} $ is a Hamiltonian that contains terms of the system, clock, and ancilla a not included in the first term. It does not contain any term that depends on $O_X$, i.e., it contains only those with $R$ (and $\one$ for the modified algorithm). Because the set $\{ l : U^l=O_X \}$ involves only odd or even values of $l$ (i.e., $R$ and $O_X$ alternate in Grover's algorithm), the operator $P_{\rm c-a}$ is a sum of commuting terms, each of the form \begin{align} \nonumber - [ \ket {l}\! \bra{l-1}_{\rm c} + \ket {l-1} \! \bra{l}_{\rm c}] \otimes [ \ket{l}\bra 0_{\rm a} + \ket 0 \bra{l}_{\rm a}] \; . \end{align} The eigenvalues of each of these terms are $\pm 1$, implying that $\| P_{\rm c-a} \| =1$. \end{appendix} \end{document}
\begin{document} \title[The Homeomorphism Theorem ]{Homeomorphisms between limbs of the Mandelbrot set} \author{Dzmitry Dudko} \address{Research I, Jacobs University, Postfach 750 561, D-28725 Bremen, Germany; and: G.-A.-Universit\"at zu G\"ottingen, Bunsenstrasse 3--5, D-37073 G\"ottingen, Germany} \email{[email protected]} \author{Dierk Schleicher} \address{Research I, Jacobs University, Postfach 750 561, D-28725 Bremen, Germany} \email{[email protected]} \date{\today} \begin{abstract} We prove that for every hyperbolic component of the Mandelbrot set, any two limbs with equal denominators are homeomorphic so that the homeomorphism preserves periods of hyperbolic components. This settles a conjecture on the Mandelbrot set that goes back to 1994. \end{abstract} \maketitle \section{Introduction} The Mandelbrot set $\mathcal M$ is a set with a very rich combinatorial, topological, and geometric structure. It is often called ``self-similar'' because there are countably many dynamically defined homeomorphisms from $\mathcal M$ into itself, and the set of such homeomorphisms forms a semigroup. Moreover, there are many dynamically defined homeomorphisms from certain dynamically defined subsets of $\mathcal M$ to other subsets of $\mathcal M$. Perhaps the first such result was a homeomorphism from the $1/2$-limb of $\mathcal M$ to a subset of the $1/3$-limb of $\mathcal M$ constructed by Branner and Douady \cite{BD}; this class of homeomorphisms was later extended by Riedl \cite{Ri}. In \cite{BF1}, it was shown, using homeomorphisms to parameter spaces of certain higher degree polynomials, that any two limbs $\mathcal L_{p/q}$ and $\mathcal L_{p'/q}$ (with equal denominators) were homeomorphic. These homeomorphisms preserve the embedding into the plane so that they even extend to neighborhoods of these limbs within $\mathbb{C}$, preserving the orientation \cite{BF2}. All these homeomorphisms are constructed by quasiconformal surgery, and they all change the dynamics of the associated polynomials so that in general, periods of hyperbolic components are changed. At about the same time, it was observed \cite{LS} that there is a combinatorially defined bijection between the limbs $\mathcal L_{p/q}$ and $\mathcal L_{p'/q}$ that preserves periods of hyperbolic components, and it was conjectured that this would yield a homeomorphism between these limbs that preserved periods of hyperbolic components. An early attempt to prove this conjecture by quasiconformal surgery resulted in another proof of the theorem from \cite{BF1} that stayed within the quadratic family. A proof of this conjecture is the main result of the present paper; it can be stated as follows. \begin{maintheorem} For any hyperbolic component of $\mathcal M$, let $\mathcal L_{p/q}$ and $\mathcal L_{p'/q}$ be two limbs with equal denominators. Then there exists a homeomorphism between them that preserves periods of hyperbolic components. \end{maintheorem} Since our homeomorphism preserves periods of hyperbolic components, it can not extend to neighborhoods of the limbs. For a fixed $n\ge1$ consider the arrangement $\mathcal M_n$ of all hyperbolic components with periods up to $n$ (see Figure \ref{figure:CombMand} for an example). There is a combinatorial model $\mathcal M_{comb}$ of the Mandelbrot set that can be described as a limit of $\mathcal M_n$ in a certain sense \cite{Do}. 
Furthermore, there is a canonical continuous projection $\pi:\mathcal M\rightarrow \mathcal M_{comb}$, and any fiber $\pi^{-1}(c)$ is compact, connected, and full (a bounded set $X\subset\mathbb{C}$ is called \emph{full} if its complement has no bounded components). The famous ``MLC conjecture'' (``the Mandelbrot set is locally connected'') can be stated as saying that $\pi$ is a homeomorphism. \begin{figure} \caption{Combinatorics of hyperbolic components of $\mathcal M$ up to period 4.} \label{figure:CombMand} \end{figure} For any $p/q$ and $p'/q$ there is a canonical homeomorphism $f'$ between $\pi(L_{p/q})$ and $\pi(L_{p'/q})$ preserving periods of hyperbolic components. Our strategy is to show that $f'$ can be lifted up to the level of the Mandelbrot set; namely, we have the following commutative diagram: \begin{equation} \label{eq:diagram0} \begin{array}[c]{ccc} L_{p/q}&\stackrel{f}{\longrightarrow} &L_{p'/q}\\ \downarrow\scriptstyle{\pi}&&\downarrow\scriptstyle{\pi}\\ \pi(L_{p/q})&\stackrel{f'}{\longrightarrow}&\pi(L_{p'/q}). \end{array} \end{equation} We will show that this technique can be applied to any continuous map that ``respects'' small copies of the Mandelbrot set. This result fits into the vision of Douady expressed by the statement that ``combinatorics implies topology'': many results about the Mandelbrot set are discovered and described in terms of combinatorics, and these combinatorial results lead the way for topological statements. In our case, the combinatorial result had remained a topological conjecture since about 1994. The key progress that was required was the Decoration Theorem (see below). \subsection*{Outline of the paper.} In Section \ref{sect:Mand} we recall the notions of hyperbolic components, small copies of the Mandelbrot set, and combinatorial classes. The combinatorial model $\mathcal M_{comb}$ is defined as the quotient of $\mathcal M$. Section \ref{sect:internal_addresses} contains the definition and main properties of internal and angled internal addresses. They are coordinates for combinatorial classes. In Section \ref{sec:proof} we will construct Diagram \ref{eq:diagram0}. The homeomorphism $f':\pi(L_{p/q})\rightarrow \pi(L_{p'/q})$ exists by fundamental properties of angled internal addresses. As $f'$ coincides with the canonical homeomorphism on every small copy of the Mandelbrot set, there exists a bijection $f$ that makes Diagram \ref{eq:diagram0} commute. The continuity of $f$ follows from Yoccoz's results, the existence of the canonical isomorphism of all copies of the Mandelbrot set, and the Decoration Theorem. In Section \ref{sec:generalization} we will formulate a general statement that allows us to lift a continuous map from the level of the combinatorial model of the Mandelbrot set to the actual Mandelbrot set. \section{The Mandelbrot set} \label{sect:Mand} The \textit{Mandelbrot set} $\mathcal M$ is defined as the set of quadratic polynomials \begin{equation} \label{eq:Qadr} p_c\colon z\mapsto z^2+c \end{equation} with connected Julia sets. It is a compact, connected, and full set. As the Mandelbrot set is the parameter space of quadratic polynomials, there is additional structure (combinatorics) on $\mathcal M$ on top of the topology. For instance, $\mathcal M$ contains hyperbolic components and small copies of the Mandelbrot set. Both types of subsets have dynamical meaning.
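For readers who want to experiment, membership in $\mathcal M$ can be approximated with the standard escape-time test (a sketch in Python; it is not part of the original text and relies on the classical fact that the Julia set of $p_c$ is connected if and only if the critical orbit $0, c, c^2+c, \ldots$ stays bounded, equivalently never leaves the disk of radius $2$):
\begin{verbatim}
# Escape-time approximation of membership in M: for finite max_iter this only
# gives an outer approximation of the Mandelbrot set.
def in_mandelbrot(c, max_iter=1000):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False
    return True

print(in_mandelbrot(-1.0), in_mandelbrot(0.3))   # True (period-2 center), False
\end{verbatim}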
\subsubsection*{Hyperbolic components} A \emph{hyperbolic component of $\mathcal M$} is a connected component of the set of parameters $c\in\mathcal M$ so that $p_c$ has an attracting orbit. Assume that $p_c(z)=z^2+c:\mathbb{C}\rightarrow\mathbb{C}$ has a non-repelling periodic cycle; this means there is a periodic orbit $z_c$ of $p_c$ with multiplier of absolute value at most $1$. The periodic orbit $z_c$ is necessarily unique; let $\lambda(z_c)$ be its multiplier. Then it is known that: \begin{itemize} \item there is a hyperbolic component $\mathcal H$ such that $c\in\overline{\mathcal H}\subset\mathcal M$; \item the attracting orbit $z_c$ has constant period throughout $\mathcal H$; \item within $\mathcal H$ the cycle $z_c$ moves holomorphically; \item the multiplier map $\lambda\colon\mathcal H\to\mathbb D$ is a conformal isomorphism; this map extends to a homeomorphism from $\overline \mathcal H$ to $\overline\mathbb D$. \end{itemize} By definition, the period of $\mathcal H$ is the period of $z_c$. For every fixed $n\ge 1$ there are finitely many hyperbolic components with period $n$. The arrangement of hyperbolic components up to period $n$ gives an approximation (topological and combinatorial) to the Mandelbrot set (see Figure \ref{figure:CombMand}). The unique hyperbolic component of period $1$ is called the \textit{main hyperbolic component of the Mandelbrot set}. Let $\mathcal H_1$ and $\mathcal H_2$ be hyperbolic components with periodic orbits $z_c$ and $z'_c$ and periods $n_1$ and $n_2$, respectively. If the closures of $\mathcal H_1$ and $\mathcal H_2$ intersect, then the intersection is one point. Let us assume that the closures of $\mathcal H_1$ and $\mathcal H_2$ intersect, that $n_1\le n_2$, and that the point $c'$ is the intersection. It is known that: \begin{itemize} \item $n_2/n_1=q$ is an integer greater than $1$; \item at $c'$ the cycles $z_c$ and $z'_c$ collide: every point from the cycle $z_c$ collides with $q$ points from the cycle $z'_c$; \item the multiplier of $z_c$ at $c'$ is $\exp(2\pi i p/q )$, where $p$ is an integer coprime to (and less than) $q$; in particular, the unique non-repelling orbit of $p_{c'}$ is parabolic. \end{itemize} The closure of the connected component of $\mathcal M\setminus\mathcal H_1$ containing $\mathcal H_2$ is called the \emph{$p/q$-limb} $\mathcal L_{p/q}(\mathcal H_1)$ \textit{of} $\mathcal H_1$. It is known that $p/q$-limbs exist for all coprime $p/q$ with $q\ge 2$, and (the closure of) every component of $\mathcal M\setminus\mathcal H_1$ that does not contain the point $c=0$ is a limb of $\mathcal H_1$. If $\mathcal H_1$ is the main hyperbolic component, then $\mathcal L_{p/q}(\mathcal H_1)$ is called the (primary) \textit{$p/q$-limb} $\mathcal L_{p/q}$. We define \textit{the combinatorial class} $\widehat{\HH}$ of a hyperbolic component $\mathcal H$ as $\{c\in\overline{\mathcal H}\ |\ \lambda(z_c)\not= \exp(2\pi i p/q),\ q>1\}$, where $\lambda(z_c)$ is the multiplier of the (unique) non-repelling periodic cycle $z_c$ of a polynomial $p_c$; equivalently: \begin{equation} \label{eq:CombClos} \widehat{\HH}=\overline{\mathcal H}\setminus\bigcup_{q=2}^{\infty}\bigcup_{p}\mathcal L_{p/q}(\mathcal H). \end{equation} \subsubsection*{Small copies of the Mandelbrot set} The Mandelbrot set is a self-similar set in the following sense: there are countably many copies of the Mandelbrot set in $\mathcal M$; every such copy is canonically homeomorphic to $\mathcal M$, where the homeomorphism is given by the straightening theorem \cite{DH}.
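As a concrete illustration of both notions (this example is not in the original text, but uses only standard facts): the centers of the period-$3$ hyperbolic components, i.e.\ the parameters with a superattracting $3$-cycle, are the non-zero solutions of $p_c^{\circ 3}(0)=(c^2+c)^2+c=0$; one of them is real and is the center of the main component of a primitive small copy (the ``airplane'' copy), while the other two are the centers of the satellite components attached to the main hyperbolic component in the $1/3$- and $2/3$-limbs. Numerically (Python):
\begin{verbatim}
# Centers of the period-3 hyperbolic components: non-zero roots of
# (c^2 + c)^2 + c = c * (c^3 + 2c^2 + c + 1) = 0.
import numpy as np

roots = np.roots([1, 2, 1, 1])        # c^3 + 2c^2 + c + 1 = 0
print(np.sort_complex(roots))
# ~ -1.7549             : center of the primitive ("airplane") period-3 copy
# ~ -0.1226 +/- 0.7449i : centers of the satellites in the 1/3- and 2/3-limbs
\end{verbatim}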
In particular small copies are compact, connected, full sets and they preserve hyperbolic components: if a small copy intersects a hyperbolic component, then it contains it. Polynomials within small copies of $\mathcal M$ are called \emph{renormalizable}; polynomials within infinitely many nested small copies of $\mathcal M$ are called \emph{infinitely renormalizable}. Small copies are in one-to-one correspondence with hyperbolic components: for every hyperbolic component $\mathcal H$, there is a unique small copy $\mathcal M_\mathcal H\supset\mathcal H$ so that the canonical homeomorphism of $\mathcal M_\mathcal H$ sends $\mathcal H$ to the main hyperbolic component of $\mathcal M$, and every small copy of $\mathcal M$ is of this type for a unique component $\mathcal H$. For a small copy $\mathcal M_\mathcal H$, let $per(\mathcal M_\mathcal H)$ be the period of $\mathcal H$. Then the canonical homeomorphism from $\mathcal M_\mathcal H$ to $\mathcal M$ divides all periods of hyperbolic components by $per(\mathcal M_\mathcal H)$. If $\mathcal M_\mathcal H$ is a small copy of $\mathcal M$ within $\mathcal M$, then $\mathcal M\setminus\mathcal M_\mathcal H$ consists of countably many components, called ``decorations'' of $\mathcal M_\mathcal H$. If $\mathcal D$ is the closure of any such decoration, then $\mathcal D\cap\mathcal M_\mathcal H$ is a single Misiurewicz point (i.e., a parameter for which the critical orbit is strictly preperiodic). The following theorem was recently proved \cite{D} (see \cite{PR} for a different proof); it will be the fundamental motor for our theorem. \begin{decotheorem} For any $\varepsilon > 0$, there are at most finitely many connected components of $\mathcal M \backslash \mathcal M_{s}$ with diameter at least $\varepsilon$. \end{decotheorem} \subsubsection*{Yoccoz polynomials} A \emph{Yoccoz polynomial} is a quadratic polynomial in $\mathcal M$ for which all periodic orbits are repelling, and it is not infinitely renormalizable (equivalently, it does not belong to any hyperbolic component and is not within infinitely many small copies of $\mathcal M$). It is known that $\mathcal M$ is locally connected at Yoccoz polynomials, and stronger yet, that the corresponding fibers of $\mathcal M$ are trivial: this was shown in detail in \cite{Hu} for non-renormalizable parameters, but results of this kind are automatically preserved by finite renormalizations \cite{Sch2}. \subsection*{Combinatorial classes} A combinatorial class is an equivalence class of parameters with the same rational lamination. A \textit{combinatorial class} is either: \begin{itemize} \item a hyperbolic combinatorial class (as defined above); or \item the intersection of an infinite nested sequence of small copies of $\mathcal M$; \item a single point that does not belong to any combinatorial class of the first two types. \end{itemize} Any non-hyperbolic combinatorial class is always a compact, connected, and full set. A combinatorial class of the last type is exactly a Yoccoz parameter. There are two famous conjectures: ``The Mandelbrot set is locally connected'' (MLC) and ``hyperbolic dynamics is dense in the space of quadratic polynomials'' (the Fatou conjecture for quadratic polynomials). These are equivalent to the statements ``every non-hyperbolic combinatorial class is a point'' and ``every non-hyperbolic combinatorial class has no interior point'' respectively \cite{Do}. 
\subsection*{The combinatorial model of the Mandelbrot set.} Let us say that two points $c_1$ and $c_2$ are \emph{combinatorially equivalent} if $c_1$ and $c_2$ are in the same non-hyperbolic combinatorial class. The \textit{combinatorial model} $\mathcal M_{comb}$ of the Mandelbrot set is the quotient of $\mathcal M$ by the above equivalence relation. The associated projection $\pi:\mathcal M \rightarrow\mathcal M_{comb}$ is called canonical. It is known that $\mathcal M_{comb}$ is a connected, locally connected, compact, full set; $\pi$ is a continuous surjection and $\pi$ is a homeomorphism (i.e., injective) if and only if MLC is valid \cite{Do}. Hyperbolic components and small copies for $\mathcal M_{comb}$ are defined using the projection $\pi$. Yoccoz' theorem can be expressed as follows: if $c_n$ is a sequence of parameters in $\mathcal M$ so that $\pi(c_n)\to\pi(c)$ for some Yoccoz parameter $c\in\mathcal M$, then $c_n\to c$; and this is the statement we need. (Note that this does \emph{not} follow from the fact that $\mathcal M$ is locally connected at $c$; the stronger property is required that the fiber of $\mathcal M$ at $c$ is trivial; and Yoccoz indeed proves that; see \cite{Sch2}.) \section{Internal addresses of the Mandelbrot set} \label{sect:internal_addresses} In this section we recall the definition and main properties of internal and angled internal addresses. The main reference is \cite{Sch1}. The motivation for an internal address is to approximate any combinatorial class by a canonical sequence of (simpler) hyperbolic classes. Internal addresses (and angled internal addresses) are defined for combinatorial classes, hence there is no difference in the definitions for $\mathcal M$ and $\mathcal M_{comb}$. Consider a combinatorial class $C$ and a hyperbolic component $\mathcal H$. Assume that either $\widehat{\HH}=C$ or $C$ is not in the connected component of $\mathcal M\setminus\widehat{\HH}$ containing $0$. Then we say that $\mathcal H$ is \textit{closer to} $0$ than $C$ and write $\mathcal H\le C$. We also write $\mathcal H<C$ if $\mathcal H\le C$ and $\widehat{\HH}\not=C$. For any $C$ inductively define the (finite or infinite) sequence \begin{equation} \label{eq:IntAddr} \mathcal H_0< \mathcal H_1<\dots<\mathcal H_n< \dots \end{equation} such that $\mathcal H_0$ is the main hyperbolic component of $\mathcal M$ and $\mathcal H_n$ is of the smallest period satisfying $\mathcal H_{n-1}< \mathcal H_n \le C$. \begin{defprop} For any $C$ the sequence in (\ref{eq:IntAddr}) is unique. Define $S_n$ to be the period of $\mathcal H_n$ and let $p_n/q_n$ be the fraction so that $\mathcal H_{n+1}\subset\mathcal L_{p_n/q_n}(\mathcal H_n)$. The sequence \begin{equation}1=S_0 \rightarrow S_1\rightarrow S_2 \rightarrow \dots \end{equation} is called the \textbf{internal} address of $C$. The sequence \begin{equation} (S_0)_{p_0/q_0} \rightarrow (S_1)_{p_1/q_1}\rightarrow (S_2)_{p_2/q_2} \rightarrow \dots \end{equation} is called the \textbf{angled internal} address of $C$. \end{defprop} It is known \cite[Theorem 1.10]{Sch1} that an angled internal address uniquely describes a combinatorial class, where finite addresses correspond to hyperbolic classes. On the other hand, the internal address describes a combinatorial class up to the ``symmetry''.
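For illustration (this example is not in the original text, but is standard; compare \cite{Sch1}): the period-$2$ component attached to the main hyperbolic component has angled internal address $(1)_{1/2}\rightarrow 2$. The two period-$3$ components attached directly to the main component, lying in the limbs $\mathcal L_{1/3}$ and $\mathcal L_{2/3}$, share the internal address $1\rightarrow 3$ but have the distinct angled internal addresses $(1)_{1/3}\rightarrow 3$ and $(1)_{2/3}\rightarrow 3$, whereas the real period-$3$ component (with center $c\approx -1.7549$, the center of the ``airplane'' copy) lies in $\mathcal L_{1/2}$ and has internal address $1\rightarrow 2\rightarrow 3$. The first two components correspond to each other under the bijection behind the Main Theorem; exchanging their numerators is the simplest instance of Theorem~\ref{th:AnglIntAddr} below.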
More precisely, hyperbolic polynomials have the same internal addresses if and only if the dynamics of the polynomials on the Julia sets are topologically conjugate; this topological conjugation extends to a neighborhood of the Julia set, preserving the orientation, if and only if the two polynomials have the same angled internal address. Internal addresses are strictly increasing (finite or infinite) sequences of integers starting with $1$. Not every such sequence occurs for a combinatorial class of the Mandelbrot set: those that occur are called ``complex admissible''. An explicit characterization of the complex admissible sequences was given in \cite{BS}. The following theorem shows the ``valency of the symmetry'': \begin{theorem}[{\cite[Theorem 2.3]{Sch1}}] \label{th:AnglIntAddr} If an angled internal address describes a combinatorial class in the Mandelbrot set, then the numerators $p_k$ can be changed arbitrarily (coprime to $q_k$) and the modified angled internal address still describes a combinatorial class in the Mandelbrot set. \end{theorem} In other words, complex admissibility is a property of internal addresses, not of angled internal addresses. In fact, any internal address uniquely determines the denominators $q_k$ of any associated angled internal address, while Theorem~\ref{th:AnglIntAddr} says that the numerators $p_k$ are completely arbitrary (coprime to $q_k$). Consider a small copy $\mathcal M'$ of the Mandelbrot set, and assume $\mathcal H'$ is the main hyperbolic component of $\mathcal M'$. Then $\mathcal H'$ has a finite angled internal address: \begin{equation} (S'_0)_{p'_0/q'_0} \rightarrow (S'_1)_{p'_1/q'_1}\rightarrow (S'_2)_{p'_2/q'_2} \rightarrow \dots \rightarrow S'_n. \end{equation} \begin{theorem}[{\cite[Proposition 2.7]{Sch1}}] \label{th:SmallCop} A combinatorial class $C$ belongs to the small copy $\mathcal M'$ if and only if the angled internal address of $C$ is \begin{equation} (S'_0)_{p'_0/q'_0} \rightarrow \dots \rightarrow (S'_{n-1})_{p'_{n-1}/q'_{n-1}}\rightarrow (S'_n)_{p_n/q_n}\rightarrow (S_{n+1})_{p_{n+1}/q_{n+1}}\rightarrow\dots \end{equation} and $S'_n|S_{n+k}$ for $k\ge 1$. The canonical homeomorphism between $\mathcal M'$ and $\mathcal M$ sends $C$ to the combinatorial class with the angled internal address \begin{equation} (1)_{p_n/q_n}\rightarrow (S_{n+1}/S'_n)_{p_{n+1}/q_{n+1}}\rightarrow (S_{n+2}/S'_n)_{p_{n+2}/q_{n+2}}\rightarrow \dots. \end{equation} \end{theorem} We need one more result from \cite{Sch1}. \begin{theorem}[{\cite[Proposition 2.6]{Sch1}}] \label{th:BranchIntAddr} Consider a hyperbolic component $\mathcal H$ and a combinatorial class $C$ in the $p/q$-limb of $\mathcal H$. If $q\ge 3$, then $\mathcal H$ occurs in the internal address of $C$; more precisely, the internal address of $\mathcal H$ is a finite initial sequence of the internal address of $C$. \end{theorem} This result can be expressed as follows: for a given combinatorial class $C$, there are usually many hyperbolic components $\mathcal H<C$, and most of them are not associated with the internal address of $C$. For those that are not, $C$ is in the $1/2$-limb of $\mathcal H$. \section{Proof of the homeomorphism theorem} \label{sec:proof} In this section, we prove the main theorem in an apparently stronger form: consider hyperbolic components $\mathcal H_1$ and $\mathcal H_2$ with identical internal addresses. Then we will construct a homeomorphism between the limbs $\mathcal L_{p/q}(\mathcal H_1)$ and $\mathcal L_{p'/q}(\mathcal H_2)$, where $q\ge 3$.
The original statement of the Main Theorem describes the case $\mathcal H_1=\mathcal H_2$. \begin{remark*} This more general version can easily be deduced from the statement of the Main Theorem, because there is a unique hyperbolic $\mathcal H'$ at which the angled internal addresses of $\mathcal H_1$ and $\mathcal H_2$ branch off, in the sense that $\mathcal H_1$ and $\mathcal H_2$ are contained in two different limbs at angles $p/q$ and $p'/q$ of $\mathcal H'$, with $q\ge 3$; a possibly repeated application of the Main Theorem will then yield the statement we are proving here, and it shows that the statement remains true for $q=2$, i.e., for the limbs $\mathcal L_{1/2}(\mathcal H_1)$ and $\mathcal L_{1/2}(\mathcal H_2)$. The reason why we are giving an apparently more general proof is because the proof really is the same, and this illustrates the general nature of the argument. \end{remark*} Let the angled internal addresses of $\mathcal H_1$ and $\mathcal H_2$ be \begin{equation} (S_0)_{p_0/q_0} \rightarrow\dots\rightarrow (S_{n-1})_{p_{n-1}/q_{n-1}}\rightarrow (S_n) \;, \end{equation} \begin{equation} (S_0)_{p'_0/q_0} \rightarrow\dots\rightarrow (S_{n-1})_{p'_{n-1}/q_{n-1}}\rightarrow (S_n) \end{equation} respectively. By Theorem~\ref{th:BranchIntAddr}, the limbs $\mathcal L_{p/q}(\mathcal H_1)$ and $\mathcal L_{p'/q}(\mathcal H_2)$ consist exactly of all combinatorial classes that have internal addresses starting with \begin{equation} \label{eq:pr1} (S_0)_{p_0/q_0} \rightarrow\dots\rightarrow (S_{n-1})_{p_{n-1}/q_{n-1}}\rightarrow (S_n)_{p/q}\rightarrow\;, \end{equation} \begin{equation} \label{eq:pr2} (S_0)_{p'_0/q_0} \rightarrow\dots\rightarrow (S_{n-1})_{p'_{n-1}/q_{n-1}}\rightarrow (S_n)_{p'/q}\rightarrow \end{equation} respectively. Define a map $f'\colon\pi(\mathcal L_{p/q}(\mathcal H_1))\rightarrow \pi(\mathcal L_{p'/q}(\mathcal H_2))$ so that it changes the initial segment (\ref{eq:pr1}) of the angled internal address into the segment (\ref{eq:pr2}), i.e., it changes the angles in the angled internal address from the limb $\mathcal L_{p/q}(\mathcal H_1)$ into the limb $\mathcal L_{p'/q}(\mathcal H_2)$; within hyperbolic components, this map should fix multipliers. It follows from the construction and Theorem~\ref{th:AnglIntAddr} that the map $f'$ is a well defined homeomorphism and it preserves internal addresses. We will show that there exists a canonical homeomorphism $f$ such that: \begin{equation} \label{eq:diagram} \begin{array}[c]{ccc} \mathcal L_{p/q}(\mathcal H_1)&\stackrel{f}{\longrightarrow} &\mathcal L_{p'/q}(\mathcal H_2)\\ \downarrow\scriptstyle{\pi}&&\downarrow\scriptstyle{\pi}\\ \pi(\mathcal L_{p/q}(\mathcal H_1))&\stackrel{f'}{\longrightarrow}&\pi(\mathcal L_{p'/q}(\mathcal H_2)). \end{array} \end{equation} By canonical we mean that $f$ coincides with the natural homeomorphism between small copies of the Mandelbrot set. With this requirement there is a unique bijection $f$ that makes the above diagram commute. Indeed, if a non-hyperbolic combinatorial class $C$ does not belong to any small copy of $\mathcal M$, then $C$ is a point and $f(C)$ is uniquely defined. If $C$ belongs to a small copy $\mathcal M_s\subset\mathcal L_{p/q}(\mathcal H_1)$ of $\mathcal M$, then $f$ on $\mathcal M_s$ is uniquely defined by the requirement that $f$ be canonical (note that $f'$ is canonical by Theorem \ref{th:SmallCop}). The main issue is to prove that $f$ is continuous. Let us prove that if $c_n$ tends to $c_\infty$, then $f(c_n)$ tends to $f(c_\infty)$. 
This will imply that $f$ is a homeomorphism (as $f$ is a continuous bijection between compact Hausdorff spaces). It suffices to consider the following three cases. \subsection*{Case 1} Assume $c_\infty$ belongs to at most finitely many small copies of the Mandelbrot set; then the same is true for $f(c_\infty)$. By construction, $f'(\pi(c_n))=\pi(f(c_n))$ tends to $f'(\pi(c_\infty))=\pi(f(c_\infty))$ (using commutativity of Diagram (\ref{eq:diagram})). By Yoccoz' theorem, it follows that $f(c_n)$ tends to $f(c_\infty)$. \subsection*{Case 2} Assume $c_\infty$ and all $c_n$ belong to a single small copy $\mathcal M_s$, where $\mathcal M_s\subset\mathcal L_{p/q}(\mathcal H_1)$. Then the statement follows from the definition of $f$ because $f$ coincides with the canonical homeomorphism from $\mathcal M_s$ to $f(\mathcal M_s)$. \subsection*{Case 3} Assume $c_\infty$ belongs to infinitely many copies of the Mandelbrot set (i.e., $c_\infty$ is infinitely renormalizable), $\mathcal M_s\subset\mathcal L_{p/q}(\mathcal H_1)$ is a small copy containing $c_\infty$, and $c_n$ does not belong to $\mathcal M_s$ for any $n$. Let $\mathcal D_n$ be the closure of the connected component of $\mathcal M \backslash \mathcal M_{s}$ containing $c_n$, and let $a_n$ be the intersection of $\mathcal M_{s}$ and $\mathcal D_n$. Then $a_n$ is a single Misiurewicz point and hence belongs to at most finitely many copies of $\mathcal M$. As $c_\infty$ belongs to infinitely many copies of $\mathcal M$, it follows that $c_\infty\neq a_k$ for all $k$. Therefore only finitely many $c_n$ are in $\mathcal D_k$ for each fixed $k$. Hence by the Decoration Theorem the distance between $c_k$ and $a_k$ tends to $0$, and so the sequence $a_k$ tends to $c_\infty$. By Case $2$ we obtain that $f(a_k)$ tends to $f(c_\infty)$. By a similar reason the distance between $f(a_k)$ and $f(c_k)$ tends to $0$ ($f(a_k)$ is the intersection of $f(\mathcal M_s)$ with the closure of the connected component of $\mathcal M\setminus \mathcal M_s$ containing $f(c_k)$). We conclude that $f(c_k)$ tends to $f(c_\infty)$. This concludes the proof of the Main Theorem. \section{Generalization} \label{sec:generalization} We say that a set $\mathcal L\subset\mathcal M$ is \textit{combinatorially saturated} if $\pi^{-1}(\pi(\mathcal L))=\mathcal L$. Consider two closed combinatorially saturated sets $\mathcal L_1$ and $\mathcal L_2$ and assume that there exists a continuous map $f':\pi(\mathcal L_1)\rightarrow \pi(\mathcal L_2)$. We say that $f'$ is \textit{canonical} (with respect to small copies of $\mathcal M$) if: \begin{itemize} \item for every infinitely renormalizable $c\in\pi(\mathcal L_1)$ there exists a copy $\pi(\mathcal M_s)\subset\pi(\mathcal L_1)$ containing $c$ such that $f'$ restricted to $\pi(\mathcal M_s)$ is the canonical homeomorphism on $\pi(\mathcal M_s)$; \item for every infinitely renormalizable $c\in\pi(\mathcal L_2)$ there exists a copy $\pi(\mathcal M_s)\subset\pi(\mathcal L_2)$ containing $c$ such that $f'$ restricted to any connected component of $f'^{-1}(\pi(\mathcal M_s))$ is the canonical homeomorphism of a small copy of the Mandelbrot set. \end{itemize} In particular, by a standard compactness argument $f'^{-1}(\pi(\mathcal M_s))$ (in the second condition) consists of finitely many small copies. \begin{theorem} Let $\mathcal L_1,\mathcal L_2\subset\mathcal M$ be two closed connected combinatorial sets. 
For every continuous map $f':\pi(\mathcal L_1)\rightarrow \pi(\mathcal L_2)$ that is canonical with respect to small copies of the Mandelbrot set, there exists a continuous map $f:\mathcal L_1\rightarrow \mathcal L_2$ such that the following diagram commutes:
\begin{equation}
\begin{array}[c]{ccc}
\mathcal L_{1}&\stackrel{f}{\longrightarrow} &\mathcal L_{2}\\
\downarrow\scriptstyle{\pi}&&\downarrow\scriptstyle{\pi}\\
\pi(\mathcal L_{1})&\stackrel{f'}{\longrightarrow}&\pi(\mathcal L_{2}).
\end{array}
\end{equation}
\end{theorem}
The proof is quite similar to the previous one and is left to the reader.
\end{document}
\begin{document}
\title{\LARGE Computation of Lyapunov functions for nonlinear differential equations\\ via a Massera--type construction }
\thispagestyle{empty}
\pagestyle{empty}
\begin{abstract}
An approach for computing Lyapunov functions for nonlinear continuous--time differential equations is developed via a new, Massera--type construction. This construction is enabled by imposing a finite--time criterion on the integrated function. By means of this approach, we relax the assumptions of exponential stability on the system dynamics, while still allowing integration over a finite time interval. The resulting Lyapunov function can be computed based on any $\mathcal{K}_\infty$--function of the norm of the solution of the system. In addition, we show how the developed converse theorem can be used to construct an estimate of the domain of attraction. Finally, a range of examples from the literature and from biological applications, such as the genetic toggle switch, the repressilator and the HPA axis, are worked out to demonstrate the efficiency and the computational improvements of the proposed approach.
\end{abstract}
\section{Introduction}
\label{sec:01}
The converse of Lyapunov's second method (or direct method) for general nonlinear systems is a topic of extensive ongoing research in the Lyapunov theory community. Work on the converse theorem started around the $1950$s with the crucial result in \cite{Massera1949}, which states that if the origin of an autonomous differential equation is asymptotically stable, then the function defined by integrating an appropriately chosen function of the norm of the solution over an infinite time horizon is a continuously differentiable Lyapunov function (LF). This construction led to a significant amount of subsequent work, out of which we recall here \cite{Kurzweil}. It is well known that finding an explicit form of a LF for general nonlinear systems is a very difficult problem. One of the constructive results on answering the converse problem has been introduced in \cite{Zubov64}, also for autonomous systems. Therein, an analytic formula for a LF is provided, which approaches the value $1$ on the boundary of the domain of attraction (DOA) of the considered system. Thus, simultaneously with constructing a LF, an estimate of the DOA is computed. This result, also known as Zubov's method, is summarized in \cite[Theorem 34.1]{Hahn67} and \cite[Theorem 51.1]{Hahn67}. Stemming from Zubov's method, a recursive procedure for constructing a rational LF for nonlinear systems has been proposed in \cite{VanVid85}. This procedure has many computational advantages and is directly applicable to polynomial systems, providing nonconservative DOA estimates. An alternative construction to the one of Massera has been proposed in \cite{Yoshizawa1966}, where it was shown that the supremum of a function of the solutions of the system is a LF. Additionally, we refer the interested reader to the books \cite{Krasovskii1963} and \cite{LaSalleLef61}, and the survey \cite{Kalman1959}. As for more recent works, for the particular case of differential inclusions, a converse theorem for uniform global asymptotic stability of a compact set was provided in \cite{LinSontagWang1996}, and for the case of homogeneous systems it was shown in \cite{Rosier92} that asymptotic stability implies the existence of a smooth homogeneous LF. Further converse results for differential inclusions for stability with two measures were provided in, for example, \cite{TeelPraly} and \cite{KelletTeel2004}.
If control inputs are to be considered, an existence result for control LFs under the assumption of asymptotic controllability was derived in \cite{Sontag1983}. For what concerns state--of--the--art constructive LF methods, see the recent developments of the author of \cite{Hafstein2007} and subsequent works, out of which we single out \cite{Bjornsson2014} and \cite{Bjornsson2015}, where the Massera construction is exploited for generating piecewise affine LFs, and \cite{SiggiKelletLi2014} and \cite{Hafstein2016}, where the Yoshizawa construction is used. A more detailed historical survey of converse LF results and computational methods for LFs can be found in the extensive papers \cite{Kellet2015survey} and \cite{Giesl2015}. Despite the comprehensive work on the topic of providing a converse to Lyapunov's theorem, the existing constructive approaches either rely on complex candidate LFs (rational, polynomial) or involve state space partitions (for which scalability with the state space dimension is problematic), accompanied by correspondingly complex or large optimization problems. In turn, if we restrict attention strictly to analytical, Massera--type converse results, the construction in \cite[Theorem 4.14]{Khalil2002}, for example, involves integrating over a finite time interval, however under the assumption of exponential stability of the origin. A similar construction, with the relaxed assumption of asymptotic stability, has been developed in \cite{Bjornsson2015}, by using an (arbitrary) Lipschitz function of the state that is positive outside a neighborhood of the origin. In this paper, we propose a similar Massera--type construction for the LF, by relaxing the exponential stability assumption to a richer type of $\mathcal{K}\mathcal{L}$--stability property. Additionally, we allow the LF to be generated by any $\mathcal{K}_\infty$ candidate function which satisfies a finite--time decrease criterion. Nonetheless, this relaxation comes with a restriction on the $\mathcal{K}\mathcal{L}$--stability condition, indicated in the paper by Assumption~\ref{as:02.01}. Ultimately, the proposed solution makes use of an analytic relation between LFs and finite--time Lyapunov functions (FTLFs) to compute a LF. Thus, the construction of LFs is reduced to verifying that a candidate function is a FTLF, which is somewhat easier than identifying a candidate function for a true LF. A similar finite--time criterion has been introduced in \cite{Aeyels1998} to provide a new asymptotic stability result for nonautonomous nonlinear differential equations. The discrete--time analog of this condition was first used in \cite{Geiselhart2014} to provide a converse Lyapunov theorem for nonlinear difference equations. The candidate LF therein is also of Massera type, but cast in discrete time, thus defined by a finite summation. In this respect, some of the results reported in this paper provide a continuous--time counterpart to the findings in \cite{Geiselhart2014}. The main results of this paper consist of the finite--time converse result in Theorem~\ref{th:02.02}, the equivalence condition in Lemma~\ref{lemma:02.01} and the construction in Theorems~\ref{co:03.01} and \ref{th:04.01}. One of the major benefits of the proposed converse theorem is that it enables a systematic construction of DOA estimates.
Since the computation procedure is based on analytical relations, it provides potentially improved scalability with the state--space dimension and it is applicable also to systems with nonpolynomial nonlinearities.
\subsection{Notation and some definitions}
\label{sec:01.01}
We say that a set $\mathcal{S}\subseteq\C{R}^n$ is proper if it contains the origin in its interior and it is compact. The logarithmic norm \cite{Soderlind} of a matrix $A\in\C{R}^{n\times n}$, denoted $\mu(A)$\footnote{Thus, $\mu(A)$, sometimes referred to as the matrix measure, does not define a norm in the conventional sense.}, is defined as $\mu(A)=\underset{h\rightarrow 0^+}\lim\frac{\norm{I+hA}-1}{h}$. For the $1$, $2$ and $\infty$ norms, standard explicit expressions of $\mu(A)$ exist. We recall here the definition of the logarithmic norm induced by the $2$--norm, $\mu_2(A)$, and by the weighted $2$--norm, $\mu_{2,P}(A)$ \cite{Hu2004}: $\mu_2(A)=\lambda_{max}(\frac{1}{2}(A+A^\top))$, $\mu_{2,P}(A)=\lambda_{max}(\frac{\sqrt{P}A\sqrt{P}^{-1} + (\sqrt{P}A\sqrt{P}^{-1})^\top}{2})$, where $\lambda_{max}(P)$ denotes the largest eigenvalue of a symmetric real matrix $P$. In this paper we consider autonomous continuous--time systems described by
\begin{equation}
\label{eq:01.01}
\dot{x}=f(x),
\end{equation}
where $f\,:\,\C{R}^n\,\rightarrow\,\C{R}^n$ is a locally Lipschitz function.
\begin{remark}[Solution notation.]
\label{ch4:re:sol}
Let the solution of \eqref{eq:01.01} at time $t\in\C{R}_{\geq 0}$ with initial value $x(0)$ be denoted by $\phi(t,x(0))$, where $\phi\,:\,\C{R}_{\geq 0}\times\C{R}^n\,\rightarrow \C{R}^n$. We assume that $\phi(t,x(0))$ exists and is unique for all $t\in\C{R}_{\geq0}$ (see \cite[Chapter 3]{Khalil2002} for sufficient smoothness conditions on $f$). The locally Lipschitz assumption on $f(x)$ implies, additionally, that $\phi(t,x(0))$ is a continuous function of $x(0)$ \cite[Chapter III]{Hahn67}. Furthermore, we assume that the system \eqref{eq:01.01} has an equilibrium point at the origin, i.e. $f(0)=0$. In what follows, for simplicity, unless stated otherwise, we will use the notation $x(t):=\phi(t,x(0))$ with the following interpretations:
\begin{itemize}
\item if $x(t)$ is the argument of $\dot{W}(x(t))$, then $x(t)$ represents the solution of the system as defined by $\phi(t, x(0))$;
\item if $x(t)$ is the argument of $V(x(t))$ as in \eqref{eq:02.01b} and \eqref{eq:02.06}, for example, then $x(t)$ represents a point on the solution $\phi(t,x(0))$ for a fixed value of $t$; the same interpretation holds for $W(x(t))$; in other words we do not consider time-varying functions $V$ or $W$.
\end{itemize}
\end{remark}
In what follows, we proceed by recalling some subsidiary notions and definitions.
\begin{definition}
\label{def:01.01}
A function $\alpha\,:\,\C{R}_{\geq0}\,\rightarrow\C{R}_{\geq 0}$ is said to be a $\mathcal{K}$ function if it is continuous, zero at zero and strictly increasing. If, additionally, $\lim_{s\rightarrow\infty}\alpha(s)=\infty$, then $\alpha$ is called a $\mathcal{K}_\infty$ function.
\end{definition}
\begin{definition}
\label{def:01.02}
A function $\sigma\,:\,\C{R}_{\geq0}\,\rightarrow\C{R}_{\geq 0}$ is said to be an $\mathcal{L}$ function if it is continuous, strictly decreasing and $\lim_{s\rightarrow\infty}\sigma(s)=0$.
\end{definition}
\begin{definition}
\label{def:01.03}
A function $\beta\,:\,\C{R}_{\geq0}\times \C{R}_{\geq0}\,\rightarrow\C{R}_{\geq 0}$ is said to be a $\mathcal{K}\mathcal{L}$ function if it is a $\mathcal{K}$ function in its first argument and an $\mathcal{L}$ function in its second argument.
\end{definition}
\begin{definition}
\label{def:01.04}
The origin is an asymptotically stable (AS) equilibrium for the system \eqref{eq:01.01} if for some proper set $\mathcal{S}\subseteq\C{R}^n$, there exists a function $\beta\in\mathcal{K}\mathcal{L}$ such that for all $x(0)\in\mathcal{S}$,
\begin{equation}
\label{eq:01.02}
\norm{x(t)}\leq\beta(\norm{x(0)},t),\quad \forall t\in\C{R}_{\geq 0}.
\end{equation}
If the set $\mathcal{S}=\C{R}^n$, then we say that the origin is globally asymptotically stable (GAS).
\end{definition}
AS defined as above is equivalent to $\mathcal{K}\mathcal{L}$--stability in the set $\mathcal{S}$. In the remainder of the paper we will say that the origin is \emph{$\mathcal{K}\mathcal{L}$--stable in $\mathcal{S}$} to refer to the property defined above. When $\mathcal{S}=\C{R}^n$, we use the term global $\mathcal{K}\mathcal{L}$--stability.
\begin{definition}
\label{def:01.05}
A continuously differentiable function $V\,:\,\C{R}^n\rightarrow\C{R}_{\geq0}$, for which there exist $\alpha_1,\alpha_2\in\mathcal{K}_\infty$ and a $\mathcal{K}$ function $\rho \,:\,\C{R}_{\geq 0}\rightarrow\C{R}_{\geq0}$ such that
\begin{eqnarray}
\label{eq:01.03a}
\alpha_1(\norm{x}) \leq V(x) &\leq& \alpha_2(\norm{x}), \quad \forall x\in\C{R}^n\\
\label{eq:01.03b}
\dot{V}(x)=\nabla^\top V(x) f(x)&\leq&-\rho(\norm{x}), \quad \forall x\in\mathcal{S},
\end{eqnarray}
with $\mathcal{S}\subseteq\C{R}^n$ proper, is called a Lyapunov function for the system \eqref{eq:01.01}.
\end{definition}
\begin{definition}
\label{def:01.06}
A proper set $\mathcal{S}\subseteq\C{R}^n$ is called an invariant set for the system \eqref{eq:01.01} if for any $x(0)\in\mathcal{S}$, the corresponding solution $x(t)\in\mathcal{S}$, for all $t\in\C{R}_{\geq 0}$.
\end{definition}
\begin{definition}
\label{def:02.01}
Given a positive, real scalar $d$, the proper set $\mathcal{S}\subseteq\C{R}^n$ is called a $d$--invariant set for the system \eqref{eq:01.01} if for any $t\in \C{R}_{\geq 0}$, if $x(t)\in\mathcal{S}$, then it holds that $x(t+d)\in\mathcal{S}$.
\end{definition}
Note that the $d$--invariance property does not imply that $x(t)\in\mathcal{S}$ for all $t\geq 0$ if $x(0)\in\mathcal{S}$. We recall below Sontag's lemma on $\mathcal{K}\mathcal{L}$--estimates \cite[Proposition 7]{Sontag98}, as it will be instrumental.
\begin{lemma}
\label{lemma:01.01}
For each class $\mathcal{K}\mathcal{L}$--function $\beta$ and each number $\lambda\in\C{R}_{\geq 0}$, there exist $\varphi_1, \varphi_2\in\mathcal{K}_\infty$, such that $\varphi_1(s)$ is locally Lipschitz and
\begin{equation}
\label{eq:01.04}
\varphi_1(\beta(s,t))\leq\varphi_2(s)e^{-\lambda t}, \quad\forall s, t\in\C{R}_{\geq 0}.
\end{equation}
\end{lemma}
The following result was introduced in \cite[Definition 24.3]{Hahn67} to relate positive definite functions and $\mathcal{K}$--functions. A proof was proposed in \cite[Lemma 4.3]{Khalil2002}.
\begin{lemma}
\label{lemma:01.02}
Consider a function $W\,:\,\C{R}^n\,\rightarrow\,\C{R}_{\geq0}$ with $W(0)=0$.
\begin{enumerate}[1.]
\item If $W(x)$ is continuous and positive definite in some neighborhood around the origin, $\mathcal{N}(0)$, then there exist two functions $\hat{\alpha}_1, \hat{\alpha}_2\in\mathcal{K}$ such that \begin{equation} \label{eq:01.05} \hat{\alpha}_1(\norm{x})\leq W(x)\leq\hat{\alpha}_2(\norm{x}), \quad \forall x\in\mathcal{N}(0). \end{equation} \item If $W(x)$ is continuous and positive definite in $\C{R}^n$ and additionally, $W(x)\rightarrow\infty$, when $x\rightarrow\infty$ then \eqref{eq:01.05} holds with $\hat{\alpha}_1, \hat{\alpha}_2\in\mathcal{K}_\infty$ and for all $x\in\C{R}^n$. \end{enumerate} \end{lemma} \begin{comment} \begin{IEEEproof} \begin{enumerate}[1.] \item For any $x\in\mathcal{N}(0)$, with $\mathcal{N}(0)$ sufficiently small, there exists $r_1$ and $r_2$ such that for $0\leq r_1\leq\norm{x}\leq r_2$ and \begin{equation} \nonumber \begin{split} W_u(\norm{x}):= &\max_{\xi} \,W(\xi)\\ &\mbox{subject to } r_1\leq \norm{\xi}\leq \norm{x}, \end{split} \end{equation} and \begin{equation} \nonumber \begin{split} W_l(\norm{x}):=&\min_{\xi} \,W(\xi)\\ &\mbox{subject to } \norm{x}\leq \norm{\xi}\leq r_2, \end{split} \end{equation} it holds that (by continuity of $W$) $$W_l(\norm{x})\leq W(x)\leq W_u(\norm{x}).$$ Additionally, $W_u(\cdot)$ and $W_l(\cdot)$ above are non--decreasing and continuous and $W_u(0)=W_l(0)=0$. $W_u(\norm{x})$ and $W_l(\norm{x})$ can be upper and lower bounded, respectively, by $\mathcal{K}$--functions constructed as in \cite[Lemma I]{VidyasagarNonlinear} or in \cite[Lemma A.2]{FreemanKokotovic2008}. \item Since $W(x)$ is continuous and positive definite in $\C{R}^n$ with $W(x)\rightarrow\infty$, by the same arguments as above, $\mathcal{K}_\infty$ functions can be constructed such that \eqref{eq:01.05} holds. \end{enumerate} \end{IEEEproof} \end{comment} Next we recall the Bellman--Gronwall Lemma. A proof is provided in \cite[Lemma C. 3.1]{Sontag1998}. \begin{lemma} \label{lemma:01.03} Assume given an interval $\mathcal{I}\subseteq\C{R}$, a constant $c\in\C{R}_{\geq 0}$, and two functions $\alpha,\mu\,:\,\mathcal{I}\,\rightarrow\,\C{R}_{\geq 0}$, such that $\alpha$ is locally integrable and $\mu$ is continuous. Suppose further that for some $\sigma\in\mathcal{I}$ it holds that $$\mu(t)\leq\nu(t):=c+\int_\sigma^t \alpha(\tau)\mu(\tau)\mathrm{d}\tau$$ for all $t\geq \sigma$, $t\in\mathcal{I}$. Then it must hold that $$\mu(t)\leq ce^{\int_\sigma^t\alpha(\tau)\mathrm{d}\tau}.$$ \end{lemma} It is well known, from the direct method of Lyapunov, that the existence of a Lyapunov function for the system \eqref{eq:01.01} implies that the origin is an AS equilibrium for \eqref{eq:01.01}. In the remainder of this paper we propose some new alternatives to classical Lyapunov converse results, which are verifiable and constructive towards obtaining nonconservative DOA estimates. More specifically, in Section~\ref{sec:02.01} we recall a finite--time criterion for $\mathcal{K}\mathcal{L}$--stability in a given set $\mathcal{S}$ and we provide its converse. In Section~\ref{sec:02.02} the alternative converse theorem is provided and comparative remarks with respect to existing constructions are drawn in Section~\ref{sec:02.03}. Given a FTLF $V$, an expansion scheme which preserves the FT decrease property is rendered in Section~\ref{sec:03}, while indirect verification techniques are given in Section~\ref{sec:04.01} and computational steps are indicated in Section~\ref{sec:04.02}. 
Finally, a range of insightful worked out examples from literature and biological applications (biochemical reactions) such as the genetic toggle switch and the HPA axis are shown in Section~\ref{sec:05}. \section{A constructive Lyapunov converse theorem} \label{sec:02} \subsection{Finite--time conditions} \label{sec:02.01} \begin{definition} \label{def:finite} Let there be a continuous function $V\,:\,\C{R}^n\,\rightarrow\,\C{R}_{\geq 0}$, and a real scalar $d>0$ for which the proper set $\mathcal{S}\subseteq\C{R}^n$ is $d$--invariant and the conditions \begin{eqnarray} \label{eq:02.01a} \alpha_1(\norm{x})\leq V(x)&\leq &\alpha_2(\norm{x}), \quad\forall x\in\C{R}^n,\\ \label{eq:02.01b} V(x(t+d))-V(x(t))&\leq& -\gamma(\|x(t)\|) ,\quad \forall t\geq 0, \end{eqnarray} are satisfied with $\alpha_1$, $\alpha_2\in\mathcal{K}_\infty$, and $\gamma\in\mathcal{K}$ and for all $x(t)$, with $x(0)\in\mathcal{S}$. Then the function $V$ is called a \emph{finite--time Lyapunov function} (FTLF) for the system \eqref{eq:01.01}. \end{definition} In order for condition \eqref{eq:02.01b} to be well--defined, additionally to the locally Lipschitz property of the map $f(x)$, it is assumed that there exists no finite escape time in each interval $[t,t+d]$, for all $t\in\C{R}_{\geq 0}$. However, as it will be shown later, it is sufficient to require that there is no finite escape time in the time interval $[0,d]$. When $\mathcal{S}=\C{R}^n$, a sufficient condition for existence of the solution for all $t\in\C{R}_{\geq 0}$ is that the map $f(x)$ is Lipschitz bounded \cite[Chapter III.16]{Hahn67}. Furthermore, note that existence of a finite escape time for initial conditions in a given set in $\C{R}^n$ implies that the origin is unstable in that set \cite[Chapter III.16]{Hahn67}. The following result relates inequality \eqref{eq:02.01b} with another known type of decrease condition, which will be instrumental. \begin{lemma} \label{re:02.01} The decrease condition \eqref{eq:02.01b} on $V$ is equivalent with \begin{equation} \label{eq:02.06} V(x(t+d))-\rho(V(x(t)))\leq 0,\quad \forall t\in\C{R}_{\geq 0}, \end{equation} for all $x(t)$ with $x(0)\in\mathcal{S}$, where $\rho\,:\,\C{R}_{\geq 0}\,\rightarrow\,\C{R}_{\geq 0}$ is a positive definite, continuous function and satisfies $\rho<\mathrm{id}$ and $\rho(0)=0$. \end{lemma} \begin{IEEEproof} The proof follows a similar reasoning as in \cite[Remark 2.5]{Geiselhart2015}. Assume that $V$ is such that \eqref{eq:02.01a} and \eqref{eq:02.01b} hold. Then, for all $x(t)\neq0$: \begin{equation} \label{eq:02.07} \begin{split} 0\leq V(x(t+d))&\leq V(x(t))-\gamma(\norm{x(t)})\\ &< V(x(t))-0.5\gamma(\norm{x(t)})\\ &\leq V(x(t))-0.5\gamma(\alpha_2^{-1}(V(x(t))))\\ &=(\mathrm{id}-0.5\gamma\circ \alpha_2^{-1})(V(x(t)))\\ &=:\rho(V(x(t))). \end{split} \end{equation} Similarly, for all $x(t)\neq 0$, \begin{equation} \nonumber \begin{split} 0\leq V(x(t+d))&\leq V(x(t))-\gamma(\norm{x(t)})\\ &< \alpha_2(\norm{x(t)})-0.5\gamma(\norm{x(t)})\\ &=(\alpha_2-0.5\gamma)(\norm{x(t)}),\\ \end{split} \end{equation} which implies that $(\alpha_2-0.5\gamma)(s)>0$, for all $s\neq0$. Furthermore, since $\alpha_2^{-1}\in\mathcal{K}_\infty$, then $(\alpha_2-0.5\gamma)\circ\alpha_2^{-1}(s)>0$ and $$0<(\mathrm{id}-0.5\gamma\circ\alpha_2^{-1})(s)<\mathrm{id}, \quad \forall s\neq0.$$ Thus, by construction, the function $\rho\,:\,\C{R}_{\geq 0}\,\rightarrow\,\C{R}_{\geq 0}$ is a continuous, positive definite function. 
All involved functions are continuous by definition, thus the difference remains continuous. When $x(t)=0$, \eqref{eq:02.06} trivially holds, since $\rho(0)=0$. Now assume that \eqref{eq:02.06} holds. Then
\begin{equation}
\begin{split}
V(x(t+d))-V(x(t))&\leq \rho(V(x(t)))-V(x(t))\\
&=-(V(x(t))-\rho(V(x(t))))\\
&=-((\mathrm{id}-\rho)(V(x(t))))\\
&\leq-((\mathrm{id}-\rho)(\alpha_1(\norm{x(t)})))\\
&=-\tilde{\gamma}(\norm{x(t)}),
\end{split}
\end{equation}
with $\tilde{\gamma} = (\mathrm{id}-\rho)\circ\alpha_1$. Since $\rho<\mathrm{id}$ by assumption, $\tilde{\gamma}$ is positive definite and, furthermore, continuous. Thus, by Lemma~\ref{lemma:01.02}, $\tilde{\gamma}$ can be lower bounded by a $\mathcal{K}$--function $\gamma(\norm{x(t)})$, hence $V(x(t+d))-V(x(t))\leq -\gamma(\norm{x(t)})$, with $\gamma\in\mathcal{K}.$
\end{IEEEproof}
Next, we propose a version of \cite[Theorem $1$]{Aeyels1998} for the time--invariant case and with the additional assumption that the set $\mathcal{S}$ is a $d$--invariant set for \eqref{eq:01.01}. This additional assumption enables a simpler proof, while the result is stronger, i.e., $\mathcal{K}\mathcal{L}$--stability in $\mathcal{S}$ is attained as opposed to local (in some neighborhood around the origin) $\mathcal{K}\mathcal{L}$--stability.
\begin{theorem}
\label{th:02.01a}
If a function $V$ defined as in \eqref{eq:02.01a} and \eqref{eq:02.01b} and a proper $d$-invariant set $\mathcal{S}$ exist for the system \eqref{eq:01.01}, then the origin equilibrium of \eqref{eq:01.01} is $\mathcal{K}\mathcal{L}$--stable in $\mathcal{S}$.
\end{theorem}
\begin{IEEEproof}
For any $t\in\C{R}_{\geq 0}$, there exist an integer $N\geq0$ and $j\in\C{R}_{\geq0}$, $j<d$, such that $t=Nd+j$. By applying \eqref{eq:02.01b} in its equivalent form \eqref{eq:02.06} recursively, we get that
\begin{equation}
\label{eq:02.AS}
\begin{split}
V(x(t))&= V(x(Nd+j))\\
&= V(x(((N-1)d+j)+d))\\
&\leq\rho( V(x((N-1)d+j)))\\
&= \rho( V(x(((N-2)d+j)+d)))\\
&\leq \rho^2(V(x((N-2)d+j)))\\
&\ldots\\
&\leq \rho^N(V(x(j)))\\
&\leq\rho^N(\alpha_2(\norm{x(j)})),
\end{split}
\end{equation}
where $\rho^N$ denotes the $N$--fold composition of $\rho$ with itself. The solution at time $t=j$ is given by
$$x(j) = x(0)+\int_{0}^j f(x(s))\mathrm{d} s, $$
for any $j\geq 0$. Then
\begin{equation*}
\begin{split}
\norm{x(j)-x(0)}\leq &\int_0^j \norm{f(x(s))-f(x(0))+f(x(0))}\mathrm{d} s \\
\leq &\int_0^j \norm{f(x(s))-f(x(0))}\mathrm{d} s +\\
&\int_0^j\norm{f(x(0))}\mathrm{d} s.
\end{split}
\end{equation*}
By using the local Lipschitz continuity property of $f$, with $L>0$ the Lipschitz constant, and Lemma~\ref{lemma:01.03}, we obtain that
\begin{equation*}
\begin{split}
\norm{x(j)-x(0)}& \leq \int_0^j L\norm{x(s)-x(0)}\mathrm{d} s +\int_0^j\norm{f(x(0))}\mathrm{d} s\\
&\leq \left(\int_0^j\norm{f(x(0))}\mathrm{d} s\right)e^{Lj}.
\end{split}
\end{equation*}
Thus,
$$\norm{x(j)}\leq\norm{x(0)}+\left(\int_0^j\norm{f(x(0))}\mathrm{d} s\right)e^{Lj}=:F_j(\norm{x(0)}).$$
Then $F_j(\norm{x(0)})\leq F_d(\norm{x(0)})$, for all $j\in[0,d]$. By the standing assumptions on $f$ it follows that $F_d(\norm{x(0)})$ is continuous with respect to $x(0)$. Furthermore, $F_d(0)=0$, $F_d(s)$ is positive definite and continuous for all $s\geq 0$, and $F_d(s)\rightarrow\infty$ as $s\rightarrow\infty$.
By applying Lemma~\ref{lemma:01.02} to $F_d(\norm{x(0)})$ we obtain that there exists a function $\omega\in\mathcal{K}_\infty$ such that $F_d(\norm{x(0)})\leq\omega(\norm{x(0)})$, and consequently, $\norm{x(j)}\leq\omega(\norm{x(0)})$, for all $0\leq j<d$. Thus, with $\hat{\alpha}_2:=\alpha_2\circ\omega$ we get that
\begin{equation}
\nonumber
\begin{split}
V(x(t))&\leq\rho^N(\hat{\alpha}_2(\norm{x(0)})) = \rho^{\frac{t-j}{d}}(\hat{\alpha}_2(\norm{x(0)}))\\
&\leq\rho^{\lfloor \frac{t}{d}\rfloor -1}\circ\hat{\alpha}_2(\norm{x(0)})\\
&=\rho^{\lfloor \frac{t}{d}\rfloor}\circ\rho^{-1}\circ\hat{\alpha}_2(\norm{x(0)})\\
&\leq\rho^{\lfloor \frac{t}{d}\rfloor}\circ\hat{\rho}\circ\hat{\alpha}_2(\norm{x(0)}),\quad \hat{\rho}\in\mathcal{K}_\infty\\
&=:\hat{\beta}(\norm{x(0)},t).
\end{split}
\end{equation}
Without loss of generality we can assume that $\rho$ is a one--to--one (injective) and onto (surjective) function, thus invertible since, by hypothesis, $\mathcal{S}$ is compact. Furthermore, since $\rho$ is continuous, by \cite[Theorem 3.16]{Browder1996}, $\rho^{-1}$ is continuous. Additionally, $\rho^{-1}(0)=\rho^{-1}(\rho(0))=0$. Thus, there exists a function $\hat{\rho}\in\mathcal{K}_{\infty}$, such that $\rho^{-1}\leq\hat{\rho}$, as follows from Lemma~\ref{lemma:01.02}. We can conclude that $\hat{\beta}\in\mathcal{K}\mathcal{L}$ since $\hat{\rho}\circ\hat{\alpha}_2(s)\in\mathcal{K}_\infty$ and $\rho^{\lfloor \frac{t}{d}\rfloor}\in\mathcal{L}$. Finally,
$$\norm{x(t)}\leq\alpha_1^{-1}(\hat{\beta}(\norm{x(0)},t))=:\beta(\norm{x(0)},t),$$
for all $x(0)\in\mathcal{S}$ and for all $t\in\C{R}_{\geq 0}$, thus we have obtained $\mathcal{K}\mathcal{L}$--stability in $\mathcal{S}$.
\end{IEEEproof}
A similar result was derived in \cite[Proposition 2.3]{Karafyllis2012}, which offers an alternative to the periodic decrease condition in \cite{Aeyels1998} (here \eqref{eq:02.01b}), by requiring the minimum over a finite time interval of a positive definite function of the state to decrease. Condition (2.2) in \cite{Karafyllis2012} always implies a decrease after a finite time interval, but it allows the length of the time interval to be state dependent. Proposition 2.3 of \cite{Karafyllis2012} shows that such a relaxed finite time decrease condition implies $\mathcal{K}\mathcal{L}$-stability and exponential stability (under the usual global exponential stability assumptions plus a common time interval length for all states). We proceed by providing a converse finite--time Lyapunov result for $\mathcal{K}\mathcal{L}$--stability in a compact set $\mathcal{S}$.
\subsection{Alternative converse theorem}
\label{sec:02.02}
\begin{assumption}
\label{as:02.01}
There exists a $\mathcal{K}\mathcal{L}$--function $\beta$ satisfying \eqref{eq:01.02} for the system \eqref{eq:01.01} such that
\begin{equation}
\label{eq:02.08}
\beta(s, d)<s
\end{equation}
for some positive $d\in\C{R}$ and all $s>0$.
\end{assumption} \begin{theorem} \label{th:02.02} If the origin is $\mathcal{K}\mathcal{L}$--stable in some invariant subset of $\C{R}^n$, $\mathcal{S}$\footnote{Invariance is needed in order for \eqref{eq:02.06} to hold for all $t\geq 0$.} for the system \eqref{eq:01.01} and Assumption~\ref{as:02.01} is satisfied, then for any function $\eta\in\mathcal{K}_\infty$ and for any norm $\norm{\cdot}$, the function $V:\C{R}^n\,\rightarrow\,\C{R}_{\geq0}$, with \begin{equation} \label{eq:02.10} V(x):=\eta(\norm{x}), \quad \forall x\in\C{R}^n \end{equation} satisfies \eqref{eq:02.01a} and \eqref{eq:02.01b}. \end{theorem} \begin{IEEEproof} Let the pair $(\beta, d)$ be such that Assumption~\ref{as:02.01} holds. Then, by hypothesis we have that: \begin{equation} \nonumber \begin{split} \eta(\norm{x(t+d)}) &\leq\eta(\beta(\norm{x(t)},d))\\ &\leq\eta(\beta(\eta^{-1}(V(x(t))),d))\\ &:=\rho(V(x(t))), \end{split} \end{equation} where $\rho =\eta(\beta(\eta^{-1}(\cdot),d))$, for all initial conditions $x(0)\in\mathcal{S}$. By Assumption~\ref{as:02.01}, we obtain that there exists a $d>0$ such that $\rho<\eta(\eta^{-1}(\cdot))=\mathrm{id}$, Thus, we get $$V(x(t+d))-\rho(V(x(t)))\leq0, \quad \forall x(0)\in\mathcal{S}.$$ From Lemma~\ref{re:02.01} this implies that \eqref{eq:02.01b} holds. Since $V$ is defined by a $\mathcal{K}_\infty$ function, then let $\alpha_1(s)= \alpha_2(s)=\eta(s)$ such that \eqref{eq:02.01a} holds. \end{IEEEproof} In \cite[Remark 2.4]{Karafyllis2012}, a converse result for $\mathcal{K}\mathcal{L}$--stable systems is derived, in terms of the finite decrease condition (2.2) therein. More precisely, it is shown that if the $\mathcal{K}\mathcal{L}$--stability property holds, then any positive definite function satisfies inequality (2.2) in \cite{Karafyllis2012}. Compared to the converse results of \cite{Karafyllis2012}, the converse theorem above shows that a stronger condition holds (inequality \eqref{eq:02.01b} with a common finite time $d$ for all states $x(t)$) under Assumption~\ref{as:02.01}. \begin{comment} \begin{lemma} \label{lemma:02.03} Let condition \eqref{eq:02.01b} hold for all $d\rightarrow 0$. Then $V$ is a classical LF, i.e. condition \eqref{eq:01.03b} is satisfied. \end{lemma} \begin{IEEEproof} The proof shall be worked out for the case when $\mathcal{S}=\C{R}^n$. Take any point on the solution trajectory of \eqref{eq:01.01}, $x(t)\in\C{R}^n\setminus\{0\}$ for any time value $t\in\C{R}_{>0}$. Then, there exists a level set value $C>0$ such that $V(x(t))=C$. By assumption \begin{equation} \nonumber \begin{split} \lim_{d\rightarrow 0} V(x(t+d)) &\leq V(x(t))- \gamma(\norm{x(t)})\\ & = C-\gamma(\norm{x(t)})\\ &< C. \end{split} \end{equation} Thus we have obtained that $\lim_{d\rightarrow 0} V(x(t+d))<C$. Let us assume that $\nabla V(x)^\top f(x(t))\geq0$. Via Nagumo's Theorem \cite[Theorem 4.7]{Blanchini08} this implies that there exists a $\bar{d}>0$ such that for all $d<\bar{d}$, $V(x(t+d))\geq C$. Furthermore, since $V(x)$ is continuous, $\lim_{d\rightarrow 0} V(x(t+d))\geq C$, which leads to contradiction and implies that $\dot{V}(x(t))<0$, for any $x(t)\in\C{R}^n\setminus\{0\}$. When $x(t)=0$, then $V(x(t))=C=0$ and $\dot{V}(x(t))=0$, since $f(x(t))=0$. \end{IEEEproof} \end{comment} Consider the function defined as \begin{equation} \label{eq:02.11} W(x(t)):=\int_{t}^{t+d}V(x(\tau))\mathrm{d}\tau, \end{equation} for any $V$ that satisfies \eqref{eq:02.01a} and \eqref{eq:02.01b}. 
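For illustration (a simple example added here for concreteness, not taken from the cited references), suppose the $\mathcal{K}\mathcal{L}$--estimate \eqref{eq:01.02} is of the exponential type $\beta(s,t)=Mse^{-\lambda t}$, with $M\geq 1$ and $\lambda>0$. Then $\beta(s,d)=Me^{-\lambda d}s<s$ for every $d>\ln(M)/\lambda$, so Assumption~\ref{as:02.01} holds for any such $d$. Choosing $\eta=\mathrm{id}$, i.e., $V(x)=\norm{x}$, for the scalar system $\dot{x}=-x$ one obtains
\begin{equation*}
W(x(t))=\int_{t}^{t+d}|x(\tau)|\mathrm{d}\tau=(1-e^{-d})|x(t)|,
\end{equation*}
which is indeed a Lyapunov function, since $\dot{W}(x(t))=-(1-e^{-d})|x(t)|$.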
Generally, in standard converse theorems the function $\varphi_1$ which defines the LF is a particular $\mathcal{K}_\infty$ function. In the proposed finite--time converse theorem, $\eta$ is any $\mathcal{K}_\infty$ function, which allows for more freedom in the construction. In turn, the developed finite--time converse theorem will be used to obtain an alternative converse Lyapunov theorem.
\begin{lemma}
\label{lemma:02.01}
A continuously differentiable function $V\,:\,\C{R}^n\,\rightarrow\,\C{R}_{\geq 0}$ satisfies \eqref{eq:02.01a} and \eqref{eq:02.01b} for \eqref{eq:01.01} and some $d>0$, if and only if the function $W$ as defined in \eqref{eq:02.11} with the same $d>0$ is a Lyapunov function for the system \eqref{eq:01.01}.
\end{lemma}
\begin{IEEEproof}
Let there be a function $V$ satisfying \eqref{eq:02.01a} and \eqref{eq:02.01b}. $V$ is continuous, thus it is integrable over any closed, bounded interval $[t, t+d]$, $t\geq0$. By Theorem~$5.30$ in \cite{Browder1996}, this implies that $W(x(t))$ is continuous on each interval $[t, t+d]$, for any $t$. Since $V$ is also positive definite, by integrating over the bounded interval $[t, t+d]$ the resulting function $W(x(t))$ will also be positive definite. Additionally, $\lim_{x\rightarrow\infty}V(x)=\infty$, thus, $\lim_{x\rightarrow\infty}W(x)=\infty$ and the result in Lemma~\ref{lemma:01.02} can be applied. Therefore, there exist two functions $\hat{\alpha}_1, \hat{\alpha}_2\in\mathcal{K}_\infty$ such that
\begin{equation}
\label{eq:02.12}
\hat{\alpha}_1(\norm{x})\leq W(x)\leq\hat{\alpha}_2(\norm{x}), \quad \forall x\in\C{R}^n,
\end{equation}
holds. Next, by making use of the general Leibniz integral rule, we get that
\begin{equation}
\nonumber
\begin{split}
\frac{\mathrm{d} }{ \mathrm{d} t}W(x(t))=&\int_{t}^{t+d}\underbrace{\frac{\mathrm{d}}{\mathrm{d} t} V(x(\tau))}_{=0}\mathrm{d} \tau +\\
& V(x(t+d))\dot{(t+d)}-V(x(t))\dot{t}\\
=&V(x(t+d))-V(x(t))\leq-\gamma(\norm{x(t)}).
\end{split}
\end{equation}
Thus, $W$ is a Lyapunov function for \eqref{eq:01.01}. Now assume that $W$ is a Lyapunov function for \eqref{eq:01.01}, i.e. \eqref{eq:02.12} holds and for $\gamma\in\mathcal{K}$ it holds that
$$\dot{W}(x)\leq -\gamma(\norm{x}),\quad \forall x\in\mathcal{S}.$$
By the same Leibniz rule, we know that $\dot{W}=V(x(t+d))-V(x(t))$, thus the difference $V(x(t+d))-V(x(t))$ is negative definite, i.e. \eqref{eq:02.01b} holds. Now we have to show that \eqref{eq:02.01a} holds. Assume that there exists an $x(t)\in\mathcal{S}$, $x(t)\neq 0$, such that $V(x(t))\leq0$. Then, this implies that
$$V(x(t+d))<V(x(t))\leq0,$$
and furthermore, $V(x(t+id))<V(x(t+(i-1)d))<\ldots<V(x(t+2d))<V(x(t+d))<V(x(t))\leq 0,$ for all integers $i>0$, due to $d$-invariance of $\mathcal{S}$ and the assumption that $x(t) \in \mathcal{S}$. Then,
$$\lim_{i\rightarrow\infty}V(x(t+id))=-\infty.$$
Since $W$ is a LF for \eqref{eq:01.01} in $\mathcal{S}$, the origin is $\mathcal{K}\mathcal{L}$--stable in $\mathcal{S}$, which implies that $\lim_{i\rightarrow\infty}x(t+id)=0$. Then, because the solution of the system \eqref{eq:01.01} is a continuous function of time and $W$ is continuous, it follows that $\lim_{i\rightarrow\infty}W(x(t+id))=W(0)$.
Therefore, we have that
\begin{equation*}
W(0)=\lim_{i\rightarrow\infty}W(x(t+id))=\lim_{i\rightarrow\infty}\int_{t+id}^{t+(i+1)d}V(x(\tau))\mathrm{d}\tau=-\infty,
\end{equation*}
which is a contradiction since $W(0)=0$; thus $V(x)$ must be positive definite on $\mathcal{S}$. By the definition of $W$, we have that $V$ must be a continuous function, because it needs to be integrable for $W$ to exist. By assumption, $W$ is upper and lower bounded by $\mathcal{K}_\infty$ functions, thus for $x\rightarrow\infty$, $W(x)\rightarrow\infty$. This can only happen when $V(x)\rightarrow\infty$. Thus, using reasoning similar to the above, based on Lemma~\ref{lemma:01.02}, $V$ is upper and lower bounded by $\mathcal{K}_\infty$ functions, hence \eqref{eq:02.01a} holds.
\end{IEEEproof}
The next result summarizes the proposed alternative converse LF for $\mathcal{K}\mathcal{L}$--stability in $\mathcal{S}$, enabled by the finite--time conditions \eqref{eq:02.01a} and \eqref{eq:02.01b}.
\begin{corollary}
\label{co:02.01}
If the origin is $\mathcal{K}\mathcal{L}$--stable in some set $\mathcal{S}$ for the system \eqref{eq:01.01}, with the $\mathcal{K}\mathcal{L}$--function $\beta$ satisfying Assumption~\ref{as:02.01} for some $d>0$, then by Theorem~\ref{th:02.02} and Lemma~\ref{lemma:02.01}, for any function $\eta\in\mathcal{K}_\infty$ and any norm $\norm{\cdot}$, the function $W(\cdot)$ defined as in \eqref{eq:02.11} for the same $d>0$, is a Lyapunov function for the system \eqref{eq:01.01}.
\end{corollary}
The above corollary provides a continuous--time counterpart of the converse result \cite[Corollary 22]{Geiselhart2014}. The main line of reasoning relies on Assumption~\ref{as:02.01}, which is the same assumption needed for the discrete--time converse theorem. The main technical differences with \cite{Geiselhart2014} lie in the proof of Theorem~\ref{th:02.01a}, the construction \eqref{eq:02.11} and the proof of Lemma~\ref{lemma:02.01}.
\subsection{Remarks on the Massera construction}
\label{sec:02.03}
Notice that the function \eqref{eq:02.11} with $V(x) =\eta(\norm{x})$ corresponds to a Massera type of construction, which in its original form in \cite{Massera1949} is defined as:
\begin{equation}
\label{eq:02.13}
W(x(t))=\int_{t}^{\infty}\alpha(\norm{x(\tau)})\mathrm{d} \tau,
\end{equation}
with $\alpha\,:\,\C{R}_{\geq 0}\,\rightarrow\, \C{R}_{\geq 0}$ an appropriately chosen continuous function. In \cite{Bjornsson2014} the construction above is recalled as
\begin{equation}
\label{eq:02.14}
W(x(0))=\int_{0}^{N}\norm{x(\tau)}_2 \mathrm{d} \tau,
\end{equation}
as an alternative to the construction in \cite[Theorem 4.14]{Khalil2002}, where the limits of the integral are $t$ and $t+N$. This type of construction has facilitated a large number of converse results. Also, the converse proof in \cite{Massera1949} set up a proof technique which is based on the so--called Massera's Lemma \cite[p. 716]{Massera1949} for constructing the function $\alpha$ in \eqref{eq:02.13}. The formulation in \cite{Bjornsson2014} is based on the exponential stability assumption, though an extension for the asymptotic stability case is suggested. The extension relates to the construction in \eqref{eq:02.11}, where instead of the function $\eta$, a nonlinear scaling of the norm of the state trajectory is used.
This scaling function is obtained from a $\mathcal{K}\mathcal{L}$ estimate and Lemma~\ref{lemma:01.01}, so as to provide an exponentially decreasing (in time) upper bound for the asymptotic stability estimate. In \cite{Bjornsson2015}, a similar construction to the one in \eqref{eq:02.11} is proposed, namely
\begin{equation}
\label{eq:02.14b}
W(x(0))=\int_{0}^{T}\gamma(x(\tau)) \mathrm{d} \tau,
\end{equation}
with $T$ a positive, finite constant and $\gamma$ a positive definite function. While both the proposed construction in this paper and the one in \eqref{eq:02.14b} rely on arbitrary functions of the state, the main difference is in the choice of the integration interval, with $d$ being such that \eqref{eq:02.01b} holds for the arbitrary $\eta$ function. By using the finite--time function $V$, we provide an alternative to Massera's construction via Corollary~\ref{co:02.01}, whilst by defining $W$ as in \eqref{eq:02.11} we allow for a $\mathcal{K}\mathcal{L}$--stability assumption with the $\mathcal{K}\mathcal{L}$ function satisfying \eqref{eq:02.08}. The freedom in choosing any function of the state norm $\eta\in\mathcal{K}_\infty$ in the proposed construction facilitates an implementable verification procedure, which is detailed in Section~\ref{sec:04} and does not rely on a specific, possibly more complex form for a LF, which can add to the computational load.
\section{Expansion scheme}
\label{sec:03}
In \cite{Chiang89}, a scheme was proposed for constructing LFs, starting from a given LF, which at every iterate provides a less conservative estimate of the DOA of a nonlinear system of the type \eqref{eq:01.01}. The sequence of Lyapunov functions of the type
\begin{eqnarray}
\label{eq:03.01}
\nonumber W_1(x)&=&W(x+\alpha_1f(x))\\
\nonumber W_2(x)&=&W_1(x+\alpha_2f(x))\\
&\vdots&\\
\nonumber W_n(x) &=&W_{n-1}(x+\alpha_nf(x)),
\end{eqnarray}
with $\alpha_i\in\C{R}_{\geq0}$, $i=1,2,\ldots,n$, leads to the set inclusion of DOA estimates
$$\mathcal{S}_W(c)\subset\mathcal{S}_{W_1}(c)\subset\ldots\subset\mathcal{S}_{W_n}(c),$$
where $\mathcal{S}_{W_i}(c):=\{x\in\C{R}^n\,|\,W_i(x)\leq c\}$, $i=1,2,\ldots,n$, denote the largest level sets of the Lyapunov functions generated by the expansion sequence \eqref{eq:03.01} with $\mathcal{S}_{W}(c)\subset\mathcal{S}$, where $\mathcal{S}$ is the $d$--invariant set. Note that the largest level set of the LF $W$ included in the $d$--invariant set $\mathcal{S}$ is a subset of the true DOA of the system \eqref{eq:01.01}. Thus, the expansion scheme \eqref{eq:03.01} applied to a computed $W$ will provide a better estimate of the true shape of the DOA contained in $\mathcal{S}$. However, a less conservative initial set $\mathcal{S}$ leads to a better estimate of the true DOA. We propose to utilize the expansion idea in \cite{Chiang89} to generate a sequence of FTLFs, with the aim of obtaining a more appropriate $d$--invariant set. The next result follows as a consequence of Lemmas~$4$-$1$ and $4$-$2$ from \cite{Chiang89}.
\begin{theorem}
\label{co:03.01}
Let $V$ be a FTLF, i.e. conditions \eqref{eq:02.01a} and \eqref{eq:02.01b} hold with respect to the $d$--invariant set $\mathcal{S}$. Furthermore let $V$ be continuously differentiable and define $\mathcal{S}_V(c):=\{x\in\C{R}^n\,|\,V(x)\leq c\}\subset\mathcal{S}$. Then, there exists an $\alpha\in\C{R}_{\geq0}$ such that for $V_1(x)=V(x+\alpha f(x))$ and $\mathcal{S}_{V_1}(c):=\{x\in\C{R}^n\,|\,V_1(x)\leq c\}$, it holds that $\mathcal{S}_V(c)\subset\mathcal{S}_{V_1}(c)$ and $V(x+\alpha f(x))$ is a FTLF.
\end{theorem}
\begin{IEEEproof}
By hypothesis, $V(x)$ is a continuously differentiable FTLF, i.e., let $V$ have continuous partial derivatives up to some order $r\geq 1$. Then by \cite[p.190]{Dieudonne69} and \cite[Lemma 2-1]{Chiang89} we can write:
\begin{equation}
\nonumber
\begin{split}
V_1(x(t)) =&\,V(x(t))+\alpha\dot{V}(x(t)) + \frac{\alpha^2}{2!}\ddot{V}(x(t))+\ldots+\\
&\,\frac{\alpha^{r-1}}{(r-1)!}V^{(r-1)}(x(t)) +\frac{\alpha^{r}}{r!}V^{(r)}(z(t)),
\end{split}
\end{equation}
where $V^{(r)}$ denotes the $r$-th derivative of $V$, i.e. $V^{(r)}(x)=\nabla^{(r-1)} V(x)^\top f(x)$ and $z(t)=x(t)+h\alpha f(x(t))$, for $h\in[0,1]$. Let $V(x(t))= (V\circ x)(t)=\psi(t)$. Similarly, by the formula \cite[p.190, (8.14.3)]{Dieudonne69}, we have that
\begin{equation}
\nonumber
\begin{split}
\psi(t+d) = &\,\psi(t)+d\dot{\psi}(t)+\frac{d^2}{2!}\ddot{\psi}(t) +\dots+ \frac{d^{r-1}}{(r-1)!}\psi^{(r-1)}(t) + \\
&\,\frac{d^{r}}{r!}\psi^{(r)}(w),
\end{split}
\end{equation}
where $w = t+hd$, $h\in[0,1]$ and $\dot{\psi}(t)=\dot{V}(x(t))=\nabla V(x)^\top f(x(t))$. Then,
\begin{equation}
\nonumber
\begin{split}
V(x(t+d))=&\,V(x(t))+d\dot{V}(x(t)) +\frac{d^2}{2!}\ddot{V}(x(t)) +\ldots +\\
&\,\frac{d^{r-1}}{(r-1)!}V^{(r-1)}(x(t)) + \frac{d^{r}}{r!}V^{(r)}(x(w)).
\end{split}
\end{equation}
From the expressions of $V_1(x(t))$ and $V(x(t+d))$, we can write
\begin{equation}
\label{eq:03.02}
\begin{split}
V_1(x(t)) \leq &\,V(x(t))+V(x(t+d))-V(x(t))+ \\
&\,|\alpha-d|\dot{V}(x(t)) + \frac{|\alpha^2-d^2|}{2!}\ddot{V}(x(t)) + \ldots +\\
&\, \frac{|\alpha^{r-1}-d^{r-1}|}{(r-1)!}V^{(r-1)}(x(t)) +\\
&\,\bigg|\frac{\alpha^r}{r!}V^{(r)}(z(t)) - \frac{d^r}{r!} V^{(r)}(x(w))\bigg|\\
\leq &\, V(x(t))+V(x(t+d))-V(x(t))+\\
&\, |\alpha-d|\dot{V}(x(t)) +|\alpha-d|\epsilon.
\end{split}
\end{equation}
Since $V(x(t+d))-V(x(t))<0$, there exists a $\beta\in\C{R}_{> 0}$ such that $V(x(t+d))-V(x(t))<-\beta$, for all $x(0)\in\mathcal{S}\setminus\mathcal{S}_V(c)$, where $\mathcal{S}\setminus\mathcal{S}_V(c)$ is a compact set containing no equilibrium point. Furthermore, on the compact set $\mathcal{S}\setminus\mathcal{S}_V(c)$, $\epsilon>0$ is a bound on the sum of higher order continuous terms in the above expression. This is due to the fact that continuous functions are bounded on compact sets. As such, we obtain that
\begin{equation}
\nonumber
\begin{split}
V_1(x(t)) < & \,V(x(t)) -\beta +|\alpha- d| (\dot{V}(x(t))+ \epsilon).\\
\end{split}
\end{equation}
Let $|\alpha-d|<\bar{\alpha}$ and let $\dot{V}(x(t))+\epsilon\leq\nu$, with $\nu\in\C{R}_{>0}$, for all $x\in\mathcal{S}\setminus\mathcal{S}_V(c)$; such a bound exists since every continuous function is bounded on a compact set. Thus, we obtain that
$$V_1(x(t))<V(x(t)) +\bar{\alpha}\nu-\beta.$$
If $\bar{\alpha}\nu-\beta<0$, i.e., for $0<|\alpha-d|<\beta/\nu$, it holds that
$$V_1(x(t))<V(x(t)).$$
Similarly to \cite[Lemma $4$-$1$]{Chiang89}, this implies that $\mathcal{S}_V(c)\subset\mathcal{S}_{V_1}(c)$. Next we will show that $V_1$ is a FTLF. From Lemma~\ref{lemma:02.01} we know that the function $W(x(t))=\int_t^{t+d} V(x(\tau))\mathrm{d}\tau$ is a LF for \eqref{eq:01.01}. From \cite[Lemma 4-2]{Chiang89} it is known that there exists some $\alpha>0$ such that $W_1(x(t))= W(x(t)+\alpha f(x(t)))=\int_t^{t+d} V(x(\tau)+\alpha f(x(\tau)))\mathrm{d}\tau$ is a LF.
This implies that there exists some $\mathcal{K}$--function $\tilde{\gamma}$ such that
\begin{equation}\nonumber
\begin{split}
\dot{W}_1(x(t))&\,=V(x(t+d) +\alpha f(x(t+d)))-\\
&\quad \,\,\, V(x(t)+\alpha f(x(t)))\\
&\,\leq-\tilde{\gamma}(\norm{x(t)}),
\end{split}
\end{equation}
where the Leibniz integral rule was used. This further implies that $V_1(x(t+d))-V_1(x(t))\leq-\tilde{\gamma}(\norm{x(t)})$, thus $V_1$ is a FTLF.
\end{IEEEproof}
\section{Verification}
\label{sec:04}
\subsection{The indirect approach}
\label{sec:04.01}
In Section~\ref{sec:02} it has been shown that if the equilibrium of a given system is $\mathcal{K}\mathcal{L}$--stable, then a method to construct a Lyapunov function is provided by \eqref{eq:02.11}, for $V(x)$ defined by any function $\eta\in\mathcal{K}_\infty$ and any norm. The method is constructive, starting with a given candidate $d$--invariant set $\mathcal{S}$ and a candidate function $V(x)=\eta(\norm{x})$. Due to the $d$--invariance property of $\mathcal{S}$, verifying condition \eqref{eq:02.01b} for the chosen $V$ is reduced to verifying
\begin{equation}
\label{eq:04.05}
V(x(d))-V(x(0))\leq-\gamma(\norm{x(0)})<0,
\end{equation}
for all $x(0)\in\mathcal{S}$. The difficulty in verifying \eqref{eq:04.05} is given by the need to compute $x(d)$, for all $x(0)\in\mathcal{S}$. However, if $x(d)$ is known analytically, then it suffices to verify \eqref{eq:04.05} for all initial conditions in a chosen set $\mathcal{S}$. Then the largest level set of $W$, defined as in \eqref{eq:02.11}, which is included in $\mathcal{S}$, is a subset of the true DOA of the considered system. The verification of \eqref{eq:04.05} translates into solving a problem of the type
\begin{equation}
\label{eq:04.05.ver}
\begin{split}
&\max_{x(0)} \,[V(x(d))-V(x(0))]\\
&\mbox{subject to } x(0)\in\mathcal{S}.
\end{split}
\end{equation}
The regularity assumptions on the map describing the dynamics \eqref{eq:01.01} ensure that the solution is continuous for all $t\in[0,d]$. Furthermore, since $V(x)=\eta(\norm{x})$ with $\eta\in\mathcal{K}_\infty$ and the $d$--invariant candidate set $\mathcal{S}$ is compact, the problem \eqref{eq:04.05.ver} will always have a global optimum. A way to avoid solving \eqref{eq:04.05.ver} will be indicated in Section~\ref{sec:04.02}. When the analytical solution is not known, or obtaining a numerical approximation is computationally tedious, as can be the case for higher order nonlinear systems, we propose the following approach, starting from the linearized dynamics of \eqref{eq:01.01}. In what follows we recall some relevant properties of the map $f$ with respect to its linearization. The detailed derivations can be found in \cite[Chapter 4.3]{Khalil2002}. Firstly, by the mean value theorem it follows that
\begin{equation}
\label{eq:04.01}
f_i(x)=f_i(0)+\frac{\partial f_i}{\partial x}(z_i) x,\quad i =1,\ldots,n,
\end{equation}
where $z_i$ is a point on the line connecting $x$ to the origin.
Since the origin is an equilibrium of \eqref{eq:01.01}, $$f_i(x)=\frac{\partial f_i}{\partial x}(z_i) x = \frac{\partial f_i}{\partial x}(0)x + \left(\frac{\partial f_i}{\partial x}(z_i)-\frac{\partial f_i}{\partial x}(0)\right)x.$$ Then, \begin{equation} \label{eq:04.02} f(x)=A x +g(x), \end{equation} with $$A = \frac{\partial f}{\partial x}(0)=\left[\frac{\partial f(x)}{\partial x}\right]_{x=0},$$ $$g_i(x)=\left(\frac{\partial f_i}{\partial x}(z_i)-\frac{\partial f_i}{\partial x}(0)\right)x.$$ Furthermore, $$\norm{g_i(x)}\leq \norm{\frac{\partial f_i}{\partial x}(z_i)-\frac{\partial f_i}{\partial x}(0)}\norm{x}$$ and $$\frac{\norm{g(x)}}{\norm{x}}\rightarrow 0,\, \mbox{as}\, \norm{x}\rightarrow 0.$$ In this way, in a sufficiently small region around the origin we can approximate the system \eqref{eq:01.01} with $\dot{x}=Ax$. The next result is an analog to \emph{Lyapunov's indirect method} \cite[Theorem 4.7]{Khalil2002}, but in terms of the FTLF concept. The aim is to provide a validity result for the FT condition \eqref{eq:02.01b} for the nonlinear system \eqref{eq:01.01}, whenever a (global) FT type condition is satisfied for the linearized system with respect to the origin. \begin{theorem} \label{th:04.01} Let $V(x)=\norm{x}$ be a global FTLF function for $\dot{x}= Ax$, i.e. there exists a $d>0$ such that $\norm{e^{Ad}} < 1$. Additionally, let \begin{equation} \label{eq:04.03} e^{d\mu(A)}-1=-\varsigma,\quad \varsigma\in\C{R}_{>0} \end{equation} be satisfied. Then the following statements hold. \begin{enumerate}[1.] \item There exists a $d$--invariant set $\mathcal{S}$ for which $V(x)$ is a FTLF for \eqref{eq:01.01}. \item There exists a set $\mathcal{A}\subseteq\mathcal{S}$ for which \begin{equation} \label{eq:04.W} W(x) = \int_0^d V(x+\tau f(x))\mathrm{d}\tau \end{equation} is a LF for \eqref{eq:01.01}. \end{enumerate} \end{theorem} \begin{IEEEproof} We start by proving point 1. As indicated in \cite{Khalil2002}, from $\frac{\norm{g(x)}}{\norm{x}}\rightarrow 0$, as $\norm{x}\rightarrow 0$, it follows that for any $\delta>0$, there exists an $r>0$ such that $\norm{g(x)}<\delta\norm{x}$, for $\norm{x}<r$. Thus, the solution of the system defined with the map in \eqref{eq:04.02} is bounded, whenever $g(x)$ is bounded \cite{Soderlind}, as shown below. \begin{equation} \nonumber \begin{split} \norm{x(d)}\leq &\, e^{d\mu(A)}\norm{x(0)} +\int_0^d e^{(d-\tau)\mu(A)}\norm{g(x(\tau))}\mathrm{d}\tau\\ \leq &\, e^{d\mu(A)}\norm{x(0)} + \int_0^d e^{(d-\tau)\mu(A)}\delta\norm{x(\tau)}\mathrm{d}\tau. \end{split} \end{equation} By applying the Bellman--Gronwall Lemma~\ref{lemma:01.03} to the inequality above, we obtain \begin{equation} \nonumber \begin{split} \norm{x(d)} \leq &\, e^{d\mu(A)}\norm{x(0)}e^{\int_0^d e^{(d-\tau)\mu(A)}\delta\mathrm{d}\tau}\\ \leq &\, e^{d\mu(A)}\norm{x(0)}e^{\delta\frac{e^{d\mu(A)}-1}{\mu(A)}}\\ \leq &\,e^{d\mu(A)-\frac{\varsigma\delta}{\mu(A)}}\norm{x(0)}. \end{split} \end{equation} Thus, $V(x(d))\leq \rho V(x(0))$, with $\rho := e^{d\mu(A)-\frac{\varsigma\delta}{\mu(A)}}$. For the equivalent FT condition \eqref{eq:02.06} to hold, $\rho$ must be subunitary, and equivalently \begin{equation} \label{eq:04.04} d\mu(A)-\frac{\varsigma\delta}{\mu(A)}<0. \end{equation} For equation \eqref{eq:04.03} to hold, $\mu(A)$ must be negative. Thus, we obtain that \begin{equation} \label{eq:04.bound} \delta\leq\frac{d\mu(A)^2}{\varsigma}, \end{equation} which provides an upper bound on $\delta$, and thus on $r$. 
Consequently, there exists a $d$--invariant set $\mathcal{S}\subseteq\{x\in\C{R}^n\,|\,\norm{x}< r\}$. As for the second item of the theorem, let us consider the Dini derivative expression for $\dot{W}(x(0))$, $$\dot{W}(x(0))=\mathscr{D}^+ W(x(0))= \limsup_{h\rightarrow 0^+}\frac{W(x(h))-W(x(0))}{h}.$$ Then, \begin{equation} \nonumber \begin{split} \mathscr{D}^+ W(x(0)) = &\,\limsup_{h\rightarrow 0^+}\bigg[\frac{\int_h^{h+d} V(x(0)+\tau f(x(0)))\mathrm{d}\tau-}{h} \\ &\frac{\int_0^d V(x(0)+\tau f(x(0)))\mathrm{d}\tau}{h}\bigg]\\ =&\,\limsup_{h\rightarrow 0^+}\bigg[\frac{\int_d^{d+h} V(x(0)+\tau f(x(0)))\mathrm{d}\tau - }{h}\\ &\frac{\int_0^{h} V(x(0)+\tau f(x(0)))\mathrm{d}\tau}{h}\bigg]\\ =\footnotemark &\, V(x(0)+df(x(0)))-V(x(0))\\ < &\, V(x(d))-V(x(0))\leq-\gamma(\norm{x(0)}), \end{split} \end{equation} \footnotetext{Here we applied L'Hospital rule together with Leibniz integral rule. } where we used the fact that $V_1(x(t))<V(x(t))$ shown in the proof of Corollary~\ref{co:03.01}. \end{IEEEproof} In the theorem above, due to the equivalence result in Lemma~\ref{lemma:02.01}, existence of a FTLF $V$ is equivalent to existence of a true LF $W$ defined as in \eqref{eq:02.11}. Since $V$ is only valid in the region around the origin defined by $\delta$, $W$ will also be a valid LF in some subset of that region. However, the expression of $W$ in \eqref{eq:02.11} still involves knowing the solution $x(d)$. In this case, relying on the solution of the linearized system might lead to conservative approximations of the DOA. In view of the fact that we want to construct LFs that lead to relevant DOA estimates, the construction in \eqref{eq:04.W} is more suitable as it includes the nonlinear vector field. The condition in \eqref{eq:04.03}, essentially requires that there should exist a (weighted) norm for which the induced logarithmic norm of the matrix obtained by evaluating the Jacobian associated to \eqref{eq:01.01} at the origin is negative. The choice of the norm inducing the matrix measure is dictated by the choice of the norm defining the FTLF $V(x)$. A similar condition has been introduced in \cite{Sontag2014} for characterizing infinitesimally contracting systems on a given convex set $\mathbb{X}\subseteq\C{R}^n$. Therein, the logarithmic norm of the Jacobian at all points in the set $\mathbb{X}$ is required to be negative. \subsection{Computational procedure} \label{sec:04.02} \subsubsection{Compute a $d$ for which the finite--time condition holds for the linearized system with respect to the origin} Let $$\dot{\delta} x = \left[\frac{\partial f(x)}{\partial x}\right]_{x=0}\delta x,$$ be the linearized system. Then condition \eqref{eq:04.05} can be verified as: $$\eta(\norm{e^{d\left[\frac{\partial f(x)}{\partial x}\right]_{x=0}}x(0)})-\eta(\norm{x(0)})<0,$$ for all $x(0)$ in some compact, proper set $\mathcal{S}$. As such, for a given value of $d$, \eqref{eq:04.05} can be verified via a feasibility problem. If $\eta =\mathrm{id}$ then the feasibility problem above translates into verifying the matrix norm condition: \begin{equation} \label{eq:norm} \norm{e^{d\left[\frac{\partial f(x)}{\partial x}\right]_{x=0}}}<1. \end{equation} Note that if there exists a $d>0$ such that the condition \eqref{eq:04.03} holds for some $\varsigma>0$, then condition \eqref{eq:norm} is implicitly satisfied since it holds that $\norm{e^{dA}}\leq e^{d\mu(A)}$ for some real matrix $A$ \cite{Soderlind}. 
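As a minimal numerical sketch of this first step (added for illustration only; it assumes a Python environment with \verb|numpy| and \verb|scipy|, and the grid of $d$ values is a placeholder), the logarithmic norm $\mu_2(A)$ and the norm condition \eqref{eq:norm} can be checked as follows, here using as test matrix the Jacobian at the origin of the example system \eqref{eq:05.01} from Section~\ref{sec:05}:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def mu2(A):
    # Logarithmic norm induced by the 2-norm: lambda_max((A + A^T)/2).
    return np.max(np.linalg.eigvalsh(0.5 * (A + A.T)))

def smallest_d(A, d_grid):
    # Return the first d in the grid with ||exp(d*A)||_2 < 1, if any.
    for d in d_grid:
        if np.linalg.norm(expm(d * A), 2) < 1.0:
            return d
    return None

# Jacobian at the origin of x1' = -x1 + x1*x2, x2' = -x2.
A = np.array([[-1.0, 0.0],
              [0.0, -1.0]])
print("mu_2(A) =", mu2(A))   # negative, as needed for e^{d*mu(A)} - 1 = -varsigma < 0
print("d      =", smallest_d(A, np.linspace(0.1, 5.0, 50)))
\end{verbatim}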
\subsubsection{Compute $W(x)$} \begin{equation} \label{eq:04.06} W(x)=\int_{0}^d V(x+\tau f(x))\mathrm{d}\tau. \end{equation} In this construction we exploit the results in Corollary~\ref{co:03.01} and Theorem~\ref{th:04.01}, which guarantee that $V(x+\tau f(x))$ remains a FTLF for \eqref{eq:01.01} for all $\tau\in[0,d]$. \subsubsection{Find the best DOA estimate of \eqref{eq:01.01} provided by $W$} Let $C$ be such that the sublevel set $\{x\in\C{R}^n\,|\,W(x)\leq C\}$ is included in the region where $\nabla^\top W(x) f(x)<0$ away from the origin, i.e., in the region bounded by the zero level set of $\nabla^\top W(x) f(x)$. Finding $C$ implies solving an optimization problem which involves rather complex nonlinear functions. However, a sequence of feasibility problems (for example, via bisection on $C$) can be solved successfully to obtain the best $C$ which leads to a true DOA estimate. The feasibility problem is as follows: \begin{equation} \label{eq:04.07} \begin{split} &\max_{x} \,\nabla^\top W(x) f(x)\\ &\mbox{subject to } W(x)\leq C, \end{split} \end{equation} for a given $C$ value. The largest $C$ value for which $\dot{W}(x)$ remains negative renders the best DOA estimate provided by a true LF $W(x)$, which is valid for \eqref{eq:01.01} since $\dot{W}(x)=\nabla^\top W(x) f(x)$. The optimization problem \eqref{eq:04.07} is standard in checking validity regions of LFs a posteriori to their construction. For more details, see \cite{Chesi2011} or \cite{Hachicho2007}. The bound \eqref{eq:04.bound}, via $r$, provides an implicit, a priori theoretical indication of the region where $W$ is a valid Lyapunov function, or in other words of a subset of the DOA. An explicit estimate can be obtained a posteriori, after computing $W$, by solving the problem above. \subsubsection{Further improve the DOA estimate by expansion of $W$} The DOA estimate obtained above can be further improved by making use of expansion methods as introduced in \cite{Chiang89}. More specifically, the expansion method states that the $C$--level set of $W_1(x)=W(x+\alpha f(x))$ for some $\alpha\in\C{R}_{>0}$ is a subset of the true DOA of the considered system and it will contain the estimate of the DOA obtained by the $C$--level set of $W$. In summary, the proposed method starts by verifying the finite--time decrease condition \eqref{eq:04.05} for a candidate function $V(x)=\eta(\norm{x})$, $\eta\in\mathcal{K}_\infty$, and a candidate $d$--invariant set $\mathcal{S}$. The simplest way to do this, while avoiding solution approximations, given in the first step of the computation procedure above, is to verify \eqref{eq:04.05} globally ($x\in\C{R}^n$) for the linearization of \eqref{eq:01.01} around the origin. Then, it is known by Theorem~\ref{th:04.01} that there exists a set $\mathcal{S}\subseteq\{x\in\C{R}^n\,|\,\norm{x}<r\}$ which is $d$--invariant, such that \eqref{eq:04.05} holds for the true nonlinear system. Next, in the second step of the procedure, $W$ is computed via the analytic formula \eqref{eq:04.06}, which yields an educated guess of a true LF. Thus, the final check in the third step is a verification of the standard Lyapunov condition on $W$, with $W$ known, and with the aim of maximizing the level $C$ of $W$ for which the condition is satisfied. In contrast, most of the other methods proposed in the literature compute the LF $W$ simultaneously with verifying the negative definiteness condition on its derivative, which is in general a more difficult task, not least because it is not clear how to select a non--conservative $W$.
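The three steps above can be prototyped directly. For a quadratic candidate $V(x)=x^\top P x$ with symmetric $P$, the integral in \eqref{eq:04.06} evaluates in closed form to $W(x)=d\,x^\top P x + d^2\, x^\top P f(x) + \tfrac{d^3}{3}\, f(x)^\top P f(x)$. The sketch below (Python with NumPy; an illustration only, with a hypothetical test level $C$ and crude random sampling in place of the feasibility problem \eqref{eq:04.07}) computes $W$ in this form for the planar example of Section~\ref{ex:05.01} and evaluates $\nabla^\top W(x) f(x)$ over sampled points of the candidate sublevel set.
\begin{verbatim}
# Sketch: steps 2)-3) of the procedure for a quadratic FTLF V(x) = x'Px.
# For this V, W(x) = int_0^d V(x + tau f(x)) dtau has the closed form
#   W(x) = d x'Px + d^2 x'P f(x) + (d^3/3) f(x)'P f(x).
import numpy as np

def make_W(f, P, d):
    def W(x):
        fx = f(x)
        return d * x @ P @ x + d**2 * x @ P @ fx + (d**3 / 3.0) * fx @ P @ fx
    return W

def Wdot(W, f, x, eps=1e-6):
    # dW/dt along the flow = grad W(x) . f(x); gradient by central differences
    g = np.zeros_like(x)
    for j in range(len(x)):
        e = np.zeros_like(x); e[j] = eps
        g[j] = (W(x + e) - W(x - e)) / (2.0 * eps)
    return g @ f(x)

# Illustration: the planar example of Section V-A with P = 0.1*I and d = 2.4;
# C below is a hypothetical level to be adjusted, e.g., by bisection.
f = lambda x: np.array([-x[0] + x[0] * x[1], -x[1]])
P, d, C = 0.1 * np.eye(2), 2.4, 0.3
W = make_W(f, P, d)

rng = np.random.default_rng(0)
worst = -np.inf
for _ in range(20000):                       # crude sampling instead of (eq:04.07)
    x = rng.uniform(-4, 4, size=2)
    if 0 < W(x) <= C:
        worst = max(worst, Wdot(W, f, x))
print("max of grad(W).f over sampled {W <= C}:", worst,
      "(negative => C defines a valid sublevel set)")
\end{verbatim}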
\section{Examples} \label{sec:05} \subsection{An illustrative example with no polynomial LF} \label{ex:05.01} Consider the system \begin{eqnarray} \label{eq:05.01} \dot{x}_1&=&-x_1+x_1x_2\\ \nonumber \dot{x}_2&=&-x_2, \end{eqnarray} with solutions $x_1(t)=x_1(0)e^{(x_2(0)-x_2(0)e^{-t} -t)}$ and $x_2(t)=x_2(0)e^{-t}$. In \cite{Parillo2011} it has been shown that the system is GAS by using the Lyapunov function $V_{GAS}(x)=\ln{(1+x_1^2)} +x_2^2$. Furthermore, it has been shown that no polynomial LF exists for this system. We will illustrate our converse results on this system and we will show that the proposed constructive approach leads to local approximations of the DOA that cover a larger area of the state space compared to similar sublevel sets of $V_{GAS}$. Since the system is GAS, it is $\mathcal{K}\mathcal{L}$--stable with $\mathcal{S}=\C{R}^n$, and by Theorem~\ref{th:02.02}, for any function $\eta\in\mathcal{K}_\infty$ there exist a set $\mathcal{S}\subseteq\C{R}^n$ and a scalar $d$ such that the function $V(x)=\eta(\norm{x})$ is a FTLF for the system \eqref{eq:05.01} for any $x\in\mathcal{S}$. \begin{figure} \caption{Plots of the level sets of the functions $V$, $W$ and $W_1$ computed for \eqref{eq:05.01}.} \label{fig:05.01a} \label{fig:05.01b} \label{fig:05.01c} \label{fig:05.01} \end{figure} Consider the set $\mathbb{X}:=\{x\in\C{R}^n\,|\,\norm{x}_\infty\leq 4\}$, which is displayed in Figure~\ref{fig:05.01} with the black contour. Pick $V(x)=x^\top P x$, where $P=\left(\begin{smallmatrix} 0.1 & 0\\ 0 & 0.1 \end{smallmatrix}\right).$ For this choice of $V$, the feasibility problem \eqref{eq:04.05.ver} was solved using the \verb|sqp| algorithm with \verb|fmincon|. Since all involved functions are Lipschitz and we considered a polytopic, compact candidate set $\mathcal{S}$, the problem \eqref{eq:04.05.ver} has a global optimum. Thus condition \eqref{eq:04.05}, and consequently \eqref{eq:02.01b}, holds for $d=2.4$ for any $x\in\mathcal{S}$, where $\mathcal{S}$ is the largest level set of $V$ included in the set $\mathbb{X}$; $\mathcal{S}$ is shown in Figure~\ref{fig:05.01a} in red. By Lemma~\ref{lemma:02.01}, we obtain that $$W(x(t))=\int_{t}^{t+d}x(\tau)^\top P x(\tau) \mathrm{d}\tau$$ is a Lyapunov function for \eqref{eq:05.01}, for any $t\in\C{R}_{\geq 0}$. From \eqref{eq:04.05} and the equivalence result in Lemma~\ref{lemma:02.01}, it is sufficient to compute $W(x(0))$. Since the system is GAS, any scaling of the set $\mathcal{S}$ will satisfy \eqref{eq:02.01b}, albeit with a bigger $d$, due to the nonlinear dynamics. For the computed $d$ and chosen $\mathcal{S}$ and $V$, let $\mathcal{S}_V(C_V)$ denote the largest level set of $V$ included in $\mathcal{S}$. Then, the largest level set of $W$ in $\mathcal{S}_V(C_V)$ will be a subset of the true DOA of the system. In Figure~\ref{fig:05.01a} we show the level set of $W$ defined by $C_W=0.415$ in blue. Next, we also illustrate the expansion method. Consider $W_1(x(t))=W(x(t)+\alpha_1f(x(t)))$, with $\alpha_1=0.1$. The level set defined by $W_1(x) =C_{W_1}$, where $C_{W_1} = 1.5$, is shown in Figure~\ref{fig:05.01b} in green. Note that, by construction, the level set $W_1(x)=1.5$ is not restricted to the black set anymore. For the sake of comparison we show in Figure~\ref{fig:05.01c} a plot of level sets of the two computed functions $W$ and $W_1$, and a relevant level set of the logarithmic LF $V_{GAS}$.
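As a quick consistency check of the closed-form solution used above, the following sketch (Python with SciPy; an illustration only, with an arbitrary test initial condition) integrates \eqref{eq:05.01} numerically and compares the result with the stated expressions for $x_1(t)$ and $x_2(t)$.
\begin{verbatim}
# Sketch: verify the closed-form solution of (eq:05.01) numerically.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    return [-x[0] + x[0] * x[1], -x[1]]

x0 = np.array([2.0, -1.5])                  # arbitrary test initial condition
T = np.linspace(0.0, 5.0, 51)
sol = solve_ivp(rhs, (0.0, 5.0), x0, t_eval=T, rtol=1e-10, atol=1e-12)

x1_exact = x0[0] * np.exp(x0[1] - x0[1] * np.exp(-T) - T)
x2_exact = x0[1] * np.exp(-T)

print("max |x1 - x1_exact| =", np.max(np.abs(sol.y[0] - x1_exact)))
print("max |x2 - x2_exact| =", np.max(np.abs(sol.y[1] - x2_exact)))
\end{verbatim}
Any discrepancy here would indicate a transcription error in the closed-form solution rather than an issue with the construction of $W$.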
\subsection{3D example from literature} \label{ex:05.02} We consider the system in \cite[Example 5]{SiggiMicnon2015}, described by \begin{eqnarray} \label{eq:05.02} \nonumber \dot{x}_1&=&x_1(x_1^2+x_2^2-1)-x_2(x_3^2+1)\\ \dot{x}_2&=&x_2(x_1^2+x_2^2-1)+x_1(x_3^2+1)\\ \nonumber \dot{x}_3&=&10x_3(x_3^2 -1). \end{eqnarray} \begin{figure} \caption{Plot of the level set $W(x)=0.19$--green and the DOA approximation from \cite{SiggiMicnon2015}--red.} \label{fig:05.02a} \end{figure} \begin{figure} \caption{Validation of level set of $W$: $\nabla^\top W f=0$--red, level set of $W(x)=0.19$--green.} \label{fig:05.02b} \end{figure} For this system a piecewise affine LF was computed in \cite{SiggiMicnon2015}, resulting in the DOA plotted in Figure~\ref{fig:05.02a} with red. This plot can be found at \emph{www.ru.is/kennarar/sigurdurh/MICNON2015CPP.rar}. The blue dots in the figure represent the infeasibility points from computing the piecewise affine LF. For this system, we do not know an analytic expression for $x(d)$. Thus, we apply the steps described in Section~\ref{sec:04.02} for verification, starting with the linearized dynamics. Let $V(x)=x^\top P x$, where $P$ is the identity matrix. Then, the condition \eqref{eq:norm} holds with $d=0.2$. In Figure~\ref{fig:05.02b} the level set defined by $W(x)=0.19$ is plotted with green (inner set) together with the zero level set of $\dot{W}(x)$ in red. The value of $C=0.19$ was obtained by solving the feasibility problem \eqref{eq:04.07}. In Figure~\ref{fig:05.02a} we show the level set $W(x)=0.19$ and a trajectory of the system \eqref{eq:05.02}, initialized at $x_0=[0.5933\,-0.3636\,-0.6869]^\top$ with $W(x_0)=0.1198$, which is one of the infeasible points in \cite{SiggiMicnon2015}. As for the computation time, the example was worked out using Matlab R2015b on a MacBook Pro with a 2.8 GHz Intel Core i5 processor and resulted in a total computation time of $21.2712$ s, of which solving the problem \eqref{eq:04.07} takes $15.2496$ s using \verb|fmincon| with the optimization algorithm \verb|sqp|. Finding the expression of $W$ takes $0.1726$ s. \subsection{Nonpolynomial 2D system--the genetic toggle switch} \label{ex:05.03} Consider the genetic toggle switch in \emph{Escherichia coli} constructed in \cite{Gardner2000}, \begin{eqnarray} \label{eq:05.03} \nonumber \dot{x}_1&=&\frac{\alpha_1}{1+x_2^\beta}-x_1\\ \dot{x}_2&=&\frac{\alpha_2}{1+x_1^\gamma}-x_2. \end{eqnarray} \begin{figure} \caption{Level sets of the computed LFs corresponding to $E_1$--blue and $E_3$--red for the toggle switch system \eqref{eq:05.03} together with vector field plots.} \label{fig:05.03} \end{figure} The genetic toggle switch is a synthetic, bistable gene--regulatory network, constructed from any two repressible promoters arranged in a mutually inhibitory network. The model \eqref{eq:05.03}, proposed in \cite{Gardner2000}, was derived from a biochemical rate equation formulation of gene expression. For this model, the set of parameters for which bistability is ensured is of special interest, as it accommodates the real behavior of the toggle switch. The toggle switch circuit has implications as an addressable memory unit in biotechnology and gene therapy. When at least one of the parameters $\beta,\gamma>1$, bistability occurs.
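As an illustration of the computations described in the remainder of this subsection (the equilibria for the parameter values $\alpha_1=1.3$, $\alpha_2=1$, $\beta=3$, $\gamma=10$ and the quadratic FTLF matrix $P$), the following sketch (Python with NumPy/SciPy; not the implementation used here, and solving the Lyapunov equation $A^\top P+PA=-2I$ as one particular choice satisfying the inequality used below) may be helpful.
\begin{verbatim}
# Sketch: equilibria of the toggle switch (eq:05.03) and a Lyapunov matrix P
# for the quadratic FTLF candidate, for the parameter values used in the text.
import numpy as np
from scipy.optimize import fsolve
from scipy.linalg import solve_continuous_lyapunov

a1, a2 = 1.3, 1.0
beta, gamma = 3, 10

def f(x):
    return np.array([a1 / (1.0 + x[1]**beta) - x[0],
                     a2 / (1.0 + x[0]**gamma) - x[1]])

def jac(x, eps=1e-7):
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return J

# Initial guesses taken from the equilibrium values reported in the text.
for guess in ([0.67, 0.98], [0.88, 0.78], [1.30, 0.07]):
    E = fsolve(f, guess)
    A = jac(E)
    eigs = np.linalg.eigvals(A)
    print("equilibrium", np.round(E, 4), " eigenvalues", np.round(eigs, 4))
    if np.all(eigs.real < 0):             # stable: build P for V = x'Px
        P = solve_continuous_lyapunov(A.T, -2.0 * np.eye(2))  # A'P + PA = -2I
        print("   P =", np.round(P, 4))
\end{verbatim}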
In \eqref{eq:05.03}, $x_1$ denotes the concentration of repressor $1$, $x_2$ is the concentration of repressor $2$, $\alpha_1$ is the effective rate of synthesis of repressor $1$, $\alpha_2$ is the effective rate of synthesis of repressor $2$, $\beta$ is the cooperativity of repression of promoter $2$ and $\gamma$ is the cooperativity of repression of promoter $1$. The rational terms in the above equations represent the cooperative repression of constitutively transcribed promoters and the linear terms represent the degradation/dilution of the repressors. The two possible stable states are one in which promoter $1$ transcribes repressor $2$ and one in which promoter $2$ transcribes repressor $1$. Let the parameters be defined by $\alpha_1=1.3$, $\alpha_2=1$, $\beta=3$ and $\gamma=10$. For this set of parameters the stable equilibria are $E_1=\begin{pmatrix} 0.668 & 0.9829 \end{pmatrix}$ and $E_3=\begin{pmatrix} 1.2996 & 0.0678 \end{pmatrix}$, which correspond to the cases in which promoter $1$ transcribes repressor $2$ and in which promoter $2$ transcribes repressor $1$, respectively. The unstable equilibrium is $E_2=\begin{pmatrix} 0.8807 & 0.7808 \end{pmatrix}$. The separation between the DOAs corresponding to $E_1$ and $E_3$ is achieved by the stability boundary or separatrix. It is known that the unstable equilibrium $E_2$ belongs to the stability boundary \cite{Chiang88}. The problem of computing the DOAs of the two stable equilibria was previously addressed by means of PWA approximating dynamics in \cite{Yordanov2013}, by computing reachable sets via a linear temporal logic formalism; however, the method is computationally very expensive even for such a system with only two states. By following the procedure in Section~\ref{sec:04.02}, the DOAs corresponding to $E_1$ and $E_3$ were computed and are shown in Figure~\ref{fig:05.03}. Note that the computation procedure needs to be carried out for each system resulting from translating the corresponding equilibrium to the origin. For each system corresponding to $E_1$ and $E_3$, a quadratic FTLF candidate was considered, where the matrix $P$ is a solution of the classical Lyapunov inequality $A^\top P+PA<-I$, where $I$ denotes the identity matrix and $A= \left[\frac{\partial f(x)}{\partial x}\right]_{x=E_i}$ for each $i=1,3$. The blue level set defined by $C_1=0.07$ and $d=1.2$ corresponds to $E_1$ and the black level set defined by $C_3=0.8$ and $d=0.4$ corresponds to $E_3$. The trajectories starting from initial conditions close to the stability boundary will first approach $E_2$ and from there, via its unstable directions, converge either to $E_1$ or to $E_3$. As shown in Figure~\ref{fig:05.03}, the DOA estimates go very close to $E_2$ (red), thus close to the stability boundary. However, the computed sets seem conservative with respect to the directions of the vector field for initial conditions far from the equilibria in the positive orthant. This is due to the fact that higher level sets of the corresponding LFs would reach the unstable equilibrium and thus cross the stability boundary. In fact, this is not an issue for the toggle switch, as the real--life behavior is centered around the three equilibria and the stability boundary. \subsection{Nonpolynomial 3D system--the HPA axis} \label{ex:05.04} The following model has been proposed in \cite{Andersen2013} to illustrate the behavior of the Hypothalamic-Pituitary-Adrenal (HPA) axis.
\begin{eqnarray} \label{eq:05.04} \nonumber \dot{x}_1 &=& \left(1+\xi\frac{x_3^\alpha}{1+x_3^\alpha} -\psi\frac{x_3^\gamma}{x_3^\gamma +\tilde{c}_3^\gamma}\right) -\tilde{w}_1x_1 \\ \dot{x}_2 &=&\left(1-\rho\frac{x_3^\alpha}{1+x_3^\alpha}\right)x_1-\tilde{w}_2x_2\\ \nonumber \dot{x}_3 &=& x_2-\tilde{w}_3x_3. \end{eqnarray} The HPA axis is a system which acts mainly to maintain body homeostasis by regulating the level of cortisol. The three hormones involved in the HPA axis are CRH ($x_1$), ACTH ($x_2$) and cortisol ($x_3$). For certain parameter values the system \eqref{eq:05.04} has a unique stable equilibrium, which relates to the cortisol level returning to normal after periods of mild stress in healthy individuals. When the parameters are perturbed, a bifurcation can occur which leads to bistability. The stable states correspond to hypercortisolemic and hypocortisolemic equilibria, respectively. \begin{figure} \caption{Level sets of the computed LFs corresponding to $E_1$--green and $E_3$--black for the HPA system \eqref{eq:05.04}. } \label{fig:05.04} \end{figure} For the parameter values $\tilde{w}_1=4.79$, $\tilde{w}_2=0.964$, $\tilde{w}_3=0.251$, $\tilde{c}_3=0.464$, $\psi=1$, $\xi=1$, $\rho = 0.5$, $\gamma =\alpha=5$, the HPA system has three equilibria, $E_1 = \begin{pmatrix}0.1170 & 0.1199 & 0.4778\end{pmatrix}$, $E_2 = \begin{pmatrix} 0.2224 & 0.2017 & 0.8039\end{pmatrix}$ and $E_3 = \begin{pmatrix} 0.7833 & 0.4316 & 1.7196 \end{pmatrix}$, with $E_1$ and $E_3$ stable and $E_2$ unstable. By following the same steps as those indicated in the case of the toggle switch system, for the $E_1$ equilibrium the level set defined by $C_1=0.08$ and $d=0.4$ is plotted in Figure~\ref{fig:05.04} (the lower set) and for the $E_3$ equilibrium the level set defined by $C_3=1$ and $d=0.4$ is plotted in Figure~\ref{fig:05.04} (the upper set). It is worth noting that for this system the logarithmic norm defined by the $2$--norm, $\mu_2(A)$, is positive, with $A=\left[\frac{\partial f}{\partial x}\right]_{x=E_1}$. Thus, equality \eqref{eq:04.03} will not be satisfied. However, since the considered FTLF is $V(x)=x^\top P x$, the relevant weighted logarithmic norm satisfies $\mu_{2,P}(A)<0$. \subsection{Nonpolynomial 3D system--the repressilator} \label{ex:05.05} Regulatory molecular networks, especially the oscillatory networks, have attracted a lot of interest from biologists and biophysicists because they are found in many molecular pathways. Abnormalities of these processes lead to various diseases, from sleep disorders to cancer. The naturally occurring regulatory networks are very complex, so their dynamics have been studied by highly simplified models \cite{Buse2010}, \cite{Elowitz2000}. These models are particularly valuable because they can provide an understanding of the important properties in the naturally occurring regulatory networks and, thus, support the engineering of artificial ones. Moreover, these rather simple models can describe behaviors observed in experiments rather well \cite{Elowitz2000}. An example of such a network is the repressilator. Its genetic implementation uses three proteins that cyclically repress the synthesis of one another.
\begin{figure} \caption{Computation of the DOA of $E=(1.516, 1.516, 1.516)$ for \eqref{ch6:eq:03.01}.} \label{ch6:fig:03.01} \label{ch6:fig:03.02} \end{figure} A model for the repressilator was first proposed in \cite{Elowitz2000} and we consider here the simplified version from \cite{Buse2010}: \begin{eqnarray} \label{ch6:eq:03.01} \nonumber \dot{x}_1 &=&\frac{\alpha}{1+x_2^\beta}-x_1 \\ \dot{x}_2 &=&\frac{\alpha}{1+x_3^\beta}-x_2 \\ \nonumber \dot{x}_3 &=& \frac{\alpha}{1+x_1^\beta}-x_3 . \end{eqnarray} The states $x_1$, $x_2$, and $x_3$ are proportional to protein concentrations. All negative terms in the right-hand side represent degradation of the molecules. The nonlinear function $h(x)=\frac{1}{1+x^\beta}$ reflects synthesis of the mRNAs from the DNA controlled by regulatory elements called promoters. $\beta$ is called the cooperativity and reflects multimerization of the protein required to affect the promoter. The three proteins are assumed to be identical, rendering the model symmetric, so the order in which the states $x_1$, $x_2$ and $x_3$ are chosen does not influence the analysis outcome. In \cite{Buse2010} it was shown that for $\alpha>0$ and $\beta>1$, there is only one equilibrium point for the system \eqref{ch6:eq:03.01}, of the type $E=(r,r,r)$, where $r$ satisfies the equation $r^{\beta+1} +r-\alpha=0$. For the values $\alpha=5$ and $\beta =2$, $r=1.516$ and the eigenvalues of the corresponding linearized matrix are $\lambda=( -2.3936, -0.3032 + 1.2069i, -0.3032 - 1.2069i)$. By following the procedure in Section~\ref{sec:04.02} with the FTLF candidate $V(x)=x^\top P x$, with $P$ the identity matrix, the value $d=0.4$ was obtained. For the obtained function $W=\int_0^d V(x+\tau f(x))\mathrm{d}\tau$ the level set given by $C=0.32$ was checked to yield a subset of the true DOA of the system and is plotted in Figure~\ref{ch6:fig:03.01} in blue. The zero level set of $\dot{W}=\nabla W^\top f(x)$, where $f(x)$ denotes the map describing \eqref{ch6:eq:03.01}, is plotted in Figure~\ref{ch6:fig:03.01} in red. In Figure~\ref{ch6:fig:03.02} the level set $W(x)=0.32$ is shown together with some trajectories of \eqref{ch6:eq:03.01}. \subsection{Trigonometric nonlinearity--the whirling pendulum} \label{ex:05.06} We consider the system below, which was studied in \cite{Chesi2009} with the purpose of computing the DOA of the zero equilibrium. \begin{eqnarray} \label{eq:05.06} \dot{x}_1 &=& x_2\\ \dot{x}_2 &=& \frac{-k_f}{m_b} x_2 +\omega^2\sin(x_1)\cos(x_1) -\frac{g}{l_p}\sin(x_1), \nonumber \end{eqnarray} where $x_1$ is the angle with the vertical, $k_f = 0.2$ is the friction coefficient, $m_b = 1$ is the mass of the rigid arm, $l_p=10$ is the length of the rigid arm, $\omega =0.9$ is the angular velocity and $g=10$ is the gravitational acceleration. Therein a polynomial LF was computed, whose level set rendering a DOA estimate is shown in Figure~\ref{fig:05.06} with the black contour. Following the same steps as in the previous examples, a Lyapunov function $W$ was computed on the basis of the FTLF $V(x)=x^\top P x$, with $P= \begin{pmatrix} 3.6831 & 2.3169\\ 2.3169 & 14.7694 \end{pmatrix}$ and $d=1.1$. The level set $C=3.55$ of $W(x)$ defines an estimate of the true DOA of the origin equilibrium of \eqref{eq:05.06} and it is shown with blue in Figure~\ref{fig:05.06} together with a vector field plot of the system.
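For the repressilator, the equilibrium value $r$ and the eigenvalues of the linearization can be reproduced with a few lines of code. The sketch below (Python with NumPy/SciPy; an illustration only, exploiting the cyclic structure of \eqref{ch6:eq:03.01}) solves $r^{\beta+1}+r-\alpha=0$ and evaluates the Jacobian at $E=(r,r,r)$.
\begin{verbatim}
# Sketch: equilibrium and linearization of the repressilator (ch6:eq:03.01)
# for alpha = 5, beta = 2.
import numpy as np
from scipy.optimize import brentq

alpha, beta = 5.0, 2
r = brentq(lambda s: s**(beta + 1) + s - alpha, 0.0, alpha)  # r^{b+1}+r-alpha=0
print("r =", round(r, 4))                                    # approx. 1.516

# Jacobian at E = (r, r, r): each state is damped (-1 on the diagonal) and
# driven by h(x) = alpha/(1+x^beta) of the "next" state (cyclic structure).
c = -alpha * beta * r**(beta - 1) / (1.0 + r**beta)**2       # h'(r)
A = np.array([[-1.0,    c,  0.0],
              [ 0.0, -1.0,    c],
              [   c,  0.0, -1.0]])
print("eigenvalues:", np.round(np.linalg.eigvals(A), 4))
\end{verbatim}
The printed values can be compared directly with $r=1.516$ and the eigenvalues quoted above.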
\begin{figure} \caption{Level set of $W$ for $C=3.55$ computed for \eqref{eq:05.06} with $d=1.1$, plotted in blue, its corresponding derivative--red and the level set computed in \cite{Chesi2009} with the toolbox smrsoft. } \label{fig:05.06} \end{figure} \subsection{Trigonometric nonlinearity--multiple equilibria} \label{ex:05.07} Consider the system: \begin{equation} \label{eq:05.07} \begin{split} \dot{x}_1&=x_2\\ \nonumber \dot{x}_2&=0.301 - \sin(x_1+0.4136) +\\ &\quad\,\,0.138\sin\left(2(x_1+0.4136)\right)-0.279 x_2, \end{split} \end{equation} \begin{figure} \caption{The level set $W(x)=5$--blue, its derivative $\dot{W}=0$--red, the stable equilibrium $E_1$--blue and the unstable ones $E_2, E_3$--black together with the vector field plot for \eqref{eq:05.07}.} \label{fig:05.07} \end{figure} with the stable equilibrium $E_1=(6.284098\quad 0)^\top$, and the unstable ones $E_2=(2.488345 \quad 0)^\top$, $E_3=(8.772443 \quad 0)^\top$. This system was studied in \cite{Chiang88} with the purpose of computing the stability boundary of $E_1$. By applying the steps described in Section~\ref{sec:04.02} for verification, starting with the linearized dynamics, for $V(x)=x^\top P x$ where $P=\begin{pmatrix} 1.6448 & 0.3430\\ 0.3430 & 2.1255 \end{pmatrix}$, the condition \eqref{eq:norm} holds with $d=0.8$. The resulting LF $W=\int_{0}^d V(x+\tau f(x))\mathrm{d}\tau$ leads to the DOA estimate defined by $C= 5$ and plotted in Figure~\ref{fig:05.07} with a blue contour. The zero level set of the corresponding derivative $\dot{W}=\nabla W(x)^\top f(x)$ is shown in red. \section{Conclusions} \label{sec:06} We have provided a new Massera-type construction of a LF, enabled by imposing a finite--time condition on an arbitrary candidate function defined by a $\mathcal{K}_\infty$--function of the state norm. As the finite--time condition is verifiable numerically, this definition of the LF allows for an implementable algorithm towards obtaining nonconservative DOA estimates. Compared to classical constructions, which either require exponential stability in order to integrate over a finite time interval or allow for $\mathcal{K}\mathcal{L}$--stability only by imposing a specific function under the integral, our approach provides two additional degrees of freedom. On the one hand, we allow for $\mathcal{K}\mathcal{L}$--stability (under Assumption~\ref{as:02.01}); on the other hand, the construction of the LF can be based on any $\mathcal{K}_\infty$--function of the norm of the solution of the system. For future work we are considering building a Matlab toolbox for computing LFs and DOA estimates for nonlinear systems. \begin{comment} \begin{IEEEbiography}[{\includegraphics[width=2.7cm,height=3cm]{Figures/PhotoAlina3blackwhite}}]{Alina Doban} received the B.Sc. degree in automatic control from the Technical University ``Gh. Asachi" of Iasi, Romania, in 2010 and the M.Sc. degree, cum laudae in electrical engineering from Eindhoven University of Technology, in 2012. She is currently pursuing the Ph.D. degree in control engineering from Eindhoven University of Technology, The Netherlands with a DISC--NWO research grant which was awarded as part of the NWO Graduate Programme. Her research interests include stability theory, Lyapunov methods, feedback stabilization of nonlinear systems and biological systems. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=2.7cm,height=3cm]{Figures/PhotoMirceablackwhite}}]{Mircea Lazar} was born in Iasi, Romania, in 1978. He received the M.Sc.
degree in control engineering from the Technical University ``Gh. Asachi" of Iasi, Romania, in 2002 and the Ph.D. degree in control engineering from the Eindhoven University of Technology, Eindhoven, The Netherlands, in 2006. Since 2006 he has been an Assistant Professor in the Control Systems group of the Electrical Engineering Faculty at the Eindhoven University of Technology. His research interests lie in stability theory, scalable Lyapunov methods and formal methods, and model predictive control. Dr. Lazar received the EECI (European Embedded Control Institute) Ph.D.award. \end{IEEEbiography} \end{comment} \end{document}
\begin{document} \title{Measurement-Disturbance Tradeoff Outperforming Optimal Cloning} \centerline{} \centerline{} \author{Lukas Knips} \affiliation{Max-Planck-Institut f\"{u}r Quantenoptik, Hans-Kopfermann-Stra{\ss}e 1, 85748 Garching, Germany} \affiliation{Department f\"{u}r Physik, Ludwig-Maximilians-Universit\"{a}t, 80797 M\"{u}nchen, Germany} \author{Jan Dziewior} \affiliation{Max-Planck-Institut f\"{u}r Quantenoptik, Hans-Kopfermann-Stra{\ss}e 1, 85748 Garching, Germany} \affiliation{Department f\"{u}r Physik, Ludwig-Maximilians-Universit\"{a}t, 80797 M\"{u}nchen, Germany} \author{Anna-Lena K. Hashagen} \affiliation{Fakult\"{a}t f\"{u}r Mathematik, Technische Universit\"{a}t M\"{u}nchen, Germany} \author{Jasmin D. A. Meinecke} \affiliation{Max-Planck-Institut f\"{u}r Quantenoptik, Hans-Kopfermann-Stra{\ss}e 1, 85748 Garching, Germany} \affiliation{Department f\"{u}r Physik, Ludwig-Maximilians-Universit\"{a}t, 80797 M\"{u}nchen, Germany} \author{Harald Weinfurter} \affiliation{Max-Planck-Institut f\"{u}r Quantenoptik, Hans-Kopfermann-Stra{\ss}e 1, 85748 Garching, Germany} \affiliation{Department f\"{u}r Physik, Ludwig-Maximilians-Universit\"{a}t, 80797 M\"{u}nchen, Germany} \author{Michael M. Wolf} \affiliation{Fakult\"{a}t f\"{u}r Mathematik, Technische Universit\"{a}t M\"{u}nchen, Germany} \ifdefined\showMain \begin{abstract} One of the characteristic features of quantum mechanics is that every measurement that extracts information about a general quantum system necessarily causes an unavoidable disturbance to the state of this system. A plethora of different approaches has been developed to characterize and optimize this tradeoff. Here, we apply the framework of quantum instruments to investigate the optimal tradeoff and to derive a class of procedures that is optimal with respect to most meaningful measures. We focus our analysis on binary measurements on qubits as commonly used in communication and computation protocols and demonstrate theoretically and in an experiment that the optimal universal asymmetric quantum cloner, albeit ideal for cloning, is not an optimal procedure for measurements and can be outperformed with high significance. \end{abstract} \maketitle \textit{Introduction.---}The work of Heisenberg, best visualized by the Heisenberg microscope~\cite{Heisenberg1930}, teaches us that every measurement is accompanied by a fundamental disturbance of a quantum system. The question about the precise relation between the information gained about the quantum system and the resulting disturbance has since inspired numerous studies~\cite{Jaeger1995,Englert1996,Ozawa2003,Branciard2013,Busch2013,Banaszek2001,Fuchs1996,Maccone2006,Buscemi2008,Buscemi2014,Zhang2016c,DAriano2003,Jordan2010,Cheong2012,NielsenGold,Kretschmann2008,Fan2015,Shitara2016}. A central problem is to find a tight, quantitative tradeoff relation, e.g., for the maximally achievable information for a given disturbance or, vice versa, for the minimal disturbance for a certain amount of extracted information. Obviously, this is not only relevant for quantum foundations, but also for many applications in quantum communication~\cite{Gisin2002,Pan2012} and quantum computation~\cite{Ekert1996,Vedral1998,Steane1998}.
Initially studied in the context of which-path information and loss of visibility in interferometers~\cite{Jaeger1995,Englert1996}, quantifying the information-disturbance tradeoff was based on various measures such as the traditional root mean squared distance~\cite{Ozawa2003,Branciard2013}, the distance of probability distributions \cite{Busch2013}, operation and estimation fidelities~\cite{Banaszek2001,Fuchs1996,Maccone2006}, entropic quantities~\cite{Fuchs1996,Maccone2006,Buscemi2008,Buscemi2014,Zhang2016c,DAriano2003}, reversibility~\cite{DAriano2003,Jordan2010,Cheong2012}, stabilized operator norms~\cite{NielsenGold,Kretschmann2008}, state discrimination probability~\cite{Buscemi2008}, probability distribution fidelity~\cite{Fan2015}, and Fisher information~\cite{Shitara2016}. In spite of all these distinct approaches, no clear candidate for a most fundamental framework for the analysis of the information-disturbance tradeoff in quantum mechanics has yet emerged. Here we build upon a novel, comprehensive information-disturbance relation introduced recently by two of us~\cite{Hashagen_Wolf_2018}. There, optimal measurement devices have been proven to be independent of the chosen quality measures, as long as these fulfill some reasonable assumptions, such as convexity and basis-independence. This approach is unique with respect to the employment of reference observables. On one hand, since information eventually is obtained via measurements of observables, we base the quantification of the measurement error on a reference observable. On the other hand, the measurement induced disturbance is defined without relying on any reference observable in order not to restrict the further usage of the post-measurement state. For a finite-dimensional von Neumann measurement, the optimal tradeoff can be achieved with quantum instruments described by at most two parameters. \begin{figure} \caption{{The optimal quantum instruments in terms of measurement error and disturbance} clearly outperform the optimal asymmetric cloner (red curve) and the coherent swap operation (green line). Our measurements (blue crosses) come close to the theoretical curve (blue curve). The violet marked instrument is discussed in Fig.~\ref{fig:dataplot} in more detail. The error bars are too small to be visible; for a detailed discussion see~\cite{SM}. } \label{fig:dataplotInstruments} \end{figure} In this letter, we describe how optimal instruments can be derived for typical measures of measurement error, i.e., inverse information, and state disturbance and how they can be implemented in an experiment. Typically, quantum cloning is considered to be a good choice to achieve an optimal measurement disturbance tradeoff. Yet, here we show that the optimal instruments outperform all (asymmetric) quantum cloners~\cite{SM}. We test the tradeoff relation experimentally using a tunable Mach-Zehnder-Interferometer and implement a large range of quantum instruments. We apply these instruments to a two-dimensional quantum system encoded in the photon polarization and investigate the relation between the error of the measurement and the disturbance of the qubit state. As distance measures we consider exemplarily some of the measures recommended in \cite{NielsenGold}, i.e., the worst-case total variational distance and the worst-case trace norm. For other measures see supplemental material (SM)~\cite{SM}. 
The experiment clearly shows that the optimal universal asymmetric cloner as well as the coherent swap scheme are suboptimal (Fig.~\ref{fig:dataplotInstruments}). \textit{Measurements as quantum instruments.---}To generally quantify both the measurement error and the measurement induced disturbance, we describe the measurement of observables on a quantum system by means of quantum instruments~\cite{Davies1970,Watrous2018} as illustrated in Fig.~\ref{fig:setup}. Formally, a quantum instrument $I$ is defined as a set of completely positive linear maps $I:=\{ I_j \}_{j=1}^m$ that fulfills the normalization condition $\sum_{j=1}^m I_j^\ast(\mathbbm{1}) =\mathbbm{1}$, where $I_j^\ast$ denotes the dual map to $I_j$ with respect to the Hilbert-Schmidt inner product. This description naturally encompasses the connection between the observable given by a positive operator valued measure (POVM) $E^\prime:=\{E_j^\prime\}_{j=1}^m$ and the quantum channel $T_s$, which describes the measurement induced change of the state. In general, a quantum channel is a completely positive trace preserving linear map. In the context of quantum instruments, the channel is given by the sum of the linear maps with $T_s:=\sum_{j=1}^m I_j$, where each map corresponds to one measurement operator $E^\prime_j$ of the POVM. The normalization condition of the quantum instrument ensures that the corresponding quantum channel is trace-preserving. Expressing the channel in terms of $I$ as above reflects the decohering effect of the measurement on the quantum state of the measured system. The measurement operators $\{E^\prime_j \}_{j=1}^m$ themselves are fully determined by $I$ via $E^\prime_j:=I_j^\ast(\mathbbm{1})$, where the probability distribution for outcomes $\{ j \}_{j=1}^{m}$ on state $\rho$ is given by $ \tr{I_j(\rho)} = \tr{I_j(\rho)\mathbbm{1}} = \tr{\rho I_j^\ast(\mathbbm{1})} = \tr{\rho E_j^\prime}$. From this point of view, the normalization condition of the quantum instrument ensures that the distribution $\{\tr{E^\prime_j \rho} \}_{j=1}^m$ is normalized. The instrument description based on the normalized set of maps $I$, which implies the pair $(E^\prime, T_s)$, is sufficient to exhaustively describe all possible quantum measurement processes. \begin{figure} \caption{ {General description of a measurement using a quantum instrument $I$.} Obtaining information about the quantum state via the POVM $E^\prime$ (dashed line, classical output) induces a change of the quantum state described by the quantum channel $T_s$ (solid line, quantum output).} \label{fig:setup} \end{figure} \textit{Distance measures.---}From the notion of quantum instruments it becomes immediately clear that $E^\prime$ and $T_s$ are not independent, i.e. the change of the state has a fundamental dependence on the information gained and vice versa. To enable a thorough quantitative analysis of this measurement-disturbance tradeoff, we use distance measures to assess the quality of the approximate measurement and to quantify the disturbance. We quantify the disturbance $\Delta$ caused to the system by the deviation of the channel $T_s$ from the identity channel $T_{\rm id}\left(\rho\right):=\rho$. The measurement error $\delta$ quantifies the deviation of the measurement $E^\prime$ from a reference measurement $E$. 
This approach utilizes a reference POVM $E$ to quantify the measurement error, but not the disturbance, in contrast to all other approaches found in the literature, where either a reference system is used for both measurement error and disturbance, or none is used at all. The measurement error $\delta$ can be quantified by defining a worst-case total variational distance based on the $l_1$-distance between probability distributions. The $l_1$-distance, also called total variational distance, equals the largest possible difference between the probabilities that two probability distributions assign to the same event and therefore is the relevant distance measure for hypothesis testing~\cite{Neyman1933,Watrous2018}. In our case, these two probability distributions stem from the target measurement $E$ and the actual measurement $E^\prime$ for some quantum state. To let the measure for the measurement error take into account all possible quantum states $\rho$ of the system, we additionally take the worst case with respect to all states, which is natural given the worst-case character of the $l_1$-distance itself. Thus, our worst-case total variational distance is defined as \begin{equation} \delta(E^\prime) := \sup_{\rho} \frac{1}{2} \sum_{i=1}^2 \abs{\tr{E^\prime_i \rho} - \tr{E_i \rho}}. \label{eq:MeasError1} \end{equation} The quantum analogue of the worst-case total variational distance is the worst-case trace norm distance, which we thus use to quantify the distance between the quantum channel $T_s$ and the identity channel $T_{\rm id}$, \begin{equation} \Delta(T_s) := \frac{1}{2} \sup_{\rho} \norm{T_s(\rho) -\rho}_1. \label{eq:Dist} \end{equation} This disturbance measure quantifies how well the quantum channel $T_s$ can be distinguished from the identity channel $T_{\rm id}$ in a statistical experiment, if no auxiliary systems are allowed \footnote{ Allowing auxiliary systems, the relevant disturbance measure is the diamond norm, $\Delta_\diamond(T_s) := \frac{1}{2} \sup_{\xi} \norm{\left(\left(T_s - T_{{\rm id},d} \right) \otimes T_{{\rm id},d} \right)(\xi) }_1$, where the state $\xi$ includes auxiliary systems. Here, for the optimal tradeoff curve, the trace norm turns out to be equal to the diamond norm distance \cite{Hashagen_Wolf_2018}. }. \textit{Optimal instruments and tradeoff.---}As reference measurement, we choose the ideal projective measurement of the qubit with $E = \set{\ketbra{j}{j}}^2_{j=1}$. As proven in \cite{Hashagen_Wolf_2018}, for the optimal quantum instruments each element $I_j$ can be expressed by a single Kraus operator, in agreement with the intuition that additional Kraus operators introduce noise into the system. In the case of a qubit this leads to \begin{equation} T_s(\rho) = \sum_{j=1}^2 K_j \rho K_j^\dagger \quad \text{ and } \quad \{E^\prime_j = K_j^\dagger K_j\}_{j=1}^2. \label{eq:optimalKrausdecomposition} \end{equation} The Kraus operators of an optimal instrument can be chosen diagonal in the basis $\set{\ket{j}}_{j=1}^2$ given by the target measurement~\cite{Hashagen_Wolf_2018}. Since for a qubit there are only two of them and they must satisfy the normalization condition, in general their form is \begin{subequations}\label{eq:generaldiagonalKraus} \begin{align} K_1 = \sqrt{1-b^2_2} \ketbra{1}{1} + e^{i\beta_1} b_1 \ketbra{2}{2}, \\ K_2 = b_2 \ketbra{1}{1} + e^{i\beta_2}\sqrt{1-b^2_1} \ketbra{2}{2} , \end{align} \end{subequations} with $0 \leq b_1^2,b_2^2 \leq 1$ and two arbitrary phases $\beta_1$ and $\beta_2$.
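As a brief illustration of this parametrization (a minimal Python sketch, not part of the analysis of this Letter; the parameter values are arbitrary), the following code builds the instrument of Eq.~(\ref{eq:generaldiagonalKraus}), checks its normalization, and evaluates the induced POVM and channel of Eq.~(\ref{eq:optimalKrausdecomposition}) on a sample state.
\begin{verbatim}
# Sketch: the instrument of Eq. (eq:generaldiagonalKraus) for sample parameters,
# its induced POVM E'_j = K_j^dag K_j and channel T_s(rho) = sum_j K_j rho K_j^dag.
import numpy as np

b1, b2, beta1, beta2 = 0.3, 0.5, 0.7, 1.9     # arbitrary admissible values
K1 = np.diag([np.sqrt(1 - b2**2), np.exp(1j * beta1) * b1])
K2 = np.diag([b2, np.exp(1j * beta2) * np.sqrt(1 - b1**2)])

# normalization of the instrument: sum_j K_j^dag K_j = identity
print(np.allclose(K1.conj().T @ K1 + K2.conj().T @ K2, np.eye(2)))   # True

# induced POVM and outcome probabilities for a sample pure input state
E1, E2 = K1.conj().T @ K1, K2.conj().T @ K2
psi = np.array([np.cos(0.4), np.sin(0.4) * np.exp(0.3j)])
rho = np.outer(psi, psi.conj())
p = [np.trace(E1 @ rho).real, np.trace(E2 @ rho).real]
print("outcome probabilities:", np.round(p, 4), " sum =", round(sum(p), 4))

# non-selective post-measurement state: the channel T_s applied to rho
Ts_rho = K1 @ rho @ K1.conj().T + K2 @ rho @ K2.conj().T
print("trace preserved:", np.isclose(np.trace(Ts_rho).real, 1.0))
\end{verbatim}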
As proven in \cite{SM}, for such an instrument, the worst-case total variational distance $\delta$ and its trace-norm analogue $\Delta$, Eqs.~(\ref{eq:MeasError1},\ref{eq:Dist}), quantifying measurement error and disturbance respectively, satisfy \begin{equation} \Delta \geq \begin{cases} \frac{1}{2}\left( \sqrt{1-\delta} - \sqrt{\delta} \right)^2 & \text{if } \delta \leq \frac{1}{2}, \\ 0 & \text{if } \delta \geq \frac{1}{2}. \end{cases} \label{eq:Tradeoff} \end{equation} The inequality is tight and no quantum measurement procedure can go below this bound. Equality in Eq.~(\ref{eq:Tradeoff}) is attained for the family of optimal instruments defined by \begin{subequations}\label{eq:optimalKraus} \begin{align} K_1 = \frac{1}{\sqrt{2}} \left(\sqrt{1-\gamma} \ketbra{1}{1} + \sqrt{1+\gamma} \ketbra{2}{2}\right), \\ K_2 = \frac{1}{\sqrt{2}} \left(\sqrt{1+\gamma} \ketbra{1}{1} + \sqrt{1-\gamma} \ketbra{2}{2}\right), \end{align} \end{subequations} with $\gamma \in [0,1]$, leading to $\delta(\gamma)=\left(1-\gamma\right)/2$. \textit{Other known measurement schemes.---}Let us evaluate common quantum measurement procedures in terms of their measurement-disturbance tradeoff. For perfect quantum cloning, there would be no measurement-disturbance tradeoff, as one of the perfect clones could be measured without error while the other clone stays undisturbed. Although perfect cloning is impossible~\cite{Wootters1982}, one can derive a protocol that is optimal for approximate quantum cloning. Hence, it seems natural to expect that the optimal universal asymmetric quantum cloner provides a promising measurement protocol, leading simultaneously to a small disturbance and a small measurement error. It is illustrated in Fig.~\ref{fig:cloning}. The quantum channel $T_s(\rho) = \partr{s^\prime}{T_{\text{clo}}(\rho)}$, a marginal of the cloning channel $T_{\text{clo}}$, corresponds to the evolution of the system state, obtained when tracing out the second (primed) clone. The corresponding channel of the second clone, $T_{s^\prime}(\rho) = \partr{s}{T_{\text{clo}}(\rho)}$, provides an approximate copy to which the reference POVM $E$ is applied. The asymmetry between the qualities of the two clones determines the tradeoff between the measurement error and the disturbance. \begin{figure} \caption{{Universal asymmetric quantum cloning.} The initial quantum state $\rho$ is asymmetrically, approximately cloned to the auxiliary system, initially in state $\mathbbm{1}/2$. The target measurement is performed on one of the clones, while the other is compared to the initial quantum state $\rho$.} \label{fig:cloning} \end{figure} The optimal universal asymmetric quantum cloning channel $T_\text{clo}$ for any initial quantum state $\rho$ reads~\cite{Hashagen_2017} \begin{equation} T_{\text{clo}}\left(\rho\right)= \left(a_2 \mathbbm{1} +a_1 \ensuremath{\mathbb{F}} \right)\left( \rho \otimes \frac{\mathbbm{1}}{2}\right) \left(a_2 \mathbbm{1} +a_1 \ensuremath{\mathbb{F}}\right), \label{eq:CloningChannel} \end{equation} with $a^2_1+a^2_2+a_1a_2 = 1$, $a_1, a_2 \in \ensuremath{\mathbb{R}}$, and the flip (or swap) operator $\ensuremath{\mathbb{F}}:= \sum_{i,j=1}^2 \ketbra{ji}{ij}$. The parameter $a_1$ determines the amplitude of a swap operation between both qubits.
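Before turning to the analytic tradeoff of the cloner, a purely numerical sanity check is instructive (a short Python sketch, not part of the experimental analysis). It samples random pure qubit states to evaluate $\delta$ and $\Delta$ of Eqs.~(\ref{eq:MeasError1}) and (\ref{eq:Dist}), both of which are convex in $\rho$ so that pure states suffice, for the optimal instruments of Eq.~(\ref{eq:optimalKraus}) and for the cloning channel of Eq.~(\ref{eq:CloningChannel}); for the optimal instruments the two outcomes are paired with the best-matching labeling of the target projectors, consistent with fitting the best target measurement in the data analysis below (for the cloner the natural pairing is already the better one). Up to sampling resolution, the optimal family attains equality in Eq.~(\ref{eq:Tradeoff}), while the cloner stays strictly above the bound.
\begin{verbatim}
# Sketch: worst-case measurement error delta (eq:MeasError1) and disturbance
# Delta (eq:Dist), sampled over random pure states, for (i) the optimal
# instruments (eq:optimalKraus) and (ii) the asymmetric cloner (eq:CloningChannel).
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(3000, 2)) + 1j * rng.normal(size=(3000, 2))
V /= np.linalg.norm(V, axis=1, keepdims=True)
STATES = [np.outer(v, v.conj()) for v in V]            # random pure qubit states
PROJ = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]      # reference measurement E

def tracenorm(M):                                      # trace norm of Hermitian M
    return np.sum(np.abs(np.linalg.eigvalsh(M)))

def delta(E):                                          # (eq:MeasError1), with the
    def one(Ep):                                       # better of the two labellings
        return max(0.5 * sum(abs(np.trace((Ep[i] - PROJ[i]) @ r)) for i in range(2))
                   for r in STATES)
    return min(one(E), one(E[::-1]))

def measures_instrument(K1, K2):
    dl = delta([K1.conj().T @ K1, K2.conj().T @ K2])
    Dl = max(0.5 * tracenorm(K1 @ r @ K1.conj().T + K2 @ r @ K2.conj().T - r)
             for r in STATES)
    return dl, Dl

def measures_cloner(a1, a2):                           # (eq:CloningChannel), aux = 1/2
    F = np.zeros((4, 4)); F[0, 0] = F[3, 3] = F[1, 2] = F[2, 1] = 1.0
    G = a2 * np.eye(4) + a1 * F
    dl, Dl = 0.0, 0.0
    for r in STATES:
        out = G @ np.kron(r, np.eye(2) / 2) @ G.T
        Ts = out.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # system clone
        Dl = max(Dl, 0.5 * tracenorm(Ts - r))
        dl = max(dl, 0.5 * sum(abs(np.trace(out @ np.kron(np.eye(2), PROJ[j]))
                                   - np.trace(PROJ[j] @ r)) for j in range(2)))
    return dl, Dl

bound = lambda d: 0.5 * (np.sqrt(1 - d) - np.sqrt(d))**2 if d <= 0.5 else 0.0

for g in [0.0, 0.3, 0.6, 0.9]:                         # optimal family: equality
    K1 = np.diag([np.sqrt(1 - g), np.sqrt(1 + g)]) / np.sqrt(2)
    K2 = np.diag([np.sqrt(1 + g), np.sqrt(1 - g)]) / np.sqrt(2)
    dl, Dl = measures_instrument(K1, K2)
    print(f"optimal gamma={g:.1f}: delta={dl:.3f} Delta={Dl:.3f} bound={bound(dl):.3f}")

for a1 in [0.3, 0.6, 0.9]:                             # cloner: strictly above bound
    a2 = (-a1 + np.sqrt(4.0 - 3.0 * a1**2)) / 2.0      # a1^2 + a2^2 + a1*a2 = 1
    dl, Dl = measures_cloner(a1, a2)
    print(f"cloner  a1={a1:.1f}:    delta={dl:.3f} Delta={Dl:.3f} bound={bound(dl):.3f}")
\end{verbatim}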
With our measures, the measurement-disturbance tradeoff for the asymmetric quantum cloning channel satisfies \begin{equation} \Delta = \begin{cases} \frac{1}{4}\left( \sqrt{2-3\delta} - \sqrt{\delta} \right)^2 & \text{if } \delta \leq \frac{1}{2}, \\ 0 & \text{if } \delta \geq \frac{1}{2} \end{cases} \label{eq:CloningTradeoff} \end{equation} with $\delta(a_2)=a_2^2/2$~\cite{SM}. As the cloning operation cannot be realized by a unitary two-qubit transformation, any real implementation of the protocol is embedded in a larger system. Let us thus consider an obvious analogue to the cloning operation, which can be realized by a unitary two-qubit operation. For the swapping channel $T_{\text{cs}}$, the system interacts with the auxiliary system via a Heisenberg Hamiltonian as \begin{align} T_{\text{cs}}\left( \rho \right) &= e^{it\ensuremath{\mathbb{F}}} \left( \rho \otimes \tilde{\rho} \right) e^{-it\ensuremath{\mathbb{F}}} \nonumber \\ &= \left( a_2 \mathbbm{1} + i a_1 \ensuremath{\mathbb{F}} \right) \left( \rho \otimes \tilde{\rho} \right) \left( a_2 \mathbbm{1} - i a_1 \ensuremath{\mathbb{F}} \right), \label{eq:SwapChannel} \end{align} with $t \in [0,\pi/2]$ or using a parametrization analogous to the cloning scheme with $a^2_1 + a^2_2 = 1$, $a_1, a_2 \in \ensuremath{\mathbb{R}}$. The extreme cases are no swap ($t=0$, $a_2 = 1$) and full swap ($t=\pi/2$, $a_1 = 1$). The $\delta$-$\Delta$-tradeoff for the target measurement $E=\{ \ketbra{j}{j} \}_{j=1}^2$ performed on one of the outputs satisfies \begin{equation} \Delta = \frac{1}{2} - \delta, \label{eq:CohSwapTradeoff} \end{equation} with $\delta(t)=(1-a_1^2)/2$, for the coherent swap~\cite{SM}, evidently also inferior to our optimal instruments, Eq.~(\ref{eq:optimalKraus}), with the tradeoff given in Eq.~(\ref{eq:Tradeoff}). \begin{figure} \caption{{Conceptual experimental setup.} The state $\rho$ is encoded in the polarization degree of freedom of a photon, which is sent to a variable beam splitter (var BS). The spatial superposition state inside of the interferometer is denoted by $\ket{\phi_0}$ and can be tuned in terms of relative intensities and phase. For the interaction $U$ between the path and the polarization degrees of freedom we apply a $\sigma_z$ operation to the polarization in one path. Projections onto the output ports $\ket{C}$ and $\ket{D}$ of a balanced $50\!\!:\!\!50$ beam splitter conclude the realization of the Kraus operators as given in Eqs.~(\ref{eq:KrausOperatorsExp}). Polarization and intensity measurements are performed at the output ports of the interferometer. Please note that the actual experiment, while equivalent to the shown setup, is structured differently such that the polarization state $\rho$ is created inside of the interferometer. The actual experiment is described in more detail in~\cite{SM}. } \label{fig:expsetup} \end{figure} \textit{Experimental implementation.---}For our experimental evaluation of the measurement-disturbance tradeoff we want to realize a broad range of quantum instruments including the optimal ones. For that purpose we consider the polarization degree of freedom of photons to encode $\rho$, with $\ket{1}\leftrightarrow\ket{H}$ and $\ket{2}\leftrightarrow\ket{V}$, where $\ket{H}$ ($\ket{V}$) denotes horizontally (vertically) polarized light. 
The Kraus operators describing the chosen set of instruments are thus given by \begin{align} \label{eq:KrausOperators} K_{1,2} = \frac{1}{\sqrt{2}} \Big[ \sqrt{ 1 \pm \gamma} \ketbra{H}{H} + e^{i\beta} \sqrt{ 1 \mp \gamma} \ketbra{V}{V} \Big] \end{align} with an arbitrary phase $\beta$. The optimal cases Eqs.~(\ref{eq:optimalKraus}) are achieved for $\beta = 0$. To experimentally realize a quantum instrument and to enable analysis of the two outputs $T_s$ and $E^\prime$, it is necessary to employ an additional auxiliary quantum system, which is not yet explicitly present in the instrument description of Fig.~\ref{fig:setup}. For the measurement of photon polarization a natural candidate is the path degree of freedom of the photons. Since in our case a two-dimensional auxiliary system is sufficient, we employ a Mach-Zehnder interferometer, which provides the two path states $\ket{A}$ and $\ket{B}$, see Fig.~\ref{fig:expsetup}. The properties of the instrument are then determined by the initial state of this auxiliary system, $\ket{\phi_0}=\cos\alpha\ket{A}+e^{i\varphi}\sin\alpha\ket{B}$, the measurement performed on it, i.e., the detection in the output path states $\ket{C}$ and $\ket{D}$, as well as by an intermediate interaction between path and polarization. The interaction is given by a unitary evolution $U$, which exchanges information between the systems. We use $U= i \sigma_z \otimes \kb{A}{A} + \mathbbm{1} \otimes \kb{B}{B}$, which introduces a polarization dependent phase shift in arm $\ket{A}$. For an initial path state $\ket{\phi_0}$ the Kraus operators, which act on the polarization, can then be obtained as \begin{subequations}\label{eq:KrausOperatorsExp} \begin{align} K_1 &= \mathrm{tr}_\text{path} \left[ (\mathbbm{1} \otimes \kb{C}{C}) \, U \, (\mathbbm{1} \otimes \kb{\phi_0}{\phi_0}) \right], \\ K_2 &= \mathrm{tr}_\text{path} \left[ (\mathbbm{1} \otimes \kb{D}{D}) \, U \, (\mathbbm{1} \otimes \kb{\phi_0}{\phi_0}) \right]. \end{align} \end{subequations} Comparing these expressions with Eq.~(\ref{eq:KrausOperators}), the parameters $\gamma$ and $\beta$ are expressed in terms of the experimental parameters $\alpha$ and $\varphi$ as $\gamma = \sin \left(2\alpha\right) \sin \varphi$ and $\beta = \arctan\left[ \tan\left( 2 \alpha \right) \cos \varphi\right]$. The outcome of the measurement $E^\prime$ is then obtained by determining the total intensity in the output $C$ ($E^\prime_1$) and $D$ ($E^\prime_2$), respectively; the action of the quantum channel $T_s$ is obtained by state tomography of the polarization degree of freedom. \begin{figure} \caption{{Evaluating measurement error $\delta$ and disturbance $\Delta$.} a) The measurement error corresponds to the maximal distance between the outcomes of the actual measurements $E^\prime_1$ and $E^\prime_2$ (red crosses) and the outcomes of the ideal measurements $E_1$ and $E_2$ (blue line). b) The disturbance is obtained by taking the supremum of the trace distance between the prepared polarization states and the tomographically reconstructed states of $T_s$. Please note that the suprema in a) and b) are achieved for different states. Statistical error bars are negligibly small. For a detailed discussion, see ~\cite{SM}. } \label{fig:dataplot} \end{figure} \textit{Measurements and results.---}According to Eqs.~(\ref{eq:MeasError1}) and (\ref{eq:Dist}), the measures $\delta$ and $\Delta$ involve the supremum over different input states $\rho$.
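As a side check of the parametrization introduced above, the following sketch (Python; an illustration only) reads Eq.~(\ref{eq:KrausOperatorsExp}) in the standard dilation form $K_{1,2}=(\mathbbm{1}\otimes\langle C/D|)\,U\,(\mathbbm{1}\otimes\ket{\phi_0})$, assumes the balanced beam-splitter outputs $\ket{C}=(\ket{A}+\ket{B})/\sqrt{2}$ and $\ket{D}=(\ket{A}-\ket{B})/\sqrt{2}$, and verifies numerically that the resulting operators match Eq.~(\ref{eq:KrausOperators}) with $\gamma=\sin(2\alpha)\sin\varphi$ and $\tan\beta=\tan(2\alpha)\cos\varphi$, up to a global phase and the branch of the arctangent.
\begin{verbatim}
# Sketch: build K_1, K_2 from the interferometer parameters (alpha, phi) and
# check gamma = sin(2 alpha) sin(phi) and tan(beta) = tan(2 alpha) cos(phi).
import numpy as np

def kraus(alpha, phi):
    # ordering: polarization (H,V) x path (A,B); U applies i*sigma_z in arm A
    U = np.kron(1j * np.diag([1.0, -1.0]), np.diag([1.0, 0.0])) \
        + np.kron(np.eye(2), np.diag([0.0, 1.0]))
    phi0 = np.array([np.cos(alpha), np.exp(1j * phi) * np.sin(alpha)])  # path state
    C = np.array([1.0, 1.0]) / np.sqrt(2)      # assumed balanced BS outputs
    D = np.array([1.0, -1.0]) / np.sqrt(2)
    Ks = []
    for out in (C, D):
        # K = (1 (x) <out|) U (1 (x) |phi0>), a 2x2 operator on polarization
        bra = np.kron(np.eye(2), out.conj().reshape(1, 2))
        ket = np.kron(np.eye(2), phi0.reshape(2, 1))
        Ks.append(bra @ U @ ket)
    return Ks

alpha, phi = 0.4, 1.1                          # arbitrary test values
K1, K2 = kraus(alpha, phi)
gamma = np.sin(2 * alpha) * np.sin(phi)

print("normalization:", np.allclose(K1.conj().T @ K1 + K2.conj().T @ K2, np.eye(2)))
print("|K1_HH|^2 =", abs(K1[0, 0])**2, " expected (1+gamma)/2 =", (1 + gamma) / 2)
beta = np.angle(K1[1, 1]) - np.angle(K1[0, 0])  # relative phase between V and H
print("tan(beta) =", np.tan(beta), " expected tan(2a)cos(phi) =",
      np.tan(2 * alpha) * np.cos(phi))
\end{verbatim}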
We thus prepare for each quantum instrument different linearly polarized states $\rho$, which are analyzed after the interaction. The prepared polarization state $\rho=\ketbra{\psi}{\psi}$ in both arms is given by $\ket{\psi} = \cos\frac{\theta}{2} \ket{H} + \sin\frac{\theta}{2} \ket{V}$, where $\ket{H}$ and $\ket{V}$, the eigenstates of the Pauli matrix $\sigma_z$ with eigenvalues $+1$ and $-1$, respectively, denote horizontal and vertical polarization. We use $16$ different values for $\theta$, including those where extremal behavior for the disturbance or the measurement error is expected. The set of pure, linearly polarized states is sufficient, as the suprema in Eqs.~(\ref{eq:MeasError1}) and (\ref{eq:Dist}) are attained on it in our experimental implementation; see the SM~\cite{SM}. An intuitive strategy consists of setting a specific instrument and then varying the polarization state $\rho$, which, however, requires keeping the instrument parameters ($\alpha$ and $\varphi$) stable. It turns out to be experimentally more favorable to prepare different polarization states $\rho$ and then vary the phase $\varphi$ for fixed $\alpha$ and $\rho$. One thus associates measurements corresponding to the same state $\ket{\phi_0}$ of the auxiliary system with the same instrument. The evaluation of the measurement error and the disturbance for one instrument of Fig.~\ref{fig:dataplotInstruments} is shown in Fig.~\ref{fig:dataplot} a) and b), respectively. The supremum over a great circle of the Bloch sphere, described by $\ket{\psi}$, has been used for the analysis. The measurement error is given by the maximal deviation of the measurement (red crosses) from the best fitting target measurement (blue solid line); see Eq.~(\ref{eq:MeasError1}). While some states, being eigenstates of the transformation, (theoretically) show no disturbance at all, the disturbance measure requires taking the largest trace distance over all states into account; see Eq.~(\ref{eq:Dist}). The obtained values for measurement error and state disturbance are shown in Fig.~\ref{fig:dataplotInstruments} for the set of experimentally prepared quantum instruments. Each data point identifies one quantum instrument, for which the supremum over the prepared quantum states in terms of measurement error and disturbance is determined. The horizontal structure is explained when considering that for a fixed $\alpha$, various measurements with different $\varphi$ have been taken; see Eq.~\eqref{eq:KrausOperators}. We have shown that there exist experimentally accessible quantum instruments which significantly outperform the optimal universal asymmetric cloner (red curve) and the coherent swap operation (green line) in terms of the considered distances. \textit{Conclusion.---}We applied the novel approach derived in \cite{Hashagen_Wolf_2018} to the setting of binary qubit measurements, achieving an optimal measurement-disturbance tradeoff. In this setting a reference measurement is used to quantitatively obtain the measurement error. The disturbance, on the other hand, does not depend on any reference measurement, but solely on comparing the state before and after the measurement. Our protocol is tailored for applications based on a specific measurement without restricting subsequent use of the post-measurement state. Furthermore, we have demonstrated that the strategies of optimal universal asymmetric quantum cloning and coherent swap do not perform optimally when considering the tradeoff relation between measurement error and disturbance.
Those protocols are optimal for their respective purposes such as approximate quantum cloning, but cannot compete with the optimal quantum instruments in the measurement scenario as in general they result in worse measurement-disturbance tradeoff relations. We have shown that the advantage of optimal instruments over other schemes is experimentally accessible and not only a mere theoretical improvement. In future applications our findings allow to identify these procedures which retrieve information at the physically lowest cost in terms of state disturbance. \textit{Acknowledgments.---}We thank Jonas Goeser for stimulating discussions. This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915 and by the German excellence initiative Nanosystems Initiative Munich. LK and AKH are supported by the PhD program \textit{Exploring Quantum Matter} of the Elite Network of Bavaria. JD acknowledges support by the International Max-Planck Research Program for Quantum Science and Technology (IMPRS-QST). JDMA is supported by an LMU research fellowship. \begin{thebibliography}{32} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Heisenberg}(1930)}]{Heisenberg1930} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Werner}\ \bibnamefont {Heisenberg}},\ }\href@noop {} {\emph {\bibinfo {title} {{The Physical Principles of the Quantum Theory}}}}\ (\bibinfo {publisher} {University of Chicago Press},\ \bibinfo {year} {1930})\BibitemShut {NoStop} \bibitem [{\citenamefont {Jaeger}\ \emph {et~al.}(1995)\citenamefont {Jaeger}, \citenamefont {Shimony},\ and\ \citenamefont {Vaidman}}]{Jaeger1995} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Gregg}\ \bibnamefont {Jaeger}}, \bibinfo {author} {\bibfnamefont {Abner}\ \bibnamefont {Shimony}}, \ and\ \bibinfo {author} {\bibfnamefont {Lev}\ \bibnamefont {Vaidman}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Two interferometric complementarities}},}\ }\href {\doibase 10.1103/PhysRevA.51.54} 
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {54--67} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Englert}(1996)}]{Englert1996} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Berthold-Georg}\ \bibnamefont {Englert}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Fringe Visibility and Which-Way Information: An Inequality}},}\ }\href {\doibase 10.1103/PhysRevLett.77.2154} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {2154--2157} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ozawa}(2003)}]{Ozawa2003} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Masanao}\ \bibnamefont {Ozawa}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Universally valid reformulation of the Heisenberg uncertainty principle on noise and disturbance in measurement}},}\ }\href {\doibase 10.1103/PhysRevA.67.042105} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {67}},\ \bibinfo {pages} {042105} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Branciard}(2013)}]{Branciard2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Cyril}\ \bibnamefont {Branciard}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Error-tradeoff and error-disturbance relations for incompatible quantum measurements}},}\ }\href {\doibase 10.1073/pnas.1219331110} {\bibfield {journal} {\bibinfo {journal} {Proceedings of the National Academy of Sciences}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {6742--6747} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Busch}\ \emph {et~al.}(2013)\citenamefont {Busch}, \citenamefont {Lahti},\ and\ \citenamefont {Werner}}]{Busch2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Paul}\ \bibnamefont {Busch}}, \bibinfo {author} {\bibfnamefont {Pekka}\ \bibnamefont {Lahti}}, \ and\ \bibinfo {author} {\bibfnamefont {Reinhard~F.}\ \bibnamefont {Werner}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Proof of Heisenberg's Error-Disturbance Relation}},}\ }\href {\doibase 10.1103/PhysRevLett.111.160405} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {111}},\ \bibinfo {pages} {160405} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Banaszek}(2001)}]{Banaszek2001} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Konrad}\ \bibnamefont {Banaszek}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Fidelity Balance in Quantum Operations}},}\ }\href {\doibase 10.1103/PhysRevLett.86.1366} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {1366--1369} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fuchs}\ and\ \citenamefont {Peres}(1996)}]{Fuchs1996} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Christopher~A.}\ \bibnamefont {Fuchs}}\ and\ \bibinfo {author} {\bibfnamefont {Asher}\ \bibnamefont {Peres}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Quantum-state disturbance versus information gain: Uncertainty relations for quantum information}},}\ }\href {\doibase 10.1103/PhysRevA.53.2038} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo {pages} {2038--2045} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Maccone}(2006)}]{Maccone2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Lorenzo}\ \bibnamefont {Maccone}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Information-disturbance tradeoff in quantum measurements}},}\ }\href {\doibase 10.1103/PhysRevA.73.042307} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo {pages} {042307} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Buscemi}\ \emph {et~al.}(2008)\citenamefont {Buscemi}, \citenamefont {Hayashi},\ and\ \citenamefont {Horodecki}}]{Buscemi2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Francesco}\ \bibnamefont {Buscemi}}, \bibinfo {author} {\bibfnamefont {Masahito}\ \bibnamefont {Hayashi}}, \ and\ \bibinfo {author} {\bibfnamefont {Micha\l{}}\ \bibnamefont {Horodecki}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Global Information Balance in Quantum Measurements}},}\ }\href {\doibase 10.1103/PhysRevLett.100.210504} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages} {210504} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Buscemi}\ \emph {et~al.}(2014)\citenamefont {Buscemi}, \citenamefont {Hall}, \citenamefont {Ozawa},\ and\ \citenamefont {Wilde}}]{Buscemi2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Francesco}\ \bibnamefont {Buscemi}}, \bibinfo {author} {\bibfnamefont {Michael J.~W.}\ \bibnamefont {Hall}}, \bibinfo {author} {\bibfnamefont {Masanao}\ \bibnamefont {Ozawa}}, \ and\ \bibinfo {author} {\bibfnamefont {Mark~M.}\ \bibnamefont {Wilde}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Noise and Disturbance in Quantum Measurements: An Information-Theoretic Approach}},}\ }\href {\doibase 10.1103/PhysRevLett.112.050401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {050401} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhang}\ \emph {et~al.}(2016)\citenamefont {Zhang}, \citenamefont {Zhang},\ and\ \citenamefont {Yu}}]{Zhang2016c} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Jun}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Yang}\ \bibnamefont {Zhang}}, \ and\ \bibinfo {author} {\bibfnamefont {Chang-Shui}\ \bibnamefont {Yu}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{The Measurement-Disturbance Relation and the Disturbance Trade-off Relation in Terms of Relative Entropy}},}\ }\bibfield {booktitle} {\emph {\bibinfo {booktitle} {International Journal of Theoretical Physics}},\ }\href@noop {} {\ \textbf {\bibinfo {volume} {55}} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {D'Ariano}(2003)}]{DAriano2003} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Giacomo~M.}\ \bibnamefont {D'Ariano}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{On the Heisenberg principle, namely on the information-disturbance trade-off in a quantum measurement}},}\ }\href {\doibase 10.1002/prop.200310045} {\bibfield {journal} {\bibinfo {journal} {Fortschritte der Physik}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {318--330} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jordan}\ and\ \citenamefont {Korotkov}(2010)}]{Jordan2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Andrew~N.}\ \bibnamefont {Jordan}}\ and\ \bibinfo {author} {\bibfnamefont {Alexander~N.}\ \bibnamefont {Korotkov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Uncollapsing the wavefunction by undoing quantum measurements}},}\ }\href {\doibase 10.1080/00107510903385292} {\bibfield {journal} {\bibinfo {journal} {Contemporary Physics}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {125--147} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cheong}\ and\ \citenamefont {Lee}(2012)}]{Cheong2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Yong~Wook}\ \bibnamefont {Cheong}}\ and\ \bibinfo {author} {\bibfnamefont {Seung-Woo}\ \bibnamefont {Lee}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Balance Between Information Gain and Reversibility in Weak Measurement}},}\ }\href {\doibase 10.1103/PhysRevLett.109.150402} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {109}},\ \bibinfo {pages} {150402} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gilchrist}\ \emph {et~al.}(2005)\citenamefont {Gilchrist}, \citenamefont {Langford},\ and\ \citenamefont {Nielsen}}]{NielsenGold} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Alexei}\ \bibnamefont {Gilchrist}}, \bibinfo {author} {\bibfnamefont {Nathan~K.}\ \bibnamefont {Langford}}, \ and\ \bibinfo {author} {\bibfnamefont {Michael~A.}\ \bibnamefont {Nielsen}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Distance measures to compare real and ideal quantum processes}},}\ }\href {\doibase 10.1103/PhysRevA.71.062310} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {062310} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kretschmann}\ \emph {et~al.}(2008)\citenamefont {Kretschmann}, \citenamefont {Schlingemann},\ and\ \citenamefont {Werner}}]{Kretschmann2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Dennis}\ \bibnamefont {Kretschmann}}, \bibinfo {author} {\bibfnamefont {Dirk}\ \bibnamefont {Schlingemann}}, \ and\ \bibinfo {author} {\bibfnamefont {Reinhard~F.}\ \bibnamefont {Werner}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{The Information-Disturbance Tradeoff and the Continuity of Stinespring's Representation}},}\ }\href {\doibase 10.1109/TIT.2008.917696} {\bibfield {journal} {\bibinfo {journal} {IEEE Transactions on Information Theory}\ }\textbf {\bibinfo {volume} {54}},\ \bibinfo {pages} {1708--1717} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fan}\ \emph {et~al.}(2015)\citenamefont {Fan}, \citenamefont {Ge}, \citenamefont {Nha},\ and\ \citenamefont {Zubairy}}]{Fan2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Longfei}\ \bibnamefont {Fan}}, \bibinfo {author} {\bibfnamefont {Wenchao}\ \bibnamefont {Ge}}, \bibinfo {author} {\bibfnamefont {Hyunchul}\ \bibnamefont {Nha}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Zubairy}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Trade-off between information gain and fidelity under weak measurements}},}\ }\href {\doibase 10.1103/PhysRevA.92.022114} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {022114} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shitara}\ \emph {et~al.}(2016)\citenamefont {Shitara}, \citenamefont {Kuramochi},\ and\ \citenamefont {Ueda}}]{Shitara2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Tomohiro}\ \bibnamefont {Shitara}}, \bibinfo {author} {\bibfnamefont {Yui}\ \bibnamefont {Kuramochi}}, \ and\ \bibinfo {author} {\bibfnamefont {Masahito}\ \bibnamefont {Ueda}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Trade-off relation between information and disturbance in quantum measurement}},}\ }\href {\doibase 10.1103/PhysRevA.93.032134} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {032134} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2002)\citenamefont {Gisin}, \citenamefont {Ribordy}, \citenamefont {Tittel},\ and\ \citenamefont {Zbinden}}]{Gisin2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {Gr{\'{e}}goire}\ \bibnamefont {Ribordy}}, \bibinfo {author} {\bibfnamefont {Wolfgang}\ \bibnamefont {Tittel}}, \ and\ \bibinfo {author} {\bibfnamefont {Hugo}\ \bibnamefont {Zbinden}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Quantum cryptography}},}\ }\href {\doibase 10.1103/revmodphys.74.145} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {145--195} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pan}\ \emph {et~al.}(2012)\citenamefont {Pan}, \citenamefont {Chen}, \citenamefont {Lu}, \citenamefont {Weinfurter}, \citenamefont {Zeilinger},\ and\ \citenamefont {{\.{Z}}ukowski}}]{Pan2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Jian-Wei}\ \bibnamefont {Pan}}, \bibinfo {author} {\bibfnamefont {Zeng-Bing}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Chao-Yang}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {Harald}\ \bibnamefont {Weinfurter}}, \bibinfo {author} {\bibfnamefont {Anton}\ \bibnamefont {Zeilinger}}, \ and\ \bibinfo {author} {\bibfnamefont {Marek}\ \bibnamefont {{\.{Z}}ukowski}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Multiphoton entanglement and interferometry}},}\ }\href {\doibase 10.1103/revmodphys.84.777} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {777--838} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ekert}\ and\ \citenamefont {Jozsa}(1996)}]{Ekert1996} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Artur}\ \bibnamefont {Ekert}}\ and\ \bibinfo {author} {\bibfnamefont {Richard}\ \bibnamefont {Jozsa}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Quantum computation and Shor's factoring algorithm}},}\ }\href {\doibase 10.1103/revmodphys.68.733} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {68}},\ \bibinfo {pages} {733--753} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vedral}\ and\ \citenamefont {Plenio}(1998)}]{Vedral1998} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Vlatko}\ \bibnamefont {Vedral}}\ and\ \bibinfo {author} {\bibfnamefont {Martin~B.}\ \bibnamefont {Plenio}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Basics of quantum computation}},}\ }\href {\doibase 10.1016/s0079-6727(98)00004-4} {\bibfield {journal} {\bibinfo {journal} {Progress in Quantum Electronics}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {1--39} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Steane}(1998)}]{Steane1998} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Andrew}\ \bibnamefont {Steane}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Quantum computing}},}\ }\href {\doibase 10.1088/0034-4885/61/2/002} {\bibfield {journal} {\bibinfo {journal} {Reports on Progress in Physics}\ }\textbf {\bibinfo {volume} {61}},\ \bibinfo {pages} {117--173} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Hashagen}}\ and\ 
\citenamefont {{Wolf}}(2018)}]{Hashagen_Wolf_2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Anna-Lena~K.}\ \bibnamefont {{Hashagen}}}\ and\ \bibinfo {author} {\bibfnamefont {Michael~M.}\ \bibnamefont {{Wolf}}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Universality and Optimality in the Information-Disturbance Tradeoff}},}\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {ArXiv e-prints}\ } (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/1802.09893} {arXiv:1802.09893 [quant-ph]} \BibitemShut {NoStop} \bibitem [{SM()}]{SM} \BibitemOpen \href@noop {} {\emph {\bibinfo {title} {{Supplemental Material}}}}\BibitemShut {NoStop} \bibitem [{\citenamefont {Davies}\ and\ \citenamefont {Lewis}(1970)}]{Davies1970} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~Brian}\ \bibnamefont {Davies}}\ and\ \bibinfo {author} {\bibfnamefont {John~T.}\ \bibnamefont {Lewis}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{An operational approach to quantum probability}},}\ }\href {\doibase 10.1007/BF01647093} {\bibfield {journal} {\bibinfo {journal} {Communications in Mathematical Physics}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {239--260} (\bibinfo {year} {1970})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Watrous}(2018)}]{Watrous2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {John}\ \bibnamefont {Watrous}},\ }\href {https://cs.uwaterloo.ca/~watrous/TQI/TQI.pdf} {\emph {\bibinfo {title} {{The Theory of Quantum Information}}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {2018})\BibitemShut {NoStop} \bibitem [{\citenamefont {Neyman}\ and\ \citenamefont {Pearson}(1933)}]{Neyman1933} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Jerzy}\ \bibnamefont {Neyman}}\ and\ \bibinfo {author} {\bibfnamefont {Egon~S.}\ \bibnamefont {Pearson}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{On the Problem of the Most Efficient Tests of Statistical Hypotheses}},}\ }\href {\doibase 10.1098/rsta.1933.0009} {\bibfield {journal} {\bibinfo {journal} {Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences}\ }\textbf {\bibinfo {volume} {231}},\ \bibinfo {pages} {289--337} (\bibinfo {year} {1933})}\BibitemShut {NoStop} \bibitem [{Note1()}]{Note1} \BibitemOpen \bibinfo {note} {Allowing auxiliary systems, the relevant disturbance measure is the diamond norm, $\Delta _\diamond (T_s) := \protect \frac {1}{2} \protect \qopname \relax m{sup}_{\xi } \left \delimiter 69645069 \left (\left (T_s - T_{{\protect \rm id},d} \right ) \otimes T_{{\protect \rm id},d} \right )(\xi ) \right \delimiter 86422285 _1$, where the state $\xi $ includes auxiliary systems. 
Here, for the optimal tradeoff curve, the trace norm turns out to be equal to the diamond norm distance \cite {Hashagen_Wolf_2018}.}\BibitemShut {Stop} \bibitem [{\citenamefont {Wootters}\ and\ \citenamefont {Zurek}(1982)}]{Wootters1982} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {William~K.}\ \bibnamefont {Wootters}}\ and\ \bibinfo {author} {\bibfnamefont {Wojciech~H.}\ \bibnamefont {Zurek}},\ }\bibfield {title} {\enquote {\bibinfo {title} {A single quantum cannot be cloned},}\ }\href {\doibase 10.1038/299802a0} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {299}},\ \bibinfo {pages} {802--803} (\bibinfo {year} {1982})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hashagen}(2017)}]{Hashagen_2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Anna-Lena~K.}\ \bibnamefont {Hashagen}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Universal Asymmetric Quantum Cloning Revisited}},}\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quant. Inf. Comp.}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {0747--0778} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \end{thebibliography} \else \fi \ifdefined\showSM \setcounter{equation}{0} \setcounter{figure}{0} \makeatletter \renewcommand{S\@arabic\c@equation}{S\@arabic\c@equation} \renewcommand{S\@arabic\c@figure}{S\@arabic\c@figure} \makeatletter \ifdefined\showMain \else \title{Supplemental Material: \\ Measurement-disturbance tradeoff outperforming optimal cloning} \maketitle \fi \ifdefined\showMain \section*{\large Supplemental Material} \else \fi \section*{SM\,1: Optimal tradeoff relation} \label{sec:proofOptimalTradeoff} \begin{thm}[Total variation - trace norm tradeoff] Consider a von Neumann target measurement given by an orthonormal basis $\left\{ \ket{i} \in \ensuremath{\mathbb{C}}^2 \right\}_{i=1}^2$, and an instrument with two corresponding outcomes. Then the worst-case total variational distance $\delta$ and its trace-norm analogue $\Delta$, defined as in Eqs.~(\ref{eq:MeasError1},\ref{eq:Dist}), quantifying measurement error and disturbance respectively, satisfy \begin{equation} \Delta \geq \begin{cases} \frac{1}{2}\left( \sqrt{1-\delta} - \sqrt{\delta} \right)^2 & \text{if } \delta \leq \frac{1}{2}, \\ 0 & \text{if } \delta \geq \frac{1}{2}. \end{cases} \label{eq:TradeoffAppendix} \end{equation} The inequality is tight and equality is attained within the family of instruments defined by \begin{equation} I_j(\rho) := K_j\rho K_j, \qquad j=1,2, \label{eq:OptInstr} \end{equation} with \begin{equation} K_{1,2} = \frac{1}{\sqrt{2}} \left(\sqrt{1\pm\gamma} \ketbra{1}{1} + \sqrt{1\mp\gamma} \ketbra{2}{2}\right) \end{equation} with $\gamma \in [0,1]$. \label{thm:theoremOptimalTradeoff} \end{thm} \begin{proof} In order to derive the information-disturbance tradeoff, we need to solve the following optimization problem: \\ For $\gamma \in [0,1]$ \begin{align} \label{eq:opt1} &\text{minimize } & & \Delta\left(T_s = \sum_{j=1}^2 I_j\right) \\ &\text{subject to } & & \delta\left(E^\prime = \left\{ I_j^\ast(\mathbbm{1}) \right\}_{j=1}^2 \right) \leq \gamma, \nonumber \\ &&& I_j \text{ is c.p. and} \nonumber \\ &&& \sum_{j=1}^2 I_j^\ast (\mathbbm{1}) = \mathbbm{1}, \nonumber \end{align} where the last two constraints ensure that $I$ is an instrument. As discussed before, we assume that every element of the instrument can be expressed using a single Kraus operator. 
This agrees well with intuition, because more Kraus operators introduce more noise to the system. Furthermore, we assume that these Kraus operators can be chosen diagonal in the basis of the target measurement, $E=\{ \ketbra{j}{j} \}_{i=1}^2$, to reflect the symmetry of the optimization problem. These assumptions simplify the optimization problem significantly. The Kraus operators given in Eq.~(\ref{eq:generaldiagonalKraus}) then yield the following POVM elements of the approximate measurement \begin{equation} E^\prime_j = (1 - b_{\bar{j}}^2) \ketbra{j}{j} + b^2_j(\mathbbm{1} -\ketbra{j}{j}), \label{eq:EffectOp} \end{equation} for $j=1,2$, where $\bar{j}=2$ if $j=1$ and $\bar{j}=1$ if $j=2$ with $0 \leq b_1^2,b_2^2 \leq 1$. The measurement error is thus given as \begin{align*} \delta(E^\prime) &= \sup_{\rho} \frac{1}{2}\sum_{j=1}^2 \abs{\tr{E_j' \rho} - \bra{j} \rho \ket{j}} \\ &= \sup_{\rho} \frac{1}{2}\sum_{j=1}^2 \abs{\tr{ \left( b_j^2 \mathbbm{1} - (b_j^2+b_{\bar{j}}^2) \ketbra{j}{j} \right) \rho} } \\ &= \sup_{\norm{\psi}=1} \frac{1}{2}\sum_{j=1}^2 \abs{ \bra{\psi} b_j^2 \mathbbm{1} - (b_j^2+b_{\bar{j}}^2) \ketbra{j}{j} \ket{\psi} } \\ &= \frac{1}{2}(b_1^2 + b_2^2), \end{align*} where the convexity of the $l_1$-norm was used. The disturbance follows from direct calculations, \begin{align*} \Delta(T_1) &= \frac{1}{2} \sup_{\rho} \norm{T_1(\rho)-\rho}_1 \\ &= \frac{1}{2} \sup_{\rho} \norm{\sum_{j=1}^2K_j\rho K_j^\dagger-\rho}_1 \\ &= \frac{1}{2}\left|1-e^{i\beta_1}b_1\sqrt{1-b_2^2} - e^{i\beta_2}b_2\sqrt{1-b_1^2} \right|. \end{align*} Without loss of generality, we may assume that $b_1,b_2\geq 0$ in the optimization problem, such that an optimum is attained for $\beta_1=\beta_2=0$. The optimization problem given in Eq.~(\ref{eq:opt1}) therefore simplifies: \\ For $\gamma \in [0,1]$ \begin{align} \label{eq:opt2} &\text{minimize } & & \frac{1}{2}\left(1-b_1\sqrt{1-b_2^2} - b_2\sqrt{1-b_1^2} \right) \\ &\text{subject to } & & \frac{1}{2}(b_1^2 + b_2^2) \leq \frac{1}{2}\left(1-\gamma\right), \nonumber \\ &&& 0 \leq b_1,b_2 \leq 1. \nonumber \end{align} The global minimum is achieved at \begin{equation*} b_1=b_2= \begin{cases} \sqrt{\frac{1}{2}} & \gamma \in \left[ -1, 0\right] \\ \sqrt{\frac{1}{2}}\sqrt{1-\gamma} & \gamma \in \left[ 0, 1\right] \end{cases} \end{equation*} and as stated in Eq.~(\ref{eq:TradeoffAppendix}). \end{proof} \section*{SM\,2: Tradeoff relation for optimal universal asymmetric cloning}\label{sec:derivationCloning} \begin{thm}[Total variation - trace norm tradeoff using optimal universal asymmetric cloning] Consider a von Neumann measurement given by an orthonormal basis in $\ensuremath{\mathbb{C}}^2$ on one of the outputs of the optimal universal $1\to 2$ asymmetric quantum cloning channel. Then the worst-case total variational distance $\delta$ and its trace-norm analogue $\Delta$ satisfy \begin{equation} \Delta = \begin{cases} \frac{1}{4}\left( \sqrt{2-3\delta} - \sqrt{\delta} \right)^2 & \text{if } \delta \leq \frac{1}{2}, \\ 0 & \text{if } \delta \geq \frac{1}{2}. \end{cases} \label{eq:CloningTradeoffAppendix} \end{equation} \label{thm:theoremCloningTradeoff} \end{thm} \begin{proof} The marginals of the optimal cloning channel are given by \begin{equation} T_{\text{clo},i} (\rho) = a_i^2 \frac{\mathbbm{1}}{2} \tr{\rho} + (1-a_i^2)\rho, \ \ i=1,2, \end{equation} with $T_{\text{clo},1}=T_s$ and $T_{\text{clo},2}=T_{s^\prime}$. 
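As a quick numerical sanity check of these marginals (a minimal sketch only, assuming NumPy and representing qubit states by $2\times 2$ density matrices; the function names are ours and purely illustrative), one can estimate the two suprema by sampling pure states and verify that they agree with the values $a_1^2/2$ and $a_2^2/2$ obtained analytically below:
\begin{verbatim}
import numpy as np

def cloning_marginal(rho, a_sq):
    # T_clo,i(rho) = a_i^2 * (1/2) * tr(rho) + (1 - a_i^2) * rho
    return a_sq * np.eye(2) / 2 * np.trace(rho) + (1 - a_sq) * rho

def random_pure_state(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def trace_norm(m):
    return np.sum(np.linalg.svd(m, compute_uv=False))

rng = np.random.default_rng(0)
a1_sq, a2_sq = 0.3, 0.4                    # illustrative values of a_1^2, a_2^2
states = [random_pure_state(rng) for _ in range(5000)]
states += [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

# Disturbance: Delta(T_s) = (1/2) sup_rho ||T_s(rho) - rho||_1  ->  a_1^2 / 2
disturbance = max(0.5 * trace_norm(cloning_marginal(r, a1_sq) - r) for r in states)

# Measurement error: delta(E') with E'_j = T_{s'}^*(|j><j|)  ->  a_2^2 / 2
def measurement_error(rho):
    out = cloning_marginal(rho, a2_sq)
    return 0.5 * sum(abs(out[j, j].real - rho[j, j].real) for j in range(2))

meas_error = max(measurement_error(r) for r in states)
print(disturbance, a1_sq / 2)              # both ~0.15
print(meas_error, a2_sq / 2)               # both ~0.20
\end{verbatim}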
The marginal quantum channel $T_s$ describes the evolution of the quantum state and its distance to the identity channel $T_{\rm id}$ then quantifies the disturbance. Similarly, the marginal $T_{s^\prime}$, whose output is measured by the target measurement $E$, describes the measurement itself through $E_j' = T_{s^\prime}^\ast(E_j)$. This is illustrated in Fig.~\ref{fig:cloning}. This yields for the disturbance \begin{align*} \Delta(T_s) :=& \frac{1}{2} \sup_{\rho} \norm{T_s(\rho) -\rho}_1 \\ =& \frac{1}{2}\sup_{\rho} \norm{a_1^2\frac{\mathbbm{1}}{2}-a_1^2\rho}_1 \\ =& \frac{a_1^2}{2}. \end{align*} The measurement error turns out to be \begin{align*} \delta(E^\prime) :=& \sup_{\rho} \frac{1}{2} \sum_{j=1}^2 \abs{\tr{E_j' \rho} - \bra{j} \rho \ket{j}} \\ =& \sup_{\rho} \frac{1}{2} \sum_{j=1}^2 \abs{\tr{T_{s^\prime}^\ast(\ketbra{j}{j}) \rho} - \bra{j} \rho \ket{j}} \\ =& \sup_{\rho} \frac{1}{2} \sum_{j=1}^2 \abs{\tr{\ketbra{j}{j} T_{s^\prime}(\rho)} - \bra{j} \rho \ket{j}} \\ =& \sup_{\rho} \frac{1}{2} \sum_{j=1}^2 \abs{\bra{j} a_2^2 \frac{\mathbbm{1}}{2} - a_2^2 \rho \ket{j}} \\ =& \frac{a_2^2}{2}. \end{align*} Substituting this into the trace-preserving condition of the optimal universal asymmetric quantum cloning channel, we obtain the theorem~\ref{thm:theoremCloningTradeoff}. \end{proof} \section*{SM\,3: Tradeoff relation for coherent swap}\label{sec:derivationSwap} \begin{thm}[Total variation - trace norm tradeoff using the coherent swap] Consider a von Neumman measurement given by an orthonormal basis in $\ensuremath{\mathbb{C}}^2$ on one of the outputs of a coherent swap channel. Then the worst-case total variational distance $\delta$ and its trace-norm analogue $\Delta$ satisfy \begin{equation} \Delta = \frac{1}{2} - \delta. \label{eq:CohSwapTradeoffAppendix} \end{equation} \label{thm:theoremSwapTradeoff} \end{thm} \begin{proof} Using the substitution $a_1=a$ and $a_2 = \sqrt{1-a^2}$ with $a\in [0,1]$ yields the two marginals of the coherent swap quantum channel, \begin{equation} T_s(\rho) = a^2 \tilde{\rho} + (1-a^2)\rho \label{eq:T1swap} \end{equation} and \begin{equation} T_{s^\prime}(\rho) = (1-a^2)\tilde{\rho}+a^2 \rho. \label{eq:T2swap} \end{equation} The disturbance is therefore \begin{align*} \Delta(T_s) :=& \frac{1}{2} \sup_{\rho} \norm{T_s(\rho) -\rho}_1 \\ =& \frac{1}{2} a^2 \sup_{\rho} \norm{\tilde{\rho} -\rho}_1. \end{align*} The optimal choice for $\tilde{\rho}$ should clearly satisfy the points $(\Delta(T_s)=0,\delta(E^\prime)=1/2)$ and $(\Delta(T_s)=1/2,\delta(E^\prime)=0)$, where again $E^\prime=T_{s^\prime}^\ast(E)$. For any such choice of $\tilde{\rho}$ the disturbance thus satisfies $\Delta(T_s) \geq a^2/2$. The measurement error turns out to be \begin{align*} \delta(E^\prime) :=& \sup_{\rho} \frac{1}{2} \sum_{j=1}^2 \abs{\tr{E_j^\prime \rho} - \bra{j} \rho \ket{j}} \\ =& \sup_{\rho} \frac{1}{2} \sum_{j=1}^2 \abs{\tr{T_{s^\prime}^\ast(\ketbra{j}{j}) \rho} - \bra{j} \rho \ket{j}} \\ =& \sup_{\rho} \frac{1}{2} \sum_{j=1}^2 \abs{\tr{\ketbra{j}{j} T_{s^\prime}(\rho)} - \bra{j} \rho \ket{j}} \\ =& \left(1-a^2\right)\sup_{\rho} \frac{1}{2} \sum_{j=1}^2 \abs{ \bra{j} \tilde{\rho} \ket{j} - \bra{j} \rho \ket{j} }. \end{align*} Thus, an optimal choice for $\tilde{\rho}$ that minimizes the disturbance and the measurement error is $\tilde{\rho} = \mathbbm{1}/2$. A pure state with the same diagonal entries yields the same measurement error; it would, however, increase the disturbance caused to the system. 
The disturbance is then \begin{equation*} \Delta(T_s) = \frac{a^2}{2}, \end{equation*} and the measurement error is \begin{equation*} \delta(E^\prime) = \frac{1}{2}\left( 1-a^2\right). \end{equation*} This gives the linear tradeoff curve given in theorem~\ref{thm:theoremSwapTradeoff}. \end{proof} \section*{SM\,4: Properties of distance measures} \label{sec:distanceProperties} The distance measures used throughout this manu\-script to quantify the measurement error and the disturbance, denoted by $\delta$ and $\Delta$, satisfy Assumption~1 and Assumption~2 of \cite{Hashagen_Wolf_2018} respectively. \begin{lem} \label{lem:prop1} $\delta$ as defined in Eq.~(\ref{eq:MeasError1}) satisfies the following properties: \begin{enumerate}[(a)] \item $\delta(\{\ketbra{i}{i}\}_{i=1}^2) = 0$, \item $\delta$ is convex, \item $\delta$ is permutation invariant, i.e., for every permutation $\pi$ and any measurement $M$ \begin{equation} \delta \left( \{ U_\pi^\dagger M_{\pi(i)} U_\pi \}_{i=1}^2 \right) = \delta \left(\{ M_i \}_{i=1}^2\right), \nonumber \end{equation} where $U_\pi$ is the permutation matrix that acts as $U_\pi \ket{i} = \ket{\pi(i)}$, and \item $\delta$ is invariant under diagonal unitaries, i.e., that for every diagonal unitary $D$ and any measurement $M$ \begin{equation} \delta \left( \{ D^\dagger M_{i} D \}_{i=1}^2 \right) = \delta \left(\{ M_i \}_{i=1}^2\right). \nonumber \end{equation} \end{enumerate} \end{lem} \begin{proof} Let $\delta(M):=\sup_{\rho}\frac{1}{2} \sum_{i=1}^2 \abs{\tr{M_i \rho} - \bra{i} \rho \ket{i}}$. Then \begin{enumerate}[(a)] \item $\delta(\{\ketbra{i}{i}\}_{i=1}^2) = 0$, since \begin{equation*} \delta(\{\ketbra{i}{i}\}_{i=1}^2) = \sup_{\rho} \frac{1}{2} \sum_{i=1}^2 \abs{\bra{i} \rho \ket{i} - \bra{i} \rho \ket{i}} =0, \end{equation*} \item $\delta$ is convex, since for any measurements $M, M'$ and for all $\lambda \in [0,1]$, \begin{align*} &\delta \left(\lambda M + (1-\lambda)M' \right) \\ = &\sup_{\rho } \frac{1}{2} \sum_{i=1}^2 \abs{\tr{\left(\lambda M_i + (1-\lambda)M_i'\right) \rho} - \bra{i} \rho \ket{i}} \\ \leq & \lambda \sup_{\rho } \frac{1}{2} \sum_{i=1}^2 \abs{ \tr{M_i\rho} -\bra{i} \rho \ket{i} } \\ & \qquad + (1-\lambda) \sup_{\rho} \frac{1}{2} \sum_{i=1}^2 \abs{ \left( \tr{M_i'\rho} -\bra{i} \rho \ket{i} \right)} \\ = & \lambda \delta(M) + (1-\lambda) \delta(M'), \end{align*} \item $\delta$ is permutation invariant, since for every permutation $\pi$ and any measurement $M$ \begin{align*} &\delta \left( \{ U_\pi^\dagger M_{\pi(i)} U_\pi \}_{i=1}^2 \right) \\ = & \sup_{\rho} \frac{1}{2} \sum_{i=1}^2 \abs{\tr{U_\pi^\dagger M_{\pi(i)} U_\pi \rho} - \bra{i} \rho \ket{i}} \\ = &\sup_{\rho} \frac{1}{2} \sum_{i=1}^2 \abs{\tr{ M_{\pi(i)} \rho } - \bra{\pi(i)} \rho \ket{\pi(i)}} \\ = & \sup_{\rho} \frac{1}{2} \sum_{i=1}^2 \abs{\tr{ M_{i} \rho } - \bra{i} \rho \ket{i}} \\ = &\delta \left(\{ M_i \}_{i=1}^2\right), \end{align*} where $U_\pi$ is the permutation matrix that acts as $U_\pi \ket{i} = \ket{\pi(i)}$, and \item $\delta$ is invariant under diagonal unitaries, since for every diagonal unitary $D$ and any measurement $M$ \begin{align*} &\delta \left( \{ D^\dagger M_{i} D \}_{i=1}^2 \right) \\ = & \sup_{\rho} \frac{1}{2} \sum_{i=1}^2 \abs{\tr{D^\dagger M_{i} D \rho} - \bra{i} \rho \ket{i}} \\ = & \sup_{\rho} \frac{1}{2} \sum_{i=1}^2 \abs{\tr{M_{i} \rho} - \bra{i} D^\dagger \rho D\ket{i}} \\ = &\sup_{\rho} \frac{1}{2} \sum_{i=1}^2 \abs{\tr{M_{i} \rho} - \bra{i} \rho \ket{i}} \\ = &\delta \left(\{ M_i \}_{i=1}^2\right). 
\end{align*} \end{enumerate} \end{proof} \begin{lem} \label{lem:prop2} $\Delta$ as defined in Eq.~(\ref{eq:Dist}) satisfies the following properties: \begin{enumerate}[(a)] \item $\Delta(T_{\rm id}) = 0$, \item $\Delta$ is convex, \item $\Delta$ is basis-independent, i.e., for every unitary $U$ and every quantum channel $\Phi$ \begin{equation} \Delta \left( U \Phi \left( U^\dagger \cdot U \right) U^\dagger \right) = \Delta\left( \Phi \right). \nonumber \end{equation} \end{enumerate} \end{lem} \begin{proof} Let $\Delta (\Phi) := \frac{1}{2} \sup_{\rho} \norm{\Phi(\rho) -\rho}_1$. Then \begin{enumerate}[(a)] \item $\Delta(T_{\rm id}) = 0$, since $\Delta (T_{\rm id}) = \frac{1}{2} \sup_{\rho} \norm{\rho -\rho}_1 =0$, \item $\Delta$ is convex, since for any quantum channels $\Phi, \Phi'$ and for all $\lambda \in [0,1]$, \begin{align*} &\Delta\left( \lambda \Phi +(1-\lambda)\Phi' \right) \\ = &\frac{1}{2} \sup_{\rho} \norm{\left( \lambda \Phi +(1-\lambda)\Phi' \right)(\rho) -\rho}_1 \\ = &\frac{1}{2} \sup_{\rho} \norm{ \lambda \left(\Phi(\rho) -\rho \right) +(1-\lambda) \left( \Phi'(\rho) -\rho \right) }_1 \\ \leq & \lambda \frac{1}{2} \sup_{\rho} \norm{ \Phi(\rho) -\rho }_1 + (1-\lambda) \frac{1}{2} \sup_{\rho} \norm{ \Phi'(\rho) -\rho }_1 \\ = & \lambda \Delta(\Phi) + (1-\lambda) \Delta (\Phi'), \end{align*} where we have used properties of a norm and properties of a supremum of a convex functional over a convex set, \item $\Delta$ is basis-independent, i.e., for every unitary $U$ and every quantum channel $\Phi$ \begin{align*} &\Delta \left( U \Phi \left( U^\dagger \rho U \right) U^\dagger \right) \\ = &\frac{1}{2} \sup_{\rho} \norm{U \Phi \left( U^\dagger \rho U \right) U^\dagger -\rho}_1 \\ = &\frac{1}{2} \sup_{\rho} \norm{U \Phi \left( \rho\right) U^\dagger - U \rho U^\dagger}_1 \\ = &\frac{1}{2} \sup_{\rho} \norm{ \Phi \left( \rho\right) - \rho }_1 \\ = &\Delta\left( \Phi \right), \end{align*} where we have used the fact that the trace norm is unitarily invariant. \end{enumerate} \end{proof} \section*{SM\,5: Different measures} The optimal instruments as explained in the main text and derived in Sec.~\ref{sec:proofOptimalTradeoff} result in optimal measurement-disturbance relations for all distance measures which satisfy the assumptions of \cite{Hashagen_Wolf_2018}. For more details on the distance measure used in the main text see Sec.~\ref{sec:distanceProperties}. \begin{figure} \caption{ Comparison of optimal quantum instruments (blue) with the optimal universal asymmetric quantum cloner (red) for different distance measures based on simulations. The tradeoff relation of the main text based on the measures of Eqs.~(\ref{eq:MeasError1}) and (\ref{eq:Dist}) is shown (solid lines) and equivalent to a properly scaled version of the worst-case Hilbert-Schmidt norm (overlayed dashed lines) and to the worst-case infidelity (not shown). For averaging over all quantum states instead of taking the supremum of the trace norm for the disturbance, one obtains the dashdotted lines. } \label{fig:differentMeasures} \end{figure} We here show the tradeoff relations for different choices of disturbance measures, while the measurement error is always quantified as in Eq.~(\ref{eq:MeasError1}). For various meaningful measures, we observe that the optimal instruments outperform the cloner, see Fig.~\ref{fig:differentMeasures}. \section*{SM\,6: Experimental setup} Due to experimental and practical limitations, the actual experimental setup has been slightly different than described in the main text. 
However, the actual implementation is fully equivalent to the description there. In order to be able to fully tune the attenuation in one of the interferometer arms, we use a half waveplate (HWP) sandwiched between two polarizers. Therefore, the polarization state $\rho$ cannot be set before. Hence, we decided to first create the spatial superposition state $\ket{\phi_0}$ using waveplates and polarizers and subsequently set $\rho$ in both interferometer arms separately. With this approach, we still achieve at this stage a separable state $\rho\otimes\ketbra{\phi_0}{\phi_0}$ within the interferometer before the interaction. As we set the polarization state directly in front of the second beam splitter of the interferometer, the reflection of beam $A$ on the beam splitter already provides the interaction between system and auxiliary system. This reflection induces the unitary transformation $U$ as described in the main text, enabling us to obtain the Kraus operators given in Eq.~(\ref{eq:KrausOperators}). Since for a perfect beam splitter the output ports are interchanged for $\varphi_0\leftrightarrow\varphi_0+\pi$, we use only output port $C$ to obtain data for both projections, considering the phases $\varphi_1 = \varphi_0$ and $\varphi_2 = \varphi_0 + \pi$. This way, both projections are carried out with exactly the same equipment, reducing possible experimental errors. \begin{figure} \caption{{Actual experimental setup.} Light from a diode laser (LD) propagates through a single mode fiber and is sent through a fixed polarizer (H-POL). A beam splitter (BS) creates a spatial superposition. The attenuation of one arm can be adjusted using a half waveplate (HWP) in arm $A$ and another H-POL. The relative phase $\varphi$ can be varied using a piezo controlled prism. H-POLs together with variable HWPs ensure equal polarization in both arms as indicated by the dotted lines. As the H-POLs are used to vary the attenuation as well as to set the polarization state, they are part of both the instrument and the state preparation. The reflection from arm $A$ on the second BS introduces a coupling between polarization and path. Polarization and intensity measurements are performed in output port $C$ using waveplates (HWP and QWP), polarizing beam splitters (PBS) and photodiodes (PD). Output port $D$ is not monitored, as for phase $\varphi_0$ it is redundant to the output of port $C$ at phase $\varphi_0+\pi$. } \label{fig:ActualExpsetup} \end{figure} \section*{SM\,7: Choice of polarization states} \label{sec:choiceOfStates} According to the parametrization $\ket{\psi} = \cos\frac{\theta}{2} \ket{H} + \sin\frac{\theta}{2} \ket{V}$, the experimentally prepared values for $\theta$ were $\{-20^\circ$, $-10^\circ$, $0^\circ$, $10^\circ$, $20^\circ$, $70^\circ$, $80^\circ$, $90^\circ$, $100^\circ$, $110^\circ$, $160^\circ$, $170^\circ$, $180^\circ$, $190^\circ$, $200^\circ$, $270^\circ\}$. For $\theta=0^\circ$ and $\theta=180^\circ$, the prepared state corresponds to horizontal polarization $\ket{H}$ and vertical polarization $\ket{V}$, respectively. Thus, the reflection in beam $A$ only introduces a phase, as for example the state for $\theta=0^\circ$ is transformed according to \begin{align} \ket{H}&\otimes\left(\cos\alpha\ket{A}+\sin\alpha e^{i\varphi}\ket{B}\right)\rightarrow \nonumber \\ \ket{H}&\otimes\left(i\cos\alpha\ket{A}+\sin\alpha e^{i\varphi}\ket{B}\right), \end{align} which does not change the state of the polarization. The disturbance therefore (ideally) vanishes. 
In contrast, for $\theta=90^\circ$, we expect \begin{align} \left(\ket{H}+\ket{V}\right)&\otimes\left(\cos\alpha\ket{A}+\sin\alpha e^{i\varphi}\ket{B}\right)\rightarrow \nonumber \\ i\left(\ket{H}-\ket{V}\right)&\otimes\cos\alpha\ket{A}+\left(\ket{H}+\ket{V}\right)\otimes\sin\alpha e^{i\varphi}\ket{B}, \end{align} where normalization is omitted. For a given instrument characterized by $\{\alpha,\varphi\}$, this polarization state is expected to give the largest disturbance $\Delta$. For the Kraus operators given in Eq.~(\ref{eq:KrausOperators}), we find for $E_j^\prime=K_j^\dagger K_j$ for $j=1,2$, \begin{equation} E_{1,2}^\prime = \frac{1}{2} \begin{pmatrix} 1 \pm \sin 2\alpha\cos\varphi & 0 \\ 0 & 1 \mp \sin 2\alpha\cos\varphi \end{pmatrix}. \end{equation} Therefore, the distance of the outcome probabilities, used to obtain $\delta$, becomes \begin{align} \frac{1}{2}\sum_i\left|\tr{E_i^\prime\ketbra{\psi}{\psi}}-\left|\braket{i}{\psi}\right|^2\right|=\nonumber\\ \left|\cos\theta\left(1-\cos\varphi\sin2\alpha\right)\right|, \end{align} which vanishes for $\theta=90^\circ$ (and $\theta=270^\circ$) and can be maximal for $\theta=0^\circ$ (and $\theta=180^\circ$). \section*{SM\,8: Error analysis of experimental data} The statistical error of the data shown in Fig.~\ref{fig:dataplotInstruments} is estimated by comparing the results obtained in redundant measurements. The standard deviation of the measurement error is estimated to be around $8.3\cdot10^{-5}$, whereas the $1\sigma$-error bar for the estimated disturbance is approximately $7.0\cdot10^{-5}$. Those values are thus too small to be visible in Fig.~\ref{fig:dataplotInstruments}. Additionally to statistical errors, two different sources of systematic errors have been identified. First, the state preparation as well as the interaction are not perfectly implemented. The imperfect preparation of the initial polarization state and of the state analysis are the main reasons that the identity channel with no disturbance at all (but high measurement error) cannot be implemented perfectly, leading to a residual disturbance, which appears as an increase of the minimal disturbance $\Delta$ of the data in the plot. In any case, this type of error only reduces the quality of the prepared quantum instruments and does not lead to faulty conclusions. However, as a second type of systematic error one has to ensure that the prepared polarization states are describing a great circle on the Bloch sphere and contain the states with extremal results sufficiently well. This error can be approximated by considering the data as shown in Fig.~\ref{fig:dataplot}. By applying a parabolic model for the data points around the extrema of the probability graphs and the maxima of the trace distance graphs, the deviation of the extrema from the measured points can be estimated. This effect might cause a quantum instrument to look better than it actually is, i.e., less disturbing together with smaller measurement error. Yet, for the dataset shown in Fig.~\ref{fig:dataplot}~b), the parabolic fit results in a maximum at $\theta\approx89.95^\circ$ with a trace distance larger by only $0.02\%$ compared to the trace distance at $\theta=90^\circ$. The probabilities in Fig.~\ref{fig:dataplot}~a) around $\theta=0^\circ$ and $\theta=180^\circ$ can nicely be described by parabolae, where the extrema coincide with our measured points. 
Thus, the systematic effect of underestimating the measurement error or the disturbance due to badly chosen measurement states is negligibly small. In conclusion, the different sources of error overall reduce the quality of the implemented quantum instruments and do not lead to an underestimation of the disturbance or the measurement error. We can therefore demonstrate, with high significance, the implementation of instruments that outperform the optimal quantum cloner. \else \fi \ifdefined\showMain \else \onecolumngrid \fi \end{document}
\begin{document} \title{A simple proof of the Gan--Loh--Sudakov conjecture} \begin{abstract} We give a new unified proof that any simple graph on $n$ vertices with maximum degree at most $\Delta$ has no more than $a\binom{\Delta+1}{t}+\binom{b}{t}$ cliques of size $t \ (t \ge 3)$, where $n = a(\Delta+1)+b \ (0 \le b \le \Delta)$. \end{abstract} \section{Introduction} For a positive integer $t \ge 3$, let $k_t(G)$ be the number of cliques of size $t$ in a simple graph $G = G(V, E)$. In \cite{gan_loh_sudakov}, Gan, Loh, and Sudakov asked how large $k_t(G)$ can be for graphs with maximum degree at most $\Delta$. They made a conjecture, which we henceforth refer to as the \emph{GLS Conjecture}, that $k_t(G)$ is maximized by a disjoint union of $a$ cliques of size $\Delta+1$ and one clique of size $b$, where $|V| = a(\Delta+1) + b$ for $0 \le b \le \Delta$. Moreover, they proved in \cite{gan_loh_sudakov} that \[ \text{the GLS Conjecture holds for $t = 3$ \quad $\Longrightarrow$ \quad the GLS Conjecture holds for $t \ge 4$. } \] The proof is an application of the Lov\'{a}sz version of the famed Kruskal--Katona theorem (see \cite{frankl}). Later on, Chase proved in \cite{chase} that the GLS Conjecture holds for $t = 3$, and hence resolved the GLS Conjecture completely. In this short note we present a new proof of the GLS Conjecture that works uniformly for all $t \ge 3$ without using the Kruskal--Katona theorem. The proof can be viewed as a simplification and a generalization of Chase's proof in \cite{chase}. We prove the following statement: \begin{theorem} \label{thm:GLS} Let $G$ be a simple graph on $n$ vertices with maximum degree at most $\Delta$. For any integer $t \ge 3$, if $n = a(\Delta+1)+b$ where $a, b \in \Z$ and $0 \le b \le \Delta$, then $k_t(G) \le a\binom{\Delta+1}{t}+\binom{b}{t}$. \end{theorem} For every simple graph $G = G(V, E)$, write $u \sim v$ if $uv$ is an edge, and $u \nsim v$ if $uv$ is a nonedge. We denote by $\overline{N(v)} \eqdef \{v\} \cup \{u \in V : u \sim v\}$ the closed neighborhood of $v$. Let $T_v$ be the set of all $t$-cliques intersecting $\overline{N(v)}$. The proof of \Cref{thm:GLS} relies on the following lemma: \begin{lemma} \label{lem:neighb_removal} For any integer $t \ge 3$, if $G = G(V, E)$ is a simple graph, then \[ \sum_{v \in V} |T_v| \le \sum_{v \in V} \binom{\deg(v)+1}{t}. \] \end{lemma} This note is organized as follows: We first show that \Cref{thm:GLS} follows from \Cref{lem:neighb_removal}, and then prove \Cref{lem:neighb_removal} in a separate section. \begin{proof}[Proof of \Cref{thm:GLS} assuming \Cref{lem:neighb_removal}] Fix $t \ge 3$ and $\Delta \in \N_+$. We induct on $n$. The base case is obvious, as \Cref{thm:GLS} is trivially true for $n = 0, 1, \dotsc, \Delta+1$. Suppose now that \Cref{thm:GLS} is true for $n-1, n-2, \dotsc, n-\Delta-1$, and let $G$ be an $n$-vertex graph with maximum degree at most $\Delta$. By \Cref{lem:neighb_removal}, there exists $v \in V$ such that $|T_v| \le \binom{\deg(v)+1}{t}$. Delete the closed neighborhood $\overline{N(v)}$ from $G$: the $t$-cliques destroyed in this way are exactly those in $T_v$, and the remaining graph has $n-\deg(v)-1$ vertices and maximum degree at most $\Delta$. Applying the induction hypothesis to the remaining graph, we have that \[ k_t(G) \le \begin{cases} \binom{\deg(v)+1}{t} + a\binom{\Delta+1}{t} + \binom{b-\deg(v)-1}{t}, \quad &\text{when $b \ge \deg(v)+1$}, \\ \binom{\deg(v)+1}{t} + (a-1)\binom{\Delta+1}{t} + \binom{b+\Delta-\deg(v)}{t}, \quad &\text{when $b < \deg(v)+1 \le b+\Delta+1$}. \end{cases} \] Since the sequence $\left\{\binom{n}{t}\right\}_{n \ge 0}$ is convex, we have that $\binom{\deg(v)+1}{t} + \binom{b-\deg(v)-1}{t} \le \binom{b}{t}$ when $b \ge \deg(v)+1$, and $\binom{\deg(v)+1}{t}+\binom{b+\Delta-\deg(v)}{t} \le \binom{\Delta+1}{t}+\binom{b}{t}$ otherwise.
We conclude that $k_t(G) \le a\binom{\Delta+1}{t}+\binom{b}{t}$. \end{proof} \section{Proof of \texorpdfstring{\Cref{lem:neighb_removal}}{Lemma 2}} Define the set \[ \Phi \eqdef \{(u, x_1, \dotsc, x_t) \in V^{t+1} : \text{$x_1, \dotsc, x_t$ form a $t$-clique in $G$, and $u \sim x_i$ for some $i \in [t]$}\}. \] Observe that each $(v, x_1, \dotsc, x_t) \in \Phi$ consists of a vertex $v \in V$ and a $t$-clique $x_1 \dotsb x_t \in T_v$. Since for every $t$-clique in $G$, there are $t!$ ways to label its $t$ vertices as $x_1, \dotsc, x_t$, we have that \begin{equation} \label{eq:T_Phi} |\Phi| = t! \sum_{v \in V} |T_v|. \end{equation} For each tuple $(u, x_1, \dotsc, x_t) \in \Phi$, the vertices $u, x_1, \dotsc, x_t$ are not necessarily distinct. However, there are at least $t$ distinct vertices among $u, x_1, \dotsc, x_t$, because $x_1, \dotsc, x_t$ form a $t$-clique. For every tuple $(u, x_1, \dotsc, x_t) \in V^{t+1}$, we call it \emph{good} if $u, x_1, \dotsc, x_t$ are distinct, and \emph{bad} otherwise. Let \begin{align*} \Pg &\eqdef \{(u, x_1, \dotsc, x_t) \in \Phi : \text{$(u, x_1, \dotsc, x_t)$ is good}\}, \\ \Pb &\eqdef \{(u, x_1, \dotsc, x_t) \in \Phi : \text{$(u, x_1, \dotsc, x_t)$ is bad}\}. \end{align*} Then $\Pg$ and $\Pb$ partition $\Phi$. Fix $v \in V$. If $(v, x_1, \dotsc, x_t) \in \Pb$, then $v, x_1, \dotsc, x_t$ are vertices of a $t$-clique in $G$, where exactly one $x_i$ happens to be $v$. There are $t$ choices for this $x_i$, and at most $\binom{\deg(v)}{t-1}$ choices for the rest of the vertices $x_1, \dotsc, x_{i-1}, x_{i+1}, \dotsc, x_t$, and $(t-1)!$ choices for their possible permutations. Hence, \begin{equation} \label{eq:phi_good} |\Pb| \le \sum_{v \in V} t \cdot \binom{\deg(v)}{t-1} \cdot (t-1)! = t! \sum_{v \in V} \binom{\deg(v)}{t-1}. \end{equation} To upper bound $|\Pg|$, we need to introduce the auxiliary set \begin{align*} \Og \eqdef \{(w, y_1, \dotsc, y_t) \in V^{t+1} : \ &\text{$(w, y_1, \dotsc, y_t)$ is good, $w \sim y_i$ for all $i \in [t]$, } \\ &\text{and $y_1, \dotsc, y_t$ contain a $(t-1)$-clique in $G$}\}. \end{align*} For any fixed $v \in V$, if $(v, y_1, \dotsc, y_t) \in \Og$, then $y_1, \dotsc, y_t$ are distinct neighbors of $v$, and so \begin{equation} \label{eq:omega_good} |\Og| \le t! \sum_{v \in V} \binom{\deg(v)}{t}. \end{equation} We claim that \begin{equation} \label{eq:phi_omega} |\Pg| \le |\Og|. \end{equation} Assume that \eqref{eq:phi_omega} is established. From the combination of \eqref{eq:T_Phi}, \eqref{eq:phi_good}, \eqref{eq:omega_good}, and \eqref{eq:phi_omega}, we obtain \begin{align*} t! \sum_{v \in V} |T_v| &= |\Phi| = |\Pb| + |\Pg| \le |\Pb| + |\Og| \\ &\le t! \sum_{v \in V} \Bigg( \binom{\deg(v)}{t-1} + \binom{\deg(v)}{t} \Bigg) \\ &= t! \sum_{v \in V} \binom{\deg(v)+1}{t}, \end{align*} which concludes the proof of \Cref{lem:neighb_removal}. \qed \begin{proof}[Proof of estimate \eqref{eq:phi_omega}] When $\bm{u} \eqdef (u, x_1, \dotsc, x_t) \in \Pg$ or $\bm{w} \eqdef (w, y_1, \dotsc, y_t) \in \Og$, the induced subgraph $G[\bm{u}]$ or $G[\bm{w}]$ is connected and contains a $t$-clique. Consider any induced $(t+1)$-vertex subgraph $H$ of $G$ that is connected and contains a $t$-clique. Let $z_1, \dotsc, z_t$ be the vertices of the $t$-clique (choose arbitrary ones if there are several). Let $z^*$ be the remaining vertex of $H$. Assume without loss of generality that $z^* \sim z_1, \dotsc, z^* \sim z_k$, and $z^* \nsim z_{k+1}, \dotsc, z^* \nsim z_t$. 
Since $t \ge 3$, we count, for the different values of $k$, the contribution of $H$ to $|\Pg|$ and $|\Og|$, respectively: \begin{itemize} \item $1 \le k \le t-2$. If $(u, x_1, \dotsc, x_t) \in \Pg$, then, since the degree of $z^*$ in $H$ is less than $t-1$, the vertex $z^*$ lies in no $t$-clique of $H$; hence $\{x_1, \dotsc, x_t\} = \{z_1, \dotsc, z_t\}$ and $u = z^*$. If $(w, y_1, \dotsc, y_t) \in \Og$, then $w \in \{z_1, \dotsc, z_k\}$, and hence $\{y_1, \dotsc, y_t\} = \{z^*, z_1, \dotsc, z_t\} \setminus \{w\}$. Such an $H$ contributes $t!$ and $k \cdot t!$ elements to $\Pg$ and $\Og$, respectively. \item $k = t-1$. If $(u, x_1, \dotsc, x_t) \in \Pg$, then $\{x_1, \dotsc, x_t\} \supset \{z_1, \dotsc, z_{t-1}\}$, and hence $u \in \{z_t, z^*\}$. If $(w, y_1, \dotsc, y_t) \in \Og$, then $w \in \{z_1, \dotsc, z_{t-1}\}$, and hence $\{y_1, \dotsc, y_t\} = \{z^*, z_1, \dotsc, z_t\} \setminus \{w\}$. Such an $H$ contributes $2 \cdot t!$ and $(t-1) \cdot t!$ elements to $\Pg$ and $\Og$, respectively. \item $k = t$. Then $H = K_{t+1}$. Such an $H$ contributes $(t+1)!$ elements to both $\Pg$ and $\Og$. \end{itemize} The claimed estimate \eqref{eq:phi_omega} follows from the cases above. \end{proof} \section*{Acknowledgements} We are grateful to Boris Bukh for helpful discussions and suggestions during the preparation of this note. We thank an anonymous referee for valuable feedback on the earlier version of this note. \end{document}
\begin{document} \title{The Effect of Communication on Noncooperative Multiplayer Multi-Armed Bandit Problems} \author{ \IEEEauthorblockN{Noyan Evirgen\IEEEauthorrefmark{1}, Alper Köse\IEEEauthorrefmark{2}\IEEEauthorrefmark{3}} \IEEEauthorblockA{ \small \IEEEauthorrefmark{1}Department of Information Technology and Electrical Engineering, ETH Zurich \\ \IEEEauthorrefmark{2}Research Laboratory of Electronics, Massachusetts Institute of Technology \\ \IEEEauthorrefmark{3}Department of Electrical Engineering, École Polytechnique Fédérale de Lausanne \\ } } \maketitle \footnotetext[1]{This work has been accepted to the 2017 IEEE ICMLA.} \begin{abstract} We consider decentralized stochastic multi-armed bandit problem with multiple players in the case of different communication probabilities between players. Each player makes a decision of pulling an arm without cooperation while aiming to maximize his or her reward but informs his or her neighbors in the end of every turn about the arm he or she pulled and the reward he or she got. Neighbors of players are determined according to an Erd{\H{o}}s-R{\'e}nyi graph with connectivity $\alpha$ which is reproduced in the beginning of every turn. We consider i.i.d. rewards generated by a Bernoulli distribution and assume that players are unaware about the arms' probability distributions and their mean values. In case of a collision, we assume that only one of the players who is randomly chosen gets the reward where the others get zero reward. We study the effects of $\alpha$, the degree of communication between players, on the cumulative regret using well-known algorithms UCB1, $\epsilon$-Greedy and Thompson Sampling. \end{abstract} \begin{IEEEkeywords} Multi-armed bandit, online learning, game theory, reinforcement learning. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} In Multi-armed Bandit (MAB) problem, players are asked to choose an arm which returns a reward according to a probability distribution. In MAB, we face an exploration-exploitation trade-off. Exploration can be interpreted as a search for the best arm while exploitation can be thought as maximizing reward or minimizing regret by pulling the best arm. Therefore, we must search enough to be nearly sure that we find the best arm without sacrificing much from the reward. There are different kinds of MAB problems that can be studied: \begin{itemize} \item \textbf{Stochastic MAB}: Each arm $i$ has a probability distribution $p_i$ on [0,1], and rewards of arm $i$ are drawn i.i.d. from $p_i$ where distribution $p_i$ does not change according to the decisions of a player. In \cite{auer2002finite}, stochastic MAB setting can be seen. \item \textbf{Adversarial MAB}: No statistical assumptions are made on the rewards. In \cite{auer1995gambling}, authors give a solution to the adversarial MAB. \item \textbf{Markovian MAB}: Each arm $i$ changes its state as in a markov chain when it is pulled and rewards are given depending on the state. In \cite{tekin2010online}, the classical MAB problem with Markovian rewards is evaluated. \end{itemize} MAB problem is introduced by Robbins \cite{robbins1952some} and investigated under many different conditions. Auer et al. \cite{auer2002finite} show some of the basic algorithms in a single player model where the considered performance metric is the regret of the decisions. Koc{\'a}k et al. 
\cite{kocak2016online} consider adversarial MAB problems where player is allowed to observe losses of a number of arms beside the arm that he or she actually chose and each non-chosen arm reveals its loss with an unknown probability. Kalathil et al. \cite{kalathil2014decentralized} consider decentralized MAB problem with multiple players where no communication is assumed between players. Also, arms give different rewards to different players and in case of a collision, no one gets the reward. Liu and Zhao \cite{liu2010distributed} compare multiple players without communication and multiple players acting as a single entity scenarios where reward is assumed to be shared in an arbitrary way in case of a collision. MAB can be used in different type of applications including cognitive radio networks and radio spectrum management as seen in \cite{lai2008medium}, \cite{gai2011combinatorial} and \cite{anandkumar2010opportunistic}. In this paper, we study a decentralized MAB, and consider the scenario as $N<S$ where $N$ denotes the number of players and $S$ denotes the number of arms. Players exchange information in the end of every turn according to Erd{\H{o}}s-R{\'e}nyi communication graph which is randomly reproduced every turn with $\alpha$ connectivity, $ 0\leq \alpha \leq 1$. Also, we consider collisions in our scenario, where only one randomly chosen player gets the reward where other players get zero reward. Our goal is to minimize the cumulative regret in the model where all players use the same algorithm while making their decisions. To this end, we use three different well-known MAB algorithms, Thompson Sampling \cite{agrawal2012analysis}, $\epsilon$-Greedy \cite{auer2002finite} and UCB1 \cite{auer2002finite}. In the considered scenario, everybody is alone in the sense that all players make decisions themselves, and everybody works together in the sense that there can be a communication between players in the end of every turn. The paper is organized as follows. We formulate the problem in Section II. We explain our reasoning and propose optimal policies in case of $\alpha=0$ and $\alpha=1$ in Section III. Then, we discuss the simulation results where we have $Cumulative$ $Regret$ $vs$ $\alpha$ graph and $Cumulative$ $Regret$ $vs$ $Number$ $of$ $Turns$ graphs and we have these results for two different mean distributions of arms in Section IV. Finally, we conclude our findings in Section V. \section{Problem Formulation} We consider a decentralized MAB problem with $N$ players and $S$ arms. In our model, players are allowed to communicate according to an Erd\H{o}s-Renyi random graph with connectivity $\alpha$, so each player $p$ informs its neighbours $N(p)$ about the arm it pulled and the reward it earned in the end of each turn. In other words, let us think a system graph $\mathcal{G}=(\mathcal{P}, \mathcal{E})$. Players are shown as vertices, $p_{k}\in \mathcal{P}$ where $k=1...N$ and $\{p_{a},p_{b}\}\in \mathcal{E}$ if there is a connection between players $p_{a}$ and $p_{b}$, which is true with probability $\alpha$. One turn is defined as a time interval in which every player pulls an arm according to their game strategy. Note that the random communication graph changes every turn but $\alpha$ is constant. In addition to the aforementioned setup, each arm yields a reward with a random variable $X_{i,t}$ associated to it, where $i$ is the index of an arm and $t$ is the turn number. 
Successive pulls of an arm are independent and identically distributed according to a Bernoulli distribution with expected value $\mu_i$; these means are unknown to the players. Because of the nature of the problem, ``collision'' should also be considered. When an arm with index $i$ is chosen by multiple players, only one of the players, chosen randomly, receives the reward $X_{i,t}$, whereas the rest of the players receive zero reward. Players are not aware of the collision model. We can define the expected cumulative regret in a single player model as: \begin{equation} R_{p,T}=T\max_{i \in 1...S}\mu_i - \sum_{k=1}^{T} \mu_{Y_{p,k}} \end{equation} where $Y_{p,k}$ is the index of the arm chosen by player $p$ in the $k$th turn. However, in our model with multiple players, we do not want all players to go for the best arm, due to the collision model. That is to say, in our setting players affect each other's rewards. Therefore, we cannot define the regret per player and sum these regrets independently; instead, we directly define the cumulative regret in the game based on the total expected reward. The cumulative regret in the game can be defined as: \begin{equation} R_{T} = T\max_{\forall a_p \in \{1...S\}, i\neq j \Rightarrow a_i\neq a_j} (\sum_{p=1}^{N} \mu_{a_p}) - \sum_{p=1}^{N}\sum_{k=1}^{T} \mu_{Y_{p,k}} \end{equation} where $a_i$ is the index of the arm hypothetically chosen by the $i$th player. Since the first term on the right-hand side is a constant, it can be seen that the strategy which minimizes the cumulative regret is the one which maximizes $\sum_{p=1}^{N}\sum_{k=1}^{T} \mu_{Y_{p,k}}$. Minimizing the cumulative regret therefore amounts to the same thing as maximizing the total cumulative reward in the game. Because of the collision model, the total cumulative reward does not depend on the individual pulls. Instead, it can be calculated based on whether an arm is chosen at all in a certain turn. Therefore, the total cumulative reward can be defined as: \begin{equation} \label{eq:gain} G = \sum_{k=1}^{T}\sum_{i=1}^{S}I_{i,k} X_{i,k} \end{equation} where $I_{i,k}$ indicates whether the arm with index $i$ is chosen by at least one player in the $k$th turn. Let us define $\mathbbm{1}\{\cdot\}$ to be the indicator function. Then $I_{i,k}$ can be calculated as: \begin{equation} \label{eq:indic} I_{i,k} = \mathbbm{1}\Bigl\{\Bigl[\sum_{p=1}^{N}\mathbbm{1}\{Y_{p,k} = i\}\Bigr] \neq 0\Bigr\} \end{equation} where again $Y_{p,k}$ is the index of the arm chosen by player $p$ in the $k$th turn. \section{System Model} An important evaluation of strategies is the expected total cumulative reward under the constraints of the problem. Since the players cannot collaboratively plan their strategies, it has to be assumed that each player tries to maximize its own reward. The strategy which maximizes the total cumulative reward is the one which assigns the $N$ players to the $N$ distinct arms with the highest expected rewards. Let us define $q_k$ as the $k$th best arm. Then, the expected maximum total cumulative reward after $T$ turns for $\alpha = 0$ can be defined as: \begin{equation} R_{max_{T,\alpha = 0}} = T\sum_{k=1}^N\mu_{q_{k}} \end{equation} This, combined with the connectivity parameter $\alpha$, introduces an interesting trade-off phenomenon. In order to elaborate on this, consider the case where $\alpha = 0$. When there is no communication between the players, each player can converge to a different arm believing its choice is the best one, which is mainly caused by the collision model.
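For concreteness, the following minimal sketch (in Python with NumPy; the function and variable names are ours and purely illustrative, not part of the original formulation) simulates a single turn of the game described above: the Erd\H{o}s--R\'enyi communication graph with connectivity $\alpha$, the Bernoulli rewards, the collision rule in which one randomly chosen player among the colliding players receives the reward, and the end-of-turn information sharing between neighbors.
\begin{verbatim}
import numpy as np

def play_one_turn(choices, mu, alpha, rng):
    """One turn: rewards under the collision rule, then communication.

    choices : length-N array with the arm index pulled by each player
    mu      : length-S array with the Bernoulli means of the arms
    Returns the per-player rewards and, for each player, the list of
    (player, arm, reward) observations available after communication.
    """
    N = len(choices)
    arm_rewards = rng.binomial(1, mu)          # X_{i,t} for every arm i
    rewards = np.zeros(N)
    for arm in np.unique(choices):
        pullers = np.flatnonzero(choices == arm)
        winner = rng.choice(pullers)           # collision: one random winner
        rewards[winner] = arm_rewards[arm]     # the others keep zero reward

    # Erdos-Renyi graph: each pair is connected independently with prob. alpha
    adjacency = rng.random((N, N)) < alpha
    adjacency = np.triu(adjacency, 1)
    adjacency = adjacency | adjacency.T

    observations = []
    for p in range(N):
        obs = [(p, choices[p], rewards[p])]    # a player always sees its own pull
        for q in np.flatnonzero(adjacency[p]):
            obs.append((q, choices[q], rewards[q]))
        observations.append(obs)
    return rewards, observations

rng = np.random.default_rng(1)
mu = np.array([0.9, 0.8, 0.7, 0.5, 0.3])       # illustrative arm means (S = 5)
choices = np.array([0, 0, 2])                  # N = 3 players; players 0 and 1 collide
print(play_one_turn(choices, mu, 0.5, rng))
\end{verbatim}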
Converging here means choosing the same arm after a finite number of pulls. Now consider the case $\alpha = 1$, where every player knows everything about the other players' pulls. Inevitably, this results in the same probabilistic distributions for every arm for every player. In other words, players cannot converge to different arms. They can either converge to the same arm or not converge at all. Since our reward depends on $I_{i,k}$ from Equation (\ref{eq:indic}), not converging has a higher total cumulative reward than every player converging to the best arm, which would only yield the reward of that arm. Therefore, the expected maximum total reward when $\alpha = 1$ is attained when every player randomly chooses an arm with a probability which depends on the expected means of the arms, assuming $S > N$. In Equation (\ref{eq:gain}), we introduced the total cumulative reward $G$, which we try to maximize. Let us define a different metric called $L$ which stands for the total cumulative loss: \begin{equation} \begin{split} L & = \sum_{k=1}^{T}\sum_{i=1}^{S}\mathbbm{1}\Bigl\{\Bigl[\sum_{p=1}^{N}\mathbbm{1}\{Y_{p,k} = i\}\Bigr] = 0\Bigr\} X_{i,k} \\ & = \sum_{k=1}^{T}\sum_{i=1}^{S}\Bigl[1-\mathbbm{1}\Bigl\{\Bigl[\sum_{p=1}^{N}\mathbbm{1}\{Y_{p,k} = i\}\Bigr] \neq 0\Bigr\}\Bigr] X_{i,k} \\ & = \sum_{k=1}^{T}\sum_{i=1}^{S}X_{i,k} - G \end{split} \end{equation} The first term of the right-hand side is a constant. Therefore, maximizing $G$ will minimize $L$. In turn, $\mathbf{E}[L]$ can be minimized if the expected loss of a turn is minimized: \begin{equation} \mathbf{E}[L_T] = \sum_{i=1}^{S}\mathbbm{1}\Bigl\{\Bigl[\sum_{p=1}^{N}\mathbbm{1}\{Y_{p} = i\}\Bigr] = 0\Bigr\}\mu_i \end{equation} where $Y_{p}$ is the index of the arm chosen by player $p$. Assuming $S > N$, with $S$ the number of arms and $N$ the number of players, let us define $c_i$ as the probability of a player choosing the arm with index $i$. Note that $c_i$ is the same for every player since $\alpha$ equals 1. Then it can be seen that $\sum_{i=1}^{S}c_i = 1$. Therefore the expected loss of a turn can be defined as: \begin{equation} \begin{split} \mathbf{E}[L_T] & = \sum_{i=1}^{S}\mathbbm{1}\Bigl\{\Bigl[\sum_{p=1}^{N}\mathbbm{1}\{Y_{p} = i\}\Bigr] = 0\Bigr\}\mu_i \\ & = \sum_{i=1}^{S}(1-c_i)^N\mu_i = \sum_{i=1}^{S}m_i^N\mu_i \end{split} \end{equation} where $m_i$ is $1-c_i$. Note that $0 \leq m_i \leq 1$. In order to minimize the expected loss of a turn, the Lagrangian which we try to maximize can be defined as: \begin{equation} \mathcal{L}(m_i,\lambda_i) = -\sum_{i=1}^{S}\Bigl[m_i^N\mu_i + \lambda_{2i-1}m_i + \lambda_{2i}(1-m_i)\Bigr] \end{equation} Since, \begin{equation} \label{eq:eseksi} \begin{split} &\sum_{i=1}^{S}c_i = 1 \\ &\sum_{i=1}^{S}m_i = \sum_{i=1}^{S}(1-c_i) = S - 1\\ \Rightarrow&\frac{\partial m_x}{\partial m_{i \neq x}} = -1 \end{split} \end{equation} where $1 \leq x \leq S$. Then, in order to maximize the Lagrangian, \begin{equation} \begin{split} \frac{\partial \mathcal{L}}{\partial m_x} &= -N(m_x^{N-1}\mu_x-\sum_{i = 1, i \neq x}^S m_i^{N-1}\mu_i) \\ & + \lambda_{2x-1}-\lambda_{2x} + \sum_{i=1,i \neq x}^S\lambda_{2i} - \sum_{i=1,i \neq x}^S\lambda_{2i-1} = 0 \end{split} \end{equation} From the Karush-Kuhn-Tucker (KKT) conditions, $\lambda_{2i-1}m_i = 0$ and $\lambda_{2i}(1-m_i) = 0$. The case $m_i = 1$ corresponds to the players never pulling the arm with index $i$. Similarly, $m_i = 0$ corresponds to the players pulling only the arm with index $i$. Both of these cases can be ignored if there is a valid solution without them.
Otherwise, the $m_i = 1$ case will be revisited starting from the arm with the lowest expected mean $\mu_i$. For the derivation of the solution, let us assume $\lambda_{2i-1} = \lambda_{2i} = 0$. Then, \begin{equation} \begin{split} \frac{\partial \mathcal{L}}{\partial m_{x_1}} &= -N(m_{x_1}^{N-1}\mu_{x_1}-\sum_{i = 1, i \neq x_1}^S m_i^{N-1}\mu_i) = 0\\ \frac{\partial \mathcal{L}}{\partial m_{x_2}} &= -N(m_{x_2}^{N-1}\mu_{x_2}-\sum_{i = 1, i \neq x_2}^S m_i^{N-1}\mu_i) = 0 \end{split} \end{equation} where $x_1 \neq x_2$, $1 \leq x_1 \leq S$ and $1 \leq x_2 \leq S$. Therefore, \begin{equation} \begin{split} m_{x_1}^{N-1}\mu_{x_1}-\sum_{i = 1, i \neq x_1}^S &m_i^{N-1}\mu_i = \\ &m_{x_2}^{N-1}\mu_{x_2}-\sum_{i = 1, i \neq x_2}^S m_i^{N-1}\mu_i \end{split} \end{equation} \begin{equation} \begin{split} m_{x_1}^{N-1}\mu_{x_1}-&m_{x_2}^{N-1}\mu_{x_2} -\sum_{i = 1, i \neq \{x_1,x_2\}}^S m_i^{N-1}\mu_i =\\ &m_{x_2}^{N-1}\mu_{x_2}-m_{x_1}^{N-1}\mu_{x_1} -\sum_{i = 1, i \neq \{x_1,x_2\}}^S m_i^{N-1}\mu_i \\ m_{x_1}^{N-1}\mu_{x_1} &= m_{x_2}^{N-1}\mu_{x_2} \end{split} \end{equation} Since this holds for every pair of indices, all of these quantities are equal to a common constant $A$: \begin{equation} \begin{split} &A = m_i^{N-1}\mu_i = (1-c_i)^{N-1}\mu_i, \forall i \in \{1...S\} \\ &c_i = 1 - \sqrt[N-1]{\frac{A}{\mu_i}}\qquad \\ &\sum_{i=1}^{S}c_i = S - \sum_{i=1}^{S}\sqrt[N-1]{\frac{A}{\mu_i}}\qquad = 1 \\ &A =\Bigg[\dfrac{S-1}{\sum_{i=1}^{S}\sqrt[N-1]{\frac{1}{\mu_i}}\qquad}\Bigg]^{N-1} \\ &c_i =1 - \dfrac{\Bigg[\dfrac{S-1}{\sum_{k=1}^{S}\sqrt[N-1]{\frac{1}{\mu_k}}\qquad}\Bigg]}{\sqrt[N-1]{\mu_i}\qquad} \end{split} \end{equation} This results in the optimal $c_i$ for the case $\alpha = 1$, assuming $0 \leq c_i \leq 1$, $\forall i \in \{1,2,...,S\}$. If the assumed constraint is not satisfied, it means that $\lambda_{2i} \neq 0$ or $\lambda_{2i-1} \neq 0$. For $\lambda_{2i} \neq 0$, since $\lambda_{2i}(1-m_i) = 0$, it means $m_i = 1$ and $c_i = 0$. This conclusion intuitively makes sense; if the expected mean of an arm is small enough to force the $c_i$ to become negative, the optimal strategy would be to not pull the arm at all. For $\lambda_{2i-1} \neq 0$, since $\lambda_{2i-1}m_i = 0$, we get $m_i = 0$ and $c_i = 1$. This means that every player chooses the $i$th arm, which is never optimal unless the rest of the arms have zero reward. Using these derivations, we introduce an algorithm, called the asymptotically optimal algorithm, which gives an asymptotically optimal strategy for the $\alpha = 1$ case. The algorithm leverages a simulated annealing approach where it either randomly pulls an arm to explore or calculates the optimal $c_i$ values to exploit. The $c_i$ values are then used to sample the arm pull. Since players are not aware of the collision model, their observed mean estimates for the arms are calculated from their own rewards combined with the rewards reported by their neighbors.
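The closed-form solution above, together with the removal of arms whose $c_i$ would be non-positive, can be summarized by the following sketch (hypothetical Python, with our own function names). One modelling choice is ours: when arms are discarded we re-impose the constraint that the remaining probabilities sum to one, so the numerator $S-1$ is replaced by the number of remaining arms minus one.
\begin{verbatim}
def optimal_pull_probabilities(means, num_players):
    """Pull probabilities c_i for the alpha = 1 analysis above.

    means: positive observed mean rewards mu'_i; num_players: N >= 2.
    Arms whose c_i comes out non-positive are dropped (their c_i is set to 0)
    and the formula is recomputed on the remaining arms.
    """
    N = num_players
    root = lambda x: x ** (1.0 / (N - 1))            # (N-1)-th root
    active = set(range(len(means)))
    while True:
        denom = sum(root(1.0 / means[i]) for i in active)
        c = {i: 1.0 - (len(active) - 1) / denom * root(1.0 / means[i])
             for i in active}
        rejected = {i for i in active if c[i] <= 0}
        if not rejected:
            return [c.get(i, 0.0) for i in range(len(means))]
        active -= rejected                            # revisit with arms removed
\end{verbatim}
For instance, with $\mu=\mu_1$ and $N=5$ (the setting of the simulations in the next section), the procedure drops the weakest arms and the returned probabilities sum to one by construction.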
\begin{table} {\LinesNumberedHidden \begin{algorithm}[H] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetAlgorithmName{Algorithm}{} $S$ is the arm count.\\ $N$ is the player count.\\ $\mu'_i$ is the observed mean reward of arm with index $i$ for the current player.\\ For $0 < k < 1$.\\ \For {t = 1,2,...}{ $random$ = Random a value between 0 and 1.\\ \If{$1-\epsilon > random$}{ $\mathcal{H} = \{1,2,...S\}$.\\ $c_i = 0, \quad \forall i \in \mathcal{H}$.\\ \While{$\exists i\in\mathcal{H}$ with $(c_i \leq 0) $} { \For {i = 1,2,...,S}{ \If{$i \in \mathcal{H}$} { $c_i =1 - \dfrac{\Bigg[\dfrac{S-1}{\sum_{k \in \mathcal{H}}\sqrt[N-1]{\frac{1}{\mu_k'}}\qquad}\Bigg]}{\sqrt[N-1]{\mu_i'}\qquad}$.\\ \If {$c_i \leq 0 $} { Discard $i$ from $\mathcal{H}$. } } } \For {i = 1,2,...,S}{ \If{$i \in \mathcal{H}$} { $c_i =1 - \dfrac{\Bigg[\dfrac{S-1}{\sum_{k \in \mathcal{H}}\sqrt[N-1]{\frac{1}{\mu_k'}}\qquad}\Bigg]}{\sqrt[N-1]{\mu_i'}\qquad}$.\\ } } } $random$ = Random a value between 0 and 1.\\ $sum\_of\_chances = 0$.\\ \For{$i \in \mathcal{H}$}{ $sum\_of\_chances += c_i$.\\ \If{$sum\_of\_chances \geq random$}{ Pull $i$th arm. } } } \Else{ Randomly pull an arm.\\ } $\epsilon = \epsilon * k.$ } \caption{Asymptotically Optimal Algorithm for $\alpha=1$ case} \end{algorithm}} \caption{Asymptotically Optimal Algorithm for $\alpha=1$ case} \label{algo1} \end{table} \section{Simulation Results} We do six different simulations to see the effect of communication in MAB problem. In the setup of all simulations, we set $S=10$ and $N=5$. On the other hand, $\mu$ vector has two different value sets, where $\mu_1=[0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.01]$ and $\mu_2=[0.7, 0.68, 0.66, 0.64, 0.62, 0.4, 0.38, 0.36, 0.34, 0.32]$. We evaluate the effect of connectivity $\alpha$ for three different algorithms and also propose asymptotic limits for total cumulative reward for $\alpha=0$ and $\alpha=1$ cases, which mean no communication and full communication, respectively. In general, cumulative regret increases with increasing $\alpha$. We get the best results for $\alpha=0$, which means there is no communication between players. This is exactly as we expected due to the collision model we use and can be explained by players' disinclination to pull the same arm due to their different estimations on the means of the arms. Therefore, each player tends to pull a different arm which maximizes the reward. On the other hand, in $\alpha=1$ case, all players have the same mean updates for the arms and they behave similarly. So, when there is an arm with high mean $\mu_i$, all of the players are more inclined to pull this arm, which eventually decreases $\mu_i$ due to collisions. In the end, this forms a balance which makes the probability of pulling each arm similar. This causes a higher probability of collision compared to $\alpha=0$ case and decreases the cumulative reward in the system. We test three well-known algorithms of MAB problem which are modified for communications between players. The aim is to understand how robust are these algorithms against communication between players. $\epsilon$-Greedy and UCB1 can be considered as nearly deterministic algorithms which makes them inevitably fail against communication. Interestingly, they could still provide decent total cumulative rewards until $\alpha = 0.9$. This is mostly caused by their "greedy" nature; even though the observed means for arms are close to each other, players using these algorithm choose the best option. 
This greediness pays off since players can experience different means even with a high amount of connectivity, which results in convergence to different arms. On the other hand, Thompson Sampling is a probabilistic approach. Thus, when players have similar mean estimates they choose an arm with similar probabilities, which results in a lower total cumulative reward for high $\alpha$. However, because of the probabilistic nature of the algorithm, it never catastrophically fails. \begin{table}[H] {\LinesNumberedHidden \begin{algorithm}[H] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetAlgorithmName{Algorithm}{} $\mu'_i$ is the observed mean reward of arm with index $i$ for the current player.\\ Pull each arm once.\\ \For {t = 1,2,...}{ Pull the arm $i(t) = argmax_{i} \mu'_i + \sqrt{\frac{2\ln(n)}{n_{i}}}$ where $n_{i}$ is the number of pulls of arm with index $i$ observed by the current player so far and $n$ is the number of arm pulls observed by the current player so far. } \caption{UCB1 Algorithm \cite{auer2002finite}} \end{algorithm}} \caption{UCB1 Algorithm} \label{algo:UCB1 Algorithm} \end{table} \begin{table}[H] {\LinesNumberedHidden \begin{algorithm}[H] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetAlgorithmName{Algorithm}{} $\epsilon=1$ and $0<k<1$.\\ Pull each arm once.\\ \For {t = 1,2,...}{ With probability $1-\epsilon$, pull the arm with index $i$ which has the highest mean reward observed by the current player, else pull a random arm.\\ $\epsilon = \epsilon * k$. } \caption{$\epsilon$-Greedy Algorithm \cite{auer2002finite}} \end{algorithm}} \caption{$\epsilon$-Greedy Algorithm} \label{algo:Greedy Algorithm} \end{table} \begin{table}[H] {\LinesNumberedHidden \begin{algorithm}[H] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetAlgorithmName{Algorithm}{} For each arm $i = 1,...,S$ set $S_{i}(1) = 0, F_{i}(1) = 0$.\\ Pull each arm once.\\ \For {t = 1,2,...}{ For each arm $i=1,...,S$, sample $\theta_{i}(t)$ from the $Beta(S_{i} +1,F_{i} +1)$ distribution.\\ Pull the arm $i(t) = argmax_{i} \theta_{i}(t)$ and observe reward $r$.\\ If $r= 1$, then $S_{i}(t) = S_{i}(t) + 1$, else $F_{i}(t) = F_{i}(t) + 1$. } \caption{Thompson Sampling Algorithm \cite{agrawal2012analysis}} \end{algorithm}} \caption{Thompson Sampling Algorithm} \label{algo:Thompson Sampling Algorithm} \end{table} As seen in Fig. 3 and Fig. 6, in the full communication scenario, the $\epsilon$-Greedy and UCB1 algorithms clearly fail while Thompson Sampling performs nearly as well as the asymptotically optimal method. As seen in Fig. 2 and Fig. 5, in the no communication setting, Thompson Sampling and $\epsilon$-Greedy with a well-tuned $\epsilon$ perform nearly optimally. On the other hand, Fig. 1 and Fig. 4 show that Thompson Sampling underperforms for other values of $\alpha$. UCB1 and $\epsilon$-Greedy clearly have a lower cumulative regret for $0.5\leq \alpha \leq0.9$.
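For concreteness, the following sketch (hypothetical Python; the class and its methods are our own naming) shows how a single player in our setting would run the Thompson Sampling procedure above, treating observations received from neighbours exactly like its own pulls.
\begin{verbatim}
import random

class ThompsonPlayer:
    """Beta-Bernoulli Thompson Sampling for one player, fed with shared data."""

    def __init__(self, num_arms):
        self.successes = [0] * num_arms
        self.failures = [0] * num_arms

    def choose_arm(self):
        # Sample theta_i from Beta(S_i + 1, F_i + 1) and pull the argmax.
        samples = [random.betavariate(s + 1, f + 1)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, observations):
        """observations: (arm, reward) pairs -- the player's own pull plus
        whatever its neighbours reported at the end of the turn."""
        for arm, reward in observations:
            if reward == 1:
                self.successes[arm] += 1
            else:
                self.failures[arm] += 1
\end{verbatim}
As $\alpha$ grows, the players' success and failure counts become nearly identical, which is precisely the mechanism behind the behaviour observed in the figures.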
\begin{figure} \caption{Change of Cumulative Regret with respect to $\alpha$ where $S=10$, $N=5$ and $\mu=\mu_1$} \end{figure} \begin{figure} \caption{Change of Cumulative Regret with respect to Number of Turns where $S=10$, $N=5$, $\alpha=0$ and $\mu=\mu_1$} \end{figure} \begin{figure} \caption{Change of Cumulative Regret with respect to Number of Turns where $S=10$, $N=5$, $\alpha=1$ and $\mu=\mu_1$} \end{figure} \begin{figure} \caption{Change of Cumulative Regret with respect to $\alpha$ where $S=10$, $N=5$ and $\mu=\mu_2$} \end{figure} \begin{figure} \caption{Change of Cumulative Regret with respect to Number of Turns where $S=10$, $N=5$, $\alpha=0$ and $\mu=\mu_2$} \end{figure} \begin{figure} \caption{Change of Cumulative Regret with respect to Number of Turns where $S=10$, $N=5$, $\alpha=1$ and $\mu=\mu_2$} \end{figure} \section{Conclusion and Future Work} In this paper, we evaluate a decentralized MAB problem with multiple players under different communication densities between players and with a penalty for collisions. The limiting factor in the performance is the collision model. Without the collision penalty, the problem can be seen as a single-player MAB problem in which pulling multiple arms at the same time is allowed, and the only difference from the classic problem is faster convergence to the best arm. We observe that Thompson Sampling usually has the highest performance in terms of minimizing regret among the three algorithms, although an optimally tuned $\epsilon$-Greedy algorithm can perform best depending on the mean vector $\mu$ of the arms. Also, we conclude that sublinear regret is easily achievable without communication between players, whereas we get linear regret in the case of full communication. The MAB problem has applications in economics, network communications, bandwidth sharing and game theory, where individuals try to maximize their personal utility with limited resources. We perceive this work as a bridge between a classical reinforcement learning problem and game theory, in which we analyze different algorithms and test their robustness to communication. We also provide asymptotically optimal strategies for the extreme cases of no communication and full communication. For future work, optimal strategies for the case $0<\alpha<1$ will be analyzed, as it is still unclear how to propose an optimal strategy for an arbitrary $\alpha$ value. Apart from this, we will evaluate the effect of communication in adversarial and Markovian bandits. \ifCLASSOPTIONcaptionsoff \fi \balance \end{document}
\begin{document} \begin{abstract} We study the dynamical Mahler measure of multivariate polynomials and present dynamical analogues of various results from the classical Mahler measure as well as examples of formulas allowing the computation of the dynamical Mahler measure in certain cases. We discuss multivariate analogues of dynamical Kronecker's Lemma and present some improvements on the result for two variables due to Carter, Lal\'in, Manes, Miller, and Mocz. \end{abstract} \title{Dynamical Mahler Measure: A survey and some recent results} \section{Introduction} The inspiration for our investigation comes from the following result which relates the canonical height $\hat{h}_f$ (see Definition~\ref{height-definition}) of a point $\alpha \in \mathbb{P}^1(\overline{\mathbb{Q}})$ relative to some polynomial $f\in \mathbb{Q}[z]$ with an integral of the minimal polynomial of $\alpha$ relative to an invariant measure defined by $f$. \begin{thm}[\cite{PST}] Let $f\in\mathbb{Q}[z]$ be a polynomial, and let $\mathcal J_{f}$ denote the Julia set of $f$. Let $K$ be a number field with $\alpha \in \mathbb{P}^1(K)$, and let $P \in \mathbb{Z}[x]$ be the minimal polynomial for $\alpha$. Then: \begin{equation}\label{eq:dynMah1D} [\mathbb{Q}(\alpha):\mathbb{Q}]\hat h_f(\alpha) = \int_{\mathcal J_{f}}\log |P(z)| d\mu_{f}(z). \end{equation} \end{thm} Compare this with a standard formula relating the Mahler measure of the minimal polynomial $P$ and the height of a root of that polynomial: \begin{equation}\label{eq:mahlerheight} [\mathbb{Q}(\alpha) : \mathbb{Q}] h(\alpha) = \underbrace{\frac{1}{2\pi i}\int_{\mathbb{T}^1} \log \left| P\left(z \right)\right| \frac{dz}{z}}_{\mathrm{m}(P)}. \end{equation} The tantalizing similarities in these formulas lead naturally to questions about extending classical results of Mahler measure to this new ``dynamical Mahler measure'' relative to a fixed polynomial~$f$. In~\cite{TwoVarPoly}, the authors define a multivariate dynamical Mahler measure and prove several preliminary results with this flavor. This survey article presents background, motivation, examples, and strengthening of those results, both illustrating and expanding on the work begun in~\cite{TwoVarPoly}. In Section~\ref{sec:basics}, we provide background on Mahler measure, arithmetic dynamics, and equilibrium measures. In Section~\ref{sec:DMMdef}, we define the multivariate dynamical Mahler measure and give examples where it is possible to compute it exactly. Section~\ref{sec:PrevPaper} provides a summary of results from~\cite{TwoVarPoly}, drawing explicit connections between classical Mahler measure and the dynamical setting. In Section~\ref{sec:DynMahlerConverge}, we prove the existence of dynamical Mahler measure as defined in the previous section; these proofs also appear in~\cite{TwoVarPoly} but are reiterated here (with a bit more detail) to provide a self-contained reference to the subject. Section~\ref{sec:DynKronLem} contains a survey of recent results on properties that are either known or conjectured to be equivalent to a multivariate polynomial having dynamical Mahler measure zero. Section~\ref{sec:e1e2} contains the proof of a new implication of this sort, and Section~\ref{sec:ad} contains strengthening of one of the results from~\cite{TwoVarPoly}. In particular, in the proof of the two-variable Dynamical Kronecker's Lemma, we replace the (rather strong) assumption of Dynamical Lehmer's Conjecture with an assumption about the preperiodic points for the polynomial $f$. 
Finally, Section~\ref{sec:J=K} investigates that condition on polynomials $f$. \section{Basic Notions} \label{sec:basics} In this section, we provide preliminary material on both Mahler measure and arithmetic dynamics. We refer the interested reader to~\cite{Bertin-Lalin} for a more comprehensive article describing the history and applications of Mahler measure in arithmetic geometry and to~\cite{CurrentTrends} for background and motivation from the arithmetic dynamics perspective. \subsection{Mahler Measure}\label{sec:ClassicalMahler} The (logarithmic) Mahler measure of a non-zero polynomial $P\in \mathbb{C}[x]$, originally defined by Lehmer~\cite{Le}, is a height function given by \begin{equation}\label{eq:defMmeas} \mathrm{m}(P) = \mathrm{m}\left(a \prod_j(x-\alpha_j)\right)=\log |a| + \sum_j \log \max \{1,|\alpha_j|\}. \end{equation} If $P\in \mathbb{Z}[x]$, the formula above makes it clear that $\mathrm{m}(P) \geq 0$. In such a case, it is natural to ask which polynomials $P\in \mathbb{Z}[x]$ satisfy $\mathrm{m}(P)=0$. A result of Kronecker~\cite{Kronecker} gives the answer. \begin{lem}[Kronecker's Lemma] Let $P\in \mathbb{Z}[x]$. Then $\mathrm{m}(P)=0$ if and only if $P$ is monic and can be decomposed as a product of a monomial and cyclotomic polynomials. \end{lem} Lehmer~\cite{Le} computed \[ \mathrm{m}(x^{10}+x^9-x^7-x^6-x^5-x^4-x^3+x+1) = \log(1.176280818\dots) = 0.162357612\dots \] and asked the following: \begin{quest}[Lehmer's question, 1933] Is there a constant $C > 0$ such that every polynomial $P \in \mathbb{Z}[x]$ with $\mathrm{m}(P) >0$ satisfies $\mathrm{m}(P)\geq C$? \end{quest} Lehmer's question remains open, and his degree-10 polynomial remains the integer polynomial with the smallest known positive measure. Jensen's formula~\cite{Jensen} relates an average of a linear polynomial over the unit circle with the size of its root: \begin{equation}\label{eq:Jensen} \frac{1}{2\pi i}\int_{\mathbb{T}^1} \log \left|x - \alpha \right| \frac{dx}{x} = \log \max\{ 1, |\alpha| \}. \end{equation} Applying Jensen's formula to the definition of Mahler measure in~\eqref{eq:defMmeas}, we find a formula that can be extended naturally to multivariate polynomials and rational functions. Following Mahler~\cite{Mah}, we have: \begin{defn}\label{defn:MM}The (logarithmic) Mahler measure of a non-zero rational function $P \in \mathbb{C}(x_1,\dots,x_n)$ is defined by \begin{equation*} \mathrm{m}(P):=\frac{1}{(2\pi i)^n}\int_{\mathbb{T}^n}\log|P(x_1,\dots, x_n)|\frac{dx_1}{x_1}\cdots \frac{dx_n}{x_n}, \end{equation*} where $\mathbb{T}^n=\{(x_1,\dots,x_n)\in \mathbb{C}^n : |x_1|=\cdots=|x_n|=1\}$. \end{defn} The above integral converges, and for $P\in \mathbb{Z}[x_1,\dots,x_n]$, we still have $\mathrm{m}(P)\geq 0$ (see Proposition~\ref{prop:convergence}). It is natural, then, to consider whether Kronecker's Lemma has an extension to multivariate polynomials. Recall that a polynomial in $\mathbb{Z}[x_1,\dots,x_n]$ is said to be \textbf{primitive} if the coefficients have no non-trivial common factor. We have the following result. \begin{thm}\cite[Theorem 3.10]{Everestward} \label{thm:HigherdKronecker} For any primitive polynomial $P\in \mathbb{Z}[x_1,\dots,x_n]$, we have $\mathrm{m}(P) =0$ if and only if $P$ is the product of a monomial and cyclotomic polynomials evaluated on monomials. \end{thm} A connection between the single-variable case and the multivariate case is given by a result due to Boyd~\cite{Boyd-speculations} and Lawton~\cite{Lawton}.
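Before stating their result, we remark that formula~\eqref{eq:defMmeas} makes the one-variable Mahler measure easy to approximate numerically from the roots of $P$. The following minimal sketch (in Python, with our own naming; shown only as an illustration) reproduces the value of Lehmer's polynomial above.
\begin{verbatim}
import numpy as np

def mahler_measure(coeffs):
    """Logarithmic Mahler measure of a one-variable polynomial.

    coeffs lists the coefficients from the leading one down to the constant
    term; m(P) = log|a| + sum_j log max(1, |alpha_j|), as in the definition above.
    """
    roots = np.roots(coeffs)
    return float(np.log(abs(coeffs[0]))
                 + np.sum(np.log(np.maximum(1.0, np.abs(roots)))))

# Lehmer's polynomial x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1
lehmer = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]
print(mahler_measure(lehmer))  # approximately 0.162357612...
\end{verbatim}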
\begin{thm}\cite[Theorem 2]{Lawton} If $P\in \mathbb{C}(x_1,\dots,x_n)^\times$, then \begin{equation}\label{eq:BL} \lim_{q({\bf k})\rightarrow \infty} \mathrm{m} (P(x,x^{k_2},\dots,x^{k_n}))=\mathrm{m}(P(x_1,\dots,x_n)), \end{equation} where \[q({\bf k})=\min \left\{ H({\bf s}) : {\bf s}=(s_2,\dots,s_n) \in \mathbb{Z}^{n-1}, {\bf s} \not = (0,\dots,0), \mbox{ and }\sum_{j=2}^n s_j k_j =0\right\}\] and $H({\bf s})=\max\{|s_j|: 2\leq j \leq n\}$. \end{thm} Intuitively, the second equation says that the limit is taken while $k_2,\dots,k_n$ go to infinity independently of each other. Mahler measure often yields special values of interesting number-theoretic functions, such as the Riemann zeta function and $L$-functions associated to arithmetic-geometric objects such as elliptic curves. For more on these connections, see~\cite{Bertin-Lalin,BrunaultZudilin}. \subsection{Arithmetic Dynamics}\label{sec:ArithDynIntro} A discrete dynamical system is a set $X$ together with a self-map $f: X \to X$, allowing for iteration. Here we focus on polynomials $f: \mathbb{C} \to \mathbb{C}$. For such an $f$ and for $L \in \operatorname{Aut}(\mathbb{C})$ (so $L = az + b \in \mathbb{C}[z]$ with $a \neq 0$), we write \begin{equation}\label{def:conj-notation} f^n = \underbrace{f\circ f \circ \cdots \circ f}_{\text{$n$-fold composition}}, \quad\text{ and }\quad f^L := L^{-1}\circ f\circ L. \end{equation} We will say that $f^L$ and $f$ are affine conjugate over $K$ when $L \in K[x]$. This conjugation is a natural dynamical equivalence relation because it respects iteration: $(f^L)^n = (f^n)^L$. A fundamental goal of dynamics is to study the behavior of points of $X$ under iteration. For example, a point $\alpha \in X$ is said to be: \begin{itemize} \item {\bf periodic} if $f^n(\alpha) = \alpha$ for some $n>0$, \item {\bf preperiodic} if $f^n(\alpha) = f^m(\alpha)$ for some $n>m\geq0$, and \item {\bf wandering} if it is not preperiodic. \end{itemize} We write \[ \operatorname{PrePer}(f) = \{\alpha \in X : \alpha \text{ is preperiodic under } f\}. \] As usual, we say that $\alpha$ is a \textbf{critical} point if $f'(\alpha) = 0$. Critical points play an important role in analyzing the dynamics of the function $f$. Questions in arithmetic dynamics are often motivated by an analogy between arithmetic geometry and dynamical systems in which, for example, rational and integral points on varieties correspond to rational and integral points in orbits, and torsion points on abelian varieties correspond to preperiodic points. It should be no surprise, then, that heights are an essential tool in the study of arithmetic dynamics. We recall that the classical (logarithmic) height of a rational number $\alpha = \frac{a}{b} \in \mathbb{Q}$, written in lowest terms, is $h(\alpha) = \log\max\{|a|, |b|\}$. This can be extended naturally to a height on algebraic numbers. One way of making such an extension is to consider the na\"ive height. Let $\alpha$ be an algebraic number. We consider its minimal polynomial $P_\alpha(z)=\sum_{j=0}^n a_j z^j$ normalized such that it has integral coefficients and is primitive. Then \[h_{\text{na\"ive}}(\alpha):=\log \max_{j} |a_j|.\] Another possible extension of the classical height is given by the Weil height.
For $\alpha \in K$, with $K$ a number field, the (absolute logarithmic) Weil height is given by \begin{equation}\label{eq:WeilHeight} h_{\text{Weil}}(\alpha)=\frac{1}{[K:\mathbb{Q}]}\sum_{\substack{v\in M_K\\v\mid p}} [K_v:\mathbb{Q}_p] \log \max \{||\alpha||_v,1\}, \end{equation} where $M_K$ is an appropriately normalized set of inequivalent absolute values on $K$, so that the product formula is satisfied: \[\prod_{\substack{v\in M_K\\v\mid p}} ||x||_v^\frac{[K_v:\mathbb{Q}_p] }{ [K:\mathbb{Q}] }=1.\] More concretely, for $K=\mathbb{Q}$ we can take $|\cdot|_\infty$ to be the usual absolute value and $|\cdot|_p$ to be the $p$-adic absolute value, normalized so that $|p|_p=1/p$. Then for $v \in M_K$ lying over a prime $p$, \[||x||_v=|N_{K_v/\mathbb{Q}_p}(x)|_p^\frac{1}{[K_v:\mathbb{Q}_p]}.\] The factor of $[K:\mathbb{Q}]$ in~\eqref{eq:WeilHeight} ensures that $h_{\text{Weil}}(\alpha)$ is well-defined, with the same answer for any field $K$ containing $\alpha$. While the na\"ive height is very natural to consider, the Weil height is the canonical height for the power map $z \mapsto z^d$. The two heights are commensurate in the sense that for $\alpha \in \mathbb{P}^1(\overline{\mathbb{Q}})$, there is a constant $C(d)$ depending only on the degree $d$ of $\alpha$ such that \begin{equation}\label{eq:heightequiv}|h_{\text{na\"ive}}(\alpha)-h_{\text{Weil}}(\alpha)|\leq C(d).\end{equation} If $f(x) \in K(x)$ is a rational function of degree $d$, then $h\left(f(\alpha)\right)$ should be approximately $d h(\alpha)$. The dynamical canonical height makes this an equality. The definition is reminiscent of the N\'eron--Tate height on an abelian variety, and the proofs of the statements below follow exactly as in this more familiar case. \begin{defn} \label{height-definition} If $f \in \overline\mathbb{Q}(z)$ is a rational map of degree $d$ (i.e.\ the maximum of the degrees of the numerator and denominator is $d$), and $\alpha \in \mathbb{P}^1(\overline{\mathbb{Q}})$, we then define: \[ \hat h_f(\alpha) = \lim_{n \to \infty} \frac{h(f^n(\alpha))}{d^n}, \] where $h$ may be taken as either the na\"ive height $h_{\text{na\"ive}}$ or the Weil height $h_{\text{Weil}}$ by equation \eqref{eq:heightequiv} \end{defn} It is known that this limit exists, that $\hat h_f(f(\alpha)) = d \hat h_f(\alpha)$, and that $\hat h_f(\alpha) = 0$ if and only if $\alpha$ is a preperiodic point for $f$. See Section 3.4 of~\cite{Silverman-arithmetic-dynamical} for details. \begin{defn}\label{def:Julia} Let $f\in \mathbb{C}[z]$. The {\bf filled Julia set} of $f$ is \[\mathcal K_f=\{z\in \mathbb{C} \, :\, f^n(z)\not \rightarrow \infty \mbox{ as } n\rightarrow \infty\}.\] The {\bf Julia set} $\mathcal J_f$ of $f$ is the boundary of the filled Julia set. That is, $\mathcal J_f = \partial\mathcal K_f$. \end{defn} It follows from these definitions that for a polynomial $f\in \mathbb{C}[z]$, both $\mathcal K_f$ and $\mathcal J_f$ are compact. We denote by $F_\infty$ the complement of $\mathcal K_f$ in $\mathbb{P}^1(\mathbb{C})$, which is also the attracting basin of $\infty$ for $f$, that is, the set of points in $\mathbb{P}^1(\mathbb{C})$ whose orbits go off to $ \infty$. For example, for $f(z) = z^d$, we see that $f^n(z) = z^{d^n}$. So for $d\geq 2$, we have three cases: \begin{itemize} \item If $|\alpha| >1$ then $|\alpha^{d^n}| \to \infty $ with $n$. \item If $|\alpha| <1$ then $|\alpha^{d^n}| \to 0 $ with $n$. \item If $|\alpha| =1$ then $|\alpha^{d^n}| = 1 $ for all $n$. 
\end{itemize} So for pure power maps, we can understand the Julia sets completely: $\mathcal K_f$ is the unit disc, and $\mathcal J_f$ is the unit circle. In general, however, these sets are quite complex. (See Figure~\ref{fig:JuliaSets}.) \begin{figure} \caption{Filled Julia set for \\$f(z) = z^2$} \caption{Filled Julia set for \\$f(z) = z^2-1$} \caption{(Filled) Julia set for \\$f(z) = z^2+0.3$} \caption{The black area shows the filled Julia set $\mathcal K_f$, and its boundary is the Julia set $\mathcal J_f$. In the third case, the Julia set has empty interior, so $\mathcal K_f = \mathcal J_f$.} \label{fig:JuliaSets} \end{figure} It is clear from Definition~\ref{def:Julia} that $\operatorname{PrePer}(f) \subseteq \mathcal K_f$. When all of the critical points of a polynomial $f$ have unbounded orbits, the Julia set $\mathcal J_f$ is totally disconnected, while $\mathcal J_f$ is connected if and only if all the critical orbits are bounded~\cite{Fatou, Julia}. A polynomial of degree 2 has a unique critical point, and we have only these two cases (see Figure~\ref{fig:JuliaSets}). This situation is known as the Fatou--Julia dichotomy. However, for higher degree polynomials the situation is complicated by having more than one critical point, and there are cases of polynomials $f$ for which $\mathcal J_f$ is disconnected but not totally disconnected (see Figure~\ref{fig:disconnectedJulia}). \begin{figure} \caption{The filled Julia set for $f(z) =z^3-z+1$. The Julia set $\mathcal J_f$ is disconnected but not totally disconnected.} \label{fig:disconnectedJulia} \end{figure} We saw in equations~\eqref{eq:dynMah1D} and~\eqref{eq:mahlerheight} that the Julia set $\mathcal J_f$ for a polynomial $f$ will play the role of the unit torus $\mathbb{T}^1$ when studying dynamical Mahler measure. \subsection{Equilibrium Measures} \begin{defn} Given a compact subset $K \subseteq \mathbb{C}$, an \textbf{equilibrium measure} for $K$ is a Borel probability measure $\mu$ on $K$ which has maximal \textbf{energy} \[ I(\mu) := \int_K \int_K \log|z - w|\ d\mu(z)\ d\mu(w)\] among all Borel probability measures on $K$. \end{defn} Every compact set $K \subseteq \mathbb{C}$ has an equilibrium measure~\cite[Theorem 3.3.2]{Ransford}, and if $f$ denotes a polynomial of degree $d \geq 2$, then the equilibrium measure $\mu_f$ on its Julia set $\mathcal J_f$ is unique (this is the consequence of a more general result that states that the equilibrium measure of any compact, \emph{non-polar} set is unique~\cite[Theorem 3.7.6]{Ransford}; the Julia set $\mathcal J_f$ of any polynomial $f$ is non-polar~\cite[Theorem~6.5.1]{Ransford}), which is to say that there is a non-trivial finite Borel measure with compact support such that $I(\mu)>-\infty$. In fact we can characterize the equilibrium measure $\mu_f$ as follows: \begin{thm}[{\cite[Theorem 6.5.8]{Ransford}}] \label{equilibrium-measure-sequence} Let $w \in \mathcal J_f$, and for $n \geq 1$, define the Borel probability measures \[ \mu_n := \frac 1{d^n} \sum_{f^n(\zeta) = w} \delta_{\zeta}, \] where $\delta_{\zeta}$ denotes the unit mass at $\zeta$, and the preimages $\zeta$ of $w$ under $f^n$ are taken with multiplicity. Then $\mu_n \overset{w^\ast}{\to} \mu_f$ (weak$^\ast$-convergence) as $n \to \infty$. 
\end{thm} \section{Dynamical Mahler Measure: Definition and Examples} \label{sec:DMMdef} Inspired by Definition~\ref{defn:MM} and the parallels between Mahler measure and the dynamical setting in equations~\eqref{eq:dynMah1D} and~\eqref{eq:mahlerheight}, we define the following: \begin{defn} \label{def:dmm-defi} Let $f \in \mathbb{Z}[z]$ be a monic polynomial of degree $d \geq 2$, and let $P \in \mathbb{C}(x_1, \ldots, x_n)^\times$. The \textbf{$f$-dynamical Mahler measure} of $P$ is the number \begin{equation}\label{eq:dmm-defi} \mathrm{m}_f(P) := \int_{\mathcal J_f} \cdots \int_{\mathcal J_f} \log|P(z_1, \ldots, z_n)|\ d\mu_f(z_1) \cdots d\mu_f(z_n). \end{equation} Note that as $\mu_f$ is a probability measure, the value of this integral is not affected by omitted variables, so in this sense the value of $\mathrm{m}_f$ is independent of $n$. \end{defn} It is not clear \emph{a priori} that the integral in~\eqref{eq:dmm-defi} converges---it does, and we prove this in Proposition~\ref{prop:convergence}---but before discussing these details we provide some examples where the dynamical Mahler measure can be explicitly computed. The following lemmas will prove useful throughout. \begin{lem} \label{products} If $f \in \mathbb{Z}[z]$ and $P, Q \in \mathbb{C}(x_1, \ldots, x_n)^\times$, then \[ \mathrm{m}_f(PQ) = \mathrm{m}_f(P) + \mathrm{m}_f(Q). \] \end{lem} \begin{proof} This follows immediately from the corresponding fact about logarithms. \end{proof} \begin{lem} If $f$ and $g$ are nonlinear polynomials that commute under composition, then $\mathrm{m}_f = \mathrm{m}_g$. \end{lem} \begin{proof} If $f$ and $g$ commute, then they have the same Julia set (see~\cite{AtelaHu}), and the equilibrium measure is determined by this set. \end{proof} \begin{prop} \label{power-function-mahler-measure} If $f(z) = z^d$ with $d \geq 2$, then $\mathrm{m}_f(P) = \mathrm{m}(P)$ for $P \in \mathbb{C}(x)^\times$. \end{prop} \begin{proof} In this case, we have seen that $\mathcal J_f$ is given by the circle $\mathbb{T}^1$, and Theorem~\ref{equilibrium-measure-sequence} tells us that the equilibrium measure is the uniform measure on the circle: \[ \frac{\chi_{\mathbb{T}^1}dz}{2\pi i z}, \] where $\chi_{\mathbb{T}^1}$ is the characteristic function on the unit circle. Taking $z = e^{i \theta}$, we then have \[ \mathrm{m}_f(P) = \frac{1}{2\pi} \int_0^{2\pi} \log |P(e^{i\theta})|\ d\theta = \frac{1}{2\pi i} \int_{\mathbb{T}^1} \log |P(z)| \frac{dz}{z} = \mathrm{m}(P). \qedhere \] \end{proof} \begin{prop} \label{chebyshev-polynomial-mahler-measure} Define the $d^\text{th}$ Chebyshev polynomial to be the polynomial $T_d(z) \in \mathbb{Z}[z]$ that satisfies \begin{equation} T_d(z+z^{-1}) = z^d + z^{-d}. \label{eqn:chebyshev def} \end{equation} Then \[\mathrm{m}_{T_d}(P) = \mathrm{m}(P \circ w)\] for $P \in \mathbb{C}(x)^\times$, where $w(z) = z + z^{-1}$. \end{prop} \begin{proof} Note that $T_d \circ w = w \circ f$, where $f(z) = z^d$ (and note the analogy with Proposition~\ref{prop:conjugation} below). The function $w$ maps the Julia set $\mathcal J_f$ onto the Julia set $\mathcal J_{T_d}$, so that $\mathcal J_{T_d}$ is the segment $[-2, 2]$ (traversed twice as $z$ proceeds around the unit circle). 
It follows from Theorem~\ref{equilibrium-measure-sequence} that the equilibrium measure on $\mathcal J_{T_d}$ is the pushforward $w_\ast \mu_f$ of the equilibrium measure on $\mathcal J_f$, so \begin{align*} \mathrm{m}_{T_d}(P) &= \int_{\mathcal J_{T_d}} \log |P(z)|\ d\mu_{T_d}(z) \\ &= \int_{w(\mathcal J_f)} \log |P(z)|\ dw_\ast \mu_f(z) \\ &= \int_{\mathcal J_f} \log |P(w(z))|\ d\mu_f(z) \\ &= \mathrm{m}_f(P \circ w) \\ &= \mathrm{m}(P \circ w) \end{align*} by Proposition~\ref{power-function-mahler-measure}. \end{proof} \begin{remk} Incidentally, writing $u = z + z^{-1} = e^{i\theta} + e^{-i\theta} = 2 \cos \theta$, we have $du = -2 \sin \theta\ d\theta$, which is to say \[ - \frac{du}{\sqrt{4 - u^2}} = d\theta. \] Thus we can write \begin{align*} \mathrm{m}_{T_d}(P) &= \frac{1}{\pi} \int_0^{\pi} \log |P(e^{i\theta} + e^{-i\theta})|\ d\theta \\ &= \frac{1}{\pi} \int_{-2}^2 \log |P(u)| \frac{du}{\sqrt{4 - u^2}}, \end{align*} from which we see that the equilibrium measure on the segment $[-2, 2]$ is \[ \frac{\chi_{[-2,2]}dz}{\pi \sqrt{4-z^2}}. \] \end{remk} More generally, we have: \begin{prop}\label{prop:gen-cheby} Let $\alpha, \beta \in \mathbb{C}$, and let $f(z)=\frac{\beta-\alpha}{4}T_d\left(\frac{4z-2(\alpha+\beta)}{\beta-\alpha}\right)+\frac{\alpha+\beta}{2}$ where $T_d$ is a Chebyshev polynomial defined in equation~\eqref{eqn:chebyshev def} for $d\geq 2$. For $P \in \mathbb{C}[x]$, we have \begin{equation}\label{eq:mfchange} \mathrm{m}_f(P)=\mathrm{m}\left(P\circ \left(\frac{\beta-\alpha}{4}(z+z^{-1})+\frac{\alpha+\beta}{2} \right) \right). \end{equation} \end{prop} \begin{proof} A change of variables shows that the Julia sets of these polynomials are given by $\mathcal J_f=[\alpha,\beta]$, where this is to be understood as the line segment connecting $\alpha$ and $\beta$ in the complex plane. The equilibrium measure is then given by \[\frac{\chi_{[\alpha,\beta]}dz}{\pi \sqrt{(z-\alpha)(\beta-z)}},\] where $\chi_{[\alpha,\beta]}$ is the characteristic function on the segment $[\alpha, \beta]$. This gives \begin{align*} \mathrm{m}_f(P) = & \frac{1}{\pi} \int_{\alpha}^\beta \log|P(z)|\frac{dz}{\sqrt{(z-\alpha)(\beta-z)}}. \end{align*} Setting $z=\frac{\beta-\alpha}{2}\cos(\pi \theta)+\frac{\alpha+\beta}{2}$ gives \begin{align*} \mathrm{m}_f(P)=&\int_{0}^1 \log\left|P\left(\frac{\beta-\alpha}{2}\cos(\pi \theta)+\frac{\alpha+\beta}{2}\right)\right| d\theta\\=&\frac{1}{2}\int_{-1}^1 \log\left|P\left(\frac{\beta-\alpha}{2}\cos(\pi \theta)+\frac{\alpha+\beta}{2}\right)\right| d\theta. \end{align*} Substituting $w=e^{i\pi\theta}$, we turn the domain of integration into the unit circle, and we conclude that $\mathrm{m}_f$ is given by \eqref{eq:mfchange}. \end{proof} We provide the details of Proposition~\ref{prop:gen-cheby} because it is so difficult, in general, to calculate dynamical Mahler measure exactly. However, we note that the result can also be viewed as a consequence of Proposition~\ref{chebyshev-polynomial-mahler-measure} about Chebyshev polynomials and the following result on dynamical Mahler measure for conjugate maps. \begin{prop}\label{prop:conjugation} Let $f \in \mathbb{C}[z]$ and $P \in \mathbb{C}[z]$, and let $L(z) = az+b \in \mathbb{C}[z]$ with $a\neq 0$. Then \[ \mathrm{m}_{f^L}(P) = \mathrm{m}_f(P \circ L^{-1}). \] \end{prop} \begin{proof} Note first that the Julia sets of $f$ and $f^L$ are related by \[ \mathcal J_{f^L} = L^{-1}(\mathcal J_f).
\] Next, fixing $w \in \mathcal J_f$, observe that by Theorem~\ref{equilibrium-measure-sequence}, the measure $\mu_{f^L}$ is the limit in the weak$^\ast$ topology of the sequence of measures \[ d^{-n} \sum_{(f^L)^n(\zeta) = w} \delta_\zeta = d^{-n} \sum_{f^n(L(\zeta)) = L(w)} (L^{-1})_\ast \delta_{L(\zeta)},\] where $d$ denotes the degree of $f$ and $(L^{-1})_\ast \delta_{L(\zeta)}$ denotes the measure defined by $(L^{-1})_\ast \delta_{L(\zeta)}(X) = \delta_{L(\zeta)}(L(X))$. But by Theorem~\ref{equilibrium-measure-sequence}, this sequence also has weak$^\ast$-limit $(L^{-1})_\ast \mu_f$. So $\mu_{f^L} = (L^{-1})_\ast \mu_f$. Making the substitution $w = L(z)$, we then have \begin{align*} \mathrm{m}_{f^L}(P) &= \int_{\mathcal J_{f^L}} \log |P(z)|\ d\mu_{f^L}(z) \\ &= \int_{L^{-1}(\mathcal J_f)} \log |P(z)|\ d(L^{-1})_\ast \mu_f(z) \\ &= \int_{\mathcal J_f} \log |P(L^{-1}(w))|\ d\mu_f(w) \\ &= \mathrm{m}_f(P \circ L^{-1}), \end{align*} as desired. \end{proof} \section{Dynamical versions of classical results}\label{sec:PrevPaper} In this section, we summarize results from~\cite{TwoVarPoly}, focusing on the connections between these results and classical Mahler measure as outlined in Section~\ref{sec:ClassicalMahler}. We provide more detail and refine some of these results in the next sections. \noindent {\bf Jensen's formula} gave us an equivalent definition of Mahler measure that extended naturally to higher dimensions: \[ \mathrm{m}\left( P\right) = \log|a| + \sum_{|\alpha_i|>1} \log |\alpha_i| = \frac{1}{2\pi i}\int_{\mathbb{T}^1} \log \left| P\left(z \right)\right| \frac{dz}{z}. \] \noindent {\bf Dynamical Jensen's formula}~\cite[Lemma 3.1] {TwoVarPoly} plays a similar role, allowing us to prove that the integral in \eqref{eq:dmm-defi} converges, and that when $P\in \mathbb{Z}[x_1,\dots,x_n]$, we have $\mathrm{m}_f(P)\geq 0$. Here $p_{\mu_f}$ is the potential function (see Definition~\ref{def:DefPotential} and Proposition~\ref{dynamical-jensen}). \[ \mathrm{m}_f(P) = \log|a| + \sum_{i} p_{\mu_f}(\alpha_i) = \log|a| + \int_{\mathcal K_f} \log|z-w|\ d\mu_f(w). \] \noindent {\bf Kronecker's Lemma} tells us which integer polynomials have Mahler measure zero: Let $P \in \mathbb{Z}[x]$. If $\mathrm{m}(P)=0$, then the roots of $P$ are either zero or roots of unity. Conversely, if $P$ is primitive and its roots either zero or roots of unity, then $\mathrm{m}(P)=0$. \noindent {\bf Dynamical Kronecker's Lemma}~\cite[Lemmas 1.2 and 4.3]{TwoVarPoly} answers the same question for dynamical Mahler measure. Recalling the driving analogy of arithmetic dynamics, that preperiodic points are like torsion points in arithmetic geometry, the result feels natural. \begin{lem}[Dynamical Kronecker's Lemma] \label{lem:dynamicalKronecker} Let $f \in \mathbb{Z}[z]$ be monic of degree $d \ge 2$ and let $P \in \mathbb{Z}[x]$. Then we have $\mathrm{m}_f(P)=0$ if and only if $P(x) = \pm \prod_i (x-\alpha_i)$ with each $\alpha_i$ a preperiodic point of $f$. \end{lem} \noindent {\bf The Boyd-Lawton Theorem} relates single-variable and multivariate Mahler measure. For $P \in \mathbb{C}(x_1, \dots, x_n)^\times$, \[\lim_{k_2 \rightarrow \infty}\dots \lim_{k_n \rightarrow \infty}\mathrm{m}(P(x, x^{k_2}, \dots, x^{k_n})) = \mathrm{m}(P(x_1,\dots, x_n))\] with $k_2, \dots, k_n \rightarrow \infty$ independently from each other. \noindent {\bf The Weak Dynamical Boyd-Lawton Theorem}~\cite[Proposition 1.3]{TwoVarPoly} provides a partial analogue in the dynamical setting for polynomials in two variables. 
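Before stating it, we note that the one-variable quantities appearing in these results can be explored numerically via Theorem~\ref{equilibrium-measure-sequence}, by averaging $\log|P|$ over the $n$-th preimages of a point of $\mathcal J_f$. The sketch below (in Python, with our own naming) does this for $f(z) = z^2 - 1$, using the repelling fixed point $(1+\sqrt{5})/2 \in \mathcal J_f$ as the base point; it is purely heuristic, since the weak$^*$ convergence in that theorem is only guaranteed for continuous integrands.
\begin{verbatim}
import numpy as np

def preimages(w, n, c=-1.0):
    """All n-th preimages of w under f(z) = z^2 + c, with multiplicity."""
    points = np.array([w], dtype=complex)
    for _ in range(n):
        roots = np.sqrt(points - c)          # solve z^2 + c = p for each point p
        points = np.concatenate([roots, -roots])
    return points

def approx_dynamical_mahler(P, n=14):
    """Heuristic estimate of m_f(P) for one-variable P and f(z) = z^2 - 1."""
    w = (1 + np.sqrt(5)) / 2                 # repelling fixed point, lies in J_f
    zs = preimages(w, n)
    return float(np.mean(np.log(np.abs(P(zs)))))

print(approx_dynamical_mahler(lambda z: z))      # expect roughly 0
print(approx_dynamical_mahler(lambda z: z - 3))  # positive: 3 lies outside K_f
\end{verbatim}
For $P(x) = x$ the estimate is close to $0$, consistent with Dynamical Kronecker's Lemma, since $0 \mapsto -1 \mapsto 0$ is periodic for $f$.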
\begin{prop}[Weak Dynamical Boyd-Lawton]\label{prop:weakdynamicalBL} Let $f \in \mathbb{Z}[z]$ be monic of degree $d \ge 2$ and let $P \in \mathbb{C}[x, y]$. Then \[ \limsup_{n\to\infty} \mathrm{m}_f(P(x, f^n(x))) \le \mathrm{m}_f(P(x, y)). \] \end{prop} \noindent {\bf Lehmer's Question} asks if there are integer polynomials with arbitrarily small positive Mahler measure, or if the Mahler measure of $P \in \mathbb{Z}[x]$ with $\mathrm{m}(P) \neq 0$ is bounded away from zero. \noindent {\bf Dynamical Lehmer's Conjecture}~\cite[Conjecture~3.25]{Silverman-arithmetic-dynamical} asks the same question for dynamical Mahler measure. \begin{conj}[Dynamical Lehmer's Conjecture] \label{conj:dynamicalLehmer} There is some $\delta = \delta_f > 0$ such that any single-variable polynomial $P\in \mathbb{Z}[x]$ with $\mathrm{m}_f(P) > 0$ satisfies $\mathrm{m}_f(P) > \delta$. \end{conj} \noindent {\bf Higher dimensional Kronecker's Lemma} can be stated quite simply, and has a similar feel to the one-dimensional version: \begin{thmn}[\ref{thm:HigherdKronecker}]\cite[Theorem 3.10]{Everestward} For any primitive polynomial $P\in \mathbb{Z}[x_1,\dots,x_n]$, $\mathrm{m}(P) =0$ if and only if $P$ is the product of a monomial and cyclotomic polynomials evaluated on monomials. \end{thmn} The main result of~\cite{TwoVarPoly} provides a partial two-variable Kronecker's Lemma for dynamical Mahler measure, but the statement and hypotheses are significantly more delicate than in the one-variable case. \begin{thm}\cite[Theorem 1.5]{TwoVarPoly} \label{thm:mainresult} Assume the Dynamical Lehmer's Conjecture. Let $f \in \mathbb{Z}[z]$ be a monic polynomial of degree $d \ge 2$ which is not conjugate to $z^d$ or to $\pm T_d(z)$, where $T_d(z)$ is the $d^\text{th}$ Chebyshev polynomial. Then any polynomial $P \in \mathbb{Z}[x, y]$ which is irreducible in $\mathbb{Z}[x,y]$ (but not necessarily irreducible in $\mathbb{C}[x, y]$) with $\mathrm{m}_f(P) = 0$ and which contains both variables $x$ and $y$ divides a product of complex polynomials of the following form: \[ \tilde{f}^n(x) - L(\tilde{f}^m(y)),\] where $m, n \ge 0$ are integers, $L \in \mathbb{C}[z]$ is a linear polynomial commuting with an iterate of $f$, and $\tilde{f} \in \mathbb{C}[z]$ is a non-linear polynomial of minimal degree commuting with an iterate of $f$ (with possibly different choices of $L$, $\tilde{f}$, $n$, and $m$ for each factor). As a partial converse, suppose there exists a product of complex polynomials $F_j$ such that \begin{enumerate} \item each $F_j$ has the form $\tilde{f}^{n}(x) - L(\tilde{f}^{m}(y))$, where $L$ and $\tilde{f}$ are as above (with possibly different choices of $L$, $\tilde{f}$, $n$, and $m$ for each $j$); \item $\prod F_j \in \mathbb{Z}[x, y]$; and \item $P$ divides $\prod F_j$ in $\mathbb{Z}[x, y]$. \end{enumerate} Then $\mathrm{m}_f(P) = 0$. \end{thm} In Section~\ref{sec:DynKronLem}, we discuss several other statements that are either known or conjectured to be equivalent to having dynamical Mahler measure~0, and in Section~\ref{sec:e1e2} we prove a new implication in this family of results. In Section~\ref{sec:ad}, we prove a new version of Theorem~\ref{thm:mainresult} in which we replace the assumption of Dynamical Lehmer's Conjecture with the assumption that $\operatorname{PrePer}(f) \subseteq \mathcal J_f$. This is a strengthening of the result in some respects, since the hypothesis on the preperiodic points is much easier to check, when it holds, than Dynamical Lehmer's Conjecture.
However, there are certainly polynomials $f\in \mathbb{Z}[z]$ for which that assumption does not hold. This is discussed in Section~\ref{sec:J=K}. \section{Convergence and Positivity}\label{sec:DynMahlerConverge} In this section we give an introduction to potentials, and then use them to prove the existence of the dynamical Mahler measure. \begin{defn}\label{def:DefPotential} The \textbf{potential} of a finite Borel measure $\mu$ with compact support $K$ is the function $p_\mu: \mathbb{C} \to [-\infty, \infty)$ given by \[ p_\mu(z) = \int_K \log|z - w|\ d\mu(w). \] \end{defn} We can see the relationship between potentials and dynamical Mahler measure in the following result, which should be considered the dynamical analogue of Jensen's formula: \begin{prop} \label{dynamical-jensen} Suppose $P(x)$ factors over $\mathbb{C}$ as $P(x) = a \prod_i (x - \alpha_i)$. Then \[ \mathrm{m}_f(P) = \log|a| + \sum_i p_{\mu_f}(\alpha_i). \] \end{prop} \begin{proof} We have \begin{align*} \mathrm{m}_f(P) &= \int_{\mathcal J_f} \log \left| a \prod_i (z - \alpha_i) \right|\ d\mu_f \\ &= \log|a| + \sum_i \int_{\mathcal J_f} \log|z - \alpha_i|\ d\mu_f \\ &= \log|a| + \sum_i p_{\mu_f}(\alpha_i). \qedhere \end{align*} \end{proof} \begin{remk} If $f$ is a monic polynomial, the potential $p_{\mu_f}$ of the equilibrium measure on its Julia set is equal to the \emph{Green's function} $g_{F_\infty}(z, \infty)$ on $F_\infty$, the complement of the filled Julia set $\mathcal K_f$ in the Riemann sphere. (See~\cite[Theorem~6.5.1]{Ransford} and the proof of~\cite[Theorem~4.4.2]{Ransford}.) \end{remk} Let us make a few more observations about potentials before returning to dynamical Mahler measure. \begin{prop} \label{potential-continuous} Let $f \in \mathbb{C}[z]$ be a nonlinear polynomial. Then the potential $p_{\mu_f}$ is continuous. \end{prop} \begin{proof} First, the potential is harmonic, and thus continuous, on $\mathbb{C} \setminus \mathcal K_f$~\cite[Theorem~3.1.2]{Ransford}. As $F_\infty$ is a regular domain~\cite[Corollary~6.5.5]{Ransford}, we have $p_{\mu_f}(z) = I(\mu_f)$ for all $z \in \mathcal J_f$~\cite[Theorem~4.2.4]{Ransford}, and it is shown in the proof of~\cite[Corollary~6.5.5]{Ransford} that \[ \lim_{\substack{z \to \zeta \\ z \notin \mathcal K_f}} p_{\mu_f}(z) = I(\mu_f) \] for all $\zeta \in \mathcal J_f$. Finally, it follows from Frostman's Theorem~\cite[Theorem~3.3.4]{Ransford} that $p_{\mu_f}(z) = I(\mu_f)$ on the interior of $\mathcal K_f$ also. \end{proof} \begin{prop} \label{potential-nonnegative} Let $f \in \mathbb{C}[z]$ be a nonlinear, monic polynomial. Then $p_{\mu_f}(z) \geq 0$ for all $z \in \mathbb{C}$, and $p_{\mu_f}(z) = 0$ if and only if $z \in \mathcal K_f$. \end{prop} \begin{proof} It follows from~\cite[Theorem~6.5.1]{Ransford} that $I(\mu_f) = 0$ if $f$ is monic. The proof of Proposition~\ref{potential-continuous} then shows that $p_{\mu_f}(z) = 0$ for $z \in \mathcal K_f$, while $p_{\mu_f}(z) > 0$ for $z \notin \mathcal K_f$~\cite[Theorem~4.4.3]{Ransford}. \end{proof} We now return to the dynamical Mahler measure. \begin{prop}\label{prop:convergence} Let $f \in \mathbb{Z}[z]$ be a monic, nonlinear polynomial, and let $P \in \mathbb{C}(x_1, \ldots, x_n)^\times$. Then the integral defining the $f$-dynamical Mahler measure of $P$ converges, and if $P$ is furthermore a nonzero integer polynomial, then $\mathrm{m}_f(P) \geq 0$. \end{prop} \begin{remk} This result appears in \cite[Proposition 3.2]{TwoVarPoly}. 
In the interest of providing a self-contained introduction to the key ideas in the subject, we present here an expanded and more detailed proof. The argument is based on the proof of~\cite[Lemma~3.7]{Everestward} for the classical Mahler measure. \end{remk} \begin{proof} It suffices to consider the case of $P$ a polynomial, since $\mathrm{m}_f(F/G) = \mathrm{m}_f(F) - \mathrm{m}_f(G)$ by Lemma~\ref{products}. We induct on the number of variables. When $n$ = 1, we can factor $P$ over $\mathbb{C}$ as $a \prod_i (x - \alpha_i)$. By Proposition \ref{dynamical-jensen}, we have \[ \mathrm{m}_f(P) = \log|a| + \sum_i p_{\mu_f}(\alpha_i). \] Since the potential $p_{\mu_f}$ is nonnegative on $\mathbb{C}$, we can immediately conclude that the integral defining $\mathrm{m}_f(P)$ converges and that it is nonnegative when $P$ has integer coefficients. Now assume the result holds for polynomials in $n - 1$ variables, and let $P \in \mathbb{C}[x_1, \ldots, x_n]$. Write $P$ as a polynomial in $x_1$ with coefficients in $\mathbb{C}[x_2, \ldots, x_n]$: \[ P(x_1, \ldots, x_n) = a_d(x_2, \ldots, x_n) x_1^d + \dots + a_0(x_2, \ldots, x_n). \] Factor this as \[ a_d(x_2, \ldots, x_n) \prod_{j=1}^d (x_1 - g_j(x_2, \ldots, x_n)) \] for some algebraic functions $g_j$. We then have \begin{align} \nonumber \mathrm{m}_f(P) &= \mathrm{m}_f(a_d) + \int_{\mathcal J_f} \cdots \int_{\mathcal J_f} \log \left| \prod_{j=1}^d (z_1 - g_j(z_2, \ldots, z_n)) \right|\ d\mu_f(z_1) \cdots d\mu_f(z_n) \\ &= \mathrm{m}_f(a_d) + \int_{\mathcal J_f} \cdots \int_{\mathcal J_f} \sum_{j=1}^d p_{\mu_f}(g_j(z_2, \ldots, z_n)) \ d\mu_f(z_2) \cdots d\mu_f(z_n). \label{eq:non-neg} \end{align} By the induction hypothesis, $\mathrm{m}_f(a_d)$ exists and is nonnegative if $P$, and thus $a_d$, has integer coefficients. While the $g_j$ may not be continuous, the multiset of values $\{ g_j(z_2, \ldots, z_n) \}$ is, so it follows from Propositions~\ref{potential-continuous} and \ref{potential-nonnegative} that the integrand \[ \sum_{j=1}^d p_{\mu_f}(g_j(z_2, \ldots, z_n)) \] is nonnegative and continuous away from any poles of the $g_j$. On the other hand, as $\mathcal J_f$ is compact, the polynomial $P(z_1, \ldots, z_n)$ is bounded above on $\mathcal J_f^n$, so the integral defining $\mathrm{m}_f(P)$ is also. The same can then be said for the integral \[ \int_{\mathcal J_f} \cdots \int_{\mathcal J_f} \sum_{j=1}^d p_{\mu_f}(g_j(z_2, \ldots, z_n)) \ d\mu_f(z_2) \cdots d\mu_f(z_n) \] by the finiteness of $\mathrm{m}_f(a_d)$; it follows that this integral converges, despite the presence of any poles of the $g_j$, and therefore the integral defining $\mathrm{m}_f(P)$ does also. \end{proof} \section{Multivariable Analogues of Dynamical Kronecker's Lemma} \label{sec:DynKronLem} Multivariate dynamical Mahler measure was defined in~\cite{TwoVarPoly}, but similar ideas have appeared in the literature in recent years. In this section, we present a summary of some results in arithmetic dynamics. These statements, which include the statement that a polynomial has dynamical Mahler measure zero, are all known or conjectured to be equivalent. Assume as usual that $f \in \mathbb{Z}[x]$ is monic of degree $d$, and $P \in \mathbb{Z}[x_1, \dotsc, x_n]$. As in the statement of Theorem~\ref{thm:mainresult}, $L$ always denotes a linear polynomial in $\mathbb{C}[z]$ commuting with an iterate of $f$, and $\tilde{f}$ always denotes a non-linear polynomial in $\mathbb{C}[z]$ of minimal degree commuting with an iterate of $f$. \begin{itemize} \item[(a)] $\mathrm{m}_f(P) =0$. 
\item[(b)] $h(\overline{\{P=0\}}\subseteq X) = 0$, where $\overline{\{P=0\}}$ is the Zariski closure of the hypersurface $\{P = 0\} \subseteq\mathbb{A}^n \subseteq X$, $X$ is either $(\mathbb{P}^1)^n$ or $(\mathbb{P}^n)$, and $h$ is a dynamical height for subvarieties of $X$ of the type introduced in \cite{Zhang}. \item [(c)] The hypersurface $\{P = 0\}\subseteq\mathbb{A}^n(\mathbb{C})$ is preperiodic under the map $(x_1, \dotsc, x_n) \mapsto (f(x_1), \dotsc, f(x_n))$. (A subvariety $V$ of a variety $X$ is preperiodic for a map $\Phi: X \to X$ if $\Phi^m (V) = \Phi^n(V)$ for some $m \ne n$.) \item[(d)] The hypersurface $\{P = 0\}\subseteq\mathbb{A}^n(\mathbb{C})$ contains a Zariski dense subset of points that are preperiodic for the map $(x_1, \dotsc, x_n) \mapsto (f(x_1), \dotsc, f(x_n))$ (equivalently, with all coordinates preperiodic for $f$). \item[(e1)] $P$ is primitive (gcd of coefficients $= 1$) and, inside the ring $\mathbb{C}[x_1, \dotsc, x_n]$, $P(x)$ divides some polynomial in $\mathbb{C}[x_1, \dotsc, x_n]$ which is a product of factors of the form $\tilde{f}^n(x_i) - L(\tilde{f}^m(x_j))$ (here $i$ can equal $j$). \item[(e2)] Inside the ring $\mathbb{Z}[x_1, \dotsc, x_n]$, $P(x)$ divides some polynomial in $\mathbb{Z}[x_1, \dotsc, x_n]$ which is a product of factors of the form $\tilde{f}^n(x_i) - L(\tilde{f}^m(x_j))$ (the factors do not need to be in $\mathbb{Z}[x]$, but the product does). \end{itemize} The known relationships among these statements are summarized in Figure~\ref{fig:diagram}. \begin{figure} \caption{The known relationships between statements (a--e), with references.} \label{fig:diagram} \end{figure} \subsection{Subvarieties with many preperiodic points and preperiodic subvarieties} Historically, one of the first of these properties to be studied was property (d), as a special case of the following more general question in the field of unlikely intersections: For an algebraic variety $X$ with a self map $\Phi: X \to X$, which subvarieties $Y$ of $X$ contain a Zariski dense subset of preperiodic points for $\Phi$? This question was raised by Zhang~\cite{Zhang}, who conjectured that $Y$ has such a subset if and only if $Y$ is preperiodic for $\Phi$. This conjecture is a generalization of the Manin--Mumford conjecture (proved by Raynaud ~\cite{Raynaud1,Raynaud2}) on subvarieties of abelian varieties containing infinitely many torsion points, and so Zhang in ~\cite{Zhang-distributions} calls this the Dynamical Manin--Mumford Conjecture. \begin{conj}[Dynamical Manin--Mumford Conjecture] For any variety $X$ and dominant map $\Phi: X \to X$, a subvariety $Y$ of $X$ contains a Zariski dense subset of preperiodic points if and only if $Y$ is preperiodic. \end{conj} In terms of our diagram, this is saying that (d) $\iff$ (c). However most of the study of this conjecture has been focused on the implication (d) $\implies$ (c), which has generally been the harder direction. The Dynamical Manin--Mumford Conjecture has been studied in various contexts, and is now known not to hold in full generality as originally stated (counterexamples, and a refined statement, have been given in \cite{Ghioca-Tucker-Zhang}). 
However, in the case of interest for our application, it is known to be true: \begin{thm}[Ghioca, Nguyen, Ye~\cite{Ghioca-Nguyen-Ye-two-var, Ghioca-Nguyen-Ye-multivar}]\label{GNY-general} If $\Phi: (\mathbb{P}^1)^n \to (\mathbb{P}^1)^n$ is of the form $f \times \dotsb \times f$, where $f$ is a non-exceptional rational map (not conjugate to a power map, a Chebyshev polynomial, or a Latt\`es map), then the Dynamical Manin--Mumford conjecture holds for the pair $((\mathbb{P}^1)^n, \Phi).$ \end{thm} Note that although this theorem is for $(\mathbb{P}^1)^n$, the result also holds for the restriction to $\mathbb{A}^n$, since $\mathbb{A}^n$ is Zariski dense in $(\mathbb{P}^1)^n$. Ghioca, Nguyen, and Ye actually show a more general statement, which includes the case $\Phi = f_1 \times \dotsb \times f_n$ where $f_1, \dotsc, f_n$ are non-exceptional and all of the same degree. Dujardin and Favre \cite{Dujardin-Favre} have shown a related result: that Dynamical Manin--Mumford holds for $(\mathbb{A}^2, \Phi)$, where $\Phi$ is any automorphism of H\'enon type. The set of invariant subvarieties for maps $\Phi: \mathbb{A}^n \to \mathbb{A}^n$ of the form $(x_1, \dotsc, x_n) \mapsto (f_1(x_1), \dotsc, f_n(x_n))$ was first determined by Medvedev and Scanlon~\cite{MedvedevScanlon}, and their work can be extended to give all preperiodic subvarieties. We are only interested in the case where $f_1 = \dotsb = f_n$ and in preperiodic hypersurfaces. (However, it turns out that $f_1 = \dotsb = f_n$ is the most interesting case, and also that all lower-dimensional preperiodic subvarieties are generated as intersections of preperiodic hypersurfaces.) We compile the relevant results in the theorem statement below: \begin{thm}[Medvedev, Scanlon~\cite{MedvedevScanlon}] \label{thm:MedScan} If $\Phi: (\mathbb{P}^1)^n \to (\mathbb{P}^1)^n$ is of the form $f \times \dotsb \times f$, where $f$ is a non-exceptional rational map (not conjugate to a power map, a Chebyshev polynomial, or a Latt\`es map), then any hypersurface in $\mathbb{A}^n$ that is preperiodic for $\Phi$ is of the form $\{P = 0\}$ where $P(x)$ divides some polynomial in $\mathbb{C}[x_1, \dotsc, x_n]$ which is a product of factors of the form $\tilde{f}^n(x_i) - L(\tilde{f}^m(x_j))$. \end{thm} In terms of Figure~\ref{fig:diagram}, Theorem~\ref{thm:MedScan} says that (c) implies (e1). The proof of Dynamical Manin--Mumford by Ghioca, Nguyen, and Ye similarly proceeds by explicitly describing the subvarieties of $(\mathbb{P}^1)^n$ that have infinitely many preperiodic points for $\Phi$, proving that (d) implies (e1) in Figure~\ref{fig:diagram}. They combine this with an argument for (c) implies (d) to give an independent proof of (c) implies (e1). \subsection{The Connection with Heights} From equations~\eqref{eq:dynMah1D} and~\eqref{eq:mahlerheight}, we see that single-variable Mahler measure is directly related to heights of points in $\mathbb{P}^1(\overline{\mathbb{Q}})$. It is natural to ask if multivariate Mahler measure is also given by a height. A very strong candidate for this is the dynamical height of subvarieties introduced by Zhang in~\cite{Zhang}. Zhang shows that this dynamical height vanishes both on preperiodic subvarieties and on subvarieties with infinitely many preperiodic points, and he conjectures the converse.
Since dynamical Mahler measure also detects polynomials whose zero locus is preperiodic or has infinitely many preperiodic points, it is natural to conjecture that $\mathrm{m}_f(P)$ is equal to the dynamical height of the hypersurface $\{P = 0\}$ with respect to the map $f \times \dotsb \times f$. This conjecture is also supported by work of Chambert-Loir and Thuillier~\cite{Chambert-Loir-Thuillier} showing that ordinary multivariate Mahler measure agrees with the dynamical height for hypersurfaces in $\mathbb{P}^n$ under the map $f \times \dotsb \times f$, where $f$ is a power map. A promising direction for future work would be to prove this relationship between dynamical Mahler measure and dynamical height, which would provide an additional connection between (a) and (b) in Figure~\ref{fig:diagram}. Zhang's height is too technical to define here, but we give a general overview of the main ideas. Let $X$ be a projective variety with a line bundle $\mathcal{L}$, and $\Phi: X \to X$ a dominant endomorphism (that acts compatibly with the line bundle: $\Phi^* \mathcal{L} \cong \mathcal{L}^d$, where $d$ should be thought of as the degree of $\Phi$). Zhang defined a height $h_\Phi$ on subvarieties of $X$ (or more generally, cycles on $X$) which, like the dynamical height we saw earlier, behaves nicely under pushforward: $h_{\Phi}(\Phi(Y)) = d h_{\Phi}(Y)$. The dynamical height of a subvariety is always non-negative. If $Y$ is preperiodic, then a formal consequence of the compatibility with pushforward is that $h_\Phi(Y) = 0$ (\cite[Theorem 2.4(b)]{Zhang}). The converse implication is conjectured~\cite[Conjecture 2.5]{Zhang}. Zhang also shows that if $h_\Phi(Y) > 0$, then there is a nonempty Zariski open subset $U \subseteq Y$ which contains no preperiodic subvarieties, hence in particular no preperiodic points. So the preperiodic points of $Y$ are not Zariski dense. Taking the contrapositive, we have the following result: \begin{prop}[Zhang~\cite{Zhang}]\label{general d implies b} If the preperiodic points of $Y$ are Zariski dense, then $h_\Phi(Y) = 0$. \end{prop} In our situation, we are interested in polynomials $P(x_1, \dotsc, x_n) \in \mathbb{C}[x_1,\dotsc,x_n]$. These polynomials naturally cut out hypersurfaces in $\mathbb{A}^n$, not projective varieties. However, we can solve this problem by completing $\mathbb{A}^n$ to a projective variety: either to $\mathbb{P}^n$ (as in the work of Chambert-Loir and Thuillier~\cite{Chambert-Loir-Thuillier}), or to $(\mathbb{P}^1)^n$ (as in the work of Ghioca, Nguyen, and Ye~\cite{Ghioca-Nguyen-Ye-two-var, Ghioca-Nguyen-Ye-multivar}). We conjecture that the heights given by the two options are equal to each other, as well as to the dynamical Mahler measure. In terms of Figure~\ref{fig:diagram}, \cite[Theorem 2.4(b)]{Zhang} gives the implication (c) implies (b), \cite[Conjecture 2.5]{Zhang} would give (b) implies (c), and Proposition~\ref{general d implies b} gives (d) implies (b). \subsection{New results} Here we highlight the equivalences and implications from Figure~\ref{fig:diagram} proved in~\cite{TwoVarPoly} and in Sections~\ref{sec:e1e2} and~\ref{sec:ad} of the current work. (a) implies (d): This result for two-variable polynomials, conditional on the Dynamical Lehmer Conjecture, is the main result of~\cite{TwoVarPoly}. (See Theorem~\ref{thm:mainresult} here for a complete statement.)
The proof proceeds in two main steps, first obtaining (a) implies (d) conditional on the Dynamical Lehmer Conjecture, and then using the two-variable case of Theorem~\ref{GNY-general} from~\cite{Ghioca-Nguyen-Ye-two-var} for the (d) implies (e1) step. More specifically, in \cite[Proposition 7.6]{TwoVarPoly} we show that (a) implies (d) conditional on the Dynamical Lehmer Conjecture for pairs $(P, f)$ satisfying a technical condition that we call the bounded orders property, which we then show holds in all cases. In Section~\ref{sec:ad} of this paper, we strengthen the two-variable result by showing that (a) implies (d) without the Dynamical Lehmer Conjecture for polynomials $f$ such that $\operatorname{PrePer}(f) \subseteq\mathcal J_f$. Again, combining this with the implication (d) implies (e1) from~\cite{Ghioca-Nguyen-Ye-two-var} gives us a two-variable Dynamical Kronecker's Lemma for these polynomials. The implication (a) implies (d) for polynomials in more than two variables is still open. (e2) implies (a): This was shown in~\cite[Corollary 6.4]{TwoVarPoly}. The strategy for the proof consists of noticing that $\mathrm{m}_f(x-y)=0$ for any monic $f\in \mathbb{Z}[x]$, and then using the fact that the dynamical Mahler measure is invariant under composition with any polynomial commuting with $f$. (e1) implies (e2): This is shown in Section~\ref{sec:e1e2} by studying integrality of polynomials that satisfy certain commutativity properties. A key step is to prove that a polynomial commuting with some monic $f \in \overline{\mathbb{Z}}[x]$ (and satisfying certain technical conditions) must have coefficients in $\overline{\mathbb{Z}}$. (See Proposition~\ref{prop:coeff}.) \section{An integrality property of commuting polynomials} \label{sec:e1e2} To show that (e1) implies (e2) in Figure~\ref{fig:diagram}, we need to know that we can choose our product of factors of the form $\tilde{f}^n(x_i) - L(\tilde{f}^m(x_j))$ to have integer coefficients. We will do this by showing that $\tilde{f}$ and $L$ have algebraic integer coefficients, hence the product also has algebraic integer coefficients, and the coefficients of the product can then be assumed to be rational integers by enlarging the set of factors, if necessary, to be stable under the Galois action. Let $\overline{\mathbb{Z}}$ be the ring of algebraic integers, which has fraction field $\overline{\mathbb{Q}}$ and unit group $\overline{\mathbb{Z}}^\times$. First we show that $\tilde{f}\in \overline{\mathbb{Z}}[x]$. \begin{lem}\label{lem:integercomposition} If $g, h \in \overline{\mathbb{Q}}[x]$ are polynomials of positive degree with leading coefficients in $\overline{\mathbb{Z}}^\times$ such that $f = g \circ h \in \mathbb{Z}[x]$, then the polynomials $g(x + h(0))$ and $h(x) - h(0)$ both lie in $\overline{\mathbb{Z}}[x]$. \end{lem} \begin{proof} We follow the method of proof of~\cite[Theorem 2.1]{Gusic}. By replacing $g(x)$ and $h(x)$ with $g(x + h(0))$ and $h(x) - h(0)$ respectively, we may assume that $h(0) = 0$. In this case, it suffices to show that $g(x)$, $h(x) \in \overline{\mathbb{Z}}[x]$. Write $g(x) = a \prod_i (x-\alpha_i)$ and $h(x) = b \prod_j (x-\beta_j)$. By assumption $a, b \in \overline{\mathbb{Z}}^\times$. Note that the polynomial $f = g \circ h$ has leading coefficient equal to $a \cdot b^{\deg g} \in \overline{\mathbb{Z}}^\times$. Hence, the roots of $f$ are all algebraic integers, and we can factor $f(x)$ in $\overline{\mathbb{Z}}[x]$ as $f(x) = a \cdot b^{\deg g} \prod_k (x - \gamma_k)$ with $\gamma_k \in \overline{\mathbb{Z}}$.
We have another factorization: \[ f(x) = g(h(x)) = a \prod_{i} (h(x) - \alpha_i). \] By unique factorization, we must have $h(x) - \alpha_i = b \prod_{k \in S_i} (x - \gamma_{k})$ for some subset $S_i$ of the indices $k$, and so in particular $h(x) - \alpha_i \in \overline{\mathbb{Z}}[x]$ has algebraic integer coefficients. Since we have assumed that $h(0) = 0$, we conclude that $\alpha_i \in \overline{\mathbb{Z}}$ and $h(x) \in \overline{\mathbb{Z}}[x]$. Then also $g(x) = a \prod_i (x-\alpha_i) \in \overline{\mathbb{Z}}[x]$, and this concludes the proof. \end{proof} \begin{prop}\label{prop:iterates} Suppose that $f \in \overline{\mathbb{Q}}[x]$ has degree $>1$ and that the $n$-fold iterate $f^n = f \circ \dotsb \circ f$ is a monic polynomial in $\overline{\mathbb{Z}}[x]$. Then in fact $f \in \overline{\mathbb{Z}}[x]$. \end{prop} \begin{proof} Since the leading coefficient of $f^n$ is $1$, we see that $f$ must have leading coefficient in $\overline{\mathbb{Z}}^\times$ (in fact, it must be a root of unity, but we do not need this). The same is true for any iterate of $f$. Write $c = f(0)$. We now apply Lemma~\ref{lem:integercomposition} to the composition $f^n = (f^{n-1}) \circ f$ (its proof goes through verbatim when the composition lies in $\overline{\mathbb{Z}}[x]$ rather than $\mathbb{Z}[x]$, since the leading coefficient is a unit), and conclude that $f(x) - c \in \overline{\mathbb{Z}}[x]$. Hence, if we write $f(x) = a_d x^d + \dotsb + a_1 x + c$, we have shown that all $a_j$ with $j > 0$ are algebraic integers. It remains to show that also $c$ is an algebraic integer. For this, we note that \begin{equation}\label{eq:slick} f^n(c) -c = f^{n+1}(0) - c = f(f^n(0)) - c = a_d (f^n(0))^d + \dotsb + a_1(f^n(0)), \end{equation} where the constant terms cancel out. The right hand side lies in $\overline{\mathbb{Z}}$ because all the $a_i$ do, as does $f^n(0)$ since $f^n \in \overline{\mathbb{Z}}[x]$. Hence $c$ is a root of the polynomial $f^n(x) - x - A$, where $A \in \overline{\mathbb{Z}}$ is the right hand side of \eqref{eq:slick}. Since $f$ has degree $>1$, this is a monic polynomial with algebraic integer coefficients, hence $c \in \overline{\mathbb{Z}}$ also, as desired. \end{proof} Before proving our next statement, we need the following result: \begin{thm}\cite{Julia,Ritt} \label{thm:JR} If two polynomials $f$ and $g$ of degree $>1$ commute under composition, then up to conjugation with the same linear polynomial, either both are power functions, both are plus or minus Chebyshev polynomials, or an iterate of one is equal to an iterate of the other. \end{thm} \begin{cor}\label{cor:gint} If $f\in \overline{\mathbb{Z}}[x]$ is a monic polynomial of degree $>1$ that is not conjugate (over $\mathbb{C}$) to a power function or plus or minus a Chebyshev polynomial, and $g \in \overline{\mathbb{Q}}[x]$ of degree $>1$ commutes with some iterate of $f$, then in fact $g \in \overline{\mathbb{Z}}[x]$. \end{cor} \begin{proof} By assumption, $g$ commutes with $f^k$ for some $k$. It follows from Theorem~\ref{thm:JR} that $g^a = (f^k)^b$ for some positive integers $a$ and $b$. Hence $g^a \in \overline{\mathbb{Z}}[x]$ and is monic. By Proposition~\ref{prop:iterates}, we conclude that $g \in \overline{\mathbb{Z}}[x]$. \end{proof} \begin{remk} It would be worth investigating whether Corollary~\ref{cor:gint} is still true even when $f$ is conjugate to a power function or plus or minus a Chebyshev polynomial. It would also be interesting to have a proof of the statement that does not rely on Theorem~\ref{thm:JR}. \end{remk}
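As a toy illustration of Proposition~\ref{prop:iterates} (this example is ours and is included only for orientation), let $\zeta$ be a primitive cube root of unity and take $f(x) = \zeta x^2$. Then
\[
f^2(x) = \zeta\left(\zeta x^2\right)^2 = \zeta^3 x^4 = x^4,
\]
so an iterate of $f$ is monic with integer coefficients even though $f$ itself does not have rational coefficients; and indeed $f \in \overline{\mathbb{Z}}[x]$, with leading coefficient the root of unity $\zeta$, exactly as the proposition (and the remark on leading coefficients in its proof) predicts.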
We now show that the commuting linear function $L$ has coefficients in the algebraic integers. \begin{prop} If $f$ in $\overline{\mathbb{Z}}[x]$ is monic of degree $>1$, and $L \in \overline{\mathbb{Q}}[x]$ is a linear polynomial that commutes with $f$, then $L \in \overline{\mathbb{Z}}[x]$. \end{prop} \begin{proof} Write $f(x) = x^n + c_{n-1}x^{n-1} + \dotsb + c_1 x + c_0$, and $L(x) = ax + b$. First look at the leading coefficient of $L \circ f = f \circ L$: \[ a = a^n, \] so $a$ is a root of unity, hence in $\overline{\mathbb{Z}}$. Now look at the constant coefficient of $L \circ f = f \circ L$: \[ a c_0 + b = f(b), \] so $b$ is a root of the polynomial $f(x) - x - a c_0 \in \overline{\mathbb{Z}}[x]$, which is monic; hence $b \in \overline{\mathbb{Z}}$. So $L$ has algebraic integer coefficients. \end{proof} \begin{lem} If $f$ in $\overline{\mathbb{Q}}[x]$ has degree $>1$, then any polynomial $g \in \mathbb{C}[x]$ commuting with $f$ has coefficients in $\overline{\mathbb{Q}}$. \end{lem} \begin{proof} Since $f\in \overline{\mathbb{Q}}[x]$, the requirement that $f\circ g = g\circ f$ gives algebraic conditions on the coefficients of $g$. From Boyce~\cite{Boyce}, we know that there are finitely many $g$ of a fixed degree commuting with $f$, so this combined with the algebraic conditions on the coefficients ensures that $g\in \overline{\mathbb{Q}}[x]$. More specifically, the algebraic condition imposed by $f\circ g = g \circ f$ guarantees that the leading coefficient of $g$ is in $\overline{\mathbb{Q}}$, and Boyce describes an algorithm due to Jacobsthal~\cite{Jacobsthal} for computing all of the coefficients of $g$ from this leading coefficient. \end{proof} Combining the above, we have the following result. \begin{prop}\label{prop:coeff} If $f \in \overline{\mathbb{Z}}[x]$ is monic of degree $>1$ and is not conjugate to a power function or plus or minus a Chebyshev polynomial, then any polynomial $g$ commuting with $f$ has coefficients in $\overline{\mathbb{Z}}$. \end{prop} We now prove our desired theorem. \begin{thm} \label{e1-implies-e2} Let $f \in \mathbb{Z}[x]$ be a monic polynomial with integer coefficients. Let $P \in \mathbb{Z}[x_1, \dotsc, x_n]$ be a primitive polynomial with integer coefficients. Suppose that, working in the ring $\mathbb{C}[x_1, \dotsc, x_n]$, $P$ divides some polynomial $Q \in \mathbb{C}[x_1, \dotsc, x_n]$ that is a product of factors of the form $\tilde{f}^n(x_i) - L(\tilde{f}^m(x_j))$ where $m, n \ge 0$ are integers, $L \in \mathbb{C}[x]$ is a linear polynomial commuting with an iterate of $f$, and $\tilde{f} \in \mathbb{C}[x]$ is a non-linear polynomial of minimal degree commuting with an iterate of $f$ (with possibly different choices of $L$, $\tilde{f}$, $n$, and $m$ for each factor). Then, working in the ring $\mathbb{Z}[x_1, \dotsc, x_n]$, $P(x)$ divides some polynomial $R \in \mathbb{Z}[x_1, \dotsc, x_n]$ that is a product of factors of the form $\tilde{f}^n(x_i) - L(\tilde{f}^m(x_j))$ where $m, n, \tilde f$, and $L$ are as above. \end{thm} \begin{proof} Let $Q = Q_1,\dotsc, Q_r$ be all the Galois conjugates of $Q$. Since the property of commuting with the polynomial $f$, which has $\mathbb{Q}$-coefficients, is preserved by the action of $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$, the $Q_i$ all have factorizations of our desired form. Now let $R = \prod_i Q_i$, which is also a product of factors of this form. Since all these factors have coefficients in $\overline{\mathbb{Z}}$, so does $R$. But also $R$ is invariant under the Galois action by construction, so $R \in \mathbb{Q}[x_1, \dotsc, x_n]$; since $\mathbb{Q} \cap \overline{\mathbb{Z}} = \mathbb{Z}$, in fact $R \in \mathbb{Z}[x_1, \dotsc, x_n]$.
Finally, we need to check that $P$ divides $R$ in the ring $\mathbb{Z}[x_1, \dotsc, x_n]$. From the above, we know that, inside the ring $\mathbb{C}[x_1, \dotsc, x_n]$, $P$ divides $Q$, and hence also $R$. Since both $P$ and $R$ have rational coefficients, the quotient $R/P$ does also, and $P$ divides $R$ in $\mathbb{Q}[x_1, \dotsc, x_n]$. Since additionally $P$ and $R$ have integer coefficients, and $P$ is primitive, by Gauss's Lemma $P$ divides $R$ in $\mathbb{Z}[x_1, \dotsc, x_n]$, as desired. \end{proof} \section{Dynamical Kronecker's Lemma in some two-variable cases} \label{sec:ad} In this section we prove that (a) implies (d) in Figure~\ref{fig:diagram} for polynomials $f$ that satisfy a certain condition. More precisely, we give an alternate proof of the following key step in the proof of the two-variable Dynamical Kronecker's Lemma, which replaces the assumption of the Dynamical Lehmer Conjecture with the assumption that the preperiodic points of $f$ belong to its Julia set: \begin{thm}\label{mahler-preperiodic} Assume that $\operatorname{PrePer}(f) \subseteq \mathcal J_f$. If $P(x, y) \in \mathbb{Z}[x, y]$ with $\mathrm{m}_f(P) = 0$, then the curve $P(x, y) = 0$ passes through infinitely many points $(\alpha, \beta)$ for which $\alpha$ and $\beta$ are both preperiodic for $f$. \end{thm} \begin{remk}The property $\operatorname{PrePer}(f) \subseteq \mathcal J_f$ is a strong assumption. In Section~\ref{sec:J=K} we discuss some conditions that guarantee that this property is satisfied, so that the two-variable Dynamical Kronecker's Lemma holds unconditionally for such $f$. \end{remk} Before proving Theorem~\ref{mahler-preperiodic}, we consider two lemmas. \begin{lem}\label{leading-coefficient} Assume that $\operatorname{PrePer}(f) \subseteq \mathcal J_f$. If $P(x, y) = a(x)y^k + (\text{lower order terms in $y$}) \in \mathbb{Z}[x, y]$ is a two-variable polynomial with $\mathrm{m}_f(P) = 0$, then \begin{enumerate}[(i)] \item $\mathrm{m}_f(a) = 0$; \item the polynomial $a(x) \in \mathbb{Z}[x]$ is primitive (the gcd of its coefficients is 1); \item $a(x)$ divides the polynomial $P(x, y)$ (in $\mathbb{Z}[x, y]$). \end{enumerate} \end{lem} \begin{proof} (i) This result follows from the equality case of Proposition~\ref{prop:convergence}. In particular, from equation~\eqref{eq:non-neg} we see that $\mathrm{m}_f(P)$ is zero if and only if the two (non-negative) summands are both zero, one of which corresponds to $\mathrm{m}_f(a)$ in this two-variable case. (ii) This follows from (i) and the fact that non-primitive polynomials have positive dynamical Mahler measure. (iii) It is enough to show that $a(x)$ divides $P(x, y)$ in $\mathbb{C}[x, y]$, since if this is the case, the polynomial $P(x, y)/a(x)$ must have rational coefficients, and by Gauss's Lemma, the coefficients must also be integers. We argue by contradiction, somewhat in the style of \cite[Lemma~3.20]{Everestward}. Suppose not: then there is a root $\alpha$ of $a(x)$ such that $P(\alpha, y)$ is not identically~0. By the single-variable Dynamical Kronecker's Lemma (Lemma~\ref{lem:dynamicalKronecker}), since $\mathrm{m}_f(a) = 0$, the roots of $a(x)$ are in $\operatorname{PrePer}(f)$. By assumption, they are in $\mathcal J_f$. As in the proof of Proposition~\ref{prop:convergence}, we have a factorization \[ P(x, y) = a(x) \prod_{i = 1}^k (y - g_i(x)), \] where the $g_i$ are algebraic functions which may have branch cuts or singularities. In particular, plugging in the $\alpha$ above, we see that some $g_i(x)$ must have a pole at $x = \alpha$ (otherwise, since $a(\alpha) = 0$, the polynomial $P(\alpha, y)$ would vanish identically).
We will show this leads to a contradiction. We can decompose the dynamical Mahler measure of $P$ as \[ \mathrm{m}_f(P) = \mathrm{m}_f(a) + \sum_{i = 1}^k \int_{\mathcal J_f} p_{\mu_f} (g_i(x)) d\mu_f(x). \] Since all summands are non-negative, in order to have $\mathrm{m}_f(P) = 0$ we must have $\int_{\mathcal J_f} p_{\mu_f}(g_i(x)) d\mu_f(x) = 0$ for each $i$. On the other hand, if $g_i$ has a pole at $\alpha$ of order $r \in \mathbb{Q}$, we have an expansion \[ g_i(x) = c(x- \alpha)^{-r} + \dotsb \] with $c \neq 0$, where the first term dominates near $x = \alpha$, so there must be some neighborhood $U$ of $\alpha$ such that $p_{\mu_f}(g_i(x)) > 1$ for $x \in U$. Then, \[ 0 = \int_{\mathcal J_f} p_{\mu_f}(g_i(x)) d\mu_f(x) \ge \int_{U \cap \mathcal J_f} p_{\mu_f}(g_i(x)) d\mu_f(x)\ge \int_{U \cap \mathcal J_f} d\mu_f(x)= \mu_f(U \cap \mathcal J_f) > 0, \] which gives the desired contradiction. (The last step uses the fact that $\alpha$ lies in the support $\mathcal J_f$ of $\mu_f$.) \end{proof} After factoring out the leading term, we are reduced to considering the case when \[ P(x, y) = y^k + (\text{lower order terms in $y$}) \] is monic as a polynomial in $y$. \begin{lem}\label{monic-case} If $P(x, y) = y^k + (\text{lower order terms in $y$}) \in \mathbb{Z}[x, y]$ is a two-variable polynomial, monic in~$y$, with $\mathrm{m}_f(P) = 0$, then for any $\alpha \in \mathcal{J}_f$ the polynomial $P_{\alpha} \in \mathbb{C}[y]$ given by $P_{\alpha}(y) = P(\alpha, y)$ satisfies $\mathrm{m}_f(P_{\alpha}) = 0$. \end{lem} \begin{proof} The polynomial $P_{\alpha} \in \mathbb{C}[y]$ is monic, so $\mathrm{m}_f(P_{\alpha}) \ge 0$ for all $\alpha$. However, \begin{equation}\label{eq:intzero} 0 = \mathrm{m}_f(P) = \int_{\mathcal{J}_f} \mathrm{m}_f(P_\alpha) d\mu_f(\alpha), \end{equation} from which we can immediately deduce that $\mathrm{m}_f(P_\alpha) = 0$ for almost all $\alpha\in \mathcal{J}_f$ (that is, except possibly on a set of $\mu_f$-measure $0$). Next we prove that $\mathrm{m}_f(P_\alpha)$ is a continuous function of $\alpha$. To do this, write $P(\alpha,y) = \prod_{i} (y-g_i(\alpha))\in \mathbb{C}[y]$, where as above the $g_i$ are algebraic functions. By Jensen's formula, \[ \mathrm{m}_f(P_\alpha) = \sum_{i} p_{\mu_f}(g_i(\alpha)). \] By Proposition~\ref{potential-continuous}, $p_{\mu_f}$ is continuous, and therefore $\mathrm{m}_f(P_\alpha)$ is continuous as a function of $\alpha\in \mathbb{C}$. Since the support of $\mu_f$ is exactly $\mathcal J_f$ (as discussed in the proof of~\cite[Theorem 6.5.8]{Ransford} or~\cite[Theorem 2, page 169]{Steinmetz}), continuity together with equation~\eqref{eq:intzero} then implies that $\mathrm{m}_f(P_\alpha) = 0$ for every $\alpha \in \mathcal J_f$. \end{proof} \begin{proof}[Proof of Theorem~\ref{mahler-preperiodic}] Write $P(x, y) = a(x)y^k + (\text{lower order terms in $y$}) \in \mathbb{Z}[x, y]$. Case 1: $a(x)$ is not constant. By Part (i) of Lemma~\ref{leading-coefficient} we have $\mathrm{m}_f(a) = 0$, so by the single-variable Dynamical Kronecker's Lemma (Lemma~\ref{lem:dynamicalKronecker}) the roots of $a(x)$ are preperiodic for $f$; by Part (iii), the curve $P(x, y) = 0$ contains the vertical lines through those preperiodic $x$-coordinates, and each such line contains infinitely many points with both coordinates preperiodic for $f$, so we are done in this case. Case 2: $a(x)$ is constant, so it equals $\pm 1$ by Part (i) of Lemma~\ref{leading-coefficient}. By flipping the sign as necessary, we may assume that $a(x) = 1$, so we are in the situation of Lemma~\ref{monic-case}. Now let $\alpha$ be any preperiodic point for $f$, and let its Galois conjugates be $\alpha = \alpha_1, \alpha_2, \dotsc, \alpha_n$.
Consider the polynomial $P_{\alpha_1}P_{\alpha_2} \dotsm P_{\alpha_n} \in \mathbb{C}[y]$: this has algebraic integer coefficients since $\alpha$ is an algebraic integer (here we are crucially using the fact that $f$ is monic) and $P \in \mathbb{Z}[x, y]$, and in fact rational integer coefficients since it is invariant under the Galois action. Since $\alpha_1, \dotsc, \alpha_n \in \operatorname{PrePer}(f)$, by our assumption they are also in $\mathcal J_f$, and by Lemma~\ref{monic-case} (together with the additivity of $\mathrm{m}_f$ on products), $\mathrm{m}_f(P_{\alpha_1}P_{\alpha_2} \dotsm P_{\alpha_n}) = 0$. From the single-variable Dynamical Kronecker's Lemma, we conclude that all roots of $P_{\alpha_1}P_{\alpha_2} \dotsm P_{\alpha_n}$ are preperiodic for $f$. Hence any point on the curve $P(x, y) = 0$ with $x$-coordinate $\alpha = \alpha_1$ also has preperiodic $y$-coordinate. Since $P(x, y)$ is monic in $y$, the polynomial $P(\alpha, y)$ is nonzero for any such $\alpha$, and thus there is some $\beta$ for which $P(\alpha, \beta) = 0$. Since there are infinitely many choices for the preperiodic point $\alpha$, we get infinitely many points with both coordinates preperiodic. \end{proof} \section{Conditions for the preperiodic points of $f$ to lie in the Julia set $\mathcal J_f$} \label{sec:J=K} Given the results of Section~\ref{sec:ad}, it is natural to ask how restrictive the hypothesis is that $\operatorname{PrePer}(f) \subseteq \mathcal J_f$. This seems to be a delicate question in general. In the case of unicritical polynomials --- polynomials with a unique critical point $\gamma \in \mathbb{C}$ --- we can answer the question completely. We begin with some background on the connection between Julia sets, filled Julia sets, and periodic points. For $f \in \mathbb{C}[z]$, a fixed point $z_0$ is \begin{itemize} \item {\bf repelling} if $|f'(z_0)|>1$, \item {\bf neutral} if $|f'(z_0)|=1$, and \item {\bf attracting} if $|f'(z_0)|<1$. \end{itemize} If $|f'(z_0)|>1$, then the image under $f$ of a small neighborhood around $z_0$ expands so that $z_0$ ``repels'' nearby points. If $|f'(z_0)|<1$, the image under $f$ of a small neighborhood around $z_0$ shrinks, so that $z_0$ ``attracts'' nearby points. The number $f'(z_0)$ is called the {\bf multiplier} of the fixed point. These ideas generalize to $n$-cycles by considering points on the cycle as fixed points of the iterated polynomial $f^n$. So to study an $n$-cycle containing the point $z_0$ and determine whether it is repelling, neutral, or attracting, we consider the absolute value of \[ \left.\frac{d f^n}{d z}\right|_{z=z_0} = \prod_{z_i \text{ on the cycle}} f'(z_i). \] The equality comes from applying the chain rule to the derivative of $f^n$. As noted in Section~\ref{sec:ArithDynIntro}, all preperiodic points of a polynomial $f$ lie in the filled Julia set $\mathcal K_f$. But in fact more is true: The Julia set $\mathcal J_f$ is the closure of the repelling periodic points of $f$~\cite[Theorem 6.9.2]{beardon2000iteration}. Since the Julia set $\mathcal J_f$ is completely invariant under $f$---meaning that $f(\mathcal J_f) = \mathcal J_f = f^{-1}(\mathcal J_f)$~\cite[Theorem 3.2.4]{beardon2000iteration}---we need only determine when the nonrepelling cycles lie in the Julia set. Notice that it suffices to check this when $\mathcal J_f \subsetneq \mathcal K_f$, since in the case of $\mathcal J_f=\mathcal K_f$, the preperiodic points lie in $\mathcal J_f$ trivially. Observe that the condition $\mathcal J_f=\mathcal K_f$ is guaranteed if $\mathcal J_f$ is totally disconnected.
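To make the multiplier computation above concrete, here is a minimal Python sketch (ours, purely illustrative and not used in any of the arguments) that classifies a cycle by the product $\prod f'(z_i)$; it checks the $2$-cycle $\{0,-1\}$ of $z^2-1$, which reappears in the proof of Proposition~\ref{prop:preperjulia2} below.
\begin{verbatim}
# Classify a cycle of a polynomial map by its multiplier prod f'(z_i).
def multiplier(f_prime, cycle):
    result = 1
    for z in cycle:
        result *= f_prime(z)
    return result

# Example: g(z) = z^2 - 1 has the 2-cycle {0, -1}.
g = lambda z: z * z - 1
g_prime = lambda z: 2 * z

cycle = [0, -1]
assert g(cycle[0]) == cycle[1] and g(cycle[1]) == cycle[0]

lam = multiplier(g_prime, cycle)
print(lam)  # prints 0: |multiplier| < 1, so the cycle is (super)attracting
\end{verbatim}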
The following two results will be useful in the sequel. \begin{thm}\cite[Theorem 1.35 (a)]{Silverman-arithmetic-dynamical}\label{thm:silverman} Let $f(z) \in \mathbb{C}[z]$ be a polynomial of degree $d\geq 2$. Then $f$ has at most $d-1$ nonrepelling periodic cycles in $\mathbb{C}$. \end{thm} \begin{thm}\cite[Corollary 8.2]{Sutherland} \label{thm:multiplierunity} If $z_0$ is a periodic point with multiplier $\left.\frac{d f^n}{d z}\right|_{z=z_0}=\lambda$ a root of unity, then $z_0\in \mathcal J_f$. \end{thm} \subsection{The degree-2 case} We start by considering the case of $\deg(f)=2$. In this case, we can completely classify monic integer polynomials that fail to have all preperiodic points in the Julia set. \begin{prop}\label{prop:preperjulia2} Let $f\in \mathbb{Z}[z]$ be monic and quadratic. Then $\operatorname{PrePer}(f) \subseteq\mathcal J_f$ unless $f$ is affine conjugate over $\mathbb{Z}$ to either $z^2$ or $z^2-1$. \end{prop} To prove Proposition~\ref{prop:preperjulia2}, we need a couple of tools. First, note that any quadratic polynomial $f\in \mathbb{C}[z]$ is affine conjugate over $\mathbb{C}$ to a polynomial of the form $z^2+c$ with $c \in \mathbb{C}$. To see this, choose a conjugating function $L(z) = \frac{z}{a} + \gamma$, where $a$ is the leading coefficient of $f$ and $\gamma$ is the unique critical point of $f$. Then $f^L$ will be monic and have a critical point at zero, which gives it the desired form. Let $f_c(z)=z^2+c$. The Mandelbrot set is defined as \[\mathcal{M}_2=\left\{ c \in \mathbb{C} : \sup_n |f_{c}^n(0)|<\infty\right\}.\] \begin{center} \begin{figure} \caption{The Mandelbrot set.} \label{Mandelbrot} \end{figure} \end{center} The Mandelbrot set is contained in the disk of radius 2; furthermore, $\mathcal{M}_2$ satisfies $\mathcal{M}_2\cap \mathbb{R}=\left[-2,\frac{1}{4}\right]$ (see~\cite[Chapter VIII, Theorem 1.2]{carleson2013complex}). It follows from the discussion of Julia sets in Section~\ref{sec:ArithDynIntro} that for $c \not\in \mathcal M_2$, the Julia set for $f_c$ is totally disconnected and $\operatorname{PrePer}(f_c)\subseteq\mathcal J_{f_c}$. The parameter $c$ thus completely classifies conjugacy classes of complex quadratic polynomials. In fact, the conjugacy described above preserves the field of definition of a quadratic polynomial provided the field does not have characteristic~2. However, to prove Proposition~\ref{prop:preperjulia2}, we need to understand conjugacy classes of monic integral quadratic polynomials, which is slightly more delicate. \begin{lem}\label{lem:conj2} Let $f\in\mathbb{Z}[z]$ be monic and quadratic. Then $f$ is affine conjugate over $\mathbb{Z}$ to an integral polynomial of the form $z^2 + c$ or $z^2 + z + c$. \end{lem} \begin{proof} Let $f(z)=z^2+\alpha z+\beta \in \mathbb{Z}[z]$. We will find a linear polynomial $L(z) \in \mathbb{Z}[z]$ such that \[f = g^L=L^{-1}\circ g \circ L\] with $g$ a polynomial as in the statement. We have two cases: If $\alpha$ is even, take $\alpha_1 \in \mathbb{Z}$ such that $\alpha = 2\alpha_1$. Then for $L(z)=z+\alpha_1$ we have $g(z)=z^2+\beta-\alpha_1^2+\alpha_1$. If $\alpha$ is odd, take $\alpha_1 \in \mathbb{Z}$ such that $\alpha=2\alpha_1+1$. Then for $L(z)=z+\alpha_1$ we have $g(z)=z^2+z+\beta-\alpha_1^2$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:preperjulia2}] By Theorem~\ref{thm:silverman}, a degree-2 polynomial has at most one nonrepelling periodic cycle in $\mathbb{C}$. It suffices to locate such a cycle in each case and determine whether it lies in $\mathcal J_f$.
First consider the case in which $f(z)$ is conjugate to $g(z)=z^2+c$. Since $\mathcal{M}_2\cap \mathbb{R}=\left[-2,\frac{1}{4}\right]$, it suffices to consider the cases $c=0,-1,-2$. When $c=0$, we immediately see that $z_0=0$ is an attracting fixed point in $\mathcal K_f\setminus \mathcal J_f$. When $c=-2$, we have $g(z)=z^2-2$, the normalized second Chebyshev polynomial $T_2$, and $\mathcal J_f=[-2,2]=\mathcal K_f$. Finally, when $c=-1$, we find that $\{-1,0\}$ is an attracting cycle. Indeed, $g^2(z)=z^4-2z^2$ and $\frac{d g^2}{d z}=4z^3-4z$. This gives \[ \left. \frac{d g^2}{d z}\right|_{z=0}=\left. \frac{d g^2}{d z}\right|_{z=-1}=0. \] (This is a general phenomenon: when a critical point of $f$ is strictly periodic, one can show that the multiplier of the cycle will be $0$, and the cycle is called {\bf superattracting}.) Now we consider the case in which $f$ is conjugate to $g(z)=z^2+z+c$. Letting $L(z) = z-\frac 1 2$, we find that $g^L = z^2+c+\frac{1}{4} \in \mathbb{Q}[z]$. Again using the fact that $\mathcal{M}_2\cap \mathbb{R}=\left[-2,\frac{1}{4}\right]$, we have $\mathcal K_f=\mathcal J_f$ for $c<-\frac{9}{4}$ or $c>0$, and we only need to check $c=0,-1,-2$. When $c=0$, we have $g(z)=z^2+z$, and $z_0=0$ is a neutral fixed point since $g'(z)=1+2z$ and $g'(0)=1$. Since the multiplier is $1$, Theorem~\ref{thm:multiplierunity} implies $0\in \mathcal J_f$. When $c=-1$, we have $g(z)=z^2+z-1$ and $z_0=-1$ is a neutral fixed point since $g'(z)=1+2z$ and $g'(-1)=-1$. Since the multiplier is $-1$, Theorem~\ref{thm:multiplierunity} implies $-1\in \mathcal J_f$. Finally, when $c=-2$, we have $g(z)=z^2+z-2$. We claim that the roots of $z^3 + 2z^2 - z - 1$ give a neutral 3-cycle with multiplier 1. First notice that \begin{equation}\label{eq:f3} g^3(z)-z= (z^2 - 2)(z^3 + 2z^2 - z - 1)^2. \end{equation} The roots of the first factor in \eqref{eq:f3} are (repelling) fixed points. Let $z_0$ be a root of $z^3 + 2z^2 - z - 1=0$. Since $z^3 + 2z^2 - z - 1$ has exponent 2 in the factorization of $g^3(z)-z$, it must be a factor of the derivative $\frac{d (g^3(z)-z)}{d z}$, but this implies that \[ \left. \frac{d (g^3(z)-z)}{d z}\right|_{z=z_0}=0, \] from which we get $\left. \frac{d g^3}{d z}\right|_{z=z_0}=1$. Since the multiplier is $1$, Theorem~\ref{thm:multiplierunity} implies $z_0\in \mathcal J_f$. \end{proof} \subsection{The family $z^d+c$} A polynomial of the form $f_{d,c}(z)=z^d+c$ has only one critical point, namely $z_0=0$. Therefore, in this family we have the dichotomy between connected and totally disconnected Julia sets, and we can define the Mandelbrot or Multibrot set \[\mathcal{M}_d=\left\{ c \in \mathbb{C} : \sup_n |f_{d,c}^n(0)|<\infty\right\}.\] \begin{figure} \caption{Some Multibrot sets: $d=3$, $d=4$, and $d=5$.} \end{figure} When $d$ is odd and $d>1$, Paris\'e and Rochon~\cite{PariseRochon} proved \begin{equation} \mathcal{M}_d\cap \mathbb{R}=\left[-\frac{d-1}{d^\frac{d}{d-1}},\frac{d-1}{d^\frac{d}{d-1}}\right], \label{eq:dodd} \end{equation} and we remark that this implies $\mathcal{M}_d\cap \mathbb{Z}=\{0\}$ for $d>1$. When $d$ is even, Paris\'e, Ransford, and Rochon~\cite{PariseRansfordRochon} proved \begin{equation} \mathcal{M}_d\cap \mathbb{R}=\left[-2^\frac{1}{d-1},\frac{d-1}{d^\frac{d}{d-1}}\right], \label{eq:ceven} \end{equation} and we remark that this implies $\mathcal{M}_d\cap \mathbb{Z}=\{-1,0\}$ for $d>2$.
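The integer points of $\mathcal{M}_d$ singled out in the remarks above can also be checked numerically. The following short Python sketch (ours, purely illustrative) iterates the critical orbit of $z^d+c$ for small integer $c$ and reports the parameters that fail a crude escape test; since the orbit of an integer parameter consists of integers and the escape radius used here is at most $4$, an orbit that stays small for $200$ steps must repeat a value, so for these parameters the test is in fact conclusive.
\begin{verbatim}
# For integer c, decide whether the critical orbit 0 -> c -> c^d + c -> ...
# of z^d + c escapes.  Once |z| >= max(2, |c| + 1) we have
# |z^d + c| >= |z|^2 - |c| >= |z| + 1, so the orbit tends to infinity.
def critical_orbit_escapes(d, c, iterations=200):
    escape_radius = max(2, abs(c) + 1)
    z = 0
    for _ in range(iterations):
        z = z**d + c
        if abs(z) >= escape_radius:
            return True
    return False

for d in range(3, 7):
    bounded = [c for c in range(-3, 4) if not critical_orbit_escapes(d, c)]
    print(d, bounded)
# Output: [0] for odd d and [-1, 0] for even d, matching the remarks above.
\end{verbatim}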
\begin{thm}\label{lem:conjcubic} Let $f\in\mathbb{Z}[z]$ be a monic polynomial that is affine conjugate over $\mathbb{C}$ to a polynomial of the form $z^d+c$ with $d>2$. Then $\operatorname{PrePer}(f) \not \subseteq\mathcal J_f$ if and only if either $c = 0$ or $d$ is even and $c = -1$. \end{thm} \begin{proof} If $f\in\mathbb{Z}[z]$ is affine conjugate over $\mathbb{C}$ to $z^d+c$, then $f$ has a unique critical point $\gamma \in \mathbb{C}$. Since $f$ is monic, the derivative factors over $\mathbb{C}$ as $f'(z) = d(z-\gamma)^{d-1}$. Integrating, we get $f(z) = (z-\gamma)^d + b$ where $\gamma, b \in \mathbb{C}$, but we also know that $f(z) \in \mathbb{Z}[z]$. Of course, the derivative $f'(z) = d(z-\gamma)^{d-1} \in \mathbb{Z}[z]$ as well. The coefficient of $z^{d-2}$ in $f'(z)$ is $-d(d-1)\gamma$, so we see that $\gamma \in \mathbb{Q}$. We claim that when $d>2$, in fact $\gamma \in \mathbb{Z}$. Looking at the coefficient of $z$ in $f(z)$, we have $d \gamma^{d-1} \in \mathbb{Z}$. Let $p$ be a prime dividing the denominator of $\gamma$. Since $d\gamma^{d-1} \in \mathbb{Z}$, we must have $p^{d-1} \mid d$. But this is impossible since $p^{d-1}\geq 2^{d-1}>d$ when $d>2$; hence no such $p$ exists and $\gamma \in \mathbb{Z}$. Now consider the constant term of $f$, which is $(-\gamma)^d +b$. Since $\gamma \in \mathbb{Z}$, we have $b\in \mathbb{Z}$ as well. Choose $L(z) = z +\gamma$, and we see that $f^L(z) = z^d + b-\gamma \in \mathbb{Z}[z]$. So $f$ is conjugate to a polynomial of the form $z^d+c$ with $c\in \mathbb{Z}$. We know that $c \not \in \mathcal{M}_d$ implies $\operatorname{PrePer}(f)\subseteq\mathcal J_f$. From equations~\eqref{eq:dodd} and~\eqref{eq:ceven}, we deduce that for $c\in \mathbb{Z}$, $c\not \in \mathcal{M}_d$ if and only if $c\not =0$ for $d$ odd and $c\not =0, -1$ for $d$ even. Thus, it suffices to consider the exceptional cases $c=0$ and $c=-1$. We already know that for the power function $z^d$ (the case $c=0$), the point $z_0=0$ is an attracting fixed point in $\mathcal K_f\setminus \mathcal J_f$. Now consider $g(z)=z^d-1$ with $d$ even. In this case we find that $\{-1,0\}$ is an attracting cycle. Indeed, $g^2(z)=(z^d-1)^d-1$ and $\frac{d g^2}{d z}=d^2z^{d-1}(z^d-1)^{d-1}$. This gives $\left. \frac{d g^2}{d z}\right|_{z=0}=\left. \frac{d g^2}{d z}\right|_{z=-1}=0$. \end{proof} One might hope that for $f \in \mathbb{Z}[z]$, if the coefficients of $f$ are sufficiently large, then all critical points will have unbounded orbit. In this case the Julia set would be totally disconnected, so that again $\operatorname{PrePer}(f) \subseteq \mathcal J_f$. \end{document}